Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Life Quotes That Will Change Your Life

Life’s better with the right words.

And life quotes can help us live better.

Life quotes are a simple way to share some of the deepest insights on the art of living, and how to live well.

While some people might look for wisdom in a bottle, or in a book, or in a guru at the top of a mountain, surprisingly, a lot of the best wisdom still exists as quotes.

The problem is they are splattered all over the Web.

The Ultimate Life Quotes Collection

My ultimate Life Quotes collection is an attempt to put the best quotes right at your fingertips.

I wanted this life quotes collection to answer everything from “What is the meaning of life?” to “How do you live the good life?” 

I also wanted this life quotes collection to dive deep into all angles of life including dealing with challenges, living with regrets, how to find your purpose, how to live with more joy, and ultimately, how to live a little better each day.

The World’s Greatest Philosophers at Your Fingertips

Did I accomplish all that?

I’m not sure.  But I gave it the old college try.

I curated quotes on life from an amazing set of people including Dr. Seuss, Tony Robbins, Gandhi, Ralph Waldo Emerson, James Dean, George Bernard Shaw, Virginia Woolf, Buddha, Lao Tzu, Lewis Carroll, Mark Twain, Confucius, Jonathan Swift, Henry David Thoreau, and more.

Yeah, it’s a pretty serious collection of life quotes.

Don’t Die with Your Music Still In You

There are many messages and big ideas among the collection of life quotes.  But perhaps one of the most important messages is from the late, great Henry David Thoreau:

“Most men lead lives of quiet desperation and go to the grave with the song still in them.” 

And, I don’t think he meant play more Guitar Hero.

If you’re waiting for your chance to rise and shine, chances come to those who take them.

Not Being Dead is Not the Same as Being Alive

E.E. Cummings reminds us that there is more to living than simply existing:

“Unbeing dead isn’t being alive.” 

And the trick is to add more life to your years, rather than just add more years to your life.

Define Yourself

Life quotes teach us that living life on your terms starts by defining yourself.  Here are big, bold words from Harvey Fierstein that remind us of just that:

“Never be bullied into silence. Never allow yourself to be made a victim. Accept no one’s definition of your life; define yourself.”

Now is a great time to re-imagine all that you’re capable of.

We Regret the Things We Didn’t Do

It’s not usually the things that we do that we regret.  It’s the things we didn’t do:

“Of all sad words of tongue or pen, the saddest are these: ‘It might have been.’” – John Greenleaf Whittier

Have you answered your calling?

Leave the World a Better Place

One sure-fire way that many people find their path is they aim to leave the world at least a little better than they found it.

“To laugh often and much; to win the respect of intelligent people and the affection of children…to leave the world a better place…to know even one life has breathed easier because you have lived. This is to have succeeded.” -- Ralph Waldo Emerson

It’s a reminder that we can measure our life by the lives of the people we touch.

You Might Also Like

7 Habits of Highly Motivated People

10 Leadership Ideas to Improve Your Influence and Impact

Boost Your Confidence with the Right Words

The Great Inspirational Quotes Revamped

The Great Leadership Quotes Collection Revamped

Categories: Architecture, Programming

Australia - July/August 2015

Coding the Architecture - Simon Brown - Fri, 04/17/2015 - 17:42

It's booked! Following on from my trip to Australia and the YOW! 2014 conference in December last year, I'll be back in Australia during July and August. The rough plan is to arrive in Perth and head east, visiting at least Melbourne, Brisbane and Sydney again. I'm hoping to schedule some user group talks and a limited number of 1-day workshops and/or talks along the way too (public and/or in-house).

If you're interested in me visiting your office/company during my trip, please just drop me a note at simon.brown@codingthearchitecture.com. Thanks!

Categories: Architecture

Stuff The Internet Says On Scalability For April 17th, 2015

Hey, it's HighScalability time:

A fine tribute on Silicon Valley & hilarious formula evaluating Peter Gregory's positive impact on humanity.

  • 118/196: nations becoming democracies since the mid-19th century; $70K: nice minimum wage; 70 million: monthly StackExchange visitors; 1 billion: drone planted trees; 1,000 Years: longest-exposure camera shot ever

  • Quotable Quotes:

    • @DrQz: #Performance modeling is really about spreading the guilt around.

    • @natpryce: “What do we want?” “More levels of indirection!” “When do we want it?” “Ask my IDateTimeFactoryImplBeanSingletonProxy!”

    • @BenedictEvans: In the late 90s we were euphoric about what was possible, but half what we had sucked. Now everything's amazing, but we worry about bubbles

    • Calvin Zito on Twitter: "DreamWorks Animation: One movie, 250 TB to make. 10 movies in production at one time, 500 million files per movie. Wow."

    • Twitter: Some of our biggest MySQL clusters are over a thousand servers.

    • @SaraJChipps: It's 2015: open source your shit. No one wants to steal your stupid CRUD app. We just want to learn what works and what doesn't.

    • Calvin French-Owen: And as always: peace, love, ops, analytics.

    • @Wikipedia: Cut page load by 100ms and you save Wikipedia readers 617 years of wait annually. Apply as Web Performance Engineer

    • @IBMWatson: A person can generate more than 1 million gigabytes of health-related data.

    • @allspaw: "We’ve learned that automation does not eliminate errors." (yes!)  

    • @Obdurodon: Immutable data structures solve everything, in any environment where things like memory allocators and cache misses cost nothing.

    • KaiserPro: Pixar is still battling with lots of legacy cruft. They went through a phase of hiring the best and brightest directly from MIT and the like.

    • @abt_programming: "Duplication is far cheaper than the wrong abstraction" - @sandimetz

    • @kellabyte: When I see places running 1,200 containers for fairly small systems I want to scream "WHY?!"

    • chetanahuja: One of the engineers tried running our server stack on a raspberry for a laugh.. I was gobsmacked to hear that the whole thing just worked (it's a custom networking protocol stack running in userspace) if just a bit slower than usual.

  • Chances are if something can be done with your data, it will be done. @RMac18: Snapchat is using geofilters specific to Uber's headquarters to poach engineers.

  • Why (most) High Level Languages are Slow. Exactly this by masterbuzzsaw: If manual memory management is cancer, what is manual file management, manual database connectivity, manual texture management, etc.? C# may have “saved” the world from the “horrors” of memory management, but it introduced null reference landmines and took away our beautiful deterministic C++ destructors.

  • Why NFS instead of S3/EBS? nuclearqtip with a great answer: Stateful; Mountable AND shareable; Actual directories; On-the-wire operations (I don't have to download the entire file to start reading it, and I don't have to do anything special on the client side to support this); Shared unix permission model; Tolerant of network failures; Locking!; Better caching; Big files without the hassle.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Is LeSS more than SAFe?

Xebia Blog - Fri, 04/17/2015 - 14:48

(Large) Dutch companies looking for a way to scale up the benefits their Agile teams bring mostly use the Scaled Agile Framework (SAFe) as a reference model. This model is set up to be very accessible, also for managers, and training courses and certified consultants are available. As early as 2009, Craig Larman and Bas Vodde described their experiences applying Scrum in large organizations (Nokia, among others) in their books 'Scaling Lean & Agile Development' and 'Practices for Scaling Lean & Agile Development'. They called the method Large Scale Scrum, LeSS for short.
In recent years LeSS has led an inconspicuous existence. Recently it was decided to put this valuable body of thought more in the spotlight. A third book is coming this summer, the site less.works has been launched, a training tour has started, and Craig and Bas are putting in appearances at the leading conferences. Bas, for example, will deliver a keynote at Xebicon 2015 in Amsterdam on June 4th. Is LeSS more or less than SAFe? Or more or less SAFe?

What is LeSS?
LeSS, then, is a method for structuring a large(r) organization around Agile teams. As the name gives away, Scrum is the starting point. There are 2 flavors: 'plain' LeSS, for up to 8 teams, and LeSS Huge, for 8 teams and more. LeSS builds on mandatory rules, for example "An Overall Retrospective is held after the Team Retrospectives to discuss cross-team and system-wide issues, and create improvement experiments. This is attended by Product Owner, ScrumMasters, Team Representatives, and managers (if there are any).” In addition, LeSS has principles (design criteria). The principles form the frame of reference for making the right design decisions. Finally, there are the Guidelines and Experiments, the things that have or have not proven successful in practice at organizations. Beyond the basic framework, LeSS goes deeper into:

  • Structure (the organizational structure)
  • Management (the changing role of management)
  • Technical Excellence (strongly based on XP and Continuous Delivery)
  • Adoption (the transformation to the LeSS organization).

LeSS in a nutshell
The foundation of LeSS is that Large Scale Scrum = Scrum! Like SAFe, LeSS looks at how Scrum can be applied to a group of, say, 100 people. LeSS stays closest to Scrum: there is 1 sprint, with 1 Product Owner, 1 product backlog, 1 planning and 1 sprint review, in which 1 product is realized. This differs from SAFe, which defines an inflated sprint (the Product Increment). To make this single-sprint implementation work you need, besides a very strong whole-product focus, for example a technical platform that supports it. Where SAFe pragmatically allows a gradual introduction of Agile at Scale, LeSS is stricter in its ready-to-start requirements. A structure must be put in place that breaks through the culture of the 'contract game': the culture of over-asking, pressure, ambiguity, surprises, and blame-oriented accountability.

LeSS is more and less SAFe
The recent efforts to make LeSS accessible will undoubtedly lead to sharply increasing attention for this appealing approach to setting up Agile at Scale. LeSS differs from SAFe, although the two models, especially in their sources of inspiration, also have a lot in common.
The two models take a different approach, for example with regard to:

  • how to apply Scrum to a cluster of teams
  • the approach to the transformation to Agile at Scale
  • how solutions are offered: SAFe gives the solution, LeSS the pros and cons of choices

Also notable is that SAFe (with its portfolio level) explains how the connection between strategy and backlogs should be made, whereas LeSS pays more attention to the transformation (Adoption) and to Agile at very large scale (LeSS Huge).

Whether an organization chooses LeSS or SAFe will depend on what fits the organization best: what fits its ambition for change and its 'agility' at the moment of starting. Strongly 'blue' organizations will choose SAFe; organizations that dare to take a convincing step toward an Agile organization will sooner choose LeSS. In both cases it pays off to study the solutions the other method offers.

Announcing General Availability of Azure Premium Storage

ScottGu's Blog - Scott Guthrie - Thu, 04/16/2015 - 18:01

I’m very excited to announce the general availability release of Azure Premium Storage. It is now available with an enterprise grade SLA and is available for everyone to use.

Microsoft Azure now offers two types of storage: Premium Storage and Standard Storage. Premium Storage stores data durably on Solid State Drives (SSDs) and provides high performance, low latency, disk storage with consistent performance delivery guarantees.

Premium Storage is ideal for I/O-sensitive workloads - and is especially great for database workloads hosted within Virtual Machines.  You can optionally attach several premium storage disks to a single VM, and support up to 32 TB of disk storage per Virtual Machine and drive more than 64,000 IOPS per VM at less than 1 millisecond latency for read operations. This provides an incredibly fast storage option that enables you to run even more workloads in the cloud.

Using Premium Storage, Azure now offers the ability to run more demanding applications - including high-volume SQL Server, Dynamics AX, Dynamics CRM, Exchange Server, MySQL, Oracle Database, IBM DB2, MongoDB, Cassandra, and SAP solutions.

Durability

Durability of data is of utmost importance for any persistent storage option. Azure customers have critical applications that depend on the persistence of their data and high tolerance against failures. Premium Storage keeps three replicas of data within the same region, and ensures that a write operation will not be confirmed back until it has been durably replicated. This is a unique cloud capability provided only by Azure today.

In addition, you can also optionally create snapshots of your disks and copy those snapshots to a Standard GRS storage account - which enables you to maintain a geo-redundant snapshot of your data that is stored > 400 miles away from your primary Azure region for disaster recovery purposes.

Available Regions

Premium Storage is available today in the following Azure regions:

  • West US
  • East US 2
  • West Europe
  • East China
  • Southeast Asia
  • West Japan

We will expand Premium Storage to run in all Azure regions in the near future.

Getting Started

You can easily get started with Premium Storage starting today. Simply go to the Microsoft Azure Management Portal and create a new Premium Storage account. You can do this by creating a new Storage Account and selecting the “Premium Locally Redundant” storage option (note: this option is only listed if you select a region where Premium Storage is available).

Then create a new VM and select the “DS” series of VM sizes. The DS-series of VMs are optimized to work great with Premium Storage. When you create the DS VM you can simply point it at your Premium Storage account and you’ll be all set.

Learning More

Learn more about Premium Storage from Mark Russinovich's blog post on today's release.  You can also see a live 3 minute demo of Premium Storage in action by watching Mark Russinovich’s video on premium storage. In it Mark shows both a Windows Server and Linux VM driving more than 64,000 disk IOPS with low latency against a durable drive powered by Azure Premium Storage.

Summary

We are very excited about the release of Azure Premium Storage. Premium Storage opens up so many new opportunities to use Azure to run workloads in the cloud – including migrating existing on-premises solutions.

As always, we would love to hear feedback via comments on this blog, the Azure Storage MSDN forum or send email to mastoragequestions@microsoft.com.

Hope this helps,

Scott

Categories: Architecture, Programming

Paper: Large-scale cluster management at Google with Borg

Joe Beda (@jbeda): Borg paper is finally out. Lots of reasoning for why we made various decisions in #kubernetes. Very exciting.

The hints and allusions are over. We now have everything about Google's long rumored Borg project in one iconic Google style paper: Large-scale cluster management at Google with Borg.

When Google blew our minds by audaciously treating the Datacenter as a Computer it did not go unnoticed that by analogy there must be an operating system for that datacenter/computer.

Now we have the story behind a critical part of that OS:

Google's Borg system is a cluster manager that runs hundreds of thousands of jobs, from many thousands of different applications, across a number of clusters each with up to tens of thousands of machines.

It achieves high utilization by combining admission control, efficient task-packing, over-commitment, and machine sharing with process-level performance isolation. It supports high-availability applications with runtime features that minimize fault-recovery time, and scheduling policies that reduce the probability of correlated failures. Borg simplifies life for its users by offering a declarative job specification language, name service integration, real-time job monitoring, and tools to analyze and simulate system behavior.

We present a summary of the Borg system architecture and features, important design decisions, a quantitative analysis of some of its policy decisions, and a qualitative examination of lessons learned from a decade of operational experience with it.

Virtually all of Google’s cluster workloads have switched to use Borg over the past decade. We continue to evolve it, and have applied the lessons we learned from it to Kubernetes.

The next version of Borg was called Omega and Omega is being rolled up into Kubernetes (steersman, helmsman, sailing master), which has been open sourced as part of Google's Cloud initiative.

Note how the world has changed. A decade ago, when Google published their industry-changing Bigtable and MapReduce papers, they launched a thousand open source projects in response. Now Google is not only open sourcing its software instead of letting others simply copy the ideas; the software has been released well in advance of the paper describing it.

The future is still in balance. There's a huge fight going on for the future of what software will look like, how it is built, how it is distributed, and who makes the money. In the search business keeping software closed was a competitive advantage. In the age of AWS the only way to capture hearts and minds is by opening up your software. Interesting times.

Related Articles
Categories: Architecture

Experimenting with Swift and UIStoryboardSegues

Xebia Blog - Wed, 04/15/2015 - 21:58

Lately I've been experimenting a lot with doing things differently in Swift. I'm still trying to find best practices and discover completely new ways of doing things. One example of this is passing objects from one view controller to another through a segue in a single line of code, which I will cover in this post.

Imagine two view controllers, a BookViewController and an AuthorViewController. Both are in the same storyboard and the BookViewController has a button that pushes the AuthorViewController on the navigation controller through a segue. To know which author we need to show on the AuthorViewController we need to pass an author object from the BookViewController to the AuthorViewController. The traditional way of doing this is giving the segue an identifier and then setting the object:

class BookViewController: UIViewController {

    var book: Book!

    override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
        if segue.identifier == "ShowAuthor" {
            let authorViewController = segue.destinationViewController as! AuthorViewController
            authorViewController.author = book.author
        }
    }
}

class AuthorViewController: UIViewController {

    var author: Author!
}

And in case we would use a modal segue that shows an AuthorViewController embedded in a navigation controller, the code would be slightly more complex:

if segue.identifier == "ShowAuthor" {
  let authorViewController = (segue.destinationViewController as! UINavigationController).viewControllers[0] as! AuthorViewController
  authorViewController.author = book.author
}

Now let's see how we can add an extension to UIStoryboardSegue that makes this a bit easier and works the same for both scenarios. Instead of checking the segue identifier we will just check on the type of the destination view controller. We assume that based on the type the same object is passed on, even when there are multiple segues going to that type.

extension UIStoryboardSegue {

    func destinationViewControllerAs<T>(cl: T.Type) -> T? {
        return destinationViewController as? T ?? (destinationViewController as? UINavigationController)?.viewControllers[0] as? T
    }
}

What we've done here is add the method destinationViewControllerAs to UIStoryboardSegue that checks if the destinationViewController is of the generic type T. If it's not, it will check if the destinationViewController is a navigation controller and if its first view controller is of type T. If it finds either one, it will return that instance of T. Since it can also be nil, the return type is an optional T.

It's now incredibly simple to pass on our author object to the AuthorViewController:

override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
  segue.destinationViewControllerAs(AuthorViewController.self)?.author = book.author
}

No need to check any identifiers anymore, and less code. Now I'm not saying that this is the best way to do it or that it's even better than the traditional way of doing things. But it does show that Swift offers us new ways of doing things, and it's worth experimenting to find best practices.

The source code of the samples and extension is available on https://github.com/lammertw/StoryboardSegueExtension.

Full Stack Tuning for a 100x Load Increase and 40x Better Response Times

A world that wants full stack developers also needs full stack tuners. That tuning process, or at least the outline of a full stack tuning process, is something Ronald Bradford describes (in not quite enough detail) in Improving performance – A full stack problem.

The general philosophy is:

  • Understand where to invest your energy first; know what the return on investment can be.
  • Measure and verify every change.

He lists several tips for general website improvements:

Categories: Architecture

SATURN conference in Baltimore, MD

Coding the Architecture - Simon Brown - Wed, 04/15/2015 - 09:25

Following on from the CRAFT conference in Budapest next week, I'm heading straight across the water to the SATURN conference, which is taking place in Baltimore, Maryland. SATURN is much more focussed around software architecture than many of the other events I attend and I had a fantastic time when I attended in 2013, so I'm delighted to be invited back. I'm involved with a number of different things at the conference as follows.

  • Tuesday April 28th - Microservices Trial - I'll be on a panel with Len Bass and Sam Newman, debating whether a microservices architecture is just an attractive nuisance rather than something that's actually useful.
  • Wednesday April 29th - Software architecture as code - a talk about how much of the software architectural intent remains in the code and how we can represent this architectural model as code.
  • Wednesday April 29th - Office hours - an open discussion about anything related to software architecture.
  • Thursday April 30th - My Silver Toolbox - I'll be doing a short talk in Michael Keeling's session about some of the software architecture tools I find indispensable.

SATURN 2015 brochure

This is my last scheduled trip to the US this year, so please do come and grab me if you want to chat.

Categories: Architecture

Sponsored Post: OpenDNS, MongoDB, Internap, Aerospike, Nervana, SignalFx, InMemory.Net, Couchbase, VividCortex, Transversal, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • The Cloud Platform team at OpenDNS is building a PaaS for our engineering teams to build and deliver their applications. This is a well-rounded team covering software, systems, and network engineering; expect your code to cut across all layers, from the network to the application. Learn More

  • At Scalyr, we're analyzing multi-gigabyte server logs in a fraction of a second. That requires serious innovation in every part of the technology stack, from frontend to backend. Help us push the envelope on low-latency browser applications, high-speed data processing, and reliable distributed systems. Help extract meaningful data from live servers and present it to users in meaningful ways. At Scalyr, you’ll learn new things, and invent a few of your own. Learn more and apply.

  • Nervana Systems is hiring several engineers for cloud positions. Nervana is a startup based in Mountain View and San Diego working on building a highly scalable deep learning platform on CPUs, GPUs and custom hardware. Deep Learning is an AI/ML technique breaking all the records by a wide margin in state-of-the-art benchmarks across domains such as image & video analysis, speech recognition and natural language processing. Please apply here and mention “highscalability.com” in your message.

  • Linux Web Server Systems Engineer - Transversal. We are seeking an experienced and motivated Linux System Engineer to join our Engineering team. This new role is to design, test, install, and provide ongoing daily support of our information technology systems infrastructure. As an experienced Engineer you will have comprehensive capabilities for understanding hardware/software configurations that comprise system, security, and library management, backup/recovery, operating computer systems in different operating environments, sizing, performance tuning, hardware/software troubleshooting and resource allocation. Apply here.

  • UI Engineer - AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data - AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (All-Levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • MongoDB World brings together over 2,000 developers, sysadmins, and DBAs in New York City on June 1-2 to get inspired, share ideas and get the latest insights on using MongoDB. Organizations like Salesforce, Bosch, the Knot, Chico’s, and more are taking advantage of MongoDB for a variety of ground-breaking use cases. Find out more at http://mongodbworld.com/ but hurry! Super Early Bird pricing ends on April 3.
Cool Products and Services
  • SQL for Big Data: Price-performance Advantages of Bare Metal. When building your big data infrastructure, price-performance is a critical factor to evaluate. Data-intensive workloads with the capacity to rapidly scale to hundreds of servers can escalate costs beyond your expectations. The inevitable growth of the Internet of Things (IoT) and fast big data will only lead to larger datasets, and a high-performance infrastructure and database platform will be essential to extracting business value while keeping costs under control. Read more.

  • Looking for a scalable NoSQL database alternative? Aerospike is validating the future of ACID compliant NoSQL with our open source Key-Value Store database for real-time transactions. Download our free Community Edition or check out the Trade-In program to get started. Learn more.

  • SignalFx: just launched an advanced monitoring platform for modern applications that's already processing 10s of billions of data points per day. SignalFx lets you create custom analytics pipelines on metrics data collected from thousands or more sources to create meaningful aggregations--such as percentiles, moving averages and growth rates--within seconds of receiving data. Start a free 30-day trial!

  • InMemory.Net provides a Dot Net native in memory database for analysing large amounts of data. It runs natively on .Net, and provides native .Net, COM & ODBC APIs for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • Benchmark: MongoDB 3.0 (w/ WiredTiger) vs. Couchbase 3.0.2. Even after the competition's latest update, are they more tired than wired? Get the Report.

  • VividCortex goes beyond monitoring and measures the system's work on your MySQL and PostgreSQL servers, providing unparalleled insight and query-level analysis. This unique approach ultimately enables your team to work more effectively, ship more often, and delight more customers.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

CRAFT conference in Budapest

Coding the Architecture - Simon Brown - Tue, 04/14/2015 - 10:31

I'm heading to Budapest next week for the 2nd annual CRAFT conference, which is about software craftsmanship and modern software development. It was one of my favourite conferences from last year (my talk was called Agility and the essence of software architecture), so I'm really looking forward to going back. I'll be covering software architecture in a workshop, a conference talk and a meetup.

  • Workshop (22nd April) - Simple sketches for diagramming your software architecture - my popular software architecture sketching workshop.
  • Meetup (22nd April) - Software architecture vs code - a short talk at the Full Stack Budapest meetup where I'll be looking at why those software architecture diagrams you have on the wall never quite reflect the code.
  • Talk (24th April) - Software architecture as code - a talk about how we should stop drawing software architecture diagrams in tools like Visio and instead try to extract as much architecture information from the code as possible, supplementing the model where necessary.

CRAFT in 2014

See you there. :-)

Categories: Architecture

Questions with a license to kill in the Sprint Review

Xebia Blog - Tue, 04/14/2015 - 09:19

A team I had been coaching held a sprint review to show what they had achieved and to get feedback from stakeholders. Among these were managers, other teams, enterprise architects, and other interested colleagues.

In the past sprint they had built and realized the automation of part of the Continuous Delivery pipeline. This was quite a big achievement for the team. The organization had been struggling for quite some time to get this working, and the team had realized this in a couple of sprints!

Team - "Anyone has questions or wants to know more?"
Stakeholder - "Thanks for the demo. How does the shown solution deal with 'X'?"

The team replied with a straightforward answer to this relatively simple question.

Stakeholder - "I have more questions related to the presented solution and concerns on corporate level, but this is probably not the good time to go into details."

What just happened and how did the team respond?

First, let me describe how the dialogue continued.

Team - "Let's make time now because it is important. What do you have on your mind?"
Stakeholder - "On corporate level solution Y is defined to deal with the company's concern related to Z compliance. I am concerned with how your solution will deal with this. Please elaborate on this."
[Everybody in the organization knows that Z compliance has been a hot topic during the past year.]

Team - "We have thought of several alternatives to deal with this issue. One of these is to have a coupling with another system that will provide the compliance. Also we see possibilities in altering the ....."
Stakeholder - "The other system has issues coping with this and is not suited for what you want it to do. What are your plans for dealing with this?"

The team replied with more details after which the stakeholder asked even more detailed questions....

How did the team get itself out of this situation?

After a couple of questions and answers the team responded with "Look, the organisation has been struggling to find a working solution for quite some time now and hasn't succeeded. Therefore, we are trying a new and different approach. Since this is new we don't have all the answers yet. Next steps will deal with your concerns."

Team - "Thanks for your feedback and see you all at the next demo!"

Killing a good idea

In the above dialogue between the team and one stakeholder during the sprint review, the stakeholder kept asking detailed questions about specific aspects of the solution. He also related these to well-known corporate issues whose importance was very clear to everyone. Thereby, consciously or unconsciously, he made the audience doubt whether the approach chosen by the team is a good one and whether it should perhaps be abandoned.

This could be especially dangerous if not dealt with appropriately. For instance, managers who at first supported the (good) idea might turn against the approach, even though the idea is a good one.

Dealing with these and other difficult questions

In his book 'Buy-in - saving your good idea from getting shot down' John Kotter describes 4 basic types of attack:

  1. Fear mongering
  2. Death by delay
  3. Confusion
  4. Ridicule

Attacks can be one of these four or any combination of these. The above attack is an example of a combination of 'Fear mongering' (relating to the fear that important organisational concerns are not properly addressed) and 'Confusion' (asking about many details that are not yet worked out).

In addition, Kotter describes 24 basic attacks. The attack as described above is an example of attack no. 6.

Don't worry. No need to remember all 24 responses; they all follow a very simple strategy that can be applied:

Step 1: Invite the stakeholder(s) to ask their questions,

Step 2: Respect the person asking the question by taking his point seriously,

Step 3: Respond in a reasonable and concise way.

The team did well by inviting the stakeholder to come forward with his questions. This is good marketing to the rest of the stakeholders: this shows the team believes in the idea (their solution) and is confident to respond to any (critical) question.

Second, the team responded in a respectful way, taking the question seriously as a valid concern. The team did so by responding in a concise and reasonable way.

As Kotter explains, it is not about convincing that one critical stakeholder, it's about not losing the rest of the stakeholders!

References

"Buy-in - saving your good idea from getting shot down" - John P. Kotter & Lorne A. Whitehead, https://hbr.org/product/buy-in-saving-your-good-idea-from-getting-shot-dow/an/12703-HBK-ENG

"24 attacks & responses" - John P. Kotter & Lorne Whitehead, http://nextgen.kotterinternational.com/our-principles/buy-in/24-attacks-and-24-responses

More nxlog logging tricks

Agile Testing - Grig Gheorghiu - Tue, 04/14/2015 - 00:11
In a previous post I talked about "Sending Windows logs to Papertrail with nxlog". In the meantime I had to work through a couple of nxlog issues that weren't quite obvious to solve -- hence this quick post.

Scenario 1: You don't want to send a given log file to Papertrail

My solution:

In this section:

# Monitor MyApp1 log files
<Input MyApp1>
 Module im_file
 File 'C:\\MyApp1\\logs\\*.log'
 Exec $Message = $raw_event;
 Exec if $Message =~ /GET \/ping/ drop();
 Exec if file_name() =~ /.*\\(.*)/ $SourceName = $1;
 SavePos TRUE
 Recursive TRUE
</Input>

add a line which drops the current log line if the file name contains the pattern you are looking to skip. For example, for a file name called skip_this_one.log (from the same log directory), the new stanza would be:
# Monitor MyApp1 log files
<Input MyApp1>
 Module im_file
 File 'C:\\MyApp1\\logs\\*.log'
 Exec $Message = $raw_event;
 Exec if $Message =~ /GET \/ping/ drop();
 Exec if file_name() =~ /skip_this_one.log/ drop();
 Exec if file_name() =~ /.*\\(.*)/ $SourceName = $1;
 SavePos TRUE
 Recursive TRUE
</Input>
Scenario 2: You want to prefix certain log lines depending on their directory of origin

Assume you have a test app and a dev app running on the same box, with the same exact log format, but with logs saved in different directories, so that in the Input sections you would have File 'C:\\MyTestApp\\logs\\*.log' for the test app and File 'C:\\MyDevApp\\logs\\*.log' for the dev app.

The only solution I found so far was to declare a filewatcher_transformer Processor section for each app. The default filewatcher_transformer section I had before looked like this:

<Processor filewatcher_transformer>
 Module pm_transformer
 # Uncomment to override the program name
 # Exec $SourceName = 'PROGRAM NAME';
 Exec $Hostname = hostname();
 OutputFormat syslog_rfc5424
</Processor>
I created instead these 2 sections:
<Processor filewatcher_transformer_test>
 Module pm_transformer
 # Uncomment to override the program name
 # Exec $SourceName = 'PROGRAM NAME';
 Exec $SourceName = "TEST_" + $SourceName;
 Exec $Hostname = hostname();
 OutputFormat syslog_rfc5424
</Processor>

<Processor filewatcher_transformer_dev>
 Module pm_transformer
 # Uncomment to override the program name
 # Exec $SourceName = 'PROGRAM NAME';
 Exec $SourceName = "DEV_" + $SourceName;
 Exec $Hostname = hostname();
 OutputFormat syslog_rfc5424
</Processor>
As you can see, I chose to prefix $SourceName, which is the name of the log file in this case, with either TEST_ or DEV_ depending on the app.
There is one thing remaining, which is to define a specific route for each app. Before, I had a common route for both apps:
<Route 2>
 Path MyAppTest, MyAppDev => filewatcher_transformer => syslogout
</Route>
I replaced the common route with the following 2 routes, each connecting an app with its respective Processor section.
<Route 2>
 Path MyAppTest => filewatcher_transformer_test => syslogout
</Route>

<Route 3>
 Path MyAppDev => filewatcher_transformer_dev => syslogout
</Route>
At this point, I restarted the nxlog service and I started to see log filenames in Papertrail of the form DEV_errors.log and TEST_errors.log.

Three Fast Data Application Patterns

This is guest post by John Piekos, VP Engineering at VoltDB. I understand this is a little PRish, but I think the ideas are solid.

The focus of many developers and architects in the past few years has been on Big Data, specifically mining historical intelligence from the Data Lake (usually a Hadoop stack containing terabytes to petabytes of data).

Now, product architects are asking how they can use this business intelligence for competitive advantage. As a result, application developers have come to see the value of using and acting in real time on streams of fast data; using OLAP reporting wisdom, they can realize the benefits of both fast data and Big Data. Consequently, a new set of application patterns has emerged. The applications are designed to capture value from fast-moving streaming data, before it reaches Hadoop.

At VoltDB we call this new breed of applications “fast data” applications. The goal of these fast data applications is not just to push data into Hadoop asap, but also to capture real-time value from the data the moment it arrives.

Because traditional databases historically haven’t been fast enough, developers have been forced to go to great effort to build fast data applications - they build complex multi-tier systems, often involving a handful of tools and typically utilizing a dozen or more servers. However, a new class of database technology, especially NewSQL offerings, has changed this equation.

If you have a relational database that is fast enough, highly available, and able to scale horizontally, the ability to build fast data applications becomes less esoteric and much more manageable. Three new real-time application patterns have emerged as the necessary dataflows to implement real-time applications. These patterns, enabled by new, fast database technology, are:

Categories: Architecture

Stuff The Internet Says On Scalability For April 10th, 2015

Hey, it's HighScalability time:


Beautiful, isn't it? It's the cerebral cortex of a rat that is organized like a mini-Internet.
  • $47 million: value of Cannabis per square km; $3.7 trillion: worldwide IT spending in 2014;  $41B: spend on spectrum; 48,000 square km: How Much Land Would it Take to Power the US via Solar; 2,000: Hadoop clusters in the world; 650 pounds: projected size of ET
  • Quotable Quotes:
    • John Hugg: The number one rule of 21st century data management: If a problem can be solved with an instance of MySQL, it’s going to be.
    • @sarahnovotny: "there is no compression algorithm for experience" - great quote from Andy Jassy at #AWSSummit
    • Steve Martin: I did stand-up comedy for eighteen years. Ten of those years were spent learning, four years were spent refining, and four were spent in wild success.
    • Yossi Vardi: Revenues kill the dream.
    • @AWSSummits: AdRoll's retargeting and real-time bidding operates at 6 billion impressions/day at 100ms latency on #AWS #AWSSummit 
    • @AWS_Partners: Nike is operating 70+ services as production loads in #aws today #AWSSummit 
    • @bernardgolden: S3 usage up 102% YOY, ec2 93%: #AWSSummit
    • @bernardgolden: AWS growing over 40% yoy. Next earnings announcement s/b v interesting. #awssummit 
    • @AlexBalk: Here is my Apple Watch review: Your life is largely meaningless. No gadget can obscure its emptiness. You are dying every day.
    • Jonas: Google: all apps become search. Facebook: all apps become feeds. 
    • @jon_moore: most scalable/fast/reliable systems follow these principles: elastic; responsive; resilient; message-driven. #phillyete
    • mrmondo: NVMe [Non-Volatile Memory Express] is one of the most important changes to storage over the past decade.
    • Peter Thiel: Often the smarter people are more prone to trendy, fashionable thinking because they can pick up on things, they can pick up on cues more easily, and so they’re even more trapped by it than people of average ability
    • @nickstenning: The women and men who wrote the nearly bug-free code that controlled a $4Bn space shuttle and the lives of astronauts worked 8am to 5pm.

  • Have you been let down by miracle materials like carbon nanotubes, buckyballs, and graphene? MOFs  (metal–organic frameworks) are here and they are real. This Nature podcast and article tells you all about them (about 13 minutes in). MOFs are scaffolds made of metal containing nodes linked by carbon-based struts. They are pieces that you can plug together and build up into big networks which have spaces in-between. It's those spaces that make MOFs useful. You can trap things in those holes and do things to the molecules when they are trapped. You can store gasses like methane and hydrogen. You can separate mixture of things by varying the pore sizes. Carbon capture is one big use. They also can be used as chemical sensors, maybe in some future version of your watch. Also perhaps write-once-read-many times memory.

  • Is Amazon recreating the Sun ecosystem in the cloud? We now have the Amazon Elastic File System so everything is remote mounted. WorkSpaces feels like diskless workstations. Storage is over on some NAS. The database is somewhere on the network. And so on. Let's hope NFS lock contention failures and network UI jitter don't also make a comeback. OK, I don't remember having anything like Amazon Machine Learning

  • Etsy is giving Facebook's HipHop Virtual Machine (HHVM) for PHP a try. Why? Their API and web code was diverging under parallel development pressures. And they were developing many small API endpoints that used many small requests instead of larger requests that do more work per request. And instead of sharing state in an inherently shared nothing architecture they went with the strategy of just making things faster. This is where HHVM comes in.

  • OK, that's impressive. Migrating from Heroku to AWS (using Docker). It took two engineers about one month. Performance increased 2x and average API response time dropped from around 220ms to under 100ms, and our background task execution times dropped in half as well. Half the number of servers were needed.

  • I was excited to see AWS is opening up Lambda. It's close to some ideas I've been talking about for a while (Building Super Scalable Systems, What Google App Engine Price Changes Say About The Future Of Web Architecture). When it first came out I rehabbed my atrophied node.js skills and gave it a shot. Played around a bit, got some code working, but the problem was Lambda only exposed a few integration points and none of those were anything I cared about. Now, they've made Lambda much more general and in the process much more useful. Worth another look. I also suspect their NFS product was necessary to generalize Lambda. Code could be instantly available on every machine via a mount point. Just like back in the day.

  • How Early Adopters Are Using Unikernels - With and Without Containers: Anil Madhavapeddy, the creator of MirageOS, has a group working on a new tool stack called Jitsu (Just-in-Time Summoning of Unikernels), which can start a unikernel in ~20ms in response to a network request. Also, Towards Heroku for Unikernels: Part 2 - Self Scaling Systems.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Boost Your Confidence with the Right Words

Confidence is one of those things that makes all the difference when it comes to realizing your potential.

So many people have great ideas and great ambitions, but lack the confidence to follow through when it counts.

They hold themselves back.

Their amazing and bold ideas turn into lackluster ideas as fear starts talking (if they talk at all).

A while back, a team in HR interviewed me on confidence, because enough people pointed back to me as somebody they saw as confident.

What HR wanted to know is, where does my confidence come from?

For me, it mostly came from a relentless focus on making impact.

I put my focus on doing great things, creating raving fan customers, and taking on big challenges.

Where you put your focus instantly changes your confidence.

If you’re too worried about how you look or how you sound, then you aren’t putting enough focus on the amazing thing you are trying to do.

So it wasn’t confidence per se.  It was more like putting my focus on the right things.

But there was more to it.  I was confident because of a few basic beliefs:

  1. I believed I’ll figure it out
  2. I believed that if I screw up, I’ll learn faster
  3. I believed that I don’t need to be the answer, but that I can always find the answer

Another thing that helped is that one of our leaders was relentless about “exposing our thinking.”   He wanted us to always detach ourselves from our ideas.   He wanted us to present ideas without being attached to them so they could be criticized and evaluated in more objective ways.

This sounds simple, but you’d be surprised how difficult it can be to detach yourself from ideas.   But the beauty is that when you are able to do this, your focus changes from defending your ideas, to really helping to create better ideas.   And this little shift takes you from fear or lack of confidence, to purposeful exploration, with full confidence.

Anyway, I think we become the thoughts that we think, so I think it’s really important to fill our head with the right words on confidence.

To that end, here is a roundup of some of the greatest confidence quotes of all time:

Confidence Quotes

See if you can find at least three that you can use to help add more juice to your day.

If there is one that I find myself referring to all the time, to remind myself to get up to bat, it’s this one:

“A ship is safe in harbor, but that’s not what ships are for.” – John A. Shedd

I learned it long ago, and it’s served me well, ever since.

While that one reminds me to do what I do best, it’s really this one that inspires me to expand what I’m capable of:

“Life shrinks or expands in proportion to one’s courage.” — Anaïs Nin

I hope you find the right words that give your confidence just the boost it needs, when you need it most.

JD

Categories: Architecture, Programming

Hadoop and the OpenDataPlatform

Pivotal, IBM and Hortonworks announced today the “Open Data Platform” (ODP) – an attempt to standardize Hadoop. The move seems to be backed by Teradata and others that appear as sponsors on the initiative site.

This move has a lot of potential and a few possible downsides.

ODP promises standardization – Cloudera’s Mike Olson downplays the importance of this “Every vendor shipping a Hadoop distribution builds off the Hadoop trunk. The APIs, data formats and semantics of trunk are stable. The project is a decade old, now, and the global Hadoop community exercises its governance obligations responsibly. There’s simply no fundamental incompatibility among the core Hadoop components shipped by the various vendors.”

I disagree. While it is true that there is no “fundamental incompatibility”, there are a lot of non-fundamental ones. Each release by each vendor includes backports of features that are somewhere on the main trunk but far from the stable release. This means that, as a vendor, we have to both test our solutions on multiple distributions and work around the subtle incompatibilities. We also have to limit ourselves to the lowest common denominator of the different platforms (or not support a distro) – for instance, until today, IBM did not support Yarn or Spark on their distribution.

Hopefully standardization around a common core will also mean that the involved vendors will deliver their value-add on that core, unlike today where the offerings are based on proprietary extensions (this is true for Pivotal, IBM etc., not so much for Hortonworks). Today we can’t take Impala and run it on Pivotal, nor can we take HAWQ and run it on HDP. With ODP we would, hopefully, be able to mix-and-match and have installations where we can, say, use IBM’s BigSQL with GemFire HD running on HDP, and other such mixes. This can be good news for these vendors by enlarging their addressable market, and for us as users by increasing our choice and reducing lock-in.

So what are the downsides/possible problems?

Well, for one, we need to see that the scenarios I described above actually happen and that this isn’t just a marketing ploy. Another problem, the elephant in the room if you will, is that the move is not complete – Cloudera, a major Hadoop player, is not part of this move and, as can be seen in the post referenced above, is against it. This is also true for MapR. With these two vendors out we still have multiple vendors to deal with, and the problems ODP sets out to solve will not disappear. I guess that if ODP were led by the ASF or some other more “impartial” party it would have been easier to digest, but as it is now all I can do is hope that ODP will live up to its expectations and that in the long run Cloudera and MapR will also join this initiative.

Categories: Architecture

Random thoughts on big data

I began blogging in 2005; back then I managed to post something new almost every day. Now, 10 years later, I hardly post anything. I was beginning to think I didn’t have anything left to say, but I recently noticed I have quite a few posts in various states of “draft”. I guess I am spending too much time thinking about how to get a polished idea out there, rather than just going ahead and writing what’s on my mind. This post is an attempt to change that by putting out some thoughts I have (on big data in this case) without worrying too much about how complete and polished they are.

Anyway, here we go:

  1. All data is time-series – When data is added to the big data store (Hadoop or otherwise) it is already historical, i.e. it is being imported from a transactional system, even if it is being streamed into the platform. If you treat the data as historical and somehow version it (most simplistically by adding a timestamp) before you store it, you can see how the data changes over time – and when you relate it to other data you can see both the way the system was at the time a particular piece of data (e.g. an event) was created, as well as get the latest version and see its state now. Essentially, treating all data as slowly changing dimensions gives you enhanced capabilities when you want to analyze the data later.
  2. Enrich data with “foreign keys” before persisting it (or close to it) – Usually a data source does not stand alone and can be related to other data, either from the same source or otherwise. Resolving some of these correlations while the data is fresh and the context is known can save a lot of time later, both because you’d otherwise do these correlations multiple times instead of once, and because when the data is ingested the context and relations are more obvious than when, say, a year later you try to make sense of the data and recall its relations to other data (a minimal sketch follows this list).
  3. Land data in a queue – This ties nicely to the previous two points, as the types of enrichment mentioned above are well suited to handling in streaming. If you land all data in a queue you gain a unified ingestion pipeline for both batch and streaming data. Naturally not all computations can be handled in streaming, but you’d be able to share at least some of the pipeline.
  4. Lineage is important (and doesn’t get enough attention) – Raw data is just that, but to get insights you need to process, enrich and aggregate it – a lot of the time this creates a disconnect between the end result and the original data. Understanding how insights were generated is important both for debugging problems and for ensuring compliance (for actions that demand it).
  5. Not everything is big data – Big data is all the rage today, but a lot of problems don’t require it. Moreover, when you move to a distributed system you both complicate the solution and, more importantly, slow down the processing (until, of course, you hit the threshold where the data can’t be handled by a single machine). This is even truer for big data systems, where you have a lot of distributed nodes and the management (coordination etc.) is more costly (back at Nice we referred to the initialization time for Hadoop jobs as “waking the elephant” and we wanted to make sure a job was worth the wait).
  6. Don’t underestimate scale-up – A point related to the above. Machines today are quite powerful, and when the problem is only “bigish” a scale-up solution might solve it better and cheaper. Read, for example, “Scalability! But at what COST?” by Frank McSherry and “Command-line tools can be 235x faster than your Hadoop cluster” by Adam Drake as two examples for points 5 & 6.
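
To make points 1-3 concrete, here is a minimal sketch of an ingestion step (my illustration, not taken from any particular platform): it versions each incoming record with a timestamp, resolves a "foreign key" while the context is still known, and lands the result on a queue shared by batch and streaming consumers. The names (enrich_event, customer_index, queue) are hypothetical:

import time
import uuid

def enrich_event(raw_event, customer_index, queue):
    record = dict(raw_event)
    # Point 1: version the record with an ingestion timestamp (the simplest scheme)
    record["ingested_at"] = time.time()
    record["record_id"] = str(uuid.uuid4())
    # Point 2: resolve the relation now, while it is still obvious from context;
    # a year from now recovering this link would take real detective work
    record["customer_id"] = customer_index.get(raw_event.get("customer_email"))
    # Point 3: land the enriched record on the shared ingestion queue
    queue.append(record)
    return record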

This concludes this batch of thoughts. Comments and questions are welcomed

image by Nilay Gandhi

Categories: Architecture

Another elasticsearch blog post, now about Shield

Gridshore - Thu, 01/29/2015 - 21:13

I just wrote another piece of text on my other blog. This time I wrote about the recently released elasticsearch plugin called Shield. If you want to learn more about securing your elasticsearch cluster, please head over to my other blog and start reading.

http://amsterdam.luminis.eu/2015/01/29/elasticsearch-shield-first-steps-using-java/

The post Another elasticsearch blog post, now about Shield appeared first on Gridshore.

Categories: Architecture, Programming

New blog posts about bower, grunt and elasticsearch

Gridshore - Mon, 12/15/2014 - 08:45

Two new blog posts I want to point out to you all. I wrote these blog posts on my employer's blog:

The first post is about creating backups of your elasticsearch cluster. Some time ago they introduced the snapshot/restore functionality. Of course you can use the REST endpoint to use the functionality, but how easy is it if you can use a plugin to handle the snapshots? Or maybe even better, integrate the functionality in your own java application. That is what this blog post is about: integrating snapshot/restore functionality in your java application. As a bonus there are the screens of my elasticsearch gui project showing the snapshot/restore functionality.

Creating elasticsearch backups with snapshot/restore

The second blog post I want to bring to your attention is front-end oriented. I already mentioned my elasticsearch gui project. This is an AngularJS application. I have been working on the plugin for a long time and the amount of javascript code is increasing. Therefore I wanted to introduce grunt and bower to my project. That is what this blog post is about.

Improve my AngularJS project with grunt

The post New blog posts about bower, grunt and elasticsearch appeared first on Gridshore.

Categories: Architecture, Programming