
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

The Microsoft Take on Containers and Docker

This is a guest repost by Mark Russinovich, CTO of Microsoft Azure (and novelist!). We all benefit from a vibrant competitive cloud market and Microsoft is part of that mix. Here's a good container overview along with Microsoft's plan of attack. Do you like their story? Is it interesting? Is it compelling?

You can’t have a discussion on cloud computing lately without talking about containers. Organizations across all business segments, from banks and major financial service firms to e-commerce sites, want to understand what containers are, what they mean for applications in the cloud, and how to best use them for their specific development and IT operations scenarios.

From the basics of what containers are and how they work, to the scenarios they’re being most widely used for today, to emerging trends supporting “containerization”, I thought I’d share my perspectives to better help you understand how to best embrace this important cloud computing development to more seamlessly build, test, deploy and manage your cloud applications.

Containers Overview

In abstract terms, all of computing is based upon running some “function” on a set of “physical” resources, like processor, memory, disk, network, etc., to accomplish a task, whether a simple math calculation, like 1+1, or a complex application spanning multiple machines, like Exchange. Over time, as the physical resources became more and more powerful, often the applications did not utilize even a fraction of the resources provided by the physical machine. Thus “virtual” resources were created to simulate underlying physical hardware, enabling multiple applications to run concurrently – each utilizing fractions of the physical resources of the same physical machine.

We commonly refer to these simulation techniques as virtualization. While many people immediately think virtual machines when they hear virtualization, that is only one implementation of virtualization. Virtual memory, a mechanism implemented by all general purpose operating systems (OSs), gives applications the illusion that a computer’s memory is dedicated to them and can even give an application the experience of having access to much more RAM than the computer has available.

Containers are another type of virtualization, also referred to as OS Virtualization. Today’s containers on Linux create the perception of a fully isolated and independent OS to the application. To the running container, the local disk looks like a pristine copy of the OS files, the memory appears only to hold files and data of a freshly-booted OS, and the only thing running is the OS. To accomplish this, the “host” machine that creates a container does some clever things.

The first technique is namespace isolation. Namespaces include all the resources that an application can interact with, including files, network ports and the list of running processes. Namespace isolation enables the host to give each container a virtualized namespace that includes only the resources that it should see. With this restricted view, a container can’t access files not included in its virtualized namespace regardless of their permissions because it simply can’t see them. Nor can it list or interact with applications that are not part of the container, which fools it into believing that it’s the only application running on the system when there may be dozens or hundreds of others.
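The restricted view can be sketched with a toy model (plain Python, purely illustrative; real namespace isolation is a kernel feature, and the process names and PIDs below are invented):

```python
# Toy model of namespace isolation -- NOT real kernel namespaces.
# The host holds the full set of resources; each container is handed
# a filtered view and simply cannot see beyond it.

HOST_PROCESSES = {1: "init", 17: "sshd", 104: "web", 205: "db"}

def container_view(visible_pids):
    """Return the process list a container is allowed to see."""
    return {pid: name for pid, name in HOST_PROCESSES.items()
            if pid in visible_pids}

web_container = container_view({104})
print(web_container)         # the container believes 'web' is all that runs
print(205 in web_container)  # 'db' does not exist from inside the container
```

From inside the filtered view, the other processes are not merely inaccessible; they are invisible, which is exactly the deception described above.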

For efficiency, many of the OS files, directories and running services are shared between containers and projected into each container’s namespace. Only when an application makes changes to its container, for example by modifying an existing file or creating a new one, does the container get distinct copies from the underlying host OS, and only of the portions changed, using Docker’s “copy-on-write” optimization. This sharing is part of what makes deploying multiple containers on a single host extremely efficient.
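The layered lookup behind copy-on-write can be sketched in a few lines (a plain-Python illustration of the idea, not Docker’s actual storage driver; paths and contents are invented):

```python
# Toy copy-on-write layering: reads fall through to the shared base
# image; writes land in a per-container layer, so the base is never
# modified and stays shared between containers.

base_image = {"/etc/hosts": "127.0.0.1 localhost", "/bin/sh": "<binary>"}

class Container:
    def __init__(self, base):
        self.base = base   # shared, treated as read-only
        self.layer = {}    # private, holds only the files this container changed

    def read(self, path):
        return self.layer.get(path, self.base.get(path))

    def write(self, path, data):
        self.layer[path] = data  # copy-on-write: only the changed file is private

c1, c2 = Container(base_image), Container(base_image)
c1.write("/etc/hosts", "127.0.0.1 myapp")
print(c1.read("/etc/hosts"))  # sees its own modified copy
print(c2.read("/etc/hosts"))  # still sees the shared original
```

The private layer holds only what was changed, which is why dozens of containers on one host cost so little extra disk and memory.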

Second, the host controls how much of its resources a container can use. Governing resources like CPU, RAM and network bandwidth ensures that a container gets the resources it expects and doesn’t impact the performance of other containers running on the host. For example, a container can be constrained so that it cannot use more than 10% of the CPU. That means that even if the application within it tries, it can’t access the other 90%, which the host can assign to other containers or keep for its own use. Linux implements such governance using a technology called “cgroups.” Resource governance isn’t required in cases where containers placed on the same host are cooperative, allowing for standard OS dynamic resource assignment that adapts to the changing demands of application code.
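The effect of a CPU cap can be illustrated with a trivial sketch (invented names and quotas; real cgroups enforcement happens in the kernel scheduler, not in application code):

```python
# Toy resource governor in the spirit of a cgroups CPU limit: each
# container is granted scheduler ticks in proportion to its quota and
# can never exceed its cap, no matter what the application attempts.

def schedule(quotas, total_ticks=100):
    """quotas: {container: CPU fraction}. Return ticks granted to each."""
    return {name: min(round(fraction * total_ticks), total_ticks)
            for name, fraction in quotas.items()}

# A container capped at 10% cannot touch the other 90%, which the host
# may hand to other containers or keep for itself.
print(schedule({"web": 0.10, "batch": 0.50}))  # {'web': 10, 'batch': 50}
```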

The combination of instant startup that comes from OS virtualization and reliable execution that comes from namespace isolation and resource governance makes containers ideal for application development and testing. During the development process, developers can quickly iterate. Because its environment and resource usage are consistent across systems, a containerized application that works on a developer’s system will work the same way on a different production system. The instant-start and small footprint also benefits cloud scenarios, since applications can scale-out quickly and many more application instances can fit onto a machine than if they were each in a VM, maximizing resource utilization.

Comparing a similar scenario that uses virtual machines with one that uses containers highlights the efficiency gained by the sharing. In the example shown below, the host machine has three VMs. In order to provide the applications in the VMs complete isolation, they each have their own copies of OS files, libraries and application code, along with a full in-memory instance of an OS. Starting a new VM requires booting another instance of the OS, even if the host or existing VMs already have running instances of the same version, and loading the application libraries into memory. Each application VM pays the cost of the OS boot and the in-memory footprint for its own private copies, which also limits the number of application instances (VMs) that can run on the host.

App Instances on Host

The figure below shows the same scenario with containers. Here, containers simply share the host operating system, including the kernel and libraries, so they don’t need to boot an OS, load libraries or pay a private memory cost for those files. The only incremental space they take is any memory and disk space necessary for the application to run in the container. While the application’s environment feels like a dedicated OS, the application deploys just like it would onto a dedicated host. The containerized application starts in seconds and many more instances of the application can fit onto the machine than in the VM case.
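A back-of-the-envelope calculation makes the footprint difference concrete (all sizes below are assumptions chosen for illustration, not measurements):

```python
# Rough comparison of instance density: each VM carries a private OS
# footprint, while containers share a single host OS. All numbers are
# invented for illustration.

HOST_RAM_MB = 16_384  # 16 GB host (assumption)
OS_RAM_MB   = 1_024   # per-VM OS footprint (assumption)
APP_RAM_MB  = 256     # per-instance application footprint (assumption)

vm_instances        = HOST_RAM_MB // (OS_RAM_MB + APP_RAM_MB)
container_instances = (HOST_RAM_MB - OS_RAM_MB) // APP_RAM_MB  # one shared OS

print(vm_instances)         # 12 VMs fit
print(container_instances)  # 60 containers fit
```

Even with generous assumptions for the VM case, paying the OS cost once instead of per instance changes the density by several multiples.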

Containers on Host

Docker’s Appeal
Categories: Architecture

Quote of the Day

Herding Cats - Glen Alleman - Wed, 08/19/2015 - 14:55

The door of a bigoted mind opens outwards. The pressure of facts merely closes it more snugly.
- Ogden Nash

When new ideas are being conjectured, it is best for the conversation to establish the principles on which those ideas can be tested. Without this, the person making the conjecture has to defend the idea on personality, personal anecdotes, and personal experience alone.

Categories: Project Management

KISS — One Best Practice to Rule Them All

Making the Complex Simple - John Sonmez - Wed, 08/19/2015 - 13:00

Why KISS isn’t easy Let’s talk about KISS, or “Keep It Simple, Stupid.” But before I go any further, just think a little about your favorite best practice when writing code. Is it DRY—Don’t Repeat Yourself? Or are you more a YAGNI—You Aren’t Gonna Need It—person? Do you follow SOLID principles? Or are you really […]

The post KISS — One Best Practice to Rule Them All appeared first on Simple Programmer.

Categories: Programming

Quote of the Month August 2015

From the Editor of Methods & Tools - Wed, 08/19/2015 - 09:39
Acknowledging that something isn’t working takes courage. Many organizations encourage people to spin things in the most positive light rather than being honest. This is counterproductive. Telling people what they want to hear just defers the inevitable realization that they won’t get what they expected. It also takes from them the opportunity to react to […]

You Are Not Agile If . . .

Principles really count because they define a set of behaviors; they can clear a path through the fog.


Every once in a while I get very confused when someone tells me they are Agile, but then lists a whole bunch of practices that make 1950s management and software development practices look enlightened. Practicing and being Agile has meaning. Being Agile begins by embracing a set of principles. Whether you embrace the 12 principles in the Agile Manifesto or the 24 suggested by Gil Broza in his book The Agile Mind-Set (SPaMCAST 352), without a foundation of principles you can’t achieve stability in Agile. In the end, principles really count because they define a set of behaviors. Organizations that don’t embrace the principles of Agile can perform some of the techniques, but they can never really be Agile. There are all sorts of reasons why people and organizations eschew some or all of the Agile principles; however, whatever the reason, rejecting any of the principles injures your ability to be Agile. The most pernicious reason for an organization saying it is Agile but not being Agile sits directly at the feet of management.

Management is critical to implementing Agile. Once upon a time Agile transformations were a bottom-up phenomenon. Teams would adopt Agile and then use the experience and value they delivered to sell the concept to management. These days every conference and magazine touts the benefits of being Agile to C-level executives. The marketing machine has ensured that every executive understands the value of being Agile; therefore, transformation has become a top-down phenomenon. The problem is that the value is sold but not the need to transform behavior, leading to a scenario where the people pushing Agile down don’t truly understand the necessity to embrace the principles and change how they behave. In this scenario management views Agile as a developer thing . . . not a management thing. A few years ago I was sitting in on a meeting that a CIO was having with his direct reports to kick off an Agile transformation. For me, the real nails-on-the-chalkboard moment was when the CIO very passionately stated, “We will embrace one or two of the 12 Agile principles this year and think about the others once we get our feet wet.” I think I shattered a molar on the spot (I now have a crown). In essence what he was saying was that he wanted all the benefits of Agile, but had no intention of changing his behavior, nor did he expect his managers to change. The goal of being Agile moved farther away as the result of that meeting. If you want to determine whether you have management support for Agile, look for these three symptoms.

  1. “Agile” projects with all three basic constraints fixed: budget, scope and duration. Typically Agile projects have fixed teams; therefore, they can vary either duration (to deliver a fixed scope) or scope (to deliver to a fixed duration). Fixing all three constraints prevents teams from respecting the Agile principles and using basic Agile techniques. For example, with all three constraints fixed teams will need to force their velocity/productivity to the level needed to deliver regardless of the amount of technical debt incurred by the process.
  2. Command and control project management. Agile teams self-organize and self-manage. Teams where someone is playing a command and control role are not Agile. This type of leadership can’t persist unless it is acceptable to management.
  3. Detailed task schedules at the sprint level (or worse yet, at the release level). Detailed task schedules, typically accompanied by task-level estimates and assignments, are a reflection of classic task-level project management, not Agile. Task schedules appear when Agile practitioners either do not have the proper training or are required to develop schedules, which strips away the ability to self-manage and self-organize.

Management has a strong influence on how Agile is practiced. When management fails to understand their obligation to support and promote Agile, other strongly entrenched positions will generate resistance and compromises that reduce the effectiveness of Agile principles and techniques.

Categories: Process Management

Release Burn Down Brought to Life

Xebia Blog - Tue, 08/18/2015 - 23:48

Inspired by Mike Cohn’s blog post [Coh08], "Improving On Traditional Release Burndown Charts", I created a time-lapse version of it. It also nicely demonstrates that forecasts of "What will be finished?" (at a certain time) get better as the project progresses.

The improved traditional release burn down chart clearly shows (a) what is finished (light green), (b) what will very likely be finished (dark green), (c) what may or may not be finished (orange), and (d) what is almost guaranteed not to be finished (red).

This knowledge supports product owners in ordering the backlog based on the current knowledge.


The result is obtained by running a Monte Carlo simulation of a toy project, using a fixed product backlog of around 100 backlog items of various sizes. The amount of work realized also varies per project day, based on a simple uniform probability distribution.

Forecasting is done using a 'worst' velocity and a 'best' velocity. Both are determined using only the last 3 velocities, i.e. only the last 3 sprints are considered.

The 2 grey lines represent the height of the orange part of the backlog, i.e. the backlog items that might or might not be finished. This also indicates the uncertainty over time of what will actually be delivered by the team at the given time.
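The simulation is easy to reproduce in miniature. The sketch below uses assumed parameters throughout (item sizes, velocity range, the 3-sprint window) and tracks an optimistic and a pessimistic forecast of the sprints remaining; the gap between the two plays the role of the two grey lines and collapses to zero as the backlog burns down:

```python
import random

# Miniature Monte Carlo release forecast. All parameters are invented;
# only the mechanism matches the description: 'worst' and 'best'
# velocities are taken from the last 3 sprints only.
random.seed(1)

backlog = sum(random.randint(1, 8) for _ in range(100))  # total points
velocities, forecasts = [], []
remaining, sprint = backlog, 0

while remaining > 0:
    sprint += 1
    done = random.randint(15, 30)        # uniform velocity per sprint
    remaining = max(0, remaining - done)
    velocities.append(done)
    window = velocities[-3:]             # only the last 3 sprints count
    worst, best = min(window), max(window)
    lo = remaining / best                # optimistic sprints still needed
    hi = remaining / worst               # pessimistic sprints still needed
    forecasts.append((sprint, lo, hi))

# The lo/hi gap widens while velocity history is noisy, then narrows
# to zero as the backlog burns down -- the two grey lines converging.
print(forecasts[0], forecasts[-1])
```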


The Making Of...

The movie above was created using gnuplot [GNU Plot] to draw the charts, and ffmpeg [ffmpeg] to create the time-lapse movie from the set of charts.


Over time the difference between the 2 grey lines gets smaller, a clear indication of improving predictability and reduction of risk. Also, the movie shows that the final set of backlog items done lies well between the 2 grey lines from the start of the project.

This looks very similar to the 'Cone of Uncertainty'. Besides the fact that the shape of the grey lines only remotely resembles a cone, another difference is that the above simulation merely takes statistical chance into account. The fact that the team gains more knowledge and insight over time is not considered in the simulation, whereas it is an important factor in the 'Cone of Uncertainty'.


[Coh08] "Improving On Traditional Release Burndown Charts", Mike Cohn, June 2008.

[GNU Plot] Gnuplot version 5.0, "A portable command-line driven graphing utility".

[ffmpeg] "A complete, cross-platform solution to record, convert and stream audio and video".

Project Cage Match: Multitasking vs. Critical Chain

Good Requirements - Jeffrey Davidson - Tue, 08/18/2015 - 23:11

Last week I was able to participate in the University of Texas at Dallas’ 9th Annual PM Symposium. It’s one of my very favorite events (this was my 4th time) because the audience tends to come back year after year, and many of them come back to see me. They’re supportive and engaged, and it’s a good time.


This year I spoke about the problems with multitasking. I will post a link to the paper after they make it public, but in the meantime here are the slides. Enjoy!


Categories: Requirements

17 Theses on Software Estimation (Expanded)

10x Software Development - Steve McConnell - Tue, 08/18/2015 - 17:51

This post is part of an ongoing discussion with Ron Jeffries, which originated from some comments I made about #NoEstimates. You can read my original "17 Theses on Software Estimation" post here. That post has been completely subsumed by this one, so you can just read this post. You can read Ron's response to my original 17 Theses article here. This post doesn't respond to Ron's post per se; it has been expanded to address points he raised, but responses to him are more implicit than explicit.

Arriving late to the #NoEstimates discussion, I’m amazed at some of the assumptions that have gone unchallenged, and I’m also amazed at the absence of some fundamental points that no one seems to have made so far. The point of this article is to state unambiguously what I see as the arguments in favor of estimation in software and put #NoEstimates in context.  

1. Estimation is often done badly and ineffectively and in an overly time-consuming way. 

My company and I have taught upwards of 10,000 software professionals better estimation practices, and believe me, we have seen every imaginable horror story of estimation done poorly. There is no question that “estimation is often done badly” is a true observation of the state of the practice. 

2. The root cause of poor estimation is usually lack of estimation skills. 

Estimation done poorly is most often due to lack of estimation skills. Having smart people use common sense is not sufficient to estimate software projects. Reading two-page blog articles on the internet is not going to teach anyone how to estimate very well. Good estimation is not that hard once you’ve developed the skill, but it isn’t intuitive or obvious, and it requires focused self-education or training. 

One of the most common estimation problems is people engaging with so-called estimates that are not really Estimates, but that are really Business Targets or requests for Commitments. You can read more about that in my estimation book or watch my short video on Estimates, Targets, and Commitments. 

3. Many comments in support of #NoEstimates demonstrate a lack of basic software estimation knowledge. 

I don’t expect most #NoEstimates advocates to agree with this thesis, but as someone who does know a lot about estimation I think it’s clear on its face. Here are some examples:

(a) Are estimation and forecasting the same thing? As far as software estimation is concerned, yes they are. (Just do a Google or Bing search of “definition of forecast”.) Estimation, forecasting, prediction--it's all the same basic activity, as far as software estimation is concerned. 

(b) Is showing someone several pictures of kitchen remodels that have been completed for $30,000 and implying that the next kitchen remodel can be completed for $30,000 estimation? Yes, it is. That’s an implementation of a technique called Reference Class Forecasting. 

(c) Does doing a few iterations, calculating team velocity, and then using that empirical velocity data to project a completion date count as estimation? Yes, it does. Not only is it estimation, it is a really effective form of estimation. I’ve heard people argue that because velocity is empirically based, it isn’t estimation. Good estimation is empirically based, so that argument exposes a lack of basic understanding of the nature of estimation. 

(d) Is counting the number of stories completed in each sprint rather than story points, calculating the average number of stories completed per sprint, and using that average for sprint planning, estimation? Yes, for the same reasons listed in point (c). 

(e) Most of the #NoEstimates approaches that have been proposed, including (c) and (d) above, are approaches that were defined in my book Software Estimation: Demystifying the Black Art, published in 2006. The fact that people are claiming these long-ago-published techniques as "new" under the umbrella of #NoEstimates is another reason I say many of the #NoEstimates comments demonstrate a lack of basic software estimation knowledge. 

(f) Is estimation time consuming and a waste of time? One of the most common symptoms of lack of estimation skill is spending too much time on ineffective activities. This work is often well-intentioned, but it’s common to see well-intentioned people doing more work than they need to get worse estimates than they could be getting.

(g) Is it possible to get good estimates? Absolutely. We have worked with multiple companies that have gotten to the point where they are delivering 90%+ of their projects on time, on budget, with intended functionality. 
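The empirical projection described in points (c) and (d) fits in a few lines (invented numbers, shown only to make the mechanics concrete):

```python
# Projecting completion from empirical throughput -- the technique in
# points (c) and (d). The observed counts and remaining work are
# invented for illustration.

completed_per_sprint = [9, 11, 10]   # stories finished in recent sprints
remaining_stories    = 40

avg_velocity = sum(completed_per_sprint) / len(completed_per_sprint)
sprints_left = remaining_stories / avg_velocity

print(avg_velocity)            # 10.0 stories per sprint
print(round(sprints_left, 1))  # 4.0 sprints to go
```

However it is labeled, this is an estimate: historical data in, a prediction of the future out.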

One reason many people find estimation discussions (aka negotiations) challenging is that they don't really believe the estimates they came up with themselves. Once you develop the skill needed to estimate well -- as well as getting clear about whether the business is really talking about an estimate, a target, or a commitment -- estimation discussions become more collaborative and easier. 

4. Being able to estimate effectively is a skill that any true software professional needs to develop, even if they don’t need it on every project. 

“Estimation often doesn't work very well, therefore software professionals should not develop estimation skill” – this is a common line of reasoning in #NoEstimates. This argument doesn't make any more sense than the argument, "Scrum often doesn't work very well, therefore software professionals should not try to use Scrum." The right response in both cases is, "Get better at the practice," not "Throw out the practice altogether." 

#NoEstimates advocates say they're just exploring the contexts in which a person or team might be able to do a project without estimating. That exploration is fine, but until someone can show that the vast majority of projects do not need estimates at all, deciding not to estimate and not to develop estimation skills is premature. And my experience tells me that when all the dust settles, the cases in which no estimates are needed will be the exception rather than the rule. Thus software professionals will benefit -- and their organizations will benefit -- from developing skill at estimation. 

I would go further and say that a true software professional should develop estimation skill so that they can estimate competently on the numerous projects that require estimation. I don't make these claims about software professionalism lightly. I spent four years as chair of the IEEE committee that oversees software professionalism issues for the IEEE, including overseeing the Software Engineering Body of Knowledge, university accreditation standards, professional certification programs, and coordination with state licensing bodies. I spent another four years as vice-chair of that committee. I also wrote a book on the topic, so if you're interested in going into detail on software professionalism, you can check out my book, Professional Software Development. Or you can check out a much briefer, more specific explanation in my company's white paper about our Professional Development Ladder. 

5. Estimates serve numerous legitimate, important business purposes.

Estimates are used by businesses in numerous ways, including: 

  • Allocating budgets to projects (i.e., estimating the effort and budget of each project)
  • Making cost/benefit decisions at the project/product level, which is based on cost (software estimate) and benefit (defined feature set)
  • Deciding which projects get funded and which do not, which is often based on cost/benefit
  • Deciding which projects get funded this year vs. next year, which is often based on estimates of which projects will finish this year
  • Deciding which projects will be funded from CapEx budget and which will be funded from OpEx budget, which is based on estimates of total project effort, i.e., budget
  • Allocating staff to specific projects, i.e., estimates of how many total staff will be needed on each project
  • Allocating staff within a project to different component teams or feature teams, which is based on estimates of scope of each component or feature area
  • Allocating staff to non-project work streams (e.g., budget for a product support group, which is based on estimates for the amount of support work needed)
  • Making commitments to internal business partners (based on projects’ estimated availability dates)
  • Making commitments to the marketplace (based on estimated release dates)
  • Forecasting financials (based on when software capabilities will be completed and revenue or savings can be booked against them)
  • Tracking project progress (comparing actual progress to planned (estimated) progress)
  • Planning when staff will be available to start the next project (by estimating when staff will finish working on the current project)
  • Prioritizing specific features on a cost/benefit basis (where cost is an estimate of development effort)

These are just a subset of the many legitimate reasons that businesses request estimates from their software teams. I would be very interested to hear how #NoEstimates advocates suggest that a business would operate if you remove estimates for each of these purposes.

The #NoEstimates response to these business needs is typically of the form, “Estimates are inaccurate and therefore not useful for these purposes” rather than, “The business doesn’t need estimates for these purposes.” 

That argument really just says that businesses are currently operating on the basis of much worse predictions than they should be, and probably making poorer decisions as a result, because the software staff are not providing very good estimates. If software staff provided more accurate estimates, the business would make better decisions in each of these areas, which would make the business stronger. 

The other #NoEstimates response is that "Estimates are always waste." I don't agree with that. By that line of reasoning, daily stand ups are waste. Sprint planning is waste. Retrospectives are waste. Testing is waste. Everything but code-writing itself is waste. I realize there are Lean purists who hold those views, but I don't buy any of that. 

Estimates, done well, support business decision making, including the decision not to do a project at all. Taking the #NoEstimates philosophy to its logical conclusion, if #NoEstimates eliminates waste, then #NoProjectAtAll eliminates even more waste. In most cases, the business will need an estimate to decide not to do the project at all.  

In my experience businesses usually value predictability, and in many cases, they value predictability more than they value agility. Do businesses always need predictability? No, there are few absolutes in software. Do businesses usually need predictability? In my experience, yes, and they need it often enough that doing it well makes a positive contribution to the business. Responding to change is also usually needed, and doing it well also makes a positive contribution to the business. This whole topic is a case where both predictability and agility work better than either/or. Competency in estimation should be part of the definition of a true software professional, as should skill in Scrum and other agile practices. 

6. Part of being an effective estimator is understanding that different estimation techniques should be used for different kinds of estimates. 

One thread that runs throughout the #NoEstimates discussions is lack of clarity about whether we’re estimating before the project starts, very early in the project, or after the project is underway. The conversation is also unclear about whether the estimates are project-level estimates, task-level estimates, sprint-level estimates, or some combination. Some of the comments imply ineffective attempts to combine kinds of estimates—the most common confusion I’ve read is trying to use task-level estimates to estimate a whole project, which is another example of lack of software estimation skill. 

You can see a summary of estimation techniques and their areas of applicability here. This quick reference sheet assumes familiarity with concepts and techniques from my estimation book and is not intended to be intuitive on its own. But just looking at the categories you can see that different techniques apply for estimating size, effort, schedule, and features. Different techniques apply for small, medium, and large projects. Different techniques apply at different points in the software lifecycle, and different techniques apply to Agile (iterative) vs. Sequential projects. Effective estimation requires that the right kind of technique be applied to each different kind of estimate. 

Learning these techniques is not hard, but it isn't intuitive. Learning when to use each technique, as well as learning each technique, requires some professional skills development. 

When we separate the kinds of estimates we can see parts of projects where estimates are not needed. One of the advantages of Scrum is that it eliminates the need to do any sort of miniature milestone/micro-stone/task-based estimates to track work inside a sprint. If I'm doing sequential development without Scrum, I need those detailed estimates to plan and track the team's work. If I'm using Scrum, once I've started the sprint I don't need estimation to track the day-to-day work, because I know where I'm going to be in two weeks and there's no real value added by predicting where I'll be day-by-day within that two week sprint. 

That doesn't eliminate the need for estimates in Scrum entirely, however. I still need an estimate during sprint planning to determine how much functionality to commit to for that sprint. Backing up earlier in the project, before the project has even started, businesses need estimates for all the business purposes described above, including deciding whether to do the project at all. They also need to decide how many people to put on the project, how much to budget for the project, and so on. Treating all the requirements as emergent on a project is fine for some projects, but you still need to decide whether you're going to have a one-person team treating requirements as emergent, or a five-person team, or a 50-person team. Defining team size in the first place requires estimation. 

7. Estimation and planning are not the same thing, and you can estimate things that you can’t plan. 

Many of the examples given in support of #NoEstimates are actually indictments of overly detailed waterfall planning, not estimation. The simple way to understand the distinction is to remember that planning is about “how” and estimation is about “how much.” 

Can I “estimate” a chess game, if by “estimate” I mean how each piece will move throughout the game? No, because that isn’t estimation; it’s planning; it’s “how.”

Can I estimate a chess game in the sense of “how much”? Sure. I can collect historical data on the length of chess games and know both the average length and the variation around that average and predict the length of a game. 

More to the point, estimating an individual software project is not analogous to estimating one chess game. It’s analogous to estimating a series of chess games. People who are not skilled in estimation often assume it’s more difficult to estimate a series of games than to estimate an individual game, but estimating the series is actually easier. Indeed, the more chess games in the set, the more accurately we can estimate the set, once you understand the math involved. 
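A small simulation illustrates why the series is easier to estimate than a single game (the game-length distribution below is invented; only the narrowing effect matters):

```python
import random
import statistics

# The length of one chess game is hard to call, but the relative spread
# of a series' total length shrinks as the series grows (roughly as
# 1/sqrt(n) for independent games). Distribution parameters are invented.
random.seed(7)

def game_length():
    return max(1, random.gauss(40, 15))   # moves per game: mean 40, wide spread

def relative_spread(n_games, trials=2000):
    """Std-dev of the series total, as a fraction of its mean."""
    totals = [sum(game_length() for _ in range(n_games))
              for _ in range(trials)]
    return statistics.stdev(totals) / statistics.mean(totals)

print(round(relative_spread(1), 2))    # single game: large relative spread
print(round(relative_spread(25), 2))   # 25-game series: much tighter
```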

This all goes back to the idea that we need estimates for different purposes at different points in a project. An agile project may be about "steering" rather than estimating once the project gets underway. But it may not be allowed to get underway in the first place if there aren't early estimates that show there's a business case for doing the project. 

8. You can estimate what you don’t know, up to a point. 

In addition to estimating “how much,” you can also estimate “how uncertain.” In the #NoEstimates discussions, people throw out lots of examples along the lines of, “My project was doing unprecedented work in Area X, and therefore it was impossible to estimate the whole project.” This is essentially a description of the common estimation mistake of allowing high variability in one area to insert high variability into the whole project's estimate rather than just that one area's estimate. 

Most projects contain a mix of precedented and unprecedented work (also known as certain/uncertain, high risk/low risk, predictable/unpredictable, high/low variability--all of which are loose synonyms as far as estimation is concerned). Decomposing the work, estimating uncertainty in each area, and building up an overall estimate that includes that uncertainty proportionately is one technique for dealing with uncertainty in estimates. 
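That decomposition technique can be sketched in a few lines. Assuming the areas are roughly independent, their variances add, so the overall uncertainty is the root-sum-of-squares of the per-area uncertainties rather than the worst-case uncertainty applied everywhere (the areas and numbers below are hypothetical):

```python
import math

# Hypothetical per-area estimates: (expected effort, uncertainty), in staff-weeks.
areas = {
    "ui":            (20, 2),   # precedented, low variability
    "database":      (30, 3),   # precedented, low variability
    "new_algorithm": (10, 8),   # unprecedented, high variability
}

total = sum(mean for mean, _ in areas.values())
# For roughly independent areas, variances add, so the overall uncertainty
# is the root-sum-of-squares of the per-area uncertainties.
sigma = math.sqrt(sum(sd ** 2 for _, sd in areas.values()))

print(f"estimate: {total} +/- {sigma:.1f} staff-weeks")  # estimate: 60 +/- 8.8 staff-weeks
```

The unprecedented area contributes its own wide error bars without turning the estimate for the well-understood areas, or for the project as a whole, into a guess.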

Why would that ever be needed? Because a business that perceives a whole project as highly risky might decide not to approve the whole project. A business that perceives a project as low to moderate risk overall, with selected areas of high risk, might decide to approve that same project. 

9. Both estimation and control are needed to achieve predictability. 

Much of the writing on Agile development emphasizes project control over project estimation. I actually agree that project control is more powerful than project estimation; however, effective estimation usually plays an essential role in achieving effective control. 

To put this in Agile Manifesto-like terms:

We have come to value project control over project estimation, 
as a means of achieving predictability

As in the Agile Manifesto, we value both terms, which means we still value the term on the right. 

#NoEstimates seems to pay lip service to both terms, but the emphasis from the hashtag onward is really about discarding the term on the right. This is another case where I believe the right answer is both/and, not either/or. 

I wrote an essay when I was Editor in Chief of IEEE Software called "Sitting on the Suitcase" that discussed the interplay between estimation and control, and why we estimate even though we know the activity has inherent limitations. This is still one of my favorite essays. 

10. People use the word "estimate" sloppily. 

No doubt. Lack of understanding of estimation is not limited to people tweeting about #NoEstimates. Business partners often use the word “estimate” to refer to what would more properly be called a “planning target” or “commitment.”  

The word "estimate" does have a clear definition, for those who want to look it up.  

The gist of these definitions is that an "estimate" is something that is approximate, rough, or tentative, and is based upon impressions or opinion. People don't always use the word that way, and you can see my video on that topic here. 

Because people use the word sloppily, one common mistake software professionals make is trying to create a predictive, approximate estimate when the business is really asking for a commitment, or asking for a plan to meet a target, but using the word “estimate” to ask for that. It's common for businesses to think they have a problem with estimation when the bigger problem is with their commitment process. 

We have worked with many companies to achieve organizational clarity about estimates, targets, and commitments. Clarifying these terms makes a huge difference in the dynamics around creating, presenting, and using software estimates effectively. 

11. Good project-level estimation depends on good requirements, and average requirements skills are about as bad as average estimation skills. 

A common refrain in Agile development is “It’s impossible to get good requirements,” and that statement has never been true. I agree that it’s impossible to get perfect requirements, but that isn’t the same thing as getting good requirements. I would agree that “It is impossible to get good requirements if you don’t have very good requirement skills,” and in my experience that is a common case.  I would also agree that “Projects usually don’t have very good requirements,” as an empirical observation—but not as a normative statement that we should accept as inevitable. 

Like estimation skill, requirements skill is something that any true software professional should develop, and the state of the art in requirements at this time is far too advanced for even really smart people to invent everything they need to know on their own. Like estimation skill, a person is not going to learn adequate requirements skills by reading blog entries or watching short YouTube videos. Acquiring skill in requirements requires focused, book-length self-study or explicit training or both. 

If your business truly doesn’t care about predictability (and some truly don’t), then letting your requirements emerge over the course of the project can be a good fit for business needs. But if your business does care about predictability, you should develop the skill to get good requirements, and then you should actually do the work to get them. You can still do the rest of the project using by-the-book Scrum, and then you’ll get the benefits of both good requirements and Scrum.

From my point of view, I often see agile-related claims that look kind of like this: What practices should you use if you have: 

  • Mediocre skill in Estimation
  • Mediocre skill in Requirements
  • Good to excellent skill in Scrum and Related Practices

Not too surprisingly, the answer to this question is Scrum and related practices. I think a more interesting question is: What practices should you use if you have: 

  • Good to excellent skill in Estimation
  • Good to excellent skill in Requirements
  • Good to excellent skill in Scrum and related practices

Having competence in multiple areas opens up some doors that will be closed with a lesser skill set. In particular, it opens up the ability to favor predictability if your business needs that, or to favor flexibility if your business needs that. Agile is supposed to be about options, and I think that includes the option to develop in the way that best supports the business. 

12. The typical estimation context involves moderate volatility and moderate levels of unknowns. 

Ron Jeffries writes, “It is conventional to behave as if all decent projects have mostly known requirements, low volatility, understood technology, …, and are therefore capable of being more or less readily estimated by following your favorite book.” I don’t know who said that, but it wasn’t me, and I agree with Ron that that statement doesn’t describe most of the projects that I have seen. 

I think it would be more true to say, “The typical software project has requirements that are knowable in principle, but that are mostly unknown in practice due to insufficient requirements skills; low volatility in most areas with high volatility in selected areas; and technology that tends to be either mostly leading edge or mostly mature." In other words, software projects are challenging, but the challenge level is manageable. If you have developed the full set of skills a software professional should have, you will be able to overcome most of the challenges or all of them. 

Of course there is a small percentage of projects that do have truly unknowable requirements and across-the-board volatility. I consider those to be corner cases. It’s good to explore corner cases, but also good not to lose sight of which cases are most common. 

13. Responding to change over following a plan does not imply not having a plan. 

It’s amazing that in 2015 we’re still debating this point. Many of the #NoEstimates comments literally emphasize not having a plan, i.e., treating 100% of the project as emergent. They advocate a process—typically Scrum—but no plan beyond instantiating Scrum. 

According to the Agile Manifesto, while agile is supposed to value responding to change, it also is supposed to value following a plan. The Agile Manifesto says, "there is value in the items on the right" which includes the phrase "following a plan." 

While I agree that minimizing planning overhead is good project management, doing no planning at all is inconsistent with the Agile Manifesto, not acceptable to most businesses, and wastes some of Scrum's capabilities. One of the amazingly powerful aspects of Scrum is that it gives you the ability to respond to change; that doesn’t imply that you need to avoid committing to plans in the first place. 

My company and I have seen Agile adoptions shut down in some companies because an Agile team is unwilling to commit to requirements up front or refuses to estimate up front. As a strategy, that’s just dumb. If you fight your business about providing estimates, even if you win the argument that day, you will still get knocked down a peg in the business’s eyes. 

I've commented in other contexts that I have come to the conclusion that most businesses would rather be wrong than vague. Businesses prefer to plant a stake in the ground and move it later rather than avoiding planting a stake in the ground in the first place. The assertion that businesses value flexibility over predictability is Agile's great unvalidated assumption. Some businesses do value flexibility over predictability, but most do not. If in doubt, ask your business. 

If your business does value predictability, use your velocity to estimate how much work you can do over the course of a project, and commit to a product backlog based on your demonstrated capacity for work. Your business will like that. Then, later, when your business changes its mind—which it probably will—you’ll still be able to respond to change. Your business will like that even more.  

14. Scrum provides better support for estimation than waterfall ever did, and there does not have to be a trade-off between agility and predictability. 

Some of the #NoEstimates discussion seems to interpret challenges to #NoEstimates as challenges to the entire ecosystem of Agile practices, especially Scrum. Many of the comments imply that estimation will somehow impair agility. The examples cited to support that are mostly examples of unskilled misapplications of estimation practices, so I see them as additional examples of people not understanding estimation very well. 

The idea that we have to trade off agility to achieve predictability is a false trade-off. If we define "agility" to mean "no notion of our destination" or "treat all the requirements on the project as emergent," then of course there is a trade-off, by definition. If, on the other hand, we define "agility" as "ability to respond to change," then there doesn't have to be any trade-off. Indeed, if no one had ever uttered the word “agile” or applied it to Scrum, I would still want to use Scrum because of its support for estimation and predictability, as well as for its support for responding to change. 

The combination of story pointing, velocity calculation, product backlog, short iterations, just-in-time sprint planning, and timely retrospectives after each sprint creates a nearly perfect context for effective estimation. To put it in estimation terminology, story pointing is a proxy-based estimation technique. Velocity is calibrating the estimate with project data. The product backlog (when constructed with estimation in mind) gives us a very good proxy for size. Sprint planning and retrospectives give us the ability to "inspect and adapt" our estimates. All this means that Scrum provides better support for estimation than waterfall ever did. 
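As an illustration of how those pieces combine, here is a minimal velocity-based forecast (all numbers hypothetical): calibrate from recent sprints, then project the remaining backlog, with the range of demonstrated velocities providing a built-in uncertainty band.

```python
import math
import statistics

backlog_points = 240                    # remaining product backlog, in story points
recent_velocities = [28, 31, 25, 30]    # points completed in recent sprints

velocity = statistics.mean(recent_velocities)             # demonstrated capacity
expected = math.ceil(backlog_points / velocity)
best_case = math.ceil(backlog_points / max(recent_velocities))
worst_case = math.ceil(backlog_points / min(recent_velocities))

print(f"{best_case}-{worst_case} sprints, most likely {expected}")  # 8-10 sprints, most likely 9
```

Because the forecast is recalibrated every sprint as new velocity data arrives, it is exactly the "inspect and adapt" loop applied to the estimate itself.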

If a company truly is operating in a high uncertainty environment, Scrum can be an effective approach. In the more typical case in which a company is operating in a moderate uncertainty environment, Scrum is well-equipped to deal with the moderate level of uncertainty and provide high predictability (e.g., estimation) at the same time. 

15. There are contexts where estimates provide little value. 

I don’t estimate how long it will take me to eat dinner, because I know I’m going to eat dinner regardless of what the estimate says. If I have a defect that keeps taking down my production system, the business doesn’t need an estimate for that because the issue needs to get fixed whether it takes an hour, a day, or a week. 

The most common context I see where estimates are not done on an ongoing basis and truly provide little business value is online contexts, especially mobile, where the cycle times are measured in days or shorter, the business context is highly volatile, and the mission truly is, “Always do the next most useful thing with the resources available.” 

In both these examples, however, there is a point on the scale at which estimates become valuable. If the work on the production system stretches into weeks or months, the business is going to want and need an estimate. As the mobile app matures from one person working for a few days to a team of people working for a few weeks, with more customers depending on specific functionality, the business is going to want more estimates. As the group doing the work expands, they'll need budget and headcount, and those numbers are determined by estimates. Enjoy the #NoEstimates context while it lasts; don’t assume that it will last forever. 

16. This is not religion. We need to get more technical and more economic about software discussions. 

I’ve seen #NoEstimates advocates treat these questions of requirements quality, estimation effectiveness, agility, and predictability as value-laden moral discussions. "Agile" is a compliment and "Waterfall" is an invective. The tone of the argument is more moral than economic. The arguments are of the form, "Because this practice is good," rather than of the form, "Because this practice supports business goals X, Y, and Z." 

That religion isn’t unique to Agile advocates, and I’ve seen just as much religion on the non-Agile sides of various discussions. It would be better for the industry at large if people stayed technical and economic more often. 

Agile is About Creating Options, Right?

I subscribe to the idea that engineering is about doing for a dime what any fool can do for a dollar, i.e., it's about economics. If we assume professional-level skills in agile practices, requirements, and estimation, the decision about how much work to do up front on a project should be an economic decision about which practices will achieve the business goals in the most cost-effective way. We consider issues including the cost of changing requirements and the value of predictability. If the environment is volatile and a high percentage of requirements are likely to spoil before they can be implemented, then it’s a bad economic decision to do lots of up front requirements work. If predictability provides little or no business value, emphasizing up front estimation work would be a bad economic decision.

On the other hand, if predictability does provide business value, then we should support that in a cost-effective way. If we do a lot of the requirements work up front, and some requirements spoil, but most do not, and that supports improved predictability, that would be a good economic choice. 

The economics of these decisions are affected by the skills of the people involved. If my team is great at Scrum but poor at estimation and requirements, the economics of up front vs. emergent will tilt toward Scrum. If my team is great at estimation and requirements but poor at Scrum, the economics will tilt toward estimation and requirements. 

Of course, skill sets are not divinely dictated or cast in stone; they can be improved through focused self-study and training. So we can treat the decision to invest in skills development as an economic issue too. 

Decision to Develop Skills is an Economic Decision Too

What is the cost of training staff to reach competency in estimation and requirements? Does the cost of achieving competency exceed the likely benefits that would derive from competency? That goes back to the question of how much the business values predictability. If the business truly places no value on predictability, there won’t be any ROI from training staff in practices that support predictability. But I do not see that as the typical case. 

My company and I can train software professionals to approach competency in both requirements and estimation in about a week. In my experience most businesses place enough value on predictability that investing a week to make that option available provides a good ROI to the business. Note: this is about making the option available, not necessarily exercising the option on every project. 

My company and I can also train software professionals to approach competency in a full complement of Scrum and other Agile technical practices in about a week. That produces a good ROI too. In any given case, I would recommend both sets of training. If I had to recommend only one or the other, sometimes I would recommend starting with the Agile practices. But my real recommendation is to "embrace the and" and develop both sets of skills.  

For context about training software professionals to "approach competency" in requirements, estimation, Scrum, and other Agile practices, I am using that term based on work we've done with our  Professional Development Ladder. In that ladder we define capability levels of "Introductory," "Competence," "Leadership," and "Mastery." A few days of classroom training will advance most people beyond Introductory and much of the way toward Competence in a particular skill. Additional hands-on experience, mentoring, and feedback will be needed to cement Competence in an area. Classroom study is just one way to acquire these skills. Self-study or working with an expert mentor can work about as well. The skills aren't hard to learn, but they aren't self-evident either. As I've said above, the state of the art in estimation, requirements, and agile practices has moved well beyond what even a smart person can discover on their own. Focused professional development of some kind or other is needed to acquire these skills. 

Is a week enough to accomplish real competency? My company has been training software professionals for almost 20 years, and our consultants have trained upwards of 50,000 software professionals during that time. All of our consultants are highly experienced software professionals first, trainers second. We don't have any methodological ax to grind, so we focus on what is best for each individual client. We all work hands-on with clients so we know what is actually working on the ground and what isn't, and that experience feeds back into our training. We have also invested heavily in training our consultants to be excellent trainers. As a result, our service quality is second to none, and we can make a tremendous amount of progress with a few days of training. Of course additional coaching, mentoring and support are always helpful. 

17. Agility plus predictability is better than agility alone. 

Skills development in practices that support estimation and predictability vs. practices that support agility is not an either/or choice. A truly agile business would be able to be flexible when needed, or predictable when needed. A true software professional will be most effective when skilled in both skill sets. 

If you think your business values agility only, ask your business what it values. Businesses vary, and you might work in a business that truly does value agility over predictability or that values agility exclusively. Many businesses value predictability over agility, however, so don't just assume it's one or the other.  

I think it’s self-evident that a business that has both agility and predictability will outperform a business that has agility only. With today's powerful agile practices, especially Scrum, there's no reason we can't have both.  

Overall, #NoEstimates seems like the proverbial solution in search of a problem. I don't see businesses clamoring to get rid of estimates. I see them clamoring to get better estimates. The good news for them is that agile practices, Scrum in particular, can provide excellent support for agility and estimation at the same time. 

My closing thought, in this hash tag-happy discussion, is that #AgileWithEstimationWorksBest -- and #EstimationWithAgileWorksBest too. 



Quickly Help Users Identify Their Needs

Mike Cohn's Blog - Tue, 08/18/2015 - 15:00

Many projects bog down during requirements gathering. This is dangerous because it wastes valuable time, often setting up a project to be late.

To avoid this problem, I like to engage users and stakeholders in very lightweight story-writing workshops. Before anyone arrives in the meeting room, I will prepare the room by making sure there are pens and index cards within reach of every chair in the room. Even if you will be using a tool to maintain your product backlog, pen and paper are great for this type of early brainstorming.

I then write on a whiteboard or large sheet of paper:

As a ________

I want _________

so that ________

You’ll probably recognize that as the usual format of a user story, but your users may not. So I tell everyone that we’re gathered to fill in the blanks. We may start with a brief effort to identify who the product's users are (the first of the three blanks). But most of the time will be spent filling in the second and third blanks (what and why).

I find that 90 minutes of this is usually plenty of brainstorming to load up a few months' worth of high-priority items in a product backlog. That’s a guideline and will, of course, need to be adjusted based on the number of participants, previous agreement on the product vision, and other factors. Save time by avoiding prioritization discussions. Leave that to the product owner to be done outside the story-writing workshop.

Our Brave New World of 4K Displays

Coding Horror - Jeff Atwood - Tue, 08/18/2015 - 10:39

It's been three years since I last upgraded monitors. Those inexpensive Korean 27" IPS panels, with a resolution of 2560×1440 – also known as 1440p – have served me well. You have no idea how many people I've witnessed being Wrong On The Internet on these babies.

I recently got the upgrade itch real bad:

  • 4K monitors have stabilized as a category, moving from super bleeding edge "I'm probably going to regret buying this" early adopter stuff toward mainstream maturity.

  • Windows 10, with its promise of better high DPI handling, was released. I know, I know, we've been promised reasonable DPI handling in Windows for the last five years, but hope springs eternal. This time will be different!™

  • I needed a reason to buy a new high end video card, which I was also itching to upgrade, and simplify from a dual card config back to a (very powerful) single card config.

  • I wanted to rid myself of the monitor power bricks and USB powered DVI to DisplayPort converters that those Korean monitors required. I covet simple, modern DisplayPort connectors. I was beginning to feel like a bad person because I had never even owned a display that had a DisplayPort connector. First world problems, man.

  • 1440p at 27" is decent, but it's also … sort of an awkward no-man's land. Nowhere near high enough resolution to be retina, but it is high enough that you probably want to scale things a bit. After living with this for a few years, I think it's better to just suck it up and deal with giant pixels (34" at 1440p, say), or go with something much more high resolution and trust that everyone is getting their collective act together by now on software support for high DPI.

Given my great experiences with modern high DPI smartphone and tablet displays (are there any other kind these days?), I want those same beautiful high resolution displays on my desktop, too. I'm good enough, I'm smart enough, and doggone it, people like me.

I was excited, then, to discover some strong recommendations for the Asus PB279Q.

The Asus PB279Q is a 27" panel, same size as my previous cheap Korean IPS monitors, but it is more premium in every regard:

  • 3840×2160
  • "professional grade" color reproduction
  • thinner bezel
  • lighter weight
  • semi-matte (not super glossy)
  • integrated power (no external power brick)
  • DisplayPort 1.2 and HDMI 1.4 support built in

It is also a more premium monitor in price, at around $700, whereas I got my super-cheap no-frills Korean IPS 1440p monitors for roughly half that price. But when I say no-frills, I mean it – these Korean monitors didn't even have on-screen controls!

4K is a surprisingly big bump in resolution over 1440p — we go from 3.7 to 8.3 megapixels.

But, is it … retina?

It depends how you define that term, and from what distance you're viewing the screen. Per Is This Retina:

27" 3840×2160: 'retina' at a viewing distance of 21"
27" 2560×1440: 'retina' at a viewing distance of 32"

With proper computer desk ergonomics you should be sitting with the top of your monitor at eye level, at about an arm's length in front of you. I just measured my arm and, fully extended, it's about 26". Sitting at my desk, I'm probably about that distance from my monitor or a bit closer, but certainly beyond the 21" necessary to call this monitor 'retina' despite being 163 PPI. It definitely looks that way to my eye.
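The arithmetic behind those figures is easy to check. The common "retina" rule of thumb says a display qualifies when one pixel subtends no more than one arcminute of visual angle at your viewing distance; a quick sketch:

```python
import math

def ppi(h_px, v_px, diagonal_in):
    """Pixels per inch from resolution and diagonal size."""
    return math.hypot(h_px, v_px) / diagonal_in

def retina_distance_in(pixels_per_inch):
    """Distance at which one pixel subtends one arcminute of visual angle."""
    return 1.0 / (pixels_per_inch * math.tan(math.radians(1 / 60)))

print(round(ppi(3840, 2160, 27)))                      # 163 PPI
print(round(retina_distance_in(ppi(3840, 2160, 27))))  # 'retina' beyond ~21"
print(round(retina_distance_in(ppi(2560, 1440, 27))))  # 'retina' beyond ~32"
```

So at a typical arm's-length 26", the 4K panel clears the retina threshold comfortably while the 1440p panel falls short.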

I have more words to write here, but let's cut to the chase for the impatient and the TL;DR crowd. This 4K monitor is totally amazing and you should buy one. It feels exactly like going from the non-retina iPad 2 to the retina iPad 3 did, except on the desktop. It makes all the text on your screen look beautiful. There is almost no downside.

There are a few caveats, though:

  • You will need a beefy video card to drive a 4K monitor. I personally went all out for the GeForce 980 Ti, because I might want to actually game at this native resolution, and the 980 Ti is the undisputed fastest single video card in the world at the moment. If you're not a gamer, any midrange video card should do fine.

  • Display scaling is definitely still a problem at times with a 4K monitor. You will run into apps that don't respect DPI settings and end up magnifying-glass tiny. Scott Hanselman provided many examples in January 2014, and although stuff has improved since then with Windows 10, it's far from perfect.

    Browsers scale great, and the OS does too, but if you use any desktop apps built by careless developers, you'll run into this. The only good long term solution is to spread the gospel of 4K and shame them into submission with me. Preach it, brothers and sisters!

  • Enable DisplayPort 1.2 in the monitor settings so you can turn on 60Hz. Trust me, you do not want to experience a 30Hz LCD display. It is unspeakably bad, enough to put one off computer screens forever. For people who tell you they can't see the difference between 30fps and 60fps, just switch their monitors to 30Hz and watch them squirm in pain.

    Viewing those comparison videos, I begin to understand why gamers want 90Hz, 120Hz or even 144Hz monitors. 60fps / 60 Hz should be the absolute minimum, no matter what resolution you're running. Luckily DisplayPort 1.2 enables 60 Hz at 4K, but only just. You'll need DisplayPort 1.3+ to do better than that.

  • Disable the crappy built in monitor speakers. Headphones or bust, baby!

  • Turn down the brightness from the standard factory default of retina scorching 100% to something saner like 50%. Why do manufacturers do this? Is it because they hate eyeballs? While you're there, you might mess around with some basic display calibration, too.

This Asus PB279Q 4K monitor is the best thing I've upgraded on my computer in years. Well, actually, thing(s) I've upgraded, because I am not f**ing around over here.

Flo monitor arms, front view, triple monitors

I'm a long time proponent of the triple monitor lifestyle, and the only thing better than a 4K display is three 4K displays! That's 11,520×2,160 pixels to you, or 6,480×3,840 if rotated.

(Good luck attempting to game on this configuration with all three monitors active, though. You're gonna need it. Some newer games are too demanding to run on "High" settings on a single 4K monitor, even with the mighty Nvidia 980 Ti.)

I've also been experimenting with better LCD monitor arms that properly support my preferred triple monitor configurations. Here's a picture from the back, where all the action is:

Flo monitor arms, triple monitors, rear view

These are the Flo Monitor Supports, and they free up a ton of desk space in a triple monitor configuration while also looking quite snazzy. I'm fond of putting my keyboard just under the center monitor, which isn't possible with any monitor stand.

Flo monitor arm suggested multi-monitor setups

With these Flo arms you can "scale up" your configuration from dual to triple or even quad (!) monitor later.

4K monitors are here, they're not that expensive, the desktop operating systems and video hardware are in place to properly support them, and in the appropriate size (27") we can finally have an amazing retina display experience at typical desktop viewing distances. Choose the Asus PB279Q 4K monitor, or whatever 4K monitor you prefer, but take the plunge.

In 2007, I asked Where Are The High Resolution Displays, and now, 8 years later, they've finally, finally arrived on my desktop. Praise the lord and pass the pixels!

Oh, and gird your loins for 8K one day. It, too, is coming.

Categories: Programming

Iterables, Iterators and Generator functions in ES2015

Xebia Blog - Mon, 08/17/2015 - 20:45

ES2015 adds a lot of new features to JavaScript that make a number of powerful constructs, present in other languages for years, available in the browser (as soon as support for those features is rolled out, of course, but in the meantime we can use them via a transpiler such as Babeljs or Traceur).
Some of the more complicated additions are the iterator and iterable protocols and generator functions. In this post I'll explain what they are and what you can use them for.

The Iterable and Iterator protocols

These protocols are analogous to, for example, Java interfaces: they define the contract that an object must adhere to in order to be considered an iterable or an iterator. So instead of new language features, they leverage existing constructs by agreeing upon a convention (as JavaScript does not have a concept like the interfaces found in typed languages). Let's have a closer look at these protocols and see how they interact with each other.

The Iterable protocol

This protocol specifies that for an object to be considered iterable (and usable in, for example, a `for ... of` loop) it has to define a function with the special key "Symbol.iterator" that returns an object that adheres to the iterator protocol. That is basically the only requirement. For example, say you have a data structure you want to iterate over; in ES2015 you would do that as follows:


class DataStructure {
  constructor(data) { = data;

  [Symbol.iterator]() {
    let current = 0;
    let data =;
    return {
      next: function () {
        return {
          value: data[current++],
          done: current > data.length

let ds = new DataStructure(['hello', 'world']);

console.log([...ds]); // ["hello","world"]

The big advantage of using the iterable protocol over another construct like `for ... in` is that you get more clearly defined iteration semantics (for example, you do not need explicit hasOwnProperty checks when iterating over an array to filter out properties that are on the array object but not in the array). Another advantage is that when using generator functions you can benefit from lazy evaluation (more on generator functions later).

The iterator protocol

As mentioned before, the only requirement for the iterable protocol is for the object to define a function that returns an iterator. But what defines an iterator?
In order for an object to be considered an iterator it must provide a method named `next` that returns an object with two properties:
* `value`: The actual value of the iterable that is being iterated. This is only valid when done is `false`
* `done`: `false` when `value` is an actual value, `true` when the iterator did not produce a new value

Note that when you provide a `value` you can leave out the `done` property, and when the `done` property is `true` you can leave out the `value` property.

The object returned by the function bound to the DataStructure's `Symbol.iterator` property in the previous example does this by returning the entry from the array as the value property and returning `done: false` while there are still entries in the data array.

So by simply implementing both these protocols you can turn any `Class` (or `object` for that matter) into an object you can iterate over.
A number of built-ins in ES2015 already implement these protocols so you can experiment with the protocol right away. You can already iterate over Strings, Arrays, TypedArrays, Maps and Sets.

Generator functions

As the earlier example shows, implementing the iterable and iterator protocols manually can be quite a hassle and is error-prone. That is why a language feature was added in ES2015: generator functions. A generator function combines both an iterable and an iterator in a single definition. It is declared by adding an asterisk (`*`) to the function name and uses `yield` to return values. The big advantage of this method is that your generator function returns an iterator that, when its `next()` method is invoked, runs up to the first `yield` statement it encounters and suspends execution until `next()` is called again (after which it resumes and runs until the next `yield` statement). This allows us to write iterations that are evaluated lazily instead of all at once.

The following example re-implements the iterable and iterator using a generator function producing the same result, but with a more concise syntax.

class DataStructure {
  constructor(data) { = data;

  *[Symbol.iterator]() {
    let data =;
    for (let entry of data) {
      yield entry;

let ds = new DataStructure(['hello', 'world']);

console.log([...ds]); // ["hello","world"]

More complex usages of generators

As mentioned earlier, generator functions allow for lazy evaluation of (possibly) infinite iterations, allowing you to use constructs known from more functional languages, such as taking a limited subset from an infinite sequence:

function* generator() {
  let i = 0;
  while (true) {
    yield i++;

function* take(number, gen) {
  let current = 0;
  for (let result of gen) {
    yield result;
    if (++current >= number) {

console.log([...take(10, generator())]); // [0,1,2,3,4,5,6,7,8,9]
console.log([...take(10, [1,2,3])]); // [1,2,3]

Delegating generators
Within a generator it is possible to delegate to a second generator, making it possible to create recursive iteration structures. The following example demonstrates a simple generator delegating to a sub-generator and returning to the main generator.

function* generator() {
  yield 1;
  yield* subGenerator();
  yield 4;

function* subGenerator() {
  yield 2;
  yield 3;

console.log([...generator()]); // [1,2,3,4]

Develop a sweet spot for Marshmallow: Official Android 6.0 SDK & Final M Preview

Android Developers Blog - Mon, 08/17/2015 - 19:09

By Jamal Eason, Product Manager, Android

Android 6.0 Marshmallow

Whether you like them straight out of the bag, roasted to a golden brown exterior with a molten center, or in fluff form, who doesn’t like marshmallows? We definitely like them! Since the launch of the M Developer Preview at Google I/O in May, we’ve enjoyed all of your participation and feedback. Today with the final Developer Preview update, we're introducing the official Android 6.0 SDK and opening Google Play for publishing your apps that target the new API level 23 in Android Marshmallow.

Get your apps ready for Android Marshmallow

The final Android 6.0 SDK is now available to download via the SDK Manager in Android Studio. With the Android 6.0 SDK you have access to the final Android APIs and the latest build tools so that you can target API 23. Once you have downloaded the Android 6.0 SDK into Android Studio, update your app project's compileSdkVersion to 23 and you are ready to test your app with the new platform. You can also update your app's targetSdkVersion to 23 to test out API 23-specific features like auto-backup and app permissions.

Along with the Android 6.0 SDK, we also updated the Android Support Library to v23. The new Android Support library makes it easier to integrate many of the new platform APIs, such as permissions and fingerprint support, in a backwards-compatible manner. This release contains a number of new support libraries including: customtabs, percent, recommendation, preference-v7, preference-v14, and preference-leanback-v17.

Check your App Permissions

Along with the new platform features like fingerprint support and Doze power saving mode, Android Marshmallow features a new permissions model that streamlines the app install and update process. To give users this flexibility and to make sure your app behaves as expected when an Android Marshmallow user disables a specific permission, it’s important that you update your app to target API 23, and test the app thoroughly with Android Marshmallow users.

How to Get the Update

The Android emulator system images and developer preview system images have been updated for supported Nexus devices (Nexus 5, Nexus 6, Nexus 9 & Nexus Player) to help with your testing. You can download the device system images from the developer preview site. Also, similar to the previous developer update, supported Nexus devices will receive an Over-the-Air (OTA) update over the next couple days.

Although the Android 6.0 SDK is final, the device system images are still developer preview versions. The preview images are near final but they are not intended for consumer use. Remember that when Android 6.0 Marshmallow launches to the public later this fall, you'll need to manually re-flash your device to a factory image to continue to receive consumer OTA updates for your Nexus device.

What is New

Compared to the previous developer preview update, you will find this final API update fairly incremental. You can check out all the API differences here, but a few of the changes since the last developer update include:

  • Android Platform Change:
    • Final Permissions User Interface — we updated the permissions user interface and enhanced some of the permissions behavior.
  • API Change:
    • Updates to the Fingerprint API — which enables better error reporting, better fingerprint enrollment experience, plus enumeration support for greater reliability.
Upload your Android Marshmallow apps to Google Play

Google Play is now ready to accept your API 23 apps via the Google Play Developer Console on all release channels (Alpha, Beta & Production). At the consumer launch this fall, the Google Play store will also be updated so that the app install and update process supports the new permissions model for apps using API 23.

To make sure that your updated app runs well on Android Marshmallow and older versions, we recommend that you use Google Play’s newly improved beta testing feature to get early feedback, then do a staged rollout as you release the new version to all users.

Categories: Programming

SE-Radio Episode 235: Ben Hindman on Apache Mesos

Ben Hindman talks to Jeff Meyerson about Apache Mesos, a distributed systems kernel. Mesos abstracts away many of the hassles of managing a distributed system. Hindman starts with a high-level explanation of Mesos, explaining the problems he encountered trying to run multiple instances of Hadoop against a single data set. He then discusses how Twitter uses […]
Categories: Programming

30 Day Sprints for Personal Development: Change Yourself with Skill

"What lies behind us and what lies before us are small matters compared to what lies within us. And when we bring what is within us out into the world, miracles happen." -- Ralph Waldo Emerson

I've written about 30 Day Sprints before, but it's time to talk about them again:

30 Day Sprints help you change yourself with skill.

Once upon a time, I found that when I was learning a new skill, or changing a habit, or trying something new, I wasn't getting over that first hump, or making enough progress to stick with it.

At the same time, I would get distracted by shiny new objects.  Because I like to learn and try new things, I would start something else, and ditch whatever I was trying to work on, to pursue my new interest.  So I was hopping from thing to thing, without much to show for it and without getting much better.

I decided to stick with something for 30 days to see if it would make a difference.  It was my personal 30 day challenge.  And it worked.  What I found was that sticking with something past two weeks got me past those initial hurdles, the dips that sit just in front of where breakthroughs happen.

All I did was spend a little effort each day for 30 days.  I would try to learn a new insight or try something small each day.  Each day, it wasn't much.  But over 30 days, it accumulated.  And over 30 days, the little effort added up to a big victory.

Why 30 Day Sprints Work So Well

Eventually, I realized why 30 Day Sprints work so well.  You effectively stack things in your favor.  By investing in something for a month, you can change how you approach things.  It's a very different mindset when you are looking at your overall gain over 30 days versus worrying about whether today or tomorrow gave you immediate return on your time.  By taking a longer term view, you give yourself more room to experiment and learn in the process.

  1. 30 Day Sprints let you chip away at the stone.  Rather than go big bang or whole hog up front, you can chip away at it.  This takes the pressure off of you.  You don't have to make a breakthrough right away.  You just try to make a little progress and focus on the learning.  When you don't feel like you made progress, you at least can learn something about your approach.
  2. 30 Day Sprints get you over the initial learning curve.  When you are taking in new ideas and learning new concepts, it helps to let things sink in.  If you're only trying something for a week or even two weeks, you'd be amazed at how many insights and breakthroughs are waiting just over that horizon.  Those troughs hold the keys to our triumphs.
  3. 30 Day Sprints help you stay focused.  For 30 days, you stick with it.  Sure you want to try new things, but for 30 days, you keep investing in this one thing that you decided was worth it.  Because you do a little every day, it actually gets easier to remember to do it. But the best part is, when something comes up that you want to learn or try, you can add it to your queue for your next 30 Day Sprint.
  4. 30 Day Sprints help you do things better, faster, easier, and deeper.  For 30 days, you can try different ways.  You can add a little twist.  You can find what works and what doesn't.  You can keep testing your abilities and learning your boundaries.  You push the limits of what you're capable of.  Over the course of 30 days, as you kick the tires on things, you'll find short-cuts and new ways to improve. Effectively, you unleash your learning abilities through practice and performance.
  5. 30 Day Sprints help you forge new habits.  Because you focus for a little bit each day, you actually create new habits.  A habit is much easier to put in place when you do it each day.  Eventually, you don't even have to think about it, because it becomes automatic.  Doing something every other day, or every third day, means you have to remember when to do it.  We're creatures of habit.  Just redirect a little of the time you already spend each day so it works on your behalf.

And that is just the tip of the iceberg.

The real power of 30 Day Sprints is that they help you take action.  They help you get rid of all the excuses and all the distractions so you can start to achieve what you’re fully capable of.

Ways to Make 30 Day Sprints Work Better

When I first started using 30 Day Sprints for personal development, the novelty of doing something more than a day or a week or even two weeks, was enough to get tremendous value.  But eventually, as I started to do more 30 Day Sprints, I wanted to get more out of them.

Here is what I learned:

  1. Start 30 Day Sprints at the beginning of each month.  Sure, you can start 30 Day Sprints whenever you want, but I have found it much easier if the 17th of the month is day 17 of my 30 Day Sprint.  Also, it's a way to get a fresh start each month.  It's like turning the page.  You get a clean slate.  But what about February?  Well, that's when I do a 28 Day Sprint (and one day more when Leap Year comes).
  2. Same Time, Same Place.  I've found it much easier and more consistent, when I have a consistent time and place to work on my 30 Day Sprint.  Sure, sometimes my schedule won't allow it.  Sure, some things I'm learning require that I do it from different places.  But when I know, for example, that I will work out 6:30 - 7:00 A.M. each day in my living room, that makes things a whole lot easier.  Then I can focus on what I'm trying to learn or improve, and not spend a lot of time just hoping I can find the time each day.  The other benefit is that I start to find efficiencies because I have a stable time and place, already in place.  Now I can just optimize things.
  3. Focus on the learning.  When it's the final inning and the score is tied, and you have runners on base, and you're up at bat, focus is everything.  Don't focus on the score.  Don't focus on what's at stake.  Focus on the pitch.  And swing your best.  And, hit or miss, when it's all over, focus on what you learned.  Don't dwell on what went wrong.  Focus on how to improve.  Don't focus on what went right.  Focus on how to improve.  Don't get entangled by your mini-defeats, and don't get seduced by your mini-successes.  Focus on the little lessons that you sometimes have to dig deeper for.

Obviously, you have to find what works for you, but I've found these ideas to be especially helpful in getting more out of each 30 Day Sprint.  Especially the part about focusing on the learning.  I can't tell you how many times I got too focused on the results, and ended up missing the learning and the insights. 

If you slow down, you speed up, because you connect the dots at a deeper level, and you take the time to really understand nuances that make the difference.

Getting Started

Keep things simple when you start.  Just start.  Pick something, and make it your 30 Day Sprint. 

In fact, if you want to line your 30 Day Sprint up with the start of the month, then just start your 30 Day Sprint now and use it as a warm-up.  Try stuff.  Learn stuff.  Get surprised.  And then, at the start of next month, just start your 30 Day Sprint again.

If you really don't know how to get started, or want to follow a guided 30 Day Sprint, then try 30 Days of Getting Results.  It's where I share my best lessons learned for personal productivity, time management, and work-life balance.  It's a good baseline, because by mastering your productivity, time management, and work-life balance, you will make all of your future 30 Day Sprints more effective.

Boldly Go Where You Have Not Gone Before

But it's really up to you.  Pick something you've been either frustrated by, inspired by, or scared of, and dive in.

Whether you think of it as a 30 Day Challenge, a 30 Day Improvement Sprint, a Monthly Improvement Sprint, or just a 30 Day Sprint, the big idea is to do something small for 30 days.

If you want to go beyond the basics and learn everything you can about mastering personal productivity, then check out Agile Results, introduced in Getting Results the Agile Way.

Who knows what breakthroughs lie within?

May you surprise yourself profoundly.

Categories: Architecture, Programming

Observation Issues and Root Cause Analysis

Herding Cats - Glen Alleman - Mon, 08/17/2015 - 17:29

Todd Little posted a comment on "How To Lie With Statistics," about his observations on the chart contained in that original post. 

As Todd mentions in his response

Screen Shot 2015-08-17 at 8.33.20 AM
The Cone of Uncertainty chart comes from the original work of Barry Boehm, "Reducing Estimation Uncertainty with Continuous Estimation: Assessment Tracking with 'Cone of Uncertainty.'" In this paper Dr. Boehm speaks to the lack of continuous updating of the estimates made early in the program as the source of unfavorable cost and schedule outcomes. 

As long as the projects are not re-assessed or the estimations not re-visited, the cones of uncertainty are not effectively reduced [1]. 

The Cone of Uncertainty is a notional example of how to increase the accuracy and precision of software development estimates with continuous reassessments. For programs in the federal space subject to FAR 34.2 and DFARS 34.201, reporting the Estimate to Complete (ETC) and Estimate at Completion (EAC) is mandatory on a monthly basis. This is rarely done in the commercial world, with the expected results shown in Todd's chart for his data and DeMarco's data.

The core issue, from current Root Cause Analysis research at PARCA (where I have worked as a support contractor), is poor estimates when the program was baselined, and failure to update the ETC and EAC with credible information about risks and physical percent complete.

The data reported in Todd's original chart are the results of the projects based on estimates that may or may not have been credible. So the analysis of the outcomes of the completed projects is Open Loop  ...

... that is, the target estimate measured against the actual outcomes May or May not Have Been Against Credible Estimates. So showing project overages doesn't actually provide the information needed to correct this problem. The estimate may have been credible, but the execution failed to perform as planned.

With this Open Loop assessment it is difficult to determine any corrective actions. Todd's complete presentation, "Uncertainty Surrounding Cone of Uncertainty," speaks to some of the possible root causes of the mismatch between Estimates and Actuals. As Todd mentions in his response, this was not the purpose of his chart; rather, I'd suspect, it was just to show the existence of this gap.

The difficulty, however, is that pointing out observations of problems, while useful to confirm there is a problem, does little to correct the underlying cause of the problem.

At a recent ICEAA conference in San Diego, Dr. Boehm and several others spoke about this estimating problem.  Several books and papers were presented addressing this issue.

Both these resources, and many more, speak to the Root Causes of both the estimating problem and the programmatic issues of staying on plan.

This is the Core Problem That Has To Be Addressed 

We need both good estimates and good execution to arrive as planned. There is plenty of evidence that we have an estimating problem. Conferences (ICEAA and AACE) speak to these. As well as government and FFRDC organizations (search for Root Cause Analysis here PARCA, IDA, MITRE, RAND, and SEI).

But the execution side is also a Root Cause. Much research has been done on procedures and process for Keeping the Program Green. For example the work presented at ICEAA "The Cure for Cost and Schedule Growth" where more possible Root Causes are addressed from our research.

While Todd's chart shows the problem, the community - cost and schedule community - is still struggling with the corrective action. The chart is ½ the story. The other ½ is the poor performance on the execution side IF we had a credible baseline to execute against. 

To date both sides of the problem are unsolved, and therefore we have Open Loop Control with neither the proper steering target nor the proper control of the system to steer toward that target. Without corrections to estimating, planning, scheduling, and execution, there is little hope of improving the probability of success in the software development domain.

Using Todd's chart from the full presentation, the core question that remains unanswered in many domains is

How can we increase the credibility of the estimate to complete earlier in the program?

Screen Shot 2015-08-17 at 10.05.23 AM


  • In the feasibility stage what is a credible estimate, and how can that estimate be improved as the program moves left to right?
  • What are the measures of credibility?
  • How can these measures be informed as the project progresses?
  • What are the physical processes to assure those estimates are increasing in accuracy and precision?

By the way, the term possible error comes from historical data. And like all How to Lie With Statistics charts, that historical data is self-selected: a specific domain, a classification of projects, and, most importantly, the maturity of the organization making the estimates and executing the program.

Much research has shown the maturity of the acquirer influences the accuracy and precision of the estimates. Our poster child is Stardust, with on-time, on-budget, working outcomes due to both the government and contractor Program Managers' maturity in managing in the presence of uncertainty, which is one of the sources of this material.

Managing in the presence of uncertainty from Glen Alleman

[1] Boehm, B. “Software Engineering Economics”. Prentice-Hall, 1981.  

Related articles:
  • Why Guessing is not Estimating and Estimating is not Guessing
  • Estimating and Making Decisions in Presence of Uncertainty
  • Making Conjectures Without Testable Outcomes
Categories: Project Management

How Autodesk Implemented Scalable Eventing over Mesos

This is a guest post by Olivier Paugam, SW Architect for the Autodesk Cloud. I really like this post because it shows how bits of infrastructure--Mesos, Kafka, RabbitMQ, Akka, Splunk, Librato, EC2--can be combined together to solve real problems. It's truly amazing how much can get done these days by a small team.

I was tasked a few months ago to come up with a central eventing system, something that would allow our various backends to communicate with each other. We are talking about activity streaming backends, rendering, data translation, BIM, identity, log reporting, analytics, etc.  So something really generic with varying load, usage patterns and scaling profile.  And oh, also something that our engineering teams could interface with easily.  Of course every piece of the system should be able to scale on its own.

I obviously didn't have time to write too much code and picked up Kafka as our storage core as it's stable, widely used and works okay (please note I'm not bound to using it and could switch over to something else).  Now I of course could not expose it directly and had to front-end it with some API. Without thinking much I also rejected the idea of having my backend manage the offsets as it places too much constraint on how one deals with failures for instance.

So what did I end up with?

Categories: Architecture

How to Use Continuous Planning

If you’ve read Reasons for Continuous Planning, you might be wondering, “How can we do this?” Here are some ideas.

You have a couple of preconditions:

  • The teams get to done on features often. I like small stories that the team can finish in a day or so.
  • The teams continuously integrate their features.

Frequent features with continuous integration create an environment in which you know that you have the least amount of work in progress (WIP). Your program also has a steady stream of features flowing into the code base. That means you can make decisions more often about what the teams can work on next.

Now, let’s assume you have small stories. If you can’t imagine how to make a small story, here is an example I used last week that helped someone envision what a small story was:

Imagine you want a feature set called “secure login” for your product. You might have stories in this order:

  1. A person who is already registered can log in with their user id and password. For this, you only need to have a flat file and a not-too-bright parser—maybe even just a lookup in the flat file. You don't need too many cases in the flat file. You might only have two or three. Yes, this is a minimal story that allows you to write automated tests to verify that it works even when you refactor.
  2. A person who is not yet registered can create a new id and password.
  3. After the person creates a new id and password, that person can log in. You might think of the database schema now. You might not want the entire schema yet. You might want to wait until you see all the negative stories/features. (I’m still thinking flat file here.)
  4. Now, you might add the “parse-all-possible-names” for login. You would refactor Story #2 to use a parser, not copy names and emails into a flat file. You know enough now about what the inputs to your database are, so you can implement the parser.
  5. You want to check for people that you don’t want to log in. These are three different small stories. You might need a spike to consider which stories you want to do when, or do some investigation.
    • Are they from particular IP addresses (web) or physical locations?
    • Do you need all users to follow a specific name format?
    • Do you want to use a captcha (web) or some other robot-prevention device for login (three tries, etc.)?
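To make Story #1 concrete: the "not-too-bright parser—maybe even just a lookup in the flat file" can be sketched in a few lines of shell. Everything here is invented for illustration: the file name `users.txt`, the `id:password` line format, and the sample entries.

```shell
# Hypothetical sketch of Story #1: login as an exact-match lookup
# in a flat file. File name, format and entries are all invented.
cat > users.txt <<'EOF'
alice:s3cret
bob:hunter2
EOF

login() {
  # -F: fixed string (no regex), -x: match the whole line, -q: status only
  if grep -qxF "$1:$2" users.txt; then
    echo "login ok"
  else
    echo "login failed"
  fi
}

login alice s3cret   # prints "login ok"
login alice wrong    # prints "login failed"
```

This is exactly the kind of minimal story that lets you write automated tests on day one; when Story #4 replaces the flat file with a real parser and database, those tests keep verifying the behavior through the refactoring.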

Maybe you have more stories here. I am at the limit of what I know for secure login. Those of you who implement secure login might think I am past my limit.

These five-plus stories are a feature set for secure login. You might not need more than stories 1, 2, and 3 the first time you touch this feature set. That’s fine. You have the other stories waiting in the product backlog.

If you are a product owner, you look at the relative value of each feature against each other feature. Maybe you need this team to do these three first stories and then start some revenue stories. Maybe the Accounting team needs help on their backlog, and this feature team can help. Maybe the core-of-the-product team needs help. If you have some kind of login, that’s good enough for now. Maybe it’s not good enough for an external release. It’s good enough for an internal release.

Your ability to change what feature teams do every so often is part of the value of agile and lean product ownership—which helps a program get to done faster.

You might have an initial one-quarter backlog that might look like this:


Start at the top and left.

You see the internal releases across the top. You see the feature sets just under the internal releases. This part is still a wish list.

Under the feature sets are the actual stories in the details. Note how the POs can change what each team does, to create a working skeleton.

The details are in the stories at the bottom.

This is my picture. You might want something different from this.

The idea is to create a Minimum Viable Product for each demo and to keep improving the walking skeleton as the teams continue to create the product.

Because you have release criteria for the product as a whole, you can ask as the teams demo, “What do we have to do to accomplish our release criteria?” That question helps you replan for the next iteration (or the next set of stories in the kanban). Teams can see interdependencies because their stories are small. They can ask each other, “Hey, can you do the file transfer first, before you start to work on the Engine?”

The teams work with their product owners. The product owners (product owner team) work together to develop and replan the next iteration’s plan which leads to replanning the quarter’s plan. You have continuous planning.

You don’t need a big meeting. The feature team small-world networks help the teams see what they need, in the small. The product owner team small-world network helps the product owners see what they need for the product over the short-term and the slightly longer term. The product manager can meet with the product owner team at least once a quarter to revise the big picture product roadmap.

You can do this if the teams have small stories, if they pay attention to technical excellence and use continuous integration.

In a program, you want smallness to go big. Small stories lead to more frequent internal releases (every day is great, at least once a month). More frequent internal releases lead to everyone seeing progress, which helps people accomplish their work.

You don’t need a big planning meeting. You do need product owners who understand the product and work with the teams all the time.

The next post will be about whether you want resilience or prediction in your project/program. Later :-)

Categories: Project Management

You Have to Master the Rules Before You Can Break the Rules

Making the Complex Simple - John Sonmez - Mon, 08/17/2015 - 13:00

A good portion of my advice ends up being  misunderstood. I’m all for being a rule breaker. I, in general, have a contempt for authority. I thumb my nose at gatekeepers. I genuinely believe the path of going to school, getting good grades, and working a shitty 9-to-5 job for the rest of your life, […]

The post You Have to Master the Rules Before You Can Break the Rules appeared first on Simple Programmer.

Categories: Programming

Persistence with Docker containers - Team 1: GlusterFS

Xebia Blog - Mon, 08/17/2015 - 10:05

This is a follow-up post from the KLM innovation day.

The goal of Team 1 was to have a GlusterFS cluster running in Docker containers and to expose the distributed file system to a container by ‘mounting’ it through a so-called data container.

Setting up GlusterFS was not that hard; the installation steps are explained here [installing-glusterfs-a-quick-start-guide].

The Dockerfiles we eventually created and used can be found here [docker-glusterfs].
Note: the Dockerfiles still contain some manual steps, because you need to tell GlusterFS about the other nodes so they can find each other. In a real environment this could be done by, for example, Consul.

Although setting up the GlusterFS cluster was not hard, mounting it on CoreOS proved much more complicated. We wanted to mount the folder through a container using the GlusterFS client, but to achieve that the container needs to run in privileged mode or with the ‘SYS_ADMIN’ capability. This has nothing to do with GlusterFS itself; Docker doesn’t allow mounts without these options. We eventually managed to mount the remote folder, but exposing this mounted folder as a Docker volume is not possible. This is a Docker shortcoming; see the Docker issue here.
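As a rough sketch of this approach (not our exact commands; the image name `gluster-client`, the host `gluster-node1` and the volume `gv0` are all invented for illustration):

```shell
# Hypothetical command sketch; image, host and volume names are invented.
# Full privileges:
docker run -d --privileged --name gfs-mounter gluster-client \
  sh -c 'mount -t glusterfs gluster-node1:/gv0 /mnt/gluster && sleep infinity'

# Or grant only the capability that mount(2) requires:
docker run -d --cap-add SYS_ADMIN --name gfs-mounter gluster-client \
  sh -c 'mount -t glusterfs gluster-node1:/gv0 /mnt/gluster && sleep infinity'

# The mount succeeds inside the container, but the mounted /mnt/gluster
# cannot be exposed to other containers as a volume, so the data
# container pattern breaks down here.
```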

Our second, not-so-preferred, method was mounting the folder in CoreOS itself and then using it in a container. The problem here is that CoreOS does not support the GlusterFS client, but it does have NFS support. So to make this work we exposed GlusterFS through NFS; the steps to enable it can be found here [Using_NFS_with_Gluster]. After enabling NFS on GlusterFS, we mounted the exposed folder in CoreOS and used it in a container, which worked fine.
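A sketch of that workaround, with the same invented names as before (GlusterFS's built-in NFS server speaks NFSv3 over TCP, hence the mount options; treat the exact options as assumptions for your setup):

```shell
# Hypothetical command sketch; host and volume names are invented.
# Mount the GlusterFS volume over NFS on the CoreOS host:
sudo mkdir -p /mnt/gluster
sudo mount -t nfs -o vers=3,mountproto=tcp gluster-node1:/gv0 /mnt/gluster

# Then bind-mount the host directory into a container as a plain volume:
docker run --rm -v /mnt/gluster:/data busybox ls /data
```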

Mounting GlusterFS through NFS was not what we wanted, but luckily Docker released their experimental volume plugin support. And our luck did not end there, because it turned out David Calavera had already created a volume plugin for GlusterFS. To test this out we used the experimental Docker version 1.8 and ran the plugin with the necessary settings. This all worked fine, but this is where our luck ran out. Using the experimental Docker daemon in combination with this plugin, we could see in debug mode that the plugin connects to GlusterFS and reports that it is mounting the folder. Unfortunately, it then receives an error, apparently from the server, and unmounts the folder.
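For reference, usage looked roughly like this; the `-servers` flag is from the plugin's README as we recall it, and all host and volume names are invented, so treat the exact flags as assumptions:

```shell
# Hypothetical usage sketch of the experimental (Docker 1.8) volume
# driver support with the docker-volume-glusterfs plugin.
# Start the plugin daemon, pointing it at the GlusterFS nodes:
docker-volume-glusterfs -servers gluster-node1:gluster-node2 &

# Ask the experimental Docker daemon to use that driver for a volume:
docker run --rm --volume-driver glusterfs -v gv0:/data busybox touch /data/hello
```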

The volume plugin above is basically a wrapper around the GlusterFS client. We also found a Go API for GlusterFS. This could be used to create a pure Go implementation of the volume plugin, but unfortunately we ran out of time to actually try this.


Using a distributed file system like GlusterFS or Ceph sounds very promising, especially combined with the Docker volume plugin, which hides the implementation and decouples the container from the host's storage.


Between the innovation day and this blog post, the world evolved: Docker 1.8 came out, and with it a Ceph Docker volume plugin.