
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Updated Material Design Guidelines and Resources

Google Code Blog - Fri, 10/17/2014 - 18:00

When we first published the Material Design guidelines back in June, we set out to create a living document that would grow with feedback from the community. In that time, we’ve seen some great work from the community in augmenting the guidelines with things like Sketch templates, icon downloads and screens upon screens of inspiring visual and motion design ideas. We’ve also received a lot of feedback around what resources we can provide to make using Material Design in your projects easier.

So today, in addition to publishing the final Android 5.0 SDK for developers, we’re publishing our first significant update to the Material Design guidelines, with additional resources including:

  • Updated sticker sheets in PSD, AI and Sketch formats
  • A new icon library ZIP download
  • Updated color swatch downloads
  • Updated whiteframe downloads, including better baseline grid text alignment and other miscellaneous fixes

The sticker sheets have been updated to reflect the latest refinements to the components and integrated into a single, comprehensive sticker sheet that should be easier to use. An aggregated sticker sheet is also newly available for Adobe Photoshop and Sketch—two hugely popular requests. In the sticker sheet, you can find various elements that make up layouts, including light and dark symbols for the status bar, app bar, bottom toolbar, cards, dropdowns, search fields, dividers, navigation drawers, dialogs, the floating action button, and other components. The sticker sheets now also include explanatory text for elements.

Note that the images in the Components section of the guidelines haven't yet been updated (that’s coming soon!), so you can consider the sticker sheets to be the most up-to-date version of the components.

Also, the new system icons sticker sheet contains icons commonly used in Android across different apps, such as icons used for media playback, communication, content editing, connectivity, and so on.

Stay tuned for more enhancements as we incorporate more of your feedback—remember to share your suggestions on Google+! We’re excited to continue evolving this living document with you!

For more on Material Design, check out these videos and the new getting started guide for Android developers.

Posted by Roman Nurik, Design Advocate
Categories: Programming

How to deploy a Docker application into production on Amazon AWS

Xebia Blog - Fri, 10/17/2014 - 17:00

Docker reached production status a few months ago. But having the container technology alone is not enough. You need a complete platform infrastructure before you can deploy your Docker application in production. Amazon AWS offers exactly that: a production-quality platform that provides capacity provisioning, load balancing, scaling, and application health monitoring for Docker applications.

In this blog, you will learn how to deploy a Docker application to production in five easy steps.

For demonstration purposes, you are going to use the Node.js application that was built for CloudFoundry and used to demonstrate Deis in a previous post: a truly useful app whose sources are available on GitHub.

1. Create a Dockerfile

The first thing you need to do is create a Dockerfile to build an image. This is quite simple: you install the Node.js and npm packages, copy the source files and install the JavaScript modules.

FROM    ubuntu:latest
# Install nodejs npm
RUN apt-get update
RUN apt-get install -y nodejs npm
# add application sources
COPY . /app
RUN cd /app; npm install
# Expose the default port
EXPOSE  5000
# Start command
CMD ["nodejs", "/app/web.js"]
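Since COPY . /app pulls the entire build context into the image, it may be worth adding a .dockerignore next to the Dockerfile. A minimal sketch (contents assumed, adjust to your project):

```
# .dockerignore: keep these out of the image build context
.git
node_modules
*.log
```

Excluding node_modules matters here because npm install runs inside the image anyway.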

2. Test your Docker application

Now you can create the Docker image and test it.

$ docker build -t sample-nodejs-cf .
$ docker run -d -p 5000:5000 sample-nodejs-cf

Point your browser at http://localhost:5000, click the 'start' button and Presto!

3. Zip the sources

Now that you know the instance works, you zip the source files. The image will be built on Amazon AWS based on your Dockerfile.

$ zip -r /tmp/ .

4. Deploy Docker application to Amazon AWS

Now you install and configure the Amazon AWS command line interface (CLI) and deploy the Docker source files to Elastic Beanstalk. You can do this all manually, but here you use the script that I created.

$ \
         sample-nodejs-cf \
         /tmp/ \

After about 8-10 minutes your application is running. The output should look like this:

INFO: creating application sample-nodejs-cf
INFO: Creating environment demo-env for sample-nodejs-cf
INFO: Uploading for sample-nodejs-cf, version 1412948762.
upload: ./ to s3://elasticbeanstalk-us-east-1-233211978703/
INFO: Creating version 1412948762 of application sample-nodejs-cf
INFO: demo-env in status Launching, waiting to get to Ready..
INFO: demo-env in status Launching, waiting to get to Ready..
INFO: Updating environment demo-env with version 1412948762 of sample-nodejs-cf
INFO: demo-env in status Updating, waiting to get to Ready..
INFO: demo-env in status Updating, waiting to get to Ready..
INFO: Version 1412948762 of sample-nodejs-cf deployed in environment
INFO: current status is Ready, goto
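For reference, the elided script essentially wraps a handful of AWS CLI calls. The following is a hedged sketch, not the actual script: application, environment and bucket names are taken from the log above, the solution stack name varies by region and date, and DRY_RUN=1 only prints the commands.

```shell
#!/bin/sh
# Hypothetical equivalent of the deploy script, using the AWS CLI.
# With DRY_RUN set, commands are printed instead of executed.
DRY_RUN=1
run() { if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi; }

APP=sample-nodejs-cf
ENV=demo-env
VERSION=$(date +%s)                  # version label, as in the log above
BUNDLE=/tmp/$APP.zip
BUCKET=elasticbeanstalk-us-east-1-233211978703

run aws elasticbeanstalk create-application --application-name "$APP"
run aws s3 cp "$BUNDLE" "s3://$BUCKET/$APP-$VERSION.zip"
run aws elasticbeanstalk create-application-version \
    --application-name "$APP" --version-label "$VERSION" \
    --source-bundle "S3Bucket=$BUCKET,S3Key=$APP-$VERSION.zip"
# Pick a Docker solution stack; list the exact names with:
#   aws elasticbeanstalk list-available-solution-stacks
run aws elasticbeanstalk create-environment \
    --application-name "$APP" --environment-name "$ENV" \
    --version-label "$VERSION" \
    --solution-stack-name "64bit Amazon Linux running Docker"
```

Unset DRY_RUN (and have AWS credentials configured) to run it for real.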

5. Test your Docker application on the internet!

Your application is now available on the Internet. Browse to the designated URL and click on start. When you increase the number of instances at Amazon, they will appear in the application. When you deploy a new version of the application, you can observe how new versions appear without any errors in the client application.

For more information, see Amazon Elastic Beanstalk adds Docker support and Dockerizing a Node.js Web App.

Stuff The Internet Says On Scalability For October 17th, 2014

Hey, it's HighScalability time:

What could this be? Swarms of drones painting 3D light sculptures against the night sky!
  • Quotable Quotes:
    • Visnja Zeljeznjak: Steve Jobs' product pricing formula: cost of materials x 3 + 33%
    • Benedict Evans: We now have over 2bn iOS and Android devices on earth, and this will grow in the next few years to well over 3bn.
    • @ClearStoryData: It's true! Avg beer drinker attracts 4.4% more Mosquitos than water drinker #Strataconf
    • Leslie Lamport: The core idea of the problem of that notion of causality came about because of my familiarity with special relativity...where whether one event could causally affect another depended on whether or not information from one could physically reach the other.
    • @laurelatoreilly: Fascinating session about cargo ships going dark to shift market prices #IoT #strataconf "your decisions are only as good as your data"
    • @muratdemirbas: Distributed/decentralized coordination is expensive & hard to scale. Centralized coordination is cheap & scales easily using hierarchies.
    • @froidianslip: “Kafka is awesome. We heard it cures cancer.” -- @gwenshap #Strataconf
    • @timoreilly: RT @grapealope: The self-driving car has 6000 sensors, and takes readings at 4Hz. That's a lot of data. @MCSrivas #strataconf #MapR
    • @froidianslip: Love the paraphrase borrowed from Ray Bradbury, "Any sufficiently complex configuration is indistinguishable from code." #Strataconf
    • @matei_zaharia: Spark shatters MapReduce's 100 TB and 1 PB sort records... with 10x fewer nodes
    • @msallstr: “Synchronous calls in this environment are the crystal meth of programming” @mjpt777 on the new reactive manifesto
    • @postwait: “If you put them under enough stress, perfectly rational people will panic and start believing in science” #priceless
    • Ilya Grigorik: It's great to see access from mobile is around 30% faster compared to last year.
    • @ryandotsmith: Recently migrated an async system to SQS. Much simple. Tiny latency. Here is the code (maybe a gem?)

  • People just don't appreciate the power of messy. The problematic culture of "Worse is Better". There's an implied notion here that people can't recognize better when they see it. Better is not a platonic ideal. It can't be proved by argument. Better, like evolution, is something that works itself out in practice. Like evolution, Worse is Better is an algorithm for stepping through a possibility space by jumping from one working phenotype to the next more adapted working phenotype. And for many, that's better. Not Ideal, but Better.

  • The Times They Are a-Changin'. Docker and Microsoft partner to drive adoption of distributed applications. What's the goal? nickstinemates: Package your Windows app in a docker container, use same tooling you would otherwise use to deploy to a docker engine running on a Windows host. Package your Linux app in a docker container, use same tooling you would otherwise use to deploy to a docker engine running on a Linux host.

  • Leandro Pereira writes a fine autobiography in Life of a HTTP request, as seen by my toy web server. All the stages of life are there. Socket creation. Acceptance. Scheduling. Coroutines. Reading requests. Parsing requests. All the way to the reply and the death of the connection. A lot to learn if you want to look at the simplified internals of a service.

  • Wonderful talk: Call Me Maybe: Carly Rae Jepsen and the Perils of Network Partitions. Kyle Kingsbury takes a detailed look at different partition problems in different databases. There are split brains. Masters dying. Lost data. General network mayhem. It's great. The lesson: what's written down in the marketing documentation is not always what you get. Test your application and see what really happens. The world is not simple. A dumb solution where you understand the failure modes can be a good choice.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Then When Given

Xebia Blog - Fri, 10/17/2014 - 14:50

People who practice ATDD all know how frustrating it can be to write automated examples. Especially when you get stuck overthinking the preconditions of examples.

This post describes an alternative approach to writing acceptance tests: write them backwards!

Imagine that you are building the very first online phone book. We need to define an acceptance test for viewing the location of a florist. Using the Given-When-Then formula you would probably describe the behaviour like this:

Given I am on the online phone book homepage
When I type “Florist” in the business type field
And I click …

Most of the time you will be discussing and describing details that have nothing to do with viewing the location of a florist. To avoid this, write down the Then clause of the formula first.
Make sure the Then clause contains an observable result.

Then I see the location “Floriststreet 123”

Next, we will try to answer the following question: What caused the Then clause?
Make sure the When clause contains an actor and an action.

When I click “View map” of the search result
Then I see the location “Floriststreet 123”

The last thing we will need to do is answer the following question: Why can I perform that action?
Make sure the Given clause contains a simple precondition.

Given I see a search result for florist “Floral Designs”
When I click “View map” of the search result
Then I see the location “Floriststreet 123”

You might have noticed that I left out certain parts where the user goes to the homepage and selects UI objects in the search area. They were not worth mentioning in the Given-When-Then formula. Too much detail makes us lose focus of what we really want to check. The essence of this acceptance test is clicking on the link "View map" and exposing the location to the user.

Try it a couple of times and let me know how it went.

Agile Risk Management: Ad-hoc Processes

The CFO here wants to move away from vague generic invoices because he feels (rightly so) that the agency interprets the relationship as having carte blanche...


There are many factors that cause variability in the performance of projects and releases, including complexity, the size of the work, people and process discipline. Consistency and predictability are difficult when the process is being made up on the spot. Agile has come to reflect (at least in practice) a wide range of approaches, from simply delivering faster to more structured frameworks such as Scrum, Extreme Programming and the Scaled Agile Framework. Lack of at least some structure nearly always increases the variability in delivery and therefore the risk to the organization.

I recently received the following note from a reader (and listener to the podcast) who will remain nameless (all names redacted at the request of the reader).

“All of the development is outsourced to a company with many off-shore and a few on-site resources.

The development agency has, somehow, sold the business on the idea that because they are “Agile”, their ability to dynamically/quickly react and implement requires a lack of formal “accounting.”  The CFO here wants to move away from vague generic invoices because he feels (rightly so) that the agency interprets the relationship as having carte blanche to work on anything and everything ever scratched out on a cocktail napkin without proper project charters, buy-in, and SOW.”

This observation reflects the risk to the organization of an ill-defined process, in terms of the value that gets delivered to the business, financial risk and the risk to customer satisfaction. Repeatability and consistency of process are not dirty words.

Scrum and other Agile frameworks are light-weight empirical models. At their most basic level they can be summarized as:

  1. Agree upon what you are going to do (build a backlog),
  2. Plan work directly ahead (sprint/iteration planning),
  3. Build a little bit while interacting with the customer (short time box development),
  4. Review what has been done with the stakeholders (demonstration),
  5. Make corrections to the process (retrospective),
  6. Repeat as needed until the goals of the work are met.

Deming would have recognized the embedded plan-do-check-act cycle. There is nothing ad-hoc about the framework even though it is not overly prescriptive.

I recently toured a research facility for a major snack manufacturer. The people in the labs were busy dreaming up the next big snack food. Personnel were involved in both “pure” and applied research, both highly creative endeavors. When I asked about the process they were using, what was described was something similar to Scrum: creativity being pursued within a framework to reduce risk.

Ad-hoc software development and maintenance was never in style. In today’s business environment, where software is integral to the delivery of value, just winging the process of development increases the risk of an already somewhat risky proposition.

Categories: Process Management

Intellectual Honesty

Software Requirements Blog - - Thu, 10/16/2014 - 17:00
During a recent discussion in the office, the term “intellectual honesty” was bandied about. At Seilevel, intellectual honesty is part of our stated core values, but it’s a term that’s easily misunderstood and misused. Feeling that I needed to understand better what this term really means, I hit the search engines hard. I also, as I […]
Categories: Requirements

Quote of the Month October 2014

From the Editor of Methods & Tools - Thu, 10/16/2014 - 11:48
Minimalism also applies in software. The less code you write, the less you have to maintain. The less you maintain, the less you have to understand. The less you have to understand, the less chance of making a mistake. Less code leads to fewer bugs. Source: “Quality Code: Software Testing Principles, Practices, and Patterns”, Stephen Vance, Addison-Wesley

Agile Risk Management: Dealing With Size

Overall project size influences variability.


Risk is a reflection of possibilities, stuff that could happen if the stars align. Therefore projects are highly influenced by variability. There are many factors that influence variability, including complexity, process discipline, people and the size of the work. The impact of size can be felt in two separate but equally important manners. The first is the size of the overall project and the second is the size of any particular unit of work.

Overall project size influences variability by increasing the sheer number of moving parts that have to relate to each other. As an example, the assembly of an automobile is a large endeavor and is the culmination of a number of relatively large subprojects. Any significant variance in how the subprojects are assembled along the path of building the automobile will cause problems in the final deliverable. Large software projects require extra coordination, testing and integration to ensure that all of the pieces fit together, deliver the functionality customers and stakeholders expect, and act properly. All of these extra steps increase the possibility of variance.

Similarly, large pieces of work (user stories in Agile) cause the same problems noted for large projects, but at the team level. For example, when any piece of work enters a sprint, the first step in the process of transforming that story into value is planning. Large pieces of work are more difficult to plan, if for no other reason than that they take longer to break down into tasks, increasing the likelihood that something will not be considered, generating a “gotcha” later in the sprint.

Whether at a project or sprint level, smaller is generally simpler, and simpler generates less variability. There are a number of techniques for managing size.

  1. Short, complete sprints or iterations. The impact of time boxing on reducing project size has been discussed and understood in the mainstream since the 1990s (see Hammer and Champy, Reengineering the Corporation). Breaking a project into smaller parts reduces the overhead of coordination and provides faster feedback and more synchronization points, which reduces the variability. The duration of a sprint acts as a constraint on the amount of work that can be taken into a sprint and reasonably be completed.
  2. Team sizes of 5 to 9 are large enough to tackle real work while maintaining the stable team relationships needed to coordinate the development and integration of functionality that can potentially be shipped at the end of each sprint. Team size constrains the amount of work that can enter a sprint and be completed.
  3. Splitting user stories is a mechanism to reduce the size of a specific piece of work so that it can be completed faster with fewer potential complications that cause variability. The process of splitting user stories breaks stories down into smaller independent pieces (INVEST) that can be developed and tested faster. Smaller stories are less likely to block the entire team if anything goes wrong. This reduces variability and generates feedback more quickly, thereby reducing the potential for large batches of rework if the solution does not meet stakeholder needs. Small stories increase flow through the development process, reducing variability.

I learned many years ago that supersizing fries at the local fast food establishment was a bad idea in that it increased the variability in my caloric intake (and waistline). Similarly, large projects are subject to increased variability. There are just too many moving parts, which leads to variability and risk. Large user stories have exactly the same issues as large projects, just on a smaller scale. Agile techniques of short sprints and small team size provide constraints so that teams can control the size of work they are considering at any point in time. Teams need to take the additional step of breaking down stories into smaller pieces to continue to minimize the potential impact of variability.

Categories: Process Management

The Results of My First OKRs (Running)

NOOP.NL - Jurgen Appelo - Wed, 10/15/2014 - 21:34

A popular topic in the new one-day Management 3.0 workshop is the OKRs system for performance measurement. (See Google’s YouTube video here.) Instead of explaining what OKRs are, I will just share with you the result of my first iteration. If you read this, you will get the general idea of how the OKRs system works.

The post The Results of My First OKRs (Running) appeared first on NOOP.NL.

Categories: Project Management

Big data and analytics – Interesting tidbits for business analysts and product managers

Software Requirements Blog - - Wed, 10/15/2014 - 20:12
I’m at Strata+Hadoop World today as a first timer as part of the data driven business day tutorial. I got to present in the middle of it on requirements analytics. But this whole day is awesome, like a crash course in big data and the kinds of results it can get you. The schedule is here […]
Categories: Requirements

Testing CDN and geolocation with WebPageTest

Agile Testing - Grig Gheorghiu - Wed, 10/15/2014 - 19:31
Assume you want to migrate to a new CDN provider. Eventually you'll have to point your site's hostname as a CNAME to a domain name handled by the CDN provider, let's call it example.cdnprovider.com. To test this setup before you put it in production, the usual way is to get an IP address corresponding to example.cdnprovider.com, then associate your hostname with that IP address in your local /etc/hosts file.
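That traditional local test can be scripted. Hostnames here are illustrative, and the block only prints the line to add; it does not touch /etc/hosts:

```shell
#!/bin/sh
# Hostnames are illustrative placeholders for your site and CDN names.
CDN_HOST=example.cdnprovider.com
SITE_HOST=www.example.com

# Resolve the CDN-provided name to an IP, if dig is available
CDN_IP=""
command -v dig >/dev/null 2>&1 && CDN_IP=$(dig +short "$CDN_HOST" | head -n1)

# Print the /etc/hosts entry to append (falls back to a placeholder IP)
echo "${CDN_IP:-<cdn-ip>} $SITE_HOST    # append to /etc/hosts"
```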

This works well for testing most of the functionality of your web site, but it doesn't work when you want to test geolocation-specific features such as displaying the currency based on the user's country of origin. For this, you can use a nifty feature from the amazing free service WebPageTest.

On the main page of WebPageTest, you can specify the test location from the dropdown. It contains a generous list of locations across the globe. To fake your DNS setting and point your hostname at the CDN, you can specify something like this in the Script tab:

setDNSName example.cdnprovider.com
navigate

This will effectively associate the page you want to test with the CDN provider-specified URL, so you will hit the CDN first from the location you chose.

Building Better Business Cases for Digital Initiatives

It’s hard to drive digital initiatives and business transformation if you can’t create the business case.  Stakeholders want to know what their investment is supposed to get them.

One of the simplest ways to think about business cases is to think in terms of stakeholders, benefits, KPIs, costs, and risks over time frames.

While that’s the basic frame, there’s a bit of art and science when it comes to building effective business cases, especially when it involves transformational change.

Lucky for us, in the book, Leading Digital: Turning Technology into Business Transformation, George Westerman, Didier Bonnet, and Andrew McAfee, share some of their lessons learned in building better business cases for digital initiatives.

What I like about their guidance is that it matches my experience.

Link Operational Changes to Tangible Business Benefits

The more you can link your roadmap to benefits that people care about and can measure, the better off you are.

Via Leading Digital:

“You need initiative-based business cases that establish a clear link from the operational changes in your roadmap to tangible business benefits.  You will need to involve employees on the front lines to help validate how operational changes will contribute to strategic goals.”

Work Out the Costs, the Benefits, and the Timing of Return

On a good note, the same building blocks that apply to any business case, apply to digital initiatives.

Via Leading Digital:

“The basic building blocks of a business case for digital initiatives are the same as for any business case.  Your team needs to work out the costs, the benefits, and the timing of the return.  But digital transformation is still uncharted territory.  The cost side of the equation is easier, but benefits can be difficult to quantify, even when, intuitively, they seem crystal clear.”

Start with What You Know

Building a business case is an art and a science.   To avoid getting lost in analysis paralysis, start with what you know.

Via Leading Digital:

“Building a business case for digital initiatives is both an art and a science.  With so many unknowns, you'll need to take a pragmatic approach to investments in light of what you know and what you don't know.

Start with what you know, where you have most of the information you need to support a robust cost-benefit analysis.  A few lessons learned from our Digital Masters can be useful.”

Don’t Build Your Business Case as a Series of Technology Investments

If you only consider the technology part of the story, you’ll miss the bigger picture.  Digital initiatives involve organizational change management as well as process change.  A digital initiative is really a change in terms of people, process, and technology, and adoption is a big deal.

Via Leading Digital:

“Don't build your business case as a series of technology investments.  You will miss a big part of the costs.  Cost the adoption efforts--digital skill building, organizational change, communication, and training--as well as the deployment of the technology.  You won't realize the full benefits--or possibly any benefits--without them.”

Frame the Benefits in Terms of Business Outcomes

If you don’t work backwards from the end-in-mind, you might not get there.  You need clarity on the business outcomes so that you can chunk up the right path to get there, while flowing continuous value along the way.

Via Leading Digital:

“Frame the benefits in terms of the business outcomes you want to reach.  These outcomes can be the achievement of goals or the fixing of problems--that is, outcomes that drive more customer value, higher revenue, or a better cost position.  Then define the tangible business impact and work backward into the levers and metrics that will indicate what 'good' looks like.  For instance, if one of your investments is supposed to increase digital customer engagement, your outcome might be increasing engagement-to-sales conversion.  Then work back into the main metrics that drive this outcome, for example, visits, likes, inquiries, ratings, reorders, and the like.

When the business impact of an initiative is not totally clear, look at companies that have already made similar investments.  Your technology vendors can also be a rich, if somewhat biased, source of business cases for some digital investments.”

Run Small Pilots, Evaluate Results, and Refine Your Approach

To reduce risk, start with pilots to live and learn.   This will help you make informed decisions as part of your business case development.

Via Leading Digital:

“But, whatever you do, some digital investment cases will be trickier to justify, be they investments in emerging technologies or cutting-edge practices.  For example, what is the value of gamifying your brand's social communities?  For these types of investment opportunities, experiment with a test-and-learn approach.  State your measures of success, run small pilots, evaluate results, and refine your approach.  Several useful tools and methods exist, such as hypothesis-driven experiments with control groups, or A/B testing.  The successes (and failures) of small experiments can then become the benefits rationale to invest at greater scale.  Whatever the method, use an analytical approach; the quality of your estimated return depends on it.

Translating your vision into strategic goals and building an actionable roadmap is the first step in focusing your investment.  It will galvanize the organization into action.  But if you needed to be an architect to develop your vision, you need to be a plumber to develop your roadmap.  Be prepared to get your hands dirty.”

While practice makes perfect, business cases aren’t about perfect.  Their job is to help you get the right investment from stakeholders so you can work on the right things, at the right time, to make the right impact.

You Might Also Like

Cloud Changes the Game from Deployment to Adoption

How Digital is Changing Physical Experiences

McKinsey on Unleashing the Value of Big Data Analytics

Categories: Architecture, Programming

Using a SSD Cache in Front of EBS Boosted Throughput by 50%, for Free

Using EBS has lots of advantages--reliability, snapshotting, resizing--but overcoming the performance problems by using Provisioned IOPS is expensive. 

Swrve, an integrated marketing and A/B testing and optimization platform for mobile apps, did something clever. They are using the c3.xlarge EC2 instances, that have two 40GB SSD devices per instance, as a cache.

They found through testing that RAID-0 striping (a 4-way stripe) combined with enhanceio effectively increased throughput by over 50%, for free, with no filesystem corruption problems.

How is it free? "We were planning on upgrading to the C3 class of instance anyway, and sticking with EBS as the backing store. Once you’re using an instance which has SSD ephemeral storage, there are no additional fees to use that hardware."
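The setup can be sketched roughly as below. Device names, the 2-device stripe (the post used a 4-way stripe), and the write-through cache mode are all assumptions, not Swrve's exact configuration; the commands are printed with echo so the block reads as a transcript rather than modifying any devices:

```shell
#!/bin/sh
# Sketch only: device names and cache mode are assumptions.
SSD1=/dev/xvdb; SSD2=/dev/xvdc   # the two 40GB ephemeral SSDs on a c3.xlarge
EBS=/dev/xvdf                    # the EBS volume being accelerated

# 1. Stripe the ephemeral SSDs into one fast device with RAID-0
echo mdadm --create /dev/md0 --level=0 --raid-devices=2 "$SSD1" "$SSD2"

# 2. Put the SSD stripe in front of the EBS volume as an enhanceio cache
echo eio_cli create -d "$EBS" -s /dev/md0 -m wt -c ebs_cache
```

Remove the echoes to run it for real; since the SSDs are ephemeral, the cache has to be rebuilt after a stop/start.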

For great analysis, lots of juicy details, graphs, and configuration commands, please take a look at How we increased our EC2 event throughput by 50%, for free

Categories: Architecture

Podcast with Cesar Abeid Posted

Cesar Abeid interviewed me, Project Management for You with Johanna Rothman. We talked about my tools for project management, whether you are managing a project for yourself or managing projects for others.

We talked about how to use timeboxes in the large and small, project charters, influence, servant leadership, a whole ton of topics.

I hope you listen. Also, check out Cesar’s kickstarter campaign, Project Management for You.

Categories: Project Management

Connecting the Dots Between Technical Performance and Earned Value Management

Herding Cats - Glen Alleman - Wed, 10/15/2014 - 15:36

We gave a recent College of Performance Management webinar on using technical progress to inform Earned Value. Here are the annotated charts.

Categories: Project Management

Docker and Microsoft: Integrating Docker with Windows Server and Microsoft Azure

ScottGu's Blog - Scott Guthrie - Wed, 10/15/2014 - 14:30

I’m excited to announce today that Microsoft is partnering with Docker, Inc to enable great container-based development experiences on Linux, Windows Server and Microsoft Azure.

Docker is an open platform that enables developers and administrators to build, ship, and run distributed applications. Consisting of Docker Engine, a lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows, Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments.

Earlier this year, Microsoft released support for Docker containers with Linux on Azure.  This support integrates with the Azure VM agent extensibility model and Azure command-line tools, and makes it easy to deploy the latest and greatest Docker Engine in Azure VMs and then deploy Docker based images within them.

Docker Support for Windows Server + Docker Hub integration with Microsoft Azure

Today, I’m excited to announce that we are working with Docker, Inc to extend our support for Docker much further.  Specifically, I’m excited to announce that:

1) Microsoft and Docker are integrating the open-source Docker Engine with the next release of Windows Server.  This release of Windows Server will include new container isolation technology, and support running both .NET and other application types (Node.js, Java, C++, etc) within these containers.  Developers and organizations will be able to use Docker to create distributed, container-based applications for Windows Server that leverage the Docker ecosystem of users, applications and tools.  It will also enable a new class of distributed applications built with Docker that use Linux and Windows Server images together.


2) We will support the Docker client natively on Windows.  Developers and administrators running Windows will be able to use the same standard Docker client and interface to deploy and manage Docker based solutions with both Linux and Windows Server environments.



3) Docker for Windows Server container images will be available in the Docker Hub alongside the Docker for Linux container images available today.  This will enable developers and administrators to easily share and automate application workflows using both Windows Server and Linux Docker images.

4) We will integrate Docker Hub with the Microsoft Azure Gallery and Azure Management Portal.  This will make it trivially easy to deploy and run both Linux and Windows Server based Docker images in Microsoft Azure.

5) Microsoft is contributing code to Docker’s Open Orchestration APIs.  These APIs provide a portable way to create multi-container Docker applications that can be deployed into any datacenter or cloud provider environment. This support will allow a developer or administrator using the Docker command line client to launch either Linux or Windows Server based Docker applications directly into Microsoft Azure from his or her development machine.

Exciting Opportunities Ahead

At Microsoft we continue to be inspired by technologies that can dramatically improve how quickly teams can bring new solutions to market. The partnership we are announcing with Docker today will enable developers and administrators to use the best container tools available for both Linux and Windows Server based applications, and to run all of these solutions within Microsoft Azure.  We are looking forward to seeing the great applications you build with them.

You can learn more about today’s announcements here and here.

Hope this helps,

Scott

Categories: Architecture, Programming

Agile Risk Management: Dealing With Complexity

Does the raft have a peak load?

Software development is inherently complex and therefore risky. Historically, many techniques have been leveraged to identify and manage risk. As noted in Agile and Risk Management, much of the risk in projects can be put at the feet of variability, and complexity is one of the factors that drives that variability. Spikes, prioritization and fast feedback are important techniques for recognizing and reducing the impact of complexity.

  1. Spikes provide a tool to develop an understanding of an unknown within a time box. Spikes are a simple tool used to answer a question, gather information, perform a specific piece of basic research, address project risks or break a large story down. Spikes generate knowledge, which both reduces the complexity intrinsic in unknowns and provides information that can be used to simplify the problem being studied in the spike.
  2. Prioritization is a tool used to order the project or release backlog. Prioritization is also a powerful tool for reducing the impact of variability. It is generally easier to adapt to a negative surprise earlier in the project lifecycle. Teams can allocate part of their capacity to the most difficult stories early in the project, using complexity as a criterion for prioritization.
  3. Fast feedback is the single most effective means of reducing complexity (short of avoidance). Core to the DNA of Agile and lean frameworks is the “inspect and adapt” cycle. Deming’s Plan-Do-Check-Act cycle is one representation of “inspect and adapt,” as are retrospectives, pair-programming and peer reviews. Short iterative cycles provide a platform to effectively apply the model of “inspect and adapt” to reduce complexity based on feedback. When teams experience complexity they have a wide range of tools to share and seek feedback, ranging from daily stand-up meetings to demonstrations and retrospectives.
    Two notes on feedback:

    1. While powerful, feedback only works if it is heard.
    2. The more complex (or important) a piece of work is, the shorter the feedback cycle should be.

Spikes, prioritization and feedback are common Agile and lean techniques. The fact that they are common has led some Agile practitioners to feel that frameworks like Scrum have negated the need to deal specifically with risks at the team level. These are powerful tools for identifying and reducing complexity and the variability complexity generates; however, they need to be combined with other tools and techniques to manage the risk that is part of all projects and releases.
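The idea in point 2 above – using complexity as an explicit prioritization criterion – can be sketched in a few lines. The story names and the scoring rule below are illustrative assumptions, not part of any particular framework:

```python
# Minimal sketch: order a backlog so the most complex (riskiest) stories
# surface first, with business value breaking ties. Names and scores are
# made up for illustration.
from dataclasses import dataclass

@dataclass
class Story:
    name: str
    value: int       # relative business value (1-10)
    complexity: int  # relative complexity / uncertainty (1-10)

def prioritize(backlog):
    # High complexity first, so negative surprises arrive early in the
    # lifecycle, when adapting to them is cheapest; value breaks ties.
    return sorted(backlog, key=lambda s: (s.complexity, s.value), reverse=True)

backlog = [
    Story("Reporting UI", value=5, complexity=2),
    Story("Payment integration", value=9, complexity=8),
    Story("CSV export", value=3, complexity=1),
]

for story in prioritize(backlog):
    print(story.name)
```

A real backlog would blend complexity with value, dependencies and cost of delay; the point here is only that complexity is an explicit input to the ordering.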

Categories: Process Management

24hrs in F#

Phil Trelford's Array - Tue, 10/14/2014 - 18:55

The easiest place to see what’s going on in the F# community is to follow the #fsharp hash tag on Twitter. The last 24hrs have been as busy as ever, to the point where it can be hard to keep up these days.

Here are some of the highlights:


Build Stuff conference to feature 8 F# speakers:

@c4fsharp + @silverSpoon & the @theburningmonk too :)

— headintheclouds (@ptrelford) October 13, 2014

and workshops including:

FP Days programme is now live, and features keynotes from Don Syme & Christophe Grand, and presentations from:

.@dsyme and @cgrand are Keynoting #fpdays this year!! #fsharp #clojure

— FP Days (@FPDays) October 14, 2014

New Madrid F# meetup group announced:

#fsharp I just created a Madrid F# Meetup Group. Join and let's organize our first meetup! Pls RT

— Alfonso Garcia-Caro (@alfonsogcnunez) October 14, 2014

F# MVP Rodrigo Vidal announces DNA Rio de Janeiro:

Encontro do #DNA Rio de Janeiro dia 21/10/2014 na @caelum #csharp #fsharp @netarchitects

— Rodrigo Vidal (@rodrigovidal) October 13, 2014

Try out the new features in FunScript at MF#K Copenhagen:

MF#K - Let's make some fun stuff with F#unScript #MFK #fsharp

— Ramón Soto Mathiesen (@genTauro42) October 14, 2014

Mathias Brandewinder will be presenting some of his work on F# & Azure in the Bay Area

Pretty excited to show some F# & #Azure stuff I have been working on at BayAzure tomorrow Tues Oct 14: #fsharp

— Mathias Brandewinder (@brandewinder) October 13, 2014

Let’s get hands on session in Portland announced:

#Portland #fsharp meetup next week: Classics Mashup hands-on. #ComeJoinUs!

— James D'Angelo (@jjvdangelo) October 14, 2014

Riccardo Terrell will be presenting Why FP? in Washington DC

#fsharp @dcfsharp I will present at the DC 10.14.2014 6:30 pm @DCFsharp Riccardo Terrell

— DC F# (@DCFSharp) October 13, 2014

Sessions featuring FsCheck, Paket & SpecFlow scheduled in Vinnitsa (Ukraine)

We’ll be talking about #FSharp with @skalinets RT @dou_calendar: Vinnitsa Ciklum .NET Saturday, October 18, Vinnitsa

— Akim Boyko (@AkimBoyko) October 14, 2014


Get Doctor Who stats with F# Data HTML Type Provider:

Get Doctor Who stats using #fsharp

— FSharp.Data (@FSharpData) October 13, 2014

Major update to FSharp.Data.SqlClient:

FSharp.Data.SqlClient major update: SqlEnumProvider merged in. #fsharp #typeprovider

— Dmitry Morozov (@mitekm) October 13, 2014

ProjectScaffold now uses Paket:

Excited to announce ProjectScaffold now uses @PaketManager by default! #dotNET #fsharp

— Paulmichael Blasucci (@pblasucci) October 14, 2014

Microsoft Research presentation on new DBpedia Type Provider:

Ultimate powerful data access with ``F# Type Provider Combinators`` video demo by Andrew Stevenson #fsharp #DBpedia

— Functional Software (@Functional_S) October 13, 2014


More F#, Xamarin.Forms and MVVM by Brad Pillow

More #fsharp, #xamarinForms and MVVM, blog:, code:

— Brad Pillow (@BradPillow) October 14, 2014

Cross-platform MSBuild API by Robin Neatherway

New blog post: Cross-platform MSBuild API usage in #fsharp at

— Robin Neatherway (@rneatherway) October 14, 2014

Hacking the Dream Cheeky Thunder Missile Launcher by Jamie Dixon:

Want more?

Check out Sergey Tihon’s F# Weekly!

Categories: Programming

Project Risk Management, PMBOK, DoD PMBOK and Edmund Conrow’s Book

Herding Cats - Glen Alleman - Tue, 10/14/2014 - 17:08

In a recent post, “Who Is Ed Conrow?,” a responder asked about the differences between the PMBOK® risk approach and the DoD PMBOK risk approach, as well as for a summary of the book Effective Risk Management: Some Keys to Success by Edmund Conrow. Ed worked the risk management processes for a NASA proposal I was on. I was the IMP/IMS lead, so integrating risk with the Integrated Master Plan / Integrated Master Schedule in the manner he prescribed was a life-changing experience. I was naive before, but no longer after that proposal won ~$7B for the client.


Let me start with a few positioning statements:

  1. Project risk management is a topic with varying levels of understanding, interest, and applicability. The PMI PMBOK® provides a “starter kit” view of project risk. It covers the areas of risk management but does not have guidance on actually “doing” risk management. This often results in the false sense that “if we’re following PMBOK® then we’re OK.”
  2. The separation of technical risk from programmatic risk is not clearly discussed in PMBOK®. While not a “show stopper” issue for some projects, programmatic risk management is critical for enterprise-class projects. By enterprise I mean ERP, large product development, large construction, and aerospace and defense class programs. In fact, DI-MGMT-81861 mandates programmatic risk management processes for procurements greater than $20M. $20M sounds like a very large sum of money for the typical internal IT development project; it hardly keeps the lights on for an aerospace and defense program.
  3. The concepts around schedule variances are misunderstood and misapplied in almost every discussion of risk management in the popular literature. The common red herring is the supposed ineffectiveness of Critical Path and PERT. This of course is simply a false claim in domains outside IT or small, low-risk projects. Critical Path, PERT and Monte Carlo Simulation are mandated by government procurement guidelines. Not that these guides make them “correct.” What makes them correct is that programs measurably benefit from their application. This discipline is called Program Planning and Controls (PP&C) and is a profession in aerospace, defense, and large construction. No amount of “claiming the processes don’t work” removes the documented fact that they do, when properly applied. Anyone wishing to learn more about these approaches to programmatic risk management need only look to the NASA, Department of Energy, and Department of Defense Risk Management communities.

With all my biases out of the way, let’s look at the DoD PMBOK®

  1. First, the DoD PMBOK® is free. The original download location appears to be gone, so now you can find it here. This is a US Department of Defense approved document. It provides supplemental guidance to the PMI PMBOK®, but in fact replaces a large portion of PMI PMBOK®.
  2. Chapter 11 of DoD PMBOK® is Risk. It starts with the Risk Management Process areas – in Figure 11-1, page 125. (I will not put them here, because you should download the document and turn to that page.) The diagram contains six (6) process areas. The same number as the PMI PMBOK®. But what’s missing from PMI PMBOK® and present in DoD PMBOK® is how these processes interact to provide a framework for Risk Management.
  3. There are several missing critical concepts in PMI PMBOK® that are provided in DoD PMBOK®.
    • The Risk Management structure in Figure 11-1 is the first. Without connections between the process areas in some form other than “linear,” the management of technical and programmatic risk turns into a “check list.” This is the approach of PMI PMBOK® – to provide guidance on the process areas and leave it up to the reader to develop a process around these areas. This is also the approach of CMMI. This is an area too important to leave it up to the reader to develop the process.
    • The concept of the Probability and Impact matrix is fatally flawed in PMI PMBOK®. It turns out you cannot multiply the probability of occurrence by the impact of that occurrence. These “numbers” are not numbers in the sense of integer or real values; they are probability distributions themselves. Multiplication is not an operation that can be applied to a probability distribution. Distributions are integral equations, and the multiplication operator × is actually the convolution operator ⊗.
    • The second fatal flaw of the PMI PMBOK® approach to probability of occurrence and impact is that these “numbers” are uncalibrated cardinal values. That is, they have no actual meaning, since their “units of measure” are not calibrated.
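The distinction matters in practice. A small Monte Carlo sketch (my own illustration; every distribution choice below is an assumption) shows that multiplying point estimates recovers at best the mean of the exposure distribution, while the tail percentiles – the numbers a program manager actually needs – can be several times larger:

```python
# Illustrative Monte Carlo sketch (not from the book): treat both the
# probability of occurrence and the impact as distributions instead of
# point values. All distribution parameters here are assumptions.
import random

random.seed(42)
N = 100_000

samples = []
for _ in range(N):
    p = random.betavariate(2, 5)             # uncertain probability of occurrence
    impact = random.triangular(10, 100, 40)  # uncertain cost impact ($K): low, high, mode
    samples.append(impact if random.random() < p else 0.0)

samples.sort()
mean_exposure = sum(samples) / N
point_estimate = (2 / (2 + 5)) * ((10 + 40 + 100) / 3)  # mean p × mean impact
p80 = samples[int(0.8 * N)]                             # 80th-percentile exposure

# The point product matches only the mean; it says nothing about the tail.
print(f"point estimate    : {point_estimate:.1f}")
print(f"mean exposure (MC): {mean_exposure:.1f}")
print(f"80th pct exposure : {p80:.1f}")
```

The simulated mean agrees with the point product, but the 80th-percentile exposure is far larger – which is exactly the information a single “probability × impact” number throws away.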

Page 124 of DoD PMBOK® summarizes the principles of Risk Management as developed in two seminal sources.

  1. Effective Risk Management: Some Keys to Success, Edmund Conrow, American Institute of Aeronautics and Astronautics, 2000.
  2. Risk Management Guide for DoD Acquisition, Sixth Edition, (Version 1.0), August 2006, US Department of Defense, which is part of the Defense Acquisition Guide, §11.4), which is published within the Office of the Under Secretary of Defense, Acquisition, Technology and Logistics  (OUSD(AT&L)),  Systems and Software Engineering / Enterprise Development.

Now all these pedantic references are here for a purpose. This is how people who manage risk for a living, manage risk. By risk, I mean technical risk that results in loss of mission, loss of life. Programmatic risk that results in loss of Billions of Tax Payer dollars. They are serious enough about risk management to not let the individual project or program manager interpret the vague notions in PMI PMBOK®. These may appear to be harsh words, but the road to the management of enterprise class projects is littered with disasters. You can read every day of IT projects that are 100% over budget, 100% behind schedule. From private firms to the US Government, the trail of destruction is front page news.

A Slight Diversion – Why are Enterprise Projects So Risky?

There are many reasons for failure – too many to mention – but one is the inability to identify and mitigate risk. The words “identify” and “mitigate” sound simple. They are listed in the PMI PMBOK® and the DoD PMBOK®. However, here is where the problem starts:

  1. The process of separating risk from issue.
  2. Classifying the statistical nature of the risk.
  3. Managing the risk process independently from project management and technical development.
  4. Incorporating the technical risk mitigation processes into the schedule.
  5. Modeling the impact of technical risk on programmatic risk.
  6. Modeling the programmatic risk independent from the technical risk.

Using Conrow as a Guide

Here is one problem. When you use the complete phrase “Project Risk Management” with Google, you get ~642,000 hits. There are so many books, academic papers, and commercial articles on Risk Management, where do we start? Ed Conrow’s book is probably not the starting point for learning how to practice risk management on your project. However, it might be the ending point. If you are in the software development business, a good starting point is – Managing Risk: Methods for Software Systems Development, Elaine M. Hall, Addison Wesley, 1998. Another broader approach is Continuous Risk Management Guidebook, Software Engineering Institute, August 1996. While these two sources focus on software, they provide the foundation for the discussion of risk management as a discipline.

There are public sources as well:

  1. Start with the NASA Risk Management page,
  2. Software Engineering Institute’s Risk Management page,
  3. A starting point for other resources from NASA,
  4. Department of Energy’s Risk Management Handbook, (which uses the DoD Risk Process Map described above)

However, care needs to be taken once you go outside the government boundaries. There are many voices plying the waters of “risk management,” as well as other voices with “axes to grind” regarding project management methods and risk management processes. The result is often a confusing message full of anecdotes, analogies, and alternative approaches to the topic of Risk Management.

Conrow in his Full Glory

Before starting into the survey of the Conrow book, let me state a few observations:

  1. This book is tough going. I mean really tough going. Tough in the way a graduate statistical mechanics book is tough going. Or a graduate micro-economics of managerial finance book is tough going. It is “professional grade” stuff. By “professional grade” I mean – written by professionals to be used by professionals.
  2. Not every problem has the same solution need. Conrow’s solutions may not be appropriate for a software development project with 4 developers and a customer in the same room. But for projects that have multiple teams, locations, and stakeholders, some type of risk management is needed.
  3. The book is difficult to read for another reason. It assumes you have “a reasonable understanding of the issues” around risk management. This means this is not a tutorial-style or “risk management for dummies” book. It is a technical reference book. There is little in the way of introductory material or bringing the reader up to speed before presenting the material.

From the introduction:

The purpose of this book is two-fold: first, to provide key lessons learned that I have documented from performing risk management on a wide variety of programs, and second, to assist you, the reader, in developing and implementing an effective risk management process on your program.

A couple of things here. One is the practical experience in risk management. Many in the risk management “talking” community have limited experience with risk management in the way Ed does. I first met Ed on a proposal for a $8B Manned Spaceflight program. He was responsible for the risk strategy and the conveying of that strategy in the proposal. The proposal resulted in an award and now our firm provides Program Planning and Controls for a major subsystem of the program. In this role programmatic and technical risk management is part of the Statement of Work flowed down from the prime contractor.

Second, Ed is a technical advisor to the US Arms Control and Disarmament Agency, as well as a consultant to industry and government on risk management. These “resume” items are meant to show that the practice of risk management is just that – a practice. Speaking about risk management and doing risk management on high-risk programs are two different things.

One of Ed’s principal contributions to the discipline was the development of a micro-economic framework of risk management in which design feasibility (or technical performance) is traded against cost and schedule.

In the end, this is a reference text for the process of managing the risk of projects, written by a highly respected practitioner.

What does the Conrow Book have to offer over the Standard approach?

Ed’s book contains the current “best practices” for managing technical and programmatic risk. These practices are used on high risk, high value programs. The guidelines in Ed’s book are generally applicable to many other classes of projects as well. But there are several critical elements that differentiate this approach from the pedestrian approach to risk management.

  1. The introduction of the “ordinal risk scale.” This approach is dramatically different from the PMI PMBOK® description of risk, in which the probability of occurrence is multiplied by the consequences to produce a “risk rating.” Neither the probability of occurrence nor the consequences are calibrated in any way. The result is a meaningless number that may satisfy the C-Level that “risk management” is being done on the program. By ordinal risk scales it is meant a classification of risk, say – A, B, C, D, E, F – that is descriptive in nature, not just numbers. By the way, the use of letters is intentional. If numbers are used for ordinal risk ranks, there is a tendency to do arithmetic on them: compute the average risk rank, or multiply them by the consequences. Letters remove this notion and prevent the first failure of the common risk management approach – producing an overall risk measure.

The ordinal approach works like this. Ed describes some classes of risk scales which include: maturity, sufficiency, complexity, uncertainty, estimative probability, and probability based scales.

A maturity risk scale would be:


Scale Level (in order of increasing maturity):

  • Basic principles observed
  • Concept design analyzed for performance
  • Breadboard or brassboard validation in relevant environment
  • Prototype passes performance tests
  • Item deployed and operational


 The critical concept is to relate the risk ordinal value to an objective measure. For a maturity risk assessment, some “calibration” of what it means to have the “basic principles observed” must be developed. This approach can be applied to the other classes – sufficiency, complexity, uncertainty, estimative probability and probability based scales.
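A minimal sketch of this idea, using letters as the book recommends so the ranks invite no arithmetic. The pairing of letters with these particular maturity definitions is my own illustrative assumption:

```python
# Sketch: an ordinal maturity scale keyed by letters rather than numbers.
# Python's Enum gives each rank a descriptive definition and, by design,
# supports no arithmetic on the members.
from enum import Enum

class Maturity(Enum):
    A = "Basic principles observed"
    B = "Concept design analyzed for performance"
    C = "Breadboard or brassboard validation in relevant environment"
    D = "Prototype passes performance tests"
    E = "Item deployed and operational"

risk_rank = Maturity.B
print(f"{risk_rank.name}: {risk_rank.value}")

# Trying to "average" or scale ordinal ranks fails, which is the point:
try:
    risk_rank * 3
except TypeError:
    print("arithmetic on ordinal ranks is rejected")
```

The TypeError is the feature: an ordinal rank names a position on a descriptive scale, so multiplying it by a consequence score is meaningless by construction.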

It’s the estimative probability that is important to cost and schedule people in our PP&C practice. The estimative probability scale attempts to relate a word to a probability value: “High” to 80%, for example. An ordinal estimative probability scale using point estimates derived from a statistical analysis of survey data might look like:


[Table: estimative probability scale – “Scale Level” labels mapped to median probability values, from “Almost no chance” upward; the values are omitted here.]



Calibrating these risk scales is the primary analysis task of building a risk management system. What does it mean to have a “medium” risk, in the specific problem domain?
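The calibration step amounts to a lookup from words to survey-derived median values. In the sketch below, only the “High” → 0.80 pairing comes from the text above; every other label/value pair is an illustrative assumption:

```python
# Hypothetical estimative-probability scale: word labels mapped to median
# probability values. Only "High" -> 0.80 is from the text; the rest are
# illustrative stand-ins for survey-derived medians.
ESTIMATIVE_SCALE = {
    "Almost no chance": 0.05,
    "Low": 0.20,
    "Medium": 0.50,
    "High": 0.80,
    "Almost certain": 0.95,
}

def median_probability(label: str) -> float:
    """Resolve a word to its calibrated median probability value."""
    return ESTIMATIVE_SCALE[label]

print(median_probability("High"))
```

The hard analytical work is producing the table itself for a specific problem domain; once calibrated, everyone on the program means the same thing by “High.”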

  2. The second item is the formal use of a risk management process. Simply listing the risk process areas – as is done in PMBOK® – is not only poor project management practice, it is poor risk management practice. The processes to be used are shown on page 135 of the book. The application of these processes is described in detail. No process area is optional. All are needed. All need to be performed in the right relationship to each other.

These two concepts are the ones that changed the way I perform risk management on the programs I’m involved with and how we advise our clients. They are paradigm-changing concepts. No more simple-minded arithmetic with probabilities and consequences. No more uncalibrated risk scales. No more tolerating those who claim PERT, Critical Path, and Monte Carlo are unproven, obsolete, or “wrong headed” approaches.

Get Ed’s book. It’ll cost way too much when compared to the “paperback” approach to risk. But for those tasked with “managing risk,” this is the starting point.


Categories: Project Management

Sponsored Post: Apple, Hypertable, VSCO, Gannett, Sprout Social, Scalyr, FoundationDB, AiScaler, Aerospike, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Apple has multiple openings. Changing the world is all in a day's work at Apple. Imagine what you could do here. 
    • Senior Engineer: Mobile Services. The Emerging Technologies/Mobile Services team is looking for a proactive and hardworking software engineer to join our team. The team is responsible for a variety of high quality and high performing mobile services and applications for internal use. We seek an accomplished server-side engineer capable of delivering an extraordinary portfolio of features and services based on emerging technologies to our internal customers. Please apply here.
    • Apple Pay Automation Engineer. The Apple Pay group within iOS Systems is looking for an outstanding automation engineer with strong experience in building client and server test automation. We work in an agile software development environment and are building infrastructure to move towards continuous delivery, where every code change is thoroughly tested at the push of a button and is considered ready to be deployed if we so choose. Please apply here
    • Site Reliability Engineer. As a member of the Apple Pay SRE team, you’re expected to not just find the issues, but to write code and fix them. You’ll be involved in all phases and layers of the application, and you’ll have a direct impact on the experience of millions of customers. Please apply here.
    • Software Engineering Manager. In this role, you will be communicating extensively with business teams across different organizations, development teams, support teams, infrastructure teams and management. You will also be responsible for working with cross-functional teams to delivery large initiatives. Please apply here

  • VSCO. Do you want to: ship the best digital tools and services for modern creatives at VSCO? Build next-generation operations with Ansible, Consul, Docker, and Vagrant? Autoscale AWS infrastructure to multiple Regions? Unify metrics, monitoring, and scaling? Build self-service tools for engineering teams? Contact me (Zo, and let’s talk about working together.

  • Gannett Digital is looking for talented Front-end developers with strong Python/Django experience to join their Development & Integrations team. The team focuses on video, user generated content, API integrations and cross-site features for Gannett Digital’s platform that powers sites such as, or Please apply here.

  • Platform Software Engineer, Sprout Social, builds world-class social media management software designed and built for performance, scale, reliability and product agility. We pick the right tool for the job while being pragmatic and scrappy. Services are built in Python and Java using technologies like Cassandra and Hadoop, HBase and Redis, Storm and Finagle. At the moment we’re staring down a rapidly growing 20TB Hadoop cluster and about the same amount stored in MySQL and Cassandra. We have a lot of data and we want people hungry to work at scale. Apply here.

  • UI Engineer. AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data. AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of the software that manages application architectures. Apply here.
Fun and Informative Events
  • Sign Up for New Aerospike Training Courses.  Aerospike now offers two certified training courses; Aerospike for Developers and Aerospike for Administrators & Operators, to help you get the most out of your deployment.  Find a training course near you.

  • November TokuMX Meetups for Those Interested in MongoDB. Join us in one of the following cities in November to learn more about TokuMX and hear TokuMX use cases. 11/5 - London;11/11 - San Jose; 11/12 - San Francisco. Not able to get to these cities? Check out our website for other upcoming Tokutek events in your area -
Cool Products and Services
  • Hypertable Inc. Announces New UpTime Support Subscription Packages. The developer of Hypertable, an open-source, high-performance, massively scalable database, announces three new UpTime support subscription packages – Premium 24/7, Enterprise 24/7 and Basic. 24/7/365 support packages start at just $1995 per month for a ten node cluster -- $49.95 per machine, per month thereafter. For more information visit us on the Web at Connect with Hypertable: @hypertable--Blog.

  • FoundationDB launches SQL Layer. SQL Layer is an ANSI SQL engine that stores its data in the FoundationDB Key-Value Store, inheriting its exceptional properties like automatic fault tolerance and scalability. It is best suited for operational (OLTP) applications with high concurrency. Users of the Key Value store will have free access to SQL Layer. SQL Layer is also open source, you can get started with it on GitHub as well.

  • Diagnose server issues from a single tab. Scalyr replaces all your monitoring and log management services with one, so you can pinpoint and resolve issues without juggling multiple tools and tabs. Engineers say it's powerful and easy to use. Customer support teams use it to troubleshoot user issues. CTO's consider it a smart alternative to Splunk, with enterprise-grade functionality, sane pricing, and human support. Trusted by in-the-know companies like Codecademy – learn more!

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture