Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

If You Want to Thrive at Microsoft

I was reading back through Satya Nadella’s email on Bold Ambition and Our Core, and a few things caught my eye.

One of them was the idea that if you want to thrive at Microsoft, you need to drive change.

Satya writes:

Nothing is off the table in how we think about shifting our culture to deliver on this core strategy. Organizations will change. Mergers and acquisitions will occur. Job responsibilities will evolve. New partnerships will be formed. Tired traditions will be questioned. Our priorities will be adjusted. New skills will be built. New ideas will be heard. New hires will be made. Processes will be simplified. And if you want to thrive at Microsoft and make a world impact, you and your team must add numerous more changes to this list that you will be enthusiastic about driving.”

Change is in the air, and Satya has given everyone a license to thrive by re-imagining how to change the world, or at least their part of it.

For me, I’m focused on how to accelerate business transformation with Cloud, Mobile, Social, Big Data and the Internet of Things.

Together, these technology trends are enabling new end-to-end customer experiences, workforce transformation, and operations transformation.

It’s all about unleashing what individuals and businesses are capable of.

Categories: Architecture, Programming

Who Removes Your Obstacles?

In self-organizing teams, teams remove their own obstacles. It’s a good idea. It can be difficult in practice.

In Scrum, the Scrum Master is supposed to facilitate removing the team’s obstacles that the team can’t remove. It’s a good idea. It can be difficult in practice.

And, what if you aren’t doing Scrum, or you’re transitioning to agile and you don’t yet have a self-organizing team? Maybe you have an agile project manager. Maybe you have a team facilitator. Not every team needs a titled manager-type, you know. (Even I don’t think that, and I come from project management.)

Maybe the team bumps up against an obstacle they can’t remove, even if they try. Why? Because the obstacles the team can’t remove tend to fall in these categories:

  • Cross-functional problems across several teams or across the organization
  • Problems up the hierarchy in the organization
  • Problems that occur in both places: in another department and higher up in the hierarchy

Oh boy. Someone who either used to be technical or used to be a first-line manager is supposed to talk to a VP of Support or Sales or the CIO or the CTO or “the Founder of the Company” and ask for help removing an impediment. Unless the entire organization is already agile, can you see that this is a problem or a potential problem?

Chances are good that during an organization’s transition to agile, the team’s facilitator (regardless of the title) will need help from a more senior manager to remove obstacles. Not for the team. For the rest of the organization.

Now, I would love it if the person who is supposed to remove obstacles was that designated facilitator (Scrum Master, agile project manager, whomever). And, that designated facilitator had the personal power to do it all by him or herself. But, I’m a realist. At my clients, that doesn’t happen without management support.

Is it a problem if a manager removes obstacles?

I don’t think so, as long as the manager supports the team, and doesn’t prevent the team from solving its own problems.

Here are examples I would expect the team to solve on its own:

  • Not finishing stories inside an iteration because there is too much work in progress or each person takes his or her own story. This is a problem the team can manage by reviewing the board, or by pairing or swarming. The team has many options for solving this problem.
  • Too much work in progress. Again, the team can often manage this problem. The first step is to see it.
  • Not getting to “done” on stories. The first steps are to make the stories smaller, to make sure you have acceptance criteria, and to work with the product owner on story size, those kinds of things. Maybe the first step is to make sure you have integrated testers into your team. But that might require managers to help.
  • Having trouble deciding what team norms are and the resulting issues from that.

Here are obstacles mid-level managers or more senior managers might help with:

  • Creating cross-functional teams, not siloed teams. I keep seeing “developer” teams and “tester” teams being told “Go Agile!” No, agile requires a cross-functional team with all the roles you need on one team to accomplish the work. This is often a management misconception.
  • Which project is #1 (managing the project portfolio). If the team has a ton of interruptions or a demand for multitasking because management can’t decide which project is #1, I would expect some manager to take this obstacle on and resolve it. The pressures for multitasking or interruptions often arise from outside the team.
  • How the organization rewards people. At some point, agile bumps up against the culture of individual rewards. Managers might be the ones who need to manage HR.
  • When the product owner goes missing, or the team has no product owner. The team is now missing a key piece to their success and what makes agile work. Very few teams can demand a product owner, especially if they are new to agile.

This is a form of management as servant leadership, where the manager serves the team.

Do you see how certain problems are inside-the-team problems and the team can solve them? Other problems are not inside-the-team problems, and are part of the organization’s system. The organization needs to decide, “How committed to agile are we?” and address these issues.

Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Tue, 12/02/2014 - 15:02

Plans are only good intentions unless they immediately degenerate into hard work. - Peter F. Drucker

Categories: Project Management

ScrumMasters Should Not Also Be Product Owners

Mike Cohn's Blog - Tue, 12/02/2014 - 15:00

Hey, ScrumMaster: Step away from the index cards! You should not be a team’s product owner if you are also the team’s ScrumMaster.

Different individuals should fill these two roles, and there are many reasons for this. Let’s consider a few of them in this post.

Each Has a Different Focus

First, a product owner and ScrumMaster are focused on different aspects of a Scrum project. The product owner spends his or her time thinking about what to build. That should largely be determined independent of the capabilities of the team, which are the concern of the ScrumMaster.

That is, while a product owner is determining what to build, the ScrumMaster is helping the team work together so they can.

In many ways, this can be thought of as similar to the idea that a team’s programmer and tester should be separate. Sure, a good programmer can test and a good tester can program. But, separating those roles is usually a good idea.

Each Is Probably Pretty Busy

Second, it is quite likely that being either ScrumMaster or product owner requires full- or near full-time attention. Putting one person in both roles at the same time will almost certainly shortchange one of the two.

The Two Roles Require Different Personalities

There is some overlap in the skills and personality traits that make good product owners and ScrumMasters. However, the roles are different, and it is extremely unlikely that someone will be able to excel at both, especially at the same time.

I’ve written elsewhere about how the different types of tasks each performs make this so.

A Natural Tension Exists Between the Roles

Additionally, a natural tension should exist between the two roles. Although each is undeniably committed to the success of the product or system being developed, product owners naturally want more, more, more.

ScrumMasters, on the other hand, are more attuned to the issues that can arise from a team under undue pressure to deliver more, more, more. When a balance exists between the roles, a product owner is free to follow his or her natural tendency to ask for more, safe with the assurance that the ScrumMaster is there to prevent pushing too hard.

Are There Ever Exceptions?

Certainly. I’ve encountered many situations in which the ScrumMaster and product owner were the same person, and where I felt that was appropriate. Some of these have included small organizations that could not afford the luxury of dedicated or separate individuals.

Other situations were small teams who had started in the pursuit of a technical product owner’s vision. On such small teams, any one personality can have an outsized effect on the team, regardless of any formal role played by the person.

Other exceptions have been ScrumMasters involved in contract development. It is common on such a project for the “true product owner” to exist within the client asking for the software to be built.

Unfortunately, it is also common for such true product owners not to want to be deeply involved in the project at the level a Scrum team needs. It is in these cases that a good ScrumMaster often steps up and into the role as a proxy for that true product owner.

So, sure, there are exceptions—just like there are to any rule. However, none of those exceptions should exist for the long term. And anyone in both roles simultaneously should be aware of the challenges the dual role presents.

New blogpost on kibana 4 beta

Gridshore - Tue, 12/02/2014 - 12:55

If you are, like me, interested in Elasticsearch and Kibana, then you might be interested in a blog post I wrote about the new Kibana 4 beta. If so, head over to my employer’s blog:

http://amsterdam.luminis.eu/2014/12/01/experiment-with-the-kibana-4-beta/

The post New blogpost on kibana 4 beta appeared first on Gridshore.

Categories: Architecture, Programming

35 Best Quotes from Management 3.0 #Workout

NOOP.NL - Jurgen Appelo - Tue, 12/02/2014 - 11:58

For several months, I posted and tweeted quotes from the new #Workout book. I then measured how often they were retweeted and liked on Twitter and on Facebook. And this is the result! The most popular quotes from Management 3.0 #Workout. Enjoy.

The post 35 Best Quotes from Management 3.0 #Workout appeared first on NOOP.NL.

Categories: Project Management

Snowflakes

Phil Trelford's Array - Tue, 12/02/2014 - 08:45

Welcome to day 2 of the F# Advent Calendar in English, and don’t miss Scott Wlaschin’s introduction to property-based testing from yesterday.

In A Christmas Carol Charles Dickens wrote of cold winters with snow as a matter of course. White Christmases were common during the Little Ice Age that lasted from the 1550s to the 1850s. Nowadays the chances of a snowfall on Christmas day are much lower, but the imagery of a white Christmas persists.

In this post we’ll generate our snowflakes instead.

Koch Snowflake

The Koch snowflake is a mathematical curve constructed from an equilateral triangle where each line segment is recursively altered:

[Image: Koch snowflake]

The picture above was generated in the F# REPL, using WinForms to display a bitmap:

open System
open System.Drawing

// Canvas size: width and height are defined in the full snippet linked below;
// the values here are an assumption so this excerpt stands alone.
let width, height = 512.0, 512.0

let snowflake (graphics:Graphics) length =
   use pen = new Pen(Color.White)
   let angle = ref 0.0
   let x = ref ((float width/2.0) - length/2.0)
   let y = ref ((float height/2.0) - length/3.0)
   let rec segment n depth =
      if depth = 0 then
         line n
      else
         segment (n/3.0) (depth-1)
         rotate -60.0
         segment (n/3.0) (depth-1)
         rotate 120.0
         segment (n/3.0) (depth-1)
         rotate -60.0
         segment (n/3.0) (depth-1)
   and line n =
      let r = !angle * Math.PI / 180.0
      let x2 = !x + cos(r) * n
      let y2 = !y + sin(r) * n
      graphics.DrawLine(pen, float32 !x,float32 !y, float32 x2, float32 y2)
      x := x2
      y := y2
   and rotate a =
      angle := !angle + a
   let depth = 5
   segment length depth
   rotate 120.0
   segment length depth
   rotate 120.0
   segment length depth

The full snippet is available on F# Snippets: http://fssnip.net/oA

Paper and Scissors

Snowflakes can be created by folding paper and cutting holes with scissors. We can get a similar effect using transparent polygons and rotational symmetry:

[Image: paper-cut style snowflake]

Here the polygons are selected randomly and like snowflakes each one is different:

let paperSnowflake () =   
   let image = new Bitmap(int width, int height)
   use graphics = Graphics.FromImage(image)  
   use brush = new SolidBrush(Color.Black)
   graphics.FillRectangle(brush, 0, 0, int width, int height)
   graphics.TranslateTransform(float32 (width/2.0),float32 (height/2.0))
   let color = Color.FromArgb(128,0,128,255)
   use brush = new SolidBrush(color)
   let rand = Random()
   let polys =
      [for i in 1..12 ->
         let w = rand.Next(20)+1 // width
         let h = rand.Next(20)+1 // height
         let m = rand.Next(h)    // midpoint
         let s = rand.Next(30)   // start
         [|0,s; -w,s+m; 0,s+h; w,s+m|]
      ]
   for i in 0.0..60.0..300.0 do
      graphics.RotateTransform(float32 60.0)
      let poly points =
         let points = [|for (x,y) in points -> Point(x*5,y*5)|]
         graphics.FillPolygon(brush,points)
      polys |> List.iter poly 
   image

 

The full snippet is on F# Snippets: http://fssnip.net/oB

Another interesting method of generating snowflakes is cellular automata, but I’ll leave that as an exercise for the reader.

Happy holidays!
Categories: Programming

Agile Risk Management: A Lean Process

Did you anticipate the jellyfish or not?

On February 12 at 5:44 AM a sinkhole in Bowling Green, Kentucky opened up below the National Corvette Museum Skydome exhibit area, swallowing eight cars. The risk of the sinkhole had not been foreseen; therefore no one had monitored the risk. The problem was discovered when a motion detector was triggered. The motion detector was in place to monitor and mitigate a more commonplace risk. I suspect the risk managers at the museum were convinced they had anticipated all reasonable risks. Every time methodologists, developers and project managers (of any stripe) decide that they have found the perfect process or method that risk-proofs their project, disaster is right around the corner. We have described how Agile mitigates some types of risks (Agile and Risk Management); however, risk in general still needs to be managed and controlled in any size project. Software risk management is crucial to the success of all software development, enhancement and maintenance projects: it avoids the problems that can be avoided and recognizes those that can’t be. A lean process to manage risk in Agile projects is shown below:

  1. Identify knowable risks. – Identify the knowable risks when generating the initial backlog. Remember that risk identification, like the identification of user stories, will be an iterative process. Teams that have trouble identifying risks can leverage a checklist or a list of common risks.
  2. Build mitigation for common risks into the definition of done. – The definition of done is the set of requirements that the software must meet to be considered complete. Mitigation for common risks, such as the failure of the software to integrate properly, can be built into the definition of done.
  3. Generate stories for less common risks and add them to the project’s backlog. – Risks that can’t be mitigated and monitored through the definition of done can be treated as a specialized form of user story and added directly to the product backlog (see the sketch after this list).
  4. Review risks when grooming stories. – Just identifying and listing risks is useful, but not sufficient. It is necessary to review the risks placed on the product backlog. The backlog grooming process provides a good platform for periodic risk review, ensuring that if a risk needs to be mitigated, steps can be added to the next sprint, and that if a risk is becoming an issue, steps can be taken immediately.
  5. Carve out time during planning to identify emerging risks. – Project environments are dynamic. New risks may emerge or current risks may evolve as the project progresses. Carving out time to reflect on risk as part of planning will help the team avoid being surprised.
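
One way to picture step 3 is to treat a risk as a specialized backlog item groomed alongside ordinary stories. Here is a minimal sketch, assuming an invented exposure = probability × impact ranking; all type and field names are mine, not from any Agile tool:

case class Story(title: String, acceptanceCriteria: List[String])

case class Risk(description: String, mitigation: String, probability: Double, impact: Double) {
  def exposure: Double = probability * impact // used to rank risks during grooming
}

// The product backlog holds both kinds of item.
sealed trait BacklogItem
case class UserStory(story: Story) extends BacklogItem
case class RiskStory(risk: Risk) extends BacklogItem

object RiskBacklog {
  def main(args: Array[String]): Unit = {
    val backlog: List[BacklogItem] = List(
      UserStory(Story("Pay by credit card", List("payment settles", "receipt is emailed"))),
      RiskStory(Risk("payment gateway rate-limits us", "add retry with backoff", 0.3, 8.0))
    )
    // During grooming, surface the highest-exposure risks first.
    val risksByExposure = backlog.collect { case RiskStory(r) => r }.sortBy(r => -r.exposure)
    risksByExposure.foreach(r => println(s"${r.description}: exposure ${r.exposure}"))
  }
}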

Why is the discussion of risk important? Projects need to understand what can reasonably happen and be prepared. While very few projects have to plan and prepare for a sinkhole opening up and swallowing their data center or the project team (unless they are in Bowling Green), reasonable potential risks should be anticipated. In a world where most projects are integrated into an organization’s value chain, potential disruptions or failures need to be explored and monitored.


Categories: Process Management

3, 2, 1 Code-in: Inviting teens to contribute to open source

Google Code Blog - Mon, 12/01/2014 - 21:00

[Image: Google Code-in 2014 logo]

We believe that the key to getting students excited about computer science is to give them opportunities at ever younger ages to explore their creativity with computer science. That’s why we’re running the Google Code-in contest again this year, and today’s the day students can go to the contest site, register and start looking for tasks that interest them.

Ignacio Rodriguez was just 10 years old when he became curious about Sugar, the open source learning platform introduced nationally to students in Uruguay when he was in elementary school. With the encouragement of his teacher, Ignacio started asking questions of the developers writing and maintaining the code and he started playing around with things, a great way to learn to code. When he turned 14 he entered the Google Code-in contest, completing tasks that included writing two new web services for Sugar and creating four new Sugar activities. He even continued to mentor other students throughout the contest period. His enthusiasm for coding and making the software even better for future users earned him a spot as a 2013 Grand Prize Winner.

Ignacio is one of the 1,575 students from 78 countries that have participated in Google Code-in since 2010. We are encouraging 13-17 year old students to explore the many varied ways they can contribute to open source software development through the Google Code-in contest. Because open source is a collaborative effort, the contest is designed as a streamlined entry point for students into software development by having mentors assigned to each task that a student works on during the contest. Students don’t have to be coders to participate; as with any software project, there are many ways to contribute to the project.  Students will be able to choose from documentation, outreach, research, training, user interface and quality assurance tasks in addition to coding tasks.

This year, students can choose tasks created by 12 open source organizations working on disaster relief, content management, desktop environments, gaming, medical record systems for developing countries, 3D solid modeling computer-aided design and operating systems, to name a few.

For more information on the contest, please visit the contest site where you can find the timeline, Frequently Asked Questions and information on each of the open source projects students can work with during the seven week contest.

Good luck students!
By Stephanie Taylor, Open Source Programs
Categories: Programming

Auth0 Architecture - Running in Multiple Cloud Providers and Regions

This is a guest post by Jose Romaniello, Head of Engineering, at Auth0.

Auth0 provides authentication, authorization and single sign on services for apps of any type: mobile, web, native; on any stack.

Authentication is critical for the vast majority of apps. We designed Auth0 from the beginning with multiple levels of redundancy. One of these levels is hosting. Auth0 can run anywhere: our cloud, your cloud, or even your own servers. And when we run Auth0 we run it on multiple cloud providers and in multiple regions simultaneously.

This article is a brief introduction of the infrastructure behind app.auth0.com and the strategies we use to keep it up and running with high availability.

Core Service Architecture

The core service is relatively simple:

  • Front-end servers: these consist of several x-large VMs, running Ubuntu on Microsoft Azure.

  • Store: mongodb, running on dedicated memory optimized X-large VMs.

  • Intra-node service routing: nginx

All components of Auth0 (e.g. Dashboard, transaction server, docs) run on all nodes. All identical.

Multi-cloud / High Availability
Categories: Architecture

What It Actually Means to Market Yourself as a Software Developer

Making the Complex Simple - John Sonmez - Mon, 12/01/2014 - 17:00

Today is your lucky day! No, really it is. I am going to tell you exactly what it means to market yourself as a software developer and why it just might not be such a bad thing. Believe me, I know what you are thinking. I get a lot of flak about the idea of marketing yourself or doing any ... Read More

The post What It Actually Means to Market Yourself as a Software Developer appeared first on Simple Programmer.

Categories: Programming

About snowmen and mathematical proof why agile works

Xebia Blog - Mon, 12/01/2014 - 16:05

Last week I took an interesting course by Roger Sessions on Snowman Architecture. The perishable nature of snowmen under any serious form of pressure fortunately does not apply to his architecture principles, but being an agile fundamentalist I noticed some interesting patterns in the math underlying the Snowman Architecture that are well rooted in agile practices. Understanding these principles may give facts to feed your gut feeling about these philosophies and provide mathematical proof as to why agile works.

Complexity

“What has philosophy got to do with measuring anything? It's the mathematicians you have to trust, and they measure the skies like we measure a field. “ - Galileo Galilei, Concerning the New Star (1606).

In his book “Facts and Fallacies of Software Engineering” Robert Glass implied that when the functionality of a system increases by 25%, its complexity effectively doubles. So in formula form:

complexity = functionality ^ (log 2 / log 1.25) ≈ functionality ^ 3.1

This hypothesis is supported by empirical evidence, and it also explains why planning poker that focuses on the complexity of the implementation, rather than the functionality delivered, is a more accurate estimator of what a team can deliver in a sprint.

Basically the smaller you can make the functionality the better, and that is better to the power 3 for you! Once you start making functionality smaller, you will find that your awesome small functionality needs to talk to other functionalities in order to be useful for an end user. These dependencies are penalized by Roger’s model.

“An outside dependency contributes as much complexity as any other function, but does so independently of the functions.”

In other words, splitting a function of, say, 4 points (74 complexity points) into two equal separate functions reduces the overall complexity to 17 complexity points. This benefit however vanishes when each module has more than 3 connections.
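
To make the arithmetic concrete, here is a minimal sketch (the object and function names are mine) that reproduces both numbers from Glass’s exponent log 2 / log 1.25 ≈ 3.1:

object GlassLaw {
  // Glass's Law: a 25% increase in functionality doubles complexity, so
  // complexity = functionality ^ (log 2 / log 1.25) ≈ functionality ^ 3.1
  val exponent: Double = math.log(2) / math.log(1.25)

  def complexity(functionPoints: Double): Double = math.pow(functionPoints, exponent)

  def main(args: Array[String]): Unit = {
    println(f"one 4-point function:  ${complexity(4)}%.0f complexity points")     // ≈ 74
    println(f"two 2-point functions: ${2 * complexity(2)}%.0f complexity points") // ≈ 17
  }
}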

An interesting observation one can derive from this is a mathematical model that helps you find which functions “belong” together. It stands to reason that when those functions suffer from technical interfacing, they will equally suffer from human interfacing. But how do we find which functions “belong” together, and does it matter if we get it approximately right?

Endless possibilities

“Almost right doesn’t count” – Dr. Taylor, on landing a spacecraft, after a 300-million-mile journey, 50 meters from a spot with adequate sunlight for the solar panels.

Partitioning math is incredibly complex, and the main problem with the separation of functions and interfaces is that it has massive implications if you only get it “just about right”. This is neatly covered by the Bell number (http://en.wikipedia.org/wiki/Bell_number).

These numbers grow quite quickly: a set of 2 functions can be split 2 ways, but a set of 3 already has 5 options, at 6 it is 203, and if your application covers a mere 16 business functions, we already have more than 10 billion ways to create sets, and only a handful will give that desired low complexity number.
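
These counts are easy to reproduce with the Bell triangle; here is a small sketch (the helper names are mine):

object BellNumbers {
  // Bell triangle: each row starts with the last entry of the previous row,
  // and every other entry adds its left neighbour to the entry above it.
  // The Bell number B(n) is the first entry of row n.
  def bell(n: Int): BigInt =
    Iterator
      .iterate(Vector(BigInt(1)))(row => row.scanLeft(row.last)(_ + _))
      .drop(n).next().head

  def main(args: Array[String]): Unit =
    Seq(2, 3, 6, 16).foreach(n => println(s"B($n) = ${bell(n)}"))
  // prints B(2) = 2, B(3) = 5, B(6) = 203, B(16) = 10480142147
}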

So how can math help us find the optimal set division, the one with the lowest complexity factor?

Equivalence Relations

In order to find business functions that belong together, or at least have so much in common that the complexity of the extra interfaces would outweigh the functional complexity saved, we can resort to the set equivalence relation (http://en.wikipedia.org/wiki/Equivalence_relation). It is both the strong and the weak point of the Snowman architecture. It provides a genius algorithm for separating a set into the most optimal subsets (and doing so in O(n + k log k) time). The equivalence relation that Sessions proposes is as follows:

            Two business functions {a, b} have synergy if, and only if, from a business perspective {a} is not useful without {b} and vice versa.

The weak point is the subjective measurement in the equation. Applied at too high a level everything will be required, and at too low a level it will not return any valuable business results.
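
As a sketch of how such an equivalence relation could be mechanized, a standard union-find pass (my illustration, not Sessions’ actual algorithm; the function names and synergy pairs are invented) groups business functions into candidate subsets:

import scala.collection.mutable

object SynergyPartition {
  // Functions judged synergistic by the business end up in the same subset.
  def partition(functions: Seq[String], synergies: Seq[(String, String)]): Seq[Set[String]] = {
    val parent = mutable.Map(functions.map(f => f -> f): _*)
    def find(f: String): String =
      if (parent(f) == f) f
      else { val root = find(parent(f)); parent(f) = root; root } // path compression
    synergies.foreach { case (a, b) => parent(find(a)) = find(b) } // union
    functions.groupBy(find).values.map(_.toSet).toSeq
  }

  def main(args: Array[String]): Unit = {
    val functions = Seq("browse", "cart", "checkout", "pick", "pack", "ship")
    val synergies = Seq("browse" -> "cart", "cart" -> "checkout",
                        "pick" -> "pack", "pack" -> "ship")
    // Two subsets emerge: a customer-facing part and an order-handling part.
    partition(functions, synergies).foreach(println)
  }
}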

In my last project we split a large eCommerce platform into a customer-facing part and an order-handling part. This worked so well that the teams started complaining that the separation had lowered their knowledge of each other’s codebase, since very little functionality required coding on both subsystems.

We had effectively reduced complexity considerably, but we could have taken it one step further. The order-handling system was talking to a lot of other systems in order to get orders fulfilled. From a business perspective we could have separated further, reducing complexity even more. In fact, armed with Glass’s Law, we’ll refactor the application to make it even better than it is today.

Why bother?

Well, polynomially growing problems can’t be solved with linear solutions.

Polynomial problems vs linear solutions plotted against time

As long as the complexity is below the solution curve, things will be going fine. Then there is a point in time where the complexity surpasses our ability to solve it. Sure, we can add a team, or a new technology, but unless we change the nature of our problem, we are only postponing the inevitable.

This is the root cause of why your user stories should not exceed the sprint boundaries. Scrum forces you to chop the functionality into smaller pieces that keep the team in a phase where linear development power exceeds the complexity of the problem. In practice, in almost every case where we saw a team breaking this rule, they would end up at the “uh-oh moment” at some point in the future, at the stage where there are no neat solutions any more.

So believe in the math and divide your complexity curve into smaller chunks, where your solution capacity exceeds the problem’s complexity. (As a bonus you get a happy and thriving team.)

Systems Engineering in the Enterprise IT Domain

Herding Cats - Glen Alleman - Mon, 12/01/2014 - 06:14

Systems Engineering has two components

  • System - a set of interrelated components working together toward some common objective.
  • Engineering - the application of scientific principles to practical ends; as the design, construction and operation of efficient and economical structures, equipment, and systems.

When we work in the Enterprise IT domain or any Software Intensive Systems ...

...systems engineering is focused on the system as a whole; it emphasizes its total operation. It looks at the system from the outside, that is, at its interactions with  other systems and the environment, as well as from the inside. It is concerned  not only with the engineering design of the system but also with external factors, which can significantly constrain the design. These include the identification of customer needs, the system operational environment, interfacing systems, logistics  support requirements, the capabilities of operating personnel, and such other  factors as must be correctly reflected in system requirements documents and accommodated in the system design. [Systems Engineering Principles and Practices, Alexander Kossiakoff, John Wiley & Sons]

So what does this mean in practice?

It means that when we start without knowing what DONE looks like, no method, no technique, no clever process is going to help us discover what DONE looks like, until we spend a pile of money and expend a lot of time trying out various ideas in our search for DONE.

What this means is that emergent requirements mean wandering around looking for what DONE looks like. We need to state DONE in units that connect with Capabilities to fulfill a mission or deliver success for a business case.

What this doesn't mean is that we need the requirements up front. In fact we may not actually want the requirements up front. If we don't know what DONE means, those requirements must change, and that change costs much more money than writing down what DONE looks like in units of measure meaningful to the decision makers.

So Here Are Some Simple Examples of What a Capability Sounds Like

  • We need the capability to pre-process insurance claims at $0.07 per transaction rather than the current $0.11 per transaction.
  • We need the capability to remove 1½ hours from the retail ordering process once the merger is complete.
  • We need the capability to change the Wide Field Camera and the internal nickel hydride batteries, while doing no harm to the telescope.
  • We need the capability to fly 4 astronauts to the International Space Station, dock, stay 6 months, and return safely.
  • We need the capability to control the Hellfire missile with a new touch panel while maintaining existing navigation and guidance capabilities in the helicopter.
  • We need the capability to comply with FAR Part 15 using the current ERP system and its supporting work processes.

Here's a more detailed example

Identifying System Capabilities is the starting point for any successful program. Systems Capabilities are not direct requirements, but statements of what the system should provide in terms of “abilities.”

For example there are three capabilities needed for the Hubble Robotic Service Mission:

  • Do no harm to the telescope - it is very fragile
  • Change the Wide Field Camera - was built here in Boulder
  • Connect the battery umbilical cable - like our cell phones they wear out

How is this to be done, and what are the technical, operational, safety and mission assurance requirements? We don’t really know yet, but the Capabilities guide their development. The critical reason for starting with capabilities is to establish a home for all the requirements.

To answer the questions:

  • Why is this requirement present?
  • Why is this requirement needed?
  • What business or mission value does fulfilling this requirement provide?

Capabilities statements can then be used to define the units of measure for program progress. Measuring progress with physical percent complete at each level is mandatory. But measuring how the Capabilities are being fulfilled is most meaningful to the customer. The “meaningful to the customer” units of measure are critical to the success of any program. Without these measures the program may be a cost, schedule, and technical success but fail to fulfill the mission.

This is the difference between fit for purpose and Fit for Use.

The process flow below is the starting point for identifying the Needed Capabilities and determining their priorities. Starting with the Capabilities prevents the “Bottom Up” requirements gathering process from producing a “list” of requirements – all needed – that is missing a well formed topology. This Requirements Architecture is no different than the Technical or Programmatic architecture of the system.

Capabilities Based Planning (CBP) focuses on “outputs” rather than “inputs.”

These “outputs” are the mission capabilities that are fulfilled by the program. Without the capabilities, it is never clear the mission will be a success, because there is no clear and concise description of what success means. Success means providing the needed capabilities, on or near schedule and cost. The concept of CBP recognizes the interdependence of systems, strategy, organization, and support in delivering the capability, and the need to examine options and trade-offs in terms of performance, cost and risk to identify optimum development investments. CBP relies on Use Cases and scenarios to provide the context to measure the level of capability.

Here's One Approach For Capturing the Needed Capabilities

[Screenshot: one approach for capturing the needed capabilities]

In Order To Capture These Needed Capabilities We Need To...

[Screenshot: steps required to capture the needed capabilities]

What Does All This Mean?

When we hear of all the failures of IT projects, and other projects for that matter, the first question that must be answered is 

What was the root cause of the failure?

Research has shown that unclear, vague, and many times conflicting requirements are the source of confusion about what DONE looks like. In the absence of a definitive description of DONE in units of effectiveness and performance, those requirements have no home to be assessed for their appropriateness. 

Related articles: Estimating Guidance, Complex Project Management, Populist versus Technical View of Problems
Categories: Project Management

SPaMCAST 318 – Rob Cross, Big Data and Data Analytics In Software Development

http://www.spamcast.net

Listen to the Software Process and Measurement Cast 318

SPaMCAST 318 features our interview with Rob Cross. Rob and I discussed his InfoQ article “How to Incorporate Data Analytics into Your Software Process.” Rob provides ideas on how the theory of big data can be turned into big action that provides “ah-ha” moments for executives and developers alike.

Rob Cross has been in the software development industry for over 15 years in various capacities.  He has worked for several start-up businesses including his current company, PSC.  These companies have been focused on providing software quality, security and performance data to organizations leveraging state of the art technologies.  Rob’s current company has analyzed over 8 billion lines of code as an independent software assessment company on products ranging from military systems, medical devices, satellite systems, video games to Wall Street exchanges.

Rob’s email: rc@proservicescorp.com

Call to action!

We are in the middle of a re-read of John Kotter’s classic Leading Change on the Software Process and Measurement Blog. Are you participating in the re-read? Please feel free to jump in and add your thoughts and comments!

After we finish the current re-read, we will need to decide which book will be next. We are building a list of the books that have had the most influence on readers of the blog and listeners to the podcast. Can you answer the question?

What are the two books that have most influenced your career (business, technical or philosophical)? Send the titles to spamcastinfo@gmail.com.

First, we will compile a list and publish it on the blog. Second, we will use the list to drive future “Re-read” Saturdays. Re-read Saturday is an exciting new feature that began on the Software Process and Measurement blog on November 8th. Feel free to choose your platform; send an email, leave a message on the blog or Facebook, or just tweet the list (use hashtag #SPaMCAST)!

Next

Why Are Requirements So Hard To Get Right? IT projects have been around in one form or another since the 1940’s. Looking back in the literature describing the history of IT, the topic of requirements in general and identification of requirements specifically have been top of mind since day one.

Upcoming Events

DCG Webinars:

Agile Risk Management – It Is Still Important
Date: December 18th, 2014
Time: 11:30am EST
Register Now

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management

Protractor: Angular testing made easy

Google Testing Blog - Sun, 11/30/2014 - 18:50
By Hank Duan, Julie Ralph, and Arif Sukoco in Seattle

Have you worked with WebDriver but been frustrated with all the waits needed for WebDriver to sync with the website, causing flakes and prolonged test times? If you are working with AngularJS apps, then Protractor is the right tool for you.

Protractor (protractortest.org) is an end-to-end test framework specifically for AngularJS apps. It was built by a team in Google and released to open source. Protractor is built on top of WebDriverJS and includes important improvements tailored for AngularJS apps. Here are some of Protractor’s key benefits:

  • You don’t need to add waits or sleeps to your test. Protractor can communicate with your AngularJS app automatically and execute the next step in your test the moment the webpage finishes pending tasks, so you don’t have to worry about waiting for your test and webpage to sync. 
  • It supports Angular-specific locator strategies (e.g., binding, model, repeater) as well as native WebDriver locator strategies (e.g., ID, CSS selector, XPath). This allows you to test Angular-specific elements without any setup effort on your part. 
  • It is easy to set up page objects. Protractor does not execute WebDriver commands until an action is needed (e.g., get, sendKeys, click). This way you can set up page objects so tests can manipulate page elements without touching the HTML. 
  • It uses Jasmine, the framework you use to write AngularJS unit tests, and Javascript, the same language you use to write AngularJS apps.

Follow these simple steps, and in minutes you will have your first Protractor test running:

1) Set up environment

Install the command line tools ‘protractor’ and ‘webdriver-manager’ using npm:
npm install -g protractor

Start up an instance of a selenium server:
webdriver-manager update & webdriver-manager start

This downloads the necessary binary, and starts a new webdriver session listening on http://localhost:4444.

2) Write your test
// It is a good idea to use page objects to modularize your testing logic
var angularHomepage = {
  nameInput: element(by.model('yourName')),
  greeting: element(by.binding('yourName')),
  get: function() {
    browser.get('index.html');
  },
  setName: function(name) {
    this.nameInput.sendKeys(name);
  }
};

// Here we are using the Jasmine test framework
// See http://jasmine.github.io/2.0/introduction.html for more details
describe('angularjs homepage', function() {
  it('should greet the named user', function() {
    angularHomepage.get();
    angularHomepage.setName('Julie');
    expect(angularHomepage.greeting.getText())
      .toEqual('Hello Julie!');
  });
});

3) Write a Protractor configuration file to specify the environment under which you want your test to run:
exports.config = {
  seleniumAddress: 'http://localhost:4444/wd/hub',

  specs: ['testFolder/*'],

  multiCapabilities: [{
    'browserName': 'chrome',
    // browser-specific tests
    specs: 'chromeTests/*'
  }, {
    'browserName': 'firefox',
    // run tests in parallel
    shardTestFiles: true
  }],

  baseUrl: 'http://www.angularjs.org',
};

4) Run the test:

Start the test with the command:
protractor conf.js

The test output should be:
1 test, 1 assertions, 0 failures


If you want to learn more, here’s a full tutorial that highlights all of Protractor’s features: http://angular.github.io/protractor/#/tutorial

Categories: Testing & QA

Spark: Write to CSV file with header using saveAsFile

Mark Needham - Sun, 11/30/2014 - 09:21

In my last blog post I showed how to write to a single CSV file using Spark and Hadoop, and the next thing I wanted to do was add a header row to the resulting file.

Hadoop’s FileUtil#copyMerge function does take a String parameter but it adds this text to the end of each partition file which isn’t quite what we want.

However, if we copy that function into our own FileUtil class we can restructure it to do what we want:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.io.IOUtils;
import java.io.IOException;
 
public class MyFileUtil {
    public static boolean copyMergeWithHeader(FileSystem srcFS, Path srcDir, FileSystem dstFS, Path dstFile, boolean deleteSource, Configuration conf, String header) throws IOException {
        dstFile = checkDest(srcDir.getName(), dstFS, dstFile, false);
        if(!srcFS.getFileStatus(srcDir).isDir()) {
            return false;
        } else {
            FSDataOutputStream out = dstFS.create(dstFile);
            if(header != null) {
                out.write((header + "\n").getBytes("UTF-8"));
            }
 
            try {
                FileStatus[] contents = srcFS.listStatus(srcDir);
 
                for(int i = 0; i < contents.length; ++i) {
                    if(!contents[i].isDir()) {
                        FSDataInputStream in = srcFS.open(contents[i].getPath());
 
                        try {
                            IOUtils.copyBytes(in, out, conf, false);
 
                        } finally {
                            in.close();
                        }
                    }
                }
            } finally {
                out.close();
            }
 
            return deleteSource?srcFS.delete(srcDir, true):true;
        }
    }
 
    private static Path checkDest(String srcName, FileSystem dstFS, Path dst, boolean overwrite) throws IOException {
        if(dstFS.exists(dst)) {
            FileStatus sdst = dstFS.getFileStatus(dst);
            if(sdst.isDir()) {
                if(null == srcName) {
                    throw new IOException("Target " + dst + " is a directory");
                }
 
                return checkDest((String)null, dstFS, new Path(dst, srcName), overwrite);
            }
 
            if(!overwrite) {
                throw new IOException("Target " + dst + " already exists");
            }
        }
        return dst;
    }
}

We can then update our merge function to call this instead:

def merge(srcPath: String, dstPath: String, header:String): Unit =  {
  val hadoopConfig = new Configuration()
  val hdfs = FileSystem.get(hadoopConfig)
  MyFileUtil.copyMergeWithHeader(hdfs, new Path(srcPath), hdfs, new Path(dstPath), false, hadoopConfig, header)
}

We call merge from our code like this:

merge(file, destinationFile, "type,count")

I wasn’t sure how to import my Java-based class into the Spark shell, so I compiled the code into a JAR and submitted it as a job instead:

$ sbt package
[info] Loading global plugins from /Users/markneedham/.sbt/0.13/plugins
[info] Loading project definition from /Users/markneedham/projects/spark-play/playground/project
[info] Set current project to playground (in build file:/Users/markneedham/projects/spark-play/playground/)
[info] Compiling 3 Scala sources to /Users/markneedham/projects/spark-play/playground/target/scala-2.10/classes...
[info] Packaging /Users/markneedham/projects/spark-play/playground/target/scala-2.10/playground_2.10-1.0.jar ...
[info] Done packaging.
[success] Total time: 8 s, completed 30-Nov-2014 08:12:26
 
$ time ./bin/spark-submit --class "WriteToCsvWithHeader" --master local[4] /path/to/playground/target/scala-2.10/playground_2.10-1.0.jar
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.propertie
...
14/11/30 08:16:15 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
14/11/30 08:16:15 INFO SparkContext: Job finished: saveAsTextFile at WriteToCsvWithHeader.scala:49, took 0.589036 s
 
real	0m13.061s
user	0m38.977s
sys	0m3.393s

And if we look at our destination file:

$ cat /tmp/singlePrimaryTypes.csv
type,count
THEFT,859197
BATTERY,757530
NARCOTICS,489528
CRIMINAL DAMAGE,488209
BURGLARY,257310
OTHER OFFENSE,253964
ASSAULT,247386
MOTOR VEHICLE THEFT,197404
ROBBERY,157706
DECEPTIVE PRACTICE,137538
CRIMINAL TRESPASS,124974
PROSTITUTION,47245
WEAPONS VIOLATION,40361
PUBLIC PEACE VIOLATION,31585
OFFENSE INVOLVING CHILDREN,26524
CRIM SEXUAL ASSAULT,14788
SEX OFFENSE,14283
GAMBLING,10632
LIQUOR LAW VIOLATION,8847
ARSON,6443
INTERFERE WITH PUBLIC OFFICER,5178
HOMICIDE,4846
KIDNAPPING,3585
INTERFERENCE WITH PUBLIC OFFICER,3147
INTIMIDATION,2471
STALKING,1985
OFFENSES INVOLVING CHILDREN,355
OBSCENITY,219
PUBLIC INDECENCY,86
OTHER NARCOTIC VIOLATION,80
RITUALISM,12
NON-CRIMINAL,12
OTHER OFFENSE ,6
NON - CRIMINAL,2
NON-CRIMINAL (SUBJECT SPECIFIED),2

Happy days!

The code is available as a gist if you want to see all the details.

Categories: Programming

Spark: Write to CSV file

Mark Needham - Sun, 11/30/2014 - 08:40

A couple of weeks ago I wrote about how I’d been using Spark to explore a City of Chicago crime data set and, having worked out how many of each crime had been committed, I wanted to write that to a CSV file.

Spark provides a saveAsTextFile function which allows us to save RDDs, so I refactored my code into the following format to use it:

import au.com.bytecode.opencsv.CSVParser
import org.apache.spark.rdd.RDD
import org.apache.spark.SparkContext._
 
def dropHeader(data: RDD[String]): RDD[String] = {
  data.mapPartitionsWithIndex((idx, lines) => {
    if (idx == 0) {
      lines.drop(1)
    }
    lines
  })
}
 
// https://data.cityofchicago.org/Public-Safety/Crimes-2001-to-present/ijzp-q8t2
val crimeFile = "/Users/markneedham/Downloads/Crimes_-_2001_to_present.csv"
 
val crimeData = sc.textFile(crimeFile).cache()
val withoutHeader: RDD[String] = dropHeader(crimeData)
 
val file = "/tmp/primaryTypes.csv"
FileUtil.fullyDelete(new File(file))
 
val partitions: RDD[(String, Int)] = withoutHeader.mapPartitions(lines => {
  val parser = new CSVParser(',')
  lines.map(line => {
    val columns = parser.parseLine(line)
    (columns(5), 1)
  })
})
 
val counts = partitions.
  reduceByKey {case (x,y) => x + y}.
  sortBy {case (key, value) => -value}.
  map { case (key, value) => Array(key, value).mkString(",") }
 
counts.saveAsTextFile(file)

If we run that code from the Spark shell we end up with a folder called /tmp/primaryTypes.csv containing multiple part files:

$ ls -lah /tmp/primaryTypes.csv/
total 496
drwxr-xr-x  66 markneedham  wheel   2.2K 30 Nov 07:17 .
drwxrwxrwt  80 root         wheel   2.7K 30 Nov 07:16 ..
-rw-r--r--   1 markneedham  wheel     8B 30 Nov 07:16 ._SUCCESS.crc
-rw-r--r--   1 markneedham  wheel    12B 30 Nov 07:16 .part-00000.crc
-rw-r--r--   1 markneedham  wheel    12B 30 Nov 07:16 .part-00001.crc
-rw-r--r--   1 markneedham  wheel    12B 30 Nov 07:16 .part-00002.crc
-rw-r--r--   1 markneedham  wheel    12B 30 Nov 07:16 .part-00003.crc
...
-rwxrwxrwx   1 markneedham  wheel     0B 30 Nov 07:16 _SUCCESS
-rwxrwxrwx   1 markneedham  wheel    28B 30 Nov 07:16 part-00000
-rwxrwxrwx   1 markneedham  wheel    17B 30 Nov 07:16 part-00001
-rwxrwxrwx   1 markneedham  wheel    23B 30 Nov 07:16 part-00002
-rwxrwxrwx   1 markneedham  wheel    16B 30 Nov 07:16 part-00003
...

If we look at some of those part files we can see that it’s written the crime types and counts as expected:

$ cat /tmp/primaryTypes.csv/part-00000
THEFT,859197
BATTERY,757530
 
$ cat /tmp/primaryTypes.csv/part-00003
BURGLARY,257310

This is fine if we’re going to pass those CSV files into another Hadoop-based job, but I actually want a single CSV file so it’s not quite what I want.

One way to achieve this is to force everything to be calculated on one partition which will mean we only get one part file generated:

val counts = partitions.repartition(1).
  reduceByKey {case (x,y) => x + y}.
  sortBy {case (key, value) => -value}.
  map { case (key, value) => Array(key, value).mkString(",") }
 
 
counts.saveAsTextFile(file)

part-00000 now looks like this:

$ cat !$
cat /tmp/primaryTypes.csv/part-00000
THEFT,859197
BATTERY,757530
NARCOTICS,489528
CRIMINAL DAMAGE,488209
BURGLARY,257310
OTHER OFFENSE,253964
ASSAULT,247386
MOTOR VEHICLE THEFT,197404
ROBBERY,157706
DECEPTIVE PRACTICE,137538
CRIMINAL TRESPASS,124974
PROSTITUTION,47245
WEAPONS VIOLATION,40361
PUBLIC PEACE VIOLATION,31585
OFFENSE INVOLVING CHILDREN,26524
CRIM SEXUAL ASSAULT,14788
SEX OFFENSE,14283
GAMBLING,10632
LIQUOR LAW VIOLATION,8847
ARSON,6443
INTERFERE WITH PUBLIC OFFICER,5178
HOMICIDE,4846
KIDNAPPING,3585
INTERFERENCE WITH PUBLIC OFFICER,3147
INTIMIDATION,2471
STALKING,1985
OFFENSES INVOLVING CHILDREN,355
OBSCENITY,219
PUBLIC INDECENCY,86
OTHER NARCOTIC VIOLATION,80
NON-CRIMINAL,12
RITUALISM,12
OTHER OFFENSE ,6
NON - CRIMINAL,2
NON-CRIMINAL (SUBJECT SPECIFIED),2

This works but it’s quite a bit slower than when we were doing the aggregation across partitions so it’s not ideal.

Instead, what we can do is make use of one of Hadoop’s merge functions which squashes part files together into a single file.

First we import Hadoop into our SBT file:

libraryDependencies += "org.apache.hadoop" % "hadoop-hdfs" % "2.5.2"

Now let’s bring our merge function into the Spark shell:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs._
 
def merge(srcPath: String, dstPath: String): Unit =  {
  val hadoopConfig = new Configuration()
  val hdfs = FileSystem.get(hadoopConfig)
  FileUtil.copyMerge(hdfs, new Path(srcPath), hdfs, new Path(dstPath), false, hadoopConfig, null)
}

And now let’s make use of it:

val file = "/tmp/primaryTypes.csv"
FileUtil.fullyDelete(new File(file))
 
val destinationFile= "/tmp/singlePrimaryTypes.csv"
FileUtil.fullyDelete(new File(destinationFile))
 
val counts = partitions.
  reduceByKey {case (x,y) => x + y}.
  sortBy {case (key, value) => -value}.
  map { case (key, value) => Array(key, value).mkString(",") }
 
counts.saveAsTextFile(file)
 
merge(file, destinationFile)

And now we’ve got the best of both worlds:

$ cat /tmp/singlePrimaryTypes.csv
THEFT,859197
BATTERY,757530
NARCOTICS,489528
CRIMINAL DAMAGE,488209
BURGLARY,257310
OTHER OFFENSE,253964
ASSAULT,247386
MOTOR VEHICLE THEFT,197404
ROBBERY,157706
DECEPTIVE PRACTICE,137538
CRIMINAL TRESPASS,124974
PROSTITUTION,47245
WEAPONS VIOLATION,40361
PUBLIC PEACE VIOLATION,31585
OFFENSE INVOLVING CHILDREN,26524
CRIM SEXUAL ASSAULT,14788
SEX OFFENSE,14283
GAMBLING,10632
LIQUOR LAW VIOLATION,8847
ARSON,6443
INTERFERE WITH PUBLIC OFFICER,5178
HOMICIDE,4846
KIDNAPPING,3585
INTERFERENCE WITH PUBLIC OFFICER,3147
INTIMIDATION,2471
STALKING,1985
OFFENSES INVOLVING CHILDREN,355
OBSCENITY,219
PUBLIC INDECENCY,86
OTHER NARCOTIC VIOLATION,80
RITUALISM,12
NON-CRIMINAL,12
OTHER OFFENSE ,6
NON - CRIMINAL,2
NON-CRIMINAL (SUBJECT SPECIFIED),2

The full code is available as a gist if you want to play around with it.

Categories: Programming

Kanban: Process Improvement and Bottlenecks Revisited

Bottlenecks constrain flow!

We are revisiting one of the more popular essays from 2013, and will return to Re-read Saturday next week with Chapter 4, “Creating A Guiding Coalition.”

Kanban implementation is a powerful tool for focusing the continuous improvement efforts of teams and organizations on delivering more value. Kanban, through the visualization of work in progress, helps to identify constraints (this is an implementation of the Theory of Constraints). Add to visualization the core principles discussed in Daily Process Thoughts, Kanban: An Overview and Introduction (feedback loops and transparency to regulate the process, and an impetus to improve as a team and evolve using models and the scientific method) and we have a process improvement engine.

Kanban describes units of work that are blocked or stalled as bottlenecks. Finding and removing bottlenecks increases the flow of work through the process, thereby increasing the delivery of value.

A perfect example of a bottleneck exists in the highway system in Cleveland, Ohio (the closest major city to my home). A highway (three lanes in each direction) sweeps into town along the shore of Lake Erie. When it reaches the edge of downtown, the highway makes a nearly 90-degree left-hand turn, known as Dead Man’s Curve. Cars and trucks must slow down instantly. Even when there is no accident, traffic can back up for miles during rush hour. The turn is a constraint that creates a bottleneck. If the city wanted to improve the flow of traffic, removing the Dead Man’s Curve bottleneck would help substantially.

Here’s an IT example that shows how a bottleneck is identified and how a team could attack it, using a simple Kanban board. In this example, the team has a backlog of similarly sized units of work. Each step of the process has a WIP limit, and one of the core practices of Kanban is that WIP limits are not to be systematically violated.

Each step can have different WIP limits.

As work is drawn through the process, a bottleneck will appear as soon as the analysis for the first wave of work is completed, because development only has the capacity to start work on four items. In our example, when a unit of work completes the analysis step it will be pulled into the development step only if capacity exists. In this case one unit of work is immediately blocked and becomes inventory (shown below as the item marked with the letter “B”).

Unbalanced process flows cause bottlenecks

The team has three basic options. The first is to continue to pull more items into the analysis step and keep building inventory until the backlog is empty. This option creates a backlog of work that is waiting for feedback, increasing potential rework as defects are found and new decisions are made. The second is for team members to swarm to the blocked unit, adding capacity to a step until the blocked unit is freed. This solution makes sense if the reason for the blockage is temporary, such as a developer who is out sick. The third (and preferred) option is to change the process to create a balanced flow of work. In this example, the goal would be to rearrange people and tools to create balanced WIP limits.
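To make the bottleneck arithmetic concrete, here is a minimal sketch in Scala; the step names and daily capacities are illustrative assumptions, not values from the example board:

// Illustrative only: if Analysis finishes more items per day than
// Development can pull, blocked inventory between the steps grows
// by the difference every day.
object BottleneckSketch extends App {
  val analysisPerDay = 5 // assumed capacity of the Analysis step
  val devPerDay      = 4 // assumed capacity of the Development step

  (1 to 5).foldLeft(0) { (blocked, day) =>
    val nowBlocked = blocked + (analysisPerDay - devPerDay)
    println(s"Day $day: $nowBlocked item(s) blocked between Analysis and Development")
    nowBlocked
  }
}

Rebalancing (the third option above) amounts to making that daily difference zero, which is exactly what balanced WIP limits express.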

Process improvement maximizes throughput.

Visually tracking work is a powerful tool for identifying bottlenecks. Kanban’s core practices dissuade practitioners from violating WIP limits, because violating them stresses the process, which leads to technical debt, defects and rework. Other core practices provide a focus on continuous process improvement, so that when a bottleneck is identified, the team works to remove it. Continually improving the flow of work through the development process increases an organization’s ability to deliver value to customers.


Categories: Process Management

Estimating Guidance - Updated

Herding Cats - Glen Alleman - Sat, 11/29/2014 - 20:29

There is an abundance of estimating guidance to counter the abundance of ill-informed notions about estimating. Here's some we use on our programs:

The list goes on for hundreds of other sources; Google "software cost estimating." But here's the core issue, from the opening line in the Welcome section of Software Estimation: Demystifying the Black Art by Steve McConnell:

The most unsuccessful three years in the education of cost estimators appears to be fifth-grade arithmetic - Norman R. Augustine

Augustine is the former Chairman and CEO of Martin Marietta. His seminal book, Augustine's Laws, describes the complexities and conundrums of today's business management and offers solutions. Anyone interested in learning how successful management of complex technology-based firms is done should read that book.

All Project Processes Driven By Uncertainty

The hope that uncertainty can be "programmed" out of a project is a false hope. However, we can manage in the presence of these uncertainties by understanding the risk they represent and addressing each in an appropriate manner. In Against the Gods: The Remarkable Story of Risk, author Peter Bernstein states that one of the major intellectual triumphs of the modern world is the transformation of uncertainty from a matter of fate into an area of study. And so risk analysis is the process of assessing risks, while risk management uses risk analysis to devise management strategies to reduce or ameliorate risk.

Estimating the outcomes of our choices - the opportunity cost paradigm of Microeconomics - is an integral part of managing in the presence of uncertainty. To develop a credible estimate we need to identify and address four types of uncertainty on projects:

  1. Normal variations occur in the completion of tasks arising from normal work processes. Deming showed that these uncertainties are simply part of the process, and that attempts to control them, plan around them, or otherwise remove them are a waste of time. Mitigations for these normal variations include fine-grained assessment points in the plan that verify progress; the assessment of these activities should be done in a 0% or 100% manner. Buffers and schedule margin are inserted in front of critical activities to protect against slippage, and statistical process control approaches forecast further slippage (a small estimating sketch follows this list).
  2. Foreseen uncertainties are identified but have uncertain influences. Mitigations for these foreseen uncertainties are contingent paths forward defined in the plan; these on-ramp and off-ramp points can be taken if needed.
  3. Unforeseen uncertainties are events that can’t be identified in the planning process. When these unforeseen uncertainties appear, new approaches must be developed.
  4. Chaos appears when the basic structure of the project becomes unstable, with no ability to forecast its occurrence or the uncertainties it produces. In the presence of chaos, continuous verification of the project’s strategy is needed, and major iterations of deliverables can isolate these significant disruptions.
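Several of these mitigations reduce to arithmetic over uncertain durations. Here is a minimal sketch in Scala (the task numbers, and the choice of a triangular distribution, are illustrative assumptions rather than anything prescribed above) of the kind of Monte Carlo pass that turns three-point task estimates into a probabilistic completion figure:

import scala.util.Random

// Illustrative sketch: sum triangular samples of (best, likely, worst)
// task durations to get a distribution of total project duration.
object EstimateSketch extends App {
  val tasks = Seq((2.0, 4.0, 9.0), (1.0, 3.0, 6.0), (3.0, 5.0, 12.0)) // assumed estimates, in days
  val rng = new Random(42)

  // Inverse-transform sampling of a triangular distribution on [a, b] with mode m
  def triangular(a: Double, m: Double, b: Double): Double = {
    val u = rng.nextDouble()
    val fm = (m - a) / (b - a)
    if (u < fm) a + math.sqrt(u * (b - a) * (m - a))
    else b - math.sqrt((1 - u) * (b - a) * (b - m))
  }

  val trials = 10000
  val totals = Seq.fill(trials)(tasks.map { case (a, m, b) => triangular(a, m, b) }.sum).sorted

  // A percentile-based answer is more honest than a single-point guess
  println(f"P50: ${totals(trials / 2)}%.1f days, P80: ${totals(trials * 8 / 10)}%.1f days")
}

The arithmetic really is fifth-grade level, which is Augustine's point; the discipline lies in gathering the three-point inputs honestly.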

Managing in the Presence of Uncertainty

Uncertainty management is essential for any significant project. Information about key project cost, performance, and schedule attributes is often unknown until the project is underway. Risks from these uncertainties that can be identified early in the project but impact it later are often termed “known unknowns,” and they can be mitigated with a good risk management process. For risks that are beyond the vision of the project team, a properly implemented risk management process can also rapidly quantify a risk's impact and provide sound plans for mitigating its effect.

Uncertainty, and the resulting risk management, is concerned with the outcomes of future events whose exact nature is unknown, and with how to deal with these uncertainties. Outcomes are categorized as favorable or unfavorable, and risk management is the art and science of planning, assessing, handling, and monitoring future events to ensure favorable outcomes. A good risk management process is proactive and fundamentally different from issue management or problem solving, which are reactive.

Risk management is an important skill applied to a wide variety of projects. In an era of downsizing, consolidation, shrinking budgets, increasing technological sophistication, and shorter development times, risk management provides valuable insight that helps key project personnel plan for risks, alerts them to potential issues, and supports analyzing those issues and developing, implementing, and monitoring plans to address risks long before they surface as issues and adversely affect project cost, performance, and schedule.

Project management in the presence of uncertainty, and the risks this creates, requires - actually mandates - estimating the outcomes of these uncertainties. As Tim Lister advises in "Risk Management Is Project Management for Adults," IEEE Software, May 1997:

Risk Management is Project Management for Adults

In the End

So those conjecturing that software estimating can't be done have either missed that fifth-grade class or are intentionally ignoring the basis of all business decision-making processes - the assessment of opportunity costs using Microeconomics.

As DeMarco and Lister state:

An almost-defining characteristic of adulthood is a willingness to confront the unpleasantness of life, from the niggling to the cataclysmic.

Related articles: Assessing Value Produced By Investments, Mike Cohn's Agile Quotes, Complex Project Management, Software Estimating for Non Trival Projects, Estimating Guidance, Software Estimation in an Agile World
Categories: Project Management