
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Final Notes on How To Measure Anything by Douglas Hubbard

How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition

I am traveling this week in India for the 13th CSI/IFPUG International Software Measurement & Analysis Conference: “Creating Value from Measurement”. Read more about it here. In the meantime, enjoy some classic content, and I’ll be back with new blog entries next week.

Final Notes of HTMA

Last week we completed the re-read of How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition. Boiling the whole book down to a single punchline yields the statement: “Anything that is important and is observable is measurable with some work.” If you need a caveat, you could add that while anything is measurable, taking the time to estimate the value of a measure before jumping directly into measuring often shows that we already know enough, and that the effort is better spent on other variables. The ability to test the economic value of a measure (cost versus benefit) is perhaps the most important reminder I got from this re-read.

I spend a lot of my professional life helping people and organizations measure “things”. Sometimes measures and measurement plans either don’t take the time to identify the decisions the data is being collected to support, or worse, only pay lip service to that step. When an organization or a measurement analyst makes this mistake, it is nearly impossible to determine the value the organization will derive from the measurement effort. Hubbard’s rationale for his Applied Information Economics framework calls out the fallacy of jumping directly to measuring something before you crisply define WHAT YOU ARE GOING TO DO WITH THE DATA. Hubbard reminds us that every measurement activity must begin with my favorite question: “why.” If you get nothing else from the book, that alone is well worth the price.

There are a couple of other major ideas the re-read brought home for me:

  1. I need to refresh my memory of Bayesian statistics. It is easy to fall into the trap of using statistics that assume no prior knowledge, or that everything lives under a normal curve, but that is myopic. I need a good workbook; any suggestions?
  2. I need to do a few articles on Monte Carlo analysis for the blog (a minimal sketch of the idea follows this list). I mentioned Monte Carlo in polite conversation the other day, only to receive a deer-in-the-headlights look from my conversation partner. It is time to help educate more people and to refresh my knowledge at the same time. In the interim, feel free to reach out to my friend Mauricio Aguiar and ask for his presentation on the topic.
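
As a teaser for that future material, here is a minimal sketch of the idea (my illustration, not from the book): instead of committing to a single-point estimate, sample each uncertain input many times and look at the distribution of outcomes. The task names and duration ranges below are hypothetical.

import random

def simulate_project(trials=10000):
    # Each trial draws a duration in days for three hypothetical tasks from a
    # triangular distribution: random.triangular(low, high, mode).
    completions = []
    for _ in range(trials):
        design = random.triangular(3, 10, 5)
        build = random.triangular(8, 25, 12)
        test = random.triangular(4, 15, 6)
        completions.append(design + build + test)
    completions.sort()
    return completions

results = simulate_project()
print("50th percentile (days):", round(results[len(results) // 2], 1))
print("85th percentile (days):", round(results[int(len(results) * 0.85)], 1))

The 85th percentile answers a question like “what total duration do we have an 85 percent chance of beating?”, which is far more useful to a decision maker than a single number.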

A few days ago I was listening to a GAO experts call for a discussion focused on developing alternatives analyses. During the call, a very senior participant suggested that there was no need to spend time quantifying the qualitative data used in making billion-dollar investments, even when that data was important to the decision process. I found it difficult to believe that if the qualitative data was an important component of the decision, the effort to measure those qualitative aspects did not have value. If I knew the participant, I would send him a copy of How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition. Maybe that is a bit passive-aggressive or represents a small arrogance on my part, but perhaps it would be balanced by a bit of enlightenment for him.

 

Past installments of the Re-read Saturday of  How To Measure Anything, Third Edition

Chapter 1: The Challenge of Intangibles

Chapter 2: An Intuitive Measurement Habit: Eratosthenes, Enrico, and Emily

Chapter 3: The Illusions of Intangibles: Why Immeasurables Aren’t

Chapter 4: Clarifying the Measurement Problem

Chapter 5: Calibrated Estimates: How Much Do You Know Now?

Chapter 6: Quantifying Risk Through Modeling

Chapter 7: Quantifying The Value of Information

Chapter 8: The Transition: From What to Measure to How to Measure

Chapter 9: Sampling Reality: How Observing Some Things Tells Us about All Things

Chapter 10: Bayes: Adding To What You Know Now

Chapter 11: Preferences and Attitudes: The Softer Side of Measurement

Chapter 12: The Ultimate Measurement Instrument: Human Judges

Chapter 13: New Measurement Instruments for Management

Chapter 14: A Universal Measurement Method: Applied Information Economics


Categories: Process Management

12 Principles of Agile with and without Estimates

Herding Cats - Glen Alleman - Thu, 03/09/2017 - 06:11

It's popular to speak about the Agile Manifesto and the 12 Principles of Agile. When we hear that the next big thing in agile is Not Estimating, let's look at how those 12 Principles can be applied without those estimates.

Below, for each of the 12 Principles of Agile, I describe how estimates help implement the principle and what is missing without estimates.

1. Satisfy the customer with continuous delivery of value

How estimates help: Value cannot be determined without knowing the cost to deliver that value and the time frame over which that cost is expended. This is basic managerial finance when spending other people’s money. Business strategy is based on the delivery of value at the planned time and for the planned cost of that value.

Without estimates: Without estimating the value delivered to the business for the estimated cost and time to deliver that value, the balance sheet will be wholly uninformed about the breakeven date for this expenditure.

2. Welcome change to improve the value of products

How estimates help: When change arrives, the first question is what the cost and benefit of this change will be. Since software development usually operates in the presence of uncertainty, an estimate of the cost and benefit of the change needs to be made.

Without estimates: Without estimating the impact of the requested change, those paying for the change are exposed to Un-Validated Risk. Will this change impact our return on investment? Will this change cause technical, cost, or schedule impacts we don’t know about until it’s too late?

3. Deliver working products often

How estimates help: How often? Every month, every two weeks, every day? Estimating the impact, absorption rate, produced monetary value, or disruption costs is part of the decision-making process.

4. Work together with customers

How estimates help: Always a good idea, no matter the development method. Those customers may have a fiduciary obligation to know the Estimate at Completion and the Estimate to Complete as you spend their money in the presence of uncertainty.

Without estimates: When those customers ask “When do you think we’ll be able to start using the software?”, “What’s your estimate of the features we can deliver before the trade show?”, or “Our board is interested in our sales forecast for the feature you’re developing,” what are you going to say? “Oh, we don’t estimate our work, or time, or how much money you need to give us. We’re a No Estimates shop; you’ll just have to trust that we’ll show up on time, not spend too much of your money, and that the marketing and sales folks will get what they need to reach breakeven as planned.”

5. Build products with motivated individuals

How estimates help: Yep, can’t go wrong here.

6. Promote sustained development rhythms

How estimates help: Sustained rhythms need some insight into what can be sustained in the presence of uncertainty: uncertainty that creates risk, uncertainty that results from the aleatory processes of the staff, tools, environments, and externalities.

Without estimates: If you aren’t observing (empirically) and modeling with that data, the confidence that the rhythm can be sustained has no basis in fact. It’s the old FedEx ad, where the guy on the phone says “sure, I can do that” several times, then, when he hangs up, asks himself, “how can I do that?”

7. Measure progress with working products

How estimates help: What will it cost to get this working product on the date we need it? What variances to planned performance are appearing, and what effort, cost, or changes are needed to get back on plan? Answering these questions is Closed Loop Control.

Without estimates: If you are measuring only after the fact, you are executing the project with Open Loop Control. Without estimates, there is only Open Loop Control.

8. Use face-to-face communication whenever possible

How estimates help: Yes, a good practice, but face-to-face is just a mechanism. Answering questions face to face about direction and outcomes in the future, in the presence of uncertainty, is Closed Loop Control.

9. Technical excellence and good design improve agility

How estimates help: Technically compliant products are good products, but there is Epistemic and Aleatory uncertainty in all products. What are the upper and lower control limits for these measures? What are the impacts on the product when the actual measures go outside those limits? Knowing how to set the upper and lower limits, and knowing the Epistemic and Aleatory uncertainties that create risk to staying inside those bands, is all done through estimates.

Without estimates: Without estimates there can be no Closed Loop Control in the presence of uncertainty.

10. Simplicity is essential

How estimates help: This is a platitude; how simple is essential is the actual question. This is a Systems Engineering question, and we need to remember H. L. Mencken’s quote: “For every complex problem there is an answer that is clear, simple, and wrong.” So deciding how simple requires some estimates, since the system has not yet been built. Assessing the parameters for how simple requires estimating the impact of the various components of the system on the complexity of the system. This is a systems engineering function, addressable in many cases by the Design Structure Matrix paradigm and the probabilistic and statistical interactions of these components.

11. The best architecture and requirements emerge from self-organizing teams

How estimates help: This is actually an untested claim. System architecture in non-trivial systems can be guided by architecture reference designs: TOGAF, DoDAF, Zachman, and other formal frameworks. Again, this is a Systems Engineering process.

Without estimates: Without estimates, the interactions, both collaborative and interfering, cannot be assessed until the product or service is complete.

12. Regularly reflect and make adjustments to improve performance

How estimates help: Making adjustments to the technical and programmatic processes in the presence of uncertainty requires estimating the impacts of those adjustments.

Without estimates: Without estimating the impacts of a decision when the processes are stochastic, the decision makers are left in the dark.

Hopefully it's clear that estimating is part of making decisions in the agile paradigm, just like it is in any paradigm where uncertainty exists.

To suggest that decisions can be made in the presence of uncertainty without estimating the impact of those decisions ignores the basic principles of the microeconomics of decision making.

Related articles: Making Decisions In The Presence of Uncertainty; Eyes Wide Shut - A View of No Estimates; Root Cause Analysis
Categories: Project Management


Conclusion of The 7 Habits Reread

Rereading The 7 Habits of Highly Effective People.

I am traveling this week in India for the 13th CSI/IFPUG International Software Measurement & Analysis Conference: “Creating Value from Measurement”. Read more about it here. In the meantime, enjoy some classic content, and I’ll be back with new blog entries next week.

Over eight of the past nine weeks I have chronicled my re-read of The 7 Habits of Highly Effective People. As I noted when I began this endeavor, this book and the advice it provides have been helpful as I have addressed the turning points in my life. The habits that Stephen Covey posits are a framework that reminds me that decisions and growth come from my core values. Our values, which we own, control and refine our circle of influence. Our circle of influence, those areas that we can affect, can be expanded by being proactive, having an end in mind, knowing what to put first, thinking win-win, listening, finding synergy in the world around us, and continually sharpening the saw. In other words, through the 7 Habits of Highly Effective People.

The impact I have on the world requires constant reinforcement. In an environment that emphasizes what I should be concerned about without providing access to the tools to expand my circle of influence, I need to take control of both my circle of concern and my circle of influence. I think it is important to take a step back on a daily or weekly basis to reflect and remind myself about what is really important. It helps me make sure that my focus is true.

Whether today or sometime in the future, everyone will face real concerns, concerns that we can and should deal with. Ignoring them is not an option. In his lecture, Slaying The Dragon Within Us, Jordan Peterson says, “if we don’t deal with our dragons they will continue to grow.” The 7 Habits of Highly Effective People provides us with tools to understand how to deal with our dragons.

One final thought: if you don’t have a copy of the book, buy one (I would loan you mine, but I suspect I will read it again). If you use the link below, it will support the Software Process and Measurement blog and podcast: Dead Tree Version | Kindle Version

The re-read entries:


Categories: Process Management

Services and Aspects - when everyone is super, no one is

For me, one of the telltale signs that the whole microservices hoopla is a consultant marketing ploy is the microservices vs. monoliths framing, as if there were nothing in between. On the other hand, it seems that these days anything that happens to be delivered at an endpoint and runs in its own process is called a microservice.

The problem with these two perceptions is that, like their eponym (services), microservices still follow the same principles as SOA: they should be built around business capabilities, they should have their own database, and so on.

Now, if every independently deployed component is a microservice, and needs to have all the traits of a microservice, life gets really complicated. These traits, separation and autonomy, require work, hard work, to avoid API coupling, transactional coupling, temporal coupling, internal-structure coupling, and so on and so forth. Instead we get to the point that everything is called a “microservice” even if it does not live up to all the principles, and when everything is a microservice, nothing is.

Personally I think this all-or-nothing is counterproductive. It is better to acknowledge that services are indeed these things that are built around business capabilities, APIs (or contracts) delivered at endpoints and governed by external policies, with autonomy and what-not. On the other hand, there are motivations to break services into smaller semi-independent components, which can exhibit most of the principles and benefits, like independent deployment and development cycles, but can still share some dependencies, especially around data structures and storage. I call these semi-independent components aspects.

I think it is important to make this distinction between aspects and services to ensure that different aspects from different services still uphold the service boundaries. It is important to break services into aspects to maintain flexibility and simplicity as services grow in size and overall complexity. For example, in ReconGate's system, which I am working on these days, we have some services that are small and only have one aspect, such as the Users service, which holds the system users and their password information. However, we also have more complex services with multiple aspects. For instance, one of the services is made of three aspects: one aspect deals with ingesting event data into the service and performs transformations and data munging (building relation graphs of new and existing data), another aspect deals with user-driven mutations to that data, and the third aspect provides a query API (in GraphQL) for accessing that data. Each aspect has its own life cycle, each aspect is independently deployed, and they use multiple languages (Scala and JavaScript), but they do share data on the one hand and maintain set boundaries from other services on the other.

Controlling the coupling levels and maintaining service boundaries is important. Understanding what's an aspect and what's a service allows controlling the overall architecture, making sure it doesn't become a tangled web of interdependencies while at the same time increasing flexibility and turnaround times by using smaller components. Using two distinct terms helps, in my opinion, to reason about these two forces better and to control the overall picture.

Categories: Architecture

SE-Radio Episode 284: John Allspaw on System Failures: Preventing, Responding, and Learning From

John Allspaw, CTO of Etsy, speaks with Robert Blumen about systemic failures and outages; how are systems defended against outages?; why do they fail anyway?; why are failures not entirely preventable?; why do outages involve multiple failures?; the time that Etsy identified its own office as a potential source of fraud; the human as part […]
Categories: Programming

The secret to making people buy your product

Xebia Blog - Tue, 03/07/2017 - 19:21

There is no greater waste than building something extremely efficient, well architectured (is that a word?), with high quality that nobody wants. Yet we see it all the time. We have the Agile manifesto and Scrum probably to thank for that (the seeing bit.) “Our highest priority is to satisfy the customer through early and […]

The post The secret to making people buy your product appeared first on Xebia Blog.

Anecdotes are not Facts nor are They Case Studies

Herding Cats - Glen Alleman - Tue, 03/07/2017 - 18:56

An anecdote is usually a short narrative of an interesting, amusing, or biographical incident. Anecdotes are sometimes the starting point of a proper investigation, but all too often they are the ending point, and the entire point, of a pseudo-investigation.

Man prefers to believe what he prefers to be true.
– Sir Francis Bacon

There is a broader issue with those who believe anecdotal evidence, whether found by themselves or heard from others. Anecdotal evidence is often used in politics, journalism, blogs, and other contexts to make or imply generalizations based on very limited and cherry-picked examples, rather than reliable, statistically valid studies.

You can do little with persons who are disposed to accept these curious ... superstitions. You have no fulcrum you can rest upon to lift an error out of such minds as these, often highly endowed with knowledge and talent, sometimes with genius, but commonly richer in the imaginative than the observing and reasoning faculties.
– Oliver Wendell Holmes MD, in remarks to graduating class of Bellevue Hospital Medical College in 1871.

 

Related articles: Thinking, Talking, Doing on the Road to Improvement; Making Conjectures Without Testable Outcomes
Categories: Project Management

Frankensteining Software: Recycling Parts of Legacy Systems

From the Editor of Methods & Tools - Tue, 03/07/2017 - 16:25
Evolving the software architecture of legacy systems for unintended use is difficult. The architectures are not documented well, the team that built the system has often moved on, old and out-of-date code is permanently intertwined, and the technology trends of the present are dramatically different from when the system was first developed. This is the […]

XLA - TensorFlow, compiled

Google Code Blog - Mon, 03/06/2017 - 22:18
By the XLA team within Google, in collaboration with the TensorFlow team


One of the design goals and core strengths of TensorFlow is its flexibility. TensorFlow was designed to be a flexible and extensible system for defining arbitrary data flow graphs and executing them efficiently in a distributed manner using heterogeneous computing devices (such as CPUs and GPUs).

But flexibility is often at odds with performance. While TensorFlow aims to let you define any kind of data flow graph, it’s challenging to make all graphs execute efficiently because TensorFlow optimizes each op separately. When an op with an efficient implementation exists or when each op is a relatively heavyweight operation, all is well; otherwise, the user can still compose this op out of lower-level ops, but this composition is not guaranteed to run in the most efficient way.


This is why we’ve developed XLA (Accelerated Linear Algebra), a compiler for TensorFlow. XLA uses JIT compilation techniques to analyze the TensorFlow graph created by the user at runtime, specialize it for the actual runtime dimensions and types, fuse multiple ops together and emit efficient native machine code for them - for devices like CPUs, GPUs and custom accelerators (e.g. Google’s TPU).


Fusing composable ops for increased performance

Consider the tf.nn.softmax op, for example. It computes the softmax activations of its parameter as follows:
softmax(logits)_i = exp(logits_i) / Σ_j exp(logits_j)

Softmax can be implemented as a composition of primitive TensorFlow ops (exponent, reduction, elementwise division, etc.):
softmax = exp(logits) / reduce_sum(exp(logits), dim)

This could potentially be slow, due to the extra data movement and materialization of temporary results that aren’t needed outside the op. Moreover, on co-processors like GPUs such a decomposed implementation could result in multiple “kernel launches” that make it even slower.

XLA is the secret compiler sauce that helps TensorFlow optimize compositions of primitive ops automatically. Tensorflow, augmented with XLA, retains flexibility without sacrificing runtime performance, by analyzing the graph at runtime, fusing ops together and producing efficient machine code for the fused subgraphs.

For example, a decomposed implementation of softmax as shown above would be optimized by XLA to be as fast as the hand-optimized compound op.
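
As a rough illustration (my sketch, not code from the XLA team's post), the snippet below writes softmax out of primitive ops and asks TensorFlow to JIT-compile eligible subgraphs with XLA by setting the global JIT level on the session configuration. It assumes a TensorFlow 1.x build with XLA enabled; the shapes and inputs are arbitrary.

import numpy as np
import tensorflow as tf

# Softmax composed from primitive ops, as described above.
logits = tf.placeholder(tf.float32, shape=[None, 10], name="logits")
decomposed_softmax = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis=1, keep_dims=True)

# Ask TensorFlow to JIT-compile eligible subgraphs with XLA (TF 1.x style).
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

with tf.Session(config=config) as sess:
    batch = np.random.rand(4, 10).astype(np.float32)
    result = sess.run(decomposed_softmax, feed_dict={logits: batch})
    print(result.sum(axis=1))  # each row should sum to roughly 1.0

With the JIT turned on, a decomposed subgraph like this one becomes a candidate for fusion rather than being executed op by op.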

More generally, XLA can take whole subgraphs of TensorFlow operations and fuse them into efficient loops that require a minimal number of kernel launches. For example:
Consider, for instance, a graph that computes a matmul, adds a bias, applies ReLU, and then takes a softmax. Many of the operations in such a graph can be fused into a single element-wise loop. Consider a single element of the bias vector being added to a single element from the matmul result, for example. The result of this addition is a single element that can be compared with 0 (for ReLU). The result of the comparison can be exponentiated and divided by the sum of exponents of all inputs, resulting in the output of softmax. We don’t really need to create the intermediate arrays for matmul, add, and ReLU in memory.


s[j] = softmax[j](ReLU(bias[j] + matmul_result[j]))

A fused implementation can compute the end result within a single element-wise loop, without allocating needless memory. In more advanced scenarios, these operations can even be fused into the matrix multiplication.
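
To make the memory point concrete, here is a small NumPy analogy (my illustration, not XLA itself): the unfused version materializes every intermediate array, while the fused-style version produces each output row in one pass without naming those temporaries. The shapes are arbitrary assumptions.

import numpy as np

x = np.random.rand(4, 8).astype(np.float32)   # hypothetical activations
w = np.random.rand(8, 3).astype(np.float32)   # hypothetical weights
bias = np.random.rand(3).astype(np.float32)

# Unfused: matmul, add, ReLU, and exp each materialize a temporary array.
matmul_result = x.dot(w)
biased = matmul_result + bias
relu = np.maximum(biased, 0)
expd = np.exp(relu)
unfused = expd / expd.sum(axis=1, keepdims=True)

# Fused in spirit: compute each output row in one pass, with no named temporaries.
def fused_row(row):
    e = np.exp(np.maximum(row.dot(w) + bias, 0))
    return e / e.sum()

fused = np.stack([fused_row(row) for row in x])
assert np.allclose(unfused, fused)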


XLA helps TensorFlow retain its flexibility while eliminating performance concerns.

On internal benchmarks, XLA shows up to 50% speedups over TensorFlow without XLA on Nvidia GPUs. The biggest speedups come, as expected, in models with long sequences of elementwise operations that can be fused to efficient loops. However, XLA should still be considered experimental, and some benchmarks may experience slowdowns.


In this talk from TensorFlow Developer Summit, Chris Leary and Todd Wang describe how TensorFlow can make use of XLA, JIT, AOT, and other compilation techniques to minimize execution time and maximize computing resources.

Extreme specialization for executable size reduction

In addition to improved performance, TensorFlow models can benefit from XLA for restricted-memory environments (such as mobile devices) due to the executable size reduction it provides. tfcompile is a tool that leverages XLA for ahead-of-time compilation (AOT) - a whole graph is compiled to XLA, which then emits tight machine code that implements the ops in the graph. Coupled with a minimal runtime this scheme provides considerable size reductions.

For example, given a 3-deep, 60-wide stacked LSTM model on android-arm, the original TF model size is 2.6 MB (1 MB runtime + 1.6 MB graph); when compiled with XLA, the size goes down to 600 KB.
This size reduction is achieved by the full specialization of the model implied by its static compilation. When the model runs, the full power and flexibility of the TensorFlow runtime is not required - only the ops implementing the actual graph the user is interested in are compiled to native code. That said, the performance of the code emitted by the CPU backend of XLA is still far from optimal; this part of the project requires more work.

Support for alternative backends and devices

To execute TensorFlow graphs on a new kind of computing device today, one has to re-implement all the TensorFlow ops (kernels) for the new device. Depending on the device, this can be a very significant amount of work.

By design, XLA makes supporting new devices much easier by adding custom backends. Since TensorFlow can target XLA, one can add a new device backend to XLA and thus enable it to run TensorFlow graphs. XLA provides a significantly smaller implementation surface for new devices, since XLA operations are just the primitives (recall that XLA handles the decomposition of complex ops on its own). We’ve documented the process for adding a custom backend to XLA on this page. Google uses this mechanism to target TPUs from XLA.

Conclusion and looking forward

XLA is still in early stages of development. It is showing very promising results for some use cases, and it is clear that TensorFlow can benefit even more from this technology in the future. We decided to release XLA to TensorFlow Github early to solicit contributions from the community and to provide a convenient surface for optimizing TensorFlow for various computing devices, as well as retargeting the TensorFlow runtime and models to run on new kinds of hardware.

Categories: Programming

Neo4j: apoc.date.parse – java.lang.IllegalArgumentException: Illegal pattern character ‘T’ / java.text.ParseException: Unparseable date: “2012-11-12T08:46:15Z”

Mark Needham - Mon, 03/06/2017 - 21:52

I often find myself wanting to convert date strings into Unix timestamps using Neo4j’s APOC library and unfortunately some sources don’t use the format that apoc.date.parse expects.

e.g.

return apoc.date.parse("2012-11-12T08:46:15Z",'s') 
AS ts

Failed to invoke function `apoc.date.parse`: 
Caused by: java.lang.IllegalArgumentException: java.text.ParseException: Unparseable date: "2012-11-12T08:46:15Z"

We need to define the format explicitly, so the SimpleDateFormat documentation comes in handy. I tried the following:

return apoc.date.parse("2012-11-12T08:46:15Z",'s',"yyyy-MM-ddTHH:mm:ssZ") 
AS ts

Failed to invoke function `apoc.date.parse`: 
Caused by: java.lang.IllegalArgumentException: Illegal pattern character 'T'

Hmmm, we need to quote the ‘T’ character – we can’t just include it in the pattern. Let’s try again:

return  apoc.date.parse("2012-11-12T08:46:15Z",'s',"yyyy-MM-dd'T'HH:mm:ssZ") 
AS ts

Failed to invoke function `apoc.date.parse`: 
Caused by: java.lang.IllegalArgumentException: java.text.ParseException: Unparseable date: "2012-11-12T08:46:15Z"

The problem now is that we haven’t quoted the ‘Z’ but the error doesn’t indicate that – not sure why!

We can either quote the ‘Z’:

return  apoc.date.parse("2012-11-12T08:46:15Z",'s',"yyyy-MM-dd'T'HH:mm:ss'Z'") 
AS ts

╒══════════╕
│"ts"      │
╞══════════╡
│1352709975│
└──────────┘

Or we can match the timezone using ‘XXX’:

return  apoc.date.parse("2012-11-12T08:46:15Z",'s',"yyyy-MM-dd'T'HH:mm:ssXXX") 
AS ts

╒══════════╕
│"ts"      │
╞══════════╡
│1352709975│
└──────────┘

The post Neo4j: apoc.date.parse – java.lang.IllegalArgumentException: Illegal pattern character ‘T’ / java.text.ParseException: Unparseable date: “2012-11-12T08:46:15Z” appeared first on Mark Needham.

Categories: Programming

Part 4 of Thinking Serverless —  Addressing Security Issues

This is a guest repost by Ken Fromm, a 3x tech co-founder — Vivid Studios, Loomia, and Iron.io. Here's Part 1 and 2 and 3

This post is the last of a four-part series of that will dive into developing applications in a serverless way. These insights are derived from several years working with hundreds of developers while they built and operated serverless applications and functions.

The platform was the serverless platform from Iron.io, but these lessons can also apply to AWS Lambda, Google Cloud Functions, Azure Functions, and IBM’s OpenWhisk project.

Arriving at a good definition of cloud IT security is difficult, especially in the context of highly scalable distributed systems like those found in serverless platforms. The purpose of this post is not to provide an exhaustive set of principles but instead to highlight areas that developers, architects, and security officers might wish to consider when evaluating or setting up serverless platforms.

Serverless Processing — Similar But Different

High-scale task processing is certainly not a new concept in IT, as it has parallels that date back to the days of job processing on mainframes. The abstraction layer provided by serverless processing — in combination with large-scale cloud infrastructure and advanced container technologies — does, however, bring about capabilities that are markedly different than even just a few years ago.

By plugging into a serverless computing platform, developers do not need to provision resources based on current or anticipated loads or put great effort into planning for new projects. Working and thinking at the task level means that developers are not paying for resources they are not using. Also, regardless of the number of projects in production or in development, developers using serverless processing do not have to worry about managing resources or provisioning systems.

While serving as Iron.io’s security officer, I answered a number of security questionnaires from customers. One common theme is that they were all in need of a serious update to bring them forward into this new world. Very few had any accommodation for cloud computing much less serverless processing.

Most questionnaires still viewed servers as persistent entities needing constant care and feeding. They presumed physical resources as opposed to virtualization, autoscaling, shared resources, and separation of concerns. Their questions lack differentiation between data centers and development and operation centers. A few still asked for the ability to physically inspect data centers which is, by and large, not really an option these days. And very few addressed APIs, logging, data persistence, or data retention.

The format of the sections below follows the order found in many of these security questionnaires as well as several cloud security policies. The order has been flipped a bit to start with areas where developers can have an impact. Later sections will address platform and system issues which teams will want to be aware of but are largely in the domain of serverless platforms and infrastructure providers.

Security Topics

Data Security
Categories: Architecture

Deep dive into Windows Server Containers and Docker – Part 2 – Underlying implementation of Windows Server Containers

Xebia Blog - Mon, 03/06/2017 - 09:41

With the introduction of Windows Server 2016 Technical Preview 3 in August 2015, Microsoft enabled the container technology on the Windows platform. While Linux had its container technology since August 2008 such functionality was not supported on Microsoft operating systems before. Thanks to the success of Docker on Linux, Microsoft decided almost 3 years ago to start working on a container implementation for […]

The post Deep dive into Windows Server Containers and Docker – Part 2 – Underlying implementation of Windows Server Containers appeared first on Xebia Blog.

Software Process and Measurement Cast 433 – Delayed


http://www.spamcast.net

The Software Process and Measurement Cast 433, an interview with Jeff Dalton, is delayed. I had anticipated recording the introductory material for the interview in my hotel room in Mumbai. However, my microphone broke during the flight to India, and rather than shouting into the built-in microphone in my laptop, I am working on alternate methods. I may be able to record using my cell phone. If the quality is acceptable, I will release the podcast early in the week. If not, we will skip a week and resume when I have access to my studio equipment. For now, I need to learn to pack better and to have a backup microphone while traveling!

Do you have suggestions for remote recording alternatives?


Categories: Process Management


Definition of Done

Herding Cats - Glen Alleman - Sun, 03/05/2017 - 19:41

The common definition of the Definition of Done in agile software development is (mostly from the Scrum Alliance and other official Scrum sites):

  • A simple list of activities (coding, comments, unit testing, integration, release notes, design documents, etc.) that add verifiable and demonstrable value to the product.
  • A very simple definition of done can be encapsulated as follows: Code Complete, Test Complete, Approved by Product Owner.
  • Done means coded to standards, reviewed, implemented with unit Test-Driven Development (TDD), tested with 100 percent test automation, integrated and documented.

And many more.

These Measures have Little meaning to the Decision Makers

The Agile Definition of Done is a coder's view of the project. When those paying for the work ask “Are you Done?” or, better yet, “When will you be Done?” and the coders answer “we've run all our Unit Tests,” “we've deployed to Pre-Prod,” or even “we've deployed to Prod,” that has very little meaning for the business.

What Done means to the business is simple...

  • Does the code produce the Planned Capabilities that were paid for?
  • Do those capabilities meet the Measures of Effectiveness needed to accomplish the mission of the system, in operational units of success closely related to the achievement of the mission or operational objectives, evaluated in the operational environment under a specific set of conditions?
    • These are stated in units meaningful to the buyer, focused on capabilities independent of any technical implementation, and are connected to mission success.
  • Do those capabilities meet the Measures of Performance that characterize physical or functional attributes relating to the system operation, measured or estimated under specific conditions?
    • These are attributes to assure the system has the capability and capacity to perform.
    • They are an assessment that assures it will meet the design requirements to satisfy the Measures of Effectiveness.
  • Are the Key Performance Parameters known that represent the capabilities and characteristics so significant that failure to meet them can be a cause of reevaluation, reassessing, or termination of the project?
    • Do these Key Performance Parameters have a threshold or objective value?
    • Do they characterize the major drivers of performance?
    • Are they considered Critical to Customer?
  • Are the Technical Performance Measures stated in units that can determine how well the systems or system element is satisfying or expected to satisfy a technical requirement or goal?
    • The Technical Performance Measures assess design progress, define compliance to performance requirements, identify risk, are limited to the critical thresholds, and include projected performance.

Here's how these are related:

[Figure: the relationships among Measures of Effectiveness, Measures of Performance, Key Performance Parameters, and Technical Performance Measures]

These are the Definitions of Done.

Long ago, I had the unfortunate experience of being assigned the role of Maintenance Officer for our aircraft. I was not trained in, nor did I have experience, running the maintenance shop. When the commanding officer came and asked, “Mr. Alleman, when will this aircraft be ready for a mission?” my answer was a detailed explanation of all the work being performed to get ready. “That wasn't my question, son,” was his response. “When will we fly? You're confusing Effort with Results.” He turned on his heel and walked away. I never forgot that traumatic experience.

Never confuse effort with results

The Definition of Done MUST be in actionable units of measure meaningful to the decision makers.

Related articles: The Art of Systems Architecting; Just Because You Say Words, It Doesn't Make Then True; Estimating Processes in Support of Economic Analysis; Open Loop Thinking v. Close Loop Thinking; Risk Management is How Adults Manage Projects
Categories: Project Management

Managing in Presence of Uncertainty

Herding Cats - Glen Alleman - Sun, 03/05/2017 - 05:59

Let's say you're the project or program manager of a large complex system. Maybe an aircraft, or a building, or an ERP system deployment. Your project is valued in the hundreds of millions of dollars. Or say your project is a simple, straightforward set of activities. Maybe planting a new row of trees on your land, or remodeling your kitchen. 

No matter the project, the domain, the product or service, there are Five Immutable Principles of project success. For each of the Five Principles, there is uncertainty. The uncertainty is always there, it doesn't go away with specific actions in specific domains, or with the use of any tools, processes, or practices.

All project work operates in the presence of uncertainty. This is an immutable principle that impacts planning, execution, performance measures, decision making, risk, budgeting, and overall business and technical management of the project and the business funding the project no matter the domain, context, technology or any methods.
We cannot escape these two uncertainties - reducible and irreducible - and must learn how to manage in the presence of these uncertainties.

If we're going to successfully manage project work in the presence of this uncertainty, we need a framework in which we can make decisions based on the underlying probabilistic and statistical processes that create the uncertainties. The raw material for managing in the presence of uncertainty include answers to the following questions:

  • Do we have a plan stating the attributes of the deliverables in units of measure meaningful to the decision makers?
    • Do we have Measures of Effectiveness, Measures of Performance, and all the ...ilities?
  • Are we making progress to plan for the technical and operational aspects of the deliverables? This includes the Measures of Effectiveness (MoE), Measures of Performance (MoP), Key Performance Parameters (KPP), and Technical Performance Measures (TPM) of the deliverables.
    • Is each of these measures being met for the planned cost at the planned time? 
    • Are the upper and lower control limits of each measure inside the planned acceptable performance range at any specific point in time on the path to Done?
    • If not, what are the root cause and corrective actions to bring the performance back inside the bounds?
  • Are we being effective with our budget - that is, are we earning our budget?
    • A $1 spent produces a $1 in value return at the planned time for that return.

Let's start with a clear and concise description of the problem of successfully managing projects in the presence of uncertainty:

Accurate software cost and schedule estimations are essential for non-trivial software projects. In many cases, once the estimates have been made (at proposal or authorization to proceed), recalibration and reduction of the uncertainty of the initial estimates is not always performed. As a software project progresses, more information about the project is known. This knowledge can be used to assess and re-estimate the effort required to complete the project. With more accurate estimations and fewer uncertainties, the probability of success of the project outcome can be increased. Abstracted from [3]

The paradigm of using past performance to update the estimates and risks to the project fits into the paradigm of the Cone of Uncertainty. The Cone is a convenient way to describe the upper and lower confidence levels of any parameter of the project - technical, cost, or schedule - that must be adhered to for project success. The notion of the Cone of Uncertainty has a long history [5], [6], and [7], where reducing the uncertainty in key parameters of the project was recognized as a critical success factor.

The Cone of Uncertainty can be used to assess the past performance of the project. If the actual data is outside the Cone of Uncertainty, then the project was subject to some cause. Four sample causes are shown below. 

Before going further, let's look at the definition of the Cone of Uncertainty, because it is misdefined and misused by some [4] and in Wikipedia. Here's the generic Cone of Uncertainty. As the project progresses, the uncertainty of the project attributes should be reducing. If they are not reducing, then the project is headed for trouble, and the management processes of the project have failed to perform their job. 

[Figure: the generic Cone of Uncertainty]
As the project proceeds, the upper and lower bounds of the planned performance of this parameter should be reducing if we are to land softly at the end of the project. 

If the actual value of the measured parameter, when compared to the planned value of the parameter, is outside the bounds of the planned range, then the project is no longer headed for success. Some corrective action is needed to get back inside the bounds. This is called getting back to green where we work.
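
As a minimal sketch of that comparison (my illustration, not data or code from any referenced program), the check below tests whether an actual value falls inside a planned cone whose spread narrows as the project proceeds. The initial and final spreads are made-up numbers.

def planned_bounds(fraction_complete, initial_spread=0.4, final_spread=0.05):
    # Planned +/- spread around the estimate, shrinking linearly toward completion.
    spread = initial_spread + (final_spread - initial_spread) * fraction_complete
    return 1.0 - spread, 1.0 + spread  # lower and upper multipliers of the estimate

def needs_corrective_action(fraction_complete, estimate, actual):
    lower, upper = planned_bounds(fraction_complete)
    return not (lower * estimate <= actual <= upper * estimate)

# Halfway through, this plan allows roughly +/- 22.5% around a 1,000-hour estimate.
print(needs_corrective_action(0.5, estimate=1000, actual=1300))  # True: outside the cone
print(needs_corrective_action(0.5, estimate=1000, actual=1100))  # False: inside the cone

When the check returns True, that is the signal for root cause analysis and corrective action to get back inside the planned bounds.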

There still seems to be confusion around the Cone of Uncertainty. Let me take another crack at describing how the CoU came about, how it's used in our aerospace, defense, and enterprise IT domains, and how not having a plan to reduce the uncertainty of the TPMs, or any other measure of program performance, is one of the top four root causes of project performance shortfall.

Let's start with Mr. Bliss's chart. From research conducted at the Performance Assessments and Root Cause Analyses department in the Office of the Secretary of Defense for Acquisition, Technology, and Logistics, here are the top four root causes of unanticipated cost and schedule growth. Many of the programs we work on are Software Intensive System of Systems. Software is now a major component of most weapons and space flight systems, so getting the software right means increasing the probability of program success.

[Figure: the Bliss chart - top four root causes of unanticipated cost and schedule growth]

These four root causes are:

  1. Unrealistic performance expectations - we have optimism that the resulting product or service will perform at the needed levels to deliver the needed capabilities to meet the business goals. Notice the term unrealistic. Without a realistic assessment of what can be delivered, we have no way to assess whether our expectations can be met.
  2. Unrealistic cost and schedule estimates based on inadequate risk-adjusted growth models. If we don't have models, informed by empirical data, showing that our risk-adjusted cost and schedule estimates are credible, then we're late, over budget, and the deliverables won't likely meet our needs on day one. These estimates can certainly be based on empirical data, but reference classes, nearest-neighbor models, parametric models, and Monte Carlo simulations are all methods for making estimates as well.
  3. Inadequate assessment of risk and unmitigated exposure to these risks. Risk Management is How Adults Manage Projects - Tim Lister. We need a formal risk management process. Mitigation of the risks needs to be on the master schedule, or we need the margin for cost and schedule.
  4. Unanticipated technical issues without alternative plans - and solutions to maintain effectiveness. Technical issues always arise. Having plans to address them when they turn into issues is needed.

Inverted Logic

If someone suggests that the cone of uncertainty doesn't work for their project, it is critical to determine why, through root cause analysis.

For example, from a recent paper abstract:

Software development project schedule estimation has long been a difficult problem. The Standish CHAOS Report indicates that only 20 percent of projects finish on time relative to their original plan. Conventional wisdom proposes that estimation gets better as a project progresses. This concept is sometimes called the cone of uncertainty, a term popularized by Steve McConnell (1996). The idea that uncertainty decreases significantly as one obtains new knowledge seems intuitive. Metrics collected from Landmark's projects show that the estimation accuracy of project duration followed a lognormal distribution, and the uncertainty range was nearly identical throughout the project, in conflict with popular interpretation of the "cone of uncertainty"

So here are some unanswered questions:

  • Why are the attributes not staying inside the Upper and Lower control limits that were planned for those parameters to be successful?
  • Why did only 20% of the projects finish on time?
  • Why did the estimates NOT get better?
  • BTW, the term was NOT popularized by McConnell; it goes all the way back to 1958, in the chemical plant construction industry.
  • Why were the estimate ranges essentially the same? Why did they not improve with new information about the project attributes? 

The referenced paper doesn't answer these. Instead, it suggests the Cone of Uncertainty is not the proper model for software estimating, with no evidence other than anecdotal examples from observed projects.

If the observed data is outside the Cone, then an answer to why that is the case is needed before assuming the Cone is of no use. 

This approach ignores the core principle of reducing uncertainty - with management intent - as the project progresses and discovers new information, collects past performance to be used for estimating future performance, and applies active risk management.

The referenced paper doesn't say why. That ignores the core principle of closed loop control: if the control system is designed to reduce uncertainty, as any project would want it to, and the uncertainty is not reduced, go find out why and take corrective action. Don't conclude that the Cone of Uncertainty is the wrong paradigm for controlling project performance.

It's not about the project performance data not fitting inside the cone; it's about WHY the project performance data did NOT fit inside the cone of uncertainty.

Without an answer to this why, and testable corrective action for the Root Cause, the project has little hope of showing up on time, on budget, and delivering the needed capabilities the customer has paid for. 

Resources [8], [9], and [10] provide some more background.

Wrap Up

The Cone of Uncertainty is the planned reduction of uncertainty in the project attributes: cost, schedule, and technical. When your project numbers are outside the planned upper and lower control limits, you've got a problem that requires management intervention, a corrective action. When it is claimed that the cone of uncertainty is not applicable because of observed excursions outside the control limits, that project is out of control relative to its planned control limits, NOT because the cone of uncertainty is wrong in principle.

References

  1. Cone of Uncertainty definition (Wikipedia)
  2. Steve McConnell's Cone of Uncertainty
  3. "Reducing Estimation Uncertainty with Continuous Assessment: Tracking the “Cone of Uncertainty” Pongtip Aroonvatanaporn, Chatchai Sinthop, Barry Boehm, ASE’10, September 20–24, 2010, Antwerp, Belgium. 
  4. "A Production Model for Construction: A Theoretical Framework," Ricardo Antunes  and Vicente Gonzalez, Buildings 2015, 5(1), 209-228
  5. "Accuracy Considerations for Capital Cost Estimation", Carl H. Bauman, Industrial & Engineering Chemistry, April 1958.
  6. Software Engineering Economics, Barry Boehm, Prentice-Hall, 1981.
  7. "The COCOMO 2.0 Software Cost Estimation Model,"Barry Boehm, et al., International Society of Parametric Analysis (May 1995).
  8. “Coping with the Cone of Uncertainty: An Empirical Study of the SAIV Process Model,” Da Yang, Barry Boehm, Ye Yang, Qing Wang, and Mingshu Li, ICSP 2007, LNCS 4470, pp. 37–48, 2007.
  9. “Reducing Estimation Uncertainty with Continuous Assessment: Tracking the 'Cone of Uncertainty’” Pongtip Aroonvatanaporn, Chatchai Sinthop and Barry Boehm, Center for Systems and Software Engineering University of Southern California Los Angeles, CA 90089, ASE’10, September 20–24, 2010, Antwerp, Belgium, 2010.
  10. “Shrinking The Cone Of Uncertainty With Continuous Assessment For Software Team Dynamics In Design And Development,” Pongtip Aroonvatanaporn, Ph.D. Thesis, University of Southern California, August 2012.
     
Related articles: Essential Reading List for Managing Other People's Money; Complex, Complexity, Complicated; Monte Carlo Simulation of Project Performance; Architecture-Centered ERP Systems in the Manufacturing Domain; Hope is not a Strategy; Herding Cats: Cone of Uncertainty - Part Cinq (Updated); Incremental Delivery of Features May Not Be Desirable
Categories: Project Management

Mindset: The New Psychology of Success: Re-Read Week 6, Chapter 6 Relationships: Mindsets in Love (or Not)

Mindset Book Cover
Today we review Chapter 6 in Carol Dweck’s Mindset: The New Psychology of Success (buy your copy and read along).  In Chapter 6, we explore the impact of mindsets on relationships.  While this chapter is focused primarily on personal relationships, we can also use its ideas to hone relationships within teams and the broader business environment. We can see how our mindset affects how we deal with the ups and downs of relationships. While the trials and tribulations of more intimate relationships are important, our re-read will ultimately focus on how our knowledge of mindsets can be used in transformations and coaching.

A fixed mindset believes that performance stems from a set of fixed attributes.  Rejection is seen as a reflection of a personal flaw, which sets a label (e.g. failure).  When those with a fixed mindset perceive they have a negative label, they will tend to lash out at those around them.  Because they are protecting their ego, those with a fixed mindset begin plotting revenge in an attempt to repair it. People with a growth mindset will use the ups and downs of relationships as a feedback mechanism. When slights occur they will tend to forgive and move on.

All relationships are a complicated set of interrelated systems.  Making and maintaining relationships takes work. However, as we have seen in previous chapters, those with a fixed mindset believe that what does not come naturally has little value.  This perception causes those with a fixed mindset to abandon relationships that require work to establish or maintain.

Another common issue in relationships where one or more partners has a fixed mindset is the assumption that both (or all, for larger groupings) are of one mind.  This assumption suppresses communication, putting further stress on the relationship and letting individuals ascribe motives to actions and comments that might not be true.

An exercise suggested by Dweck to determine which mindsets are being held in a relationship is to ask each party to complete the following statements: As a husband, I have the right to ______ and my wife has the duty to _____.  Using a development team as a model – As a developer, I have the right to ______ and the tester has the duty to ____.  Switch the role order depending on the primary role being played.  Asking the exercise participants to answer the question helps them explain how they anticipate the obligations of the relationship being distributed.  In the process, the words in the stories that are generated will help to expose the mindsets of the participants, which is useful in promoting awareness within the relationship.

As we have noted in earlier chapters, problems indicate character flaws to people with a fixed mindset.  At one point in my life, I actually walked away from a friendship when I noticed that someone heavily salted their buttered bread, and stopped dating a girl when she put ketchup on a filet.  I believe I have changed, but at the time I saw those problems as insurmountable character flaws.  Rather than discuss the situation (and show a bit more tolerance), I chose to bail out.  These sorts of issues happen in teams every day, reducing team effectiveness.  Remember to confront the situation, not the person.

In relationships, people with a fixed mindset see others as adversaries to be competed with.  The parties in a relationship that include people with fixed mindsets will often have significantly different power levels (one powerful and the other more submissive).  When those with a fixed mindset see the flaws in their partners they will tend to exploit those flaws to improve their ego and when slighted will seek revenge.  

Organizational Transformation:  Remember that bullying and revenge are influenced by fixed mindsets.  Change can be threatening to people with a fixed mindset. They see change as an attack on their character, threatening their success.  This can be exacerbated if the rollout is done via brute force (bullying), which can generate negative reactions such as revenge or passive-aggressive behavior in the workplace.  As a transformation leader, it is imperative to understand that change can be viewed as a rejection of closely held personal beliefs.  When talking about or leading change, separate how you talk about people from how you talk about the roles you are changing.

Team Coaching: Software development has been described as a team sport.  Teams are a reflection of the relationships between team members.  Mindsets can directly affect how team members view each other.  While the chapter focuses primarily on individual relationships, we can see many of the same patterns in the relationships between team members.  Stress causes individuals with fixed mindsets to focus on the personal faults of others, creating distance or even going as far as to ascribe blame to fellow team members.  Coaches have to help teams separate people from roles and help team members not to blame people, but rather to focus on how to resolve situations and improve outcomes.

Previous Entries of the re-read of Mindset:
• Basics and Introduction
• Chapter 1: Mindsets
• Chapter 2: Inside the Mindsets
• Chapter 3: The Truth About Ability and Accomplishment
• Chapter 4: Sports: The Mindset of a Champion
• Chapter 5: Business: Mindset and Leadership


Categories: Process Management

Managing in Presence of Uncertainty

Herding Cats - Glen Alleman - Sat, 03/04/2017 - 19:59

Let's say you're the project or program manager of a large complex system. Maybe an aircraft, or a building, or an ERP system deployment. Your project is valued in the hundreds of millions of dollars. Or say your project is a simple, straightforward set of activities. Maybe the planting of a new row of trees on your land, or remodeling your kitchen.

No matter the project, the domain, the product or service, there are Five Immutable Principles of project success. For each of the Five Principles, there is uncertainty. The uncertainty is always there, it doesn't go away with specific actions in specific domains, or with the use of any tools, processes, or practices.

All project work operates in the presence of uncertainty. This is an immutable principle that impacts planning, execution, performance measures, decision making, risk, budgeting, and the overall business and technical management of the project and the business funding it, no matter the domain, context, technology, or methods.
We cannot escape these two kinds of uncertainty - reducible and irreducible - and must learn how to manage in their presence.

If we're going to successfully manage project work in the presence of this uncertainty, we need a framework in which we can make decisions based on the underlying probabilistic and statistical processes that create the uncertainties. The raw material for managing in the presence of uncertainty includes answers to the following questions:

  • Do we have a plan describing the attributes of the deliverables, in units of measure meaningful to the decision makers?
    • Do we have Measures of Effectiveness, Measures of Performance, and all the ...ilities?
  • Are we making progress to plan for the technical and operational aspects of the deliverables? This includes the Measures of Effectiveness (MoE), Measures of Performance (MoP), Key Performance Parameters (KPP), and Technical Performance Measures (TPM) of the deliverables.
    • Is each of these measures being met for the planned cost at the planned time? 
    • Are the upper and lower control limits of each measure inside the planned acceptable performance range at any specific point in time on the path to Done?
    • If not, what are the root cause and corrective actions to bring the performance back inside the bounds?
  • Are we being effective with our budget - that is, are we earning our budget? (A minimal sketch of this check follows the list.)
    • A $1 spent should produce $1 of value returned at the planned time for that return.
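
As a rough illustration of "earning our budget," here is a minimal Python sketch using standard earned value arithmetic (Planned Value, Earned Value, Actual Cost). This is my sketch, not from the original post, and the dollar figures are invented for the example.

    # Minimal earned-value check: are we earning our budget?
    # All figures are hypothetical, in dollars, at a single status date.
    planned_value = 100_000   # PV (BCWS): budgeted cost of work scheduled to date
    earned_value = 85_000     # EV (BCWP): budgeted cost of work actually performed
    actual_cost = 110_000     # AC (ACWP): what was actually spent to perform that work

    cpi = earned_value / actual_cost    # cost efficiency: value returned per $1 spent
    spi = earned_value / planned_value  # schedule efficiency: work done vs. work planned

    print(f"CPI = {cpi:.2f} (want >= 1.0: each $1 spent returns $1 of planned value)")
    print(f"SPI = {spi:.2f} (want >= 1.0: progress keeps pace with the plan)")
    if cpi < 1.0 or spi < 1.0:
        print("Performance outside plan - find the root cause and take corrective action.")

A CPI below 1.0 is the quantitative form of "a $1 spent is not producing $1 of value at the planned time."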

Let's start with a clear and concise description of the problem of successfully managing projects in the presence of uncertainty:

Accurate software cost and schedule estimations are essential for non-trivial software projects. In many cases, once the estimates have been made (at proposal or authorization to proceed), recalibration and reduction of the uncertainty of the initial estimates are not always performed. As a software project progresses, more information about the project is known. This knowledge can be used to assess and re-estimate the effort required to complete the project. With more accurate estimations and fewer uncertainties, the probability of success of the project outcome can be increased. Abstracted from [3]

The paradigm of using past performance to update the estimates and risks to the project fits the paradigm of the Cone of Uncertainty. The Cone is a convenient way to describe the upper and lower confidence levels of any parameter of the project - technical, cost, or schedule - that needs to be adhered to for project success. The notion of the Cone of Uncertainty has a long history [5], [6], and [7], where reducing the uncertainty in key parameters of the project was recognized as a critical success factor.
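
To see why the cone's confidence bands should narrow as past performance accumulates, here is a small Python sketch (standard library only, with made-up task durations, not data from the referenced papers): the confidence interval on mean task duration shrinks roughly as 1/sqrt(n) as more completed tasks are observed.

    import math
    import statistics

    # Hypothetical actual durations (in days) of completed tasks, observed in order.
    completed = [8.5, 12.0, 9.0, 14.5, 10.0, 11.5, 9.5, 13.0, 10.5, 12.5]

    # After each new completion, recompute the mean and an approximate 90%
    # confidence interval on the mean (z ~= 1.645, normal approximation).
    for n in range(2, len(completed) + 1):
        sample = completed[:n]
        mean = statistics.mean(sample)
        half_width = 1.645 * statistics.stdev(sample) / math.sqrt(n)
        print(f"after {n:2d} tasks: mean = {mean:5.1f} days, "
              f"90% CI ~ [{mean - half_width:.1f}, {mean + half_width:.1f}]")

This is the mechanism behind the cone: more observed performance means a tighter credible range for the estimate, provided someone is actually doing the re-estimation.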

The Cone of Uncertainty can be used to assess the past performance of the project. If the actual data is outside the Cone of Uncertainty, then the project was subject to some cause. Four sample causes are shown below. 

Before going further, let's look at the definition of the Cone of Uncertainty, because it is misdefined and misused by some [4] and in Wikipedia. Here's the generic Cone of Uncertainty. As the project progresses, the uncertainty of the project attributes should be reducing. If they are not reducing, then the project is headed for trouble, and the management processes of the project have failed to perform their job.

Cone of Uncertainty
As the project proceeds, the upper and lower bounds of the planned performance of this parameter should be narrowing if we are to land softly at the end of the project.

If the actual value of the measured parameter, when compared to the planned value, is outside the bounds of the planned range, then the project is no longer headed for success. Some corrective action is needed to get back inside the bounds. In the domains where we work, this is called getting back to green.
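
Here is a minimal sketch of that control-limit check, with an invented planned cone and invented actuals (the post supplies no numbers): at each status point the planned cone gives upper and lower bounds for the estimate-at-completion ratio, and any actual outside those bounds triggers the call for root cause analysis and corrective action.

    # Planned cone: at each percent-complete point, the estimate-at-completion
    # (expressed as a multiple of the baseline estimate) must stay inside
    # [lower, upper]. All bounds and actuals below are illustrative only.
    planned_cone = {
        0.10: (0.60, 1.60),
        0.25: (0.75, 1.35),
        0.50: (0.85, 1.20),
        0.75: (0.92, 1.10),
        0.90: (0.97, 1.04),
    }
    actuals = {0.10: 1.45, 0.25: 1.30, 0.50: 1.28, 0.75: 1.15}  # observed EAC ratios

    for pct, (lower, upper) in sorted(planned_cone.items()):
        if pct not in actuals:
            continue  # no status data for this point yet
        eac = actuals[pct]
        if lower <= eac <= upper:
            status = "inside plan"
        else:
            status = "OUTSIDE plan -> find the root cause, take corrective action"
        print(f"{pct:.0%} complete: EAC ratio {eac:.2f}, "
              f"planned [{lower:.2f}, {upper:.2f}] -> {status}")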

There still seems to be confusion around the Cone of Uncertainty. Let me take another crack at describing how the CoU came about, how it's used in our Aerospace, Defense, and Enterprise IT domains, and how not having a plan to reduce the uncertainty of the TPMs, or any other measure of program performance, is one of the top four root causes of project performance shortfall.

Let's start with Mr. Bliss's chart. From research conducted at the Performance Assessment and Root Cause Analyses department in the Office of the Secretary of Defense for Acquisition, Technology, and Logistics, here are the top four Root Causes of unanticipated cost and schedule growth. Many of the programs we work on are Software Intensive Systems of Systems.  Software is now a major component of most weapons and space flight systems, so getting the software right means increasing the probability of program success.

Bliss Chart

These four root causes are:

  1. Unrealistic performance expectations - we have optimism that the resulting product or service will perform at the needed levels to deliver the needed capabilities to meet the business goals. Notice the term unrealistic. Without a realistic assessment of what can be delivered, we have no way to assess whether our expectations can be met.
  2. Unrealistic cost and schedule estimates based on inadequate risk-adjusted growth models. If we do not have models, informed by empirical data, showing that our risk-adjusted cost and schedule are credible, then we're going to be late, over budget, and the deliverables likely won't meet our needs on day one. These estimates can certainly be based on empirical data; reference classes, nearest-neighbor models, parametric models, and Monte Carlo simulations are all methods for making them as well (a minimal Monte Carlo sketch follows this list).
  3. Inadequate assessment of risk and unmitigated exposure to these risks. Risk Management is How Adults Manage Projects - Tim Lister. We need a formal risk management process. Mitigation of the risks needs to be on the master schedule, or we need the margin for cost and schedule.
  4. Unanticipated technical issues without alternative plans and solutions to maintain effectiveness. Technical issues always arise; having plans in place to address them when they do is needed.
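
Root Cause 2 calls for risk-adjusted estimates, and a Monte Carlo simulation is one common way to produce them. The sketch below is a minimal, standard-library Python example with hypothetical work-package ranges (triangular distributions) and one hypothetical discrete risk; it reports 50th and 80th percentile totals instead of a single-point estimate.

    import random

    # Hypothetical work packages: (low, most likely, high) durations in days.
    tasks = [
        (10, 15, 25),
        (20, 30, 55),
        (5, 8, 14),
        (15, 22, 40),
    ]
    # Hypothetical discrete risk: 30% chance of a rework event adding 10-30 days.
    risk_probability, risk_low, risk_high = 0.30, 10, 30

    def one_trial():
        # random.triangular(low, high, mode) draws one duration per work package.
        total = sum(random.triangular(low, high, mode) for low, mode, high in tasks)
        if random.random() < risk_probability:
            total += random.uniform(risk_low, risk_high)
        return total

    trials = sorted(one_trial() for _ in range(10_000))
    p50 = trials[len(trials) // 2]
    p80 = trials[int(len(trials) * 0.80)]
    print(f"P50 total duration ~ {p50:.0f} days, P80 ~ {p80:.0f} days")

Reporting a P80 alongside the P50 is what makes the estimate risk-adjusted: the schedule margin is visible rather than hidden in optimism.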

Inverted Logic

If someone suggests that the cone of uncertainty doesn't work for their project, it is critical to determine why, through root cause analysis.

For example, take the abstract of a recent paper:

Software development project schedule estimation has long been a difficult problem. The Standish CHAOS Report indicates that only 20 percent of projects finish on time relative to their original plan. Conventional wisdom proposes that estimation gets better as a project progresses. This concept is sometimes called the cone of uncertainty, a term popularized by Steve McConnell (1996). The idea that uncertainty decreases significantly as one obtains new knowledge seems intuitive. Metrics collected from Landmark's projects show that the estimation accuracy of project duration followed a lognormal distribution, and the uncertainty range was nearly identical throughout the project, in conflict with popular interpretation of the "cone of uncertainty."

Why are the attributes not staying inside the upper and lower control limits that were planned for those parameters to be successful? Why did only 20% of the projects finish on time? Why did the estimates NOT get better? BTW, the term was NOT popularized by McConnell; it goes all the way back to 1958 in the chemical plant construction industry [5]. Why were the estimate ranges essentially the same? Why did they not improve with new information about the project attributes?
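
One concrete way to ask "did the estimate ranges improve with new information?" is to look at the spread of estimate-to-actual ratios at successive points in the project. The small sketch below uses invented ratios purely for illustration; if the spread is not shrinking milestone to milestone, that is the signal to go find the root cause, not to discard the cone.

    import statistics

    # Hypothetical estimate-to-actual duration ratios for a set of projects,
    # re-estimated at 10%, 50%, and 90% complete.
    ratios_by_milestone = {
        "10% complete": [0.55, 0.80, 1.60, 2.10, 1.30, 0.70],
        "50% complete": [0.70, 0.90, 1.40, 1.70, 1.20, 0.85],
        "90% complete": [0.90, 0.95, 1.10, 1.25, 1.05, 0.97],
    }

    for milestone, ratios in ratios_by_milestone.items():
        spread = max(ratios) - min(ratios)
        dispersion = statistics.pstdev(ratios)
        print(f"{milestone}: range = {spread:.2f}, std dev = {dispersion:.2f}")
    # If the range and standard deviation are not decreasing from milestone to
    # milestone, the uncertainty-reduction process is not working - ask why.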

The referenced paper doesn't say why. Instead, it suggests the Cone of Uncertainty is not the proper model for software estimating, with no evidence other than anecdotal examples from observed projects. This approach ignores the core principle of closed-loop control: if the control system is designed to reduce uncertainty - as any project would want - and the uncertainty is not reduced, go find out why and take corrective action. Don't toss out the Cone of Uncertainty as the wrong paradigm for controlling project performance.

It's not about the project performance data not fitting inside the cone; it's about WHY the project performance data did NOT fit inside the Cone of Uncertainty.

Without an answer to this why, and testable corrective action for the Root Cause, the project has little hope of showing up on time, on budget, and delivering the needed capabilities the customer has paid for. 

Resources [8], [9], and [10] provide some more background.

Wrap Up

The Cone of Uncertainty is the planned reduction of uncertainty for the project attributes - cost, schedule, and technical. When your project numbers are outside the planned upper and lower control limits, you've got a problem that requires management intervention - a corrective action. When someone claims the Cone of Uncertainty is not applicable because of observed excursions outside the control limits, that project is out of control relative to its planned control limits - NOT because the Cone of Uncertainty is wrong in principle.

References

  1. Cone of Uncertainty definition (Wikipedia)
  2. Steve McConnell's Cone of Uncertainty
  3. "Reducing Estimation Uncertainty with Continuous Assessment: Tracking the “Cone of Uncertainty” Pongtip Aroonvatanaporn, Chatchai Sinthop, Barry Boehm, ASE’10, September 20–24, 2010, Antwerp, Belgium. 
  4. "A Production Model for Construction: A Theoretical Framework," Ricardo Antunes  and Vicente Gonzalez, Buildings 2015, 5(1), 209-228
  5. "Accuracy Considerations for Capital Cost Estimation", Carl H. Bauman, Industrial & Engineering Chemistry, April 1958.
  6. Software Engineering Economics, Barry Boehm, Prentice-Hall, 1981.
  7. "The COCOMO 2.0 Software Cost Estimation Model,"Barry Boehm, et al., International Society of Parametric Analysis (May 1995).
  8. “Coping with the Cone of Uncertainty: An Empirical Study of the SAIV Process Model,” Da Yang, Barry Boehm, Ye Yang, Qing Wang, and Mingshu Li, ICSP 2007, LNCS 4470, pp. 37–48, 2007.
  9. “Reducing Estimation Uncertainty with Continuous Assessment: Tracking the 'Cone of Uncertainty’” Pongtip Aroonvatanaporn, Chatchai Sinthop and Barry Boehm, Center for Systems and Software Engineering University of Southern California Los Angeles, CA 90089, ASE’10, September 20–24, 2010, Antwerp, Belgium, 2010.
  10. “Shrinking The Cone Of Uncertainty With Continuous Assessment For Software Team Dynamics In Design And Development, Pongtip Aroonvatanaporn,” Ph.D. Thesis, University of Southern California, August 2012.
     
Related articles:
  • Essential Reading List for Managing Other People's Money
  • Complex, Complexity, Complicated
  • Monte Carlo Simulation of Project Performance
Categories: Project Management

Software Development for the 21st Century

Herding Cats - Glen Alleman - Fri, 03/03/2017 - 23:01

Alistair Cockburn's talk on 21st Century Software development. 

Pay attention to the hoax of #Noestimates and to most of the major misunderstandings of the Agile Manifesto and the 12 Principles.

Categories: Project Management