
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

SPaMCAST 448 – Uncertainty in Software Development, TameFlow, Leading QA


http://www.spamcast.net

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

SPaMCAST 448 features our essay on uncertainty. Al Pittampalli said, "uncertainty and complexity produce anxiety we wish to escape." Dealing with uncertainty is part of nearly everything we do; our goal should be to address uncertainty head-on.

The second column features Steve Tendon talking about Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban, published by J. Ross (buy a copy here). We tackle Chapter 18.

Our third column is the return of Jeremy Berriault and his QA Corner. Jeremy discusses leading in QA. Jeremy blogs at https://jberria.wordpress.com/

Re-Read Saturday News

Chapter 10 concludes our re-read of Holacracy: The New Management System for a Rapidly Changing World by Brian J. Robertson, which was published by Henry Holt and Company in 2015. This week's chapter is titled The Experience of Holacracy. In this chapter, Robertson wraps up most of the loose ends. Next week we will conclude this re-read with some final comments and thoughts.

 

Catch up on all of the Holacracy entries:

Week 1:  Logistics and Introduction

Week 2: Evolving Organization

Week 3: Distribution Authority

Week 4: Organizational Structure

Week 5: Governance

Week 6: Operations

Week 7: Facilitating Governance

Week 8: Strategy and Dynamic Control

Week 9: Adopting Holacracy

Week 10: Moving Toward Holacracy

Week 11: Experience of Holacracy

In two weeks we will begin the next book in our Re-read series, The Science of Successful Organizational Change. (I ordered my copy; have you?) Remember to use the link to buy a copy in order to support the podcast and blog. The re-read will be led by Steven Adams. I am looking forward to sitting on the other side of the table during the next re-read! Visit the Software Process and Measurement Cast blog to participate in this and previous re-reads.

 

A Call To Action

If you got even a single new idea this week while listening to the podcast, please give the SPaMCAST a short, honest review in iTunes, Stitcher, or wherever you are listening. If you leave a review, please send a copy to spamcastinfo@gmail.com. Reviews help guide people to the cast!

Next SPaMCAST

SPaMCAST 449 will feature our interview with Jasveer Singh. We discussed his new book, Functional Software Size Measurement Methodology with Effort Estimation and Performance Indication. Jasveer proposes a new sizing methodology for estimation and other measurement processes.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques was co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: "This book will prove that software projects should not be a tedious process, for you or your team." Support SPaMCAST by buying the book here. Available in English and Chinese.

 


Categories: Process Management


Quote of the Day

Herding Cats - Glen Alleman - Sun, 06/25/2017 - 18:48

Great spirits have always found violent opposition from mediocrities. The latter cannot understand it when a man does not thoughtlessly submit to hereditary prejudices but honestly and courageously uses his intelligence. - Albert Einstein, in The Tao of Systems Engineering: An Engineer's Survival Guide (Kindle Locations 324-326), Ronald Paul Sherwin

Categories: Project Management

Holacracy: Re-read Week 11, Chapter 10: The Experience of Holacracy


In approximately two weeks we will begin the next book, The Science of Successful Organizational Change. Remember to use the link to buy a copy to support the podcast and blog. The re-read will be led by Steven Adams. I am looking forward to sitting on the other side of the table during the next re-read!

Chapter 10 completes Holacracy. This week's chapter is titled The Experience of Holacracy. This chapter sums up the changes that are generated when an organization adopts Holacracy. Adopting Holacracy requires learning different ways to control and manage, or perhaps better said, to lead a company. While every organization faces different issues as it adopts Holacracy, Robertson has observed a few fairly common themes.

Toppling the Hero

Heroic management does not fit the Holacracy model. The distribution and democratization of decision making create an environment in which the hero is out of step with the organization. Heroic forms of management concentrate decision making in one spot, the heroic CEO, which creates a single point of failure. Travis Kalanick, the CEO of Uber, is an example of the type of problem that occurs when decision making is concentrated in one heroic figure. The organization is limited to the decision-making throughput of the hero.

The idea of reducing the dependence on a hero has been a topic of discussion in organizational circles for as long as I can remember. While the reliance on heroics has been reduced, embracing Holacracy still requires a culture shift. That culture shift requires heroic leaders to give up the power that provides them comfort (and probably an endorphin hit when they ride to the rescue). At the same time, roles that have avoided making decisions find fewer places to hide. In either case, embracing Holacracy requires abandoning what is currently comfortable to find a new equilibrium in which authority and leadership are redistributed.

Operating as the Victim

The parent-child relationship (see blog) represents the standard management-worker operating model. This relationship is often so ingrained that shifting it requires significant energy and can have unintended consequences. The redistribution of authority that is central to Holacracy changes the relationship between all of the roles within an organization. Holacracy makes it more difficult to spend time grumbling about a problem when you have the authority to change the process.

One of the signs of change discussed by Robertson is when people begin to feel discomfort about not seeking consensus, or begin apologizing for making certain decisions or for rocking the organizational boat. The discomfort is a sign that the parent-child relationship is evolving into something more akin to a parent-adult relationship, which is healthier for making decisions.

Moving Beyond a Personal Paradigm

Holacracy separates the idea of roles from individuals. The separation of person and role, along with the distribution of authority, moves a team or organization away from relying on an individual and toward relying on the role.

The Evolution of the Organization

This section reminds us that Holacracy helps an organization shift from a static representation to an evolving, dynamic structure in which governance and decision making are distributed.

Next week some final thoughts.

Remember to buy a copy of Holacracy (use the link to help support and defray the costs of the Software Process and Measurement Cast blog and podcast).

Previous Entries in the re-read:

Week 1:  Logistics and Introduction

Week 2: Evolving Organization

Week 3: Distribution Authority

Week 4: Organizational Structure

Week 5: Governance

Week 6: Operations

Week 7: Facilitating Governance

Week 8: Strategy and Dynamic Control

Week 9: Adopting Holacracy

Week 10: When You Are Not Ready

 


Categories: Process Management

Quote of the Day

Herding Cats - Glen Alleman - Sat, 06/24/2017 - 15:39

As to methods, there may be a million and then some, but principles are few. The man who grasps principles can successfully select his own methods. The man who tries methods, ignoring principles, is sure to have trouble. - Harrington Emerson

from an advertisement for The Book of Business, in Collier's: National Weekly, 1917

When we hear a claim or conjecture about some method that fixes some named dysfunction, and that claim or conjecture clearly violates established principles, one of several outcomes can occur. Ask: what do you mean, or what principle informs your conjecture here? Many times this will be the start of a dialogue on the topic, and learning on both sides can begin. Other times, those being asked that question take offense at being asked to explain their position. In the latter case, it is best to move on, since willfully ignoring the principles that form the basis of the discussion is not going to lead to a shared exchange of ideas.

 

Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Fri, 06/23/2017 - 18:32

"We trained hard, but it seemed that every time we were beginning to form up into teams, we would be reorganized. I was to learn later in life that we tend to meet any new situation by reorganizing; and a wonderful method it can be for creating the illusion of progress while producing confusion, inefficiency, and demoralization." - Petronius Arbiter (210 B.C.)

Categories: Project Management

Just a Reminder About Estimating Software Projects

Herding Cats - Glen Alleman - Fri, 06/23/2017 - 17:52

Here's a clear and concise discussion of the estimating topic.

And just a reminder for making decisions in the presence of uncertainty.

There is NO Means

Categories: Project Management

Possibilities for Managing Individual and Team Compensation

There's a Twitter conversation about how to manage team compensation. How can we pay people what they are worth and compensate a team for their value?

I know there are teams where I was quite valuable—not because I was “the” leader, but because I helped the team achieve our goals. And, there are teams where I was not as valuable. I did great work, but my contribution wasn’t as valuable as other people on the team.

That is the crux of the matter when we think about team compensation.

Here’s how I managed this problem in the past:

  • I created (with HR's input) technical career levels. I often had 5 levels: associate engineer, engineer, senior engineer, principal engineer, and consulting engineer. The principal and consulting levels had first-level manager and group manager as parallels. In addition, I had a couple more management levels (director and VP).
  • I wrote down expertise criteria that differentiated each level. The criteria focused on breadth of responsibility, collaboration capability, and strategic vs. tactical thinking. HR made me add "typical education," which I amended to say "or years of experience."
  • I asked my groups to provide me feedback on these criteria.
  • When I was sure the criteria were correct, I met with each person one-on-one to make sure we agreed on where they fit in the criteria. Some people were on the verge of a promotion. Some were not. We worked together to make sure we were both comfortable with their title and compensation.

Now, I had the ability to provide people individual compensation and promotions. And, I could provide team-based compensation. Here’s how it worked.

One guy, John, wanted a promotion to senior engineer. He was a high-initiative person. He coached and mentored people in the team. He got along with everyone quite well, and his solutions were often good. It was the often that was the difficult part. When he got an idea in his head, it would take a disaster to convince him he was wrong. His judgment was not as good as a senior engineer needed to have.

I’d provided feedback in our weekly one-on-ones explaining both my happiness and concerns with his judgment, depending on the circumstance. (This was not agile. We used staged-delivery, a feature-driven approach. I was the manager of several groups.) I asked him if he wanted coaching, and yes, he did, but not from me. That was fine. I arranged a coaching arrangement with someone who was a principal engineer (2 levels up).

The two of them met twice a week for several weeks. I think each meeting was about 20-30 minutes. The coach asked questions, provided guidance and options. The engineer learned a ton in that month and started to explore other options with his team. He started to realize his very first idea was not the be-all and end-all for the work.

It took him several months of practice, and I was able to promote him to be a senior engineer.

People need to know what the criteria are—why the org values them. If the salary ranges are too tight, there is no flexibility in hiring. If the salary ranges are too loose, it’s too easy to have discrimination in payment, especially if someone started their first job too low. (Yes, I have experienced salary discrimination.)

Let me provide a little context for team compensation. John's team was involved in a new product. We didn't know much about the customers, and product management wasn't much help. (I said this was before agile.) John asked the tech writer, Susan, for help in understanding what customers wanted.

Susan guided the entire project. She helped the team understand the requirements. Because Susan was a principal engineer, she had customer contacts, and she used them. She created what we would now recognize as a ranked backlog. John had the idea of a "pre-beta," which was a set of builds we provided to a select group of customers. You might think of this as a series of MVPs (Minimum Viable Products) for these customers. The customers provided feedback to Susan, who used that feedback to guide the team.

We released the product and it was a great success. My VP came to me and told me I would get a $10k bonus (a ton of money back then). I said I had not had enough to do with the project, and that the team would get the money. My boss cocked an eyebrow and said, "I don't want to lose any of them." I told him I would make it right, somehow.

I went to the team and told them I had been chosen to receive a $10k bonus, which I thought was wrong. They all agreed!

I asked them to explain how they wanted to divide the money. (I was thinking evenly.) Before I even had a chance to pass out stickies, John said, “Susan should get the most. She was the heart and soul of this project.” Everyone nodded their heads.

I said that was great, but let’s do a private vote in case not everyone agreed. I passed out stickies and asked people to write down how they wanted to divide it. Every person said: 40% to Susan, the rest evenly. Well, one person added me in the evenly part. I thanked the person and demurred.

That’s what we did. Susan asked for part of her increased percentage to be a team dinner with spouses/significant others and they invited Mark and me.

The team knows who did what. The team can manage bonuses.

I don’t know that this is the “best” approach. I have always wanted to know what my organization wanted from me. I have found a career ladder in the form of expertise criteria a great way to accomplish this. In addition, I want to know that if there is extra compensation, the team will receive that extra as a team. Every project I’ve ever been on was a team effort. Agile approaches make that even more obvious.

Categories: Project Management

Shell: Create a comma separated string

Mark Needham - Fri, 06/23/2017 - 13:26

I recently needed to generate a string with comma separated values, based on iterating a range of numbers.

e.g. for n = 3 we should get the following output:

foo-0,foo-1,foo-2

I only had the shell available to me so I couldn’t shell out into Python or Ruby for example. That means it’s bash scripting time!

If we want to iterate a range of numbers and print them out on the screen we can write the following code:

n=3
for i in $(seq 0 $(($n > 0 ? $n - 1 : 0))); do
  echo "foo-$i"
done

foo-0
foo-1
foo-2

Combining them into a string is a bit trickier, but luckily I found a great blog post by Andreas Haupt which shows what to do. Andreas is solving a more complicated problem than mine, but these are the bits of code that we need from the post.

n=3
combined=""

for i in $(seq 0 $(($n > 0 ? $n - 1 : 0))); do
  token="foo-$i"
  combined="${combined}${combined:+,}$token"
done
echo $combined

foo-0,foo-1,foo-2

This won’t work if you set n<0 but that’s ok for me! I’ll let Andreas explain how it works:

  • ${combined:+,} will return either a comma (if combined exists and is set) or nothing at all.
  • In the first invocation of the loop combined is not yet set and nothing is put out.
  • In the next rounds combined is set and a comma will be put out.

We can see it in action by printing out the value of $combined after each iteration of the loop:

n=3
combined=""

for i in $(seq 0 $(($n > 0 ? $n - 1 : 0))); do
  token="foo-$i"
  combined="${combined}${combined:+,}$token"
  echo $combined
done

foo-0
foo-0,foo-1
foo-0,foo-1,foo-2

Looks good to me!


Categories: Programming

Product Roadmap:  A Roadmap for Every Answer


 


Product roadmaps are a tool used to visually present a business strategy. Roadmaps serve multiple goals, which include not only the ultimate roadmap itself but also the journey taken to develop it. Typical goals include:

  • Describing the vision and strategy visually. According to Pearson Prentice Hall, 65% of people are visual learners. A roadmap provides a powerful tool to connect with an audience.
  • Providing a guiding document for executing the strategy. A strategy is often visionary; the roadmap provides a path for action that moves past motivating words into tangible actions.
  • Getting internal stakeholders into alignment. My four-year-old grandson's favorite word is why. Frankly, it is contagious. A roadmap allows members of an organization to determine whether what they are working on fits into the big picture. The roadmap helps to answer "why."
  • Facilitating the discussion of options and scenario planning. The journey to an initial roadmap and the process for maintaining the roadmap (a roadmap is not a destination) provide a process and an environment to discuss and evaluate how an organization is going to pursue its strategies to reach its goals.
  • Communicating with external stakeholders, including customers. This is a classic use of roadmaps. Tuned for marketing, roadmaps are often invaluable for generating feedback as well as locking in customers.

The myriad of goals that roadmaps can address implies that there are a number of different types of roadmaps. Three types are typical.

  1. Product/Business roadmaps are the classic version that provides the visualization of features and services required to deliver on a set of goals and a strategy. The primary audiences for a product/business roadmap include executives (at a high level) and middle management (at a more granular level).
  2. Marketing roadmaps are typically an external communication vehicle predominantly used by firms to communicate with customers outside the firm. Note: I have seen teams serving internal customer bases develop marketing roadmaps. The primary audiences for a marketing roadmap include sales, marketing, and customers.
  3. IT roadmaps generally represent the planned evolution of one or more of the architectures stewarded by IT. Architectures exist (whether documented or not) for information, systems, and technology, to name a few. All of the IT architectures function within the business architecture and should enable the business's strategy and goals. IT roadmaps tend to follow the same audience pattern as product/business roadmaps, with the exception that the level of detail is sometimes driven down to the sprint level (a bad choice). Remember that roadmaps are not plans!

The granularity of any roadmap is driven by what the tool is being used to communicate and, to some extent, hubris. High-level product/business roadmaps tend to include high-level strategic initiatives, the specifics of which fade the farther the roadmap peers into the future. Specificity in the future is a form of hubris. Granularity can spin down into releases by period, specific features, and in some cases maintenance and bug fixes (enter hubris again).

Roadmaps can serve many masters and answer many questions; however, there is no one-size-fits-all solution.

We will next tackle a suggested hierarchy of roadmaps in a typical corporate setting.

 


Categories: Process Management

What’s new in WebView security

Android Developers Blog - Thu, 06/22/2017 - 22:29
Posted by Xiaowen Xin and Renu Chaudhary, Android Security Team

The processing of external and untrusted content is often one of the most important functions of an app. A newsreader shows the top news articles and a shopping app displays the catalog of items for sale. This comes with associated risks as the processing of untrusted content is also one of the main ways that an attacker can compromise your app, i.e. by passing you malformed content.

Many apps handle untrusted content using WebView, and we've made many improvements in Android over the years to protect it and your app against compromise. With Android Lollipop, we started delivering WebView as an independent APK, updated every six weeks from the Play store, so that we can get important fixes to users quickly. With the newest WebView, we've added a couple more important security enhancements.

Isolating the renderer process in Android O

Starting with Android O, WebView will have the renderer running in an isolated process separate from the host app, taking advantage of the isolation between processes provided by Android that has been available for other applications.

Similar to Chrome, WebView now provides two levels of isolation:

  1. The rendering engine has been split into a separate process. This insulates the host app from bugs or crashes in the renderer process and makes it harder for a malicious website that can exploit the renderer to then exploit the host app.
  2. To further contain it, the renderer process is run within an isolated process sandbox that restricts it to a limited set of resources. For example, the rendering engine cannot write to disk or talk to the network on its own. It is also bound to the same seccomp filter (blogpost on seccomp is coming soon) as used by Chrome on Android. The seccomp filter reduces the number of system calls the renderer process can access and also restricts the allowed arguments to the system calls.
Incorporating Safe Browsing

The newest version of WebView incorporates Google's Safe Browsing protections to detect and warn users about potentially dangerous sites. When correctly configured, WebView checks URLs against Safe Browsing's malware and phishing database and displays a warning message before users visit a dangerous site. On Chrome, this helpful information is displayed more than 250 million times a month, and now it's available in WebView on Android.

Enabling Safe Browsing

To enable Safe Browsing for all WebViews in your app, add in a manifest tag:

<manifest>
     <meta-data android:name="android.webkit.WebView.EnableSafeBrowsing"
                android:value="true" />
      . . .
     <application> . . . </application>
</manifest>

Because WebView is distributed as a separate APK, Safe Browsing for WebView is available today for devices running Android 5.0 and above. With just one added line in your manifest, you can update your app and improve security for most of your users immediately.

Categories: Programming

Decisions Without Estimates?

Herding Cats - Glen Alleman - Thu, 06/22/2017 - 20:52

There is a posted question at an agile conference: can you make a decision without an estimate? Like many discussions in the domain of agile, the claim is made without any evidence that it is true, or that it can even be true in principle. This type of fallacy is common.

First a principle ...

There is NO means of making credible decisions in the presence of uncertainty without first estimating the outcome of that decision. This is a foundational principle of the economics of probabilistic decision making. To suggest you can make such a decision without estimates willfully ignores that principle.

Decisions without estimates

Let's look at each of these from the point of view of Managerial Finance and the Economics of Decision Making in the Presence of Uncertainty. These two points of view are the basis of any credible business management process.

It's not your money, spend it wisely

First, let's establish a singular framing assumption:

  • All project work is uncertain - this uncertainty comes in two types: Reducible (Epistemic) and Irreducible (Aleatory).
  • These uncertainties are probabilistic and statistical, respectively.
  • Knowing the behaviors of these uncertainties - the probabilistic and statistical models - is part of deciding what to do.
  • The impact on the project from these probabilities is also probabilistic. Knowing this impact requires assessing the probabilities.

If you want to know the probability of occurrence of some Epistemic uncertainty or the statistical processes of some aleatory activity, you need to estimate. Don't guess. Don't assume. Estimate. This process is the basis of all risk management. And Risk Management is How Adults Manage Projects - Tim Lister.
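To make this concrete, here is a minimal sketch (my illustration in Python, not from the post) of estimating rather than guessing: given assumed three-point duration ranges for three tasks, a Monte Carlo simulation estimates the probability of finishing within a deadline. The task ranges and the deadline are invented for the example.

import random

# Illustrative assumptions: (min, most likely, max) durations in days.
TASKS = [(3, 5, 10), (2, 4, 8), (1, 2, 6)]
DEADLINE = 18  # days

def simulate_once():
    # Model the aleatory variation of each task with a triangular distribution.
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in TASKS)

trials = 10_000
hits = sum(simulate_once() <= DEADLINE for _ in range(trials))
print(f"P(finish within {DEADLINE} days) ~ {hits / trials:.2%}")

Even this toy model makes the point: the probability of an outcome is something you compute from estimates, not something you can rank or decide without them.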

So with this in mind let's look at the conjectured process that can be performed on projects without estimating in the presence of uncertainty.

  • Longest entry Barrier First - so how would you know what the longest is without making an estimate, since that longest is likely a probabilistic number in the future? If you know the longest upfront, it's already happened.
  • Prototyping First - OK, which feature do we prototype? How much effort is the prototype? What are we expecting to learn from the prototype before we start spending the customer's money?
  • Strategic Items First - strategies are hypotheses. Hypotheses have uncertainties. Those uncertainties impact the validation of the strategy. How can you assess the strategy without making estimates of the impact of the outcome of the hypothesis?
  • Customer Oriented First - does the customer have absolute confidence in what comes first? Does the customer operate in the presence of uncertainty?
  • High Risk First - all risk comes from uncertainty: Epistemic uncertainty and Aleatory uncertainty. No decision can be made in the presence of risk - derived from its uncertainties - without estimating the impact of that risk, the cost to mitigate that risk, and the residual risk after mitigation. Risk Management is How Adults Manage Projects - Tim Lister. Be an adult; manage the project as a Risk Manager. Be an adult; estimate the impact of the risk on the probability of success.
  • Management Decision - OK, management decides. Any uncertainties resulting from that decision? No? Proceed. Yes? Any estimate of the impact of that decision on the probability of success? Just because management decided does not remove the uncertainty, unless that uncertainty has been analyzed.
  • Reducing Cost First - how much cost, what sources of cost, and what's the probability that the cost reduction will be effective?
  • Minimal Scope - how do you know what minimal is in the presence of uncertainty without estimating?
  • POC to Decide - same as Management Decision.

There is NO principle by which you can make a credible decision in the presence of uncertainty (reducible and irreducible) without estimating the impact of that decision on the probability of success of your project.
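As a toy illustration of that principle (mine, with invented numbers, in Python), even ranking two options requires estimates of probability and impact before any ordering exists:

# Illustrative assumptions: each option carries an estimated probability of
# failure, a cost of failure, and a cost to mitigate. There is no way to rank
# the options without these estimates.
options = {
    "ship feature A first": (0.30, 120_000, 15_000),
    "ship feature B first": (0.10, 200_000, 40_000),
}

for name, (p_fail, impact, mitigation) in options.items():
    exposure = p_fail * impact  # classic expected-loss (risk exposure) estimate
    print(f"{name}: risk exposure ${exposure:,.0f}, mitigation ${mitigation:,.0f}")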

 

Related articles What's the Smell of Dysfunction? Root Cause of Project Failure IT Risk Management Making Decisions In The Presence of Uncertainty Herding Cats: Five Immutable Principles of Project Success Estimating Processes in Support of Economic Analysis Herding Cats: Decisions Without Estimates? Herding Cats: Risk Management for Agile Software Development Projects
Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Thu, 06/22/2017 - 16:29

Management of Software Development projects is not like a Rocky movie, where strong desire overcomes lack of skill, experience, and talent.

Skill, experience, tools, principles, processes, practices, and talent are all needed to increase the probability of success. Estimates are rarely for those spending the money; they are for those paying for the value produced by those spending the money. Not understanding this, not seeking the highest possible level of each of these, or tolerating less than effective application of these will result in disappointing results for those paying for and those developing the software system.

Categories: Project Management

Book of the Month

Herding Cats - Glen Alleman - Thu, 06/22/2017 - 15:22

The Death of Expertise describes how established knowledge is challenged and why this is critical in our modern society.

With nearly unlimited access to information - the information age - we've become narcissistic and misguided intellectual egalitarians.

With everything from WebMD to Wikipedia at hand, normal people are now experts.

Any idea now demands to be taken seriously. Unsubstantiated claims have the same visibility as tested theories and evidence-based principles, processes, and practices.

In our software development domain, anyone with a blog and a Twitter account can make statements. Present company included.

The trouble is that fact-checking these claims takes effort.

With the right tag lines, those making bogus claims can collect followers with ease.

This book is a rearguard action on behalf of those who actually know what they are talking about.

Categories: Project Management

Avoiding deeply nested component trees

Xebia Blog - Thu, 06/22/2017 - 09:52

By passing child components down instead of data you can avoid passing data down through many levels of components. It also makes your components more reusable. Even multibrand components become much easier to build. Overall it is a pattern which improves your frontend code a lot! The Problem When building frontends you will pass data […]


Programming Across Paradigms

From the Editor of Methods & Tools - Wed, 06/21/2017 - 18:46
What’s in a programming paradigm? How did the major paradigms come to be, and why? Once we’ve sworn our love to one paradigm, does a program written under any other still smell as sweet? Can functional programmers learn anything from the object-oriented paradigm, or vice versa? In this talk, we’ll try to understand what we […]

Quote of the Day

Herding Cats - Glen Alleman - Wed, 06/21/2017 - 15:33

While management and leadership are related and often treated as the same, their central functions are different. Managers clearly provide some leadership, and leaders obviously perform some management.
However, there are unique functions performed by leaders that are not performed by managers. My observation over the past forty years ... is that we develop a lot of good managers, but very few leaders.
Let me explain the difference in functions that they perform:

  • A manager takes care of where you are; a leader takes you to a new place.
  • A manager is concerned with doing things right; a leader is concerned with doing the right things.
  • A manager deals with complexity; a leader deals with uncertainty.
  • A manager creates policies; a leader establishes principles.
  • A manager sees and hears what is going on; a leader hears when there is no sound and sees when there is no light.
  • A manager finds answers and solutions; a leader formulates the questions and identifies the problems.

- James E. Colvard

from Systems Engineering Newsletter, June 2017, Project Performance International, PO Box 2385, Ringwood North Victoria, 3134, Australia

Categories: Project Management

Product Roadmaps: Basics and Context

A Roadmap Provides Direction

Product roadmaps are a tool used to visually present an approach to translating a business strategy into the real world. The visualization of the impact of a strategy on a product allows all relevant constituencies to grasp how a product and its enablers are intended to evolve.  

In order to create and use product roadmaps, there are several key concepts and components that need to be agreed upon.  

The first concept that needs to be agreed upon is the definition of a product. Mike Cohn, of Mountain Goat Software, defines a product as ‚Äúa product is something (physical or not) that is created through a process and that provides benefits to a market.‚ÄĚ This definition gets rid of the distinction between tangible products and services to focus the idea value delivery. ¬†The most critical part of the definition is that the product must provide a benefit to a market. ¬†Much of internal IT‚Äôs work is either as part of the larger business-driven product or as an enabler. For example, learning management is a product delivered by human resources (HR) to the individuals within an organization. The system (e.g. PeopleSoft or Workday) and the network enable the product. The business strategy will define the future of the product which will in turn influence the enablers. ¬†

Some organizations make a major distinction between services and products. The distinction is often made because services are intangible (you can't hold an inventory of them) while products are tangible. Making a distinction between a product and a service may well impact how a strategy is delivered, but it is less useful when defining a product roadmap or when determining whether a product owner should exist. From a roadmap perspective, treating products and services the same is the simplest and best approach.

Many organizations create software-centric (or at least software-involved) products; others use software to enable the delivery of value. Software-centric functions will be identified on the product roadmap. For example, an ATM product roadmap might include purely software-driven multilingual or voice-activation functions. When the organization's products are less software-centric, the roadmap requires some form of linkage between the product and the technology, which may be on two separate roadmaps. The potential for two very different paths is most obvious when leveraging commercial off-the-shelf packages (COTS) or software as a service (SaaS). In both cases, the software is serving multiple masters, which may not match the business roadmap. Deciding which parts of the organization need to be covered by the roadmap is critical and can drive the need for a multilevel approach.

What ends up on a product roadmap depends on what you define as a product. Internal IT organizations have a tendency to equate platforms to products and internal personnel to customers. This can lead to developing separate technology and business roadmaps. While the best solution is to take a business-first approach, when separate roadmaps are generated the organization will require a process to sync the two levels of roadmaps.


Categories: Process Management

Misinterpretations of the Cone of Uncertainty

Herding Cats - Glen Alleman - Tue, 06/20/2017 - 20:31

The Cone of Uncertainty is a framing assumption used to model the needed reduction in some parameter of interest in domains ranging from software development to hurricane forecasting.

This extended post covers

  1. The framing assumptions of the Cone of Uncertainty.
  2. The Cone of Uncertainty as a Technical Performance Measure.
  3. Closed Loop Stochastic Adaptive Control in the Presence of Evolving Uncertainty.

These topics are connected in a simple manner.

All project work is uncertain (probabilistic and statistical). Uncertainty comes in two forms - Epistemic and Aleatory. Uncertainty creates risk. Risk management requires active reduction of risk. Active reduction requires that we have a desired reduction goal, perform the work needed to move the parameter toward the goal - inside the control limits - and measure progress toward the reduction goal. Management of this reduction work and measurement of the progress toward the goals is a Closed Loop Control System paradigm. Closed loop control has a goal, an action, a measurement, and a corrective action. These activities, their relationships, and their values are defined in a PLAN for increasing the probability of success for the project. The creation and management of the Plan is usually performed by the Program Planning and Controls group where I work.

Framing Assumptions for the Cone of Uncertainty 

Of late, the Cone of Uncertainty has become the mantra of No Estimates advocates, who claim that data is needed BEFORE the Cone is of any use and that, without this data, the Cone has no value in the management of the work. This of course is NOT the correct use of the Cone. This fallacy comes from a collection of data that did not follow the needed and planned reduction of uncertainty for the cost estimates of a set of software development projects.

The next fallacy of this conjecture is that the root cause of why those projects did not have their uncertainty reduced in some defined and desirable manner was never provided. The result of the observation is a symptom of something, but the cause was not sought in any attributable way. This is a common problem in low-maturity development organizations. We blame something for our problem without finding out the cause of that something, or even whether that something is the actual cause.

In our domain, Root Cause Analysis is mandated before ANY suggested change for improvement, prevention, or corrective action is taken. Our RCA method is the Apollo Method. Apollo is distinctly different from other root cause approaches in that each effect requires both a Condition and an Action. Please read the book linked in the previous sentence to see why this is critical and how it is done. Then, every time you hear some statement about an observed problem, you can ask: what's the cause (both condition and action)?

So when you hear "I have data that shows that the Cone (or for that matter anything) does not follow the expected and desired outcomes," and there is no Root Cause to explain that unexpected outcome, ignore that conjecture until the reason for the observed outcome is found, corrective actions have been identified, and those corrective actions have been confirmed to actually fix the observed problem.

So let's recap what the Cone of Uncertainty is all about, from those who created the concept. Redefining the meaning and then arguing "my data doesn't match that" is a poor start to improving the probability of project success.

The Cone of Uncertainty describes the uncertainty in the behaviors or measurement of a project parameter and how that uncertainty needs to be reduced as the project proceeds to increase the Probability of Project Success.
If that parameter is NOT being reduced at the planned rate, then the Probability of Success is not increasing in the planned and needed manner.

If the parameter of interest is not being reduced as needed, go find out why and fix it, or you'll be late, over budget, and the technical outcome will be unacceptable. The Cone of Uncertainty does NOT need data to validate that it is the correct paradigm. The Cone of Uncertainty is the framework for improving the needed performance of the project. It's a Principle.

Here's some background on the topic of the Cone of Uncertainty. Each of these can be found with Google.

  1. Misinterpretations of the "Cone of Uncertainty" in Florida during the 2004 Hurricane Season, Kenneth Broad, Anthony Leiserowitz, Jessica Weinkle, and Marissa Steketee, American Meteorological Society, May 2007
  2. Reducing Estimation Uncertainty with Continuous Assessment: Tracking the "Cone of Uncertainty", Pongtip Aroonvatanaporn, Chatchai Sinthop, and Barry Boehm
  3. "Shrinking the Cone of Uncertainty with Continuous Assessment for Software Team Dynamics in Design and Development," Pongtip Aroonvatanaporn, Ph.D. Thesis, University of Southern California, August 2012.
  4. "Improving Software Development Tracking and Estimation Inside the Cone of Uncertainty," Pongtip Aroonvatanaporn, Thanida Hongsongkiat, and Barry Boehm.

The Cone of Uncertainty as a Technical Performance Measure

Technical Performance Measures are one of four measures describing how a project is making progress to plan. These measures - combined - provide insight into the probability of program success.

  • Measure of Effectiveness
  • Measure of Performance
  • Technical Performance Measure
  • Key Performance Parameters

Here's a workshop that we give twice a year at the College of Performance Management's conference. The definition of a TPM is on page 14. 

The reduction of Uncertainty (as in the Cone shape of the Uncertainty) can be considered a Technical Performance Measure. The planners of the program define the needed uncertainty at a specific point in time, measure the actual uncertainty at that point in time, and assess the impact on the probability of program success by comparing planned versus actual.

This approach is the same for any other TPM - Risk, Weight, Throughput, and even Earned Value Management parameters. This is the basis of the closed loop control system used to manage the program with empirical data from past performance compared to the planned performance at specific points in time.

This Closed Loop Program Control Process provides decision makers with actionable information. The steering target - the MOE, MOP, TPM, KPP - is defined upfront and evolves as the program progresses with new information. The same holds for the uncertainties in the program measures. This approach is also the basis of Analysis of Alternatives and other trades when it is determined that the desired measures cannot be achieved. Decisions are made to adjust the work, adjust the requirements, or adjust the measures.

Closed Loop Control Systems in the Presence of Emerging Stochastic Behaviours

Risk management is how adults manage projects - Tim Lister. All risk comes from uncertainty. Uncertainty comes in two forms - Epistemic (reducible) and Aleatory (irreducible). Here's an overview of how to manage in the presence of uncertainty.

 

Let's look at the Closed Loop Control System paradigm for managing projects. Control systems exist in many domains: from simple fixed setpoint systems like your thermostat controlling the HVAC system, or a PID controller running a cracking column at a refinery; to multi-dimensional evolving-target control systems guiding an AIM-9L missile (for which I wrote software); to multi-target acquisition systems needed to steer midcourse interceptors to their targets; to cloud-based enterprise ERP systems; to autonomous flight vehicles operating in mixed environments, with mission planning in flight and collision avoidance with manned vehicles - the current generation of swarm UAVs on the battlefield.

A concept that spans all these systems - and is shared with project management control systems (we work under the title of Program Planning and Controls) - is the idea of a PLAN.

  • What is the Plan to keep the temperature in the room at 76 degrees? It's a simple plan: run the A/C or heater, measure the current temperature, compare it to the desired temperature, and adjust appropriately - all the way up to ...
  • An evolving battle management system where the control loop for individual vehicles interacts with the control loops of other vehicles (manned and unmanned) as the mission, terrain, weather, and situation emerge.
  • What are the units of measure for this Plan? What are the probabilistic and statistical behaviors of these units of measure? How can we measure the progress of the project toward compliance with these measures in the presence of uncertainty?
  • How can the variance analysis of these measures be used to take corrective actions to keep the program GREEN?

All these control systems share the same framing assumptions: there is a goal, the capabilities needed to accomplish that goal are known, and there is a PLAN by which that goal can be accomplished. That PLAN may evolve - with manual intervention or with autonomous intervention - as the Mission evolves and the situation evolves.

Project work is a complex, adaptive, stochastic process with evolving goals, resources, and situations, all operating in the presence of uncertainty. Here are the moving parts for any non-trivial project. [Figure: the moving parts of a non-trivial project]
In the control system for project management, as in the control systems for those non-trivial examples above, the PLAN for the behavior of the parameter of interest is the starting point. The feedback (closed loop feedback) of the variance between the desired value and the actual value at the time of the measurement is combined with other information on the program to define the corrective actions needed to Keep the Program Green.

Like control systems in automation with closed loop control, a project control system must be implemented that controls cost, ensures technical performance, and manages schedule. All successful control systems require measurements, and all closed loop control systems have a plan to steer toward - a temperature for a simple thermostat, all the way to a target cost, schedule, and technical performance for a complex software project. In the project management world those measurements (inputs/calculated values) are called metrics, and the greater the frequency of measurement (up to a point - the Nyquist sample rate), the more accurate the control. All measurements need to be compared to an expectation - the Set Point. When deviations are noticed, action is required to modify the processes that produce the output - the product or service produced by the project.
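Here is a minimal sketch (my illustration in Python, not from any of the cited sources) of that closed loop pattern applied to a project metric: a set point, a measurement, a variance, and a corrective-action signal. The tolerance and the weekly numbers are invented.

def control_step(set_point, actual, tolerance):
    # Compare the measurement to the set point and signal corrective action
    # when the variance falls outside the allowed tolerance.
    variance = actual - set_point
    if abs(variance) <= tolerance:
        return f"variance {variance:+.1f}: on plan, no action"
    return f"variance {variance:+.1f}: outside tolerance, take corrective action"

# Planned cumulative progress (the set point) vs. measured progress, by week.
plan_vs_actual = [(10.0, 9.5), (20.0, 18.0), (30.0, 24.0)]
for week, (planned, measured) in enumerate(plan_vs_actual, start=1):
    print(f"week {week}: {control_step(planned, measured, tolerance=1.5)}")

The same loop scales from a thermostat to a program baseline; only the set point, the measurement, and the corrective action change.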

Here are some resources on closed loop control systems for project management, based on following the plan for the performance of the parameter of interest:

  1. "Feedback Control in Project-Based Management," L. Scibile, ST Division - Monitoring and Communication Group (ST/MC), CERN, Geneva, Switzerland

Back to the Cone of Uncertainty

The notion of the Cone of Uncertainty has been around for a while, going back at least to Barry Boehm's work in Software Engineering Economics, Prentice-Hall, 1981.

But first, let's establish a framing assumption. When you hear of projects where uncertainty is not reduced as the project progresses, ask a simple question: why is this the case? Why, as the project progresses with new information, delivered products, and reduced risk, is the overall uncertainty not being reduced? Go find the root cause of this before claiming uncertainty doesn't reduce. Uncertainty on all projects should, as a principle, be reduced through the direct action of Project Management. If uncertainty is not reducing, the cause may be bad management, an out-of-control project, or working in a pure research world where things like that happen.

As well, never measure any project parameter as a relative number. Relative numbers - ordinal - are meaningless when making decisions. Relative uncertainty is one example. Relative to what? Cardinal numbers are the measures used to make decisions.

So a quick review again. What is the Cone of Uncertainty?

  • The Cone is a project management framework describing the uncertainty aspects of estimates (cost and schedule) and other project attributes (cost, schedule, and technical performance parameters). Estimates of cost, schedule, and technical performance on the left side of the cone have a lower probability of being precise and accurate than estimates on the right side of the cone. This is due to many reasons. One is the level of uncertainty early in the project - the Aleatory and Epistemic uncertainties that create the risk to the success of the project. Other uncertainties that create risk include:
    • Unrealistic performance expectations with missing Measures of Effectiveness and Measures of Performance
    • Inadequate assessment of risks and unmitigated exposure to these risks without proper handling plans.
    • Unanticipated technical issues without alternative plans and solutions to maintain effectiveness
  • Since all project work contains uncertainty, reducing this uncertainty - which reduces risk - is the role of the project team and their management: either the team itself, the Project or Program Manager, or, on larger programs, the Risk Management owner.

Here's another simple definition of the Cone of Uncertainty: 

The Cone of Uncertainty describes the evolution of the measure of uncertainty during a project. For project success, uncertainty not only must decrease over time but must also diminish in its impact on the project's outcome. This is done by active risk management, through probabilistic decision-making. At the beginning of a project, little is usually known about the product or work results. Estimates are needed but are subject to a large level of uncertainty. As more research and development is done, more information is learned about the project, and the uncertainty decreases, reaching 0% when all risk has been mitigated or transferred. This usually happens by the end of the project.

So the question is: how much variance reduction needs to take place in the project attributes (risk, effectiveness, performance, cost, schedule - shown below), and at what points in time, to increase the probability of project success? This is the basis of Closed Loop Project Control. Estimates of the needed reduction of uncertainty, estimates of the possible reduction of uncertainty, and estimates of the effectiveness of these reduction efforts are the basis of the Closed Loop Project Control System.

This is the paradigm of the Cone of Uncertainty - it's a planned development compliance engineering tool, not an after-the-fact data collection tool.

The Cone is NOT the result of the project's past performance. The Cone IS the planned boundaries (upper and lower limits) of the needed reduction in uncertainty (or other performance metrics) as the project proceeds. When actual measures of cost, schedule, and technical performance are outside the planned cone of uncertainty, corrective actions must be taken to move those uncertainties back inside the cone if the project is going to meet its cost, schedule, and technical performance goals.

If your project's uncertainties are outside the planned boundaries at the time when they should be inside them, then you are reducing the probability of project success.

The measures that are modeled in the Cone of Uncertainty are the quantitative basis of a control process that establishes the goal for the performance measures. Capturing the actual performance, comparing it to the planned performance, and checking compliance against the upper and lower control limits provides guidance for making adjustments that keep the variables performing inside their acceptable limits.
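A minimal sketch (my illustration in Python, with invented numbers) of the Cone as planned control limits: the allowed spread around the target estimate narrows as the project proceeds, and each actual measurement is checked against the limits in force at that point in time.

def cone_limits(initial_spread, pct_complete):
    # Planned spread (as a fraction of the target) shrinks linearly to zero at
    # 100% complete; a real program would plan a domain-specific curve.
    spread = initial_spread * (1.0 - pct_complete)
    return -spread, spread

TARGET = 100.0  # target cost estimate, units arbitrary
for pct, actual in [(0.25, 130.0), (0.50, 118.0), (0.75, 115.0)]:
    lcl, ucl = cone_limits(initial_spread=0.4, pct_complete=pct)
    deviation = (actual - TARGET) / TARGET
    status = "inside the cone" if lcl <= deviation <= ucl else "OUTSIDE the cone - act"
    print(f"{pct:.0%} complete: actual {actual:.0f} ({deviation:+.0%}) -> {status}")

In this made-up run the 75%-complete measurement falls outside the planned limits, which is exactly the signal for corrective action described above.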

The Benefits of the Use of the Cone of Uncertainty 

The planned value, the upper and lower control limits, and the measures of actual values form a Closed Loop Control System - a measurement-based feedback process to improve the effectiveness and efficiency of the project management processes. Its benefits include:

  • Analyzing trends that help focus on problem areas at the earliest point in time - when the variable under control starts misbehaving, intervention can be taken. No need to wait until the end to find out you're not going to make it.
  • Providing early insight into error-prone products that can then be corrected earlier and thereby at lower cost - when the trends are headed toward the UCL or LCL, intervention can take place.
  • Avoiding or minimizing cost overruns and schedule slips by detecting them early enough in the project to implement corrective actions - by observing trends toward breaches of the UCL and LCL.
  • Performing better technical planning, and making adjustments to resources based on discrepancies between planned and actual progress.

A critical success factor for all project work is Risk Management. Risk management includes the management of all kinds of risks - risks from all kinds of sources of uncertainty, including technical risk, cost risk, schedule risk, and management risk. Each of these uncertainties and the risks they produce can take on a range of values described by probability and statistical distribution functions. Knowing what ranges are possible and knowing what ranges are acceptable is a critical project success factor.

We need to know the Upper Control Limit (UCL) and Lower Control Limit (LCL) of the ranges of all the variables that will impact the success of our project, and we need to know these ranges as a function of time. With this paradigm, we have logically connected project management processes with control system processes, acting when the variances created by uncertainty go outside the UCL and LCL. Here's a work-in-progress paper, "Is There an Underlying Theory of Project Management," that addresses some of the issues with control of project activities.

This is the critical concept in successful project management

We must have a Plan for the critical attributes - Mission Effectiveness, Technical Performance, Key Performance Parameters - for the items. If these are not compliant, the project has become subject to one of the root causes of program performance shortfall. We must have a burndown or burnup plan for producing the end item deliverables for the program that match those parameters over the course of the program. Of course, we have a wide range of possible outcomes for each item in the beginning. As the program proceeds, the variance measures on those items move toward compliance with the target number - in this case, Weight.

[Figure: planned reduction toward the target value, with upper and lower control limits]

Here's another example of the Cone of Uncertainty - in this case, the uncertainty in the weight of a vehicle as designed by an engineering team. The UCL and LCL are defined BEFORE the project starts. These are the allowable ranges of the weight values for the object at specific points in time. When the actual weight or the projected weight goes outside that range - Houston, we have a problem. These limits inform the designer of the progress of the project as it proceeds. Staying inside the control limits is the planned progress path to the final goal - in this case, weight.

Wrap Up of this Essay

The Cone of Uncertainty is a signaling boundary of a Closed Loop Control system used to manage the project to success, with feedback from the comparison of the desired uncertainty in some parameter to the actual uncertainty in that parameter.

This boundary is defined BEFORE work starts, to serve as the PLANNED target to steer toward for a specific parameter. In simpler closed loop control systems, this is called the Set Point†.

† The set point is the desired or target value for an essential variable of a system, often used to describe a standard configuration or norm for the system. In project management or engineering development, the set point is a stochastic variable that evolves as the program progresses and may be connected in non-linear ways with other set points for other parameters.

  Related articles Are Estimates Really The Smell of Dysfunction? What's the Smell of Dysfunction? Herding Cats: The Economics of Decision Making on Software Projects Capabilities Based Planning - Part 2
Categories: Project Management

Floating Point Quality: Less Floaty, More Pointed

James Bach’s Blog - Tue, 06/20/2017 - 19:14

Years ago I sat next to the Numerics Test Team at Apple Computer. I teased them one day about how they had it easy: no user interface to worry about; a stateless world; perfectly predictable outcomes. The test lead just heaved a sigh and launched into a rant about how numerics testing is actually rather complicated and brimming with unexpected ambiguities. Apparently, there are many ways to interpret the IEEE floating point standard and learned people are not in agreement about how to do it. Implementing floating point arithmetic on a digital platform is a matter of tradeoffs between accuracy and performance. And don’t get them started about HP… apparently HP calculators had certain calculation bugs that the scientific community had grown used to. So the Apple guys had to duplicate the bugs in order to be considered “correct.”

Among the reasons why floating point is a problem for digital systems is that digital arithmetic is discrete and finite, whereas real numbers often are not. As my colleague Alan Jorgensen says, "This problem arises because computers do not represent some real numbers accurately. Just as we need a special notation to record one divided by three as a decimal fraction: 0.33333…, computers do not accurately represent one divided by ten. This has caused serious financial problems and, in at least one documented instance, death."

Anyway, Alan just patented a process that addresses this problem "by computing two limits (bounds) containing the represented real number that are carried through successive calculations. When the result is no longer sufficiently accurate the result is so marked, as are further calculations using that value. It is fail-safe and performs in real time. It can operate in conjunction with existing hardware and software. Conversion between existing standardized floating point and this new bounded floating point format are simple operations."
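For intuition only, here is a minimal sketch (my illustration in Python, not the patented method) of the bounding idea: carry a [low, high] pair through each operation, widen it to absorb each step's rounding, and flag the result once the bounds no longer agree to the precision you need.

import math

def bounded_add(a, b):
    # Add two bounded values, widening the result by one ulp on each side to
    # absorb the rounding error introduced by this step.
    lo = math.nextafter(a[0] + b[0], -math.inf)
    hi = math.nextafter(a[1] + b[1], math.inf)
    return lo, hi

tenth = (0.1, 0.1)   # 0.1 is not exactly representable in binary floating point
total = (0.0, 0.0)
for _ in range(10):  # adding 0.1 ten times does not give exactly 1.0
    total = bounded_add(total, tenth)

lo, hi = total
print(f"bounds: [{lo!r}, {hi!r}], width {hi - lo:.2e}")
print("still trustworthy to 1e-15?", (hi - lo) < 1e-15)  # flags lost precision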

If you are working with systems that must do extremely accurate and safe floating point calculations, you might want to check out the patent.

Categories: Testing & QA