Software Development Blogs: Programming, Software Testing, Agile, Project Management


Feed aggregator

Tinder: How does one of the largest recommendation engines decide who you'll see next?

We've heard a lot about the Netflix recommendation algorithm for movies, how Amazon matches you with stuff, and Google's infamous PageRank for search. How about Tinder? It turns out Tinder has a surprisingly thoughtful recommendation system for matching people.

This is from an extensive profile, Mr. (Swipe) Right?, on Tinder founder Sean Rad:

Categories: Architecture

Four Tips for Pair Writing

I am shepherding an experience report for XP 2016. A shepherd is sort-of like a technical editor. I help the writer(s) tell their story in the best possible way. I enjoy it and I learn from working with the authors to tell their stories.

The writers for this experience report want to pair-write. They have four co-authors. I offered them suggestions you might find useful:

Tip 1: Use question-driven writing

When you think about the questions you want to answer, you have several approaches to whatever you write. An experience report has this structure: what the initial state was and the pain there; what you did (the story of your work, the experience); and the end state, where you are now. You can play with that a little, but the whole point of an experience report is to document your experience. It is a story.

If you are not writing an experience report, organize your writing into the beginning, middle, end. If it’s a tips piece, each tip has a beginning, middle, end. It depends on how long the piece is.

When you use question-driven writing, you ask yourself, “What do people need to know in this section?” If you have a section about the software interacting with the hardware, you can ask the “What do people need to know” and “How can I show the interactions without bogging down in too much detail” questions. You might have other questions. I find those two questions useful.

Tip 2: Pair-write

I do this in several ways with my coauthors. We often discuss for a few minutes what we want to say in the article. If you have a longer article, maybe you discuss what you want to discuss in this section.

One person takes the keyboard (the driver). The other person watches the words form on the page (the navigator). When I pair-write in Google Docs, I offer to fix the other person’s spelling.

I don’t know about you, but my spelling does not work when I know someone is watching my words. It just doesn’t. When I pair, I don’t want the writer to back up. I don’t want to back the cursor up and I don’t want the other person to back up. I want to go. Zoom, zoom, zoom. That means I offer to fix the spelling, so the other person does not have to.

This doesn’t work all the time. I’m okay with the other person declining my offer, as long as they don’t go backwards. I become an evil witch when I have to watch someone use the delete/backspace key. Witch!

Tip 3: Consider mobbing/swarming on the work

If you write with up to four people (I have not written with more than four people), you might consider mobbing. One person has the keyboard, the other three make suggestions. I have done this just a few times and the mobbing made me crazy. We did not have good acceptance criteria, so each person had their own idea of what to do. Not a recipe for success. (That’s why I like question-driven writing.)

On the other hand, I have found that when we make a list of sections—maybe not a total outline—pairs of people can work on their writing at the same time. Each pair takes a section, works on that, and returns to the team with the section ready for review. I have also been in a position where someone did some research and returned to the writing team.

Tip 4: Use a Short Timebox for Writing

When I pair, I write or navigate in no more than 15-minute timeboxes. You might like an even shorter timebox. With most of my coauthors, I don’t turn on a timer. I write for one-to-several paragraphs and then we switch. We have a little discussion and then we’re writing again. Most of my timeboxes are 5-7 minutes and then we switch.

Pair Writing Traps

I have seen these traps when pair-writing:

  1. One person dictates to the other person. That smacks of first-author, all-the-rest-of-you-are-peons approach.
  2. One or both of you talk without writing. No. If someone isn’t writing in the first 90 seconds, you’re talking, not writing. Write. (This is the same problem as discussing the design without writing code to check your assumptions about the design.)

I didn’t realize I would make this a series. The post about writing by yourself is Four Tips to Writing Better and Faster.

I have a special registration for my writing workshop for pairs. If you are part of a pair, take a look and see if this would work for you.

Categories: Project Management

Backlog ordering done right!

Xebia Blog - Wed, 01/27/2016 - 11:00

Various methods exist for helping product owners decide which backlog item to start first. That it pays off to do this (more or less) right has been shown in the blog posts of Maurits Rijk and Jeff Sutherland [Rij2011, Sut2011].

These approaches to ordering backlog items all assume that items once picked up by the team are finished according to the motto: 'Stop starting, start finishing'. An example of a well-known algorithm for ordering is Weighted Shortest Job First (WSJF).

For items that may be interrupted, this does not result in the best possible schedule. Items that are typically interrupted by other items include story map slices, (large) epics, themes, Marketable Features, and possibly more.

In this blog I'll show what scheduling is more optimal and how it works.

[Figure: backlog items plotted as dots by value and effort, with a dashed 'windshield wiper' line]

Weighted Shortest Job First (WSJF)

In WSJF, the scheduling of work, i.e. of product backlog items, is based on both the effort and the (business) value of each item. The effort may be stated in duration, story points, or hours of work. The business value may be calculated using Cost of Delay or as prescribed by SAFe.

When effort and value are known for the backlog items, each item can be represented by a dot. See the picture to the right.
The proper scheduling is obtained by sweeping the dashed line from the bottom right to the upper left (like a windshield wiper).
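
To make this concrete, here is a minimal sketch in Python (my own illustration, with invented item names and numbers, not from the post) that orders a backlog by WSJF, i.e. business value divided by effort:

from dataclasses import dataclass

@dataclass
class Item:
    name: str
    value: float   # (business) value, e.g. Cost of Delay per period
    effort: float  # duration, story points, or hours of work

def wsjf(item):
    # Weighted Shortest Job First: highest value per unit of effort first.
    return item.value / item.effort

backlog = [Item("A", value=8, effort=2),
           Item("B", value=5, effort=5),
           Item("C", value=13, effort=3)]

for item in sorted(backlog, key=wsjf, reverse=True):
    print(item.name, round(wsjf(item), 2))
# Prints C (4.33), then A (4.0), then B (1.0): best value-for-effort first.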

 

[Figure: the value/effort diagram divided into equal-ROI sectors]

In practice, both the value and the effort are not precisely known but estimated. This means that product owners will treat dots that are 'close' to each other the same. The picture to the left shows this process: all green sectors have the same ROI (business value divided by effort) and therefore roughly the same WSJF.

Product owners will probably schedule items accordingly: the green cells from left to right, then the next 'row' of cells from left to right.

 

Other Scheduling Rules

It has been known at least since the 1950s (and probably earlier) that WSJF is the optimal scheduling mechanism if both value and size are known. The additional condition is that preemption, i.e. interruption of the work, is not allowed.

If any of these three conditions (known value, known size, no preemption) does not hold, WSJF is not the best mechanism and other scheduling rules do better. Other mechanisms are (for a more comprehensive overview and background see e.g. Table 3.1, page 146 in [Kle1976]):

No preemption allowed

  1. no value, no effort: FIFO
  2. only effort: SJF / SEPT
  3. only value: on value
  4. effort & value: WSJF / SEPT/C
  5. Story map slices: WSJF (no preemption)

FIFO = First in, First out
SEPT = Shortest Expected Processing Time
SJF = Shortest Job First
C = Cost

Examples: (a) user stories on the sprint backlog: WSJF. (b) Production incidents: FIFO or SJF. (c) Story map slices that represent a minimal marketable feature (or 'Feature' for short): leaving out a single user story from a Feature creates no business value (that's why it is a minimal marketable feature), and starting such a slice also means completing it before starting anything else; these are scheduled using WSJF. (d) User stories that are part of a Feature: they represent no value by themselves, but all are necessary to complete the Feature they belong to; schedule these according to SJF.

Preemption allowed

  1. no value: SIRPT (SIJF)
  2. effort & value: SIRPT/C or WSIJF (preemption)

SIRPT = Shortest Imminent Remaining Processing Time
SIRPT/C = Shortest Imminent Remaining Processing Time, weighted by Cost
SIJF = Shortest Imminent Job First
WSIJF = Weighted Shortest Imminent Job First

The 'official' naming for WSIJF is SIRPT/C. In this blog I'll use Weighted Shortest Imminent Job First, or WSIJF.

Examples: (a) story map slices that contain more than one Feature (minimal marketable feature); we call these Feature Sets. These are scheduled using WSIJF. (b) (Large) epics that consist of more than one Feature Set, or epics located at the top-right of the windshield-wiper diagram; the latter are usually split into smaller ones containing the most value for the least effort. Use WSIJF.

Summary

  • User Story (e.g. on sprint backlog and not part of a Feature): WSJF
  • User Story (part of a Feature): SJF
  • Feature: WSJF
  • Feature Set: WSIJF
  • Epics, Story Maps: WSIJF

Weighted Shortest Imminent Job First (WSIJF)

Mathematically, WSIJF is not as simple to calculate as is WSJF. Perhaps in another blog I'll explain this formula too, but in this blog I'll just describe what WSIJF does in words and show how it affects the diagram with colored sections.

WSIJF: work that is very likely to finish in the next periods gets a high priority.

What does this mean?

Remember that WSIJF only applies to work that is allowed to be preempted in favour of other work. Preemption happens at certain points in time. Familiar examples are Sprints, Releases (Go live events), or Product Increments as used in the SAFe framework.

The priority calculation takes into account:

  • the probability (or chance) that the work is completed in the next periods,
  • if completed in the next periods, the expected duration, and
  • the amount of time already spent.

[Figure: the value/effort dots diagram re-ordered under WSIJF]

Example: consider a Scrum team that has a cadence of 2-week sprints, with 3 sprints remaining until the next release. For every item on the backlog, determine the chance of completing it in the next sprint and, if completed, divide that chance by the expected duration. Do likewise for completing the same item in the next 2 and in the next 3 sprints. For each item you'll get 3 numbers. The value multiplied by the maximum of these is the priority of the backlog item.
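
A minimal sketch of that calculation in Python, following the example above (my own illustration; the per-horizon completion probabilities and conditional expected durations below are invented, not taken from the post):

def wsijf_priority(value, outlooks):
    # outlooks: for each horizon (next 1, 2, 3 sprints), a pair of
    # (probability of completing within that horizon,
    #  expected duration in sprints if it does complete).
    best_ratio = max(p / expected for p, expected in outlooks)
    return value * best_ratio

# Invented example: a small Feature Set versus a large epic.
small = wsijf_priority(8,  [(0.6, 0.9), (0.9, 1.2), (0.97, 1.3)])
large = wsijf_priority(20, [(0.05, 1.0), (0.2, 1.9), (0.5, 2.7)])
print(round(small, 2), round(large, 2))
# 6.0 versus 3.7: the small item wins despite its lower raw value,
# matching the qualitative effect described in the next paragraph.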

Qualitatively, the effect of WSIJF is that items with large effort get less priority and items with smaller effort get larger priority. This is depicted in the diagram to the right.

Example: Quantifying WSIJF

In the previous paragraph I described the basics of WSIJF and only qualitatively indicated its effect. In order to make this concrete, let's consider large epics that have been estimated using T-shirt sizes. Since WSIJF affects the sizing part and, to a lesser extent, the value part, I'll not consider the value in this case. In a subtle manner value also plays a role, but for the purpose of this blog I'll not discuss it here.

Teams are free to define T-shirt sizes as they like. In this blog, the following 5 T-shirt sizes are used:

  • XS ~ < 1 Sprint
  • S ~ 1 – 2 Sprints
  • M ~ 3 – 4 Sprints
  • L ~ 5 – 8 Sprints
  • XL ~ > 8 Sprints

Items of size XL take more than 8 sprints, so typically over 4 months. These are very large items.

[Figure: completion-probability distributions per T-shirt size]

Of course, estimates are just what they are: estimates. Items may take fewer or more sprints to complete. In fact, T-shirt sizes correspond to probability distributions: an 'M'-sized item has some probability of completing in fewer than 3 sprints and some probability of taking longer than 4 sprints. For these distributions I'll take:

  • XS ~ < 1 Sprint (85% probability to complete within 1 Sprint)
  • S ~ 1 – 2 Sprints (85% probability to complete within 3 Sprints)
  • M ~ 3 – 4 Sprints (85% probability to complete within 6 Sprints)
  • L ~ 5 – 8 Sprints (85% probability to complete within 11 Sprints)
  • XL ~ > 8 Sprints (85% probability to complete within 16 Sprints)

As can be seen from the picture, the larger the item, the more uncertainty there is about completing it in the next period.

Note: for the probability distribution, the Wald or Inverse Gaussian distribution has been used.

Based on these distributions, we can calculate the priorities according to WSIJF. These are summarized in the following table:

[Table: WSIJF priority per T-shirt size, compared with SJF]

Column 2 specifies the probability of completing an item in the next period, here the next 4 sprints. In the case of an 'M' this is 50%.
Column 3 shows what the expected duration will be if the item is completed within that period. For an 'M' sized item this is 3.22 sprints.
Column 4 contains the calculated priority as 'value of column 2' divided by 'value of column 3'.
The last column shows the value as calculated using SJF.

The table shows that items of size 'S' have the same priority value in both the SIJF and SJF schemes. Items larger than 'S' actually get a much lower priority than under SJF.

Note: there are slight modifications to the table when considering various period lengths and taking into account the time already spent on items. This additional complexity I'll leave for a future blog.

In practice product owners only have the estimated effort and value at hand. When ordering the backlog according to the colored sections shown earlier in this blog, it is easiest to use a modified version of this picture:

[Figure: modified sector chart ('backlog ordering done right') for ordering by estimated value and effort]

Schedule the work items according to the diagram above, using the original value and effort estimates: green cells from left to right, then the next row from left to right.

Conclusion

Most commonly used backlog prioritization mechanisms are based on some variation of ROI (value divided by effort). While this is the optimal scheduling for items for which preemption is not allowed, it is not the best way to schedule items that are allowed to be preempted.

As a guideline:

  • Use WSJF (Weighted Shortest Job First) for (smaller) work items where preemption is not allowed, such as individual user stories with (real) business value on the sprint backlog and Features (minimal marketable features, e.g. slices in a story map).
  • Use SJF (Shortest Job First) for user stories within a Feature.
  • Use WSIJF (Weighted Shortest Imminent Job First) for larger epics and collections of Features (Feature Sets), according to the table above, or more qualitatively using the modified sector chart.

References

[Kle1976] Queueing Systems, Vol. 2: Computer Applications, Version 2, Leonard Kleinrock, 1976

[Rij2011] A simulation to show the importance of backlog prioritisation, Maurits Rijk, June 2011, https://maurits.wordpress.com/2011/06/08/a-simulation-to-show-the-importance-of-backlog-prioritization/

[Sut2011] Why a Good Product Owner Will Increase Revenue at Least 20%, Jeff Sutherland, June 2011, https://www.scruminc.com/why-product-owner-will-increase-revenue/

The Role of Stories In Presentations, or The Two Reasons to Use Stories As A Presentation Tool

A good story makes information engaging

Presentations are the lingua franca of many . . . OK, most corporate IT departments. Presentations are used for many purposes, such as to inspire, inform, persuade or some combination of these. The problem is not that presentations are a common communication vehicle, but rather that they are often misused. I recently attended a Chamber of Commerce meeting where I watched a presenter go through slide after slide full of bullet points, charts and graphs. Trouble is, I can’t remember much of the presentation a week later. If he had approached the presentation as a story, using one of the common story structures and adding specific vignettes, the presentation would have had a better chance at making an emotional connection and being memorable.

Story structures are tools to build a connection with an audience and aid absorption of the overall message. An example of a common story structure used to guide a presentation is called the “Mountain”. The Mountain begins by describing a current state, then shows how challenges are overcome as the story moves away from the current state towards a conclusion that satisfies a need. I often use this structure to describe a project or an organizational assessment. Each step along the path can be highlighted using relevant and powerful vignettes to emphasize specific points and to increase the audience’s connection to the presentation.

The most basic goal of a presentation is for the audience to remember what was said. In a Wall Street Journal article, Cliff Atkinson, a communications consultant and author of Beyond Bullet Points, suggested that raw data is not as persuasive and memorable as many in business believe. Mr. Atkinson suggests distilling what is important and wrapping it in an engaging story so it can be remembered. The Inc. Magazine blog entry by Riley Gibson makes a similar point, suggesting that stories create interest and investment so that audiences can “hear” and accept what you are saying. One measure of stickiness is whether salient data and stories can be remembered after the presentation. Richard A. Krueger, in Using Stories in Evaluation (2010, pp. 404-405), stated, “Evidence suggests that people have an easier time remembering a story than recalling numerical data.” The story structure provides a container to hold the data and message that is at the heart of the presentation so people can remember it. This is similar to my son-in-law’s uncanny ability to remember movie lines. The story provides a scaffold that makes the facts and vignettes memorable. Supporting this thesis are any number of study guides prepared for students, such as the one published by the Michigan State University College of Osteopathic Medicine, which suggests using a story (the more emotive the better) to enhance long-term retention and recall.

As I was leaving the Chamber of Commerce meeting, I overheard someone say they were glad that at least there were appetizers before the presentation, because they had no clue what the point of the presentation was. The comment was harsh, but even I, the ultimate data geek, had a hard time remembering the punchline of the presentation. Whether a presentation is developed to inspire, inform or persuade, if the presentation does not connect with the audience, then the time and effort for all parties are wasted (even if the refreshments were good).

Next up: Four Types of Story Arcs Useful in Business
Four More Types of Story Arcs Useful in Business
Five Tips For Good Stories


Categories: Process Management

[New eBook] Download The No-nonsense Guide to App Monetization

Google Code Blog - Tue, 01/26/2016 - 21:23
Originally posted on the Inside AdMob Blog.

There are many questions to answer when developing a new app. One of the most important being, “what’s the best way to make money?”

Research firm Canalys predicts that by 2019, there will be 6.9 billion mobile phones in the world, in the hands of nearly 75% of the Earth’s population.* With growing demand in new markets and so many options for monetization, answering this question can be complicated.

Today we’re launching a new ebook “The No-nonsense Guide to App Monetization”, the latest in our No-nonsense series. This guide is designed for app developers starting to consider how to monetize their app. It provides a comprehensive overview of app monetization and shares helpful examples and practical tips to get you started.

In 10 minutes you’ll learn:
  • What the seven primary app monetization models are and the pros and cons for each
  • How to choose the right monetization strategy for your app
  • Important considerations to keep in mind when implementing your monetization plan

Download a free copy here.

Also, within the next few weeks, we’ll be releasing blog posts with app developers sharing candid stories and helpful tips on app monetization. Our next post will highlight the tactic Christoph Pferschy, the app developer behind Hydro Coach, used to scalably release 22 localized versions of his app.

Until then, for more tips on app monetization, be sure to stay connected on all things AdMob by following our Twitter and Google+ pages.

Posted by Joe Salisbury, Product Specialist, AdMob

* Canalys 2015, “Worldwide smart phones forecast overview 2015-2019”


Categories: Programming

A Simple Way to Run a Sprint Retrospective

Mike Cohn's Blog - Tue, 01/26/2016 - 16:00

There are perhaps as many ways to run a retrospective as there are teams to conduct them. I want to describe my favorite way, especially because it's an approach that has stood the test of time, having worked for years with many, many teams.

The Start, Stop and Continue Retrospective

I like to conduct a sprint retrospective by asking team members what they would start, stop and continue doing. This type of meeting becomes known as a “start, stop and continue” meeting.

The start items are things a team member thinks the team should add to its process. Some examples would be:

  • Showing the software to customers early
  • Specifying acceptance tests early and with customers
  • Doing code inspections
  • Being on time for daily standups
  • Finishing one story before starting the next

Items on the stop list are things that someone on the team thinks are inefficient or are wasting time. The team should stop doing these. Examples from past retrospectives include:

  • Checking in code without being sure all tests will pass
  • Taking more than 15 minutes for daily scrum meetings
  • Skipping product backlog refinement meetings when we’re feeling behind at the end of a sprint

The continue list contains items the team would like to continue to emphasize but that are not yet habits. So any of the start or stop items above could go onto the continue list and stay there for a few sprints.

Eventually--once the item became a habit--it would be removed from the continue list. Otherwise, the continue list would become tremendously long.

Ask for Items in Different Ways

A ScrumMaster can ask team members for items in different ways. The easiest is just to say, “Yell them out,” and team members are free to intersperse start items with stops and continues. This is my default mode.

But, it can get repetitious sprint after sprint. So, I’ll mix things up and sometimes I’ll go around the room asking each person to give me one item, perhaps making two passes around the room before opening it up for additional items.

Other times, I’ll want to emphasize a specific type of item--often the stops. So I’ll ask all team members to yell out nothing but things to stop doing. Or, I’ll combine approaches and go person-by-person around the room asking each to identify one thing to stop in the team’s current process.

There are plenty of ways to mix up the idea generation in a start-stop-continue retrospective so that it will take a long time before it gets boring or repetitious.

Vote

After enough ideas have been generated, have team members vote for the most important item or items. It’s often obvious when it’s time to do this because the creativity has died down and new ideas are not coming very quickly.

The ScrumMaster can have each team member vote for the one most important idea, or can use any typical multi-voting approach. For example, give each team member three votes to allocate as they wish (including all three votes to the same item).

I like multi-voting in a retrospective. The nature of most retrospective items is that many do not really take time to do. Many are more behavioral. Consider being on time for daily standups, from the examples above. That doesn’t take any time. In fact, perhaps it saves time.

Multi-voting would allow a team to choose to work on that behavior and perhaps another couple of items. Generally, I’d pick no more than three. Even if they don’t take any (or much) time, choosing too many items does detract from the importance of those selected.

In addition to voting for new items to pursue, discuss whether items on the continue list have been achieved, are no longer important or should be otherwise removed from the list.

The Next Retrospective

In the next retrospective, I suggest the ScrumMaster bring the list of ideas generated at the previous retrospective--both the ideas chosen to be worked on and those not. These can help jump-start discussion for the next retrospective.

I tend to write them on a large sheet of paper and tape it to the wall without any fanfare or discussion. The items are just there if the team needs them or wants to refer to them. I then facilitate a new start, stop, continue discussion.

Benefits of Start, Stop and Continue

I find that conducting retrospectives this way is fast, easy, non-threatening and it works. A start, stop and continue meeting is very action-oriented. No time is spent focused on feelings. We don’t ask team members how they felt during a sprint; were they happy or sad, warm or fuzzy.

Each item generated will lead directly to a change in behavior. The team will start doing something, or they will stop doing something, or they will continue doing something until it becomes a habit.

Yes, I’m prepared for many people to leave comments saying it’s important to work through people’s feelings first. Or that we won’t know how to act until we’ve first dealt with how people feel. Go ahead. In some cases that may be true. But in plenty of other cases, we can identify what to do (“we need to start testing sooner”) directly.

And that’s the strength of a start, stop, continue approach to sprint retrospectives.

What Do You Think?

What do you think? How do you like to run retrospectives? And specifically, is there anything you’d like to start, stop or continue about your own retrospectives?

Software Development Conferences Forecast January 2016

From the Editor of Methods & Tools - Tue, 01/26/2016 - 09:34
Here is a list of software development related conferences and events on Agile project management ( Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP), DevOps and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods […]

The Problem with #Noestimates

Herding Cats - Glen Alleman - Tue, 01/26/2016 - 06:48

The notion of a balanced discussion of the topic of not estimating work on agile projects has yet to produce any principles on which this can occur. The #NoEstimates advocates have not provided the principles by which a decision about how best to spend the customer's money, in the presence of uncertainty about the outcomes (the microeconomics of decision making), can be made without estimating the impact of the decision.

The one principle-based suggestion is slicing, which is considered #NoEstimates. But of course slicing is the basis of making an estimate: reducing the variances of the work into a narrow range, then assessing the effort to produce similar work in the future. This depends on that future work being like the work in the past. It also assumes there are no emergent uncertainties that create risk to that work, either naturally occurring uncertainties (aleatory) or event-based uncertainties (epistemic).

As well, slicing is also standard practice in all credible Basis of Estimate processes used in all domains where a non-trivial amount of money is at risk.
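
To make the slicing point concrete, here is a small sketch (mine, not from the post, with invented historical data) of how sliced work plus history yields an estimate with an explicit range: resample past cycle times of similarly sized slices to forecast the remaining work. As noted above, this only works to the extent that future slices resemble past ones.

import random

# Invented history: cycle times (days) of past slices of similar size.
past_cycle_times = [3, 5, 4, 6, 3, 8, 4, 5, 7, 4]
remaining_slices = 12

def simulate_total_days(trials=10_000):
    # Bootstrap: each trial draws one historical cycle time per remaining slice.
    totals = []
    for _ in range(trials):
        totals.append(sum(random.choice(past_cycle_times)
                          for _ in range(remaining_slices)))
    return sorted(totals)

totals = simulate_total_days()
p50 = totals[len(totals) // 2]
p85 = totals[int(len(totals) * 0.85)]
print(f"~50% chance of finishing within {p50} days, ~85% within {p85} days")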

Without a foundational set of principles to support the conjecture that decisions can be made without knowing the outcome of that decision, #Noestimates is a solution looking for a problem to solve. 

And there are many problems, starting with Bad Management, a variety of biases, cooking the books for political reasons, lack of experience, skill, and knowledge in estimating, poor understanding of risk management, and numerous other root causes of project difficulties and, many times, downright failure, because the knowledge of how much something will cost, when it will be ready for use, and what actually will be produced is not available to the decision makers.

I'm headed to Florida this week for a conference where Agile and Earned Value Management Systems are the topic. I'm reminded by a NASA Cost Director colleague that there are three reasons projects get over budget, fall behind schedule, and don't produce the expected outcomes:

  1. We couldn't know - it's a science project and we're inventing new physics
  2. We didn't know - we didn't do our homework
  3. We don't want to know - if we knew, we would cancel the project before it starts

To date no evidence has been put forward to show how to make a decision when spending other people's money in the presence of uncertainty without estimating the impact of that decision, so that an informed choice can be made on how to proceed.

Categories: Project Management

Design of a Modern Cache

This is a guest post by Benjamin Manes, who did engineery things for Google and is now doing engineery things for a new load documentation startup, LoadDocs.

Caching is a common approach for improving performance, yet most implementations use strictly classical techniques. In this article we will explore the modern methods used by Caffeine, an open-source Java caching library, that yield high hit rates and excellent concurrency. These ideas can be translated to your favorite language and hopefully some readers will be inspired to do just that.

Eviction Policy

A cache’s eviction policy tries to predict which entries are most likely to be used again in the near future, thereby maximizing the hit ratio. The Least Recently Used (LRU) policy is perhaps the most popular due to its simplicity, good runtime performance, and a decent hit rate in common workloads. Its ability to predict the future is limited to the history of the entries residing in the cache, preferring to give the last access the highest priority by guessing that it is the most likely to be reused again soon...
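
As a point of reference for the classical policy the excerpt describes, here is a minimal LRU cache sketch in Python (Caffeine itself is a Java library; this illustrates only the generic LRU idea, not Caffeine's implementation):

from collections import OrderedDict

class LRUCache:
    """Minimal illustration of the classical LRU eviction policy."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._entries = OrderedDict()  # least recently used first

    def get(self, key):
        if key not in self._entries:
            return None
        # A hit moves the entry to the most-recently-used position.
        self._entries.move_to_end(key)
        return self._entries[key]

    def put(self, key, value):
        if key in self._entries:
            self._entries.move_to_end(key)
        self._entries[key] = value
        if len(self._entries) > self.capacity:
            # Evict the least recently used entry.
            self._entries.popitem(last=False)

# Example use:
# cache = LRUCache(2)
# cache.put("a", 1); cache.put("b", 2); cache.get("a"); cache.put("c", 3)
# "b" was the least recently used entry, so it has been evicted.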

Categories: Architecture

When Should You Improve?

Making the Complex Simple - John Sonmez - Mon, 01/25/2016 - 14:00

Resolutions for the New Year seldom have the lasting impact they intend to. Unpacking the reasons for this is difficult. The human psyche is so complex it can be hard to determine all the reasons for why we fall short. The problem with taking the New Year as an opportunity to change our habits is […]

The post When Should You Improve? appeared first on Simple Programmer.

Categories: Programming

Reinventing Testing: What is Integration Testing? (part 2)

James Bach’s Blog - Mon, 01/25/2016 - 10:45

These thoughts have become better because of these specific commenters on part 1: Jeff Nyman, James Huggett, Sean McErlean, Liza Ivinskaia, Jokin Aspiazu, Maxim Mikhailov, Anita Gujarathi, Mike Talks, Amit Wertheimer, Simon Morley, Dimitar Dimitrov, John Stevenson. Additionally, thank you Michael Bolton and thanks to the student whose productive confusion helped me discover a blindspot in my work, Anita Gujarathi.

Integration testing is a term I don’t use much– not because it doesn’t matter, but because it is so fundamental that it is already baked into many of the other working concepts and techniques of testing. Still, in the past week, I decided to upgrade my ability to quickly explain integration, integration risk, and integration testing. This is part of a process I recommend for all serious testers. I call it: reinventing testing. Each of us may reinvent testing concepts for ourselves, and engage in vigorous debates about them (see the comments on part 1, which is now the most commented of any post I have ever done).

For those of you interested in getting to a common language for testing, this is what I believe is the best way we have available to us. As each of us works to clarify his own thinking, a de facto consensus about reasonable testing ontology will form over time, community by community.

So here we go…

There are several kinds of testing that involve, overlap with, or may even be synonymous with integration testing, including: regression testing, system testing, field testing, interoperability testing, compatibility testing, platform testing, and risk-based testing. Most testing, in fact, no matter what it’s called, is also integration testing.

Here is my definition of integration testing, based on my own analysis, conversations with RST instructors (mainly Michael Bolton), and stimulated by the many commenters from part 1. All of my assertions and definitions are true within the Rapid Software Testing methodology namespace, which means that you don’t have to agree with me unless you claim to be using RST.

What is integration testing?

Integration testing is:
1. Testing motivated by potential risk related to integration.
2. Tests designed specifically to assess risk related to integration.

Notes:

1. “Motivated by” and “designed specifically to” overlap but are not the same. For instance, if you know that a dangerous criminal is on the loose in your neighborhood you may behave in a generally cautious or vigilant way even if you don’t know where the criminal is or what he looks like. But if you know what he looks like, what he is wearing, how he behaves or where he is, you can take more specific measures to find him or avoid him. Similarly, a newly integrated product may create a situation where any kind of testing may be worth doing, even if that testing is not specifically aimed at uncovering integration bugs, as such; OR you can perform tests aimed at exposing just the sort of bugs that integration typically causes, such as by performing operations that maximize the interaction of components.

The phrase “integration testing” may therefore represent ANY testing performed specifically in an “integration context”, or applying a specific “integration test technique” in ANY context.

This is a special case of the difference between risk-based test management and risk-based test design. The former assigns resources to places where there is potential risk but does not dictate the testing to be performed; whereas the latter crafts specific tests to examine the product for specific kinds of problems.

2. “Potential risk” is not the same as “risk.” Risk is the danger of something bad happening, and it can be viewed from at least three perspectives: probability of a bad event occurring, the impact of that event if it occurs, and our uncertainty about either of those things. A potential risk is a risk about which there is substantial uncertainty (in other words, you don’t know how likely the bug is to be in the product or you don’t know how bad it could be if it were present). The main point of testing is to eliminate uncertainty about risk, so this often begins with guessing about potential risk (in other words, making wild guesses, educated guesses, or highly informed analyses about where bugs are likely to be).

Example: I am testing something for the first time. I don’t know how it will deal with stressful input, but stress often causes failure, so that’s a potential risk. If I were to perform stress testing, I would learn a lot about how the product really handles stress, and the potential risk would be transformed into a high risk (if I found serious bugs related to stress) or a low risk (if the product handled stress in a consistently graceful way).

What is integration?

General definition from the Oxford English Dictionary: “The making up or composition of a whole by adding together or combining the separate parts or elements; combination into an integral whole: a making whole or entire.”

Based on this, we can make a simple technical definition related to products:

Integration is:
v. the process of constructing a product from parts.
n. a product constructed from parts.

Now, based on General Systems Theory, we make these assertions:

An integration, in some way and to some degree:

  1. Is composed of parts:
  • …that come from differing sources.
  • …that were produced for differing purposes.
  • …that were produced at different times.
  • …that have differing attributes.
  2. Creates or represents an internal environment for its parts:
  • …in which its parts interact among themselves.
  • …in which its parts depend on each other.
  • …in which its parts interact with or depend on an external environment.
  • …in which these things are not visible from the outside.
  3. Possesses attributes relative to its parts:
  • …that depend on them.
  • …that differ from them.

Therefore, you might not be able to discern everything you want to know about an integration just by looking at its parts.

This is why integration risk exists. In complex or important systems, integration testing will be critically important, especially after changes have been made.

It may be possible to gain enough knowledge about an integration to characterize the risk (or to speak more plainly: it may be possible to find all the important integration bugs) without doing integration testing. You might be able to do it with unit testing. However, that process, although possible in some cases, might be impractical. This is the case partly because the parts may have been produced by different people with different assumptions, because it is difficult to simulate the environment of an integration prior to actual integration, or because unit testing tends to focus on what the units CAN do and not on what they ACTUALLY NEED to do. (If you unit test a calculator, that’s a lot of work. But if that calculator will only ever be asked to add numbers under 50, you don’t need to do all that work.)

Integration testing, although in some senses complex, may actually simplify your testing, since some parts mask the behavior of other parts and maybe all you need to care about is the final outputs.

Notes:

1. “In some way and to some degree” means that these assertions are to be interpreted heuristically. In any specific situation, these assertions are highly likely to apply in some interesting or important way, but might not. An obvious example is where I wrote above that the “parts interact with each other.” The stricter truth is that the parts within an integration probably do not EACH directly interact with ALL the other ones, and probably do not interact to the same degree and in the same ways. To think of it heuristically, interpret it as a gentle warning such as  “if you integrate something, make it your business to know how the parts might interact or depend on each other, because that knowledge is probably important.”

By using the phrase “in some way and to some degree” as a blanket qualifier, I can simplify the rest of the text, since I don’t have to embed other qualifiers.

2. “Constructing from parts” does not necessarily mean that the parts pre-existed the product, or have a separate existence outside the product, or are unchanged by the process of integration. It just means that we can think productively about pieces of the product and how they interact with other pieces.

3. A product may possess attributes that none of its parts possess, or that differ from them in unanticipated or unknown ways. A simple example is the stability of a tripod, which is not found in any of its individual legs, but in all the legs working together.

4. Disintegration also creates integration risk. When you take things away, or take things apart, you end up with a new integration, and that is subject to much the same risk as putting them together.

5. The attributes of a product and all its behaviors obviously depend largely on the parts that comprise it, but also on other factors such as the state of those parts, the configurations and states of external and internal environments, and the underlying rules by which those things operate (ultimately, physics, but more immediately, the communication and processing protocols of the computing environment).

6. Environment refers to the outside of some object (an object being a product or a part of a product), comprising factors that may interact with that object. A particular environment might be internal in some respects or external in other respects, at the same time.

  • An internal environment is an environment controlled by the product and accessible only to its parts. It is inside the product, but from the vantage point of some of its parts, it’s outside of them. For instance, to a spark plug the inside of an engine cylinder is an environment, but since it is not outside the car as a whole, it’s an internal environment. Technology often consists of deeply nested environments.
  • An external environment is an environment inhabited but not controlled by the product.
  • Control is not an all-or-nothing thing. There are different levels and types of control. For this reason it is not always possible to strictly identify the exact scope of a product or its various and possibly overlapping environments. This fact is much of what makes testing– and especially security testing– such a challenging problem. A lot of malicious hacking is based on the discovery that something that the developers thought was outside the product is sometimes inside it.

7. An interaction occurs when one thing influences another thing. (A “thing” can be a part, an environment, a whole product, or anything else.)

8. A dependency occurs when one thing requires another thing to perform an action or possess an attribute (or not to) in order for the first thing to behave in a certain way or fulfill a certain requirement. See connascence and coupling.

9. Integration is not all or nothing– there are differing degrees and kinds. A product may be accidentally integrated, in that it works using parts that no one realizes that it has. It may be loosely integrated, such as a gecko that can jettison its tail, or a browser with a plugin. It may be tightly integrated, such as when we take the code from one product and add it to another product in different places, editing as we go. (Or when you digest food.) It may preserve the existing interfaces of its parts or violate them or re-design them or eliminate them. The integration definition and assertions, above, form a heuristic pattern– a sort of lens– by which we can make better sense of the product and how it might fail. Different people may identify different things as parts, environments or products. That’s okay. We are free to move the lens around and try out different perspectives, too.

Example of an Integration Problem

[Diagram: two components whose environment dependencies clash once they are installed together]

This diagram shows a classic integration bug: dueling dependencies. In the top two panels, two components are happy to work within their own environments. Neither is aware of the other while they work on, let’s say, separate computers.

But when they are installed together on the same machine, it may turn out that each depends on factors that exclude the other, even though the components themselves don’t clash (the blue A box and the blue B boxes don’t overlap). Often such dependencies are poorly documented, and may be entirely unknown to the developer before integration time.

It is possible to discover this through unit testing… but so much easier and probably cheaper just to try to integrate sooner rather than later and test in that context.

 

Categories: Testing & QA

Android Developer Story: Music app developer DJiT builds higher quality experiences and a successful business on Android

Android Developers Blog - Sun, 01/24/2016 - 23:23

Posted by Lily Sheringham, Google Play team

Paris-based DJiT is the creator of edjing, one of the most downloaded DJ apps in the world; it now has more than 60 million downloads and a presence in 182 countries. Following their launch on Android, the platform became the largest contributor to business growth, with 50 percent of total revenue and more than 70 percent of new downloads coming from their Android users.

Hear from Jean-Baptiste Hironde, CEO & Co-founder, Séverine Payet, Marketing Manager, and Damien Delépine, Android Software Engineer, to learn how DJiT improved latency on the new Android Marshmallow release, as well as leveraged other Android and Google Play features to create higher quality apps.



Find out more about building great audio apps and how to find success on Google Play.

Categories: Programming

SPaMCAST 377 – Evan Leybourn, No More Projects

http://www.spamcast.net

Listen Now

Subscribe on iTunes

We begin year 10 of the Software Process and Measurement Cast with our interview with Evan Leybourn. Evan returns to the Software Process and Measurement Cast to discuss the “end to IT projects.” We discussed the idea of #NoProject and continuous delivery, and whether this is just an “IT” thing or something that can encompass the entire business. Evan’s views are informative and a bit provocative. I have not stopped thinking about the concepts we discussed since originally taping the interview.

Evan last appeared on SPaMCAST 284 – Evan Leybourn, Directing The Agile Organization to discuss his book Directing the Agile Organization.

Evan’s Bio
Evan pioneered the field of Agile Business Management; applying the successful concepts and practices from the Lean and Agile movements to corporate management. He keeps busy as a business leader, consultant, non-executive director, conference speaker, internationally published author and father.

Evan has a passion for building effective and productive organizations, filled with actively engaged and committed people. Only through this, can organizations flourish. His experience while holding senior leadership and board positions in both private industry and the government has driven his work in business agility and he regularly speaks on these topics at local and international industry conferences.

As well as writing “Directing the Agile Organization,” Evan currently works for IBM in Singapore to help them become a leading agile organization. As always, all thoughts, ideas, and comments are his own and do not represent his clients or employer.

All of Evan’s contact information and blog can be accessed on his website.

Remember to help grow the podcast by reviewing the SPaMCAST on iTunes, Stitcher or your favorite podcatcher/player and then share the review! Help your friends find the Software Process and Measurement Cast. After all, friends help friends find great podcasts!

Re-Read Saturday News

We continue the re-read of How to Measure Anything: Finding the Value of Intangibles in Business, Third Edition, by Douglas W. Hubbard on the Software Process and Measurement Blog. In Chapter Six, we discussed using risk in quantitative analysis and Monte Carlo analysis.

 

Upcoming Events

I am facilitating the CMMI Capability Challenge. This new competition showcases thought leaders who are building organizational capability and improving performance. Listeners will be asked to vote on the winning idea which will be presented at the CMMI Institute’s Capability Counts 2016 conference.  The next CMMI Capability Challenge session will be held on February 17 at 11 AM EST.

http://cmmiinstitute.com/conferences#thecapabilitychallenge

 

Next SPaMCAST

The next Software Process and Measurement Cast will feature our essay on the relationship between done and value. The essay is in response to a question from Anteneh Berhane.  Anteneh called me to ask one of the hardest questions I had ever been asked: why doesn’t the definition of done include value?

We will also have columns from Jeremy Berriault’s QA Corner and Steve Tendon discussing the next chapter in the book  Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.


Categories: Process Management

Clojure: First steps with reducers

Mark Needham - Sun, 01/24/2016 - 23:01

I’ve been playing around with Clojure a bit today in preparation for a talk I’m giving next week and found myself writing the following code to apply the same function to three different scores:

(defn log2 [n]
  (/ (Math/log n) (Math/log 2)))
 
(defn score-item [n]
  (if (= n 0) 0 (log2 n)))
 
(+ (score-item 12) (score-item 13) (score-item 5))
;; => 9.60733031374961

I’d forgotten about folding over a collection but quickly remembered that I could achieve the same result with the following code:

(reduce #(+ %1 (score-item %2)) 0 [12 13 5])
;; => 9.60733031374961

The added advantage here is that if I want to add a 4th score to the mix all I need to do is append it to the end of the vector:

(reduce #(+ %1 (score-item %2)) 0 [12 13 5 6])
;; => 12.192292814470767

However, while Googling to remind myself of the order of the arguments to reduce I kept coming across articles and documentation about reducers which I’d heard about but never used.

As I understand it, they're used to achieve performance gains and easier composition of functions over collections, so I'm not sure how useful they'll be to me, but I thought I'd give them a try.

Our first step is to bring the namespace into scope:

(require '[clojure.core.reducers :as r])

Now we can compute the same result using the reduce function:

(r/reduce #(+ %1 (score-item %2)) 0 [12 13 5 6])
;; => 12.192292814470767

So far, so identical. If we wanted to calculate individual scores and then filter out those below a certain threshold the code would behave a little differently:

(->> [12 13 5 6]
     (map score-item)
     (filter #(> % 3)))
;; => (3.5849625007211565 3.700439718141092)
 
(->> [12 13 5 6]
     (r/map score-item)
     (r/filter #(> % 3)))
;; => #object[clojure.core.reducers$folder$reify__19192 0x5d0edf21 "clojure.core.reducers$folder$reify__19192@5d0edf21"]

Instead of giving us a vector of scores, the reducers version returns a reducer, which we can pass to reduce or fold if we want an accumulated result, or to into if we want to output a collection. In this case we want the latter:

(->> [12 13 5 6]
     (r/map score-item)
     (r/filter #(> % 3))
     (into []))
;; => (3.5849625007211565 3.700439718141092)

With a measly 4-item collection I don't think the reducers are going to provide much speed improvement here, but we'd need to use the fold function if we want processing of the collection to be done in parallel.

One for next time!

Categories: Programming

SPaMCAST 378 – Evan Leybourn, No More Projects

Software Process and Measurement Cast - Sun, 01/24/2016 - 23:00

We begin year 10 of the Software Process and Measurement Cast with our Interview with Evan Leybourn. Evan returns to the Software Process and Measurement Cast to discuss the "end to IT projects." We discussed the idea of #NoProject and continuous delivery, and whether this is just an “IT” thing or something that can encompass the entire business.  Evan’s views are informative and bit provocative.  I have not stopped thinking about the concepts we discussed since originally taping the interview.

Evan last appeared on SPaMCAST 284 – Evan Leybourn, Directing The Agile Organization to discuss his book Directing the Agile Organization.

Evan’s Bio
Evan pioneered the field of Agile Business Management; applying the successful concepts and practices from the Lean and Agile movements to corporate management. He keeps busy as a business leader, consultant, non-executive director, conference speaker, internationally published author and father.

Evan has a passion for building effective and productive organizations, filled with actively engaged and committed people. Only through this, can organizations flourish. His experience while holding senior leadership and board positions in both private industry and the government has driven his work in business agility and he regularly speaks on these topics at local and international industry conferences.

As well as writing "Directing the Agile Organization.", Evan currently works for IBM in Singapore to help them become a leading agile organization. As always, all thoughts, ideas, and comments are his own and do not represent his clients or employer.

All of Evan’s contact information and blog can be accessed on his website.

Remember to help grow the podcast by reviewing the SPaMCAST on iTunes, Stitcher or your favorite podcatcher/player and then share the review! Help your friends find the Software Process and Measurement Cast. After all, friends help friends find great podcasts!

Re-Read Saturday News

We continue the re-read of How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition by Douglas W. Hubbard on the Software Process and Measurement Blog. In Chapter Six, we discussed using risk in quantitative analysis and the Monte Carlo analysis.

 

Upcoming Events

I am facilitating the CMMI Capability Challenge. This new competition showcases thought leaders who are building organizational capability and improving performance. Listeners will be asked to vote on the winning idea which will be presented at the CMMI Institute’s Capability Counts 2016 conference.  The next CMMI Capability Challenge session will be held on February 17 at 11 AM EST.

http://cmmiinstitute.com/conferences#thecapabilitychallenge

 

Next SPaMCAST

The next Software Process and Measurement Cast will feature our essay on the relationship between done and value. The essay is in response to a question from Anteneh Berhane.  Anteneh called me to ask one of the hardest questions I had ever been asked: why doesn’t the definition of done include value?

We will also have columns from Jeremy Berriault’s QA Corner and Steve Tendon discussing the next chapter in the book  Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.

Categories: Process Management

The Man Who Opened the Door

James Bach’s Blog - Sun, 01/24/2016 - 22:09

I just heard that Ed Yourdon died.

In ’93 or early ’94, I got a strange email from him. He had heard about me in Mexico and he wanted to meet. I had never been to Mexico and I had never met or spoken to Ed. I was shocked: One of the most famous people in the software development methodology world wanted to talk to me, a test manager in Silicon Valley who had almost no writing and had spoken at only one conference! This was the very first time I realized that I had begun to build a reputation in the industry.

Ed was not the first famous guy I had met. I met Boris Beizer at that conference I mentioned, and that did not go well (we yelled at each other… he told me that I was full of shit… that kind of thing). I thought that might be the end of my ambition to rock the testing industry, if the heavy hitters were going to hate me.

Ed was a heavy hitter. I owned many of his books and I had carefully read his work on structured analysis. He was one of my idols.

So we met. We had a nice dinner at the Hyatt in Burlingame, south of San Francisco. He told me I needed to study systems thinking more deeply. He challenged me to write a book and asked me to write articles for American Programmer (later renamed to the Cutter IT Journal).

The thing that got to me was that Ed treated me with respect. He asked me many questions. He encouraged me to debate him. He pushed me to write articles on the CMM and on Good Enough Software– both subjects that got me a lot of favorable attention.

On the day of our meeting, he was 49– the same age I am now. He set me on a path to become a guy like him– because he showed me (as many others would later do, as well) that the great among us are people who help other people aspire to be great, too. I enjoy helping people, but reflecting on how I was helped reminds me that it is not just fun, it’s a moral imperative. If Ed reached out his hand to me, some stranger, how can I not do the same?

Ed saw something in me. Even now I do not want to disappoint him.

Rest in Peace, man.

 

Categories: Testing & QA

How To Measure Anything, Chapter 6: Quantifying Risk Through Modeling


How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition

Chapter 6 of How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition, is titled: Quantifying Risk Through Modeling. Chapter 6 builds on the basics described in Chapter 4 (define the decision and the data that will be needed) and Chapter 5 (determine what is known). Hubbard addresses the topic through two overarching themes: the quantification of risk, and the use of Monte Carlo analysis to model outcomes.

Risk is the possibility that a loss or some other bad thing will occur; that possibility of something bad happening equates to uncertainty. Risk is often expressed in qualitative terms such as low, medium, high, and the ever-popular really high, rather than in quantified terms. However, that qualitative approach generates ambiguity. Qualitative designations provide little measurement value outside of a sort of measurement placebo effect. Even though an absolute value for risk might not be knowable, risk can be quantified as a range of values. The quantification of risk is important, both in terms of understanding risk and in terms of usefulness for defining overall outcomes. Defining a range of risks makes understanding the amount of risk in any decision less ambiguous. Quantifying risk is also the foundation of further measurement needed for decision-making. Hubbard hammers home the point that measurement is done to inform some decision that is uncertain and has negative consequences if it turns out wrong.

The goal of Monte Carlo analysis is to provide input for better decision making under uncertainty. When you allow yourself to use ranges and probabilities, you really don’t have to assume anything you don’t know for a fact (Chapter 5 showed us how to estimate based on what we know). All risks can be expressed by the range of uncertainty on the costs and benefits and the probabilities of events that may affect them. Turning a range of estimates into a set of predicted outcomes requires math. Monte Carlo analysis is a mathematical technique that uses the estimated uncertainty in decisions to furnish a range of possible outcomes and the probabilities that they will occur.
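
As a concrete illustration of the mechanics – a minimal sketch only, not Hubbard’s own model, and with interval bounds invented for the example – a single-variable Monte Carlo simulation can be built from Hubbard’s rule of thumb that a calibrated 90% confidence interval, treated as a normal distribution, has its mean at the midpoint of the interval and a standard deviation of (upper - lower) / 3.29:

;; one random draw from a normal distribution defined by a calibrated 90% CI
(def rng (java.util.Random.))

(defn sample-90ci [lower upper]
  (let [mean (/ (+ lower upper) 2.0)
        sd   (/ (- upper lower) 3.29)]
    (+ mean (* sd (.nextGaussian rng)))))

;; hypothetical decision: benefit estimated at a 90% CI of 150-400 (k$),
;; cost at a 90% CI of 100-250 (k$); what share of 10,000 scenarios lose money?
(let [net (repeatedly 10000 #(- (sample-90ci 150 400) (sample-90ci 100 250)))]
  (double (/ (count (filter neg? net)) 10000)))
;; => the simulated probability of a loss (varies from run to run)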

Monte Carlo analysis can incorporate a wide range of scenarios and variables. Hubbard points out that it is easy to get carried away with the detail of the model. Models should only be as sophisticated as needed to add value to a decision. Remember, as Gene Hughson of Form Follows Function says, “all models are wrong.” Models are abstractions of real life; there is always detail you leave out, no matter how sophisticated the model becomes. Hubbard suggests that model users always ask whether a new, more complex model is an improvement on any alternative model. Quantification provides a platform for making consistent choices and for clearly stating how risk-averse or risk-tolerant any organization really is.

Hubbard closes this chapter by stating a risk paradox.

“If an organization uses quantitative risk analysis at all, it is usually for routine operation decisions.  The largest, most risky decisions get the least amount of risk analysis.”

The combination of estimation (Chapter 5), quantifying risk, and Monte Carlo analysis may seem complex, which keeps many decision makers from using the technique; this is especially true in software development, hence the paradox. For example, every software development estimation problem, whether Agile, lean or plan based, has a large degree of uncertainty embedded in the process and therefore is a perfect candidate for Monte Carlo analysis. However, very few estimators understand or use the technique. Learning Monte Carlo analysis (and using one of the many tools that do the mathematical heavy lifting), or alternately hiring someone to perform risk analysis, are both paths to adding quantitative data to decision making. When making decisions under conditions of uncertainty, Monte Carlo analysis is a necessity for doing the math needed.

Previous Installments in Re-read Saturday, How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition

How To Measure Anything, Third Edition, Introduction

Chapter 1: The Challenge of Intangibles

Chapter 2: An Intuitive Measurement Habit: Eratosthenes, Enrico, and Emily

Chapter 3: The Illusions of Intangibles: Why Immeasurables Aren’t

Chapter 4: Clarifying the Measurement Problem

Chapter 5: Calibrated Estimates: How Much Do You Know Now?


Categories: Process Management

Project Management is a Closed Loop Control System

Herding Cats - Glen Alleman - Sat, 01/23/2016 - 23:27

Without a desired delivery date, target budget, and expected capabilities, a control system is of little interest to those providing the money at the business level. There is no way to assure that those needs – date, budget, capabilities – can be met with the current capacity for work, the efficacy of the work process, or the budget absorption of that work.

With a need date, target budget, and expected capability outcome, a control system is the basis of increasing the probability of success. These targets are the baseline to steer toward. Without a steering target, the management of the project is Open Loop. There are two types of control system:

  • Closed Loop Control – where the output signal has direct impact on the control action.
  • Open Loop Control – where the output signal has no direct impact on the control action.

An Open Loop control system is a non-feedback system, where the output – the desired state – has no influence or effect on the control action of the input signal. In an Open Loop control system the output is neither measured nor “fed back” for comparison with the input. An Open Loop system is expected to faithfully follow its input command or set point regardless of the final result. An Open Loop system has no knowledge of the output condition – the difference between desired state and actual state – so it cannot self-correct the errors it makes when the preset value drifts, even if this results in large deviations from that value.

A Closed Loop control system is a feedback control system that uses the concept of an open loop system as its forward path, but has one or more feedback loops between its output and its input. In Closed Loop control there is a “feedback” signal: some portion of the output is returned “back” to the input to form part of the system’s excitation.

Closed Loop systems are designed to automatically achieve and maintain the desired output condition by comparing it with the actual condition. They do this by generating an error signal, which is the difference between the output and the reference input. A “closed-loop system” is a fully automatic control system in which the control action depends on the output in some way.
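
To make the error-signal idea concrete, here is a minimal sketch of a single proportional feedback step (an illustration only, with made-up numbers):

;; one closed loop step: measure the output, compare it with the desired
;; state, and feed a correction back into the next control action
(defn closed-loop-step [desired actual gain]
  (let [error (- desired actual)]            ; the error signal
    {:error      error
     :correction (* gain error)}))           ; proportional corrective action

(closed-loop-step 0.9 0.4 0.5)
;; => {:error 0.5, :correction 0.25}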

Key Differences Between Open Loop and Closed Loop control

Open Loop Control

  • Controller has no knowledge of the output condition.
  • The actual output condition is not fed back into the control loop – hence the Open Loop.
  • Any corrective action requires an operator input to change the behavior of the system to achieve a desired output condition.
  • No comparison between the actual output condition and the desired output condition.
  • Open Loop control has no regulation or control action over the output condition. Each input condition determines a fixed operating condition for the controller.
  • Changes or disturbances in external conditions do not result in a direct output change unless the controller is manually altered.

Closed Loop Control

  • Controller has some knowledge of the output condition.
  • The desired condition is compared to the actual condition to create an error signal. This signal is the difference between the input signal (the desired dryness) and the output signal (the current dryness).
  • Closed loop means feedback not just for recording the output, but for comparing with the desired state to take corrective action.
  • Output condition errors are adjusted by changes in the controller function, by measuring the difference between the output and the desired condition.
  • Output conditions are stable in the presence of an unstable system.
  • Reliable and repeatable output performance results from corrective actions taken from the error signal.

Using Closed Loop Control for a Project

  • The Setting – we work in an enterprise IT environment, a product development company, or on a mission critical software project.
  • The Protagonist – Those providing the money need information to make decisions.
  • The Imbalance – it’s not clear how to make decisions in the absence of information about the cost, schedule, and technical outcomes of those decisions.
  • Restoring the Balance – when a decision is made, it needs to be based on the principles of microeconomics, at least in a governance-based organization.
  • Recommended Solution – start with a baseline estimate of the cost, schedule, and technical performance. Execute work and measure the productivity of that work.

Use these measures to calculate the variance between planned and actual. Take management action to adjust the productivity, the end date, or the budget – using all of these variables, produce a new Estimate To Complete to manage toward.

This is a closed loop control system.
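
A toy version of that loop, using standard earned value formulas (a sketch of the idea, not Alleman’s own tooling): given the Budget At Completion (BAC) from the baseline, plus the Earned Value (EV) and Actual Cost (AC) measured to date, each pass produces a revised Estimate To Complete and Estimate At Completion to steer toward.

;; one pass of the project control loop: compare planned and actual,
;; derive a cost performance index, and produce a new estimate to complete
(defn control-step [{:keys [bac ev ac]}]
  (let [cpi (/ ev ac)            ; efficiency of the work performed so far
        etc (/ (- bac ev) cpi)   ; cost of the remaining work at that efficiency
        eac (+ ac etc)]          ; revised estimate at completion to steer toward
    {:cost-variance (- ev ac) :cpi cpi :etc etc :eac eac}))

;; hypothetical status: a 1,000 (k$) budget, 400 earned for 500 actually spent
(control-step {:bac 1000 :ev 400 :ac 500})
;; => {:cost-variance -100, :cpi 4/5, :etc 750, :eac 1250}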

The Microeconomics of Decision in Closed Loop Control

Microeconomics is the study of how people make decisions in resource-limited situations on a personal scale. It deals with the decisions that individuals and organizations make on such issues as how much insurance to buy, which word processor to buy, what prices to charge for products and services, and which path to take in a project. Throughout the project lifecycle, these decision-making opportunities arise repeatedly. Each decision impacts the future behavior of the project and is informed by past performance and by the probabilistic and statistical processes of the underlying project activities. To make an informed decision about the project, estimates are made using this information.

Microeconomics applied to projects is a well understood and broadly applied discipline in cost accounting and in business strategy and execution. Decision making is based on alternatives, their assessed value, and their forecast cost; both of these values are probabilistic. Microeconomics is the basis of Real Options and other statistical decision making. Without this paradigm, decisions are made without knowing their future impact – their cost, schedule, or technical consequences. This is counter to good business practices in any domain.

Let's Look At An Open Loop Control System

[Figure: an open loop control system]

This is all fine and dandy. But where are we going? What's the probability we will arrive at our desired destination if we knew what that destination was? Do we have what we need to reach that desired destination if we knew what it was? In Open Loop Control these questions have no answers.

Let's Look at a Closed Loop Control System

[Figure: a closed loop control system]

We want to manage our projects with Closed Loop Control Systems

Related articles
  • Who's Budget is it Anyway?
  • Your Project Needs a Budget and Other Things
  • The Actual Science in Management Science
  • Control Systems - Their Misuse and Abuse
  • Building a Credible Performance Measurement Baseline
  • Open Loop Thinking v. Close Loop Thinking
Categories: Project Management