Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Four Attributes That Support Incremental Change Initiatives

A Thali is an incremental meal

Incrementalism – making small changes in order to achieve a larger effect – comes in many styles and flavors. The many variations of this approach carry titles such as experimentation, continuous process improvement, kaizen events, and plan-do-check-act (PDCA) cycles. To paraphrase 20th-century toothpaste commercials, 4 out of 5 process improvement professionals recommend incrementalism. Agile and Lean are full of branded incremental change models, including the Toyota Production System, Scrum, and Kanban (when applied to process improvement). We can see the impact of incrementalism on how these frameworks are constructed in their individual techniques: sprints and time boxes, daily stand-ups (in both Scrum and XP), and retrospectives, to name a few. Each technique reinforces taking small steps and seeking feedback for re-planning. If you don’t want to consider frameworks or techniques, remember that the agile mantra of inspect and adapt is itself a statement of incrementalism.

Incrementalism changes how work is done by shifting the focus from one big project or implementation to taking small steps, gathering feedback, and then reacting. This approach to change is not new: Deming popularized the PDCA cycle early in the 20th century. Practitioners embrace incrementalism because making many small changes, one after another, provides fast feedback, which enhances organizational learning and mitigates many of the risks seen in Big Bang models. Four attributes support learning and risk reduction:

  1. Learning – PDCA-type and inspect-and-adapt models are all built on the expectation that when a change is made, its impact will be reviewed and the feedback used to improve how work is done. Feedback is a learning device: the faster it is generated, the greater the opportunity for learning.
  2. Risk mitigation – Steven Adams, agile consultant (SPaMCAST 412), stated, “Continuous process improvement is a less risky route.  But could be the slower.” Incremental changes typically will not imperil the organization the way Big Bang or “bet the farm” changes could. For example, which has more risk: a bank merger, or adding hundreds of customers one at a time through a production interface? While this might not be a perfect analogy, the larger change that gets feedback only when it is completed will ALWAYS be riskier. Along with reducing the risk that size generates, smaller increments help ensure that change programs don’t wander away from the vision that launched them. Todd Field, senior project manager and Scrum master, described it as, “I believe you need to have a Big Bang vision and an incremental improvements plan.” Techniques like a delivery cadence (e.g., Scrum’s sprint cycle) keep changes small, and requiring product owner and stakeholder acceptance exposes risks before they can become issues, making incremental changes safer.
  3. Accumulation – Incremental changes building toward an overall goal are often compared to compound interest. Small changes build on each other until the return is significantly higher than the sum of making each change in isolation. Dominique Bourget, the Process Philosopher, described this concept as, “It is like losing weight… you get more benefit by exercising one hour each day than to exercise 30 hours in a row on the last day of the month.”
  4. Adaptation to the pace of change in the external environment – Software development environments are very dynamic. New methods, techniques, and tools are investigated, implemented, and discarded as organizations try to get more done within corporate budgetary constraints. We all know the mantra: faster, better, cheaper. Because of the rate of long-term change in the development environment, change programs often lose focus or sponsorship. Incremental changes are better at reacting to change and adjusting to the level of urgency within an organization. Kristie Lawrence, IFPUG Past President, suggested that “continuous process improvement allows you to slowly and surely improve. The trick is to manage the scope of what is being improved – changing one thing changes the entire ‘system’ and surfaces things that you never knew about.” Implementing small changes provides a feedback loop to continually test the need for further changes.
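The compound-interest comparison in the accumulation attribute can be made concrete with a short sketch (the 12% and 1% figures are purely illustrative, not from the post):

```python
# Illustrative numbers only: compare a single 12% "Big Bang" improvement
# with twelve 1% incremental improvements that compound month over month.
big_bang = 1.0 * 1.12

incremental = 1.0
for month in range(12):
    incremental *= 1.01  # each small change builds on the previous ones

print(f"Big Bang:    {big_bang:.4f}")     # 1.1200
print(f"Incremental: {incremental:.4f}")  # 1.1268
```

The compounded path ends slightly ahead, and it also delivers part of the gain (and its feedback) every month rather than all at once at the end.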

Incremental changes provide organizations with a tool to minimize the risk of change. Agile pundits originally made the point in terms of software development, but the same ideas that make incremental change useful there apply equally to continuous improvement across the business, delivering value while reducing risk. As a result, practitioners are predisposed to championing incremental change.


Categories: Process Management


Future of Java 8 Language Feature Support on Android

Android Developers Blog - Tue, 03/14/2017 - 22:17
Posted by James Lau, Product Manager 
At Google, we always try to do the right thing. Sometimes this means adjusting our plans. We know how much our Android developer community cares about good support for Java 8 language features, and we're changing the way we support them.

We've decided to add support for Java 8 language features directly into the current javac and dx set of tools, and deprecate the Jack toolchain. With this new direction, existing tools and plugins dependent on the Java class file format should continue to work. Moving forward, Java 8 language features will be natively supported by the Android build system. We're aiming to launch this as part of Android Studio in the coming weeks, and we wanted to share this decision early with you.

We initially tested adding Java 8 support via the Jack toolchain. Over time, we realized the cost of switching to Jack was too high for our community when we considered the annotation processors, bytecode analyzers and rewriters impacted. Thank you for trying the Jack toolchain and giving us great feedback. You can continue using Jack to build your Java 8 code until we release the new support. Migrating from Jack should require little or no work.

We hope the new plan will pave a smooth path for everybody to take advantage of Java 8 language features on Android. We'll share more details when we release the new support in Android Studio.
Categories: Programming

Caveats and pitfalls of cookie domains

Xebia Blog - Tue, 03/14/2017 - 22:16

Not too long ago, we ran into an apparent security issue at my current assignment - people could sign in with a regular account, but get the authentication and permissions of an administrator user (a privilege escalation bug). As it turned out, the impact of the security issue was low, as the user would need […]

The post Caveats and pitfalls of cookie domains appeared first on Xebia Blog.

SE-Radio Episode 285: James Cowling on Dropbox’s Distributed Storage System

James Cowling of Dropbox (architect of their distributed storage system) speaks to Robert Blumen about their move from Amazon’s S3 to their own infrastructure. The show covers: the size, scope and scale of Dropbox’s data management; their experience on Amazon’s S3; why S3 over time did not meet their needs; how the decision was made […]
Categories: Programming

Misunderstanding Making Decisions in the Presence of Uncertainty (Update Part 2)

Herding Cats - Glen Alleman - Tue, 03/14/2017 - 17:12

There was a Tweet a few days ago from one of the founders of eXtreme Programming, that said...

What happens if you shift focus from "accurate estimation" to "reliably shipping by a date"? 

This quote misses the processes for making decisions in the presence of uncertainty, and the processes and events that create the uncertainty that impacts the reliability of making the date to ship value.

The answer is ... you can't shift focus from accurate estimation to reliably shipping by a date ...

Accurate and precise estimates (to predefined values, as shown in the target picture below) are needed before you can reliably ship products by a date, because you can't know that date with any needed level of confidence without making estimates about the reducible and irreducible uncertainties that impact it.

So the answer to the question is:

In the presence of uncertainty, you can't reliably ship by a date without estimating the impact of those uncertainties on the probability of making the date.

Since uncertainty creates risk, and managing in the presence of uncertainty requires risk management, we can now answer the question:

  • If you want a reliable shipping date, you have to discover and handle the uncertainties in the work needed to produce the outcomes to be delivered on that date.
  • You have to estimate the needed schedule, cost, and technical performance margins needed to protect that date from the Aleatory uncertainties.
  • You have to estimate the probabilistic occurrence of the epistemic uncertainties that will impact that date, and provide a Plan B, an intervention, or some corrective action to protect it.

Each of these uncertainties creates risk to meeting that reliable shipping date. And as we all know

Risk Management is How Adults Manage Projects - Tim Lister

Details of the Answer to the Question

First, let's establish a principle. All project work has uncertainty. Uncertainty comes from the lack of precision and accuracy about the possible values of a measurement of a project attribute.

There is naturally occurring variability from uncontrolled processes, and there is the probability of occurrence of a future event. The first can be modeled as a statistical distribution of the natural variability; the second, an absence of knowledge (epistemic uncertainty), as a probability of occurrence.

These uncertainties come in two forms. Naturally occurring variances (Aleatory uncertainty) and Event based probabilities (Epistemic uncertainty).

The naturally occurring variability comes from uncontrolled and uncontrollable processes. This uncertainty is modeled as a statistical distribution derived from past performance or from an underlying statistical process model, usually stochastic (stationary or non-stationary). The probability of a random event reflects an absence of knowledge. This uncertainty is modeled as a probability of occurrence.
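As a sketch of what "modeled as a statistical distribution" can look like in practice, the simulation below samples task durations from triangular distributions and estimates the confidence of hitting a ship date. All the numbers (task count, durations, deadline) are invented for illustration, not taken from the post:

```python
import random

random.seed(1)

# Hypothetical: ten tasks, each with (optimistic, most likely, pessimistic)
# durations in days -- naturally occurring (aleatory) variability.
tasks = [(3, 5, 10)] * 10
ship_date = 65  # days from now

trials = 20_000
hits = 0
for _ in range(trials):
    # Sample one possible project outcome by drawing each task's duration.
    total = sum(random.triangular(low, high, mode) for low, mode, high in tasks)
    if total <= ship_date:
        hits += 1

print(f"Confidence of shipping by day {ship_date}: {hits / trials:.0%}")
```

The output is a probability of making the date, not a single deterministic date - which is the whole point of the argument above.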

A formal definition of uncertainty in the project decision-making paradigm is ...

Situation where the current state of knowledge is such that (1) the order or nature of things is unknown, (2) the consequences, extent, or magnitude of circumstances, conditions, or events is unpredictable, and (3) credible probabilities to possible outcomes cannot be assigned. 

If your project has no uncertainty, there is no need to estimate: the planned ship date is deterministic, and all outcomes are certain, occurring with 100% probability and 0% variance.

Turns out in the real world there is no such project.

When we say uncertainty, we speak about a future state of the system that is not fixed or determined. Uncertainty is related to three aspects of the management of projects:

  1. The external world - the activities of the project itself.
  2. Our knowledge of this world - the planned and actual behaviors of the project.
  3. Our perception of this world - the data and information we receive about these behaviors.

Let's revisit the two flavors of uncertainty: uncertainty that can be reduced (Epistemic) and uncertainty that cannot be reduced (Aleatory).

Aleatory uncertainties are unknowns that differ each time we assess them. They are values drawn from an underlying population of possible values. They are uncertainties we can't do anything about; they cannot be suppressed or removed. My drive from my house to my secret parking spot on the east side of Denver International Airport is shown as 47 minutes by Google Maps. If I ask Google for the duration at a specific time of day, 3 days from now, I'll get a different number. When I get on the farm road to I-25, I may find yet another time. This variation comes from the random variances of distance and traffic conditions. I need margin to protect me from being late to the parking spot.

The naturally occurring work effort in the development of a software feature - even if we've built the feature before - is an irreducible uncertainty. The risk is created when we have not accounted for these natural variances in our management plan for the project. If we do not have a sufficient buffer to protect the plan from these naturally occurring variances, our project will be impacted in unfavorable ways.

The notion (as suggested in the quote) of shifting from accurate (whatever that means) ways of estimating to reliably shipping by a date is not physically possible, since the irreducible and reducible uncertainties are always present.

Dealing with aleatory (irreducible) uncertainty and the resulting risk requires margin. Aleatory uncertainty is expressed as process variability: work effort variances, productivity variances, and quality-of-product and resulting rework variances. Aleatory risk is expressed in relation to a value, as a percentage of that value. This is the motivation for the short work intervals found in agile development.
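One common way to size such a margin (a sketch with invented numbers, not a method from the post) is to simulate the variability and express the gap between a chosen confidence percentile and the median as a percentage of the median value:

```python
import random
import statistics

random.seed(7)

# Hypothetical: simulated completion times (days) for a work package,
# drawn from a right-skewed lognormal distribution of effort.
samples = sorted(random.lognormvariate(3.0, 0.25) for _ in range(10_000))

median = statistics.median(samples)
p85 = samples[int(0.85 * len(samples))]  # 85th-percentile completion time

margin_pct = (p85 - median) / median * 100
print(f"median {median:.1f} days, 85th percentile {p85:.1f} days")
print(f"schedule margin ~{margin_pct:.0f}% of the median")
```

The margin is the extra duration, stated as a percentage of the base value, needed to protect the date to the chosen confidence level.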

Epistemic uncertainties are systematic, caused by things we know about in principle: the probability of something happening. This uncertainty is introduced by a probabilistic event rather than a naturally occurring process - by an assumption about the world in which the system is embedded, which can arise from a lack of data (an ontological uncertainty). Epistemic uncertainties have probabilities of occurrence - the likelihood of a failure, for example. Epistemic uncertainty can also occur when there is a subjective evaluation of the system (a risk from a rare event or an event with little or no empirical data), from the incompleteness of knowledge (a major hazard or condition not identified, or a causal mechanism that remains undetected), and from undetected design errors introduced by ontological uncertainties into the system behavior.


Before completing this post, let's look quickly at precision and accuracy, as mentioned in the original quote. All estimates have precision and accuracy. Deciding how much precision and accuracy is needed for a credible estimate is critical to the success of the decision it supports. One starting point is the value at risk. By determining the value at risk, we can determine how much precision and accuracy are needed and how much time and cost we should put into the estimating process.

[Target picture]

Let's go back to the original quote.

What happens if you shift focus from "accurate estimation" to "reliably shipping by a date"? 

With our knowledge of epistemic and aleatory uncertainty, we now know we cannot reliably ship by a date without knowing the extent of the reducible and irreducible uncertainties. We protect that date with margin or reserve for the irreducible uncertainties, and with specific actions, redundancies, or interventions for the reducible uncertainties. To know how much margin or reserve is needed, and how those redundancies must perform, we need to estimate.

For our credible estimate, we must have a desired and measurable:

  • Precision - how small is the variance of the estimate?
  • Accuracy - how close is the estimate to the actual value?
  • Bias - what impacts on precision and accuracy come from human judgments?
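Given a history of estimate/actual pairs, all three properties can be measured directly. The pairs below are hypothetical, used only to show one plausible way to compute them:

```python
import statistics

# Hypothetical estimate vs. actual effort (hours) for completed work items.
pairs = [(10, 12), (8, 9), (20, 26), (15, 14), (5, 7)]

errors = [est - act for est, act in pairs]

bias = statistics.mean(errors)                      # systematic over/under-estimation
precision = statistics.stdev(errors)                # spread of the estimating errors
accuracy = statistics.mean(abs(e) for e in errors)  # mean absolute error vs. actuals

print(f"bias {bias:+.1f} h, precision (sd) {precision:.1f} h, "
      f"accuracy (MAE) {accuracy:.1f} h")
```

A negative bias here means systematic underestimation; a large standard deviation means low precision even if the bias is near zero.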

So in the end, if we are to make a decision in the presence of uncertainty, we MUST make estimates to develop a reliable shipping date while producing an accurate and precise estimate of the cost, schedule, and technical performance of the product shipped on that date.

So it comes down to this, no matter how many times some claim otherwise, and I'll shout it to make it clear to everyone...

YOU CANNOT MAKE A DECISION IN THE PRESENCE OF UNCERTAINTY (reducible or irreducible) WITHOUT MAKING ESTIMATES

A Short List of Resources for Managing in the Presence of Uncertainty

Risk Management is essential for development and production programs. Information about key project cost, performance, and schedule attributes is often uncertain or unknown until late in the program.
Risk issues that can be identified early in the program, which will potentially impact the program later, termed Known Unknowns, can be alleviated with good risk management. - Effective Risk Management, 2nd Edition, Edmund Conrow, AIAA, 2003

 

Papers on Risk Management

  1. “Quantifying Uncertainty in Early Lifecycle Cost Estimation (QUELCE),” Robert Ferguson, Dennis Goldenson, James McCurley, Robert Stoddard, David Zubrow, and Debra Anderson, Technical Report, CMU/SEI-2011-TR-026 ESC-TR-2011-026
  2. “The Development of Progress Plans Using a Performance–Based Expert Judgment Model to Assess Technical Performance and Risk,” Justin W. Eggstaff, Thomas A. Mazzuchi, and Shahram Sarkani, Systems Engineering, Volume 17, Issue 4, Winter 2014, Pages: 375–391
  3. “Using the Agile Methodology to Mitigate the Risks of Highly Adaptive Projects,” Dana Roberson and Mary Anne Herndon, 10th Annual CMMI Technology Conference And User Group, November 5 – 8, 2012, Denver, CO
  4. “Hybrid–Agile Software Development Anti–Patterns, Risks, and Recommendations,” Paul E. McMahon, Cross Talk: The Journal of Defense Software Engineering, July/August 2015, pp. 22–26.
  5. “Using the Agile Methodology to Mitigate the Risks of Highly Adaptive Projects,” Dana Roberson and Mary Anne Herndon, 10th Annual CMMI Technology Conference And User Group, November 5 – 8, 2012, Denver, CO.
  6. “Assessment of risks introduced to safety critical software by agile practices — A Software Engineer’s Perspective,” Janusz Górski Katarzyna Łukasiewicz, AGH University of Science and Technology, University in Kraków, Poland, Computer Science, Vol 13, No 4.
  7. “Ready & Fit: Understanding Agile Adoption Risk in DoD and Other Highly Regulated Settings,” Suzanne Miller and Mary Ann Lapham, 25th Annual Software Technology Conference, Salt Lake City, 8-10 April 2013.
  8. “Architecting Large Scale Agile Software Development: A Risk–Driven Approach,” Ipek Ozkaya, Michael Gagliardi, Robert L. Nord, CrossTalk: The Journal of Defense Software Engineering, May/June 2013.
  9. “Risk Management Method using Data from EVM in Software Development Projects,” Akihiro Hayashi and Nobuhiro Kataoka, International Conference on Computational Intelligence for Modelling, Control and Automation, Vienna, Austria, Dec. 10 to Dec. 12, 2008.
  10. “Analyse Changing Risk of Organizational Factors in Agile Project Management,” Shi Tong, Chen Jianbin, and Fang DeYing, The 1st International Conference on Information Science and Engineering (ICISE2009).
  11. “Modeling Negative User Stories is Risky Business,” Pankaj Kamthan and Nazlie Shahmir, 2016 IEEE 17th International Symposium on High Assurance Systems Engineering.
  12. “Project Risk Management Model Based on PRINCE2 and Scrum Frameworks,” Martin Tomanek, Jan Juricek, The International Journal of Software Engineering & Applications (IJSEA), January 2015, Volume 6, Number 1, ISSN: 0975-9018
  13. “How to identify risky IT projects and avoid them turning into black swans,” Magne Jørgensen, Ernst & Young: Nordic Advisory Learning Weekend, Riga, 2016.
  14. “A Methodology for Exposing Software Development Risk in Emergent System Properties,” Technical Report 11-101, April 21, 2001, Victor Basili, Lucas Layman, and Marvin Zelkowitz, Fraunhofer Center for Experimental Software Engineering, College Park, Maryland.
  15. “Outlining a Model Integrating Risk Management and Agile Software Development,” Jaana Nyfjord and Mira Kajko-Mattsson, 34th Euromicro Conference Software Engineering and Advanced Applications.
  16. “Towards a Contingency Theory of Enterprise Risk Management,” Anette Mikes Robert Kaplan, Working Paper 13–063 January 13, 2014, AAA 2014 Management Accounting Section (MAS) Meeting Paper
  17. “Agile Development and Software Architecture: Understanding Scale and Risk,” Robert L. Nord, IEEE Software Technology Conference, 2012, Salt Lake City, 23-26 April, 2012.
  18. “Using Risk to Balance Agile and Plan Driven Methods,” Barry Boehm and Richard Turner, IEEE Computer, June 2003.
  19. “Does Risk Management Contribute to IT Project Success? A Meta-Analysis of Empirical Evidence,” Karel de Bakker, Albert Boonstra, Hans Wortmann, International Journal of Project Management, 2010.
  20. “A Model for Risk Management in Agile Software Development,” Ville Ylimannela, Communications of Cloud Software.
  21. “Product Security Risk Management in Agile Product Management,” Antti Vähä-Sipilä, OWASP AppSec Research, 2010
  22. “A Probabilistic Software Risk Assessment and Estimation Model for Software Projects,” Chandan Kumar and Dilip Kumar Yadav, Eleventh International Multi-Conference on Information Processing-2015 (IMCIP-2015)
  23. “Risk: The Final Agile Frontier,” Troy Magennis, Agile 2015.
  24. ”Risk Management and Reliable Forecasting using Un-Reliable Date,” Troy Magennis, Lean Kanban, Central Europe, 2014.
  25. “Management of risks, uncertainties and opportunities on projects: time for a fundamental shift,” Ali Jaafari, International Journal of Project Management 19 (2001) 89-101.
  26. “On Uncertainty, Ambiguity, and Complexity in Project Management,” Michael T. Pich, Christoph H. Loch, and Arnoud De Meyer, Management Science © 2002 INFORMS, Vol. 48, No. 8, August 2002, pp. 1008–1023.
  27. “Risk Options and Cost of Delay,” Troy Magennis, LKNA 2014.
  28. “Transforming project risk management into project uncertainty management,” Stephen Ward and Chris Chapman, International Journal of Project Management 21 (2003) 97–105.
  29. “Risk-informed decision-making in the presence of epistemic uncertainty,” Didier Dubois, Dominique Guyonnet, International Journal of General Systems, Taylor & Francis, 2011, 40 (2), pp. 145-167.
  30. “A case study of risk management in agile systems development,” Sharon Coyle and Kieran Conboy, 17th European Conference on Information Systems (Newell S, Whitley EA, Pouloudi N, Wareham J, Mathiassen L eds.), 2567-2578, Verona, Italy, 2009
  31. “Risk management in agile methods: a study of DSDM in practice,” Sharon Coyle, 10th International Conference on eXtreme Programming and Agile Processes in Software Engineering, 2009.
  32. “Distinguishing Two Dimensions Of Uncertainty,” Craig R. Fox and Gülden Ülkümen, in Perspectives on Thinking, Judging, and Decision Making, Brun, W., Keren, G., Kirkeboen, G., & Montgomery, H. (Eds.), 2011.
  33. “Two Dimensions of Subjective Uncertainty: Clues from Natural Language,” Craig R. Fox and Gülden Ülkümen.
  34. “An Essay Towards Solving a Problem in the Doctrine of Chance,” By the late Rev. Mr. Bayes, communicated by Mr. Price, in a letter to John Canton, M. A. and F. R. S.
  35. “Playbook: Enterprise Risk Management for the U.S. Federal Government, in support of OMB Circular A-123.”
  36. “Joint Agency Cost Schedule Risk and Uncertainty Handbook,” Naval Center for Cost Analysis, 12 March 2014.
  37. “Quantitative Risk ‒ Phases 1 & 2: A013 ‒ Final Technical Report SERC-2013-TR-040-3,” Walt Bryzik and Gary Witus, November 12, 2013, Stevens Institute of Technology
  38. “Distinguishing Two Dimensions of Uncertainty,” Craig Fox and Gülden Ülkumen, in Perspectives of Thinking, Judging, and Decision Making
  39. “Using Risk to Balance Agile and Plan Driven Methods,” Barry Boehm and Richard Turner, IEEE Computer, June 2003


Related articles
  • Making Decisions In The Presence of Uncertainty
  • The Flaw of Empirical Data Used to Make Decisions About the Future
  • Managing in Presence of Uncertainty
  • Herding Cats: Decision Making On Software Development Projects
  • Making Decisions in the Presence of Uncertainty
  • Managing in the Presence of Uncertainty
Categories: Project Management

Sponsored Post: Aerospike, Loupe, Clubhouse, GoCardless, Auth0, InnoGames, Contentful, Stream, Scalyr, VividCortex, MemSQL, InMemory.Net

Who's Hiring?
  • GoCardless is building the payments network for the internet. We’re looking for DevOps Engineers to help scale our infrastructure so that the thousands of businesses using our service across Europe can take payments. You will be part of a small team that sets the direction of the GoCardless core stack. You will think through all the moving pieces and issues that can arise, and collaborate with every other team to drive engineering efforts in the company. Please apply here.

  • InnoGames is looking for Site Reliability Engineers. Do you not only want to play games, but help build them? Join InnoGames in Hamburg, one of the worldwide leading developers and publishers of online games. You are the kind of person who leaves systems in a better state than they were before. You want to hack on our internal tools based on django/python, as well as improve the stability of our 5000+ Debian VMs. Orchestration with Puppet is your passion and you would rather automate stuff than touch it twice. Relational Database Management Systems aren't a black hole for you? Then apply here!

  • Contentful is looking for a JavaScript BackEnd Engineer to join our team in their mission of getting new users - professional developers - started on our platform within the shortest time possible. We are a fun and diverse family of over 100 people from 35 nations with offices in Berlin and San Francisco, backed by top VCs (Benchmark, Trinity, Balderton, Point Nine), growing at an amazing pace. We are working on a content management developer platform that enables web and mobile developers to manage, integrate, and deliver digital content to any kind of device or service that can connect to an API. See job description.
Fun and Informative Events
  • Analyst Webinar: Forrester Study on Hybrid Memory NoSQL Architecture for Mission-Critical, Real-Time Systems of Engagement. Thursday, March 30, 2017 | 11 AM PT / 2 PM ET. In today’s digital economy, enterprises struggle to cost-effectively deploy customer-facing, edge-based applications with predictable performance, high uptime and reliability. A new, hybrid memory architecture (HMA) has emerged to address this challenge, providing real-time transactional analytics for applications that require speed, scale and a low total cost of ownership (TCO). Forrester recently surveyed IT decision makers to learn about the challenges they face in managing Systems of Engagement (SoE) with traditional database architectures and their adoption of an HMA. Join us as our guest speaker, Forrester Principal Analyst Noel Yuhanna, and Aerospike’s VP Marketing, Cuneyt Buyukbezci, discuss the survey results and implications for your business. Learn and register

  • Your event here!
Cool Products and Services
  • www.site24x7.com: Monitor End User Experience from a global monitoring network.

  • ButterCMS is an API-first CMS that quickly integrates into your app. Rapidly build CMS-powered experiences in any programming language. Great for blogs, marketing pages, knowledge bases, and more. Butter plays well with Ruby, Rails, Node.js, Go, PHP, Laravel, Python, Flask, Django, and more.

  • Working on a software product? Clubhouse is a project management tool that helps software teams plan, build, and deploy their products with ease. Try it free today or learn why thousands of teams use Clubhouse as a Trello alternative or JIRA alternative.

  • A note for .NET developers: You know the pain of troubleshooting errors with limited time, limited information, and limited tools. Log management, exception tracking, and monitoring solutions can help, but many of them treat the .NET platform as an afterthought. You should learn about Loupe...Loupe is a .NET logging and monitoring solution made for the .NET platform from day one. It helps you find and fix problems fast by tracking performance metrics, capturing errors in your .NET software, identifying which errors are causing the greatest impact, and pinpointing root causes. Learn more and try it free today.

  • Auth0 is the easiest way to add secure authentication to any app/website. With 40+ SDKs for most languages and frameworks (PHP, Java, .NET, Angular, Node, etc), you can integrate social, 2FA, SSO, and passwordless login in minutes. Sign up for a free 22 day trial. No credit card required. Get Started Now.

  • Build, scale and personalize your news feeds and activity streams with getstream.io. Try the API now in this 5 minute interactive tutorial. Stream is free up to 3 million feed updates so it's easy to get started. Client libraries are available for Node, Ruby, Python, PHP, Go, Java and .NET. Stream is currently also hiring Devops and Python/Go developers in Amsterdam. More than 400 companies rely on Stream for their production feed infrastructure, including apps with 30 million users. With your help we'd like to add a few zeros to that number. Check out the job opening on AngelList.

  • Scalyr is a lightning-fast log management and operational data platform. It's a tool (actually, multiple tools) that your entire team will love. Get visibility into your production issues without juggling multiple tabs and different services -- all of your logs, server metrics and alerts are in your browser and at your fingertips. Loved and used by teams at Codecademy, ReturnPath, Grab, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • InMemory.Net provides a .NET-native in-memory database for analysing large amounts of data. It runs natively on .NET, and provides native .NET, COM & ODBC APIs for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex is a SaaS database monitoring product that provides the best way for organizations to improve their database performance, efficiency, and uptime. Currently supporting MySQL, PostgreSQL, Redis, MongoDB, and Amazon Aurora database types, it's a secure, cloud-hosted platform that eliminates businesses' most critical visibility gap. VividCortex uses patented algorithms to analyze and surface relevant insights, so users can proactively fix future performance problems before they impact customers.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

If you are interested in a sponsored post for an event, job, or product, please contact us for more information.

Categories: Architecture

Time Sensitive: Free Access to The Better User Stories Mini-Course

Mike Cohn's Blog - Tue, 03/14/2017 - 15:00

Today I want to let you know about a new mini-course I created to help overcome some of the common and challenging problems with user stories.

It’s free to register and you can access the first video instantly, or watch it a little later at your convenience. Once you do sign-up I’ll also send you an email to let you know as soon as the next video is released.

Please note: This training is free but will only be available for the next 2 weeks

Guarantee your spot by signing up for the course today

About The Better User Stories ‘Mini-Course’

Last year I did a survey to discover what challenges were stopping people from writing successful user stories. Nearly 2,000 people got in touch to highlight the following issues:

  • Not writing stories that truly focus on the user’s needs
  • Wondering how to keep a team engaged from writing to development
  • Splitting stories quickly without compromising value
  • Not knowing when to add detail, or how much to include

Plus many, many more. I wanted to create a mini-course that would tackle some of these issues, and I wanted to offer it to you for free.

Even though there’s no fee to access the videos, the training isn’t light-touch, an introduction, or theory-filled. It’s based on practical materials I’ve used for teaching user stories to more than 20,000 people over the last fifteen years. What’s more, you’ll also have the chance to comment, ask questions and discuss the training featured in each video.

Join in the discussion by watching the first video now

Watch out for even more resources to help you with user stories

To go alongside the launch of the mini-course, over the next couple of weeks, both the blog and weekly tips email will feature lessons and advice on how to write better user stories.

And if you really want you and your team to master this topic, there will be an option to unlock more in-depth, advanced training (details about that coming soon).

Today, get instant access to video 1: Three Tips for Successful Story Mapping in a Story-Writing Workshop

The first video is available now. This 20-minute training looks at some of the common mistakes people make at the early stage of writing user stories, particularly when conducting a story-writing workshop.

In this video you’ll learn:

  • Why people struggle to find the balance between too much, and too little team engagement when writing user stories.
  • How to save a significant amount of time in future iteration planning by inviting the right people to your story-writing workshop
  • A simple, but powerful method of visualizing the relationship between stories
  • Practical ways to make sure your team focuses on the user’s needs at all times
  • Methods to help you prioritize and plan stories, fast

Click here to access the first video

Questions about the training? Already watched the first video? I’d love to hear from you in the comments below.

Quote of the Day

Herding Cats - Glen Alleman - Tue, 03/14/2017 - 13:51

Magic, it must be remembered, is an art which demands collaboration between the artist and his public - E. M. Butler The Myth of the Magus, 1948

When we hear the magic of making decisions in the presence of uncertainty without the need to estimate the cost, outcome, or impact of that decision, it's surely magical.

Categories: Project Management

Quote of the Month March 2017

From the Editor of Methods & Tools - Tue, 03/14/2017 - 09:30
Have you ever made a decision, implemented the decision, and then been confronted with a raft of questions: “What were you thinking?”, “Why did you not do that?”, or “Why did you do it that way?” How can you avoid this disconnect? Bring the stakeholders together and build the value model as a team. Agree […]

Deploying ERP with Agile

Herding Cats - Glen Alleman - Mon, 03/13/2017 - 18:01

Here's a paper from a prior agile conference that is still applicable today

Screen Shot 2017-03-13 at 10.59.04 AM

Categories: Project Management

SPaMCAST 433 – Jeff Dalton, Holacracy is the Future

SPaMCAST Logo

http://www.spamcast.net

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 433 features our interview with Jeff Dalton discussing holacracy.  Holacracy.org defines holacracy as, “a complete, packaged system for self-management in organizations. Holacracy replaces the traditional management hierarchy with a new peer-to-peer “operating system” that increases transparency, accountability, and organizational agility.” Jeff has implemented holacracy in his own firm and others and has a lot to share about this exciting form of management and leadership.

Jeff Dalton is President of Broadsword, a Process Innovation firm, and Chief Evangelist at AgileCxO.org, an Agile Leadership Research and Development center that develops models for high-performing agile teams.  Jeff is principal author of “A Guide to Scrum and CMMI,” published by the CMMI Institute, and is a SCAMPI Lead Appraiser and Certified Agile Leadership Consultant who specializes in software product development, self-organizing teams, and performance modeling.  His upcoming book, “The Agile Performance Holarchy: A New Model for Outrageously High Performance,” will be released in September of 2017.

Jeff’s previous appearances on the Software Process and Measurement Cast include

SPaMCAST 366 – Jeff Dalton, 12 Attributes of Great and Agile Organizations

SPaMCAST 296 – Jeff Dalton, CMMI, Agile, Resiliency

SPaMCAST 176 – Jeff Dalton, CMMI, Scrum and Agile

Re-Read Saturday News

We will pick up our re-read of Carol Dweck’s Mindset: The New Psychology of Success (buy your copy and read along) next week.

Every week we discuss a chapter then consider the implications of what we have “read” from the point of view of someone pursuing an organizational transformation and also how to use the material when coaching teams.  

Remember to buy a copy of Carol Dweck’s Mindset and read along!

Visit the Software Process and Measurement Cast blog to participate in this and previous re-reads.

Next SPaMCAST

The next Software Process and Measurement Cast will feature our essay on Change Implementations – To Big Bang or Not To Big Bang? We will also have great columns from Steve Tendon and Gene Hughson.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.


Categories: Process Management


Quote of the Day

Herding Cats - Glen Alleman - Sun, 03/12/2017 - 13:39

Trust a witness in all matters in which neither his self-interest, his passions, his prejudices, nor the love of the marvelous is strongly concerned. When they are involved, require corroborative evidence in exact proportion to the contravention of probability by the thing testified - Thomas Henry Huxley (1825-1895)

Categories: Project Management


The Goal: A Process of Ongoing Improvement, Part 18

I am traveling this week in India for the 13th CSI/IFPUG International Software Measurement & Analysis Conference: “Creating Value from Measurement”. Read more about it here. In the meantime, enjoy some classic content, and I’ll be back with new blog entries next week.

I had intended to spend the last entry of our re-read of The Goal waxing poetic about the afterword in the book titled “Standing on the Shoulders of Giants”. Suffice it to say that the afterword does an excellent job describing the practical and theoretical basis for Goldratt and Cox’s ideas that ultimately shaped both the lean and process improvement movements since 1984.

Previous Installments:

Part 1       Part 2       Part 3      Part 4      Part 5 
Part 6       Part 7      Part 8     Part 9      Part 10
Part 11     Part 12      Part 13    Part 14    Part 15
Part 16    Part 17

The Goal is important because it introduced and explained the theory of constraints (TOC), which has proven over and over again to be critical to anyone managing a system. The TOC says that the output of any manageable system is limited by a small number of constraints and that all typical systems have at least one constraint. I recently had a discussion with a colleague who posited that not all systems have constraints. He laid out a scenario in which, if you had unlimited resources and capability, it would be possible to create a system without constraints. While theoretically true, it would be safe to embrace the operational hypothesis that any realistic process has at least one constraint. Understanding the constraints that affect a process or system provides anyone with an interest in process improvement with a powerful tool to deliver effective change. I do mean anyone! While the novel is set in a manufacturing environment, it is easy to identify how the ideas can apply to any setting where work follows a systematic process. For example, software development and maintenance is a process that takes business needs and transforms those needs into functionality. The readers of the Software Process and Measurement Blog should recognize that the ideas in The Goal are germane to the realm of information technology.

As we have explored the book, I have shared how I have been able to apply the concepts explored to illustrate that what Goldratt and Cox wrote was applicable in the 21st century workplace. I also shared how others reacted to the book when I read it in public or talked about it to people trapped next to me on numerous flights. Their reaction reminded me that my reaction was not out of the ordinary. The Goal continues to affect people years after it was first published. For example, the concept of the TOC and the Five Focusing Steps proved useful again this week. I was asked to discuss process improvement with a team composed of tax analysts, developers and testers. Each role is highly specialized and there is little cross-specialty work-sharing. With a bit of coaching the team was able to identify their process flow and to develop a mechanism to identify their bottleneck(s) to improve their throughput. Even though the Five Focusing Steps were never mentioned directly, we were able to agree on an improvement approach that would find the constraint, help them exploit the constraint, subordinate the other steps in the process to the constraint, support improving the capacity of the constraint, and then repeat the analysis once the step was no longer a constraint. Had I never read The Goal, we might not have found a way to improve the process.
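The first of the Five Focusing Steps, finding the constraint, is easy to make concrete in code: given the capacity of each step in a flow, the constraint is simply the step with the lowest throughput. A minimal Python sketch follows; the step names and hourly capacities are invented for illustration, not taken from the team described above.

```python
# Toy illustration of the first Focusing Step: identify the constraint.
# Step names and per-hour capacities below are invented for the example.
process = {
    "tax analysis": 12,  # work items per hour
    "development": 8,
    "testing": 5,
}

def find_constraint(steps):
    """Return the step with the lowest capacity; it caps system throughput."""
    return min(steps, key=steps.get)

bottleneck = find_constraint(process)
print(f"Constraint: {bottleneck} ({process[bottleneck]} items/hour)")
```

The remaining steps (exploit, subordinate, elevate, repeat) are management decisions rather than calculations, but the sketch captures the core TOC claim: the system as a whole can never deliver faster than its slowest step.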

Perhaps re-reading the book or just carrying it around has made me overly sensitive to the application of the TOC and the other concepts in the book. However, I don’t think that was the real reason the material is useful. Others have been equally impacted; for example, Steve Tendon, author of Tame The Flow and currently a columnist on the Software Process and Measurement Cast, suggests that The Goal and the TOC have had a significant influence on his groundbreaking process improvement ideas. Bottom line: if you have not read or re-read The Goal, I strongly suggest that you make the time to read the book. The Goal is an important book if you manage processes or are interested in improving how work is done in the 21st century.

I would like to hear from you! Can you tell me:

  1. How has The Goal impacted how you work?
  2. Have you been able to put the ideas in the book into practice?
  3. What are the successes and difficulties you faced when leveraging the Theory of Constraints?
  4. Do you use the Socratic method to identify and fix problems?
  • Also, if you don’t have a copy of The Goal, buy one and read it! If you use the link below it will support the Software Process and Measurement blog and podcast. Dead Tree Version or Kindle Version.

Categories: Process Management

Stuff The Internet Says On Scalability For March 10th, 2017

Hey, it's HighScalability time:

 

Darknet is 4x more resilient than the Internet. An apt metaphor? (URV)
If you like this sort of Stuff then please support me on Patreon.
  • > 5 9s: Spanner availability; 200MB: random access from DNA storage; 215 Pbytes/gram: DNA storage; 287,024: Google commits to open source; 42: hours of audio gold; 33: minutes to get back into programming after interruption; 12K: Chinese startups started per day; 35 million: tons of goods shipped under Golden Gate Bridge; 209: mph all-electric Corvette; 500: Disney projects in the cloud; 40%: rise in CO2;

  • Quotable Quotes:
    • Marc Rogers: Anything man can make man can break
    • @manupaisable: 10% of machines @spotify rebooted every hour because of defunct #docker - war stories by @i_maravic @qconlondon
    • @robertcottrell: “the energy cost of each bitcoin transaction is enough to power 3.17 US households for a day”
    • Eric Schmidt: We put $30 billion into this platform. I know this because I approved it. Why replicate that?
    • dim: It uses p30 technology. Just basic things, gliders and lightweight spaceships. Basically, the design goes top-down: At the very top, there's the clock. It is a 11520 period clock. Note that you need about 10.000 generations to ensure the display is updated appropriately, but the design should still be stable with a clock of smaller period (about 5.000 or so - the clock needs to be multiple of 60).
    • Luke de Oliveira: Most people in AI forget that the hardest part of building a new AI solution or product is not the AI or algorithms — it’s the data collection and labeling. Standard datasets can be used as validation or a good starting point for building a more tailored solution.
    • @violetblue: Did a lot of people not know that the CIA is a spy agency?
    • @viktorklang: Async is not about *performance*—it is about *scalability*. Let your friends know
    • stillsut: The difference is in the old days, you adapted to computer. Now, computer must adapt to you.
    • Eric Brewer: Spanner uses two-phase commit to achieve serializability, but it uses TrueTime for external consistency, consistent reads without locking, and consistent snapshots.
    • Emily Waltz: Nomura’s molecular robot differs in that it is composed entirely of biological and chemical components, moves like a cell, and is controlled by DNA.
    • Chris Anderson: Most of the devices in our life, from our cars to our homes, are “entropic,” which is to say they get worse over time. Every day they become more outmoded. But phones and drones are “negentropic” devices. Because they are connected, they get better, because the value comes from the software, not hardware
    • William Dutton: Most people using the internet are actually more social than those who are not using the internet
    • @swardley:  ... by 2016, you should have dabbled / learn / tested serverless.  "Go IaaS" or "build our biz as a cloud" in 2017 is #facepalm
    • Bradford Cross: The incompetent segment: the incompetent segment isn’t going to get machine learning to work by using APIs. They are going to buy applications that solve much higher level problems. Machine learning will just be part of how they solve the problems.
    • @denormalize: What do we want? Machine readable metadata! When do we want it? ERROR Line 1: Unexpected token `
    • @Ocramius: "And we should get rid of users: users are not pure, since they modify the state of our system" #confoo
    • Morning Paper: The most important overarching lesson from our study is this: a single file-system fault can induce catastrophic outcomes in most modern distributed storage systems. 
    • Linus Torvalds: And if the DRM "maintenance" is about sending me random half-arsed crap in one big pull, I'm just not willing to deal with it. This is like the crazy ARM tree used to be.
    • Shaun McCormick: Technical Debt is a Positive and Necessary Step in software engineering
    • @tdierks: Hello, my name is Tim. I'm a lead at Google with over 30 years coding experience and I need to look up how to get length of a python string.
    • @codinghorror: I colocated a $600 Ali Express mini pc for $15/month and it is 2x faster than "the cloud"
    • @antirez: "Group chat is like being in an all-day meeting with random participants and no agenda".
    • @sriramhere: Wise man once wrote "As flexible as it is, compute in AWS is optimized for the old capex world." @sallamar
    • @wattersjames: AI will come to your company carefully disguised as a lot of ETL and data-pipeline work...
    • ceejayoz: Lambda's billed in 100 millisecond increments. EC2 servers are billed in one hour increments. If you need short tasks that run in bursty workloads, Lambda's (potentially) a no-brainer.
    • @codinghorror: we have not found bare metal colocation to be difficult, with one exception: persistent file storage. That part, strangely, is quite hard.
    • @jbeda: Lesson from 10 years at Google: this is true until it isn't. Sometimes you *can* build a better mouse trap.
    • jfoutz: I agree. It's genius in a Lex Luthor kind of way. If I understood the full scope of the application, I like to think i'd decline to work on that. It's easy to imagine engineers working on small parts of the system, and never really connecting the dots that the whole point is to evade law enforcement.
    • dsr_:  It's harder (but not impossible) to have complete service lossage like this [Slack] in a federated protocol. That's why you didn't hear about the great email collapse of 2006.
    • throw_away_777: I agree that neural nets are state-of-the-art and do quite well on certain types of problems (NLP and vision, which are important problems). But a lot of data is structured (sales, churn, recommendations, etc), and it is so much easier to train an xgboost model than a neural net model. 
    • @GossiTheDog: #Vault7 CIA - Wiki that Wikileaks released is/was on hosted on DEVLAN, the CIA's "dirty" development network - a major architecture error.
    • Alison Gopnik: new studies suggest that both the young and the old may be especially adapted to receive and transmit wisdom. We may have a wider focus and a greater openness to experience when we are young or old than we do in the hurly-burly of feeding, fighting and reproduction that preoccupies our middle years.
    • @pierre: Wow, audacious to say the least. Intentionally flagging authorities to mislead them. It's like the VW emissions code
    • Joan Gamell: Starting with the obvious: the CIA uses JIRA, Confluence and git. Yes, the very same tools you use every day and love/hate. 
    • Chris Baraniuk: The networks of genes in each animal is a bit like the network of neurons in our brains, which suggests they might be "learning" as they go
    • futurePrimitive: Managers seem to think that programming is typing. No. Programming is *thinking*. The stuff that *looks* like work to a manager (energetic typing) only happens after the hard work is done silently in your head.
    • @danielbryantuk: "There is no such thing as a 'stateless' architecture. It's just someone else's problem" @jboner #qconlondon
    • Platypus: There's no panacea for vendor lock-in. Not even open source, but open source alone gets you further than any number of standards that don't cover what really matters or vendor-provided tools that might go away at any moment. It's the first and best tool for dealing with lock-in, even if it's not perfect. 
    • @tpuddle: @cliff_click talk at #qconlondon about fraud detection in financial trades. Searching 1 billion trades a day "is not that big". !
    • @charleshumble: "Something I see in about 95% of the trading data sets is there are a small number of bad guys hammering it." Cliff Click #qconlondon

  • You may not be able to hear doves cry, but you can listen to machines talk. Elevators to be precise. Watch them chat away as they selflessly shuttle to and fro. Yes, it is as exciting as you might imagine. Though probably not very different than the interior dialogue of your average tool.

  • It used to be that winners wrote history. Now victors destroy data. Terabytes of Government Data Copied

  • Battling legacy code seems to be the number one issue on Stack Overflow, as determined by top books mentioned on Stack Overflow. Not surprising. What was surprising is what's not on the list: algorithm books. Books on the craft of programming took top honors. Gratifying, but at odds with current interviewing dogma. The top 10 books: Working Effectively with Legacy Code; Design Patterns; Clean Code; Java Concurrency in Practice; Domain-Driven Design; JavaScript; Patterns of Enterprise Application Architecture; Code Complete; Refactoring; Head First Design Patterns.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Password Rules Are Bullshit

Coding Horror - Jeff Atwood - Fri, 03/10/2017 - 12:16

Of the many, many, many bad things about passwords, you know what the worst is? Password rules.

If we don't solve the password problem for users in my lifetime I am gonna haunt you from beyond the grave as a ghost pic.twitter.com/Tf9EnwgoZv

— Jeff Atwood (@codinghorror) August 11, 2015

Let this pledge be duly noted on the permanent record of the Internet. I don't know if there's an afterlife, but I'll be finding out soon enough, and I plan to go out mad as hell.

The world is absolutely awash in terrible password rules:

But I don't need to tell you this. The more likely you are to use a truly random password generation tool, like us über-geeks are supposed to, the more likely you have suffered mightily – and daily – under this regime.

Have you seen the classic XKCD about passwords?

To anyone who understands information theory and security and is in an infuriating argument with someone who does not (possibly involving mixed case), I sincerely apologize.

We can certainly debate whether "correct horse battery staple" is a viable password strategy or not, but the argument here is mostly that length matters.

That's What She Said

No, seriously, it does. I'll go so far as to say your password is too damn short. These days, given the state of cloud computing and GPU password hash cracking, any password of 8 characters or less is perilously close to no password at all.
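A rough sketch of why length dominates (the hash rate below is an assumption, not a benchmark): a single GPU rig attacking a fast, unsalted hash can plausibly try on the order of 100 billion guesses per second, which exhausts every 8-character printable-ASCII password in well under a day, while 12 characters pushes the worst case into tens of millions of days.

```python
# Back-of-the-envelope keyspace math. GUESSES_PER_SECOND is a hypothetical
# figure for a GPU rig attacking a fast, unsalted hash; real rates vary by
# orders of magnitude depending on the hash algorithm and hardware.
GUESSES_PER_SECOND = 100e9


def crack_time_days(alphabet_size: int, length: int) -> float:
    """Worst-case days to exhaust every password of the given length."""
    keyspace = alphabet_size ** length
    return keyspace / GUESSES_PER_SECOND / 86_400


# 95 printable-ASCII characters
print(f"8 chars:  {crack_time_days(95, 8):.2f} days")    # well under a day
print(f"12 chars: {crack_time_days(95, 12):,.0f} days")  # tens of millions of days
```

The point of the sketch is the exponent: each added character multiplies the attacker's work by the alphabet size, which is why length matters far more than composition rules.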

So then perhaps we have one rule, that passwords must not be short. A long password is much more likely to be secure than a short one … right?

What about this four character password?

Categories: Programming

Planning before Scheduling

Herding Cats - Glen Alleman - Fri, 03/10/2017 - 05:05

Planning is an unnatural process, it’s much more fun to get on with it. The real benefit of not planning is that failure comes as a complete surprise and is not preceded by months of worry.
‒ Sir John Harvey Jones


For which of you, intending to build a tower, sitteth not down first, and counteth the cost, whether he have sufficient to finish it? Lest haply, after he hath laid the foundation, and is not able to finish it, all that behold it begin to mock him, saying, This man began to build, and was not able to finish ―Luke 14:28-30

The Plan describes where we are going, the various paths we can take to reach our destination, and the progress or performance assessment points along the way to assure we are on the right path. These assessment points measure the “maturity” of the product or service against the planned maturity. This is the only real measure of progress – not the passage of time or the consumption of money.

Without a Plan, the only purpose of the Schedule is to show the sequence of work and the dependencies between the work activities, and to record progress as the passage of time and the consumption of money.

The Plan - actually the Integrated Master Plan - is the basis of the Integrated Master Schedule.


This decomposition is not unique to the IMP/IMS paradigm. Without some form of decomposition of what “done” looks like, it is difficult to connect the work of the project to the outcomes of the project. This decomposition – which is hierarchical – provides the mechanism to increase cohesion and decrease coupling of the work effort. The notions of coupling and cohesion come from the systems architecture world and have been shown to increase the robustness of systems. The project's cost, schedule, and resulting deliverables are themselves a system, subject to the same coupling and cohesion.

The mechanics of the Integrated Master Plan and Integrated Master Schedule look like this:

[Figure: mechanics of the Integrated Master Plan and Integrated Master Schedule]

Related articles:
  • Build a Risk Adjusted Project Plan in 6 Steps
  • Architecture-Centered ERP Systems in the Manufacturing Domain
  • Herding Cats: How to Deal With Complexity In Software Projects?
  • IT Risk Management
  • Why Guessing is not Estimating and Estimating is not Guessing
Categories: Project Management

The Fallacy of Wild Numbers

Herding Cats - Glen Alleman - Fri, 03/10/2017 - 03:57

Tom DeMarco wrote the following, which has been picked up by the agile community to mean that estimating is a waste:

My early metrics book, Controlling Software Projects: Management, Measurement, and Estimation (Prentice Hall/Yourdon Press, 1982), played a role in the way many budding software engineers quantified work and planned their projects. […] The book’s most quoted line is its first sentence: “You can’t control what you can’t measure.” This line contains a real truth, but I’ve become increasingly uncomfortable with my use of it.

Implicit in the quote (and indeed in the book’s title) is that control is an important aspect, maybe the most important, of any software project. But it isn’t. Many projects have proceeded without much control but managed to produce wonderful products such as Google Earth or Wikipedia.

To understand control’s real role, you need to distinguish between two drastically different kinds of projects:

  • Project A will eventually cost about a million dollars and produce value of around $1.1 million.

  • Project B will eventually cost about a million dollars and produce value of more than $50 million.

What’s immediately apparent is that control is really important for Project A but almost not at all important for Project B. This leads us to the odd conclusion that strict control is something that matters a lot on relatively useless projects and much less on useful projects. It suggests that the more you focus on control, the more likely you’re working on a project that’s striving to deliver something of relatively minor value.

Let's start with some logic assessment in the context of an actual business.

  • If I'm investing $1,000,000 (long form for effect) and only getting back $1,100,000 – that is a $100,000 return on a $1,000,000 investment, a 10% return on investment. Assuming the investment of $1,000,000 is just labor, a standard burdened rate gets me 10 staff for a year's work. The natural variances of work productivity, staff turnover, delays and disruptions, and other event-based and naturally occurring variances have a high chance of wiping out or greatly reducing my $100,000 return.
  • If I'm investing $1,000,000 and getting $50,000,000 back, that's a 4,900% return on my investment. The implication is that there is no need to measure the project's performance and, by extension, no need to estimate. This, of course, ignores the need to know how much of that $50M minus the $1M I am willing to lose. This is the value-at-risk discussion.
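The arithmetic in the two bullets can be made concrete (a minimal sketch; the two project figures are DeMarco's, while the 10% loss tolerance at the end is a hypothetical illustration of value at risk):

```python
def roi_percent(cost: float, value: float) -> float:
    """Return on investment as a percentage of the amount invested."""
    return (value - cost) / cost * 100


print(round(roi_percent(1_000_000, 1_100_000), 1))   # Project A: 10.0
print(round(roi_percent(1_000_000, 50_000_000), 1))  # Project B: 4900.0, i.e. 50:1 gross

# Value at risk: even on Project B the investor must decide how much of the
# hoped-for $49M upside they are willing to lose. A hypothetical 10% loss
# tolerance still puts $4.9M of expected value on the line.
value_at_risk = (50_000_000 - 1_000_000) * 0.10  # about $4,900,000
```

Note that the thin 10% margin on Project A is exactly what the natural variances described above can erase, which is why control matters most there.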

Let's look at a few companies' SEC 10-Ks to see what their Return on Equity and Return on Investment are ...

The 50-to-1 return on a project is a logical fallacy of exaggeration (stretching the truth, overstatement), which occurs when a point is made by saying something that would be true had the truth not been distorted in some way.

Having been through a few startups – one that went public, one that failed, and one that was bought – and being married to a person who has gone through 6 startups, I have some familiarity with how startups work.

A 50-to-1 ROI is unheard of. Google Maps is sometimes mentioned as a 50:1 return. Google Maps makes money by monetizing the Pins. Google's 2015 10-K shows $74.5B in total revenue. The total cost of revenue is 37%, and R&D expenses as a percentage of revenue are 15%. The details of individual products are in the balance sheet, but there is no 50:1 ROI on Maps or any other product used as an example.

As well, again from hands-on experience with startups, those investing need to know when they'll get their money back. If the investor is not external, the internal CFO needs that same information. DeMarco has made several other off-the-wall comments – one in IEEE Computer – that appear to be misinformed about how projects and businesses work in the 21st century. For example, in "Software Engineering: An Idea Whose Time Has Come and Gone?", where the original 10% and 50:1 return example comes from, he goes on with an analogy of parenting teenagers, where he says ...

Now apply “You can’t control what you can’t measure” to the teenager. Most things that really matter—honor, dignity, discipline, personality, grace under pressure, values, ethics, resourcefulness, loyalty, humor, kindness—aren’t measurable.

I don't know where he grew up, but where I grew up – the Texas Panhandle – those attributes were certainly measurable. So I get a bit skeptical when a thought leader makes statements that are not logical. The measurement of those attributes is exemplified in the Boy Scouts, on sports teams, in the church pews, in volunteer activities, and further on in adulthood, in the leadership activities of the military and business management.

Don't fall for the Strawman Fallacy, in which an argument is exaggerated, misrepresented, or just completely fabricated. This kind of dishonesty serves to undermine honest, rational debate.

This is common in the #NoEstimates community.

  • We can build a ticketing system in 24hrs.
  • All estimates are evil.
  • I can make decisions in the presence of uncertainty without estimating the impact of those decisions.
  • I really don't mean NO when I say NO Estimates, I mean YES, but it means NO, so give me $1,000 and I'll let you hear me say that in person.

 

Related articles:
  • The Microeconomics of Decision Making in the Presence of Uncertainty
  • Systems Thinking, System Engineering, and Systems Management
  • Risk Management is How Adults Manage Projects
Categories: Project Management