Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!
Software Development Blogs: Programming, Software Testing, Agile Project Management
It is popular to make a list of maxims for developing products, managing projects, or managing business processes. Some are based on experience, some based on surveys, some based on principles and practices of a profession.
Here's mine, based on counterexamples to the sole contributor paradigm. The sole (or small group) contributor paradigm means maxims gathered from a person's own engagement on the job. One example of the sole contributor approach, used without permission and with full attribution, is Woody Zuill's list. There are others - Five Project Maxims, 18 Maxims of Successful IT Consulting, and more. But I like Woody's framework best, because his topics fit best with our processes on complex, mission critical, software intensive programs and the hands-on integration with process. Although Woody would likely not agree, both technical skills and formal process frameworks are critical success factors in any sufficiently complex domain - both are needed.
Doing the work is guided by the Strategy and Performance Goals of the needed Capabilities.
Without a clear and concise understanding of what DONE looks like in Measures of Effectiveness for the needed capabilities, all the project work has no home. It's just a list of features or functions captured by the development team from the customer or product owner.
It's the capability to accomplish a business strategy that defines the mission and vision of the project. Why are we doing this project? How will we recognize we've accomplished our mission? The capabilities delivered by the project start with the fulfillment of Critical Success Factors, which in turn implement a Performance Goal in support of a Strategic Objective that measurably benefits the business or supports a mission.
Responding to Change is impossible unless the system is easy to change, easy to maintain, easy to fix, and easy to enhance.
The ability to easily change a product or a process starts and ends with the architecture. This understanding began with Notes on the Synthesis of Form, Christopher Alexander, 1964. It's the architecture that enables the change, assuring that coupling among the components is minimized, cohesion between the static and dynamic processes is maximized, and separation of concerns is traceable to all architecture decisions. If you're developing these as you go - allowing them to emerge - you're going to be disappointed when you discover your product is coupled in ways you didn't know, has weak cohesion among its parts, and has cross-cutting concerns that result in a tangled mess when you start to make changes.
The notion that the best architectures emerge is suggested by those not working on complex systems of interdependent components, but on systems with lower levels of complexity between the components. Imagine an enterprise ERP system, a software intensive manufacturing system, the 32 flight and weapons computers on the F-35, the multiple levels of interaction of Future Combat Systems (I worked the rebaselining of the IMP/IMS for Class I), or the process control systems found in a nuclear power station.
Now ask: would you like the architecture of these software systems to emerge as the development takes place?
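To make the coupling and cohesion point concrete, here is a minimal Python sketch. The names (TelemetrySource, RadarAltimeter, GuidanceController) and the stub value are hypothetical, invented purely for illustration: the guidance code depends only on an abstraction, so the telemetry implementation can change without rippling through the rest of the system.

```python
from abc import ABC, abstractmethod

class TelemetrySource(ABC):
    """Abstraction the rest of the system couples to."""
    @abstractmethod
    def read_altitude(self) -> float: ...

class RadarAltimeter(TelemetrySource):
    """One concrete source; replaceable without touching GuidanceController."""
    def read_altitude(self) -> float:
        return 1200.0  # stub value for this sketch

class GuidanceController:
    """Coupled only to the TelemetrySource abstraction, not a concrete device."""
    def __init__(self, source: TelemetrySource):
        self.source = source

    def needs_climb(self, floor: float) -> bool:
        return self.source.read_altitude() < floor

controller = GuidanceController(RadarAltimeter())
print(controller.needs_climb(2000.0))  # True: 1200.0 is below the 2000.0 floor
```

The same separation of concerns is what an architecture framework enforces at scale: decisions about who may depend on what are made up front, not discovered after the coupling is already in place.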
Here's where to start for architecture in the enterprise IT domain. There are architectures for realtime embedded systems as well. For defense systems, DoDAF is the architecture framework. So when you hear responding to change, ask: what's the mechanism that allows you to do that, when the system you're working on is complex, high risk, and critically important - say banking, navigation and control, oil & gas supply chain, electric power generation and delivery, health care, drug development, retail, transportation? You get the idea.
The notion of Question Everything ignores the fundamentals of every professional process improvement paradigm.
Working on projects is not about the needs of the individual. It's about the needs of the whole. Personal desires must be subordinate to the needs of mission success. It's not about you. It's about the customer and the governance framework in which the customer operates her business or fulfills her mission.
Questions are great; you can learn a lot from questions. But questions asked without doing your homework are a waste of your time and the time of those you ask. Go do your homework. Learn about ITIL, COBIT, INCOSE Systems Engineering, SEI, and other professional frameworks first. Then you'll have a basis for your questions. Then start with the root cause test for your questions. When someone says those haven't worked in their experience, don't just ask the 5 Whys - seek the root cause.
The Whys approach may be able to reveal the symptoms. But to get at the root cause a deeper assessment is needed - one based on a process framework, a place for the Whys to land. Why didn't the work team follow the established test procedures? Why didn't the customer establish a set of needed capabilities before we started developing stories for the software development effort? These whys then reveal the root cause. The whys need to have actionable outcomes, not just the question. 1st graders can ask why.
Process improvement needs to ask why, but it can only deliver value when there is an actionable answer. No actionable answer in units of measure meaningful to the decision makers? The question everything paradigm is Muda (waste).
Working Product is product that meets the Technical Performance Measures (TPM), Measures of Performance (MoP), and Measures of Effectiveness (MoE) as defined by the Concept of Operations (ConOps), Statement of Objectives (SOO), and Statement of Work (SOW).
Without stating these attributes of the working product there is no way to tell if it is the right working product. Right for the needed capabilities. Right for the strategy. Right for the technical, operational, and performance requirements. Simply saying working product in the absence of these measures ignores the larger context of effectiveness and strategic value. When we hear many software features have little value, we can only determine that if the planned strategic value is defined and tested along the way. This is NOT Big Design Up Front. It is the core of strategy making.
But the notion of having working software be put to immediate use needs a domain and context for it to be useful. Otherwise it's just another platitude of the agile vocabulary. Working on orbit for a Navigation and Guidance computer may not happen for 9 months - that's the time it takes to get to Mars. So working needs an operational definition. Working in the full fidelity emulator of the spacecraft. Working in the complete Verification and Validation (100% thread coverage) of the emergency shutdown system (I was one of the original architects of this system). Working in the full transaction processing system test bed.
Crunch-time is a symptom of harmful and counter-productive attitudes.
It's got nothing to do with attitudes, and everything to do with competent and mature business management and processes. Newspapers have crunch time every day, sometimes twice or three times a day. Banks have crunch time every month. Surgeons have crunch time once they make their first cut. Three miles out onto a hot LZ in I Corps, 1969, is crunch time delivering critical supplies to Fire Base Ripcord. Flying to New York City has crunch time every time the 777 pushes back from the gate at LAX. It's not attitude, it's the competency to manage in the presence of uncertainty and deliver as promised, because you've been trained, have experience, and have a support system. But in the end it's the process. Process rules.
Knowing the capacity for work starts with knowing the demand for work. Throughput can only be determined if you know both demand and capacity. Then and only then can you add margin for the irreducible uncertainties, and reserve for the reducible uncertainties.
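As a back-of-the-envelope sketch of that ordering (every number below is assumed, purely for illustration): throughput comes first, from both demand and capacity, and only then are margin and reserve set aside.

```python
# All numbers here are assumed, for illustration only.
demand_per_week = 40       # units of work requested
capacity_per_week = 35     # units of work the team can actually complete

# Throughput needs BOTH demand and capacity.
throughput = min(demand_per_week, capacity_per_week)

# Then, and only then, set aside margin and reserve.
margin = 0.20 * throughput    # margin for irreducible uncertainties
reserve = 0.10 * throughput   # reserve for reducible uncertainties

committed = throughput - margin - reserve
print(committed)  # 24.5 units/week is what can credibly be promised
```

The point of the arithmetic is the sequence, not the percentages: commitments made before demand, capacity, margin, and reserve are known are guesses, not plans.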
We are only innovators of our process if we are capable of providing the innovative solution within the governance framework of our business.
If it ain't broke, don't fix it. If it's broken, first find the root cause and fix that cause. Rarely in modern business is there a broken process that didn't work right at one time. A critical success factor for all process improvement is to determine the root cause of the failure. Then and only then examine if there is a process problem. If so, fix the process. If not, fix the application of the process. Stop wasting time looking for solutions to the wrong problem.
The object of all projects is to deliver value to those paying you to do the work.
Writing software for money is not the same as producing art for money. If you're producing art for money, you're probably not a very good artist. If you're treating your job of producing value for money as art, you're probably not getting a lot of repeat customers.
Customers bought the capability to do something; they only care if you're self-actualized if they are a relative. Customers are happy when you've fulfilled their need to possess a capability for the expected cost on the expected day. There must be lots of opportunities for participants on a project to receive personal satisfaction, grow together as a team, increase their skills, and even be innovative - but the customer is rarely willing to pay for that directly. It had better be wrapped in the overhead rate. Good artists copy, great artists steal - Pablo Picasso. Good firms hire people already prepared to succeed. Read Making the Impossible Possible: Leading Extraordinary Performance: The Story of Rocky Flats for specific actionable advice on doing all the things needed for success, including all the people processes. The abstract is here.
We must understand that improvement is hard work. There is no free ride.
Nobody Ever Gets Credit for Fixing Problems that Never Happened: Creating and Sustaining Process Improvement is a start. They suggest less than 10% of the firms adopting Toyota's TQM actually apply it properly. This loops back to the question everything nonsense, when the questioning is uninformed by the missing root cause analysis of the dysfunction. The source of dysfunction in the workplace must be determined before any suggestions for improvement can be made. Stating something is the smell of dysfunction is like asking what's that rotten smell? as we drive by the recycling center. Look out the window and see the source. Find the source before doing anything.
At the end of the day, successfully managing projects is hard work. But there is plenty of advice. This is one of my favorites.
Ten rules for common sense program management from Glen Alleman. So the wrap up here starts with establish an architectural framework. A framework for product development, programmatic management (cost, schedule, risk, performance assessment), and most of all a framework for responding to the rapidly emerging forces of the marketplace, technology, and competition. Remember Steve Jobs's ideas on innovation.
Thanks to my 17,000 readers. It's been a busy year, working on both the policy and execution sides of large programs. Finished the book after 9 months of work, coming in February. The year ended with a focus on estimating and risk. The slew of emails, blog posts, and Twitter exchanges is always a good indication of an important topic.
Here's a summary of the estimating topic.
Along with the estimating discussion is the discussion of project risk.
Some other topics that were important in 2013 include:
2014 opens with the continuation on the policy side with updates to integrate Technical Performance Measures with Earned Value Management, and on the execution side with triage for major defense programs.
Organizations go through different levels of maturity as they grow. Some will say that they "learn to perform", others will say that they "become ossified and slow", yet others would say that they "mature". In my view none of these classifications is accurate; they all touch on possible outcomes of a process of "aging". Organizations grow older - but what dynamics do we see in these organizations?
Organizations grow older, they age just like us!
A recurring pattern in organizations is that they create, develop and install processes. Processes are, for practical purposes, sets of rules that define how work should happen in those organizations. They are the rules we follow daily when we work.
These rules are necessary for a common understanding of expectations and roles for each of us. We need those rules or processes so that we know what to expect, and what is expected of us. Or do we?
What is the role for rules in an organization?
In the study of Chaos and Complex systems scientists have found that Complex or Chaotic systems exhibit infinitely complex behavior starting from very simple - perhaps even simplistic - rules. The most common example of these simple rules in documentaries about Chaos and Complexity is the way ants find their way to a new food source. The rules they follow are simple:
PS: you can find a much more detailed explanation here.
Following these simple rules, ants can not only find food, but feed an entire colony. In fact, when observed from an external viewpoint, we see complex system behavior even though each ant alone follows a very simple set of rules.
The behavior we witness is both complex and adaptive. Hence the term Complex Adaptive System (CAS).
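As a hedged illustration (this is a toy, not a validated ant model), a few lines of Python show the principle: each walker follows one trivial rule - deposit "pheromone" and step toward the stronger deposit - yet the population as a whole concentrates onto reinforced trails.

```python
import random

random.seed(42)  # deterministic toy run
SIZE, ANTS, STEPS = 30, 20, 200
pheromone = [0.0] * SIZE
ants = [random.randrange(SIZE) for _ in range(ANTS)]

for _ in range(STEPS):
    for i, pos in enumerate(ants):
        left, right = (pos - 1) % SIZE, (pos + 1) % SIZE
        # The entire rule set per ant: follow the stronger trail; tie -> random.
        if pheromone[left] > pheromone[right]:
            ants[i] = left
        elif pheromone[right] > pheromone[left]:
            ants[i] = right
        else:
            ants[i] = random.choice([left, right])
        pheromone[ants[i]] += 1.0              # deposit
    pheromone = [p * 0.95 for p in pheromone]  # evaporation

# The aggregate is uneven: a few reinforced cells carry most of the trail.
top5_share = sum(sorted(pheromone, reverse=True)[:5]) / sum(pheromone)
print(f"share of pheromone on the 5 strongest cells: {top5_share:.0%}")
```

No ant knows about trails; the trail is a property of the system, not of any rule.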
What does this have to do with us - humans - and companies?
In investigating CAS we have found that the more complex the rules we define, the less likely the system (or company/organization) will be adaptive. In fact, the opposite is true. Companies often put rules in place to "clarify and specify" the expected behavior, thereby making it simple - or even simplistic. One glaring example of this phenomenon is the way companies develop highly sophisticated goal-setting processes that eventually end up setting goals that distort the behavior of the organization in a way that makes it lose sight of what matters: its adaptation to the environment it exists in (customers, suppliers, society, etc.).
The more complex the process and rules, the less adaptable the organization will be!
But there are more examples of this phenomenon, whereby defining complex rule systems leads, invariably, to simple - even simplistic - behavior.
What's the role for rules?
What is now clear from research is that simple rules can lead to complex and adaptive behavior in the "system" or organization. For us managers, this means that we must avoid the temptation to develop complex sets of rules, and must be on the lookout for rules that add burden to the organization, possibly constraining behavior to the point that the organization is unable to adapt to the changes it faces in the market.
The recipe to foster adaptability in the organization is simple: when possible remove rules, when in doubt remove rules. Add rules only when the cost of not doing so is prohibitive (legal boundaries for example) or when you've learned something about your environment that should be codified for everybody to follow (you found out that a certain technology is too expensive or unstable).
But here is the most important rule for you: all rules should be created as a result of a root-cause analysis, never as a result of a knee-jerk reaction to some unplanned or unpredictable outcome.
The most important rule: No rules should be established without a thorough root-cause analysis!
The quote "Keep it Simple" really means: use fewer rules and more feedback! Like Agile...
Image credit: John Hammink, follow him on twitter
Todd Little and Steve McConnell use a charting method that collects data from projects and then plots it in the following way. For Little's data it's the initial estimated duration versus the actual duration,
and for McConnell's data it's the estimated completion date versus the actual completion date.
So Where's the Rub?
These charts show that project estimates exceed some ideal estimate on a number of projects - the sampled projects. If we were sitting in a statistics class in an engineering, physics, chemistry, or biology course, here are some questions that need answers.
What's missing are several things.
The Core Issue with Using Past Numbers
What To Do Next?
The first thing to do is go to the book store and get a book on statistical forecasting or statistical estimating that has actual math in it. Next is to ask some hard questions.
Then read all you can find on reference class forecasting and statistical inference. Data is not information. Correlation is not causation.
There's really no way out of this. Spending other people's money - at least money they are not willing to lose - means having some process of estimating the probability of success.
The Final Thought
Plot cost and schedule for your projects as a Joint Probability. Below is a Monte Carlo Simulation of the Joint Cost and Schedule for a program. A similar chart is needed, but using a collection of projects. Take Little's and McConnell's sample projects and plot both cost and schedule. There may be correlations between original cost and actual cost, versus original schedule and actual schedule. Big projects have higher risk - restating the obvious, by the way. Higher risk projects may have wider variances in performance - also restating the obvious.
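A minimal Monte Carlo sketch of that joint view, in Python, with entirely made-up triangular distributions (a real analysis would draw from calibrated reference-class data): the joint confidence of hitting both the schedule and cost targets is lower than either marginal confidence alone, which is exactly what the one-dimensional plots hide.

```python
import random

random.seed(7)

# (min, most likely, max) per task, for duration in weeks and cost in $K.
# These numbers are assumed, for illustration only.
tasks = [((4, 6, 10), (100, 150, 260)),
         ((2, 3, 7),  (40,  60, 120)),
         ((6, 9, 16), (200, 300, 520))]

TRIALS = 10_000
joint = []
for _ in range(TRIALS):
    dur = sum(random.triangular(lo, hi, mode) for (lo, mode, hi), _ in tasks)
    cost = sum(random.triangular(lo, hi, mode) for _, (lo, mode, hi) in tasks)
    joint.append((dur, cost))

p_dur = sum(d <= 20 for d, _ in joint) / TRIALS        # schedule confidence
p_cost = sum(c <= 560 for _, c in joint) / TRIALS      # cost confidence
p_joint = sum(d <= 20 and c <= 560 for d, c in joint) / TRIALS

print(f"P(dur<=20wk)={p_dur:.2f}  P(cost<=$560K)={p_cost:.2f}  P(both)={p_joint:.2f}")
```

Scatter the (dur, cost) pairs and you have the joint probability chart the paragraph describes; the single-variable overrun plots are marginal slices of it.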
But these one-dimension - one independent variable - plots of cost overrun versus original cost estimates just show the uncalibrated, un-normalized, non-root-cause data. It's just a chart, of little value for taking corrective action.
When we start talking about anything and everything to do with project management, it's best to start with the end in mind. What is this project supposed to provide for the customer? What Capability will the customer possess when the project is complete? This is the Capabilities Based Planning paradigm used in our defense and space domain. CBP started in the Australian Forces, moved to the Canadian Forces, and came to the US DOD. From there it is a natural jump for any project based planning process to ask: what do we need to successfully complete our business mission?
So here's the framework for starting with the end in mind, as Mr. Covey advised years ago. The Capabilities being delivered on or near the need date, at or near the desired cost, are developed with the following steps. This notion of on or near is critical. If the Capabilities show up later than needed or cost more than planned, their value to the business is diluted. By how much? It depends on the amount. There are many examples of projects way over budget and way late that still delivered value - the Sydney Opera House, DIA here in Denver. But there is another example where missing the date by 2 weeks on a 5 year program resulted in total loss of mission.
As he looked at the numbers he noticed that the oscillations of his model did not repeat.
In fact he entered the numbers again, and again, and again but his model would refuse to behave the same way twice.
During our long education we were taught that given the initial state of a system (the present parameters that define the state of the system) plus the equations that fully describe the system, you will be able to plot the future behavior of the system forever.
And we have plenty of evidence that this approach works. For one, we are able to predict when Comet Halley will next visit our corner of the solar system. And many astronomers were able to predict that visit thousands of years ago!
So, why was his model stubbornly behaving differently every time he punched the numbers in? After all he knew the system perfectly - he had designed it!
The best way he had to describe this "unpredictable" behavior was with the word "chaotic": a never ending sequence of never repeating patterns. Nothing was the same even if the initial state was the same, and he was the one defining the equations for this toy weather model. I mean, he had defined ALL the equations...
It took a few days, but he figured it out. He had entered the parameters with a precision of 1/1000, but during processing the computer executing the model would use numbers with a precision of 1/1000000. On initial consideration this did not seem a relevant difference; after all, a difference of 1/1000 was equivalent to having a butterfly flap its wings in China and having that create a storm in North America.
Systems that would never repeat in behavior even if they ran forever
Later, this and other experiments would be repeated all over the world, in many different domains, but the results would be similar. All over the world scientists were discovering other systems that exhibited "sensitive dependence on initial conditions" (aka suffered from the Butterfly Effect), the scientific definition for "chaos", which later became the popular term to describe systems that would never repeat in behavior even if they ran forever. These systems exhibited an infinite variety of behavior when certain conditions were met. What were those conditions? That we will explore in a later post.
Photo credit: John Hammink, follow him on twitter
Either write something worth reading or do something worth writing about - Benjamin Franklin
When there are simple, many times simple-minded, platitudes about solving complex, many times wicked or intractable, problems in our project management world, think of Ben.
Tell me something actionable, with corrective outcomes in units of measure meaningful to the decision makers. This almost always involves doing something that is obvious, measurable, and that has been done before, but we just forgot, didn't pay attention, or have chosen to stop doing.
My favorite to date is the link to a NYT article about how the federal government needs to improve its acquisition process. This article lists programs, all analyzed by GAO, all well reviewed, with corrective actions being taken. The OP of this article has no connection, other than reading a poorly informed newspaper article, to the problems and the possible corrections.
So Ben's advice can be applied here. Do something worth reading about; stop restating the obvious. Or do something outside your personal anecdotal experience worth writing about for others to test in their domain. Either way, we've got very qualified people identifying gaps, root causes, and proposed corrective actions for most every problem on the planet. Start with research on what they're saying first, then see if what is claimed to be the smell of dysfunction has a solution that can be directly applied to your domain. If not, there is a real opportunity to contribute to the art and practice of spending other people's money in exchange for creating value.
Twas the Night Before Baseline
(With apologies to Clement Clarke Moore)
Twas the eve for the holidays,
And all through the shop,
Our consultants were working,
Would they ever get to stop?
All the CAMs had corrected
their IMSs with care,
With hopes that the IBR interviews
Would not be nightmares.
When all of a sudden
There arose such email chatter,
IMS updates were piling up,
So which ones would matter?
And what did I see
To my eye‚Äôs disbelief?
But even more IMS updates
In BoxNet motif!!
But jolly fat fingers
flew over keyboards,
And corrections were entered
Errors vanquished evermore!
Now CAs! Now WADs!
Now WPs! Now RAMS!
On CARs! On Baselines!
On Schedules! On CAMs!
To the TMT briefings,
To the top TMT Generals,
Now CAM away, CAM away, CAM away all!
Courtesy of Jay Charleston, CAM on a DTRA Anti-Virus Program. Our team was literally working the baseline submittal in late December for an IBR in mid January. Only the strong survive the 72 hours straight of re-baselining a $350M bio-pharma drug development program.
For it is your business, when the wall next door catches fire - Horace
The notion of challenging everything without providing the root cause, an example beyond personal opinion, peer reviewed samples from an appropriate domain, counter examples for both the problem and the corrective action, and successful remedies with known beneficial outcomes is just chatting, best kept to the bar stool. No actionable outcomes can be expected.
I had a boss who was great at saying, “Terri did this. Jen did that. JR did this other thing.” We all knew who had learned about different areas of the system, who had succeeded at which parts of testing or development or project management. It was great.
She didn’t just tell us. Nope, our boss told her bosses.
That’s one of the reasons I had many opportunities to grow in that particular job. Not just because I worked hard and did a good job. But because my boss told her management team.
Contrast that with some other places I’ve worked, especially where command-and-control still had a foothold.
I once led a small team where we were implementing a process control application. It was a difficult project. My manager knew what we were doing, but we were on the hairy edge of success/failure the entire time. I took a one-week vacation, and my team continued while I was away.
Another VP across the organization—not my manager—inserted himself in my project while I was away. For that entire week, he “managed” our customer. I had been the sole customer contact up until then. All hell broke loose.
I returned from vacation to a gazillion voice mails on my personal answering machine. (This is before the days of cell phones :-) It took me a month of plane rides to fix this customer problem.
When we released that project, it was successful. At the next Ops meeting, he told everyone that he personally had overseen the project. My manager did not participate at Ops meetings.
Afterwards, my manager asked me about the project. “I thought you and the team were working on this project?”
“We’re still cleaning up. You should see what Andy, Bill, and Mack have done. They have really performed. I’m just about ready to write my post-project review for you,” I replied.
My manager frowned. “Well according to VP, you had nothing to do with this project. It was all him, and only him.”
“Are you serious?” I was flabbergasted. “You know how hard I’ve been working on this. You’ve been signing my expense reports. Wait a minute. What about the guys? Did he say anything about them?”
“No, he didn’t mention them either. He pulled this project out of the fire, all by himself.”
“Do you believe him?” I couldn’t believe what I was hearing.
“Well, I know you’ve been busy, but I don’t know if you’ve been distracted by other things.”
And that’s when I learned about the value of weekly one-on-ones (I’d been flying, so I hadn’t had one in a few weeks), email status reports or somehow letting my boss know what I was doing, and giving credit up. If I’d been giving credit all along for my project team, no one would have been distracted by the bozo VP.
I fixed that, pretty darn quick, and apologized to my team. They laughed it off. I’d built a lot of trust with them already. But my manager had a gazillion fires to fight. He had no idea who was managing what.
That is the topic of this month’s management myth: People Don’t Need External Credit.
The more you give credit, the more you look good.
Reality is never so tidy - Sherlock Holmes
When we hear of tidy fixes to complex problems, remember Sherlock. Before deciding that any idea is viable for improving our lot in life around project success, ask if the suggested solution has answers to these questions. If so, there are 5 other practices - universal project success practices - that must be in place next. But these Principles are the foundation of success for all projects, no matter the domain, the management processes, the organizational structure, the tools, anything.
Don’t bother making goals. Just get rid of waste and create options. And let the good things happen.
There is a post that references a concept I've come to use that puts uncertainty into three classes. That post is not exactly what I said, so let me clarify it a bit.
First some background. I work on an engagement that provides advice to an office inside the Office of the Secretary of Defense (OSD). This office is responsible for determining the Root Cause of program performance problems for ACAT1 (Acquisition Category 1) programs.
These are large programs - larger than $5B. In most domains outside the ACAT1s this number is ridiculously large. But inside the circle of large defense programs, $5B is really not that much money. For the Joint Strike Fighter, a Congressional Quarterly article and the Government Accountability Office indicated a "Total estimated program cost now $400B," nearly twice the initial cost. DDG-1000 is $21,214 Million - yes, that's $21,214,000,000.
No IT or software development project would come within a millionth of that. If you're interested, there are reports at RAND and IDA on the current issues. There are certainly multi-million dollar IT projects. The ACA web site is probably going to be in the range of $85M to several hundred million. The facts are still coming in. So anyone who says they know and doesn't work directly on the program probably doesn't know and is making up numbers. GAO will get to the real numbers soon, we hope.
Principles Rule, Practices Follow, Everything Else is BS
The principles of cost and schedule estimating, and the assessment of the related technical and programmatic gaps, are the same in all domains at every scale, from small to billions. Why? Because it's the same problem no matter the scale.
The soliloquy in the movie makes a good point - handling the truth is actually very difficult for almost everyone outside the domain, in many instances.
We want the simple answer. We want it all to be fine. We really don't want to do the heavy lifting needed to come up with an answer. Many times we don't want an answer at all; we want to just do our job and ignore the fiduciary responsibility to tell others what the cost and schedule impacts are, or even to do our job of discovering what DONE looks like before we start spending other people's money.
So here's the way out of the trap of at least (1) and (2).
But the words used in the original post that referenced my post are not my intent, nor are they part of any process I work in.
Here's a list of other posts on this topic. It's a critically important topic, one that deserves detailed analysis. One that we're obligated to know and use when it's not our money we're spending. It's called Governance.
Here's some more discussion on Estimating for fun and profit.
An interview with Laurence McCahill about happy startups.
Passion, purpose, people, profits...
The post How to Be a Happy Startup? – 15 Minutes on Air with Laurence McCahill appeared first on NOOP.NL.
These are the 20 best books I read in 2013, ranked by number of highlights on Amazon Kindle.
The notion of Challenge Everything is likely one that produces little progress without first understanding what the Root Cause of the observed problems is. To discover the root causes we must separate our observations from our personal biases. The root causes come in three flavors:
So when challenging everything, be prepared to speak to the root causes, some corrective actions for the root causes, and - oh yeah - some actual hands-on experience in the specific domain with both the problem and the potential solutions. Otherwise, you're likely to be labeled a whiner.
Order of Apparent Chaos - I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by the "Law of Frequency of Error." The law would have been personified by the Greeks and deified, if they had known of it. It reigns with serenity and in complete self-effacement amidst the wildest confusion. The huger the mob, and the greater the apparent anarchy, the more perfect is its sway. It is the supreme law of Unreason - Francis Galton, Natural Inheritance (1889).
When there is mention that the future cannot be forecast, or that estimates of past, present, and future cannot be made, careful consideration must be given to the speaker's lack of understanding of basic statistics. One place to start is Principles of Statistics, M. G. Bulmer.
Probability and statistics rule our project world. We must treat all aspects of project work - technical, cost, and schedule - as random variables drawn from an underlying probability distribution, either discrete or continuous. Without considering the random nature of these project elements and their behaviors, our decision making capabilities are severely limited. When we ignore them, fail to consider them, and proceed in their presence, we will be disappointed with the outcomes.
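As a small illustration of treating a project duration as a random variable (the lognormal parameters below are assumed for the sketch, not calibrated data): for a right-skewed distribution the mean sits above the median, and an 80th-percentile commitment sits above both - exactly the structure a single point estimate hides.

```python
import random
import statistics

random.seed(11)

# Task duration in weeks as a random variable - an assumed lognormal(2.3, 0.5).
samples = sorted(random.lognormvariate(2.3, 0.5) for _ in range(20_000))

median = samples[len(samples) // 2]
mean = statistics.mean(samples)
p80 = samples[int(0.8 * len(samples))]

# Right skew: the long tail drags the mean above the median, and an 80%
# confidence commitment is higher still.
print(median < mean < p80)  # True
```

Quoting one number for the duration silently picks a point on this curve; quoting the curve, or at least a percentile, is what managing in the presence of uncertainty requires.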