The current issue of ICEEAWorld has an article on estimating on agile projects.
To build a credible estimate for any project, in any domain, to produce a solution to any problem, we need to start with a few core ideas.
If you're missing any of the items in this picture, it's going to be a disappointing effort. Some may even call it a waste to estimate. But not for the reasons you think. It is a waste to estimate if you don't know how to estimate. But estimates are not for you, unless you're the one providing the money. They're for those providing the money, who expect the outcomes from that expense to show up on some needed date, with the needed value that provides them the ability to earn back the money.
Rally Software is a local firm providing tools for the management of agile projects. Project managers provide the glue for all human endeavors involving complex work processes. Rally has those tools, as do many others. Rally also has a message that needs to be addressed by the project management community: organizing, planning, and executing social projects is one of the roles project managers can contribute to.
I love the concept of OKRs (Objectives & Key Results) and while experimenting with this idea at Happy Melly, I’m trying to figure out how to adapt the practice to fit my own context. One thing my team members and I have noticed over the last few months is that it’s hard to remember what we have committed to.
Mike Cohn of Mountain Goat Software has a collection of 101 Agile Quotes.
There are a few I have heartburn with, but the vast majority are right on.
Some of my favorites:
By now all the different editions of my new book Management 3.0 #Workout are finished and published. The book, easily the most colorful and practical management book in the world, is available as PDF, ePub, Kindle, and in a printed edition. And, although writing a book is great fun, finishing a book feels even better! Especially since it's worth a little celebration. If you've read any of my work, you know I love to celebrate. :o)
When we read on a blog post that estimates are not meaningful unless you are doing very trivial work, I wonder if the poster has worked in any non-trivial software domain. Places like GPS OCX, SAP consolidation, manned space flight avionics, or maybe health insurance provider networks. Because without some hands-on experience in those non-trivial domains, it would be hard to actually know what you're talking about when it comes to estimating the spend of other people's money.
Maybe some background on estimates for non-trivial work will shed light on this ill-informed notion that only trivial projects can be estimated.
These are a small sample of papers from one journal on software estimating for mission-critical, sometimes National Asset, projects.
Go to CrossTalk, The Journal of Defense Software Engineering, and search for "estimating" to get 10 pages of 10 articles each on this topic alone. Estimating in non-trivial domains is well developed and well documented, with many examples of tools, processes, and principles.
Do your homework, and the test is much easier.
It could be that the original poster has little experience in mission-critical, national asset, enterprise-class, software-intensive systems. Or it could be the poster simply doesn't know what making estimates for a project that spends other people's money, many times significant amounts of money, is all about.
And of course most of the problems described as the basis for Not Estimating - the illogical notion that if we can't do something well, we should stop doing it - start with not knowing what Done looks like in any units of measure meaningful to the decision makers.
So start here with my favorite enterprise architect blog and his list of books when you follow the link at the bottom.
So when you have some sense of what DONE looks like in terms of capabilities, the estimating process is now on solid ground. From that solid ground you can ask: have we done anything like this before? Or better yet, can we find someone who has done something like this before? Or maybe, can we look around to see what looks like our problem and figure out how long it took them by simply asking them?
If the answer to any of those questions is NO and you're NOT working in a research and development domain, then don't start the project, because you're not qualified to do the work, you don't know what you're doing, and you're going to waste your customer's money.
Scroll to the bottom of http://zuill.us/WoodyZuill/category/estimating/ and search for "A Thing I Can Estimate" to see the phrase, and remember the questions and answers above. If you're not answering those in some positive way, you're on a death march project starting day one, because you don't know what done looks like for the needed capabilities. Not the requirements, not the code, not the testing - that's all straightforward. Without some notion of what the system is supposed to do, you'll never recognize it if it were ever to come into view. And since the customer doesn't know either, all the money they're spending to find out has to be written off as IRAD or flushed down the toilet as a waste of time and effort in the end. And then you'll know why Standish (improperly) reports projects fail.
In a recent email exchange, the paper by Todd Little showing projects that exceeded their estimates was used as an example of how poorly we estimate, and ultimately one of the reasons to adopt the #NoEstimates paradigm of making decisions in the absence of estimates of cost, schedule, and the probability that the needed capabilities will show up on time and be what the customer wanted.
Sherlock here had it right. This picture, by the way, is borrowed from Mike Cohn's eBook of 101 quotes for agile.
I've written about Little's paper before, but it's worth repeating.
It's very sporty to use examples of bad mathematics, bad management, bad processes, and bad practices as the basis for something new. This is essentially the basis of the book Hard Facts, Dangerous Half-Truths, and Total Nonsense: Profiting from Evidence-Based Management. When we start talking about something new and disruptive in the absence of data, facts, root causes, and underlying governance and principles, we're treading on very thin ice in terms of credibility.
Here's the core principle of all software development
Customers exchange money for value. The definition of value needs to be in units of measure that allow them to make decisions about the future value of that exchange. That value comes at a cost - a future cost as well.
Both this future cost and future value are probabilistic in nature, due to the uncertainties in the work processes, technologies, markets, productivity, and all the ...ilities associated with project work. In the presence of uncertainty, nothing is certain - a tautology. There are two types of uncertainty: reducible and irreducible. Reducible uncertainty we can do something about - we can spend money to reduce the risk associated with it. Irreducible uncertainty we can't; we can only have margin, management reserve, or a Plan B.
To make decisions in the presence of these uncertainties - reducible and irreducible - we need to estimate the uncertainty, the cost of handling the uncertainty, and the value produced by the work driven by these uncertainties. When we fail to make these estimates, the uncertainties don't go away. When we slice the work into small chunks, we might also slice the uncertainties into small chunks - this is the basis of agile and the paradigm of Little Bets. But the uncertainties are still there, unless we've explicitly bought them down or installed margin and reserve. They didn't go away. And what you don't know - or choose to explicitly ignore - can hurt you.
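To make the point concrete, here is a minimal sketch, assuming an invented task count and triangular duration distributions (none of this is from the post), of why slicing work into small chunks doesn't remove the irreducible uncertainty - the variances still add up, and the gap between the most-likely sum and the 80% confidence number is the margin you need:

```python
# Monte Carlo over sliced work: slicing does not remove irreducible
# uncertainty. Task count and distribution are illustrative assumptions.
import random

N_TASKS = 20      # work sliced into 20 "small chunks"
TRIALS = 10_000

def task_duration():
    # Irreducible variance modeled as a triangular distribution (days):
    # optimistic 2, most likely 3, pessimistic 7 - skewed right, like real work.
    return random.triangular(2, 7, 3)

totals = sorted(sum(task_duration() for _ in range(N_TASKS)) for _ in range(TRIALS))

most_likely_sum = 3 * N_TASKS             # the "just add up the stories" number
p80 = totals[int(0.80 * TRIALS)]          # 80% confidence completion
print(f"Sum of most-likely estimates: {most_likely_sum} days")
print(f"80th percentile completion:   {p80:.1f} days")
print(f"Margin needed for the irreducible risk: {p80 - most_likely_sum:.1f} days")
```

Run it and the 80th percentile lands well above the most-likely sum; that gap is the margin or management reserve the paragraph above describes.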
So Sherlock is right: don't put forth a theory without the data.
If you read my Three Alternatives to Making Smaller Stories, you noticed one thing. In each of these examples, the problem was in the teams’ ability to show progress and create interim steps. But, what about when you have a “wicked” problem, when you don’t know if you can create the answer?
If you are a project manager, you might be familiar with the idea of "wicked" problems from the book Wicked Problems, Righteous Solutions: A Catalog of Modern Engineering Paradigms. If you are a designer/architect/developer, you might be familiar with the term from Rebecca Wirfs-Brock's book, Object Design: Roles, Responsibilities, and Collaborations.
You see problems like this in new product development, in research, and in design engineering. You see it when you have to do exploratory design, where no one has ever done something like this before.
Your problem requires innovation. Maybe your problem requires discussion with your customer or your fellow designers. You need consensus on what is a proper design.
When I taught agile to a group of analog chip designers, they created landing zones, where they kept making tradeoffs to fit the timebox they had for the entire project, to make sure they made the best possible design in the time they had available.
If you have a wicked problem, you have plenty of risks. What do you do with a risky project?
Now, in return, the team solving this wicked problem owes the organization an update every week, or, at the most, every two weeks about what they are doing. That update needs to be a demo. If it’s not a demo, they need to show something. If they can’t in an agile project, I would want to know why.
Sometimes, they can’t show a demo. Why? Because they encountered a Big Hairy Problem.
Here’s an example. I suffer from vertigo due to loss of (at least) one semi-circular canal in my inner ear. My otoneurologist is one of the top guys in the world. He’s working on an implantable gyroscope. When I started seeing him four years ago, he said the device would be available in “five more years.”
Every year he said that. Finally, I couldn’t take it anymore. Two years ago, I said, “I’m a project manager. If you really want to make progress, start asking questions each week, not each year. You won’t like the fact that it will make your project look like it’s taking longer, but you’ll make more progress.” He admitted last year that he took my advice. He thinks they are down to four years and they are making more rapid progress.
I understand if a team learns that they don’t receive the answers they expect during a given week. What I want to see from a given week is some form of a deliverable: a demo, answers to a question or set of questions, or the fact that we learned something and we have generated more questions. If I, as a project manager/program manager, don’t see one of those three outcomes, I wonder if the team is running open loop.
I’m fine with any one of those three outcomes. They provide me value. We can decide what to do with any of those three outcomes. The team still has my trust. I can provide information to management, because we are still either delivering or learning. Either of those outcomes provides value. (Do you see how a demo, answers or more questions provides those outcomes? Sometimes, you even get production-quality code.)
Why do questions work? The questions work like tests. They help you see where you need to go. Because you, my readers, work in software, you can use code and tests to explore much more rapidly than my otoneurologist can. He has to develop a prototype, test in the lab and then work with animals, which makes everything take longer.
Even if you have hardware or mechanical devices or firmware, I bet you simulate first. You can ask the questions you need answers to each week. Then, you answer those questions.
Here are some projects I’ve worked on in the past like this:
The questions are like your tests. You take a scientific approach, asking yourself, “What questions do I need to answer this week?” You have a big question. You break that question down into smaller questions, one or two that you can answer (you hope) this week. You explore like crazy, using the people who can help you explore.
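As a hypothetical sketch of what "questions as tests" might look like in software - the question names, helper functions, and thresholds below are invented for illustration, not from this post - each week's question becomes an executable check that either passes, fails, or generates the next question:

```python
# Hypothetical weekly spike questions expressed as executable checks.
# parse_vendor_feed and run_atomic_query are stand-ins for the real
# exploration code; replace them with whatever you are actually probing.
import time

def parse_vendor_feed(path):
    # Stand-in for the spike code under exploration.
    return [{"id": 1}, {"id": 2}]

def run_atomic_query():
    # Stand-in for one atomic piece of the design being explored.
    time.sleep(0.01)

def test_can_we_parse_the_vendor_feed():
    """This week's question: does the feed even parse?"""
    assert len(parse_vendor_feed("sample_feed.xml")) > 0

def test_is_one_query_inside_the_200ms_budget():
    """Next question: is one atomic operation under the latency budget?"""
    start = time.perf_counter()
    run_atomic_query()
    assert time.perf_counter() - start < 0.200

if __name__ == "__main__":
    test_can_we_parse_the_vendor_feed()
    test_is_one_query_inside_the_200ms_budget()
    print("This week's questions: answered.")
```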
Exploratory design is tricky. You can make it agile, also. Don't assume that the rest of your project can wait for your big breakthrough. Use questions like your tests. Make progress every day.
I thank Rebecca Wirfs-Brock for her review of this post. Any remaining mistakes are mine.
In Twitter discussions and email exchanges, there is a notion of populist books versus technical books used to address issues and problems encountered in our project management domains. My recent book Performance-Based Project Management® is a populist book. There are principles, practices, and processes in the book that can be put to use on real projects, but very few equations and numbers. It's mostly narrative about increasing the probability of project success. But how to calculate that probability, based on other numbers, processes, and systems, is not there. That's the realm of technical books and journal papers.
The content of the book was developed with the help of editors at the American Management Association, the publisher. The Acquisition Editor contacted me about writing a book for the customers of AMA. He explained up front that AMA is in the money-making business of selling books, and that although I may have many good ideas, even ideas people might want to read about, it's an AMA book and I'd be getting lots of help developing those ideas into a book that would make money for AMA.
The distinction between a populist book and a technical book is the difference between a book that addresses a broad audience with a general approach to the topic and a deep-dive book focused on a narrow audience.
But one other distinction is that for most technical approaches, some form of calculation takes place to support the materials found in the populist material. One simple example is estimating. There are estimating articles and some books that lay out the principles of estimates. We have those in our domain in the form of guidelines and a few texts. But to calculate the Estimate To Complete in a statistically sound manner, technical knowledge and the underlying mathematics of non-linear, non-stationary, stochastic processes (Monte Carlo simulation of the project's work structure) are needed.
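As a hedged illustration of the kind of calculation a technical treatment supplies - a minimal Monte Carlo sketch of an Estimate To Complete, assuming remaining work can be modeled as independent tasks with three-point estimates (the numbers are invented, and real cost models also handle correlation and non-stationarity):

```python
# Monte Carlo Estimate To Complete from three-point task estimates.
# The task list is illustrative; independence is a simplifying assumption.
import random

# (optimistic, most likely, pessimistic) hours for each remaining task
remaining = [(8, 12, 30), (4, 6, 20), (16, 24, 60), (2, 3, 10)]

def one_trial():
    return sum(random.triangular(lo, hi, ml) for lo, ml, hi in remaining)

trials = sorted(one_trial() for _ in range(10_000))
for p in (0.50, 0.80, 0.95):
    print(f"ETC at {p:.0%} confidence: {trials[int(p * len(trials))]:.0f} hours")
```

The populist book tells you an ETC matters; the technical book tells you how to produce those percentiles defensibly.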
Two examples of populist versus technical
Two from my past, two from my current work.
These two books are about the same topic: general relativity and its description of the shape of our universe. One is a best-selling popularization of the topic, found in the home libraries of many interested in this fascinating subject. The one on the left is on my shelf from a graduate school course on general relativity, along with Misner, Thorne, and Wheeler's Gravitation.
Dense is an understatement for the math and the results in the book on the left. So if you want to calculate something about a rapidly spinning black hole, you're going to need that book. The book on the right will talk about those black holes in non-mathematical terms, but no numbers come out of that description.
The book on the left is about probabilistic processes in everyday life that we misunderstand or are biased to misunderstand. The many cognitive biases we use to convince ourselves we are making the right decisions on projects are illustrated through nice charts and graphs.
We use the book on the left in our work with non-stationary stochastic processes of complex project cost and schedule modeling. Making these decisions is critical to quantifying how technical and economic risk may affect a system's cost. This book is a treatment of how probability methods are applied to model, measure, and manage risk, schedule, and cost engineering for advanced systems. Garvey shows how to construct models, do the calculations, and make decisions with these calculations.
Here's The Point - Finally
If you come across a suggestion that decisions can be made in the absence of knowing anything about the future numbers, or without actually doing the math, put that suggestion in the class of populist descriptions of a complex topic.
If you can't calculate something, then you can't make a decision based on the evidence represented by numbers. If you can't decide based on the math, then the only way left is to decide on intuition, hunches, opinion, or some other seriously flawed non-analytical basis.
Just a reminder from Mr. Deming, stated in yesterday's post:
If it's not your money, there's likely an expectation that those providing the money are interested in the calculations needed to make those decisions.
Act accordingly.
All ideas require credible evidence to be tested; suspect ideas require that even more so. - Deep Inelastic Scattering thesis adviser, University of California, 1978
When the response to questions about the applicability of an idea is pushback - accusations that those asking the questions, in an attempt to determine the applicability and truth of the statement, are somehow afraid of that truth - it suggests there is little evidence as a test of those conjectures.
When there are proposals that ignore the principles of business, microeconomics, and control systems theory, and are based on well-known bad management practices with well-known and easy-to-apply corrective actions - there is no there, there.
So without a testable process, in a testable domain, with evidence-based assessment of applicability, outcomes, and benefits, any suggestion is opinion at best and blather at worst.
When I was in Israel a couple of weeks ago teaching workshops, one of the big problems people had was large stories. Why was this a problem? If your stories are large, you can’t show progress, and more importantly, you can’t change.
For me, the point of agile is the transparency—hey, look at what we've done!—and the ability to change. You can change the items in the backlog for the next iteration if you are working in iterations. You can change the project portfolio. You can change the features. But you can't change anything if you continue to drag on and on and on for a given feature. You're not transparent if you keep developing a feature. You become a black hole.
Managers start to ask, "What are you guys doing? When will you be done? How much will this feature cost?" Do you see where you need to estimate more if the feature is large? Of course, the larger the feature, the more you need to estimate and the more difficult it is to estimate well.
The reason Pawel and I and many other people like very small stories—size of 1—is that you deliver something every day or more often. You have transparency. You don't invest a ton of work without getting feedback on the work.
The people I met a couple of weeks ago felt (and were) stuck. One guy was doing intricate SQL queries. He thought that there was no value until the entire query was done. Nope, that’s where he is incorrect. There is value in interim results. Why? How else would you debug the query? How else would you discover if you had the database set up correctly for product performance?
I suggested that every single atomic transaction was a valuable piece, and that the way to build small stories was to separate this hairy SQL statement at the atomic transaction. I bet there are other ways, but that was a good start. He got that aha look, so I am sure he will think of other ways.
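Here is a hypothetical illustration of that slicing - the schema and query are invented, not from the workshop - where each atomic piece of the hairy query is verified on its own before the next layer goes on top:

```python
# Slicing a "hairy" query at atomic pieces so each interim result is
# checkable. Schema, data, and queries are illustrative stand-ins.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders(id INTEGER, customer TEXT, total REAL);
    INSERT INTO orders VALUES (1,'a',10.0),(2,'a',20.0),(3,'b',5.0);
""")

# Piece 1: is the per-customer aggregation right, by itself?
per_customer = db.execute(
    "SELECT customer, SUM(total) FROM orders GROUP BY customer").fetchall()
assert dict(per_customer) == {"a": 30.0, "b": 5.0}

# Piece 2: only after piece 1 is trusted, layer the filter on top of it.
big_spenders = db.execute("""
    SELECT customer FROM (
        SELECT customer, SUM(total) AS t FROM orders GROUP BY customer
    ) WHERE t > 10
""").fetchall()
assert big_spenders == [("a",)]
```

Each piece is a small story with a demonstrable, debuggable interim result.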
Another guy was doing algorithm development. Now, I know one issue with algorithm development is you have to keep testing performance or reliability or something else when you do the development. Otherwise, you fall off the deep end. You have an algorithm tuned for one aspect of the system, but not another one. The way I’ve done this in the past is to support algorithm development with a variety of tests.
This is the testing continuum from Manage It! Your Guide to Modern, Pragmatic Project Management. See the unit and component testing parts? If you do algorithm development, you need to test each piece of the algorithm—the inner loop, the next outer loop, repeat for each loop—with some sort of unit test, then component test, then as a feature. And, you can do system level testing for the algorithm itself.
Back when I tested machine vision systems, I was the system tester for an algorithm we wanted to go “faster.” I created the golden master tests and measured the performance. I gave my tests to the developers. Then, as they changed the inner loops, they created their own unit tests. (No, we were not smart enough to do test-driven development. You can be.) I helped create the component-level tests for the next-level-up tests. We could run each new potential algorithm against the golden master and see if the new algorithm was better or not.
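A minimal sketch of the golden-master idea, assuming the algorithm is a pure function over recorded inputs - the function, file name, and tolerance here are stand-ins, not the machine vision code:

```python
# Golden-master testing: record trusted outputs once, then check every
# "faster" candidate against them. algorithm() is an illustrative stand-in.
import json
import math

def algorithm(x):
    return math.sqrt(x)   # stand-in for the algorithm being tuned

def record_golden_master(inputs, path="golden.json"):
    with open(path, "w") as f:
        json.dump({"in": inputs, "out": [algorithm(x) for x in inputs]}, f)

def matches_golden_master(candidate, path="golden.json", tol=1e-9):
    with open(path) as f:
        gm = json.load(f)
    return all(abs(candidate(x) - y) <= tol for x, y in zip(gm["in"], gm["out"]))

record_golden_master([1.0, 2.0, 9.0])
assert matches_golden_master(algorithm)   # any new variant must still match
```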
I realize that you don't have a product until everything works. This is like saying in math that you don't have an answer until you have finished the entire calculation. And, you are allowed—in fact, I encourage you—to show your interim work. How else can you know if you are making progress?
Another participant said that he was special. (Each and every one of you is special. Don’t you know that by now??) He was doing firmware development. I asked if he simulated the firmware before he downloaded to the device. “Of course!” he said. “So, simulate in smaller batches,” I suggested. He got that far-off look. You know that look, the one that says, “Why didn’t I think of that?”
He didn’t think of it because it requires changes to their simulator. He’s not an idiot. Their simulator is built for an entire system, not small batches. The simulator assumes waterfall, not agile. They have some technical debt there.
Here are the three ways, in case you weren’t clear:
You want to deliver value in your projects. Short stories allow you to do this. Long stories stop your momentum. The longer your project, and the more teams (if you work on a program), the more you need to keep your stories short. Try these alternatives.
Do you have other scenarios I haven’t discussed? Ask away in the comments.
This turned into a two-parter. Read Make Stories Small When You Have “Wicked” Problems.
Do you know about the Conscious Software Development Telesummit? Michael Smith is interviewing more than 20 experts about all aspects of software development, project management, and project portfolio management. He's releasing the interviews in chunks, so you can listen and not lose work time. Isn't that smart of him?
If you haven’t signed up yet, do it now. You get access to all of the interviews, recordings, and transcripts for all the speakers. That’s the Conscious Software Development Telesummit. Because you should make conscious decisions about what to do for your software projects.
The following was originally published in Mike Cohn's monthly newsletter. If you like what you're reading, sign up to have this content delivered to your inbox weeks before it's posted on the blog, here.
I have a bit of a problem with all the hatred shown to so-called vanity metrics.
Eric Ries first defined vanity metrics in his landmark book, The Lean Startup. Ries says vanity metrics are the ones that most startups are judged by—things like page views, number of registered users, account activations, and things like that.
Ries says that vanity metrics are in contrast to actionable metrics. He defines an actionable metric as one that demonstrates clear cause and effect. If what causes a metric to go up or down is clear, the metric is actionable. All other metrics are vanity metrics.
I’m pretty much OK with all this so far. I’m big on action. I’ve written in my books and in posts here that if a metric will not lead to a different action, that metric is not worth gathering. I’ve said the same of estimates. If you won’t behave differently by knowing a number, don’t waste time getting that number.
So I’m fine with the definitions of “actionable” and “vanity” metrics. My problem is with some of the metrics that are thrown away as being merely vanity. For example, the number one hit on Google today when I searched for “vanity metrics” was an article on TechCrunch.
They admit to being guilty of using them and cite such metrics as 1 million downloads and 10 million registered users.
But are such numbers truly vanity metrics?
One chapter in Succeeding with Agile, is about metrics. In it, I wrote about creating a balanced scorecard and using both leading and lagging indicators. A lagging indicator is something you can measure after you have done something, and can be used to determine if you achieved a goal.
If your goal is improving quality, a lagging indicator could be the number of defects reported in the first 30 days after the release. That would tell you if you achieved your goal—but it comes with the drawback of not being at all available until 30 days after the release.
A leading indicator, on the other hand, is available in advance, and can tell you if you are on your way to achieving a goal.
The number of nightly tests that pass would be a leading indicator that a team is on its way to improving quality. The number of nightly tests passing, though, is a vanity metric in Ries' terms. It can be easily manipulated; the team could run the same or similar tests many times to deliberately inflate the number of tests. Therefore, the linkage between cause and effect is weak. More passing tests do not guarantee improved quality.
But is the number of passing tests really a vanity metric? Is it really useless?
To show that it’s not, consider a few other metrics you’re probably familiar with: your cholesterol value, your blood pressure, your resting pulse, even your weight. A doctor can use these values and learn something about your health. If your total cholesterol value is 160, a heart attack is probably not imminent. A value of 300, though, and it’s a good thing you’re visiting your doctor.
These are leading indicators. They don’t guarantee anything. I could have a cholesterol value of 160 and have a heart attack as soon as I walk out of the doctor’s office. The only true lagging indicator would be the number of heart attacks I’ve had in the last year. Yes, absolutely a much better metric, but not available until the end of the year.
So should we avoid all vanity metrics? No. Vanity metrics can possess meaningful information. They are often leading indicators. If a website’s goal is to sell memberships then number of memberships sold is that company’s key, actionable metric.
But number of unique new visitors—a vanity metric—can be a great leading indicator. More new visitors should lead to more memberships sold. Just like more passing tests should lead to higher quality. It’s not guaranteed, but it is indicative.
The TechCrunch article I mentioned has the right attitude. It says, “Vanity metrics aren’t completely useless, just don’t be fooled by them.” The real danger of vanity metrics is that they can be gamed. We can run tests that can’t fail. We can buy traffic to our site that we know will never translate into paid memberships, but make the traffic metrics look good.
As long as no one is doing things like that, vanity metrics can serve as good leading indicators. Just keep in mind that they don’t measure what you really care about. They merely indicate whether you’re on the right path.
I regularly get the question, “How Do You Do It?”
“How are you able to travel so much and not get sick of it?”
“How can you read 50+ books per year and also write your own?”
Gosh, I don’t know.
In a sufficiently complex project we need measures of progress beyond burning down our list of same-sized stories, which by the way require non-trivial work to make same-sized and keep same-sized. And of course if this same-sized-ness does not have a sufficiently small variance, all that effort is a waste.
But if we're not working on a sufficiently small project where same-sized work efforts can be developed, we need measures of progress related to the Effectiveness of the deliverables and the Performance of those deliverables in producing that effectiveness for the customer.
Here's a recent webinar on this topic.
Measurement News Webinar, from Glen Alleman.
And of course we need to define in what domain this approach can be applied, in what domain it is too much, and in what domain it is actually not enough.
Paradigm of Agile Project Management, from Glen Alleman.
Then the actual conversation about any approach to Increasing the Probability of Success for our work efforts can start, along with identifying the underlying root causes of any impediments to that goal that exist today and the corrective actions needed to remove them. Without knowing the root causes and corrective actions, any suggested solution has little value; it is speculative at best and nonsense at worst.
Much has been written about the Estimating Problem, the optimism bias, the planning fallacy, and other related issues with estimating in the presence of Dilbert-esque management. The notion that the solution to the estimating problem is not to estimate, but to start work, measure the performance of the work, and use that to forecast completion dates and efforts, is essentially falling into the trap Steve Martin did in L.A. Story:
using yesterday's weather because he was too lazy to make tomorrow's forecast.
By the way, each of those issues has a direct and applicable solution. So next time you hear someone use them as the basis of a new idea, ask if they have tried the known-to-work solutions to the planning fallacy, estimating bias, optimism bias, and the myriad of other project issues with known solutions.
All measuring performance to date does is measure yesterday's weather. This yesterday's weather paradigm has been well studied. If in fact your project is based on Climate, then yesterday's weather is likely a good indicator of tomorrow's weather.
The problem, of course, with the yesterday's weather approach is the same problem Steve Martin had in L.A. Story when he used a previously recorded weather forecast for the next day.
Today's weather turned out not to be like yesterday's weather.
Those posting that stories settle down to a rhythm assume - and we know what assume means - that the variances in the work efforts are settling down as well. That's where the word assume comes true: Ass out of U and Me. It's a hugely naive approach without actual confirmation that the variances are small enough not to impact the past performance. When you have statistical processes looking like this, from small sampled projects in the absence of an actual reference class - in this case a self-reference class - you're also being hugely naive about the possible behaviours of stochastic processes.
Then when you slice the work into same-sized efforts - this is actually the process used in the domains we work in: DOD, DOE, ERP - you're actually estimating future performance based on a reference class and calling it Not Estimating.
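For comparison, here is a hedged sketch of that yesterday's-weather forecast - the throughput samples and backlog size are invented - resampling past weekly throughput to project the weeks remaining. Notice it is an estimate, built from a self-reference class, and it silently assumes future variance matches the past:

```python
# "Yesterday's weather" forecast: bootstrap past throughput to project
# completion. Samples and backlog size are illustrative assumptions.
import random

weekly_throughput = [6, 4, 7, 5, 3, 6]   # observed stories per week
remaining = 40                            # stories left in the backlog

def weeks_to_finish():
    done, weeks = 0, 0
    while done < remaining:
        done += random.choice(weekly_throughput)   # reuse the past, unchanged
        weeks += 1
    return weeks

runs = sorted(weeks_to_finish() for _ in range(10_000))
print(f"50% confidence: {runs[5_000]} weeks, 85% confidence: {runs[8_500]} weeks")
```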
So when you hear examples of Bad Management - overcommitment of work, assigning a project manager to a project that is 100s of times larger than anything that PM has ever experienced and expecting success, getting a credible estimate and cutting it in half, or any other Dilbert-style management process - used to justify a new idea, notice that the idea starts by dropping the core processes needed to increase the probability of success.
This approach is itself contrary to good project management principles, which are quite simple:
Principles and Practices of Performance-Based Project Management®, from Glen Alleman.
If we start with a solution to a problem of Bad Management before assuring that the principles and practices of Good Management are in place, we'll be paving the cow path, as we say in our enterprise, space, and defense domain. This means the solution will not actually have fixed the problem. It will not have treated the root cause of the problem, just addressed the symptoms.
There is no substitute for Good Management.
And when you hear there is a smell of bad management and there is no enumeration of the root causes and the corrective actions for those root causes, remember Inigo Montoya's retort to Vizzini's statement:
You keep using that word. I do not think it means what you think it means.
That word is dysfunction, or smell, or root cause - all of which are used while missing the actual enumerated root causes, the assessment of the possible corrective actions, and the resulting removal of the symptoms.
I speak about this approach from my hands-on experience working Performance Assessment and Root Cause Analysis on programs that are in the headlines.