Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!
Software Development Blogs: Programming, Software Testing, Agile Project Management
There are many factors that cause variability in the performance of projects and releases, including complexity, the size of the work, people and process discipline. Consistency and predictability are difficult when the process is being made up on the spot. Agile has come to reflect (at least in practice) a wide range of approaches, from ad-hoc fast delivery to more structured frameworks such as Scrum, Extreme Programming and the Scaled Agile Framework (SAFe). Lack of at least some structure nearly always increases the variability in delivery and therefore the risk to the organization.
I recently received the following note from a reader (and listener to the podcast) who will remain nameless (all names redacted at the request of the reader).
"All of the development is outsourced to a company with many off-shore and a few on-site resources.
The development agency has, somehow, sold the business on the idea that because they are "Agile", their ability to dynamically/quickly react and implement requires a lack of formal "accounting." The CFO here wants to move away from vague generic invoices because he feels (rightly so) that the agency interprets the relationship as having carte blanche to work on anything and everything ever scratched out on a cocktail napkin without proper project charters, buy-in, and SOW."
This observation reflects the risk an ill-defined process poses to the organization: in the value that gets delivered to the business, in financial exposure, and in customer satisfaction. Repeatability and consistency of process are not dirty words.
Scrum and other Agile frameworks are light-weight empirical models. At their most basic level they can be summarized as:
Deming would have recognized the embedded plan-do-check-act cycle. There is nothing ad-hoc about the framework, even though it is not overly prescriptive.
I recently toured a research facility for a major snack manufacturer. The people in the labs were busy dreaming up the next big snack food. Personnel were involved in both "pure" and applied research, both highly creative endeavors. When I asked about the process they were using, what they described was something similar to Scrum: creativity being pursued within a framework to reduce risk.
Ad-hoc software development and maintenance was never in style. In today's business environment, where software is integral to the delivery of value, just winging the development process adds risk to an already somewhat risky proposition.
Risk is a reflection of possibilities: things that could happen if the stars align. Therefore projects are highly influenced by variability. There are many factors that influence variability, including complexity, process discipline, people and the size of the work. The impact of size can be felt in two separate but equally important ways. The first is the size of the overall project and the second is the size of any particular unit of work.
Overall project size influences variability by increasing the sheer number of moving parts that have to relate to each other. As an example, the assembly of an automobile is a large endeavor and is the culmination of a number of relatively large subprojects. Any significant variance in how the subprojects come together along the path of building the automobile will cause problems in the final deliverable. Large software projects require extra coordination, testing, and integration to ensure that all of the pieces fit together, deliver the functionality customers and stakeholders expect, and behave properly. All of these extra steps increase the possibility of variance.
Large pieces of work, user stories in Agile, cause the same problems noted for large projects, but at the team level. For example, when a piece of work enters a sprint, the first step in transforming that story into value is planning. Large pieces of work are more difficult to plan, if for no other reason than that they take longer to break down into tasks, increasing the likelihood that something will be missed and generate a "gotcha" later in the sprint.
Whether at a project or sprint level, smaller is generally simpler, and simpler generates less variability. There are a number of techniques for managing size.
I learned many years ago that supersizing fries at the local fast food establishment was a bad idea in that it increased the variability in my caloric intake (and waistline). Similarly, large projects are subject to increased variability. There are just too many moving parts, which leads to variability and risk. Large user stories have exactly the same issues as large projects, just on a smaller scale. Agile techniques of short sprints and small team size provide constraints so that teams can control the size of work they are considering at any point in time. Teams need to take the additional step of breaking down stories into smaller pieces to continue to minimize the potential impact of variability.
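The effect of batch size on variability can be sketched with a quick simulation. This is an illustrative sketch only: the task durations, the uniform distribution, and the eight-task "large story" are assumptions, not data from any real team.

```python
import random
import statistics

def simulate_delivery(num_tasks, trials=10000):
    """Simulated total delivery time for a story split into num_tasks tasks,
    each taking a uniformly random 1 to 3 units of effort."""
    rng = random.Random(42)  # fixed seed so the sketch is repeatable
    return [sum(rng.uniform(1, 3) for _ in range(num_tasks))
            for _ in range(trials)]

small_story = simulate_delivery(1)   # one small unit of work
large_story = simulate_delivery(8)   # one large story, eight tasks

sd_small = statistics.stdev(small_story)
sd_large = statistics.stdev(large_story)
# The spread of possible outcomes (and therefore the planning risk)
# grows with the size of the work item.
print(sd_large > sd_small)
```

The standard deviation of the large story's total is roughly the square root of eight times that of the small one, which is the statistical face of "smaller is simpler, and simpler generates less variability."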
A popular topic in the new one-day Management 3.0 workshop is the OKRs system for performance measurement. (See Google's YouTube video here.) Instead of explaining what OKRs are, I will just share with you the result of my first iteration. If you read this, you will get the general idea of how the OKRs system works.
It's hard to drive digital initiatives and business transformation if you can't create the business case. Stakeholders want to know what their investment is supposed to get them.
One of the simplest ways to think about business cases is to think in terms of stakeholders, benefits, KPIs, costs, and risks over time frames.
While thatās the basic frame, thereās a bit of art and science when it comes to building effective business cases, especially when it involves transformational change.
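As a sketch of those building blocks in code, with invented names and figures (nothing here comes from the book), the frame is just stakeholders, benefits, KPIs, costs, and risks laid out over time frames:

```python
from dataclasses import dataclass, field

@dataclass
class TimeFrame:
    label: str       # e.g. "Q1"
    costs: float     # investment in this period
    benefits: float  # expected (quantified) benefits in this period

@dataclass
class BusinessCase:
    stakeholders: list
    kpis: list   # metrics that evidence the benefits
    risks: list
    time_frames: list = field(default_factory=list)

    def net_benefit(self) -> float:
        # Sum of (benefits - costs) across every time frame.
        return sum(t.benefits - t.costs for t in self.time_frames)

case = BusinessCase(
    stakeholders=["CFO", "Head of Digital"],
    kpis=["engagement-to-sales conversion"],
    risks=["adoption lag"],
    time_frames=[TimeFrame("Q1", costs=100.0, benefits=20.0),
                 TimeFrame("Q2", costs=40.0, benefits=180.0)],
)
print(case.net_benefit())  # 60.0
```

The point of the structure is the conversation it forces: every benefit claimed must name a stakeholder, a KPI, and a time frame in which it lands.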
Lucky for us, in the book, Leading Digital: Turning Technology into Business Transformation, George Westerman, Didier Bonnet, and Andrew McAfee, share some of their lessons learned in building better business cases for digital initiatives.
What I like about their guidance is that it matches my experience.
Link Operational Changes to Tangible Business Benefits
The more you can link your roadmap to benefits that people care about and can measure, the better off you are.
Via Leading Digital:
"You need initiative-based business cases that establish a clear link from the operational changes in your roadmap to tangible business benefits. You will need to involve employees on the front lines to help validate how operational changes will contribute to strategic goals."
Work Out the Costs, the Benefits, and the Timing of Return
On a good note, the same building blocks that apply to any business case, apply to digital initiatives.
Via Leading Digital:
"The basic building blocks of a business case for digital initiatives are the same as for any business case. Your team needs to work out the costs, the benefits, and the timing of the return. But digital transformation is still uncharted territory. The cost side of the equation is easier, but benefits can be difficult to quantify, even when, intuitively, they seem crystal clear."
Start with What You Know
Building a business case is an art and a science. To avoid getting lost in analysis paralysis, start with what you know.
Via Leading Digital:
"Building a business case for digital initiatives is both an art and a science. With so many unknowns, you'll need to take a pragmatic approach to investments in light of what you know and what you don't know.
Start with what you know, where you have most of the information you need to support a robust cost-benefit analysis. A few lessons learned from our Digital Masters can be useful."
Don't Build Your Business Case as a Series of Technology Investments
If you only consider the technology part of the story, you'll miss the bigger picture. Digital initiatives involve organizational change management as well as process change. A digital initiative is really a change in terms of people, process, and technology, and adoption is a big deal.
Via Leading Digital:
"Don't build your business case as a series of technology investments. You will miss a big part of the costs. Cost the adoption efforts--digital skill building, organizational change, communication, and training--as well as the deployment of the technology. You won't realize the full benefits--or possibly any benefits--without them."
Frame the Benefits in Terms of Business Outcomes
If you donāt work backwards from the end-in-mind, you might not get there. You need clarity on the business outcomes so that you can chunk up the right path to get there, while flowing continuous value along the way.
Via Leading Digital:
"Frame the benefits in terms of the business outcomes you want to reach. These outcomes can be the achievement of goals or the fixing of problems--that is, outcomes that drive more customer value, higher revenue, or a better cost position. Then define the tangible business impact and work backward into the levers and metrics that will indicate what 'good' looks like. For instance, if one of your investments is supposed to increase digital customer engagement, your outcome might be increasing engagement-to-sales conversion. Then work back into the main metrics that drive this outcome, for example, visits, likes, inquiries, ratings, reorders, and the like.
When the business impact of an initiative is not totally clear, look at companies that have already made similar investments. Your technology vendors can also be a rich, if somewhat biased, source of business cases for some digital investments."
Run Small Pilots, Evaluate Results, and Refine Your Approach
To reduce risk, start with pilots to live and learn. This will help you make informed decisions as part of your business case development.
Via Leading Digital:
"But, whatever you do, some digital investment cases will be trickier to justify, be they investments in emerging technologies or cutting-edge practices. For example, what is the value of gamifying your brand's social communities? For these types of investment opportunities, experiment with a test-and-learn approach. State your measures of success, run small pilots, evaluate results, and refine your approach. Several useful tools and methods exist, such as hypothesis-driven experiments with control groups, or A/B testing. The successes (and failures) of small experiments can then become the benefits rationale to invest at greater scale. Whatever the method, use an analytical approach; the quality of your estimated return depends on it.
Translating your vision into strategic goals and building an actionable roadmap is the first step in focusing your investment. It will galvanize the organization into action. But if you needed to be an architect to develop your vision, you need to be a plumber to develop your roadmap. Be prepared to get your hands dirty."
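The A/B testing the authors mention can be evaluated with a standard two-proportion z-test, sketched here with only the standard library. The conversion counts are invented for illustration; a real pilot would state its measures of success before collecting them.

```python
import math

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate different
    from variant A's?  Returns (z, two-sided p-value).
    Plain-stdlib sketch, not a substitute for a stats library."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical pilot: 2400 control visitors, 2400 in the variant.
z, p = ab_significance(conv_a=120, n_a=2400, conv_b=165, n_b=2400)
print(round(p, 4))
```

A small p-value is the "benefits rationale to invest at greater scale" the quote describes; a large one is a cheap failure caught before the big spend.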
While practice makes perfect, business cases aren't about perfect. Their job is to help you get the right investment from stakeholders so you can work on the right things, at the right time, to make the right impact.
Using EBS has lots of advantages--reliability, snapshotting, resizing--but overcoming the performance problems by using Provisioned IOPS is expensive.
Swrve, an integrated marketing, A/B testing, and optimization platform for mobile apps, did something clever. They are using the c3.xlarge EC2 instances, which have two 40GB SSD devices per instance, as a cache.
They found, through testing, that RAID-0 striping using a 4-way stripe along with enhanceio effectively increased throughput by over 50%, for free, and with no filesystem corruption problems.
How is it free? "We were planning on upgrading to the C3 class of instance anyway, and sticking with EBS as the backing store. Once you’re using an instance which has SSD ephemeral storage, there are no additional fees to use that hardware."
For great analysis, lots of juicy details, graphs, and configuration commands, please take a look at How we increased our EC2 event throughput by 50%, for free.
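A rough sketch of the pattern: stripe the ephemeral SSDs, then put the stripe in front of the EBS volume as a block-level cache. The device names, mount point, cache name, and two-device stripe are all assumptions (the article used a 4-way stripe), and eio_cli invocation details vary by enhanceio version, so treat this as orientation, not commands to run verbatim.

```shell
# 1. Stripe the two ephemeral SSDs into a single RAID-0 device.
#    (/dev/xvdb and /dev/xvdc are hypothetical device names.)
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/xvdb /dev/xvdc

# 2. Put the stripe in front of the EBS volume (/dev/xvdf, also
#    hypothetical) as a write-through cache using enhanceio's eio_cli.
eio_cli create -d /dev/xvdf -s /dev/md0 -m wt -c ebs_cache

# 3. Mount the now-cached EBS-backed filesystem as usual.
mount /dev/xvdf /data
```

Write-through mode keeps EBS as the durable copy, which is why losing the ephemeral cache on instance termination costs performance, not data.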
Cesar Abeid interviewed me, Project Management for You with Johanna Rothman. We talked about my tools for project management, whether you are managing a project for yourself or managing projects for others.
We talked about how to use timeboxes in the large and small, project charters, influence, servant leadership, a whole ton of topics.
I hope you listen. Also, check out Cesar’s kickstarter campaign, Project Management for You.
We gave a recent College of Performance Management webinar on using technical progress to inform Earned Value. Here are the annotated charts.
I'm excited to announce today that Microsoft is partnering with Docker, Inc. to enable great container-based development experiences on Linux, Windows Server and Microsoft Azure.
Docker is an open platform that enables developers and administrators to build, ship, and run distributed applications. Consisting of Docker Engine, a lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows, Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments.
Earlier this year, Microsoft released support for Docker containers with Linux on Azure. This support integrates with the Azure VM agent extensibility model and Azure command-line tools, and makes it easy to deploy the latest and greatest Docker Engine in Azure VMs and then deploy Docker-based images within them.
Docker Support for Windows Server + Docker Hub Integration with Microsoft Azure
Today, I'm excited to announce that we are working with Docker, Inc. to extend our support for Docker much further. Specifically, I'm excited to announce that:
1) Microsoft and Docker are integrating the open-source Docker Engine with the next release of Windows Server. This release of Windows Server will include new container isolation technology, and support running both .NET and other application types (Node.js, Java, C++, etc) within these containers. Developers and organizations will be able to use Docker to create distributed, container-based applications for Windows Server that leverage the Docker ecosystem of users, applications and tools. It will also enable a new class of distributed applications built with Docker that use Linux and Windows Server images together.
2) We will support the Docker client natively on Windows. Developers and administrators running Windows will be able to use the same standard Docker client and interface to deploy and manage Docker based solutions with both Linux and Windows Server environments.
3) Docker for Windows Server container images will be available in the Docker Hub alongside the Docker for Linux container images available today. This will enable developers and administrators to easily share and automate application workflows using both Windows Server and Linux Docker images.
4) We will integrate Docker Hub with the Microsoft Azure Gallery and Azure Management Portal. This will make it trivially easy to deploy and run both Linux and Windows Server based Docker images in Microsoft Azure.
5) Microsoft is contributing code to Docker's Open Orchestration APIs. These APIs provide a portable way to create multi-container Docker applications that can be deployed into any datacenter or cloud provider environment. This support will allow a developer or administrator using the Docker command line client to launch either Linux or Windows Server based Docker applications directly into Microsoft Azure from his or her development machine.
Exciting Opportunities Ahead
At Microsoft we continue to be inspired by technologies that can dramatically improve how quickly teams can bring new solutions to market. The partnership we are announcing with Docker today will enable developers and administrators to use the best container tools available for both Linux and Windows Server based applications, and to run all of these solutions within Microsoft Azure. We are looking forward to seeing the great applications you build with them.
Hope this helps,
Software development is inherently complex and therefore risky. Historically, many techniques have been leveraged to identify and manage risk. As noted in Agile and Risk Management, much of the risk in projects can be laid at the feet of variability, and complexity is one of the factors that drives variability. Spikes, prioritization and fast feedback are important techniques for recognizing and reducing the impact of complexity.
Spikes, prioritization and feedback are common Agile and lean techniques. The fact that they are common has led some Agile practitioners to feel that frameworks like Scrum have negated the need to deal specifically with risks at the team level. These are powerful tools for identifying and reducing complexity and the variability complexity generates; however, they need to be combined with other tools and techniques to manage the risk that is part of every project and release.
The easiest place to see what's going on in the F# community is to follow the #fsharp hash tag on Twitter. The last 24hrs have been as busy as ever, to the point where it can be hard to keep up these days.
Here are some of the highlights:
Build Stuff conference to feature 8 F# speakers (October 13, 2014):
and workshops including:
FP Days programme is now live, and features keynotes from Don Syme & Christophe Grand, and presentations from:
New Madrid F# meetup group announced (October 14, 2014)
Mathias Brandewinder will be presenting some of his work on F# & Azure in the Bay Area (October 13, 2014)
Let's get hands-on session in Portland announced (October 14, 2014)
Riccardo Terrell will be presenting Why FP? in Washington DC (October 13, 2014)
Get Doctor Who stats with F# Data HTML Type Provider (October 13, 2014)
Major update to FSharp.Data.SqlClient (October 13, 2014)
Microsoft Research presentation on new DBpedia Type Provider (October 13, 2014)
Blogs (October 14, 2014)
Check out Sergey Tihon's F# Weekly!
In a recent post to "Who Is Ed Conrow?" a responder asked about the differences between the PMBOK® risk approach and the DoD PMBOK risk approach, as well as for a summary of the book Effective Risk Management: Some Keys to Success by Edmund Conrow. Ed worked the risk management processes for a NASA proposal I was on. I was the IMP/IMS lead, so integrating risk with the Integrated Master Plan / Integrated Master Schedule in the manner he prescribed was a life-changing experience. I was naive before, but no longer after that proposal won ~$7B for the client.
Let me start with a few positioning statements:
With all my biases out of the way, let's look at the DoD PMBOK®.
Page 124 of the DoD PMBOK® summarizes the principles of risk management as developed in two seminal sources.
Now all these pedantic references are here for a purpose. This is how people who manage risk for a living manage risk. By risk, I mean technical risk that results in loss of mission or loss of life, and programmatic risk that results in the loss of billions of taxpayer dollars. They are serious enough about risk management not to let the individual project or program manager interpret the vague notions in the PMI PMBOK®. These may appear to be harsh words, but the road to the management of enterprise-class projects is littered with disasters. Every day you can read of IT projects that are 100% over budget and 100% behind schedule. From private firms to the US Government, the trail of destruction is front-page news.
A Slight Diversion: Why Are Enterprise Projects So Risky?
There are many reasons for failure, too many to mention, but one is the inability to identify and mitigate risk. The words "identify" and "mitigate" sound simple. They are listed in the PMI PMBOK® and the DoD PMBOK®. However, here is where the problem starts:
Using Conrow as a Guide
Here is one problem. When you use the complete phrase "Project Risk Management" with Google, you get ~642,000 hits. There are so many books, academic papers, and commercial articles on Risk Management, where do we start? Ed Conrow's book is probably not the starting point for learning how to practice risk management on your project. However, it might be the ending point. If you are in the software development business, a good starting point is Managing Risk: Methods for Software Systems Development, Elaine M. Hall, Addison-Wesley, 1998. Another broader approach is Continuous Risk Management Guidebook, Software Engineering Institute, August 1996. While these two sources focus on software, they provide the foundation for the discussion of risk management as a discipline.
There are public sources as well:
However, care needs to be taken once you go outside the government boundaries. There are many voices plying the waters of "risk management," as well as other voices with "axes to grind" regarding project management methods and risk management processes. The result is often a confusing message full of anecdotes, analogies, and alternative approaches to the topic of Risk Management.
Conrow in his Full Glory
Before starting into the survey of the Conrow book, let me state a few observations:
From the introduction:
The purpose of this book is two-fold: first, to provide key lessons learned that I have documented from performing risk management on a wide variety of programs, and second, to assist you, the reader, in developing and implementing an effective risk management process on your program.
A couple of things here. One is the practical experience in risk management. Many in the risk management "talking" community have limited experience with risk management of the kind Ed has. I first met Ed on a proposal for an $8B manned spaceflight program. He was responsible for the risk strategy and for conveying that strategy in the proposal. The proposal resulted in an award, and our firm now provides Program Planning and Controls for a major subsystem of the program. In this role, programmatic and technical risk management is part of the Statement of Work flowed down from the prime contractor.
Second, Ed is a technical advisor to the US Arms Control and Disarmament Agency, as well as a consultant to industry and government on risk management. These "resume" items are meant to show that the practice of risk management is just that: a practice. Speaking about risk management and doing risk management on high-risk programs are two different things.
One of Ed's principal contributions to the discipline was the development of a micro-economic framework of risk management in which design feasibility (or technical performance) is traded against cost and schedule.
In the end, this is a reference text for the process of managing the risk of projects, written by a highly respected practitioner.
What does the Conrow Book have to offer over the Standard approach?
Ed's book contains the current "best practices" for managing technical and programmatic risk. These practices are used on high-risk, high-value programs. The guidelines in Ed's book are generally applicable to many other classes of projects as well. But there are several critical elements that differentiate this approach from the pedestrian approach to risk management.
The ordinal approach works like this: Ed describes several classes of risk scales, including maturity, sufficiency, complexity, uncertainty, estimative probability, and probability-based scales.
A maturity risk scale would be:
Basic principles observed
Concept design analyzed for performance
Breadboard or brassboard validation in relevant environment
Prototype passes performance tests
Item deployed and operational
The critical concept is to relate the risk ordinal value to an objective measure. For a maturity risk assessment, some "calibration" of what it means to have the "basic principles observed" must be developed. This approach can be applied to the other classes: sufficiency, complexity, uncertainty, estimative probability and probability-based scales.
It's the estimative probability scale that matters most to the cost and schedule people in our PP&C practice. An estimative probability scale relates a word to a probability value: "high" to 80%, for example. An ordinal estimative probability scale uses median probability values derived from a statistical analysis of survey data, mapping phrases from "almost no chance" at the bottom of the scale upward.
Calibrating these risk scales is the primary analysis task of building a risk management system. What does it mean to have a "medium" risk in the specific problem domain?
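The calibration idea can be sketched as a lookup that refuses to guess about terms the domain has not calibrated. The word-to-probability values below are illustrative assumptions for one hypothetical domain, not Conrow's published scales.

```python
# Assumed calibration for one problem domain -- illustrative values only.
ESTIMATIVE_SCALE = {
    "almost no chance": 0.05,
    "low":              0.20,
    "medium":           0.45,
    "high":             0.80,
    "almost certain":   0.95,
}

def calibrated_probability(term: str) -> float:
    """Translate an estimative term into its calibrated median probability.
    Raises on terms the domain has not calibrated, rather than guessing."""
    try:
        return ESTIMATIVE_SCALE[term.lower()]
    except KeyError:
        raise ValueError(f"'{term}' has no calibration in this domain")

print(calibrated_probability("High"))  # 0.8
```

The discipline is in the refusal: an uncalibrated word like "plausible" is an error, not a shrug, which is exactly what separates a calibrated ordinal scale from hand-waving.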
These two concepts are the ones that changed the way I perform risk management on the programs I'm involved with and how we advise our clients. They are paradigm-changing concepts. No more simple-minded arithmetic with probabilities and consequences. No more uncalibrated risk scales. No more tolerating those who claim PERT, Critical Path, and Monte Carlo are unproven, obsolete, or "wrong-headed" approaches.
Get Ed's book. It'll cost way too much when compared to the "paperback" approach to risk. But for those tasked with "managing risk," it is the starting point.
I recently took part in a conversation about compensation of employees. Some readers offered criticism on the Merit Money practice, described in my new Workout book, claiming that Merit Money is just another way to incentivize people. The feedback I received was, “Money doesn’t motivate people”, followed by, “Don’t incentivize people” and “Just pay people well”.
Let me explain why I think this advice is useless.
Just a few more weeks until the AngularJS Training Week at Xebia in Hilversum (The Netherlands): four days full of AngularJS content, from 17 to 20 October 2014. Over these days we will cover the AngularJS basics, advanced AngularJS topics, Tooling & Scaffolding, and Testing with Jasmine, Karma and Protractor.
If you already have some experience or if you are only interested in one or two of the topics, then you can sign up for just the days that are of interest to you.
Visit www.angular-training.com for a full overview of the days and topics or sign up on the Xebia Training website using the links below.
One of the questions that I and other #NoEstimates proponents hear quite often is: How can we make decisions on what projects we should do next, without considering the estimated time it takes to deliver a set of functionality?
Although this is a valid question, I know there are many alternatives to the assumptions implicit in this question. These alternatives - which I cover in this post - have the side benefit of helping us focus on the most important work to achieve our business goals.
Below I list 5 different decision-making strategies (aka decision-making models) that can be applied to our software projects without requiring a long-winded, and error-prone, estimation process up front.
What do you mean by decision-making strategy?
A decision-making strategy is a model or approach that helps you make allocation decisions (where to put more effort, or spend more time and/or money). However, I would add one more characteristic: a decision-making strategy that helps you choose which software project to start must help you achieve the business goals that you define for your business. More specifically, a decision-making strategy is an approach to making decisions that follows your existing business strategy.
Some possible goals for business strategies might be:
Other types of business goals are possible, and it is also possible to mix several goals in one business strategy.
Different decision-making strategies should be considered for different business goals. The 5 different decision-making strategies listed below include examples of business goals they could help you achieve. But before going further, we must consider one key aspect of decision making: Risk Management.
The two questions that I will consider when defining a decision-making strategy are:
All decisions have inherent risks, and we must consider risks before elaborating on the different possible decision-making strategies. If you decide to invest in a new and shiny technology for your product, how will that affect your risk profile?
A different risk profile requires different decisions
Each decision we make has an impact on the following risk dimensions:
The categorization above is not the only one possible. However, it is very practical, and it maps well to decisions regarding which projects to invest in.
There may be good reasons to accept increasing your risk exposure in one or more of these categories. This is true as long as the increased exposure does not go beyond your acceptable risk profile. For example, you may accept a larger exposure to technical risk (the risk of how) if you believe that the project has a very low risk of missing market needs (the risk of what).
An example would be migrating an existing product to a new technology: you understand the market (the product has been meeting market needs), but you take a risk with the technology with the aim of meeting some other business need.
Aligning decisions with business goals: decision-making strategies
When making decisions regarding what project or work to undertake, we must consider the implications of that work for our business or strategic goals; therefore, we must decide on the right decision-making strategy for our company at any given time.
Decision-making Strategy 1: Do the most important strategic work first
If you are starting to implement a new strategy, you should allocate enough teams and resources to the work that helps you validate and fine-tune the selected strategy. This might take the form of prioritizing work that helps you enter a new segment, find a more valuable niche in your current segment, etc. The focus in this decision-making approach is validating the new strategy. Note that the goal is not "implement the new strategy," but rather "validate the new strategy." The difference is fundamental: when trying to validate a strategy you will want to create short-term experiments that are designed to validate your decision, instead of planning and executing a large project from start to end. The best way to run your strategy-validation work is to run short-term experiments and re-prioritize your backlog of experiments based on the results of each one.
Decision-making Strategy 2: Do the highest technical risk work first
When you want to transition to a new architecture or adopt a new technology, you may want to start by doing the work that validates that technical decision. For example, if you are adopting a new technology to help you increase scalability of your platform, you can start by implementing the bottleneck functionality of your platform with the new technology. Then test if the gains in scalability are in line with your needs and/or expectations. Once you prove that the new technology fulfills your scalability needs, you should start to migrate all functionality to the new technology step by step, in order of importance. This should be done using short-term implementation cycles that you can easily validate by releasing or testing the new implementation.
Decision-making Strategy 3: Do the easiest work first
Suppose you just expanded your team and want to make sure the members get to know each other and learn to work together. This may be due to a strategic decision to start a new site in a new location. Selecting the easiest work first will give the new teams an opportunity to get to know each other and establish the processes they need to be effective, while still delivering concrete, valuable working software in a safe way.
Decision-making Strategy 4: Do the legal requirements first
In medical software there are regulations that must be met. Those regulations affect certain parts of the work/architecture. By delivering those parts first you can start the legal certification for your product before the product is fully implemented, and later, if needed, certify the changes you may still need to make to the original implementation. This allows you to significantly improve the time-to-market for your product. A medical organization that successfully adopted agile used this decision-making strategy to considerable business advantage: they were able to start selling their product many months ahead of the scheduled release. They went to market earlier because they successfully isolated and completed the work necessary to certify the key functionality of their product. Rather than trying to predict how long the whole project would take, they implemented the key legal requirements first, then started to collect feedback about the product from the market, gaining a significant advantage over their direct competitors.
Decision-making Strategy 5: Liability-driven investment model
This approach is borrowed from a stock exchange investment strategy that aims to tackle a problem similar to what every bootstrapped business faces: what work should we do now, so that we can fund the business in the near future? In this approach we make decisions with the aim of generating the cash flows needed to fund future liabilities.
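A minimal sketch of the liability-driven idea: rank candidate projects by how badly their expected cash flows fail to cover the liabilities coming due. All figures and project names are invented for illustration; no estimation of delivery dates is involved, only expected cash flows against known obligations.

```python
# Month -> cash the business must pay out (known liabilities).
liabilities = {1: 50.0, 2: 50.0, 3: 120.0}

# Candidate projects and their expected cash inflows per month.
candidates = {
    "project_a": {1: 0.0,  2: 30.0, 3: 200.0},  # pays late but big
    "project_b": {1: 60.0, 2: 60.0, 3: 60.0},   # pays early and steadily
}

def funding_shortfall(cash_in):
    """Cumulative shortfall: total amount by which the running balance
    of inflows minus liabilities dips below zero, month by month."""
    shortfall, balance = 0.0, 0.0
    for month in sorted(liabilities):
        balance += cash_in.get(month, 0.0) - liabilities[month]
        if balance < 0:
            shortfall += -balance
    return shortfall

# Pick the project that best keeps the business funded.
best = min(candidates, key=lambda name: funding_shortfall(candidates[name]))
print(best)  # project_b
```

Note that project_a has the larger total inflow, yet project_b wins: timing against liabilities, not total value, drives the choice, which is the whole point of the model.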
These are just 5 possible investment or decision-making strategies that can help you make project decisions, or even business decisions, without having to invest in estimation upfront.
None of these decision-making strategies guarantees success, but then again nothing does except hard work, perseverance and safe experiments!
In the upcoming workshops (Helsinki on Oct 23rd, Stockholm on Oct 30th) that Woody Zuill and I are hosting, we will discuss these and other decision-making strategies that you can take away and start applying immediately. We will also discuss how these decision-making models apply to day-to-day decisions as much as to strategic decisions.
If you want to know more about what we will cover in our world-premiere #NoEstimates workshops, don't hesitate to get in touch!
Your ideas about decision-making strategies that do not require estimation
You may have used other decision-making strategies that are not covered here. Please share your stories and experiences below so that we can start collecting ideas on how to make good decisions without the need to invest time and money into a wasteful process like estimation.