
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Sponsored Post: Apple, Chartbeat, Monitis, Netflix, Salesforce, Blizzard Entertainment, Cloudant, CopperEgg, Logentries, Wargaming.net, PagerDuty, Gengo, ScaleOut Software, Couchbase, MongoDB, BlueStripe, AiScaler, Aerospike, LogicMonitor, AppDynamics, Ma

Who's Hiring?

  • Apple has multiple openings. Changing the world is all in a day's work at Apple. Imagine what you could do here.
    • Mobile Services Software Engineer. The Emerging Technologies/Mobile Services team is looking for a proactive and hardworking software engineer to join our team. The team is responsible for a variety of high quality and high performing mobile services and applications for internal use. Please apply here
    • Senior Software Engineer. Join Apple's Internet Applications Team, within the Information Systems and Technology group, as a Senior Software Engineer. Be involved in challenging and fast paced projects supporting Apple's business by delivering Java based IS Systems. Please apply here.
    • Sr Software Engineer. Join Apple's Internet Applications Team, within the Information Systems and Technology group, as a Senior Software Engineer. Be involved in challenging and fast paced projects supporting Apple's business by delivering Java based IS Systems. Please apply here.
    • Senior Security Engineer. You will be the ‘tip of the spear’ and will have direct impact on the Point-of-Sale system that powers Apple Retail globally. You will contribute to implementing standards and processes across multiple groups within the organization. You will also help lead the organization through a continuous process of learning and improving secure practices. Please apply here.
    • Quality Assurance Engineer - Mobile Platforms. Apple’s Mobile Services/Emerging Technology group is looking for a highly motivated, result-oriented Quality Assurance Engineer. You will be responsible for overseeing quality engineering of mobile server and client platforms and applications in a fast-paced dynamic environment. Your job is to exceed our business customer's aggressive quality expectations and take the QA team forward on a path of continuous improvement. Please apply here.

  • Chartbeat measures and monetizes attention on the web. Our traffic numbers are growing, and so is our list of product and feature ideas. That means we need you, and all your unparalleled backend engineering knowledge, to help us scale, extend, and evolve our infrastructure to handle it all. If you have these chops: www.chartbeat.com/jobs/be, come join the team!

  • The Salesforce.com Core Application Performance team is seeking talented and experienced software engineers to focus on system reliability and performance, developing solutions for our multi-tenant, on-demand cloud computing system. Ideal candidate is an experienced Java developer, likes solving real-world performance and scalability challenges and building new monitoring and analysis solutions to make our site more reliable, scalable and responsive. Please apply here.

  • Sr. Software Engineer - Distributed Systems. Membership platform is at the heart of Netflix product, supporting functions like customer identity, personalized profiles, experimentation, and more. Are you someone who loves to dig into data structure optimization, parallel execution, smart throttling and graceful degradation, SYN and accept queue configuration, and the like? Is the availability vs consistency tradeoff in a distributed system too obvious to you? Do you have an opinion about asynchronous execution and distributed co-ordination? Come join us

  • Java Software Engineers of all levels, your time is now. Blizzard Entertainment is leveling up its Battle.net team, and we want to hear from experienced and enthusiastic engineers who want to join them on their quest to produce the most epic customer-facing site experiences possible. As a Battle.net engineer, you'll be responsible for creating new (and improving existing) applications in a high-load, high-availability environment. Please apply here.

  • Engine Programmer - C/C++. Wargaming|BigWorld is seeking Engine Programmers to join our team in Sydney, Australia. We offer a relocation package, Australian working visa & great salary + bonus. Your primary responsibility will be to work on our PC engine. Please apply here

  • Human Translation Platform Gengo Seeks Sr. DevOps Engineer. Build an infrastructure capable of handling billions of translation jobs, worked on by tens of thousands of qualified translators. If you love playing with Amazon’s AWS, understand the challenges behind release-engineering, and get a kick out of analyzing log data for performance bottlenecks, please apply here.

  • UI Engineer. AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop its user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data. AppDynamics, a leader in next-generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend components of software that manages application architectures. Apply here.
Fun and Informative Events
  • Your event here.
Cool Products and Services
  • Now track your log activities with Log Monitor and be on the safe side! Monitor any type of log file and proactively define potential issues that could hurt your business' performance. Detect your log changes for: Error messages, Server connection failures, DNS errors, Potential malicious activity, and much more. Improve your systems and behaviour with Log Monitor.

  • The NoSQL "Family Tree" from Cloudant explains the NoSQL product landscape using an infographic. The highlights: NoSQL arose from "Big Data" (before it was called "Big Data"); NoSQL is not "One Size Fits All"; Vendor-driven versus Community-driven NoSQL.  Create a free Cloudant account and start the NoSQL goodness

  • Finally, log management and analytics can be easy, accessible across your team, and provide deep insights into data that matters across the business - from development, to operations, to business analytics. Create your free Logentries account here.

  • CopperEgg. Simple, Affordable Cloud Monitoring. CopperEgg gives you instant visibility into all of your cloud-hosted servers and applications. Cloud monitoring has never been so easy: lightweight, elastic monitoring; root cause analysis; data visualization; smart alerts. Get Started Now.

  • PagerDuty helps operations and DevOps engineers resolve problems as quickly as possible. By aggregating errors from all your IT monitoring tools, and allowing easy on-call scheduling that ensures the right alerts reach the right people, PagerDuty increases uptime and reduces on-call burnout—so that you only wake up when you have to. Thousands of companies rely on PagerDuty, including Netflix, Etsy, Heroku, and GitHub.

  • Aerospike's in-memory NoSQL database is now open source. Read the news and see who scales with Aerospike. Check out the code on GitHub!

  • consistent: to be, or not to be. That’s the question. Is data in MongoDB consistent? It depends. It’s a trade-off between consistency and performance. However, does performance have to be sacrificed to maintain consistency? more.

  • Do Continuous MapReduce on Live Data? ScaleOut Software's hServer was built to let you hold your daily business data in-memory, update it as it changes, and concurrently run continuous MapReduce tasks on it to analyze it in real-time. We call this "stateful" analysis. To learn more check out hServer.

  • LogicMonitor is the cloud-based IT performance monitoring solution that enables companies to easily and cost-effectively monitor their entire IT infrastructure stack – storage, servers, networks, applications, virtualization, and websites – from the cloud. No firewall changes needed - start monitoring in only 15 minutes utilizing customized dashboards, trending graphs & alerting.

  • BlueStripe FactFinder Express is the ultimate tool for server monitoring and solving performance problems. Monitor URL response times and see if the problem is the application, a back-end call, a disk, or OS resources.

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.  http://aiscaler.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

Teams Should Go So Fast They Almost Spin Out of Control

Mike Cohn's Blog - Tue, 06/24/2014 - 15:00

Yes, I really did refer to guitarist Alvin Lee in a Certified Scrum Product Owner class last week. Here's why.

I was making a point that Scrum teams should strive to go as fast as they can without going so fast they spin out of control. Alvin Lee of the band Ten Years After was a talented guitarist known for his very fast solos. Lee's ultimate performance was of the song "I'm Going Home" at Woodstock. During the performance, Lee was frequently on the edge of flying out of control, yet he kept it all together for some of the best 11 minutes in rock history.

I want the same of a Scrum team--I want them going so fast they are just on the verge of spinning out of control yet are able to keep it together and deliver something classic and powerful.

Re-watching Ten Years After's Woodstock performance I'm struck by a couple of other lessons, which I didn't mention in class last week:

One: Scrum teams should be characterized by frequent, small hand-offs. A programmer gets eight lines of code working and yells, "Hey, Tester, check it out." The tester has been writing automated tests while waiting for those eight lines and runs the tests. Thirty minutes later the programmer has the next micro-feature coded and ready for testing. Although a good portion of the song is made up of guitar solos, they aren't typically long solos. Lee plays a solo and soon hands the song back to his bandmates, repeating for four separate solos through the song.

Two: Scrum teams should minimize work in progress. While "I'm Going Home" is a long song (clocking in at over eleven minutes), there are frequent "deliveries" of interpolated songs throughout the performance. Listen for "Blue Suede Shoes," "Whole Lotta Shaking" and others, some played for just a few seconds.

OK, I'm probably nuts, and I certainly didn't make all these points in class. But Alvin Lee would have made one great Scrum teammate. Let me know what you think in the comments below.

We're All Looking for the Simple Fix - There Isn't One

Herding Cats - Glen Alleman - Tue, 06/24/2014 - 14:39

Light Bulb

Every project domain is looking for a simple answer to complex problems. There isn't a simple answer to complex problems. There are answers, but they require hard work, understanding, skill, experience, and tenacity to address the hard problems: showing up on time, at or near the planned cost, with some acceptable probability that the products or services produced by the project will work and will actually provide the needed capabilities to fulfill the business case or mission of the project.

So It Comes Down To This

  • If we don't know what done looks like in some unit of measure meaningful to the decision makers, we'll never recognize it before we run out of time and money.
  • If we don't know what it will cost to reach done, we're over budget before we start.
  • If we don't have some probabilistic notion of when the project will be complete, we're late before we start.
  • If we don't measure progress to plan in some units of physical percent complete, we have no idea if we are actually making progress. These measures include two classes:
    • Effectiveness - is the thing we're building actually effective at solving the problem?
    • Performance - is the solution performing in a way that allows it to be effective?
  • If we don't know what impediments we'll encounter along the way to done, those impediments will encounter us. They don't go away just because we don't know about them.
  • If we don't have any idea about what resources we'll need on the project, we will soon enough, when we start to fall behind schedule or our products or services suffer from a lack of skills, experience, or capacity for work.
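That "probabilistic notion" of a completion date can be made concrete with a small Monte Carlo simulation. A minimal sketch in Python; the tasks, three-point estimates, and trial count are hypothetical, not drawn from any specific project:

```python
import random

def completion_percentiles(task_estimates, trials=10_000):
    """Monte Carlo sketch: sample each task's duration from a
    (low, likely, high) triangular distribution, sum the tasks,
    and report the 50th and 85th percentile project durations."""
    totals = sorted(
        sum(random.triangular(lo, hi, mode) for lo, mode, hi in task_estimates)
        for _ in range(trials)
    )
    return totals[len(totals) // 2], totals[int(trials * 0.85)]

# Hypothetical three-task project, durations in days: (low, likely, high).
tasks = [(3, 5, 10), (8, 12, 20), (4, 6, 9)]
p50, p85 = completion_percentiles(tasks)
```

Quoting the 85th percentile rather than a single-point date is one way to say when the project will be complete with an explicit confidence attached.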

Doing project work is about many things. But it's not just about writing code or bending metal. It's about the synergistic collaboration between all the participants. The notion that we don't need project management is one of those nonsense notions stated in the absence of a domain and context. The Product Owner in agile is the glue that pulls the development team together. But someone somewhere needs to fund that development, assure the logistics of deploying the resulting capabilities are in place, users trained, the help desk staffed and trained, and regulations complied with. The Program Manager on a mega-project in construction or defense does many of the same things.

Core information is needed as well: cost, planned deliverables, risk management, resource management, and other housekeeping functions.

Delivering on or near the planned time, at or near the planned budget, and more or less with the needed capabilities is hard work.

Related articles: Lean Startup, Innovation, #NoEstimates, and Minimal Viable Features | It Can't Be Any Clearer Than This | Top impediments to Agile adoptions that I've encountered | Managing In The Presence of Uncertainty - Redux | Risk Management for Dummies | How to Deal With Complexity In Software Projects?
Categories: Project Management

Book Tour Schedule 2014

NOOP.NL - Jurgen Appelo - Tue, 06/24/2014 - 10:21
Book Tour 2014

Last week was Sweden-week in the Management 3.0 Book Tour, with workshops in Stockholm and Gothenburg. (Check out the videos!)

This week is Germany-Week, where I am visiting Munich, Frankfurt, and Berlin.

We have a lot of other countries on the list as well. Check out the complete schedule until December. Registration will open soon! (Sorry, no other countries will be added at this time.)

The post Book Tour Schedule 2014 appeared first on NOOP.NL.

Categories: Project Management

Humans suck at statistics - how agile velocity leads managers astray

Software Development Today - Vasco Duarte - Tue, 06/24/2014 - 04:00

Humans are highly optimized for quick decision making: the so-called System 1 that Kahneman describes in his book "Thinking, Fast and Slow". One specific area of weakness for the average human is understanding statistics. A very simple exercise to illustrate this is the coin-toss simulation.


Get two people to run this experiment (or one computer and one person if you are low on humans :). One person throws a coin in the air and notes down the results. For each "heads" the person adds one to the total; for each "tails" the person subtracts one from the total. Then she graphs the total as it evolves with each throw.

The second person simulates the coin-toss by writing down "heads" or "tails" and adding/subtracting to the totals. Leave the room while the two players run their exercise and then come back after they have completed 100 throws.

Look at the graph that each person produced: can you detect which one was created by the real coin, and which was "imagined"? Test your knowledge by looking at the graph below (don't peek at the solution at the end of the post). Which of these lines was generated by a human, and which by a pseudo-random process (computer simulation)?

One common characteristic in this exercise is that the real random walk, produced by actually throwing a coin in the air, is often streakier than the one simulated by the player. For example, the coin may generate a sequence of several consecutive heads or tails throws. No human (except you, after reading this) would write that down, because it would not "feel" random. We, humans, are bad at creating randomness and at understanding the consequences of randomness. This is because we are trained to see meaning and a theory behind everything.
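The streakiness of real randomness is easy to verify without a second person. A minimal sketch; the 100-flip count matches the exercise above, everything else is arbitrary:

```python
import random

def longest_run(flips):
    """Length of the longest run of identical consecutive results."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

flips = [random.choice("HT") for _ in range(100)]
# A fair 100-flip sequence usually contains a run of 5 or more,
# which human-invented "random" sequences almost never include.
print(longest_run(flips))
```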

Take the velocity of the team. Did it go up in the latest sprint? Surely they are getting better! Or, it's the new person that joined the team, they are already having an effect! In the worst case, if the velocity goes down in one sprint, we are running around like crazy trying to solve a "problem" that prevented the team from delivering more.

The fact is that a team's velocity is affected by many variables, and its variation is not predictable. However, and this is the most important, velocity will reliably vary over time. Or, in other words, it is predictable that the velocity will vary up and down with time.

The velocity of a team will vary over time, but around a set of values that are the actual "throughput capability" of that team or project. For us as managers it is more important to understand what that throughput capability is, rather than to guess frantically at what might have caused a "dip" or a "peak" in the project's delivery rate.


When you look at a graph of a team's velocity don't ask "what made the velocity dip/peak?", ask rather: "based on this data, what is the capability of the team?". This second question will help you understand what your team is capable of delivering over a long period of time and will help you manage the scope and release date for your project.
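One way to answer "what is the capability of the team?" from the same data is to report a band rather than explain individual dips and peaks. A minimal sketch; the velocity figures are invented, and a one-standard-deviation band is just one of several reasonable choices:

```python
from statistics import mean, stdev

def capability_band(velocities):
    """Treat past sprint velocities as samples of the team's
    throughput capability and return a rough (low, high) band."""
    m, s = mean(velocities), stdev(velocities)
    return m - s, m + s

# Hypothetical velocities over eight sprints.
low, high = capability_band([21, 34, 28, 19, 31, 26, 24, 30])
# Most future sprints should land inside this band; a point far
# outside it is worth investigating, a wiggle inside it is noise.
```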

The important question for your project is not, "how can we improve velocity?" The important question is: "is the velocity of the team reliable?"

Picture credit: John Hammink, follow him on twitter

Solution to the question above: the black line is the one generated by a pseudo-random simulation in a computer. The human-generated line is more "regular", because humans expect random processes to "average out". Indeed, that's the theory, but not the reality. Humans are notoriously bad at distinguishing real randomness from what we believe is random but isn't.

As you know I've been writing about #NoEstimates regularly on this blog. But I also send more information about #NoEstimates and how I use it in practice to my list. If you want to know more about how I use #NoEstimates, sign up to my #NoEstimates list. As a bonus you will get my #NoEstimates whitepaper, where I review the background and reasons for using #NoEstimates.

Portfolio-Level Estimation

Portfolio Estimation? A life coach might help!

I recently asked a group of people the question "What are the two largest issues in project estimation?" The group were all involved in delivering value to clients as developers, testers, methodologists and consultants. The respondents' experience ran the gamut from Scrum and eXtreme Programming through the Scaled Agile Framework (SAFe) and Disciplined Agile Delivery (DAD) to waterfall. While not a scientific survey, the responses were illuminating. While I am still in the process of compiling the results and extracting themes, I thought I would share one of the first responses: all resources are not created equal. The respondent made the point that most estimating exercises, which begin at the portfolio level, don't take into account the nuances of individual experience and capacity when projects are "plucked" from a prioritized portfolio to begin work. This problem, at the portfolio level, is based on two issues. The first is making assumptions based on other assumptions, and the second is making decisions based on averages. At the portfolio level both are very hard to avoid.

Nearly all organizations practice some form of portfolio management. Portfolio management techniques can range from naïve (e.g. the squeaky-wheel method) to sophisticated (e.g. portfolio-level Kanban). In most cases the decision process as to when to release a piece of work from the portfolio requires making assumptions about the perceived project size and the organizational capabilities required to deliver the project. In order to make those assumptions, a number of other assumptions must be made (a bit of foreshadowing: assumptions based on assumptions are a potential problem). The most important assumptions made when a project is released are that the requirements and the solution are known. These assumptions will affect how large the project needs to be and the capabilities required to deliver it. Many organizations go to great lengths to solve this problem. Tactics used to address this issue include trying to gather and validate all of the requirements before starting any technical work (waterfall), running a small proof-of-concept project (prototypes), and generating rapid feedback (Agile). Other techniques include creating repositories that link skills to people or teams. And while these tools are useful for assembling teams in matrix organizations, they are rarely useful at the portfolio level because they are not forecasting tools. In all cases, the path that provides the most benefit revolves around generating information as early as possible and then reacting to that information.

The second major issue is that estimates and budgets divined at the portfolio level are a reflection of averages. In many cases, organizations use analogies to generate estimates and initial budget numbers for portfolio-level initiatives. When using analogies, an estimator (or group) will compare the project he or she is trying to estimate to completed projects to determine how alike they are. For example, if you think that a project is about 70% the size of a known project, simple arithmetic can be used to estimate the new project. Other assumptions and perceptions would be used to temper the precision. Real project performance will reflect all of the nuances that the technology, the solution and individual capabilities generate. These nuances will generate variances from the estimate. As with the knowledge issue, organizations use many techniques to manage the impact of the variances that will occur. Two popular methods include contingencies in the work breakdown structure (waterfall) and backlog re-planning (Agile). In all cases, the best outcomes reflect feedback based on the performance of real teams delivering value.
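The analogy arithmetic mentioned above is trivial, which is part of its appeal and its danger. A minimal sketch; the reference effort and the 70% similarity judgment are hypothetical:

```python
def estimate_by_analogy(reference_effort, similarity):
    """Scale a completed project's actual effort by a judged
    similarity ratio ("about 70% the size of project X")."""
    return reference_effort * similarity

# Hypothetical: the reference project took 1200 person-hours.
estimate = estimate_by_analogy(1200, 0.70)  # about 840 person-hours
```

The precision of the output is borrowed entirely from the similarity judgment, which is exactly where the averages problem described above enters.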

Estimates by definition are never right (hopefully they are close). Estimates (different from planning) are based on what the estimator knows very early in the process. What really needs to be built becomes known later in the process, after estimates and budgets are set at the portfolio level. Mature organizations recognize that as projects progress, new information is gathered which should be quickly used to refine estimates and budgets.


Categories: Process Management

What to expect at I/O’14 - Develop

Google Code Blog - Mon, 06/23/2014 - 19:20
By Reto Meier, Google Developer Advocate

Google I/O 2014 will be live in less than 48 hours. Last Friday we shared a sneak peek of content and activities around design principles and techniques. This morning we’re excited to give a glimpse into what we have in store for develop experiences.

Google I/O at its core has always been about providing you with the inspiration and resources to develop remarkable applications using Google’s platforms, tools and technologies.

This year we’ll have a full lineup of sessions from the Android, Chrome and Cloud Platform teams, highlighting what’s new as well as showcasing cross-product integrations. Here’s a sample of some of the sessions we’ll be live streaming:
  • What's new in Android || Wednesday 1-1:45PM (Room 8): Join us for a thrilling, guided tour of all the latest developments in Android technologies and APIs. We'll cover everything that's new and improved in the Android platform since…well, since the last time.
  • Making the mobile web fast, feature rich and beautiful || Thursday 10-10:45AM (Room 6): Reintroducing the mobile web! What is the mobile web good at? Why should developers build for it? And how do mobile web and native complement each other? The mobile web is often the first experience new users have with your brand and you're on the hook for delivering success to them. There's been massive investment in mobile browsers; so now we have the speed, the features, and the tools to help you make great mobile web apps.
  • Predicting the future with the Google Cloud Platform || Thursday 4-4:45PM (Room 7): Can you predict the future using Big Data? Can you divine if your users will come back to your site or where the next social conflict will arise? And most importantly, can Brazil be defeated at soccer on their own turf? In this talk, we'll go through the process of data extraction, modelling and prediction as well as generating a live dashboard to visualize the results. We'll demonstrate how you can use Google Cloud and Open Source technologies to make predictions about the biggest soccer matches in the world. You'll see how to use Google BigQuery for data analytics and Monte Carlo simulations, as well as how to create machine learning models in R and pandas. We predict that after this talk you'll have the necessary tools to cast your own eye on the future.
In addition, we’ve invited notable speakers such as Ray Kurzweil, Regina Dugan, Peter Norvig, and a panel of robotics experts, hosted by Women Techmakers, and will be hosting two Solve for X workshops. These speakers are defining the future with their groundbreaking research and technology, and want to bring you along for the ride.

Finally, we want to give you ample face-to-face time with the teams behind the products, so we are hosting informal 'Box Talks for Accessibility, Android, Android NDK / Gaming Performance, Cloud, Chrome, Dart, and Go. Swing by the Develop Sandbox to connect, discuss, learn and maybe even have an app performance review.

See you at I/O!

Reto Meier manages the Scalable Developer Advocacy team as part of Google's Developer Relations organization, and wrote Professional Android 4 Application Development.

Posted by Louis Gray, Googler
Categories: Programming

Performance at Scale: SSDs, Silver Bullets, and Serialization

This is a guest post by Aaron Sullivan, Director & Principal Engineer at Rackspace.

We all love a silver bullet. Over the last few years, the outcomes I have seen with Rackspace customers who start using SSDs fall mostly into two scenarios. The first scenario is a silver bullet: adding SSDs creates near-miraculous performance improvements. The second scenario (the most common) is typically a case of the bullet being fired at the wrong target: the results fall well short of expectations.

With the second scenario, the file system, data stores, and processes frequently become destabilized. These demoralizing results, however, usually occur when customers are trying to speed up the wrong thing.

A common phenomenon at the heart of the disappointing SSD outcomes is serialization. Despite the fact that most servers have parallel processors (e.g. multicore, multi-socket), parallel memory systems (e.g. NUMA, multi-channel memory controllers), parallel storage systems (e.g. disk striping, NAND), and multithreaded software, transactions still must happen in a certain order. For some parts of your software and system design, processing goes step by step. Step 1. Then step 2. Then step 3. That's serialization.

And just because some parts of your software or systems are inherently parallel doesn’t mean that those parts aren’t serialized behind other parts. Some systems may be capable of receiving and processing thousands of discrete requests simultaneously in one part, only to wait behind some other, serialized part. Software developers and systems architects have dealt with this in a variety of ways. Multi-tier web architecture was conceived, in part, to deal with this problem. More recently, database sharding also helps to address this problem. But making some parts of a system parallel doesn’t mean all parts are parallel. And some things, even after being explicitly enhanced (and marketed) for parallelism, still contain some elements of serialization.

How far back does this problem go? It has been with us in computing since the inception of parallel computing, going back at least as far as the 1960s(1). Over the last ten years, exceptional improvements have been made in parallel memory systems, distributed database and storage systems, multicore CPUs, GPUs, and so on. The improvements often follow after the introduction of a new innovation in hardware. So, with SSDs, we’re peering at the same basic problem through a new lens. And improvements haven’t just focused on improving the SSD, itself. Our whole conception of storage software stacks is changing, along with it. But, as you’ll see later, even if we made the whole storage stack thousands of times faster than it is today, serialization will still be a problem. We’re always finding ways to deal with the issue, but rarely can we make it go away.
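The ceiling described above, where a faster storage stack still cannot speed up the serialized parts waiting behind it, follows directly from Amdahl's law. A minimal sketch; the 60% storage fraction and the speedup factors are illustrative numbers, not measurements:

```python
def overall_speedup(accelerated_fraction, factor):
    """Amdahl's law: total speedup when only part of the work
    is accelerated and the rest stays serialized behind it."""
    serial = 1.0 - accelerated_fraction
    return 1.0 / (serial + accelerated_fraction / factor)

# If storage is 60% of request time, a 100x faster SSD stack
# yields only about 2.46x overall, and even a million-fold
# faster stack can never push the request past 2.5x.
print(overall_speedup(0.60, 100))
print(overall_speedup(0.60, 1_000_000))
```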

Parallelization and Serialization
Categories: Architecture

Quantifying the Value of Information

Herding Cats - Glen Alleman - Mon, 06/23/2014 - 15:00

From the book How To Measure Anything, there is a notion starting from the McNamara Fallacy.

The first step is to measure whatever can be easily measured. This is okay as far as it goes. The second step is to disregard that which can't easily be measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can't be measured easily isn't important. This is blindness. The fourth step is to say that what can't be easily measured really doesn't exist. This is suicide. - Charles Handy, The Empty Raincoat (1995), describing the Vietnam-era measurement policies of Secretary of Defense Robert McNamara

There are three reasons to seek information in the process of making business decisions:

  1. Information reduces uncertainty about decisions that have economic consequences.
  2. Information affects the behaviour of others, which has economic consequences.
  3. Information sometimes has its own market value.
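The first reason can itself be quantified, which is the central move in How To Measure Anything: compute what reducing the uncertainty would be worth before paying to reduce it. A minimal sketch of the perfect-information case; the actions, probabilities, and payoffs are made up for illustration:

```python
def evpi(probs, payoffs):
    """Expected value of perfect information for a pick-one decision.
    probs[i] is the probability of scenario i; payoffs[a][i] is the
    payoff of action a in scenario i."""
    # Best we can do committing to one action under uncertainty.
    best_now = max(sum(p * row[i] for i, p in enumerate(probs))
                   for row in payoffs)
    # Best we could do if we learned the scenario before choosing.
    with_info = sum(p * max(row[i] for row in payoffs)
                    for i, p in enumerate(probs))
    return with_info - best_now

# Hypothetical: two actions, two market scenarios (60% / 40%).
value = evpi([0.6, 0.4], [[100, -20], [30, 40]])  # worth up to 24
```

Any measurement that costs less than that value and actually reduces the relevant uncertainty is, by this logic, worth buying.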

When we read ...

No Estimates

... and there are no alternatives described, then it's time to realize this is an empty statement. To be successful in the software development business we need information about the cost of developing value, the duration of the work effort that produces this value for those paying for the outcomes of our efforts, and the confidence that we can produce the needed capabilities on or near the planned delivery date, at or below the planned budget. (And fixing the budget just leaves the two other variables open, so that is an empty approach as well.)

The solution to the first has been around since the 1950s: decision theory. The answer to the second is provided by measuring productivity in the presence of uncertainty about investments, through an options or analysis of alternatives (AoA) process. The notion of market information is based on Return on Investment, where the value produced in exchange for the cost to produce that value is a fundamental principle of all successful businesses.

If we can somehow separate the writing of software from the discussion of determining the cost of that effort, it may become clearer that the software development community needs to put the needs of those funding its work above its own self-interest in not wanting to estimate the cost of that work. In the end, those with the money need to know. If the development community isn't interested in providing viable, credible business processes to answer how much, when, and what, then it will be done without them, because to stay in business, a business must know the cost of its products or services.

Related articles Do It Right or Do It Twice We Can Know the Business Value of What We Build "Statistical Science and Philosophy of Science: where should they meet?"
Categories: Project Management

Don’t Overwhelm Yourself Trying to Learn Too Much

Making the Complex Simple - John Sonmez - Mon, 06/23/2014 - 15:00

It’s a great idea to educate yourself. I fully subscribe to the idea of lifetime learning–and you should too. But, in the software development field, sometimes there are so many new technologies, so many things to learn, that we can start to feel overwhelmed and like all we ever do is learn. You can start […]

The post Don’t Overwhelm Yourself Trying to Learn Too Much appeared first on Simple Programmer.

Categories: Programming

Kanban, Developer Career & Mobile UX in Methods & Tools Summer 2014 issue

From the Editor of Methods & Tools - Mon, 06/23/2014 - 14:54
Methods & Tools - the free e-magazine for software developers, testers and project managers - has just published its Summer 2014 issue, which discusses objections to Kanban implementation, how to use a model to evaluate and improve mobile user experience, balancing a software development job and a meaningful life, Scrum agile project management tools, JavaScript unit testing and static analysis for BDD. Methods & Tools Summer 2014 contains the following articles: * Kanban for Skeptics * Using a Model To Systematically Evaluate and Improve Mobile User Experience * Developer Careers Considered Harmful * TargetProcess - ...

SPaMCAST 295 - TDD, Software Sensei, Cognitive Load

http://www.spamcast.net

Listen to the Software Process and Measurement Cast 295!

SPaMCAST 295 features our essay on Test Driven Development (TDD). TDD is an approach to development in which you write a test that proves the piece of work you are working on, then write the code required to pass the test. You then refactor that code to eliminate duplication and overlap, and repeat until all of the work is completed. Philosophically, Agile practitioners see TDD as a tool either to improve requirements and design (specification) or to improve the quality of the code. This is similar to the distinction between validation (are you doing the right thing) and verification (are you doing the thing right).

We also have a new entry from the Software Sensei, Kim Pries. Kim addresses cognitive load theory.  Cognitive load theory helps explain how learning and change occur at personnel, team and organizational levels.

Next week we will feature our interview with Jeff Dalton. Jeff and I talked about making Agile resilient.  Jeff posits that the CMMI can be used to strengthen and reinforce Agile. This is an important interview for organizations that are considering scaled Agile frameworks.

Upcoming Events

Upcoming DCG Webinars:

July 24 11:30 EDT – The Impact of Cognitive Bias On Teams

Check these out at www.davidconsultinggroup.com

I will be attending Agile 2014 in Orlando, July 28 through August 1, 2014. It would be great to get together with SPaMCAST listeners; let me know if you are attending.

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1.

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI's mission is to pull together the expertise and educational efforts of the world's leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast receives some support if you sign up here. All the revenue our sponsorship generates goes to bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chematuri and myself and published by J. Ross Publishing, has received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management


Quote of the Day

Herding Cats - Glen Alleman - Sun, 06/22/2014 - 18:30

“Ignorance more frequently begets confidence than does knowledge: it is those who know little, not those who know much, who so positively assert that this or that problem will never be solved by science.” - Charles Darwin

Along with those assertions (with no evidence) that this or that will not be solved, there is the assertion that I have a solution for your complex problem that is simple and straightforward, and that usually involves NOT doing something that is being performed improperly - something I'll label as a dysfunction while ignoring the search for the root cause.

Which brings us to the next quote about simple and simple-minded solutions to complex problems.

“For every complex problem there is an answer that is clear, simple, and wrong.” - H. L. Mencken

Categories: Project Management

Parkinson’s Law and The Myth of 100% Utilization


Nature abhors a vacuum, thus a line forms.

One of the common refrains in management circles is that work expands to fill the time available. This is known as Parkinson's Law. The implication is that if work expands to fill available time, then managers should overload backlogs to ensure time is spent in the most efficient manner. Historical evidence from the UK Civil Service and experimental evidence from staffing contact centers back up that claim. Given the existence of data supporting Parkinson's Law, many IT managers and project managers strive to ensure that full utilization is planned and monitored. But the focus on planning 100% utilization in software teams is potentially counterproductive because it generates planning errors and compression.

In classic project management, some combination of estimators, project managers and team members builds a list of the tasks needed to deliver the project's requirements. These work breakdown structures are ordered based on predecessors, successors and the team's capacity. Utilization of each team member is meticulously balanced to a prescribed level (generally 100%). Once the project begins, the real world takes over and WHAM, something unanticipated crops up or a group of tasks turns out to be more difficult than anticipated. These are schedule errors. Rarely do the additions to the schedule balance with the subtractions. As soon as the plan is disrupted, something has to give. And while re-planning does occur, the usual approach is to work longer hours or to cut corners. Both cutting corners and tired team members can, and generally do, lead to increased levels of technical debt.

Over planning, also known in many circles as stretch goals, generates immediate schedule compression. In this scenario, the project is compressed through a number of techniques including adding people to the team, working more hours or days or the infamous step of cutting testing. These same techniques are leveraged in projects where planning errors overwhelm any contingency.  Schedule compression increases risk, cost, team stress and technical debt.  Compression can (and does) occur in classic and Agile projects when teams are pushed to take on more work than they can deliver given their capacity. 

In projects with high levels of transparency these decisions reflect tradeoffs grounded in business priorities. In some cases the date might be more important than quality, cost and the long-term health of the team. Making that type of decision rarely makes sense, but when it does, it must be made with knowledge of the consequences.

Agile teams have natural antidotes for Parkinson's Law: the prioritized backlog, the burn-down chart and the daily standup/Scrum meeting. On a daily basis team members discuss the work they have completed and will complete. When the sprint backlog is drawn down, the team can (with the product owner's assent) draw new stories into the sprint. The burn-down chart helps the team understand how they are consuming their capacity to complete work.

Whether you use Agile or classic project management techniques, Parkinson's Law can occur. However, the typical response of planning and insisting on 100% utilization might lead to a situation where the cure is not worth the pain delivered in the treatment. In all cases, slack must be planned to account for the oft-remarked "stuff" that happens, and teams must be both responsible and accountable for delivering value with the time at their disposal.

 


Categories: Process Management

How to "Lie" with Statistics

Herding Cats - Glen Alleman - Sat, 06/21/2014 - 20:20

The book How To Lie With Statistics, Darrell Huff, 1954, should be on the bookshelf of everyone who spends other people's money, for a very simple reason.

Everything on every project is part of an underlying statistical process. Those expecting any number associated with any project in any domain to be a single point estimate will be sorely disappointed, after reading the book, to find out that is not the case.

As well, those expecting to make decisions about how to spend other people's money will be disappointed to learn that statistical information is needed to determine the impact of a decision: the cost of the decision, the cost of the value obtained by the decision, the impact on the schedule of the work needed to produce that value, and even the statistical outcomes of the benefits produced by making the decision.
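A minimal sketch of that underlying statistical process, using hypothetical task durations: simulated even crudely, a project's duration is a distribution, not a single point:

```javascript
// Sketch: three tasks with uncertain durations (hypothetical ranges, in days).
// A single-point total hides the spread of possible outcomes.
const tasks = [
  { min: 3, max: 9 },
  { min: 5, max: 15 },
  { min: 2, max: 8 },
];

// One possible project outcome: each task drawn uniformly from its range.
const sampleDuration = () =>
  tasks.reduce((sum, t) => sum + t.min + Math.random() * (t.max - t.min), 0);

// Monte Carlo: simulate many outcomes and read off confidence levels.
const runs = 10000;
const samples = Array.from({ length: runs }, sampleDuration).sort((a, b) => a - b);

const p50 = samples[Math.floor(runs * 0.5)]; // median duration
const p80 = samples[Math.floor(runs * 0.8)]; // duration we can commit to with 80% confidence
```

Any "number" quoted for such a project is really a percentile of a distribution like this one; quoting the point without its confidence level is one of the ways to lie.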

One prime example of How To Lie (although likely not a Lie, but just a poor application of statistical processes) is Todd Little's "Schedule Estimation and Uncertainty Surrounding the Cone of Uncertainty." In this paper the following figure is illustrative of the How to Lie paradigm.

(Figure from the paper: initial estimates vs. actual durations for the sampled projects.)

This figure shows 106 sampled projects, their actual completion and their ideal completion. First let's start with another example of Bad Statistics: the Standish Report, often referenced when trying to sell the idea that software projects are always in trouble. Here's a summary of posts about the Standish Report, which speaks to a few Lies in the How to Lie paradigm.

  • The samples are self-selected, so we don't get to see the correlation between the sampled projects and the larger population of projects at the firms.
    • Those returning the survey for Standish - whether stating they had problems or not - can't be compared to those not returning the survey, and can't be compared to the larger population of IT projects that was not sampled.
    • This is a Huff example - limit the sample space to those examples that support your hypothesis.
  • The credibility of the original estimate is not stated or even mentioned.
    • Another good Huff example - no way to test what the root cause of the trouble was, so no way to tell the statistical inference from the suggested solution to the possible corrected outcome.
  • The Root Cause of the over budget, over schedule, and less-than-promised delivery of features is not investigated, nor are any corrective actions suggested, other than hire Standish.
    • Maybe the developers at these firms are not very good at their jobs, and can't stay on cost and schedule.
    • Maybe the sampled projects were much harder than first estimated, and the initial estimate was not updated - with a new estimate to complete - when this was discovered.
    • Maybe management forced the estimate onto the development team, so the project was doomed from day one.
    • Maybe those making the estimate had no estimating process, skills, or experience in the domain they were asked to estimate for.
    • Maybe a few dozen other Root Causes were in place to create the Standish charts, but these were not separated from the statistical samples to seek the underlying data.

So let's look at Mr. Little's chart.

There is likely good data at his firm, Landmark Graphics, for assessing the root cause of the projects finishing above the line in the chart. But the core issue is that the line is not calibrated. It represents the ideal data. That is, using the original estimate, what did the project do? As stated on page 49 of the paper:

For the Landmark data, the x-axis shows the initial estimate of project duration, and the y-axis shows the actual duration that the projects required.

There is no assessment of the credibility of the initial estimate for the project. This initial estimate might accurately represent the projected time and cost, with a confidence interval. Or this initial estimate could be completely bogus: a guess, made up by uninformed estimators, or worse yet, an estimate that was cooked in all the ways possible, from bad management to bad math.

So if our baseline for making comparisons is bogus from the start, it's going to be hard to draw any conclusions from the actual data on the projects. Both initial estimates and actual measurements must be statistically sound if any credible decisions are to be made about the Root Cause of the overage and any possible Corrective Actions that can be taken to prevent these unfavorable outcomes.

This is classic How To Lie - present a bogus scale or baseline, then show some data that supports the conjecture that something is wrong.

In the case of the #NoEstimates approach, that conjecture starts with the Twitter clip below, which can be interpreted as saying we can make decisions without having to estimate the independent and dependent variables that go into those decisions.

No Estimates

So if estimates are the smell of dysfunction, as the popular statement goes, what is the dysfunction? Let me count the ways:

  • The estimates in many software development domains are bogus to start. That'll cause management to be unhappy with the results and lower the trust in those making the estimates, which in turn creates distrust between those providing the money and those spending the money - a dysfunction.
  • The management in these domains doesn't understand the underlying statistical nature of software development and has an unfounded desire for facts about the cost, duration, and probability of delivering the proper outcomes in the absence of the statistical processes driving them. That'll cause the project to be in trouble from day one.
  • The insistence that estimating is somehow the source of these dysfunctions, and that the corrective action is to Not Estimate, is a false trade-off - in the same way as the Standish Report saying "look at all these bad IT projects, hire us to help you fix them." This will cause the project to fail from day one as well, since those paying for the project have little or no understanding of what they are going to get in the end for an estimated cost, if there is one.

So next time you hear estimates are the smell of dysfunction, or we can make decisions without estimating:

  • Ask if there is evidence of the root cause of the problem.
  • Ask to read - in simple bullet-point examples - some of these alternatives, so you can test them in your domain.
  • Ask in what domain not estimating would be applicable. There are likely some. I know of some. Let's hear some others.
  • Ask how Not Estimating is the corrective action for the dysfunction.
Related articles Averages Without Variances are Meaningless - Or Worse Misleading Statistics, Bad Statistics, and Damn Lies How To Estimate Almost Any Software Deliverable Let's Stop Guessing and Learn How to Estimate Probabilistic Cost and Schedule Processes How to lie with statistics: the case of female hurricanes. How to Fib With Statistics To explain or predict?
Categories: Project Management

GTAC 2014: Call for Proposals & Attendance

Google Testing Blog - Sat, 06/21/2014 - 16:26
Posted by Anthony Vallone on behalf of the GTAC Committee

The application process is now open for presentation proposals and attendance for GTAC (Google Test Automation Conference) (see initial announcement), to be held at the Google Kirkland office (near Seattle, WA) on October 28-29, 2014.

GTAC will be streamed live on YouTube again this year, so even if you can’t attend, you’ll be able to watch the conference from your computer.

Speakers
Presentations are targeted at students, academics, and experienced engineers working on test automation. Full presentations and lightning talks are 45 minutes and 15 minutes respectively. Speakers should be prepared for a question-and-answer session following their presentation.

Application
For presentation proposals and/or attendance, complete this form. We will be selecting about 300 applicants for the event.

Deadline
The due date for both presentation and attendance applications is July 28, 2014.

Fees
There are no registration fees, and we will send out detailed registration instructions to each invited applicant. Meals will be provided, but speakers and attendees must arrange and pay for their own travel and accommodations.

Update : Our contact email was bouncing - this is now fixed.



Categories: Testing & QA

How to verify Web Service State in a Protractor Test

Xebia Blog - Sat, 06/21/2014 - 08:24

Sometimes it can be useful to verify the state of a web service in an end-to-end test. In my case, I was testing a web application that was using a third-party Javascript plugin that logged page views to a Rest service. I wanted to have some tests to verify that all our web pages did include the plugin, and that it was communicating with the Rest service properly when a new page was opened.
Because the webpages were written with AngularJS, Protractor was our framework of choice for our end-to-end test suite. But how to verify web service state in Protractor?

My first draft of a Protractor test looked like this:

var businessMonitoring = require('../util/businessMonitoring.js');
var wizard = require('./../pageobjects/wizard.js');

describe('Business Monitoring', function() {
  it('should log the page name of every page view in the wizard', function() {
    wizard.open();
    expect(wizard.activeStepNumber.getText()).toBe('1');

    // We opened the first page of the wizard and we expect it to have been logged
    expect(businessMonitoring.getMonitoredPageName()).toBe('/wizard/introduction');

    wizard.nextButton.click();
    expect(wizard.completeStep.getAttribute('class')).toContain('active');
    // We have clicked the 'next' button so the 'completed' page has opened;
    // this should have been logged as well
    expect(businessMonitoring.getMonitoredPageName()).toBe('/wizard/completed');
  });
});

The next thing I had to write was the businessMonitoring.js script, which should somehow make contact with the Rest service to verify that the correct page name was logged.
First I needed a simple library to make HTTP requests. I found the 'request' npm package, which provides a simple API to make an HTTP request like this:

var request = require('request');

var executeRequest = function(method, url) {
  var defer = protractor.promise.defer();
  
  // method can be 'GET', 'POST' or 'PUT'
  request({uri: url, method: method, json: true}, function(error, response, body) {

    if (error || response.statusCode >= 400) {
      defer.reject({
        error : error,
        message : response
      });
    } else {
      defer.fulfill(body);
    }
  });

  // Return a promise so the caller can wait on it for the request to complete
  return defer.promise;
};

Then I completed the businessmonitoring.js script with a method that gets the last request from the Rest service, using the request plugin.
It looked like this:

var businessMonitoring = exports; 

< .. The request wrapper with the executeRequest method is included here; left out for brevity .. >

businessMonitoring.getMonitoredPageName = function () {

    var defer = protractor.promise.defer();

    executeRequest('GET', 'lastRequest')  // Calls the method which was defined above
      .then(function success(data) {
        defer.fulfill(data.url);
      }, function error(e) {
        defer.reject('Error when calling BusinessMonitoring web service: ' + e);
      });

    return defer.promise;
 };

It just fires a GET request to the Rest service to see which page was logged. It is an Ajax call, so the result is not immediately available; a promise is returned instead.
But when I plugged the script into my Protractor test, it didn't work.
I could see that the requests to the Rest service were done, but they were done immediately before any of my end-to-end tests were executed.
How come?

The reason is that Protractor uses the WebdriverJS framework to handle its control flow. Statements like expect(), which we use in our Protractor tests, don't execute their assertions immediately, but instead they put their assertions on a queue. WebdriverJS first fills the queue with all assertions and other statements from the test, and then it executes the commands on the queue. Click here for a more extensive explanation of the WebdriverJs control flow.

That means that all statements in Protractor tests need to return promises, otherwise they will execute immediately when Protractor is only building its test queue. And that's what happened with my first implementation of the businessMonitoring mock.
The solution is to let the getMonitoredPageName return its promise within another promise, like this:

var businessMonitoring = exports; 

businessMonitoring.getMonitoredPageName = function () {
  // Return a promise that will execute the rest call,
  // so that the call is only done when the controlflow queue is executed.
  var deferredExecutor = protractor.promise.defer();

  deferredExecutor.then(function() {
    var defer = protractor.promise.defer();

    executeRequest('GET', 'lastRequest')
      .then(function success(data) {
        defer.fulfill(data.url);
      }, function error(e) {
        defer.reject('Error when calling BusinessMonitoring mock: ' + e);
      });

    return defer.promise;
  });

  return deferredExecutor;
};

Protractor takes care of resolving all the promises, so the code in my Protractor test did not have to be changed.

Why Size As Part of Estimation?

Trail length is an estimate of size, while the time needed to hike it is another story!

More than occasionally I am asked, "Why should we size as part of estimation?" In many cases the actual question is, "Why can't we just estimate hours?" It is a good idea to size for many reasons, such as generating an estimate in a quantitative, repeatable process, but in the long run, sizing is all about the conversation it generates.

It is well established that size provides a major contribution to the cost of an engineering project.  In houses, bridges, planes, trains and automobiles the use of size as part of estimating cost and effort is a mature behavior. The common belief is that size can and does play a similar role in software. Estimation based on size (also known as parametric estimation) can be expressed as a function of size, complexity and capabilities.

E = f(size, complexity, capabilities)

In a parametric estimate these three factors are used to develop a set of equations that include a productivity rate, which is used to translate size into effort.
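A minimal sketch of that relationship (the productivity rate and the adjustment factors below are hypothetical placeholders, not calibrated values):

```javascript
// Parametric estimation sketch: effort as a function of size, complexity
// and capabilities. All rates and factors here are hypothetical.
function estimateEffortHours(sizeInFunctionPoints, complexityFactor, capabilityFactor) {
  const hoursPerFunctionPoint = 8; // hypothetical productivity rate
  // Complexity above 1 inflates effort; capability above 1 deflates it.
  return (sizeInFunctionPoints * hoursPerFunctionPoint * complexityFactor) / capabilityFactor;
}

// 200 function points, 20% above-average complexity, average team.
const effort = estimateEffortHours(200, 1.2, 1.0); // 1920 hours

// Because size is held separate, a change in capability is a re-scaling,
// not a re-estimate: the same 200 points with a stronger team.
const rescaled = estimateEffortHours(200, 1.2, 1.2); // 1600 hours
```

Holding size as its own factor is what makes the re-scaling at the end a multiplication rather than a fresh estimate, a point the essay returns to below.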

Size is a measure of the functionality that will be delivered by the project. The bar for any project-level size measure is whether it can be known early in the project, whether it is predictive and whether the team can apply the metric consistently. A popular physical measure is lines of code; function points are the most popular functional measure; and story points are the most common relative measure of size.

Complexity refers to the technical complexity of the work being done and includes numerous properties of a project (examples of complexity could include code structure, math and logic structure).  Business problems with increased complexity generally require increased levels of effort to satisfy them.

Capabilities include the dimensions of skills, experience, processes, team structure and tools (estimation tools include a much broader list).  Variation in each capability influences the level of effort the project will require.

Parametric estimation is a top-down approach to generating a project estimate. Planning exercises are then used to convert the effort estimate into a schedule and duration. Planning is generally a bottom-up process driven by the identification of tasks, order of execution and specific staffing assignments. Bottom-up planning can be fairly accurate and precise over short time horizons. Top-down estimation is generally easier than bottom-up estimation early in a project, while task-based planning makes sense in tactical, short-term scenarios. Examples of estimation and planning in an Agile project include iteration/sprint planning, which includes planning poker (sizing) and task planning (a bottom-up plan). A detailed schedule built from tasks in a waterfall project would be an example of a bottom-up plan. As most of us know, plans become less accurate as we push them further into the future, even if they are done to the same level of precision. Size-based estimation provides a mechanism to predict the rough course of the project before release planning can be performed, and then, later, a tool to support and triangulate release planning.

The act of building a logical case for a function point count or participating in a planning poker session helps those that are doing an estimate to collect, organize and investigate the information that is known about a need or requirement.  As the data is collected, questions can be asked and conversations had which enrich understanding and knowledge.  The process of developing the understanding needed to estimate size provides a wide range of benefits ranging from simply a better understanding of requirements to a crisper understanding of risks.

A second reason for estimating size as a separate step in the process is that separating it out allows a discussion of velocity or productivity as a separate entity.  By fixing one part of the size, the complexity and capability equation, we gain greater focus on the other parts like team capabilities, processes, risks or changes that will affect velocity.  Greater focus leads to greater understanding, which leads to a better estimate.

A third reason for estimating the size of the software project as part of the overall estimation process is that by isolating the size of the work, the estimate can more easily be re-scaled when capabilities change or knowledge about the project increases. In most projects that exist for more than a few months, understanding of the business problem, of how to solve that problem, and of the capabilities of the team increases, while at the same time the perceived complexity[1] of the solution decreases. If a team has jumped from requirements or stories directly to an effort estimate, it will require more effort to re-estimate the remaining work, because they will not be able to reuse the previous estimate: the original rationale will have changed. When you have captured size, re-estimation becomes a re-scaling exercise. Re-scaling is much closer to a math exercise (productivity x size), which saves time and energy. At best, re-estimation is more time consuming and yields the same value. The ability to re-scale will aid in sprint planning and in release planning. Why waste time when we should be focusing on delivering value?

Finally, why size? In the words of David Herron, author and Vice President of Solution Services at the David Consulting Group, “Sizing is all about the conversation that it generates.” Conversations create a crisper, deeper understanding of the requirements and the steps needed to satisfy the business need. Determining the size of the project is a tool with which to focus a discussion on whether the requirements are understood. If a requirement can’t be sized, you can’t know enough to actually fulfill it. Planning poker is an example of a sizing conversation. I am always amazed at the richness of the information that is exposed during a group planning poker session (please remember to take notes). The conversation provides many of the nuances a story or requirement just can’t provide.

Estimates, by definition, are wrong. The question is just how wrong. The search for knowledge generated by the conversations needed to size a project provides the best platform for starting a project well. That same knowledge provides the additional inputs needed to complete the size, complexity, capability equation in order to yield a project estimate. If you are asked, “Why size?” it might be tempting to fire off the answer “Why not?” but in the end, I think you will change more minds by suggesting that it is all about the conversation, after you have made the more quantitative arguments.

Check out an audio version of this essay as part of SPaMCAST 201.

[1] Perceived complexity is more important than actual complexity as what is perceived more directly drives behavior than actual complexity.


Categories: Process Management

Why We Must Learn to Estimate

Herding Cats - Glen Alleman - Fri, 06/20/2014 - 22:18


The notion of making any decision without knowing something about its cost, its schedule impacts, or the resulting impacts on delivered capabilities is like the guys here in the picture. They started building their bridge. They will run out of materials, can't see the destination, and likely have the wrong tools.

The continued insistence that we can make decisions in the absence of estimates needs to be tested in the marketplace by those providing the money, not by those consuming the money.

No Estimates

While cost can be fixed through a budget process, this still leaves the schedule and the delivered capabilities as two random variables that need to be estimated if we are to provide those funding our work with credible confidence that we'll show up on time with the needed capabilities.
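One common way to estimate a random variable like schedule is Monte Carlo simulation. The sketch below is a minimal, hypothetical example (the task durations are invented, not from the post): each task gets a three-point duration range, totals are sampled many times, and a confidence level is read off the resulting distribution.

```python
import random

# Invented tasks: (optimistic, most likely, pessimistic) durations in days.
tasks = [
    (5, 8, 15),
    (3, 5, 10),
    (8, 12, 20),
]

def simulate_total():
    # Sample each task's duration from a triangular distribution and sum.
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)

totals = sorted(simulate_total() for _ in range(10_000))
p80 = totals[int(0.8 * len(totals))]  # 80th-percentile completion time
print(f"80% confidence the work finishes within {p80:.1f} days")
```

The output is a probabilistic statement ("80% confidence of finishing within N days") rather than a single-point guess, which is the kind of credible confidence funders can act on.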


Related articles:

  • How To Fix Martin Fowler's Estimating Problem in 3 Easy Steps
  • Why Johnny Can't Estimate or the Dystopia of Poor Estimating Practices
  • Making Estimates For Your Project Require Discipline, Skill, and Experience
  • First Comes Theory, Then Comes Practice
Categories: Project Management