Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Feed aggregator

Software Project Estimation: Historical Data

Historical data doesn’t come from historical ruins.

Historical data is needed for any form of consistent estimation.  The problem with historical data is that gathering it requires effort, time or money.  The need to expend resources to generate, collect or purchase historical data is often used as an excuse to avoid collecting the data at all, and as a tool to avoid using parametric or historical estimating techniques.

Historical data can be as simple as a Scrum team collecting its velocity or productivity every sprint and using it to calculate an average for planning and estimating, or as complex as the data set collected by teams using parametric estimation, which includes a more robust palette of data covering project effort, size, duration, team capabilities and project context. In both cases the data collected needs to match the method you are using and the level of granularity at which you are going to estimate or plan.  For instance, if you are estimating at the project level you need data at the project level. If you are estimating at a task level you need to collect historical data at the task level.
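
As a minimal sketch (in Python, with hypothetical sprint numbers), the simple end of that spectrum, averaging a team's recorded velocity and using the spread of past sprints to plan a range, might look like this:

# Minimal sketch: turn a Scrum team's recorded velocity history into a planning
# average and a rough range. The sprint numbers are hypothetical.
velocities = [21, 18, 24, 19, 22, 20]    # story points completed in past sprints

average = sum(velocities) / len(velocities)
low, high = min(velocities), max(velocities)

remaining_backlog = 180                  # story points still to be delivered
print(f"Average velocity: {average:.1f} points per sprint")
print(f"Sprints to finish: between {remaining_backlog / high:.1f} and {remaining_backlog / low:.1f}")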

Here is my recommended palette of historical data for estimating at the project level (a sketch of a record structure covering these items follows the list):

Original Estimate (effort, duration, staffing)

Actual Outcome (effort, duration, staffing)

Cost (estimated and actual) – Cost data can be broken down based on the source.  Examples of further levels of granularity include hardware costs, software purchase or license costs and contractor versus internal personnel costs.

Capabilities (predicted and actual) – Capabilities describe the level of competency of the team.  Examples of capabilities include team skill set, experience level, roles and control structures.

Size (predicted and actual) – Size is a measure of the end product delivered by the project.  In a software project, size is a measure of the functionality that will be delivered (IFPUG Function Points is an example of a measure of software functionality).

Context – Context is the story of the project, including anything out of the norm that happened. For example, knowing that half the project team was temporarily reassigned during the project may be important when analyzing the data.

Project Demographics (who was the customer, what product(s) were affected, what methods were used, what was the primary technology, were any of the technologies new to the team, what were the primary languages, were any of the languages new to the team)
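
As a minimal sketch (in Python, with hypothetical field names and values), one project-level historical record covering this palette might look like this:

# Minimal sketch: one project-level historical record covering the palette above.
# Field names and example values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ProjectHistory:
    original_estimate: dict   # e.g. {"effort_hours": 4000, "duration_months": 6, "staff": 5}
    actual_outcome: dict      # same shape as original_estimate
    cost_estimated: float
    cost_actual: float
    capabilities: dict        # e.g. {"skill_set": "average", "experience_years": 4}
    size_predicted: int       # e.g. IFPUG Function Points
    size_actual: int
    context: str              # the story of the project, anything out of the norm
    demographics: dict = field(default_factory=dict)  # customer, methods, technology, languages

record = ProjectHistory(
    original_estimate={"effort_hours": 4000, "duration_months": 6, "staff": 5},
    actual_outcome={"effort_hours": 4600, "duration_months": 7, "staff": 5},
    cost_estimated=350_000, cost_actual=410_000,
    capabilities={"skill_set": "average", "experience_years": 4},
    size_predicted=400, size_actual=450,
    context="Half the team was temporarily reassigned mid-project.",
    demographics={"method": "Scrum", "primary_language": "Java", "new_technology": False},
)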

If we need to estimate (not plan) at a phase, release or sprint level, then the data would need to be collected at that level.

Historical data is a requirement for effective budgeting and estimation.  The best data is data from your own organization’s projects.  This means that you have to define the information you want, collect the data and analyze the data.  The collection of data also implies that someone needs to record the data as it happens (time accounting and project-level accounting).  Only collect the information you need and only at the level you are going to use.  Remember, data collection for each measure or additional level of information will require more effort from those analyzing the data, those collecting the data and, perhaps more importantly, those who have to record the data.  Balance the level of measurement overhead with the benefit you can extract in the near term.

Collecting data that you might need or that will pay off in a few years will usually end up costing more than it returns and may well disenchant the people you are asking to collect and record the data.  When they become disenchanted, your data quality will suffer (or the data may stop being reported altogether).  When beginning an estimation program, immediately start collecting your own data, BUT also consider reaching out to external sources of data to jump-start the program so that you can begin estimating while you collect your own data.


Categories: Process Management

Stuff The Internet Says On Scalability For January 24th, 2014

Hey, it's HighScalability time:


Gorgeous image from Scientific American's Your Brain by the Numbers
  • Quotable Quotes: 
    • @jezhumble: Google does everything off trunk despite 10k devs across 40 offices. 
    • @KentLangley: "in 2016. When it goes online, the SKA is expected to produce 700 terabytes of data each day" 
    • Jonathan Marks: It's actually a talk about how NOT to be creative. And what he [John Cleese] describes is the way most international broadcasters operated for most of their existence. They were content factories, slaves to an artificial transmission schedule. Because they didn't take time to be creative, they ended up sounding like a tape machine. They were run by a computer algorithm. Not a human soul. There was never room for a creative pause. Routine was the solution. And that's creativity's biggest enemy.

  • 40% better single-threaded performance in MariaDB. Using perf, cache misses were found and the fix was using the right gcc flags. But the big hairy key idea is: on modern high-performance CPUs, it is necessary to do detailed measurements using the built-in performance counters in order to get any kind of understanding of how an application performs and what the bottlenecks are. Forget about looking at the code and counting instructions or cycles as we did in the old days. It no longer works, not even to within an order of magnitude.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge...

Categories: Architecture

You're Invited to the Texas Association of Enterprise Architects January Meeting

Mike Walker's Blog - Fri, 01/24/2014 - 15:08

 Texas AEA

The Texas Association of Enterprise Architects (AEA) has launched!

From the President of the Texas AEA Chapter,

I am very excited to establish this new chapter of the Global Association of Enterprise Architects. It brings with it great prestige and a rich network of over 35,000 professional architects world-wide. We live in a vibrant community of Texas professionals, vendors and consortiums that offers thought leadership, visions for the future, and proven practices based on reality of today. The Texas AEA chapter has the following objectives:

  • create a model for the profession
  • exchange knowledge
  • discuss new and emerging trends
  • establish working groups

We will keep you up to date on the forthcoming website, which will be up shortly, where you can find all the latest happenings.

Best Regards,

Mike Walker

Events

Below are some upcoming events where you can learn more about the organization and network with your peers. We look forward to seeing you at one of our upcoming events.

 

January 30 – 1st Monthly Meeting

 

Join us for our very first monthly meeting. The meeting will take place on Thursday, January 30, 5:30 – 7:30 at Iron Cactus North Austin on Stonelake Blvd, Austin, TX.  (see new venue below).

This meeting will be a great opportunity for you to get plugged into the local architecture community along with an introduction to the Texas AEA Chapter.

To join RSVP here.

 

[Update! New Venue Selected]

I have an important announcement for all registered attendees: our Texas AEA meeting has been so popular that we have exceeded the capacity of our original meeting location! Thank you for your interest in the Texas AEA. Given this situation we needed a larger venue that could support us, and we have found a perfect one.

The new location information can be found below:

Able’s North Austin

4001 West Parmer Lane

Austin, Texas 78727

(512) 835-0010

Note: Able’s is behind other buildings and you should look for It’s a Grind coffee shop and turn into that small shopping plaza to find Able’s.

 

February 23 – Monthly Meeting

Monthly Meeting: Architecture topics and networking social. Details coming soon.

 

March 20 – Texas AEA Conference

We are pleased to announce that the Texas Architecture Conference will be held in Austin, Texas.

 

A thank you to our sponsors!

 Texas AEA Sponsors

Categories: Architecture

Neo4j HA: Election could not pick a winner

Mark Needham - Fri, 01/24/2014 - 11:30

Recently I’ve been spending a reasonable chunk of my time helping people get up and running with their Neo4j High Availability cluster and there’s sometimes confusion around how it should be configured.

A Neo4j cluster typically consists of a master and two slaves and you’d usually have it configured so that any machine can be the master.

However, there is a configuration parameter ‘ha.slave_only’ which can be set to ‘true’ to ensure that a machine will never be elected as master when an election takes place.

We might configure a machine with that setting if it's acting as a reporting instance, but we need to make sure that two of the three members don't have that setting, otherwise we won't have any failover in the cluster.
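
For example, a reporting instance's conf/neo4j.properties might contain something like the following (a sketch of just the relevant setting, not a complete HA configuration):

# this instance will never be elected master; suitable for a reporting instance
ha.slave_only=true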

For example, if we set two of the machines in the cluster to be slave only and then stop the master we’ll see output similar to the following in data/graph.db/messages.log:

2014-01-23 11:17:24.510+0000 INFO  [o.n.c.p.a.m.MultiPaxosContext$ElectionContextImpl]: Doing elections for role coordinator
2014-01-23 11:17:24.510+0000 DEBUG [o.n.c.p.e.ElectionState$2]: ElectionState: election-[performRoleElections]->election from:cluster://10.239.8.251:5001 conversation-id:3/13#
2014-01-23 11:17:24.513+0000 DEBUG [o.n.c.p.e.ElectionState$2]: ElectionState: election-[vote:coordinator]->election from:cluster://10.151.24.237:5001 conversation-id:3/13#
2014-01-23 11:17:24.515+0000 DEBUG [o.n.c.p.e.ElectionState$2]: ElectionState: election-[voted]->election from:cluster://10.138.29.197:5001 conversation-id:3/13#
2014-01-23 11:17:24.516+0000 DEBUG [o.n.c.p.e.ElectionState$2]: ElectionState: election-[voted]->election from:cluster://10.151.24.237:5001 conversation-id:3/13#
2014-01-23 11:17:24.519+0000 DEBUG [o.n.c.p.a.m.MultiPaxosContext$ElectionContextImpl$2]: Elections ended up with list []
2014-01-23 11:17:24.519+0000 WARN  [o.n.c.p.e.ElectionState]: Election could not pick a winner

This message initially looks confusing but what it’s telling us is that the cluster was unable to elect a new master, in this case because there were no machines that could be elected as master.

So if you see that message in your logs, check your config to make sure that there are actually machines to choose from.

Categories: Programming

Blog Post #800

NOOP.NL - Jurgen Appelo - Fri, 01/24/2014 - 11:09
Blog Post 800

Sometimes you don’t need statistics.
Sometimes you don’t need retrospectives.
Sometimes you don’t need superlatives.

The post Blog Post #800 appeared first on NOOP.NL.

Categories: Project Management

Day 6 of 7 Days of Agile Results – Friday (Friday Reflection)

Your Outcome: Identify 3 things going well and 3 things to improve based on your results for this week.

Welcome to Day 6 of 7 Days of Agile Results.  Agile Results is the productivity system introduced in my best-selling time management book, Getting Results the Agile Way.

Let’s recap what we’ve done so far:

Use Friday Reflection as a way to invest in yourself, reinvent yourself, and renew yourself.   If you do this well, this is the secret of continuous improvement.  Each week, what you learn from Friday Reflection can help you tune and improve your results so you get better and better.

One of the most common patterns is to simply lose sight of what we set out to achieve for the week.  That's why thinking of 3 Wins for the Week is so powerful.  It gives us a target.  We check ourselves during the week, and adjust our course.  Then Friday is where we really peer into our personal process and find ways to improve it.

3 Things Going Well and 3 Things to Improve

To do Friday reflection, simply give yourself 10 or 20 minutes on Friday mornings to ask yourself two things:

  1. What are three things going well?
  2. What are three things to improve?

The goal is to carry the good forward and build better habits.

Before you answer the questions above, really reflect on your week.  Did you do what you set out to do?  If not, did you trade up for the right things?  Did you get randomized? Did you bite off more than you can chew?

See what starts to happen?  You start to notice your own patterns.  This awareness becomes your advantage when you use it to change what's not working, and do more of what is working.  It's a way to improve your personal habits and streamline your results.

This is a powerful way to learn your own capacity and to gradually improve your “ability to execute.”

Example Patterns

Here are a few common patterns and what to do about them:

  1. You completed none of what you set out to do for the week.   Pay a lot of attention to this.   What’s important is that you know “WHY” you didn’t complete what you set out to do.  If you traded up for higher value impact, congratulations.   If you simply got randomized and distracted, then notice how much better today would feel if you had stayed focused on your 3 outcomes.  Also, make it a point to identify how you can choose better outcomes for the week.   You want to really start to learn what sorts of things are high value results for you.
  2. You completed some of what you set out to do.  Congratulations.   At least that’s some progress you can be proud of.   Use your results as feedback.   Did you complete your best opportunities?   Or did you simply go for low-hanging fruit?   Did you try to accomplish more than was reasonable within this time frame?   Use your answers here to gain tremendous insight into your own bottlenecks and ability to execute.   If you are simply biting off too much, then try to improve your ability to estimate what you can achieve within a week.   You’ll get better at this each time.  You’ll start to be able to size up your work week simply by eye-balling it.   You’ll develop an intuitive ability to guestimate your efforts, which will help you right size your wins, and focus on the highest impact results.   This, in turn, will help you build momentum so that you can create a snowball of progressive impact.
  3. You completed what you set out to do, but it doesn’t feel like impact.    This is a good lesson, as well as a good reminder.   First, make sure you aren’t throwing your results away.   Check to see if you are giving yourself enough credit for achieving your results.   Practice an attitude of gratitude.   Next, see if you can put your finger on “WHY” it doesn’t feel like impact.   Did you simply have a lack of clarity around what would be significant for this week?   Did you simply set your bar too low?   The best insight here is to gain increasing clarity on what types of outcomes really constitute high value results.
Enjoy the Journey

Most important, relax and truly embrace Friday Reflection as one of the best ways that you can improve your personal performance, in a simple and almost automatic way.   Just by asking these simple questions, you start to gain awareness and you start to gain more clarity on what works best for you, and what your true execution abilities are.

What’s even better is that you’ll naturally start to improve simply by paying attention to these key questions.   They’ll help you see what you did not see before.  And, if you make this Friday Reflection a simple way to check in with yourself each week, you’ll find yourself paying more attention throughout the week.  And, you’ll start to make little adjustments here and there that help you focus on and prioritize higher value outcomes and activities.

You will gradually find that you are achieving better and better results, with less effort and more clarity.

You will unleash the productive artist that’s already within you, as well as tap your most inspiring abilities, skills, and strengths.

This is how you will fan your flames and set your productivity on fire … the quiet way … the Agile way.

Welcome to your private victories that last a lifetime.

Categories: Architecture, Programming

Showing Up On Time, On Budget, With Needed Capabilities

Herding Cats - Glen Alleman - Fri, 01/24/2014 - 02:30

Software development ranges widely. At one end is the straightforward project: a small team with a continuous improvement effort, say a web site or warehouse management application where the customer is happy to just keep the improvements coming. Every improvement or new feature can be put to use. Along the way new ideas enter the mix, and since there is no real deadline, features can be deployed when they are ready.

On the other end of this wide spectrum is the mission critical, business critical project. For example, a mergers and acquisitions strategy in which two large firms need to join their ERP systems before any benefits of the merger for customers, operations, and cash flow can be realized. Below is an example of the Plan for such a project.

Plan

The first thing to put to rest is the naive and misinformed notion that planning is somehow not needed. The Plan above is a strategy for the integration of legacy systems with a new ERP system. The Planned date for this system is tied to business success. Long before this Plan was developed, the business strategy identified the needed capabilities of the integrated system, the planned savings from the integrated system, and a customer-facing set of capabilities for the result.

This is the fundamental Return on Investment equation. ROI = (Value - Cost)/Cost. We know the value, since the analysis of the value of the integrated system was done before the merger. The cost was estimated at the highest level from experience of integrating ERP in the past. Details come next, usually after the merger. 
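
A quick worked example of that equation, with purely hypothetical numbers: if the analysis values the integrated system at $50M and the integration is estimated to cost $10M, then ROI = (50M - 10M) / 10M = 4.0, a 400% return on the investment.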

Each of the boxes provides a needed capability. Each box is an incremental deployment of that capability, with feedback that informs the downstream capabilities. No project can have an end-to-end detailed schedule with any hope of that schedule remaining intact. But the Plan above describes the order of delivery of the needed capabilities. These are not arbitrary, these are not random, these are not subject to change. At least not without understanding the impact on the business merger process and the resulting cost savings and cash flow generation.

So What's The Point

If the value at risk is sufficient to call into question business failure, or at least major issues, if the project fails to meet its goals, then we need a Plan, an estimated cost, and an estimated delivery date. To ignore these business needs is to ignore the obligation to provide that needed information. 

Your project may not be a merger of two multi-billion dollar firms. But the question isn't when to estimate, but when do we not need to estimate. The answer to that starts with the Value at Risk. The business providing the funding gets to say what the value is.

What is the business willing to Risk on the project without knowing that value before starting?

 

Categories: Project Management

Software Project Estimation: Hurdles to Good Estimates

Hurdles come in many shapes and sizes.

There are a number of hurdles to jump to be in a position to provide accurate estimations (accurate, not precise!).  These hurdles represent a number of biases that have grown up around estimates that cause us to ignore uncertainty, the capability of the teams that will do the work and whether we are using consistent processes.

The first hurdle is uncertainty. Uncertainty is a natural occurrence in all projects. The earlier in the project we estimate, the larger the amount of uncertainty. All budgets, estimates and even plans must recognize that uncertainty exists and make provisions for the degree of uncertainty. One technique used to incorporate uncertainty is padding, which is the inclusion of an amount of extra time at the task, phase or overall project level. However, tasks should only be generated in planning, where uncertainty is at its lowest level, therefore padding should not be needed. But task-level padding is typically done when a bottom-up estimate is being generated where a budget or top-down estimate would be more appropriate.  Padding is generally a quick and dirty way to address uncertainty.  A second mechanism for dealing with uncertainty is to generate a budget or estimate as a range (the project will cost between x and y) based on the level of variability in past budgets or estimates compared to actuals.  The bigger the difference between prediction and reality, the greater the range. Techniques like Monte Carlo analysis can be used to generate confidence levels for the range boundaries.  Measuring how uncertain we are is more problematic when we are not using measurement data (comparing past budgets and estimates to project actuals).  One method is to gather a group of subject matter experts, have them individually develop a budget or estimate, and use the range of their answers as an indication of uncertainty.
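
As a minimal sketch of the Monte Carlo approach (in Python, with hypothetical tasks and three-point effort ranges), sampling each task's effort and reading confidence levels off the simulated totals might look like this:

# Minimal sketch: Monte Carlo simulation of total project effort from
# per-task three-point ranges. Task names and hours are hypothetical.
import random

tasks = {                       # (low, most likely, high) effort in hours
    "requirements": (80, 120, 200),
    "design":       (100, 160, 260),
    "build":        (400, 600, 1000),
    "test":         (200, 300, 550),
}

def simulate_total():
    # random.triangular(low, high, mode) models a simple three-point estimate
    return sum(random.triangular(low, high, mode) for low, mode, high in tasks.values())

totals = sorted(simulate_total() for _ in range(10_000))
p50 = totals[int(0.50 * len(totals))]
p80 = totals[int(0.80 * len(totals))]
print(f"50% confidence: <= {p50:.0f} hours; 80% confidence: <= {p80:.0f} hours")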

A second hurdle all budgets and estimates face is that the result needs to incorporate a number of lower-level predictions.  These include predicting the capability of the team, the types of methods they will use, the level of technical complexity and, to an extent, how the problem will be solved.  All estimators and planners make this kind of prediction for every estimate, budget or plan they have ever created (whether they knew it or not).  The number of different attributes that can affect any budget or estimate can be daunting.  For example, T. Capers Jones rattled off 130+ in the appendix of Estimating Software Costs. I recommend having a formal list of attributes and a rating scale so that whoever is involved in developing the budget or estimate remembers to account for the whole range of attributes (or consciously decides they do not need to be addressed).  The capability attributes (all of the attributes, including complexity) can have a very significant impact on the cost and speed of delivery.  Some estimators assume that all attributes will be average; however, the process of thinking through or assessing the attributes can uncover assumptions that may not be true.  There are several published lists of project attributes that can be mined to jump-start this form of assessment. I generally recommend getting advice before adopting a list from published materials to ensure a fit with your organization’s culture.
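
A minimal sketch of such a checklist (in Python, with a hypothetical attribute list and a 1-to-5 rating scale where 3 means "average"), used simply to flag the attributes an estimator must consciously account for:

# Minimal sketch: a formal attribute list with a 1-5 rating scale (3 = average).
# The attribute names and ratings are hypothetical, not a published list.
ratings = {
    "team skill set":           4,
    "team experience level":    2,
    "technical complexity":     5,
    "requirements volatility":  3,
    "technology new to team":   4,
}

flagged = {name: score for name, score in ratings.items() if score != 3}
print("Attributes that are not 'average' and should be reflected in the estimate:")
for name, score in sorted(flagged.items(), key=lambda kv: kv[1], reverse=True):
    print(f"  {name}: {score}")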

The third hurdle is consistency of process or method.  The majority of the effort in any software project is generated by the tasks required to build the project.  I call this the engineering process, which includes generating requirements, any analysis and design, coding and testing.  At a macro level all projects include work that could be assigned to those categories.  The differences are in the details.  The tasks needed to code a web module would be different from those needed to build a data warehouse (if you are not a developer you are going to have to trust me on this).  Each of these types of work will have a different productivity signature if the team is following any of the basic development frameworks (e.g. Agile, spiral, Crystal, RUP).  If the team is just winging it, there will be no means to predict how much the project will cost or how long it will last (though you might be able to predict failure).  As a side note, a consistent estimation process is also necessary in order to generate comparable results.

Clearing the three hurdles to effective estimation is not a herculean task, but it does require discipline and a degree of introspection, which may not come naturally to IT teams. This is where leadership, defined processes and coaching are helpful to break down the barriers that inhibit good budgeting and estimation processes.  Let's stop thinking of an estimate as a single number that states what will happen, and instead treat it as a prediction of what may happen.


Categories: Process Management

Do You Suffer From Fear of Failure?

Making the Complex Simple - John Sonmez - Thu, 01/23/2014 - 17:30

I didn’t think I had a fear of failure anymore.  I thought I had conquered that beast long ago, but I was wrong– I just had buried it deep. Fear of failure is something many people have and aren’t aware of, but it can really hold you back– if you let it. In this video, […]

The post Do You Suffer From Fear of Failure? appeared first on Simple Programmer.

Categories: Programming

Management and Task-Switching – 15 Minutes on Air with Johanna Rothman

NOOP.NL - Jurgen Appelo - Thu, 01/23/2014 - 15:10
15 Minutes on Air

Johanna Rothman is the author of Hiring Geeks that Fit, Manage Your Project Portfolio, and many other books. I asked Johanna three questions.

The post Management and Task-Switching – 15 Minutes on Air with Johanna Rothman appeared first on NOOP.NL.

Categories: Project Management

Day 5 of 7 Days of Agile Results – Thursday (Daily Outcomes)

Your Outcome: Know your 3 Wins to target today so that you have a simple way to focus and prioritize your effort throughout your day.

Welcome to Day 5 of 7 Days of Agile Results.  Agile Results is the productivity system introduced in my best-selling time management book, Getting Results the Agile Way.

Let’s recap what we’ve done so far:

Hopefully, at this point, it’s getting easier for you to identify your 3 outcomes for the day.

If not, fear not. 

You will get better, as long as you make it a conscious effort to really figure out what 3 outcomes you want for today.

Picture Your Day, the Agile Way

Hopefully, you’re also benefiting from scanning your calendar to get a good mental picture of your day.  And doing a quick list of your main tasks for the day should help you practice prioritizing before just diving in.

You’re teaching yourself to focus on higher value activities.

You can actually use this simple picture of your day to help envision your results and inspire you. 

All you have to do is imagine a simple future scene or two for your morning, a simple future scene or two for your afternoon, and a simple future scene or two for your night. 

And, if you don’t like what you imagine, then re-imagine it, and play out new possibilities.

A Busy Day, But There’s Always a Chance for Bursts of Brilliance

For today, I have an exceptionally busy calendar, but I notice that I have a few time slices where I can really focus and nail a few things.

My 3 planned outcomes for today are:

  1. I have the Devices and Services Story slides 80% complete (and with the right “fingerprints” on them) so that I can shop it around.
  2. I have a test scenario to walk the Modern App Transformation Story so that I can validate our approach before sinking a bunch of time.
  3. I have a simple plan for landing and scaling the Devices and Services readiness material across the organization so I can start to put the right things in the right places.

If I accomplish those three things, I’ll really be in great shape.   I have a lot of “below the line” things to do today, and a schedule that will randomize me quite a bit, but if I play my cards right, I should be able to pull off my 3 Wins above.

Play Around with How You State Your Outcomes or Wins

One thing I’ll point out, and which I hope you noticed, is that each day I’m playing around with how I represent the 3 Wins (or 3 Outcomes, or 3 Results).   I want you to play around too, so that you find what works for you.   I’m simply showing you a few different variations so that you can see that it’s not doing it this way or that way that’s important.

What’s important is that you have clarity on the 3 things you want out of today and for the week.   I can’t emphasize enough how important that is as a tool to help you focus and prioritize.

Appreciate Your Results

But what it also helps you do is to appreciate what you accomplish.  It gives you a simple way to play back your results, and acknowledge your achievements.  

It sounds so simple, and it is, but when you appreciate your results, you breathe new life into all your efforts, and you build momentum like you wouldn’t believe.

Don’t throw that away.  

Your results are actually your own reward.  

Cherish your achievements, and enjoy the journey as you go.

Have a great day, the agile way.

Categories: Architecture, Programming

Software Project Estimation: Fantasies

Fantasies are as ethereal as a cloud.

There are a number of fantasies about estimation that non-IT people and even some experienced software development professionals hold. Three of the classic fantasies people have about software estimates are the ideas that 1) estimates are like retail prices, a predictable fixed price, 2) estimates can always be negotiated down to a smaller number with impunity, and 3) in order to be accurate, estimates must be precise. Belief in any of these fallacies will have negative consequences.

The first fantasy is that custom projects can be priced like a cup of coffee. We fall prey to this fantasy because we are human and we want software projects to be as predictable as buying that cup of coffee.  When you go to most coffee shops, whether in North America, South America, Europe or India, the price of a cup of coffee is posted above the register.  In my local Starbucks I can get a cup of coffee for a few dollars; I just read the menu and pay the amount. The same is true for buying an app on my iPhone or a software package. Software project estimates, however, are built on imperfect information that ranges from partial requirements to evolving technologies and, worse yet, include the interaction of people and the chaos that entails.  From a purely mathematical perspective these imperfections mean that the actual effort, cost and duration of the project will be variable.  How variable is influenced by the process used, the amount of learning required to deliver the project and the number of people involved. These are just a few of the critical factors that drive project performance. This variability in knowledge is why mature estimation organizations almost always express an estimate either as a range or as a probability, and why some organizations suggest that estimation is impossible.

Agile projects re-estimate every sprint based on the feedback from the previous sprint using the concept of velocity.  Many waterfall projects re-estimate at the beginning of every new phase so that the current estimate utilizes the information the team has learned through experience.  Even when a fixed price is offered, the organization agreeing to the fixed price will have done an analysis to determine whether it can deliver for that price, what the project will really cost and the probability of still making a profit. This is the analysis any organization would have to do in order to say it is x% confident of an estimate. When projects run short on time, resources or money and they can’t beg for more, they will begin to make compromises ranging from cutting corners (we don’t need to test that) to jettisoning scope (let’s push that feature to phase two).  Many of these decisions will be made quickly and without enough thought, which will hurt IT’s reputation and increase project risk.

A second classic fantasy is that you can always browbeat the team into making the estimate smaller.  This fantasy can be true.  A good negotiator will leverage at least two psychological traits to whittle away at an estimate.  The first trait is the natural optimism of IT personnel, which we discussed in Software Project Estimation: Types of Estimates.  The problem is that negotiating the estimate downward (rather than negotiating over scope or resources) can lead to padding of estimates or to technical debt driven by pressure on profit margin or on career prospects. Estimators who know they are going to be pushed to reduce any estimate, regardless of how well it is built, will sometimes cheat and pad the estimate so that when they are pushed to cut they can do so without hurting the project. This behavior is only a short-term fix.  Sooner or later (and usually sooner) sponsors and managers figure out the tactic (perhaps because they used it themselves) and begin demanding even deeper cuts.  The classic estimation joke is that every first estimate should be cut in half and then sent back to be re-estimated.  A second side effect of this fantasy is that when the estimate is compressed and the requirements are not reduced, the probability of the team needing to cut corners increases.  Cutting corners can result in technical debt or just plain mistakes.  In extreme circumstances, teams will take big gambles on solutions in an attempt to be on budget.

A third fantasy is that precision equals accuracy. Precision is defined as exactness.  A precise estimate for a project might be that the project will cost $28,944 USD, require 432 hours, and take 43 days, beginning January 1st and completing February 12th. Whether the estimate is accurate, defined as close to actual performance, is unknown.  This is precision bias, a form of cognitive bias in which precision and accuracy are conflated.  In most cases where precision bias occurs, the high precision is taken to imply higher accuracy: the level of precision gives the impression that the estimate is highly accurate.  The probability of a highly precise estimate being accurate is nearly zero; however, add a few decimal places and see how much more easily it is believed. As we have noted before, wrong budgets and/or estimates will increase the risk of project failure.

When I teach estimation I usually begin with the statement that all estimates are wrong.  This is done for theatrical effect, however it is perfectly true.  Any estimate that is a single, precise number and has gone through several negotiations (read that as revised down) is nearly always wrong. However, when we jettison the false veneer of precision, integrate uncertainty and stop randomly padding estimates, we can construct a much more accurate prediction of how a project will perform.  Always remember that an estimate is a prediction, not a price.


Categories: Process Management

Neo4j Backup: Store copy and consistency check

Mark Needham - Wed, 01/22/2014 - 18:36

One of the lesser known things about the Neo4j online backup tool, which I wrote about last week, is that conceptually there are two parts to it:

  1. Copying the store files to a location of your choice
  2. Verifying that those store files are consistent.

By default both of these run when you run the ‘neo4j-backup’ script but sometimes it’s useful to be able to run them separately.

If we want to run just the store file copying part of the process we can tell the backup tool to skip the consistency check by using the ‘verify‘ flag:

$ pwd
/Users/markneedham/Downloads/neo4j-enterprise-2.0.0
$ ./bin/neo4j-backup -from single://127.0.0.1 -to /tmp/foo -verify false
Performing full backup from 'single://127.0.0.1'
Files copied
................        done
Done

If we ran that without the ‘verify’ flag we’d see the output of the consistency checker as well:

$ ./bin/neo4j-backup -from single://127.0.0.1 -to /tmp/foo
Performing full backup from 'single://127.0.0.1'
Files copied
................        done
Full consistency check
....................  10%
....................  20%
....................  30%
....................  40%
....................  50%
....................  60%
....................  70%
....................  80%
....................  90%
.................... 100%
Done

If we already have a backup and only want to run the consistency checker we can run the following command:

$ java -cp 'lib/*:system/lib/*' org.neo4j.consistency.ConsistencyCheckTool /tmp/foo
Full consistency check
....................  10%
....................  20%
....................  30%
....................  40%
....................  50%
....................  60%
....................  70%
....................  80%
....................  90%
.................... 100%

The consistency tool itself takes a ‘config‘ flag which gives you some control over what things you want to consistency check.

The various options are defined in org.neo4j.consistency.ConsistencyCheckSettings.

For example, if we want to change the file that the consistency check report is written to we could add the following property to our config file:

$ tail -n 1 conf/neo4j.properties
consistency_check_report_file=/tmp/foo.txt

And then run the consistency tool like so:

$ java -cp 'lib/*:system/lib/*' org.neo4j.consistency.ConsistencyCheckTool -config conf/neo4j.properties /tmp/foo

If there are any inconsistencies they’ll now be written to that file rather than to a file in the store directory.

You can also pass that ‘config’ flag to the backup tool and it will make use of it when it runs the consistency check. e.g.

$ ./bin/neo4j-backup -from single://127.0.0.1 -to /tmp/foo -verify false -config conf/neo4j.properties

Most of the time you don’t need to worry too much about either of these commands but I always forget what the various options are so I thought I’d better write it up while it’s fresh in my mind.

Categories: Programming

How would you build the next Internet? Loons, Drones, Copters, Satellites, or Something Else?

If you were going to design a next generation Internet at the physical layer that routes around the current Internet, what would it look like? What should it do? How should it work? Who should own it? How should it be paid for? How would you access it?

It has long been said the Internet routes around obstacles. Snowden has revealed some major obstacles. The beauty of the current app and web system is that the physical network doesn't matter. We can just replace it with something else. Something that doesn't flow through choke points like backhaul networks, undersea cables, and cell towers. What might that something else look like?

Google's Loon Project

Project Loon was so named because the idea was thought to be loony. Maybe not...

Categories: Architecture

Performance Reviews Are Not Useful; Feedback Is

I have received some wonderful feedback from some of my managers. Back when I was a young engineer, one of my managers gave me the feedback at an annual review that I didn’t quite finish my projects.

“Oh, you mean on the project I just finished last week?” I wanted to know if it was just that one. I thought I could go back and finish it.

“No, I mean the one 9 months ago, the one 6 months ago, the one 3 months ago, and the one last week,” my boss said.

I became angry. “Okay, I understand why you saved last week’s project for my performance review. That’s okay. Why on earth did you “save” my feedback for the other three projects?? I could have fixed them!”

He shrugged. “I thought I was supposed to wait for the performance review.”

“Don’t wait that long!” I told him. I vowed that when I became a manager, I would never surprise people with feedback.

I now know about finishing projects. As I said, it was great feedback.

I’ve also received feedback about how I needed to let people on a project come to me with bad news. That was really helpful, and I didn’t receive it at a performance review, thank goodness. That would have been way too late. I was able to change my behavior.

When I became a manager, I had to write performance evaluations for my staff. I didn’t like it, but I did it. I thought it was crazy, because, even though we weren’t agile back then, the people worked in cross-functional teams where the people on the teams knew more about what “my” people did than I did. Yes, even though I had one-on-ones. Yes, even though I asked everyone for a list of accomplishments in advance. But, it was the way it was. Even I thought I couldn’t buck city hall.

But now, agile has blown the idea of performance evaluations wide open. And ranking people? Oh my.

I once worked in an organization where a new VP wanted to rank everyone in the Engineering organization, all 80 people. I thought he wasn't serious, but he was. He wanted to rank everyone from 1 to 80. We directors had to take an entire day to do this. What was he going to do with the ranking? Cut the bottom 10%. This was serious.

I asked him, “Who’s going to rank us?”

He answered, “I will.”

I asked, “Based on what information?” He’d been there a week.

He replied. “I have my sources.”

Yeah, I bet he did.

The results of that ranking exercise? He managed to take a team of directors who had worked together well before that day, and make us a group of individuals. We were out for ourselves, because this was a zero-sum game.

At the end, no one was happy. Everyone was unhappy with the ranking, with the process, with everything about the day. This was no way to run an organization where people have to work together.

I’ve been a consultant for almost 20 years now. I have not received a formal performance review in that time. I’ve received plenty of feedback. Even when I haven’t enjoyed the feedback, I have liked the fact that I have received it.

And, that is the topic of this month’s management myth, Management Myth 25: Performance Reviews Are Useful.

Remember, I was inside organizations for almost 20 years. I received fewer than 15 performance reviews. Somehow, my bosses never quite got around to them. They hated doing them. I know that one of my bosses wrote them with the help of Scotch; he admitted it.

Feedback is useful. Performance reviews? Not so much.

P.S. I know there is a comment on that article already. I am writing a response. The comment deserves more than an off-hand reply.

Categories: Project Management

Software Development Linkopedia January 2014

From the Editor of Methods & Tools - Wed, 01/22/2014 - 15:09
Here is our monthly selection of interesting knowledge material on programming, software testing and project management.  This month you will find some interesting information and opinions about user experience, coding standards, managing developers (and not assholes) and software testers in software development, the dark side of user stories, DSDM, assessing your adherence to Scrum and testing complex systems.

  • Web site: UX axioms
  • Blog: Douglas Crockford on coding standards
  • Blog: A Rockstar Programmer Isn’t the Same Thing as a Smart Asshole
  • Blog: The Productivity Cycle
  • Blog: Stop Telling Stories
  • Blog: Presenting the Scrum Adherence Index
  • Blog: Is ...

Checklist for Book Writers

NOOP.NL - Jurgen Appelo - Wed, 01/22/2014 - 14:35
Checklist

Yesterday, in my hangout with Jason Little, I discussed the benefits of having a checklist for book chapters. I already published a blog post checklist on this blog earlier. So I thought, “Why not share my book chapter checklist as well”?

The post Checklist for Book Writers appeared first on NOOP.NL.

Categories: Project Management

Only Show Finished Work During a Sprint Review—Maybe

Mike Cohn's Blog - Wed, 01/22/2014 - 13:00

I was at dinner years ago with my wife, a friend and his girlfriend. After the main course, our waiter brought around a dessert tray. As he pointed out each dessert option, the waiter made a show of flicking his finger into the item he was discussing. Fortunately, the items were all plastic and his finger bounced off the fake dessert without harming it.

"I'll have the key lime pie," I said. My wife chose the creme brulee. Wanting to have a little fun, my friend Allan didn't say which he wanted. Instead he flicked his finger into what he thought was a fake slice of chocolate cake. Surprisingly, it was not a fake slice of cake and Allan's finger was embedded halfway into a real slice of cake. The cake was the only item on the dessert tray that was real. We hadn't noticed that it was the only item our waiter had not himself flicked.

This wouldn't have happened if our waiter hadn't mixed up a real dessert with a bunch of fake desserts.

This same problem shows up during sprint reviews when Scrum teams mix work that is done and work that only appears to be done.

The Scrum rule is that during a sprint review a team is allowed to demonstrate only those product backlog items that are truly done. They can't demonstrate a screen without its backend coded, for example.

In general, this is a great rule. It prevents a team from showing a plate of ready-to-eat, real desserts mixed with a few fake desserts that look good but aren't really available.

If a team is allowed to show work that is nearly, but not fully, complete there is the risk that the team starts to do this more and more often because it feels good to show all that progress. But extrapolate that forward a few sprints and you'll see that the team will have to show more and more false progress just to appear to be going at the same speed. In effect, the lies get bigger.

There is also the huge risk that stakeholders mistakenly believe the work is done. Sometimes this is the fault of the team, which isn't clear enough in saying something is not done. Other times, though, the team may be perfectly clear, but stakeholders don't hear the message. Many years ago when big prototypes were more common, the term "protoduction" came into use to refer to a prototype that was forced into production use.

But are there times when it might be OK for a team to violate this Scrum rule and show a product backlog item that is not done?

Yes, I think there are times when it is OK to do.

Keep in mind that the purpose of a sprint review is to get feedback that can be used to inform what should be done next. To do that, it may sometimes be helpful to show work that isn't 100% done. And a sprint review can be a great forum for doing that because of the audience that may be there. For example, you may have everyone you need to comment on whether the visual design of this next item meets everyone's expectations. So go ahead, show that feature and get feedback on it.

So, while I'm not advocating the violent overthrow of the Scrum rule of only demonstrating what is finished, I do think it is worth understanding why that rule is in place: It prevents teams from deceiving themselves into thinking they are further along than they are, and it prevents teams from deceiving their stakeholders (intentionally or not).

But, don't let the rule prevent your team from getting valuable feedback on something that isn't quite yet done if that feedback would be hard to get another way.

A few simple guidelines can help you make sure your team is only breaking this rule when doing so is appropriate. I do not recommend breaking this rule:

  • when first starting with Scrum
  • when there is any chance the work will be misconstrued as being truly done
  • when the feedback you'd get could be easily gotten another way

If you follow those guidelines, you'll stay true to the intent of the rule and, unlike that long-ago waiter, won't cause your customers to stick a finger into a perfectly fine piece of chocolate cake.

Using visual models to prioritize features

Software Requirements Blog - Seilevel.com - Wed, 01/22/2014 - 12:45
One of the most difficult tasks on any project is prioritizing requirements.  This is because there is almost always disagreement about what the priorities should be.  The more stakeholders you have, the more disagreements you’re likely to have.  It is human nature to avoid conflict and that is why many product managers put off these […]

Using visual models to prioritize features is a post from: http://requirements.seilevel.com/blog

Categories: Requirements