
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Dazzle Your Audience By Doodling

Xebia Blog - Sun, 09/28/2014 - 10:29

When we were kids, we loved to doodle. Most of us did anyway. I doodled all the time, everywhere, and, to the dismay of my mother, on everything. I still love to doodle. In fact, I believe doodling is essential.

The tragedy of the doodle lies in its definition: "A doodle is an unfocused or unconscious drawing made while a person's attention is otherwise occupied." That's why most of us have been taught not to doodle. Seems logical, right? The teacher sees you doodling, concludes you are not paying attention in class and thus not learning as much as you should, so he puts a stop to it. Trouble is though, it's wrong. And it's not just a little bit wrong, it's totally and utterly wrong. Exactly how wrong was shown in a study by Jackie Andrade. She discovered that doodlers have 29% better recall. So, if you don't doodle, you're doing yourself a disservice.

And you're not just doing yourself a disservice, you're also doing your audience a disservice. Neurologists have discovered a phenomenon dubbed "mirror neurons." When you see something, the same neurons fire as if you were doing it. So, if someone shows you a picture, let's say a slide in a presentation, it is as if you're showing that picture to yourself.

Wait, what? That doesn't sound special at all, now does it? That's why presentations using only slides can be so unintentionally relaxing.

Now, if you see someone write or draw something on a flip chart, dry erase board or any other surface in plain sight, it is as if you're writing or drawing it yourself. And that ensures 29% better recall. Better yet, you'll remember what the presenter wants you to remember. Especially if he can trigger an emotional response.

Now, why is that? At EUVIZ in Berlin last month, I attended a presentation by Barbara Siegel from Look2Listen that changed my life. Barbara talked about the latest insights from neuroscience that prove that everyone feels first and thinks later. So, if you want your audience to tune in to your talk, show some emotion! Want people to remember specific points of your talk? Trigger and capture emotion by writing and drawing in real-time. Emotion runs deep and draws firm neurological paths in the brain that help you recreate the memory. Memories are recreated, not stored and retrieved.

Another thing that helps you draw firm neurological paths is exercise. If you get your audience to stand up and move, you increase their brain activity by 7%, heightening alertness and motivation. By getting your audience to sit down again after physical exercise, you trigger a rebalancing of neurotransmitters and other neurochemicals, so they can use the newly spawned neurons in their brain to combine into memories of your talk. Now that got me running every other day! Well, jogging is more like it, but hey: I'm hitting my target heart-rate regularly!

How does this help you become a better public speaker? Remember these two key points:

  1. At the start of your speech, get your audience to stand up and move to ensure 7% more brain activity and prime them for maximum recall.
  2. Make sure to use visuals and metaphors and create most, if not all, of them in real-time to leverage the mirror neuron effect and increase recall by 29%.

Value Is More Than Quality

A good fruit salad requires balance.

In the Software Process and Measurement Cast 308, author Michael West describes a scenario in which a cellphone manufacturer decided that quality was their ultimate goal. The handset that resulted did not wow their customers. The functionality wasn't what was expected and the price was prohibitive. The moral of the story was that quality really should not have been the ultimate goal of the project. At the time I recorded the interview I did not think the message Mr. West was suggesting was controversial. Recently I got a call on Skype. The caller had listened to the podcast and read Project Success: A Balancing Act and wanted to discuss why his department's clients were up in arms about the slow rate of delivery and the high cost of projects. Heated arguments had erupted at steering committee meetings, and it was rumored that someone had suggested that if the business wanted cheaper products, IT would just stop testing. Clearly, focusing on the goal of zero defects (which was equated to quality) was eliciting unproductive behavior. Our discussion led to an agreement that a more balanced goal for software development, enhancement or maintenance projects is the delivery of maximum value to whoever requested the project.

When a sponsor funds and sells a project they communicate a set of expectations. Those expectations typically state that a project will deliver:

  1. The functionality needed to meet their needs,
  2. The budget they will spend to acquire that functionality,
  3. When they want the functionality, and
  4. The level of quality required to support their needs.

Each expectation is part of the definition of value. A project that is delivered with zero defects two years after it is needed is less valuable than a project delivered when needed that may have some latent minor defects. A project that costs too much uses resources that might be better used for another project, or potentially causes an organization's products to be priced out of the market. Successful projects find a balance between all expectations in order to maximize the value that is delivered.

Quality is not the ultimate goal of any software development, enhancement or maintenance project, but neither is cost, schedule or even functionality. Value is the goal all projects should pursue. Finding and maintaining equilibrium between the competing goals of cost, schedule and functionality is needed to maximize the ultimate value of a project. Each project will have its own balance based on its context. Focusing on one goal to the exclusion of all others represents an opportunity cost. Every time we face a decision that promotes one goal over another, we should ask ourselves whether that choice is worth the focus it takes away from the other goals. Projects that focus on value create an environment in which teams, sponsors and organizations confront the trade-offs that goals like zero defects or perfect quality can cause.

Categories: Process Management

Making Choices in the Absence of Information

Herding Cats - Glen Alleman - Sat, 09/27/2014 - 15:57

Decision making in the presence of uncertainty is a normal business function as well as a normal technical development process. The world is full of uncertainty.

Those seeking certainty will be woefully disappointed. Those conjecturing that decisions can't be made in the presence of uncertainty are woefully misinformed. 

Along with all this woefulness is the boneheaded notion that estimating is guessing, and that decisions can actually be made in the presence of uncertainty in the absence of estimating.

Here's why. When we are faced with a decision - a choice between multiple options, each with probabilistic outcomes - the result is uncertain. If it were not - that is, if we had 100% visibility into the consequences of our decision, the cost involved in making that decision, and the cost impact or benefit impact from that decision - it's no longer a decision. It's a choice to pick between several options based on something other than time, money, or benefit.

We're at the farmers market every Saturday morning. Apples are in season. Honey Crisp are my favorite. Local growers all know each other and price their apples pretty much the same. What they don't sell on Saturday, they take to private grocers. What doesn't go there, goes to the chains and labeled Locally Grown. Each step in the supply chain has a mark up, so buying at the Farmers Market is the lowest price. So deciding which apples to buy is usually an impulse for me and my wife. The cost is the same, the benefit is the same, it's just an impulse.

Let's look at the broad issue here - not about apples, from Valuation of Software Initiatives Under Uncertainty, Hakan Erdogmus, John Favaro, and Michael Halling, (Erdogmus is well known for his work in Real Options).


Buying an ERP system, or funding the development of a new product, or funding the consolidation of the data center in another city is a much different choice process than picking apples. These decisions have uncertainty. Uncertainty of the cost. Uncertainty of the benefits, revenue, savings, increasing in reliability and maintainability. Uncertainty in almost every variable. 

Managing in the presence of uncertainty and the resulting risk is called business management. It's also called how adults manage projects (Tim Lister).

So with the uncertainty conditions established for our project work, how can we make decisions in the presence of the uncertainties of cost, schedule, resource utilization, delivered capabilities, and all the other attributes and all the ...ilities of the inputs and outcomes of our work?

The Presence of Uncertainty is one of most Significant Characteristics of Project Work

Managing in the presence of uncertainty is unavoidable. Ignoring this uncertainty doesn't make it go away - it's still there even if you ignore it.

Uncertainty comes in many forms

  • Statistical uncertainty - Aleatory uncertainty, only margin can address this uncertainty.
  • Subjective judgement - bias, anchoring, and adjustment.
  • Systematic error - lack of understanding of the reference model.
  • Incomplete knowledge - Epistemic Uncertainty, this lack of knowledge can be improved with effort.
  • Temporal variation - instability in the observed and measured system.
  • Inherent stochasticity - instability between and within collaborative system elements
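As a minimal illustration of the first bullet - margin addressing statistical (aleatory) uncertainty - here is a sketch in Python. All task numbers are hypothetical; it simulates uncertain task durations and derives a schedule margin from the resulting distribution:

```python
import random

# Hypothetical tasks whose durations are aleatorically uncertain, modeled as
# triangular distributions (min, most likely, max) in days.
tasks = [(3, 5, 9), (2, 4, 7), (5, 8, 14)]

def simulate_project(trials=10_000, seed=42):
    """Monte Carlo: sample a total duration for each trial, return sorted totals."""
    random.seed(seed)
    totals = []
    for _ in range(trials):
        totals.append(sum(random.triangular(lo, hi, mode)
                          for lo, mode, hi in tasks))
    totals.sort()
    return totals

totals = simulate_project()
p50 = totals[len(totals) // 2]          # median outcome
p80 = totals[int(len(totals) * 0.8)]    # 80% confidence completion
margin = p80 - p50                      # schedule margin covering aleatory risk
print(f"P50: {p50:.1f} days, P80: {p80:.1f} days, margin: {margin:.1f} days")
```

The irreducible spread between the median and the 80th percentile is exactly the margin the bullet refers to: no amount of extra knowledge removes it, you can only cover it.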

Agile is an Approach to Dealing With Software Project Uncertainty

Going on 12 years ago, managing in the presence of uncertainty was an important topic that spawned agile approaches to ERP. This work has progressed to more formal principles and practices around software development in the presence of uncertainty and the acquisition of software products.

So Back To the Problem at Hand

If decisions - credible decisions - are to be made in the presence of uncertainty, then somehow we need information to address the sources of that uncertainty in the bulleted list above. This information can be obtained through many means: modeling, sampling, parametric methods, past performance, reference classes. Each of these sources has in itself an inherent uncertainty. So in the end, it comes down to this: to make a credible decision in the presence of uncertainty, we need to estimate the factors that go into that decision.

We Need To Estimate

There's no way out of it. We can't make a credible decision of any importance without an estimate of the impact of that decision, the cost incurred from making that decision, the potential benefits from that decision, and the opportunity cost of NOT selecting an outcome from a decision.

Picking one Honey Crisp basket over another: not much at risk, low cost, low probability of disappointment. Planning, funding, managing, deploying, and operating an ERP system: not likely done in the absence of estimating all variables up front, every time we produce the next increment, every time we have new information, every time we need to make a decision. To suggest otherwise violates the core principles of microeconomics. If it's your money no one cares - apples or ERP, proceed at your own risk, ignore microeconomics. If it's not your money, it's going to require intentional ignorance of the core principles of successful business management. Behave appropriately.

Related articles

  • Time to Revisit The Risk Discussion
  • How to Deal With Complexity In Software Projects?
  • An Example of Complete Misunderstanding of Project Work
  • Uncertainty is the Source of Risk
  • When We Say Risk What Do We Really Mean?
  • Both Aleatory and Epistemic Uncertainty Create Risk
  • Uncertainty and Risk
  • Four Types of Risk
  • Bayesian probability theory banned by English court
Categories: Project Management

An Example of Complete Misunderstanding of Project Work

Herding Cats - Glen Alleman - Sat, 09/27/2014 - 01:06

WARNING: RANT AGAINST INTENTIONAL IGNORANCE FOLLOWS

This is one of those pictures tossed out at some conference that drives me crazy. It's uninformed, ignores the disciplines of developing software for money, and is meant to show how smart someone is, without actually understanding the core processes needed for actually being knowledgeable of the topic - in this case statistical processes of project work. Then the picture gets circulated, re-posted, and becomes the basis of all kinds of other misunderstanding, just like the Dilbert cartoons that caricature the problem but have no corrective actions associated.

It is popular in some circles of agile development to construct charts showing the strawman of deterministic and waterfall approaches, then compare them to the stochastic approaches and point out how much better the latter is than the former. Here's an example.

These strawman approaches are of course not only misinformed, they're essentially nonsense in any domain where credible project management is established, and the basis of the response to them is Don't Do Stupid Things on Purpose.


Let's look at each strawman statement for the Deterministic View in light of actual project management processes, either simply best practice or mandated practice.

  • Technologies are stable - no one who has been around in the last 50 years believes this. And if they do, they've been under a rock. Suggesting this is the case ignores even the simplest observations of technology and its path of progress.
  • Technologies are predictable - anyone who has any experience in any engineering discipline knows this is not the case. Beyond the simplest single machine, unintended consequences and emergent behavior are obvious.
  • Requirements are stable - no they're not. Not even in the bonehead simplest project. This would require precognition and clairvoyance.
  • Requirements are predictable - no they're not. Read any Requirements Management guidance, any requirements elicitation process, or work any non-trivial project to learn this as a cub developer.
  • Useful information is available at the start - this would require clairvoyance.
  • Decisions are front loaded - this completely ignores the principles of microeconomics of decision making in the presence of uncertainty. Good way to set fire to your money. For a good survey of when and how to make front loaded decisions see Making Essential Choices with Scant Information. Also buy Williams' other book, Modelling Complex Projects, along with Project Governance: Getting Investments Right. This statement is a prime example of not doing your homework before deciding to post something in public.
  • Task durations are predictable - all task durations are driven by aleatory uncertainty. For this to be true, the laws of stochastic processes would have to be suspended. Another example of being asleep in the High School statistics class.
  • Task arrival times are predictable - same as above. Must have been a classics major in college. With full apologies to our daughter, who was a classics major.
  • Our work path is linear, unidirectional - this would require the problem to be so simple it can be modeled as a step by step assembly of Lego parts. Unlikely in any actual non-trivial project. When a system of systems becomes the problem - any enterprise IT project, any complex product - the conditions of linear and unidirectional go out the window.
  • Variability is always harmful - this violates the basis of all engineering systems, where, as Deming showed, variability is built into the system. Didn't anyone who made this chart read Deming?
  • The math we need is arithmetic - complete ignorance of the basic processes of all systems - they are statistical generating functions creating probabilistic outcomes.

The only explanation here is the intentional ignorance of basic science, math, engineering, and computer science. 

In the Stochastic View there are equally egregious errors.

  • Technologies are evolving - of course they are. Look at any technology to see rapid and many times disruptive evolution. Project management is risk management. Risks are created by uncertainty - reducible and irreducible. Managing in the presence of uncertainty is how adults manage projects.
  • Technologies are unpredictable - in some sense, but we're building systems from parts in the marketplace. If you're a researcher at Apple this is likely the case. If you're integrating an ERP system, you'd better understand the process, technology, and outcomes, or you're gonna fail. Don't let people who believe this spend your money.
  • Requirements are evolving - of course they are. But the needed capabilities had better be stable or you've signed up for a Death March project, with no definition of done. But requirements aren't the starting point, Capabilities are. Capabilities Based Planning is how enterprise and complex projects are managed.
  • Requirements are degrees of freedom - I have no idea what this means. Trade Space is part of all good engineering process. Wonder if the author or those referencing this chart know that.
  • Useful information is continuously arriving - of course it is. This work is called engineering and development. Both are verbs.
  • Decisions are continuous - of course they are. This is the core principle of microeconomics of all business decision making. But front-end decisions are mandatory. See "Issues on Front-end Decision Making" for some background before believing this statement is credible, and for a summary of the concept of the Williams book above.
  • Task arrival times are unpredictable - this is intentional ignorance of stochastic processes. Prediction always includes confidence and a probability distribution. Prediction is simply saying something about the future. For task arrival times to be unpredictable, those times would have to be completely chaotic, with no underlying process to produce them. This would be unheard of in project work. And if so, the project would be chaotic and destined to fail starting on day one. Another example of being asleep in the stats class.
  • Our work path is networked and recursive - of course it is. But this statement is counter to the INVEST condition of agile, which is present in only the simplest projects.
  • Variability is required - all processes are stochastic processes. A tautology. Natural variability is irreducible. Event based variability is disruptive to productivity, quality, cost and schedule performance, and to the forecasting of when, how much, and what will be delivered in terms of Capabilities. Uncontrolled variability is counter to proper stewardship of your customer's money.
  • The math we need is probability and statistics - yes, and you'd better have been paying attention in the High School statistics class and stop using terms you can't refer to in the books on your office shelf.

In the End

For some reason, using charts like this one, re-posting Dilbert cartoons, and making statements using buzz words - "we're using Real Options and Bayesian Statistics to manage our work" are my favorite ones - seems to become more common the closer we get to the sole contributor point of view. Along with "look at my 22 samples of self-selected data with a ±70% variance" as how to forecast future performance.

It may be because sole contributors are becoming more prevalent. Sole contributors have certainly changed the world of software development in ways never possible by larger organizations. But without the foundation of good math, good systems engineering - and I don't mean "data center systems engineering," I mean INCOSE Systems Engineering - those sole contributor points of view simply don't scale.

Always ask when you hear a piece of advice - in what domain have you applied this advice with success? 

Related articles

  • Why is Statistical Thinking Hard?
  • The Misunderstanding of Chaos - Again
  • Deterministic versus Stochastic Trends in Earned Value Management Data
  • Management is Prediction - W. Edwards Deming
  • How To Assure Your Project Will Fail
Categories: Project Management

Kanban And The Theory of Constraints

A dam releasing water is an example of flow through a constraint.

The Theory of Constraints (ToC), as defined by Eli Goldratt in the book The Goal (1984), is an important concept that shapes how we measure and improve processes. ToC takes a systems thinking approach to understanding and improving processes. A simple explanation of ToC is that the output of any system or process is limited by a very small number of constraints within the process. Kanban is a technique to visualize a process, manage the flow of work through the process, and continually tune the process to maximize flow; it can help you identify the constraints. There are three critical points from the ToC to remember when leveraging Kanban as a process improvement tool.

  1. All systems are limited by a small number of constraints. At any specific point in time, as work items flow through a process, the rate of flow is limited by the most significantly constrained step or steps. For example, consider the TSA screening process in a United States airport. A large number of people stream into the queue, a single person checks their ID and ticket and passes them to another queue where people prepare for scanning, and then both people and loose belongings are scanned separately, are cleared or are rescanned, and finally the screened get to reassemble their belongings (try doing that fast). The constraint in the flow is typically processing people or their belongings through the scanner. Flow can’t be increased by adding more people to check IDs, because that is not typically the constraint in the flow. While each step in a process can act as a constraint, depending on the amount of work the process is asked to perform or on specific circumstances (the ID checker has to step away and is not replaced, thereby shutting down the line), at any one time the flow of work is generally limited by one or just a few constraints.
  2. There is always at least one constraint in a process. No step is instantly and infinitely scalable. As the amount of work a process is being called on to perform ebbs and flows, there will be at least one constraint in the flow. When I was very young my four siblings and I would all get up to go to school at roughly the same time. My mother required us to brush our teeth just before leaving for school. The goal was to get our teeth cleaned and get out of the bathroom so that we could catch the bus as quickly as possible. We all had a brush and there was plenty of room in the bathroom; however, there was only one tube of toothpaste (constraint). One process improvement I remember my mother trying was to buy more tubes of toothpaste, which caused a different problem to appear when we began discussing whose tube was whose (another constraint). While flow was increased, a new constraint emerged. We never found the perfect process, although we rarely missed the bus.
  3. Flow can only be increased by increasing the flow through a constraint. Consider drinking a milkshake through a straw. In order to increase the amount of liquid we get in our mouth we need to either suck on the straw harder (and that will only work until the straw collapses), change the process, or increase the capacity of the straw. In the short run sucking harder might get a bit more milkshake through the straw, but if done for any length of time the additional pressure will damage the “system.” In the long run the only means to increase flow is either to change the size of the straw or to change the process by drinking directly from the glass. In all cases, to get more milkshake into our mouth we need to make a change so that more fluid gets through the constraint in the process.

The Theory of Constraints provides a tool to think about the flow of work from the point of view of the constraints within the overall process (systems thinking). In most processes, just pushing harder doesn’t increase the output of a process beyond some very limited, short-term improvement. In order to increase the long-term flow of work through a process we need to identify and remove the constraints that limit the flow of value.
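The three points above can be sketched in a few lines of code. This is a toy model (the step names and capacities are made up, loosely following the TSA example) in which a serial process's throughput equals the capacity of its bottleneck:

```python
# Toy model: throughput of a serial process is capped by its slowest step
# (the constraint), so adding capacity elsewhere doesn't increase flow.

def throughput(steps):
    """Items/hour a serial pipeline can sustain = capacity of its bottleneck step."""
    return min(steps.values())

# Hypothetical capacities, in people per hour, for each screening step.
screening = {"check_id": 120, "prepare": 90, "scan": 40, "reassemble": 80}

assert throughput(screening) == 40            # the scanner is the constraint

# Doubling a non-constraint step changes nothing...
screening["check_id"] = 240
assert throughput(screening) == 40

# ...while elevating the constraint raises the flow of the whole system.
screening["scan"] = 70
assert throughput(screening) == 70
```

The second assertion is point 1 (adding ID checkers doesn't help), and the last is point 3 (only widening the constraint increases flow).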

Categories: Process Management

Neo4j: COLLECTing multiple values (Too many parameters for function ‚Äėcollect‚Äô)

Mark Needham - Fri, 09/26/2014 - 21:46

One of my favourite functions in Neo4j’s cypher query language is COLLECT which allows us to group items into an array for later consumption.

However, I’ve noticed that people sometimes have trouble working out how to collect multiple items with COLLECT and struggle to find a way to do so.

Consider the following data set:

create (p:Person {name: "Mark"})
create (e1:Event {name: "Event1", timestamp: 1234})
create (e2:Event {name: "Event2", timestamp: 4567})
create (p)-[:EVENT]->(e1)
create (p)-[:EVENT]->(e2)

If we wanted to return each person along with a collection of the event names they’d participated in we could write the following:

$ MATCH (p:Person)-[:EVENT]->(e)
> RETURN p, COLLECT(;

| p                    | COLLECT(        |
| Node[0]{name:"Mark"} | ["Event1","Event2"] |
1 row

That works nicely, but what about if we want to collect the event name and the timestamp but don’t want to return the entire event node?

An approach I’ve seen a few people try during workshops is the following:

MATCH (p:Person)-[:EVENT]->(e)
RETURN p, COLLECT(, e.timestamp)

Unfortunately this doesn’t compile:

SyntaxException: Too many parameters for function 'collect' (line 2, column 11)
"RETURN p, COLLECT(, e.timestamp)"

As the error message suggests, the COLLECT function only takes one argument so we need to find another way to solve our problem.

One way is to put the two values into a literal array which will result in an array of arrays as our return result:

$ MATCH (p:Person)-[:EVENT]->(e)
> RETURN p, COLLECT([, e.timestamp]);

| p                    | COLLECT([, e.timestamp]) |
| Node[0]{name:"Mark"} | [["Event1",1234],["Event2",4567]] |
1 row

The annoying thing about this approach is that as you add more items you’ll forget in which position you’ve put each bit of data so I think a preferable approach is to collect a map of items instead:

$ MATCH (p:Person)-[:EVENT]->(e)
> RETURN p, COLLECT({eventName:, eventTimestamp: e.timestamp});

| p                    | COLLECT({eventName:, eventTimestamp: e.timestamp})                                    |
| Node[0]{name:"Mark"} | [{eventName -> "Event1", eventTimestamp -> 1234},{eventName -> "Event2", eventTimestamp -> 4567}] |
1 row

During the Clojure Neo4j Hackathon that we ran earlier this week this proved to be a particularly pleasing approach as we could easily destructure the collection of maps in our Clojure code.

Categories: Programming

Stuff The Internet Says On Scalability For September 26th, 2014

Hey, it's HighScalability time:

With tensegrity landing balls we'll be the coolest aliens to ever land on Mars.
  • 6-8Tbps:  Apple’s live video stream; $65B: crowdfunding's contribution to the global economy
  • Quotable Quotes:
    • @bodil: I asked @richhickey and he said "a transducer is just a pre-fused Kleisli arrows in the list monad." #strangeloop
    • @lusis: If you couldn’t handle runit maybe you shouldn’t be f*cking with systemd. You’ll shoot your g*ddamn foot off.
    • Rob Neely: Programming model stability + Regular advances in realized performance = scientific discovery through computation
    • @BenedictEvans: Maybe 5bn PCs have been sold so far. And 17bn mobile phones.
    • @xaprb: "There's no word for the opposite of synergy" @jasonh at #surgecon

  • The SSD Endurance Experiment. The good news: You don't have to worry about writing a lot of data to SSDs anymore. The bad news: When your SSD does die, your data may not be safe. Good discussion on Hacker News.

  • Don't have a lot of money? Don't worry. Being cheap can actually create cool: Teleportation was used in Star Trek because the budget couldn't afford expensive shots of spaceships landing on different planets.

  • Not so crazy after all? Google’s Internet “Loon” Balloons Will Ring the Globe within a Year

  • Before cloud and after cloud as told through a car crash

  • Cluster around dear readers, videos from MesosCon 2014 are now available.

  • From Backbone To React: Our Experience Scaling a Web Application. This seems a lot like the approach Facebook uses in their Android apps. As things get complex move the logic to a top level centralized manager and then distribute changes down to components that are not incrementally changed, they are replaced entirely.

  • Deciding between GAE or EC2? This might help: Running a website: Google App Engine vs. Amazon EC2. AWS is hard to set up. Both give you a lot for free. GAE is not customizable; on AWS you can use whatever languages and software you want. On GAE, once written, your software will scale. If you have a sysadmin or your project requires specific software, go with AWS. If you are small or have a static site, go with GAE. 

  • Mean vs Lamp – How Do They Stack Up? MEAN = MongoDB, Express.js, Angular.js and Node.js, versus LAMP = Linux, Apache, MySQL, and PHP or Python. Why be MEAN? The three most significant reasons are a single language from top to bottom, flexibility in deployment platform, and enhanced speed in data retrieval. However, the switch is not without trade-offs; any existing code will either need to be rewritten in JavaScript or integrated into the new stack in a non-obvious manner.  

  • Free the Web: Sometimes, I feel like blaming money. When money comes into play, people start to fear. They fear losing their money, and they fear losing their visitors. And so they focus on making buttons easily clickable (which inevitably narrows down places where they can go), and they focus on making sites that are safe but predictably usable.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Labor Productivity: An Excerpt From The Metrics Minute

Productivity is number of clothes sewed per hour.


Labor productivity measures the efficiency of the transformation of labor into something of higher value. It is the amount of output per unit of input: an example in manufacturing terms can be expressed as X widgets for every hour worked. Labor productivity (typically just called productivity) is a fairly simple manufacturing concept that is useful in IT. It is a powerful metric, made even more powerful by its simplicity. At its heart, productivity is a measure of the efficiency of a process (or group of processes). That knowledge can be a tool to target and facilitate change. The problem with using productivity in a software environment is the lack of a universally agreed upon output unit of measure.

The lack of a universally agreed upon, tangible unit of output (for example cars in an automobile factory or steel from a steel mill) means that software processes often struggle to define and measure productivity because they’re forced to use esoteric size measures. IT has gone down three paths to solve this problem. The three basic camps to size software include relative measures (e.g. story points), physical measures (e.g. lines of code) and functional measures (e.g. function points). In all cases these measures of software size seek to measure the output of the processes and are defined independently of the input (effort or labor cost).

The standard formula for labor productivity is:

Productivity = output / input

If you were using lines of code for productivity, the equation would be as follows:

Productivity = Lines of Code / Hours to Code the Lines of Code
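As a minimal sketch, the formula translates directly into code (the figures below are hypothetical, purely for illustration):

```python
def labor_productivity(output_units, input_hours):
    """Labor productivity = output / input."""
    if input_hours <= 0:
        raise ValueError("input hours must be positive")
    return output_units / input_hours

# e.g. 4,500 lines of code delivered over 300 hours of effort
loc_per_hour = labor_productivity(4500, 300)  # 15 lines of code per hour
```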

There are numerous factors that can influence productivity like skills, architecture, tools, time compression, programming language and level of quality. Organizations want to determine the impact of these factors on the development environment.

The measurement of productivity has two macro purposes. The first purpose is to determine efficiency. Once productivity is known, a baseline (a line in the sand) can be established and compared to external benchmarks. Comparisons between projects can indicate whether one process is better than another. The ability to make a comparison allows you to use efficiency as a tool in a decision-making process. The number and types of decisions that can be made using this tool are bounded only by your imagination and the granularity of the measurement.

The second macro rationale for measuring productivity is as a basis for estimation. In its simplest form, a parametric estimate can be calculated by multiplying size by a productivity rate.
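A simple parametric estimate can be sketched as follows (the size and rate values are hypothetical examples, not benchmarks):

```python
def parametric_estimate(size_units, hours_per_unit):
    """Simplest parametric estimate: size multiplied by a productivity rate
    expressed as effort per unit of size."""
    return size_units * hours_per_unit

# e.g. 120 function points at a (hypothetical) rate of 8 hours per function point
effort_hours = parametric_estimate(120, 8)  # 960 hours of estimated effort
```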

Issues:
1. The lack of a consistent size measure is the biggest barrier to measuring productivity.
2. Poor time accounting runs a close second. Time accounting issues range from misallocation of time to equating billed time with effort time.
3. Productivity is not a single number; it is most accurately described as a curve, which makes it appear complicated.

Variants or Related Measures:
1. Cost per unit of work
2. Delivery rate
3. Velocity (agile)
4. Efficiency

There are several criticisms of using productivity in the software development and maintenance environment. The most prevalent is the argument that all software projects are different and are therefore better measured by metrics focusing on terminal value rather than by metrics focused on process efficiency (the artisan versus manufacturing discussion). I would suggest that while the result of a software project tends to be different, most of the steps taken are the same, which makes the measure valid; however, productivity should never be confused with value.

A second criticism of the use of productivity is the result of improper deployment. Numerous organizations and consultants promote the use of a single number for productivity. A single number describing the productivity of the typical IT organization does not match reality at the shop-floor level when the metric is used to make comparisons or for estimation. For example, would you expect a web project to have the same productivity rate as a macro assembler project? Would you expect a small project and a very large project to have the same productivity? In either case the projects would take different steps along their life cycles, so we would expect their productivity to differ. I suggest that an organization analyze its data to look for clusters of performance. Typical clusters include client-server projects, technology-specific projects, package implementations and many others. Each will have a statistically different signature. An example of a productivity signature expressed as an equation is shown below:

Labor Productivity = 46.719177 - (0.0935884 * Size) + (0.0001578 * ((Size - 269.857)^2))

(Note: this is an example of a very specialized productivity equation for a set of client-server projects, tailored for design, code and unit testing. The results would not be representative of a typical organization.)
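For readers who want to see the shape of the curve, the example signature can be evaluated directly (again, the coefficients belong to one specific data set and carry no general meaning):

```python
def productivity_signature(size):
    """Evaluate the article's example client-server productivity signature.
    Productivity peaks near the centering constant (269.857) and falls away
    on either side, illustrating why productivity is a curve, not a number."""
    return 46.719177 - 0.0935884 * size + 0.0001578 * (size - 269.857) ** 2

p_small = productivity_signature(100.0)
p_mid = productivity_signature(269.857)
p_large = productivity_signature(1000.0)
```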

A third criticism is that labor productivity is an overly simple metric that does not reflect quality, value or speed. I would suggest that two out of three of these criticisms are correct. Labor productivity does not measure speed (although speed and effort are related) and does not address value (although value and effort may be related). Quality may be a red herring if rework due to defects is incorporated into the productivity equation. In any case, productivity should not be evaluated in a vacuum. Measurement programs should incorporate a palette of metrics to develop a holistic picture of a project or organization.

Categories: Process Management

Tell us about your experience building on Google, and raise money for educational organizations!

Google Code Blog - Thu, 09/25/2014 - 22:16
Here at Google, we always put the user first, and for the Developer Platform team, our developers are our users. We want to create the best development platform and provide the support you need to build world-changing apps, but we need to hear from you, our users, on a regular basis so we can see what’s working and what needs to change.

That's why we're launching our developer survey -- we want to hear about how you are using our APIs and platforms, and what your experience is using our developer products and services. We'll use your responses to identify how we can support you better in your development efforts.
Photo Credit: Google I/O 2014

The survey should only take 10 to 15 minutes of your time, and in addition to helping us improve our products, you can also help raise money to educate children around the globe. For every developer who completes the survey, we will donate $10 USD (up to a maximum of $20,000 USD total) to your choice of one of these six education-focused organizations: Khan Academy, World Fund, Donors Choose, Girl Rising, Raspberry Pi, and Agastya.

The survey is live now and will be live until 11:59PM Pacific Time on October 15, 2014. We are excited to hear what you have to tell us!

Posted by Neel Kshetramade, Program Manager, Developer Platform
Categories: Programming

Quote of the Day

Herding Cats - Glen Alleman - Thu, 09/25/2014 - 17:01

Most problems are self imposed and usually can be traced to lack of discipline. The foremost attribute of successful programs is discipline: Discipline to evolve and proclaim realistic cost goals; discipline to forego appealing but nonessential features; discipline to minimize engineering changes; discipline to do thorough failure analysis; discipline to abide by test protocols; and discipline to persevere in the face of problems that will occur in even the best-managed programs - Norm R. Augustine

Related articles Agile Requires Discipline, In Fact Successful Projects Require Discipline
Categories: Project Management

Empower Business Analysts to Turn Them Into Product Managers

Software Requirements Blog - - Thu, 09/25/2014 - 17:00
The Business Analyst role in most organizations I have worked with is passive and reactive by design. Analysts are given a feature description and tasked with defining the requirements for the same. The analyst then goes off to perform a set of activities and tasks like elicitation, model creation and requirements definition. Eventually, they create […]
Categories: Requirements

Software Development Conferences Forecast September 2014

From the Editor of Methods & Tools - Thu, 09/25/2014 - 09:56
Here is a list of software development related conferences and events on Agile (Scrum, Lean, Kanban), software testing and software quality, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & Tools software development magazine.
Future of Web Apps, September 29-October 1 2014, London, UK
STARWEST, October 12-17 2014, Anaheim, USA (register and save with code SW14MT)
JAX London, October 13-15 2014, London, UK
Pacific Northwest Software Quality Conference, October 20-22 2014, Portland, USA
Agile ...

I’ll Speak for Free If You Write a Review

NOOP.NL - Jurgen Appelo - Thu, 09/25/2014 - 09:17

For book authors, Amazon reviews are very important. Reviews sell books. The difference between 10 and 100 book reviews can mean the difference between obscurity and visibility. 250 reviews? That's celebrity status!

The post I’ll Speak for Free If You Write a Review appeared first on NOOP.NL.

Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Thu, 09/25/2014 - 04:00

A cynic is a man who knows the price of everything and the value of nothing. – Oscar Wilde

The inverse is true as well: we can't know the value of something until we know its cost. Both the cost and the value can be tangible or intangible, but both are needed when deciding between alternatives. This is the very foundation of the microeconomics of business decision making.

Categories: Project Management

Kanban: Work-In-Process


When discussing lean techniques such as Kanban, we often assume that you understand the concept of work-in-process (WIP). That assumption is not always true. I am occasionally asked how WIP is defined in a software development project (or an enhancement or maintenance project) and whether related work products, like test plans or documentation, are part of the WIP for a piece of code.

WIP is work that has entered the production process, but is not yet complete. A slightly more technical definition of WIP is all raw materials or partially finished product that are used or transformed in the production process. In a software development or maintenance project, WIP includes all of the work products or resources required to deliver valuable functionality. In software development and maintenance, WIP for the story will typically include code, documentation, test cases, test results and plans (just to name a few typical work products). All of these work products are WIP until the story is deployed in production.
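The rule that a story stays WIP until every associated work product is done can be sketched in a few lines (the work product names and statuses below are illustrative, not from any real tool):

```python
# A story's WIP spans every work product, not just the code.
WORK_PRODUCTS = ("code", "documentation", "test plan", "test results")

def story_is_wip(status_by_product):
    """A story remains work-in-process until every work product is complete
    (i.e. deployed in production)."""
    return any(status != "complete" for status in status_by_product.values())

story = {product: "complete" for product in WORK_PRODUCTS}
story["documentation"] = "in progress"
# One unfinished work product keeps the whole story in WIP.
```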

In Kanban: An Overview and Introduction we used a simple software development lifecycle to define Kanban and the concept of flow. Using a user story as an example, we can trace the transformation from a card into implemented software. The story begins its WIP journey as soon as it is pulled out of the backlog and completes that journey when it has been deployed to production. The simple Kanban workflow we used in the past is shown below:


Using the example of a simple piece of web functionality, we begin with the user story, "As a SPaMCAST listener I want to see the description of the book by the current interviewee so I can decide to buy the book." As soon as I grab the card and put it into the analysis column, it is WIP. This is true whether I start fleshing out the user story at that exact moment or later in the day. Completing this story requires that I create a link to the book, embed that link in the blog entry for the podcast episode, and then test the link to ensure that it works. The story is complete, and no longer WIP, when the blog entry is in production.

Diving down a little into the detail: to create the book link, I go to the bookseller's site (SPaMCAST uses an Amazon affiliate account so that purchases of the books mentioned on the podcast help offset the expenses of producing it) and select the link for the book. The next step is to paste the link into an Evernote document for documentation purposes. That Evernote document becomes part of the WIP connected to the story. If the link is later found to be wrong, not only does the link in the blog entry need to be changed, the link in the documentation will need to be updated as well. Nothing ceases to be WIP until the link is live on the website and works correctly. As the story progresses, the link is embedded into the podcast's blog entry and tested using a simple test plan (the test plan is part of the WIP related to the story). While the link is being found and put into the blog entry and the Evernote document, the story is being actively worked on and is moving through the process (flow). When the link is embedded and tested, the code is complete and ready to be implemented. Ready to be implemented is not the same as being implemented: in the process for the SPaMCAST podcast, the story is still WIP at this point, but it is no longer being worked on. The flow of the story has been interrupted. If the blog entry or the whole podcast needed to be updated, the link would need to be revalidated and potentially changed. Listeners of the Software Process and Measurement Cast will know that the podcast typically goes live on Sunday at 17:00 Eastern Time. As soon as the link is validated in production, the story moves from WIP to complete.

When discussing WIP in software development, it is easy to fixate on the functional software as the only part of the story when tracing WIP. In the example of the book link user story, the process I use to develop the link and ensure that it works includes code (HTML), documentation and a simple test plan. All of these work products are part of completing the user story. Completion of one work product without the others leaves the story in an incomplete state and still work-in-process.

Categories: Process Management

Allthecooks on Android Wear

Android Developers Blog - Wed, 09/24/2014 - 21:56

By Hoi Lam, Developer Advocate, Android Wear

The best cooking companion since the apron?

Android Wear is designed for serving up useful information at just the right time and in the right place. A neat example of this is Allthecooks Recipes. It gives you the right recipe, right when you need it.

This app is a great illustration of the four creative visions for Android Wear:

  1. Launched automatically
  2. Glanceable
  3. Suggest and demand
  4. Zero or low interaction

Allthecooks also shows what developers can do by combining both the power of the mobile device and the convenience of Android Wear.

Pick the best tool for the job

One particularly well-designed aspect of Allthecooks is their approach to the multi-device experience. Allthecooks lets the user search and browse the different recipes on their Android phone or tablet. When the user is ready, there is a clearly labelled blue action link to send the recipe to the watch.

The integration is natural. Using the on-screen keyboard and the larger screen real estate, Allthecooks is using the best screen to browse through the recipes. On the wearables side, the recipe is synchronised by using the DataApi and is launched automatically, fulfilling one of the key creative visions for Android Wear.

The end result? The mobile / Wear integration is seamless.

Thoughtful navigation

Once the recipe has been sent to the Android Wear device, Allthecooks splits the steps into easily glanceable pages. At the end of that list of steps, it allows the user to jump back to the beginning with a clearly marked button.

This means if you would like to browse through the steps before starting to cook, you can effortlessly get to the beginning again without swiping through all the pages. This is a great example of two other points in the vision: glanceable and zero or low interaction.

A great (cooking) assistant

One of the key ingredients of great cooking is timing, and Allthecooks is always on hand to do all the inputs for you when you are ready to start the clock. A simple tap on the blue "1" and Allthecooks will automatically set the timer to one hour. It is a gentle suggestion that Allthecooks can set the timer for you if you want.

Alternatively, if you want to use your egg timer, why not? It is a small detail but it really demonstrates the last and final element of Android Wear’s vision of suggest and demand. It is an ever ready assistant when the user wants it. At the same time, it is respectful and does not force the user to go down a route that the user does not want.

It’s about the details

Great design is about being user-centric and paying attention to details. Allthecooks could have just shrunk their mobile app for Wear. Instead, the Allthecooks team put a lot of thought into the design and leveraged all four points of the Android Wear creative vision. The end result is that the user gets the best experience out of both their Android mobile device and their Android Wear device. So developers, what will you be cooking next on Android Wear?

For more inspiring Android Wear user experiences, check out the Android Wear collection on Google Play!

Join the discussion on
+Android Developers

Categories: Programming

Neo4j: LOAD CSV – Column is null

Mark Needham - Wed, 09/24/2014 - 21:21

One problem I’ve seen a few people have recently when using Neo4j’s LOAD CSV function is dealing with CSV files that have dodgy hidden characters at the beginning of the header line.

For example, consider an import of this CSV file:

$ cat ~/Downloads/dodgy.csv

We might start by checking which columns it has:

$ load csv with headers from "file:/Users/markneedham/Downloads/dodgy.csv" as line return line;
| line                             |
| {userId -> "1", movieId -> "2"} |
1 row

Looks good so far, but what if we try to return just 'userId'?

$ load csv with headers from "file:/Users/markneedham/Downloads/dodgy.csv" as line return line.userId;
| line.userId |
| <null>      |
1 row

Hmmm it’s null…what about ‘movieId’?

$ load csv with headers from "file:/Users/markneedham/Downloads/dodgy.csv" as line return line.movieId;
| line.movieId |
| "2"          |
1 row

That works fine so immediately we can suspect there are hidden characters at the beginning of the first line of the file.

The easiest way to check if this is the case is open the file using a Hex Editor – I quite like Hex Fiend for the Mac.

If we look at dodgy.csv we’ll see the following:

(Screenshot: hex editor view of dodgy.csv, with the hidden characters before the header highlighted.)

Let’s delete the highlighted characters and try our cypher query again:

$ load csv with headers from "file:/Users/markneedham/Downloads/dodgy.csv" as line return line.userId;
| line.userId |
| "1"         |
1 row

All is well again, but something to keep in mind if you see a LOAD CSV near you behaving badly.

Categories: Programming

New Google Apps Activity API

Google Code Blog - Wed, 09/24/2014 - 18:21
Back in January, Google Drive launched an activity stream that shows you what actions have been taken on files and folders in your Drive. For example, if someone makes edits on a file you’ve shared with them, you’ll see a notification in your activity stream.
Today, we’re introducing the new Google Apps Activity API designed to give developers programmatic access to this activity stream. This standard Google API will allow apps and extensions to access the activity history for individual Drive files as well as descendants of a folder through a RESTful interface.
The Google Apps Activity API will allow developers to build new tools to help users keep better track of what’s happening to specific files and folders they care about. For example, you might use this new API to help teachers see which students in their class are editing a file or, come tax season, you might want to create a quick script to audit the sharing of items in your financial information folder.
Check out the documentation to get started. We can't wait to see what you build!
Posted by Justin Hicks, Software Engineer, Technical Lead for Google Apps Activity API
Categories: Programming

5 Tips for Scaling NoSQL Databases: Don't Trust Assumptions – Test, Test, Test!

Alex Bordei, product manager for Bigstep’s Full Metal Cloud, in Scaling NoSQL databases: 5 tips for increasing performance, shares a nice set of lessons he's learned about how NoSQL databases scale:

  • Never assume linearity in scaling. Hardware prices grow exponentially as the specs increase, but not all software can take full advantage of all that power. So you may be paying for hardware your database can't use. Find the sweet spot for price and hardware capabilities.
  • Tests speak louder than specs. Don't trust vendor documentation. It's cheap to spin up new instances so test the specs for yourself.
  • Mind the details: Memory & CPU numbers matter. For in-memory databases the specs on your memory modules matter. Faster memory means faster performance. Same for CPU frequencies. Pay attention to what your money is buying.
  • Do not neglect network latency. Paying for fast memory and fast CPU won't do a lot of good if your network is slow. 
  • Avoid virtualization with NoSQL databases. Virtualization can exact a 20-200% performance penalty. Noisy neighbors also help ruin the neighborhood. Up to 400% performance gains can be seen by switching away from virtualization and adopting bare metal clouds.

Lots of good advice. Each of these points is discussed in more detail in the original article, which is well worth reading.


Categories: Architecture