
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Scaling Agile? Keep it simple, scaler!

Xebia Blog - Wed, 04/22/2015 - 08:59

The promise of Agile is short cycled value delivery, with the ability to adapt. This is achieved by focusing on the people that create value and optimising the way they work.

Scrum provides a limited set of roles and artefacts and a simple process framework that helps teams implement the Agile values and adhere to the Agile principles.

I have supported many organisations in adopting Agile as their mindset and culture. What puzzles me is that many larger organisations seem to think that Scrum is not enough in their context and they feel the need for something bigger and more complicated. As a result of this, more and more Agile transformations start with scaling Agile to fit their context and then try to make things less complex.

While the various scaling frameworks for Agile contain many useful and powerful tools to apply in situations that require them, applying a complete Agile scaling framework to an organisation from the get-go often prevents the really needed culture and mindset change.

With a little creativity, the organisational structure already in place can easily be mapped onto the structure suggested by many scaling frameworks. Most frameworks explain the behaviour needed in an Agile environment, but these explanations are often ignored or misinterpreted. Due to (lengthy) descriptions of roles and responsibilities, people tend to stop thinking for themselves about what would work best and start to focus on who plays which role and what is someone else’s responsibility. There is a tendency to focus on the ceremonies rather than on the value the team(s) should deliver with regard to the product or service.

My take on adopting Agile would be to start simple. Use an Agile framework that prescribes very little, like Scrum or Kanban, in order to provoke learning and experimentation. From this learning and experimentation will come changes in the organisational structure that best support the Agile Values and Principles. People will find or create positions where their added value has the most impact on the value that the organisation creates and, when needed, will dismantle positions and structures that prevent this value from being created.

Another effect of starting simple is that people will not feel limited by rules and regulations, and can therefore apply their creativity, experience and capabilities more easily. Often, fewer rules create more energy.

As others have also said, some products or value are difficult to create with simple systems. As observed by Dave Snowden and captured in his Cynefin framework, too much simplicity can result in chaos when it is applied to complex systems. To create value in more complex systems, use the least amount of tooling from the scaling frameworks needed to prevent chaos, and leverage the benefits that simpler systems provide. Solutions to problems in complex systems are best found by experiencing the complexity and discovering what works best to cope with it. Trying to prevent problems from popping up at all might paralyse an organisation too much to deliver the most value possible.

So: Focus on delivering value in short cycles, adapt when needed and add the least amount of tools and/or process to optimise communication and value delivery.

Demonstrations in Distributed Teams

The demonstration needs to work for everyone, no matter where in the world you are.

Demonstrations are an important tool for teams to gather feedback to shape the value they deliver.  Demonstrations provide a platform for the team to show the stories that have been completed so the stakeholders can interact with the solution.  The feedback a team receives not only ensures that the delivered solution meets the need, but also generates new insights and lets the team know they are on track.  Demonstrations should provide value to everyone involved. Given the breadth of participation in a demo, the chance that the meeting will be distributed is even higher.  Techniques that support distributed demonstrations include:

  1. More written documentation: Teams, especially long-established teams, often develop shorthand expressions that convey meaning within the team but fall short with a broader audience. Written communication can be more effective at conveying meaning when body language can’t be read and eye contact can’t be made. Publish an agenda to guide the meeting; this will help everyone stay on track, or get back on track when the phone line drops. Capture comments and ideas on paper where everyone can see them.  If using flip charts, use webcams to share the written notes.  Some collaboration tools provide a notepad feature that stays resident on the screen and can be used to capture notes that can be referenced by all sites.
  2. Prepare and practice the demo. The risk that something will go wrong with the logistics of the meeting increases exponentially with the number of sites involved.  Have a plan for the demo and then practice the plan to reduce the risk that you have forgotten something.  Practice will not eliminate all risk of an unforeseen problem, but it will help.
  3. Replicate the demo in multiple locations. In scenarios with multiple locations with large or important stakeholder populations, consider running separate demonstrations.  Separate demonstrations will lose some of the interaction between sites and add some overhead but will reduce the logistical complications.
  4. Record the demo. Some sites may not be able to participate in the demo live due to their time zones or other limitations. Recording the demo lets stakeholders that could not participate in the live demo hear and see what happened and provide feedback, albeit asynchronously.  Recording the demo will also give the team the ability to use the recording as documentation and reference material, which I strongly recommend.
  5. Check the network(s)! Bandwidth is generally not your friend. Make sure the network at each location can support the tools you are going to use (video, audio or other collaboration tools) and then have a fallback plan. Fallback plans should be as low tech as practical.  One team I observed actually had to fall back to scribes in two locations who kept notes on flip charts by mirroring each other (cell phones, Bluetooth headphones and whispering were employed) when the audio service they were using went down.

Demonstrations typically involve stakeholders, management and others.  The team needs feedback, but also needs to ensure a successful demo to maintain credibility within the organization.  In order to get the most effective feedback in a demo, everyone needs to be able to hear, see and get involved.  Distributed demos need to focus on facilitating interaction even more than in-person demos do; otherwise they risk being ineffective.


Categories: Process Management

The Myths of Business Model Innovation

Business model innovation has a couple of myths.

One myth is that business model innovation takes big thinking.  Another myth about business model innovation is that technology is the answer.

In the book, The Business Model Navigator, Oliver Gassmann, Karolin Frankenberger, and Michaela Csik share a couple of myths that need busting so that more people can actually achieve business model innovation.

The "Think Big" Myth

Business model innovation does not need to be “big bang.”  It can be incremental.  Incremental changes can create more options and more opportunities for serendipity.

Via The Business Model Navigator:

“'Business model innovations are always radical and new to the world.'   Most people associate new business models with the giant leaps taken by Internet companies.  The fact is that business model innovation, in the same way as product innovation, can be incremental.  For instance, Netflix's business model innovation of mailing DVDs to customers was undoubtedly incremental and yet brought great success to the company.  The Internet opened up new avenues for Netflix that allowed the company to steadily evolve into an online streaming service provider.”

The Technology Myth

It’s not technology for technology’s sake.  It’s applying technology to revolutionize a business that creates the business model innovation.

Via The Business Model Navigator:

“'Every business model innovation is based on a fascinating new technology that inspires new products.'  The fact is that while new technologies can indeed drive new business models, they are often generic in nature.  Where creativity comes in is in applying them to revolutionize a business.  It is the business application and the specific use of the technology which makes the difference.  Technology for technology's sake is the number one flop factor in innovation projects.  The truly revolutionary act is that of uncovering the economic potential of a new technology.”

If you want to get started with business model innovation, don’t just go for the home run.

You Might Also Like

Cloud Changes the Game from Deployment to Adoption

Cognizant on the Next Generation Enterprise

Drive Digital Transformation by Re-Imagining Operations

Drive Digital Transformation by Re-envisioning Your Customer Experience

The Future of Jobs

Categories: Architecture, Programming

A final farewell to ClientLogin, OAuth 1.0 (3LO), AuthSub, and OpenID 2.0

Google Code Blog - Tue, 04/21/2015 - 17:24

Posted by William Denniss, Product Manager, Identity and Authentication

Support for ClientLogin, OAuth 1.0 (3LO¹), AuthSub, and OpenID 2.0 has ended, and the shutdown process has begun. Clients attempting to use these services will begin to fail and must be migrated to OAuth 2.0 or OpenID Connect immediately.

To migrate a sign-in system, the easiest path is to use the Google Sign-in SDKs (see the migration documentation). Google Sign-in is built on top of our standards-based OAuth 2.0 and OpenID Connect infrastructure and provides a single interface for authentication and authorization flows on Web, Android and iOS. To migrate server API use, we recommend using one of our OAuth 2.0 client libraries.

We are moving away from legacy authentication protocols, focusing our support on OpenID Connect and OAuth 2.0. These modern open standards enhance the security of Google accounts, and are generally easier for developers to integrate with.

¹ 3LO stands for 3-legged OAuth, where there's an end-user that provides consent. In contrast, 2-legged OAuth (2LO) corresponds to enterprise authorization scenarios, such as organization-wide policies controlling access. Both OAuth1 3LO and 2LO flows are deprecated, but this announcement is specific to OAuth1 3LO.

Categories: Programming

How to Avoid the "Yesterday's Weather" Estimating Problem

Herding Cats - Glen Alleman - Tue, 04/21/2015 - 15:49

One suggestion from the #NoEstimates community is the use of empirical data of past performance. This is many times called yesterday's weather. First, let's make sure we're not using just the averages from yesterday's weather. Even adding the variance to that small sample of past performance can lead to very naive outcomes.

We need to do some actual statistics on that time series. A simple set of R commands will produce the chart below from the time series of past performance data.
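As an illustration only - this is a hedged sketch, not the actual commands behind the chart - here is what such an R session might look like, using the forecast package on a hypothetical weekly throughput series:

library(forecast)

# Hypothetical past performance: weekly throughput (e.g. completed story points)
throughput = ts(c(21, 18, 25, 22, 19, 27, 24, 20, 23, 26,
                  22, 25, 21, 28, 24, 23, 26, 22, 25, 27))

fit = auto.arima(throughput)   # fit a simple time series model to the past data
fc  = forecast(fit, h = 6)     # forecast the next 6 periods, with prediction intervals
plot(fc)                       # history plus the forecast and its uncertainty bands
summary(fc)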

Forecast

 But that doesn't really help without some more work.

  • Is the future really like the past - are the work products and the actual work performed in the past replicated in the future? If so, this sounds like a simple project: just turn out features that all look alike.
  • Are there any interdependencies that grow in complexity as the project moves forward? This is the integration and test problem, then the system-of-systems integration and test problem. Again, simple projects don't usually have this problem; more complex projects do.
  • What about those pesky emerging requirements? This is a favorite idea of agile (and correctly so), but simple past performance is not going to forecast the needed performance in the presence of emerging requirements.
  • Then there are all the externalities of project work - where are those captured in the sample of past performance?
  • 'All big projects have little projects inside them' is a common phrase. Except that the collection of little projects needs to be integrated, tuned, tested, verified, and validated so that when all the parts are assembled they actually do what the customer wants.

Getting Out of the Yesterday's Weather Dilemma

Let's use the chart below to speak about some sources of estimates NOT based on simple small samples of yesterday's weather. This is a Master Plan for a non-trivial project to integrate a half dozen or so legacy enterprise systems with a new health insurance ERP system for an integrated payer/provider solution:

Capabilities Flow

  • Reference Class Forecasting for each class of work product.
    • As the project moves left to right in time the classes of product and the related work likely change. 
    • Reference classes for each of these movements through increasing maturity, and increasing complexity from integration interactions, need to be used to estimate not only the current work but also the next round of work.
    • In the chart above, work on the left is planned with some level of confidence, because it is work in hand. Work on the right is in the future, so a coarser estimate is all that is needed for the moment.
    • This is a planning package notion used in space and defense. Only plan in detail what you understand in detail.
  • Interdependencies Modeling in MCS
    • On any non-trivial project there are interdependencies
    • The notion of INVEST needs to be tested 
      • Independent - not usually the case on enterprise projects
      • Negotiable - usually not, since the ERP system provides the core capability to do business. It would be illogical to have half the procurement system: we can issue purchase orders and receive goods, but we can't pay for them until we get the Accounts Payable system. We need both at the same time. Not negotiable.
      • Valuable - yep, why are we doing this if it's not valuable to the business? This is a strawman used by low-business-maturity projects.
      • Estimable - 'to a good approximation' is what the advice tells us. The term 'good' needs a unit of measure.
      • Small - a domain-dependent measure. Small to an enterprise IT project may be huge to a sole-contributor game developer.
      • Testable - Yep, and verifiable, and validatable, and secure, and robust, and fault tolerant, and meets all performance requirements.
  • Margin - protects dates, cost, and technical performance from irreducible uncertainty. Irreducible means nothing can be done about the uncertainties; it is not the lack of knowledge found in reducible (epistemic) uncertainty. Irreducible uncertainty is aleatory - the natural randomness in the underlying processes that creates the uncertainty. When we are estimating in the presence of aleatory uncertainty, we must account for it. This is why using the average of a time series for making a decision about possible future outcomes will always lead to disappointment.
    • First, we should always use the Most Likely value of the time series, not the Average of the time series.
    • The Most Likely value - the Mode - is the number that occurs most often of all the values that have occurred in the past. This should make complete sense when we ask which value will appear next: the value that has appeared most often in the past. (A short R sketch of the difference follows this list.)
    • The Average of the two numbers 1 and 99 is 50. The average of the two numbers 49 and 51 is also 50. Be careful with averages in the absence of knowing the variance.
  • Risk retirement - epistemic uncertainty creates risks that can be retired. This means spending money and time. So when we're looking at past performance in an attempt to estimate future performance (yesterday's weather), we must determine what kind of uncertainties there are in the future and what kind of uncertainties we encountered in the past.
    • Were they, and are they still, reducible or irreducible?
    • Did the performance in the past contain irreducible uncertainties, baked into the numbers, that we did not recognize?
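As a quick illustration of the Most Likely value (the mode) versus the average, here is a minimal R sketch - the durations are made up, and since base R has no built-in mode function one is defined inline:

durations = c(5, 7, 8, 8, 9, 8, 21, 8, 6, 30)   # hypothetical task durations in days

mode_of = function(x) {
  as.numeric(names(which.max(table(x))))         # the value that occurs most often
}

> mean(durations)     # the average is dragged around by the outliers
[1] 11
> mode_of(durations)  # the Most Likely value
[1] 8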

This brings up a critical issue with all estimates. Did the numbers produced from past performance meet the expected values, or were they just the numbers we observed? Taking the observed numbers and using them for forecasting the future is an Open Loop control system. What SHOULD the numbers have been to meet our goals? What SHOULD the goal have been? If we don't know that, then there is no baseline to compare the past performance against to see if it will be able to meet the future goal.

I'll say this again - THIS IS OPEN LOOP control, NOT CLOSED LOOP. No amount of dancing around will get over this; it's a simple control systems principle, found here: Open and Closed Loop Project Controls.

  • Measures of physical percent complete to forecast future performance with cost, schedule, and technical performance measures - once we have the notion of Closed Loop Control, have constructed a steering target, and can capture actuals against plan, we need to define measures that are meaningful to the decision makers. Agile does a good job of forcing working product to appear often. The assessment of Physical Percent Complete, though, needs to define what that working software is supposed to do in support of the business plan.
  • Measures of Effectiveness - one very good measure is Effectiveness. Does the software provide an effective solution to the problem? This raises the question, or questions: what is the problem, and what would an effective solution look like were it to show up?
    • MOE's are operational measures of success that are closely related to the achievements of the mission or operational objectives evaluated in the operational environment, under a specific set of conditions.
  • Measures of Performance - the companion measures to the Measures of Effectiveness.
    • MOP's characterize physical or functional attributes relating to the system operation, measured or estimated under specific conditions.
  • Along with these two measures are Technical Performance Measures
    • TPM's are attributes that determine how well a system or system element is satisfying or expected to satisfy a technical requirement or goal.
  • And finally there are Key Performance Parameters
    • KPPs represent the capabilities and characteristics so significant that failure to meet them can be cause for reevaluation, reassessment, or termination of the program.

The connections between these measures are shown below.

[Figure: the connections between Measures of Effectiveness, Measures of Performance, Technical Performance Measures, and Key Performance Parameters]

With these measures and tools for making estimates of the future - forecasts - using statistical methods, we can use yesterday's weather, tomorrow's models and related reference classes, and the desired MOE's, MOP's, KPP's, and TPM's to construct a credible estimate of what needs to happen, then measure what is happening, close the loop with an error signal, and take corrective action to stay on track toward our goal.

This all sounds simple in principle, but in practice of course it's not. It's hard work. But when you assess the value at risk to be outside the tolerance range where the customer is unwilling to risk their investment, we need tools and processes to actually control the project.

Related articles Hope is not a Strategy Incremental Delivery of Features May Not Be Desirable
Categories: Project Management

Doing the Math

Herding Cats - Glen Alleman - Tue, 04/21/2015 - 15:09

In the business of building software-intensive systems, estimating, performance forecasting and management - closed loop control in the presence of uncertainty for all variables - are the foundation needed for increasing the probability of success.

This means math is involved in planning, estimating, measuring,  analysis, and corrective actions to Keep the Program Green.

When we have past performance data, here's one approach...

And the details of the math are in the conference paper.
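As one hedged example of doing the math with past performance data - a sketch, not the approach from the conference paper - here is a simple Monte Carlo bootstrap in R of the time to finish the remaining work:

set.seed(42)                                     # reproducible illustration
throughput = c(21, 18, 25, 22, 19, 27, 24, 20)   # hypothetical past weekly throughput
remaining  = 240                                 # hypothetical remaining scope (story points)

weeks_to_finish = replicate(10000, {
  done = 0; weeks = 0
  while (done < remaining) {
    done  = done + sample(throughput, 1, replace = TRUE)   # draw a week from the past
    weeks = weeks + 1
  }
  weeks
})

quantile(weeks_to_finish, c(0.50, 0.80, 0.95))   # e.g. the week count we could commit to with 80% confidence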

  Related articles Hope is not a Strategy How to Avoid the "Yesterday's Weather" Estimating Problem Critical Success Factors of IT Forecasting
Categories: Project Management

Thinking About Estimation

I have an article up on agileconnection.com. It’s called How Do Your Estimates Provide Value?

I’ve said before that We Need Planning; Do We Need Estimation? Sometimes we need estimates. Sometimes we don’t. That’s why I wrote Predicting the Unpredictable: Pragmatic Approaches for Estimating Cost or Schedule.

I’m not judging your estimates. I want you to consider how you use estimates.

BTW, if you have an article you would like to write for agileconnection.com, email it to me. I would love to provide you a place for your agile writing.

Categories: Project Management

ScrumMaster – Full Time or Not?

Mike Cohn's Blog - Tue, 04/21/2015 - 15:00

The following was originally published in Mike Cohn's monthly newsletter. If you like what you're reading, sign up to have this content delivered to your inbox weeks before it's posted on the blog, here.

I’ve been in some debates recently about whether the ScrumMaster should be full time. Many of the debates have been frustrating because they devolved into whether a team was better off with a full-time ScrumMaster or not.

I’ll be very clear on the issue: Of course, absolutely, positively, no doubt about it a team is better off with a full-time ScrumMaster.

But, a team is also better off with a full-time, 100 percent dedicated barista. Yes, that’s right: Your team would be more productive, quality would be higher, and you’d have more satisfied customers, if you had a full-time barista on your team.

What would a full-time barista do? Most of the time, the barista would probably just sit there waiting for someone to need coffee. But whenever someone was thirsty or under-caffeinated, the barista could spring into action.

The barista could probably track metrics to predict what time of day team members were most likely to want drinks, and have their drinks prepared for them in advance.

Is all this economically justified? I doubt it. But I am 100 percent sure a team would be more productive if they didn’t have to pour their own coffee. Is a team more productive when it has a full-time ScrumMaster? Absolutely. Is it always economically justified? No.

What I found baffling while debating this issue was that teams who could not justify a full-time ScrumMaster were not really being left a viable Scrum option. Those taking the “100 percent or nothing” approach were saying that if you don’t have a dedicated ScrumMaster, don’t do Scrum. That’s wrong.

A dedicated ScrumMaster is great, but it is not economically justifiable in all cases. When it’s not, that should not rule out the use of Scrum.

And a note: I am not saying that one of the duties of the ScrumMaster is to fetch coffee for the team. It’s just an exaggeration of a role that would make any team more productive.

Swift optional chaining and method argument evaluation

Xebia Blog - Tue, 04/21/2015 - 08:21

Everyone that has been programming in Swift knows that you can call a method on an optional object using a question mark (?). This is called optional chaining. But what if that method takes arguments whose values you need to get from the same optional? Can you safely force unwrap those values?

A common use case of this is a UIViewController that runs some code within a closure after some delay or after a network call. We want to keep a weak reference to self within that closure because we want to be sure that we don't create reference cycles in case the closure would be retained. Besides, we (usually) don't need to run that piece of code within the closure in case the view controller got dismissed before that closure got executed.

Here is a simplified example:

class ViewController: UIViewController {

    let finishedMessage = "Network call has finished"
    let messageLabel = UILabel()

    override func viewDidLoad() {
        super.viewDidLoad()

        someNetworkCall { [weak self] in
            self?.finished(self?.finishedMessage)
        }
    }

    func finished(message: String) {
        messageLabel.text = message
    }
}

Here we call the someNetworkCall function that takes a () -> () closure as argument. Once the network call is finished it will call that closure. Inside the closure, we would like to change the text of our label to a finished message. Unfortunately, the code above will not compile. That's because the finished method takes a non-optional String as parameter and not an optional, which is returned by self?.finishedMessage.

I used to fix such a problem by wrapping the code in an if let statement:

if let this = self {
    this.finished(this.finishedMessage)
}

This works quite well, especially when there are multiple lines of code that you want to skip if self became nil (e.g. the view controller got dismissed and deallocated). But I always wondered if it was safe to force unwrap the method arguments even when self would be nil:

self?.finished(self!.finishedMessage)

The question here is: does Swift evaluate method arguments even if it does not call the method?

I went through the Swift Programming Guide to find any information on this but couldn't find an answer. Luckily it's not hard to find out.

Let's add a method that returns the finishedMessage and prints a message, and then call the finished method on an object that we know for sure is nil.

override func viewDidLoad() {
    super.viewDidLoad()

    let vc: ViewController? = nil
    vc?.finished(printAndGetFinishedMessage())
}

func printAndGetFinishedMessage() -> String {
    println("Getting message")
    return finishedMessage
}

When we run this, we see that nothing gets printed to the console. So now we know that Swift will not evaluate the method arguments when the method is not invoked. Therefore we can change our original code to the following:

someNetworkCall { [weak self] in
    self?.finished(self!.finishedMessage)
}

R: Numeric keys in the nested list/dictionary

Mark Needham - Tue, 04/21/2015 - 06:59

Last week I described how I’ve been creating fake dictionaries in R using lists and I found myself using the same structure while solving the dice problem in Think Bayes.

The dice problem is described as follows:

Suppose I have a box of dice that contains a 4-sided die, a 6-sided die, an 8-sided die, a 12-sided die, and a 20-sided die. If you have ever played Dungeons & Dragons, you know what I am talking about.

Suppose I select a die from the box at random, roll it, and get a 6.
What is the probability that I rolled each die?

Here’s a simple example of the nested list that I started with:

dice = c(4,6,8,12,20)
priors = rep(1.0 / length(dice), length(dice))
names(priors) = dice
 
> priors
  4   6   8  12  20 
0.2 0.2 0.2 0.2 0.2

I wanted to retrieve the prior for the 8 dice which I tried to do like this:

> priors[8]
<NA> 
  NA

That comes back with ‘NA’ because we’re actually looking for the numeric index 8 rather than the item in that column.

As far as I understand if we want to look up values by name we have to use a string so I tweaked the code to store names as strings:

dice = c(4,6,8,12,20)
priors = rep(1.0 / length(dice), length(dice))
names(priors) = sapply(dice, paste)
 
> priors["8"]
  8 
0.2

That works much better but with some experimentation I realised I didn’t even need to run ‘dice’ through the sapply function, it already works the way it was:

dice = c(4,6,8,12,20)
priors = rep(1.0 / length(dice), length(dice))
names(priors) = dice
 
> priors["8"]
  8 
0.2

Now that we’ve got that working we can write a likelihood function which takes in observed dice rolls and tells us how likely it was that we rolled each type of dice. We start simple by copying the above code into a function:

likelihoods = function(names, observations) {
  scores = rep(1.0 / length(names), length(names))  
  names(scores) = names
 
  return(scores)
}
 
dice = c(4,6,8,12,20)
l1 = likelihoods(dice, c(6))
 
> l1
  4   6   8  12  20 
0.2 0.2 0.2 0.2 0.2

Next we’ll update the score for a particular dice to 0 if one of the observed rolls is greater than the dice’s maximum score:

likelihoods = function(names, observations) {
  scores = rep(1.0 / length(names), length(names))  
  names(scores) = names
 
  for(name in names) {
      for(observation in observations) {
        if(name < observation) {
          scores[paste(name)]  = 0       
        }
      }
    }  
  return(scores)
}
 
dice = c(4,6,8,12,20)
l1 = likelihoods(dice, c(6))
 
> l1
  4   6   8  12  20 
0.0 0.2 0.2 0.2 0.2

The 4-sided die has been ruled out since we've rolled a 6! Now let's put in the else condition, which updates our score by the probability that we got the observed roll with each of the valid dice, i.e. we have a 1/20 chance of rolling any number with the 20-sided die, a 1/8 chance with the 8-sided die, etc.

likelihoods = function(names, observations) {
  scores = rep(1.0 / length(names), length(names))  
  names(scores) = names
 
  for(name in names) {
      for(observation in observations) {
        if(name < observation) {
          scores[paste(name)]  = 0
        } else {
          scores[paste(name)] = scores[paste(name)] *  (1.0 / name)
        }        
      }
    }  
  return(scores)
}
 
dice = c(4,6,8,12,20)
l1 = likelihoods(dice, c(6))
 
> l1
         4          6          8         12         20 
0.00000000 0.03333333 0.02500000 0.01666667 0.01000000

And finally let’s normalise those scores so they’re a bit more readable:

> l1 / sum(l1)
        4         6         8        12        20 
0.0000000 0.3921569 0.2941176 0.1960784 0.1176471
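With uniform priors these normalised scores already answer the original question, but for completeness (this step is not shown in the post above) the explicit Bayes update multiplies the priors defined at the start by the likelihoods and normalises - and gives the same numbers:

posterior = (priors * l1) / sum(priors * l1)
 
> posterior
        4         6         8        12        20 
0.0000000 0.3921569 0.2941176 0.1960784 0.1176471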
Categories: Programming

Approximating for Improved Understanding

Herding Cats - Glen Alleman - Mon, 04/20/2015 - 22:05

The world of projects, project management, and the products or services produced by those projects is uncertain. It's never certain. Seeking certainty is not only naive, it's simply not possible.

Making decisions in the presence of this uncertainty is part of our job as project managers, engineers, and developers, on behalf of those paying for our work.

It's also the job of the business, whose money is being spent on the projects to produce tangible value in exchange for that money.

From the introduction of the book to the left...

Science and engineering, our modern ways of understanding and altering the world, are said to be about accuracy and precision. Yet we best master the complexity of our world by cultivating insight rather than precision. We need insight because our minds are but a small part of the world. An insight unifies fragments of knowledge into a compact picture that fits in our minds. But precision can overflow our mental registers, washing away the understanding brought by insight. This book shows you how to build insight and understanding first, so that you do not drown in complexity.

So what does this mean for our project world?

  • The future is uncertain. It is always uncertain. It can't be anything but uncertain. Assuming certainty is a waste of time. Managing in the presence of uncertainty is unavoidable. To do this we must estimate. This is unavoidable. To suggest otherwise willfully ignores the basis of all management practices.
  • This uncertainty creates risk to our project: risk to cost, schedule, and the delivered capabilities of the project or product development. To manage with a closed loop process, estimates are needed. This is unavoidable as well.
  • Uncertainty is either reducible or irreducible
    • Reducible uncertainty can be reduced with new information. We can buy down this uncertainty.
    • Irreducible uncertainty - the natural variations in what we do - can only be handled with margin.

In both these conditions we need to get organized in order to address the  underlying uncertainties. We need to put structure in place in some manner. Decomposing the work is a common way in the project domain. From a Work Breakdown Structure to simple sticky notes on the wall, breaking problems down into smaller parts is a known successful way to address a problem. 

With this decomposition, now comes the hard part. Making decisions in the presence of this uncertainty.

Probabilistic Reasoning

Reasoning about things that are uncertain is done with probability and statistics. Probability is a degree of belief. 

I believe we have a 80% probability of completing on or before the due date for the migration of SQL Server 2008 to SQL Server 2012.

Why do we have this belief? Is it based on our knowledge from past experience? Is this knowledge sufficient to establish that 80% confidence?

  • Do we have some external model of the work effort needed to perform the task?
  • Is there a parametric model of similar work that can be applied to this work?
  • Could we decompose the work to smaller chunks that could then be used to model the larger set of tasks?
  • Could I conduct some experiments to improve my knowledge?
  • Could I build a model from intuition that could be used to test the limits of my confidence?

The answers to each of these inform our belief. One way to build such a model is sketched below.
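For example - a minimal R sketch with made-up numbers, not a real plan - we could decompose the migration into chunks, give each a (low, most likely, high) three-point estimate, and simulate the total to see what degree of belief an "on or before the due date" claim actually carries:

# Inverse-CDF sampler for a triangular distribution (low a, mode m, high b)
rtriangle = function(n, a, m, b) {
  u = runif(n)
  f = (m - a) / (b - a)
  ifelse(u < f,
         a + sqrt(u * (b - a) * (m - a)),
         b - sqrt((1 - u) * (b - a) * (b - m)))
}

chunks = list(c(2, 3, 6), c(1, 2, 4), c(3, 5, 10), c(2, 4, 8))   # hypothetical (low, likely, high) days

totals = replicate(10000,
                   sum(sapply(chunks, function(e) rtriangle(1, e[1], e[2], e[3]))))

due = 18                          # hypothetical committed due date, in days from now
p_on_time = mean(totals <= due)   # probability (degree of belief) of finishing on or before the due date
quantile(totals, 0.80)            # the duration we could commit to with 80% confidence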

Chaos, Complexity, Complex, Structured?

A well known agile thought leader made a statement today

I support total chaos in every domain

This is unlikely to result in sound business decisions in the presence of uncertainty. Although there may be domains where chaos might produce usable results, when we need some degree of confidence that the money being spent will produce the needed capabilities - on or before the need date, at or below the budget needed to be profitable, and with the collection of all the capabilities needed to accomplish the mission or meet the business case - we're going to need to know how to manage our work to achieve those outcomes.

So let's assume - with a high degree of confidence - that we need to manage in the presence of uncertainty, but that we have little interest in encouraging chaos. Here's one approach.

So In The End

Since all the world's a set of statistical processes, producing probabilistic outcomes, which in turn create risk to any expected results when not addressed properly - the notion that decisions can be made in the presence of this condition without estimates can only be explained by willful ignorance of the basic facts of the physics of project work.

  Related articles The Difference Between Accuracy and Precision Making Decisions in the Presence of Uncertainty Managing in Presence of Uncertainty Herding Cats: Risk Management is How Adults Manage Projects Herding Cats: Decision Analysis and Software Project Management Five Estimating Pathologies and Their Corrective Actions
Categories: Project Management

Estimates

Herding Cats - Glen Alleman - Mon, 04/20/2015 - 22:00

Estimation and measurement of project attributes are critical success factors for designing, building, modifying, and operating products and services. †

Good estimates are the key to project success. Estimates provide information to the decision makers to assess adherence to performance specifications and plans, make decisions, revise designs and plans, and improve future estimates and processes.

We use estimates and measurements to evaluate the feasibility and affordability of products being built, choose between alternative designs, assess risk, and support business decisions. Engineers compare estimates of technical baselines with observed performance to decide if the product meets its functional and performance requirements. These are used by management to control processes and detect compliance problems. Process managers use capability baselines to improve production processes.

Developers, engineers, and planners estimate the resources needed to develop, maintain, enhance and deploy products. Project planners use estimates for staffing and facilities. Planners and managers use estimates of resources to determine project cost and schedule and to prepare budgets and plans.

Managers compare estimates - cost and schedule baselines - with actual values to determine deviations from plan and to understand the root causes of those deviations so they can take corrective actions. Estimates of product, project, and process characteristics provide baselines to assess progress during the project.

Bad estimates affect all participants in the project or product development process. Incomplete and inaccurate estimates mean inadequate time and money  available for increasing the probability of project success.

The Nature of Estimation

The verb estimate means to produce a statement of the approximate value of some quantity that describes or characterizes an object. The noun estimate refers to the value produced by the verb. The object can be an artifact - software, hardware, documents - or an activity - planning, development, testing, or process.

We make estimates when we cannot directly measure the value of a quantity because:

  • The object is inaccessible
  • The object does not exist yet
  • The measurement would be too expensive

Reasons to Estimate and Measure Size, Cost and Schedule

  • Evaluate feasibility of requirements.
  • Analyze alternative designs and implementations.
  • Determine required capacity and speed of produced results.
  • Evaluate performance - accuracy, speed, reliability, availability and other ...ilities.
  • Identify and assess technical risks.
  • Provide technical baselines for tracking and guiding.

Reasons to Estimate Effort, Cost, and Schedule

  • Determine project feasibility in terms of cost and schedule.
  • Identify and assess risks.
  • Negotiate achievable commitments.
  • Prepare realistic plans and budgets.
  • Evaluate business value - cost versus benefit.
  • Provide cost and schedule baselines for tracking and guiding.

Reasons to Estimate Capability and Performance

  • Predict resource consumption and efficiency.
  • Establish norms for expected performance.
  • Identify opportunities for improvement.

There are many sources of data for making estimates, some reliable, some not. Estimates based on human subject matter experts have been shown to be the least reliable, accurate and precise, due to the biases involved in the human process of developing the estimate. Estimates based on past performance, while useful, must be adjusted for the statistical behaviours of the past and the uncertainty of the future.

If the estimate is misused in any way, this is not the fault of the estimate - both noun and verb - but simply bad management. Fix that first, then apply proper estimating processes.

If your project or product development effort does none of these activities or has no need for information on which to make a decision, then estimating is likely a waste of time.

But before deciding that estimates are the smell of dysfunction, with no root cause identified for corrective action, check first with those paying your salary to see what they have to say about your desire to spend their money in the presence of uncertainty without an estimate.

† This post is extracted from Estimating Software-Intensive Systems: Projects, Products and Processes, Dr. Richard Stutzke, Addison Wesley. This book is a mandatory read for anyone working in a software domain on any project that is mission critical. This means if you need to show up on or before the need date, at or below your planned cost, with the needed capabilities - Key Performance Parameters, without which the project will get cancelled - then you're going to need to estimate all the parameters of your project. If your project doesn't need to show up on time, stay on budget, or can provide less than the needed capabilities, there is no need to estimate. Just spend your customer's money; she'll tell you when to stop.

Related articles Capability Maturity Levels and Implications on Software Estimating Incremental Delivery of Features May Not Be Desirable Capabilities Based Planning First Then Requirements
Categories: Project Management

Root Cause Analysis

Herding Cats - Glen Alleman - Mon, 04/20/2015 - 16:07

Root Cause Analysis is a means to answer why we keep seeing the same problems over and over again. When we treat the symptoms, the root cause remains.

In Lean there is a supporting process, the 5S's. 5S is a workplace organization method that uses a list of five Japanese words: seiri, seiton, seiso, seiketsu, and shitsuke. The list describes how to organize a work place for efficiency and effectiveness by identifying and storing the items used, maintaining the areas and items, and sustaining the new order. The decision making process usually comes from a dialogue about standardization, which builds understanding among the employees of how they should do their work.

At one client we are installing Microsoft Team Foundation Server for development, Release Management and Test Management. The current process relies on the heroics of many on the team every Thursday night to get the release out the door.

We started the improvement of the development, test, and release process with Root Cause Analysis. In this domain cyber and data security are paramount, so when there is a cyber or data security issue, RCA is the core process used to address it.

The results of the RCA have shown that the work place is chaotic at times, code is poorly managed, testing struggles at the deadline, and the configuration of the release base is inconsistent. It was clear we were missing tools, but the human factors were also a source of the problem - the symptoms of latent defects and a break-fix paradigm.

There are many ways to ask and answer the 5 Whys and apply the 5 S's, but until that is done and the actual causes determined, and the work place cleaned up, the symptoms will continue to manifest in undesirable ways. 

If we're going to start down the path of 5 Whys and NOT actually determine the Root Cause and develop a corrective action plan, then that is in itself a waste. 

Related articles Five Estimating Pathologies and Their Corrective Actions Economics of Software Development
Categories: Project Management

The Programmer’s Guide to Networking at a Conference

Making the Complex Simple - John Sonmez - Mon, 04/20/2015 - 16:00

One of the best reasons to go to a conference is the networking opportunities that are present—if you know how to take advantage of them. Sometimes the best thing about a conference is everyone you meet in the hallways, not the actual talks or sessions themselves. So much so, that people often refer to this […]

The post The Programmer’s Guide to Networking at a Conference appeared first on Simple Programmer.

Categories: Programming

R: non-numeric argument to binary operator

Mark Needham - Mon, 04/20/2015 - 00:08

When debugging R code, given my Java background, I often find myself trying to print out the state of variables along with an appropriate piece of text like this:

names = c(1,2,3,4,5,6)
> print("names: " + names)
Error in "names: " + names : non-numeric argument to binary operator

We might try this next:

> print("names: ", names)
[1] "names: "

which doesn’t actually print the names variable – only the first argument to the print function is printed.

We’ll find more success with the paste function:

> print(paste("names: ", names))
[1] "names:  1" "names:  2" "names:  3" "names:  4" "names:  5" "names:  6"

This is an improvement but it repeats the ‘names:’ prefix multiple times which isn’t what we want. Introducing the toString function gets us over the line:

> print(paste("names: ", toString(names)))
[1] "names:  1, 2, 3, 4, 5, 6"
Categories: Programming

SPaMCAST 338 – Stephen Parry, Adaptive Organizations, Lean and Agile Thinking


http://www.spamcast.net

Listen Now

Subscribe on iTunes

Software Process and Measurement Cast 338 features our new interview with Stephen Parry.  We discussed adaptable organizations. Stephen recently wrote: “Organizations which are able to embrace and implement the principles of Lean Thinking are inevitably known for three things: vision, imagination and – most importantly of all – implicit trust in their own people.” We discussed why trust, vision and imagination have to be more than just words in a vision or mission statement to get value out of lean and Agile.

Need more Stephen Parry?  Check out our first interview.  We discussed adaptive thinking and command and control management!

Stephen’s Bio

Stephen Parry is an international leader and strategist on the design and creation of adaptive-lean enterprises. He has a world-class reputation for passionate leadership and organisational transformation by changing the way employees, managers and leaders think about their business and their customers.

He is the author of Sense and Respond: The Journey to Customer Purpose (Palgrave), a highly regarded book written as a follow-up to his award-winning organisational transformations. His change work was recognised when he received Best Customer Service Strategy at the National Business Awards. The judges declared his strategy had created organisational transformations which demonstrated an entire cultural change around the needs of customers and could, as a result, demonstrate significant business growth, innovation and success. He is the founder and senior partner at Lloyd Parry a consultancy specialising in Lean organisational design and business transformation.

Stephen believes that organisations must be designed around the needs of customers through the application of employee creativity, innovation and willing contribution. This was recognised when his approach received awards from the European Service Industry for the Best People Development Programme and a personal award for Innovation and Creativity. Stephen has since become a judge at the National Business Awards and the National Customer Experience Awards. He is also a Fellow at the Lean Systems Society.

Website: www.lloydparry.com
Call to action!

Reviews of the Podcast help to attract new listeners.  Can you write a review of the Software Process and Measurement Cast and post it on the podcatcher of your choice?  Whether you listen on iTunes or any other podcatcher, a review will help to grow the podcast!  Thank you in advance!

Re-Read Saturday News

The Re-Read Saturday focus on Eliyahu M. Goldratt and Jeff Cox’s The Goal: A Process of Ongoing Improvement began on February 21st. The Goal has been hugely influential because it introduced the Theory of Constraints, which is central to lean thinking. The book is written as a business novel. Visit the Software Process and Measurement Blog and catch up on the re-read.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast.

Dead Tree Version or Kindle Version 

I am beginning to think of which book will be next. Do you have any ideas?

Upcoming Events

QAI Quest 2015
April 20 -21 Atlanta, GA, USA
Scale Agile Testing Using the TMMi
http://www.qaiquest.org/2015/

DCG will also have a booth!

Next SPaMCast

The next Software Process and Measurement Cast will feature our essay on demonstrations!  Demonstrations are an important tool for teams to gather feedback to shape the value they deliver.  Demonstrations provide a platform for the team to show the stories that have been completed so the stakeholders can interact with the solution. It is unfortunate that many teams mess them up.  We can help demonstrate what a good demo is all about.

 

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management


Faster Word Puzzles with Neo4J

Mistaeks I Hav Made - Nat Pryce - Sun, 04/19/2015 - 21:08
When I used Neo4J to create and solve Word Morph puzzles, I brute-forced the algorithm to find and link words that differ by one letter. I was lucky – my dataset only contained four-letter words and so was small enough for my O(n²) algorithm to run in a reasonable amount of time. But what happens when I expand my dataset to include words of 4, 5 and 6 letters? Obviously, I have to change my Cypher to only relate words that are the same length:

match (w1), (w2)
where w2.word > w1.word
  and length(w1.word) = length(w2.word)
with w1, w2,
     length([i in range(0,length(w1.word)) where substring(w1.word,i,1) <> substring(w2.word,i,1)]) as diffCount
where diffCount = 1
create (w1)-[:STEP]->(w2)
create (w2)-[:STEP]->(w1)

But with the larger dataset, this query takes a very long time to run. I don’t know how long – I’ve never been patient enough to wait for the query to complete. I need a better algorithm.

Nikhil Kuriakose has also written about solving these puzzles with Neo4J. He used a more sophisticated algorithm to create the graph. He first grouped words into equivalence classes, each of which contains words that are the same except at one letter position. So, for instance, the class P_PE would contain PIPE, POPE, etc., the class PIP_ would contain PIPE, PIPS, etc., and so on. He then created relationships between all the words in each equivalence class.

This also has a straightforward representation as a property graph. An equivalence class can be represented by an Equivalence node with a pattern property, and a word’s membership of an equivalence class can be represented by a relationship from the Word node to the Equivalence node.

Words related via equivalence classes

Nikhil implemented the algorithm in Java, grouping words with a HashMap and ArrayLists before loading them into Neo4J. But by modelling equivalence classes in the graph, I can implement the algorithm in Cypher – no Java required.

For each Word in the database, my Cypher query calculates the patterns of the equivalence classes that the word belongs to, creates Equivalence nodes for those patterns, and creates an :EQUIV relationship from the Word node to each Equivalence node.

The trick is to only create an Equivalence node for a pattern once, when one doesn’t yet exist, and subsequently use the same Equivalence node for the same pattern. This is achieved by creating Equivalence nodes with Cypher’s MERGE clause. MERGE either matches existing nodes and binds them, or it creates new data and binds that. It’s like a combination of MATCH and CREATE that additionally allows you to specify what happens if the data was matched or created.

Before using MERGE, I must define a uniqueness constraint on the pattern property of the Equivalence nodes that will be used to identify nodes in the MERGE command. This makes Neo4J create an index for the property and ensures that the merge has reasonable performance.

create constraint on (e:Equivalence) assert e.pattern is unique

Then I relate all the Word nodes in my database to Equivalence nodes:

match (w:Word)
unwind [i in range(0,length(w.word)-1) | substring(w.word,0,i)+"_"+substring(w.word,i+1)] as pattern
merge (e:Equivalence {pattern:pattern})
create (w)-[:EQUIV]->(e)

This takes about 15 seconds to run – much less time for my large dataset than my first, brute-force approach took for the small dataset of only four-letter words.

Now that the words are related to their equivalence classes, there is no need to create relationships between the words directly. I can query via the Equivalence nodes:

match (start {word:'halt'}), (end {word:'silo'}),
      p = shortestPath((start)-[*]-(end))
unwind [n in nodes(p)|n.word] as step
with step where step is not null
return step

Giving:

step
----
halt
hilt
silt
silo

Returned 4 rows in 897 ms.

And it now works for longer words:

step
----
candy
bandy
bands
bends
beads
bears
hears
heart

Returned 8 rows in 567 ms.

Organising the Data During Import

The Cypher above organised Word nodes that I had already loaded into my database. But if starting from scratch, I can organise the data while it is being imported, by using MERGE and CREATE clauses in the LOAD CSV command:

load csv from "file:////puzzle-words.csv" as l
create (w:Word{word:l[0]})
with w
unwind [i in range(0,length(w.word)-1) | substring(w.word,0,i)+"_"+substring(w.word,i+1)] as pattern
merge (e:Equivalence {pattern:pattern})
create (w)-[:EQUIV]->(e)
Categories: Programming, Testing & QA

Open positions for a Sr. dev and a data scientist in Appsflyer data group

As you may know, I've recently joined AppsFlyer as Chief Data Officer. AppsFlyer, in case you don't know, is a well funded (a $20M round B just last January) and very exciting startup that is already the market leader in mobile attribution. In any event, one of the tasks that I have at hand is to establish the data group within the company. The data group's role is to unlock the potential of the data handled by AppsFlyer and increase the value for its customers. I am looking for both a data scientist and a senior backend developer to join that team. Below you can find the blurb describing the roles. If you think you are a good fit and are interested, drop me a line and/or send your CV to jobs@appsflyer.com. Note that the positions are in our HQ, which is in Herzliya, Israel, and also that we have additional openings for R&D (in Israel) and sales/account management (in multiple locations worldwide) which you can find here

Data Scientist:

Are you looking for an opportunity to play with vast, random and unstructured social data to create insights, patterns, and models, and then leverage them for a groundbreaking software platform? Are you excited by the prospect of working on a small, highly technical team while enjoying significant responsibility and learning something new almost daily? Do you have a highly quantitative advanced degree and experience in mathematics and perhaps statistics? In this role you will:

  • Use machine learning techniques to create scalable solutions for business problems.
  • Analyze and extract key insights from the rich store of online social media data.
  • Design, develop and evaluate highly innovative models for predictive learning.
  • Work closely with the core development team to deploy models seamlessly as part of the production system.
  • Establish scalable, efficient, automated processes for large-scale data analyses, model development, model validation and model implementation.
  • Research and implement novel machine learning algorithms for new business problems.

Requirements:  

  • PhD or MSc in Computer Science, Math or Statistics with a focus on machine learning
  • Leadership and team skills
  • Hands-on experience in predictive modeling and analysis of large volumes of data
  • Strong problem-solving ability
  • Strong programming skills (Clojure or other functional language preferred)
  • Experience with large-scale distributed programming paradigms, including the Hadoop/Spark and SQL stacks
  • Experience with mobile analytics
  • An absolute love for hacking and tinkering with data is the basic requirement

Perks:

AppsFlyer is a fast growing startup providing mobile advertising analytics and attribution in real time. The R&D team takes an active part in the Israeli development community, including a range of meetups and such outlets as Reversim. We are focused on functional programming and on releasing open source. Get immediate satisfaction from your work: feedback from clients arrives in hours or even minutes.

Senior Backend Developer:

If you're a Senior Backend Developer and you can't remember the last time you did something for the first time, then the AppsFlyer R&D team is the place for you. Are you looking to propel your career to the next level? You'll experience the excitement of handling 3.2 billion events per day, in real time, using technologies like Kafka, Spark, Couchbase, Clojure, Redis etc. Our micro-service architecture is built to support a scale that grew from 200 million to 3.2 billion events per day in a year.

Do you have the skills but lack proven experience with this stack? Come work with the best.

Requirements:  

  • At least 5 years of experience in software development
  • Experience working with live production systems
  • Experience with architecture design, technology evaluation and performance tuning
  • Passion to learn cutting edge technologies
  • Team player, ownership and sense of urgency
  • “Can-do approach”

Perks:

AppsFlyer is a fast growing startup providing mobile advertising analytics and attribution in real time. The R&D team takes an active part in the Israeli development community, including a range of meetups and such outlets as Reversim. We are focused on functional programming and on releasing open source. Get immediate satisfaction from your work: it usually takes only hours from the moment code is ready until clients are using it in production. You also get the benefits of end-to-end system ownership, including design, technologies, code, quality and keeping production alive.


Categories: Architecture

Economics of Software Development

Herding Cats - Glen Alleman - Sun, 04/19/2015 - 16:21

Economics is called the Dismal Science. Economics is the branch of knowledge concerned with the production, consumption, and transfer of wealth. Economics is generally about the behavior of humans and markets as they try to achieve certain ends with scarce means.

How does economics apply to software development? We're not a market, and we don't create wealth, at least not directly; we create products and services that may create wealth. Microeconomics is a branch of economics that studies the behavior of individuals and their decision making on the allocation of limited resources. It's the scarcity of resources that is the basis of microeconomics, and software development certainly operates in the presence of scarce resources. Microeconomics is closer to what we need to make decisions in the presence of uncertainty. The general macroeconomic processes are of little interest here, so starting with big-picture economics books is not much use.

Software economics is a subset of engineering economics. A key aspect of microeconomics applied to engineering problems is the application of Statistical Decision Theory - making decisions in the presence of uncertainty. Uncertainty comes in two types:

  • Aleatory uncertainty - the naturally occurring variances in the underlying processes.
  • Epistemic uncertainty - the lack of information about a probabilistic event in the future.

Aleatory uncertainty can be addressed by adding margin to our work: time and money. Epistemic uncertainty means information is missing, and that missing information has economic value to our decision-making processes. That is, there is economic value at stake in decision problems made in the presence of uncertainty.
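As a hedged illustration of the aleatory half of this, the sketch below derives a schedule margin from natural task-duration variance by Monte Carlo simulation; the distribution, spread and numbers are invented for the example, not taken from the post:

import random

def schedule_margin(task_means, spread=0.25, target_percentile=0.8, trials=10000):
    # Monte Carlo: natural (aleatory) variation around each task's mean duration.
    totals = sorted(sum(random.gauss(m, m * spread) for m in task_means) for _ in range(trials))
    expected = sum(totals) / trials
    committed = totals[int(target_percentile * trials)]
    return expected, committed - expected  # plan value, plus the margin needed on top of it

# schedule_margin([10, 15, 20]) returns roughly (45, 6): about 45 days expected,
# plus roughly six days of margin to commit with 80% confidence (illustrative numbers only).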

The missing information behind epistemic uncertainty can be bought down with simple solutions - prototypes, for example, or short deliverables that test an idea or confirm an approach. Both are the basis of Agile and have been discussed in depth in Software Engineering Economics, Barry Boehm, Prentice Hall, 1981.

Engineering economics is the application of economic techniques to the evaluation of design and engineering alternatives. It assesses the appropriateness of a given project, estimates its value, and justifies the project (or product) from an engineering standpoint.

This involves the time value of money and cash-flow concepts - compound and continuous interest. It continues with the economic practices and techniques used to evaluate and optimize decisions on the selection of strategies for project success.
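A minimal sketch of that time-value-of-money arithmetic, with made-up cash flows and rates:

import math

def present_value_discrete(cash_flow, rate, years):
    # Present value with annual compound interest.
    return cash_flow / (1 + rate) ** years

def present_value_continuous(cash_flow, rate, years):
    # Present value with continuous compounding.
    return cash_flow * math.exp(-rate * years)

# A payoff of 100,000 expected in 2 years, discounted at 10%:
# present_value_discrete(100000, 0.10, 2)   -> about 82,645
# present_value_continuous(100000, 0.10, 2) -> about 81,873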

When I hear "I read that book and it's about counting lines of code," the reader has failed to comprehend the difference between principles and practices. The sections on Statistical Decision Theory are about the Expected Value of Perfect Information and how to make decisions with imperfect information.

Statistical Decision Theory is about making choices: identifying the values, uncertainties and other issues relevant in a given decision, its rationality, and the resulting optimal decision. In Statistical Decision Theory, the underlying statistical processes and the resulting probabilistic outcomes require us to estimate in the presence of uncertainty.
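To make the Expected Value of Perfect Information concrete, here is a minimal sketch assuming a made-up build-versus-buy decision with two states of nature and subjective probabilities; none of these numbers come from the post:

def expected_value(payoffs, probabilities):
    # Expected payoff of a single option across the possible states of nature.
    return sum(p * v for p, v in zip(probabilities, payoffs))

def evpi(options, probabilities):
    # Expected Value of Perfect Information for a payoff table of options x states.
    best_without_info = max(expected_value(payoffs, probabilities) for payoffs in options.values())
    best_with_info = sum(p * max(payoffs[i] for payoffs in options.values())
                         for i, p in enumerate(probabilities))
    return best_with_info - best_without_info

# Build vs. buy, with states "high demand" (60%) and "low demand" (40%), payoffs invented:
options = {"build": [300, -100], "buy": [150, 50]}
print(evpi(options, [0.6, 0.4]))  # about 60: the most that removing the uncertainty could be worth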

Writing software for money - other people's money - requires us to estimate how much money we need, when we'll be done spending that money, and what will result from that spend.

This is the foundation of the Microeconomics of Software Development.

If there is no scarcity of resources - time, cost, technical performance - then estimating is not necessary: just start the work, spend the money, and you'll be done when you're done. If, however, those resources are scarce, estimating becomes part of making the decisions about how to spend them.

Related articles:
  • Five Estimating Pathologies and Their Corrective Actions
  • Critical Success Factors of IT Forecasting
Categories: Project Management