
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Measurement: Start or Stop?

For all undertakings (home improvement as well as measurement programs) make sure you have resources, a plan and the right attitude.


Part of the Simple Checklist Series

Beginning or continuing a measurement program is never easy. The simple Measurement Checklist is a tool to help generate a discussion about the attributes that are important to success, whether you are implementing a measurement program or have shifted into support mode.  The checklist is broken into three categories: resources (parts 1 and 2), plans, and attitudes.  Each can be scored and leveraged separately; however, using all three components will help you focus on the big picture.

Scale

The simple checklist can be used as a tool to evaluate how well you have prepared for your measurement journey.  As a reminder, each question in the survey is evaluated as a multiple-choice question.  The scale is high, medium, low and not present (with one exception), and each response is worth a point value (some are negative).  Scoring a question with a response that has a zero or negative value is usually a sign that your program faces significant issues; proceed with caution.

Section and Question Weights:

Resources: Forty-two total points. Each component contributes up to 7 points.

Plans: Eighteen total points. Each component contributes up to 6 points.

Attitude: Forty total points. Each component contributes up to 8 points.

Scoring

Sum all of the scores from each of the three categories.

100–80  You have a great base. Go live the dream.  Use techniques like continuous process improvement and retrospectives to keep improving your program.

79–60   Focus on building your measurement infrastructure.  Use focused improvement projects to target weaknesses in your measurement program.  The changes will be bigger than those typically identified in a retrospective.

59–30   Remediate your weaknesses immediately.  If you have not started your measurement program, focus on the problem areas before you begin.  If you have begun implementation or are in support mode, consider putting the program on hold until you fix the most egregious problems.

29–0    Run away! Trying to implement a measurement program would be equivalent to putting your hand in a running garbage disposal; avoid it!  In this case, consider significant organizational change initiatives.


Categories: Process Management

Event Sourcing. Draw it

Here is a drawing to show the interaction between the Decide and Apply functions:

Categories: Architecture, Requirements

Small Basic Parser

Phil Trelford's Array - Sat, 01/04/2014 - 19:30

Microsoft Small Basic is a minimal implementation of the BASIC programming language aimed at beginners. In my last article I described the implementation of an interpreter for Small Basic using an internal DSL to specify the abstract syntax tree (AST) for programs.

With the AST for the language well defined, a text parser for Small Basic programs is now relatively easy. There are quite a few options for writing parsers in F#, from FsLex and FsYacc to hand-rolled recursive descent parsers and parser combinator libraries.

For the expression parser in the open source spreadsheet Cellz, I initially used a simple parser combinator implementation based on an F# Journal article by Jon Harrop. Later I changed it to a hand rolled recursive descent parser using F# Active Patterns. Tomas Petricek has a chapter in F# Deep Dives which uses active patterns for parsing Markdown, the syntax used for Stack Overflow posts.

FParsec

To try something new, I decided to use the open source FParsec parser combinator library. FParsec, written by Stephan Tolksdorf, has great documentation, including an in-depth tutorial and user guide, along with a convenient NuGet package. Fog Creek uses FParsec for parsing search queries.

With FParsec, parsers for code fragments can be written as simple functions and then composed into larger parsers for values, expressions, statements and programs. A parser can be written incrementally using the F# interactive environment giving quick feedback as you go. The library also gives really helpful error messages with line and column numbers when a parser fails.

I spent around an hour going through the tutorial which provided enough detail to get started on a parser for Small Basic. A Small Basic program is composed of statements, with one statement per line.

Parsing literals

Small Basic supports a small range of value types:

/// Small Basic value
type value =
    | Bool of bool
    | Int of int
    | Double of double
    | String of string

A parser for the boolean literal values "true" or "false" can be defined using stringReturn:

let ptrue = stringReturn "true" true
let pfalse = stringReturn "false" false

The parsers can then be combined to be either true or false using the <|> operator:

let pbool = ptrue <|> pfalse

To lift the boolean parser's result into the value type, we use the |>> combinator, which maps a parser's result through a function:

let pbool = (ptrue <|> pfalse) |>> fun x -> Bool(x)

FParsec contains parsers for many primitive types including integers:

let pint = pint32 |>> fun n -> Int(n)

These parsers can then be combined to create a parser for values:

let pvalue = pbool <|> pint // ...
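As a quick sanity check (a hypothetical session, assuming the parsers above are already loaded), the combined value parser can be exercised in F# interactive with the run function:

```fsharp
// Each call returns a ParserResult; on success it carries the parsed value.
run pvalue "true"   // succeeds with Bool true
run pvalue "42"     // succeeds with Int 42
run pvalue "oops"   // fails with an error message and position
```

Note that ordering matters here: pbool is tried first, but because stringReturn fails without consuming input on "42", the <|> operator falls through cleanly to pint.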

Parsing expressions

Next up are Small Basic’s expressions:

/// Small Basic expression
type expr =
    | Literal of value
    | Var of identifier
    | GetAt of location
    | Func of invoke
    | Neg of expr
    | Arithmetic of expr * arithmetic * expr
    | Comparison of expr * comparison * expr
    | Logical of expr * logical * expr

A parser for literals can be created using the value parser:

let pliteral = pvalue |>> fun x -> Literal(x)

Identifiers are expected to start with a letter or underscore and may contain numerals. The FParsec tutorial contains a handy example:

let pidentifier =
    let isIdentifierFirstChar c = isLetter c || c = '_'
    let isIdentifierChar c = isLetter c || isDigit c || c = '_'
    many1Satisfy2L isIdentifierFirstChar isIdentifierChar "identifier"

This can be used to define a parser for variable names:

let pvar = pidentifier |>> fun name -> Var(name)

This can then be used to define a parser for simple expressions:

let pexpr = pliteral <|> pvar

Operators can be easily parsed using the built-in operator precedence parser described in the user guide.
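For illustration, here is a minimal sketch of how FParsec's OperatorPrecedenceParser might be wired up for a couple of arithmetic operators. The Multiply case and the precedence levels are assumptions on my part, not the article's full operator table:

```fsharp
let opp = OperatorPrecedenceParser<expr, unit, unit>()
opp.TermParser <- (pliteral <|> pvar) .>> spaces

// A higher precedence number binds tighter; both operators associate left.
opp.AddOperator(InfixOperator("+", spaces, 1, Associativity.Left,
                              fun l r -> Arithmetic(l, Add, r)))
opp.AddOperator(InfixOperator("*", spaces, 2, Associativity.Left,
                              fun l r -> Arithmetic(l, Multiply, r)))

// An expression parser that handles the registered operators:
let parithmetic = opp.ExpressionParser
```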

Parsing statements

The longest of Small Basic’s statements is the for loop:

/// Small Basic assignment
type assign =
    | Set of identifier * expr
/// Small Basic instruction
type instruction =
    | Assign of assign
    | For of assign * expr * expr

A for statement is composed of an assignment and expressions for the end value and step:

For A=1 To 100 Step 1

To parse the assignment the pipe3 combinator can be used for the constituent parts:

let pset = pipe3 pidentifier (pstring "=") pexpr (fun id _ e -> Set(id, e))

The parser for the from, to and step components can be combined as:

let pfor =
    let pfrom = pstring "For" >>. spaces1 >>. pset
    let pto = pstring "To" >>. spaces1 >>. pexpr
    let pstep = pstring "Step" >>. spaces1 >>. pexpr
    let toStep = function None -> Literal(Int(1)) | Some s -> s
    pipe3 pfrom pto (opt pstep) (fun f t s -> For(f, t, toStep s))

It can be tested in F# interactive using the run function:

run pfor "For A=1 To 100"

Which produces a statement from the parser:

val it : ParserResult<instruction,unit> =
  Success: For (Set ("A",Literal (Int 1)),Literal (Int 100),Literal (Int 1))

Parsers for statements can be combined using the <|> operator or choice function:

let pstatement = 
    choice [
        attempt pfor
        // ... other statements
    ]

Parsing programs

Small Basic supports comments at the end of a line:

let pcomment = 
    pchar '\'' >>. skipManySatisfy (fun c -> c <> '\n') >>. pchar '\n'

Thus the end of a line is characterized either by a comment or a newline character:

let peol = pcomment <|> (pchar '\n')

The lines of the program can be parsed using the many function:

let plines = many (spaces >>. pstatement .>> peol) .>> eof

Finally the program can be parsed by applying the run function:

let parse (program:string) =
    match run plines program with
    | Success(result, _, _)   -> result
    | Failure(errorMsg, _, _) -> failwith errorMsg

Running programs

The generated AST from the parser can be fed directly into the Small Basic interpreter built in the previous article.

The code from this example is available as a gist. The full parser and interpreter are available as an F# snippet.

Here’s FizzBuzz in Small Basic:

Sub Modulus
  Result = Dividend
  While Result >= Divisor
    Result = Result - Divisor
  EndWhile
EndSub

For A = 1 To 100    
  Dividend = A
  Divisor = 3
  Modulus()
  Mod3 = Result
  Divisor = 5
  Modulus()
  Mod5 = Result
  If Mod3 = 0 And Mod5 = 0 Then
    TextWindow.WriteLine("FizzBuzz")  
  ElseIf Mod3 = 0 Then
    TextWindow.WriteLine("Fizz")
  ElseIf Mod5 = 0 Then
    TextWindow.WriteLine("Buzz")
  Else
    TextWindow.WriteLine(A)        
  EndIf
EndFor

And the generated AST from the parser:

val program : instruction [] =
  [|
  Sub "Modulus"; Assign (Set ("Result",Var "Dividend"));
  While (Comparison (Var "Result",Ge,Var "Divisor"));
  Assign (Set ("Result",Arithmetic (Var "Result",Subtract,Var "Divisor")));
  EndWhile; 
  EndSub;
  For (Set ("A",Literal (Int 1)),Literal (Int 100),Literal (Int 1));
  Assign (Set ("Dividend",Var "A"));
  Assign (Set ("Divisor",Literal (Int 3))); GoSub "Modulus";
  Assign (Set ("Mod3",Var "Result"));
  Assign (Set ("Divisor",Literal (Int 5))); GoSub "Modulus";
  Assign (Set ("Mod5",Var "Result"));
  If
    (Logical
       (Comparison (Var "Mod3",Eq,Literal (Int 0)),And,
        Comparison (Var "Mod5",Eq,Literal (Int 0))));
  Action (Method ("TextWindow","WriteLine",[|Literal (String "FizzBuzz")|]));
  ElseIf (Comparison (Var "Mod3",Eq,Literal (Int 0)));
  Action (Method ("TextWindow","WriteLine",[|Literal (String "Fizz")|]));
  ElseIf (Comparison (Var "Mod5",Eq,Literal (Int 0)));
  Action (Method ("TextWindow","WriteLine",[|Literal (String "Buzz")|]));
  Else; 
  Action (Method ("TextWindow","WriteLine",[|Var "A"|])); 
  EndIf;
  EndFor|]

Conclusion

FParsec lets you declaratively build a parser for a programming language incrementally in F# interactive with minimal effort. If you need to write a parser for an external DSL or programming language then FParsec is well worth a look.

Categories: Programming

Get Up And Code 35: New Year’s Resolutions

Making the Complex Simple - John Sonmez - Sat, 01/04/2014 - 17:30

It is that time of year again when everyone is making New Year's resolutions to finally lose weight and get to the gym. The only problem is most of us will never actually keep our resolutions and we'll make the same ones again next year. In this episode of Get Up and CODE, Iris and […]

The post Get Up And Code 35: New Year’s Resolutions appeared first on Simple Programmer.

Categories: Programming

Quote of the Day

Herding Cats - Glen Alleman - Sat, 01/04/2014 - 17:10

The only certainty is uncertainty - Pliny the Elder (Gaius Plinius Secundus), Natural History

When the cost of a future state is considered, decision makers ask: What is the chance the system's cost will exceed a particular amount? What are the uncertainties, and how do they drive cost? Cost, schedule, and technical uncertainty analysis provides decision makers insight into these and other important questions.

These uncertainties come from inaccuracies in cost and schedule estimates. They come from the misuse, misrepresentation, or misinterpretation of estimating data, or misapplied estimating methods. They come from intentionally ignoring the probabilistic and statistical nature of all project work. 

But these uncertainties do not remove the need for the decision maker to know, to some level of confidence, the cost, schedule, technical performance, and probability of project success.

This knowledge is needed before and during the project. Without this knowledge the very notion of making decisions is uninformed by the raw data needed to decide.

To not know, not be able to know, or not want to know means basing decisions on ignorance of the emerging situation of the project. And since all project management processes are about making decisions, to make those decisions we need credible information. Estimates are one of those decision-making pieces of data. To not have an estimate is to intentionally ignore a piece of information critical to the success of any project.

So before you listen to anyone suggesting we don't need to estimate cost, schedule, and technical performance, ask them to show you their marked-up copy of the book below (one of several dozen hands-on guidebooks for estimating project cost and schedule), or have them point to where not knowing the cost, schedule, risk, or technical performance to some degree of confidence - before starting the project or during it - is in the best interest of those funding that project.


Categories: Project Management

Small Basic Interpreter

Phil Trelford's Array - Sat, 01/04/2014 - 14:52

Microsoft’s Small Basic is a minimal implementation of the BASIC programming language using only 14 keywords. It’s aimed at beginners, with a very simple development environment and library. My kids have enjoyed playing with it, particularly the Turtle API, which is reminiscent of Logo. Small Basic programs can be run locally, online via Silverlight, or migrated to full-fat Visual Basic .NET.

I’m quite interested in building Domain Specific Languages (DSLs), including embedded DSLs, parsers and compilers. For a short exercise/experiment I wanted to recreate a simple imperative language and Small Basic looked like a fun option.

Abstract Syntax Tree

I started by sketching out an abstract syntax tree (AST) for the language which describes the values, expressions and instructions.

F# discriminated unions make light work of this:

/// Small Basic instruction
type instruction =
    | Assign of assign
    | SetAt of location * expr
    | PropertySet of string * string * expr
    | Action of invoke
    | For of assign * expr * expr
    | EndFor
    | If of expr
    | ElseIf of expr
    | Else
    | EndIf
    | While of expr
    | EndWhile
    | Sub of identifier
    | EndSub
    | GoSub of identifier
    | Label of label
    | Goto of label

A parser or embedded DSL can be used to generate an AST for a program. The AST can then be evaluated by an interpreter, or transformed by a compiler to processor instructions, byte code or even another language.

Embedded DSL

To test the AST I built a small embedded DSL using custom operators and functions:

let I x = Literal(Int(x))
let (!) (name:string) = Var(name)
let FOR(var:identifier, from:expr, ``to``:expr) = 
    For(Set(var, from), ``to``, I(1))
let PRINT x = 
    let writeLine = typeof<Console>.GetMethod("WriteLine",[|typeof<obj>|])
    Action(Call(writeLine, [|x|]))

This can be used to specify a Small Basic program.

let program =
    [|
        FOR("A", I(1), I(100))
        PRINT(!"A")
        ENDFOR        
    |]

The program AST can then be evaluated in F# interactive:

val program : instruction [] =
  [|For (Set ("A",Literal (Int 1)),Literal (Int 100),Literal (Int 1));
    Action (Call (Void WriteLine(System.Object),[|Var "A"|])); 
    EndFor|]

Defining the embedded DSL in F# only took minutes using the interactive REPL environment and looks quite close to the target language.

Interpreter

Programs can now be run by evaluating the AST using an interpreter. The interpreter merely steps through each instruction using pattern matching:

let run (program:instruction[]) =
    /// ... 
    /// Instruction step
    let step () =
        let instruction = program.[!pi]
        match instruction with
        | Action(call) -> invoke state call |> ignore
        | For((Set(identifier,expr) as from), target, step) ->
            assign from
            let index = findIndex (!pi+1) (isFor,isEndFor) EndFor
            forLoops.[index] <- (!pi, identifier, target, step)
            if toInt(variables.[identifier]) > toInt(eval target) 
            then pi := index
        | EndFor ->
            let start, identifier, target, step = forLoops.[!pi]
            let x = variables.[identifier]
            variables.[identifier] <- arithmetic x Add (eval step)
            if toInt(variables.[identifier]) <= toInt(eval target) 
            then pi := start
    while !pi < program.Length do step (); incr pi

Scriptable Small Basic

The AST, embedded DSL and interpreter are available as an F# snippet that you can run in F# interactive or build as an executable. The script includes a Small Basic FizzBuzz sample.

FizzBuzz

Categories: Programming

Measurement Readiness Checklist: Attitude

That's a bad attitude.


Part of the Simple Checklist Series 

The simple Measurement Readiness Checklist will be useful for any major measurement initiative, but is tailored toward beginning a measurement program.  The checklist will provide a platform for evaluating and discussing whether you have the resources, plans and organizational attitudes needed to implement a new measurement program or support the program you currently have in place.

I have divided the checklist into three categories: resources (part 1 and 2), plans, and attitudes.  Each can be leveraged separately. However, using the three components will help you to focus on the big picture. Today we address attitude.

Here we continue the checklist with the section on attitude.  If you have not read the first three sections of the checklist, please take a moment to see (Measurement Readiness Checklist: Resources Part 1,  Measurement Readiness Checklist: Resources Part 2 and Measurement Readiness Checklist: Plans).

Attitude

When you talk about attitude, it seems personal rather than organizational. But when it comes to large changes (and implementing measurement is a large change), I believe that the attitudes of both the overall organization and critical individuals (inside or outside the organization) are important. As you prepare to either implement measurement or keep it running, the onus is on you as a change leader to develop a nuanced understanding of who you need to influence within the organization. This part of the checklist takes an organizational view; however, you can and should replicate the exercise for specific critical influencers and for yourself.

Scale and Scoring

The attitude category of the checklist contributes up to forty total points. Each component contributes up to 8 points (8, 4, 2, 0).

Vision of tomorrow

Is there a belief that tomorrow will be demonstrably better based on the actions being taken? The organization needs a clear vision that tomorrow will be better than today in order to positively motivate the team to aspire to be better than they are.

8 – The organization is excited about the changes that are being implemented.  Volunteers to help move the program forward or to pilot new concepts are numerous.

4 – Most of the organization is excited about most of the changes and their impact on the future.

2 – There is a neutral (or at least undecided) outlook.

-5 – There is active disenchantment with or dissension about the future.

Support Note: Measurement organizations often fall into the trap of accepting and ignoring the organization’s overall vision of the future.  While a measurement program typically cannot change how an organization feels about itself, it can be a positive force for change.  Make sure your Organizational Change Plan includes positive marketing and how you will deliver positive messaging.

Minimalist

I once believed that the simplest process change that works was usually the best approach.  I have since become much more absolutist in that attitude, demanding that anyone who does not take the simplest route prove beyond a shadow of a doubt that they are correct. Minimalism is important in today’s lean business environment.  Heavy processes are wearing on everyone who uses them, and even if a process is just right today, entropy will add steps and reviews over time, which may add unneeded weight.  Score this attribute higher if your organization has a policy of applying lean principles as a step in process development and maintenance.

8 – All measurement processes are designed with lean principles formally applied.  Productivity and throughput are monitored to ensure that output isn't negatively impacted.

4 – All measurement processes are designed with lean principles formally applied; however, they are not monitored quantitatively.

2 – All measurement processes are designed with lean principles informally applied.

-5 – Measures and measurement processes are graded by complexity and the number of steps required, with a higher number of steps being better.

Support Note:  In many cases embracing a lean philosophy is even more important after the initial implementation of a measurement program, as there is a natural tendency to add checks, balances and reviews to your measurement processes as time goes by.  Each step in a process must be evaluated to ensure the effort required adds value to the information measurement delivers to the business.

Learner

A learner is someone who understands that they don't know everything and that mistakes will be made, but who continually broadens their knowledge base. A learner understands that mistakes, when made, are to be examined and corrected rather than swept under the carpet. Another attribute of a learner is the knowledge that synthesizing data and knowledge from other sources is required for growth.  In most organizations an important source of process knowledge and definition is the practitioners, but not the sole source.

8 – New ideas are actively pursued and evaluated on an equal footing with any other idea or concept.

4 – New ideas are actively pursued and evaluated, but those that reflect the way work is currently done are given more weight.

2 – The "not invented here" point of view has a bit of a hold on the organization, making the introduction of new ideas difficult.

0 – There is only one way to do anything, and it was invented here sometime early last century.  Introduction of new ideas is considered dangerous.

Note:  Buddhists call this the beginner's mind, which seeks new knowledge with fresh eyes.

Goal Driven

The organization needs a real need to drive the change and must be used to pursuing longer-term goals. The Process Philosopher of Sherbrooke argues that being goal-driven is required to be serious about change.  In many cases I have observed that a career near-death experience increases the probability of change, because it sharpens focus (assuming it does not create a negative atmosphere). A check-the-box goal rarely provides more than short-term, localized motivation.

8 – The organization has a well-stated positive goal that measurement not only supports, but is integral to attaining.

2 – The pursuit of measurement is about checking a box on an RFP response.

-10 – Measurement is being pursued for no apparent purpose.

Overall Note:  Measurement programs that are not tied directly to supporting the organization's goals should be stopped, and restarted only after making sure of that linkage.

Conviction

Belief in the underlying concepts of measurement (or another change framework) provides motivation to the organization and individuals. Belief provides a place to fall back upon when implementation or support becomes difficult.  Conviction creates a scenario where constancy of purpose (from Deming's work) is not an afterthought, but the way things are done. Implementing a measurement program is a long-term effort, generally with levels of excitement cycling through peaks and valleys.  In the valleys, when despair becomes a powerful force, conviction is often the thread that keeps things moving forward. Without a critical mass of conviction it will be easy to wander off and focus on the next new idea.

8 – We believe, and have evidence from the past that we can continue to believe over time.

4 – We believe, but this is the first time we've attempted something this big!

2 – We believe . . . mostly.

0 – No Organizational Change Plan has been created.

 

Next up: scoring and deciding what to do with the score.


Categories: Process Management

Stuff The Internet Says On Scalability For January 3rd, 2014

Hey, it's HighScalability time, can you handle the truth?


Should software architectures include parasites? They increase diversity and complexity in the food web.
  • 10 Million: classic hockey stick growth pattern for GitHub repositories
  • Quotable Quotes:
    • Seymour Cray: A supercomputer is a device for turning compute-bound problems into IO-bound problems.
    • Robert Sapolsky: And why is self-organization so beautiful to my atheistic self? Because if complex, adaptive systems don’t require a blue print, they don’t require a blue print maker. If they don’t require lightning bolts, they don’t require Someone hurtling lightning bolts.
    • @swardley: Asked for a history of PaaS? From memory, public launch - Zimki ('06), BungeeLabs ('06), Heroku ('07), GAE ('08), CloudFoundry ('11) ...
    • @neil_conway: If you're designing scalable systems, you should understand backpressure and build mechanisms to support it.
    • Scott Aaronson: ...the brain is not a quantum computer. A quantum computer is good at factoring integers, discrete logarithms, simulating quantum physics, and modest speedups for some combinatorial algorithms; none of these have obvious survival value. The things we are good at are not the same things quantum computers are good at.
    • @rbranson: Scaling down is way cooler than scaling up.
    • @rbranson: The i2 EC2 instances are a huge deal. Instagram could have put off sharding for 6 more months, would have had 3x the staff to do it.
    • @mraleph: often devs still approach performance of JS code as if they are riding a horse cart but the horse had long been replaced with fusion reactor
  • Now we know the cost of bandwidth: Netflix’s new plan: save a buck with SD-only streaming
  • Massively interesting Stack Overflow thread on Why is processing a sorted array faster than an unsorted array? Compilers may grant a hidden boon or turn traitor with a deep deceit. How do you tell? It's about branch prediction.
  • Can your database scale to 1000 cores? Nope. Concurrency Control in the Many-core Era: Scalability and Limitations: We conclude that rather than pursuing incremental solutions, many-core chips may require a completely redesigned DBMS architecture that is built from ground up and is tightly coupled with the hardware.
  • Not all SSDs are created equal. Power-Loss-Protected SSDs Tested: Only Intel S3500 Passes, with a follow-up. If data on your SSD can't survive a power outage it ain't worth a damn. 

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge...

Categories: Architecture

Why projects fail, is why (we think) they succeed!

Software Development Today - Vasco Duarte - Fri, 01/03/2014 - 07:00

When I started my career as a Project Manager, I too was convinced that following a plan was a mandatory requirement for project success. As I tried to manage my first projects, my emphasis was on making sure that the plan was known, understood and then followed by everyone involved.


I wrote down all the work packages needed, and discussed with the teams involved when those work packages could be worked on, and completed. I checked that all the dependencies were clear, and that we did not have delays in the critical path (the linear path through a project plan that has no buffer).

I made sure that everyone knew what to do, to the point that I even started using daily meetings before I heard of Scrum which would, later on, institutionalize that practice in many software organizations.


As my projects succeeded, I was more and more convinced that the Great Plan was the cause for their success. The better the plan the more likely the project would succeed - I thought. And I was good at planning!

Boy, was I wrong!

It was only later - after several successful, and some failed, projects - that I realized that The Plan had little effect on the success of the projects. I could only reach this conclusion through experience. Some of the projects I ran were "rushed", which made it impossible to create a Great Plan; they had to be managed "by the seat of the pants". Many of them were successful nonetheless.

In other cases, I did create a plan that I was happy with. Then I had to change it. And then change it again, and again, and again - to the point that I did little else but change The Plan.

Confusing the chicken with the egg, which came first?

The example above is one where I had confused the final cause (chicken) with the original cause (egg).

When something works well, we will often retrospectively analyze the events that led to success, and create a story/narrative about why that particular approach succeeded. We will assign a "final cause" to the success: in my example I assigned the "final cause" of project success to having a Great Plan, and the events that created the Great Plan.

This is normal, and it is so prevalent in humans that there is a name for it: retrospective coherence. Retrospective coherence is what we create when we evaluate events after-the-fact and create a logical path that leads from the initial state to the final state via logical causality, that can easily be explained to others. These causality relationships are what lead us to create lists of "Best Practices".

Best Practice lists are the result of Retrospective Coherence, and because of that many are useless.

When the solution becomes the problem

However, this phenomenon of Retrospective Coherence is not necessarily a good thing. In my initial example about Project Management I was convinced that the Great Plan and the related activities were the reason for success, because that is what I could "make sense" of when I looked back in time. But as I gained experience I was forced to recognize that my "Best Practice" did not, in fact, help me in other projects. This realization, in turn, led me to question the real reasons for success in my previous projects.

After many years of research and reflection I came to realize that many projects are successful for purely random reasons. For example: someone made a heroic effort to come to work during the weekend and recover the Visual SourceSafe database that had been corrupted once again, for the 1000th time!

But there are many other reasons why projects succeed by pure random chance. Here are some:

  • In one project we had a few great testers who were not willing to wait until the end of the project to test the product. What they found changed the requirements and made the project a success.
  • Some projects were started so that we could deliver the "perfect feature set" to our customers. But as time went by and the deadlines were closing in, some managers - sometimes even me - understood that delivering on time was more important, and therefore changed the project significantly by reducing scope.
  • Some developers were single-handedly able to increase product functionality while reducing the code base by 30%. This feat increased quality massively and made an on-time delivery possible.
  • In one project we tried to use Agile. As a result, we started practicing timeboxed iterations and eventually ended up releasing so often that we could never be late.

These are only a few of the reasons why projects succeed despite having a Great Plan, rather than because of it.

The Original Cause

The reasons for project success that I listed above are only a few of those that can be called the "original cause" of project success. Original causes are those that actually start a chain of events that lead to success (or failure), but are too detailed or too far in the past to be remembered while doing a retrospectively coherent analysis of project success (after-the-fact).

The kicker

But the kicker is this: when we get caught in "Final Cause" assignment through the retrospective coherence lenses of our logical mind, we lose a massive opportunity to actually learn something. By removing the role of "luck" or "randomness" from our success scorecard, we miss the opportunity to study the system that we are part of (the teams, the organization, the market). We miss the opportunity to understand how we can influence this system and thereby increase our chances of success in the future.

Many people in the project management community still think - today - that having a Great Plan is a "Best Practice", and that you cannot succeed without one. I would be the first to agree that having a plan will increase your chances of success, but I will also claim that the Great Plan alone (including following that Great Plan) can never deliver success without those random events that you will never recognize because you are blind to the effects of chance in your own success.

In our lives we must always strive to separate Original Cause (what actually caused success) from Final Cause (why we think success happened when analysing it after the fact).

In a later post I will discuss how to increase the chances of project success by - on purpose - inserting randomness and chance into the project. Stay tuned...

Image credit: John Hammink, follow him on twitter

Financial Accounting Versus Project Accounting

Herding Cats - Glen Alleman - Fri, 01/03/2014 - 06:14

Let's Start With End In Mind

If you work on projects and get paid for that work, someone is paying for you. That can be a direct customer if you're working on an external project. Or it can be the firm's customers, buying a product or service which provides the money to pay you. Either way, someone pays for you to do what you do best.

Project Accounting is a subset of Financial Accounting; Both Are Important to the Project Manager

As a person working on a project, you may not care about Project Accounting or Financial Accounting. But it's important to know where your paycheck comes from, and it's not the Bank of America.

Project Accounting, sometimes called job cost accounting, creates data that tracks the financial performance of projects.  Project Accounting enables the firm providing project resources (labor and material) to monitor the progress of their projects from a financial point of view. This is separate from standard organizational accounting for departments, divisions or the firm.

There are several data values used in project accounting. The project manager and the business manager(s) funding the project expect a certain level of profitability to be maintained in each function of the organization working on the project. The cost associated with delivering the project is Budgeted by the firm - usually before the customer of the project receives an invoice for the work and sends Funds in the form of real money. Budget is not dead presidents. Budget is an authorization to spend dead presidents, but you can't take your budget to Starbucks and buy a Venti Latte.

For external projects the separation of internal costs - both direct and indirect - is often done by finance. Usually the project costs are wrapped, or grossed up, for billing purposes to the customer. For internal projects, the fringe, labor overhead, material overhead, and G&A costs are not usually recorded directly to the project, but recorded in the Project Ledger as the wrap rate for your labor. Fringe, labor overhead, material overhead, G&A and other indirects are paid with actual funds - dead presidents - so those costs can be recorded outside the project, since they are not material to the project's performance if managed properly. We see it all the time: when those indirect costs get out of hand, the firm's profit shrinks and they come looking for savings from your project.

An important concept of project cost accounting is the ability to provide visibility to targeted revenue against the actual revenue, and the estimated costs to produce that revenue against the actual cost to produce that revenue. Many firms separate these two items - revenue and costs. The Project Manager focuses on the cost side - direct labor, materials, services, etc. And the Financial Accounting department focuses on the revenue forecast and actuals. Since those direct labor, materials, and services - whether internal or external - have to be paid in real money, someone in Accounts Payable needs to know how much will be coming due in the next period.

This, by the way, is the role of Estimating. How much labor will be needed to produce the planned outcomes of the project next quarter, next release, or the next anything? Fixed labor pool? Simply add up the FTEs (Full Time Equivalents), multiply by the Fully Burdened labor rate and send that to accounting. But of course that means with a fixed labor pool, you'll need to know what your capacity for work is. Have a fixed commitment to deliver some value? Then you'll need to know how many FTEs that takes. Either way, you'll need an estimate for those outside your project paying the bills.
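That FTE arithmetic is simple enough to sketch. The head count, burdened rate, and hours below are invented purely for illustration:

```python
# Invented figures: a fixed pool of 12 FTEs at a fully burdened rate of
# $120/hour, working a 480-hour quarter.
ftes = 12
burdened_rate = 120.0        # $/hour, includes fringe, overhead, G&A
hours_in_quarter = 480

estimate = ftes * burdened_rate * hours_in_quarter
print(f"Labor estimate for next quarter: ${estimate:,.0f}")  # $691,200
```

That single number is what Accounts Payable needs; the hard part is knowing your real capacity for work, which the rest of this post is about.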

What Does Project Accounting Do Best?

Project cost accounting records the costs associated with a particular project or job. Project accounting collects all costs - invoices, time cards, other direct project charges - and assures they are recorded in the proper charge account. These charges can be direct billed to the customer, or assigned to a Budget Item in the project cost baseline, or recorded as non-billable. When costs are non-billable, they are also considered non-recoverable and are usually captured in some overhead account. It's the cost of business.

It's a simple matter of balancing the books. Money In minus Money Out = Retained Earnings (money left over). It's of course more complex than this, but this will do for now.

This must balance...

All Income in the form of Accounts Receivable = All Outgo in the form of payments in "real money"
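As a toy check of that balance (all figures invented for illustration):

```python
# Toy books: income as accounts receivable, outgo as payments in "real money".
accounts_receivable = [25_000, 40_000, 15_000]   # money in, dollars
payments_out = [30_000, 50_000]                  # money out, dollars

money_in = sum(accounts_receivable)    # 80,000
money_out = sum(payments_out)          # 80,000
retained_earnings = money_in - money_out
print(retained_earnings)  # 0 when the books balance exactly
```

In real accounting there are many more accounts and timing effects, but the invariant is the same: the two sides must reconcile.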

For project accounting to add value, care is needed to record and track every project cost. This, by the way, is one primary role of the Work Breakdown Structure. The WBS defines the budget and captures the cost of each project element. On the project side, expenses are represented by direct labor (this is why time cards are used on many external contracts) and by invoices submitted for non-labor project costs.

One important issue in project accounting is who carries the overhead? 

Project managers are usually not aware - at least directly - of the overhead, fringe, and other indirect costs to their projects. In some domains these costs are wrapped in a multiplier to the direct labor and recorded on the Project Balance sheet without any detail. This Wrap Rate value is important to the project manager, since this cost is subtracted from the revenue generated from the external customer. For internal projects, this wrapped cost is part of the ROI calculation for the cost of delivering the value from the project to the internal business customer. 
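The wrap-rate arithmetic can be sketched as follows. The multiplier and dollar figures are invented; real wrap rates vary by firm and domain:

```python
# Invented numbers: direct labor wrapped by a 2.1x multiplier to cover
# fringe, overhead, and G&A, as recorded on the project books.
direct_labor = 100_000.0   # dollars of direct labor this period
wrap_rate = 2.1            # fully burdened multiplier (assumption)

fully_burdened_cost = direct_labor * wrap_rate   # 210,000
revenue = 250_000.0
margin = revenue - fully_burdened_cost           # 40,000
print(f"Burdened cost ${fully_burdened_cost:,.0f}; margin ${margin:,.0f}")
```

Note how sensitive the margin is to the wrap rate: a small change in the multiplier eats directly into what is left of the revenue.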

In the end someone has to pay the electric bill, those free lunches for the project team, those Starbucks cards we hand out for hard work, and the spot award checks for other actions of the team. Those really nice 1080p monitors sitting on the engineers' desks? Someone has to pay for those. The Aeron chairs? Someone has to pay. In the end someone has to pay.

Our daughter (now grown and gone) came home one day from her High School Economics class and announced, "Dad, we learned today there is no such thing as free." Yep, dear, welcome to the real world. People need to know what the real cost is. When they say free checking, someone is paying for that - probably you.

On projects, all that infrastructure you enjoy - laptops, lunchroom, parking, etc. - is paid for by someone. If not directly, then indirectly. That's accounted for in the Financial Accounting System, the one the CFO is interested in. As a project manager you may or may not know about the wrap rate, or even care about it. But those who pay you do. And since they do, you may want to have some concern about the wrap rate as well. Fringe benefits, labor overhead, material overhead, G&A, etc. are real costs and impact that elusive bottom line for your project and your firm.

So In the End

If you work on projects and are not concerned with the wrap rate either directly or indirectly (a pun for all us project planning and controls geeks), then you're probably direct labor. Wonderful direct labor, hired for your irreplaceable skills, but accounted for as labor all the same. Recorded on the books at your direct rate (annual or hourly), plus the fringe, overhead and other indirects.

So if you don't care for the Overhead discussion, or don't want to perform your role as a project manager with Overhead in mind, just remember: those who pay you do care. And since they care, you may want to care, or at least pretend you care - your job may depend on it.

Related articles: Managing Your Project Performance Using 2nd Grade Class Concepts; Project Controls is the Basis of Project Management
Categories: Project Management

Measurement Readiness Checklist: Plans

Plans are your guide to where you want to go.


(Part of the Simple Checklist Series)

The simple Measurement Readiness Checklist will be useful for any major measurement initiative, but is tailored toward beginning a measurement program.  The checklist will provide a platform for evaluating and discussing whether you have the resources, plans and organizational attitudes needed to implement a new measurement program or support the program you currently have in place.

I have divided the checklist into three categories: resources (part 1 and 2), plans, and attitudes.  Each can be leveraged separately. However, using the three components will help you to focus on the big picture. We will address each component separately over the next several days.

Here we continue the checklist with the section on plans and planning.  If you have not read the first two sections of the checklist, please take a moment to see them (Measurement Readiness Checklist: Resources Part 1 and Measurement Readiness Checklist: Resources Part 2).

Plans

Planning for the implementation or support of a measurement program can take many forms: classic planning documents, schedules, Kanban boards or even product backlogs.  The exact structure of the plan is less germane here; what matters most is having an understanding of what needs to be done. Several plans are needed when changing an organization. The term "several" does not mandate many volumes of paper and schedules; rather, it means that the needs and activities required have been thought through and written down somewhere so everyone can understand what needs to be done. Transparency demands that the program goal is known and that the constraints on the program have been identified (in other words, capture the who, what, when, why and how to the level required).

Scale and Scoring

The plans category of the checklist contributes up to eighteen total points. Each component contributes up to 6 points (6, 3, 1, 0).
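As a minimal sketch of how that scoring works. The response labels below are assumptions mapped onto the stated point values (6, 3, 1, 0), and the example responses are invented:

```python
# Point values for each checklist response level, per the scale above.
POINTS = {"high": 6, "medium": 3, "low": 1, "not present": 0}

def score_plans(responses):
    """Sum the points for the three Plans components (maximum 18)."""
    return sum(POINTS[r] for r in responses)

# Example: strong change plan, periodically maintained backlog, no governance.
print(score_plans(["high", "medium", "not present"]))  # 9
```

Any component scoring 0 should be treated as a warning flag on its own, regardless of the category total.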

Organizational Change Plan

The Organizational Change Plan includes information on how the changes required to implement and/or support the measurement program will be communicated, marketed, reported, discussed, supported, trained and, if necessary, escalated.  This level of planning needs to include tasks such as:

  • Develop activity/timeline calendar
  • Identify topics for newsletter articles
  • Create articles
  • Publish articles
  • Identify topics for education/awareness sessions
  • Schedule sessions
  • Conduct sessions

6 – A full change management plan has been developed, implemented and is being constantly monitored.

3 – An Organizational Change Plan is planned but has yet to be developed.

1 – When created, the Organizational Change Plan will be referenced occasionally.

0 – No Organizational Change Plan has been or will be created.

Support Note: Even when a program reaches the level of on-going support, an overall organizational change and marketing plan is needed.  Adding energy to keep the program moving and evolving is necessary, or entropy will set in.  Any process improvement will tend to lose energy and regress unless energy is continually added.

Backlog

The backlog records what needs to be changed, listed in prioritized order. The backlog should include all changes, issues and risks. The items in the backlog will be broken down into tasks.  The format needs to match the corporate culture and can range from an Agile backlog or a Kanban board to a Microsoft Project schedule.

6 – A prioritized backlog exists and is constantly maintained.

3 – A prioritized backlog exists and is periodically maintained.

1 – A rough list of tasks and activities is kept on a whiteboard (but marked with a handwritten "do not erase" sign).

0 – No backlog or list of tasks exists.

Support Note:  Unless you have reached the level of heat death that entropy suggests will someday exist, there will always be a backlog of new measurement concepts to implement, update and maintain. The backlog needs to be continually reviewed, groomed and prioritized.

Governance

Any measurement program requires resources, perseverance and political capital. In most corporations these types of requirements scream the need for oversight (governance is a friendly code word for the less friendly word oversight). Governance defines who decides which changes will be made, when changes will be made and who will pay for the changes. I strongly recommend that you decide how governance will be handled and write it down. Make sure all of your stakeholders are comfortable with how you will get their advice, counsel, budget and, in some cases, permission.

6 – A full governance plan has been developed, implemented and is being constantly monitored.

3 – A governance plan is planned but has yet to be developed.

1 – When created, the governance plan will be used to keep the process auditors off our back.

0 – Governance . . . who needs it!

Next  . . . Attitude. You have to have one and you have to manage that attitude to successfully lead and participate in organizational change.


Categories: Process Management

Resources for Moving Beyond the "Estimating Fallacy"

Herding Cats - Glen Alleman - Fri, 01/03/2014 - 00:00

A Fallacy of Estimation piece has an interesting phrase. 

Screen Shot 2014-01-02 at 2.52.25 PM

The post goes on to say...

Screen Shot 2014-01-03 at 8.05.30 AM

Here's an ongoing collection of fallacy-of-estimating commentary: some useful, some misinformed, some not even wrong. Moving beyond personal opinion into the realm of actual processes, tools, and people who estimate for a living probably has value when you're assigned a project that spends non-trivial amounts of money.

Ignoring for the moment the uninformed notion that estimating is the smell of dysfunction - since no dysfunctions have been mentioned in that context, let alone corrective actions other than to Not Do Estimates - there is in fact a critical issue about estimating in all domains.

Of course that last statement also ignores the time phasing of estimates. Before the project starts, what's our risk exposure for the cost of this project? The "let's get started and find out how much this will cost" approach is like Steve Levitt's Freakonomics description of drug dealers: here, just try this, I'll give it to you for free. Now of course Not Estimating has no connection to getting people hooked on crack cocaine, but "let's get started spending your money and we'll find out later how much you're going to have to commit to this project" sounds a bit like the bait and switch of the 1970's with Cal Worthington in Southern California, when he'd advertise a car that didn't exist, get you to come down, then up-sell everything - ah, the good olde days.

I just heard the story on NPR yesterday about the Spanish firm that will Stop Work on the widening of the Panama Canal because they've overrun by a billion dollars.

The Fallacy of Estimating term is used many times without attribution. It starts with Daniel Kahneman and Amos Tversky and the difficulties humans have in making estimates. Kahneman's recent book Thinking, Fast and Slow is a continuation of their thesis. But the core of the thesis is contained in a few critical papers that must be read before drawing any conclusion, and most importantly read before listening to anyone who has read them.

But there's another set of knowledge needed to be successful in the estimating business, and that is the acknowledgement that all estimates are probabilistic. This can't be said enough. The place to start is Probability Methods for Cost Uncertainty Analysis: A Systems Engineering Perspective, Paul R. Garvey. This book is the anchor for everything we do in the cost, schedule, and technical performance estimating business on software intensive programs. Mr. Garvey's work at Mitre is the basis of many of the tools and processes used in our domain.
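As a minimal illustration of what "all estimates are probabilistic" means in practice, here is a tiny Monte Carlo sketch. The two work packages and their three-point ranges are invented, and the triangular distribution is one simple, common choice for cost uncertainty, not the only one:

```python
import random

random.seed(7)  # reproducible sketch

# Three-point estimates (low, most likely, high) in dollars for two
# invented work packages.
work_packages = [(40_000, 60_000, 110_000), (20_000, 35_000, 80_000)]

# Simulate 10,000 possible project totals by sampling each package.
totals = sorted(
    sum(random.triangular(low, high, mode) for low, mode, high in work_packages)
    for _ in range(10_000)
)
p50 = totals[len(totals) // 2]        # the 50% confidence cost
p80 = totals[int(len(totals) * 0.8)]  # a more conservative commitment
print(f"P50 estimate ≈ ${p50:,.0f}; P80 estimate ≈ ${p80:,.0f}")
```

The point is that the output is a distribution, not a number: you commit at a confidence level (P50, P80), and the gap between them is your cost risk.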

The last paper is one you must have on your desk if you're actually interested in solving the fallacy of estimating. 

How Did We Get Into This Estimating Fallacy Mess?

The original explanation for the estimating fallacy was that planners focused on the most optimistic scenario for the project, rather than using past performance, subject matter experts, or parametric models of how much time similar work would require given similar conditions. This is the Optimism fallacy. What could possibly go wrong? Well, we know the answer to that now, don't we? This is common in our defense and space procurement world and most other worlds where high-stakes projects are driven by politics. It's a fundamental axiom of life: No Guts, No Glory. When the value at risk is high, conservative actions go by the wayside.

Another explanation - one found in our domain as well - is the Authorization Imperative. If we want to get our program approved, we'd better not tell them how much it will cost. The James Webb Space Telescope and Joint Strike Fighter are good examples of that. JWST is currently at something like $7B; it started out at less than $1B. Similar for JSF.

So What Next?

It's the olde saw: "Doctor, Doctor, it hurts when I do this." "Then stop doing that." This of course is utter nonsense when it comes to estimating. "Doctor, doctor, we can't make good estimates. Estimates are the smell of dysfunction (with none listed)." "OK, then stop estimating and start spending." Yea, right!

A few facts of life:

  • Building products or supplying services for money almost always means spending other people's money. If it's your own money, do as you wish. If it's other people's money, they get to say what you do with it. They shouldn't be acting like Dilbert's boss. BTW, we can find examples of Dilbert bosses everywhere. That's trivial. Pointing out those problems is child's play. How about providing solutions? They should understand enough about business management to know that all estimates are probabilistic. That all estimates have built-in risks, some reducible, some irreducible, all knowable, many not known. If you have Unknowable risks, you'd better not start the project until you get those back into the knowable column.
  • When those with the money give you the money, they expect you to spend it wisely. That means you have some notion of what you're going to spend it on to produce value for those who gave it to you, and that you also know, to some degree of confidence, how much money you are going to need to spend to deliver the expected value the customer has given you the money for.

How Can We Get Better Estimates?

Let's assume for a moment that we understand why we need to estimate how much of other people's money we're going to spend. 

How do we get better? The answer is simple: we're spending other people's money, and they likely want to know how much, when, and what they are getting. We start by looking to the tried and true, field-proven, tractable approaches to estimating. This is not a platitude. We start by doing our homework: reading books and papers, looking at tools, asking others "how do you do this?" We do what any person learning to do something new would do. We look to others first.

What's Out There to Learn?

There are lots of sources for learning to estimate. But the first - field-proven - way is Reference Class Forecasting. It's the basis of most estimating processes in use today.

Bent Flyvbjerg has lots to say on this. But some care is needed; he tends to overstate the intent of planners and estimators making poor estimates as liars. Maybe that's the case at times - I know of a few programs - but it's overstating nonetheless. Calling people liars is fightin' words where I come from in the Texas Panhandle, home of T. Boone Pickens, Dog the Bounty Hunter and Randy Matson.

Let's assume you actually want to improve your estimating skills, abilities, and probability of success. Where do you start? First, you start by ignoring those who say it can't be done, because it can. Then ignore those who say estimates are the smell of dysfunction, because estimates are part of any credible business process - period. OK, if you believe in Unicorns and Pixie Dust, you might believe that making estimates is a dysfunction. Making estimates for the wrong reason is a dysfunction. But doing anything for the wrong reason is a dysfunction. Learn to do things for the right reason and, as the poster campaign at Rocky Flats said:

Don't do stupid things on purpose

By the way, the notion of drip funding is fine. But it does not answer the question: what is our estimate at complete? Drip Funding is also called Time Boxed Scheduling and has been around for decades. Here's a small amount of money and a list of things I want you to do. Go do them, come back and we'll talk more. If you did them for more or less the money provided, good. If not, you've now got information to calibrate the future capacity for work. This is called Reference Class Forecasting.
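A sketch of that calibration step, using invented throughput numbers from past timeboxes:

```python
# Invented throughput (scope completed per timebox) from four past "drips",
# used as a small reference class to calibrate the next one.
past_timeboxes = [21, 18, 25, 20]

average = sum(past_timeboxes) / len(past_timeboxes)   # 21.0
worst_case = min(past_timeboxes)                      # 18
print(f"Plan the next drip around {average:.0f}; commit to no more than {worst_case}.")
```

Each completed timebox adds a data point, so the calibration gets better the longer the project runs.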

If you can't estimate credibly, you won't be in business long. From the lawn care guy who cuts our grass every week to the builder of the Joint Strike Fighter. OK, they're still in business ;<(

Getting Started

So the first place to start is to inform yourself how others do credible estimating. Let's start with How to Estimate if You Really Want To. But those aren't enough; you'll need some tools. And before you listen to anyone telling you tools get in the way of innovation and understanding, ask if you can do non-stationary stochastic modeling with Markov chain Monte Carlo over dependent process flows in your head, or by exchanging words between people. OK, back to the problem at hand.

There are many starting points for probabilistic estimating, but they all have one thing in common: we need to know what the problem is. For software, we need to know what capabilities the customer would like to possess when the project is DONE. This is called Capabilities Based Planning. Capabilities aren't requirements. Capabilities reveal the requirements, and requirements enable the capabilities to exist. Here's an example of a set of evolving capabilities for a health insurance ERP system:

Screen Shot 2014-01-02 at 4.12.01 PM

So once we have something like this, we can start to decompose the parts into bite-sized chunks - just what those suggesting drip funding need for success. These are Drips of work.

Next you'll need to suspend disbelief, just like with the Unicorns. The suspension goes like this.

For the vast majority of commercial and a whole lot of military and space software systems, there is nothing new under the sun.

You may not personally have had experience with this new requirement. You may not even have heard of such a thing. But help is at hand - Google. Start there; someone, somewhere, somehow has built something similar. Find it, ask them, do your homework, build a Reference Class. Can't build the Reference Class? OK, then spend some money to build a prototype. Charge the customer for this exploratory effort. This, by the way, is called agile development. Try a little, learn a little. Try some more, learn some more. Improve your probability of success with direct experience. But make sure you get paid for this. It's part of the project. Exploring like this on other people's money without them knowing it, or without them paying you for it, is really bad business. That's a true dysfunction.
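Once built, a reference class is used very simply. Here is a sketch with invented historical durations; real reference classes would also adjust for size and complexity differences between the past work and the new work:

```python
# Invented actual durations (weeks) of similar past projects -- the
# reference class -- used to bound the estimate for the new work.
reference_class = [12, 15, 9, 20, 14, 17, 11, 26, 13, 16]

ordered = sorted(reference_class)
median = ordered[len(ordered) // 2]        # 15 weeks: a 50/50 answer
p80 = ordered[int(len(ordered) * 0.8)]     # 20 weeks: a more conservative commitment
print(f"Median ≈ {median} weeks; 80th percentile ≈ {p80} weeks")
```

The outlier (26 weeks) stays in the data on purpose: reference class forecasting works precisely because past performance already contains the surprises that optimistic plans leave out.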

Now For Some Tools

Here's my list of favorites. They're favorites because I use them or know people who do:

There are likely others, so if you have one send me a note.

In the End

It's not your money, behave appropriately

Related articles: How To Estimate Almost Any Software Deliverable; How To and How Not To Make Credible Estimates of Cost and Schedule; Vikas Shah interviews Daniel Kahneman; Facts and Fallacies of Estimating Software Cost and Schedule; Measurement of Uncertainty
Categories: Project Management

Project Accounting

Herding Cats - Glen Alleman - Thu, 01/02/2014 - 22:41

The separation of Project Accounting from Financial Accounting is common in many firms. Financial Accounting is the domain of the CFO or equivalent and the finance and accounting department, where payments of invoices to customers and from suppliers are processed.

Project Accounting, sometimes called Job Cost Accounting, is usually the domain of the Project Manager, where budgets are assigned and records of work performed and the cost of that work are reported. But actual money doesn't change hands.

In the Financial Accounting domain, all costs and all income are recorded as transactions on the General Ledger. These transactions are rolled up through accounts to produce a Balance Sheet of cost versus value in some form. Sometimes that value is revenue. Sometimes it's business value recorded as an intangible asset. Managing the intangible assets of a firm is the primary role of business leaders. The tangible assets are usually managed in accounting, by the department with the same name.

I'm making this simple, and likely simple-minded, for the point that is coming.  Who owns the overhead costs that are present in every single business? For larger, complex projects here's one framework for doing that, but we'll keep it simple for now.

But before going simple, here's a quick look at FASB 86, the guidance for accounting for the development and use of software. One quick notion is missed by many in the agile world not working the arcane processes of public accounting ...

Screen Shot 2014-01-02 at 2.37.59 PM
The Agile Alliance is working this issue. 

So Back To Project Accounting Processes

Projects spend budget. Businesses spend Funds. If you work on a project, rarely does someone on the business side come to you with an envelope full of money - dead presidents - and say go hire some developers to write code for this project. Those authorizing you to hire people, or buy materials, do so with a Purchase Order of some sort (we have PCARDS that skip all the paperwork), or some authorization from HR to hire. The employee or contractor then charges to a charge number or Employee Number to capture their expenses (labor or material).

That expense is then wrapped with fringe, overhead, and other indirects and recorded on the books as the Fully Burdened cost to the project. 

Someone Has to Pay

No matter where you work in the organization, the all-in cost for the project has to be paid. This includes your direct labor: what you get in your paycheck, plus those fringe benefits - health insurance, matching 401(k), AMC movie passes, spot awards. Your firm has to pay. If you're a 1099 supplier, then it's you who has to pay. As a 1099, if you're not billing your customer for all that overhead, then those costs are reducing your revenue. Same for a firm.

So if you're not concerned about overhead, don't manage the project knowing the overhead. Not a problem in principle. In practice, though, someone does care, and if they care, you may want to care as well. If not, then you're likely working as labor on the project. That's in NO way a problem. We're all labor in some form to someone. But when it comes to managing projects and managing the business that projects use or produce, these indirect costs can make or break the project and/or the business. Knowing about them, and managing in their presence, is simply the responsible thing to do - you know, like knowing what your project will cost when it's done.

Here's some background that has served me well over time as a Program Planning and Control manager.

Categories: Project Management

New Year’s Resolutions for ScrumMasters and Product Owners

Mike Cohn's Blog - Thu, 01/02/2014 - 20:59

With the new year, it's time for some resolutions. I've got the same old ones (lose weight, eat better, get more sleep, help more old ladies across the street, stop calling every cat I see "Fajita," and such). But since I fail at those every year, I thought perhaps it would be better for all of us if we made some Scrum-related resolutions. And so here are a couple of suggestions for both ScrumMasters and Product Owners. Each is a resolution I've made in the past during my time in each role.


For ScrumMasters


Let's start with two possible resolutions ScrumMasters may want to make:

  1. Always let team members speak before you do in meetings. This was a hard one for me. I'm both opinionated and impatient. When an issue comes up during a meeting, it can be hard for me not to instantly blurt out my opinion. This often shuts down debate that might otherwise have occurred.
 
I finally got over this problem (mostly) years ago when I resolved to always let two team members give their opinions before I (as a combination ScrumMaster / developer) would share my own.
  2. Praise the team more often. I am very much a glass-is-half-empty type of guy. A team cuts their defect rate by 50 percent, and I want to know why not 100 percent. Velocity goes up by 5 points, and I think they could have gotten one more point. Part of this can be good; I am definitely always looking for ways to get better. And I put the same pressure on myself. But, I've learned that, of course, it can be depressing for a team that is making tremendous improvements to always hear about how there is still room to improve. Prevent this from becoming a problem by making it a priority to praise them more often.

For Product Owners

Here are a couple of resolutions Product Owners may want to make:

Redirect the team less often. It is extremely tempting to redirect the team every time you come up with a great new idea. Of course, you know how bad this can be, as the team gets buffeted from one top priority to the next. Simply resolving to stop redirecting them, though, is unlikely to work. So here are two things you can do to help achieve this resolution:

  1. Sit on changes for a day before introducing them to the team. Just commit to yourself that no matter how good or urgent an idea seems, you will hold off for one day before asking the team to work on it. Rarely is a change so critical that it must be started immediately. Stalling for even a day gives you time to reconsider. If the change seems as critical tomorrow, go ahead and interrupt the team.
  2. Write it down somewhere. Often telling the team to work on something new is just a way for the Product Owner to get the item out of his or her head. You can also achieve this by making note of the desired change. If you're using a tool to manage your product backlog, add it so it'll be visible when you plan the next sprint. If you don't use a tool, note the item in a simple text file you can review before each sprint planning meeting.
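If you don't use a backlog tool, the "write it down" habit can be as simple as appending to a dated text file. A minimal sketch (the file name and one-line format here are assumptions, not a prescribed tool):

```python
from datetime import date

BACKLOG_NOTES = "backlog-ideas.txt"  # assumed file name; any text file works

def note_idea(idea: str, path: str = BACKLOG_NOTES) -> None:
    """Append a dated one-line note instead of interrupting the team."""
    with open(path, "a") as f:
        f.write(f"{date.today().isoformat()}  {idea}\n")

def review_ideas(path: str = BACKLOG_NOTES) -> list[str]:
    """Read the notes back before the next sprint planning meeting."""
    try:
        with open(path) as f:
            return [line.strip() for line in f if line.strip()]
    except FileNotFoundError:
        return []  # nothing noted yet
```

The point is the ritual, not the tooling: `note_idea("Add CSV export to reports")` gets the item out of your head, and `review_ideas()` puts it back in front of you exactly when it belongs - at sprint planning.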

Be more available. I've surveyed a few agile teams about what their Product Owners could do better; "Be more available" always ranks near the top of the results. If you're a Product Owner, resolve that 2014 will be the year in which you fully address this problem. Here are a couple of things you can do to make that happen:

  1. Spend time in the team's area. If your desk is away from the team's shared workspace, do something like tell the team you will spend from 1 p.m. to 3 p.m. every day in their area. This time isn't for any specific meeting (although meetings can occur during this time). Rather, you just bring your laptop, find an empty desk or surface in their area, and do your normal work right there. If the team needs you, you're just a few steps away. When they don't, you just do your normal work but near them.
  2. Share a secret code with the team. Establish a rule with the team that if they email you with something like "[Today]" in the subject line of an email, you will respond before going home for the day. (Or perhaps before going to bed at night, if working remotely from the team.) Discuss with the team what constitutes appropriate use of this. For example, they shouldn't email you 100 times per day with this secret code. (If they need to, see the item above about spending time in the team's work area. You've got a problem that can't be solved by email.)
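The secret-code rule is easy to automate on the Product Owner's side. A minimal sketch of scanning subject lines for the agreed marker (the "[Today]" code is from the post; treating subjects as plain strings is an assumption about however you pull them from your mail client):

```python
def needs_reply_today(subjects: list[str], code: str = "[Today]") -> list[str]:
    """Return the subject lines flagged with the team's agreed-upon code.

    Matching is case-insensitive so "[today]" and "[TODAY]" also count.
    """
    return [s for s in subjects if code.lower() in s.lower()]
```

Run it against the day's inbox before you leave, and the returned list is exactly the set of messages you promised to answer before going home.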
Last, but Not Least

Resolve to Improve. Whatever you choose for 2014, resolve that by the end of the year, you and the team you're working with will be better than you are today. And it wouldn't hurt if you ate better and got more sleep this year, too.

In the comments below, please let me know what agile-related resolutions you've made for 2014.

xkcd: How Standards Proliferate:

The great thing about standards is there are so many to choose from. What is it about human nature that makes this so recognizably true?

Categories: Architecture

Habits Are More Powerful Than Goals

Making the Complex Simple - John Sonmez - Thu, 01/02/2014 - 17:30

How many times have you failed at your New Year's resolutions? Do you really think this year is going to be any different? Of course not, unless you are willing to try something different. In this video, I talk about why habits are much more important than goals themselves and how you can use the power of […]

The post Habits Are More Powerful Than Goals appeared first on Simple Programmer.

Categories: Programming

Are You a Creative Networker?

NOOP.NL - Jurgen Appelo - Thu, 01/02/2014 - 12:41
Are You a Creative Networker?

I came up with the term creative networker as an alternative to "knowledge worker", which is a bit outdated in my opinion. I think the term "creative networker" is better able to express that we're moving into a creative economy and that almost all work we do is part of a network of activities.

The post Are You a Creative Networker? appeared first on NOOP.NL.

Categories: Project Management

Mr. Langella Never Does it the Same Way Twice

James Bach’s Blog - Thu, 01/02/2014 - 12:40

This is from the New York Times:

Its other hallmark is that Mr. Langella never does the part the same way twice. This is partly because he's still in the process of discovering the character and partly because it's almost a point of honor. "The Brit approach is very different from mine," he said. "There's a tendency to value consistency over creativity. You get it, you nail it, you repeat it. I'd rather hang myself. To me, every night within a certain framework - the framework of integrity - you must forget what you did the night before and create it anew every single time you walk out on the stage."

I love that phrase the framework of integrity. It ties in to what I’ve been saying about integrity and also what is true about informal testing: if you are well prepared, and you are true to yourself, then whatever you do is going to be spontaneously and rather effortlessly okay, even if it changes over time.

I often hear anxiety in testers and managers about how terrible it is to do something once, some particular way, and then to forget it. What a waste, they say. Let’s write it all down and not deviate from our past behavior, they say. Well I don’t think it’s waste, I think it’s mental hygiene. Testing is a performance, and I want to be free to perform better. So, I make notes, sure. But I am properly reluctant about formalizing what I do.

Doing your best work includes having the courage to let go of pretty good work that you’ve done before.

Categories: Testing & QA

Integrity 2: On Being Under the Radar [REVISED]

James Bach’s Blog - Thu, 01/02/2014 - 03:16

I have taken down the original text of this post at the request of my colleague who had the courage and audacity to let me post his detailed comment about how he works “under the radar” to change things in his company.

I had posted his comment originally with his permission, of course. But, apparently, in his country, “it’s illegal to harm [one's] employer’s business” and it can reasonably be considered doing harm to express a low opinion of your own company’s behavior, even if you are dedicated to improving that behavior. Dirty laundry in public is arguably bad for business, if your business involves telling people that you’re a trustworthy expert, and your laundry says otherwise.

Of course this is understandable. Working under the radar generally means not being public about what you are doing. Therefore, as much as I prefer the clean feeling of working ON the radar, I wish him good luck with his mode of influencing a big, commercial, ceremonial system.


Categories: Testing & QA

Project Maxims

Herding Cats - Glen Alleman - Thu, 01/02/2014 - 02:36

It is popular to make a list of maxims for developing products, managing projects, or managing business processes. Some are based on experience, some based on surveys, some based on principles and practices of a profession.

Here's mine, based on counter-examples to the sole contributor paradigm. The sole (or small group) contributor paradigm means maxims gathered from a person's own experience and engagement on the job. One example for the sole contributor, used without permission and with full attribution, is Woody Zuill's list. There are others: Five Project Maxims, 18 Maxims of Successful IT Consulting, and more. But I like Woody's framework best, because his topics fit best with our processes on complex, mission critical, software intensive programs and the hands-on integration with process. Although Woody would likely not agree, both technical skills and formal process frameworks are critical success factors in any sufficiently complex domain - both are needed.

Doing the work is guided by the Strategy and Performance Goals of the needed Capabilities.

Without a clear and concise understanding of what DONE looks like in Measures of Effectiveness for the needed capabilities, all the project work has no home. It's just a list of features or functions captured by the development team from the customer or product owner.

It's the capability to accomplish a business strategy that defines the mission and vision of the project. Why are we doing this project? How will we recognize we've accomplished our mission? The capabilities delivered by the project start with the fulfillment of Critical Success Factors, which in turn implement a Performance Goal in support of a Strategic Objective that measurably benefits the business or supports a mission.

Responding to Change is impossible unless the system is easy to change, easy to maintain, easy to fix, and easy to enhance.

The ability to easily change a product or a process starts and ends with the architecture. This understanding began with Notes on the Synthesis of Form, Christopher Alexander, 1964. It's the architecture that enables the change, assuring that coupling among the components is minimized, cohesion between the static and dynamic processes is maximized, and separation of concerns is traceable to all architecture decisions. If you're developing these as you go - allowing them to emerge - you're going to be disappointed when you discover your product is coupled in ways you didn't know, has weak cohesion among its parts, and has cross-cutting concerns that result in a tangled mess when you start to make changes.

The notion that the best architectures emerge is suggested by those not working on complex systems of interdependent components, but on systems with lower levels of complexity between the components. Imagine an enterprise ERP system, a software intensive manufacturing system, the 32 flight and weapons computers on the F-35, the multiple levels of interaction of the Future Combat System (I worked the rebaselining of the IMP/IMS for Class I), or the process control systems found in a nuclear power station.

Now ask, would you like the architecture of these software systems to emerge as the development takes place?

Here's where to start for architecture in the enterprise IT domain. There are architectures for realtime embedded systems as well. For defense systems, DoDAF is the architecture framework. So when you hear responding to change, ask: what's the mechanism that allows you to do that when the system you're working on is complex, high risk, and critically important - say banking, navigation and control, oil & gas supply chain, electric power generation and delivery, health care, drug development, retail, transportation? You get the idea.

The notion of Question Everything ignores the fundamentals of every professional process improvement paradigm. 

Working on projects is not about the needs of the individual. It's about the needs of the whole. Personal desires must be subordinate to the needs of mission success. It's not about you. It's about the customer and the governance framework in which the customer operates her business or fulfills her mission.

Questions are great; you can learn a lot from questions. But questions asked without doing your homework are a waste of your time and the time of those you are asking. Go do your homework. Learn about ITIL, COBIT, INCOSE Systems Engineering, SEI, and other professional frameworks first. Then you'll have a basis for your questions. Then start with the root cause test for your questions. When someone says those haven't worked in their experience, don't just ask the 5 Whys, seek the root cause.

The Whys approach may be able to reveal the symptoms. But to get at the root cause, a deeper assessment is needed - one based on a process framework, a place for the Whys to land. Why didn't the work team follow the established test procedures? Why didn't the customer establish a set of needed capabilities before we started developing stories for the software development effort? These whys then reveal the root cause. The whys need to have actionable outcomes, not just the question. First graders can ask why.

Process improvement needs to ask why, but it can only deliver value when there is an actionable answer. No actionable answer in units of measure meaningful to the decision makers? The question everything paradigm is Muda (waste).

Working Product is product that meets the Technical Performance Measures (TPM), the Measure of Performance (MoP), and Measure of Effectiveness (MoE) as defined by the Concept of Operations (ConOps), Statement of Objectives (SOO), and Statement of Work (SOW).

Without stating these attributes of the working product there is no way to tell if it is the right working product. Right for the needed capabilities. Right for the strategy. Right for the technical, operational, and performance requirements. Simply saying working product in the absence of these measures is ignoring the large context of effectiveness and strategic value. When we hear many software features have little value, we can only determine that if the planned strategic value is defined and tested along the way. This is NOT Big Design Up Front. It is the core of strategy making. 

But the notion of having working software be put to immediate use needs a domain and context for it to be useful. Otherwise it's just another platitude of the agile vocabulary. Working on orbit for a navigation and guidance computer may not happen for 9 months - that's the time it takes to get to Mars. So working needs an operational definition. Working in the full fidelity emulator of the spacecraft. Working in the complete Verification and Validation (100% thread coverage) of the emergency shutdown system (I was one of the original architects of this system). Working in the full transaction processing system test bed.

Crunch-time is a symptom of harmful and counter-productive attitudes.

It's got nothing to do with attitudes, and everything to do with competent and mature business management and processes. Newspapers have crunch time every day, sometimes twice or three times a day. Banks have crunch time every month. Surgeons have crunch time once they make their first cut. Three miles out onto a hot LZ in I-Corp, 1969, is crunch time delivering critical supplies to Fire Base Rip Cord. Flying to New York City has crunch time every time the 777 pushes back from the gate at LAX. It's not attitude, it's the competency to manage in the presence of uncertainty and deliver as promised, because you've been trained, have experience, and have a support system. But in the end it's the process. Process rules.

Knowing the capacity for work starts with knowing the demand for work. Throughput can only be determined if you know both demand and capacity. Then and only then, can you add margin for the irreducible uncertainties. And reserve for the reducible uncertainties. 

We are only innovators of our process if we are capable of providing the innovative solution within the governance framework of our business.

If it ain't broke, don't fix it. If it's broken, first find the root cause and fix that cause. Rarely in modern business is there a broken process that didn't work right at one time. A critical success factor for all process improvement is to determine the root cause of the failure. Then and only then, examine if there is a process problem. If so, fix the process. If not, fix the application of the process. Stop wasting time looking for solutions to the wrong problem.

The object of all projects is to deliver value to those paying you to do the work.

Writing software for money is not the same as producing art for money. If you're producing art for money, you're probably not a very good artist. If you're treating your job of producing value for money as art, you're probably not getting a lot of repeat customers.

Customers bought the capability to do something; they only care if you're self-actualized if they are a relative. Customers are happy when you've fulfilled their need to possess a capability for the expected cost on the expected day. There must be lots of opportunities for participants on a project to receive personal satisfaction, grow together as a team, increase their skills, and even be innovative - but the customer is rarely willing to pay for that directly. It had better be wrapped in the overhead rate. Good artists copy, great artists steal - Pablo Picasso. Good firms hire people already prepared to succeed. Read Making the Impossible Possible: Leading Extraordinary Performance: The Story of Rocky Flats for specific actionable advice on doing all the things needed for success, including all the people processes. The abstract is here.

We must understand that improvement is hard work. There is no free ride.

Nobody Ever Gets Credit for Fixing Problems that Never Happened: Creating and Sustaining Process Improvement is a start. The authors suggest less than 10% of the firms adopting Toyota's TQM actually apply it properly. This loops back to the question everything nonsense, when the questioning is uninformed by the missing root cause analysis of the dysfunction. The source of dysfunction in the workplace must be determined before any suggestions for improvement can be made. Stating that something is the smell of dysfunction is like asking what's that rotten smell? as we drive by the recycling center. Look out the window and see the source. Find the source before doing anything.

At the end of the day successfully managing projects is hard work. But there is plenty of advice. This is one of my favorites. 

Ten rules for common sense program management from Glen Alleman

So the wrap-up here starts with establishing an architectural framework: a framework for product development, programmatic management (cost, schedule, risk, performance assessment), and most of all a framework for responding to the rapidly emerging forces of the marketplace, technology, and competition. Remember Steve Jobs' ideas on innovation.

Related articles: Agile Project Management Requirements Elicitation; Performance Based Management; The "Real" Root Cause of IT Project Failure; One of the Problems with Emergent Design; What's Missing from Project Management in IT
Categories: Project Management