
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

The Easy Way of Building a Growing Startup Architecture Using HAProxy, PHP, Redis and MySQL to Handle 1 Billion Requests a Week

This Case Study is a guest post written by Antoni Orfin, Co-Founder and Software Architect at Octivi

In this post I'll show you how we developed a quite simple architecture based on HAProxy, PHP, Redis and MySQL that seamlessly handles approximately 1 billion requests every week. I'll also note possible ways of scaling it further and point out the uncommon patterns that are specific to this project.
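The full post carries the details, but the core pattern behind a Redis-in-front-of-MySQL stack is a read-through cache. Here is a minimal Python sketch of that pattern; plain dicts stand in for the real Redis and MySQL services, and all names and data are illustrative, not taken from the Octivi architecture:

```python
# Read-through cache: check the fast store (Redis) first; on a miss,
# fall back to the durable store (MySQL) and populate the cache.
# Plain dicts stand in for both services in this sketch.

class ReadThroughCache:
    def __init__(self, database):
        self.cache = {}           # stands in for Redis
        self.database = database  # stands in for MySQL
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        value = self.database.get(key)  # a SELECT in the real stack
        if value is not None:
            self.cache[key] = value     # a Redis SET in the real stack
        return value

db = {"user:1": "Antoni", "user:2": "Anna"}
store = ReadThroughCache(db)
print(store.get("user:1"))  # first read: miss, loaded from the database
print(store.get("user:1"))  # second read: hit, served from the cache
print(store.hits, store.misses)
```

At billions of requests, the point of this shape is that the hot path touches only the in-memory store, with the database absorbing misses and writes.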

Categories: Architecture

How to Negotiate Your Salary

Making the Complex Simple - John Sonmez - Mon, 08/11/2014 - 15:00

I’m often surprised how many software developers neglect to do any salary negotiations at all or make a single attempt at negotiating their salary and then give up and take whatever is offered. Negotiating your salary is important—not just because the dollars will add up over time and you could end up leaving a lot […]

The post How to Negotiate Your Salary appeared first on Simple Programmer.

Categories: Programming

The AngularJS Promise DSL

Xebia Blog - Mon, 08/11/2014 - 10:21

As promised in my previous post, I just pushed the first version of our "Angular Promise DSL" to Github. It extends AngularJS's $q promises with a number of helpful methods to create cleaner applications.

The project is a v1; it may be a bit rough around the edges in terms of practical applicability and documentation, but that's why it's open source now.

The repository is at https://github.com/fwielstra/ngPromiseDsl and licensed as MIT. It's the first OS project I've created, so bear with me. I am accepting pull requests and issues, of course.

Questions? Ask them on the issues page, ask me via Twitter (@frwielstra) or send me an e-mail. I'd invite you to come by my office too... if I had one.

Ocean Revival

Phil Trelford's Array - Sun, 08/10/2014 - 23:20

This weekend I had the pleasure of sitting on the Ocean Q&A panel at Revival 2014 in Wolverhampton. I worked at Ocean in Manchester in my early 20s on titles like Jurassic Park (PC & Amiga) and Addams Family Values (SNES & Megadrive). It was fun to reminisce about the good old days with other former Ocean employees and people who’d played the games.

Ocean panel

The panel closely coincided with the public release of The History of Ocean Software book by Chris Wilkins which was funded as a Kickstarter:

Ocean the history

There were plenty of old games to play at the event too. I particularly enjoyed Rez on a PS2, Omega Race on a Vic-20 and a Flappy Bird clone on a Commodore 64.

Flappy Bird C64

When we got home, my 7yo and I pulled the Vic-20 out of the garage, and played some more Omega Race with the joystick we’d just picked up:



My 7yo has been picking up Python recently, with a Coding Club - Python Basics book.

One of the tasks is to print out the 5 times table:

number = 1
while number <= 12:
    print(number, "x 5 =", number * 5)
    number = number + 1

Funnily enough Vic-20 Basic (circa 1981) was easily up to the challenge too:

5 times table

And good old FizzBuzz was no bother either:

FizzBuzz Vic-20

Then my son had a go at Magic 8-ball, but sadly lost the code he'd spent a while typing in when it was closed, so we re-wrote it in F# so there was less to type:

let answers =[
   "Go for it!"
   "No way Jose!"
   "I'm not sure. Ask me again."
   "Fear of the unknown is what imprisons us."
   "It would be madness to do that!"
   "Only you can save mankind!"
   "Makes no difference to me, do or don't - whatever."
   "Yes, I think on balance that is the right choice"
   ]

printfn "Welcome to Magic 8-Ball."

printfn "Ask me for advice and then press enter to shake me"
System.Console.ReadLine() |> ignore

for i = 1 to 4 do printfn "Shaking..."

let rand = System.Random()
let choice = rand.Next(answers.Length)

printfn "%s" (answers.[choice])

Why not dig out your old computers and have some programming and games fun! :)
Categories: Programming

SPaMCAST 302- Larry Maccherone, Measuring Agile

Software Process and Measurement Cast - Sun, 08/10/2014 - 22:00

www.spamcast.net

Listen to the Software Process and Measurement Cast 302

Software Process and Measurement Cast number 302 features our interview with Larry Maccherone of Rally Software. We talked about Agile and metrics.  Measuring and challenging the folklore of Agile is a powerful tool for change!  Measurement and Agile in the same sentence really is not an oxymoron.

Larry’s Bio:

Larry is an industry-recognized Agile speaker and thought leader. He is Rally Software's Director of Analytics and Research. Before coming to Rally Software, Larry worked at Carnegie Mellon with the Software Engineering Institute for seven years, conducting research on software engineering metrics with a particular focus on reintroducing quantitative insight into the agile world. He now leads a team at Rally using big data techniques to draw interesting insights and Agile performance metrics, and to provide products that allow Rally customers to make better decisions. Larry is an accomplished author and speaker, presenting at major conferences for the lean and agile markets over the last several years, including the most highly rated talk at Agile 2013. He just gave two talks on the latest research at Agile 2014.

Contact information:

Rally Author Page

Email: lmaccherone@rallydev.com

Google+

Next

Software Process and Measurement Cast number 303 will feature our essay on estimation. Estimation is a hotbed of controversy, but perhaps first we should synchronize on just what we think the word means. Once we have a common vocabulary we can commence with the fisticuffs. In SPaMCAST 303 we will not shy away from a hard discussion.

Upcoming Events

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1. I have a great discount code! Contact me if you are interested!

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI's mission is to pull together the expertise and educational efforts of the world's leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes toward bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management


Quote of the Day

Herding Cats - Glen Alleman - Sun, 08/10/2014 - 18:27

"You cannot have an execution culture without robust dialog - one that brings reality to the surface through openness, candor, and informality. This is called 'truth over harmony.'"

From Execution: The Discipline of Getting Things Done, by Larry Bossidy and Ram Charan

Related articles:
  • Practices without Principles Does Not Scale
  • The Need for Strategy
  • How to Manage in the Presence of Uncertainty
Categories: Project Management

Seven Deadly Sins of Metrics Programs: Lust


The measurement/performance feedback loop causes an addiction to a single metric. The addict will exclude what is really important.

There is a famous adage: you get what you measure. When an organization measures a specific activity or process, people tend to execute so they maximize their performance against that measure. Managers and change agents often create measures to incentivize teams or individuals to perform work in a specific way and then to generate a feedback loop. The measurement/performance feedback loop causes an addiction to a single metric. The addict will exclude what is really important. Chasing the endorphins that the feedback will generate is the sin of lust in the measurement world. Lust, like wrath, is a loss of control which affects your ability to think clearly. Balanced goals and a medium- to long-term focus are tools to defeat the worst side effects of measurement lust. The ultimate solution is a focus on the long-term goals of the organization.

How does this type of unbalanced behavior occur? Usually measurement lust is generated by either an unbalanced measurement program or a performance compensation program. Both cases can generate the same types of unintended consequences. I call this the "one number syndrome". An example of the "one number syndrome" is when outsourcing contracts include penalty and bonus clauses based on a single measure, such as productivity improvements. Productivity is a simple metric that can be affected by a wide range of project and organizational attributes. Therefore focusing on measuring just productivity can have all sorts of outcomes as teams tweak the attributes affecting productivity and then review performance based on feedback. For example, one common tactic used to influence productivity is changing the level of quality that a project is targeting; generally higher quality generates lower productivity and vice versa. Another typical way organizations or teams maximize productivity is to throttle the work entering the organization. Reducing the work entering an organization or team generally increases productivity. In both examples, the feedback loop created by fixating on improving productivity may have unintended consequences.

A critical shortcoming caused by measurement lust is a shift toward short-term thinking as teams attempt to maximize the factors that will be used to judge their performance. We have all seen the type of short-term thinking that occurs when a manager (or an organization) does everything in their power to make some monthly goal. At the time the choices are made they seem to be perfectly rational. Short-term thinking has the ability to convert the choices made today into the boat anchors of the next quarter. For example, right after I left university I worked for a now-defunct garment manufacturer. On occasion salespeople would rush a client into an order at the end of a sales cycle to make their quota. All sorts of shenanigans typically ensued, including returns and sales rebates, but the behavior always caught up with them one or two sales periods later. In a cycle of chasing short-term goals with short-term thinking, a major failure is merely a matter of time. I'm convinced from reading the accounts of the Enron debacle that the cycle of short-term thinking generated by the lust to meet their numbers made it less and less likely that anyone could perceive just how irrational their decisions were becoming.

The fix is easy (at least conceptually). You need to recognize that measurement is a behavioral tool and create a balanced set of measures (frameworks like the Balanced Scorecard are very helpful) that encourage balanced behavior. I strongly suggest that as you are defining measures and metrics, you take the time to forecast the behaviors each measure could generate. Ask yourself whether these are the behaviors you want and whether other measures will be needed to avoid negative excesses.

Lust rarely occurs without a negative feedback loop that enables the behavior. Measures like productivity or velocity, when used purely for process improvement or planning rather than to judge performance (or for bonuses), don't create measurement lust. Balanced goals, balanced metrics, balanced feedback and balanced compensation are all part of a plan to generate balanced behavior. Imbalances in any of these layers will generate imbalances in behavior. Rebalancing can change behavior, but make sure it is the behavior you anticipate and that it doesn't cause unintended consequences by shifting measurement lust to another target.


Categories: Process Management


More Than Target Needed To Steer Toward Project Success

Herding Cats - Glen Alleman - Sat, 08/09/2014 - 15:49

There is a suggestion that only the final target of a project's performance is needed to steer toward success. This target can be budget, a finish date, the number of stories or story points in an agile software project. With the target and the measure of performance to date, collected from the measures at each sample point, there is still a missing piece needed to guide the project.

With the target and the samples, no error signal is available to make intermediate corrections to arrive on target. With the target alone, any variances in cost, schedule, or technical performance can only be discovered when the project arrives at the end. With the target alone, this is an Open Loop control system.

Control Systems (slides) from Glen Alleman. Pages 27 and 28 show the difference between Open Loop control and Closed Loop control of a notional software development project using stories as the unit of measure. In the figure below (page 27), the cumulative performance of stories is collected from the individual performance of stories over the project's duration. The target stories - or budget, or some other measure - is the final target. But along the way, there is no measure of are we going to make it at this rate? An Open Loop Control System:
  • Is a non-feedback system, where the output – the desired state – has no influence or effect on the control action of the input signal — the measures of performance are just measures. They are not compared to what the performance should be at that point in time.
  • The output – the desired state– is neither measured nor “fed back” for comparison with the input — there is not an intermediate target goal to measure the actual performance against. Over budget, late, missing capabilities are only discovered at the end.
  • Is expected to faithfully follow its input command or set point regardless of the final result — the planned work can be sliced into small chunks of equal size - this is a huge assumption, by the way - but the execution of this work must also faithfully follow the planned productivity. (See assumptions below.)
  • Has no knowledge of the output condition – the difference between desired state and actual state – so cannot self-correct any errors it could make when the preset value drifts, even if this results in large deviations from the preset value — the final target is present but the compliance with that target along the way is missing, since there is no intermediate target to steer toward for each period of assessment - only the final.
There are two very strong simplifying assumptions made in the slicing approach suggested to solve the control of projects:
  • The needed performance, in terms of stories or any other measure of performance, is linear and of the same size - this requires decomposing the planned work for each period into nearly identical sizes, work efforts, and outcomes.
  • The productivity of the work performed is also linear and unvarying - this requires zero defects, zero variance in the work effort, and sustained productivity at the desired performance level.
Fulfilling these assumptions before the project starts requires effort, and the assumptions about the homogeneity of the planned production, the homogeneity of the work effort, and the homogeneity of any defects, rework, or changes in plan would require near-perfect planning and management of the project. Instead, the reality of all project work is that the planned effort, duration, outcomes, dependencies, and cost are random variables. This is the nature of the non-stationary stochastic processes that drive project work. Nothing will turn out as planned, due to uncertainty. There are two types of uncertainty found in project work:
  • Irreducible Uncertainty - this is the noise of the project. Random fluctuations in productivity, technical performance, efficiency, effectiveness, and risk. These cannot be reduced. They are Aleatory Uncertainties.
  • Reducible Uncertainty - these are event-based uncertainties that have a probability of occurring, a probability of consequences, and a residual probability that, even when addressed, they will come back again.

Irreducible Uncertainty can only be handled with Margin. Cost margin, schedule margin, technical margin. This is the type of margin you use when you drive to work. The GPS navigation system says it's 23 minutes to the office. It's NEVER 23 minutes to the office. Something always interferes with our progress.

Reducible Uncertainty is handled in two ways: spending money to buy down the risk that results from this uncertainty, and Management Reserve (budget reserve and schedule contingency) to be used when something goes wrong, to pay for the fix when the uncertainty turns into reality.
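A quick way to see why irreducible uncertainty demands margin is a Monte Carlo sketch. This toy is illustrative only (the function, the uniform noise model, and all numbers are assumptions, not from the post): each task duration is a random variable around its mean, and a plan carrying zero margin is overrun roughly half the time.

```javascript
// Illustrative sketch: irreducible (aleatory) uncertainty vs. schedule margin.
// Each task's duration varies uniformly by +/- `noise` around its mean;
// we estimate how often the project exceeds the planned total plus margin.
function overrunProbability(taskMeans, noise, margin, trials, rand) {
  const planned = taskMeans.reduce((a, b) => a + b, 0);
  let overruns = 0;
  for (let i = 0; i < trials; i++) {
    const actual = taskMeans.reduce(
      (sum, mean) => sum + mean * (1 + noise * (2 * rand() - 1)), 0);
    if (actual > planned * (1 + margin)) overruns += 1;
  }
  return overruns / trials; // fraction of simulated projects that overrun
}
```

With margin = 0 the result hovers near 0.5; raising the margin drives it toward 0 - which is exactly the role cost, schedule, and technical margin play against the noise of the project.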

The next figure (page 28) shows how to manage in the presence of these  uncertainties, by measuring actual performance against the desired performance at each step along the way.

Screen Shot 2014-08-09 at 8.03.15 AM

In this figure, we measure at each assessment point the progress of the project against the desired progress - the planned progress, the needed progress. This planned, desired, or needed progress is developed by looking at the future effort, duration, risk, and uncertainty - the stochastic processes that drive the project - and determining what the progress at this point in time should be to reach our target on or before the need date, at or below the needed cost, and with the needed confidence that the technical capabilities can be delivered along the way. This is closed loop control.

The planned performance, the needed performance, the desired performance is developed early in the project. Maybe on day one, more likely after actual performance has been assessed to calibrate future performance. This is called Reference Class Forecasting. With this information estimates of the needed performance can then be used to establish steering targets along the way to completing the project. These intermediate references - or steering - points provide feedback along the way toward the goal. They provide the error signal needed to keep the project on track. They are the basis of Closed Loop control. 

In the US, many highways have rumble strips cut into the asphalt to signal that you are nearing the edge of the road on the right. They make a loud noise that tells you - hey get back in the lane, otherwise you're going to end up in the ditch.

This is the purpose of the intermediate steering targets for the project. When the variance between planned and actual exceeds a defined threshold, this says hey, you're not going to make it to the end on time, on budget, or with your needed capabilities if you keep going like this.
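Those intermediate steering targets reduce to a simple threshold check at each assessment point. Here is a minimal sketch (function and field names are illustrative assumptions; pick the threshold to suit the project):

```javascript
// Illustrative sketch of the "rumble strip": at each assessment point,
// compare actual progress to the planned steering target and flag any
// variance beyond the allowed threshold (e.g. 0.15 for +/- 15%).
function steeringAlerts(planned, actual, threshold) {
  return planned.map(function (target, i) {
    const variance = (actual[i] - target) / target; // relative variance
    return {
      checkpoint: i,
      variance: variance,
      offTrack: Math.abs(variance) > threshold // the rumble strip fires
    };
  });
}
```

A checkpoint flagged offTrack is the early error signal of closed loop control: it tells you now, rather than at the end, that the current trajectory will not reach the target.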


Kent Beck's quote is...

Optimism is the disease of software development. Feedback is the cure.

 

This feedback must have a reference to compare against if it is to be of any value in steering the project to a successful completion. Knowing it's going to be late, over budget, and doesn't work when we arrive at late, over budget, and not working is of little help to the passengers of the project.

Related articles Indicators of project performance provide us with "Steering Inputs" Seven Immutable Activities of Project Success Why Project Management is a Control System The Use and Misuse of Little's Law and Central Limit Theorem in Agile Why Johnny Can't Estimate or the Dystopia of Poor Estimating Practices Slicing Work Into Small Pieces How To Assure Your Project Will Fail Control Systems - Their Misuse and Abuse Getting to done!
Categories: Project Management

Seven Deadly Sins of Metrics Programs: Gluttony

Gluttony is over-indulgence to the point of waste.  Gluttony brings to mind pictures of someone consuming food at a rate well beyond simple need.  In measurement, gluttony can be exemplified by programs that collect data that has no near-term need or purpose.  When asked why the data was collected, the most common answer boils down to ‘we might need it someday…’

Why is collecting data just in case, for future use, or just because it can be done, a problem?  The problems caused by measurement gluttony fall into two basic categories: first, it wastes the effort of the measurement team, and second, it wastes the team's credibility.

Wasting effort dilutes the measurement team’s resources, which should be focused on collecting and analyzing data that can make a difference.  Unless the measurement program has unlimited resources, over-collection can obscure important trends and events by reducing time for analysis and interpretation.  Any program that scrimps on analysis and interpretation is asking for trouble, much as a person with clogged arteries.  Measures without analysis and interpretation are dangerous because people see what they like in the data due to the clustering illusion (a cognitive bias). The clustering illusion (or clustering bias) is the tendency to see patterns in clusters or streaks in a small sample drawn from a larger data set. Once a pattern is seen, it becomes difficult to stop people from believing in it, even though it does not exist.

The second problem of measurement gluttony occurs because it wastes the credibility of the measurement team.  Collecting data that is warehoused just in case it might be important causes those who provide the measures and metrics to wonder what is being done with the data. Collecting data that you are not using will create an atmosphere of mystery and fear.  Add other typical organizational problems, such as not being transparent and open about communication of measurement results, and fear will turn into resistance.   A sure sign of problems is when you begin hearing consistent questions about what you are doing, such as “just what is it that you do with this data?” All measures should have a feedback loop to those being measured so they understand what you are doing, how the data is being used, and what the analysis means.  Telling people that you are not doing anything with the data doesn’t count as feedback. Simply put, don’t collect the data if you are not going to use it, and make sure you are using the data you are collecting to make improvements!

A simple rule is to collect only the measurement data that you need and CAN use.  Make sure all stakeholders understand what you are going to do with the data.  If you feel that you are over-collecting, go on a quick data diet.  One strategy for cutting back is to begin in the areas where you feel safest. For example, start with a measure that you have not based a positive action on in the last 6 months. Gluttony in measurement gums up the works just like it does in a human body; the result of measurement gluttony slows down reactions and creates resistance, which can lead to a fatal event for your program.

 


Categories: Process Management

Quote of the Day

Herding Cats - Glen Alleman - Fri, 08/08/2014 - 22:31

Strategy without tactics is the slowest route to victory
Tactics without strategy is the noise before defeat.

— Sun Tzu

Every time you hear about some conjectured great new idea on how to solve a vexing problem, ask: what's the actual problem to be solved? What's the root cause of that problem? What domain is this problem found in, and in what domain is it already solved? And finally, what's your strategy for assuring that your solution will actually result in a solution that doesn't break something else, violate a business principle, or create an obligation you didn't think of because you were too busy admiring your clever idea?

 

Categories: Project Management

Introduction to big data presentation

I presented big data to Amdocs’ product group last week. One of the sessions I did was recorded, so I might be able to add it here later. Meanwhile you can check out the slides.

Note that, trying to keep the slides visual, I put some of the information in the slide notes and not on the slides themselves.

Big data Overview from Arnon Rotem-Gal-Oz

Categories: Architecture

Are You Doing Agile Results?

If you already use Agile Results as your personal results system, you have a big advantage.

Why?

Because most people are running around, scrambling through a laundry list of too many things to do, a lack of clarity around what the end result or outcomes should be, and a lack of clarity around what the high-value things to focus on are.  They are using their worst energy for their most important things.  They are spending too much time on the things that don’t matter and not enough time on the things that do.   They are feeling at their worst, when they need to feel at their best, and they are struggling to keep up with the pace of change.

I created Agile Results to deal with the chaos in work and life, as a way to rise above the noise, and to easily leverage the most powerful habits and practices for getting better results in work and life.

Agile Results, in a nutshell, is a simple system for mastering productivity and time management, while at the same time, achieving more impact, realizing your potential, and feeling more fulfillment.

I wrote about the system in the book Getting Results the Agile Way.  It’s been a best seller in time management.

How Does Agile Results Work?

Agile Results works by combining proven practices for productivity, time management, psychology, project management, and some of the best lessons learned on high-performance.   And it’s been tested for more than a decade under extreme scenarios and a variety of conditions from individuals to large teams.

Work-Life balance is baked into the system, but more importantly Agile Results helps you live your values wherever you are, play to your strengths, and rapidly learn how to improve your results in any situation.  When you spend more time in your values, you naturally tap into the skills and abilities that help bring out your best.

The simplest way to think of Agile Results is that it helps you direct your attention and apply your effort on the things that count.  By spending more time on high-value activities and by getting intentional about your outcomes, you dramatically improve your ability to get better results.

But none of that matters if you aren’t using Agile Results.

How Can You Start Using Agile Results?

Start simple.

Simply ask yourself, “What are the 3 wins, results, or outcomes that I want for today?”   Consider the demands you have on your plate, the time and energy you’ve got, and the opportunities you have for today, and write those 3 things down.

That’s it.   You’re doing Agile Results.

Of course, there’s more, but that’s the single most important thing you can do to immediately gain clarity, regain your focus, and spend your time and energy on the most valuable things.

Now, let’s assume this is the only post you ever read on Agile Results.   Let’s take a fast walkthrough of how you could use the system on a regular basis to radically and rapidly improve your results on an ongoing basis.

How I Do Agile Results? …

Here’s a summary of how I do Agile Results.

I create a new monthly list at the start of each month that lists out all the things that I think I need to do, and I bubble up 3 of my best things I could achieve or must get done to the top.   I look at it at the start of the week, and any time I’m worried if I’m missing something.  This entire process takes me anywhere from 10-20 minutes a month.

I create a weekly list at the start of the week, and I look at it at the start of each day, as input to my 3 target wins or outcomes for the day, and any time I’m worried if I’m missing anything.   This tends to take me 5-10 minutes at the start of the week.

I barely have to ever look at my lists – it’s the act of writing things down that gives me quick focus on what’s important.   I’m careful not to put a bunch of minutia in my lists, because then I’d train my brain to stop focusing on what’s important, and I would become forgetful and distracted.  Instead, it’s simple scaffolding.

Each day, I write a simple list of what’s on my mind and things I think I need to achieve.   Next, I step back and ask myself, “What are the 3 things I want to accomplish today?”, and I write those down.   (This tends to take me 5 minutes or less.  When I first started it took me about 10.)

Each Friday, I take the time to think through three things going well and three things to improve.   I take what I learn as input into how I can simplify work and life, and how I can improve my results with less effort and more effectiveness.   This takes me 10-20 minutes each Friday.

How Can You Adopt Agile Results?

Use it to plan your day, your week, and your month.

Here is a simple recipe for adopting Agile Results and using it to get better results in work and life:

  1. Add a recurring appointment on your calendar for Monday mornings.  Call it Monday Vision.   Add this text to the body of the reminder: “What are your 3 wins for this week?”
  2. Add a recurring appointment on your calendar to pop up every day in the morning.  Call it Daily Wins.  Add this text to the body of the reminder: “What are your 3 wins for today?”
  3. Add a recurring appointment on your calendar to pop up every Friday mid-morning.  Call it Friday Reflection.  Add this text to the body of your reminder:  “What are 3 things going well?  What are 3 things to improve?”
  4. On the last day of the month, make a full list of everything you care about for the next month.   Alphabetize the list.  Identify the 3 most important things that you want to accomplish for the month, and put those at the top of the list.   Call this list  Monthly Results for Month XYZ.  (Note – Alphabetizing your list helps you name your list better and sort your list better.  It’s hard to refer to something important you have to do if you don’t even have a name for it.  If naming the things on your list and sorting them is too much to do, you don’t need to.  It’s just an additional tip that helps you get even more effective and more efficient.)
  5. On Monday of each week, when you wake up, make a full list of everything you care about accomplishing for the week.  Alphabetize the list.  Identify the 3 most important things you want to accomplish and add that to the top of the list.  (Again, if you don’t want to alphabetize then don’t.)
  6. On Wednesdays, in the morning, review the three things you want to accomplish for the week to see if anything matters that you should have spent time on or completed by now.  Readjust your priorities and focus as appropriate.  Remember that the purpose of having the list of your most important outcomes for the week isn’t to get good at predicting what’s important.  It’s to help you focus and to help you make better decisions about what to spend time on throughout the week.  If something better comes along, then at least you can make a conscious decision to trade up and focus on that.  Keep trading up.   And when you look back on Friday, you’ll know whether you are getting better at trading up, or if you are just getting randomized, or focusing on the short term while hurting the long term.
  7. On Fridays,  in the morning, do your Friday Reflection.  As part of the exercise, check against your weekly outcomes and your monthly outcomes that you want to accomplish.  If you aren’t effective for the week, don’t ask “why not,” ask “how to.”   Ask how can you bite off better things and how can you make better choices throughout the week.  Just focus on little behavior changes, and this will add up over time.  You’ll get better and better as you go, as long as you keep learning and changing your approach.   That’s the Agile Way.

There are lots of success stories by other people who have used Agile Results.   Everybody from presidents of companies to people in the trenches, to doctors and teachers, to teams and leaders, as well as single parents and social workers.

But none of that matters if it’s not your story.

Work on your success story and just start getting better results, right here, right now.

What are the three most important things you really want to accomplish or achieve today?

Categories: Architecture, Programming

Stuff The Internet Says On Scalability For August 8th, 2014

Hey, it's HighScalability time:


Physicists reveal the scaling behaviour of exotic giant molecules.
  • 5 billion: Transistors Intel manufactures each second; 396M: WeChat active users
  • Quotable Quotes:
    • @BenedictEvans: Every hour or so, Apple ships phones with something around 2x more transistors than were in all the PCs on earth in 1995.
    • @robgomes: New client. Had one of their employees tune an ORM-generated query. Reduced CPU by 99.999%, IO by 99.996%.  Server now idle.
    • @pbailis: As a hardware-oriented systems builder, I'd pay attention to, say, ~100 ns RTTs via on-chip photonic interconnects
    • @CompSciFact: "Fancy algorithms are buggier than simple ones, and they're much harder to implement." -- Rob Pike's rule No. 4
    • @LusciousPear: I'm probably doing in Google what would have taken 5-8 engineers on AWS.
    • C. Michael Holloway, NASA: To a first approximation, we can say that accidents are almost always the result of incorrect estimates of the likelihood of one or more things.
    • Stephen O'Grady: More specific to containers specifically, however, is the steady erosion in the importance of the operating system. 

  • Wait, I thought mobile meant making single purpose apps? Mobile meant tearing down the portal cathedrals built by giants of the past. Then Why aren’t App Constellations working?: The App Constellation strategy works when you have a core resource which can be shared across multiple apps. 

  • Decentralization: I Want to Believe. The irony is mobile loves centralization, not p2p. Mobile IP addresses change all the time and you can't run a server on a phone. The assumption that people want decentralization has been disproven. Centralized services have won. People just want a service that works. The implementation doesn't matter that much. Good discussion on HackerNews and on Reddit.

  • Myth: It takes less money to start a startup these days. Sort of.  Why the Structural Changes to the VC Industry Matter: It turns out that, while it is in fact cheaper to get started and enter the market, it also requires more money for the breakout companies to win the market. Ultimately, today’s winners have a chance to be a lot bigger. But winning requires more money for geographic expansion, full-stack support of multiple new disciplines, and product expansion. And these companies have to do all of this while staying private for a much longer period of time; the median for money raised by companies prior to IPO has doubled in the past five years. 

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Extending AngularJS services with the Decorate method

Xebia Blog - Fri, 08/08/2014 - 12:00

Many large Angular applications tend to see a lot of repetition - same API endpoint, same method of dealing with and transforming data, etcetera. One technique you can use to at least alleviate that is using AngularJS's decorate method, which allows you to extend, adjust or even fully replace any existing service.

As you'll see in this post, using this allows you to modify and extend the framework you build your app in, which will lead to a cleaner, more legible codebase, written in a more functional style (the what, not the how).

Update 11/8: The follow-up is now live, along with the GitHub repository.

A feature not often used when developing AngularJS applications is the $provide service, which is the primary service used to register components with the $injector. More commonly, a developer would use methods like $provide.service() or $provide.factory() to do so, but those are merely utility methods defined in the $provide service and exposed via angular.module().

The main reason to use $provide over the service() and factory() methods is, for example, to configure a service before it's instantiated. While there may be more advanced use-cases for using $provide, I haven't yet encountered them while developing regular applications and I'm sure they won't occur often.

One of the methods listed at the very bottom of the $provide documentation is the decorate() method. It doesn't look like much (it's at the bottom, after all), but its documentation hints that it's very powerful:

"A service decorator intercepts the creation of a service, allowing it to override or modify the behaviour of the service. The object returned by the decorator may be the original service, or a new service object which replaces or wraps and delegates to the original service."

Nothing to add there. You can use decorate() to change, add to, or completely replace the behaviour of services without having to edit its code. This can be done on any code not your own - core AngularJS services, but also third-party libraries. It's the equivalent of overriding methods in OO languages or monkey-patching in the more dynamic languages.

“Isn’t that evil?”, I hear you ask. As with every programming-related question, the only correct answer is: it depends. I’m going to give a few practical examples of when I believe using decorate() is appropriate. In a future blog post, I'll expand on this example, showing how relatively simple code can positively influence your entire application architecture.

Here’s a practical example, a neat follow-up on my previous blog about angular promises: decorating $q to add methods to the promise object. The promise API itself defines only one method: then(). $q adds a few simple methods to that like catch() and finally(), but for your own application you can add a few more.

If you’ve been working with promises for a little while in your AngularJS application, you’ve probably noticed some operations are pretty common; assigning the promise result to the scope (or any object), logging the result in the console, or calling some other method. Using decorate(), we can add methods to the promise object to simplify those. Here's a bit of code from my previous post; we'll add a method to $q to remove the need for a callback:

CustomerService.getCustomer(currentCustomer)
 .then(CartService.getCart)
 .then(function(cart) {
   $scope.cart = cart;
 })
 .catch($log.error);

First, we’ll need to do some boilerplate: we create a function that adds our methods to the promise object, and then we replace all the default promise methods. Note that the decorating function will also apply itself to the given promise.then method again, so that our customisations aren’t lost further down a promise chain:

angular.module('ngPromiseDsl', [])
  .config(function ($provide) {
    $provide.decorator('$q', function ($delegate, $location) {

      // decorating method

      function decoratePromise(promise) {
        var then = promise.then;

        // Overwrite promise.then. Note that $q's custom methods (.catch and .finally) are implemented by using .then themselves, so they're covered too.

        promise.then = function (thenFn, errFn, notifyFn) {
          return decoratePromise(then(thenFn, errFn, notifyFn));
        };

        return promise;
      }

      // wrap and overwrite $q's deferred object methods

      var defer = $delegate.defer,
        when = $delegate.when,
        reject = $delegate.reject,
        all = $delegate.all;

      $delegate.defer = function () {
        var deferred = defer();
        decoratePromise(deferred.promise);
        return deferred;
      };

      $delegate.when = function () {
        return decoratePromise(when.apply(this, arguments));
      };

      $delegate.reject = function () {
        return decoratePromise(reject.apply(this, arguments));
      };

      $delegate.all = function () {
        return decoratePromise(all.apply(this, arguments));
      };

      return $delegate;

    });
  });

With that boilerplate in place, we can now start adding methods. As I mentioned earlier, one of the most common uses of a then() function is to set the result onto the scope (or some other object). This is a fairly trivial operation, and it’s pretty straightforward to add it to the promise object using our decorator, too:

function decoratePromise(promise) {
  var then = promise.then;

  promise.then = function (thenFn, errFn, notifyFn) {
    return decoratePromise(then(thenFn, errFn, notifyFn));
  };

  // assigns the value given to .then on promise resolution to the given object under the given varName
  promise.thenSet = function (obj, varName) {
    return promise.then(function (value) {
      obj[varName] = value;
      return value;
    });
  };

  return promise;
}

That’s all. Put this .config block in your application's module definition, or create a new module and add a dependency to it, and you can use it throughout your application. Here's the same piece of code, now with our new thenSet method:

CustomerService.getCustomer(currentCustomer)
  .then(CartService.getCart)
  .thenSet($scope, 'cart')
  .catch($log.error);

This particular example can be extended in a multitude of ways to add useful utilities to promises. In my current project we’ve added a number of methods to the promise object, which allows us to reduce the number of callback definitions in our controllers and thus create cleaner, more legible code.
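The decoration pattern itself is not Angular-specific. As an illustration only, here is a stripped-down, framework-free sketch on native Promises (the thenSet name mirrors the post, but this version skips the $q wiring and is just a demonstration of the idea):

```javascript
// Framework-free sketch of the same idea: re-wrap every .then result so
// added helpers survive down the chain, then add a thenSet helper.
function decoratePromise(promise) {
  const originalThen = promise.then.bind(promise);
  promise.then = function (thenFn, errFn) {
    return decoratePromise(originalThen(thenFn, errFn));
  };
  // assigns the resolved value to obj[varName], passing the value along
  promise.thenSet = function (obj, varName) {
    return promise.then(function (value) {
      obj[varName] = value;
      return value;
    });
  };
  return promise;
}
```

Usage mirrors the controller example above: `decoratePromise(fetchCart()).thenSet(scope, 'cart')` assigns the resolved cart to `scope.cart` while still returning a decorated promise for further chaining.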

 

Replacing custom callbacks with named methods allows for a more functional programming style, and allows readers to read and write code as a list of “whats”, instead of “hows” - and it's also fully asynchronous.

Extending $q is just the start though: Every angular service can be extended for various purposes - add performance monitoring and logging to $http, set common prefixes or fixed properties on $resource urls or template paths, you name it. Leave a remark in the comments about how you've used decorate() to create a better application.

Stay tuned for an upcoming post where I release a small open source project that extends angular’s promise objects with a number of helpful methods to perform common tasks.

Further reading, resources, helpful links:

Seven Deadly Sins of Metrics Programs: Greed

Greed is taking all the food and not leaving some for everyone else.


Greed, in metrics programs, means allowing metrics to be used as a tool to game the system to gain more resources than one needs or deserves.  At that point measurement programs start down the path to abandonment. The literature shows that greed, like envy, is affected by a combination of personal and organizational attributes.   Whether the root of the problem is nature or nurture, organizational culture can make the incidence of greed worse and that is something we can do something about.

One of the critical cultural drivers that create a platform for greed is fear.  W. Edwards Deming in his famous 14 Principles addressed fear: “Drive out fear, so that everyone may work effectively for the company.” Fear is its own disease; however, combined with an extremely competitive culture that stresses win/lose transactions, it creates an atmosphere that makes greed an economically rational behavior.  Accumulating and hoarding resources reduces your internal competitors’ ability to compete and reduces the possibility of losing because of a lack of resources.  Fear-driven greed creates its own insidious cycle of ever-increasing fear, as the person infected with greed fears that their resource horde is at risk and requires defense (attributed to Sun Tzu in The Art of War). An example of the negative behaviors caused by fear that I recently heard about was a company that announced at the beginning of last year that they would cull the lower ten percent of the organization annually.  Their thought was that competition would help them identify the best and the brightest.  In a recent management meeting, the person telling the story indicated that the CIO had expressed exasperation with projects that hadn’t shared resources, and that there were cases in which personnel managers had actively redirected resources to less critical projects.

Creating an atmosphere that fosters greed can generate a whole host of bad behaviors including:

  1. Disloyalty
  2. Betrayal
  3. Hoarding
  4. Cliques/silos
  5. Manipulation of authority

Coupling goals, objectives, and bonuses to measures in your metrics program can induce greed and have a dramatic effect on many of the other Seven Deadly Sins. For example, programs that have wrestled with implementing a common measure of project size and focused on measuring effectiveness and efficiency will be able to highlight how resources are used.  Organizations that then set goals based on comparing team effectiveness and efficiency will create an environment in which hoarding resources generates a higher economic burden on the hoarder, because it reduces the efficiency of other teams.  That potential places a burden on a measurement program to create an environment where greed is less likely to occur.

Measurement programs can help create an atmosphere that defuses greed by providing transparency and accountability for results. Alternately as we have seen in earlier parts of this essay, poor measurement programs can and do foster a wide range of poor behaviors.


Categories: Process Management

Process is King

Herding Cats - Glen Alleman - Thu, 08/07/2014 - 18:35

The notion of self-directed development teams has a range of applicable domains. Much of the rancor around agile development these days is about how to apply the core principles of agile software development. Do we need estimates? What's the role of business process in the development life cycle? How are capabilities and requirements elicited? Who has what decision rights for what decisions? How can we made these decisions and what information is needed in order to make the decisions?

Guy Strelitz's post has got me thinking about the spectrum of the world called agile. Here's my take on his diagram. Working in a domain where we're spending other people's money, lots of other people's money, the Winging it approach is simply not accepted. The Kanban approach doesn't work either, because the inter-dependencies between the backlog items are tight, so picking the next thing off the backlog based on business value may not be possible, since some pre-condition may need to be fulfilled before a higher-value item can be started. Software development is not production in our domain. Kanban is a production flow management system, no matter how the Kanban software advocates twist the logic.

Scrum is a powerful approach to emergent requirements, with those requirements anchored to Needed Capabilities - capabilities that all have to be in place for the system to be called ready for Go Live. More formality is needed as governance and regulatory paradigms are encountered.

Finally we arrive at the Enterprise model of software development. The firm depends on the software system for revenue. PayPal depends on the system for revenue, but not in the way an insurance company does. Or a gas pipeline process control system does. 

But at the same time, agile can contribute not only to increasing the probability of project success, but also to dealing with the emerging requirements traceable to the needed capabilities.

Screen Shot 2014-08-06 at 11.37.15 AM

Process is King

So now to the point. In the agile enterprise paradigm, the mission-critical aspects of the software system demand assurance that the released software is not only Fit for Purpose but also Fit for Use. That is, the software does what it is supposed to do, in the way it is supposed to work.

One of the critically important processes of any enterprise software system is the Change Control (CC) and Release Management (RM) process. The software system is a corporate asset and must be treated as such. This asset is carried on the General Ledger as an asset - a capital investment, governed by the rules of accounting for capital assets. That's essentially the definition of enterprise.

In this enterprise paradigm, the control of these assets starts with CC and RM. Here's a high level flow of how this corporate asset is managed. In this example development occurs in the lower left. The CC and RM process is post development. This development business rhythm can be weekly, monthly, possibly even daily. But once the software is ready for release to production, this is a possible process. 

The key here is separation of concerns. The developers of enterprise software are not the approvers of the release of that software, nor are they involved in the QA, UAT, and Performance Assessment of that software.
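That separation can even be enforced mechanically. As a hedged sketch (the states, role handling, and rule below are invented for illustration, not taken from any specific CC/RM tool):

```javascript
// Illustrative sketch: a release record whose state transitions enforce
// separation of concerns - the developer may hand off to QA, but may not
// move the release through QA, UAT, approval, or production themselves.
const STATES = ['development', 'qa', 'uat', 'approved', 'production'];

function createRelease(developer) {
  return { developer: developer, state: 'development' };
}

function advance(release, actor) {
  if (actor === release.developer && release.state !== 'development') {
    throw new Error('separation of concerns: developer cannot advance own release');
  }
  const next = STATES[STATES.indexOf(release.state) + 1];
  if (!next) {
    throw new Error('release is already in production');
  }
  return { developer: release.developer, state: next };
}
```

In a real CC/RM system the roles, gates, and approvals would come from the governance process; the point of the sketch is only that the rule "developers do not approve their own releases" is checkable, not a matter of goodwill.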

Screen Shot 2014-08-06 at 11.33.30 AM

So this is the separation from the activities on the left of the top diagram to those on the right. When someone suggests a new idea, or has read a book about a new idea and wants to discuss it - ask where on the spectrum of the top diagram they work, and where they think their idea would fit in.

In the End

Without first starting with a domain, a context, framing assumptions around governance, and established decision rights, any suggested process has no home for being tested against reality.

Related articles Performance Based Management Agile Project Management Kanban is the New Scrum Can Enterprise Agile Be Bottom Up? Agile Paradigm The Myth of Incremental Development
Categories: Project Management