
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Hierarchies remove scaling properties in Agile Software projects

Software Development Today - Vasco Duarte - Tue, 08/12/2014 - 05:00

There is a lot of interest in scaling Agile Software Development. And that is a good thing. Software projects of all sizes benefit from what we have learned over the years about Agile Software Development.

Many frameworks have been developed to help us implement Agile at scale. We have: SAFe, DAD, Large-scale Scrum, etc. I am also aware of other models for scaled Agile development in specific industries, and those efforts go beyond what the frameworks above discuss or tackle.

However, scaling as a problem is neither a software nor an Agile topic. Humanity has been scaling its activities for millennia, and very successfully at that. The Pyramids in Egypt, the Panama Canal in central America, the immense railways all over the world, the Airbus A380, etc.

All of these scaling efforts share some commonalities with software and among each other, but they are also very different. I'd like to focus on one particular aspect of scaling that has a huge impact on software development: communication.

The key to scaling software development

We've all heard countless accounts of projects gone wrong because of lacking (or inadequate, or just plain bad) communication. And typically, these problems grow with the size of the team. Communication is a major challenge in scaling any human endeavor, and especially one - like software - that so heavily depends on successful communication patterns.

In my own work in scaling software development I've focused on communication networks. In fact, I believe that scaling software development is first an exercise in understanding communication networks. Without understanding the existing and necessary communication networks in large projects we will not be able to help those projects adapt. In many projects, a different approach is used: hierarchical management with strict (and non-adaptable) communication paths. This approach effectively reduces the adaptability and resilience of software projects.

Scaling software development is first and foremost an exercise in understanding communication networks.

Even if hierarchies can successfully scale projects where communication needs are known in advance (like building a railway network for example), hierarchies are very ineffective at handling adaptive communication needs. Hierarchies slow communication down to a manageable speed (manageable for those at the top), and reduce the amount of information transferred upwards (managers filter what is important - according to their own view).

In a software project those properties of hierarchy-bound communication networks restrict valuable information from reaching stakeholders. As a consequence one can say that hierarchies remove scaling properties from software development. Hierarchical communication networks restrict information reach without concern for those who would benefit from that information because the goal is to "streamline" communication so that it adheres to the hierarchy.

In software development, one must constantly map, develop and re-invent the communication networks to allow for the right information to reach the relevant stakeholders at all times. Hence, the role of project management in scaled agile projects is to curate communication networks: map, intervene, document, and experiment with communication networks by involving the stakeholders.

Scaling agile software development is - in its essential form - a work of developing and evolving communication networks.

A special thank you note to Esko Kilpi and Clay Shirky for the inspiration for this post through their writings on organizational patterns and value networks in organizations.

Picture credit: John Hammink, follow him on twitter

Seven Deadly Sins of Metrics Programs: a Conclusion

Dr. Deming

The Seven Deadly Sins of metrics programs are:

  1. Pride – Believing that a single number/metric is more important than any other factor.
  2. Envy – Instituting measures that facilitate the insatiable desire for another team’s people, tools or applications.
  3. Wrath – Using measures to create friction between groups or teams.
  4. Sloth – Unwillingness to act on or care about the measures you create.
  5. Greed – Allowing metrics to be used as a tool to game the system.
  6. Gluttony – Application of an excess of metrics.
  7. Lust – Pursuit of the number rather than the business goal.

In the end, these sins are a reflection of the organization's culture. Bad metrics can generate bad behavior and reinforce organizational culture issues. Adopting good measures is a step in the right direction; however, culture can't be changed by good metrics alone. Shifting the focus to the organization's business goals, fostering transparency to reduce gaming, and then using measures as tools rather than weapons can support changing the culture. Measurement can generate behavior that leads towards a healthier environment. As leaders, measurement and process improvement professionals, we should push to shape the environment so that everyone can work effectively for the company.

The Shewhart PDCA Cycle (or Deming Wheel) sets out a model where measurement becomes a means to an end rather than an end in its own right. The Deming Wheel popularized the Plan, Do, Check, Act (PDCA) cycle, which is focused on delivering business value. Using the PDCA cycle, organizational changes are first planned, executed, checked by measurement and then refined based on a positive feedback model. In his book The New Economics, Deming wrote "Reward for good performance may be the same as reward to the weather man for a pleasant day." Organizations that fall prey to the Seven Deadly Sins of metrics programs are apt to incent the wrong behavior.

(Thank you Dr. Deming).


Categories: Process Management

Staying on Plan Means Closed Loop Control

Herding Cats - Glen Alleman - Tue, 08/12/2014 - 00:19

For most projects, showing up on or near the planned need date, at or near the planned cost, and more or less with the planned capabilities is a good measure of success. Delivering capabilities late and over budget is usually not acceptable to those paying for our work.

So how do we do this? Simple actually.

We start with a Plan. Here's the approach to Planning and the resulting Plan.

Planning constantly peers into the future for indications as to where a solution may emerge. A Plan is a complex situation, adapting to an emerging solution. 
- Mike Dwyer, Big Visible

The Plan tells us when we need the capabilities to produce the needed business value or accomplish the mission. The Plan is a strategy. This strategy involves setting goals, determining actions to achieve the goals, and mobilizing resources to execute the actions. The strategy describes how the ends (goals) will be achieved by the means (resources) in units of measure meaningful to the decision makers.

Strategy creates fit among a firm's activities. For Enterprise IT, this fit is defined by the relationships between the needed capabilities delivered by the project. The success of a strategy depends on doing many things well — not just a few.

The things that are done well must operate within a close-knit system. If there is no fit among the activities, there is no distinctive strategy and little to sustain the strategic deployment process. Management then reverts to the simpler task of overseeing independent functions.

When this occurs operational effectiveness determines the relative performance of the organization. ["What is Strategy,” M. E. Porter, Harvard Business Review, Volume 74, Number 6, pp. 61–78.]

A successful Plan describes the order of delivery of value in exchange for cost, the inter-dependencies between these value-producing items, and the synergistic outcomes from these value-producing items working together to meet the strategy.

With the Plan in hand, we can ask and answer the following questions:

  • Do we know what Done looks like in units of measure meaningful to the decision makers?
  • Do we have a strategy for reaching done, on or near the need date, at or near the budget?
  • Do we have the resources we need to execute that strategy?
  • Do we know what impediments we'll encounter along the way and how we're going to handle them?
  • Do we have some means of measuring our progress along the way that provides the confidence we'll reach done when we expect to?

This Post Answers the Last Question

The example below is from our cycling group. The principles are the same for projects. We have a desired outcome in terms of date, cost, and technical performance. These desired outcomes have some end goal. A budget, a go live date, a set of features or capabilities needed to fulfill the business case. 

Along the way we need to take corrective actions when we see we are falling behind. 

How did we know we were falling behind? Because we have a desired performance at points along the way, that we compare our actual performance to. The difference between our actual performance and the desired performance, creates an "error signal" we can use to make adjustments.

Our thermostat does this, the cruise control on our car does this, and the closed loop control systems used for managing our projects do this. So replace the cycling example with writing software for money, and the peloton with the desired performance of our work. In the presentation below, ignore the guy in the Yellow Jersey at the end. It turns out he's a doper and an all-around bad person to his fellow riders and fans.

The simple problem of schedule performance indices from Glen Alleman

So let's look at a project example using the analogy of our cycling group, ignoring Lance.
  • We have a goal of riding a distance in a specific time. This can be a training ride, a century, or a Gran Fondo (timed distance for record).
  • We have instruments on the bike: Strava on the smartphone, Garmin on the bars. Both keep track of instantaneous speed, time moving, and total time, as well as average speed over the distance.
  • We know - because we've done this route a hundred times before, or because we looked at the course map, or because we have talked about the planned distance for the day - about how fast we need to ride, on average, to get back to the driveway at the planned time.
  • Say we're out for our Saturday ride, which is usually a 50 to 65 mile loop from the driveway in Niwot, Colorado to Carter Lake south of Fort Collins.
  • As a group we agree our average pace will be 20 MPH. Everyone has some way of measuring their average and instantaneous speed.
  • The course is never flat; climbs and descents change the flat-land speed, but the average needs to stay at 20 MPH.
  • When one or more drop off the back - I'm usually one of those - we know what our average is. And instinctively we know how much faster we need to ride to catch up - pick up the pace.
  • But if we were actual racers riding solo - time trial or triathlon - we'd look at our Garmin to see if we're going fast enough to get back on pace, and come in under our target time.

This example can be related to a project. 

  • We have a target for the end — a target set of capabilities, with a need date, and a target budget
  • We have targets for all the intermediary points along the way — capabilities over time, budget over time.
  • We know our pace — how fast do we have to go, how much cost can we absorb, to make our goal of delivering the needed capabilities on the needed date, for the needed budget.
  • We know the gap between our pace and our needed pace to stay on track — with an intermediate desired performance, budget, number of features delivered, stated capabilities ready to be used, and the actual measures of these, we can develop a variance that is used to produce an error signal that is used to steer the project. This is the feedback loop. The closed loop control system we need to keep the project on plan.
  • From that variance we know how much faster we need to go to arrive on time.

This is Closed Loop Control

  • We have a target for the end and intermediate targets along the way. These intermediate targets are the feedback points we use to assess progress to date.
  • Measure the actual value of whatever variables we need to provide visibility into the project's performance. This can be cost, schedule, or technical performance. But it can also be other attributes of the project: customer acceptance, marketplace feedback, incremental savings.
  • Determine the variance between the plan and the actual.
  • Take corrective action to close the gap to get back on target or get within the variance tolerance.
  • Repeat until the project ends.
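To make the loop concrete, here is a minimal sketch in Python - the function names and numbers are illustrative only, not drawn from any real project or tool - that compares actual progress against planned progress at an assessment point and produces the error signal used to steer:

# Minimal closed loop sketch: compare actual progress to planned progress
# at an assessment point and produce an "error signal" used to steer.
# All numbers below are illustrative only.

def error_signal(planned, actual):
    # Variance between where we planned to be and where we actually are.
    return planned - actual

def pace_needed(target, done_so_far, periods_left):
    # How fast we must go per remaining period to still hit the target.
    return (target - done_so_far) / periods_left

target_miles = 60      # the ride (or the release): total planned scope
planned_pace = 20      # planned average, miles per hour
hours_ridden = 2
actual_miles = 36      # what the Garmin (or the burn-up chart) actually says

planned_miles = planned_pace * hours_ridden
gap = error_signal(planned_miles, actual_miles)          # 4 miles behind plan
catch_up = pace_needed(target_miles, actual_miles, 1.0)  # pace needed in the final hour

print(f"Behind plan by {gap} miles; need {catch_up} mph in the final hour to finish on time")

Replace miles with story points or budget and hours with iterations and the loop is exactly the same: plan, measure, compute the variance, correct, repeat.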

Your cruise control does this about every 10 milliseconds. Your Nest thermostat does this more slowly, but still in less than a minute. To know how often you need to sample your progress against plan, answer this question:

How long are you willing to wait before you find out you're late? Sample at ½ that time.

This is called the Nyquist Rate, one of the starting points for all the process control software I wrote in my younger days for flying and swimming machines. But it's a good question to ask on all projects as well.
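As a back-of-the-envelope illustration of that rule (the numbers here are invented), it reduces to a one-line calculation:

# Illustrative only: sample progress at half the longest delay you are
# willing to tolerate between becoming late and finding out about it.
def sampling_interval(max_acceptable_delay_days):
    return max_acceptable_delay_days / 2

# Willing to wait a month (~30 days) before discovering you're late?
print(sampling_interval(30))  # 15.0 - assess progress at least every 15 days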

Related articles Is There Such a Thing As Making Decisions Without Knowing the Cost? Time to Revisit The Risk Discussion Can Enterprise Agile Be Bottom Up? Process Improvement In A Complex World Why Project Management is a Control System If you want to pretent you're a Pro, then starting acting like one Control Systems - Their Misuse and Abuse Seven Immutable Activities of Project Success More Than Target Needed To Steer Toward Project Success
Categories: Project Management

R: Grouping by two variables

Mark Needham - Mon, 08/11/2014 - 17:47

In my continued playing around with R and meetup data I wanted to group a data table by two variables – day and event – so I could see the most popular day of the week for meetups and which events we’d held on those days.

I started off with the following data table:

> head(eventsOf2014, 20)
      eventTime                                              event.name rsvps            datetime       day monthYear
16 1.393351e+12                                         Intro to Graphs    38 2014-02-25 18:00:00   Tuesday   02-2014
17 1.403635e+12                                         Intro to Graphs    44 2014-06-24 18:30:00   Tuesday   06-2014
19 1.404844e+12                                         Intro to Graphs    38 2014-07-08 18:30:00   Tuesday   07-2014
28 1.398796e+12                                         Intro to Graphs    45 2014-04-29 18:30:00   Tuesday   04-2014
31 1.395772e+12                                         Intro to Graphs    56 2014-03-25 18:30:00   Tuesday   03-2014
41 1.406054e+12                                         Intro to Graphs    12 2014-07-22 18:30:00   Tuesday   07-2014
49 1.395167e+12                                         Intro to Graphs    45 2014-03-18 18:30:00   Tuesday   03-2014
50 1.401907e+12                                         Intro to Graphs    35 2014-06-04 18:30:00 Wednesday   06-2014
51 1.400006e+12                                         Intro to Graphs    31 2014-05-13 18:30:00   Tuesday   05-2014
54 1.392142e+12                                         Intro to Graphs    35 2014-02-11 18:00:00   Tuesday   02-2014
59 1.400611e+12                                         Intro to Graphs    53 2014-05-20 18:30:00   Tuesday   05-2014
61 1.390932e+12                                         Intro to Graphs    22 2014-01-28 18:00:00   Tuesday   01-2014
70 1.397587e+12                                         Intro to Graphs    47 2014-04-15 18:30:00   Tuesday   04-2014
7  1.402425e+12       Hands On Intro to Cypher - Neo4j's Query Language    38 2014-06-10 18:30:00   Tuesday   06-2014
25 1.397155e+12       Hands On Intro to Cypher - Neo4j's Query Language    28 2014-04-10 18:30:00  Thursday   04-2014
44 1.404326e+12       Hands On Intro to Cypher - Neo4j's Query Language    43 2014-07-02 18:30:00 Wednesday   07-2014
46 1.398364e+12       Hands On Intro to Cypher - Neo4j's Query Language    30 2014-04-24 18:30:00  Thursday   04-2014
65 1.400783e+12       Hands On Intro to Cypher - Neo4j's Query Language    26 2014-05-22 18:30:00  Thursday   05-2014
5  1.403203e+12 Hands on build your first Neo4j app for Java developers    34 2014-06-19 18:30:00  Thursday   06-2014
34 1.399574e+12 Hands on build your first Neo4j app for Java developers    28 2014-05-08 18:30:00  Thursday   05-2014

I was able to work out the average number of RSVPs per day with the following code using plyr:

> ddply(eventsOf2014, .(day=format(datetime, "%A")), summarise, 
        count=length(datetime),
        rsvps=sum(rsvps),
        rsvpsPerEvent = rsvps / count)
 
        day count rsvps rsvpsPerEvent
1  Thursday     5   146      29.20000
2   Tuesday    13   504      38.76923
3 Wednesday     2    78      39.00000

The next step was to show the names of events that happened on those days next to the row for that day. To do this we can make use of the paste function like so:

> ddply(eventsOf2014, .(day=format(datetime, "%A")), summarise, 
        events = paste(unique(event.name), collapse = ","),
        count=length(datetime),
        rsvps=sum(rsvps),
        rsvpsPerEvent = rsvps / count)
 
        day                                                                                                    events count rsvps rsvpsPerEvent
1  Thursday Hands On Intro to Cypher - Neo4j's Query Language,Hands on build your first Neo4j app for Java developers     5   146      29.20000
2   Tuesday                                         Intro to Graphs,Hands On Intro to Cypher - Neo4j's Query Language    13   504      38.76923
3 Wednesday                                         Intro to Graphs,Hands On Intro to Cypher - Neo4j's Query Language     2    78      39.00000

If we wanted to drill down further and see the number of RSVPs per day per event type then we could instead group by the day and event name:

> ddply(eventsOf2014, .(day=format(datetime, "%A"), event.name), summarise, 
        count=length(datetime),
        rsvps=sum(rsvps),
        rsvpsPerEvent = rsvps / count)
 
        day                                              event.name count rsvps rsvpsPerEvent
1  Thursday Hands on build your first Neo4j app for Java developers     2    62      31.00000
2  Thursday       Hands On Intro to Cypher - Neo4j's Query Language     3    84      28.00000
3   Tuesday       Hands On Intro to Cypher - Neo4j's Query Language     1    38      38.00000
4   Tuesday                                         Intro to Graphs    12   466      38.83333
5 Wednesday       Hands On Intro to Cypher - Neo4j's Query Language     1    43      43.00000
6 Wednesday                                         Intro to Graphs     1    35      35.00000

There are too few data points for some of those to make any decisions but as we gather more data hopefully we’ll see if there’s a trend for people to come to events on certain days or not.

Categories: Programming

The Easy Way of Building a Growing Startup Architecture Using HAProxy, PHP, Redis and MySQL to Handle 1 Billion Requests a Week

This Case Study is a guest post written by Antoni Orfin, Co-Founder and Software Architect at Octivi

In this post I'll show you how we developed a quite simple architecture based on HAProxy, PHP, Redis and MySQL that seamlessly handles approximately 1 billion requests every week. There will also be a note on possible ways of further scaling it out, and I'll point out some uncommon patterns that are specific to this project.

Stats:
Categories: Architecture

How to Negotiate Your Salary

Making the Complex Simple - John Sonmez - Mon, 08/11/2014 - 15:00

I’m often surprised how many software developers neglect to do any salary negotiations at all or make a single attempt at negotiating their salary and then give up and take whatever is offered. Negotiating your salary is important—not just because the dollars will add up over time and you could end up leaving a lot […]

The post How to Negotiate Your Salary appeared first on Simple Programmer.

Categories: Programming

The AngularJS Promise DSL

Xebia Blog - Mon, 08/11/2014 - 10:21

As promised in my previous post, I just pushed the first version of our "Angular Promise DSL" to Github. It extends AngularJS's $q promises with a number of helpful methods to create cleaner applications.

The project is a V1; it may be a bit rough around the edges in terms of practical applicability and documentation, but that's why it's open source now.

The repository is at https://github.com/fwielstra/ngPromiseDsl and licensed as MIT. It's the first OS project I've created, so bear with me. I am accepting pull requests and issues, of course.

Questions? Ask them on the issues page, ask me via Twitter (@frwielstra) or send me an e-mail. I'd offer you to come by my office too... if I had one.

Ocean Revival

Phil Trelford's Array - Sun, 08/10/2014 - 23:20

This weekend I had the pleasure of sitting on the Ocean Q&A panel at Revival 2014 in Wolverhampton. I worked at Ocean in Manchester in my early 20s on titles like Jurassic Park (PC & Amiga) and Addams Family Values (SNES & Megadrive). It was fun to reminisce about the good old days with other former Ocean employees and people who’d played the games.

Ocean panel

The panel closely coincided with the public release of The History of Ocean Software book by Chris Wilkins which was funded as a Kickstarter:

Ocean the history

There were plenty of old games to play at the event too. I particularly enjoyed Rez on a PS2, Omega Race on a Vic-20 and a Flappy Bird clone on a Commodore 64.

Flappy Bird C64

When we got home, my 7yo and I pulled the Vic-20 out of the garage, and played some more Omega Race with the joystick we’d just picked up:



My 7yo has been picking up Python recently, with a Coding Club - Python Basics book.

One of the tasks is to print out the 5 times table:

number = 1
while number <= 12:
    print(number,"x 5 =",number*5)
    number = number + 1

Funnily enough Vic-20 Basic (circa 1981) was easily up to the challenge too:

5 times table

And good old FizzBuzz was no bother either:

FizzBuzz Vic-20

Then my son had a go at Magic 8-ball, but sadly lost the code he'd spent a while typing in when it was closed, so we re-wrote it in F# so there was less to type:

let answers =[
   "Go for it!"
   "No way Jose!"
   "I'm not sure. Ask me again."
   "Fear of the unknown is what imprisons us."
   "It would be madness to do that!"
   "Only you can save mankind!"
   "Makes no difference to me, do or don't - whatever."
   "Yes, I think on balance that is the right choice"
   ]

printfn "Welcome to Magic 8-Ball."

printfn "Ask me for advice and then press enter to shake me"
System.Console.ReadLine() |> ignore

for i = 1 to 4 do printfn "Shaking..."

let rand = System.Random()
let choice = rand.Next(answers.Length)

printfn "%s" (answers.[choice])

Why not dig out your old computers and have some programming and games fun! :)
Categories: Programming

SPaMCAST 302- Larry Maccherone, Measuring Agile

Software Process and Measurement Cast - Sun, 08/10/2014 - 22:00

Software Process and Measurement Cast number 302 features our interview with Larry Maccherone of Rally Software. We talked about Agile and metrics.  Measuring and challenging the folklore of Agile is a powerful tool for change!  Measurement and Agile in the same sentence really is not an oxymoron.

Larry’s Bio:

Larry is an industry recognized Agile speaker and thought leader. He is Rally Software's Director of Analytics and Research. Before coming to Rally Software, Larry worked at Carnegie Mellon with the Software Engineering Institute for seven years conducting research on software engineering metrics with a particular focus on reintroducing quantitative insight back into the agile world. He now leads a team at Rally using big data techniques to draw interesting insights and Agile performance metrics, and provide products that allow Rally customers to make better decisions. Larry is an accomplished author and speaker, presenting at major conferences for the lean and agile markets over the last several years, including the most highly rated talk at Agile 2013. He just gave two talks on the latest research at Agile 2014.

Contact information:

Rally Author Page

Email: lmaccherone@rallydev.com

Google+

Next

Software Process and Measurement Cast number 303 will feature our essay on estimation. Estimation is a hotbed of controversy. But perhaps first we should synchronize on just what we think the word means. Once we have a common vocabulary we can commence with the fisticuffs. In SPaMCAST 303 we will not shy away from a hard discussion.

Upcoming Events

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1.  I have a great discount code!!!! Contact me if you are interested!

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI's mission is to pull together the expertise and educational efforts of the world's leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Categories: Process Management

Quote of the Day

Herding Cats - Glen Alleman - Sun, 08/10/2014 - 18:27

"You cannot have an execution culture without robust dialog - one that brings reality to the surface through openness, candor, and informality.
This is called "truth over harmony"

in Execution: The Discipline of Getting Things Done, Larry Bossidy and Ram Charan

Related articles Practices without Principles Does Not Scale The Need for Strategy How to Manage in the Presence of Uncertainty
Categories: Project Management

Seven Deadly Sins of Metrics Programs: Lust


The measurement/performance feedback loop causes an addiction to a single metric. The addict will exclude what is really important.

There is a famous adage: you get what you measure. When an organization measures a specific activity or process, people tend to execute so they maximize their performance against that measure. Managers and change agents often create measures to incentivize teams or individuals to perform work in a specific way and then to generate a feedback loop. The measurement/performance feedback loop causes an addiction to a single metric. The addict will exclude what is really important. Chasing the endorphins that the feedback will generate is the sin of lust in the measurement world. Lust, like wrath, is a loss of control which affects your ability to think clearly. Balanced goals and medium to long-term focus are tools to defeat the worst side effects of measurement lust. The ultimate solution is a focus on the long-term goals of the organization.

How does this type of unbalanced behavior occur? Usually measurement lust is generated by either an unbalanced measurement program or a performance compensation program. Both cases can generate the same types of unintended consequences. I call this the “one number syndrome”. An example of the “one number syndrome” is when outsourcing contracts include penalty and bonus clauses based on a single measure, such as productivity improvements. Productivity is a simple metric that can be affected by a wide range of project and organizational attributes. Therefore, focusing on measuring just productivity can have all sorts of outcomes as teams tweak the attributes affecting productivity and then review performance based on feedback. For example, one common tactic used to influence productivity is changing the level of quality that a project is targeting; generally higher quality generates lower productivity and vice versa. Another typical way organizations or teams maximize productivity is to throttle the work entering the organization. Reducing the work entering an organization or team generally increases productivity. In both examples, the feedback loop created by fixating on improving productivity may have unintended consequences.

A critical shortcoming caused by measurement lust is a shift toward short-term thinking as teams attempt to maximize the factors that will be used to judge their performance. We have all seen the type of short-term thinking that occurs when a manager (or an organization) does everything in their power to make some monthly goal. At the time the choices are made they seem to be perfectly rational. Short-term thinking has the ability to convert the choices made today into the boat anchors of the next quarter. For example, right after I left university I worked for a now defunct garment manufacturer. On occasion salespeople would rush a client into an order at the end of a sales cycle to make their quota. All sorts of shenanigans typically ensued, including returns and sales rebates, but the behavior always caught up with them one or two sales periods later. In a cycle of chasing short-term goals with short-term thinking, a major failure is merely a matter of time. I’m convinced from reading the accounts of the Enron debacle that the cycle of short-term thinking generated by the lust to meet their numbers made it less and less likely that anyone could perceive just how irrational their decisions were becoming.

The fix is easy (at least conceptually). You need to recognize that measurement is a behavioral tool and create a balanced set of measures (frameworks like the Balanced Scorecard are very helpful) that encourage balanced behavior. I strongly suggest that as you are defining measures and metrics, you take the time to forecast the behaviors each measure could generate. Ask yourself whether these are the behaviors you want and whether other measures will be needed to avoid negative excesses.

Lust rarely occurs without a negative feedback loop that enables the behavior. Measures like productivity or velocity, when used purely for process improvement or planning rather than to judge performance (or for bonuses), don’t create measurement lust. Balanced goals, balanced metrics, balanced feedback and balanced compensation are all part of a plan to generate balanced behavior. Imbalances in any of these layers will generate imbalances in behavior. Rebalancing can change behavior, but make sure it is the behavior you anticipate and that it doesn’t cause unintended consequences by shifting measurement lust to another target.


Categories: Process Management

More Than Target Needed To Steer Toward Project Success

Herding Cats - Glen Alleman - Sat, 08/09/2014 - 15:49

There is a suggestion that only the final target of a project's performance is needed to steer toward success. This target can be budget, a finish date, the number of stories or story points in an agile software project. With the target and the measure of performance to date, collected from the measures at each sample point, there is still a missing piece needed to guide the project.

With the target and the samples, no error signal is available to make intermediate corrections to arrive on target. With the target alone, any variances in cost, schedule, or technical performance can only be discovered when the project arrives at the end. With the target alone, this is an Open Loop control system.

Control systems from Glen Alleman

Pages 27 and 28 show the difference between Open Loop control and Closed Loop control of a notional software development project using stories as the unit of measure. In the figure below (page 27), the cumulative performance of stories is collected from the individual performance of stories over the project's duration. The target stories - or budget, or some other measure - is the final target. But along the way, there is no measure of are we going to make it at this rate? An Open Loop Control System:
  • Is a non-feedback system, where the output – the desired state – has no influence or effect on the control action of the input signal — the measures of performance are just measures. They are not compared to what the performance should be at that point in time.
  • The output – the desired state – is neither measured nor “fed back” for comparison with the input — there is no intermediate target goal to measure the actual performance against. Over budget, late, or missing capabilities are only discovered at the end.
  • Is expected to faithfully follow its input command or set point regardless of the final result — the planned work can be sliced into small chunks of equal size - this is a huge assumption by the way - but the execution of this work must also faithfully follow the planned productivity. (See assumptions below).
  • Has no knowledge of the output condition – the difference between desired state and actual state – so cannot self-correct any errors it could make when the preset value drifts, even if this results in large deviations from the preset value — the final target is present but the compliance with that target along the way is missing, since there is no intermediate target to steer toward for each period of assessment - only the final.
There are two simplifying assumptions made in the slicing approach suggested to solve the control of projects:
  • The needed performance in terms of stories or any other measure of performance is linear and of the same size - this requires decomposing the planned work for each period into nearly identical sizes, work efforts, and outcomes.
  • The productivity of the work performed is also linear and unvarying - this requires zero defects, zero variance in the work effort, and sustained productivity at the desired performance level.
Fulfilling these assumptions before the project starts requires effort, and the assumptions about the homogeneity of the planned production, the homogeneity of the work effort, and the homogeneity of any defects, rework, or changes in plan would require near-perfect planning and management of the project. Instead, the reality of all project work is that the planned effort, duration, outcomes, dependencies, and cost are random variables. This is the nature of the non-stationary stochastic processes that drive project work. Nothing will turn out as planned, due to uncertainty. There are two types of uncertainty found in project work:
  • Irreducible Uncertainty - this is the noise of the project: random fluctuations in productivity, technical performance, efficiency, effectiveness, and risk. These cannot be reduced. They are Aleatory Uncertainties.
  • Reducible Uncertainty - these are event-based uncertainties that have a probability of occurring, a probability of consequences, and a residual probability that, even when addressed, they will come back again.

Irreducible Uncertainty can only be handled with margin: cost margin, schedule margin, technical margin. This is the type of margin you use when you drive to work. The GPS navigation system says it is 23 minutes to the office. It's NEVER 23 minutes to the office. Something always interferes with our progress.

Reducible Uncertainty is handled in two ways: spending money to buy down the risk that results from this uncertainty, and Management Reserve (budget reserve and schedule contingency) to be used when something goes wrong, to pay for the fix when the uncertainty turns into reality.
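As a notional illustration of sizing margin for the irreducible kind - the noise model and numbers below are invented for the example, not drawn from any project data - a few lines of Python can turn the noisy "drive to work" into a margin by planning to a percentile of simulated outcomes rather than to the point estimate:

# Notional sketch: irreducible (aleatory) uncertainty is handled with margin.
# Simulate the noisy commute and size the margin from the spread of outcomes,
# not from the single-point "23 minutes" estimate. Numbers are illustrative.
import random

random.seed(1)
point_estimate = 23.0  # minutes, what the GPS says

# Assumed noise model for illustration: traffic, lights, weather always add time.
trials = sorted(point_estimate + abs(random.gauss(0, 3)) + random.expovariate(0.5)
                for _ in range(10_000))

p80 = trials[int(0.80 * len(trials))]  # plan to the 80th percentile, not the estimate
margin = p80 - point_estimate

print(f"Point estimate {point_estimate:.0f} min, 80th percentile {p80:.1f} min, "
      f"schedule margin {margin:.1f} min")

The same approach - simulate the random variables that drive the work and set margin from the tail of the distribution, not the mean - applies to cost and schedule margin on a project.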

The next figure (page 28) shows how to manage in the presence of these  uncertainties, by measuring actual performance against the desired performance at each step along the way.


In this figure, we measure at each assessment point the progress of the project against the desired progress - the planned progress, the needed progress. This planned, desired, or needed progress is developed by looking at the future effort, duration, risk, and uncertainty - the stochastic processes that drive the project - and determining what the progress should be at this point in time to reach our target on or before the need date, at or below the needed cost, and with the needed confidence that the technical capabilities can be delivered along the way. This is closed loop control.

The planned performance, the needed performance, the desired performance is developed early in the project - maybe on day one, more likely after actual performance has been assessed to calibrate future performance. This is called Reference Class Forecasting. With this information, estimates of the needed performance can then be used to establish steering targets along the way to completing the project. These intermediate reference - or steering - points provide feedback along the way toward the goal. They provide the error signal needed to keep the project on track. They are the basis of Closed Loop control.

In the US, many highways have rumble strips cut into the asphalt to signal that you are nearing the edge of the road on the right. They make a loud noise that tells you - hey get back in the lane, otherwise you're going to end up in the ditch.

This is the purpose of the intermediate steering targets for the project. When the variance between planned and actual exceeds a defined threshold, this says hey, you're not going to make it to the end on time, on budget, or with your needed capabilities if you keep going like this.
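A minimal sketch of those rumble strips, assuming a notional planned completion curve and a 10% variance threshold (both invented for the example):

# Notional sketch: intermediate steering targets act as "rumble strips".
# When the variance between planned and actual exceeds a threshold, the
# project signals that corrective action is needed now, not at the end.
planned = [10, 25, 45, 70, 100]  # planned cumulative stories at each assessment point
actual = [9, 22, 36]             # cumulative stories actually done so far
THRESHOLD = 0.10                 # 10% variance tolerance, illustrative only

for period, (plan, done) in enumerate(zip(planned, actual), start=1):
    variance = (plan - done) / plan
    status = "RUMBLE STRIP - take corrective action" if variance > THRESHOLD else "on track"
    print(f"Period {period}: planned {plan}, actual {done}, variance {variance:.0%} - {status}")

With only the final target of 100 stories (the open loop case above), the 20% shortfall in period 3 would not become visible until the project arrived at the end.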


Kent Beck's quote is...

Optimism is the disease of software development. Feedback is the cure.

 

This feedback must have a reference to compare against if it is to be of any value in steering the project to a successful completion. Knowing it's going to be late, over budget, and doesn't work when we arrive at late, over budget, and not working is of little help to the passengers of the project.

Related articles Indicators of project performance provide us with "Steering Inputs" Seven Immutable Activities of Project Success Why Project Management is a Control System The Use and Misuse of Little's Law and Central Limit Theorem in Agile Why Johnny Can't Estimate or the Dystopia of Poor Estimating Practices Slicing Work Into Small Pieces How To Assure Your Project Will Fail Control Systems - Their Misuse and Abuse Getting to done!
Categories: Project Management

Seven Deadly Sins of Metrics Programs: Gluttony

Gluttony is over-indulgence to the point of waste. Gluttony brings to mind pictures of someone consuming food at a rate well beyond simple need. In measurement, gluttony can be exemplified by programs that collect data that has no near-term need or purpose. When asked why the data was collected, the most common answer boils down to ‘we might need it someday…’

Why is collecting data just in case, for future use, or just because it can be done a problem? The problems caused by measurement gluttony fall into two basic categories: the first is that it wastes the effort of the measurement team, and the second is that it wastes credibility.

Wasting effort dilutes the measurement team’s resources that should be focused on collecting and analyzing data that can make a difference. Unless the measurement program has unlimited resources, over-collection can obscure important trends and events by reducing time for analysis and interpretation. Any program that scrimps on analysis and interpretation is asking for trouble, much like a person with clogged arteries. Measures without analysis and interpretation are dangerous because people see what they like in the data due to the clustering illusion (a cognitive bias). The clustering illusion (or clustering bias) is the tendency to see patterns in clusters or streaks within a smaller sample of data inside larger data sets. Once a pattern is seen, it becomes difficult to stop people from believing in a pattern that does not exist.

The second problem of measurement gluttony is that it wastes the credibility of the measurement team.  Collecting data that is warehoused just in case it might be important causes those who provide the measures and metrics to wonder what is being done with the data. Collecting data that you are not using creates an atmosphere of mystery and fear.  Add other typical organizational problems, such as a lack of transparency and openness in communicating measurement results, and fear will turn into resistance.  A sure sign of problems is when you begin hearing consistent questions about what you are doing, such as “just what is it that you do with this data?”  All measures should have a feedback loop to those being measured so they understand what you are doing, how the data is being used, and what the analysis means.  Telling people that you are not doing anything with the data doesn’t count as feedback. Simply put, don’t collect the data if you are not going to use it, and make sure you are using the data you are collecting to make improvements!

A simple rule is to collect only the measurement data that you need and CAN use.  Make sure all stakeholders understand what you are going to do with the data.  If you feel that you are over-collecting, go on a quick data diet.  One strategy for cutting back is to begin with the measures where dropping them carries the least risk. For example, start with a measure that you have not based a positive action on in the last six months. Gluttony in measurement gums up the works just like it does in a human body; the result is slower reactions and growing resistance, which can lead to a fatal event for your program.
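As a rough illustration of that rule of thumb - the Measure record, its fields, and the six-month window are hypothetical, not a prescribed tool - a quick data-diet check might flag the measures that no positive action has been based on recently:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List, Optional

@dataclass
class Measure:
    """Hypothetical record of a collected measure and when it last drove a decision."""
    name: str
    last_action_date: Optional[date]  # None = never used to trigger an improvement

def data_diet_candidates(measures: List[Measure], months: int = 6) -> List[Measure]:
    """Return measures that have not driven a positive action within the window.

    These are the safest candidates to stop collecting first.
    """
    cutoff = date.today() - timedelta(days=30 * months)
    return [m for m in measures
            if m.last_action_date is None or m.last_action_date < cutoff]

# Example inventory: one measure acted on recently, one never used.
inventory = [
    Measure("defect density", date.today() - timedelta(days=20)),
    Measure("lines of code per developer", None),
]
for m in data_diet_candidates(inventory):
    print(f"Candidate for the data diet: {m.name}")
```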

 


Categories: Process Management

Quote of the Day

Herding Cats - Glen Alleman - Fri, 08/08/2014 - 22:31

Strategy without tactics is the slowest route to victory
Tactics without strategy is the noise before defeat.

— Sun Tzu

Every time you hear about some conjectured great new idea on how to solve a vexing problem, ask: what is the actual problem to be solved? What is its root cause? What domain is the problem found in, and in what domain is it already solved? And finally, what is your strategy for assuring that your solution will actually work without breaking something else, violating a business principle, or creating an obligation you didn't think of because you were too busy admiring your clever idea?

 

Categories: Project Management

Introduction to big data presentation

I presented big data to Amdocs’ product group last week. One of the sessions I did was recorded, so I might be able to add it here later. Meanwhile, you can check out the slides.

Note that, to keep the slides visual, I put some of the information in the slide notes rather than on the slides themselves.

Big data Overview from Arnon Rotem-Gal-Oz

Categories: Architecture

Are You Doing Agile Results?

If you already use Agile Results as your personal results system, you have a big advantage.

Why?

Because most people are running around, scrambling through a laundry list of too many things to do, with little clarity about what the end results or outcomes should be and little clarity about which high-value things to focus on.  They are using their worst energy for their most important things.  They are spending too much time on the things that don’t matter and not enough time on the things that do.  They are feeling at their worst when they need to feel at their best, and they are struggling to keep up with the pace of change.

I created Agile Results to deal with the chaos in work and life, as a way to rise above the noise, and to easily leverage the most powerful habits and practices for getting better results in work and life.

Agile Results, in a nutshell, is a simple system for mastering productivity and time management, while at the same time, achieving more impact, realizing your potential, and feeling more fulfillment.

I wrote about the system in the book Getting Results the Agile Way.  It’s been a best seller in time management.

How Does Agile Results Work?

Agile Results works by combining proven practices for productivity, time management, psychology, project management, and some of the best lessons learned on high-performance.   And it’s been tested for more than a decade under extreme scenarios and a variety of conditions from individuals to large teams.

Work-life balance is baked into the system, but more importantly Agile Results helps you live your values wherever you are, play to your strengths, and rapidly learn how to improve your results in any situation.  When you spend more time in your values, you naturally tap into the skills and abilities that help bring out your best.

The simplest way to think of Agile Results is that it helps you direct your attention and apply your effort on the things that count.  By spending more time on high-value activities and by getting intentional about your outcomes, you dramatically improve your ability to get better results.

But none of that matters if you aren’t using Agile Results.

How Can You Start Using Agile Results?

Start simple.

Simply ask yourself, “What are the 3 wins, results, or outcomes that I want for today?”   Consider the demands you have on your plate, the time and energy you’ve got, and the opportunities you have for today, and write those 3 things down.

That’s it.   You’re doing Agile Results.

Of course, there’s more, but that’s the single most important thing you can do to immediately gain clarity, regain your focus, and spend your time and energy on the most valuable things.

Now, let’s assume this is the only post you ever read on Agile Results.   Let’s take a fast walkthrough of how you could use the system on a regular basis to radically and rapidly improve your results.

How I Do Agile Results

Here’s a summary of how I do Agile Results.

I create a new monthly list at the start of each month that lists out all the things I think I need to do, and I bubble up to the top the 3 best things I could achieve or must get done.   I look at it at the start of the week, and any time I’m worried that I’m missing something.  This entire process takes me anywhere from 10-20 minutes a month.

I create a weekly list at the start of the week, and I look at it at the start of each day, as input to my 3 target wins or outcomes for the day, and any time I’m worried if I’m missing anything.   This tends to take me 5-10 minutes at the start of the week.

I barely ever have to look at my lists – it’s the act of writing things down that gives me quick focus on what’s important.   I’m careful not to put a bunch of minutiae in my lists, because then I’d train my brain to stop focusing on what’s important, and I would become forgetful and distracted.  Instead, it’s simple scaffolding.

Each day, I write a simple list of what’s on my mind and things I think I need to achieve.   Next, I step back and ask myself, “What are the 3 things I want to accomplish today?”, and I write those down.   (This tends to take me 5 minutes or less.  When I first started it took me about 10.)

Each Friday, I take the time to think through three things going well and three things to improve.   I take what I learn as input into how I can simplify work and life, and how I can improve my results with less effort and more effectiveness.   This takes me 10-20 minutes each Friday.

How Can You Adopt Agile Results?

Use it to plan your day, your week, and your month.

Here is a simple recipe for adopting Agile Results and using it to get better results in work and life:

  1. Add a recurring appointment on your calendar for Monday mornings.  Call it Monday Vision.   Add this text to the body of the reminder: “What are your 3 wins for this week?”
  2. Add a recurring appointment on your calendar to pop up every day in the morning.  Call it Daily Wins.  Add this text to the body of the reminder: “What are your 3 wins for today?”
  3. Add a recurring appointment on your calendar to pop up every Friday mid-morning.  Call it Friday Reflection.  Add this text to the body of your reminder:  “What are 3 things going well?  What are 3 things to improve?”  (A scripted version of these three prompts is sketched after this list.)
  4. On the last day of the month, make a full list of everything you care about for the next month.   Alphabetize the list.  Identify the 3 most important things that you want to accomplish for the month, and put those at the top of the list.   Call this list  Monthly Results for Month XYZ.  (Note – Alphabetizing your list helps you name your list better and sort your list better.  It’s hard to refer to something important you have to do if you don’t even have a name for it.  If naming the things on your list and sorting them is too much to do, you don’t need to.  It’s just an additional tip that helps you get even more effective and more efficient.)
  5. On Monday of each week, when you wake up, make a full list of everything you care about accomplishing for the week.  Alphabetize the list.  Identify the 3 most important things you want to accomplish and add that to the top of the list.  (Again, if you don’t want to alphabetize then don’t.)
  6. On Wednesdays, in the morning, review the three things you want to accomplish for the week to see whether there is anything important you should have spent time on or completed by now.  Readjust your priorities and focus as appropriate.  Remember that the purpose of having the list of your most important outcomes for the week isn’t to get good at predicting what’s important.  It’s to help you focus and to help you make better decisions about what to spend time on throughout the week.  If something better comes along, then at least you can make a conscious decision to trade up and focus on that.  Keep trading up.   And when you look back on Friday, you’ll know whether you are getting better at trading up, or whether you are just getting randomized or focusing on the short term while hurting the long term.
  7. On Fridays, in the morning, do your Friday Reflection.  As part of the exercise, check against the weekly and monthly outcomes you want to accomplish.  If you weren’t effective for the week, don’t ask “why not,” ask “how to.”   Ask how you can bite off better things and how you can make better choices throughout the week.  Just focus on little behavior changes, and these will add up over time.  You’ll get better and better as you go, as long as you keep learning and changing your approach.   That’s the Agile Way.

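If you would rather script those three prompts than click through your calendar, here is a minimal sketch in plain Python - the weekday logic is just one reading of steps 1-3 above, not part of the original recipe, and no calendar API is assumed:

```python
from datetime import date

# The three Agile Results prompts from the recipe above.
PROMPTS = {
    "monday_vision": "What are your 3 wins for this week?",
    "daily_wins": "What are your 3 wins for today?",
    "friday_reflection": "What are 3 things going well? What are 3 things to improve?",
}

def prompts_for(day: date) -> list:
    """Return the prompts that apply on a given day."""
    due = [PROMPTS["daily_wins"]]              # every day: Daily Wins
    if day.weekday() == 0:                     # Monday: start with Monday Vision
        due.insert(0, PROMPTS["monday_vision"])
    if day.weekday() == 4:                     # Friday: end with Friday Reflection
        due.append(PROMPTS["friday_reflection"])
    return due

if __name__ == "__main__":
    for prompt in prompts_for(date.today()):
        print(prompt)
```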
There are lots of success stories by other people who have used Agile Results.   Everybody from presidents of companies to people in the trenches, to doctors and teachers, to teams and leaders, as well as single parents and social workers.

But none of that matters if it’s not your story.

Work on your success story and just start getting better results, right here, right now.

What are the three most important things you really want to accomplish or achieve today?

Categories: Architecture, Programming