
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Scale Agile With Small-World Networks Posted

I posted my most recent Pragmatic Manager newsletter, Scale Agile With Small-World Networks on my site.

This is a way you can scale agile out, not up. No hierarchies needed.

Small-world networks take advantage of the fact that people want to help other people in the organization. Unless you have created MBOs (Management By Objectives) that make people not want to help others, people want to see the entire product succeed. That means they want to help others. Small-world networks also take advantage of the best network in your organization—the rumor mill.
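The term "small-world network" comes from network science: a mostly local ring lattice needs only a few random long-range "shortcut" links to collapse its average path length (the Watts-Strogatz effect). A minimal pure-Python sketch of that effect, with all sizes and counts chosen purely for illustration:

```python
import random
from collections import deque

def avg_path_length(adj):
    """Mean shortest-path length over all reachable node pairs (BFS from each node)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbours on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    return adj

random.seed(42)
adj = ring_lattice(30, 2)          # purely local connections ("org chart" world)
before = avg_path_length(adj)

# Add a handful of random long-range shortcuts (the rumor-mill links).
for _ in range(5):
    a, b = random.sample(range(30), 2)
    adj[a].add(b)
    adj[b].add(a)
after = avg_path_length(adj)

print(f"avg path: lattice={before:.2f}, with shortcuts={after:.2f}")
```

A few shortcuts shrink the average number of hops between any two people, which is the property the newsletter's scaling argument leans on.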

If you enjoy reading this newsletter, please do subscribe. I let my readers know about specials that I run for my books and when new books come out first.

Categories: Project Management

What Ever Happened to the Founders of Sierra Online?

Making the Complex Simple - John Sonmez - Mon, 09/29/2014 - 15:00

Some of the fondest memories of my childhood involve playing adventure games like Space Quest, Kings Quest and Quest for Glory. I remember spending countless hours reloading from save spots and trying to figure out a puzzle. I remember that exciting feeling of anticipation when the Sierra logo flashed onto the screen as my 486 […]

The post What Ever Happened to the Founders of Sierra Online? appeared first on Simple Programmer.

Categories: Programming

R: Filtering data frames by column type (‘x’ must be numeric)

Mark Needham - Mon, 09/29/2014 - 06:46

I’ve been working through the exercises from An Introduction to Statistical Learning and one of them required creating a pairwise correlation matrix of the variables in a data frame.

The exercise uses the ‘Carseats’ data set which can be imported like so:

> install.packages("ISLR")
> library(ISLR)
> head(Carseats)
  Sales CompPrice Income Advertising Population Price ShelveLoc Age Education Urban  US
1  9.50       138     73          11        276   120       Bad  42        17   Yes Yes
2 11.22       111     48          16        260    83      Good  65        10   Yes Yes
3 10.06       113     35          10        269    80    Medium  59        12   Yes Yes
4  7.40       117    100           4        466    97    Medium  55        14   Yes Yes
5  4.15       141     64           3        340   128       Bad  38        13   Yes  No
6 10.81       124    113          13        501    72       Bad  78        16    No Yes


If we try to run the ‘cor’ function on the data frame we’ll get the following error:

> cor(Carseats)
Error in cor(Carseats) : 'x' must be numeric

As the error message suggests, we can’t pass non-numeric variables to this function, so we need to remove the categorical variables from our data frame.

But first we need to work out which columns those are:

> sapply(Carseats, class)
      Sales   CompPrice      Income Advertising  Population       Price   ShelveLoc         Age   Education 
  "numeric"   "numeric"   "numeric"   "numeric"   "numeric"   "numeric"    "factor"   "numeric"   "numeric" 
      Urban          US 
   "factor"    "factor"

We can see a few columns of type ‘factor’ and luckily for us there’s a function which will help us identify those more easily:

> sapply(Carseats, is.factor)
      Sales   CompPrice      Income Advertising  Population       Price   ShelveLoc         Age   Education 
      FALSE       FALSE       FALSE       FALSE       FALSE       FALSE        TRUE       FALSE       FALSE 
      Urban          US 
       TRUE        TRUE

Now we can remove those columns from our data frame and create the correlation matrix:

> cor(Carseats[sapply(Carseats, function(x) !is.factor(x))])
                  Sales   CompPrice       Income  Advertising   Population       Price          Age    Education
Sales        1.00000000  0.06407873  0.151950979  0.269506781  0.050470984 -0.44495073 -0.231815440 -0.051955242
CompPrice    0.06407873  1.00000000 -0.080653423 -0.024198788 -0.094706516  0.58484777 -0.100238817  0.025197050
Income       0.15195098 -0.08065342  1.000000000  0.058994706 -0.007876994 -0.05669820 -0.004670094 -0.056855422
Advertising  0.26950678 -0.02419879  0.058994706  1.000000000  0.265652145  0.04453687 -0.004557497 -0.033594307
Population   0.05047098 -0.09470652 -0.007876994  0.265652145  1.000000000 -0.01214362 -0.042663355 -0.106378231
Price       -0.44495073  0.58484777 -0.056698202  0.044536874 -0.012143620  1.00000000 -0.102176839  0.011746599
Age         -0.23181544 -0.10023882 -0.004670094 -0.004557497 -0.042663355 -0.10217684  1.000000000  0.006488032
Education   -0.05195524  0.02519705 -0.056855422 -0.033594307 -0.106378231  0.01174660  0.006488032  1.000000000
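For readers more at home in Python, the same keep-the-numeric-columns-then-correlate pattern can be sketched with pandas; `select_dtypes` plays the role of the `!is.factor(x)` filter. The tiny data frame below is made up for illustration, since the ISLR Carseats data isn't available outside R:

```python
import pandas as pd

# A tiny stand-in for the Carseats data frame (invented values).
df = pd.DataFrame({
    "Sales":     [9.50, 11.22, 10.06, 7.40],
    "Price":     [120, 83, 80, 97],
    "ShelveLoc": ["Bad", "Good", "Medium", "Medium"],  # categorical
    "US":        ["Yes", "Yes", "Yes", "No"],          # categorical
})

# Recent pandas versions also error on mixed types (much like cor(Carseats)
# in R) unless you pass numeric_only=True, so filter the columns first.
numeric = df.select_dtypes(include="number")
corr = numeric.corr()
print(corr)
```

As in the R version, the factor-like columns are dropped before computing the correlation matrix, so only `Sales` and `Price` survive.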
Categories: Programming

Don Yaeger's 16 Consistent Characteristics of Greatness

Herding Cats - Glen Alleman - Mon, 09/29/2014 - 04:40

Don Yaeger has a small bookmark-sized card on 16 Consistent Characteristics of Greatness. I got my card at a PMI conference where he spoke, and I'm repeating them here. Don's talk was about the sports people he interviewed for magazines and books. The audience was hard-bitten government and industry project and program managers - those accountable for millions and billions of dollars of high-risk, high-reward endeavors. After Don finished his talk, not a person in the room had dry eyes. Subscribe to Don's daily message at www.donyaeger.com

How They Think

1. It's personal - they hate to lose more than they love to win.

2. Rubbing elbows - they understand the value of association.

3. Believe - they have faith in a higher power.

4. Contagious Enthusiasm - they are positive thinkers... They are enthusiastic... and that enthusiasm rubs off.

How They Prepare

5. Hope for the best, but ... They prepare for all possibilities before they step on the field.

6. What Off-Season? They are always working towards the next game... The goal is what's ahead, and there's always something ahead.

7. Visualize Victory - They see victory before the game begins.

8. Inner Fire - they use adversity as fuel.

How They Work

9. Ice in Their Veins - they are risk-takers and don't fear making a mistake.

10. When All Else Fails - they know how - and when - to adjust their game plan.

11. Ultimate Teammate - they will assume whatever role is necessary for the team to win.

12. Not Just About the Benjamins - they don't just play for the money.

How They Live

13. Do Unto Others - they know character is defined by how they treat those who cannot help them.

14. When No One Is Watching - they are comfortable in the mirror... They live their life with integrity.

15. When Everyone is Watching - they embrace the idea of being a role model.

16. Records Are Made To Be Broken - they know their legacy isn't what they did on the field. They are well rounded.

Categories: Project Management


SPaMCAST 309 – Agile User Acceptance Testing

Software Process and Measurement Cast - Sun, 09/28/2014 - 22:00

Software Process and Measurement Cast number 309 features our essay on Agile user acceptance testing. Agile user acceptance testing (AUAT) confirms that the output of a project meets the business’ needs and requirements. The concept of acceptance testing early and often is almost inarguable, whether you are using Agile or any other method. AUAT generates early customer feedback, which increases customer satisfaction and reduces the potential for delivering defects. While implementing an effective and efficient AUAT isn’t always easy, it most certainly is possible!

The essay begins:

The classic definition of a user acceptance test (UAT) is a process that confirms that the output of a project meets the business needs and requirements. UAT in an Agile project generally is more rigorous and timely than the classic end of project UAT found in waterfall projects. In waterfall projects, the UAT is usually the last step in the development process.  The problem with that classic scenario is that significant defects are found late in the process, or worse, the business discovers that what is being delivered isn’t exactly what they wanted. Agile projects provide a number of opportunities to interject UAT activities throughout the process, starting with the development of user stories, to the sprint reviews and demos, and finally the UAT sprints at the end of a release.  Each level provides a platform for active learning and feedback from the business.

Listen to the rest of the essay!

Next

SPaMCAST 310 features our interview with Michael Burrows. This is Michael’s second visit to the Software Process and Measurement Cast.  In this visit we discussed his new book, Kanban from the Inside.  The book lays out why Kanban is a management method built on a set of values rather than just a set of techniques. The argument is made that Kanban leads to better outcomes for projects, managers, organizations and customers!

Buy and read the book before the interview!

Upcoming Events

DCG Webinars:

Agile Risk Management – It Is Still Important! October 24, 2014 11:30 EDT

Has the adoption of Agile techniques magically erased risk from software projects? Or, have we just changed how we recognize and manage risk?  Or, more frighteningly, by changing the project environment through adopting Agile techniques, have we tricked ourselves into thinking that risk has been abolished?

Upcoming Conferences:

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1. I have a great discount code! Contact me if you are interested.

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast receives some support if you sign up here. All the revenue our sponsorship generates goes toward bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Categories: Process Management

One Week in F#

Phil Trelford's Array - Sun, 09/28/2014 - 20:30

With Sergey Tihon on vacation this week, I’ve collated a one-off alternative F# Weekly roundup, covering some of the highlights from another busy week in the F# community.

News in brief


Events

This week has seen meetups in Nashville, Raleigh, Portland, Washington DC, Stockholm and London:

The Multi-hit brick! #fsharp #gamedev #breakout @DCFSharp @silverSpoon pic.twitter.com/nVPxUrhIZg

— Wesley Wiser (@wesleywiser) September 25, 2014

Recordings

Upcoming meetups

Upcoming Conferences

Projects

Blogs

FsiBot

.@djidja8 " ▤ ⬛ ▤ ⬛ ▤ ⬛ ▤ ⬛ ⬛ ▤ ⬛ ▤ ⬛ ▤ ⬛ ▤ ▤ ⬛ ▤ ⬛ ▤ ⬛ ▤ ⬛ ⬛ ▤ ⬛ ▤ ⬛ ▤ ⬛ ▤ ▤ ⬛ ▤ ⬛ ♙ ⬛ ▤ ⬛ ⬛ ▤ ⬛ ▤ ⬛ ▤ ⬛ ▤ ▤ ⬛ ▤ ⬛ ▤ ⬛ ▤ ⬛ ⬛ ▤ ⬛ ▤ ⬛ ▤ ⬛ ▤"

— fsibot (@fsibot) September 22, 2014

Have a great week!

Categories: Programming

Dazzle Your Audience By Doodling

Xebia Blog - Sun, 09/28/2014 - 10:29

When we were kids, we loved to doodle. Most of us did anyway. I doodled all the time, everywhere, and, to the dismay of my mother, on everything. I still love to doodle. In fact, I believe doodling is essential.

The tragedy of the doodle lies in its definition: "A doodle is an unfocused or unconscious drawing while a person's attention is otherwise occupied." That's why most of us have been taught not to doodle. Seems logical, right? Teacher sees you doodling, that is not paying attention in class, thus not learning as much as you should, so he puts a stop to it. Trouble is though, it's wrong. And it's not just a little bit wrong, it's totally and utterly wrong. Exactly how wrong was shown in a case study by Jackie Andrade. She discovered that doodlers have 29% better recall. So, if you don't doodle, you're doing yourself a disservice.

And you're not just doing yourself a disservice, you're also doing your audience a disservice. Neurologists have discovered a phenomenon dubbed "mirror neurons." When you see something, the same neurons fire as if you were doing it. So, if someone shows you a picture, let's say a slide in a presentation, it is as if you're showing that picture to yourself.

Wait, what? That doesn't sound special at all, now does it? That's why presentations using only slides can be so unintentionally relaxing.

Now, if you see someone write or draw something on a flip chart, dry erase board or any other surface in plain sight, it is as if you're writing or drawing it yourself. And that ensures 29% better recall. Better yet, you'll remember what the presenter wants you to remember. Especially if he can trigger an emotional response.

Now, why is that? At EUVIZ in Berlin last month, I attended a presentation by Barbara Siegel from Look2Listen that changed my life. Barbara talked about the latest insights from neuroscience that prove that everyone feels first and thinks later. So, if you want your audience to tune in to your talk, show some emotion! Want people to remember specific points of your talk? Trigger and capture emotion by writing and drawing in real-time. Emotion runs deep and draws firm neurological paths in the brain that help you recreate the memory. Memories are recreated, not stored and retrieved.

Another thing that helps you draw firm neurological paths is exercise. If you get your audience to stand up and move, you increase their brain activity by 7%, heightening alertness and motivation. By getting your audience to sit down again after physical exercise, you trigger a rebalancing of neurotransmitters and other neurochemicals, so they can use the newly spawned neurons in their brain to combine into memories of your talk. Now that got me running every other day! Well, jogging is more like it, but hey: I'm hitting my target heart-rate regularly!

How does this help you become a better public speaker? Remember these two key points:

  1. At the start of your speech, get your audience to stand up and move to ensure 7% more brain activity and prime them for maximum recall.
  2. Make sure to use visuals and metaphors and create most, if not all, of them in real-time to leverage the mirror neuron effect and increase recall by 29%.

Value Is More Than Quality

A good fruit salad requires balance.

In the Software Process and Measurement Cast 308, author Michael West describes a scenario in which a cellphone manufacturer decided that quality was their ultimate goal. The handset that resulted did not wow their customers. The functionality wasn’t what was expected and the price was prohibitive. The moral of the story was that quality really should not have been the ultimate goal of the project. At the time I recorded the interview I did not think the message Mr. West was suggesting was controversial. Recently I got a call on Skype. The caller had listened to the podcast and read Project Success: A Balancing Act and wanted to discuss why his department’s clients were up in arms about the slow rate of delivery and the high cost of projects. Heated arguments had erupted at steering committee meetings and it was rumored that someone had suggested that if the business wanted cheaper products, IT would just stop testing. Clearly, focusing on the goal of zero defects (which was equated with quality) was eliciting unproductive behavior. Our discussion led to an agreement that a more balanced goal for software development, enhancement or maintenance projects is the delivery of maximum value to whoever requested the project.

When a sponsor funds and sells a project, they communicate a set of expectations. Those expectations typically encompass what a project will deliver:

  1. The functionality needed to meet their needs,
  2. The budget they will spend to acquire that functionality,
  3. When they want the functionality, and
  4. The level of quality required to support their needs.

Each expectation is part of the definition of value. A project that is delivered with zero defects two years after it is needed is less valuable than a project delivered when needed that may have some latent minor defects. A project that costs too much uses resources that might be better spent on another project, or potentially causes an organization’s products to be priced out of the market. Successful projects find a balance between all expectations in order to maximize the value that is delivered.
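One way to make that balance concrete is a toy value score that discounts delivered functionality by lateness and defects, then subtracts cost. This is a sketch only; the function name, the weights, and the penalty shapes are all invented for illustration, not part of the essay:

```python
def project_value(functionality, cost, months_late, defects,
                  delay_penalty=0.1, defect_penalty=0.05):
    """Toy value score: benefit discounted for lateness and defects, minus cost.
    All weights here are illustrative, not a real valuation model."""
    timeliness = 1 / (1 + delay_penalty * months_late)
    quality = 1 / (1 + defect_penalty * defects)
    return functionality * timeliness * quality - cost

# A zero-defect release delivered two years late...
late_perfect = project_value(functionality=100, cost=40, months_late=24, defects=0)
# ...versus an on-time release with a few latent minor defects.
on_time_minor = project_value(functionality=100, cost=40, months_late=0, defects=3)
print(late_perfect, on_time_minor)
```

Under any reasonable weighting the on-time release with minor defects scores higher, which is the essay's point: perfect quality delivered too late can still destroy value.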

Quality is not the ultimate goal of any software development, enhancement or maintenance project, but neither is cost, schedule or even functionality. Value is the goal all projects should pursue. Finding and maintaining equilibrium between the competing goals of cost, schedule, quality and functionality is needed to maximize the ultimate value of a project. Each project will have its own balance based on the context of the project. Focusing on one goal to the exclusion of all others represents an opportunity cost. Every time we face a decision that promotes one goal over another, we should ask ourselves whether that choice is worth the focus it takes away from the other goals. Projects that focus on value create an environment in which teams, sponsors and organizations confront the trade-offs that goals like zero defects or perfect quality can cause.


Categories: Process Management

Making Choices in the Absence of Information

Herding Cats - Glen Alleman - Sat, 09/27/2014 - 15:57

Decision making in the presence of uncertainty is a normal business function as well as a normal part of the technical development process. The world is full of uncertainty.

Those seeking certainty will be woefully disappointed. Those conjecturing that decisions can't be made in the presence of uncertainty are woefully misinformed. 

Along with all this woefulness is the boneheaded notion that estimating is guessing, and that decisions can actually be made in the presence of uncertainty in the absence of estimating.

Here's why. When we are faced with a decision - a choice between multiple options, multiple outcomes - each outcome is probabilistic. If it were not - that is, if we had 100% visibility into the consequences of our decision, the cost involved in making that decision, and the cost or benefit impact from that decision - it's no longer a decision. It's a choice between several options based on something other than time, money, or benefit.

We're at the farmers market every Saturday morning. Apples are in season. Honey Crisp are my favorite. Local growers all know each other and price their apples pretty much the same. What they don't sell on Saturday, they take to private grocers. What doesn't go there goes to the chains, labeled Locally Grown. Each step in the supply chain has a markup, so buying at the Farmers Market gets the lowest price. So deciding which apples to buy is usually an impulse for me and my wife. The cost is the same, the benefit is the same; it's just an impulse.

Let's look at the broad issue here - not about apples, from Valuation of Software Initiatives Under Uncertainty, Hakan Erdogmus, John Favaro, and Michael Halling, (Erdogmus is well known for his work in Real Options).

[Excerpt from Valuation of Software Initiatives Under Uncertainty]

Buying an ERP system, or funding the development of a new product, or funding the consolidation of the data center in another city is a much different choice process than picking apples. These decisions have uncertainty. Uncertainty of the cost. Uncertainty of the benefits, revenue, savings, increasing in reliability and maintainability. Uncertainty in almost every variable. 

Managing in the presence of uncertainty and the resulting risk is called business management. It's also called how adults manage projects (Tim Lister).

So with the uncertainty conditions established for our project work, how can we make decisions in the presence of the uncertainties of cost, schedule, resource utilization, delivered capabilities, and all the other attributes and all the ...ilities of the inputs and outcomes of our work?

The Presence of Uncertainty is one of most Significant Characteristics of Project Work

Managing in the presence of uncertainty is unavoidable. Ignoring this uncertainty doesn't make it go away - it's still there even if you ignore it.

Uncertainty comes in many forms

  • Statistical uncertainty - Aleatory uncertainty, only margin can address this uncertainty.
  • Subjective judgement - bias, anchoring, and adjustment.
  • Systematic error - lack of understanding of the reference model.
  • Incomplete knowledge - Epistemic Uncertainty, this lack of knowledge can be improved with effort.
  • Temporal variation - instability in the observed and measured system.
  • Inherent stochasticity - instability between and within collaborative system elements.
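The first bullet - aleatory uncertainty, addressed only by margin - can be sketched numerically: sample the irreducible variation and set the commitment above the mean, with the gap being the schedule margin. The Gaussian distribution, the 80th-percentile choice, and all the numbers below are invented for illustration:

```python
import random
import statistics

random.seed(11)

# Irreducible (aleatory) day-to-day variation in a task's duration,
# modelled as a Gaussian purely for illustration: mean 10 days, sd 2.
samples = sorted(random.gauss(10.0, 2.0) for _ in range(10_000))

mean = statistics.fmean(samples)
p80 = samples[int(0.8 * len(samples))]  # 80th-percentile duration
margin = p80 - mean                     # schedule margin covering the variation

print(f"mean={mean:.1f} days, 80th pct={p80:.1f} days, margin={margin:.1f} days")
```

No amount of extra knowledge shrinks this spread (that would be the epistemic bullet); committing at a percentile above the mean and carrying the difference as margin is the only protection.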

Agile is an Approach to Dealing With Software Project Uncertainty

Going on 12 years ago, the topic of managing in the presence of uncertainty was an important one that spawned agile approaches to ERP. This work has progressed to more formal principles and practices around software development in the presence of uncertainty and the acquisition of software products.

So Back To the Problem at Hand

If decisions - credible decisions - are to be made in the presence of uncertainty, then somehow we need information to address the sources of that uncertainty in the bulleted list above. This information can be obtained through many means: modeling, sampling, parametric models, past performance, reference classes. Each of these sources has an inherent uncertainty of its own. So in the end, it comes down to this...

We Need To Estimate

There's no way out of it. We can't make a credible decision of any importance without an estimate of the impact of that decision, the cost incurred from making it, the potential benefits from it, and the opportunity cost of NOT selecting an outcome. Picking one Honey Crisp basket over another - not much at risk, low cost, low probability of disappointment. Planning, funding, managing, deploying, and operating an ERP system - not likely done in the absence of estimating all variables up front, every time we produce the next increment, every time we have new information, every time we need to make a decision. To suggest otherwise violates the core principles of microeconomics. If it's your money, no one cares - apples or ERP, proceed at your own risk and ignore microeconomics. If it's not your money, it's going to require intentional ignorance of the core principles of successful business management. Behave appropriately.

Related articles

  • Time to Revisit The Risk Discussion
  • How to Deal With Complexity In Software Projects?
  • An Example of Complete Misunderstanding of Project Work
  • Uncertainty is the Source of Risk
  • When We Say Risk What Do We Really Mean?
  • Both Aleatory and Epistemic Uncertainty Create Risk
  • Uncertainty and Risk
  • Four Types of Risk
  • Bayesian probability theory banned by English court
Categories: Project Management

An Example of Complete Misunderstanding of Project Work

Herding Cats - Glen Alleman - Sat, 09/27/2014 - 01:06

WARNING: RANT AGAINST INTENTIONAL IGNORANCE FOLLOWS

This is one of those pictures tossed out at some conference that drives me crazy. It's uninformed, ignores the disciplines of developing software for money, and is meant to show how smart someone is without actually understanding the core processes needed to be knowledgeable of the topic - in this case the statistical processes of project work. Then the picture gets circulated and re-posted, and becomes the basis of all kinds of other misunderstanding, just like the Dilbert cartoons that caricature the problem but have no corrective actions associated.

It is popular in some circles of agile development to construct charts showing the strawman of deterministic and waterfall approaches, then compare them to the stochastic approaches and point out how much better the latter is than the former. Here's an example.

These strawman approaches are of course not only misinformed, they're essentially nonsense in any domain where credible project management is established, and the basis of the response to them is Don't Do Stupid Things on Purpose.

[Chart contrasting a "Deterministic View" with a "Stochastic View" of project work]

Let's look at each strawman statement for the Deterministic View in light of actual project management processes, either simply best practice or mandated practice.

  • Technologies are stable - no one who has been around for the last 50 years believes this. And if they do, they've been under a rock. Suggesting this is the case ignores even the simplest observations of technology and its path of progress.
  • Technologies are predictable - anyone with experience in any engineering discipline knows this is not the case. Beyond the simplest single machine, unintended consequences and emergent behavior are obvious.
  • Requirements are stable - no they're not. Not even in the bone head simplest project. This would require precognition and clairvoyance.
  • Requirements are predictable - no they're not. Read any Requirements Management guidance, any requirements elicitation process, or work any non-trivial project to learn this as a cub developer.
  • Useful information is available at the start - this would require clairvoyance.
  • Decisions are front loaded - this ignores completely the principles of microeconomics of decision making in the presence of uncertainty. Good way to set fire to your money. For a good survey of when and how to make front loaded decisions see Making Essential Choices with Scant Information. Also buy Williams other book Modelling Complex Projects. Along with another book Project Governance: Getting Investments Right. This statement is a prime example of not doing your homework before deciding to post something in public.
  • Task durations are predictable - all task duration are driven by aleatory uncertainty. For this to be true, the laws of stochastic process would have to be suspended. Another example of been asleep in the High School statistics class.
  • Task arrival times are predictable - same as above. Must have been a classics major in college. With full apologies to our daughter, who was a classics major.
  • Our work path is linear, unidirectional - this would require the problem to be so simple it can be modeled as a step-by-step assembly of Lego parts. Unlikely in any actual non-trivial project. When a system of systems becomes the problem - any enterprise IT project, any complex product - the conditions of linearity and unidirectionality go out the window.
  • Variability is always harmful - this violates the basis of all engineered systems, where, as Deming showed, variability is built into the system. Didn't anyone who made this chart read Deming?
  • The math we need is arithmetic - complete ignorance of the basic processes of all systems - they are statistical generating functions creating probabilistic outcomes.

The only explanation here is the intentional ignorance of basic science, math, engineering, and computer science. 

In the Stochastic View there are equally egregious errors.

  • Technologies are evolving - of course they are. Look at any technology to see rapid and many times disruptive evolution. Project management is Risk Management. Risks are created by uncertainty - reducible and irreducible. Managing in the presence of uncertainty is how adults manage projects.
  • Technologies are unpredictable - in some sense, but we're building systems from parts in the marketplace. If you're a researcher at Apple this is likely the case. If you're integrating an ERP system, you'd better understand the process, technology, and outcomes, or you're gonna fail. Don't let people who believe this spend your money.
  • Requirements are evolving - of course they are. But the needed capabilities had better be stable or you've signed up for a Death March project, with no definition of done. But requirements aren't the starting point, Capabilities are. Capabilities Based Planning is how enterprise and complex projects are managed.
  • Requirements are degrees of freedom - I have no idea what this means. Trade space is part of all good engineering processes. I wonder if the author or those referencing this chart know that.
  • Useful information is continuously arriving - of course it is. This work is called engineering and development. Both are verbs.
  • Decisions are continuous - of course they are. This is the core principle of the microeconomics of all business decision making. But front-end decisions are mandatory. See "Issues on Front-end Decision Making" for some background before believing this statement is credible, and a summary of the concept in the Williams book above.
  • Task arrival times are unpredictable - this is intentional ignorance of stochastic processes. A prediction always includes a confidence and a probability distribution. A prediction is simply saying something about the future. For task arrival times to be unpredictable, those times would have to be completely chaotic, with no underlying process producing them. This would be unheard of in project work. And if it were so, the project would be chaotic and destined to fail starting on day one. Another example of being asleep in the stats class.
  • Our work path is networked and recursive - of course it is. But this statement is counter to the INVEST conditions of agile, which hold in only the simplest projects.
  • Variability is required - all processes are stochastic processes, so this is a tautology. Natural variability is irreducible. Event-based variability is disruptive to productivity, quality, cost and schedule performance, and to the forecasting of when, how much, and what will be delivered in terms of Capabilities. Uncontrolled variability is counter to proper stewardship of your customer's money.
  • The math we need is probability and statistics - yes, and you'd better have been paying attention in the high school statistics class and stop using terms you can't find in the books on your office shelf.
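The point that a prediction always carries a confidence level and a probability distribution can be made concrete with a small Monte Carlo sketch. The project below - ten tasks, each with a triangular duration distribution - is entirely hypothetical; the task count and parameters are illustrative assumptions, not data from any real project:

```python
import random

# Hypothetical project: 10 tasks, each with a triangular duration
# distribution (optimistic, most likely, pessimistic) in days.
# Parameters are illustrative only.
TASKS = [(3, 5, 10)] * 10

def simulate_project(tasks, trials=10_000, seed=7):
    """Monte Carlo: sample every task's duration and sum them per trial."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        totals.append(sum(rng.triangular(low, high, mode)
                          for low, mode, high in tasks))
    return sorted(totals)

totals = simulate_project(TASKS)

# A forecast is a probability statement, not a point value:
p50 = totals[len(totals) // 2]        # median completion time
p80 = totals[int(len(totals) * 0.8)]  # 80th-percentile completion time
print(f"50% confident we finish within {p50:.1f} days")
print(f"80% confident we finish within {p80:.1f} days")
```

The 50% and 80% figures differ, which is the whole point: the same model yields different commitments at different confidence levels, and quoting a single number hides that.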

In the End

For some reason, using charts like this one, re-posting Dilbert cartoons, and making statements using buzzwords - "we're using Real Options and Bayesian statistics to manage our work" is my favorite - seem to become more common the closer we get to the sole-contributor point of view. Along with "look at my 22 samples of self-selected data with a ±70% variance as the way to forecast future performance."

It may be because sole contributors are becoming more prevalent. Sole contributors have certainly changed the world of software development in ways never possible for larger organizations. But without the foundation of good math and good systems engineering - and I don't mean "data center systems engineering," I mean INCOSE Systems Engineering - those sole-contributor points of view simply don't scale.

Always ask when you hear a piece of advice - in what domain have you applied this advice with success? 

Related articles Why is Statistical Thinking Hard? The Misunderstanding of Chaos - Again Deterministic versus Stochastic Trends in Earned Value Management Data Management is Prediction - W. Edwards Deming How To Assure Your Project Will Fail
Categories: Project Management

Kanban And The Theory of Constraints

A dam releasing water is an example of flow through a constraint.

The Theory of Constraints (ToC), as defined by Eli Goldratt in the book The Goal (1984), is an important concept that shapes how we measure and improve processes. ToC takes a systems-thinking approach to understanding and improving processes. A simple explanation of ToC is that the output of any system or process is limited by a very small number of constraints within the process. Kanban is a technique to visualize a process, manage the flow of work through it, and continually tune it to maximize flow, which can help you identify the constraints. There are three critical points from ToC to remember when leveraging Kanban as a process improvement tool.

  1. All systems are limited by a small number of constraints. At any specific point in time, as work items flow through a process, the rate of flow is limited by the most significantly constrained step or steps. For example, consider the TSA screening process in a United States airport. A large number of people stream into the queue; a single person checks their ID and ticket and passes them to another queue where people prepare for scanning; then people and loose belongings are scanned separately, are cleared or rescanned; and finally the screened get to reassemble their belongings (try doing that fast). The constraint in the flow is typically processing people or their belongings through the scanner. Flow can't be increased by adding more people to check IDs because that is not typically the constraint. Each step in a process can act as a constraint, depending on the amount of work the process is asked to perform or on a specific circumstance (the ID checker steps away and is not replaced, shutting down the line), but at any one time the flow of work is generally limited by one or just a few constraints.
  2. There is always at least one constraint in a process. No step is instantly and infinitely scalable. As the amount of work a process is called on to perform ebbs and flows, there will be at least one constraint in the flow. When I was very young my four siblings and I would all get up for school at roughly the same time. My mother required us to brush our teeth just before leaving for school. The goal was to get our teeth cleaned and get out of the bathroom so that we could catch the bus as quickly as possible. We all had a brush and there was plenty of room in the bathroom; however, there was only one tube of toothpaste (a constraint). One process improvement I remember my mother trying was to buy more tubes of toothpaste, which caused a different problem to appear when we began arguing over whose tube was whose (another constraint). While flow was increased, a new constraint emerged. We never found the perfect process, although we rarely missed the bus.
  3. Flow can only be increased by increasing the flow through a constraint. Consider drinking a milkshake through a straw. In order to increase the amount of liquid that reaches our mouth we need to either suck on the straw harder (which will only work until the straw collapses), change the process, or increase the capacity of the straw. In the short run, sucking harder might get a bit more milkshake through the straw, but if done for any length of time the additional pressure will damage the "system." In the long run the only means to increase flow is either to change the size of the straw or to change the process by drinking directly from the glass. In all cases, to get more milkshake into our mouth we need to make a change so that more fluid gets through the constraint in the process.

The Theory of Constraints provides a tool to think about the flow of work from the point of view of the constraints within the overall process (systems thinking). In most processes, just pushing harder doesn't increase output beyond some very limited, short-term improvement. In order to increase the long-term flow of work through a process we need to identify and remove the constraints that limit the flow of value.
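The behavior described above can be sketched as a toy pipeline model. The step names and per-minute rates below are hypothetical, loosely modeled on the airport-screening example; the point is only that raising capacity anywhere except at the constraint leaves throughput unchanged:

```python
# Toy serial pipeline: each step processes a fixed number of items per
# minute. Step names and rates are hypothetical.
def throughput(step_rates):
    """Steady-state flow through a serial pipeline equals the rate of
    its slowest step -- the constraint."""
    return min(step_rates)

checkpoint = {"check_id": 12, "prepare": 9, "scan": 4}  # items/minute

print("Flow:", throughput(checkpoint.values()), "items/minute")  # limited by "scan"

# Adding capacity anywhere except the constraint does not raise flow:
faster_id_check = {**checkpoint, "check_id": 20}
assert throughput(faster_id_check.values()) == throughput(checkpoint.values())

# Elevating the constraint is the only change that raises flow:
second_scanner = {**checkpoint, "scan": 8}
print("Flow after elevating the constraint:",
      throughput(second_scanner.values()), "items/minute")
```

Note that once the scanner is elevated to 8 items/minute, "prepare" at 9 becomes the next-closest constraint: relieving one constraint always exposes another, which is why ToC improvement is iterative.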


Categories: Process Management

Neo4j: COLLECTing multiple values (Too many parameters for function ‘collect’)

Mark Needham - Fri, 09/26/2014 - 21:46

One of my favourite functions in Neo4j’s cypher query language is COLLECT which allows us to group items into an array for later consumption.

However, I’ve noticed that people sometimes have trouble working out how to collect multiple items with COLLECT and struggle to find a way to do so.

Consider the following data set:

create (p:Person {name: "Mark"})
create (e1:Event {name: "Event1", timestamp: 1234})
create (e2:Event {name: "Event2", timestamp: 4567})
 
create (p)-[:EVENT]->(e1)
create (p)-[:EVENT]->(e2)

If we wanted to return each person along with a collection of the event names they’d participated in we could write the following:

$ MATCH (p:Person)-[:EVENT]->(e)
> RETURN p, COLLECT(e.name);
+--------------------------------------------+
| p                    | COLLECT(e.name)     |
+--------------------------------------------+
| Node[0]{name:"Mark"} | ["Event1","Event2"] |
+--------------------------------------------+
1 row

That works nicely, but what about if we want to collect the event name and the timestamp but don’t want to return the entire event node?

An approach I’ve seen a few people try during workshops is the following:

MATCH (p:Person)-[:EVENT]->(e)
RETURN p, COLLECT(e.name, e.timestamp)

Unfortunately this doesn’t compile:

SyntaxException: Too many parameters for function 'collect' (line 2, column 11)
"RETURN p, COLLECT(e.name, e.timestamp)"
           ^

As the error message suggests, the COLLECT function only takes one argument so we need to find another way to solve our problem.

One way is to put the two values into a literal array which will result in an array of arrays as our return result:

$ MATCH (p:Person)-[:EVENT]->(e)
> RETURN p, COLLECT([e.name, e.timestamp]);
+----------------------------------------------------------+
| p                    | COLLECT([e.name, e.timestamp])    |
+----------------------------------------------------------+
| Node[0]{name:"Mark"} | [["Event1",1234],["Event2",4567]] |
+----------------------------------------------------------+
1 row

The annoying thing about this approach is that as you add more items you’ll forget in which position you’ve put each bit of data so I think a preferable approach is to collect a map of items instead:

$ MATCH (p:Person)-[:EVENT]->(e)
> RETURN p, COLLECT({eventName: e.name, eventTimestamp: e.timestamp});
+--------------------------------------------------------------------------------------------------------------------------+
| p                    | COLLECT({eventName: e.name, eventTimestamp: e.timestamp})                                         |
+--------------------------------------------------------------------------------------------------------------------------+
| Node[0]{name:"Mark"} | [{eventName -> "Event1", eventTimestamp -> 1234},{eventName -> "Event2", eventTimestamp -> 4567}] |
+--------------------------------------------------------------------------------------------------------------------------+
1 row

During the Clojure Neo4j Hackathon that we ran earlier this week this proved to be a particularly pleasing approach as we could easily destructure the collection of maps in our Clojure code.

Categories: Programming

Stuff The Internet Says On Scalability For September 26th, 2014

Hey, it's HighScalability time:


With tensegrity landing balls we'll be the coolest aliens to ever land on Mars.
  • 6-8Tbps:  Apple’s live video stream; $65B: crowdfunding's contribution to the global economy
  • Quotable Quotes:
    • @bodil: I asked @richhickey and he said "a transducer is just a pre-fused Kleisli arrows in the list monad." #strangeloop
    • @lusis: If you couldn’t handle runit maybe you shouldn’t be f*cking with systemd. You’ll shoot your g*ddamn foot off.
    • Rob Neely: Programming model stability + Regular advances in realized performance = scientific discovery through computation
    • @BenedictEvans: Maybe 5bn PCs have been sold so far. And 17bn mobile phones.
    • @xaprb: "There's no word for the opposite of synergy" @jasonh at #surgecon

  • The SSD Endurance Experiment. The good news: You don't have to worry about writing a lot of data to SSDs anymore. The bad news: When your SSD does die, your data may not be safe. Good discussion on Hacker News.

  • Don't have a lot of money? Don't worry. Being cheap can actually create cool: Teleportation was used in Star Trek because the budget couldn't afford expensive shots of spaceships landing on different planets.

  • Not so crazy after all? Google’s Internet “Loon” Balloons Will Ring the Globe within a Year

  • Before cloud and after cloud as told through a car crash

  • Cluster around dear readers, videos from MesosCon 2014 are now available.

  • From Backbone To React: Our Experience Scaling a Web Application. This seems a lot like the approach Facebook uses in their Android apps. As things get complex, move the logic to a top-level centralized manager and then distribute changes down to components that are not incrementally changed but replaced entirely.

  • Deciding between GAE and EC2? This might help: Running a website: Google App Engine vs. Amazon EC2. AWS is hard to set up. Both give you a lot for free. GAE is not customizable; on AWS you can use whatever languages and software you want. With GAE, once written, your software will scale. If you have a sysadmin or your project requires specific software, go with AWS. If you are small or have a static site, go with GAE. 

  • Mean vs Lamp – How Do They Stack Up? MEAN = MongoDB, Express.js, Angular.js, Node.js. Why be MEAN: the three most significant advantages are a single language from top to bottom, flexibility in deployment platform, and enhanced speed in data retrieval. However, the switch is not without trade-offs; any existing code will either need to be rewritten in JavaScript or integrated into the new stack in a non-obvious manner.  

  • Free the Web: Sometimes, I feel like blaming money. When money comes into play, people start to fear. They fear losing their money, and they fear losing their visitors. And so they focus on making buttons easily clickable (which inevitably narrows down places where they can go), and they focus on making sites that are safe but predictably usable.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Labor Productivity: An Excerpt From The Metrics Minute

Productivity is the number of clothes sewn per hour.

Definition:

Labor productivity measures the efficiency of the transformation of labor into something of higher value. It is the amount of output per unit of input; in manufacturing terms, for example, X widgets for every hour worked. Labor productivity (typically just called productivity) is a fairly simple manufacturing concept that is useful in IT. It is a powerful metric, made even more powerful by its simplicity. At its heart, productivity is a measure of the efficiency of a process (or group of processes). That knowledge can be a tool to target and facilitate change. The problem with using productivity in a software environment is the lack of a universally agreed-upon output unit of measure.

The lack of a universally agreed-upon, tangible unit of output (for example, cars in an automobile factory or steel from a steel mill) means that software organizations often struggle to define and measure productivity because they're forced to use esoteric size measures. IT has gone down three paths to solve this problem. The three basic camps for sizing software are relative measures (e.g. story points), physical measures (e.g. lines of code) and functional measures (e.g. function points). In all cases these measures of software size seek to measure the output of the process and are defined independently of the input (effort or labor cost).

Formula:
The standard formula for labor productivity is:

Productivity = output / input

If you were using lines of code for productivity, the equation would be as follows:

Productivity = Lines of Code / Hours to Code the Lines of Code

Uses:
There are numerous factors that can influence productivity like skills, architecture, tools, time compression, programming language and level of quality. Organizations want to determine the impact of these factors on the development environment.

The measurement of productivity has two macro purposes. The first is to determine efficiency. When productivity is known, a baseline (a line in the sand) can be produced and then compared to external benchmarks. Comparisons between projects can indicate whether one process is better than another. The ability to make a comparison allows you to use efficiency as a tool in a decision-making process. The number and types of decisions that can be made using this tool are bounded only by your imagination and the granularity of the measurement.

The second macro rationale for measuring productivity is as a basis for estimation. In its simplest form, a parametric estimate can be calculated by multiplying size by a productivity rate.
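As a sketch, the parametric estimate described above is a single multiplication or division, depending on how the rate is expressed. The size and rate below are purely illustrative numbers, not benchmarks:

```python
# Minimal parametric estimate. If productivity is output per hour,
# effort = size / productivity; if the rate is hours per unit,
# effort = size * rate. All numbers here are illustrative.
def parametric_estimate(size_units, units_per_hour):
    """Estimate effort hours from a size measure and a productivity rate."""
    return size_units / units_per_hour

# e.g. 250 function points at 2.5 function points per staff-hour
print(parametric_estimate(250, 2.5), "hours")  # 100.0 hours
```

The arithmetic is trivial by design; the hard part, as the Issues section below notes, is obtaining a trustworthy size measure and rate.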

Issues:
1. The lack of a consistent size measure is the biggest barrier for measuring productivity.
2. Poor time accounting runs a close second. Time accounting issues range from misallocation of time to equating billing time with effort time.
3. Productivity is not a single number; it is most accurately described as a curve, which makes it appear complicated.

Variants or Related Measures:
1. Cost per unit of work
2. Delivery rate
3. Velocity (agile)
4. Efficiency

Criticisms:
There are several criticisms of using productivity in the software development and maintenance environment. The most prevalent is the argument that all software projects are different and therefore are better measured by metrics focusing on terminal value rather than by metrics focused on process efficiency (the artisan versus manufacturing discussion). I would suggest that while the results of software projects tend to differ, most of the steps taken are the same, which makes the measure valid - but productivity should never be confused with value.

A second criticism of the use of productivity results from improper deployment. Numerous organizations and consultants promote the use of a single number for productivity. The use of a single number to describe the productivity of the typical IT organization does not match reality at the shop-floor level when the metric is used to make comparisons or for estimation. For example, would you expect a web project to have the same productivity rate as a macro assembler project? Would you expect a small project and a very large project to have the same productivity? In either case the projects would take different steps along their life cycles, therefore we would expect their productivity to differ. I suggest that an organization analyze its data to look for clusters of performance. Typical clusters include client-server projects, technology-specific projects, package implementations and many others. Each will have a statistically different signature. An example of a productivity signature expressed as an equation is shown below:

Labor Productivity=46.719177-(0.0935884*Size)+(0.0001578*((Size-269.857)^2))

(Note: this is an example of a very specialized productivity equation for a set of client-server projects, tailored to design, code and unit testing. The results would not be representative of a typical organization.)
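Evaluating the signature equation directly makes the earlier point concrete: productivity is a curve over project size, not a constant. The sizes sampled below are arbitrary; only the equation itself comes from the text:

```python
def labor_productivity(size):
    """The specialized client-server productivity signature quoted above."""
    return 46.719177 - 0.0935884 * size + 0.0001578 * (size - 269.857) ** 2

# Productivity varies with size -- a curve, not a single number:
for size in (100, 270, 500):
    print(f"size={size:4d}  productivity={labor_productivity(size):5.1f}")
```

Over this range the curve falls steadily, so using one organization-wide rate would overstate productivity for large projects and understate it for small ones.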

A third criticism is that labor productivity is an overly simple metric that does not reflect quality, value or speed. I would suggest that two out of three of these criticisms are correct. Labor productivity does not measure speed (although speed and effort are related) and does not address value (although value and effort may be related). Quality may be a red herring if rework due to defects is incorporated into the productivity equation. In any case productivity should not be evaluated in a vacuum. Measurement programs should incorporate a palette of metrics to develop a holistic picture of a project or organization.


Categories: Process Management

Tell us about your experience building on Google, and raise money for educational organizations!

Google Code Blog - Thu, 09/25/2014 - 22:16
Here at Google, we always put the user first, and for the Developer Platform team, our developers are our users. We want to create the best development platform and provide the support you need to build world-changing apps, but we need to hear from you, our users, on a regular basis so we can see what’s working and what needs to change.

That's why we're launching our developer survey -- we want to hear about how you are using our APIs and platforms, and what your experience is using our developer products and services. We'll use your responses to identify how we can support you better in your development efforts.
Photo Credit: Google I/O 2014

The survey should only take 10 to 15 minutes of your time, and in addition to helping us improve our products, you can also help raise money to educate children around the globe. For every developer who completes the survey, we will donate $10 USD (up to a maximum of $20,000 USD total) to your choice of one of these six education-focused organizations: Khan Academy, World Fund, Donors Choose, Girl Rising, Raspberry Pi, and Agastya.

The survey is live now and will be live until 11:59PM Pacific Time on October 15, 2014. We are excited to hear what you have to tell us!

Posted by Neel Kshetramade, Program Manager, Developer Platform
Categories: Programming

Quote of the Day

Herding Cats - Glen Alleman - Thu, 09/25/2014 - 17:01

Most problems are self imposed and usually can be traced to lack of discipline. The foremost attribute of successful programs is discipline: Discipline to evolve and proclaim realistic cost goals; discipline to forego appealing but nonessential features; discipline to minimize engineering changes; discipline to do thorough failure analysis; discipline to abide by test protocols; and discipline to persevere in the face of problems that will occur in even the best-managed programs - Norm R. Augustine

Related articles Agile Requires Discipline, In Fact Successful Projects Require Discipline
Categories: Project Management

Empower Business Analysts to Turn Them Into Product Managers

Software Requirements Blog - Seilevel.com - Thu, 09/25/2014 - 17:00
The Business Analyst role in most organizations I have worked with is passive and reactive by design.  Analysts are given a feature description and tasked with defining the requirements for the same.  The analyst then goes off to perform a set of activities and tasks like elicitation, model creation and requirements definition.  Eventually, they create […]
Categories: Requirements

Software Development Conferences Forecast September 2014

From the Editor of Methods & Tools - Thu, 09/25/2014 - 09:56
Here is a list of software development related conferences and events on Agile ( Scrum, Lean, Kanban) software testing and software quality, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & Tools software development magazine. Future of Web Apps, September 29-October 1 2014, London, UK STARWEST, October 12-17 2014, Anaheim, USA Register and save up with code SW14MT JAX London, October 13-15 2014, London, UK Pacific Northwest Software Quality Conference, October 20-22 2014, Portland, USA Agile ...

I’ll Speak for Free If You Write a Review

NOOP.NL - Jurgen Appelo - Thu, 09/25/2014 - 09:17

For book authors, Amazon reviews are very important. Reviews sell books. The difference between 10 and 100 book reviews can mean the difference between obscurity versus visibility. 250 reviews? That’s celebrity status!

The post I’ll Speak for Free If You Write a Review appeared first on NOOP.NL.

Categories: Project Management