
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Rely on Specialists, but Sparingly

Mike Cohn's Blog - Thu, 08/14/2014 - 15:48

Last week, I talked about the concept of equality on an agile team. I mentioned that one meaning of equality could be that all team members do the same work, so that everyone on the team becomes a generalist.

A common misconception is that everyone on a Scrum team must be a generalist—equally good at all technologies and disciplines, rather than a specialist in one. This is simply not true.

What I find surprising about this myth is that every sandwich shop in the world has figured out how to handle specialists, yet we, in the software industry, still struggle with the question.

My favorite sandwich shop is the Beach Hut Deli in Folsom, California. I’ve spent enough lunches there to notice that they have three types of employees: order takers, sandwich makers, and floaters.

The order takers work the counter, writing each sandwich order on a slip of paper that is passed back to the sandwich makers. Sandwich makers work behind the order takers and prepare each sandwich as it’s ordered.

Order takers and sandwich makers are the specialists of the deli world. Floaters are generalists—able to do both jobs, although perhaps not as well as the specialists. It’s not that their sandwiches taste worse, but maybe a floater is a little slower making them.

When I did my obligatory teenage stint at a fast food restaurant, I was a floater. I wasn’t as quick at wrapping burritos and making tacos as Mark, one of the cooks. And whenever the cash register needed a new roll of paper, I had to yell for my manager, Nikki, because I could never remember how to do it. But, unlike Mark and Nikki, I could do both jobs.

I suspect that just about every sandwich shop in the world has some specialists—people who only cook or who only work the counter. But these businesses have also learned the value of having generalists.

Having some generalists working during the lunch rush helps the sandwich shop balance the need to have some people writing orders and some people making the sandwiches.

What this means for Scrum teams is that yes, we should always attempt to have some generalists around. It is the generalists who enable specialists to specialize.

There will always be teams who need the hard-core device driver programmer, the C++ programmer well-versed in Windows internals, the artificial intelligence programmer, the performance test engineer, the bioinformaticist, the artist, and so on.

But, every time a specialist is added to a team, think of it as equivalent to adding a pure sandwich maker to your deli. Put too many specialists on your team, and you increase the likelihood that someone will spend perhaps too much time waiting for work to be handed off.

Note: A portion of this post is an excerpt from Mike Cohn’s book, Succeeding with Agile.

How Often Should I Blog

Making the Complex Simple - John Sonmez - Thu, 08/14/2014 - 15:00

In this video I talk about how often you should blog and why blogging more often is better as long as you can maintain a consistent level of quality.

The post How Often Should I Blog appeared first on Simple Programmer.

Categories: Programming

Where does R Studio install packages/libraries?

Mark Needham - Thu, 08/14/2014 - 11:24

As a newbie to R, I wanted to look at the source code of some of the libraries/packages I'd installed via R Studio, which I initially struggled to do because I wasn't sure where the packages had been installed.

I eventually came across a StackOverflow post describing the .libPaths function, which tells us where that is:

> .libPaths()
[1] "/Library/Frameworks/R.framework/Versions/3.1/Resources/library"

If we want to see which libraries are installed we can use the list.files function:

> list.files("/Library/Frameworks/R.framework/Versions/3.1/Resources/library")
 [1] "alr3"         "assertthat"   "base"         "bitops"       "boot"         "brew"        
 [7] "car"          "class"        "cluster"      "codetools"    "colorspace"   "compiler"    
[13] "data.table"   "datasets"     "devtools"     "dichromat"    "digest"       "dplyr"       
[19] "evaluate"     "foreign"      "formatR"      "Formula"      "gclus"        "ggplot2"     
[25] "graphics"     "grDevices"    "grid"         "gridExtra"    "gtable"       "hflights"    
[31] "highr"        "Hmisc"        "httr"         "KernSmooth"   "knitr"        "labeling"    
[37] "Lahman"       "lattice"      "latticeExtra" "magrittr"     "manipulate"   "markdown"    
[43] "MASS"         "Matrix"       "memoise"      "methods"      "mgcv"         "mime"        
[49] "munsell"      "nlme"         "nnet"         "openintro"    "parallel"     "plotrix"     
[55] "plyr"         "proto"        "RColorBrewer" "Rcpp"         "RCurl"        "reshape2"    
[61] "RJSONIO"      "RNeo4j"       "Rook"         "rpart"        "rstudio"      "scales"      
[67] "seriation"    "spatial"      "splines"      "stats"        "stats4"       "stringr"     
[73] "survival"     "swirl"        "tcltk"        "testthat"     "tools"        "translations"
[79] "TSP"          "utils"        "whisker"      "xts"          "yaml"         "zoo"

We can then drill into those directories to find the appropriate file – in this case I wanted to look at one of the Rook examples:

$ cat /Library/Frameworks/R.framework/Versions/3.1/Resources/library/Rook/exampleApps/helloworld.R
app <- function(env){
    req <- Rook::Request$new(env)
    res <- Rook::Response$new()
    friend <- 'World'
    if (!is.null(req$GET()[['friend']]))
        friend <- req$GET()[['friend']]
    res$write(paste('<h1>Hello',friend,'</h1>\n'))
    res$write('What is your name?\n')
    res$write('<form method="GET">\n')
    res$write('<input type="text" name="friend">\n')
    res$write('<input type="submit" name="Submit">\n</form>\n<br>')
    res$finish()
}
Categories: Programming

Running TAP

Phil Trelford's Array - Thu, 08/14/2014 - 08:23

The Test Anything Protocol (TAP) is a text-based protocol for test results:

 1..4
 ok 1 - Input file opened
 not ok 2 - First line of the input valid
 ok 3 - Read the rest of the file
 not ok 4 - Summarized correctly # TODO Not written yet

 

I think the idea is a good one: a simple, cross-platform, human-readable standard for formatting test results. There are TAP producers and consumers for Perl, Java, JavaScript, etc., allowing you to join up tests for cross-platform projects.
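The protocol is simple enough that a minimal producer fits in a few lines. Here is a hedged sketch in Python (an illustration of the plan/ok/not-ok format above, not one of the existing TAP libraries):

```python
def tap_report(results):
    """Emit TAP output for a list of (description, passed) pairs."""
    lines = ["1..%d" % len(results)]  # the TAP "plan" line
    for number, (description, passed) in enumerate(results, start=1):
        status = "ok" if passed else "not ok"
        lines.append("%s %d - %s" % (status, number, description))
    return "\n".join(lines)

print(tap_report([
    ("Input file opened", True),
    ("First line of the input valid", False),
]))
# 1..2
# ok 1 - Input file opened
# not ok 2 - First line of the input valid
```

Any TAP consumer can then parse this output, regardless of which language produced it.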

NUnit runner

Over the last week or so I’ve created a TAP runner F# script for NUnit test methods. It supports the majority of NUnit’s attributes, including ExpectedException, Timeout and test generation with TestCase, TestCaseSource, Values, etc.

The runner can be used in a console app to produce TAP output to the console or directly in F# interactive for running tests embedded in a script.

TAP run

Tests can be organized in classes:

type TAPExample () =
    [<Test>] member __.``input file opened`` () = Assert.Pass()
    [<Test>] member __.``First line of the input valid`` () = Assert.Fail()
    [<Test>] member __.``Read the rest of the file`` () = Assert.Pass()
    [<Test>] member __.``Summarized correctly`` () = Assert.Fail()

Tap.Run typeof<TAPExample>

or in modules:

let [<Test>] ``input file opened`` () = Assert.Pass()
let [<Test>] ``First line of the input valid`` () = Assert.Fail()
let [<Test>] ``Read the rest of the file`` () = Assert.Pass()
let [<Test>] ``Summarized correctly`` () = Assert.Fail()
type Marker = interface end
Tap.Run typeof<Marker>.DeclaringType

Note: the marker interface is used to reflect the module’s type.

Console output

In the console test cases are marked in red or green:



Debugging

If you create an F# Tutorial project, you get both an F# script file and a console application that runs it, allowing you to set breakpoints in your script with the Visual Studio debugger.



Prototyping

When I’m prototyping a new feature, I typically use the F# interactive environment for quick feedback and exploratory testing. The TAP runner lets you create and run NUnit-formatted tests directly in the script file before later promoting them to a full-fat project for use in a continuous build environment.

F# Scripting

Interested in learning more about F# scripting? Pop along to Phil Nash’s talk at the F#unctional Londoners tonight.

Categories: Programming

Measurement Proliferation: Guarding the Peace or Mutually Assured Destruction?

Small, Medium and Large or Low, Average and High?

Lots of ways to measure

Measurement proliferation is when organizations decide that everything can and should be measured, producing a rapid increase in measures and metrics. There are at least two measurement proliferation scenarios, and both have as great a chance of destroying your measurement program as helping it. The two scenarios can be summarized as proliferation of breadth (measuring everything) and proliferation of depth (measuring the same thing many ways).

There are many items that are important to measure, and it’s difficult to restrain yourself once you’ve started. Because it seems important to measure many activities within an IT organization, many measurement teams conclude that measuring everything is important. Unfortunately, measuring what is really important is rarely easy or straightforward. When organizations slip into “measure everything” mode, what gets measured is often not related to the organization’s target behavior (the real needs). When measures are not related to the target behavior, it is easy to breed unexpected behaviors (not indeterminate or unpredictable, just not what was expected). For example, one organization determined that personal capability was a key metric: more capability would translate into higher productivity and quality. During the research into the topic, it was determined that capability was too difficult or “touchy-feely” to measure directly. The organization decided that counting requirements was a rough proxy for systems capability, and that if systems capability went up, it must be a reflection of personal capability. So, of course, they measured requirements. One unanticipated behavior was that the requirements became more granular (actually more consistent), which created the appearance of increased capability that could not be sustained (or easily approved) after the initial baseline of the measure.

The explosion of pre-defined measures drives the second proliferation scenario: having too many measures for the same concept. Capers Jones mentioned a number of examples in my interview with him for SPaMCAST. Capers caught my imagination with the statement that many functional metrics are currently in use, ranging from IFPUG function points to COSMIC, with use case points, NESMA function points and others in between. This is in addition to counting lines of code, objects and the like. The fracturing of the world of functional metrics has occurred for many reasons, ranging from a natural maturation of the measurement category to the explosion of information sharing on the web. Regardless of the reason for the proliferation, using multiple measures for the same concept just because you can, can have unintended consequences. First, having multiple measures for the same concept can focus attention on it, making the concept seem more important than it is. Second, having multiple measures may send a message that no one is quite sure how to measure the concept, which can confuse the casual observer. Generally there is no reason to use multiple methods to measure the same concept within any organization. Even if each measure were understood, proliferating multiple measures of the same concept will waste time and money. An organization I recently observed had implemented IFPUG Function Points, COSMIC Function Points, Use Case Points and Story Points to measure software size. The organization had spent the time and effort to find a conversion mechanism so that the measures could be combined for reporting. In this case the proliferation of metrics for the same concept had become an ‘effort eater.’ Unfortunately, it is not uncommon to see organizations trying to compare the productivity of projects based on very different yardsticks rather than adopting a single measure for size. The value of measurement tends to get lost when there is no common basis for discussion. A single measure provides that common basis.

Both the proliferation of breadth and the proliferation of depth have an upside: everybody gets to collect, report and use their favorite measure. They also have a downside, which sounds very similar: everybody gets to collect, report and use their favorite measures. Extra choices come at a cost: the cost of effort, communication and compatibility. The selection of measures and metrics must be approached with the end in mind – your organization’s business goals. Allowing the proliferation of measures and metrics, whether in depth or breadth, must be approached with great thought, or it will cost you dearly in information and credibility.


Categories: Process Management

Seven Principles of Agile Software Development in the US DOD

Herding Cats - Glen Alleman - Wed, 08/13/2014 - 23:36

The Software Engineering Institute is an FFRDC (Federally Funded Research and Development Center). Another FFRDC, IDA, is a client.

SEI is focused on helping the DOD improve the development of software.

Here are podcasts on the principles of agile development of software in the DOD:

  • First Principle - Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  • Second Principle - Welcome changing requirements, even late in development.
  • Third Principle - Deliver working software frequently.
  • Fourth Principle - Business people and developers must work together daily throughout the project.
  • Fifth Principle - Build projects around motivated individuals. Give them the environment and support they need and trust them to get the job done.
  • Sixth Principle - The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  • Seventh Principle - Working software is the primary measure of progress.
Categories: Project Management

Testing on the Toilet: Web Testing Made Easier: Debug IDs

Google Testing Blog - Wed, 08/13/2014 - 18:58
by Ruslan Khamitov 

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

Adding ID attributes to elements can make it much easier to write tests that interact with the DOM (e.g., WebDriver tests). Consider the following DOM with two buttons that differ only by inner text:
<div class="button">Save</div>
<div class="button">Edit</div>

How would you tell WebDriver to interact with the “Save” button in this case? You have several options. One option is to interact with the button using a CSS selector:
div.button

However, this approach is not sufficient to identify a particular button, and there is no mechanism to filter by text in CSS. Another option would be to write an XPath, which is generally fragile and discouraged:
//div[@class='button' and text()='Save']

Your best option is to add unique hierarchical IDs where each widget is passed a base ID that it prepends to the ID of each of its children. The IDs for each button will be:
contact-form.save-button
contact-form.edit-button

In GWT you can accomplish this by overriding onEnsureDebugId() on your widgets. Doing so allows you to create custom logic for applying debug IDs to the sub-elements that make up a custom widget:
@Override
protected void onEnsureDebugId(String baseId) {
  super.onEnsureDebugId(baseId);
  saveButton.ensureDebugId(baseId + ".save-button");
  editButton.ensureDebugId(baseId + ".edit-button");
}
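The payoff on the test side is a one-line, unambiguous lookup. A sketch in Python using the stdlib XML parser as a stand-in for WebDriver (the DOM snippet and the stdlib approach are illustrative assumptions, not the article's code):

```python
import xml.etree.ElementTree as ET

# Hypothetical markup after the widget has applied hierarchical debug IDs.
html = """
<form id="contact-form">
  <div class="button" id="contact-form.save-button">Save</div>
  <div class="button" id="contact-form.edit-button">Edit</div>
</form>
"""

root = ET.fromstring(html)
# With unique IDs there is no need to filter by class or inner text.
save = root.find(".//*[@id='contact-form.save-button']")
print(save.text)  # Save
```

In a real WebDriver test the same lookup would be a find-by-id call, which is both faster and less fragile than an XPath over class and text.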

Consider another example. Let’s set IDs for repeated UI elements in Angular using ng-repeat. Setting an index can help differentiate between repeated instances of each element:
<tr id="feedback-{{$index}}" class="feedback" ng-repeat="feedback in ctrl.feedbacks" >

In GWT you can do this with ensureDebugId(). Let’s set an ID for each of the table cells:
@UiField FlexTable table;
UIObject.ensureDebugId(table.getCellFormatter().getElement(rowIndex, columnIndex),
baseID + colIndex + "-" + rowIndex);

Take-away: Debug IDs are easy to set and make a huge difference for testing. Please add them early.

Categories: Testing & QA

The Ruckus About DDSTOP Post

Herding Cats - Glen Alleman - Wed, 08/13/2014 - 18:03

A while back I posted about Don't Do Stupid Things On Purpose (DDSTOP). This caused some comments about it being a very Theory-X management approach. The notion of Don't Do Stupid Things On Purpose comes from observing people DSTOP with other people's money.

Many might say people have to experiment, try things out, test the waters. Yes, this is true, but who pays for that testing, trying and experimenting? Is there budget for that in the project plan? It may be that this is actually an experimental project, and the whole point of the project is to do stupid things on purpose to see what happens.

I wonder: if the experimental work being paid for by the customer were worded like that, would it sound so clever?

Here's some clarity and a few examples of DSTOP in recent years.

  • Install CA's Clarity (a very expensive project management and project accounting system).
    • Apply Earned Value to a large ($250M) Enterprise IT project. The project gets in trouble; we come on board to perform triage and discover that the Program Management Office has been copying ACWP - the Actual Cost of Work Performed - to BCWP - the Budgeted Cost of Work Performed, the Earned Value - then reporting to senior management that everything is going fine.
    • Instead of actually measuring physical percent complete to compute BCWP, they simply copied the actual cost of the work, hiding the actual progress.
    • That's DSTOP.
  • Install TTPro as a defect tracking and help desk ticketing system.
    • Communicate verbally most of the defect repairs done in development.
    • Lose the traceability between the defect and the fix, so the QA staff can't trace the defects back to their test coverage suite.
    • That's DSTOP.
  • Plan the development of several hundred million dollars of flight avionics upgrades for an aircraft that has been upgraded in the past.
    • Build the Integrated Master Schedule around the past performance activities - good idea.
    • Resource load the IMS from past performance - good idea.
    • Discover that the cost and schedule don't fit inside the stated needs of the customer, so dial the work durations and assigned labor loads to fit the price to win for the RFP.
    • Get awarded the contract, go over budget, show up late after 12 months of work, and get the contract canceled.
    • That's DSTOP.

So is it Theory X to ask those working on the project and managing the project to stop and think about what they are doing, what decisions are being made - and assess the impact of those decisions? Or should those entrusted with the customers money just go exploring, trying out ideas with the hope that something innovative will come out of it?

When we're assumed to be the stewards of other people's money, they should expect us to behave as those stewards. This means we make decisions about the use of that time and treasure in ways that are informed by our experience, skills, and governance model. Doing otherwise means we're not the right people to be spending our customer's money, or our customer has a lot of money to spend on us to become the right people they should have hired to spend their money.

Categories: Project Management

Hamsterdb: An Analytical Embedded Key-value Store

 

In this post, I’d like to introduce you to hamsterdb, an Apache 2-licensed, embedded analytical key-value database library similar to Google's leveldb and Oracle's BerkeleyDB.

hamsterdb is not a new contender in this niche. In fact, it has been around for over 9 years. In that time it has grown dramatically, and the focus has shifted from a pure key-value store to an analytical database offering functionality similar to a column store database.

hamsterdb is single-threaded and non-distributed, and users usually link it directly into their applications. hamsterdb offers a unique (at least, as far as I know) implementation of Transactions, as well as other unique features similar to column store databases, making it a natural fit for analytical workloads. It can be used natively from C/C++ and has bindings for Erlang, Python, Java, .NET, and even Ada. It is used in embedded devices and on-premise applications with millions of deployments, as well as serving in cloud instances for caching and indexing.

hamsterdb has a unique feature in the key-value niche: it understands schema information. While most databases do not know or care what kind of keys are inserted, hamsterdb supports key types for binary keys...
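From application code, the embedded key-value model that hamsterdb shares with leveldb and BerkeleyDB looks roughly like this. Sketched here with Python's stdlib dbm module as a stand-in (this is not hamsterdb's actual API, just the general shape of an embedded store):

```python
import dbm
import os
import tempfile

# An embedded key-value store lives in-process: open a file and you get
# a persistent byte-keyed map -- no server, no network round trips.
path = os.path.join(tempfile.mkdtemp(), "example.db")

with dbm.open(path, "c") as db:  # "c" creates the database if needed
    db[b"user:1"] = b"alice"
    db[b"user:2"] = b"bob"

with dbm.open(path, "r") as db:  # reopen read-only, data persisted
    print(db[b"user:1"].decode())  # alice
```

The libraries differ in what they add on top of this core: hamsterdb's typed keys and analytical functions, leveldb's sorted iteration, BerkeleyDB's transactions and replication.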

Categories: Architecture

People Are Not Resources

My manager reviewed the org chart along with the budget. “I need to cut the budget. Which resources can we cut?”

“Well, I don’t think we can cut software licenses,” I was reviewing my copy of the budget. “I don’t understand this overhead item here,” I pointed to a particular line item.

“No,” he said. “I’m talking about people. Which people can we lay off? We need to cut expenses.”

“People aren’t resources! People finish work. If you don’t want us to finish projects, let’s decide which projects not to do. Then we can re-allocate people, if we want. But we don’t start with people. That’s crazy.” I was vehement.

My manager looked at me as if I’d grown three heads. “I’ll start wherever I want,” he said. He looked unhappy.

“What is the target you need to accomplish? Maybe we can ship something earlier, and bring in revenue, instead of laying people off? You know, bring up the top line, not decrease the bottom line?”

Now he looked at me as if I had four heads.

“Just tell me who to cut. We have too many resources.”

When managers think of people as resources, they stop thinking. I’m convinced of this. My manager was under pressure from his management to reduce his budget. In the same way that technical people under pressure to meet a date stop thinking, managers under pressure stop thinking. Anyone under pressure stops thinking. We react. We can’t consider options. That’s because we are so very human.

People are resourceful. But we, the people, are not resources. We are not the same as desks, licenses, infrastructure, and other goods that people need to finish their work.

We need to change the language in our organizations. We need to talk about people as people, not resources. And that is the topic of this month’s management myth: Management Myth 32: I Can Treat People as Interchangeable Resources.

Let’s change the language in our organizations. Let’s stop talking about people as “resources” and start talking about people as people. We might still need layoffs. But, maybe we can handle them with humanity. Maybe we can think of the work strategically.

And, maybe, just maybe, we can think of the real resources in the organization. You know, the ones we buy with the capital equipment budget or expense budget, not operating budget. The desks, the cables, the computers. Those resources. The ones we have to depreciate. Those are resources. Not people.

People become more valuable over time. Show me a desk that does that. Ha!

Go read Management Myth 32: I Can Treat People as Interchangeable Resources.

Categories: Project Management

Success Articles for Work and Life

"Success consists of going from failure to failure without loss of enthusiasm." -- Winston Churchill

I now have more than 300 articles on the topic of Success to help you get your game on in work and life:

Success Articles

That’s a whole lot of success strategies and insights right at your fingertips. (And it includes the genius from a wide variety of sources including  Scott Adams, Tony Robbins, Bruce Lee, Zig Ziglar, and more.)

Success is a hot topic. 

Success has always been a hot topic, but it seems to be growing in popularity.  I suspect it’s because so many people are being tested in so many new ways and competition is fierce.

But What is Success? (I tried to answer that using Zig Ziglar’s frame for success.)

For another perspective, see Success Defined (It includes definitions of success from Stephen Covey and John Maxwell.)

At the end of the day, the most important definition of success is the one that you apply to you and your life.

People can make or break themselves based on how they define success for their life.

Some people define success as another day above ground, but others have a very high and very strict bar that only a few mere mortals can ever achieve.

That said, everybody is looking for an edge.   And, I think our best edge is always our inner edge.

As one of my mentors put it, “the fastest thing you can change in any situation is yourself.”  And as we all know, nature favors the flexible.  Our ability to adapt and respond to our changing environment is the backbone of success.   Otherwise, success is fleeting, and it has a funny way of eluding or evading us.

I picked a few of my favorite articles on success.  These ones are a little different by design.  Here they are:

Scott Adam’s (Dilbert) Success Formula

It’s the Pebble in Your Shoe

The Wolves Within

Personal Leadership Helps Renew You

The Power of Personal Leadership

Tony Robbins on the 7 Traits of Success

The Way of Success

The future is definitely uncertain.  I’m certain of that.   But I’m also certain that life’s better with skill and that the right success strategies under your belt can make or break you in work and life.

And the good news for us is that success leaves clues.

So make like a student and study.

Categories: Architecture, Programming

The Measurement Value Equation


Producing now, so we can consume later.

A food chain is the sequence between production and consumption. In software development, the development team generates the functionality that is delivered to a user. In the measurement food chain, the measures that the team generates and collects mark the beginning of a process that is consumed to create analysis and reports. As the team develops, enhances or maintains functionality, it consumes raw materials, such as effort, ideas and time, to produce an output to be measured. Managers and administrators monitor the consumption of inputs, the process of transformation and the outputs. Each of these components can be analyzed and measured, transformed into a number that equates to value or cost. The comparison of value to cost can be evaluated against the trials and tribulations of production, adding a significant component to the overall value equation.

The question that begs to be asked is: who needs this data? Who can and does leverage the output of measurement? Does the audience for measurement include the project and support personnel who create and maintain the functionality? Or is measurement merely a tool to control the work and the workers? To maximize the value of your metrics program, all constituencies must derive value: development teams, administrators, project managers and organizational managers. Design measures with this end in mind.


Categories: Process Management

Welcome to The Situation Room

Software Requirements Blog - Seilevel.com - Tue, 08/12/2014 - 17:00
The Seilevel World Headquarters in Austin has workstations scattered throughout about 10 different offices (rooms, not separate locations) and about 7 conference rooms with marker boards, large tables, conference phones, and projectors. We are a consulting firm; so we aren’t all always in our office. Those of us who travel to client sites don’t have […]
Categories: Requirements

Agile Bootcamp Talk Posted on Slideshare

I posted my slides for my Agile 2014 talk, Agile Projects, Program & Portfolio Management: No Air Quotes Required on Slideshare. It’s a bootcamp talk, so the majority of the talk is making sure that people understand the basics about projects. Walk before you run. That part.

However, you can take projects and “scale” them to programs. I wish people wouldn’t use that terminology. Program management isn’t exactly scaling. Program management is when the strategic endeavor of the program encompasses each of the projects underneath.

If you have questions about the presentation, let me know. Happy to answer questions.

Categories: Project Management

Hierarchies remove scaling properties in Agile Software projects

Software Development Today - Vasco Duarte - Tue, 08/12/2014 - 05:00

There is a lot of interest in scaling Agile Software Development. And that is a good thing. Software projects of all sizes benefit from what we have learned over the years about Agile Software Development.

Many frameworks have been developed to help us implement Agile at scale. We have: SAFe, DAD, Large-scale Scrum, etc. I am also aware of other models for scaled Agile development in specific industries, and those efforts go beyond what the frameworks above discuss or tackle.

However, scaling as a problem is neither a software nor an Agile topic. Humanity has been scaling its activities for millennia, and very successfully at that. The Pyramids in Egypt, the Panama Canal in central America, the immense railways all over the world, the Airbus A380, etc.

All of these scaling efforts share some commonalities with software and among each other, but they are also very different. I'd like to focus on one particular aspect of scaling that has a huge impact on software development: communication.

The key to scaling software development

We've all heard countless accounts of projects gone wrong because of a lack of communication (or inadequate, or just plain bad communication). And typically, these problems grow with the size of the team. Communication is a major challenge in scaling any human endeavor, and especially one - like software - that so heavily depends on successful communication patterns.
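A common back-of-the-envelope model for why communication problems grow with team size is Brooks' pairwise-channel count (my addition here, not a formula from the post): with n people there are n(n-1)/2 possible communication channels, which grows quadratically.

```python
def channels(n):
    """Pairwise communication channels in a team of n people: n*(n-1)/2."""
    return n * (n - 1) // 2

for size in (5, 10, 50):
    print(size, channels(size))
# 5 people  -> 10 channels
# 10 people -> 45 channels
# 50 people -> 1225 channels
```

Doubling a team far more than doubles the number of conversations that can go wrong, which is exactly the pressure that pushes organizations toward hierarchies, and toward the trade-offs discussed below.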

In my own work in scaling software development I've focused on communication networks. In fact, I believe that scaling software development is first an exercise in understanding communication networks. Without understanding the existing and necessary communication networks in large projects, we will not be able to help those projects adapt. In many projects a different approach is used: hierarchical management with strict (and non-adaptable) communication paths. This approach effectively reduces the adaptability and resilience of software projects.

Scaling software development is first and foremost an exercise in understanding communication networks.

Even if hierarchies can successfully scale projects where communication needs are known in advance (like building a railway network for example), hierarchies are very ineffective at handling adaptive communication needs. Hierarchies slow communication down to a manageable speed (manageable for those at the top), and reduce the amount of information transferred upwards (managers filter what is important - according to their own view).

In a software project those properties of hierarchy-bound communication networks restrict valuable information from reaching stakeholders. As a consequence one can say that hierarchies remove scaling properties from software development. Hierarchical communication networks restrict information reach without concern for those who would benefit from that information because the goal is to "streamline" communication so that it adheres to the hierarchy.

In software development, one must constantly map, develop and re-invent the communication networks to allow for the right information to reach the relevant stakeholders at all times. Hence, the role of project management in scaled agile projects is to curate communication networks: map, intervene, document, and experiment with communication networks by involving the stakeholders.

Scaling agile software development is - in its essential form - a work of developing and evolving communication networks.

A special thank you note to Esko Kilpi and Clay Shirky for the inspiration for this post through their writings on organizational patterns and value networks in organizations.

Picture credit: John Hammink, follow him on twitter

Seven Deadly Sins of Metrics Programs: a Conclusion

Dr. Deming


The Seven Deadly Sins of metrics programs are:

  1. Pride – Believing that a single number/metric is more important than any other factor.
  2. Envy – Instituting measures that facilitate the insatiable desire for another team’s people, tools or applications.
  3. Wrath – Using measures to create friction between groups or teams.
  4. Sloth – Unwillingness to act on or care about the measures you create.
  5. Greed – Allowing metrics to be used as a tool to game the system.
  6. Gluttony – Application of an excess of metrics.
  7. Lust – Pursuit of the number rather than the business goal.

In the end, these sins are a reflection of the organization's culture. Bad metrics can generate bad behavior and reinforce organizational culture issues. Adopting good measures is a step in the right direction; however, culture can't be changed by good metrics alone. Shifting the focus to an organization's business goals, fostering transparency to reduce gaming, and then using measures as tools rather than weapons can support changing the culture. Measurement can generate behavior that leads towards a healthier environment. As leaders, measurement and process improvement professionals, we should push to shape the environment so that everyone can work effectively for the company.

The Shewhart PDCA Cycle (or Deming Wheel) sets out a model where measurement becomes a means to an end rather than an end in its own right. The Deming Wheel popularized the Plan, Do, Check, Act (PDCA) cycle, which is focused on delivering business value. Using the PDCA cycle, organizational changes are first planned, executed, checked by measurement, and then refined based on a positive feedback model. In his book The New Economics, Deming wrote, “Reward for good performance may be the same as reward to the weather man for a pleasant day.” Organizations that fall prey to the Seven Deadly Sins of metrics programs are apt to incent the wrong behavior.

(Thank you Dr. Deming).


Categories: Process Management

Staying on Plan Means Closed Loop Control

Herding Cats - Glen Alleman - Tue, 08/12/2014 - 00:19

For most projects, showing up on or near the planned need date, at or near the planned cost, and more or less with the planned capabilities is a good measure of success. Delivering capabilities late and over budget is usually not acceptable to those paying for our work.

So how do we do this? Simple actually.

We start with a Plan. Here's the approach to Planning and the resulting Plan.

Planning constantly peers into the future for indications as to where a solution may emerge. A Plan is a complex situation, adapting to an emerging solution. 
Mike Dwyer, Big Visible

The Plan tells us when we need the capabilities to produce the needed business value or accomplish the mission. The Plan is a strategy. This strategy involves setting goals, determining actions to achieve the goals, and mobilizing resources to execute the actions. The strategy describes how the ends (goals) will be achieved by the means (resources) in units of measure meaningful to the decision makers.

Strategy creates fit among a firm's activities. For Enterprise IT, this fit is defined by the relationships between the needed capabilities delivered by the project. The success of a strategy depends on doing many things well — not just a few.

The things that are done well must operate within a closely knit system. If there is no fit among the activities, there is no distinctive strategy and little to sustain the strategic deployment process. Management then reverts to the simpler task of overseeing independent functions.

When this occurs operational effectiveness determines the relative performance of the organization. ["What is Strategy,” M. E. Porter, Harvard Business Review, Volume 74, Number 6, pp. 61–78.]

A successful Plan describes the order of delivery of value in exchange for cost, the inter-dependencies between these value producing items, and the synergistic outcomes from these value producing items working together to meet the strategy.

With the Plan in hand, we can ask and answer the following questions:

  • Do we know what Done looks like in units of measure meaningful to the decision makers?
  • Do we have a strategy for reaching done, on or near the need date, at or near the budget?
  • Do we have the resources we need to execute that strategy?
  • Do we know what impediments we'll encounter along the way and how we're going to handle them?
  • Do we have some means of measuring our progress along the way that provides the confidence we'll reach done when we expect to?

This Post Answers the Last Question

The example below is from our cycling group. The principles are the same for projects. We have a desired outcome in terms of date, cost, and technical performance. These desired outcomes have some end goal. A budget, a go live date, a set of features or capabilities needed to fulfill the business case. 

Along the way we need to take corrective actions when we see we are falling behind. 

How did we know we were falling behind? Because we have a desired performance at points along the way that we compare our actual performance to. The difference between our actual performance and the desired performance creates an "error signal" we can use to make adjustments.

Our thermostat does this, the speed control in our car does this, and the closed loop control systems used for managing our projects do this. So replace the cycling example with writing software for money, and the peloton with the desired performance of our work. In the presentation below, ignore the guy in the Yellow Jersey at the end. Turns out he's a doper and an all around bad person to his fellow riders and fans.

(Presentation: The simple problem of schedule performance indices, from Glen Alleman)

So let's look at a project example using the analogy of our cycling group, ignoring Lance.
  • We have a goal of riding a distance in a specific time. This can be a training ride, a century, or a Gran Fondo (timed distance for record).
  • We have instruments on the bike. Strava on the smart phone. Garmin on the bars. Both keep track of instantaneous speed, time moving, and total time, as well as average speed over the distance.
  • We know, because we've done this route a hundred times before, or because we looked at the course map, or because we have talked about the planned distance for the day, about how fast we need to ride - on average - to get back to the driveway at the planned time.
  • Say we're out for our Saturday ride, which is usually a 50 to 65 mile loop from the driveway in Niwot, Colorado to Carter Lake south of Fort Collins.
  • As a group we agree our average pace will be 20 MPH. Everyone has some way of measuring their average and instantaneous speed.
  • Over the course - it's never flat - climbs and descents change the flat land speed, but the average needs to stay at 20 MPH.
  • When one or more riders drop off the back - I'm usually one of them - we know what our average is. And instinctively we know how much faster we need to ride to catch up - pick up the pace.
  • But if we were actual racers riding solo - time trial or Triathlon, we'd look at our Garmin to see if we're going fast enough to get back on pace, and come in under our target time.

This example can be related to a project. 

  • We have a target for the end — a target set of capabilities, with a need date, and a target budget
  • We have targets for all the intermediary points along the way — capabilities over time, budget over time.
  • We know our pace — how fast do we have to go, how much cost can we absorb, to make our goal of delivering the needed capabilities on the needed date, for the needed budget.
  • We know the gap between our pace and our needed pace to stay on track — with an intermediate desired performance, budget, number of features delivered, stated capabilities ready to be used, and the actual measures of these, we can develop a variance that is used to produce an error signal that is used to steer the project. This is the feedback loop. The closed loop control system we need to keep the project on plan.
  • From that variance we know how much faster we need to go to arrive on time.

This is Closed Loop Control

  • We have a target for the end and intermediate targets along the way. These intermediate targets are the feedback points we use to assess progress to date.
  • Measure the actual value of whatever variables we need to provide visibility to the project's performance. This can be cost, schedule, performance. But it can be other attributes of the project. Customer acceptance. Market place feedback. Incremental savings.
  • Determine the variance between the plan and the actual.
  • Take corrective action to close the gap to get back on target  or get within the variance tolerance.
  • Repeat until project ends.
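The steps above can be sketched as a minimal closed-loop controller. This is an illustrative sketch, not the author's implementation; the names (`control_step`, `planned`, `actual`, `tolerance`) and the simple proportional correction are my assumptions:

```python
def control_step(planned: float, actual: float, tolerance: float) -> float:
    """One pass of the closed loop: compare the plan to the actual
    and return a corrective adjustment (the 'error signal')."""
    variance = planned - actual
    if abs(variance) <= tolerance:
        return 0.0       # within tolerance: no correction needed
    return variance      # simple proportional correction

# Repeat until the project ends: sample, compare, correct.
# Cycling analogy: actual average speed vs. the agreed 20 MPH pace.
pace = 17.0    # actual average speed (MPH)
target = 20.0  # planned average speed (MPH)
for _ in range(5):
    pace += 0.5 * control_step(target, pace, tolerance=0.25)
print(round(pace, 2))
```

After a few sampling intervals the corrections drive the actual pace back inside the tolerance band around the plan — the same behavior a thermostat or cruise control exhibits.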

Your cruise control does this about every 10 milliseconds. Your Nest thermostat does this more slowly, but still at intervals of less than a minute. To know how often you need to sample your progress against plan, answer this question:

How long are you willing to wait before you find out you're late? Sample at ½ that time.

This is called the Nyquist Rate, one of the starting points for all the process control software I wrote in my younger days for flying and swimming machines. But it's a good question to ask on all projects as well.
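That rule of thumb translates directly into a sampling interval. A tiny sketch (the helper name is mine, not the author's):

```python
def sampling_interval(max_acceptable_delay_days: float) -> float:
    """Nyquist-style rule of thumb: to detect slippage within
    `max_acceptable_delay_days`, sample progress at half that period."""
    return max_acceptable_delay_days / 2

# Willing to wait at most two weeks to find out you're late?
# Then sample your progress against plan every week.
print(sampling_interval(14))
```

So a team that can't afford to be surprised more than a sprint late should be measuring progress at least every half sprint.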

Related articles:
  • Is There Such a Thing As Making Decisions Without Knowing the Cost?
  • Time to Revisit The Risk Discussion
  • Can Enterprise Agile Be Bottom Up?
  • Process Improvement In A Complex World
  • Why Project Management is a Control System
  • If you want to pretend you're a Pro, then start acting like one
  • Control Systems - Their Misuse and Abuse
  • Seven Immutable Activities of Project Success
  • More Than Target Needed To Steer Toward Project Success
Categories: Project Management

R: Grouping by two variables

Mark Needham - Mon, 08/11/2014 - 17:47

In my continued playing around with R and meetup data I wanted to group a data table by two variables – day and event – so I could see the most popular day of the week for meetups and which events we’d held on those days.

I started off with the following data table:

> head(eventsOf2014, 20)
      eventTime                                              event.name rsvps            datetime       day monthYear
16 1.393351e+12                                         Intro to Graphs    38 2014-02-25 18:00:00   Tuesday   02-2014
17 1.403635e+12                                         Intro to Graphs    44 2014-06-24 18:30:00   Tuesday   06-2014
19 1.404844e+12                                         Intro to Graphs    38 2014-07-08 18:30:00   Tuesday   07-2014
28 1.398796e+12                                         Intro to Graphs    45 2014-04-29 18:30:00   Tuesday   04-2014
31 1.395772e+12                                         Intro to Graphs    56 2014-03-25 18:30:00   Tuesday   03-2014
41 1.406054e+12                                         Intro to Graphs    12 2014-07-22 18:30:00   Tuesday   07-2014
49 1.395167e+12                                         Intro to Graphs    45 2014-03-18 18:30:00   Tuesday   03-2014
50 1.401907e+12                                         Intro to Graphs    35 2014-06-04 18:30:00 Wednesday   06-2014
51 1.400006e+12                                         Intro to Graphs    31 2014-05-13 18:30:00   Tuesday   05-2014
54 1.392142e+12                                         Intro to Graphs    35 2014-02-11 18:00:00   Tuesday   02-2014
59 1.400611e+12                                         Intro to Graphs    53 2014-05-20 18:30:00   Tuesday   05-2014
61 1.390932e+12                                         Intro to Graphs    22 2014-01-28 18:00:00   Tuesday   01-2014
70 1.397587e+12                                         Intro to Graphs    47 2014-04-15 18:30:00   Tuesday   04-2014
7  1.402425e+12       Hands On Intro to Cypher - Neo4j's Query Language    38 2014-06-10 18:30:00   Tuesday   06-2014
25 1.397155e+12       Hands On Intro to Cypher - Neo4j's Query Language    28 2014-04-10 18:30:00  Thursday   04-2014
44 1.404326e+12       Hands On Intro to Cypher - Neo4j's Query Language    43 2014-07-02 18:30:00 Wednesday   07-2014
46 1.398364e+12       Hands On Intro to Cypher - Neo4j's Query Language    30 2014-04-24 18:30:00  Thursday   04-2014
65 1.400783e+12       Hands On Intro to Cypher - Neo4j's Query Language    26 2014-05-22 18:30:00  Thursday   05-2014
5  1.403203e+12 Hands on build your first Neo4j app for Java developers    34 2014-06-19 18:30:00  Thursday   06-2014
34 1.399574e+12 Hands on build your first Neo4j app for Java developers    28 2014-05-08 18:30:00  Thursday   05-2014

I was able to work out the average number of RSVPs per day with the following code using plyr:

> ddply(eventsOf2014, .(day=format(datetime, "%A")), summarise, 
        count=length(datetime),
        rsvps=sum(rsvps),
        rsvpsPerEvent = rsvps / count)
 
        day count rsvps rsvpsPerEvent
1  Thursday     5   146      29.20000
2   Tuesday    13   504      38.76923
3 Wednesday     2    78      39.00000

The next step was to show the names of events that happened on those days next to the row for that day. To do this we can make use of the paste function like so:

> ddply(eventsOf2014, .(day=format(datetime, "%A")), summarise, 
        events = paste(unique(event.name), collapse = ","),
        count=length(datetime),
        rsvps=sum(rsvps),
        rsvpsPerEvent = rsvps / count)
 
        day                                                                                                    events count rsvps rsvpsPerEvent
1  Thursday Hands On Intro to Cypher - Neo4j's Query Language,Hands on build your first Neo4j app for Java developers     5   146      29.20000
2   Tuesday                                         Intro to Graphs,Hands On Intro to Cypher - Neo4j's Query Language    13   504      38.76923
3 Wednesday                                         Intro to Graphs,Hands On Intro to Cypher - Neo4j's Query Language     2    78      39.00000

If we wanted to drill down further and see the number of RSVPs per day per event type then we could instead group by the day and event name:

> ddply(eventsOf2014, .(day=format(datetime, "%A"), event.name), summarise, 
        count=length(datetime),
        rsvps=sum(rsvps),
        rsvpsPerEvent = rsvps / count)
 
        day                                              event.name count rsvps rsvpsPerEvent
1  Thursday Hands on build your first Neo4j app for Java developers     2    62      31.00000
2  Thursday       Hands On Intro to Cypher - Neo4j's Query Language     3    84      28.00000
3   Tuesday       Hands On Intro to Cypher - Neo4j's Query Language     1    38      38.00000
4   Tuesday                                         Intro to Graphs    12   466      38.83333
5 Wednesday       Hands On Intro to Cypher - Neo4j's Query Language     1    43      43.00000
6 Wednesday                                         Intro to Graphs     1    35      35.00000

There are too few data points for some of those to make any decisions but as we gather more data hopefully we’ll see if there’s a trend for people to come to events on certain days or not.

Categories: Programming

The Easy Way of Building a Growing Startup Architecture Using HAProxy, PHP, Redis and MySQL to Handle 1 Billion Requests a Week

This Case Study is a guest post written by Antoni Orfin, Co-Founder and Software Architect at Octivi

In this post I'll show you the way we developed a quite simple architecture based on HAProxy, PHP, Redis and MySQL that seamlessly handles approximately 1 billion requests every week. I'll also note possible ways of scaling it out further and point out some uncommon patterns that are specific to this project.

Stats:
Categories: Architecture

How to Negotiate Your Salary

Making the Complex Simple - John Sonmez - Mon, 08/11/2014 - 15:00

I’m often surprised how many software developers neglect to do any salary negotiations at all or make a single attempt at negotiating their salary and then give up and take whatever is offered. Negotiating your salary is important—not just because the dollars will add up over time and you could end up leaving a lot […]

The post How to Negotiate Your Salary appeared first on Simple Programmer.

Categories: Programming