Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Varnish Goes Upstack with Varnish Modules and Varnish Configuration Language

This is a guest post by Denis Brækhus and Espen Braastad, developers on the Varnish API Engine from Varnish Software. Varnish has long been used in discriminating backends, so it's interesting to see what they are up to.

Varnish Software has just released Varnish API Engine, a high performance HTTP API Gateway which handles authentication, authorization and throttling all built on top of Varnish Cache. The Varnish API Engine can easily extend your current set of APIs with a uniform access control layer that has built in caching abilities for high volume read operations, and it provides real-time metrics.

Varnish API Engine is built using well known components like memcached, SQLite and most importantly Varnish Cache. The management API is written in Python. A core part of the product is written as an application on top of Varnish using VCL (Varnish Configuration Language) and VMODs (Varnish Modules) for extended functionality.

We would like to use this as an opportunity to show how you can create your own flexible yet still high performance applications in VCL with the help of VMODs.

VMODs (Varnish Modules)
Categories: Architecture

The Newest Management 3.0 Game: Improv Cards

NOOP.NL - Jurgen Appelo - Wed, 05/06/2015 - 16:04
Improv Cards

The Improv Cards game contains 52 playing cards with pictures on them. You play it with at least three people, though best is probably a table of four to six players.

The post The Newest Management 3.0 Game: Improv Cards appeared first on NOOP.NL.

Categories: Project Management

Negotiating – BATNA

Cabbage or Onion, Which Can I Afford?

“All the world's a negotiation, and all the men and women merely players” (apologies to William Shakespeare, As You Like It, Act II, Scene VII).

When done correctly, negotiating is like a play in which the parties are still working out the final scene. Each party will have an idea of what the scene will be like; however, the scene may never happen as imagined, so they need to have a fallback plan. In negotiation terms that fallback plan is called the best alternative to a negotiated agreement, or BATNA. The term BATNA was originally coined by Roger Fisher and William Ury in their 1981 book Getting to Yes: Negotiating Agreement Without Giving In. Every project manager, Scrum master, product owner, developer and tester is involved in countless negotiations on a daily basis. Those negotiations can be as trivial as where to go to lunch or as momentous as whether a specific feature will be included in a release. One of the most common failings of an untrained negotiator is failing to identify their BATNA BEFORE they begin negotiations.

Establishing a BATNA provides a negotiator with more negotiating power. Knowing your alternative before you start negotiating allows you to understand what you can concede and where you have to push to attain your goals. Not having a BATNA can lead a negotiator to concede too much or to needlessly dig in their heels.

We have noted that cognitive biases affect how people interpret data. One common bias that affects negotiations is the outcome bias. A person (or group) afflicted with this bias judges a decision by its eventual outcome instead of by how the decision was reached. The single-minded pursuit of the final outcome of the negotiation, for example a new contract or a new house, leads the party with this type of bias to make the wrong concessions, often leading to buyer's remorse. Establishing a BATNA before the negotiation begins provides an anchor (another form of bias) that can be used as a flag to indicate when to walk away.

BATNA is a very simple concept. BATNA represents what you will do if you fail to get an agreement at the end of a negotiation. Simply put, BATNA is the best you can do when the other party stops negotiating and you cannot accept their position. Without a clear understanding of your BATNA, it is difficult to determine when to accept, draw the line or walk away. For example, once on a family vacation our Subaru Legacy broke down in rural New York State. We found the only person who could work on a Subaru and had them look at the car. Because they were the only dealer within over 100 miles, we determined that we had two options other than accepting their repair estimate. Our BATNA was either to buy a new car or to have our car towed home and take the bus back to Cleveland. Neither was a good option. Luckily the dealer's estimate was VERY reasonable. However, we were prepared in case we had to negotiate more strenuously or had to walk away.


Categories: Process Management

Python: Selecting certain indexes in an array

Mark Needham - Tue, 05/05/2015 - 22:39

A couple of days ago I was scraping the UK parliament constituencies from Wikipedia in preparation for the Graph Connect hackathon and had got to the point where I had an array with one entry per column in the table.


import requests
 
from bs4 import BeautifulSoup
from soupselect import select
 
page = open("constituencies.html", 'r')
soup = BeautifulSoup(page.read())
 
for row in select(soup, "table.wikitable tr"):
    if select(row, "th"):
        print [cell.text for cell in select(row, "th")]
 
    if select(row, "td"):
        print [cell.text for cell in select(row, "td")]
$ python blog.py
[u'Constituency', u'Electorate (2000)', u'Electorate (2010)', u'Largest Local Authority', u'Country of the UK']
[u'Aldershot', u'66,499', u'71,908', u'Hampshire', u'England']
[u'Aldridge-Brownhills', u'58,695', u'59,506', u'West Midlands', u'England']
[u'Altrincham and Sale West', u'69,605', u'72,008', u'Greater Manchester', u'England']
[u'Amber Valley', u'66,406', u'69,538', u'Derbyshire', u'England']
[u'Arundel and South Downs', u'71,203', u'76,697', u'West Sussex', u'England']
[u'Ashfield', u'74,674', u'77,049', u'Nottinghamshire', u'England']
[u'Ashford', u'72,501', u'81,947', u'Kent', u'England']
[u'Ashton-under-Lyne', u'67,334', u'68,553', u'Greater Manchester', u'England']
[u'Aylesbury', u'72,023', u'78,750', u'Buckinghamshire', u'England']
...

I wanted to get rid of the 2nd and 3rd columns (containing the electorates) from the array, since I already have that data from another source.

I was struggling to do this, but two different Stack Overflow questions came to the rescue with suggestions to use enumerate to get the index of each column and then add a condition to the list comprehension to filter appropriately.

First we’ll look at the filtering on a simple example. Imagine we have a list of 5 people:

people = ["mark", "michael", "brian", "alistair", "jim"]

And we only want to keep the 1st, 4th and 5th people. We therefore only want to keep the values that exist in index positions 0,3 and 4 which we can do like this:

>>> [x[1] for x in enumerate(people) if x[0] in [0,3,4]]
['mark', 'alistair', 'jim']

Now let’s apply the same approach to our constituencies data set:

import requests
 
from bs4 import BeautifulSoup
from soupselect import select
 
page = open("constituencies.html", 'r')
soup = BeautifulSoup(page.read())
 
for row in select(soup, "table.wikitable tr"):
    if select(row, "th"):
        print [entry[1].text for entry in enumerate(select(row, "th")) if entry[0] in [0,3,4]]
 
    if select(row, "td"):
        print [entry[1].text for entry in enumerate(select(row, "td")) if entry[0] in [0,3,4]]
$ python blog.py
[u'Constituency', u'Largest Local Authority', u'Country of the UK']
[u'Aldershot', u'Hampshire', u'England']
[u'Aldridge-Brownhills', u'West Midlands', u'England']
[u'Altrincham and Sale West', u'Greater Manchester', u'England']
[u'Amber Valley', u'Derbyshire', u'England']
[u'Arundel and South Downs', u'West Sussex', u'England']
[u'Ashfield', u'Nottinghamshire', u'England']
[u'Ashford', u'Kent', u'England']
[u'Ashton-under-Lyne', u'Greater Manchester', u'England']
[u'Aylesbury', u'Buckinghamshire', u'England']
Categories: Programming

A Framework for Managing Other Peoples Money

Herding Cats - Glen Alleman - Tue, 05/05/2015 - 17:06

Managing other people's money - our internal money, money from a customer, money from an investor - means making rational decisions based on facts. In an uncertain and emerging future, making those decisions means assessing the impact of that decision on that future in terms of money, time, and delivered value.

Making those decisions - in the presence of this uncertainty - implies we need to develop information about the variables that appear in that future. We can use past data, of course, but that data needs to be adjusted for several factors:

  • Does this data in the past represent data in the future?
  • Does this data have a statistical assessment for variance, standard deviation, and other moments that we can use to assess the usefulness of the data for making decisions in the future?
  • Are there enough data points to create a credible forecast of the future?

No answers to these questions? Then the data you collected is not likely to have much value in making decisions for the future.
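As an illustration of the kind of statistical assessment the second question asks for, here is a minimal Python sketch using only the standard library. The cycle-time samples and variable names are hypothetical, not from the post:

```python
import math
import statistics

# Hypothetical past task durations, in days (illustrative data only)
samples = [8.0, 11.5, 9.0, 14.0, 10.5, 12.0, 9.5, 13.0]

mean = statistics.mean(samples)
stdev = statistics.stdev(samples)  # sample standard deviation
cv = stdev / mean                  # coefficient of variation

# Rough 95% confidence interval on the mean (normal approximation;
# with a sample this small, a t-distribution would widen it somewhat)
half_width = 1.96 * stdev / math.sqrt(len(samples))
low, high = mean - half_width, mean + half_width

print(f"mean={mean:.1f} days, CV={cv:.0%}, 95% CI ~ ({low:.1f}, {high:.1f})")
```

A wide interval or a high coefficient of variation is exactly the signal that the past data, as collected, is a weak basis for forecasting the future.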

Categories: Project Management

Getting Comfortable with Not Signing Up for Tasks in Sprint Planning

Mike Cohn's Blog - Tue, 05/05/2015 - 15:00

In last week’s blog post, I wrote about whether team members should sign up for tasks during sprint planning. I concluded that team commitment goes up when names are left off specific tasks during sprint planning, and this is a good thing.

But, starting a sprint without names on any tasks can also feel very unsettling to teams and ScrumMasters who are new to Scrum. So, I want to offer some advice on how to get comfortable with this idea.

If you’d prefer to leave sprint planning with a name on every task, go ahead; have team members sign up for tasks and make sure each task has a name next to it. Do this for perhaps a team’s first five sprints until everyone is comfortable with the process.

Then switch to having people sign up for only about half of the tasks in the sprint. This will be more than enough to get started and probably won’t feel any worse—or any better—than when everything had a name next to it at the start of the sprint.

But it’s an important first step in the direction of getting out of the habit of allocating tasks to individuals during sprint planning.

Do this for two sprints. After two sprints, have team members sign up for only one-fourth of the total tasks in the sprint. At this point, you’ll almost certainly start to see most of the benefits of a real-time sign-up strategy.

You can stop there if you’d like. Or allocate 25 percent of tasks for two sprints and then go all the way to 0.

Elements of Scale: Composing and Scaling Data Platforms

This is a guest repost of Ben Stopford's epic post on Elements of Scale: Composing and Scaling Data Platforms. A masterful tour through the evolutionary forces that shape how systems adapt to key challenges.

As software engineers we are inevitably affected by the tools we surround ourselves with. Languages, frameworks, even processes all act to shape the software we build.

Likewise databases, which have trodden a very specific path, inevitably affect the way we treat mutability and share state in our applications.

Over the last decade we’ve explored what the world might look like had we taken a different path. Small open source projects try out different ideas. These grow. They are composed with others. The platforms that result utilise suites of tools, with each component often leveraging some fundamental hardware or systemic efficiency. The result, platforms that solve problems too unwieldy or too specific to work within any single tool.

So today’s data platforms range greatly in complexity. From simple caching layers or polyglotic persistence right through to wholly integrated data pipelines. There are many paths. They go to many different places. In some of these places at least, nice things are found.

So the aim for this talk is to explain how and why some of these popular approaches work. We’ll do this by first considering the building blocks from which they are composed. These are the intuitions we’ll need to pull together the bigger stuff later on.

Categories: Architecture

My Journey to Finally Ditching My Desktop PC

Making the Complex Simple - John Sonmez - Mon, 05/04/2015 - 16:00

I’m a bit crazy when it comes to computer hardware. I’ll admit it, I sort of obsess over what most people would consider minor details. I’ve long had this dream of having the perfect workstation for getting my work done. This dream has changed and morphed over time, but I like to think it’s getting […]

The post My Journey to Finally Ditching My Desktop PC appeared first on Simple Programmer.

Categories: Programming

The Innovation Revolution (A Time of Radical Transformation)

It was the best of times, it was the worst of times …

It's not A Tale of Two Cities. It's a tale of the Innovation Revolution.

We've got real problems worth solving. The stakes are high. Time is short. And abstract answers are not good enough.

In the book Ten Types of Innovation: The Discipline of Building Breakthroughs, Larry Keeley, Helen Walters, Ryan Pikkel, and Brian Quinn explain how it is like A Tale of Two Cities in that it is the worst of times and it is the best of times.

But it is also like no other time in history.

It’s an Innovation Revolution … We have the technology and we can innovate our way through radical transformation.

The Worst of Times (Innovation Has Big Problems to Solve)

We've got some real problems to solve, whether it's health issues, poverty, crime, or ignorance. Duty calls. Will innovation answer?

Via Ten Types of Innovation: The Discipline of Building Breakthroughs:

“People expect very little good news about the wars being fought (whether in Iraq, Afghanistan, or on Terror, Drugs, Poverty, or Ignorance). The promising Arab Spring has given way to a recurring pessimism about progress. Gnarly health problems are on a tear the world over--diabetes now affects over eight percent of Americans--and other expensive disease conditions such as obesity, heart disease, and cancer are also now epidemic. The cost of education rises like a runaway helium balloon, yet there is less and less evidence that it nets the students a real return on their investment. Police have access to ever more elaborate statistical models of crime, but there is still way too much of it. And global warming steadily produces more extreme and more dangerous conditions the world over, yet according to about half of our elected 'leaders,' it is still, officially, only a theory that can conveniently be denied.”

The Best of Times (Innovation is Making Things Happen)

Innovation has been answering. There have been amazing innovations heard round the world. It's only the beginning of an Innovation Revolution.

Via Ten Types of Innovation: The Discipline of Building Breakthroughs:

“And yet ...

We steadily expect more from our computers, our smartphones, apps, networks, and games. We have grown to expect routine and wondrous stories of new ventures funded through crowdsourcing. We hear constantly of lives around the world transformed because of Twitter or Khan Academy or some breakthrough discovery in medicine. Esther Duflo and her team at the Poverty Action Lab at MIT keep cracking tough problems that afflict the poor to arrive at solutions with demonstrated efficacy, and then, often, the Gates Foundation or another philanthropic institution funds the transformational solution at unprecedented scale.

Storytelling is in a new golden age--whether in live events, on the radio, or in amazing new television series that can emerge anywhere in the world and be adapted for global tastes.  Experts are now everywhere, and shockingly easy and affordable to access.

Indeed, it seems clear that all the knowledge we've been struggling to amass is steadily being amplified and swiftly getting more organized, accessible, and affordable--whether through the magic of elegant little apps or big data managed in ever-smarter clouds or crowdfunding sites used to capitalize creative ideas in commerce or science.”

It’s a Time of Radical Transformation and New, More Agile Institutions

The pace of change and the size of change will accelerate exponentially as the forces of innovation rally together.

Via Ten Types of Innovation: The Discipline of Building Breakthroughs:

“One way to make sense of these opposing conditions is to see us as being in a time of radical transformation. To see the old institutions as being challenged as a series of newer, more agile ones arise. In history, such shifts have rarely been bloodless, but this one seems to be a radical transformation in the structure, sources, and nature of expertise. Indeed, among innovation experts, this time is one like no other. For the very first time in history, we are in a position to tackle tough problems with ground-breaking tools and techniques.”

It’s time to break some ground.

Join the Innovation Revolution and crack some problems worth solving.

You Might Also Like

How To Get Innovation to Succeed Instead of Fail

Innovation Life Cycle

Management Innovation is at the Top of the Innovation Stack

The Drag of Old Mental Models on Innovation and Change

The Myths of Business Model Innovation

Categories: Architecture, Programming

How We Make Decisions is as Important as What We Decide.

Herding Cats - Glen Alleman - Mon, 05/04/2015 - 15:36

I've been working all week on a release of a critical set of project capabilities and need a break from that. This post will be somewhat scattered, as I'm writing it in the lobby to get some fresh air.

Here's the post asking for a conversation about estimates. Here's a long response to that request.

Let's ignore the term FACT for the moment as untestable and see how to arrive at some answers for each statement. These answers are from a paradigm of Software Intensive Systems, where Microeconomics of decision-making is the core paradigm used to make decisions, based on Risk and Opportunity Costs from those decisions.

  • FACT: It is possible, and sometimes necessary, to estimate software tasks and projects.
    • It is always possible to estimate the future. This is well established in all domains. The mathematical means to make estimates are readily available in any bookstore, on any college campus, and on the web.
    • The confidence in the estimate's value is part of the estimating process. Measures of error, variance, confidence intervals, sample sizes for past performance, and a myriad of other measures are also readily available for the asking.
    • The value at risk is one attribute of the estimate.
    • Low value at risk allows a wider range on the confidence value.
    • High value at risk requires higher confidence.
  • FACT: Questioning the intent behind a request for an estimate is the professional thing to do
    • Introducing the profession card is a common tactic. Developing software, for the moment, is not a profession. A profession requires prolonged training and recognition of professional qualifications. The Directive on Recognition of Professional Qualifications (2005/36/EC) covers “those practiced on the basis of relevant professional qualifications in a personal, responsible and professionally independent capacity by those providing intellectual and conceptual services in the interest of the client and the public”.
    • Programmers are not professionals in that sense.
    • To the CFO with a CPA – which is a profession – the intent of estimates is to inform those accountable for the money so they can make decisions about that money informed by the value at risk.
    • To question that intent assumes those making those decisions no longer have the fiduciary responsibility of being the stewards of the money, and that that responsibility has been transferred to those spending the money.
    • This would imply that the separation of concerns in any governance-based business has been suspended.
  • FACT: #NoEstimates is a Twitter hashtag and was never intended to become a demand, a method or a black-and-white truth
    • The hashtag's original poster makes a clear and concise statement:
      • We can make decisions in the absence of estimating the impact of those decisions.
    • Until those original words are addressed, clarified, and possibly corrected or even withdrawn, the hashtag will remain contentious.
    • The original post would mean the principles of microeconomics no longer apply to the development of software using other people's money in the presence of uncertainty.
    • Continually redefining the meaning of #NoEstimates to address this willful ignoring of microeconomics is simply going in circles.
    • If it is possible to make a decision about the future in the presence of uncertainty about that future – uncertainty about the cost of achieving a beneficial outcome from the decision, and about the cost and time needed to achieve that probabilistic outcome – WITHOUT estimating, let's hear it.
    • And by the way, using past performance samples – and small ones at that – does not remove the need to estimate future outcomes. It only provides one way to inform the probabilistic behavior of that future outcome. It is still estimating. Calling it No Estimates while using past performance, no matter how poorly formed, does not excuse the misuse of the term.
  • FACT: The #NoEstimates hashtag became something due to the interest it generated
    • This is a shouting fire in a theater approach to conversation.
    • Without a domain and governance paradigm, the notion of making decisions in the absence of estimates has no basis for being tested.
  • FACT: A forecast is a type of estimate, whether probabilistic, deterministic, bombastic or otherwise
    • Yes, forecasting is estimating outcomes in the future. The weather forecast is an estimate of the future behavior of the atmosphere.
    • Estimates of the past and present can also be made.
  • FACT: Forecasting is distinct from estimation, at least in the common usage of the words, in that it involves using data to make the “estimate” rather than relying on a person or people drawing on “experience” or guessing
    • These definitions are not found outside the poster's personally selected operational definitions. No probability and statistics book uses that definition. If there is one, please provide the reference. Wikipedia definitions from other domains don't count. Find that definition in the software systems estimating literature and let's talk further.
    • Texts like Estimating Software Intensive Systems do not make this distinction, nor do any other books, papers, or resources on estimating.
    • Estimating is about past, present, and future approximations of values found in systems with uncertainty.
      • Estimate - a number that approximates a value of interest in a system with uncertainty.
      • Estimating - the process used to make such a calculation.
      • To Estimate - to find a value close to the actual value: 2 ≈ 2.3, that is, 2 is an approximation of the value 2.3.
    • Forecasts are about future approximations of values found in systems with uncertainty.
    • Looking for definitions outside the domain of software development and bending them to fit the needs of the argument is disingenuous.
  • FACT: People who tweet with the hashtag #NoEstimates, or indeed any other hashtag, are not automatically saying “My tweet is congruent and completely in agreement with the literal meaning of the words in the hashtag”
    • Those who tweet with the hashtag are in fact retweeting the notion that decisions can be made without estimates if they do not explicitly challenge that notion.
    • If that is not established, there is implicit support of the original idea.
  • FACT: The prevailing way estimation is done in software projects is single point estimation
    • This is likely a personal experience, since many stating it have limited experience outside their domain.
    • It is simply bad mathematics, well known to anyone who took a high school statistics class. If you did take that class and believe that, you get a D.
  • FACT: The prevailing way estimates are used in software organizations is a push for a commitment, and then an excuse for a whipping when the estimate is not met.
    • Again, this is likely personal experience.
    • If the poster said in my experience... that would establish the limits of the statement.
    • “IME” takes three letters. Those letters are rarely seen from those suggesting not estimating is a desirable approach to managing in the presence of uncertainty while spending other people's money.
    • Those complaining about the phrase spending other people's money are likely not doing that, or not doing it with a substantial value at risk.
  • FACT: The above fact does not make estimates a useless artifact, nor estimation itself a useless or damaging activity
    • Those proffering that decisions can be made without estimating have in FACT said estimating is damaging, useless, and a waste of time.
    • Until that is countered, it will remain the basis of NoEstimates.

So if the OP is actually interested in moving away from the known problem of using estimates in a dysfunctional way, let's stop speaking about how to make decisions without estimates, and learn how to make the good estimates needed for good decisions.

This issue of Harvard Business Review is dedicated to making better decisions. Start by reading how to make good decisions; there is a wealth of guidance on how to do that. Why use Dilbert-style management examples? We all know about those. How about some actionable corrective actions for the root causes of bad management, all backed up with data beyond personal anecdotes? It reminds me of early XP, where just try it was pretty much the approach. So if the OP is really interested in...

Let’s use our collective influence and intelligence to take the discussion forward to how we can cure the horrible cancer in our industry of Estimate = Date = Commitment.

Then there are nearly unlimited resources for doing that. The first is to call BS on the notion that decisions can be made without estimates without first stating where this is applicable. Acknowledge unequivocally that estimates are needed when the value at risk reaches a level deemed important by the owners of the money. Then start acting like the professionals we want to be and earn a seat at the table to improve the probability of success of our projects with credible estimates of cost, effort, risk, productivity, production of value, and all the other attributes of that success.

For those interested in exploring further how to provide value to those paying your salary, here are some posts on Estimating Books

Categories: Project Management

Neo4j: LOAD CSV – java.io.InputStreamReader there's a field starting with a quote and whereas it ends that quote there seems to be character in that field after that ending quote. That isn't supported.

Mark Needham - Mon, 05/04/2015 - 10:56

I recently came across the last.fm dataset via Ben Frederickson’s blog and thought it’d be an interesting one to load into Neo4j and explore.

I started with a simple query to parse the CSV file and count the number of rows:

LOAD CSV FROM "file:///Users/markneedham/projects/neo4j-recommendations/lastfm-dataset-360K/usersha1-artmbid-artname-plays.tsv" 
AS row FIELDTERMINATOR  "\t"
return COUNT(*)
 
At java.io.InputStreamReader@4d307fda:6484 there's a field starting with a quote and whereas it ends that quote there seems  to be character in that field after that ending quote. That isn't supported. This is what I read: 'weird al"'

This blows up because (as the message says) we’ve got a field which uses double quotes but then has other characters either side of the quotes.

A quick search through the file reveals one of the troublesome lines:

$ grep "\"weird" lastfm-dataset-360K/usersha1-artmbid-artname-plays.tsv  | head -n 1
0015371426d2cbef354b2f680340de38d0ebd2f0	7746d775-9550-4360-b8d5-c37bd448ce01	"weird al" yankovic	4099

I ran a file containing only that line through CSV Lint to see what it thought, and indeed it is invalid.


Let’s clean up our file to use single quotes instead of double quotes and try the query again:

$ tr "\"" "'" < lastfm-dataset-360K/usersha1-artmbid-artname-plays.tsv > lastfm-dataset-360K/clean.tsv
LOAD CSV FROM "file:///Users/markneedham/projects/neo4j-recommendations/lastfm-dataset-360K/clean.tsv" as row FIELDTERMINATOR  "\t"
return COUNT(*)
 
17559530

And we’re back in business! Interestingly Python’s CSV reader chooses to strip out the double quotes rather than throw an exception:

import csv
with open("smallWeird.tsv", "r") as file:
    reader = csv.reader(file, delimiter="\t")
 
    for row in reader:
        print row
$ python explore.py
['0015371426d2cbef354b2f680340de38d0ebd2f0', '7746d775-9550-4360-b8d5-c37bd448ce01', 'weird al yankovic', '4099']

I prefer LOAD CSV’s approach but it’s an interesting trade off I hadn’t considred before.

Categories: Programming

How To Get Innovation to Succeed Instead of Fail

“Because the purpose of business is to create a customer, the business enterprise has two – and only two – basic functions: marketing and innovation. Marketing and innovation produce results; all the rest are costs. Marketing is the distinguishing, unique function of the business.” – Peter Drucker

I’m diving deeper into patterns and practices for innovation.

Along the way, I’m reading and re-reading some great books on the art and science of innovation.

One innovation book I’m seriously enjoying is Ten Types of Innovation: The Discipline of Building Breakthroughs by Larry Keeley, Helen Walters, Ryan Pikkel, and Brian Quinn.

Right up front, Larry Keeley shares some insight into the journey to this book.  He says that this book really codifies, structures, and simplifies three decades of experience from Doblin, a consulting firm focused on innovation.

For more than three decades, Doblin tried to answer the following question:

“How do we get innovation to succeed instead of fail?”

Along the journey, there were a few ideas that they used to bridge the gap in innovation between the state of the art and the state of the practice.

Here they are …

Balance 3 Dimensions of Innovation (Theoretical Side + Academic Side + Applied Side)

Larry Keeley and his business partner Jay Doblin, a design methodologist, always balanced three dimensions of innovation: a theoretical side, an academic side, and an applied side.

Via Ten Types of Innovation: The Discipline of Building Breakthroughs:

“Over the years we have kept three important dimensions in dynamic tension. We have a theoretical side, where we ask and seek real answers to tough questions about innovation. Simple but critical ones like, 'Does brainstorming work?' (it doesn't), along with deep and systemic ones like, 'How do you really know what a user wants when the user doesn't know either?' We have an academic side, since many of us are adjunct professors at Chicago's Institute of Design and this demands that we explain our ideas to smart young professionals in disciplined, distinctive ways. And third, we have an applied side, in that we have been privileged to adapt our innovation methods to many of the world's leading global enterprises and start-ups that hanker to be future leading firms.”

Effective Innovation Needs a Blend of Analysis + Synthesis

Innovation is a balance and blend of analysis and synthesis.  Analysis involves tearing things down, while synthesis is building new things up.

Via Ten Types of Innovation: The Discipline of Building Breakthroughs:

“From the beginning, Doblin has itself been interdisciplinary, mixing social sciences, technology, strategy, library sciences, and design into a frothy admixture that has always tried to blend both analysis, breaking tough things down, with synthesis, building new things up.  Broadly, we think any effective innovation effort needs plenty of both, stitched together as a seamless whole.”

Orchestrate the Ten Types of Innovation to Make a Game-Changing Innovation

Game-changing innovation is an orchestration of the ten types of innovation.

Via Ten Types of Innovation: The Discipline of Building Breakthroughs:

“The heart of this book is built around a seminal Doblin discovery: that there are (and have always been) ten distinct types of innovation that need to be orchestrated with some care to make a game-changing innovation.”

The main idea is that innovation fails if you try to solve it with just one dimension.

You can’t just take a theoretical approach and hope that it works in the real world.

At the same time, innovation fails if you don’t leverage what we learn from the academic world and actually apply it.

And, if you know the ten types of innovation, you can focus your efforts more precisely.

You Might Also Like

Innovation Life Cycle

Management Innovation is at the Top of the Innovation Stack

No Slack = No Innovation

The Drag of Old Mental Models on Innovation and Change

The Myths of Business Model Innovation

Categories: Architecture, Programming

SPaMCAST 340 - Tom Howlett, Scrum Master, Teams, Collaboration, Distributed Agile


http://www.spamcast.net

Listen Now

Subscribe on iTunes

Software Process and Measurement Cast 340 features our interview with Tom Howlett.  Tom is a Scrum Master.  We talked about teams, collaboration and how to effectively be Agile in distributed teams.

Tom’s bio:

Tom’s been building and working with teams that focus on continuous improvement for 15 years. In that time he’s written about the difficulties he faced and how he overcame them in over 100 blog posts on “Diary of a ScrumMaster”, and a book called “A Programmer’s Guide To People”. He has a strong focus on breaking down the barriers that restrict collaboration (whether remote or co-located) and ensuring the people who do the work can effectively decide how it’s done. He’s becoming well known in the Agile community through his speaking and running his local group, the “Cheltenham Geeks”. His company LeanTomato provides help forming new teams and helping existing ones meet people’s needs more effectively.

Contact information
Blog: Diary of a ScrumMaster
Twitter: @diaryofscrum
Website: LeanTomato

Remember:

Jo Ann Sweeny (Explaining Change) is running her annual Worth Working Summit.  Please visit http://www.worthworkingsummit.com/

Call to action!

Reviews of the Podcast help to attract new listeners.  Can you write a review of the Software Process and Measurement Cast and post it on the podcatcher of your choice?  Whether you listen on iTunes or any other podcatcher, a review will help to grow the podcast!  Thank you in advance!

Re-Read Saturday News

The Re-Read Saturday focus on Eliyahu M. Goldratt and Jeff Cox’s The Goal: A Process of Ongoing Improvement began on February 21st. The Goal has been hugely influential because it introduced the Theory of Constraints, which is central to lean thinking. The book is written as a business novel. Visit the Software Process and Measurement Blog and catch up on the re-read.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast.

Dead Tree Version or Kindle Version 

I am beginning to think of which book will be next. Do you have any ideas?

Upcoming Events

CMMI Institute Global Congress
May 12-13 Seattle, WA, USA
My topic – Agile Risk Management
http://cmmiconferences.com/

DCG will also have a booth!

Next SPaMCast

The next Software Process and Measurement Cast will feature our essay on Agile team decision making. Team-based decision making requires mechanisms and prerequisites for creating consensus among team members. The prerequisites are a decision to be made, trust, knowledge and the tools to make a decision. In many instances team members are assumed by management and other team members to already have the required tools and techniques in their arsenal.  Next week we will explore decision making and give you tools to make decisions.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques was co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management

SPaMCAST 340 - Tom Howlett - Scrum Master, Teams, Collaboration, Distributed Agile

Software Process and Measurement Cast - Sun, 05/03/2015 - 22:00


Categories: Process Management

How to deploy an ElasticSearch cluster using CoreOS and Consul

Xebia Blog - Sun, 05/03/2015 - 13:39

The hot potato in the room of containerized solutions is persistent services. Stateless applications are easy and trivial, but deploying a persistent service like ElasticSearch is a totally different ball game. In this blog post we will show you how easy it is to create ElasticSearch clusters on this platform. The key to this ease is the ability to look up the external ip addresses and port numbers of all cluster members in Consul, combined with the reusable power of the CoreOS unit file templates. The presented solution is a ready-to-use ElasticSearch component for your application.

This solution:

  • uses ephemeral ports so that we can actually run multiple ElasticSearch nodes on the same host
  • mounts persistent storage under each node to prevent data loss on server crashes
  • uses the power of the CoreOS unit template files to deploy new ElasticSearch clusters.


In previous blog posts we defined A High Available Docker Container Platform using CoreOS and Consul and showed how we can add persistent storage to a Docker container.

Once this platform is booted, the only thing you need to do to deploy an ElasticSearch cluster is to submit the following fleet unit system template file elasticsearch@.service and start 3 or more instances.

Booting the platform

To see the ElasticSearch cluster in action, first boot up our CoreOS platform.

git clone https://github.com/mvanholsteijn/coreos-container-platform-as-a-service
cd coreos-container-platform-as-a-service/vagrant
vagrant up
./is_platform_ready.sh
Starting an ElasticSearch cluster

Once the platform is started, submit the elasticsearch unit file and start three instances:

export FLEETCTL_TUNNEL=127.0.0.1:2222
cd ../fleet-units/elasticsearch
fleetctl submit elasticsearch@.service
fleetctl start elasticsearch@{1..3}

Now wait until all elasticsearch instances are running by checking the unit status.

fleetctl list-units
...
UNIT            MACHINE             ACTIVE  SUB
elasticsearch@1.service f3337760.../172.17.8.102    active  running
elasticsearch@2.service ed181b87.../172.17.8.103    active  running
elasticsearch@3.service 9e37b320.../172.17.8.101    active  running
mnt-data.mount      9e37b320.../172.17.8.101    active  mounted
mnt-data.mount      ed181b87.../172.17.8.103    active  mounted
mnt-data.mount      f3337760.../172.17.8.102    active  mounted
Create an ElasticSearch index

Now that the ElasticSearch cluster is running, you can create an index to store data.

curl -XPUT http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/ -d \
     '{ "settings" : { "index" : { "number_of_shards" : 3, "number_of_replicas" : 2 } } }'
Insert a few documents
curl -XPUT http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/employee/1 -d@- <<!
{
    "first_name" : "John",
    "last_name" :  "Smith",
    "age" :        25,
    "about" :      "I love to go rock climbing",
    "interests": [ "sports", "music" ]
}
!

curl -XPUT http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/employee/2 -d@- <<!
{
    "first_name" :  "Jane",
    "last_name" :   "Smith",
    "age" :         32,
    "about" :       "I like to collect rock albums",
    "interests":  [ "music" ]
}
!

curl -XPUT http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/employee/3 -d@- <<!
{
    "first_name" :  "Douglas",
    "last_name" :   "Fir",
    "age" :         35,
    "about":        "I like to build cabinets",
    "interests":  [ "forestry" ]
}
!
And query the index
curl -XGET http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/employee/_search?q=last_name:Smith
...
{
  "took": 50,
  "timed_out": false,
  "_shards": {
    "total": 3,
    "successful": 3,
    "failed": 0
  },
  "hits": {
    "total": 2,
  ...
}

Restarting the cluster

Even when you restart the entire cluster, your data is persisted.

fleetctl stop elasticsearch@{1..3}
fleetctl list-units

fleetctl start elasticsearch@{1..3}
fleetctl list-units

curl -XGET http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/employee/_search?q=last_name:Smith
...
{
  "took": 50,
  "timed_out": false,
  "_shards": {
    "total": 3,
    "successful": 3,
    "failed": 0
  },
  "hits": {
    "total": 2,
  ...
}

Open the console

Finally you can see the servers and the distribution of the index in the cluster by opening the console
http://elasticsearch.127.0.0.1.xip.io:8080/_plugin/head/.

elasticsearch head

Deploy other ElasticSearch clusters

Changing the name of the template file is the only thing you need to deploy another ElasticSearch cluster.

cp elasticsearch\@.service my-cluster\@.service
fleetctl submit my-cluster\@.service
fleetctl start my-cluster\@{1..3}
curl my-cluster.127.0.0.1.xip.io:8080
How does it work?

Starting a node in an ElasticSearch cluster is quite trivial, as shown by the command line below:

exec gosu elasticsearch elasticsearch \
    --discovery.zen.ping.multicast.enabled=false \
    --discovery.zen.ping.unicast.hosts=$HOST_LIST \
    --transport.publish_host=$PUBLISH_HOST \
    --transport.publish_port=$PUBLISH_PORT \
     $@

We use the unicast protocol and specify our own publish host and port, together with a list of the ip addresses and port numbers of all the other nodes in the cluster.

Finding the other nodes in the cluster

But how do we find the other nodes in the cluster? That is quite easy. We query the Consul REST API for all entries with the same service name that are tagged "es-transport". This is the service exposed by ElasticSearch on port 9300.

curl -s http://consul:8500/v1/catalog/service/$SERVICE_NAME?tag=es-transport

...
[
    {
        "Node": "core-03",
        "Address": "172.17.8.103",
        "ServiceID": "elasticsearch-1",
        "ServiceName": "elasticsearch",
        "ServiceTags": [
            "es-transport"
        ],
        "ServiceAddress": "",
        "ServicePort": 49170
    },
    {
        "Node": "core-01",
        "Address": "172.17.8.101",
        "ServiceID": "elasticsearch-2",
        "ServiceName": "elasticsearch",
        "ServiceTags": [
            "es-transport"
        ],
        "ServiceAddress": "",
        "ServicePort": 49169
    },
    {
        "Node": "core-02",
        "Address": "172.17.8.102",
        "ServiceID": "elasticsearch-3",
        "ServiceName": "elasticsearch",
        "ServiceTags": [
            "es-transport"
        ],
        "ServiceAddress": "",
        "ServicePort": 49169
    }
]

Turning this into a comma separated list of network endpoints is done using the following jq command:

curl -s http://consul:8500/v1/catalog/service/$SERVICE_NAME?tag=es-transport |\
     jq -r '[ .[] | [ .Address, .ServicePort | tostring ] | join(":")  ] | join(",")'
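If jq is not available, the same transformation can be sketched in Python. The JSON below is a trimmed copy of the Consul response shown above; in a real deployment you would fetch it over HTTP from the Consul agent rather than embedding it.

```python
import json

# Trimmed copy of the /v1/catalog/service/<name>?tag=es-transport response.
consul_response = json.loads("""
[
  {"Node": "core-03", "Address": "172.17.8.103", "ServicePort": 49170},
  {"Node": "core-01", "Address": "172.17.8.101", "ServicePort": 49169}
]
""")

# Build the comma-separated host:port list that feeds
# discovery.zen.ping.unicast.hosts.
host_list = ",".join(
    "{0}:{1}".format(entry["Address"], entry["ServicePort"])
    for entry in consul_response
)

print(host_list)  # 172.17.8.103:49170,172.17.8.101:49169
```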
Finding your own network endpoint

As you can see in the above JSON output, each service entry has a unique ServiceID. To obtain our own endpoint, we use the following jq command:

curl -s http://consul:8500/v1/catalog/service/$SERVICE_NAME?tag=es-transport |\
     jq -r ".[] | select(.ServiceID==\"$SERVICE_9300_ID\") | .Address, .ServicePort" 
Finding the number of nodes in the cluster

The intended number of nodes in the cluster is determined by counting the number of fleet unit instance files in CoreOS on startup and passing this number in as an environment variable.

TOTAL_NR_OF_SERVERS=$(fleetctl list-unit-files | grep '%p@[^\.][^\.]*.service' | wc -l)

The %p refers to the part of the fleet unit file before the @ sign.

The Docker run command

The Docker run command is shown below. ElasticSearch exposes two ports: port 9200 exposes a REST api to the clients and port 9300 is used as the transport protocol between nodes in the cluster. Each port is a service and tagged appropriately.

ExecStart=/bin/sh -c "/usr/bin/docker run --rm \
    --name %p-%i \
    --env SERVICE_NAME=%p \
    --env SERVICE_9200_TAGS=http \
    --env SERVICE_9300_ID=%p-%i \
    --env SERVICE_9300_TAGS=es-transport \
    --env TOTAL_NR_OF_SERVERS=$(fleetctl list-unit-files | grep '%p@[^\.][^\.]*.service' | wc -l) \
    -P \
    --dns $(ifconfig docker0 | grep 'inet ' | awk '{print $2}') \
    --dns-search=service.consul \
    cargonauts/consul-elasticsearch"

The options are explained in the table below:

--env SERVICE_NAME=%p
    The name of this service to be advertised in Consul, resulting in a FQDN of %p.service.consul, and used as the cluster name. %p refers to the first part of the fleet unit template file up to the @.

--env SERVICE_9200_TAGS=http
    The tag assigned to the service at port 9200. This is picked up by the http-router, so that any http traffic to the host elasticsearch is directed to this port.

--env SERVICE_9300_ID=%p-%i
    The unique id of this service in Consul. This is used by the startup script to find its external port and ip address in Consul, and is used as the node name for the ES server. %i refers to the second part of the fleet unit file up to the .service.

--env SERVICE_9300_TAGS=es-transport
    The tag assigned to the service at port 9300. This is used by the startup script to find the other servers in the cluster.

--env TOTAL_NR_OF_SERVERS=$(...)
    The number of submitted unit files is counted and passed in as the environment variable TOTAL_NR_OF_SERVERS. The start script waits until this number of servers is actually registered in Consul before starting the ElasticSearch instance.

--dns $(...)
    Set DNS to query on the docker0 interface, where Consul is bound on port 53. (The docker0 interface ip address is chosen at random from a specific range.)

--dns-search=service.consul
    The default DNS search domain.

Sources

The sources for the ElasticSearch repository can be found on github.

start-elasticsearch-clustered.sh
    The complete startup script of elasticsearch.

elasticsearch
    CoreOS fleet unit files for the elasticsearch cluster.

consul-elasticsearch
    Sources for the Consul ElasticSearch repository.

Conclusion

CoreOS fleet template unit files are a powerful way of deploying ready-to-use components for your platform. If you want to deploy cluster-aware applications, a service registry like Consul is essential.

Coding: Visualising a bitmap

Mark Needham - Sun, 05/03/2015 - 01:19

Over the last month or so I’ve spent some time each day reading a new part of the Neo4j code base to get more familiar with it, and one of my favourite classes is the Bits class which does all things low level on the wire and to disk.

In particular I like its toString method which returns a binary representation of the values that we’re storing in bytes, ints and longs.

I thought it’d be a fun exercise to try and write my own function which takes in a 32-bit bitmap and returns a string containing a 1 or 0 depending on whether each bit is set or not.

The key insight is that we need to iterate down from the highest order bit and then create a bit mask of that value and do a bitwise and with the full bitmap. If the result of that calculation is 0 then the bit isn’t set, otherwise it is.

For example, to check if the highest order bit (index 31) was set our bit mask would have the 32nd bit set and all of the others 0’d out.

java> (1 << 31) & 0x80000000
java.lang.Integer res5 = -2147483648

If we wanted to check whether the lowest order bit (index 0) was set, we’d run these computations instead:

java> (1 << 0) & 0x80000000
java.lang.Integer res7 = 0
 
java> (1 << 0) & 0x00000001
java.lang.Integer res8 = 1

Now let’s put that into a function which checks all 32 bits of the bitmap rather than just the ones we define:

private String  asString( int bitmap )
{
    StringBuilder sb = new StringBuilder();
    sb.append( "[" );
    for ( int i = Integer.SIZE - 1; i >= 0; i-- )
    {
        int bitMask = 1 << i;
        boolean bitIsSet = (bitmap & bitMask) != 0;
        sb.append( bitIsSet ? "1" : "0" );
 
        if ( i > 0 &&  i % 8 == 0 )
        {
            sb.append( "," );
        }
    }
    sb.append( "]" );
    return sb.toString();
}

And a quick test to check it works:

@Test
public void shouldInspectBits()
{
    System.out.println(asString( 0x00000001 ));
    // [00000000,00000000,00000000,00000001]
 
    System.out.println(asString( 0x80000000 ));
    // [10000000,00000000,00000000,00000000]
 
    System.out.println(asString( 0xA0 ));
    // [00000000,00000000,00000000,10100000]
 
    System.out.println(asString( 0xFFFFFFFF ));
    // [11111111,11111111,11111111,11111111]
}
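As an aside (not from the original post), the expected strings can be cross-checked in Python, whose format() mini-language handles the zero-padded binary conversion:

```python
def as_string(bitmap):
    # Mask to 32 bits, render as a zero-padded binary string, then group
    # the bits into bytes separated by commas, mirroring asString() above.
    bits = format(bitmap & 0xFFFFFFFF, "032b")
    return "[" + ",".join(bits[i:i + 8] for i in range(0, 32, 8)) + "]"

print(as_string(0x00000001))  # [00000000,00000000,00000000,00000001]
print(as_string(0x80000000))  # [10000000,00000000,00000000,00000000]
print(as_string(0xA0))        # [00000000,00000000,00000000,10100000]
print(as_string(0xFFFFFFFF))  # [11111111,11111111,11111111,11111111]
```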

Neat!

Categories: Programming

Re-Read Saturday: The Goal: A Process of Ongoing Improvement. Part 11


This week I was a participant at the International Software and Measurement (ISMA) Conference, put on by the International Function Point User Group (IFPUG). During the conference, I struck up a conversation with Anteneh Berhane, who was sitting behind me during the general session. Our conversation quickly turned to books and, unbidden, Anteneh volunteered that The Goal was the type of book that had a major impact on his life. He said that The Goal provided an implementable and measurable framework for thinking about all types of work (personal and professional). Nearly everyone I have talked to has been impacted by the ideas in the book, such as the small batch sizes and the analytical view of the whole process that we see in today’s installment of the re-read.

Part 1       Part 2       Part 3      Part 4      Part 5      Part 6      Part 7      Part 8    Part 9   Part 10

Chapter 27

Alex presents the plant’s monthly reports to Bill Peach (Alex’s manager) and Alex’s peers. Even with the troubles with the non-bottleneck parts (Re-Read Part 10), the turnaround has been spectacular. Peach opens the meeting by telling everyone that it is because of Alex’s plant that the division was profitable in the last month. However, Peach has no confidence that the turnaround will continue. Peach tells Alex that unless the plant delivers an additional 15% increase in profit, he will close the plant. Alex commits to the increase with a lot of internal trepidation and no idea how he will deliver it.

On the way home Alex visits Julie, his estranged wife. Alex proposes identifying the goal of their marriage and then working backwards to identify what would help them achieve that goal. Alex is applying many of the ideas from the plant to his personal life. In my conversations between presentations during the ISMA Conference, Anteneh said that understanding the goal of any endeavor is the critical first step toward measuring whether you are attaining that goal (ISMA is a measurement conference). Measuring progress provides feedback to keep you on track.

Chapter 28

Jonah calls Alex. Jonah will be out of touch for several weeks and he wants to make sure things are going well at the plant. Alex fills him in on the progress and the 15% demand being levied on the plant. Jonah points out that since the plant is the only profitable component in the division, Peach probably will not follow through on the threat to close it. Side note: most of us who have managed projects or any other group have been handed stretch goals. Most of these demands are presented in terms of both a carrot (incentives) and a stick (consequences). Peach’s words have that sort of ring to them; however, Alex has committed and asks Jonah if there are any next steps.

Alex meets his management team the next day and relates the first of the next steps. Jonah has suggested that the plant cut batch size in half for all non-bottleneck steps. Alex’s management team lists the steps and the time each step needs for a batch.

Batch Time =
set-up time +
processing time +
wait time before processing +
wait time before being assembled into the next step.

The two categories that include wait time are generally the longest in duration, so cutting batch size directly cuts the overall batch time. The only wild card in the equation is the set-up time, which must occur before each batch. Smaller batches generally decrease wait time by more than they add in set-up time, and they increase the ability to change direction if business needs change.

The concept of shortening batch size has been directly adopted by the Agile community. Time boxes enforce small batch sizes. Teams practicing Scrum will recognize sprint planning as a set-up step required before processing, which includes design, coding and testing. The smaller the batch size the faster value is delivered and the faster feedback is generated.
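The batch-time equation above can be made concrete with a small Python sketch. The numbers are invented for illustration (they are not from the book), and the sketch assumes, per the chapter, that halving the batch size roughly halves the processing and wait times while the set-up time stays fixed per batch.

```python
def batch_time(setup, per_piece, batch_size, wait_before, wait_after):
    # Batch time = set-up time + processing time + wait time before
    # processing + wait time before being assembled into the next step.
    return setup + per_piece * batch_size + wait_before + wait_after

# Full-size batch of 100 pieces: the two wait categories dominate.
full_batch = batch_time(setup=30, per_piece=2, batch_size=100,
                        wait_before=240, wait_after=240)

# Half-size batch of 50 pieces: waits and processing are roughly halved,
# but the same set-up cost is paid for every batch.
half_batch = batch_time(setup=30, per_piece=2, batch_size=50,
                        wait_before=120, wait_after=120)

print(full_batch, half_batch)  # 710 370
```

Even though set-up is now paid twice as often, the first half-size batch finishes (and starts generating feedback) in roughly half the time, which is exactly the effect Agile time boxes exploit.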

Alex asks his team whether they feel that the process will allow them to deliver orders in four weeks or less. When they agree, he asks for a public commitment. Scrum also uses the idea of public commitment to generate internal team support for an overall goal. If everyone publicly commits, it is harder to throw in the towel, and it creates an atmosphere where the entire team helps each other out when a problem arises (in Agile we call this swarming).

Jonah also suggested that Alex ask the company’s sales department to promote the company’s new ability to deliver quickly to their clients. While not stated, the politics of this idea is wonderful: if Alex and his team can pull the delivery change off, they will be virtually immune to Bill Peach’s irrational demands. However, when Alex pitches the plant’s new ability to Jons, the sales/marketing manager, he experiences pushback. Jons does not believe the turnaround because less than a year before, the best the plant could promise was four months (and they were generally late), and now Alex was promising a four-week turnaround on orders. Alex and Jons end up striking a compromise: sales will promote a six-week turnaround on orders. If the plant can deliver in less, Jons will buy Alex a pair of shoes, and if the plant misses the six-week window, Alex will have to buy Jons a pair of shoes.

Summary of The Goal so far:
(Next week I am going to create a separate summary page)

Chapters 1 through 3 actively present the reader with a burning platform. The plant and division are failing. Alex Rogo has actively pursued increased efficiency and automation to generate cost reductions; however, performance is falling even further behind and fear has become a central feature of the corporate culture.

Chapters 4 through 6 shift the focus from steps in the process to the process as a whole. These chapters move us down the path of identifying the ultimate goal of the organization (in this book, making money) and embracing the big picture of systems thinking. In this section, the authors point out that we are often caught up with pursuing interim goals, such as quality, efficiency or even employment, to the exclusion of the ultimate goal. We are reminded by the burning platform identified in the first few pages of the book, the impending closure of the plant and perhaps the division, that in the long run an organization must make progress towards its ultimate goal, or it won’t exist.

Chapters 7 through 9 show Alex’s commitment to change: he seeks more precise advice from Jonah, brings his closest reports into the discussion and begins a dialog with his wife (remember, this is a novel). This section addresses the concept that “you get what you measure.” Here we see measures of efficiency being used at the level of part production, but not at the level of whole orders or even sales. We discover the corollary to that adage: if you measure the wrong thing, you get the wrong thing. We begin to see Alex’s urgency and commitment to make a change.

Chapters 10 through 12 mark a turning point in the book. Alex has embraced a more systems view of the plant and recognizes that the measures used to date have focused on optimizing parts of the process to the detriment of the overall goal of the plant.  What has not fallen into place is how to take that new knowledge and change how the plant works. The introduction of the concepts of dependent events and statistical variation begins to shift the conceptual understanding from what to measure towards how the management team can actually use that information.

Chapters 13 through 16 drive home the point that dependent events and statistical variation impact the performance of the overall system. In order for the overall process to be more effective, you have to understand the capability and capacity of each step and then take a systems view. These chapters establish the concepts of bottlenecks and constraints without directly naming them, and show that focusing on local optimums causes more trouble than benefit.

Chapters 17 through 18 introduce the concept of bottlenecked resources. The effect of the combination of dependent events and statistical variability flowing through bottlenecked resources makes delivery unpredictable and substantially more costly. The variability in flow through the process exposes bottlenecks that limit our ability to catch up, making projects and products late or, worse, generating technical debt when corners are cut in order to make the date or budget.

Chapters 19 through 20 begin with Jonah coaching Alex’s team to help them identify a palette of possible solutions. They discover that every time the capacity of a bottleneck is increased, more product can be shipped.  Changing the capacity of a bottleneck includes reducing down time and the amount of waste the process generates. The impact of a bottleneck is not the cost of an individual part, but the cost of the whole product that cannot be shipped. Alex and his team implement changes incrementally rather than waiting until they can deliver all of the changes at once.

Chapters 21 through 22 are a short primer on change management. Just telling people to do something different does not generate support. Significant change requires transparency, communication and involvement. One of Deming’s 14 Principles is constancy of purpose. Alex and his team engage the workforce through a wide range of communication tools while staying focused on implementing the changes needed to stay in business.

Chapters 23 through 24 introduce the idea of involving the people doing the work in defining the solutions to work problems and finding opportunities. In Agile we use retrospectives to involve and capture the team’s ideas on process and personnel improvements. We also find that fixing one problem without an overall understanding of the whole system can cause problems to pop up elsewhere.

Chapters 25 and 26 introduce several concepts. The first concept is that if non-bottleneck steps are run at full capacity, they create inventory and waste. At full capacity their output outstrips the overall process’ ability to create a final product. Secondly, keeping people and resources 100% busy does not always move you closer to the goal of delivering value to the end customer. Simply put: don’t do work that does not move you closer to the goal of the organization. The combination of these two concepts suggests that products (parts or computer programs) should only be worked on and completed until they are needed in the next step in the process (Kanban). A side effect to these revelations is that sometimes people and processes will not be 100% utilized.

Note: If you don’t have a copy of the book, buy one. If you use the link below, it will support the Software Process and Measurement blog and podcast. Dead Tree Version or Kindle Version


Categories: Process Management

Software architecture as code

Coding the Architecture - Simon Brown - Sat, 05/02/2015 - 11:40

A quick note to say that the video from my Software architecture as code talk at CRAFT 2015 in Budapest, Hungary last week is available to view online. The talk looks at why the software architecture model never quite matches the code, discusses architecturally-evident coding styles as a way to address this, and shows how something like my Structurizr for Java library can be used to create a living software architecture model, based upon a combination of extracting elements from the code and supplementing the model where this isn't possible.

The slides are also available to view online/download. Enjoy!

Categories: Architecture

Next-gen Web Apps with Isomorphic JavaScript

Xebia Blog - Fri, 05/01/2015 - 20:54

The web application landscape has recently seen a big shift in application architecture. Nowadays we build so-called Single Page Applications. These are web applications which render and run in the browser, powered by JavaScript. They are called "Single Page" because in such an application the browser never actually switches between pages. All interaction takes place within a single HTML document. This is great because users will not see a "flash of white" whenever they perform an action, so all interaction feels much more fluid and natural. The application seems to respond much quicker, which has a positive effect on user experience and conversion of the site. Unfortunately, Single Page Applications also have several big drawbacks, mostly concerning the initial loading time and poor rankings in search engines.

Continue reading on Medium »

Forecasting the Future is Critical to All Success

Herding Cats - Glen Alleman - Fri, 05/01/2015 - 18:47

Skate to Where Puck Will Be. Full attribution to Gaping Void for this cartoon: http://gapingvoid.com/2010/05/03/daily-bizcard-11-fred-wilson/

Wayne makes real-time estimates on every skate stroke of where he is going, where the puck is going, and where all the defensemen will be when he plans to take his shot on goal.

When we hear that we can make decisions about the future without estimating the impact of those decisions, using only small-sample, non-statistically adjusted measures, or ignoring the stochastic behaviors of the past and the future, we'll be on the losing end of the shot on goal.

There simply is no way out of the need to estimate the future for any non-trivial project funded by other people's money. A trivial project? Your own money? Act as you wish; no one cares what you do. But if you suggest that your approach works somewhere else, you had better come to the table with some testable data independent of personal anecdotes.
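The “Flaw of Averages” theme in the related articles can be made concrete with a small Monte Carlo sketch. All numbers and distributions below are my own hypothetical assumptions, not from the post: when a project ends only after the slowest of several parallel tracks finishes, a plan built on average durations is almost never met.

```python
# Sketch of the "flaw of averages": every track averages 100 days,
# yet the project, which ends when the *slowest* track ends, almost
# never finishes within the 100-day "average" plan.
import random

random.seed(42)

def track_duration():
    """One work track: symmetric triangular distribution, mean 100 days."""
    return random.triangular(60, 140, 100)

def project_duration(n_tracks=4):
    """The project ends only when the slowest of its parallel tracks ends."""
    return max(track_duration() for _ in range(n_tracks))

trials = [project_duration() for _ in range(10_000)]
on_time = sum(d <= 100 for d in trials) / len(trials)

# Each track beats 100 days half the time, so all four do so only
# about 0.5**4 = 6% of the time -- far from what the averages promise.
print(f"Probability of finishing within 100 days: {on_time:.0%}")
```

This is exactly why point estimates built from averages, without the stochastic behavior behind them, put you on the losing end of the shot on goal.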

Related articles:
- Root Cause Analysis
- The Reason We Plan, Schedule, Measure, and Correct
- The Flaw of Empirical Data Used to Make Decisions About the Future
- The Flaw of Averages and Not Estimating
Categories: Project Management