
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Re-Read Saturday: The Goal: A Process of Ongoing Improvement. Part 18


I had intended to spend the last entry of our re-read of The Goal waxing poetic about the afterword of the book, titled “Standing on the Shoulders of Giants”. Suffice it to say that the afterword does an excellent job describing the practical and theoretical basis for Goldratt and Cox’s ideas, which have ultimately shaped both the lean and process improvement movements since 1984.

Previous Installments:

Part 1       Part 2       Part 3      Part 4      Part 5 
Part 6       Part 7      Part 8     Part 9      Part 10
Part 11     Part 12      Part 13    Part 14    Part 15
Part 16    Part 17

The Goal is important because it introduced and explained the theory of constraints (TOC), which has proven over and over again to be critical to anyone managing a system. The TOC says that the output of any manageable system is limited by a small number of constraints and that all typical systems have at least one constraint. I recently had a discussion with a colleague who posited that not all systems have constraints. He laid out a scenario in which, if you had unlimited resources and capability, it would be possible to create a system without constraints. While theoretically true, it is safe to embrace the operational hypothesis that any realistic process has at least one constraint. Understanding the constraints that affect a process or system gives anyone with an interest in process improvement a powerful tool to deliver effective change. I do mean anyone! While the novel is set in a manufacturing environment, it is easy to see how the ideas apply to any setting where work follows a systematic process. For example, software development and maintenance is a process that takes business needs and transforms those needs into functionality. The readers of the Software Process and Measurement Blog should recognize that the ideas in The Goal are germane to the realm of information technology.

As we have explored the book, I have shared how I have been able to apply the concepts to illustrate that what Goldratt and Cox wrote is applicable in the 21st century workplace. I also shared how others reacted to the book when I read it in public or talked about it to people trapped next to me on numerous flights. Their reactions reminded me that my reaction was not out of the ordinary. The Goal continues to affect people years after it was first published. For example, the concept of the TOC and the Five Focusing Steps proved useful again this week. I was asked to discuss process improvement with a team composed of tax analysts, developers and testers. Each role is highly specialized and there is little cross-specialty work-sharing. With a bit of coaching the team was able to identify their process flow and to develop a mechanism to identify their bottleneck(s) to improve their throughput. Even though the Five Focusing Steps were never mentioned directly, we were able to agree on an improvement approach that would find the constraint, help them exploit the constraint, subordinate the other steps in the process to the constraint, support improving the capacity of the constraint, and then reiterate the analysis once the step was no longer a constraint. Had I never read The Goal, we might not have found a way to improve the process.

Perhaps re-reading the book or just carrying it around has made me overly sensitive to the application of the TOC and the other concepts in the book. However, I don’t think that is the real reason the material is useful. Others have been equally impacted. For example, Steve Tendon, author of Tame The Flow and currently a columnist on the Software Process and Measurement Cast, suggests that The Goal and the TOC have had a significant influence on his groundbreaking process improvement ideas. Bottom line: if you have not read or re-read The Goal, I strongly suggest that you make the time to read the book. The Goal is an important book if you manage processes or are interested in improving how work is done in the 21st century.

I would like to hear from you! Can you tell me:

  1. How has The Goal impacted how you work?
  2. Have you been able to put the ideas in the book into practice?
  3. What are the successes and difficulties you faced when leveraging the Theory of Constraints?
  4. Do you use the Socratic method to identify and fix problems?

Re-Read Saturday Housekeeping Notes:

  • Next week we begin the re-read of The Mythical Man-Month. (I am buying a new copy today, so if you do not have a copy, get a copy today. I will be reading this version of Man-Month.)
  • Remember that the summary of previous entries in the re-read of The Goal has been shifted to a new page.
  • Also, if you don’t have a copy of The Goal, buy one and read it! If you use the link below it will support the Software Process and Measurement blog and podcast. Dead Tree Version or Kindle Version.

Categories: Process Management

R: dplyr – segfault cause ‘memory not mapped’

Mark Needham - Sat, 06/20/2015 - 23:18

In my continued playing around with web logs in R I wanted to process the logs for a day and see what the most popular URIs were.

I first read in all the lines using the read_lines function in readr and put the vector it produced into a data frame so I could process it using dplyr.

library(readr)
dlines = data.frame(column = read_lines("~/projects/logs/2015-06-18-22-docs"))

In the previous post I showed some code to extract the URI from a log line. I extracted this code out into a function and adapted it so that I could pass in a list of values instead of a single value:

extract_uri = function(log) {
  parts = str_extract_all(log, "\"[^\"]*\"")
  return(lapply(parts, function(p) str_match(p[1], "GET (.*) HTTP")[2] %>% as.character))
}
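As a quick illustration (the log line below is made up and simplified), this version of the function returns a list, one element per input line, which will matter in a moment:

library(stringr)
library(dplyr)  # for %>%, used inside extract_uri
# A made-up, simplified log line for illustration:
line = 'prefix "GET /docs/index.html HTTP/1.1" 200 0 "referer" "agent"'
extract_uri(line)
## [[1]]
## [1] "/docs/index.html"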

Next I ran the following function to count the number of times each URI appeared in the logs:

library(dplyr)
pages_viewed = dlines %>%
  mutate(uri  = extract_uri(column)) %>% 
  count(uri) %>%
  arrange(desc(n))

This crashed my R process with the following error message:

segfault cause 'memory not mapped'

I narrowed it down to a problem when doing a group by operation on the ‘uri’ field and came across this post which suggested that it was handled more cleanly in a more recent version of dplyr.

I upgraded to 0.4.2 and tried again:

## Error in eval(expr, envir, enclos): cannot group column uri, of class 'list'

That makes more sense. We’re probably returning a list from extract_uri rather than a vector which would fit nicely back into the data frame. That’s fixed easily enough by unlisting the result:

extract_uri = function(log) {
  parts = str_extract_all(log, "\"[^\"]*\"")
  return(unlist(lapply(parts, function(p) str_match(p[1], "GET (.*) HTTP")[2] %>% as.character)))
}

And now when we run the count function it’s happy again, good times!
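As a quick sanity check with the same made-up line from earlier, the unlisted version returns a plain character vector, which dplyr can group:

extract_uri('prefix "GET /docs/index.html HTTP/1.1" 200 0 "referer" "agent"')
## [1] "/docs/index.html"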

Categories: Programming

Leadership Skills for Making Things Happen

"A leader is one who knows the way, goes the way, and shows the way." -- John C. Maxwell

How many people do you know who talk a good talk, but don’t walk the walk?

Or, how many people do you know who have a bunch of ideas that you know will never see the light of day?  They can pontificate all day long, but the idea of turning those ideas into work that could be done is foreign to them.

Or, how many people do you know who can plan all day long, but their plan is nothing more than a list of things that will never happen?  Worse, maybe they turn it into a team sport, and everybody participates in the planning process of all the outcomes, ideas and work that will never happen. (And who exactly wants to be accountable for that?)

It doesn’t need to be this way.

A lot of people have Hidden Strengths they can develop into Learned Strengths.  And one of the most important buckets of strengths is Leading Implementation.

Leading Implementation is a set of leadership skills for making things happen.

It includes the following leadership skills:

  1. Coaching and Mentoring
  2. Customer Focus
  3. Delegation
  4. Effectiveness
  5. Monitoring Performance
  6. Planning and Organizing
  7. Thoroughness

Let’s say you want to work on these leadership skills.  The first thing you need to know is that these are not elusive skills reserved exclusively for the elite.

No, these are commonly Hidden Strengths that you and others around you already have, and they just need to be developed.

If you don’t think you are good at any of these, then before you rule yourself out and scratch them off your list, you need to ask yourself some key reflective questions:

  1. Do you know what good actually looks like?  Who are your role models?  What do they do differently than you, and is it really might and magic, or do they simply use behaviors or techniques that you could learn, too?
  2. How much have you actually practiced?   Have you really spent any sort of time working at the particular skill in question?
  3. How did you create an effective feedback loop?  So many people rapidly improve when they figure out how to create an effective learning loop and an effective feedback loop.
  4. Who did you learn from?  Are you expecting yourself to just naturally be skilled?  Really?  What if you found a good mentor or coach, one that could help you create an effective learning loop and feedback loop, so you can improve and actually chart and evaluate your progress?
  5. Do you have a realistic bar?  It’s easy to fall into the trap of “all or nothing.”   What if instead of focusing on perfection, you focused on progress?   Could a little improvement in a few of these areas, change your game in a way that helps you operate at a higher level?

I’ve seen far too many starving artists and unproductive artists, as well as mad scientists, who had brilliant ideas that they couldn’t turn into reality.  While some were lucky to pair with the right partners and bring their ideas to life, I’ve actually seen another pattern among productive artists.

They develop some of the basic leadership skills in themselves to improve their ability to execute.

Not only are they more effective on the job, but they are happier with their ability to express their ideas and turn their ideas into action.

Even better, when they partner with somebody who has strong execution, they amplify their impact even more because they have a better understanding and appreciation of what it takes to execute ideas.

Like talk, ideas are cheap.

The market rewards execution.

Categories: Architecture, Programming

R: Regex – capturing multiple matches of the same group

Mark Needham - Fri, 06/19/2015 - 22:38

I’ve been playing around with some web logs using R and I wanted to extract everything that existed in double quotes within a logged entry.

This is an example of a log entry that I want to parse:

log = '2015-06-18-22:277:548311224723746831\t2015-06-18T22:00:11\t2015-06-18T22:00:05Z\t93317114\tip-127-0-0-1\t127.0.0.5\tUser\tNotice\tneo4j.com.access.log\t127.0.0.3 - - [18/Jun/2015:22:00:11 +0000] "GET /docs/stable/query-updating.html HTTP/1.1" 304 0 "http://neo4j.com/docs/stable/cypher-introduction.html" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36"'

And I want to extract these 3 things:

  • /docs/stable/query-updating.html
  • http://neo4j.com/docs/stable/cypher-introduction.html
  • Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36

i.e. the URI, the referrer and browser details.

I’ll be using the stringr library which seems to work quite well for this type of work.

To extract these values we need to find all the occurrences of double quotes and get the text inside those quotes. We might start by using the str_match function:

> library(stringr)
> str_match(log, "\"[^\"]*\"")
     [,1]                                               
[1,] "\"GET /docs/stable/query-updating.html HTTP/1.1\""

Unfortunately that only picked up the first occurrence of the pattern so we’ve got the URI but not the referrer or browser details. I tried str_extract with similar results before I found str_extract_all which does the job:

> str_extract_all(log, "\"[^\"]*\"")
[[1]]
[1] "\"GET /docs/stable/query-updating.html HTTP/1.1\""                                                                            
[2] "\"http://neo4j.com/docs/stable/cypher-introduction.html\""                                                                    
[3] "\"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36\""

We still need to do a bit of cleanup to get rid of the ‘GET’ and ‘HTTP/1.1’ in the URI and the quotes in all of them:

parts = str_extract_all(log, "\"[^\"]*\"")[[1]]
uri = str_match(parts[1], "GET (.*) HTTP")[2]
referer = str_match(parts[2], "\"(.*)\"")[2]
browser = str_match(parts[3], "\"(.*)\"")[2]
 
> uri
[1] "/docs/stable/query-updating.html"
 
> referer
[1] "https://www.google.com/"
 
> browser
[1] "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36"

We could then go on to split out the browser string into its sub components but that’ll do for now!
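For what it's worth, a rough first cut at that split might look like the following; the pattern is only an assumption about this particular user agent string, not a general parser:

> str_match(browser, "Mozilla/([^ ]+) \\((.*?)\\)")[2:3]
[1] "5.0"                               "Macintosh; Intel Mac OS X 10_10_2"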

Categories: Programming

How to deploy composite Docker applications with Consul key values to CoreOS

Xebia Blog - Fri, 06/19/2015 - 16:34

Most examples of deploying Docker applications to CoreOS use a single Docker application. But as soon as you have an application that consists of more than one unit, the number of commands you have to type soon becomes annoying. At Xebia we have a best practice that says "Three strikes and you automate", mandating that the third time you do something similar, you automate it. In this blog I share the manual page of a utility called fleetappctl that allows you to perform rolling upgrades and deploy Consul key-value pairs of composite applications to CoreOS, and I show three examples of its usage.


fleetappctl is a utility that allows you to manage a set of CoreOS fleet unit files as a single application. You can start, stop and deploy the application. fleetappctl is idempotent and does rolling upgrades on template files with multiple instances running. It can substitute placeholders at deployment time and it is able to deploy Consul key-value pairs as part of your application. Using fleetappctl you have everything you need to create a self-contained deployment unit of your composite application and put it under version control.

The command line options to fleetappctl are shown below:

fleetappctl [-d deployment-descriptor-file]
            [-e placeholder-value-file]
            (generate | list | start | stop | destroy)
option -d

The deployment descriptor file describes all the fleet unit files and Consul key-value pair files that make up the application. All the files referenced in the deployment descriptor may have placeholders for deployment-time values. These placeholders are enclosed in double curly brackets {{ }}.

option -e

The file contains the values for the placeholders to be used on deployment of the application. The file has a simple format:

<name>=<value>
start

starts all units in the order in which they appear in the deployment descriptor. If you have a template unit file, you can specify the number of instances you want to start. Start is idempotent, so you may call start multiple times. Start will bring the deployment in line with your descriptor.

If the unit file has changed with respect to the deployed unit file, the corresponding instances will be stopped and restarted with the new unit file. If you have a template file, the instances of the template file will be upgraded one by one.

Any Consul key-value pairs defined by the consul.KeyValuePairs entries are created in Consul. Existing values are not overwritten.

generate

generates a deployment descriptor (deployit-manifest.xml) based upon all the unit files found in your directory. If a file is a fleet unit template file, the number of instances to start is set to 2 to support rolling upgrades.

stop

stops all units in reverse order of their appearance in the deployment descriptor.

destroy

destroys all units in reverse order of their appearance in the deployment descriptor.

list

lists the runtime status of the units that appear in the deployment descriptor.

Install fleetappctl

To install the fleetappctl utility, type the following commands:

curl -q -L https://github.com/mvanholsteijn/fleetappctl/archive/0.25.tar.gz | tar -xzf -
cd fleetappctl-0.25
./install.sh
brew install xmlstarlet
brew install fleetctl
Start the platform

If you do not have the platform running, start it first.

cd ..
git clone https://github.com/mvanholsteijn/coreos-container-platform-as-a-service.git
cd coreos-container-platform-as-a-service
git checkout 029d3dd8e54a5d0b4c085a192c0ba98e7fc2838d
cd vagrant
vagrant up
./is_platform_ready.sh
Example - Three component web application


The first example is a three-component application. It consists of a mount, a Redis database service and a web application. We generate the deployment descriptor, indicate that we do not want to start the mount, start the application, and then modify the web application unit file to change the service name into 'helloworld'. We perform a rolling upgrade by issuing start again. Finally we list, stop and destroy the application.

cd ../fleet-units/app
# generate a deployment descriptor
fleetappctl generate

# do not start mount explicitly
xml ed -u '//fleet.UnitConfigurationFile[@name="mnt-data"]/startUnit' \
       -v false deployit-manifest.xml > \
        deployit-manifest.xml.new
mv deployit-manifest.xml{.new,}

# start the app
fleetappctl start 

# Check it is working
curl hellodb.127.0.0.1.xip.io:8080
curl hellodb.127.0.0.1.xip.io:8080

# Change the service name of the application in the unit file
sed -i -e 's/SERVICE_NAME=hellodb/SERVICE_NAME=helloworld/' app-hellodb@.service

# do a rolling upgrade
fleetappctl start 

# Check it is now accessible on the new service name
curl helloworld.127.0.0.1.xip.io:8080

# Show all units of this app
fleetappctl list

# Stop all units of this app
fleetappctl stop
fleetappctl list

# Restart it again
fleetappctl start

# Destroy it
fleetappctl destroy
Example - placeholder references

This example shows the use of a placeholder reference in the unit file of the paas-monitor application. The application takes two optional environment variables, RELEASE and MESSAGE, that allow you to configure the resulting responses. The variable RELEASE is configured in the Docker run command in the fleet unit file through a placeholder. The actual value for the current deployment is taken from a placeholder value file.

cd ../fleetappctl-0.25/examples/paas-monitor
#check out the placeholder reference
grep '{{' paas-monitor@.service

...
ExecStart=/bin/sh -c "/usr/bin/docker run --rm --name %p-%i \
 --env RELEASE={{release}} \
...
# checkout our placeholder values
cat dev.env
...
release=V2
# start the app
fleetappctl -e dev.env start

# show current release in status
curl paas-monitor.127.0.0.1.xip.io:8080/status

# start is idempotent (ie. nothing happens)
fleetappctl -e dev.env start

# update the placeholder value and see a rolling upgrade in the works
echo 'release=V3' > dev.env
fleetappctl -e dev.env start
curl paas-monitor.127.0.0.1.xip.io:8080/status

fleetappctl destroy
Example - Env Consul Key Value Pair deployments


The final example shows the use of Consul key-value pairs, placeholders, and envconsul to dynamically update the environment variables of a running instance. The environment variables RELEASE and MESSAGE are taken from the keys under /paas-monitor in Consul. In turn, the initial values of these keys are loaded on first deployment and set using values from the placeholder file.

cd ../fleetappctl-0.25/examples/envconsul

#check out the Consul Key Value pairs, and notice the reference to placeholder values
cat keys.consul
...
paas-monitor/release={{release}}
paas-monitor/message={{message}}

# checkout our placeholder values
cat dev.env
...
release=V4
message=Hi guys
# start the app
fleetappctl -e dev.env start

# show current release and message in status
curl paas-monitor.127.0.0.1.xip.io:8080/status

# Change the message in Consul
fleetctl ssh paas-monitor@1 \
    curl -X PUT \
    -d \'hello Consul\' \
    http://172.17.8.101:8500/v1/kv/paas-monitor/message

# checkout the changed message
curl paas-monitor.127.0.0.1.xip.io:8080/status

# start does not change the values..
fleetappctl -e dev.env start
Conclusion

CoreOS provides all the basic functionality for a Container Platform as a Service. With the fleetappctl utility it becomes easy to start, stop and upgrade composite applications. The script complements fleetctl and does not break other ways of deploying your applications to CoreOS.

The source code, manual page and documentation of fleetappctl can be found on https://github.com/mvanholsteijn/fleetappctl.

 

Diff'ing software architecture diagrams again

Coding the Architecture - Simon Brown - Fri, 06/19/2015 - 12:42

In Diff'ing software architecture diagrams, I showed that creating a software architecture model with a textual format provides you with the ability to version control and diff different versions of the model. As a follow-up, somebody asked me whether Structurizr provides a way to recreate what Robert Annett originally posted in Diagrams for System Evolution. In other words, can the colours of the lines be changed? As you can see from the images below, the answer is yes.


To do this, you simply add some tags to the relationships and add the appropriate styles to the view configuration. structurizr.com will even auto-generate a key for you.

Diagram key

And yes, you can do the same with elements too. As this illustrates, the choice of approach is yours.

Categories: Architecture

If You Have More Than One Sprint You Need To Scale!


I use Microsoft Office, many of my clients use large ERP packages, and last week I bought a highly functional math package to do data analysis. I would describe all of these as products that are refreshed over time by major and minor releases. As an outsider, the delineation of product and release is both easily recognizable and easily describable. However, once you peel back the covers and dive into the inner workings of the typical corporate IT organization (a broad definition that includes development, maintenance, support, security and more), how work is grouped and described can often be much more akin to a descent through Dante’s nine circles of hell. Recently I sat in a conference room in front of a whiteboard and challenged a group to identify how work was grouped and how the groupings related to each other. During the discussion we chose not to discuss the portfolio level within the organization and to focus on the delivery of functionality. The results followed a path that looked like a simple hierarchy running from programs to projects to sprints . . . except for the waterfall teams that don’t do sprints. This hierarchy was crosscut by products and releases, increasing the possible complexity of any particular scenario. As work is grouped together, from sprints to programs into larger and larger blocks, additional layers of control are needed to manage risk.

In Agile the smallest grouping of work is the sprint. In a perfect world, a team would accept work into a sprint where each story was independent and intrinsically motivating. Each piece of work would be its own big picture; however, that scenario is rare at best. Most people are interested in knowing that they are helping build the Golden Gate Bridge, not just the left lane of a typical bridge on-ramp. We are more motivated when we believe we are doing something big. It is rare that a unit of work being delivered by a single sprint from a single team will require any scaling.

In most organizations, a project represents the grouping of work that is most easily recognized. While the concept of a project is under pressure in some circles (we will explore concepts such as #NoProjects later), I haven’t sat next to anyone on a plane who doesn’t describe their work as projects, whether they are in marketing, sales, accounting, consulting or software. Projects and project accounting are firmly enforced in most organizations’ finance and accounting departments. As noted in Projects in Agile, almost every organization has its own definition of a project. Interestingly, I was eating dinner with a group of developers and scrum masters recently when the conversation turned to the definition of a project. A sizable group decided that any discrete task could be described as a project. A task had a start and an end and was goal oriented. From a grouping perspective, projects are typically an accumulation of sprints or releases in Agile. In more classic scenarios, a project can be described as one or more releases into production. Any project that involves more than a single sprint and a single team will require scaling to afford greater foresight and planning so that the pieces fit together in a coherent whole.

The definition of a release is widely variable. Releases can be a subset of a project with functionality pushed to production, test or some other environment. Alternately, a release could be a group of whole discrete projects that are moved into an environment together. The only common thread in the use of the term release is that it represents movement. Releases, other than in very uncomplicated environments, will always require coordination between development teams and operations, business and potentially customers. The larger and more complex the release, the more planning and coordination will be required.

Programs are groups of related and often inter-related projects or releases that are organized together to achieve a common goal. By definition programs are larger than projects and can be implemented through one or a large number of releases. Because they are larger and therefore more complex, programs typically require additional techniques to ensure foresight, planning and coordination so that all stakeholders understand what is happening and their role in achieving success.

A final grouping of work is around the concept of product. Building from the Software Engineering Institute’s definition of a software product line, a simple definition of a product could be a set of related functions that are managed as a whole to satisfy a specific market need. Typically a product is developed through a project or program and is maintained through releases, projects and programs. Products should have a roadmap that provides internal and external customers guidance on the features that the organization intends to develop and deliver over time. Roadmaps are typically more granular in the near term and more speculative as the time horizon recedes.

As work is grouped from smallest to largest, from sprint to program or product, added effort is required to organize and coordinate the work. Increased levels of planning and coordination require additional tools and techniques beyond the basics typically found in Scrum or Extreme Programming (XP). In the Agile vernacular, the need for additional techniques to deal with size, coordination and planning is called scaling.


Categories: Process Management

Startup Thinking

“Startups don't win by attacking. They win by transcending.  There are exceptions of course, but usually the way to win is to race ahead, not to stop and fight.” -- Paul Graham

A startup is the largest group of people you can convince to build a different future.

Whether you launch a startup inside a big company or launch a startup as a new entity, there are a few things that determine the strength of the startup: a sense of mission, space to think, new thinking, and the ability to do work.

The more clarity you have around Startup Thinking, the more effective you can be, whether you are starting startups inside or outside of a big company.

In the book, Zero to One: Notes on Startups, or How to Build the Future, Peter Thiel shares his thoughts about Startup Thinking.

Startups are Bound Together by a Sense of Mission

It’s the mission.  A startup has an advantage when there is a sense of mission that everybody lives and breathes.  The mission shapes the attitudes and the actions that drive towards meaningful outcomes.

Via Zero to One: Notes on Startups, or How to Build the Future:

“New technology tends to come from new ventures--startups.  From the Founding Fathers in politics to the Royal Society in science to Fairchild Semiconductor's ‘traitorous eight’ in business, small groups of people bound together by a sense of mission have changed the world for the better.  The easiest explanation for this is negative: it's hard to develop new things in big organizations, and it's even harder to do it by yourself.  Bureaucratic hierarchies move slowly, and entrenched interests shy away from risk.” 

Signaling Work is Not the Same as Doing Work

One strength of a startup is the ability to actually do work.  With other people.  Rather than just talk about it, plan for it, and signal about it, a startup can actually make things happen.

Via Zero to One: Notes on Startups, or How to Build the Future:

“In the most dysfunctional organizations, signaling that work is being done becomes a better strategy for career advancement than actually doing work (if this describes your company, you should quit now).  At the other extreme, a lone genius might create a classic work of art or literature, but he could never create an entire industry.  Startups operate on the principle that you need to work with other people to get stuff done, but you also need to stay small enough so that you actually can.”

New Thinking is a Startup’s Strength

The strength of a startup is new thinking.  New thinking is even more valuable than agility.  Startups provide the space to think.

Via Zero to One: Notes on Startups, or How to Build the Future:

“Positively defined, a startup is the largest group of people you can convince of a plan to build a different future.  A new company's most important strength is new thinking: even more important than nimbleness, small size affords space to think.  This book is about the questions you must ask and answer to succeed in the business of doing new things: what follows is not a manual or a record of knowledge but an exercise in thinking.  Because that is what a startup has to do: question received ideas and rethink business from scratch.”

Do you have stinking thinking, or do you have a beautiful mind?

New thinking will take you places.

You Might Also Like

How To Get Innovation to Succeed Instead of Fail

Management Innovation is at the Top of the Innovation Stack

The Innovation Revolution

The New Competitive Landscape

The New Realities that Call for New Organizational and Management Capabilities

Categories: Architecture, Programming

I Know I’ll Never Be Happy Unless I Do Something For Myself

Making the Complex Simple - John Sonmez - Thu, 06/18/2015 - 16:00

In this episode, I answer an email about being truly happy doing something for oneself. Full transcript: John:               Hey, John Sonmez from simpleprogrammer.com. I’ve got a question here about some life choices. It’s titled In A Pickle, and I’ll read you here. Cole says, “Hey, John. First off, I wanted to say you’re a huge […]

The post I Know I’ll Never Be Happy Unless I Do Something For Myself appeared first on Simple Programmer.

Categories: Programming

Monte Carlo Simulation of Project Performance

Herding Cats - Glen Alleman - Thu, 06/18/2015 - 15:32

Project work is random. Most everything in the world is random: the weather, commuter traffic, the productivity of writing and testing code. Few things actually take as long as they are planned to. Cost is less random, but there are variances in the cost of labor and the availability of labor. Mechanical devices have variances as well.

The exact fit of a water pump on a Toyota Camry is not the same for each pump. There is a tolerance in the mounting holes and in the volume of water pumped. This is a variance in the technical performance.

Managing in the presence of these uncertainties is part of good project management. But there are two distinct paradigms of managing in the presence of these uncertainties.

  1. We have empirical data of the variances. We have samples of the hole positions and sizes of the water pump mounting plate for the last 10,000 pumps that were installed. We have samples of how long it took to write a piece of code and the attributes of the code that are correlated to that duration. We have empirical measures.
  2. We have a theoretical model of the water pump in the form of a 3D CAD model, with the materials modeled for expansion, drilling errors of the holes, and other static and dynamic variances. We model the duration of work using a Probability Distribution Function and a Three Point estimate of the Most Likely, Pessimistic, and Optimistic durations. These can be derived from past performance, but we don't have enough actual data to produce the PDF and have a low enough Sample Error for our needs.

In the first case we have empirical data. In the second case we don't. There are two approaches to modeling what the system will do in terms of cost and schedule outcomes.

Bootstrapping the Empirical Data

With samples of past performance and the proper statistical assessment of those samples, we can re-sample them to produce a model of future performance. This bootstrap resampling shares the principle of the second method - Monte Carlo Simulation - but with several important differences.

  • The researcher - and we are researching what the possible outcomes might be from our model - does not know, nor have any control over, the Probability Distribution Function that generated the past samples. You take what you got.
  • As well, we don't have any understanding of why those samples appear as they do. They're just there. We get what we get.
  • This last piece is critical because it prevents us from defining what performance must be in place to meet some future goal. We can't tell what performance we need because we have no model of the needed performance, just samples from the past.
  • This results from the statistical condition that there is a PDF for the process that is unobserved. All we have is a few samples of this process.
  • With these few samples, we're going to resample them to produce a modeled outcome. This resampling locks in any behavior of the future using the samples from the past, which may or may not actually represent the true underlying behavior. This may be all we can do because we don't have any theoretical model of the process.

This bootstrapping method is quick and easy, and it produces a quick and easy result. But it has issues that must be acknowledged.

  • There is a fundamental assumption that the past empirical samples represent the future. That is, the samples contained in the bootstrapped list and their resampling are also contained in all the future samples.
  • Said in a more formal way
    • If the sample of data we have from the past is a reasonable representation of the underlying population of all samples from the work process, then the distribution of parameter estimates produced from the bootstrap model on a series of resampled data sets will provide a good approximation of the distribution of those statistics in the population.
    • With this sample data and its parameters (statistical moments) we can make a good approximation of the future.
  • There are some important statistical behaviors that must be considered though, starting with the assumption that the future samples are statistically identical to the past samples.
    • Nothing is going to change in the future
    • The past and the future are identical statistically
    • In the project domain that is very unlikely
  • With all these conditions, for a small project with few if any interdependencies and a static work process with little variance, bootstrapping is a nice quick and dirty approach to forecasting (estimating the future) based on the past. A minimal sketch of this resampling idea follows below.
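To make that quick-and-dirty approach concrete, here is a minimal bootstrap sketch in R. The observed durations and the number of future tasks are invented for illustration; the point is that the future is simulated purely by resampling the past.

# Bootstrap sketch in R; the observed sample below is made up.
observed = c(5, 7, 6, 9, 8, 12, 7, 6)  # past task durations, in days
# Simulate 10,000 possible totals for 10 future tasks, assuming the
# future behaves exactly like the past sample:
totals = replicate(10000, sum(sample(observed, 10, replace = TRUE)))
quantile(totals, c(0.5, 0.8))          # durations at 50% and 80% confidence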

Monte Carlo Simulation

This approach is more general and removes many of the restrictions to the statistical confidence of bootstrapping.

Just as a reminder, in principle both the parametric and the non-parametric bootstrap are special cases of Monte Carlo simulations used for a very specific purpose: estimate some characteristics of the sampling distribution. But like all principles, in practice there are larger differences when modeling project behaviors.

In the more general approach of Monte Carlo Simulation, the algorithm repeatedly creates random data in some way, performs some modeling with that random data, and collects some result, for example:

  • The duration of a set of independent tasks
  • The probabilistic completion date of a series of tasks connected in a network (schedule), each with a different Probability Distribution Function evolving as the project moves into the future.
  • A probabilistic cost correlated with the probabilistic schedule model. This is called the Joint Confidence Level. Both cost and schedule are random variables with time-evolving changes in their respective PDFs.

In practice, when we hear Monte Carlo simulation we are talking about a theoretical investigation, e.g. creating random data with no empirical content - or from reference classes - used to investigate whether an estimator can represent known characteristics of this random data. The (parametric) bootstrap, by contrast, refers to an empirical estimation and is not necessarily a model of the underlying processes, just a small sample of observations independent from the actual processes that generated that data.

The key advantage of MCS is that we don't necessarily need past empirical data. MCS can be used to advantage if we have it, but we don't need it for the Monte Carlo Simulation algorithm to work.

This approach can be used to estimate some outcome, as in the bootstrap, but also to theoretically investigate some general characteristic of a statistical estimator (cost, schedule, technical performance) which is difficult to derive from empirical data.

MCS removes the roadblock heard in many critiques of estimating - we don't have any past data on which to estimate. No problem: build a model of the work and the dependencies between that work, assign statistical parameters to the individual or collected PDFs, and run the MCS to see what comes out (a small sketch follows below).
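As a minimal sketch of that modeling step, here is one way it could look in R. The three-point estimates are invented, a triangular PDF stands in for whatever distribution you would actually choose, and the three tasks are assumed to run serially; real tools model full task networks, correlations, and time-evolving PDFs.

# Monte Carlo sketch in R; the three-point estimates below are invented.
rtriangle = function(n, min, mode, max) {  # sample from a triangular PDF
  u = runif(n)
  f = (mode - min) / (max - min)
  ifelse(u < f,
         min + sqrt(u * (max - min) * (mode - min)),
         max - sqrt((1 - u) * (max - min) * (max - mode)))
}
# Optimistic / most likely / pessimistic durations for three serial tasks:
tasks = list(c(3, 5, 9), c(2, 4, 7), c(4, 6, 12))
totals = replicate(10000,
  sum(sapply(tasks, function(t) rtriangle(1, t[1], t[2], t[3]))))
quantile(totals, c(0.5, 0.8))  # durations at 50% and 80% confidence

Because the PDFs are explicit, we can also change them to represent the performance the work must have, rerun the simulation, and see whether the plan still closes. That is the capability the rest of this post turns on.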

This approach has several critical advantages:

  • The first is a restatement - we don't need empirical data, although it will add value to the modeling process.
    • This is the primary purpose of Reference Classes
    • They are the raw material for defining possible future behaviors from the past
  • We can make judgments about what the future will be like, or most importantly what the future MUST be like to meet our goals, run the simulation, and determine if our planned work will produce the desired result.

So Here's the Killer Difference

Bootstrapping models make several key assumptions, which may not be true in general. So they must be tested before accepting any of the outcomes.

  • The future is like the past.
  • The statistical parameters are static - they don't evolve with time. That is, the future is like the past, an unlikely prospect on any non-trivial project.
  • The sampled data is identical to the population data both in the past and in the future.

Monte Carlo Simulation models provide key value that bootstrapping can't.

  • Different Probability Distribution Functions can be assigned to work as it progresses through time.
  • The shape of that PDF can be defined from past performance, or defined from the needed performance. This is a CRITICAL capability of MCS.

The critical difference between Bootstrapping and Monte Carlo Simulation is that MCS can show what the future performance has to be to stay on schedule (within variance), on cost, and have the technical performance meet the needs of the stakeholder.

When the process of defining the needed behavior of the work is done, a closed loop control system is put in place. This needed performance is the steering target. Measures of actual performance compared to needed performance generate the error signals for taking corrective actions. Just measuring past performance and assuming the future will be the same is Open Loop control. Any non-trivial project management method needs a closed loop control system.

Bootstrapping can only show what the future will be like if it is like the past, not what it must be like. In Bootstrapping this future MUST be like the past. In MCS we can tune the PDFs to show what performance has to be in order to manage to that plan. Bootstrapping is reporting yesterday's weather as tomorrow's weather - just like Steve Martin in LA Story. If tomorrow's weather turns out not to be like yesterday's weather, you're gonna get wet.

MCS can forecast tomorrow's weather by assigning PDFs to future activities that are different from past activities; then we can make any needed changes in that future model to alter the weather to meet our needs. This is in fact how weather forecasts are made - with much more sophisticated models, of course - here at the National Center for Atmospheric Research in Boulder, CO.

This forecasting (estimating the future state) of possible outcomes, and the alteration of those outcomes through management actions to change dependencies, add or remove resources, provide alternatives to the plan (on-ramps and off-ramps of technology, for example), buy down risk, apply management reserve, and assess the impacts of rescoping the project, is what project management is all about.

Bootstrapping is necessary but far from sufficient for any non-trivial project to show up on or before the need date (with schedule reserve), at or below the budgeted cost (with cost reserve), and have the product or service provide the needed capabilities (technical performance reserve).

Here's an example of that probabilistic forecast of project performance from a MCS (Risky Project). This picture shows the probability for cost, finish date, and duration. But it is built on time evolving PDFs assigned to each activity in a network of dependent tasks, which models the work stream needed to complete as planned.

When that future work stream is changed to meet new requirements, unfavorable past performance and the needed corrective actions, or changes in any or all of the underlying random variables, the MCS can show us the expected impact on key parameters of the project so management intervention can take place - since Project Management is a verb.


The connection between the Bootstrap and Monte Carlo simulation of a statistic is simple.

Both are based on repetitive sampling and then direct examination of the results.

But there are significant differences between the methods (hence the difference in names and algorithms). Bootstrapping uses the original, initial sample as the population from which to resample. Monte Carlo Simulation uses a data generation process, with known values of the parameters of the Probability Distribution Function. The common algorithm for MCS is Lurie-Goldberg. Monte Carlo is used to test that the results of the estimators produce desired outcomes on the project and, if not, to allow the modeler and her management to change those estimators and then manage to the changed plan.

Bootstrap can be used to estimate the variability of a statistic and the shape of its sampling distribution from past data. Assuming the future is like the past, we can then make forecasts of throughput, completion dates and other project variables.

In the end, the primary difference (and again the reason for the difference in names) is that Bootstrapping is based on unknown distributions; sampling and assessing the shape of the distribution in Bootstrapping adds no value to the outcomes. Monte Carlo is based on known or defined distributions, usually from Reference Classes.

Related articles: Do The Math; Complex, Complexity, Complicated; The Fallacy of the Planning Fallacy
Categories: Project Management

More Material Design with Topeka for Android

Android Developers Blog - Thu, 06/18/2015 - 06:44

Posted by Ben Weiss, Developer Programs Engineer

Material design is a new system for visual, interaction and motion design. We originally launched the Topeka web app as an Open Source example of material design on the web.

Today, we’re publishing a new material design example: the Android version of Topeka. It demonstrates that the same branding and material design principles can be used to create a consistent experience across platforms. Grab the code today on GitHub.

The juicy bits

While the project demonstrates a lot of different aspects of material design, let’s take a quick look at some of the most interesting bits.

Transitions

Topeka for Android features several possibilities for implementing transitions. For starters, the Transitions API within ActivityOptions provides an easy yet effective way to make great transitions between Activities.

To achieve this, we register the shared string in a resources file like this:

<resources>
    <string name="transition_avatar">AvatarTransition</string>
</resources>

Then we use it within the source’s and the target’s views as the transitionName:

<ImageView
    android:id="@+id/avatar"
    android:layout_width="@dimen/avatar_size"
    android:layout_height="@dimen/avatar_size"
    android:layout_marginEnd="@dimen/keyline_16"
    android:transitionName="@string/transition_avatar"/>

And then make the actual transition happen within SignInFragment.

private void performSignInWithTransition(View v) {
    Activity activity = getActivity();
    ActivityOptions activityOptions = ActivityOptions
            .makeSceneTransitionAnimation(activity, v,
                    activity.getString(R.string.transition_avatar));
    CategorySelectionActivity.start(activity, mPlayer, activityOptions);
    activity.finishAfterTransition();
}

For multiple transition participants with ActivityOptions you can take a look at the CategorySelectionFragment.

Animations

When it comes to more complex animations you can orchestrate your own animations as we did for scoring.

To get this right it is important to make sure all elements are carefully choreographed. The AbsQuizView class performs a handful of carefully crafted animations when a question has been answered:

The animation starts with a color change for the floating action button, depending on the provided answer. After this has finished, the button shrinks out of view with a scale animation. The view holding the question itself also moves offscreen. We scale this view to a small green square before sliding it up behind the app bar. During the scaling the foreground of the view changes color to match the color of the fab that just disappeared. This establishes continuity across the various quiz question states.

All this takes place in less than a second’s time. We introduced a number of minor pauses (start delays) to keep the animation from being too overwhelming, while ensuring it’s still fast.

The code responsible for this exists within AbsQuizView’s performScoreAnimation method.

FAB placement

The recently announced Floating Action Button is great for executing promoted actions. In the case of Topeka, we use it to submit an answer. The FAB also straddles two surfaces of variable heights, like this:

To achieve this we query the height of the top view (R.id.question_view) and then set padding on the FloatingActionButton once the view hierarchy has been laid out:

private void addFloatingActionButton() {
    final int fabSize = getResources().getDimensionPixelSize(R.dimen.fab_size);
    int bottomOfQuestionView = findViewById(R.id.question_view).getBottom();
    final LayoutParams fabLayoutParams = new LayoutParams(fabSize, fabSize,
            Gravity.END | Gravity.TOP);
    final int fabPadding = getResources().getDimensionPixelSize(R.dimen.padding_fab);
    final int halfAFab = fabSize / 2;
    fabLayoutParams.setMargins(0, // left
        bottomOfQuestionView - halfAFab, //top
        0, // right
        fabPadding); // bottom
    addView(mSubmitAnswer, fabLayoutParams);
}

To make sure that this only happens after the initial layout, we use an OnLayoutChangeListener in the AbsQuizView’s constructor:

addOnLayoutChangeListener(new OnLayoutChangeListener() {
    @Override
    public void onLayoutChange(View v, int l, int t, int r, int b,
            int oldLeft, int oldTop, int oldRight, int oldBottom) {
        removeOnLayoutChangeListener(this);
        addFloatingActionButton();
    }
});
Round OutlineProvider

Creating circular masks on API 21 onward is now really simple. Just extend the ViewOutlineProvider class and override the getOutline() method like this:

@Override
public final void getOutline(View view, Outline outline) {
    final int size = view.getResources().
        getDimensionPixelSize(R.id.view_size);
    outline.setOval(0, 0, size, size);
}

and setClipToOutline(true) on the target view in order to get the right shadow shape.

Check out more details within the outlineprovider package within Topeka for Android.

Vector Drawables

We use vector drawables to display icons in several places throughout the app. You might be aware of our collection of Material Design Icons on GitHub, which contains about 750 icons for you to use. The best thing for Android developers: as of Lollipop you can use these VectorDrawables within your apps so they will look crisp no matter the density of the device’s screen. For example, the back arrow ic_arrow_back from the icons repository has been adapted to Android’s vector drawable format.

<vector xmlns:android="http://schemas.android.com/apk/res/android"
    android:width="24dp"
    android:height="24dp"
    android:viewportWidth="48"
    android:viewportHeight="48">
    <path
        android:pathData="M40 22H15.66l11.17-11.17L24 8 8 24l16 16 2.83-2.83L15.66 26H40v-4z"
        android:fillColor="?android:attr/textColorPrimary" />
</vector>

The vector drawable only has to be stored once within the res/drawable folder. This means less disk space is being used for drawable assets.

Property Animations

Did you know that you can easily animate any property of a View beyond the standard transformations offered by the ViewPropertyAnimator class (and its handy View#animate syntax)? For example, in AbsQuizView we define a property for animating the view’s foreground color.

// Property for animating the foreground
public static final Property<FrameLayout, Integer> FOREGROUND_COLOR =
        new IntProperty<FrameLayout>("foregroundColor") {

            @Override
            public void setValue(FrameLayout layout, int value) {
                if (layout.getForeground() instanceof ColorDrawable) {
                    ((ColorDrawable) layout.getForeground()).setColor(value);
                } else {
                    layout.setForeground(new ColorDrawable(value));
                }
            }

            @Override
            public Integer get(FrameLayout layout) {
                return ((ColorDrawable) layout.getForeground()).getColor();
            }
        };

This can later be used to animate changes to said foreground color from one value to another like this:

final ObjectAnimator foregroundAnimator = ObjectAnimator
        .ofArgb(this, FOREGROUND_COLOR, Color.WHITE, backgroundColor);

This is not particularly new, as it has been added with API 12, but still can come in quite handy when you want to animate color changes in an easy fashion.

Tests

In addition to exemplifying material design components, Topeka for Android also features a set of unit and instrumentation tests that utilize the new testing APIs, namely “Gradle Unit Test Support” and the “Android Testing Support Library.” The implemented tests make the app resilient against changes to the data model. This catches breakages early, gives you more confidence in your code and allows for easy refactoring. Take a look at the androidTest and test folders for more details on how these tests are implemented within Topeka. For a deeper dive into Testing on Android, start reading about the Testing Tools.

What’s next?

With Topeka for Android, you can see how material design lets you create a more consistent experience across Android and the web. The project also highlights some of the best material design features of the Android 5.0 SDK and the new Android Design Library.

While the project currently only supports API 21+, there’s already a feature request open to support earlier versions, using tools like AppCompat and the new Android Design Support Library.

Have a look at the project and let us know in the project issue tracker if you’d like to contribute, or on Google+ or Twitter if you have questions.

Join the discussion on

+Android Developers
Categories: Programming

Introducing new Android training programs with Udacity

Android Developers Blog - Wed, 06/17/2015 - 18:51

Posted by Peter Lubbers, Senior Program Manager, Google Developer Training

We know how important it is for you to efficiently develop the skills to build better Android apps and be successful in your jobs. To meet your training needs, we’ve partnered with Udacity to create Android training courses, ranging from beginner to more advanced content.

Last week at Google I/O we announced the Android Nanodegree, an education credential that is designed for busy people to learn new skills and advance their careers in a short amount of time from anywhere at any time. The nanodegree ties together our Android courses, and provides you with a certificate that may help you be a more marketable Android developer.

Training courses

All training courses are developed and taught by expert Google instructors from the Developer Platform team. In addition to updating our popular Developing Android Apps course and releasing Advanced Android App Development, we now have courses for everyone from beginning programmers to advanced developers who want to configure their Gradle build settings. And then there's all the fun stuff in between—designing great-looking, high performance apps, making your apps run on watches, TVs, and in cars, and using Google services like Maps, Ads, Analytics, and Fit.

Each course is available individually, without charge, at udacity.com/google. Our instructors are waiting for you:

Android Nanodegree

You can also enroll in the new Android Nanodegree for a monthly subscription fee, which gives you access to coaches who will review your code, provide guidance on your project, answer questions about the class, and help keep you on track when you need it.

More importantly, you will learn by doing, focusing only on where you need to grow. Since the Nanodegree is based on your skills and the projects in your portfolio, you do not need to complete the courses that address the skills you already have. You can focus on writing the code and building the projects that meet the requirements for the Nanodegree credential. We’ll also be inviting 50 Android Nanodegree graduates to Google's headquarters in Mountain View, California, for a three-day intensive Android Career Summit in November. Participants will have the opportunity to experience Google’s company culture and attend workshops focused on developing their personal career paths. Participants will then leverage the skills learned from Udacity’s Android Nanodegree during a two-day hackathon.

To help you learn more about this program and the courses within it, Google and Udacity are partnering up for an "Ask the Experts" live-streamed series. In the first episode, on Wednesday, June 3rd at 2pm PDT, join Sebastian Thrun, Peter Lubbers and Jocelyn Becker, who will be answering your questions on the Nanodegree. RSVP here, and ask and vote for questions here.

Android training in Arabic

We also believe that everyone has the right to learn how to develop Android apps. Today, there is a great need for developers in countries outside of the United States as software powers every industry from food and transportation to healthcare and retail. As a first step in getting the Android Nanodegree localized and targeted for individual countries, we have worked with the Government of Egypt and Udacity to create end-to-end translations of our top Android courses into Arabic (including fully dubbed video). Google will offer 2,000 scholarships to students to get a certificate for completing the Arabic version of the Android Fundamentals course. Google will also host job fairs and sessions for students with local employers and the Egyptian Government. For more information, see www.udacity.com/egypt.

Complete Android course catalog

Here are the currently planned courses in the Android Nanodegree:

So get learning now at udacity.com/android.

Join the discussion on +Android Developers
Categories: Programming

Coding: Explore and retreat

Mark Needham - Wed, 06/17/2015 - 18:23

When refactoring code or looking for the best way to integrate a new piece of functionality, I generally favour a small-steps/incremental approach, but recent experiences have led me to believe that this isn’t always the quickest approach.

Sometimes it seems to make more sense to go on little discovery missions in the code, take some bigger steps, and then, if necessary, retreat, revert our changes and apply the lessons learnt on the next discovery mission. The technique isn’t anything novel, but I think it’s quite effective.

Michael and I were recently looking at the Smart Local Moving algorithm, which is used for community detection in large networks, and decided to refactor the code to make sure we understood how it worked. When we started, the outline of the main class looked like this:

import java.io.Serializable;

public class Network implements Cloneable, Serializable
{
    private static final long serialVersionUID = 1;

    // Graph structure as flat arrays: firstNeighborIndex[i] is the offset
    // into neighbor (and edgeWeight) at which node i's relationships begin.
    private int numberOfNodes;
    private int[] firstNeighborIndex;
    private int[] neighbor;
    private double[] edgeWeight;
    private double totalEdgeWeightSelfLinks;
    private double[] nodeWeight;

    // Clustering state: the cluster assigned to each node.
    private int nClusters;
    private int[] cluster;

    // Cached per-cluster statistics, derived from the fields above.
    private double[] clusterWeight;
    private int[] numberNodesPerCluster;
    private int[][] nodePerCluster;
    private boolean clusteringStatsAvailable;
...
}

My initial approach was to put methods around things to make the code a bit easier to understand and then, step by step, replace each of those fields with nodes and relationships. I spent the first couple of hours doing this, and while it was making the code more readable, it wasn’t progressing very quickly and I wasn’t much wiser about how the code worked.

Michael and I paired on it for a few hours and he adopted a slightly different, more successful approach: we looked at bigger chunks of code, e.g. all the loops that used the firstNeighborIndex field, and then formed a hypothesis of what that code was doing.

In this case firstNeighborIndex acts as an offset into neighbor and is used to iterate through a node’s relationships. We thought we could probably replace that with something more similar to the Neo4j model where you have classes for nodes and relationships and a node has a method which returns a collection of relationships.
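To make that layout concrete, here's a tiny self-contained sketch (my illustration, not the project's code): node i's relationships occupy positions firstNeighborIndex[i] up to, but not including, firstNeighborIndex[i + 1] in the neighbor and edgeWeight arrays.

// A hypothetical sketch of the offset-based adjacency layout described
// above; the class name and data are invented for illustration.
public class AdjacencySketch
{
    public static void main(String[] args)
    {
        // A tiny graph: node 0 links to nodes 1 and 2; nodes 1 and 2 link back.
        int[] firstNeighborIndex = {0, 2, 3, 4}; // length = numberOfNodes + 1
        int[] neighbor = {1, 2, 0, 0};
        double[] edgeWeight = {1.0, 0.5, 1.0, 0.5};

        for (int node = 0; node < 3; node++)
        {
            // A node's relationships sit between consecutive offsets.
            for (int j = firstNeighborIndex[node]; j < firstNeighborIndex[node + 1]; j++)
            {
                System.out.printf("%d -> %d (weight %.2f)%n", node, neighbor[j], edgeWeight[j]);
            }
        }
    }
}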

We tried ripping out every place that used those two fields and replacing them with our new nodes/relationships code, but that didn’t work because we hadn’t realised that edgeWeight and nodeWeight are also tied to the contents of the original fields.

We therefore needed to retreat and try again. This time I put the new approach alongside the existing approach and then slowly replaced existing bits of code.
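A sketch of what "alongside" can look like (the class names here are invented for illustration): the new Neo4j-style node/relationship API is initially backed by the old arrays, so call sites can migrate one at a time.

import java.util.ArrayList;
import java.util.List;

// Hypothetical wrappers over the existing arrays, letting the old and new
// representations coexist while call sites migrate incrementally.
class Relationship
{
    final int target;
    final double weight;

    Relationship(int target, double weight)
    {
        this.target = target;
        this.weight = weight;
    }
}

class Node
{
    private final int id;
    private final int[] firstNeighborIndex;
    private final int[] neighbor;
    private final double[] edgeWeight;

    Node(int id, int[] firstNeighborIndex, int[] neighbor, double[] edgeWeight)
    {
        this.id = id;
        this.firstNeighborIndex = firstNeighborIndex;
        this.neighbor = neighbor;
        this.edgeWeight = edgeWeight;
    }

    // The new API, backed by the old array layout for now.
    List<Relationship> relationships()
    {
        List<Relationship> result = new ArrayList<>();
        for (int j = firstNeighborIndex[id]; j < firstNeighborIndex[id + 1]; j++)
        {
            result.add(new Relationship(neighbor[j], edgeWeight[j]));
        }
        return result;
    }
}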

Along the way I came up with other ideas about how to restructure the code, tried a few more big leaps to validate those ideas and then moved back into incremental mode again.

In summary, I’ve found that the combination of incrementally changing code and going on bigger exploratory missions works quite well.

Now I’m trying to work out when each approach is appropriate, and I’ll write that up when I learn more! You can see my progress via the GitHub commits.

Categories: Programming

Episode 229: Flavio Junqueira on Distributed Coordination with Apache ZooKeeper

Flavio Junqueira is the author of ZooKeeper: Distributed Process Coordination. Flavio and Jeff Meyerson begin by defining ZooKeeper and discussing what it is and isn’t. ZooKeeper can be thought of as a patch against certain fallacies of distributed computing: that the network is secure, has zero latency, has infinite bandwidth, and so on. With […]
Categories: Programming

The Lifecycle Model Applied to Common Methodologies - New Lecture Posted

10x Software Development - Steve McConnell - Wed, 06/17/2015 - 14:35

I've posted this week's lecture in my Understanding Software Projects series at https://cxlearn.com. Some of the past lectures that have been posted are still free. 

In this week's lecture I explain how methodologies like Scrum, Extreme Programming, Waterfall, and Code & Fix look from the point of view of the Software Lifecycle Model. The lecture starts by going into more detail about how the Lifecycle Model can be abstracted for purposes of describing various methodologies and then describes specific methodologies. I conclude with some comments about how specific practices (pair programming, formal inspections, test-first, continuous integration, etc.) fit within the Lifecycle Model.

The point is to show how familiar methodologies can be abstracted into a general-purpose model. Performing that abstraction helps separate the substance from the hype: it identifies what is genuinely different about a methodology like Scrum or XP and what is just marketing fluff.

Lectures posted so far include:  

0.0 Understanding Software Projects - Intro
     0.1 Introduction - My Background
     0.2 Reading the News
     0.3 Definitions and Notations 

1.0 The Software Lifecycle Model - Intro
     1.1 Variations in Iteration 
     1.2 Lifecycle Model - Defect Removal
     1.3 Lifecycle Model Applied to Common Methodologies (New this Week)

2.0 Software Size
     2.05 Size - Comments on Lines of Code
     2.1 Size - Staff Sizes 
     2.2 Size - Schedule Basics 

Check out the lectures at http://cxlearn.com!

Understanding Software Projects - Steve McConnell

 

Diff'ing software architecture diagrams

Coding the Architecture - Simon Brown - Wed, 06/17/2015 - 14:02

Robert Annett wrote a post titled Diagrams for System Evolution where he describes a simple approach to visually describing changes to a software architecture. In essence, in order to show how a system is to change, he'll draw different versions of the same diagram and use colour-coding to highlight the elements/relationships that will be added, removed or modified.

I've typically used a similar approach for describing as-is and to-be architectures in the past too. It's a technique that works well. Although you can version control diagrams, it's still tricky to diff them using a tool. One solution that addresses this problem is to not create diagrams at all, but instead create a textual description of your software architecture model that is then rendered with some tooling. You could do this with an architecture description language (such as Darwin), although I would much rather use my regular programming language instead.

Creating a software architecture model as code

This is exactly what Structurizr is designed to do. I've recreated Robert's diagrams with Structurizr as follows.

Before

[Diagram: the "before" version of the architecture, rendered from the Structurizr model]

And since the diagrams were created by a model described as Java code, that description can be diff'ed using your regular toolchain.
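As a rough illustration of what such a model-as-code description might look like, here is a minimal sketch based on the Structurizr for Java getting-started examples; the system and relationship names are my own. A change to the architecture is then just a change to the source.

import com.structurizr.Workspace;
import com.structurizr.model.Model;
import com.structurizr.model.Person;
import com.structurizr.model.SoftwareSystem;

public class ArchitectureModel
{
    public static void main(String[] args)
    {
        // The workspace holds the model; diagrams are rendered from it later.
        Workspace workspace = new Workspace("My System", "Before/after example");
        Model model = workspace.getModel();

        Person user = model.addPerson("User", "A user of the system.");
        SoftwareSystem system = model.addSoftwareSystem("My System", "Does the work.");

        // Adding, removing or renaming a relationship is a one-line source
        // change, so the architectural diff is just a source diff.
        user.uses(system, "Uses");
    }
}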

[Screenshot: the diff between the two model versions]

Code provides opportunities

This perhaps isn't as obvious as Robert's visual approach, and I would likely still highlight the actual differences on diagrams using notation, as Robert did. Creating a textual description of a software architecture model does provide some interesting opportunities though.

Categories: Architecture

Legacy Hiring

Phil Trelford's Array - Wed, 06/17/2015 - 12:40

Over the years I’ve been involved in a lot of developer hiring. Here I’d like to share a pearl of wisdom I gleaned from a colleague I shared the hiring process with.

Their angle was to give extra points to people with experience of both building AND maintaining systems, i.e. those who stuck around long enough to understand the consequences of their own actions.

I found this an interesting insight, the logic being that you learn the longer-term impact of your design decisions by hanging around to fix the faults and add new features.

But what does staying the distance actually teach you? I think the only real way is to find out for yourself, but here are a few things that spring to mind.

Prefer simple

Simple isn’t easy. There’s an old saying that goes: "I didn't have time to write a short letter, so I wrote a long one instead".

The same goes for code: finding the underlying simplicity can take significantly longer than just bashing out the first thing that comes to mind. However, the long-term payoffs for system maintenance can be huge.

Tests

Tests can be a good thing, and test-driven development (TDD) can provide a helpful suite of regression tests to move forward with. However, if the end result is tightly-coupled tests, then the pattern might be better coined Test-Driven Concrete (TDC): instead of making change easier, the product becomes almost impossible to maintain (thanks to Ian Johnson for the metaphor).

Instead of testing at the class and method level, you should be testing required behaviour, as in the sketch below.
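As a sketch of the difference (my example, not Phil's): a behaviour-level test pins down an outcome users care about and survives internal refactoring, whereas a test welded to a particular class structure does not.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PricingBehaviourTest
{
    // Behaviour-level: asserts the outcome (bulk orders get a discount)
    // rather than poking at internal classes and methods, so the internals
    // can be restructured freely without breaking the test.
    @Test
    public void bulkOrdersGetATenPercentDiscount()
    {
        Pricing pricing = new Pricing();
        assertEquals(90.0, pricing.totalFor(10, 10.0), 0.001);
    }
}

// Minimal class under test, invented for this sketch.
class Pricing
{
    double totalFor(int quantity, double unitPrice)
    {
        double total = quantity * unitPrice;
        return quantity >= 10 ? total * 0.9 : total; // 10% bulk discount
    }
}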

Summary

If you’re embarking on a big shiny new project, it might make sense to have some developers around who’ve been through the mill and come out the other side, and can help guide you to a more maintainable future.

Do you have any tips to share?

Categories: Programming

Trust, Accountability, and Where Does the Time Go?

As more of my clients transition to agile, many of them have a fascinating question:

How do I assess who is doing what on my team?

When I ask why they want to know, they say it’s all related to reviews, rewards, and general compensation. They are still discussing individual compensation, not team compensation.

When I ask why they want to reward individuals instead of the team, they say, “I am sure some people do more work than others. I want to reward them, and not the other people.”

Interesting idea. And, wrong for agile teams. Also wrong for any innovation or learning that you want to happen as a team (regardless of whether you are agile or not).

Agile is a team-based approach to work. Why would you want to reward some people more than others? If the team is not sure that they are working well together, they need to learn to provide each other feedback. If the team doesn’t know how to manage team membership, a manager can facilitate that membership discussion and problem-solving. Then, the managers can transition team membership issues to the team, with the manager as backup/facilitator.

What I see is that the managers want to control team membership. Instead, why not let the team control its membership?

I often see that the managers want to control feedback: who provides it and who receives it. Instead, why not train everyone in how to provide and receive feedback?

When managers want to reward some people more than others, they imply that some people are less capable than others—something agile is supposed to fix with teamwork. Or, they wonder if some people are wasting time.

If managers trust their teams, managers don’t need to worry about accountability. They don’t have to worry about people “wasting time.” Agile creates transparency, which helps people to learn to trust each other and know when they are working on relevant work or not. If you encourage the team to add pairing or swarming (or both), you have the recipe for whole-team work and team momentum.

I don’t know anyone who goes to work thinking, “How can I waste my time today?” Do you? (If you do, why is that person on your team?)

I know plenty of people who do waste their time, because of technical debt, or experts creating bottlenecks, or team members who don’t want to work as part of a team. I bet you know many of these people, too.

But I don’t know anyone who wants to go to work to waste time and collect a paycheck without doing anything. Sure, there might be some people like that. I don’t know any.

In an agile team, the team members know who works hard and who doesn’t. A manager could trust the team to handle their compensation.

And, that would mean the manager has to relinquish control of much of what managers have done recently.

  • The manager would have to provide feedback and meta-feedback to people who want to learn how to provide and receive feedback.
  • The manager might provide coaching as to how to work in a team effectively. (This can be tough if a manager has never been part of a team that worked well.)
  • The manager would allow the team to control their compensation.

There’s more, and I’ll stop there.

What would managers do? They would stop interfering with the team’s progress. The manager might read “If Managers Don’t Give Performance Reviews, What Happens?”

Managers control much less in agile. Managers enable more. They have to trust the teams. This is a culture change.

If you can’t trust your agile team or its team members, there is something wrong. You can investigate and either fix (manage the project portfolio) or ask the team to fix it (maybe with retrospectives as a first step). If you need to know who is accountable for what, you are asking the wrong question. The answer might be you.

If you are managing an agile team and you want to know about individual work or accountability, ask yourself what you really need. Ask yourself if there is another way to obtain the results you want. Maybe you won’t waste quite as much time.

Categories: Project Management
