
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Management, Humanity and Expectations

There’s a Twitter discussion about what people “should” do in certain situations. One of the participants believes that people “should” want to learn on their own time and work more than 40 hours per week. I believe in learning. I don’t believe in expecting people to work more than 40 hours a week. My experience is that when you ask people to work more than 40 hours, they get stupid. See Management Myth 15: I Need People to Work Overtime. If you want people to learn, read Management Myth #9: We Have No Time for Training.

One participant also said that people should leave their emotional baggage (my word) at home. Work supposedly isn’t for emotions. Well, I don’t understand how we can have people who work without their emotions. Emotions are how we explain how we feel about things. I want people to advocate for what they feel is useful and good. I want to know when they feel something is bad and damaging. I want that, as a manager. See Management Myth #4: I Don’t Need One-on-Ones.

People are emotional. Let’s assume they are adults and can harness their emotions. If not, we can provide feedback about the situation. But, ignoring their emotions? That never works. It’s incongruent and can make the situation worse.

I have a problem with “shoulds” for other people. I cannot know what is going on in other people’s lives. Nor do I want to know all the details as a manager. I need to know enough to use my judgement as a manager to help the people and teams proceed.

When managers build trust with people, those people can share with their manager, and maybe with their team, what is relevant about how they can perform at work. If they have a personal situation that requires time off, depending on the team, the person might have to talk to the team before the manager. (I know some agile teams like this.) The team might manage the situation without management help or interference.

If you are in a leadership position, don’t impose your “shoulds” on other people. You cannot know what is happening in other people’s lives.

You can ask for the results you want. You want people to learn more? Provide time during the week for everyone to learn together. You want people to work through a personal crisis? Provide support.

Don’t expect automatons at work. Expect humans and you’ll get way more than you could imagine.

Categories: Project Management

Software architecture workshops in Australia during August

Coding the Architecture - Simon Brown - Mon, 06/22/2015 - 08:02

This is a quick update on my upcoming trip to Australia ... in conjunction with the lovely folks behind the YOW! conference, we've scheduled two public software architecture sketching workshops.

This workshop addresses one of the major problems I've witnessed during my career in software development; namely, that people find it really hard to describe, communicate and visualise the design of their software. Sure, we have UML, but very few people use it, and most instead resort to an ad hoc "boxes and lines" notation. This is fine, but the resulting diagrams (as illustrated below) need to make sense, and in my experience they rarely do.

Some typical software architecture sketches

My workshop explores this problem from a number of different perspectives, not least by giving you an opportunity to practice these skills yourself. My Voxxed article titled Simple Sketches for Diagramming Your Software Architecture provides an introduction to this topic and the C4 model that I use. I've been running this workshop in various formats for nearly 10 years now and the techniques we'll cover have proven invaluable for software development teams around the world in the following situations:

  • Doing (just enough) up front design on whiteboards.
  • Communicating software design retrospectively before architecture refactoring.
  • Explaining how the software works when somebody new joins the team.
  • Documenting/recording the software architecture in something like a software architecture document or software guidebook.
"Really surprising training! I expected some typical spoken training about someones revolutionary method and I found a hands-on training that used the "do-yourself-and-fail" approach to learn the lesson, and taught us a really valuable, reasonable and simple method as an approach to start the architecture of a system. Highly recommended!" (an attendee from the software architecture sketching workshop at GOTO Amsterdam 2014)

Attendees will also receive a copy of my Software Architecture for Developers ebook and a year's subscription to Structurizr. Please do feel free to e-mail me with any questions. See you in Australia!

Categories: Architecture

Climbing Mountains Requires Good Estimates

Herding Cats - Glen Alleman - Mon, 06/22/2015 - 06:01

There was an interesting post on the #NoEstimates thread that triggered memories of our hiking and climbing days with our children (now grown and gone) and our neighbor who has summited many of the highest peaks around the world.

The quote was: Getting better at estimates is like using time to plan the Everest climb instead of climbing smaller mountains for practice.

A couple of background ideas:

  • The picture above is Longs Peak. We can see Longs Peak from our back deck in Niwot, Colorado. It's one of the 53 14,000-foot mountains in Colorado - the Fourteeners. Longs is one of 4 along the Front Range.

  • In our neighborhood are several semi-pro mountain climbers. People move to Colorado for the outdoor life: skiing, mountain and road biking, hiking, and climbing.

Now to the tweet suggesting that getting better at estimating can be replaced by doing (climbing) smaller projects. It turns out estimates are needed for those smaller mountains too - estimates are needed for all hiking and climbing. But first...

  • No one is going to climb Everest - and live to tell about it - without first having summited many other high peaks.
  • Anyone interested in the trials and tribulations of Everest should start with Jon Krakauer's Into Thin Air: A Personal Account of the Mt. Everest Disaster.
  • Before attempting - and attempting is the operative word here - any significant peak, several things have to be in place.

Let's start with those Things.

No matter how prepared you are, you need a plan. Practice on lower peaks is necessary but far from sufficient for success. Each summit requires planning in depth. For Longs Peak you need a Plan A, a Plan B, and possibly a Plan C. Most of all, you need strong estimating skills and the accompanying experience to determine when to invoke each plan. People die on Longs because they foolishly think they can beat the odds and press on when they should have invoked Plan B.

So anyone suggesting they can summit something big, like any of the Seven Summits, without both deep experience and deep planning is likely never going to be heard from again.

So the OP is likely speaking from not having summited much of anything - hard to tell, since no experience resume was attached.

The estimating part is basic. Can we make it to the Keyhole on Longs Peak before the afternoon storms come in? On Everest, can we make it to the Hillary Step before 1:00 PM? No? Turn back - you're gonna die if you continue.

Can we make it to the delivery date at the pace we're on now, AND with the emerging situation for the remaining work, AND at the cost we're trying to hold, AND with the capabilities the customer needs? Remember, the use of past performance is fine If and Only If the future is something like the past, or we know something about how the future is different from the past.

When is the future not like the past? Then we need a Plan B. And that plan has to have estimates of our future capabilities, our cost expenditure rate, and our ability to produce the needed capabilities.

ALL PLANNING IN THE PRESENCE OF UNCERTAINTY REQUIRES - MANDATES ACTUALLY - ESTIMATING. 

Ask any hiker, climber, development manager, or business person. Time to stop managing by platitudes and start managing by the principles of good management.

Related articles: There is No Such Thing as Free | The Fallacy of the Planning Fallacy | Systems Thinking, System Engineering, and Systems Management | Myth's Abound | Eyes Wide Shut - A View of No Estimates
Categories: Project Management

The Art of Systems Architecting

Herding Cats - Glen Alleman - Sun, 06/21/2015 - 23:36

The Art of Systems Architecting† is a book that changed the way I look at the development of software-intensive systems. As a manager of software in the system-of-systems domain, this book created a clear and concise vision of how to assemble all the pieces of the system into a single cohesive framework.

One of the 12 principles of the Agile Manifesto is: The best architectures, requirements, and designs emerge from self-organizing teams. The self-organizing team part is certainly good. But good architectures don't emerge, unless it's the Ball of Mud architecture. Good architecture is a combination of science, engineering and art. Hence the title of the book.

Systems architecting borrows from other architectures, but the basic attributes are the same: † 

  • The architect is principally the agent of the client, not the builder. The architect must act in the best interests of the client.
  • The architect works jointly with the client and the builder on the problem and the definition of the solution. Systems requirements - in the form of needed capabilities, their Measures of Effectiveness (MOE), Measures of Performance (MOP), Key Performance Parameters (KPP), and Technical Performance Measures (TPM) - are an input. The client will provide the requirements, but the architect is expected to jointly help the client determine the requirements.
  • The architect's product is an architectural representation. A set of abstracted designs of the system.
  • The product of the architect is not just a physical representation of the system. Cost estimates are part of any feasible set of deliverables as well. Knowing the value of some built item requires that we also know its cost. The system architecture must cover physical structure, system behavior, cost, performance, delivery schedule, and other elements needed to clarify the client's priorities.
  • The initial architecture is a Vision of the future outcome of the work effort. This description is a set of specific models. These include the needed capabilities, the motives for the outcome, beliefs, and unstated assumptions. These distinctions are critical when creating standards for the architecture. TOGAF and DoDAF are examples of architecture standards.

Why Do We Care About This?

When we hear of some new and possibly different approach to anything, we need to ask: what is the paradigm this idea fits into? If it is truly new, what paradigm does it replace? How does that replacement maintain the information from the old paradigm that was needed for success, what parts of the old paradigm are replaced for the better, and how can we be assured that it is actually better?

One answer starts with the architecture of the paradigm. In the case of managing projects, this is the programmatic architecture: the Principles, Practices, and Processes of the Programmatic Architecture.

The Five Immutable Principles of project success are shown below.

5 Immutable Principles

With these Principles in place, we can apply the Five Practices they guide.


With the Principles and Practices in place, Processes can be defined for the specific needs of the domain.


So with the Principles, Practices, and Processes in place, we can now ask:

When it is suggested that a new approach be taken, where does that approach fit in the Principles, Practices, and Processes that are in place now? If there is no place, how does this new suggestion fulfill the needs of the business that are in place? If those needs aren't fulfilled, does the business acknowledge that they are no longer needed?

If not, the chances of this new idea actually being accepted by the business are slim to none.

Related articles: Systems Thinking, System Engineering, and Systems Management | Who's Budget is it Anyway? | Eyes Wide Shut - A View of No Estimates
Categories: Project Management

R: Scraping Neo4j release dates with rvest

Mark Needham - Sun, 06/21/2015 - 23:07

As part of my log analysis I wanted to get the Neo4j release dates which are accessible from the release notes and decided to try out Hadley Wickham’s rvest scraping library which he released at the end of 2014.

rvest is inspired by Python’s BeautifulSoup, which has become my scraping library of choice, so I didn’t find it too difficult to pick up.

To start with we need to download the release notes locally so we don’t have to go over the network when we’re doing our scraping:

download.file("http://neo4j.com/release-notes/page/1", "release-notes.html")
download.file("http://neo4j.com/release-notes/page/2", "release-notes2.html")

We want to parse those pages and return the rows which contain version numbers and release dates.


We can get the rows with the following code:

library(rvest)
library(dplyr)
 
page1 <- html("release-notes.html")
page2 <- html("release-notes2.html")
 
rows = c(page1 %>% html_nodes("div.small-12 div.row"), 
         page2 %>% html_nodes("div.small-12 div.row") ) 
 
> rows %>% head(1)
[[1]]
<div class="row"> <h3 class="entry-title"><a href="http://neo4j.com/release-notes/neo4j-2-2-2/">Latest Release: Neo4j 2.2.2</a></h3> <h6>05/21/2015</h6> <p>Neo4j 2.2.2 is a maintenance release, with critical improvements.</p> <p>Notably, this release:</p> <ul><li>Provides support for running Neo4j on Oracle and OpenJDK Java 8 runtimes</li> <li>Resolves an issue that prevented the Neo4j Browser from loading in the latest Chrome release (43.0.2357.65).</li> <li>Corrects the behavior of the <code>:sysinfo</code> (aka <code>:play sysinfo</code>) browser directive.</li> <li>Improves the <a href="http://neo4j.com/docs/2.2.2/import-tool.html">import tool</a> handling of values containing newlines, and adds support f...</li></ul><a href="http://neo4j.com/release-notes/neo4j-2-2-2/">Read full notes →</a> </div>

Now we need to loop through the rows and pull out just the version and release date. I wrote the following function to do this and strip out any extra text that we’re not interested in:

generate_releases = function(rows) {
  releases = data.frame()
  for(row in rows) {
    version = row %>% html_node("h3.entry-title")
    date = row %>% html_node("h6")  
 
    if(!is.null(version) && !is.null(date)) {
      version = version %>% html_text()
      version = gsub("Latest Release: ", "", version)
      version = gsub("Neo4j ", "", version)
      releases = rbind(releases, data.frame(version = version, date = date %>% html_text()))
    }
  }
  return(releases)
}
 
> generate_releases(rows)
   version       date
1    2.2.2 05/21/2015
2    2.2.1 04/14/2015
3    2.1.8 04/01/2015
4    2.2.0 03/25/2015
5    2.1.7 02/03/2015
6    2.1.6 11/25/2014
7    1.9.9 10/13/2014
8    2.1.5 09/30/2014
9    2.1.4 09/04/2014
10   2.1.3 07/28/2014
11   2.0.4 07/08/2014
12   1.9.8 06/19/2014
13   2.1.2 06/11/2014
14   2.0.3 04/30/2014
15   2.0.1 02/04/2014
16   2.0.2 04/15/2014
17   1.9.7 04/11/2014
18   1.9.6 02/03/2014
19     2.0 12/11/2013
20   1.9.5 11/11/2013
21   1.9.4 09/19/2013
22   1.9.3 08/30/2013
23   1.9.2 07/16/2013
24   1.9.1 06/24/2013
25     1.9 05/13/2013
26   1.8.3         //

Finally I wanted to convert the ‘date’ column into R’s date format and get rid of the 1.8.3 row since it doesn’t contain a date. lubridate is my go-to library for date manipulation in R, so we’ll use that here:

library(lubridate)
 
> generate_releases(rows) %>%  
      mutate(date = mdy(date)) %>%   
      filter(!is.na(date)) 
 
   version       date
1    2.2.2 2015-05-21
2    2.2.1 2015-04-14
3    2.1.8 2015-04-01
4    2.2.0 2015-03-25
5    2.1.7 2015-02-03
6    2.1.6 2014-11-25
7    1.9.9 2014-10-13
8    2.1.5 2014-09-30
9    2.1.4 2014-09-04
10   2.1.3 2014-07-28
11   2.0.4 2014-07-08
12   1.9.8 2014-06-19
13   2.1.2 2014-06-11
14   2.0.3 2014-04-30
15   2.0.1 2014-02-04
16   2.0.2 2014-04-15
17   1.9.7 2014-04-11
18   1.9.6 2014-02-03
19     2.0 2013-12-11
20   1.9.5 2013-11-11
21   1.9.4 2013-09-19
22   1.9.3 2013-08-30
23   1.9.2 2013-07-16
24   1.9.1 2013-06-24
25     1.9 2013-05-13

We could then easily see how many releases there were by year:

releasesByDate = generate_releases(rows) %>%  
  mutate(date = mdy(date)) %>%   
  filter(!is.na(date))
 
> releasesByDate %>% mutate(year = year(date)) %>% count(year)
Source: local data frame [3 x 2]
 
  year  n
1 2013  7
2 2014 13
3 2015  5

Or by month:

> releasesByDate %>% mutate(month = month(date)) %>% count(month)
Source: local data frame [11 x 2]
 
   month n
1      2 3
2      3 1
3      4 5
4      5 2
5      6 3
6      7 3
7      8 1
8      9 3
9     10 1
10    11 2
11    12 1

Prior to this quick bit of hacking I’d always turned to Ruby or Python whenever I wanted to scrape a dataset, but it looks like rvest makes R a decent option for this type of work now. Good times!

Categories: Programming

SPaMCAST 347 – Agile Project Management, Conway’s Law and Microservices, Hardcore Testing

Software Process and Measurement Cast - Sun, 06/21/2015 - 22:00

The Software Process and Measurement Cast includes three columns.  The first is our essay on managing Agile projects and teams. I often say project management is dead. That does not mean that the pressures that drive the need to manage work have gone away. In the end the “what” of project management is important because control, discipline and coordination are needed tools to deliver value; the journey toward Agile is the reframing of the “how” of project management.

This week Gene Hughson returns with an entry from his Form Follows Function column.  Gene tackles the topic of whether the application of Conway’s Law makes microservices more of an organizational approach than an architecture.   After listening, check out Gene’s Form Follows Function blog!

The third column in this SPaMCAST magazine is from the Software Sensei, Kim Pries.  Kim tackles hardcore testing.  Kim discusses the implications and uses of this aggressive type of testing in hardware, software and wetware.

A great line up!

Call to action!

Reviews of the Podcast help to attract new listeners.  Can you write a review of the Software Process and Measurement Cast and post it on the podcatcher of your choice?  Whether you listen on iTunes or any other podcatcher, a review will help to grow the podcast!  Thank you in advance!

Re-Read Saturday News

We just completed the Re-Read Saturday of Eliyahu M. Goldratt and Jeff Cox’s The Goal: A Process of Ongoing Improvement, which began on February 21st. What did you think?  Did the re-read cause you to pick The Goal back up for a refresher? Visit the Software Process and Measurement Blog and review the whole re-read.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast.

Dead Tree Version or Kindle Version 

Next week we will begin re-reading The Mythical Man-Month. Get a copy now and start reading!

Upcoming Events

Software Quality and Test Management 

September 13 – 18, 2015

San Diego, California

http://qualitymanagementconference.com/

I will be speaking on the impact of cognitive biases on teams!  Let me know if you are attending!

 

More on other great conferences next week.

 

Next SPaMCast

The next Software Process and Measurement Cast will feature our interview with Woody Zuill.  Some people might think “there is no Woody, only Zuul” (apologies to Ghostbusters) when it comes to topics like #NoEstimates.  However, as Woody points out, it is important to peer past the “thou musts” to gain a greater understanding of what you should be doing!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Categories: Process Management

NDC Oslo 2015 Experience Report

Phil Trelford's Array - Sun, 06/21/2015 - 21:54

Last week my son Sean and I popped over to Norway for NDC Oslo, billed as one of the world’s largest independent software conferences.

Venue

The venue, the Oslo Spektrum in the centre of the city, was huge, with a large lower floor for vendors and refreshments surrounded by 9 rooms for talks. There were nearly 2000 delegates and nearly 200 speakers at the event, enjoying a veritable binge festival of hour-long talks, and even a room where you could watch all the talks simultaneously on a series of big screens.

Invitation

My 8yo Sean had been invited to do a lightning talk, and according to one NDC conference organizer they were then stuck with inviting me along too - apparently something they wouldn't have done otherwise, or so he felt compelled to tell me on our first meeting. The other organizers seemed happier to see me, or perhaps were more able to exercise tact. Regardless, it was great to meet up with many friends old and new and from far and wide.

The #fsharp rat pack at #ndcoslo last night! pic.twitter.com/xTEWkrtzh5

— John Azariah (@johnazariah) June 19, 2015

Travel

NDC were kind enough to arrange flights and accommodation for me, but I had to pay for Sean's travel myself. The hotel was close to the venue; unfortunately, when we arrived late on Wednesday night it was over-booked and the room we were initially sent to was already occupied - the phrase what is seen cannot be unseen is probably quite fitting. Thankfully we were eventually rehoused in another hotel nearby. The next day we were asked to pay 4500Kr (450GBP) to remain in the hotel. After some assistance from organizer Charlotte Lyng, we were relieved to be given an unoccupied room at the original hotel for the following nights.

Lightning talk

Sean’s lightning talk was after lunch on the Thursday; the room was standing room only, with over 100 attending, and he had a great reception. He started with a short rendition of Pink Floyd’s Wish You Were Here while I set up the laptop (thanks to Carl Franklin of .NET Rocks for the loan of the guitar), followed by the main content: a live coding session on Composing 3D Objects in OpenGL with F#.

There was a lot of reaction on Twitter; here are a few of the many tweets:

Sean warming up for his lightning talk in Room 6 at 13:40 #ndcoslo #fsharp #opengl #webgl pic.twitter.com/uomoGV9vS6

— Sean's dad (@ptrelford) June 18, 2015

Full house for Sean Trelford 8yo at #ndcoslo pic.twitter.com/rUiywM38mG

— Jakob Bradford (@jakbradf) June 18, 2015

Sean presenting 3D rendering in #fsharp for lightning talks. He's the youngest speaker at #ndcodlo pic.twitter.com/oad4mcNtqM

— Jérémie Chassaing (@thinkb4coding) June 18, 2015

Watching 8-year old Sean do a lightning talk about composing 3D objects in OpenGL with F# #ndcoslo pic.twitter.com/hNTbJyqV78

— Andreia Gaita (@sh4na) June 18, 2015

This 8 y/o kid wins #ndcoslo - talking on F# and OpenGL. Remember this those of you looking for the courage to speak. pic.twitter.com/VXwXgSB351

— Troy Hunt (@troyhunt) June 18, 2015

Thanks for all the supportive comments, this was only Sean's third public speaking engagement and he really enjoyed speaking at and attending the conference.

You can try composing your own 3D scenes in the browser with F# and WebGL at Fun3D.net


F# for C# developers

My talk was an introduction to the F# programming language from a C# developer's perspective, and it also seemed to be well received, although in terms of Twitter I was rightly eclipsed by my son Sean's efforts. In fact, from now on I think I might be more commonly known as Sean's dad.

Solid content from @ptrelford at #ndcoslo. Live-code ports of C# to F# If you missed catch on video #fsharp pic.twitter.com/7UjeoNdqql

— Bryan Hunter (@bryan_hunter) June 19, 2015

Dot-driven-development with #fsharp. @ptrelford exploring Premiere League table with #html type provider #ndcoslo pic.twitter.com/07y47rU9Lx

— Tomas Petricek (@tomaspetricek) June 19, 2015

I also appear to have made the NDC Oslo speaker leaderboard, in the 100-200 audience category, thanks for all the green votes! :)

Look closely: @elixirlang talks are heading up the leaderboards with 100% audience approval at #ndcoslo pic.twitter.com/mqZMgqi6j1

— Chris McCord (@chris_mccord) June 19, 2015

Other talks

We saw some fantastic talks, including Phil Nash on his C++ unit testing framework Catch, Gojko Adzic's thoughts on Continuous Delivery, Greg Young's new project PrivateEye, Tomas Petricek's live-coded F# web programming session, Gary Short's Troll Hunting talk and many more. It was also great to see F# feature so heavily both on and off the functional programming track, with attendance on the FP track doubling since last year!

Summary

Both Sean and I really enjoyed the conference; there was a vast array of speakers to learn from, huge numbers of passionate developers to mingle with during the breaks, and a very child-friendly vendor floor, complete with a surfboard simulator. Sean was very happy to win a Raspberry Pi for his efforts on it, along with enjoying an endless supply of ice cream! Also, look out for Sean's interview on Herding Code, which should be out in a month or so.

Group photo #ndcoslo :) pic.twitter.com/tYQApGTiDe

— Andrea (@silverSpoon) June 19, 2015
Categories: Programming


Systems Thinking, System Engineering, and Systems Management

Herding Cats - Glen Alleman - Sun, 06/21/2015 - 16:46

There are several paradigms for Systems Thinking, ranging from psychobabble to hard-core Systems Engineering. A group of colleagues are starting a book with the working title Increasing the Probability of Project Success; several of the chapters are based on Systems Thinking.

But first, some background on Systems Theory, Systems Thinking, and Systems Engineering.

Systems Theory is the interdisciplinary study of systems in general, with the goal of elucidating principles that can be applied to all types of systems at all nesting levels in all fields of research.

Systems Engineering is an interdisciplinary field of engineering that focuses on how to design and manage complex engineering systems over their life cycles. Systems Management (MSSM, USC, 1980) is an umbrella discipline encompassing systems engineering, managerial finance, contract management, program management, human factors, and operations research, applied in military, defense, space, and other complex systems domains.

Here are two book references that inform our thought processes.

Systems Thinking - This book is the basis of thinking about systems. It's a manufacturing and Industrial Engineering paradigm. Software-intensive systems fit in here as well, since interfaces between system components define the complexity aspects of all systems of systems.

This book opens with an Einstein quote: In the brain, thinking is doing. As engineers - yes, software engineering is alive and well in many domains - we can plan, prepare, and predict, but no matter how much we think, action occurs through doing.

So when we hear any suggestion, ask: how can this be put to work in some measurable way to assess the effectiveness and performance of the outcomes?

Systems Thinking Building Maps - This is the companion book on mapping processes. Systems Thinking is the process of understanding how systems influence one another within a world of systems, and has been defined as an approach to problem solving that views our "problems" as parts of an overall system, rather than reacting to a specific part or outcome.

There are many kinds of systems. Hard systems, software systems, evolutionary systems. It is popular to mix these, but that creates confusion and removes the ability to connect concepts with actionable outcomes. 

Cynefin is one of those popular approaches that has no units of measure for complex, complicated, chaotic, and obvious - just soft, self-referencing words.

So in our engineering paradigm, this approach is not very useful.

Along with these approaches are some other seminal works.

In The End

Everything's a system. Interactions between components are where the action is and where the problems come from. Any non-trivial system has interactions that must be managed as system interactions. This means modeling these interactions, estimating their impacts, and defining the behaviors of these interactions before, during, and after their development.

This means recognizing the criteria for a mature and effective method of managing in the presence of uncertainty.

  • Recognition by clients and providers of the need to architect the system.
  • Acceptance of a discipline to perform those functions using known methods.
  • Recognition of the separation of value judgments and technical decisions between client, architect, and builder.
  • Recognition that architecture is an art as well as a science - in particular, the development and use of nonanalytical techniques.
  • Effective utilization of an educated professional staff engaged in the process of systems level architecting.
Related articles: Eyes Wide Shut - A View of No Estimates | Who's Budget is it Anyway? | The Dysfunctional Approach to Using "5 Whys"
Categories: Project Management

Re-Read Saturday: The Goal: A Process of Ongoing Improvement. Part 18


I had intended to spend the last entry of our re-read of The Goal waxing poetic about the afterword in the book, titled “Standing on the Shoulders of Giants”. Suffice it to say that the afterword does an excellent job describing the practical and theoretical basis for Goldratt and Cox’s ideas, which have ultimately shaped both the lean and process improvement movements since 1984.

Previous Installments:

Part 1       Part 2       Part 3      Part 4      Part 5 
Part 6       Part 7      Part 8     Part 9      Part 10
Part 11     Part 12      Part 13    Part 14    Part 15
Part 16    Part 17

The Goal is important because it introduced and explained the theory of constraints (TOC), which has proven over and over again to be critical to anyone managing a system. The TOC says that the output of any manageable system is limited by a small number of constraints and that all typical systems have at least one constraint. I recently had a discussion with a colleague who posited that not all systems have constraints. He laid out a scenario in which, if you had unlimited resources and capability, it would be possible to create a system without constraints. While theoretically true, it would be safe to embrace the operational hypothesis that any realistic process has at least one constraint. Understanding the constraints that affect a process or system provides anyone with an interest in process improvement with a powerful tool to deliver effective change. I do mean anyone! While the novel is set in a manufacturing environment, it is easy to identify how the ideas can apply to any setting where work follows a systematic process. For example, software development and maintenance is a process that takes business needs and transforms those needs into functionality. The readers of the Software Process and Measurement Blog should recognize that the ideas in The Goal are germane to the realm of information technology.

As we have explored the book, I have shared how I have been able to apply the concepts to illustrate that what Goldratt and Cox wrote is applicable in the 21st century workplace. I also shared how others reacted to the book when I read it in public or talked about it with people trapped next to me on numerous flights. Their reaction reminded me that my reaction was not out of the ordinary. The Goal continues to affect people years after it was first published. For example, the concept of the TOC and the Five Focusing Steps proved useful again this week. I was asked to discuss process improvement with a team made up of tax analysts, developers and testers. Each role is highly specialized and there is little cross-specialty work-sharing. With a bit of coaching the team was able to identify their process flow and to develop a mechanism to identify their bottleneck(s) to improve their throughput. Even though the Five Focusing Steps were never mentioned directly, we were able to agree on an improvement approach that would find the constraint, help them exploit the constraint, subordinate the other steps in the process to the constraint, support improving the capacity of the constraint, and then repeat the analysis once that step was no longer a constraint. Had I never read The Goal, we might not have found a way to improve the process.

Perhaps re-reading the book or just carrying it around has made me overly sensitive to the application of the TOC and the other concepts in the book. However, I don’t think that is the real reason the material is useful. Others have been equally impacted; for example, Steve Tendon, author of Tame The Flow and currently a columnist on the Software Process and Measurement Cast, suggests that The Goal and the TOC have had a significant influence on his groundbreaking process improvement ideas. Bottom line: if you have not read or re-read The Goal, I strongly suggest that you make the time to read the book. The Goal is an important book if you manage processes or are interested in improving how work is done in the 21st century.

I would like to hear from you! Can you tell me:

  1. How has The Goal impacted how you work?
  2. Have you been able to put the ideas in the book into practice?
  3. What are the successes and difficulties you faced when leveraging the Theory of Constraints?
  4. Do you use the Socratic method to identify and fix problems?

Re-Read Saturday Housekeeping Notes:

  • Next week we begin the re-read of The Mythical Man-Month (I am buying a new copy today, so if you do not have a copy, get one today). I will be reading this version of Man-Month.
  • Remember that the summary of previous entries in the re-read of The Goal has been shifted to a new page.
  • Also, if you don’t have a copy of The Goal, buy one and read it! If you use the link below it will support the Software Process and Measurement blog and podcast. Dead Tree Version or Kindle Version.

Categories: Process Management

R: dplyr – segfault cause ‘memory not mapped’

Mark Needham - Sat, 06/20/2015 - 23:18

In my continued playing around with web logs in R I wanted to process the logs for a day and see what the most popular URIs were.

I first read in all the lines using the read_lines function in readr and put the vector it produced into a data frame so I could process it using dplyr.

library(readr)
dlines = data.frame(column = read_lines("~/projects/logs/2015-06-18-22-docs"))

In the previous post I showed some code to extract the URI from a log line. I extracted this code out into a function and adapted it so that I could pass in a list of values instead of a single value:

extract_uri = function(log) {
  parts = str_extract_all(log, "\"[^\"]*\"")
  return(lapply(parts, function(p) str_match(p[1], "GET (.*) HTTP")[2] %>% as.character))
}

Next I ran the following function to count the number of times each URI appeared in the logs:

library(dplyr)
pages_viewed = dlines %>%
  mutate(uri  = extract_uri(column)) %>% 
  count(uri) %>%
  arrange(desc(n))

This crashed my R process with the following error message:

segfault cause 'memory not mapped'

I narrowed it down to a problem when doing a group by operation on the ‘uri’ field and came across this post, which suggested that it was handled more cleanly in more recent versions of dplyr.
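
The upgrade itself is just a reinstall from CRAN - a minimal sketch, since the post doesn't show this step:

install.packages("dplyr")
packageVersion("dplyr")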

I upgraded to 0.4.2 and tried again:

## Error in eval(expr, envir, enclos): cannot group column uri, of class 'list'

That makes more sense. We’re probably returning a list from extract_uri rather than a vector, which would fit nicely back into the data frame. That’s fixed easily enough by unlisting the result:

extract_uri = function(log) {
  parts = str_extract_all(log, "\"[^\"]*\"")
  return(unlist(lapply(parts, function(p) str_match(p[1], "GET (.*) HTTP")[2] %>% as.character)))
}

And now when we run the count function it’s happy again, good times!

Categories: Programming

Leadership Skills for Making Things Happen

"A leader is one who knows the way, goes the way, and shows the way." -- John C. Maxwell

How many people do you know who talk a good talk, but don’t walk the walk?

Or, how many people do you know who have a bunch of ideas that you know will never see the light of day?  They can pontificate all day long, but the idea of turning those ideas into work that could be done is foreign to them.

Or, how many people do you know who can plan all day long, but whose plan is nothing more than a list of things that will never happen?  Worse, maybe they turn it into a team sport, and everybody participates in the planning process of all the outcomes, ideas and work that will never happen. (And, who exactly wants to be accountable for that?)

It doesn’t need to be this way.

A lot of people have Hidden Strengths they can develop into Learned Strengths.   And one of the most important buckets of strengths is Leading Implementation.

Leading Implementation is a set of leadership skills for making things happen.

It includes the following leadership skills:

  1. Coaching and Mentoring
  2. Customer Focus
  3. Delegation
  4. Effectiveness
  5. Monitoring Performance
  6. Planning and Organizing
  7. Thoroughness

Let’s say you want to work on these leadership skills.  The first thing you need to know is that these are not elusive skills reserved exclusively for the elite.

No, these are commonly Hidden Strengths that you and others around you already have, and they just need to be developed.

If you don’t think you are good at any of these, then before you rule yourself out and scratch them off your list, you need to ask yourself some key reflective questions:

  1. Do you know what good actually looks like?  Who are your role models?   What do they do differently than you, and is it really might and magic, or do they simply use behaviors or techniques that you could learn, too?
  2. How much have you actually practiced?   Have you really spent any sort of time working at the particular skill in question?
  3. How did you create an effective feedback loop?  So many people rapidly improve when they figure out how to create an effective learning loop and an effective feedback loop.
  4. Who did you learn from?  Are you expecting yourself to just naturally be skilled?  Really?  What if you found a good mentor or coach, one that could help you create an effective learning loop and feedback loop, so you can improve and actually chart and evaluate your progress?
  5. Do you have a realistic bar?  It’s easy to fall into the trap of “all or nothing.”   What if instead of focusing on perfection, you focused on progress?   Could a little improvement in a few of these areas change your game in a way that helps you operate at a higher level?

I’ve seen far too many starving artists and unproductive artists, as well as mad scientists, who had brilliant ideas that they couldn’t turn into reality.  While some were lucky to pair with the right partners and bring their ideas to life, I’ve actually seen another pattern among productive artists.

They develop some of the basic leadership skills in themselves to improve their ability to execute.

Not only are they more effective on the job, but they are happier with their ability to express their ideas and turn their ideas into action.

Even better, when they partner with somebody who has strong execution, they amplify their impact even more because they have a better understanding and appreciation of what it takes to execute ideas.

Like talk, ideas are cheap.

The market rewards execution.

Categories: Architecture, Programming

R: Regex – capturing multiple matches of the same group

Mark Needham - Fri, 06/19/2015 - 22:38

I’ve been playing around with some web logs using R and I wanted to extract everything that existed in double quotes within a logged entry.

This is an example of a log entry that I want to parse:

log = '2015-06-18-22:277:548311224723746831\t2015-06-18T22:00:11\t2015-06-18T22:00:05Z\t93317114\tip-127-0-0-1\t127.0.0.5\tUser\tNotice\tneo4j.com.access.log\t127.0.0.3 - - [18/Jun/2015:22:00:11 +0000] "GET /docs/stable/query-updating.html HTTP/1.1" 304 0 "http://neo4j.com/docs/stable/cypher-introduction.html" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36"'

And I want to extract these 3 things:

  • /docs/stable/query-updating.html
  • http://neo4j.com/docs/stable/cypher-introduction.html
  • Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36

i.e. the URI, the referrer and browser details.

I’ll be using the stringr library, which seems to work quite well for this type of work.
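
If you don't already have it, stringr can be installed from CRAN first - a minimal setup sketch:

install.packages("stringr")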

To extract these values we need to find all the occurrences of double quotes and get the text inside those quotes. We might start by using the str_match function:

> library(stringr)
> str_match(log, "\"[^\"]*\"")
     [,1]                                               
[1,] "\"GET /docs/stable/query-updating.html HTTP/1.1\""

Unfortunately that only picked up the first occurrence of the pattern, so we’ve got the URI but not the referrer or browser details. I tried str_extract with similar results before I found str_extract_all, which does the job:

> str_extract_all(log, "\"[^\"]*\"")
[[1]]
[1] "\"GET /docs/stable/query-updating.html HTTP/1.1\""                                                                            
[2] "\"http://neo4j.com/docs/stable/cypher-introduction.html\""                                                                    
[3] "\"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36\""

We still need to do a bit of cleanup to get rid of the ‘GET’ and ‘HTTP/1.1’ in the URI and the quotes in all of them:

parts = str_extract_all(log, "\"[^\"]*\"")[[1]]
uri = str_match(parts[1], "GET (.*) HTTP")[2]
referer = str_match(parts[2], "\"(.*)\"")[2]
browser = str_match(parts[3], "\"(.*)\"")[2]
 
> uri
[1] "/docs/stable/query-updating.html"
 
> referer
[1] "https://www.google.com/"
 
> browser
[1] "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36"

We could then go on to split out the browser string into its sub-components, but that’ll do for now!
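
If we did want to go further, the same stringr functions would do the job. A minimal sketch - the regular expressions here are my own, not from the original post:

> str_match(browser, "Chrome/([0-9.]+)")[2]
[1] "43.0.2357.124"
 
> str_extract_all(browser, "[A-Za-z]+/[0-9.]+")[[1]]
[1] "Mozilla/5.0"          "AppleWebKit/537.36"   "Chrome/43.0.2357.124" "Safari/537.36"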

Categories: Programming

How to deploy composite Docker applications with Consul key values to CoreOS

Xebia Blog - Fri, 06/19/2015 - 16:34

Most examples of the deployment of Docker applications to CoreOS use a single Docker application. But as soon as you have an application that consists of more than one unit, the number of commands you have to type soon becomes annoying. At Xebia we have a best practice that says "Three strikes and you automate", mandating that the third time you do something similar, you automate. In this blog I share the manual page of the utility called fleetappctl, which allows you to perform rolling upgrades and deploy Consul key-value pairs for composite applications on CoreOS, and I show three examples of its usage.


fleetappctl is a utility that allows you to manage a set of CoreOS fleet unit files as a single application. You can start, stop and deploy the application. fleetappctl is idempotent and does rolling upgrades on template files with multiple instances running. It can substitute placeholders at deployment time and it is able to deploy Consul key-value pairs as part of your application. Using fleetappctl you have everything you need to create a self-contained deployment unit for your composite application and put it under version control.

The command line options to fleetappctl are shown below:

fleetappctl [-d deployment-descriptor-file]
            [-e placeholder-value-file]
            (generate | list | start | stop | destroy)
option -d

The deployment descriptor file describes all the fleet unit files and Consul key-value pair files that make up the application. All the files referenced in the deployment descriptor may have placeholders for deployment-time values. These placeholders are enclosed in double curly brackets {{ }}.
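
To make this concrete, here is a sketch of what a descriptor might contain. Only the fleet.UnitConfigurationFile and startUnit element names are confirmed by this post (they appear in the xmlstarlet command further down); the root element and attributes here are assumptions - run fleetappctl generate to see the real format:

<!-- sketch only: root element and attribute names are assumptions -->
<deployit-manifest>
  <fleet.UnitConfigurationFile name="mnt-data">
    <startUnit>true</startUnit>
  </fleet.UnitConfigurationFile>
</deployit-manifest>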

option -e

The file contains the values for the placeholders to be used on deployment of the application. The file has a simple format:

<name>=<value>
start

starts all units in the order in which they appear in the deployment descriptor. If you have a template unit file, you can specify the number of instances you want to start. start is idempotent, so you may call it multiple times; start will bring the deployment in line with your descriptor.

If the unit file has changed with respect to the deployed unit file, the corresponding instances will be stopped and restarted with the new unit file. If you have a template file, the instances of the template file will be upgraded one by one.

Any Consul key-value pairs defined by the consul.KeyValuePairs entries are created in Consul. Existing values are not overwritten.

generate

generates a deployment descriptor (deployit-manifest.xml) based upon all the unit files found in your directory. If a file is a fleet unit template file, the number of instances to start is set to 2, to support rolling upgrades.

stop

stops all units in reverse order of their appearance in the deployment descriptor.

destroy

destroys all units in reverse order of their appearance in the deployment descriptor.

list

lists the runtime status of the units that appear in the deployment descriptor.

Install fleetappctl

To install the fleetappctl utility, type the following commands:

curl -q -L https://github.com/mvanholsteijn/fleetappctl/archive/0.25.tar.gz | tar -xzf -
cd fleetappctl-0.25
./install.sh
brew install xmlstarlet
brew install fleetctl
Start the platform

If you do not have the platform running, start it first.

cd ..
git clone https://github.com/mvanholsteijn/coreos-container-platform-as-a-service.git
cd coreos-container-platform-as-a-service
git checkout 029d3dd8e54a5d0b4c085a192c0ba98e7fc2838d
cd vagrant
vagrant up
./is_platform_ready.sh
Example - Three component web application


The first example is a three-component application. It consists of a mount, a Redis database service and a web application. We generate the deployment descriptor, indicate that we do not want to start the mount explicitly, start the application, and then modify the web application unit file to change the service name to 'helloworld'. We perform a rolling upgrade by issuing start again. Finally we list, stop and destroy the application.

cd ../fleet-units/app
# generate a deployment descriptor
fleetappctl generate

# do not start mount explicitly
xml ed -u '//fleet.UnitConfigurationFile[@name="mnt-data"]/startUnit' \
       -v false deployit-manifest.xml > \
        deployit-manifest.xml.new
mv deployit-manifest.xml{.new,}

# start the app
fleetappctl start 

# Check it is working
curl hellodb.127.0.0.1.xip.io:8080
curl hellodb.127.0.0.1.xip.io:8080

# Change the service name of the application in the unit file
sed -i -e 's/SERVICE_NAME=hellodb/SERVICE_NAME=helloworld/' app-hellodb@.service

# do a rolling upgrade
fleetappctl start 

# Check it is now accessible on the new service name
curl helloworld.127.0.0.1.xip.io:8080

# Show all units of this app
fleetappctl list

# Stop all units of this app
fleetappctl stop
fleetappctl list

# Restart it again
fleetappctl start

# Destroy it
fleetappctl destroy
Example - placeholder references

This example shows the use of a placeholder reference in the unit file of the paas-monitor application. The application takes two optional environment variables, RELEASE and MESSAGE, that allow you to configure the resulting responses. The variable RELEASE is configured in the Docker run command in the fleet unit file through a placeholder. The actual value for the current deployment is taken from a placeholder value file.

cd ../fleetappctl-0.25/examples/paas-monitor
#check out the placeholder reference
grep '{{' paas-monitor@.service

...
ExecStart=/bin/sh -c "/usr/bin/docker run --rm --name %p-%i \
 --env RELEASE={{release}} \
...
# checkout our placeholder values
cat dev.env
...
release=V2
# start the app
fleetappctl -e dev.env start

# show current release in status
curl paas-monitor.127.0.0.1.xip.io:8080/status

# start is idempotent (ie. nothing happens)
fleetappctl -e dev.env start

# update the placeholder value and see a rolling upgrade in the works
echo 'release=V3' > dev.env
fleetappctl -e dev.env start
curl paas-monitor.127.0.0.1.xip.io:8080/status

fleetappctl destroy
Example - Env Consul Key Value Pair deployments


The final example shows the use of a Consul key-value pair, the use of placeholders, and envconsul to dynamically update the environment variables of a running instance. The environment variables RELEASE and MESSAGE are taken from the keys under /paas-monitor in Consul. The initial values of these keys are set on first deployment, using values from the placeholder file.

cd ../fleetappctl-0.25/examples/envconsul

#check out the Consul Key Value pairs, and notice the reference to placeholder values
cat keys.consul
...
paas-monitor/release={{release}}
paas-monitor/message={{message}}

# checkout our placeholder values
cat dev.env
...
release=V4
message=Hi guys
# start the app
fleetappctl -e dev.env start

# show current release and message in status
curl paas-monitor.127.0.0.1.xip.io:8080/status

# Change the message in Consul
fleetctl ssh paas-monitor@1 \
    curl -X PUT \
    -d \'hello Consul\' \
    http://172.17.8.101:8500/v1/kv/paas-monitor/message

# checkout the changed message
curl paas-monitor.127.0.0.1.xip.io:8080/status

# start does not change the values..
fleetappctl -e dev.env start
Conclusion

CoreOS provides all the basic functionality for a Container Platform as a Service. With the utility fleetappctl it becomes easy to start, stop and upgrade composite applications. The script complements fleetctl and does not break other ways of deploying your applications to CoreOS.

The source code, manual page and documentation of fleetappctl can be found on https://github.com/mvanholsteijn/fleetappctl.

 

Diff'ing software architecture diagrams again

Coding the Architecture - Simon Brown - Fri, 06/19/2015 - 12:42

In Diff'ing software architecture diagrams, I showed that creating a software architecture model with a textual format provides you with the ability to version control and diff different versions of the model. As a follow-up, somebody asked me whether Structurizr provides a way to recreate what Robert Annett originally posted in Diagrams for System Evolution. In other words, can the colours of the lines be changed? As you can see from the images below, the answer is yes.


To do this, you simply add some tags to the relationships and add the appropriate styles to the view configuration. structurizr.com will even auto-generate a key for you.

Diagram key

And yes, you can do the same with elements too. As this illustrates, the choice of approach is yours.

Categories: Architecture

If You Have More Than One Sprint You Need To Scale!


I use Microsoft Office, many of my clients use large ERP packages, and last week I bought a highly functional math package to do data analysis. I would describe all of these as products that are refreshed over time by major and minor releases. As an outsider, the delineation of product and release is both easily recognizable and easily describable. However, once you peel back the covers and dive into the inner workings of the typical corporate IT organization (a broad definition that includes development, maintenance, support, security and more), how work is grouped and described can often be much more akin to a descent through Dante’s nine circles of hell.   Recently I sat in a conference room in front of a white board and challenged a group to identify how work was grouped and how the groupings related to each other. During the discussion we chose not to discuss the portfolio level within the organization and focused on the delivery of functionality. The results followed a path that looked like a simple hierarchy running from programs to projects to sprints . . . except for the waterfall teams that don’t do sprints. This hierarchy was crosscut by products and releases, increasing the possible complexity of any particular scenario. As work is grouped together, from sprints to programs into larger and larger blocks, additional layers of control are needed to manage risk.

In Agile the smallest grouping of work is the sprint. In a perfect world, a team would accept work into a sprint where each story was independent and intrinsically motivating. Each piece of work would be its own big picture; however, that scenario is at best rare. Most people are interested in knowing that they are helping build the Golden Gate Bridge, not just the left lane of a typical bridge on-ramp. We are more motivated when we believe we are doing something big. It is rare that a unit of work being delivered by a single sprint from a single team will require any scaling.

In most organizations, a project represents the grouping of work that is most easily recognized. While the concept of a project is under pressure in some circles (we will explore concepts such as #NoProjects later), I haven’t sat next to anyone on a plane who doesn’t describe their work as projects, whether they are in marketing, sales, accounting, consulting or software. Projects and project accounting are firmly enforced in most organizations’ finance and accounting departments. As noted in Projects in Agile, almost every organization has its own definition of a project. Interestingly, I was eating dinner with a group of developers and scrum masters recently when the conversation turned to the definition of a project. A sizable group decided that any discrete task could be described as a project: a task has a start and an end and is goal oriented. From a grouping perspective, projects are typically an accumulation of sprints or releases in Agile. In more classic scenarios, a project can be described as one or more releases into production. Any project that is more than a single sprint and a single team will require scaling to afford greater foresight and planning so that the pieces fit together in a coherent whole.

The definition of a release is widely variable. Releases can be a subset of a project, with functionality pushed to production, test, or some other environment. Alternately, a release can be a group of whole, discrete projects that are moved into an environment together. The only common thread in the use of the term release is that it represents movement. Releases, other than in very uncomplicated environments, will always require coordination between development teams and operations, business and potentially customers. The larger and more complex the release, the more planning and coordination will be required.

Programs are groups of related and often inter-related projects or releases that are organized together to achieve a common goal. By definition programs are larger than projects and can be implemented through one or a large number of releases. Because they are larger and therefore more complex, programs typically require additional techniques to ensure foresight, planning and coordination so that all stakeholders understand what is happening and their role in achieving success.

A final grouping of work is around the concept of product. Building from the Software Engineering Institute’s definition of a software product line, a simple definition of a product could be a set of related functions that are managed as a whole to satisfy a specific market need. Typically a product is developed through a project or program and is maintained through releases, projects and programs. Products should have a roadmap that provides internal and external customers guidance on the features that the organization intends to develop and deliver over time. Roadmaps are typically more granular in the near term and more speculative as the time horizon recedes.

As work is grouped from smallest to largest, from sprint to program or product, added effort is required to organize and coordinate the work. Increased levels of planning and coordination require additional tools and techniques beyond the basics typically found in Scrum or Extreme Programming (XP). In the Agile vernacular, the need for additional techniques to deal with size, coordination and planning is called scaling.


Categories: Process Management

Startup Thinking

“Startups don't win by attacking. They win by transcending.  There are exceptions of course, but usually the way to win is to race ahead, not to stop and fight.” -- Paul Graham

A startup is the largest group of people you can convince to build a different future.

Whether you launch a startup inside a big company or launch a startup as a new entity, there are a few things that determine the strength of the startup: a sense of mission, space to think, new thinking, and the ability to do work.

The more clarity you have around Startup Thinking, the more effective you can be whether you are starting startups inside or outside of a big company.

In the book, Zero to One: Notes on Startups, or How to Build the Future, Peter Thiel shares his thoughts about Startup Thinking.

Startups are Bound Together by a Sense of Mission

It’s the mission.  A startup has an advantage when there is a sense of mission that everybody lives and breathes.  The mission shapes the attitudes and the actions that drive towards meaningful outcomes.

Via Zero to One: Notes on Startups, or How to Build the Future:

“New technology tends to come from new ventures--startups.  From the Founding Fathers in politics to the Royal Society in science to Fairchild Semiconductor's ‘traitorous eight’ in business, small groups of people bound together by a sense of mission have changed the world for the better.  The easiest explanation for this is negative: it's hard to develop new things in big organizations, and it's even harder to do it by yourself.  Bureaucratic hierarchies move slowly, and entrenched interests shy away from risk.” 

Signaling Work is Not the Same as Doing Work

One strength of a startup is the ability to actually do work.  With other people.  Rather than just talk about it, plan for it, and signal about it, a startup can actually make things happen.

Via Zero to One: Notes on Startups, or How to Build the Future:

“In the most dysfunctional organizations, signaling that work is being done becomes a better strategy for career advancement than actually doing work (if this describes your company, you should quit now).  At the other extreme, a lone genius might create a classic work of art or literature, but he could never create an entire industry.  Startups operate on the principle that you need to work with other people to get stuff done, but you also need to stay small enough so that you actually can.”

New Thinking is a Startup’s Strength

The strength of a startup is new thinking.  New thinking is even more valuable than agility.  Startups provide the space to think.

Via Zero to One: Notes on Startups, or How to Build the Future:

“Positively defined, a startup is the largest group of people you can convince of a plan to build a different future.  A new company's most important strength is new thinking: even more important than nimbleness, small size affords space to think.  This book is about the questions you must ask and answer to succeed in the business of doing new things: what follows is not a manual or a record of knowledge but an exercise in thinking.  Because that is what a startup has to do: question received ideas and rethink business from scratch.”

Do you have stinking thinking, or do you have a beautiful mind?

New thinking will take you places.

You Might Also Like

How To Get Innovation to Succeed Instead of Fail

Management Innovation is at the Top of the Innovation Stack

The Innovation Revolution

The New Competitive Landscape

The New Realities that Call for New Organizational and Management Capabilities

Categories: Architecture, Programming

I Know I’ll Never Be Happy Unless I Do Something For Myself

Making the Complex Simple - John Sonmez - Thu, 06/18/2015 - 16:00

In this episode, I answer an email about being truly happy doing something for oneself. Full transcript: John:               Hey, John Sonmez from simpleprogrammer.com. I’ve got a question here about some life choices. It’s titled In A Pickle, and I’ll read you here. Cole says, “Hey, John. First off, I wanted to say you’re a huge […]

The post I Know I’ll Never Be Happy Unless I Do Something For Myself appeared first on Simple Programmer.

Categories: Programming

Monte Carlo Simulation of Project Performance

Herding Cats - Glen Alleman - Thu, 06/18/2015 - 15:32

Project work is random. Most everything in the world is random. The weather, commuter traffic, the productivity of writing and testing code. Few things actually take as long as they are planned. Cost is less random, but there are variances in the cost of labor and the availability of labor. Mechanical devices have variances as well.

The exact fit of a water pump on a Toyota Camry is not the same for each pump. There is a tolerance in the mounting holes and in the volume of water pumped. This is a variance in technical performance.

Managing in the presence of these uncertainties is part of good project management. But there are two distinct paradigms of managing in the presence of these uncertainties.

  1. We have empirical data of the variances. We have samples of the hole positions and sizes of the water pump mounting plate for the last 10,000 pumps that were installed. We have samples of how long it took to write a piece of code and the attributes of the code that are correlated to that duration. We have empirical measures.
  2. We have a theoretical model of the water pump in the form of a 3D CAD model, with the materials modeled for expansion, drilling errors of the holes, and other static and dynamic variances. We model the duration of work using a Probability Distribution Function and a three-point estimate of the Most Likely, Pessimistic, and Optimistic durations. These can be derived from past performance, but we don't have enough actual data to produce the PDF with a low enough Sample Error for our needs.

In the first case we have empirical data. In the second case we don't. There are two approaches to modeling what the system will do in terms of cost and schedule outcomes.

Bootstrapping the Empirical Data

With samples of past performance and the proper statistical assessment of those samples, we can re-sample them to produce a model of future performance. This bootstrap resampling shares the principle of the second method - Monte Carlo Simulation - but with several important differences.

  • The researcher - and we are researching what the possible outcomes might be from our model - does not know, nor have any control over, the Probability Distribution Function that generated the past sample. You take what you got.
  • As well, we don't have any understanding of why those samples appear as they do. They're just there. We get what we get.
  • This last piece is critical because it prevents us from defining what performance must be in place to meet some future goal. We can't tell what performance we need because we have no model of the needed performance, just samples from the past.
  • This results from the statistical condition that there is a PDF for the process that is unobserved. All we have is a few samples of this process.
  • With these few samples, we're going to resample them to produce a modeled outcome. This resampling locks in any behavior of the future using the samples from the past, which may or may not actually represent the true underlying behavior. This may be all we can do because we don't have any theoretical model of the process.

This bootstrapping method is quick, easy, and produces a quick and easy result. But it has issues that must be acknowledged.

  • There is a fundamental assumption that the past empirical samples represent the future. That is, the samples contained in the bootstrapped list and their resampling are also contained in all the future samples.
  • Said in a more formal way
    • If the sample of data we have from the past is a reasonable representation of the underlying population of all samples from the work process, then the distribution of parameter estimates produced from the bootstrap model on a series of resampled data sets will provide a good approximation of the distribution of that statistic in the population.
    • With this sample data and its parameters (statistical moments) we can make a good approximation of the future.
  • There are some important statistical assumptions, though, that must be considered, starting with the assumption that future samples behave identically to past samples:
    • Nothing is going to change in the future
    • The past and the future are identical statistically
    • In the project domain that is very unlikely
  • With all these conditions, for a small project with few if any interdependencies and a static work process with little variance, bootstrapping is a nice quick-and-dirty approach to forecasting (estimating the future) based on the past. A minimal sketch of this resampling appears below.
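
As a minimal sketch of that quick-and-dirty approach - assuming only a small sample of past task durations, with all values and the task count invented for illustration - a bootstrap forecast looks like this:

import random
import statistics

# Past observed task durations in days -- illustrative values, not real data
past_durations = [3.0, 5.0, 4.5, 6.0, 4.0, 7.5, 5.5, 4.0, 6.5, 5.0]

def bootstrap_forecast(samples, tasks_remaining, iterations=10000):
    """Resample past durations with replacement to forecast the total
    duration of the remaining work. This locks in the assumption that
    the future behaves exactly like the past."""
    totals = []
    for _ in range(iterations):
        totals.append(sum(random.choice(samples) for _ in range(tasks_remaining)))
    totals.sort()
    return totals

totals = bootstrap_forecast(past_durations, tasks_remaining=8)
print("median: %.1f days" % totals[len(totals) // 2])
print("80th percentile: %.1f days" % totals[int(len(totals) * 0.8)])
print("mean: %.1f days" % statistics.mean(totals))

Note that nothing in this sketch can say what the durations must become to hit a date; it can only replay the past.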

Monte Carlo Simulation

This approach is more general and removes many of the restrictions to the statistical confidence of bootstrapping.

Just as a reminder: in principle, both the parametric and the non-parametric bootstrap are special cases of Monte Carlo simulation used for a very specific purpose - to estimate some characteristics of the sampling distribution. But like all principles, in practice there are larger differences when modeling project behaviors.

In the more general approach of Monte Carlo Simulation, the algorithm repeatedly creates random data in some way, performs some modeling with that random data, and collects some result. For example:

  • The duration of a set of independent tasks
  • The probabilistic completion date of a series of tasks connected in a network (schedule), each with a different Probability Distribution Function evolving as the project moves into the future.
  • A probabilistic cost correlated with the probabilistic schedule model. This is called the Joint Confidence Level. Both cost and schedule are random variables with time-evolving changes in their respective PDFs.

In practice, when we hear Monte Carlo simulation we are talking about a theoretical investigation - creating random data with no empirical content, or from reference classes - used to investigate whether an estimator can represent known characteristics of this random data. The (parametric) bootstrap, by contrast, refers to an empirical estimation that is not necessarily a model of the underlying processes, just a small sample of observations independent from the actual processes that generated that data.

The key advantage of MCS is that we don't necessarily need past empirical data. MCS can be used to advantage if we have it, but we don't need it for the Monte Carlo Simulation algorithm to work.

This approach can be used to estimate some outcome, like the bootstrap does, but also to theoretically investigate some general characteristic of a statistical estimator (cost, schedule, technical performance) which is difficult to derive from empirical data.

MCS removes the roadblock heard in many critiques of estimating - we don't have any past data on which to estimate. No problem: build a model of the work and the dependencies between that work, assign statistical parameters to the individual or collected PDFs, and run the MCS to see what comes out.
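
A minimal sketch of that idea, assuming a simple serial chain of three tasks, each with an invented three-point (optimistic, most likely, pessimistic) estimate feeding a triangular PDF:

import random

# Three-point estimates (optimistic, most likely, pessimistic) in days.
# Task names and values are invented for illustration.
tasks = {
    "design": (3, 5, 10),
    "build":  (8, 12, 25),
    "test":   (4, 6, 15),
}

def simulate_schedule(tasks, iterations=10000):
    """Draw each task's duration from its triangular PDF and sum along
    the serial dependency chain; return the sorted total durations."""
    totals = []
    for _ in range(iterations):
        totals.append(sum(random.triangular(lo, hi, mode)
                          for (lo, mode, hi) in tasks.values()))
    totals.sort()
    return totals

totals = simulate_schedule(tasks)
print("80%% confidence completion: %.1f days" % totals[int(len(totals) * 0.8)])

A real schedule network would have parallel paths and time-evolving PDFs, but the principle is the same: no past data is required to run the model.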

This approach has several critical advantages:

  • The first is a restatement - we don't need empirical data, although it will add value to the modeling process.
    • This is the primary purpose of Reference Classes
    • They are the raw material for defining possible future behaviors from the past
  • We can make a judgement of what the future will be like or, most importantly, what the future MUST be like to meet our goals, run the simulation, and determine if our planned work will produce the desired result.

So Here's the Killer Difference

Bootstrapping models make several key assumptions, which may not be true in general. So they must be tested before accepting any of the outcomes.

  • The future is like the past.
  • The statistical parameters are static - they don't evolve with time. That is, the future is like the past, an unlikely prospect on any non-trivial project.
  • The sampled data is identical to the population data both in the past and in the future.

Monte Carlo Simulation models provide key value that bootstrapping can't.

  • Different Probability Distribution Functions can be assigned to work as it progresses through time
  • The shape of that PDF can be defined from past performance, or defined from the needed performance. This is a CRITICAL capability of MCS

The critical difference between Bootstrapping and Monte Carlo Simulation is that MCS can show what the future performance has to be to stay on schedule (within variance), on cost, and have the technical performance meet the needs of the stakeholder.

When the process of defining the needed behavior of the work is done, a closed loop control system is put in place. This needed performance is the steering target. Measures of actual performance compared to needed performance generate the error signals for taking corrective actions. Just measuring past performance and assuming the future will be the same is Open Loop control. Any non-trivial project management method needs a closed loop control system.

Bootstrapping can only show what the future will be like if it is like the past, not what it must be like. In Bootstrapping this future MUST be like the past. In MCS we can tune the PDFs to show what performance has to be in order to manage to that plan. Bootstrapping is reporting yesterday's weather as tomorrow's weather - just like Steve Martin in L.A. Story. If tomorrow's weather turns out not to be like yesterday's weather, you're gonna get wet.

MCS can forecast tomorrow's weather by assigning PDFs to future activities that are different from past activities; then we can make any needed changes in that future model to alter the weather to meet our needs. This is in fact how weather forecasts are made - with much more sophisticated models, of course - here at the National Center for Atmospheric Research in Boulder, CO. A sketch of this tuning idea, building on the earlier model, follows.
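
Continuing the illustrative model above (this reuses the tasks and simulate_schedule from the earlier sketch, with an invented 30-day deadline), tuning a PDF to find the needed performance might look like this:

# Reuses tasks and simulate_schedule from the earlier sketch.
# Tighten the "build" task's pessimistic estimate until the plan
# meets an (invented) 30-day deadline at 80% confidence.
DEADLINE = 30.0

lo, mode, hi = tasks["build"]
while hi > mode:
    tasks["build"] = (lo, mode, hi)
    totals = simulate_schedule(tasks)
    p80 = totals[int(len(totals) * 0.8)]
    if p80 <= DEADLINE:
        print("build pessimistic must be <= %d days (P80 = %.1f)" % (hi, p80))
        break
    hi -= 1  # the management action: buy down risk on the build task
else:
    print("no feasible pessimistic value; rescope or add resources")

The loop is the steering target idea in miniature: the tuned PDF defines the performance needed, and actual performance can then be compared against it.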

This forecasting (estimating the future state) of possible outcomes, and the alteration of those outcomes through management actions - changing dependencies, adding or removing resources, providing alternatives to the plan (on-ramps and off-ramps of technology, for example), buying down risk, applying management reserve, assessing the impacts of rescoping the project, and so on - is what project management is all about.

Bootstrapping is necessary but far from sufficient for any non-trivial project to show up on or before the needed date (with schedule reserve), at or below the budgeted cost (with cost reserve), and have the product or service provide the needed capabilities (technical performance reserve).

Here's an example of that probabilistic forecast of project performance from an MCS (Risky Project). This picture shows the probability distributions for cost, finish date, and duration. It is built on time-evolving PDFs assigned to each activity in a network of dependent tasks, which models the work stream needed to complete as planned.

When that future work stream is changed - to meet new requirements, to correct unfavorable past performance, or to reflect changes in any or all of the underlying random variables - the MCS can show us the expected impact on key parameters of the project so that management intervention can take place, since Project Management is a verb.


The connection between the Bootstrap and Monte Carlo simulation of a statistic is simple.

Both are based on repetitive sampling and then direct examination of the results.

But there are significant differences between the methods (hence the difference in names and algorithms). Bootstrapping uses the original, initial sample as the population from which to resample. Monte Carlo Simulation uses a data generation process, with known values of the parameters of the Probability Distribution Function. The common algorithm for MCS is Lurie-Goldberg. Monte Carlo is used to test that the results of the estimators produce desired outcomes on the project and, if not, to allow the modeler and her management to change those estimators and then manage to the changed plan.

Bootstrap can be used to estimate the variability of a statistic and the shape of its sampling distribution from past data and, assuming the future is like the past, to make forecasts of throughput, completion, and other project variables.

In the end, the primary difference (and again the reason for the difference in names) is that Bootstrapping is based on unknown distributions: sampling and assessing the shape of the distribution adds no value to the outcomes. Monte Carlo is based on known or defined distributions, usually from Reference Classes.

Related articles: Do The Math | Complex, Complexity, Complicated | The Fallacy of the Planning Fallacy
Categories: Project Management

More Material Design with Topeka for Android

Android Developers Blog - Thu, 06/18/2015 - 06:44

Posted by Ben Weiss, Developer Programs Engineer

Material design is a new system for visual, interaction and motion design. We originally launched the Topeka web app as an Open Source example of material design on the web.

Today, we’re publishing a new material design example: the Android version of Topeka. It demonstrates that the same branding and material design principles can be used to create a consistent experience across platforms. Grab the code today on GitHub.

The juicy bits

While the project demonstrates a lot of different aspects of material design, let’s take a quick look at some of the most interesting bits.

Transitions

Topeka for Android features several possibilities for transition implementation. For starters, the Transitions API within ActivityOptions provides an easy yet effective way to create great transitions between Activities.

To achieve this, we register the shared string in a resources file like this:

<resources>
    <string name="transition_avatar">AvatarTransition</string>
</resources>

Then we use it within the source and target views as the transitionName:

<ImageView
    android:id="@+id/avatar"
    android:layout_width="@dimen/avatar_size"
    android:layout_height="@dimen/avatar_size"
    android:layout_marginEnd="@dimen/keyline_16"
    android:transitionName="@string/transition_avatar"/>

And then make the actual transition happen within SignInFragment.

private void performSignInWithTransition(View v) {
    Activity activity = getActivity();
    ActivityOptions activityOptions = ActivityOptions
            .makeSceneTransitionAnimation(activity, v,
                    activity.getString(R.string.transition_avatar));
    CategorySelectionActivity.start(activity, mPlayer, activityOptions);
    activity.finishAfterTransition();
}

For multiple transition participants with ActivityOptions you can take a look at the CategorySelectionFragment.

Animations

When it comes to more complex animations you can orchestrate your own animations as we did for scoring.

To get this right it is important to make sure all elements are carefully choreographed. The AbsQuizView class performs a handful of carefully crafted animations when a question has been answered:

The animation starts with a color change for the floating action button, depending on the provided answer. After this has finished, the button shrinks out of view with a scale animation. The view holding the question itself also moves offscreen. We scale this view to a small green square before sliding it up behind the app bar. During the scaling the foreground of the view changes color to match the color of the fab that just disappeared. This establishes continuity across the various quiz question states.

All this takes place in less than a second’s time. We introduced a number of minor pauses (start delays) to keep the animation from being too overwhelming, while ensuring it’s still fast.

The code responsible for this exists within AbsQuizView’s performScoreAnimation method.

FAB placement

The recently announced Floating Action Buttons are great for executing promoted actions. In the case of Topeka, we use one to submit an answer. The FAB also straddles two surfaces with variable heights, like this:

To achieve this we query the height of the top view (R.id.question_view) and then set padding on the FloatingActionButton once the view hierarchy has been laid out:

private void addFloatingActionButton() {
    final int fabSize = getResources().getDimensionPixelSize(R.dimen.fab_size);
    int bottomOfQuestionView = findViewById(R.id.question_view).getBottom();
    final LayoutParams fabLayoutParams = new LayoutParams(fabSize, fabSize,
            Gravity.END | Gravity.TOP);
    final int fabPadding = getResources().getDimensionPixelSize(R.dimen.padding_fab);
    final int halfAFab = fabSize / 2;
    fabLayoutParams.setMargins(0, // left
        bottomOfQuestionView - halfAFab, //top
        0, // right
        fabPadding); // bottom
    addView(mSubmitAnswer, fabLayoutParams);
}

To make sure that this only happens after the initial layout, we use an OnLayoutChangeListener in the AbsQuizView’s constructor:

addOnLayoutChangeListener(new OnLayoutChangeListener() {
    @Override
    public void onLayoutChange(View v, int l, int t, int r, int b,
            int oldLeft, int oldTop, int oldRight, int oldBottom) {
        removeOnLayoutChangeListener(this);
        addFloatingActionButton();
    }
});
Round OutlineProvider

Creating circular masks on API 21 onward is now really simple. Just extend the ViewOutlineProvider class and override the getOutline() method like this:

@Override
public final void getOutline(View view, Outline outline) {
    // Size the circular mask from a dimension resource
    final int size = view.getResources()
            .getDimensionPixelSize(R.dimen.view_size);
    outline.setOval(0, 0, size, size);
}

and call setClipToOutline(true) on the target view to get the right shadow shape.

Check out more details within the outlineprovider package within Topeka for Android.

Vector Drawables

We use vector drawables to display icons in several places throughout the app. You might be aware of our collection of Material Design Icons on GitHub, which contains about 750 icons for you to use. The best thing for Android developers: as of Lollipop you can use these VectorDrawables within your apps so they will look crisp no matter the density of the device's screen. For example, the back arrow ic_arrow_back from the icons repository has been adapted to Android's vector drawable format.

<vector xmlns:android="http://schemas.android.com/apk/res/android"
    android:width="24dp"
    android:height="24dp"
    android:viewportWidth="48"
    android:viewportHeight="48">
    <path
        android:pathData="M40 22H15.66l11.17-11.17L24 8 8 24l16 16 2.83-2.83L15.66 26H40v-4z"
        android:fillColor="?android:attr/textColorPrimary" />
</vector>

The vector drawable only has to be stored once within the res/drawable folder. This means less disk space is being used for drawable assets.

Property Animations

Did you know that you can easily animate any property of a View beyond the standard transformations offered by the ViewPropertyAnimator class (and its handy View#animate syntax)? For example, in AbsQuizView we define a property for animating the view's foreground color.

// Property for animating the foreground
public static final Property&lt;FrameLayout, Integer&gt; FOREGROUND_COLOR =
        new IntProperty&lt;FrameLayout&gt;("foregroundColor") {

            @Override
            public void setValue(FrameLayout layout, int value) {
                if (layout.getForeground() instanceof ColorDrawable) {
                    ((ColorDrawable) layout.getForeground()).setColor(value);
                } else {
                    layout.setForeground(new ColorDrawable(value));
                }
            }

            @Override
            public Integer get(FrameLayout layout) {
                return ((ColorDrawable) layout.getForeground()).getColor();
            }
        };

This can later be used to animate changes to said foreground color from one value to another like this:

final ObjectAnimator foregroundAnimator = ObjectAnimator
        .ofArgb(this, FOREGROUND_COLOR, Color.WHITE, backgroundColor);

This is not particularly new - it was added in API 12 - but it can still come in quite handy when you want to animate color changes in an easy fashion.

Tests

In addition to exemplifying material design components, Topeka for Android also features a set of unit and instrumentation tests that utilize the new testing APIs, namely “Gradle Unit Test Support” and the “Android Testing Support Library.” The implemented tests make the app resilient against changes to the data model. This catches breakages early, gives you more confidence in your code and allows for easy refactoring. Take a look at the androidTest and test folders for more details on how these tests are implemented within Topeka. For a deeper dive into Testing on Android, start reading about the Testing Tools.

What’s next?

With Topeka for Android, you can see how material design lets you create a more consistent experience across Android and the web. The project also highlights some of the best material design features of the Android 5.0 SDK and the new Android Design Library.

While the project currently only supports API 21+, there’s already a feature request open to support earlier versions, using tools like AppCompat and the new Android Design Support Library.

Have a look at the project and let us know in the project issue tracker if you’d like to contribute, or on Google+ or Twitter if you have questions.

Join the discussion on

+Android Developers
Categories: Programming