Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools



SATURN conference in Baltimore, MD

Coding the Architecture - Simon Brown - Wed, 04/15/2015 - 09:25

Following on from the CRAFT conference in Budapest next week, I'm heading straight across the water to the SATURN conference, which is taking place in Baltimore, Maryland. SATURN is much more focused on software architecture than many of the other events I attend, and I had a fantastic time when I attended in 2013, so I'm delighted to be invited back. I'm involved with a number of different things at the conference, as follows.

  • Tuesday April 28th - Microservices Trial - I'll be on a panel with Len Bass and Sam Newman, debating whether a microservices architecture is just an attractive nuisance rather than something that's actually useful.
  • Wednesday April 29th - Software architecture as code - a talk about how much of the software architectural intent remains in the code and how we can represent this architectural model as code.
  • Wednesday April 29th - Office hours - an open discussion about anything related to software architecture.
  • Thursday April 30th - My Silver Toolbox - I'll be doing a short talk in Michael Keeling's session about some of the software architecture tools I find indispensable.

SATURN 2015 brochure

This is my last scheduled trip to the US this year, so please do come and grab me if you want to chat.

Categories: Architecture


Herding Cats - Glen Alleman - Wed, 04/15/2015 - 04:21

This blog has been focused on improving program and project management processes for many years. Over that time I've run into several bunk ideas around projects, development, methods, and the process of managing other people's money. When that happens, the result is a post or two about the nonsense idea and the corrections to those ideas, not just from my experience but from the governance frameworks that guide our work.

A post on Tony DaSilva's blog about the Debunkers Club struck a chord. I've edited that blog's content to fit my domain, with full attribution.

This Blog is dedicated to the proposition that all information is not created equal. Much of it is endowed by its creators with certain undeniable wrongs. Misinformation is dangerous!!

There's a lot of crap floating around any business or technical field. Much of it gets passed around by well-meaning folks, but it is harmful regardless of the purity of the conveyor.

People who attempt to debunk myths, mistakes, and misinformation are often tireless in their efforts. They are also too often helpless against the avalanche of misinformation.

The Debunker Club is an experiment in professional responsibility. Anyone who's interested may join as long as they agree to the following:

  1. I would like to see less misinformation in the project management field. This includes planning, estimating, risk, execution, performance management, and development methods.
  2. I will invest some of my time in learning and seeking the truth, from sources like peer-reviewed scientific research or translations of that research.
  3. I will politely, but actively, provide feedback to those who transmit misinformation.
  4. I will be open to counter feedback, listening to understand opposing viewpoints based on facts, examples, and evidence beyond personal opinion. I will provide counter-evidence and argument when warranted.
Related articles:
  • Debunker Club Works to Dispel the Corrupted Cone of Learning
  • Five Estimating Pathologies and Their Corrective Actions
  • Critical Success Factors of IT Forecasting
  • Calculating Value from Software Projects - Estimating is a Risk Reduction Process
Categories: Project Management

Personas, Scenarios and Stories, Part 3

Tom “The Brewdog” Pinternick

The goal of all software-centric projects is functional software that meets user needs and provides value. The well-trodden path to functional software begins with a group of personas, moves on to scenarios and then to user stories, before being translated into software or other technical requirements.

Personas: A persona represents one of the archetypical users that interacts with the system or product.

Scenarios: A scenario is a narrative that describes purpose-driven interactions between a persona(s) and the product or system.

User Stories: A user story is a brief, simple requirement statement from the user perspective.

In Personas, Scenarios and Stories, Part 2 we generated a simple scenario for the “Brewdog” Logo Glass Collection Application:

When visiting a microbrewery, Tom “Brewdog” Pinternick notices that there are logo pint glasses on sale. The brewpub has several varieties. Tom can’t remember if he has collected all of the different glasses being offered. Recognizing a potential buying opportunity, Tom opens the Logo Pint Glass app, browses the glasses he has already collected for this brewpub and discovers one of the glasses is new.

After a team accepts the narrative contained in a scenario, the team needs to mine the scenario for user stories. The typical format for a user story is <persona> wants <goal> so that <benefit>. Using the scenario above, the first user story that can be generated is:

Tom “Brewdog” Pinternick wants to browse his glass collection so that he does not buy the same glass twice.

Much of the activity in this scenario happens outside of the application and provides context for the developers. For example, one inferred user story might reflect that since many brewpubs have muted mood lighting, the application will need to be readable in low light. This is a non-functional requirement. A user story could be constructed to tackle it.

Tom “Brewdog” Pinternick wants to be able to see the app in low light situations so that the application is usable in brewpubs with varied lighting schemes.

Other possible ways to tackle this (and other) non-functional requirements include adding criteria to the definition of done or adding specific acceptance criteria to all user-interface-related user stories. For example, the team could add “all screens must be readable in low light” to the definition of done or as acceptance criteria for screens.

In both cases, these proposed user stories would need to be groomed before a team accepts them into a sprint. Grooming will ensure that the story has acceptance criteria and fits the INVEST (independent, negotiable, valuable, estimable, small enough and testable) criteria we have suggested as a guide for developing quality user stories. Once a story has been created and groomed, the team can convert it into something of value (e.g. code, hardware or some other user-facing deliverable).

I am occasionally asked why teams can’t start working as soon as they think they know what they are expected to deliver. The simplest answer is that the risk of failure is too high. Unless teams spend the time and effort to find out what their customers (whether internal or external is immaterial) want before they start building, the chances are they will incur failure or rework. Agile embraces change by building in mechanisms for accepting change into the backlog based on learning and feedback. Techniques like personas, scenarios and user stories provide Agile teams with a robust, repeatable process for generating and interpreting user needs.

Categories: Process Management

Spark: Generating CSV files to import into Neo4j

Mark Needham - Tue, 04/14/2015 - 23:56

About a year ago Ian pointed me at a Chicago Crime data set which seemed like a good fit for Neo4j and after much procrastination I’ve finally got around to importing it.

The data set covers crimes committed from 2001 until now. It contains around 4 million crimes and meta data around those crimes such as the location, type of crime and year to name a few.

The contents of the file follow this structure:

$ head -n 10 ~/Downloads/Crimes_-_2001_to_present.csv
ID,Case Number,Date,Block,IUCR,Primary Type,Description,Location Description,Arrest,Domestic,Beat,District,Ward,Community Area,FBI Code,X Coordinate,Y Coordinate,Year,Updated On,Latitude,Longitude,Location
9464711,HX114160,01/14/2014 05:00:00 AM,028XX E 80TH ST,0560,ASSAULT,SIMPLE,APARTMENT,false,true,0422,004,7,46,08A,1196652,1852516,2014,01/20/2014 12:40:05 AM,41.75017626412204,-87.55494559131228,"(41.75017626412204, -87.55494559131228)"
9460704,HX113741,01/14/2014 04:55:00 AM,091XX S JEFFERY AVE,031A,ROBBERY,ARMED: HANDGUN,SIDEWALK,false,false,0413,004,8,48,03,1191060,1844959,2014,01/18/2014 12:39:56 AM,41.729576153145636,-87.57568059471686,"(41.729576153145636, -87.57568059471686)"
9460339,HX113740,01/14/2014 04:44:00 AM,040XX W MAYPOLE AVE,1310,CRIMINAL DAMAGE,TO PROPERTY,RESIDENCE,false,true,1114,011,28,26,14,1149075,1901099,2014,01/16/2014 12:40:00 AM,41.884543798701515,-87.72803579358926,"(41.884543798701515, -87.72803579358926)"
9461467,HX114463,01/14/2014 04:43:00 AM,059XX S CICERO AVE,0820,THEFT,$500 AND UNDER,PARKING LOT/GARAGE(NON.RESID.),false,false,0813,008,13,64,06,1145661,1865031,2014,01/16/2014 12:40:00 AM,41.785633535413176,-87.74148516669783,"(41.785633535413176, -87.74148516669783)"
9460355,HX113738,01/14/2014 04:21:00 AM,070XX S PEORIA ST,0820,THEFT,$500 AND UNDER,STREET,true,false,0733,007,17,68,06,1171480,1858195,2014,01/16/2014 12:40:00 AM,41.766348042591375,-87.64702037047671,"(41.766348042591375, -87.64702037047671)"
9461140,HX113909,01/14/2014 03:17:00 AM,016XX W HUBBARD ST,0610,BURGLARY,FORCIBLE ENTRY,COMMERCIAL / BUSINESS OFFICE,false,false,1215,012,27,24,05,1165029,1903111,2014,01/16/2014 12:40:00 AM,41.889741146006095,-87.66939334853973,"(41.889741146006095, -87.66939334853973)"
9460361,HX113731,01/14/2014 03:12:00 AM,022XX S WENTWORTH AVE,0820,THEFT,$500 AND UNDER,CTA TRAIN,false,false,0914,009,25,34,06,1175363,1889525,2014,01/20/2014 12:40:05 AM,41.85223460427207,-87.63185047834335,"(41.85223460427207, -87.63185047834335)"
9461691,HX114506,01/14/2014 03:00:00 AM,087XX S COLFAX AVE,0650,BURGLARY,HOME INVASION,RESIDENCE,false,false,0423,004,7,46,05,1195052,1847362,2014,01/17/2014 12:40:17 AM,41.73607283858007,-87.56097809501115,"(41.73607283858007, -87.56097809501115)"
9461792,HX114824,01/14/2014 03:00:00 AM,012XX S CALIFORNIA BLVD,0810,THEFT,OVER $500,STREET,false,false,1023,010,28,29,06,1157929,1894034,2014,01/17/2014 12:40:17 AM,41.86498077118534,-87.69571529596696,"(41.86498077118534, -87.69571529596696)"

Since I wanted to import this into Neo4j I needed to do some massaging of the data since the neo4j-import tool expects to receive CSV files containing the nodes and relationships we want to create.
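As the sample rows above show, the final Location column is quoted because it contains a comma, which is why the massaging needs a real CSV parser rather than a naive split on commas. A quick illustration in Python (not from the original post, which does its parsing in Scala with Open CSV):

```python
import csv
import io

# An abbreviated row from the crime data: the final Location field is
# quoted because it contains a comma.
row = '9464711,HX114160,"(41.75017626412204, -87.55494559131228)"'

# Splitting on commas breaks the quoted field into two pieces...
naive = row.split(",")                         # 4 columns

# ...while a CSV parser keeps the quoted field intact.
parsed = next(csv.reader(io.StringIO(row)))    # 3 columns
print(parsed[2])                               # (41.75017626412204, -87.55494559131228)
```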


I’d been looking at Spark towards the end of last year and the pre-processing of the big initial file into smaller CSV files containing nodes and relationships seemed like a good fit.

I therefore needed to create a Spark job to do this. We’ll then pass this job to a Spark executor running locally and it will spit out CSV files.


We start by creating a Scala object with a main method that will contain our processing code. Inside that main method we’ll instantiate a Spark context:

import org.apache.spark.{SparkConf, SparkContext}

object GenerateCSVFiles {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Chicago Crime Dataset")
    val sc = new SparkContext(conf)
  }
}

Easy enough. Next we’ll read in the CSV file. I found the easiest way to reference this was with an environment variable but perhaps there’s a more idiomatic way:

import java.io.File
import org.apache.spark.{SparkConf, SparkContext}

object GenerateCSVFiles {
  def main(args: Array[String]) {
    val crimeFile = System.getenv("CSV_FILE")
    if (crimeFile == null || !new File(crimeFile).exists()) {
      throw new RuntimeException("Cannot find CSV file [" + crimeFile + "]")
    }
    println("Using %s".format(crimeFile))
    val conf = new SparkConf().setAppName("Chicago Crime Dataset")
    val sc = new SparkContext(conf)
    val crimeData = sc.textFile(crimeFile).cache()
  }
}

The type of crimeData is RDD[String] – Spark’s way of representing the (lazily evaluated) lines of the CSV file. This also includes the header of the file so let’s write a function to get rid of that since we’ll be generating our own headers for the different files:

import org.apache.spark.rdd.RDD

def dropHeader(data: RDD[String]): RDD[String] = {
  data.mapPartitionsWithIndex((idx, lines) => {
    // skip the first line (the header) of the first partition
    if (idx == 0) {
      lines.drop(1)
    } else {
      lines
    }
  })
}
Now we’re ready to start generating our new CSV files so we’ll write a function which parses each line and extracts the appropriate columns. I’m using Open CSV for this:

import java.io.File
import au.com.bytecode.opencsv.CSVParser
import org.apache.hadoop.fs.FileUtil

def generateFile(file: String, withoutHeader: RDD[String], fn: Array[String] => Array[String], header: String, distinct: Boolean = true, separator: String = ",") = {
  FileUtil.fullyDelete(new File(file))
  val tmpFile = "/tmp/" + System.currentTimeMillis() + "-" + file
  val rows: RDD[String] = withoutHeader.mapPartitions(lines => {
    val parser = new CSVParser(',')
    lines.map(line => fn(parser.parseLine(line)).mkString(separator))
  })
  if (distinct) rows.distinct() saveAsTextFile tmpFile else rows.saveAsTextFile(tmpFile)
}

We then call this function like this:

generateFile("/tmp/crimes.csv", withoutHeader, columns => Array(columns(0),"Crime", columns(2), columns(6)), "id:ID(Crime),:LABEL,date,description", false)

The output into ‘tmpFile’ is actually 32 ‘part files’ but I wanted to be able to merge those together into individual CSV files that were easier to work with.
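Since saveAsTextFile writes one part file per partition, merging them back together is essentially concatenation, plus writing the header (the generated rows are headerless). A rough Python sketch of that merge step; the function name and details are my own, not the post's (the actual merge code is in the job on github):

```python
import glob
import os

def merge_part_files(parts_dir, output_file, header):
    """Concatenate Spark part-* files from parts_dir into a single CSV,
    prepending the header line that neo4j-import expects."""
    with open(output_file, "w") as out:
        out.write(header + "\n")
        # part files sort lexicographically (part-00000, part-00001, ...)
        for part in sorted(glob.glob(os.path.join(parts_dir, "part-*"))):
            with open(part) as f:
                for line in f:
                    out.write(line)
```

For example, merge_part_files(tmpFile, "/tmp/crimes.csv", "id:ID(Crime),:LABEL,date,description"), where tmpFile is the directory the Spark job wrote to.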

I won’t paste the full job here, but if you want to take a look it’s on github.

Now we need to submit the job to Spark. I’ve wrapped this in a script if you want to follow along but these are the contents:

./spark-1.1.0-bin-hadoop1/bin/spark-submit \
--driver-memory 5g \
--class GenerateCSVFiles \
--master local[8] \
target/scala-2.10/playground_2.10-1.0.jar

If we execute that we’ll see the following output:

Spark assembly has been built with Hive, including Datanucleus jars on classpath
Using Crimes_-_2001_to_present.csv
Using Spark's default log4j profile: org/apache/spark/
15/04/15 00:31:44 INFO SparkContext: Running Spark version 1.3.0
15/04/15 00:47:26 INFO TaskSchedulerImpl: Removed TaskSet 8.0, whose tasks have all completed, from pool
15/04/15 00:47:26 INFO DAGScheduler: Stage 8 (saveAsTextFile at GenerateCSVFiles.scala:51) finished in 2.702 s
15/04/15 00:47:26 INFO DAGScheduler: Job 4 finished: saveAsTextFile at GenerateCSVFiles.scala:51, took 8.715588 s
real	0m44.935s
user	4m2.259s
sys	0m14.159s

and these CSV files will be generated:

$ ls -alh /tmp/*.csv
-rwxrwxrwx  1 markneedham  wheel   3.0K 14 Apr 07:37 /tmp/beats.csv
-rwxrwxrwx  1 markneedham  wheel   217M 14 Apr 07:37 /tmp/crimes.csv
-rwxrwxrwx  1 markneedham  wheel    84M 14 Apr 07:37 /tmp/crimesBeats.csv
-rwxrwxrwx  1 markneedham  wheel   120M 14 Apr 07:37 /tmp/crimesPrimaryTypes.csv
-rwxrwxrwx  1 markneedham  wheel   912B 14 Apr 07:37 /tmp/primaryTypes.csv

Let’s have a quick check what they contain:

$ head -n 10 /tmp/beats.csv
$ head -n 10 /tmp/crimes.csv
9464711,Crime,01/14/2014 05:00:00 AM,SIMPLE
9460704,Crime,01/14/2014 04:55:00 AM,ARMED: HANDGUN
9460339,Crime,01/14/2014 04:44:00 AM,TO PROPERTY
9461467,Crime,01/14/2014 04:43:00 AM,$500 AND UNDER
9460355,Crime,01/14/2014 04:21:00 AM,$500 AND UNDER
9461140,Crime,01/14/2014 03:17:00 AM,FORCIBLE ENTRY
9460361,Crime,01/14/2014 03:12:00 AM,$500 AND UNDER
9461691,Crime,01/14/2014 03:00:00 AM,HOME INVASION
9461792,Crime,01/14/2014 03:00:00 AM,OVER $500
$ head -n 10 /tmp/crimesBeats.csv

Looking good. Let’s get them imported into Neo4j:

$ ./neo4j-community-2.2.0/bin/neo4j-import --into /tmp/my-neo --nodes /tmp/crimes.csv --nodes /tmp/beats.csv --nodes /tmp/primaryTypes.csv --relationships /tmp/crimesBeats.csv --relationships /tmp/crimesPrimaryTypes.csv
[*>:45.76 MB/s----------------------------------|PROPERTIES(2)=============|NODE:3|v:118.05 MB/]  4M
Done in 5s 605ms
Prepare node index
[*RESOLVE:64.85 MB-----------------------------------------------------------------------------]  4M
Done in 4s 930ms
Calculate dense nodes
[>:42.33 MB/s-------------------|*PREPARE(7)===================================|CALCULATOR-----]  8M
Done in 5s 417ms
[>:42.33 MB/s-------------|*PREPARE(7)==========================|RELATIONSHIP------------|v:44.]  8M
Done in 6s 62ms
Node --> Relationship
[*>:??-----------------------------------------------------------------------------------------]  4M
Done in 324ms
Relationship --> Relationship
[*LINK-----------------------------------------------------------------------------------------]  8M
Done in 1s 984ms
Node counts
[*>:??-----------------------------------------------------------------------------------------]  4M
Done in 360ms
Relationship counts
[*>:??-----------------------------------------------------------------------------------------]  8M
Done in 653ms
IMPORT DONE in 26s 517ms

Next I updated conf/ to point to my new database:

# Server configuration
# location of the database directory

Now I can start up Neo and start exploring the data:

$ ./neo4j-community-2.2.0/bin/neo4j start
MATCH (:Crime)-[r:CRIME_TYPE]->() 

There are lots more relationships and entities that we could pull out of this data set – what I’ve done is just a start. So if you’re up for some more Chicago crime exploration, the code and instructions explaining how to run it are on github.

Categories: Programming

New course: Take Android app performance to the next level

Android Developers Blog - Tue, 04/14/2015 - 17:40

Posted by Jocelyn Becker, Developer Advocate

Building the next great Android app isn't enough. You can have the most amazing social integration, best API coverage, and coolest photo filters, but none of that matters if your app is slow and frustrating to use.

That's why we've launched our new online training course at Udacity, focusing entirely on improving Android performance. This course complements the Android Performance Patterns video series, focused on giving you the resources to help make fast, smooth, and awesome experiences for users.

Created by Android Performance guru Colt McAnlis, this course reviews the main pillars of performance (rendering, compute, and battery). You'll work through tutorials on how to use the tools in Android Studio to find and fix performance problems.

By the end of the course, you'll understand how common performance problems arise from your hardware, OS, and application code. Using profiling tools to gather data, you'll learn to identify and fix performance bottlenecks so users can have that smooth 60 FPS experience that will keep them coming back for more.

Take the course: Join the conversation and follow along on social at #PERFMATTERS.

Join the discussion on +Android Developers
Categories: Programming

Sponsored Post: OpenDNS, MongoDB, Internap, Aerospike, Nervana, SignalFx, InMemory.Net, Couchbase, VividCortex, Transversal, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • The Cloud Platform team at OpenDNS is building a PaaS for our engineering teams to build and deliver their applications. This is a well-rounded team covering software, systems, and network engineering; expect your code to cut across all layers, from the network to the application. Learn More

  • At Scalyr, we're analyzing multi-gigabyte server logs in a fraction of a second. That requires serious innovation in every part of the technology stack, from frontend to backend. Help us push the envelope on low-latency browser applications, high-speed data processing, and reliable distributed systems. Help extract meaningful data from live servers and present it to users in meaningful ways. At Scalyr, you’ll learn new things, and invent a few of your own. Learn more and apply.

  • Nervana Systems is hiring several engineers for cloud positions. Nervana is a startup based in Mountain View and San Diego working on building a highly scalable deep learning platform on CPUs, GPUs and custom hardware. Deep Learning is an AI/ML technique breaking all the records by a wide margin in state-of-the-art benchmarks across domains such as image & video analysis, speech recognition and natural language processing. Please apply here and mention “” in your message.

  • Linux Web Server Systems Engineer - Transversal. We are seeking an experienced and motivated Linux System Engineer to join our Engineering team. This new role will design, test, install, and provide ongoing daily support of our information technology systems infrastructure. As an experienced Engineer you will have comprehensive capabilities for understanding hardware/software configurations that comprise system, security, and library management, backup/recovery, operating computer systems in different operating environments, sizing, performance tuning, hardware/software troubleshooting and resource allocation. Apply here.

  • UI Engineer - AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data - AppDynamics, a leader in next-generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • MongoDB World brings together over 2,000 developers, sysadmins, and DBAs in New York City on June 1-2 to get inspired, share ideas and get the latest insights on using MongoDB. Organizations like Salesforce, Bosch, the Knot, Chico’s, and more are taking advantage of MongoDB for a variety of ground-breaking use cases. Find out more at but hurry! Super Early Bird pricing ends on April 3.
Cool Products and Services
  • SQL for Big Data: Price-performance Advantages of Bare Metal. When building your big data infrastructure, price-performance is a critical factor to evaluate. Data-intensive workloads with the capacity to rapidly scale to hundreds of servers can escalate costs beyond your expectations. The inevitable growth of the Internet of Things (IoT) and fast big data will only lead to larger datasets, and a high-performance infrastructure and database platform will be essential to extracting business value while keeping costs under control. Read more.

  • Looking for a scalable NoSQL database alternative? Aerospike is validating the future of ACID compliant NoSQL with our open source Key-Value Store database for real-time transactions. Download our free Community Edition or check out the Trade-In program to get started. Learn more.

  • SignalFx just launched an advanced monitoring platform for modern applications that's already processing 10s of billions of data points per day. SignalFx lets you create custom analytics pipelines on metrics data collected from thousands or more sources to create meaningful aggregations--such as percentiles, moving averages and growth rates--within seconds of receiving data. Start a free 30-day trial!

  • InMemory.Net provides a .NET-native in-memory database for analysing large amounts of data. It runs natively on .NET and provides native .NET, COM & ODBC APIs for integration. It also has an easy-to-use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • Benchmark: MongoDB 3.0 (w/ WiredTiger) vs. Couchbase 3.0.2. Even after the competition's latest update, are they more tired than wired? Get the Report.

  • VividCortex goes beyond monitoring and measures the system's work on your MySQL and PostgreSQL servers, providing unparalleled insight and query-level analysis. This unique approach ultimately enables your team to work more effectively, ship more often, and delight more customers.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here:

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required.

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • Site24x7: Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

Give the Power to the Programmers

From the Editor of Methods & Tools - Tue, 04/14/2015 - 14:22
Delegation of power to programmers is a smart idea. It is provably and measurably smarter than leaving the power with managers to design the developer’s work environment, and with IT architects to design the technology that we should program. Devolving the power to create a better working-environment, and to design the technology for our stakeholders, […]

CRAFT conference in Budapest

Coding the Architecture - Simon Brown - Tue, 04/14/2015 - 10:31

I'm heading to Budapest next week for the 2nd annual CRAFT conference, which is about software craftsmanship and modern software development. It was one of my favourite conferences from last year (my talk was called Agility and the essence of software architecture) so I'm really looking forward to going back. I'll be covering software architecture in a workshop, a conference talk and a meetup.

  • Workshop (22nd April) - Simple sketches for diagramming your software architecture - my popular software architecture sketching workshop.
  • Meetup (22nd April) - Software architecture vs code - a short talk at the Full Stack Budapest meetup where I'll be looking at why those software architecture diagrams you have on the wall never quite reflect the code.
  • Talk (24th April) - Software architecture as code - a talk about how we should stop drawing software architecture diagrams in tools like Visio and instead try to extract as much architecture information from the code as possible, supplementing the model where necessary.

CRAFT in 2014

See you there. :-)

Categories: Architecture

Questions with a license to kill in the Sprint Review

Xebia Blog - Tue, 04/14/2015 - 09:19

A team I had been coaching held a sprint review to show what they had achieved and to get feedback from stakeholders. Among these were managers, other teams, enterprise architects, and other interested colleagues.

In the past sprint they had built and realized the automation of part of the Continuous Delivery pipeline. This was quite a big achievement for the team. The organization had been struggling for quite some time to get this working, and the team had realized this in a couple of sprints!

Team - "Anyone has questions or wants to know more?"
Stakeholder - "Thanks for the demo. How does the shown solution deal with 'X'?"

The team replied with a straightforward answer to this relatively simple question.

Stakeholder - "I have more questions related to the presented solution and concerns at corporate level, but this is probably not a good time to go into details."

What just happened and how did the team respond?

First, let me describe how the dialogue continued.

Team - "Let's make time now because it is important. What do you have on your mind?"
Stakeholder - "On corporate level solution Y is defined to deal with the company's concern related to Z compliance. I am concerned with how your solution will deal with this. Please elaborate on this."
[Everybody in the organization knows that Z compliance has been a hot topic during the past year.]

Team - "We have thought of several alternatives to deal with this issue. One of these is to have a coupling with another system that will provide the compliance. Also we see possibilities in altering the ....."
Stakeholder - "The other system has issues coping with this and is not suited for what you want it to do. What are your plans for dealing with this?"

The team replied with more details after which the stakeholder asked even more detailed questions....

How did the team get itself out of this situation?

After a couple of questions and answers the team responded with "Look, the organisation has been struggling to find a working solution for quite some time now and hasn't succeeded. Therefore, we are trying a new and different approach. Since this is new we don't have all the answers yet. Next steps will deal with your concerns."

Team - "Thanks for your feedback and see you all at the next demo!"

Killing a good idea

In the dialogue above between the team and one stakeholder during the sprint review, the stakeholder kept asking detailed questions about specific aspects of the solution. He also related these to well-known corporate issues whose importance was very clear to everyone, thereby, consciously or unconsciously, casting doubt on whether the approach chosen by the team is a good one and should perhaps be abandoned.

This can be especially dangerous if not dealt with appropriately. For instance, managers who were at first supportive of the (good) idea might turn against the approach, even though the idea is a good one.

Dealing with these and other difficult questions

In his book 'Buy-in - saving your good idea from getting shot down' John Kotter describes 4 basic types of attack:

  1. Fear mongering
  2. Death by delay
  3. Confusion
  4. Ridicule

Attacks can be one of these four or any combination. The above attack is a combination of 'Fear mongering' (playing on the fear that important organisational concerns are not properly addressed) and 'Confusion' (asking about many details that are not yet worked out).

In addition, Kotter describes 24 basic attacks. The attack as described above is an example of attack no. 6.

Don't worry, there's no need to remember all 24 responses; they all follow one very simple strategy:

Step 1: Invite the stakeholder(s) to ask their questions,

Step 2: Respect the person asking the question by taking his point seriously,

Step 3: Respond in a reasonable and concise way.

The team did well by inviting the stakeholder to come forward with his questions. This is good marketing to the rest of the stakeholders: it shows the team believes in the idea (their solution) and is confident it can respond to any (critical) question.

Second, the team responded in a respectful way, taking the question seriously as a valid concern, and answering in a concise and reasonable manner.

As Kotter explains, it is not about convincing that one critical stakeholder; it's about not losing the rest of the stakeholders!


"Buy-in - saving your good idea from getting shot down" - John P. Kotter & Lorne A. Whitehead,

"24 attacks & responses" - John P. Kotter & Lorne Whitehead,

Cui Bono

Herding Cats - Glen Alleman - Tue, 04/14/2015 - 00:57

When we hear a suggestion about a process that inverts the normal process based on a governance framework - say, the microeconomics of software development - we need to ask: who benefits? How would that suggestion be tangibly beneficial to the recipient of the now-inverted process?

Estimates, for example, are for the business. Why would the business no longer want an estimate of the cost, schedule, or technical performance of the provided capabilities?

In the world of spending money to produce value, the one that benefits should be, must be, the one paying for that value, who therefore has a compelling interest in the information needed to make decisions about how the money is spent.

When that relationship between paying and benefit is inverted, then the path to Qui Bono is inverted as well.

In the end, follow the money must be the basis for assessing the applicability of any suggestion. If it is suggested that decision making can be done in the absence of estimating the impacts of those decisions, ask who benefits. If it's not those paying for the value, then Qui Bono no longer applies.

Categories: Project Management

More nxlog logging tricks

Agile Testing - Grig Gheorghiu - Tue, 04/14/2015 - 00:11
In a previous post I talked about "Sending Windows logs to Papertrail with nxlog". In the meantime I had to work through a couple of nxlog issues that weren't quite obvious to solve -- hence this quick post.

Scenario 1: You don't want to send a given log file to Papertrail

My solution:

In this section:

<Input MyApp1>
  # Monitor MyApp1 log files
  Module im_file
  File 'C:\\MyApp1\\logs\\*.log'
  Exec $Message = $raw_event;
  Exec if $Message =~ /GET \/ping/ drop();
  Exec if file_name() =~ /.*\\(.*)/ $SourceName = $1;
  SavePos TRUE
  Recursive TRUE
</Input>

add a line which drops the current log line if the file name contains the pattern you are looking to skip. For example, for a file name called skip_this_one.log (from the same log directory), the new stanza would be:
<Input MyApp1>
  # Monitor MyApp1 log files
  Module im_file
  File 'C:\\MyApp1\\logs\\*.log'
  Exec $Message = $raw_event;
  Exec if $Message =~ /GET \/ping/ drop();
  Exec if file_name() =~ /skip_this_one.log/ drop();
  Exec if file_name() =~ /.*\\(.*)/ $SourceName = $1;
  SavePos TRUE
  Recursive TRUE
</Input>
Scenario 2: You want to prefix certain log lines depending on their directory of origin
Assume you have a test app and a dev app running on the same box, with the same exact log format, but with logs saved in different directories, so that in the Input sections you would have 
File 'C:\\MyTestApp\\logs\\*.log' for the test app and File 'C:\\MyDevApp\\logs\\*.log' for the dev app.
The only solution I found so far was to declare a filewatcher_transformer Processor section for each app. The default filewatcher_transformer section I had before looked like this:

<Processor filewatcher_transformer>
  Module pm_transformer
  # Uncomment to override the program name
  # Exec $SourceName = 'PROGRAM NAME';
  Exec $Hostname = hostname();
  OutputFormat syslog_rfc5424
</Processor>
I created instead these 2 sections:
<Processor filewatcher_transformer_test>
  Module pm_transformer
  # Uncomment to override the program name
  # Exec $SourceName = 'PROGRAM NAME';
  Exec $SourceName = "TEST_" + $SourceName;
  Exec $Hostname = hostname();
  OutputFormat syslog_rfc5424
</Processor>

<Processor filewatcher_transformer_dev>
  Module pm_transformer
  # Uncomment to override the program name
  # Exec $SourceName = 'PROGRAM NAME';
  Exec $SourceName = "DEV_" + $SourceName;
  Exec $Hostname = hostname();
  OutputFormat syslog_rfc5424
</Processor>
As you can see, I chose to prefix $SourceName, which is the name of the log file in this case, with either TEST_ or DEV_ depending on the app.
There is one thing remaining, which is to define a specific route for each app. Before, I had a common route for both apps:
<Route 1>
  Path MyAppTest, MyAppDev => filewatcher_transformer => syslogout
</Route>
I replaced the common route with the following 2 routes, each connecting an app with its respective Processor section.
<Route 1>
  Path MyAppTest => filewatcher_transformer_test => syslogout
</Route>

<Route 2>
  Path MyAppDev => filewatcher_transformer_dev => syslogout
</Route>
At this point, I restarted the nxlog service and I started to see log filenames in Papertrail of the form DEV_errors.log and TEST_errors.log.

The Realtime API: In memory mode, debug tools, and more

Google Code Blog - Mon, 04/13/2015 - 21:20

Posted by Cheryl Simon Retzlaff, Software Engineer on the Realtime API team

Originally posted to the Google Apps Developer blog

Real-time collaboration is a powerful feature for getting work done inside Google docs. We extended that functionality with the Realtime API to enable you to create Google-docs style collaborative applications with minimal effort.

Integration of the API becomes even easier with a new in memory mode, which allows you to manipulate a Realtime document using the standard API without being connected to our servers. No user login or authorization is required. This is great for building applications where Google login is optional, writing tests for your app, or experimenting with the API before configuring auth.

The Realtime debug console lets you view, edit and debug a Realtime model. To launch the debugger, simply execute the debug command in the JavaScript console in Chrome.

Finally, we have refreshed the developer guides to make it easier for you to learn about the API as a new or advanced user. Check them out at

For details on these and other recent features, see the release note.

Categories: Programming

Best Creativity Books

NOOP.NL - Jurgen Appelo - Mon, 04/13/2015 - 20:34

After my lists of mindfulness books and happiness books, here you can find the 20 Best Creativity Books in the World.

This list is created from the books on GoodReads tagged with “creativity”, sorted using an algorithm that favors number of reviews, average rating, and recent availability.

The post Best Creativity Books appeared first on NOOP.NL.

Categories: Project Management

Mobile Sync for Mongo

Eric.Weblog() - Eric Sink - Mon, 04/13/2015 - 19:00

We here at Zumero have been exploring the possibility of a mobile sync solution for MongoDB.

We first released our Zumero for SQL Server product almost 18 months ago, and today there are bunches of people using mobile apps which sync using our solution.

But not everyone uses SQL Server, so we often wonder what other database backends we should consider supporting. In this blog entry, I want to talk about some progress we've made toward a "Zumero for Mongo" solution and "think out loud" about the possibilities.

Background: Mobile Sync

The basic idea of mobile sync is to keep a partial copy of the database on the mobile device so the app doesn't have to go back to the network for every single CRUD operation. The benefit is an app that is faster, more reliable, and works offline. The flip side of that coin is the need to keep the mobile copy of the database synchronized with the data on the server.

Sync is tricky, but as mobile continues its explosive growth, this approach is gaining momentum:

If the folks at Mongo are already working on something in this area, we haven't seen any sign of it. So we decided to investigate some ideas.

Pieces of the puzzle

In addition to the main database (like SQL Server or MongoDB or whatever), a mobile sync solution has three basic components:

Mobile database
  • Runs on the mobile device as part of the app

  • Probably an embedded database library

  • Keeps a partial replica of the main database

  • Wants to be as similar as possible to the main database

Sync server
  • Monitors changes made by others to the main database

  • Sends incremental changes back and forth between clients and the main database

  • Resolves conflicts, such as when two participants want to change the same data

  • Manages authentication and permissions for mobile clients

  • Filters data so that each client only gets what it needs

Sync client
  • Monitors changes made by the app to the mobile database

  • Talks over the network to the sync server

  • Pushes and pulls incremental changes to keep the mobile database synchronized
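The sync-client and sync-server responsibilities above can be sketched in miniature. This is a hypothetical Python illustration of the general pattern, not Zumero's actual protocol: a client keeps a local replica, queues its own writes, and exchanges incremental changes with the server using a change-log position as its sync cursor. Conflict resolution, permissions, and filtering are deliberately omitted.

```python
class FakeServer:
    """Stand-in for the sync server: the main database modeled as an
    ordered change log (hypothetical; a real server tracks far more)."""

    def __init__(self):
        self.log = []  # committed changes, in commit order: (key, value)

    def push(self, changes):
        # Accept a batch of client edits into the main change log.
        self.log.extend(changes.items())

    def pull(self, since):
        # Return all changes after position `since`, plus the new cursor.
        return dict(self.log[since:]), len(self.log)


class SyncClient:
    """Minimal sync client: local replica + pending queue + sync cursor."""

    def __init__(self, server):
        self.server = server
        self.local = {}      # partial replica kept on the mobile device
        self.pending = {}    # app writes not yet pushed to the server
        self.cursor = 0      # server log position at the last sync

    def write(self, key, value):
        # App CRUD goes to the local replica first, so it works offline.
        self.local[key] = value
        self.pending[key] = value

    def sync(self):
        # Push local edits, then pull everyone else's changes.
        self.server.push(self.pending)
        self.pending = {}
        changes, self.cursor = self.server.pull(self.cursor)
        self.local.update(changes)
```

Two clients sharing one server converge after each calls sync(); a real system must also resolve conflicting writes to the same key, where this sketch silently does last-write-wins.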

    For this blog entry, I want to talk mostly about the mobile database. In our Zumero for SQL Server solution, this role is played by SQLite. There are certainly differences between SQL Server and SQLite, but on the whole, SQLite does a pretty good job pretending to be SQL Server.

    What embedded database could play this role for Mongo?

    This question has no clear answer, so we've been building a lightweight Mongo-compatible database. Right now it's just a prototype, but its development serves the purpose of helping us explore mobile sync for Mongo.

    Embeddable Lite Mongo

    Or "Elmo", for short.

    Elmo is a database that is designed to be as Mongo-compatible as it can be within the constraints of mobile devices.

    In terms of the status of our efforts, let me begin with stuff that does NOT work:

    • Sharding is an example of a Mongo feature that Elmo does not support and probably never will.

    • Elmo also has no plans to support any feature which requires embedding a JavaScript engine, since that would violate Apple's rules for the App Store.

    • We do hope to support full text search ($text, $meta, etc), but this is not yet implemented.

    • Similarly, we have not yet implemented any of the geo features, but we consider them to be within the scope of the project.

    • Elmo does not support capped collections, and we are not yet sure if it should.

    Broadly speaking, except for the above, everything works. Mostly:

    • All documents are stored in BSON

    • Except for JS code, all BSON types are supported

    • Comparison and sorting of BSON values (including different types) works

    • All basic CRUD operations are implemented

    • The update command supports all the update operators except $isolated

    • The update command supports upsert as well

    • The findAndModify command includes full support for its various options

    • Basic queries are fully functional, including query operators, projection, and sorting

    • The matcher supports Mongo's notion of query predicates matching any element of an array

    • CRUD operations support resolution of paths into array subobjects, like x.y to {x:[{y:2}]}

    • Regex works, with support for the i, s, and m options

    • The positional operator $ works in update and projection

    • Cursors and batchSize are supported

    • The aggregation pipeline is supported, including all expression elements and all stages (except geo)
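The array-matching and path-resolution behavior described above (a predicate on x.y matching {x:[{y:2}]}) can be sketched as follows. This is a hypothetical Python illustration of the semantics, not Elmo's actual F# matcher: a dotted path is resolved by descending into subdocuments and fanning out across array elements, and a predicate matches if any resolved value matches.

```python
def resolve_path(doc, path):
    """Return every value reachable via a dotted path, fanning out
    across array elements the way Mongo's matcher does."""
    parts = path.split(".")

    def walk(value, i):
        if i == len(parts):
            yield value
            if isinstance(value, list):      # a leaf array also exposes
                yield from value             # its individual elements
            return
        key = parts[i]
        if isinstance(value, dict) and key in value:
            yield from walk(value[key], i + 1)
        elif isinstance(value, list):
            for item in value:               # retry the same path step
                yield from walk(item, i)     # on each array element

    return list(walk(doc, 0))


def matches(doc, path, target):
    """True if any value at `path` equals `target` (Mongo-style)."""
    return any(v == target for v in resolve_path(doc, path))
```

For example, matches({"x": [{"y": 2}]}, "x.y", 2) holds because the matcher tries the y step against each element of the array at x.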

    More caveats:

    • Support for indexes is being implemented, but they don't actually speed anything up yet.

    • The dbref format is tolerated, but is not [yet] resolved.

    • The $explain feature is not implemented yet.

    • For the purpose of storing BSON blobs, Elmo is currently using SQLite. Changing this later will be straightforward, as we're basically just using SQLite as a key-value store, so the API between all of Elmo's CRUD logic and the storage layer is not very wide.
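The storage-layer design choice above (SQLite used purely as a key-value store for blobs, behind a narrow API) can be sketched in a few lines. This is a hypothetical Python illustration of the idea, not Elmo's actual code; keeping the interface this small is exactly what makes the storage layer easy to swap out later.

```python
import sqlite3


class BlobStore:
    """SQLite used purely as a key-value store: one table, get/put/delete.
    The CRUD logic above this layer never sees SQL."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS docs (id TEXT PRIMARY KEY, bson BLOB)")

    def put(self, doc_id, blob):
        # Insert or overwrite the blob stored under doc_id.
        self.db.execute(
            "INSERT OR REPLACE INTO docs (id, bson) VALUES (?, ?)",
            (doc_id, blob))
        self.db.commit()

    def get(self, doc_id):
        # Return the stored blob, or None if absent.
        row = self.db.execute(
            "SELECT bson FROM docs WHERE id = ?", (doc_id,)).fetchone()
        return row[0] if row else None

    def delete(self, doc_id):
        self.db.execute("DELETE FROM docs WHERE id = ?", (doc_id,))
        self.db.commit()
```

Because the API is just put/get/delete keyed by document id, replacing SQLite with any other embedded key-value engine would not disturb the layers above it.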

    Notes on testing:

    • Although Elmo, being mobile-focused, does not need an actual server, it has one, simply so that we can run the jstests suite against it.

    • The only test suite sections we have worked on are jstests/core and jstests/aggregation.

    • Right now, Elmo can pass 311 of the test cases from jstests.

    • We have never tried contacting Elmo with any client driver except the mongo shell. So this probably doesn't work yet.

    • Elmo's server only supports the new style protocol, including OP_QUERY, OP_GET_MORE, OP_KILL_CURSORS, and OP_REPLY. None of the old "fire and forget" messages are implemented.

    • Where necessary to make a test case pass, Elmo tries to return the same error numbers as Mongo itself.

    • All effort thus far has been focused on making Elmo functional, with no effort spent on performance.

    How Elmo should work:

    • In general, our spec for Elmo's behavior is the MongoDB documentation plus the jstests suite.

    • In cases where the Mongo docs seem to differ from the actual behavior of Mongo, we try to make Elmo behave like Mongo does.

    • In cases where the Mongo docs are silent, we often stick a proxy in front of the Mongo server and dump all the messages so we can see exactly what is going on.

    • We occasionally consult the Mongo server source code for reference purposes, but no Mongo code has been copied into Elmo.

    Notes on the code:

    • Elmo is written in F#, which was chosen because it's an insanely productive environment and we want to move quickly.

    • But while F# is a great language for this exploratory prototype, it may not be the right choice for production, simply because it would confine Elmo use cases to Xamarin, and Miguel's world domination plan is not quite complete yet. :-)

    • The Elmo code is now available on GitHub. Currently the license is GPLv3, which makes it incompatible with production use on mobile platforms; that's okay for now, since Elmo isn't ready for production use anyway. We'll revisit licensing issues later.

    Next steps:

    • Our purpose in this blog entry is to start conversations with others who may be interested in mobile sync solutions for Mongo.

    • Feel free to post a question or comment or whatever as an issue on GitHub:

    • Or email me:

    • Or Tweet: @eric_sink

    • If you're interested in a face-to-face conversation or a demo, we'll be at MongoDB World in NYC at the beginning of June.


Three Fast Data Application Patterns

This is guest post by John Piekos, VP Engineering at VoltDB. I understand this is a little PRish, but I think the ideas are solid.

The focus of many developers and architects in the past few years has been on Big Data, specifically mining historical intelligence from the Data Lake (usually a Hadoop stack containing terabytes to petabytes of data).

Now, product architects are asking how they can use this business intelligence for competitive advantage. As a result, application developers have come to see the value of using, and acting in real time on, streams of fast data; combined with OLAP reporting wisdom, they can realize the benefits of both fast data and Big Data. A new set of application patterns has emerged: applications designed to capture value from fast-moving streaming data before it reaches Hadoop.

At VoltDB we call this new breed of applications "fast data" applications. The goal of these applications is not just to push data into Hadoop as quickly as possible, but also to capture real-time value from the data the moment it arrives.

Because traditional databases historically haven’t been fast enough, developers have been forced to go to great effort to build fast data applications - they build complex multi-tier systems often involving a handful of tools typically utilizing a dozen or more servers.  However, a new class of database technology, especially NewSQL offerings, has changed this equation.

If you have a relational database that is fast enough, highly available, and able to scale horizontally, the ability to build fast data applications becomes less esoteric and much more manageable. Three new real-time application patterns have emerged as the necessary dataflows to implement real-time applications. These patterns, enabled by new, fast database technology, are:

Categories: Architecture

The Flaw of Averages and Not Estimating

Herding Cats - Glen Alleman - Mon, 04/13/2015 - 16:04

There is a popular notion in the #NoEstimates paradigm that empirical data is the basis of forecasting the future performance of a development project. In principle this is true, but the concept is incomplete in the way it is used. Let's start with the data source used for this conjecture.

There are 12 samples in the example used by #NoEstimates, in this case stickies per week. From this time series an average is calculated for the future. This is the empirical data used to estimate in the No Estimates paradigm. The average is 18.1667, or just 18 stickies per week.



But we all have read, or should have read, Sam Savage's The Flaw of Averages. This is a very nice populist book. By populist I mean an easily accessible text with little or no mathematics in it, although Savage's own work, with his tool set, is highly mathematical.

There is a simple set of tools that can be applied to time series analysis, using past performance to forecast the future performance of the system that created that time series. The tool is R, and it is free for all platforms.

Here's the R code for performing a statistically sound forecast to estimate the possible range of values the past empirical stickies can take on in the future.

Put the time series in an Excel file and save it as a text file named Book1.

> library(forecast) - load the forecast package, which provides forecast() and its plotting
> Book1 = read.table("Book1.txt") - read the saved time series data into R
> SPTS = ts(Book1) - apply the Time Series function in R to convert this data to a time series
> SPFIT = arima(SPTS) - apply the simple ARIMA function to the time series
> SPFCST = forecast(SPFIT) - build a forecast from the ARIMA outcome
> plot(SPFCST) - plot the results

Here's that plot. It shows the 80% and 90% confidence bands for the possible future outcomes, derived from past performance - the empirical data.

The 80% range is 27 to 10 and the 90% range is 30 to 5.


So the killer question.

Would you bet your future on a probability of success with a +65 to -72% range of cost, schedule, or technical performance of the outcomes?

I hope not. This is a flawed example, I know: too small a sample, no adjustment of the ARIMA factors, just a quick raw assessment of the data used in some quarters as a replacement for actually estimating future performance. But this assessment shows how empirical data COULD support making decisions about future outcomes in the presence of uncertainty, using a past time series, once the naive assumptions of small sample size and wide variance are corrected.
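The same point can be made without R. Below is a small Python sketch using hypothetical weekly sticky counts - invented for illustration so that their mean matches the ~18.17 average discussed above, not the actual #NoEstimates data. The single average conceals a spread that even a naive empirical 80% band makes obvious.

```python
import statistics

# Hypothetical weekly sticky counts -- invented for illustration so
# that their mean matches the ~18.17 average discussed in the text.
weeks = [25, 12, 18, 30, 9, 20, 15, 22, 11, 28, 14, 14]

mean = statistics.mean(weeks)        # the single "flaw of averages" number
stdev = statistics.stdev(weeks)      # the spread that the average conceals

# A naive empirical band for next week: the 10th..90th percentiles of
# the observed data, loosely analogous to the ARIMA plot's 80% band.
deciles = statistics.quantiles(weeks, n=10)
low, high = deciles[0], deciles[-1]

print(f"mean {mean:.2f}, stdev {stdev:.2f}, 80% band {low:.1f}..{high:.1f}")
```

Committing to "18 stickies per week" ignores that roughly one week in five falls outside a band of about 10 to 29 - the same lesson the confidence-band plot teaches.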

The End

If you hear that you can make decisions without estimating, that's pretty much a violation of all established principles of microeconomics and statistical forecasting. When the answer comes back "we used empirical data," then take your time series of empirical data, download R, install the needed packages, put the data in a file, apply the functions above, and see if you really want to commit to spending other people's money with a confidence range of +65% to -72% of performing like you did in the past. I sure hope not!

Related articles:

  • Flaw of Averages
  • Estimating Probabilistic Outcomes? Of Course We Can!
  • Critical Success Factors of IT Forecasting
  • Herding Cats: Empirical Data Used to Estimate Future Performance
  • Some More Background on Probability, Needed for Estimating
  • Forecast, Automatic Routines vs. Experience
  • Five Estimating Pathologies and Their Corrective Actions
Categories: Project Management

Why Comments Are Stupid, a Real Example

Making the Complex Simple - John Sonmez - Mon, 04/13/2015 - 16:00

Nothing seems to stir up religious debate more so than when I write a post or do a YouTube video that mentions how most of the time comments are not necessary and are actually more harmful than helpful. I first switched sides in this debate when I read the second edition of Code Complete. In […]

The post Why Comments Are Stupid, a Real Example appeared first on Simple Programmer.

Categories: Programming

By: Excellent Resources for Business Analysts | Practical Analyst

Software Requirements Blog - - Mon, 04/13/2015 - 14:11

[…] Karl Wiegers are participants. I learn something new every time I visit the Seilevel board. The blog is also […]

Categories: Requirements

Who Builds a House without Drawings?

Herding Cats - Glen Alleman - Mon, 04/13/2015 - 05:46

This month's issue of Communications of the ACM, has a Viewpoint article titled "Who Builds a House without Drawing Blueprints?" where two ideas are presented:

  • It is a good idea to think about what we are about to do before we do it.
  • If we're going to write a good program, we need to think above the code level.

The example for the last bullet is that there are many coding methods - test-driven development, agile programming, and others ...

If the only sorting algorithm we know is a bubble sort, no coding method will produce code that sorts in O(n log n) time.

Not only do we need to have some sense of what capabilities the software needs to deliver in exchange for its cost, but also whether those capabilities meet the needs. What are the Measures of Effectiveness and Measures of Performance the software must fulfill? In what order must they be fulfilled? What supporting documentation is needed for the resulting product or service in order to maintain it over its life cycle?

If we do not start with a specification, every line of code we write is a patch.†

This notion brings up several other gaps in our quest to build software that fulfills the needs of those paying. There are several conjectures floating around that willfully ignore the basic principles of providing solutions acceptable to the business. Since the business operates on the principles of Microeconomics of decision making, let's look at developing software from the point of view of those paying for our work. It is conjectured that ...

  • Good code is its own documentation.
  • We can develop code just by sitting down and doing it. Our mob of coders can come up with the best solution as they go.
  • We don't need to estimate the final cost and schedule; we'll just use some short-term, highly variable empirical data to show us the average progress and project that forward.
  • All elements of the software can be sliced to a standard size and we'll use Kanban to forecast future outcomes.
  • We're bad at estimating and our managers misuse those numbers, so the solution is to Not Estimate and that will fix the root cause of those symptoms of Bad Management.

There are answers to each of these in the literature on the immutable principles of project management, but I came across a dialog that illustrates the naïveté around spending other people's money to develop software without knowing how much, what, and when.

Here's a conversation - following Galileo Galilei's Dialogue Concerning the Two Chief World Systems - between Salviati, who argues for the principles of celestial mechanics, and Simplicio, a dedicated follower who holds that those principles have no value for him, as he sees them as an example of dysfunction.

I'll co-opt the actual social media conversation and put its words in the mouths of Salviati and Simplicio as the actors. The two people on social media are both fully qualified to be Salviati. Galileo used Simplicio as a double entendre to make his point, so neither is Simplicio here:

  • Simplicio - My first born is a novice software developer but is really bad at math, especially statistics, and at those pesky estimating requests from the managers he works for. He's thinking he needs to find a job where they let him develop code, where there is #NoMath needed to make those annoying estimates.
  • Salviati - Maybe you tell him you're not suggesting he not learn math, but simply reduce his dependence on math in his work, since it is hard and he's not very good at it.
  • Simplicio - Yes, lots of developers struggle with answering estimate questions based on statistics and other known and tested approaches. I'm suggesting he find some alternative to having to make estimates, since he's so bad at them.
  • Salviati - I'll agree for the moment, since he doesn't appear to be capable of learning the needed math. Perhaps he should seek other ways to answering the questions asked of him by those paying his salary. Ways in which he can apply #NoMath to answering those questions needed by the business people to make decisions.
  • Simplicio - Maybe he can just pick the most important thing to work on first, do that, then go back and start the next most important thing, and do that until he is done. Then maybe those paying him will stop asking: when will you be done, how much will it cost when that day arrives, and, oh yes, all that code you developed will meet the needed capabilities I'm paying you to develop, right?
  • Salviati - Again, this might be a possible solution to your son's dilemma. After all, we're not all good at using statistics and other approaches to estimate the numbers needed to make business decisions. Since we really like to just start coding, maybe the idea of #NoMath is a good one and he can just be an excellent coder. Those paying for his work really only want it to work on the needed day, for the expected cost, and provide the needed capabilities - all within the confidence levels needed to fulfill their business case needs so they can stay in business.
  • Simplicio - He heard of this idea on the internet: collect old data and use it to project the new data. That would of course not be the same as analyzing the future risks, the changing sizes of work, and all the other probabilistic outcomes. Yes, that's the idea: add up all the past estimates, find the average, and use that.
  • Salviati - That might be useful for him, but make sure you caution him that those numbers from the past may not represent the numbers in the future if he doesn't assess what capabilities are needed in the future and what the structure of the solution is for those capabilities. And while he's at it, make sure the uncertainties in the future are the same as the uncertainties in the past; otherwise those past numbers are pretty much worthless for making decisions about the future.
  • Simplicio - Sure, but at his work, his managers abuse those numbers and take them as point values and ignore the probabilistic ranges he places on them. His supervisor - the one with the pointy hair - simply doesn't recognize that all project work is probabilistic and wants his developers to just do it.
  • Salviati - Maybe your son can ask his supervisor's boss - the one who provides the money for his work - the Five Whys as to why he even needs an estimate. Maybe that person will be happy to have your son spend his money with no need to know how much it will cost in the end, or when he'll be done, or what will really be done when the money and time run out.
  • Simplicio - Yes, that's the solution. All those books, classes, and papers he should have read, all those tools he could have used, really don't matter any more. He can go back and tell the person paying for the work that he can produce the result without using any math whatsoever: just take whatever he is producing, one slice at a time, and eventually the payer will get what he needs to fulfill his business case, hopefully before time and money run out.

† Viewpoint: Who Builds a House without Drawing Blueprints?, Leslie Lamport, CACM, Vol.58 No.4, pp. 38-41.

Categories: Project Management

SPaMCAST 337 – Agile Release Plan, Baselining Software, Executing Communication

Listen Now

Subscribe on iTunes

In this episode of the Software Process and Measurement Cast we feature three columns!  The first is our essay on Agile release plans.  Even after 12 or more years with Agile, we are still asked what we will deliver, when features will be delivered, and how much the project will cost.  Agile release plans are a tool to answer those questions.  Our second column this week is from the Software Sensei, Kim Pries. Kim asks why baselining is so important, positing that if we do not baseline, we cannot tell whether a change is negative, positive, or indifferent; we simply do NOT know. Finally, Jo Ann Sweeney completes the communication cycle in her Explaining Change column by discussing delivery, with a special focus on social media.

Call to action!

Reviews of the podcast help to attract new listeners.  Can you write a review of the Software Process and Measurement Cast and post it on the podcatcher of your choice?  Whether you listen on iTunes or any other podcatcher, a review will help grow the podcast!  Thank you in advance!

Re-Read Saturday News

The Re-Read Saturday focus on Eliyahu M. Goldratt and Jeff Cox’s The Goal: A Process of Ongoing Improvement began on February 21st. The Goal has been hugely influential because it introduced the Theory of Constraints, which is central to lean thinking. The book is written as a business novel. Visit the Software Process and Measurement Blog and catch up on the re-read.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast.

Dead Tree Version or Kindle Version 

I am beginning to think of which book will be next. Do you have any ideas?

Upcoming Events

QAI Quest 2015
April 20 -21 Atlanta, GA, USA
Scale Agile Testing Using the TMMi

DCG will also have a booth!

Next SPaMCast

The next Software Process and Measurement Cast will feature our interview with Stephen Parry.  Stephen is a returning interviewee.  We discussed adaptable organizations. Stephen recently wrote: “Organizations which are able to embrace and implement the principles of Lean Thinking are inevitably known for three things: vision, imagination and – most importantly of all – implicit trust in their own people.” We discussed why trust, vision and imagination have to be more than just words in a vision or mission statement to get value out of lean and Agile.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Categories: Process Management