Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Playing with the two most interesting new features of elasticsearch 1.3.0

Gridshore - Fri, 07/25/2014 - 11:47

Just a few days ago elasticsearch released version 1.3.0 of their flagship product. Two new features stand out for me. The first is the much anticipated Top hits aggregation. Basically this is what is called grouping: you want to group certain items based on one characteristic, but within each group you want the best matching result(s) based on score. The second very important feature is the new support for scripts: better security options through sandboxed script languages.

In this blog post I am going to explain and show the top_hits feature as well as the new scripting support.


Top hits

I am going to show a very simple example of top hits using my music index. This index contains all the songs I have in my iTunes library. The first step is to find songs by genre. The following query returns the (default) 10 hits for an implicit match_all query, along with the requested terms aggregation.

GET /mymusic/_search
{
  "aggs": {
    "byGenre": {
      "terms": {
        "field": "genre",
        "size": 10
      }
    }
  }
}

The response is of the format:

{
	"hits": {},
	"aggregations": {
		"byGenre": {
			"buckets": [
				{"key":"rock","doc_count":1910},
				...
			]
		}
    }
}

Now we add a query to the request: songs containing the word love in the title.

GET /mymusic/_search
{
  "query": {
    "match": {
      "name": "love"
    }
  }, 
  "aggs": {
    "byGenre": {
      "terms": {
        "field": "genre",
        "size": 10
      }
    }
  }
}

Now we have fewer hits, but still a number of buckets, each showing the number of songs within it that match our query. The biggest change is the score in the returned hits. In the previous query the score was always 1; now the scores differ because of the query we execute. The highest scoring hit is the song Love by The Mission. The genre for this song is Rock and the song is from the year 1990. Time to introduce the top hits aggregation. With this query we can return the top song containing the word love in the title per genre.

GET /mymusic/_search
{
  "query": {
    "match": {
      "name": "love"
    }
  },
  "aggs": {
    "byGenre": {
      "terms": {
        "field": "genre",
        "size": 5
      },
      "aggs": {
        "topFoundHits": {
          "top_hits": {
            "size": 1
          }
        }
      }
    }
  }
}

Again we get hits, no different from the previous query. The interesting part is the aggs section. Here we add a sub-aggregation to the byGenre aggregation. This aggregation is called topFoundHits and is of type top_hits; we return only the best hit per genre. The next code block shows the part of the response with the top hits. I removed most of the content of the _source field in the top_hits to keep the response shorter.

{
   "took": 4,
   "timed_out": false,
   "_shards": {
      "total": 3,
      "successful": 3,
      "failed": 0
   },
   "hits": {
      "total": 141,
      "max_score": 0,
      "hits": []
   },
   "aggregations": {
      "byGenre": {
         "buckets": [
            {
               "key": "rock",
               "doc_count": 52,
               "topFoundHits": {
                  "hits": {
                     "total": 52,
                     "max_score": 4.715253,
                     "hits": [
                        {
                           "_index": "mymusic",
                           "_type": "itunes",
                           "_id": "4147",
                           "_score": 4.715253,
                           "_source": {
                              "name": "Love",
                           }
                        }
                     ]
                  }
               }
            },
            {
               "key": "pop",
               "doc_count": 39,
               "topFoundHits": {
                  "hits": {
                     "total": 39,
                     "max_score": 3.3341873,
                     "hits": [
                        {
                           "_index": "mymusic",
                           "_type": "itunes",
                           "_id": "11381",
                           "_score": 3.3341873,
                           "_source": {
                              "name": "Love To Love You",
                           }
                        }
                     ]
                  }
               }
            },
            {
               "key": "alternative",
               "doc_count": 12,
               "topFoundHits": {
                  "hits": {
                     "total": 12,
                     "max_score": 4.1945505,
                     "hits": [
                        {
                           "_index": "mymusic",
                           "_type": "itunes",
                           "_id": "7889",
                           "_score": 4.1945505,
                           "_source": {
                              "name": "Love Love Love",
                           }
                        }
                     ]
                  }
               }
            },
            {
               "key": "b",
               "doc_count": 9,
               "topFoundHits": {
                  "hits": {
                     "total": 9,
                     "max_score": 3.0271564,
                     "hits": [
                        {
                           "_index": "mymusic",
                           "_type": "itunes",
                           "_id": "2549",
                           "_score": 3.0271564,
                           "_source": {
                              "name": "First Love",
                           }
                        }
                     ]
                  }
               }
            },
            {
               "key": "r",
               "doc_count": 7,
               "topFoundHits": {
                  "hits": {
                     "total": 7,
                     "max_score": 3.0271564,
                     "hits": [
                        {
                           "_index": "mymusic",
                           "_type": "itunes",
                           "_id": "2549",
                           "_score": 3.0271564,
                           "_source": {
                              "name": "First Love",
                           }
                        }
                     ]
                  }
               }
            }
         ]
      }
   }
}

Did you notice a problem with my analyser for genre? Hint: R&B!
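What happened is that the standard analyzer splits R&B on the ampersand, which is why r and b show up as separate bucket keys. As a sketch of a fix (assuming my index and type names, and that the index can be recreated and reindexed), genre could be mapped as a not_analyzed string:

PUT /mymusic
{
  "mappings": {
    "itunes": {
      "properties": {
        "genre": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}

With that mapping the terms aggregation would return R&B as a single bucket key.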

More information on the top_hits aggregation can be found here:

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-metrics-top-hits-aggregation.html

Scripting

Elasticsearch has had support for scripts for a long time. The default scripting language was and is mvel up to version 1.3; it will change to groovy in 1.4. Mvel is not a well known scripting language. Its biggest advantage is that it is very powerful; its disadvantage is that it is too powerful. Mvel does not come with a sandbox principle, therefore it is possible to write some very nasty scripts, even when only doing a query. This was very well shown by a colleague of mine (Byron Voorbach), who created a query to read private keys on the machines of developers who did not safeguard their elasticsearch instance. Therefore dynamic scripting was switched off by default in version 1.2.

This came with a very big disadvantage: it was no longer possible to use the function_score query without resorting to scripts stored on the server. Version 1.3 of elasticsearch introduces a much better way. You can now use sandboxed scripting languages like groovy and keep the flexible approach. Groovy can be configured to restrict which object creation and method calls are allowed. More information about this is provided in the elasticsearch documentation about scripting.

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html
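As a sketch of how this is configured (based on the 1.3 defaults as I understand them; verify against the documentation above), dynamic scripting is controlled by the script.disable_dynamic setting in elasticsearch.yml. The value sandbox allows dynamic scripts only for sandboxed languages such as groovy:

# elasticsearch.yml
# sandbox (the 1.3 default): dynamic scripts allowed only for sandboxed languages
# true: no dynamic scripts at all, false: all dynamic scripts allowed (dangerous)
script.disable_dynamic: sandbox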

Next is an example query against my music index, which contains all the songs from my music library. It queries all the songs from the year 2000 onwards and calculates the score based on the year, so the newest songs get the highest score. And yes, I know a sort by year desc would have given the same result.

GET mymusic/_search
{
  "query": {
    "function_score": {
      "query": {
        "range": {
          "year": {
            "gte": 2000
          }
        }
      },
      "functions": [
        {
          "script_score": {
            "lang": "groovy", 
            "script": "_score * doc['year'].value"
          }
        }
      ]
    }
  }
}

The scores now become high. Since we execute a range query, each matching document gets a base score of one; the function_score then multiplies that score by the year, so the end score is the year. I also added year as the only field to return.
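The final request is not shown in the post, but as a sketch (my reconstruction, adding a top-level fields parameter to the query above) it would look like this:

GET mymusic/_search
{
  "fields": ["year"],
  "query": {
    "function_score": {
      "query": {
        "range": {
          "year": {
            "gte": 2000
          }
        }
      },
      "functions": [
        {
          "script_score": {
            "lang": "groovy",
            "script": "_score * doc['year'].value"
          }
        }
      ]
    }
  }
}

Some of the results then are: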

{
   "took": 4,
   "timed_out": false,
   "_shards": {
      "total": 3,
      "successful": 3,
      "failed": 0
   },
   "hits": {
      "total": 2895,
      "max_score": 2014,
      "hits": [
         {
            "_index": "mymusic",
            "_type": "itunes",
            "_id": "12965",
            "_score": 2014,
            "fields": {
               "year": [
                  "2014"
               ]
            }
         },
         {
            "_index": "mymusic",
            "_type": "itunes",
            "_id": "12975",
            "_score": 2014,
            "fields": {
               "year": [
                  "2014"
               ]
            }
         }
      ]
   }
}

Next up is the last sample, a combination of top_hits and scripting.

Top hits with scripting

We start from the top_hits sample against my music index. Now we want to sort the buckets on the score of the best matching document in each bucket; the default is to sort on the number of documents in the bucket. As mentioned in the documentation, you need a trick to do this:

The top_hits aggregator isn’t a metric aggregator and therefore can’t be used in the order option of the terms aggregator.

GET /mymusic/_search?search_type=count
{
  "query": {
    "match": {
      "name": "love"
    }
  },
  "aggs": {
    "byGenre": {
      "terms": {
        "field": "genre",
        "size": 5,
        "order": {
          "best_hit":"desc"
        }
      },
      "aggs": {
        "topFoundHits": {
          "top_hits": {
            "size": 1
          }
        },
        "best_hit": {
          "max": {
            "lang": "groovy", 
            "script": "doc.score"
          }
        }
      }
    }
  }
}

The results of this query, again with most of the _source taken out, follow. Compare them to the response in the top_hits section: notice the different genres we get back now, and also check the scores.

{
   "took": 4,
   "timed_out": false,
   "_shards": {
      "total": 3,
      "successful": 3,
      "failed": 0
   },
   "hits": {
      "total": 141,
      "max_score": 0,
      "hits": []
   },
   "aggregations": {
      "byGenre": {
         "buckets": [
            {
               "key": "rock",
               "doc_count": 37,
               "topFoundHits": {
                  "hits": {
                     "total": 37,
                     "max_score": 4.715253,
                     "hits": [
                        {
                           "_index": "mymusic",
                           "_type": "itunes",
                           "_id": "4147",
                           "_score": 4.715253,
                           "_source": {
                              "name": "Love",
                           }
                        }
                     ]
                  }
               },
               "best_hit": {
                  "value": 4.715252876281738
               }
            },
            {
               "key": "alternative",
               "doc_count": 12,
               "topFoundHits": {
                  "hits": {
                     "total": 12,
                     "max_score": 4.1945505,
                     "hits": [
                        {
                           "_index": "mymusic",
                           "_type": "itunes",
                           "_id": "7889",
                           "_score": 4.1945505,
                           "_source": {
                              "name": "Love Love Love",
                           }
                        }
                     ]
                  }
               },
               "best_hit": {
                  "value": 4.194550514221191
               }
            },
            {
               "key": "punk",
               "doc_count": 3,
               "topFoundHits": {
                  "hits": {
                     "total": 3,
                     "max_score": 4.1945505,
                     "hits": [
                        {
                           "_index": "mymusic",
                           "_type": "itunes",
                           "_id": "7889",
                           "_score": 4.1945505,
                           "_source": {
                              "name": "Love Love Love",
                           }
                        }
                     ]
                  }
               },
               "best_hit": {
                  "value": 4.194550514221191
               }
            },
            {
               "key": "pop",
               "doc_count": 24,
               "topFoundHits": {
                  "hits": {
                     "total": 24,
                     "max_score": 3.3341873,
                     "hits": [
                        {
                           "_index": "mymusic",
                           "_type": "itunes",
                           "_id": "11381",
                           "_score": 3.3341873,
                           "_source": {
                              "name": "Love To Love You",
                           }
                        }
                     ]
                  }
               },
               "best_hit": {
                  "value": 3.3341872692108154
               }
            },
            {
               "key": "b",
               "doc_count": 7,
               "topFoundHits": {
                  "hits": {
                     "total": 7,
                     "max_score": 3.0271564,
                     "hits": [
                        {
                           "_index": "mymusic",
                           "_type": "itunes",
                           "_id": "2549",
                           "_score": 3.0271564,
                           "_source": {
                              "name": "First Love",
                           }
                        }
                     ]
                  }
               },
               "best_hit": {
                  "value": 3.027156352996826
               }
            }
         ]
      }
   }
}

This is just a first introduction into the top_hits and scripting. Stay tuned for more blogs around these topics.

The post Playing with the two most interesting new features of elasticsearch 1.3.0 appeared first on Gridshore.

Categories: Architecture, Programming

Quote of the Month July 2014

From the Editor of Methods & Tools - Fri, 07/25/2014 - 08:22
Research has shown that the presumption of selfishness is true for maybe 30% of most populations; another 50% are reliably unselfish, and the remaining 20% could go either way, depending on the context. If a company presumes that the undecided 20% are selfish, you can bet they will be selfish—it’s a self-fulfilling prophecy. But worse, the company will create an environment where the 50% of the people who are unselfish are forced to act selfishly. And losing the energy, commitment, and intelligence of half the workforce is perhaps the biggest ...

Building A Backlog: Notes On Observing For Gathering Requirements


One of my jobs during high school was in a tire manufacturing plant in Memphis, Tennessee. On more than one occasion the hated and dreaded time-and-motion “guy” showed up to observe how I was doing my job. I never knew what the outcome of the observation was, or whether the change to the four-page process I performed to sort green tires was due to the observation. The job was never easier after the change. Reflecting on that time (and several industrial classes later), I understand that observation is an important tool for developing an understanding of how work should be done, but it is not a tool to be used all the time. Using observation in the right scenarios, and then taking steps to plan how you will observe, is critical to getting value for the effort.

Why observe? The simplest and clearest rationale for using observation techniques is that users and stakeholders don’t always know what they want, or can’t always express their current needs and foresee their future needs. A new set of eyes will therefore expose more and different needs. There are four typical scenarios in which observing should be considered as a tool to gather knowledge and requirements.

  1. Physical location is a determinant. Processes and workflow are often affected by physical location. When the physical layout of the people or machines could strongly affect the solution, the team developing the initial backlog should observe the process in action to understand the nuances of the flow of work.
  2. When people can’t tell you. Occasionally the process being studied will be so complex that no one is able to coherently describe how it works or how it should work. Even more occasionally asking is met by silence due to lack of trust. In both cases observation is a valid tool to develop an initial backlog.
  3. When interactions are crucial. Complex processes often require a wide range of interactions between people, tools and applications. Interactions, except when they cross a boundary, are difficult to identify unless you see them.
  4. When the output and the process don’t match. When, on occasion, the measured output or the output described by a manager does not match what is possible based on the published process, then observing the real process is mandatory.

Once you have decided that you must observe, planning becomes a necessity.

  1. Begin by reviewing the known policies, culture and process of the organization or team being observed. This step helps to ensure that you have a sense of the environment and what you will be seeing.
  2. Decide on how long you will observe. Some processes and process variations need time to be seen. If a process requires a week to complete you will need to observe for at least that amount of time.
  3. Determine how you will record what you see. Trying to memorize what you see will retain only some of the information; you will at least need to take notes. Remember that recording can include taking notes or recording audio and video. The level of detail needed will help determine the method.
  4. Finalize the logistics of the observation session before showing up. Office space, network and physical access can suck up huge quantities of time and effort. If you have a week for observation do not spend the first day dealing with administration tasks.
  5. Decide how you will create rapport with the group you are observing. Your presence will cause disruption. You need to find a way of observing with minimal impact on the results and without scaring those you are observing into calling placement firms. I am a fan of transparency; tell people why you are observing and what will be done with the data. Where possible I usually involve those I have observed in an early review of the data collected to elicit more information (a hybridization of techniques, combining observing and asking).
  6. Finally do what was planned, but do not be afraid to tweak the plan as needed.

When I was in the tire plant, the time-and-motion guy would just appear, and no one was thrilled. When we saw him coming we followed the prescribed process a bit more carefully, even if it was less effective. Observation can change behavior positively or negatively (the Hawthorne effect). Sometimes observation might be the only way to know what is really happening, but without planning, the data you gather might be what someone wants you to know rather than what you need to know.


Categories: Process Management

Review The Twitter Story by Nick Bilton

Gridshore - Thu, 07/24/2014 - 23:35


A lot of people dream of creating the new Facebook, Twitter or Instagram, but nobody knows in advance which ideas will become big. Personally I do not believe I will ever create such a product; I am too busy with too many different things, usually based on the great ideas of others. One thing I do like is reading the success stories of others. I have read books about Starbucks, Microsoft, Apple and a few others.

Recently I started reading the Twitter story. It reads like an exciting novel; nevertheless it tells a story, based on interviews and facts, about one of the most exciting internet companies of the past decade.

I do not think it is a coincidence that a lot of what I read in The Lean Startup, and in the Starbucks story, comes back in the Twitter story. One thing that struck me in this book is what business does to friendship. It also shows that the people with great ideas are usually not the people who turn those ideas into a profitable company.

When starting a company based on your terrific idea, read this book and learn from it. It might make your life a lot better.

The post Review The Twitter Story by Nick Bilton appeared first on Gridshore.

Categories: Architecture, Programming

Purpose, Not Discipline

NOOP.NL - Jurgen Appelo - Thu, 07/24/2014 - 16:06

I tried running a few years ago, but I stopped because of shin splints and impossible work schedules.

I tried a fitness school, for several months, but I hated all the machines and uninteresting equipment.

I tried Pilates exercises, for a few days, but I found the exercises on a mat mind-numbingly boring.

I tried swimming, for a week or two, but the pool was always crowded and it was far away from my home.

I tried body-weight exercises, for exactly two days, but I found them too hard, which didn’t really motivate me.

And I tried yoga, for less than a week, but it was at least as boring as the Pilates exercises.

The post Purpose, Not Discipline appeared first on NOOP.NL.

Categories: Project Management

How Do I Make $2,000 A Month On Passive Income?

Making the Complex Simple - John Sonmez - Thu, 07/24/2014 - 15:00

In this video I answer a question about how to make passive income from a book and a blog.

The post How Do I Make $2,000 A Month On Passive Income? appeared first on Simple Programmer.

Categories: Programming

How Not To Make Decisions Using Bad Estimates

Herding Cats - Glen Alleman - Thu, 07/24/2014 - 04:54

The presentation Dealing with Estimation, Uncertainty, Risk, and Commitment: An Outside-In Look at Agility and Risk Management has become a popular message for those suggesting we can make decisions about software development in the absence of estimates.

The core issue starts with the first chart, which shows the actual completion of a self-selected set of projects versus the ideal estimate. This chart is now used by the #NoEstimates paradigm as evidence that estimating is flawed and should be eliminated. How to eliminate estimates while making decisions about spending other people's money is not actually clear. You'll have to pay €1,300 to find out.

But let's look at this first chart. It shows the self-selected projects, the vast majority completed above the initial estimate. What is this initial estimate? In the original paper, the initial estimate appears to be the estimate made by someone for how long the project would take. It is not clear how that estimate was arrived at - the basis of estimate - or how the estimate was derived. We all know that raw subject matter expertise is the least desirable basis, and that past performance, calibrated for all the variables, is the best.

So Herein Lies the Rub - to Misquote Shakespeare's Hamlet

The ideal line is not calibrated. There is no assessment of whether the original estimate was credible or bogus. If it was credible, what was the confidence of that credibility, and what was the error band on that confidence?

This is a serious - some might say egregious - error in statistical analysis. We're comparing actuals to a baseline that is not calibrated. This means the initial estimate is meaningless in the analysis of the variances without an assessment of its accuracy and precision. To then construct a probability distribution chart is nice, but measured against what - bogus data?

This is harsh, but the paper and the presentation provide no description of the credibility of the initial estimates. Without that, any statistical analysis is meaningless. Let's move to another example in the second chart.

[Chart: self-selected project actuals versus the uncalibrated initial estimates]

The second chart - below - is from a calibrated baseline. The calibration comes from a parametric model, where the parameters of the initial estimate are derived from prior projects - the reference class forecasting paradigm. The tool used here is COCOMO. There are other tools, based on COCOMO, Larry Putnam's work, and other methods, that can be used for similar calibration of the initial estimates. A few we use are QSM, SEER, and Price.

One place to start is Validation Method for Calibrating Software Effort Models. But this approach started long ago with An Empirical Validation of Software Cost Estimation Models, and extends all the way to the current approaches of ARIMA and PCA forecasting for cost, schedule, and performance using past performance, and to current approaches, derived from past research, of tuning those cost drivers using Bayesian statistics.

[Chart: actuals versus a calibrated COCOMO baseline]

So What's All The Flap About?

Issues of software management - estimates of software cost, time, and performance - abound. We hear about them every day. Our firm works on programs that have gone Over Target Baseline, so we walk the walk every day.

But when bad statistics are used to sell solutions to complex problems, that's when it becomes a larger problem. To solve this nearly intractable problem of project cost and schedule overruns, we need to look to the root cause. Let's start with the book Facts and Fallacies of Estimating Software Cost and Schedule. From there, let's look at some more root causes of software project problems. Why Projects Fail is a good place to move to, with its 101 common causes. As with the RAND and IDA root cause analysis reports, many are symptoms rather than root causes, but good information all the same.

So in the end, when it is suggested that the woes of projects can be addressed by applying:

  • Decision making frameworks for projects that do not require estimates.
  • Investment models for software projects that do not require estimates.
  • Project management (risk management, scope management, progress reporting, etc.) approaches that do not require estimates.

Ask a simple question: is there any tangible, verifiable, externally reviewed evidence for this? Or is this just another self-selected, self-reviewed, self-promoting idea that violates the principles of microeconomics as applied to software development, where:

  • Economics is the study of how people make decisions in resource-limited situations. This definition of economics fits the major branches of classical economics very well. 

  • Macroeconomics is the study of how people make decisions in resource-limited situations on a national or global scale. It deals with the effects of decisions that national leaders make on such issues as tax rates, interest rates, and foreign and trade policy, in the presence of uncertainty.

  • Microeconomics is the study of how people make decisions in resource-limited situations on a personal scale. It deals with the decisions that individuals and organizations make on such issues as how much insurance to buy, which word processor to buy, what features to develop in what order, whether to make or buy a capability, or what prices to charge for their products or services, in the presence of uncertainty. Real Options is part of this decision making process as well.

Economic principles underlie the structure of the software development life cycle and its primary refinements of prototyping, iterative and incremental development, and emerging requirements.

If we look at writing software for money, it falls into the microeconomics realm. We have limited resources, limited time, and we need to make decisions in the presence of uncertainty.

In order to decide about the future impact of any one decision - making a choice - we need to know something about the future, which is itself uncertain. The tool for making these decisions about the future in the presence of uncertainty is called estimating. There are lots of ways to estimate, lots of tools to help us, and lots of guidance - books, papers, classrooms, advisers.

But asserting we can in fact make decisions about the future in the presence of uncertainty without estimating is mathematically and practically nonsense. 

So now is the time to learn how to estimate, using your favorite method, because to decide in the absence of knowing the impact of that decision is counter to the stewardship of our customers' money. And if we want to keep writing software for money, we need to be good stewards first.

Related articles:

  • Averages Without Variances are Meaningless - Or Worse Misleading
  • How to "Lie" with Statistics
  • How to Fib With Statistics
  • When Uncertainty is Good
  • No Estimates of Costs and Schedule?
  • The Value of Information
  • COCOMO Model
  • Why is Statistical Thinking Hard?
  • Back To The Future
  • The Failure of Open Loop Thinking
Categories: Project Management

All Decisions Are Based On Mathematics

Herding Cats - Glen Alleman - Thu, 07/24/2014 - 04:25

Obviously not every decision we make is based on mathematics, but when we're spending money, especially other people's money, we'd better have some good reason to do so - some reason other than gut feel - for any significant value at risk. This is the principle of microeconomics.

All Things Considered is running a series on how people interpret probability, from capturing a terrorist to the probability it will rain at your house today. The world lives on probabilistic outcomes. These probabilities are driven by underlying statistical processes, and these statistical processes create uncertainties in our decision making.

Both aleatory and epistemic uncertainty exist on projects. These two uncertainties create risk, and this risk impacts how we make decisions. Minimizing risk while maximizing reward is a project management process as well as a microeconomics process. By applying statistical process control we can engage project participants in the decision making process. Making decisions in the presence of uncertainty is sporty business, and many examples of poor forecasts abound. The flaws of statistical thinking are well documented.

When we encounter the notion that decisions can be made in the absence of statistical thinking, there are some questions that need to be answered. Here's one set of questions and answers from the point of view of the mathematics of decision making using probability and statistics.

Jordan Ellenberg's book How Not To Be Wrong opens with a simple example.

Here's a question. We're designing airplanes - during WWII - in ways that will prevent them from getting shot down by enemy fighters, so we provide them with armour. But armour makes them heavier, and heavier planes are less maneuverable and use more fuel. Armouring the planes too much is a problem; too little is also a problem. Somewhere in between is the optimum.

When the planes came back from a mission, the number of bullet holes was recorded. The damage was not uniformly distributed, but followed this pattern:

  • Engine - 1.11 bullet holes per square foot (BH/SF)
  • Fuselage - 1.73 BH/SF
  • Fuel System - 1.55 BH/SF
  • Rest of plane - 1.8 BH/SF
The first thought was to provide armour where the need was highest. But after some thought, the right answer was to provide armour where the bullet holes aren't - on the engines. Where are the missing bullet holes? On the missing planes. The total number of planes leaving minus those returning was the number of planes hit in a location that caused them not to return - the engines.

The mathematics here is simple. Start by setting a variable to zero: the probability that a plane that takes a hit in the engine manages to stay in the air and return to base. The result of this analysis (pp. 5-7 of the book) can be applied to our project work.

This is an example of the thought processes needed for project management and the decision making processes needed for spending other people's money. The mathematician's approach is to ask: what assumptions are we making, and are they justified? The first assumption - the erroneous one - was that the planes returning represented a random sample of all the planes. Only if that were true could the conclusions be drawn.
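A sketch of the reasoning in symbols (my notation, not the book's): let h be the rate at which any given square foot gets hit, roughly uniform over the aircraft, and let s(r) be the probability of surviving a hit in region r. Then:

observed hole density in region r on returning planes ∝ h × s(r)

Since s(engine) ≈ 0, the engines show almost no holes on the survivors even though they are hit as often as everything else - which is exactly where the armour belongs.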

In The End

Show me the numbers. "Numbers talk, BS walks" is the crude phrase, but it is true. When we hear some conjecture about the latest fad, think about the numbers. But before that, read Beyond the Hype: Rediscovering the Essence of Management by Robert Eccles and Nitin Nohria. This is an important book that lays out the processes for sorting out the hype - the untested and likely untestable conjectures - from the testable processes.

Related articles:

  • How To Fix Martin Fowler's Estimating Problem in 3 Easy Steps
  • The World of Probability and Statistics
  • Stationary processes
  • How Not To Be Wrong
  • Why is Statistical Thinking Hard?
  • Selection bias and bombers
  • How Not To Make Decisions Using Bad Estimates
Categories: Project Management

Building A Backlog: Notes On Asking For Requirements

Asking requires listening and writing down what you hear!

Asking stakeholders to describe or define requirements is the most common way to develop requirements for projects. Specific techniques include talking to stakeholders informally in the hall, interviews and questionnaires, and very formal joint application design (JAD). This category gets used most often because of its simplicity: asking and talking to people is easy and opens a dialog, and everyone thinks they know how to ask questions. However, while stakeholders may know their business need, they may not know the details of what they really want and need. Moderation and planning are critical to making all of the techniques in this category as effective as possible for creating an initial backlog. Examples of how moderation and planning can be implemented in two classic “asking” techniques are shown below.

Joint Application Design (JAD) is a very formal technique that is an offshoot of Joint Application Development, which evolved in the 1970s. JAD is a highly structured approach to developing the requirements and design for an application or project. The process is based on the interaction between key roles (sponsor, subject matter experts including business and IT participants, facilitator, scribe and potentially observers), and it requires all of those roles. It should be noted that the JAD process was one of the earliest techniques used to embed business and IT personnel together for any substantial period of time. The process (documented in many places, including Wikipedia) has a number of key steps that provide a structured approach for interaction and for generating information. Setting the goal (one of the key steps) acts as an anchor for the JAD and provides a tool for the facilitator to re-focus the process if it wanders off course. In order for a JAD to work, up-front planning is mandatory: the participants need to be carefully identified, the goals of the JAD defined, and a detailed agenda with supporting documentation developed. Preparing for the JAD can take as long as the session itself. JADs typically run three to eight days, and participants are typically sequestered from their normal working environment during the session. The combination of a skilled facilitator and structure helps IT and business participants interact in a creative and productive fashion. Overall, JAD is a very powerful technique; however, the structure and overhead tend to make it more difficult to apply.

In its classic form, JAD is viewed as less than Agile. Historically it was used to develop the much-abused big up-front design (BUFD). Agile principles call out the concept of emergent design while eschewing BUFD. The practice of Agile is generally more a reflection of finding the balance between what needs to be known up front and what can be discovered along the way. I have used the formal structure of the JAD process as a tool to initiate Agile projects very successfully by refocusing the goal on building an initial product backlog. The combination of structure and facilitation is most valuable when a team is addressing a new business area, or in matrix organizations where teams are assembled for each new project.

Interviews are another of the classic “asking” techniques. Interview techniques can range from formally scripted question-and-answer sessions to loosely guided discussions. Formal interview techniques begin by developing a set of questions to be asked during the interview. In formal interview situations, the responses to the questions in the script, and to any follow-on questions, are captured as close to verbatim as possible; a legal deposition is an example of a formal interview. Formal interviews require the interviewers to prepare not only by developing the set of questions to be asked, but also by gathering information about the general outline of the answers they are going to receive. A good interviewer is rarely surprised by the answer they receive. Informal interviews are typically less structured; however, they still require preparation. In less formal scenarios I generally recommend developing a loose set of framing questions (framing questions capture the direction of the interview without being specific) so that the interviewer develops a goal for the interview and then plans the approach to attain that goal. The framing process is important in case the interviewee throws a curve, so that the interviewer can gradually guide the interview back onto the correct track. Take notes (do not trust your memory) in all interviews. While informal interviews seem more like common conversation, interviewers who are good at the informal technique tend to be good counter-punchers (able to deliver well-formed follow-up questions that keep the interviewee talking); even in an informal interview, the interviewer must always keep the ultimate goal in mind. In both formal and informal situations, if the interviewer is emotionally invested in what the answer should be, consider using a facilitator or external interviewer.

Asking stakeholders for requirements is a tried and true method to generate an initial backlog. Asking should not equate to ad hoc conversation or mere order taking. Asking requires preparation to be effective, whether using formal techniques based on JAD or informal interviews. As an interviewer you need to map out where you want the session to go and then act as the guide.


Categories: Process Management

Java: Determining the status of data import using kill signals

Mark Needham - Wed, 07/23/2014 - 23:20

A few weeks ago I was working on the initial import of ~ 60 million bits of data into Neo4j and we kept running into a problem where the import process just seemed to freeze and nothing else was imported.

It was very difficult to tell what was happening inside the process – taking a thread dump merely informed us that it was attempting to process one line of the CSV file and was somehow unable to do so.

One way to debug this would have been to print out every single line of the CSV as we processed it and then watch where it got stuck, but this seemed a bit overkill. Ideally we wanted to print out the line we were processing only on demand.

As luck would have it we can do exactly this by sending a kill signal to our import process and have it print out where it had got up to. We had to make sure we picked a signal which wasn’t already being handled by the JVM and decided to go with ‘SIGTRAP’ i.e. kill -5 [pid]

We came across a neat blog post that explained how to wire everything up and then created our own version:

import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;
 
import sun.misc.Signal;
import sun.misc.SignalHandler;
 
class Kill3Handler implements SignalHandler
{
    // shared state updated by the import loop, read when the signal arrives
    private AtomicInteger linesProcessed;
    private AtomicReference<Map<String, Object>> lastRowProcessed;
 
    public Kill3Handler( AtomicInteger linesProcessed, AtomicReference<Map<String, Object>> lastRowProcessed )
    {
        this.linesProcessed = linesProcessed;
        this.lastRowProcessed = lastRowProcessed;
    }
 
    @Override
    public void handle( Signal signal )
    {
        // runs whenever the registered signal (SIGTRAP / kill -5) is received
        System.out.println("Last Line Processed: " + linesProcessed.get() + " " + lastRowProcessed.get());
    }
}

We then wired that up like so:

AtomicInteger linesProcessed = new AtomicInteger( 0 );
AtomicReference<Map<String, Object>> lastRowProcessed = new AtomicReference<>(  );
Kill3Handler kill3Handler = new Kill3Handler( linesProcessed, lastRowProcessed );
Signal.handle(new Signal("TRAP"), kill3Handler);
 
// as we iterate each line we update those variables
 
linesProcessed.incrementAndGet();
lastRowProcessed.getAndSet( properties ); // properties = a representation of the row we're processing

This worked really well for us and we were able to work out that we had a slight problem with some of the data in our CSV file which was causing it to be processed incorrectly.

We hadn’t been able to see this by visual inspection since the CSV files were a few GB in size. We’d therefore only skimmed a few lines as a sanity check.

I didn’t even know you could do this but it’s a neat trick to keep in mind – I’m sure it shall come in useful again.

Categories: Programming

The New Realities that Call for New Organizational and Management Capabilities

“The only people who can change the world are people who want to. And not everybody does.” -- Hugh MacLeod

Is it just me or is the world changing faster than ever?

I hear from everybody around me (inside and outside of Microsoft) how radically their worlds are changing under their feet, business models are flipped on their heads, and the game of generating new business value for customers is at an all-time competitive high.

Great.

Challenge is where growth and greatness come from.  It’s always a chance to test what we’re capable of and respond to whatever gets thrown our way.  But first, it helps to put a finger on what exactly these changes are that are disrupting our world, and what to focus on to survive and thrive.

In the book The Future of Management, Gary Hamel shares some great insight into the key challenges that companies are facing that create even more demand for management innovation.

The New Realities We’re Facing that Call for Management Innovation

I think Hamel describes our new world pretty well …

Via The Future of Management:

  • “As the pace of change accelerates more, more and more companies are finding themselves on the wrong side of the change curve.  Recent research by L.G. Thomas and Richard D'Aveni suggests that industry leadership is changing hands more frequently, and competitive advantage is eroding more rapidly, than ever before.  Today, it's not just the occasional company that gets caught out by the future, but entire industries -- be it traditional airlines, old-line department stores, network television broadcasters, the big drug companies, America's carmakers, or the newspaper and music industries.”
  • “Deregulation, along with the de-scaling effects of new technology, are dramatically reducing the barriers to entry across a wide range of industries, from publishing to telecommunications to banking to airlines.  As a result, long-standing oligopolies are fracturing and competitive 'anarchy' is on the rise.”
  • “Increasingly, companies are finding themselves enmeshed in 'value webs' and 'ecosystems' over which they have only partial control.  As a result, competitive outcomes are becoming less the product of market power, and more the product of artful negotiation.  De-verticalization, disintermediation, and outsource-industry consortia, are leaving firms with less and less control over their own destinies.”
  • “The digitization of anything not nailed down threatens companies that make their living out of creating and selling intellectual property.  Drug companies, film studios, publishers, and fashion designers are all struggling to adapt to a world where information and ideas 'want to be free.'”
  • “The internet is rapidly shifting bargaining power from producers to consumers.  In the past, customer 'loyalty' was often an artifact of high search costs and limited information, and companies frequently profited from customer ignorance.  Today, customers are in control as never before -- and in a world of near-perfect information, there is less and less room for mediocre products and services.”
  • “Strategy cycles are shrinking.  Thanks to plentiful capital, the power of outsourcing, and the global reach of the Web, it's possible to ramp up a new business faster than ever before.  But the more rapidly a business grows, the sooner it fulfills the promise of its original business model, peaks, and enters its dotage.  Today, the parabola of success is often a short, sharp spike.”
  • “Plummeting communication costs and globalization are opening up industries to a horde of new ultra-low-cost competitors.  These new entrants are eager to exploit the legacy costs of the old guard.  While some veterans will join the 'race to the bottom' and move their core activities to the world's lowest-cost locations, many others will find it difficult to reconfigure their global operations.  As Indian companies suck in service jobs and China steadily expands its share of global manufacturing, companies everywhere will struggle to maintain their margins.”
Strategically Adaptive and Operationally Efficient

So how do you respond to these challenges? Hamel says it takes becoming strategically adaptable and operationally efficient. What a powerful combo.

Via The Future of Management:

“These new realities call for new organizational and managerial capabilities.  To thrive in an increasingly disruptive world, companies must become as strategically adaptable as they are operationally efficient.  To safeguard their margins, they must become gushers of rule-breaking innovation.  And if they're going to out-invent and outthink a growing mob of upstarts, they must learn how to inspire their employees to give the very best of themselves every day.  These are the challenges that must be addressed by 21st-century management innovations.”

There are plenty of challenges.  It’s time to get your greatness on. 

If there ever was a chance to put to the test what you’re capable of, now is the time.

No matter what, as long as you live and learn, you’ll grow from the process.

You Might Also Like

Simplicity is the Ultimate Enabler

The New Competitive Landscape

Who’s Managing Your Company

Categories: Architecture, Programming

How Pairing & Swarming Work & Why They Will Improve Your Products

If you’ve been paying attention to agile at all, you’ve heard these terms: pairing and swarming. But what do they mean? What’s the difference?

When you pair, two people work together to finish a piece of work. Traditionally, two developers paired. The “driver” wrote the piece of work. The other person, the “navigator,” observed the work, providing review, as the work was completed.

I first paired as a developer in 1982 (kicking and screaming). I later paired in the late 1980s as the tester in several developer-tester pairs. I co-wrote Behind Closed Doors: Secrets of Great Management with Esther Derby as a pair.

There is some data that says that when we pair, the actual coding takes about 15-20% longer. However, because we have built-in code review, there is much less debugging at the end.

When Esther and I wrote the book, we threw out the original two (boring) drafts, and rewrote the entire book in six weeks. We were physically together. I had to learn to stop talking. (She is very funny when she talks about this.) We both had to learn each other's idiosyncrasies about indentations and deletions when writing. That's what you do when you pair.

However, this book we wrote and published is nothing like what the original drafts were. Nothing. We did what pairs do: We discussed what we wanted this section to look like. One of us wrote for a few minutes. That person stopped. We changed. The other person wrote. Maybe we discussed as we went, but we paired.

After about five hours, we were done for the day. Done. We had expended all of our mental energy.

That’s pairing. Two developers. One work product. Not limited to code, okay?

Now, let’s talk about swarming.

Swarming is when the entire team says, “Let’s take this story and get it to done, all together.” You can think of swarming as pairing on steroids. Everyone works on the same problem. But how?

Someone will have to write code. Someone will have to write tests. The question is this: in what order and who navigates? What does everyone else do?

When I teach my agile and lean workshop, I ask the participants to select one feature that the team can complete in one hour. Everyone groans. Then they do it.

Some teams do it by having the product owner explain what the feature is in detail. Then the developers pair and the tester(s) write tests, both automated and manual. They all come together at about the 45-minute mark. They see if what they have done works. (It often doesn’t.) Then the team starts to work together, to really swarm. “What if we do this here? How about if this goes there?”

Some teams work together from the beginning. “What is the first thing we can do to add value?” (That is an excellent question.) They might move into smaller pairs, if necessary. Maybe. Maybe they need touchpoints every 15-20 minutes to re-orient themselves to say, “Where are we?” They find that if they ask for feedback from the product owner, that works well.

If you first ask, “What is the first thing we can do to add value and complete this story?” you are probably on the right track.

Why Do Pairing and Swarming Work So Well?

Both pairing and swarming:

  • Build feedback into development of the task at hand. No one works alone. Can the people doing the work still make a mistake? Sure. But it’s less likely. Someone will catch the mistake.
  • Create teamwork. You get to know someone well when you work with them that intensely.
  • Expose the work. You know where you are.
  • Reduce the work in progress. You are less likely to multitask, because you are working with someone else.
  • Encourage you to take no shortcuts, at least in my case. Because someone was watching me, I was on my best professional behavior. (Does this happen to you, too?)
How do Pairing and Swarming Improve Your Products?

The effect of pairing and swarming is what improves your products. The built-in feedback is what creates less debugging downstream. The improved teamwork helps people work together. When you expose the work in progress, you can measure it, see it, have no surprises. With reduced work in progress, you can increase your throughput. You have better chances for craftsmanship.

You don’t have to be agile to try pairing or swarming. You can pair or swarm on any project. I bet you already have, if you’ve been on a “tiger team,” where you need to fix something for a “Very Important Customer” or you have a “Critical Fix” that must ship ASAP. If you had all eyes on one problem, you might have paired or swarmed.

If you are agile, and you are not pairing or swarming, consider adding either or both to your repertoire, now.

Categories: Project Management

The Deadline to Sign up for GTAC 2014 is Jul 28

Google Testing Blog - Wed, 07/23/2014 - 01:53
Posted by Anthony Vallone on behalf of the GTAC Committee

The deadline to sign up for GTAC 2014 is next Monday, July 28th, 2014. There is a great deal of interest in both attending and speaking, and we've received many outstanding proposals. However, it's not too late to add yours for consideration. If you would like to speak or attend, be sure to complete the form by Monday.

We will be making regular updates to our site over the next several weeks, and you can find conference details there:
  developers.google.com/gtac

For those that have already signed up to attend or speak, we will contact you directly in mid August.

Categories: Testing & QA

Building A Backlog: Three Categories

Requirements don’t grow on trees

Many discussions of Agile techniques begin with the assumption that a backlog has magically appeared on the team’s doorstep. Anyone who has participated in any form of project, whether related to information technology, operations or physical engineering, knows that requirements don’t grow on trees: they need to be developed before a team can start to satisfy them. There are three primary ways to gather requirements, based on how information is elicited.

  1. Asking: There are many techniques that focus on asking users, potential users and other stakeholders what they want the project to deliver.  Classic examples are joint application design, questionnaires and interviews (formal and informal).  This category gets used most often because of its simplicity (everyone thinks they know how to ask questions). For small and uncomplicated projects simply asking and talking about what is needed may well suffice.  Unfortunately this method can fail if the wrong people are asked and/or if those who are asked don’t know what they really want or need (remember the adage: you don’t know what you don’t know).
  2. Observing: Observation was one of the first methods I was taught for collecting requirements for process changes. Watching how people work jumps over what people think happens and goes directly to the source. For example, an organization saw a substantial drop in the number of mortgage applications being keyed into the system around lunchtime. None of the management stakeholders could explain the productivity change well. In order to get to the bottom of the issue we observed work in the department for a week. What we found was that the form was long (many pages), and forms not completed before lunch were lost and had to be restarted. The system was changed to allow portions of the form to be saved and then resumed, increasing throughput in the department. One of the drawbacks to watching is the impact of observation itself. As noted in the Hawthorne effect, paying attention to people can impact the output of the process, thereby generating false information. Further, in cases where trust is an issue, observing a group can generate panic.
  3. Showing: Prototyping (functional or paper) or delivering functional products that stakeholders can interact with and react to are both great ways to push requirements past what is known and currently possible. Paper prototypes are a mechanism for helping team members and stakeholders visualize how work could be done; some techniques start with how work is currently done, then visualize the change. Functional prototyping delivers software that functions at some level, and requires planning and effort as if it were a project in its own right.

Each of these categories can be overlapped to create hybrids.

An initial backlog is built at the beginning of a project, or at least very early on. The more strenuous your organization’s budgeting and strategic planning process, the more you will need to generate a detailed initial backlog before the team starts to work. Whether a team develops just enough of a backlog to get started or builds a broader backlog, they need to have a backlog before they start developing. Backlogs don’t grow on trees!


Categories: Process Management

Troubleshooting haproxy 502 errors related to malformed/large HTTP headers

Agile Testing - Grig Gheorghiu - Wed, 07/23/2014 - 00:02
We had a situation recently where our web application started to behave strangely. First nginx (which sits in front of the application) started to error out with messages of this type:

upstream sent too big header while reading response header from upstream

A quick Google search revealed that a fix for this is to bump up proxy_buffer_size in nginx.conf, for both http and https traffic, along these lines:

proxy_buffer_size   256k;
proxy_buffers   4 256k;
proxy_busy_buffers_size   256k;

Now nginx was happy when hit directly. However, haproxy was still erroring out with a 502 'Bad Gateway' return code and a termination state of PH. Here is a snippet from the haproxy log file:

Jul 22 21:27:13 127.0.0.1 haproxy[14317]: 172.16.38.57:53408 [22/Jul/2014:21:27:12.776] www-frontend www-backend/www2:80 1/0/1/-1/898 502 8396 - - PH-- 0/0/0/0/0 0/0 "GET /someurl HTTP/1.1"

Another Google search revealed that PH means that haproxy rejected the response header from the backend, either because it was malformed or because it was too large to fit in haproxy's buffer.
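
In our case the response headers were not malformed, just too big. If you need breathing room while you hunt down the root cause, haproxy's buffer limits can be raised in the global section of haproxy.cfg. The values below are a sketch, not a recommendation; tune them to your traffic, and note that response headers must fit within tune.bufsize minus tune.maxrewrite:

global
    # per-request/response buffer size (default 16384 bytes);
    # responses whose headers exceed this are rejected with 502/PH
    tune.bufsize 131072
    # bytes reserved at the end of the buffer for header rewriting
    tune.maxrewrite 8192

Keep in mind that a bigger buffer means more memory per connection, so treat this as a stopgap rather than a fix.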

At this point, an investigation into the web app uncovered a loop in the code that kept appending elements to a cookie included in the response header, which explains where the oversized header came from.
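
If you want to confirm that oversized headers are the culprit, one quick check (a hedged example; substitute your own backend host and URL) is to dump only the response headers and count the bytes:

curl -s -o /dev/null -D - http://www2/someurl | wc -c

If the byte count is anywhere near haproxy's tune.bufsize, you have found the problem.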

Anyway, I leave this here in the hope that somebody will stumble on it and benefit from it.

Sponsored Post: Apple, Asana, FoundationDB, Cie Games, BattleCry, Surge, Dreambox, Chartbeat, Monitis, Netflix, Salesforce, Cloudant, CopperEgg, Logentries, Gengo, Couchbase, MongoDB, BlueStripe, AiScaler, Aerospike, LogicMonitor, AppDynamics, ManageEngin

Who's Hiring?
  • Apple has multiple openings. Changing the world is all in a day's work at Apple. Imagine what you could do here. 
    • Software Developer in Test. The iOS Systems team is looking for a Quality Assurance engineer. In this role you will be expected to work hand-in-hand with the software engineering team to find and diagnose software defects. The ideal candidate will also seek out ways to further automate all aspects of our existing process. This is a highly technical role and requires in-depth knowledge of both white-box and black-box testing methodologies. Please apply here
    • Senior Software Engineer -iOS Systems. Do you love building highly scalable, distributed web applications? Does the idea of a fast-paced environment make your heart leap? Do you want your technical abilities to be challenged every day, and for your work to make a difference in the lives of millions of people? If so, the iOS Systems Carrier Services team is looking for a talented software engineer who is not afraid to share knowledge, think outside the box, and question assumptions. Please apply here.
    • Software Engineering Manager, IS&T WWDR Dev Systems. The WWDR development team is seeking a hands-on engineering manager with a passion for building large-scale, high-performance applications. The successful candidate will be collaborating with Worldwide Developer Relations (WWDR) and various engineering teams throughout Apple. You will lead a team of talented engineers to define and build large-scale web services and applications. Please apply here.
    • C++ Senior Developer and Architect- Maps. The Maps Team is looking for a senior developer and architect to support and grow some of the core backend services that support Apple Map's Front End Services. Ideal candidate would have experience with system architecture, as well as the design, implementation, and testing of individual components but also be comfortable with multiple scripting languages. Please apply here.

  • Cie Games, small indie developer and publisher in LA, is looking for rock star Senior Game Java programmers to join our team! We need devs with extensive experience building scalable server-side code for games or commercial-quality applications that are rich in functionality. We offer competitive comp, great benefits, interesting projects, and exceptional growth opportunities. Check us out at http://www.ciegames.com/careers.

  • BattleCry, the newest ZeniMax studio in Austin, is seeking a qualified Front End Web Engineer to help create and maintain our web presence for AAA online games. This includes the game accounts web site, enhancing the studio website, our web and mobile-based storefront, and front end for support tools. http://jobs.zenimax.com/requisitions/view/540

  • FoundationDB is seeking outstanding developers to join our growing team and help us build the next generation of transactional database technology. You will work with a team of exceptional engineers with backgrounds from top CS programs and successful startups. We don’t just write software. We build our own simulations, test tools, and even languages to write better software. We are well-funded, offer competitive salaries and option grants. Interested? You can learn more here.

  • Asana. As an infrastructure engineer you will be designing software to process, query, search, analyze, and store data for applications that are continually growing in scale. You will work with a world-class team of engineers on deploying and operating existing systems, and building new ones for problems that are unique to our problem space. Please apply here.

  • Operations Engineer - AWS Cloud. Want to grow and extend a cutting-edge cloud deployment? Take charge of an innovative 24x7 web service infrastructure on the AWS Cloud? Join DreamBox Learning’s creative team of engineers, designers, and educators. Help us radically change education in an environment that values collaboration, innovation, integrity and fun. Please apply here. http://www.dreambox.com/careers

  • Chartbeat measures and monetizes attention on the web. Our traffic numbers are growing, and so is our list of product and feature ideas. That means we need you, and all your unparalleled backend engineer knowledge, to help us scale, extend, and evolve our infrastructure to handle it all. If you've got these chops: www.chartbeat.com/jobs/be, come join the team!

  • The Salesforce.com Core Application Performance team is seeking talented and experienced software engineers to focus on system reliability and performance, developing solutions for our multi-tenant, on-demand cloud computing system. Ideal candidate is an experienced Java developer, likes solving real-world performance and scalability challenges and building new monitoring and analysis solutions to make our site more reliable, scalable and responsive. Please apply here.

  • Sr. Software Engineer - Distributed Systems. The Membership platform is at the heart of the Netflix product, supporting functions like customer identity, personalized profiles, experimentation, and more. Are you someone who loves to dig into data structure optimization, parallel execution, smart throttling and graceful degradation, SYN and accept queue configuration, and the like? Is the availability vs. consistency tradeoff in a distributed system too obvious to you? Do you have an opinion about asynchronous execution and distributed co-ordination? Come join us.

  • Human Translation Platform Gengo Seeks Sr. DevOps Engineer. Build an infrastructure capable of handling billions of translation jobs, worked on by tens of thousands of qualified translators. If you love playing with Amazon’s AWS, understand the challenges behind release-engineering, and get a kick out of analyzing log data for performance bottlenecks, please apply here.

  • UI Engineer. AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data. AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (All-Levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • OmniTI has a reputation for scalable web applications and architectures, but we still lean on our friends and peers to see how things can be done better. Surge started as the brainchild of our employees wanting to bring the best and brightest in Web Operations to our own backyard. Now in its fifth year, Surge has become the conference on scalability and performance. Early Bird rate in effect until 7/24!
Cool Products and Services
  • A third party vendor benchmarked the performance of three databases: Couchbase Server, MongoDB and DataStax Enterprise. The databases were benchmarked with two different workloads (read-intensive / balanced) via YCSB on dedicated servers. Read.

  • Now track your log activities with Log Monitor and be on the safe side! Monitor any type of log file and proactively define potential issues that could hurt your business' performance. Detect your log changes for: Error messages, Server connection failures, DNS errors, Potential malicious activity, and much more. Improve your systems and behaviour with Log Monitor.

  • The NoSQL "Family Tree" from Cloudant explains the NoSQL product landscape using an infographic. The highlights: NoSQL arose from "Big Data" (before it was called "Big Data"); NoSQL is not "One Size Fits All"; Vendor-driven versus Community-driven NoSQL.  Create a free Cloudant account and start the NoSQL goodness

  • Finally, log management and analytics can be easy, accessible across your team, and provide deep insights into data that matters across the business - from development, to operations, to business analytics. Create your free Logentries account here.

  • CopperEgg. Simple, Affordable Cloud Monitoring. CopperEgg gives you instant visibility into all of your cloud-hosted servers and applications. Cloud monitoring has never been so easy: lightweight, elastic monitoring; root cause analysis; data visualization; smart alerts. Get Started Now.

  • Whitepaper Clarifies ACID Support in Aerospike. In our latest whitepaper, author and Aerospike VP of Engineering & Operations, Srini Srinivasan, defines ACID support in Aerospike, and explains how Aerospike maintains high consistency by using techniques to reduce the possibility of partitions.  Read the whitepaper: http://www.aerospike.com/docs/architecture/assets/AerospikeACIDSupport.pdf.

  • LogicMonitor is the cloud-based IT performance monitoring solution that enables companies to easily and cost-effectively monitor their entire IT infrastructure stack – storage, servers, networks, applications, virtualization, and websites – from the cloud. No firewall changes needed - start monitoring in only 15 minutes utilizing customized dashboards, trending graphs & alerting.

  • BlueStripe FactFinder Express is the ultimate tool for server monitoring and solving performance problems. Monitor URL response times and see if the problem is the application, a back-end call, a disk, or OS resources.

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.  http://aiscaler.com/

  • ManageEngine Applications Manager: Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com: Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

Business or Pleasure? Boy or Girl?

NOOP.NL - Jurgen Appelo - Tue, 07/22/2014 - 16:05
Business or Pleasure?

Someone emailed me, “I’m sorry to see you had to work on Saturday night.” I don’t understand why, because the “work” was only a couple of emails that I sent. And I was happy to do that on a Saturday night, just like I was perfectly happy to spend the entire Tuesday morning shopping for a mountain bike.

The post Business or Pleasure? Boy or Girl? appeared first on NOOP.NL.

Categories: Project Management

My Primary Criticism of Scrum

Mike Cohn's Blog - Tue, 07/22/2014 - 15:00

On the first day of my Certified ScrumMaster course, we go over the agenda for the two days. I point out that one topic we’ll cover will be “meetings.” I usually note that Scrum is often criticized for the number of meetings it defines. I then claim that this is a pretty weak criticism of Scrum because most of the meetings really aren’t very long, and that if we wanted to, we could find better criticisms of Scrum than “Scrum teams meet too much.”

After saying that, I always expect someone to ask for an example of a better criticism of Scrum. But usually no one asks, and I move on to the next topic.

And since I think it’s important to remain critical, I’d like to use this post to share one of my own criticisms of Scrum.

The strongest criticism I’d levy on Scrum is that many Scrum teams have become too focused on checking the boxes to say they are done with something, and less focused on finding innovative solutions to the problems they are handed.

Scrum in the mid-1990s (as I implemented it and saw it implemented back then) was all about finding innovative solutions. Teams were given a problem, and given a month (or four weeks) to solve the problem. With that much time, teams were able to try one or more potential breakthrough approaches before having to revert back to a safer, tried-and-true approach.

In today’s version of Scrum, many teams have become overly obsessed with being able to say they finished everything they thought they would. This leads those teams to start with the safe approach. Many teams never try any wild ideas that could lead to truly innovative solutions.

I believe a great deal of this is the result of the move to shorter sprints, with two weeks being the norm these days. Shorter sprints leave less time to recover if a promising but risky initial approach fails to pan out.

I’m willing to take my share of the “blame” for the prevalence of two-week sprints. I’ve certainly been vocal in my preference for that length (while remaining open to whatever is right for a given team). And, I still think two weeks is the right length for most teams. Two weeks is also my default, initial recommendation to a team until I know more about them.

So, as much good as the shift toward shorter sprints has done for Scrum teams, for many teams it has come at the expense of creativity, experimentation and breakthrough ideas.

I don’t think the answer is for all teams to instantly revert to four-week sprints. I think mature Scrum teams have found ways to balance the pressure to get things done with the benefits that come from occasionally pursuing long-shot ideas that sometimes pay off.

So, there’s my biggest criticism of Scrum as I see it practiced today. What’s yours? What problems do you see with Scrum as defined or as it’s commonly practiced today?

How Requirements Are Captured Really Matters

Capturing requirements is different than catching fish.

Where do you get the requirements that make up your backlog? There are two classic macro strategies that project teams follow to gather requirements and create a backlog. Where a backlog of requirements comes from, and who was involved in the process, can influence the whole life cycle of a project by setting expectations and establishing behavioral norms.

The first macro category represents scenarios in which requirements are provided to the project team either fully or partially formed. This is very common for projects that are outsourced, or in organizations where a significant barrier has been erected between corporate IT and the business. The business decides what it wants and then negotiates to obtain what it is asking for. In order for this scenario to work effectively, business departments hire business analysts (BAs), create shadow IT teams or leverage super users to act as liaisons to IT. The BAs, or proxy BAs, gather and document the business’s requirements, and then, during project execution, they interpret those requirements. This type of behavior reinforces the barrier between the business and the team doing the work.

One of the major problems this arm’s-length process of gathering requirements generates is an anchor bias, in which the business’s expectations get set before they know what is possible. The backlog becomes the baseline against which project success will be measured. Change and evolution become more difficult because changes would be perceived as renegotiating success. Applying the Agile principle of embracing change and working with the business on a continual basis becomes significantly more difficult when the business has become anchored to the picture the initial requirements generate. This anchor becomes even stickier when those requirements are codified in a contract, or as success criteria in a charter (a weak form of a contract).

Another behavior that falls into this category is the BA becoming a proxy for the product owner. Proxy product owners don’t have the decision-making power to change or evolve the backlog, so they tend to defend the status quo. A better solution is to incorporate the BA into the project team alongside a true product owner.

In the second macro category, the whole team, or at least a cross-functional portion of it, gathers the requirements. In an Agile project, the requirements are used to generate an initial backlog. Incorporating the requirements into a backlog is an explicit recognition that the project will allow the requirements to evolve. Involvement of a cross-functional team will produce better requirements earlier, while incorporating Agile principles and backlog concepts helps the parties stay less anchored to the initial requirements, given the flexibility inherent in these techniques. Removing or diluting the initial anchor makes it easier for the product owner to identify and incorporate the concept of a minimum viable product into release planning. Without the anchor, the project will not be perceived as all or nothing, because the backlog can be re-prioritized or changed if needed.

The cross-functional approach to developing requirements, which includes business, BA, development and testing capabilities, builds bridges between IT and the business. These bridges make it less likely that the BA will have to play the role of proxy product owner, because business personnel have more exposure to the project environment, making it less foreign and frightening.

How the initial set of requirements is defined, and who participated in the process, can have a huge impact on the trajectory of a project. Whether an organization perceives IT as a partner or as a vendor will influence which requirements strategy is most attractive, as will how dynamic the organization perceives its business environment to be. Partnership and dynamic environments tend more towards scenario two. In the long run, all projects have to have a set of requirements; how we organize to collect them will affect the how, who and when of gathering them.


Categories: Process Management

Google Play Services 5.0

Android Developers Blog - Tue, 07/22/2014 - 00:43

Google Play services 5.0 is now rolled out to devices worldwide, and it includes a number of features you can use to improve your apps. This release introduces Android wearable services APIs, Dynamic Security Provider and App Indexing, whilst also including updates to the Google Play game services, Cast, Drive, Wallet, Analytics, and Mobile Ads.

Android wearable services

Google Play services 5.0 introduces a set of APIs that make it easier to communicate with your apps running on Android wearables. The APIs provide an automatically synchronized, persistent data store and a low-latency messaging interface that let you sync data, exchange control messages, and transfer assets.
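
To make this concrete, here is a minimal sketch of writing to the synchronized data store; the path "/step-count", the key and the helper method are illustrative, not taken from the release notes:

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.wearable.PutDataMapRequest;
import com.google.android.gms.wearable.PutDataRequest;
import com.google.android.gms.wearable.Wearable;

// Writes a value into the synchronized data store; Play services delivers
// it to connected wearables when possible. Assumes the GoogleApiClient was
// built with .addApi(Wearable.API) and is already connected.
void syncStepCount(GoogleApiClient client, int steps) {
    PutDataMapRequest mapRequest = PutDataMapRequest.create("/step-count");
    mapRequest.getDataMap().putInt("steps", steps);
    PutDataRequest request = mapRequest.asPutDataRequest();
    Wearable.DataApi.putDataItem(client, request);
}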

Dynamic security provider

Google Play services now provides an API that apps can use to easily install a dynamic security provider. The dynamic security provider includes a replacement for the platform's secure networking APIs, which can be updated frequently for rapid delivery of security patches. The current version includes fixes for recent issues identified in OpenSSL.
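
A minimal sketch of the install call (run it early, off the main thread, before opening any secure connections):

import android.content.Context;
import com.google.android.gms.common.GooglePlayServicesNotAvailableException;
import com.google.android.gms.common.GooglePlayServicesRepairableException;
import com.google.android.gms.security.ProviderInstaller;

// Synchronously installs the updated security provider for this process,
// replacing the platform default.
void installSecurityProvider(Context context) {
    try {
        ProviderInstaller.installIfNeeded(context);
    } catch (GooglePlayServicesRepairableException e) {
        // Play services is out of date or disabled; prompt the user to fix it.
    } catch (GooglePlayServicesNotAvailableException e) {
        // Play services is missing; fall back to the platform provider.
    }
}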

Google Play game services

Quests are a new set of APIs to run time-based goals for players, and reward them without needing to update the game. To do this, you can send game activity data to the Quests service whenever a player successfully wins a level, kills an alien, or saves a rare black sheep, for example. This tells Quests what’s going on in the game, and you can use that game activity to create new Quests. By running Quests on a regular basis, you can create an unlimited number of new player experiences to drive re-engagement and retention.
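
Reporting game activity goes through game events; the sketch below assumes the Events API call shape and an event defined in the Play Developer Console (the event ID and helper name are placeholders):

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.games.Games;

// Report that the player saved one rare black sheep. The event ID is a
// placeholder for one created in the Play Developer Console; any Quests
// built on that event are updated automatically by the service.
void reportSheepSaved(GoogleApiClient client, String sheepSavedEventId) {
    Games.Events.increment(client, sheepSavedEventId, 1);
}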

Saved games lets you store a player's game progress to the cloud for use across many screens, using a new saved game snapshot API. Along with game progress, you can store a cover image, description and time played. Players never have to play level 1 again when their progress is stored with Google, and they can see where they left off when you attach a cover image and description. Adding cover images and descriptions provides additional context on the player's progress and helps drive re-engagement through the Play Games app.
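
A rough sketch of committing progress with the snapshot API follows; the save-slot name, description and callback wiring are illustrative, and the exact method names should be checked against the Games SDK reference:

import android.graphics.Bitmap;
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.common.api.ResultCallback;
import com.google.android.gms.games.Games;
import com.google.android.gms.games.snapshot.Snapshot;
import com.google.android.gms.games.snapshot.SnapshotMetadataChange;
import com.google.android.gms.games.snapshot.Snapshots;

// Opens (or creates) a named save slot, writes the serialized progress,
// and commits it along with the cover image and description shown in the
// Play Games app.
void saveProgress(final GoogleApiClient client, final byte[] progress,
                  final Bitmap cover) {
    Games.Snapshots.open(client, "save_slot_0", true).setResultCallback(
            new ResultCallback<Snapshots.OpenSnapshotResult>() {
                @Override
                public void onResult(Snapshots.OpenSnapshotResult result) {
                    Snapshot snapshot = result.getSnapshot();
                    snapshot.writeBytes(progress);
                    SnapshotMetadataChange change =
                            new SnapshotMetadataChange.Builder()
                                    .setDescription("Level 12, castle gate")
                                    .setCoverImage(cover)
                                    .build();
                    Games.Snapshots.commitAndClose(client, snapshot, change);
                }
            });
}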

App Indexing API

The App Indexing API provides a way for you to notify Google about deep links in your native mobile applications and drive additional user engagement. Integrating with the App Indexing API allows the Google Search app to serve up your app's history to users as instant Search suggestions, providing fast and easy access to inner pages in your app. The deep links reported using the App Indexing API are also used by Google to index your app's content and surface it as deep links in Google search results.
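
A sketch of reporting a page view, assuming the v1 AppIndexApi call shape; the android-app:// URI, web URL and title are fabricated for illustration:

import android.app.Activity;
import android.net.Uri;
import com.google.android.gms.appindexing.AppIndex;
import com.google.android.gms.common.api.GoogleApiClient;

// Placeholder deep link for this screen and its web counterpart.
static final Uri APP_URI =
        Uri.parse("android-app://com.example.recipes/http/example.com/recipes/42");
static final Uri WEB_URL = Uri.parse("http://example.com/recipes/42");

// Call when the screen becomes visible (e.g. in onStart) and pair it with
// AppIndex.AppIndexApi.viewEnd(...) when it goes away. Assumes the client
// was built with .addApi(AppIndex.APP_INDEX_API).
void recordPageView(GoogleApiClient client, Activity activity) {
    AppIndex.AppIndexApi.view(client, activity, APP_URI, "Recipe 42", WEB_URL, null);
}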

Google Cast

The Google Cast SDK now includes media tracks that introduce closed caption support for Chromecast.
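
Declaring a caption track looks roughly like this (the track ID, name and content URL are placeholders):

import com.google.android.gms.cast.MediaTrack;

// A closed-caption text track to include in the MediaInfo sent to the
// receiver; the content ID points at a WebVTT file.
MediaTrack captions = new MediaTrack.Builder(1L, MediaTrack.TYPE_TEXT)
        .setSubtype(MediaTrack.SUBTYPE_CAPTIONS)
        .setName("English captions")
        .setContentId("http://example.com/captions_en.vtt")
        .setLanguage("en-US")
        .build();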

Drive

The Google Drive API adds the ability to sort query results, create folders offline, and select any mime type in the file picker by default.
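
A short sketch of a sorted query (the sort field is just an example):

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.drive.Drive;
import com.google.android.gms.drive.query.Query;
import com.google.android.gms.drive.query.SortOrder;
import com.google.android.gms.drive.query.SortableField;

// Queries Drive for files sorted by modification date, newest first.
void queryNewestFirst(GoogleApiClient client) {
    SortOrder sortOrder = new SortOrder.Builder()
            .addSortDescending(SortableField.MODIFIED_DATE)
            .build();
    Query query = new Query.Builder()
            .setSortOrder(sortOrder)
            .build();
    Drive.DriveApi.query(client, query);
}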

Wallet

Wallet objects from Google take physical objects (like loyalty cards, offers) from your wallet and store them in the cloud. In this release, Wallet adds "Save to Wallet" button support for offers. When a user clicks "Save to Wallet" the offer gets saved and shows up in the user's Google Wallet app. Geo-fenced in-store notifications prompt the user to show and scan digital cards at point-of-sale, driving higher redemption. This also frees the user from having to carry around offers and loyalty cards.

Users can also now use their Google Wallet Balance to pay for Instant Buy transactions by providing split tender support. With split tender, if your Wallet Balance is not sufficient, the payment is split between your Wallet Balance and a credit/debit card in your Google Wallet.

Analytics

Enhanced Ecommerce provides visibility into the full customer journey, adding the ability to measure product impressions, product clicks, viewing of product details, adding a product to a shopping cart, initiating the checkout process, internal promotions, transactions, and refunds. Together these help users gain deeper insights into the performance of their business, including how far users progress through the shopping funnel and where they abandon the purchase process. Enhanced Ecommerce also allows users to analyze the effectiveness of their marketing and merchandising efforts, including the impact of internal promotions, coupons, and affiliate marketing programs.
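
A purchase measured with Enhanced Ecommerce might look like the sketch below; the product, IDs and amounts are fabricated, and the builder shape is assumed from the v4 Analytics SDK:

import com.google.android.gms.analytics.HitBuilders;
import com.google.android.gms.analytics.Tracker;
import com.google.android.gms.analytics.ecommerce.Product;
import com.google.android.gms.analytics.ecommerce.ProductAction;

// Sends a purchase hit carrying one product and a transaction.
void trackPurchase(Tracker tracker) {
    Product product = new Product()
            .setId("P12345")            // placeholder SKU
            .setName("Copper Kettle")
            .setPrice(29.99);
    ProductAction action = new ProductAction(ProductAction.ACTION_PURCHASE)
            .setTransactionId("T12345") // placeholder transaction
            .setTransactionRevenue(29.99);
    HitBuilders.EventBuilder builder = new HitBuilders.EventBuilder()
            .setCategory("Ecommerce")
            .setAction("Purchase");
    builder.addProduct(product).setProductAction(action);
    tracker.send(builder.build());
}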

Mobile Ads

Google Mobile Ads are a great way to monetise your apps and you now have access to better in-app purchase ads. We've now added a default implementation for consumable purchases using the Google Play In-app Billing service.

And that’s another release of Google Play services. The updated Google Play services SDK is now available through the Android SDK Manager. For details on the APIs, please see New Features in Google Play services 5.0.

Categories: Programming