
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Docs, Sheets and Forms add-ons now open to all developers

Google Code Blog - Thu, 04/23/2015 - 18:33

Posted by Saurabh Gupta, Product Manager

Back in 2014, we introduced add-ons for Google Docs, Sheets, and Forms in developer preview. Since then, the developer community has built a wide variety of features to help millions of Docs, Sheets and Forms users become more productive. Over the last few months, we launched a number of developer-friendly features that made it easier to build, test, deploy and distribute add-ons. Some key capabilities include:

With these features under our belt, we are ready to graduate add-ons out of developer preview. Starting today, any developer can publish an add-on. To ensure users find the best tools for them, every new add-on will undergo a review for adherence to our guidelines before it’s available in the add-ons store.

We can’t wait to see what you will build!

Categories: Programming

Build a Risk Adjusted Project Plan in 6 Steps

Herding Cats - Glen Alleman - Thu, 04/23/2015 - 17:35

When we hear about project planning and scheduling, we think of tasks being planned, organized in the proper order, durations assigned to the work, and resources committed to perform it.

This is all well and good, but without a risk-adjusted view of the planned work, the result is going to be disappointing at best. There are a few key root causes of most project failures. Let's start here.

[Figure: key root causes of project failure]

Each of these has been shown, through research on failed programs, to contribute to cost and schedule impacts: unrealistic expectations of the project's deliverables, technical issues, naive cost and schedule estimating, and less-than-acceptable risk mitigation planning.

Project Management in Six Steps

[Figure: project management in six steps]

Here's how to address cost and schedule estimating:

Develop a schedule. Whatever your feelings are about Gantt charts, sticky notes, or any handwaving processes you've learned to use, you need a sequence of the work, the dependencies, and the planned durations. Without something like that, you have no idea what work is needed to complete the project. Here's a straightforward Master Schedule for some flight avionics on a small vehicle. It's all software, and it has to complete as planned, otherwise the users can't do their job as planned. And since they're the ones paying for our work, they have an expectation of us showing up, near our budget, with the needed capabilities. Not the minimum, the NEEDED ones.

[Figure: Integrated Master Schedule for the flight avionics example]

Using a Monte Carlo tool (RiskyProject), here is a run showing the probabilities for cost, duration, and completion dates. All project work is probabilistic; any notion that a deterministic plan can be successful is going to result in disappointment.

We usually call our planning sessions done when we can get the risk-adjusted Integrated Master Schedule to show a completion date on or before the need date at the 80% confidence level.
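
RiskyProject is a commercial tool, but the underlying idea is easy to sketch in R. In the hypothetical example below (the three tasks and their triangular duration ranges are invented for illustration), task durations are sampled many times, summed along a simple sequential schedule, and the results give the probability of finishing by a need date and the duration we could commit to at the 80% confidence level:

# Minimal Monte Carlo sketch of a risk-adjusted schedule (illustrative only).
rtriangle <- function(n, low, mode, high) {
  # Inverse-transform sampling from a triangular distribution.
  u <- runif(n)
  f <- (mode - low) / (high - low)
  ifelse(u < f,
         low  + sqrt(u * (high - low) * (mode - low)),
         high - sqrt((1 - u) * (high - low) * (high - mode)))
}

set.seed(42)
n <- 10000
# Three sequential tasks: durations in working days (low, most likely, high).
total <- rtriangle(n, 18, 20, 30) +   # requirements and design
         rtriangle(n, 35, 40, 60) +   # implementation
         rtriangle(n, 12, 15, 25)     # integration and test

mean(total <= 90)        # probability of finishing on or before a 90-day need date
quantile(total, 0.80)    # duration we can commit to at the 80% confidence level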

[Figure: Monte Carlo results for cost, duration, and completion dates]

With a resource-loaded schedule - or some external time-phased cost model - we can now show the probability of completing on or before the need date, and at or below the planned budget. The chart below informs everyone what the chances of success are for the cost and schedule aspects of the project. Technically it has to work, and the customer gets to say that: through the Fit for Use and Fit for Purpose measures if we're in an IT Governance paradigm, or the Measures of Effectiveness and Measures of Performance in other paradigms. Those measures can be modeled as well, but I'm just focusing on cost and schedule here.

[Figure: probability of completing on or before the need date and at or below the planned budget]

With this information we can produce the needed margin and management reserve to protect the delivery date and budget. Showing up late and over budget with a product that works is usually not good business.

Do You Need all This?

What's the Value At Risk?

  • A $10,000 warehouse database app update? Certainly not. Just do it.
  • A website for gaming? Probably not; just start and let the market tell you where to go. Try to forecast periodically what features are coming and how much they'll cost, so those paying get a sense of the value returned for their investment.
  • Product development, with a planned launch date, planned sell price, planned cost (so we can make money), and planned features? You probably need some visibility into what's going to be happening in the future.
  • Enterprise IT, say an ERP system, worth tens of millions? Better have a plan.

Don't know the value at risk? Don't have a clear vision of what done looks like in units of measure meaningful to the decision makers? You've got bigger problems, and this approach won't help.

Related articles: Calculating Value from Software Projects - Estimating is a Risk Reduction Process | Incremental Delivery of Features May Not Be Desirable | Decision Analysis and Software Project Management | How to Avoid the "Yesterday's Weather" Estimating Problem | Hope is not a Strategy | Critical Success Factors of IT Forecasting
Categories: Project Management

I’m Not Sure I Want To Be A Specialist

Making the Complex Simple - John Sonmez - Thu, 04/23/2015 - 16:00

In this episode, I answer an email from a generalist who asked me – what if he doesn’t want to be a specialist? Watch the video and find out what my thoughts are regarding specialization. Full transcript: John: Hey, this is John Sonmez from simpleprogrammer.com. I’ve got an email here that I want to read […]

The post I’m Not Sure I Want To Be A Specialist appeared first on Simple Programmer.

Categories: Programming

How to deploy High Available persistent Docker services using CoreOS and Consul

Xebia Blog - Thu, 04/23/2015 - 15:45

Providing High Availability to stateless applications is pretty trivial as was shown in the previous blog posts A High Available Docker Container Platform and Rolling upgrade of Docker applications using CoreOS and Consul. But how does this work when you have a persistent service like Redis?

In this blog post we will show you how a persistent service like Redis can be moved around on machines in the cluster, whilst preserving the state. The key is to deploy a fleet mount configuration into the cluster and mount the storage in the Docker container that has persistent data.

To support persistency we have added a NAS to our platform architecture, in the form of three independent NFS servers, as shown in the picture below.

[Figure: CoreOS platform architecture with fake NAS]

The applications are still deployed in the CoreOS cluster as Docker containers. Even our Redis instance is running in a Docker container. Our application is configured using the following three Fleet unit files:

The unit file of the Redis server is the most interesting one because it is our persistence service. In the unit section of the file, it first declares that it requires a mount for '/mnt/data' on which it will persist its data.

[Unit]
Description=app-redis
Requires=mnt-data.mount
After=mnt-data.mount
RequiresMountsFor=/mnt/data

In the start clause of the redis service, a specific subdirectory of /mnt/data is mounted into the container.

...
ExecStart=/usr/bin/docker run --rm \
    --name app-redis \
    -v /mnt/data/app-redis-data:/data \
    -p 6379:6379 \
    redis
...

The mnt-data.mount unit file is quite simple: it defines an NFS mount with the option 'noauto', indicating that the device should not be automatically mounted at boot time. The unit file has the option 'Global=true' so that the mount is distributed to all the nodes in the cluster. The mount is only activated when another unit requests it.

[Mount]
What=172.17.8.200:/mnt/default/data
Where=/mnt/data
Type=nfs
Options=vers=3,sec=sys,noauto

[X-Fleet]
Global=true

Please note that the NFS mount specifies system security (sec=sys) and uses the NFS version 3 protocol, to avoid all sorts of errors surrounding mismatches in user and group ids between the client and the server.

Preparing the application

To see the failover in action, you need to start the platform and deploy the application:

git clone https://github.com/mvanholsteijn/coreos-container-platform-as-a-service.git
cd coreos-container-platform-as-a-service/vagrant
vagrant up
./is_platform_ready.sh

This will start 3 NFS servers and our 3-node CoreOS cluster. After that is done, you can deploy the application by first loading the mount unit file:

export FLEETCTL_TUNNEL=127.0.0.1:2222
cd ../fleet-units/app
fleetctl load mnt-data.mount

then starting the Redis service:

fleetctl start app-redis.service

and finally starting a number of instances of the application:

fleetctl submit app-hellodb@.service
fleetctl load app-hellodb@{1..3}.service
fleetctl start app-hellodb@{1..3}.service

You can check that everything is running by issuing the fleetctl list-units command. It should show something like this:

fleetctl list-units
UNIT			MACHINE				ACTIVE		SUB
app-hellodb@1.service	8f7472a6.../172.17.8.102	active		running
app-hellodb@2.service	b44a7261.../172.17.8.103	active		running
app-hellodb@3.service	2c19d884.../172.17.8.101	active		running
app-redis.service	2c19d884.../172.17.8.101	active		running
mnt-data.mount		2c19d884.../172.17.8.101	active		mounted
mnt-data.mount		8f7472a6.../172.17.8.102	inactive	dead
mnt-data.mount		b44a7261.../172.17.8.103	inactive	dead

As you can see, three app-hellodb instances are running and the Redis service is running on 172.17.8.101, which is the only host that has /mnt/data mounted. The other two machines have this mount in the status 'dead', which is an unfriendly name for stopped.

Now you can access the app:

yes 'curl hellodb.127.0.0.1.xip.io:8080; echo ' | head -10 | bash
..
Hello World! I have been seen 20 times.
Hello World! I have been seen 21 times.
Hello World! I have been seen 22 times.
Hello World! I have been seen 23 times.
Hello World! I have been seen 24 times.
Hello World! I have been seen 25 times.
Hello World! I have been seen 26 times.
Hello World! I have been seen 27 times.
Hello World! I have been seen 28 times.
Hello World! I have been seen 29 times.
Redis Fail-over in Action

To see the fail-over in action, start a monitor on a machine not running Redis; in our case, the machine running app-hellodb@1.

vagrant ssh -c \
   "yes 'curl --max-time 2 hellodb.127.0.0.1.xip.io; sleep 1 ' | \
    bash" \
    app-hellodb@1.service

Now restart the redis machine:

vagrant ssh -c "sudo shutdown -r now" app-redis.service

After you restart the machine running Redis, the output should look something like this:

...
Hello World! I have been seen 1442 times.
Hello World! I have been seen 1443 times.
Hello World! I have been seen 1444 times.
Hello World! Cannot tell you how many times I have been seen.
	(Error 111 connecting to redis:6379. Connection refused.)
curl: (28) Operation timed out after 2004 milliseconds with 0 out of -1 bytes received
curl: (28) Operation timed out after 2007 milliseconds with 0 out of -1 bytes received
Hello World! I have been seen 1445 times.
Hello World! I have been seen 1446 times.
curl: (28) Operation timed out after 2004 milliseconds with 0 out of -1 bytes received
curl: (28) Operation timed out after 2004 milliseconds with 0 out of -1 bytes received
Hello World! I have been seen 1447 times.
Hello World! I have been seen 1448 times.
..

Notice that the distribution of your units has changed after the reboot.

fleetctl list-units
...
UNIT			MACHINE				ACTIVE		SUB
app-hellodb@1.service	3376bf5c.../172.17.8.103	active		running
app-hellodb@2.service	ff0e7fd5.../172.17.8.102	active		running
app-hellodb@3.service	3376bf5c.../172.17.8.103	active		running
app-redis.service	ff0e7fd5.../172.17.8.102	active		running
mnt-data.mount		309daa5a.../172.17.8.101	inactive	dead
mnt-data.mount		3376bf5c.../172.17.8.103	inactive	dead
mnt-data.mount		ff0e7fd5.../172.17.8.102	active		mounted
Conclusion

We now have the basis for a truly immutable infrastructure setup: the entire CoreOS cluster including the application can be destroyed and a completely identical environment can be resurrected within a few minutes!

  • Once you have a reliable external persistent store, CoreOS can help you migrate persistent services just as easily as stateless services. We chose an NFS server for ease of use in this setup, but nothing prevents you from mounting other kinds of storage systems for your application.
  • Consul excels in providing fast and dynamic service discovery, allowing the Redis service to migrate to a different machine and the application instances to find the new address of the Redis service through a simple DNS lookup!

 

Just Say No to More End-to-End Tests

Google Testing Blog - Thu, 04/23/2015 - 00:10
by Mike Wacker

At some point in your life, you can probably recall a movie that you and your friends all wanted to see, and that you and your friends all regretted watching afterwards. Or maybe you remember that time your team thought they’d found the next "killer feature" for their product, only to see that feature bomb after it was released.

Good ideas often fail in practice, and in the world of testing, one pervasive good idea that often fails in practice is a testing strategy built around end-to-end tests.

Testers can invest their time in writing many types of automated tests, including unit tests, integration tests, and end-to-end tests, but this strategy invests mostly in end-to-end tests that verify the product or service as a whole. Typically, these tests simulate real user scenarios.
End-to-End Tests in Theory

While relying primarily on end-to-end tests is a bad idea, one could certainly convince a reasonable person that the idea makes sense in theory.

To start, number one on Google's list of ten things we know to be true is: "Focus on the user and all else will follow." Thus, end-to-end tests that focus on real user scenarios sound like a great idea. Additionally, this strategy broadly appeals to many constituencies:
  • Developers like it because it offloads most, if not all, of the testing to others. 
  • Managers and decision-makers like it because tests that simulate real user scenarios can help them easily determine how a failing test would impact the user. 
  • Testers like it because they often worry about missing a bug or writing a test that does not verify real-world behavior; writing tests from the user's perspective often avoids both problems and gives the tester a greater sense of accomplishment. 
End-to-End Tests in Practice

So if this testing strategy sounds so good in theory, then where does it go wrong in practice? To demonstrate, I present the following composite sketch based on a collection of real experiences familiar to both myself and other testers. In this sketch, a team is building a service for editing documents online (e.g., Google Docs).

Let's assume the team already has some fantastic test infrastructure in place. Every night:
  1. The latest version of the service is built. 
  2. This version is then deployed to the team's testing environment. 
  3. All end-to-end tests then run against this testing environment. 
  4. An email report summarizing the test results is sent to the team.

The deadline is approaching fast as our team codes new features for their next release. To maintain a high bar for product quality, they also require that at least 90% of their end-to-end tests pass before features are considered complete. Currently, that deadline is one day away:

Days Left   Pass %   Notes
1           5%       Everything is broken! Signing in to the service is broken. Almost all tests sign in a user, so almost all tests failed.
0           4%       A partner team we rely on deployed a bad build to their testing environment yesterday.
-1          54%      A dev broke the save scenario yesterday (or the day before?). Half the tests save a document at some point in time. Devs spent most of the day determining if it's a frontend bug or a backend bug.
-2          54%      It's a frontend bug, devs spent half of today figuring out where.
-3          54%      A bad fix was checked in yesterday. The mistake was pretty easy to spot, though, and a correct fix was checked in today.
-4          1%       Hardware failures occurred in the lab for our testing environment.
-5          84%      Many small bugs hiding behind the big bugs (e.g., sign-in broken, save broken). Still working on the small bugs.
-6          87%      We should be above 90%, but are not for some reason.
-7          89.54%   (Rounds up to 90%, close enough.) No fixes were checked in yesterday, so the tests must have been flaky yesterday.

Analysis

Despite numerous problems, the tests ultimately did catch real bugs.

What Went Well 
  • Customer-impacting bugs were identified and fixed before they reached the customer.

What Went Wrong 
  • The team completed their coding milestone a week late (and worked a lot of overtime). 
  • Finding the root cause for a failing end-to-end test is painful and can take a long time. 
  • Partner failures and lab failures ruined the test results on multiple days. 
  • Many smaller bugs were hidden behind bigger bugs. 
  • End-to-end tests were flaky at times. 
  • Developers had to wait until the following day to know if a fix worked or not. 

So now that we know what went wrong with the end-to-end strategy, we need to change our approach to testing to avoid many of these problems. But what is the right approach?
The True Value of Tests

Typically, a tester's job ends once they have a failing test. A bug is filed, and then it's the developer's job to fix the bug. To identify where the end-to-end strategy breaks down, however, we need to think outside this box and approach the problem from first principles. If we "focus on the user (and all else will follow)," we have to ask ourselves how a failing test benefits the user. Here is the answer:

A failing test does not directly benefit the user. 

While this statement seems shocking at first, it is true. If a product works, it works, whether a test says it works or not. If a product is broken, it is broken, whether a test says it is broken or not. So, if failing tests do not benefit the user, then what does benefit the user?

A bug fix directly benefits the user.

The user will only be happy when that unintended behavior - the bug - goes away. Obviously, to fix a bug, you must know the bug exists. To know the bug exists, ideally you have a test that catches the bug (because the user will find the bug if the test does not). But in that entire process, from failing test to bug fix, value is only added at the very last step.

Stage         Failing Test   Bug Opened   Bug Fixed
Value Added   No             No           Yes

Thus, to evaluate any testing strategy, you cannot just evaluate how it finds bugs. You also must evaluate how it enables developers to fix (and even prevent) bugs.
Building the Right Feedback Loop

Tests create a feedback loop that informs the developer whether the product is working or not. The ideal feedback loop has several properties:
  • It's fast. No developer wants to wait hours or days to find out if their change works. Sometimes the change does not work - nobody is perfect - and the feedback loop needs to run multiple times. A faster feedback loop leads to faster fixes. If the loop is fast enough, developers may even run tests before checking in a change. 
  • It's reliable. No developer wants to spend hours debugging a test, only to find out it was a flaky test. Flaky tests reduce the developer's trust in the test, and as a result flaky tests are often ignored, even when they find real product issues. 
  • It isolates failures. To fix a bug, developers need to find the specific lines of code causing the bug. When a product contains millions of lines of code, and the bug could be anywhere, it's like trying to find a needle in a haystack. 
Think Smaller, Not Larger

So how do we create that ideal feedback loop? By thinking smaller, not larger.

Unit Tests

Unit tests take a small piece of the product and test that piece in isolation. They tend to create that ideal feedback loop:

  • Unit tests are fast. We only need to build a small unit to test it, and the tests also tend to be rather small. In fact, one tenth of a second is considered slow for unit tests. 
  • Unit tests are reliable. Simple systems and small units in general tend to suffer much less from flakiness. Furthermore, best practices for unit testing - in particular practices related to hermetic tests - will remove flakiness entirely. 
  • Unit tests isolate failures. Even if a product contains millions of lines of code, if a unit test fails, you only need to search that small unit under test to find the bug. 

Writing effective unit tests requires skills in areas such as dependency management, mocking, and hermetic testing. I won't cover these skills here, but as a start, the typical example offered to new Googlers (or Nooglers) is how Google builds and tests a stopwatch.
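
The stopwatch itself isn't shown in the post, but the underlying principle - inject the dependency that makes a test slow or flaky - can be sketched in R with the testthat package. Everything here (the make_stopwatch function and the fake clock) is hypothetical, purely to illustrate a hermetic unit test:

library(testthat)

# A small "unit": a stopwatch that asks an injected clock for the current time
# instead of calling Sys.time() directly.
make_stopwatch <- function(clock = Sys.time) {
  started <- NULL
  list(
    start   = function() started <<- clock(),
    elapsed = function() as.numeric(clock()) - as.numeric(started)
  )
}

test_that("stopwatch measures elapsed time using the injected clock", {
  fake_now <- 0
  sw <- make_stopwatch(clock = function() fake_now)  # hermetic: no real time used
  sw$start()
  fake_now <- 42                                     # advance the fake clock
  expect_equal(sw$elapsed(), 42)
})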

Unit Tests vs. End-to-End Tests

With end-to-end tests, you have to wait: first for the entire product to be built, then for it to be deployed, and finally for all end-to-end tests to run. When the tests do run, flaky tests tend to be a fact of life. And even if a test finds a bug, that bug could be anywhere in the product.

Although end-to-end tests do a better job of simulating real user scenarios, this advantage quickly becomes outweighed by all the disadvantages of the end-to-end feedback loop:

                        Unit   End-to-End
Fast                    Yes    No
Reliable                Yes    No
Isolates Failures       Yes    No
Simulates a Real User   No     Yes

Integration Tests

Unit tests do have one major disadvantage: even if the units work well in isolation, you do not know if they work well together. But even then, you do not necessarily need end-to-end tests. For that, you can use an integration test. An integration test takes a small group of units, often two units, and tests their behavior as a whole, verifying that they coherently work together.

If two units do not integrate properly, why write an end-to-end test when you can write a much smaller, more focused integration test that will detect the same bug? While you do need to think larger, you only need to think a little larger to verify that units work together.
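
As a small illustration of thinking only a little larger, here is a minimal integration test in R using testthat. The two units (a price calculator and an invoice formatter) are hypothetical and deliberately trivial; the test only verifies that they work together, without standing up the whole product:

library(testthat)

# Unit 1: compute a total price including a flat tax rate.
total_price <- function(net, tax_rate = 0.21) net * (1 + tax_rate)

# Unit 2: render an amount as an invoice line.
invoice_line <- function(label, amount) sprintf("%s: EUR %.2f", label, amount)

test_that("calculator and formatter integrate correctly", {
  line <- invoice_line("Total", total_price(100))
  expect_equal(line, "Total: EUR 121.00")
})
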
Testing Pyramid

Even with both unit tests and integration tests, you probably still will want a small number of end-to-end tests to verify the system as a whole. To find the right balance between all three test types, the best visual aid to use is the testing pyramid. Here is a simplified version of the testing pyramid from the opening keynote of the 2014 Google Test Automation Conference:

[Figure: the testing pyramid - mostly unit tests at the base, fewer integration tests in the middle, few end-to-end tests at the top]



The bulk of your tests are unit tests at the bottom of the pyramid. As you move up the pyramid, your tests get larger, but at the same time the number of tests (the width of your pyramid) gets smaller.

As a good first guess, Google often suggests a 70/20/10 split: 70% unit tests, 20% integration tests, and 10% end-to-end tests. The exact mix will be different for each team, but in general, it should retain that pyramid shape. Try to avoid these anti-patterns:
  • Inverted pyramid/ice cream cone. The team relies primarily on end-to-end tests, using few integration tests and even fewer unit tests. 
  • Hourglass. The team starts with a lot of unit tests, then uses end-to-end tests where integration tests should be used. The hourglass has many unit tests at the bottom and many end-to-end tests at the top, but few integration tests in the middle. 
Just like a regular pyramid tends to be the most stable structure in real life, the testing pyramid also tends to be the most stable testing strategy.


Categories: Testing & QA

R: Replacing for loops with data frames

Mark Needham - Wed, 04/22/2015 - 23:18

In my last blog post I showed how to derive posterior probabilities for the Think Bayes dice problem:

Suppose I have a box of dice that contains a 4-sided die, a 6-sided die, an 8-sided die, a 12-sided die, and a 20-sided die. If you have ever played Dungeons & Dragons, you know what I am talking about.

Suppose I select a die from the box at random, roll it, and get a 6.
What is the probability that I rolled each die?

To recap, this was my final solution:

likelihoods = function(names, observations) {
  scores = rep(1.0 / length(names), length(names))  
  names(scores) = names
 
  for(name in names) {
      for(observation in observations) {
        if(name < observation) {
          scores[paste(name)]  = 0
        } else {
          scores[paste(name)] = scores[paste(name)] *  (1.0 / name)
        }        
      }
    }  
  return(scores)
}
 
dice = c(4,6,8,12,20)
l1 = likelihoods(dice, c(6))
 
> l1 / sum(l1)
        4         6         8        12        20 
0.0000000 0.3921569 0.2941176 0.1960784 0.1176471

Although it works, we have nested for loops, which aren't very idiomatic R, so let's try to get rid of them.

The first thing we want to do is return a data frame rather than a vector so we tweak the first two lines to read like this:

scores = rep(1.0 / length(names), length(names))  
df = data.frame(score = scores, name = names)

Next we can get rid of the inner for loop and replace it with a call to ifelse wrapped inside a dplyr mutate call:

library(dplyr)
likelihoods2 = function(names, observations) {
  scores = rep(1.0 / length(names), length(names))  
  df = data.frame(score = scores, name = names)
 
  for(observation in observations) {
    df = df %>% mutate(score = ifelse(name < observation, 0, score * (1.0 / name)) )
  }
 
  return(df)
}
 
dice = c(4,6,8,12,20)
l1 = likelihoods2(dice, c(6))
 
> l1
       score name
1 0.00000000    4
2 0.03333333    6
3 0.02500000    8
4 0.01666667   12
5 0.01000000   20

Finally we’ll tidy up the scores so they’re relatively weighted against each other:

likelihoods2 = function(names, observations) {
  scores = rep(1.0 / length(names), length(names))  
  df = data.frame(score = scores, name = names)
 
  for(observation in observations) {
    df = df %>% mutate(score = ifelse(name < observation, 0, score * (1.0 / name)) )
  }
 
  return(df %>% mutate(weighted = score / sum(score)) %>% select(name, weighted))
}
 
dice = c(4,6,8,12,20)
l1 = likelihoods2(dice, c(6))
 
> l1
  name  weighted
1    4 0.0000000
2    6 0.3921569
3    8 0.2941176
4   12 0.1960784
5   20 0.1176471

Now we’re down to just the one for loop. Getting rid of that one is a bit trickier. First we’ll create a data frame which contains a row for every (observation, dice) pair, simulating the nested for loops:

likelihoods3 = function(names, observations) {
  l = list(observation = observations, roll = names)
  obsDf = do.call(expand.grid,l) %>% 
    mutate(likelihood = 1.0 / roll, 
           score = ifelse(roll < observation, 0, likelihood))   
 
  return(obsDf)
}
 
dice = c(4,6,8,12,20)
l1 = likelihoods3(dice, c(6))
 
> l1
  observation roll likelihood      score
1           6    4 0.25000000 0.00000000
2           6    6 0.16666667 0.16666667
3           6    8 0.12500000 0.12500000
4           6   12 0.08333333 0.08333333
5           6   20 0.05000000 0.05000000
 
l2 = likelihoods3(dice, c(6, 4, 8, 7, 7, 2))
> l2
   observation roll likelihood      score
1            6    4 0.25000000 0.00000000
2            4    4 0.25000000 0.25000000
3            8    4 0.25000000 0.00000000
4            7    4 0.25000000 0.00000000
5            7    4 0.25000000 0.00000000
6            2    4 0.25000000 0.25000000
7            6    6 0.16666667 0.16666667
8            4    6 0.16666667 0.16666667
9            8    6 0.16666667 0.00000000
10           7    6 0.16666667 0.00000000
11           7    6 0.16666667 0.00000000
12           2    6 0.16666667 0.16666667
13           6    8 0.12500000 0.12500000
14           4    8 0.12500000 0.12500000
15           8    8 0.12500000 0.12500000
16           7    8 0.12500000 0.12500000
17           7    8 0.12500000 0.12500000
18           2    8 0.12500000 0.12500000
19           6   12 0.08333333 0.08333333
20           4   12 0.08333333 0.08333333
21           8   12 0.08333333 0.08333333
22           7   12 0.08333333 0.08333333
23           7   12 0.08333333 0.08333333
24           2   12 0.08333333 0.08333333
25           6   20 0.05000000 0.05000000
26           4   20 0.05000000 0.05000000
27           8   20 0.05000000 0.05000000
28           7   20 0.05000000 0.05000000
29           7   20 0.05000000 0.05000000
30           2   20 0.05000000 0.05000000

Now we need to iterate over the data frame, grouping by ‘roll’ so that we end up with one row for each one.

We’ll add a new column which stores the posterior probability for each dice. This will be calculated by multiplying the prior probability by the product of the ‘score’ entries.

This is what our new likelihood function looks like:

likelihoods3 = function(names, observations) {
  l = list(observation = observations, roll = names)
  obsDf = do.call(expand.grid,l) %>% 
    mutate(likelihood = 1.0 / roll, 
           score = ifelse(roll < observation, 0, likelihood))   
 
  return (obsDf %>% 
    group_by(roll) %>% 
    summarise(s = (1.0/length(names)) * prod(score) ) %>%
    ungroup() %>% 
    mutate(weighted = s / sum(s)) %>%
    select(roll, weighted))
}
 
l1 = likelihoods3(dice, c(6))
> l1
Source: local data frame [5 x 2]
 
  roll  weighted
1    4 0.0000000
2    6 0.3921569
3    8 0.2941176
4   12 0.1960784
5   20 0.1176471
 
l2 = likelihoods3(dice, c(6, 4, 8, 7, 7, 2))
> l2
Source: local data frame [5 x 2]
 
  roll    weighted
1    4 0.000000000
2    6 0.000000000
3    8 0.915845272
4   12 0.080403426
5   20 0.003751302

We’ve now got the same result as we did with our nested for loops so I think the refactoring has been a success.

Categories: Programming

New Android Code Samples

Android Developers Blog - Wed, 04/22/2015 - 22:47

Posted by Rich Hyndman, Developer Advocate

A new set of Android code samples, covering Android Wear, Android for Work, NFC and Screen capturing, have been committed to our Google Samples repository on GitHub. Here’s a summary of the new code samples:

XYZTouristAttractions

This sample mimics a real world mobile and Android Wear app. It has a more refined design and also provides a practical example of how a mobile app would interact and communicate with its Wear counterpart.

The app itself is modeled after a hypothetical tourist attractions experience that notifies the user when they are in close proximity to notable points of interest. In parallel, the Wear component shows tourist attraction images and summary information, and provides quick actions for nearby tourist attractions in a GridViewPager UI component.

DeviceOwner - A Device Owner is a specialized type of device administrator that can control device security and configuration. This sample uses the DevicePolicyManager to demonstrate how to use device owner features, including configuring global settings (e.g. automatic time and time-zone) and setting the default launcher.

NfcProvisioning - This sample demonstrates how to use NFC to provision a device with a device owner. This sample sets up the peer device with the DeviceOwner sample by default. You can rewrite the configuration to use any other device owner.

NFC BeamLargeFiles - A demonstration of how to transfer large files via Android Beam on Android 4.1 and above. After the initial handshake over NFC, file transfer will take place over a secondary high-speed communication channel such as Bluetooth or WiFi Direct.

ScreenCapture - The MediaProjection API was added in Android Lollipop and allows you to easily capture screen contents and/or record system audio. The ScreenCapture sample demonstrates how to use the API to capture device screen in real time and show it on a SurfaceView.

As an additional bonus, the Santa Tracker Android app, including three games, two watch-faces and other goodies, was also recently open sourced and is now available on GitHub.

As with all the Android samples, you can also easily access these new additions in Android Studio using the built in Import Samples feature and they’re also available through our Samples Browser.

Check out a sample today to help you with your development!

Join the discussion on

+Android Developers
Categories: Programming

Android Developer Story: Jelly Button Games grows globally through data driven development

Android Developers Blog - Wed, 04/22/2015 - 17:55

Posted by Leticia Lago, Google Play team

For Jelly Button Games, understanding users is the key to creating and maintaining a successful game, particularly when growth relies on moving into overseas markets. The team makes extensive use of Google Analytics and Google BigQuery to analyze more than 3 billion events each month. By using this data, Jelly Button can pinpoint exactly where, when, and why people play their highly-rated game, Pirate Kings. Feeding this information back into development has driven active daily users up 1500 percent in just five months.

We caught up with Mor Shani, Moti Novo, and Ron Rejwan — some of the co-founders — in Tel Aviv, Israel, to discover how they created an international hit and keep it growing.


Learn about Google Analytics and taking your game to an international audience:

  • Analyze — discover the power of data from the Google Play Developer Console and Google Analytics.
  • Query — find out how Google BigQuery can help you extract the essential information you need from millions or billions of data points.
  • Localize — guide the localization of your app with best practices and tools.
Join the discussion on

+Android Developers
Categories: Programming

Drive app installs through App Indexing

Android Developers Blog - Wed, 04/22/2015 - 17:54

Posted by Lawrence Chang, Product Manager

You’ve invested time and effort into making your app an awesome experience, and we want to help people find the great content you’ve created. App Indexing has already been helping people engage with your Android app after they’ve installed it — we now have 30 billion links within apps indexed. Starting this week, people searching on Google can also discover your app if they haven’t installed it yet. If you’ve implemented App Indexing, when indexed content from your app is relevant to a search done on Google on Android devices, people may start to see app install buttons for your app in search results. Tapping these buttons will take them to the Google Play store where they can install your app, then continue straight on to the right content within it.

App installs through app indexing

With the addition of these install links, we are starting to use App Indexing as a ranking signal for all users on Android, regardless of whether they have your app installed or not. We hope that Search will now help you acquire new users, as well as re-engage your existing ones. To get started, visit g.co/AppIndexing and to learn more about the other ways you can integrate with Google Search, visit g.co/DeveloperSearch.

Join the discussion on

+Android Developers
Categories: Programming

The Story of a Digital Artist

I’m always on the hunt for people that do what makes them come alive.

Artists in particular are especially interesting for me, especially when they are able to do what they love.

I’ve known too many artists that lived painful lives, trying to be an artist, but never making ends meet.

I’ve also known too many artists that lived another life outside of art, but never really lived, because they never answered their calling.

I believe that in today’s world, there are a lot more options for you to live life on your terms.

With technology at our fingertips, it’s easier to connect with people around the world and share your art, whatever that may be.

On Sources of Insight, I’ve asked artist Rebecca Tsien to share her story:

Why I Draw People and Animals

It’s more than a story of a digital artist.   It’s a journey of fulfillment.

Rebecca has found a way to do what she loves.  She lives and breathes her passion.

Maybe her story can inspire you.

Maybe there’s a way you can do more art.

Categories: Architecture, Programming

The Reason We Plan, Schedule, Measure, and Correct

Herding Cats - Glen Alleman - Wed, 04/22/2015 - 15:56

The notion that planning is of little value seems to be lost on those not accountable for showing up on or before the need date, at or below the needed cost, and with the planned capabilities needed to fulfill the business case or successfully accomplish the mission.

Yogi says it best in our project management domain. And it bears repeating when someone says let's get started and we'll let the requirements emerge. Or my favorite: let's get started and we'll use our performance numbers to forecast future performance, we don't need no stink'in estimates.

Yogi says ...If you don't know where you are going, you'll end up someplace else.

This is actually a quote from Alice in Wonderland 

"Would you tell me, please, which way I ought to go from here?"
"That depends a good deal on where you want to get to," said the Cat.
"I don't much care where--" said Alice.
"Then it doesn't matter which way you go," said the Cat.
"--so long as I get SOMEWHERE," Alice added as an explanation.
"Oh, you're sure to do that," said the Cat, "if you only walk long enough."
(Alice's Adventures in Wonderland, Chapter 6)

This is often misquoted as If you don't know where you're going, any road will get you there - which is in fact technically not possible and is not from Alice.

So What To Do?

We need a plan to deliver the value that is being exchanged for the cost of that value. We can't determine the resulting value - the benefit - until we first know the cost to produce that value, and then we have to know when that value will arrive.

  • Arriving late and over budget diminishes the value for a higher cost, since arriving late means we've had to pay more in labor - people continue to work on producing the value. That extra cost diminishes the value.
  • Arriving with less than the needed capabilities diminishes the value for the same cost.

Both of these conditions are basic Managerial Finance 101 concepts based on Return on Investment.

ROI = (Value - Cost) / Cost
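
A small worked example (the numbers are invented) shows how arriving late squeezes the return: labor keeps burning, so cost rises, while the delayed value is worth less.

# Worked ROI example with invented numbers.
roi <- function(value, cost) (value - cost) / cost

roi(value = 1200000, cost = 800000)   # on time: 0.50, a 50% return
roi(value = 1100000, cost = 950000)   # three months late: ~0.16, roughly a 16% return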

The first thing some will say is but value can't be monetized. Ask the CFO of your firm to see what she thinks about monetizing the outcomes of your work on the balance sheet. Better yet, don't embarrass yourself; read Essentials of Managerial Finance, Brigham and Weston. Mine is the 11th Edition; it looks like it's up to the 14th Edition.

As well, once it is established that both cost and value are random variables describing measurements in the future, you'll need to estimate them to some degree of confidence if you're going to make decisions. These decisions are actual opportunity cost decisions about future outcomes. This is the basis of the microeconomics of software development.

[Figure: the Plan-Do-Check-Act (PDCA) cycle]

So when you hear we can make decisions about outcomes in the future in the presence of uncertainty - the basis of project work - without estimating, don't believe a word of it. Instead read Weston and you too will have the foundational skills to know better.

Because the closed loop management processes we need on project and product development require that we make decisions in the presence of uncertainty. There is simply no way to do that without estimating all the variances when we Plan, Do, Check, Act. Each is a random variable, with random outcomes, and each requires some assessment of what will happen if I do this. And that notion of let's just try it reminds me of my favorite picture of open loop - no estimates, no measurement, no corrective action - management.

[Figure: Sailing Over The Edge]

 

Categories: Project Management

Software Development Linkopedia April 2015

From the Editor of Methods & Tools - Wed, 04/22/2015 - 15:20
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about honest ignorance, canonical data model, global teams, recruiting developers, self-organization, requirements management, SOLID programming, developer training, retrospectives and Test-Driven Development (TDD). Blog: The Power of Not Knowing Blog: Why You Should Avoid a Canonical Data Model Blog: Managing global teams – Lessons learned Blog: The Sprint Burndown is dead, long live Confidence Smileys Blog: How to write a job post for development positions Blog: Why Self-Organizing is So Hard Article: Order Your ...

Scaling Agile? Keep it simple, scaler!

Xebia Blog - Wed, 04/22/2015 - 08:59

The promise of Agile is short-cycled value delivery, with the ability to adapt. This is achieved by focusing on the people who create value and optimising the way they work.

Scrum offers a limited set of roles and artefacts and a simple process framework that helps to implement the Agile values and to adhere to the Agile principles.

I have supported many organisations in adopting Agile as their mindset and culture. What puzzles me is that many larger organisations seem to think that Scrum is not enough in their context and they feel the need for something bigger and more complicated. As a result of this, more and more Agile transformations start with scaling Agile to fit their context and then try to make things less complex.

While the various scaling frameworks for Agile contain many useful and powerful tools to apply in situations that require them, applying a complete Agile scaling framework to an organisation from the get-go often prevents the really needed culture and mindset change.

With a little bit of creativity, the already present organisational structure can easily be mapped onto the structure suggested by many scaling frameworks. Most frameworks explain the behaviour needed in an Agile environment, but these explanations are often ignored or misinterpreted. Due to (lengthy) descriptions of roles and responsibilities, people tend to stop thinking for themselves about what would work best and start to focus on who plays which role and what is someone else's responsibility. There is a tendency to focus on the ceremonies rather than on the value that should be delivered by the team(s) with regard to the product or service.

My take on adopting Agile would be to start simple. Use an Agile framework that prescribes very little, like Scrum or Kanban, in order to provoke learning and experiencing. From this learning and experiencing will come changes in the organisational structure to best support the Agile Values and Principles. People will find or create positions where their added value has most impact on the value that the organisation creates and, when needed, will dismantle positions and structure that prevent this value to be created.

Another effect of starting simple is that people will not feel limited by rules and regulations, and can therefore use their creativity, experience and capabilities more easily. Oftentimes, more energy is created by fewer rules.

As others have said as well, some products or value are difficult to create with simple systems. As observed by Dave Snowden and captured in his Cynefin framework, too much simplicity can result in chaos when applied to complex systems. To create value in more complex systems, use the smallest set of tools provided by the scaling frameworks needed to prevent chaos, and leverage the benefits that simpler systems provide. Solutions to problems in complex systems are best found by experiencing the complexity and discovering what works best to cope with it. Trying to prevent problems from popping up might paralyse an organisation too much to put out the most possible value.

So: Focus on delivering value in short cycles, adapt when needed and add the least amount of tools and/or process to optimise communication and value delivery.

Demonstrations in Distributed Teams

The demonstration needs to work for everyone, no matter where in the world you are.


Demonstrations are an important tool for teams to gather feedback to shape the value they deliver. Demonstrations provide a platform for the team to show the stories that have been completed so the stakeholders can interact with the solution. The feedback a team receives not only ensures that the solution delivered meets the needs, but also generates new insights and lets the team know they are on track. Demonstrations should provide value to everyone involved. Given the breadth of participation in a demo, the chance that the meeting will be distributed is even higher. Techniques that support distributed demonstrations include:

  1. More written documentation: Teams, especially long-established teams, often develop shorthand expressions that convey meaning within the team but fall short before a broader audience. Written communication can be more effective at conveying meaning where body language can't be read and eye contact can't be made. Publish an agenda to guide the meeting; this will help everyone stay on track or get back on track when the phone line drops. Capture comments and ideas on paper where everyone can see them. If using flip charts, use webcams to share the written notes. Some collaboration tools provide a notepad feature that stays resident on the screen and can be used to capture notes that can be referenced by all sites.
  2. Prepare and practice the demo. The risk that something will go wrong with the logistics of the meeting increase exponentially with the number of sites involved.  Have a plan for the demo and then practice the plan to reduce the risk that you have not forgotten something.  Practice will not eliminate all risk of an unforeseen problem, but it will help.
  3. Replicate the demo in multiple locations. In scenarios with multiple locations with large or important stakeholder populations, consider running separate demonstrations.  Separate demonstrations will lose some of the interaction between sites and add some overhead but will reduce the logistical complications.
  4. Record the demo. Some sites may not be able to participate in the demo live due to their time zones or other limitations. Recording the demo lets stakeholders that could not participate in the live demo hear and see what happened and provide feedback, albeit asynchronously.  Recording the demo will also give the team the ability to use the recording as documentation and reference material, which I strongly recommend.
  5. Check the network(s)! Bandwidth is generally not your friend. Make sure the network at each location can support the tools you are going to use (video, audio or other collaboration tools) and then have a fallback plan. Fallback plans should be as low tech as practical. One team I observed actually had to fall back to scribes in two locations who kept notes on flip charts by mirroring each other (cell phones, Bluetooth headphones and whispering were employed) when the audio service they were using went down.

Demonstrations typically involve stakeholders, management and others. The team needs feedback, but also needs to ensure a successful demo to maintain credibility within the organization. In order to get the most effective feedback in a demo, everyone needs to be able to hear, see and get involved. Distributed demos need to focus on facilitating interaction more than in-person demos do; otherwise, distributed demos risk being ineffective.


Categories: Process Management

The Myths of Business Model Innovation

Business model innovation has a couple of myths.

One myth is that business model innovation takes big thinking.  Another myth about business model innovation is that technology is the answer.

In the book, The Business Model Navigator, Oliver Gassman, Karolin Frankenberger, and Michaela Csik share a couple of myths that need busting so that more people can actually achieve business model innovation.

The "Think Big" Myth

Business model innovation does not need to be “big bang.”  It can be incremental.  Incremental changes can create more options and more opportunities for serendipity.

Via The Business Model Navigator:

“'Business model innovations are always radical and new to the world.' Most people associate new business models with the giant leaps taken by Internet companies. The fact is that business model innovation, in the same way as product innovation, can be incremental. For instance, Netflix's business model innovation of mailing DVDs to customers was undoubtedly incremental and yet brought great success to the company. The Internet opened up new avenues for Netflix that allowed the company to steadily evolve into an online streaming service provider.”

The Technology Myth

It’s not technology for technology’s sake.  It’s applying technology to revolutionize a business that creates the business model innovation.

Via The Business Model Navigator:

“'Every business model innovation is based on a fascinating new technology that inspires new products.'  The fact is that while new technologies can indeed drive new business models, they are often generic in nature.  Where creativity comes in is in applying them to revolutionize a business.  It is the business application and the specific use of the technology which makes the difference.  Technology for technology's sake is the number one flop factor in innovation projects.  The truly revolutionary act is that of uncovering the economic potential of a new technology.”

If you want to get started with business model innovation, don’t just go for the home run.

You Might Also Like

Cloud Changes the Game from Deployment to Adoption

Cognizant on the Next Generation Enterprise

Drive Digital Transformation by Re-Imagining Operations

Drive Digital Transformation by Re-envisioning Your Customer Experience

The Future of Jobs

Categories: Architecture, Programming

A final farewell to ClientLogin, OAuth 1.0 (3LO), AuthSub, and OpenID 2.0

Google Code Blog - Tue, 04/21/2015 - 17:24

Posted by William Denniss, Product Manager, Identity and Authentication

Support for ClientLogin, OAuth 1.0 (3LO¹), AuthSub, and OpenID 2.0 has ended, and the shutdown process has begun. Clients attempting to use these services will begin to fail and must be migrated to OAuth 2.0 or OpenID Connect immediately.

To migrate a sign-in system, the easiest path is to use the Google Sign-in SDKs (see the migration documentation). Google Sign-in is built on top of our standards-based OAuth 2.0 and OpenID Connect infrastructure and provides a single interface for authentication and authorization flows on Web, Android and iOS. To migrate server API use, we recommend using one of our OAuth 2.0 client libraries.

We are moving away from legacy authentication protocols, focusing our support on OpenID Connect and OAuth 2.0. These modern open standards enhance the security of Google accounts, and are generally easier for developers to integrate with.

¹ 3LO stands for 3-legged OAuth, where there's an end-user that provides consent. In contrast, 2-legged OAuth (2LO) corresponds to enterprise authorization scenarios, such as organization-wide policies controlling access. Both OAuth1 3LO and 2LO flows are deprecated, but this announcement is specific to OAuth1 3LO.

Categories: Programming

How to Avoid the "Yesterday's Weather" Estimating Problem

Herding Cats - Glen Alleman - Tue, 04/21/2015 - 15:49

One suggestion from the #NoEstimates community is the use of empirical data of past performance. This is many times called yesterday's weather. First, let's make sure we're not just using the averages from yesterday's weather. Even adding the variance to that small sample of past performance can lead to very naive outcomes.

We need to do some actual statistics on that time series. A simple set of R commands will produce the chart below from the time series of past performance data.
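
A minimal sketch of what such commands might look like, using the forecast package on an invented weekly throughput series (the numbers are illustrative only):

library(forecast)

# Past performance: features completed per week (invented data).
throughput <- ts(c(7, 9, 6, 8, 11, 7, 9, 10, 8, 12, 9, 11))

fit <- auto.arima(throughput)
plot(forecast(fit, h = 8))   # forecast the next 8 weeks with prediction intervals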

[Figure: forecast from the time series of past performance]

 But that doesn't really help without some more work.

  • Is the future really like the past? Are the work products and the actual work performed in the past replicated in the future? If so, this sounds like a simple project: just turn out features that all look alike.
  • Are there any interdependencies that grow in complexity as the project moves forward? This is the integration and test problem, then the system-of-systems integration and test problem. Again, simple projects don't usually have this problem. More complex projects do.
  • What about those pesky emerging requirements? This is a favorite idea of agile (and correctly so), but simple past performance is not going to forecast the needed performance in the presence of emerging requirements.
  • Then there are all the externalities of project work. Where are those captured in the sample of past performance?
  • All big projects have little projects inside them is a common phrase. Except that the collection of little projects needs to be integrated, tuned, tested, verified, and validated, so that when all the parts are assembled they actually do what the customer wants.

Getting Out of the Yesterday's Weather Dilemma

Let's use the chart below to speak about some sources of estimating NOT based on simple small samples of yesterday's weather. This is a Master Plan for a non-trivial project to integrate a half dozen or so legacy enterprise systems with a new health insurance ERP system for an integrated payer/provider solution:

[Figure: Capabilities Flow across the Master Plan]

  • Reference Class Forecasting for each class of work product.
    • As the project moves left to right in time, the classes of product and the related work likely change.
    • Reference classes for each of these movements through increasing maturity, and increasing complexity from integration interactions, need to be used to estimate not only the current work but also the next round of work.
    • In the chart above, work on the left is planned with some level of confidence, because it's work in hand. Work on the right is in the future, so a coarser estimate is all that is needed for the moment.
    • This is a planning package notion used in space and defense. Only plan in detail what you understand in detail.
  • Interdependencies Modeling in MCS
    • On any non-trivial project there are interdependencies
    • The notion of INVEST needs to be tested 
      • Independent - not usually the case on enterprise projects
      • Negotiable - usually not, since the ERP system provides the core capability to do business. It would be illogical to have half the procurement system: we can issue purchase orders and receive goods, but we can't pay for them until we get the Accounts Payable system. We need both at the same time. Not negotiable.
      • Valuable - yep, why are we doing this if it's not valuable to the business? This is a strawman used by low business maturity projects.
      • Estimable - to a good approximation is what the advice tells us. The term good needs a unit of measure.
      • Small - is a domain-dependent measure. Small to an enterprise IT project may be huge to a sole-contributor game developer.
      • Testable - Yep, and verifiable, and validatable, and secure, and robust, and fault tolerant, and meets all performance requirements.
  • Margin - protects dates, cost, and technical performance from irreducible uncertainty. By irreducible we mean nothing can be done about the uncertainties. It's not the lack of knowledge found in reducible, epistemic uncertainty. Irreducible uncertainty is aleatory: it's the natural randomness in the underlying processes that creates the uncertainty. When we are estimating in the presence of aleatory uncertainty, we must account for it. This is why using the average of a time series for making a decision about possible future outcomes will always lead to disappointment.
    • First, we should always use the Most Likely value of the time series, not the Average of the time series.
    • The Most Likely value - the Mode - is the number that occurs most often of all the possible values that have occurred in the past. This should make complete sense when we ask what value will appear next: the value that has appeared most often in the past.
    • The Average of the two numbers 1 and 99 is 50. The average of the two numbers 49 and 51 is also 50. Be careful with averages in the absence of knowing the variance; the short R sketch after this list makes the point concrete.
  • Risk retirement - epistemic uncertainty creates risks that can be retired. This means spending money and time. So when we're looking at past performance in an attempt to estimate future performance (yesterday's weather), we must determine what kind of uncertainties there are in the future and what kind of uncertainties we encountered in the past.
    • Were they, and are they, reducible or irreducible?
    • Did the performance in the past contain irreducible uncertainties, baked into the numbers that we did not recognize? 
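
To make the Most Likely versus Average point above concrete, here is a short R sketch using an invented sample of past task durations:

# Mode vs. mean of a sample of past task durations, in days (invented data).
durations <- c(3, 4, 4, 4, 5, 5, 6, 9, 14)   # the usual long right tail

mean(durations)                               # average: 6 days

# R has no built-in mode for data; count occurrences and take the most frequent.
as.numeric(names(which.max(table(durations))))   # Most Likely value: 4 days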

This brings up a critical issue with all estimates. Did the numbers produced from past performance meet the expected values, or were they just the numbers we observed? This notion of taking the observed numbers and using them to forecast the future is an Open Loop control system. What SHOULD the numbers have been to meet our goals? What SHOULD the goal have been? Without knowing that, there is no baseline to compare the past performance against to see if it will be able to meet the future goal.

I'll say this again - THIS IS OPEN LOOP control, NOT CLOSED LOOP. No amount of dancing around will get over this; it's a simple control systems principle, found here: Open and Closed Loop Project Controls.

  • Measures of physical percent complete to forecast future performance with cost, schedule, and technical performance measures - once we have the notion of Closed Loop Control, have constructed a steering target, and can capture actuals against plan, we need to define measures that are meaningful to the decision makers. Agile does a good job of forcing working product to appear often. The assessment of Physical Percent Complete, though, needs to define what that working software is supposed to do in support of the business plan.
  • Measures of Effectiveness - one very good measure is Effectiveness. Does the software provide an effective solution to the problem? This begs the questions: what is the problem, and what does an effective solution look like were it to show up?
    • MOE's are operational measures of success that are closely related to the achievements of the mission or operational objectives evaluated in the operational environment, under a specific set of conditions.
  • Measures of Performance - the companion of Measures of Effectiveness.
    • MOP's characterize physical or functional attributes relating to the system operation, measured or estimated under specific conditions.
  • Along with these two measures are Technical Performance Measures
    • TPM's are attributes that determine how well a system or system element is satisfying or expected to satisfy a technical requirement or goal.
  • And finally there are Key Performance Parameters
    • KPPs represent the capabilities and characteristics so significant that failure to meet them can be cause for reevaluation, reassessment, or termination of the program.

The connections between these measures are shown below.

[Figure: connections between MOEs, MOPs, TPMs, and KPPs]

With these measures and statistical tools for making estimates of the future - forecasts - we can use yesterday's weather, tomorrow's models and related reference classes, and the desired MOEs, MOPs, KPPs, and TPMs to construct a credible estimate of what needs to happen, then measure what is happening, close the loop with an error signal, and take corrective action to stay on track toward our goal.

This all sounds simple in principle, but in practice of course it's not. It's hard work. But when you assess the value at risk to be outside the tolerance range - where the customer is unwilling to risk their investment - we need tools and processes to actually control the project.

Related articles: Hope is not a Strategy | Incremental Delivery of Features May Not Be Desirable
Categories: Project Management

Doing the Math

Herding Cats - Glen Alleman - Tue, 04/21/2015 - 15:09

In the business of building software intensive systems, estimating, performance forecasting and management, and closed loop control in the presence of uncertainty for all variables are the foundation needed for increasing the probability of success.

This means math is involved in planning, estimating, measuring,  analysis, and corrective actions to Keep the Program Green.

When we have past performance data, here's one approach...

And the details of the math are in the conference paper.

Related articles: Hope is not a Strategy | How to Avoid the "Yesterday's Weather" Estimating Problem | Critical Success Factors of IT Forecasting
Categories: Project Management

Thinking About Estimation

I have an article up on agileconnection.com. It’s called How Do Your Estimates Provide Value?

I’ve said before that We Need Planning; Do We Need Estimation? Sometimes we need estimates. Sometimes we don’t. That’s why I wrote Predicting the Unpredictable: Pragmatic Approaches for Estimating Cost or Schedule.

I’m not judging your estimates. I want you to consider how you use estimates.

BTW, if you have an article you would like to write for agileconnection.com, email it to me. I would love to provide you a place for your agile writing.

Categories: Project Management

ScrumMaster – Full Time or Not?

Mike Cohn's Blog - Tue, 04/21/2015 - 15:00

The following was originally published in Mike Cohn's monthly newsletter. If you like what you're reading, sign up to have this content delivered to your inbox weeks before it's posted on the blog, here.

I’ve been in some debates recently about whether the ScrumMaster should be full time. Many of the debates have been frustrating because they devolved into whether a team was better off with a full-time ScrumMaster or not.

I’ll be very clear on the issue: Of course, absolutely, positively, no doubt about it a team is better off with a full-time ScrumMaster.

But, a team is also better off with a full-time, 100 percent dedicated barista. Yes, that’s right: Your team would be more productive, quality would be higher, and you’d have more satisfied customers, if you had a full-time barista on your team.

What would a full-time barista do? Most of the time, the barista would probably just sit there waiting for someone to need coffee. But whenever someone was thirsty or under-caffeinated, the barista could spring into action.

The barista could probably track metrics to predict what time of day team members were most likely to want drinks, and have their drinks prepared for them in advance.

Is all this economically justified? I doubt it. But I am 100 percent sure a team would be more productive if they didn't have to pour their own coffee. Is a team more productive when it has a full-time ScrumMaster? Absolutely. Is it always economically justified? No.

What I found baffling while debating this issue was that teams who could not justify a full-time ScrumMaster were not really being left a viable Scrum option. Those taking the “100 percent or nothing” approach were saying that if you don’t have a dedicated ScrumMaster, don’t do Scrum. That’s wrong.

A dedicated ScrumMaster is great, but it is not economically justifiable in all cases. When it’s not, that should not rule out the use of Scrum.

And a note: I am not saying that one of the duties of the ScrumMaster is to fetch coffee for the team. It’s just an exaggeration of a role that would make any team more productive.