Software Development Blogs: Programming, Software Testing, Agile, Project Management


Feed aggregator

Re-Read Saturday: Commitment – Novel about Managing Project Risk, Part 7

Picture of the book cover


Today we conclude our read of Commitment – Novel about Managing Project Risk with a few highlights.   

The novel Commitment presents Rose’s evolution from an adjunct of a traditional project manager into an Agile leader. Rose is ripped out of her safe place and presented with an adventure she is reluctant to take. The project she is thrust into leading is failing, and Rose can either take the fall or change. Real options and Agile techniques are introduced as a path forward for both Rose and the team. In the novel, Agile concepts such as self-organization are at odds with how things are done. When a change is introduced that clashes with how we do things, it generates cognitive dissonance. Coaching and mentoring are methods for sorting out the problems caused when dissonance disrupts an organization.

One of the hardest changes Rose has to address during the novel is that the job is not really done until the work is delivered to the customer. And as we find out later in the story, the job is not done until the customer uses what has been delivered. Many software projects fall prey to this problem because developers and testers are incentivized to complete their portion of the work so they can begin the next project. In many cases, as soon as work is thrown over the wall it disappears from short-term memory. Throwing work over the wall breaks or delays the feedback cycle, which makes rework more costly. In the novel we see this problem occur twice: once between development and testing, and later between the whole team and the customer, who was afraid to implement the code every two weeks. Completing work and generating feedback are critical to making decisions.

The novel’s explanation of staff liquidity was excellent. The process of staff liquidity begins by allocating the people with the fewest options (the fewest things they can do, or the most specialized) to work. In self-managing teams, this requires that team members have a good deal of team and self-knowledge (see the Johari Window). Those with more options fill in the gaps in capability after the first wave of allocation and are available to react when things happen. Allocating personnel with the most options last provides the team with the most flexibility. It should be noted that a large amount of experience does not necessarily translate to options. Someone with expertise and experience in only one capability (for example, a senior person who can only test) has very few options, and therefore has to be allocated early. Steven Adams connects staff liquidity to T-shaped people. T-shaped people have a depth of expertise in one or a few areas but shallower expertise outside their specialty. A T-shaped person enjoys learning and will have a good handle on their learning lead time. A team of T-shaped people, combined with staff liquidity, increases the number of options a team has to deal with problems and changes as they are recognized.

In the epilogue of Commitment – Novel about Managing Project Risk, everyone lives happily ever after. At the end of the novel, I am left with both a better handle on a number of Agile and lean techniques and, perhaps more importantly, the need to see the options available so that we can discern the difference between making a commitment and actually having choices. In the end, options allow us to maximize the value we deliver as we navigate a world full of changing context.

Thanks to Steven Adams who recommended Commitment.  Steven re-read the book and provided great comments week in and week out (Steven’s blog). His comments filled in gaps and drew my eye to ideas that I had not put together.  

Next week we begin the re-read of Kent Beck’s XP Explained, Second Edition.

Previous Installments:

Part 1 (Chapters 1 and 2)

Part 2 (Chapter 3)

Part 3 (Chapter 4)

Part 4 (Chapter 5)

Part 5 (Chapter 6)

Part 6 (Chapter 7)


Categories: Process Management

Security "Crypto" provider deprecated in Android N

Android Developers Blog - Fri, 06/10/2016 - 20:10

Posted by Sergio Giro, software engineer


If your Android app derives keys using the SHA1PRNG algorithm from the Crypto provider, you must start using a real key derivation function and possibly re-encrypt your data.

The Java Cryptography Architecture allows developers to create an instance of a class like a cipher, or a pseudo-random number generator, using calls like:

SomeClass.getInstance("SomeAlgorithm", "SomeProvider");

Or simply:

SomeClass.getInstance("SomeAlgorithm");

For instance,

SecureRandom secureRandom = SecureRandom.getInstance("SHA1PRNG", "Crypto");
On Android, we don’t recommend specifying the provider. In general, any call to the Java Cryptography Extension (JCE) APIs specifying a provider should only be done if the provider is included in the application or if the application is able to deal with a possible NoSuchProviderException.
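The provider-less form of the call looks like this in practice. This is a minimal sketch, not code from the post; the class name and the AES/GCM transformation are illustrative choices:

```java
import java.security.NoSuchAlgorithmException;
import javax.crypto.Cipher;
import javax.crypto.NoSuchPaddingException;

public class NoProviderExample {
    // Request a cipher by transformation only and let the platform pick the
    // best available provider. Because no provider is named, there is no
    // NoSuchProviderException to handle.
    static Cipher aesCipher() throws NoSuchAlgorithmException, NoSuchPaddingException {
        return Cipher.getInstance("AES/GCM/NoPadding");
    }

    public static void main(String[] args) throws Exception {
        // getAlgorithm() reports the transformation the cipher was created with.
        System.out.println(aesCipher().getAlgorithm()); // prints AES/GCM/NoPadding
    }
}
```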

Unfortunately, many apps depend on the now removed “Crypto” provider for an anti-pattern of key derivation.

This provider supplied only an implementation of the algorithm “SHA1PRNG” for instances of SecureRandom. The problem is that the SHA1PRNG algorithm is not cryptographically strong. For readers interested in the details: On statistical distance based testing of pseudo random sequences and experiments with PHP and Debian OpenSSL, Section 8.1, by Yongge Wang and Tony Nicol, states that the “random” sequence, considered in binary form, is biased towards returning 0s, and that the bias worsens depending on the seed.

As a result, in Android N we are deprecating the implementation of the SHA1PRNG algorithm and the Crypto provider altogether. We’d previously covered the issues with using SecureRandom for key derivation a few years ago in Using Cryptography to Store Credentials Safely. However, given its continued use, we will revisit it here.

A common but incorrect usage of this provider was to derive keys for encryption by using a password as a seed. The implementation of SHA1PRNG had a bug that made it deterministic if setSeed() was called before obtaining output. This bug was used to derive a key by supplying a password as a seed, and then using the "random" output bytes for the key (where “random” in this sentence means “predictable and cryptographically weak”). Such a key could then be used to encrypt and decrypt data.

In the following, we explain how to derive keys correctly, and how to decrypt data that has been encrypted using an insecure key. There’s also a full example, including a helper class to use the deprecated SHA1PRNG functionality, with the sole purpose of decrypting data that would be otherwise unavailable.

Keys can be derived in the following way:

  • If you're reading an AES key from disk, just store the actual key and don't go through this weird dance. You can get a SecretKey for AES usage from the bytes by doing:

    SecretKey key = new SecretKeySpec(keyBytes, "AES");

  • If you're using a password to derive a key, follow Nikolay Elenkov's excellent tutorial with the caveat that a good rule of thumb is the salt size should be the same size as the key output. It looks like this:
   /* User types in their password: */
   String password = "password";

   /* Store these things on disk used to derive the key later: */
   int iterationCount = 1000;
   int saltLength = 32; // bytes; should be the same size as the output (256 / 8 = 32)
   int keyLength = 256; // 256 bits for AES-256, 128 bits for AES-128, etc.
   byte[] salt; // should be of saltLength

   /* When first creating the key, obtain a salt with this: */
   SecureRandom random = new SecureRandom();
   salt = new byte[saltLength];
   random.nextBytes(salt);

   /* Use this to derive the key from the password: */
   KeySpec keySpec = new PBEKeySpec(password.toCharArray(), salt,
              iterationCount, keyLength);
   SecretKeyFactory keyFactory = SecretKeyFactory
              .getInstance("PBKDF2WithHmacSHA1");
   byte[] keyBytes = keyFactory.generateSecret(keySpec).getEncoded();
   SecretKey key = new SecretKeySpec(keyBytes, "AES");

That's it. You should not need anything else.
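To see the derived key doing real work, here is a self-contained sketch that derives a key as above and round-trips a plaintext through AES/GCM. This is illustrative code, not from the post; the class name, iteration count, and message are assumptions:

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.security.spec.KeySpec;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class DerivedKeyRoundTrip {
    // Derive a 256-bit AES key from a password and salt with PBKDF2,
    // exactly as in the snippet above.
    static SecretKey deriveKey(char[] password, byte[] salt) throws Exception {
        KeySpec keySpec = new PBEKeySpec(password, salt, 1000, 256);
        SecretKeyFactory keyFactory =
                SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        return new SecretKeySpec(
                keyFactory.generateSecret(keySpec).getEncoded(), "AES");
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[32];
        new SecureRandom().nextBytes(salt);
        SecretKey key = deriveKey("password".toCharArray(), salt);

        // Encrypt with AES/GCM; the provider generates a fresh IV, which
        // must be stored alongside the ciphertext.
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] iv = cipher.getIV();
        byte[] ciphertext = cipher.doFinal("hello".getBytes(StandardCharsets.UTF_8));

        // Decrypt with the same derived key and IV.
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] plaintext = cipher.doFinal(ciphertext);
        System.out.println(Arrays.equals(
                plaintext, "hello".getBytes(StandardCharsets.UTF_8)));
    }
}
```

Note that the same password and salt always produce the same key, which is what lets you decrypt data later without storing the key itself.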

To make transitioning data easier, we covered the case of developers that have data encrypted with an insecure key, which is derived from a password every time. You can use the helper class InsecureSHA1PRNGKeyDerivator in the example app to derive the key.

 private static SecretKey deriveKeyInsecurely(String password,
         int keySizeInBytes) {
     byte[] passwordBytes = password.getBytes(StandardCharsets.US_ASCII);
     return new SecretKeySpec(
             InsecureSHA1PRNGKeyDerivator.deriveInsecureKey(
                     passwordBytes, keySizeInBytes),
             "AES");
 }

You can then re-encrypt your data with a securely derived key as explained above, and live a happy life ever after.

Note 1: As a temporary measure to keep apps working, we decided to still create the instance for apps targeting SDK version 23 (the SDK version for Marshmallow) or lower. Please don't rely on the presence of the Crypto provider in the Android SDK; our plan is to delete it completely in the future.

Note 2: Because many parts of the system assume the existence of a SHA1PRNG algorithm, when an instance of SHA1PRNG is requested and the provider is not specified we return an instance of OpenSSLRandom, which is a strong source of random numbers derived from OpenSSL.
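In practice, the takeaway from Note 2 is that new code should just construct a SecureRandom with no algorithm or provider and let the platform supply a strong source. A small sketch (the class name is illustrative):

```java
import java.security.SecureRandom;

public class RandomExample {
    // Preferred: no algorithm name and no provider. The platform returns a
    // cryptographically strong source (OpenSSL-backed on Android N).
    static byte[] randomBytes(int n) {
        byte[] bytes = new byte[n];
        new SecureRandom().nextBytes(bytes);
        return bytes;
    }

    public static void main(String[] args) {
        System.out.println(randomBytes(16).length); // prints 16
    }
}
```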

Categories: Programming

One Week Remaining for Super-Early-Bird Registration for Product Owner and Writing Workshops

I have two online workshops starting in late August:

If you have been reading the estimations posts and are wondering, “How do I help my project deliver something every day or more often,” you should register for the workshop. We’ll discuss what your role is, how to plan for the future and the present, how to decide what features to do when, how to know when a feature is done, and tips and traps. See Practical Product Owner Workshop: Deliver What Your Customers Need for more details.

If you like the way I write (regardless of whether you agree with me), and you need to write more for your job or to help your business, take my Non-Fiction Writing Workshop: Write Non-Fiction to Enhance Your Business and Reputation. That workshop is about building a writing habit, learning the separate parts of pre-writing, writing, editing, and publishing. We’ll address your specific writing challenges, concerns, and fears. I’m the only one who will read your work, so no worries about other people seeing your writing.

Super-early-bird registration for both workshops ends June 17, 2016. I hope you decide to join us.

Categories: Project Management

How Smart is Your City Transportation?

How easy is it to get around in your city from point A to point B?

Here’s an interesting article that rounds up some of the latest ideas:

Getting Around in European capitals: How smart is your city?

I really like this—talk about impact:

Autolib’ has taken thousands of cars off the roads, brought down driving costs by 90% and is reducing pollution by millions of metric tons per year.

Dense city + mass transit creates opportunities.

According to the article, here are what some cities are doing:

  1. In London, Transport for London implemented a contactless payment system, so users can just “touch in and out” to pay. When you’re dealing with a billion commuters a year, that’s a big deal. Using the Internet of Things, developers can use the sensors across London’s transport system, along with meaningful data in the Cloud, to build better transport apps that address technical incidents and protect passengers in new ways.
  2. In Paris, the Internet of Things made it possible to create Autolib’, an electric car-sharing solution. The fleet of electric cars is managed centrally in the Cloud, allowing users to rent cars from kiosks and easily find charging stations. And users can easily find parking, too, with GPS-enabled parking.
  3. In Barcelona, they are using Internet-of-Things to improve Bicing, their bicycle sharing program. They can use sensors to monitor bicycle usage and detect issues between supply and demand. They can use that insight to distribute bikes better so that the bikes can be used in a more sustainable way. It’s smart logistics for bicycles in action.
  4. In Helsinki, they are using Internet-of-Things to get more value out of their 400 buses. By measuring acceleration, speed, engine temperature, fuel consumption, brake performance, and GPS location, they reduce fuel consumption, improve driver performance, and provide safer bus rides.

I also like how the article framed the challenge right up front by painting the scene of a common scenario where you have to stitch together various modes of transport to reach your destination:

“You just need to take Bus 2 for three stops,
then change to Bus 8 towards the City station,
walk for 10 minutes towards the docks,
then take Line 5 on the metro for 5 stops.
Then call a taxi.”

You can imagine all the opportunities to reimagine how people get around, and how inclusive the design can be (whether that means helping blind people safely find their next stop, or helping somebody from out of town navigate their way around).

Depending on how big the city is and how far out the city is spread, there is still room for Uber and Lyft to help stitch the end-to-end mass transit journey together.

And I wonder how long before Amazon’s Now drivers go from local residents who fulfill orders to becoming another ride-share option (do Uber drivers become Amazon Now drivers, or do Amazon Now drivers become Uber drivers?).

Categories: Architecture, Programming

Starting An Agile Effort: People

Broken Chinese Statues

A good team will not go to pieces!

One of the most iconic television shows of the 1980s (1983-87) was the A-Team. In the A-Team, the four team members combined their talents to right wrongs and conquer evil. In a precursor to Agile, the team was cross-functional and, in the context of the larger world, self-organizing. While the A-Team reflects Hollywood’s perception of a team, the lesson shouldn’t be lost that for most software development or maintenance efforts, teams are necessary to get things done. If teams are a necessity, then it is important to understand the attributes of an effective team. For any specific effort, the best team (or teams) is a function of team dynamics, capabilities, and the right number of people.

Team dynamics are an expression of how the team interacts with each other and those outside the team. A recent installment of Google’s “The Water Cooler” blog reported on a study done by Google on what makes a team effective at Google.  They found five critical factors:

  1. Psychological safety. Risks can be taken without causing insecurity or embarrassment.
  2. Dependability. Team members can count on each other (people say what they will do and do what they say).
  3. Structure and Clarity. The goals, roles, and execution plans are clear (and shared).
  4. The Meaning of Work. The work is important to the team members.
  5. The Impact of Work. The team believes that their work matters.

What team members believe and how they interact turned out to be the most important factors that lead to effective teams in the Google universe.  Interestingly, a New York Times article (Sunday, May 29th, 2016, Sunday Review Section p1, 4-5) written by Alain de Botton, titled “Why You Will Marry the Wrong Person” makes a similar argument about couples.  In the article, he stated:

The person who is best suited to us is not the person who shares our every taste (he or she doesn’t exist), but the person who can negotiate differences in taste intelligently — the person who is good at disagreement.

The same focus on dynamics reinforces Google’s findings that the right dynamics unlock a team’s potential. Note: This is another VERY strong argument not to mess with the structure of teams that work well, if at all possible.

Capabilities are abilities individuals have or can acquire to solve the problems presented to the team. Capabilities include knowledge of the business problem and application, and technical skill in applying tools, languages, and frameworks. One capability often not readily considered is knowledge of the lead time to learn a new skill or technique that might be needed to address a problem (knowledge options). A knowledge option is where you know just enough about a subject to understand how it would be applied and how long it will take to get up to speed when needed. Understanding both current skills and the collection of a team’s knowledge options dramatically increases the breadth of technical and business problems any specific team can address.

Individual Agile teams are typically constrained to five to nine people. More than nine tends to cause communication problems, and with fewer than five it is difficult for the team to have the capabilities needed to address most sizable software problems. Large Agile efforts typically require multiple teams (scaling Agile). Business need and budget help determine how many people (and therefore teams) will be needed.

Part of starting an Agile effort is determining who will be involved, what their capabilities are or can be, and finally a rough count of the teams that will be needed. Even if the goal of the effort does not change, the path toward that goal will evolve, which means that many of the factors originally forecast (including scope, budget, people, and capabilities needed to deliver) will also evolve. However, without some forecast of the people or teams needed, it is difficult to get any Agile effort started.

Categories: Process Management

Web Standards and Coffee with Googler Alex Danilo

Google Code Blog - Thu, 06/09/2016 - 22:14

Posted by Laurence Moroney, Developer Advocate

“Without standards, things don’t work right,” said Alex Danilo, a Googler working on the HTML5 specs, trying to help us all build a better web.

In 1999, the Mars Climate Orbiter mission failed because of a unit mismatch: one piece of software produced output in US customary units while another module expected metric units. Alex discusses many other examples of how the lack of industry standards can result in problems, such as early rail systems having different gauge widths in different states, impeding travel.

Alex works with the Web Platform Working Group, whose charter is to continue the development of the HTML language, improving client-side application development, including APIs and markup vocabularies.

He shares with us details of the upcoming HTML 5.1, a refinement of HTML 5, showing us the validator tool that makes it easier for developers to ensure that their markup meets the standards, and the Test the Web Forward initiative that helps uncover bugs and compatibility issues between browsers.

You can learn more about Google and Web development at the Web Fundamentals site.

Categories: Programming

The Case for and Against Estimates, Part 4

When we think about the discussion about estimates and #noestimates, I have one big question:

Where do you want to spend your time?

In projects, we need to decide where to spend our time. In agile and lean projects, we limit the work in progress. We prefer to spend our time delivering, not estimating. That’s because we want to be able to change what we do, as often as we can. That is not appropriate for all projects. (See When is Agile Wrong for You?)

In some projects, as in the domain that Glen Alleman has commented on in the first post in this series, it might well be worth spending enough time to generate reasonable estimates and to have an idea about how much the estimate might be off. 

In some projects, such as the gather-a-bunch-of-people-across-the-universe programs that David Gordon discusses in Part 2 of this series, you might need to work through “who will deliver what” first, and only then ask “when.”

For both of those kinds of projects (I might call them programs), the cost of people going open-loop is too high. Of course, I would do an everyone-in-the-room planning session for the first month to iron out our deliverables. (When people tell me the cost of that is too high, I remind them about the cost of not delivering.) It’s possible if people understand how to use agile and lean to deliver at least as often as once a month, we don’t need more planning sessions. (If you want to run a program in an agile and lean way, see my program management book.)

In my experience, many people work on one- or two-team projects. The organization has decided on those projects. If you use agile and lean,  you might not need to estimate, if you deliver something every day. The delivery builds trust and provides sufficient feedback and the ability to change.

Here’s the way I like to think about #noestimates:

Noestimates is not about not estimating. It’s about delivering value often enough so you don’t have to estimate. You can spend time on different activities, all leading to delivering product.

I don’t buy what some #noestimates people say, that estimation is a sign of dysfunction. I have found the estimation process useful, as I explained in part 3 of this series.

In both Glen’s and Dave’s examples, it’s quite difficult to deliver value often, especially at the beginning of a program. Sure, you might decide to limit work in progress, or work in one- or two-week iterations. But the value you can deliver? Wow, the projects are so large and dispersed, it’s quite difficult to see value that often. You might see pieces of value. One vendor produces a proof of concept. Maybe another integrates two small chunks. That’s probably not enough value for people to see the product evolve.

On the other hand, when I can use an agile and lean approach for programs, I have been successful in delivering working product across the program every day. If you have SaaS, you can do this. I have done this with the software part of the program for a software/hardware product. That was valuable for everyone on the program.

When I think in #noestimate terms, I think of showing value for the entire product.

Here’s an example from my work. I write in small chunks. Okay, these blog posts have been massive. Not what I normally do on a daily basis. Because I write in small chunks, I can make progress on several books in the same week. That’s because I only have to finish a few paragraphs and I can be done with that part.

When I develop new workshops, I often start with the simulation(s). Once I know the activities, it’s even easier to design the debriefs and the material. I might take several days to develop a simulation. I call them drafts. I can do a draft in about an hour. The draft has value because I can ask people to review it. It’s a small deliverable.

In general, I timebox my work to finish something valuable in an hour. That’s because I make my deliverable small enough to show value in an hour. That’s the same idea as having a size “one” story. For you, a 1 might be a morning, but it’s probably not an entire day.

Back when I wrote code for a living, I was not good enough to deliver in  hour-long chunks. Maybe if I’d used TDD, I could have. I found estimation helpful. That’s why I worked in inch-pebbles. I could still show value, and it might not be several times a day. It was always at least every other day.

When I was a tester, I was good enough to write very small code chunks to test the product. That’s when I realized I’d been working in too-large chunks as a developer. When I was a manager, I tried to timebox all meetings to 50 or 55 minutes. I didn’t always succeed, but I was successful more often than not. Some meetings, such as one-on-ones, I timeboxed to 20 minutes.

In my work, I want to show value as early and as often as possible. When I work with teams and managers, that’s part of my work with them. Not because delivering something in an hour is the goal, but because the faster you deliver something, the more value you show and the faster you can get feedback to know if you are on the right track.

I have found it helpful to create an order of magnitude estimate for a project, so we all understand the general size/cost/duration of the project. Then, I start to deliver. Or, if I’m leading the team in some way, the team delivers.

The smaller the deliverable, the more often  you can get feedback and show value. I have seen these benefits from working this way:

  • The project ended earlier than we expected. That’s because we delivered “enough” to satisfy the PO/customer. (I’ve seen this many times, not just a couple of times.) If we had spent more time generating a detailed estimate, we would not have delivered as quickly.
  • We learned enough about the deliverables that we were able to do more discovery (as Ellen Gottesdiener says) as we delivered. We made the whole requirements process continuous and just in time. We did not require hour-long pre-sprint planning meetings. (Okay, that’s only happened twice. I have hope for more experiences like this.)
  • We were able to fix things before they got too large. We’d started in one direction on a feature set, realized we were headed in the wrong direction and replanned what to do and how to do it.

To me, the idea of #noestimates is tied up in small chunks of value.

#Noestimates is not for everyone. Just as detailed estimation is not for everyone. Think about what is right for your context: your team, your project, and yes, your management.

The posts to now:

  • Part 1 talked about targets and order of magnitude estimates.
  • Part 2 discussed when estimates are not that helpful. I did not include bad management in this post. Managers who treat estimates as commitments are not helping anyone.
  • Part 3 is about when estimates are helpful.
  • Part 4, this post, is about #noestimates.

I’ll do a summary post, including a pointer to the original article in part 5.

Categories: Project Management

The Case for and Against Estimates, Part 3

In Part 1, I discussed order-of-magnitude estimates and targets. In part 2, I said how estimates can be misused. In this part, I’ll discuss when estimation is useful. Here are several possibilities:

  • How big is this problem that we are trying to solve?
  • Where are the risks in this problem?
  • Is there something we can do to manage the risk and explain more about what we need to do?
Estimates can be useful when they help people understand the magnitude of the problem.

One of my previous Practical Product Owner students said, “We use story size to know when to swarm or mob on a story.” People tackle stories up to 5. (They use Fibonacci series for story size.) They might pair or swarm on stories starting at size 8. Even if they have a 21 (or larger) size story, they swarm on it and finish it in a couple of days, as opposed to splitting the story.

They use estimates to understand the size and complexity of the feature. (I would call their features “feature-sets,” but they like to call that one big thing a feature.)

You might not like that approach. I think it’s a fine way of not fighting with the PO to split stories. It’s also helpful to work together to solve a problem. Working together spreads knowledge throughout the team, as a team.

My experience with estimation is that it’s easy for me to not understand the magnitude of the work. We manage this problem in agile/lean by estimating together, or working together, or with timeboxing in some way.

The first time we solve a particular problem, it takes longer. The first time I worked on a control system (embedded software), I had to learn how things worked together. Where did the software interact with the hardware? What were the general risks with this kind of a product? The first time I self-published a book, everything took longer. What were the steps I needed to finish, in what order?

I worked on many control systems as a developer. Once I understood the general risks, my estimates were better. They were not sufficiently accurate until I applied the rules of deliverable-based planning. What deliverables did I need to deliver? (I delivered something at least once a week, even if it was data from what I now know is a spike.) What inch-pebbles did I need to create that deliverable?

The more I broke the work down into deliverables, the better the estimate was. The smaller the chunks, the better my estimate was. The more I broke the problem down, the more I understood what I had to do and what the risks were.

One of the things I like about agile and lean is the insistence on small chunks of value. The smaller my chunk is, the more accurate my estimate is.

Estimates can help people understand risks.

You’ll notice I talked a lot about risks in the above section. There are general project risks, such as what is driving the project? (See Manage It! or Predicting the Unpredictable, or a series I wrote a few years ago, Estimating the Unknown.) We optimize different work when we know what is driving the project. That’s the project view.

We have possible risks in many deliverables. There are the general risks: people get sick, they need to talk to the duck, they multitask. But each deliverable has its own risks.

I’ve said before software is learning, innovation. You may have done something like this project before, so you have domain expertise. But, you have probably not done this new thing here.

When I estimate, I start thinking about what I need to do, how to solve this problem. Then, I start thinking about the problems I might encounter in solving those problems.

I can’t get to the problems unless I have inch-pebbles. I am a big-picture person. I see the whole problem, possibly even the whole solution, and I skip some of the steps in my head. I estimate top-down as a start. Unless I create my inch-pebbles, I am likely to gloss over some of the risks because I start top-down.

You might not be like me. You might estimate bottom-up. You might see all the details. You might not miss any steps in solving the problem as you think about it. (I wonder about people like you: do you see the big picture at the beginning, or does it evolve for you?)

I have met some people who estimate inside out. They tell me they see part of the big picture and part of the small steps. They iterate on both parts until they see and can estimate the whole thing.

I have taught a number of estimation workshops. Most of my participants are top-down people. They see the result they want and then envision the steps to get there. I have met some small number who start bottom up. I have met two people who are inside-out. I don’t know if that’s a normal distribution, or just the people who participate in my workshops.

Estimates can help people understand possible first steps.

When people think about the first thing that can provide value, and they think about how to make that first thing small (either inch-pebbles or agile stories), they can more easily see what the first deliverable could be. They can discuss the deliverable progression (in agile with a product owner and in a more traditional life cycle with a project manager or a product manager).

I have found the discussion of deliverable progression very helpful. Many years ago, I was the lead developer for a gauge inspection system (machine vision on an assembly line). I asked the customer what he wanted to see first. “Can you see the gauge enough to give us some kind of an answer as to whether it’s a good gauge?” was his answer.

Notice he said “enough,” not “a perfect inspection.” We did a proof of concept in a couple of days. In the lab, with the right lighting, we had an algorithm that worked well enough. You might think of this as a discovery project. Based on that deliverable, we got the contract for the rest of the project. If I remember correctly, it took us close to 6 months to deliver a final system.

For that project, I acted as a cross between a project manager and what we now call a product owner. We had release criteria for the project, so I knew where we were headed. I worked with the customer to define deliverables every two weeks, after demoing what we had finished. (This was staged delivery, not agile. We worked in week-long timeboxes, with demos to the customer every two weeks.)

This was in the days before we had project scheduling software. I drew PERT diagrams for the customer, showing date ranges and expected deliverables.

A few years ago, I coached a project manager. She was the Queen of the Gantt. She could make the Gantt chart do anything. I was in awe of her.

However, her projects were always late, by many months. She would work with a team. They would think it was a six-month project. She would put tasks into the Gantt that were two, three, and four weeks long. That’s when I understood the problem of the estimation unit: “If you measure in weeks, you’ll be off by weeks.” Her people were top-down thinkers, as I am. They glossed over some of the steps they needed to make the product work.
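The estimation-unit problem can be sketched numerically. All of the task durations below are invented; the point is only that estimating in a coarse unit produces errors on the scale of that unit.

```javascript
// The same set of tasks, estimated in days vs. rounded up to whole weeks.
const taskDays = [3, 8, 2, 12, 6, 4, 9]; // hypothetical task durations in days

// Day-level estimate: just the sum.
const dayTotal = taskDays.reduce((sum, d) => sum + d, 0);

// Week-level estimate: everything smaller than a week disappears,
// because each task gets rounded up to whole 5-day weeks.
const weekTotal = taskDays
  .map(d => Math.ceil(d / 5) * 5)
  .reduce((sum, d) => sum + d, 0);

console.log(`day-level estimate:  ${dayTotal} days`);    // 44 days
console.log(`week-level estimate: ${weekTotal} days`);   // 60 days
console.log(`difference: ${weekTotal - dayTotal} days`); // off by weeks
```

The error can point the other way, too: week-sized tasks invite people to gloss over the day-sized steps inside them, which is how her six-month plans became year-long projects.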

I explained how to do deliverable-based planning with yellow stickies. The people could generate their tasks and see their intersections and what they had to deliver. She and the team realized they didn’t have a 6-month project. They had a project of at least a year, and that was if the requirements didn’t change.

When they started thinking about estimating the bits, as opposed to a gross estimate and applying a fudge factor, they realized they had to spend much more time on estimating and that their estimate would be useful. For them, the estimation time was the exploration time. (Yes, I had suggested they do spikes instead. They didn’t like that idea. Every project has its own culture.)

How do your estimates help you?

Maybe your estimates help you in some specific way that I haven’t mentioned. If so, great.

The problem I have with using estimates is that they are quite difficult to get right. See Pawel Brodzinski’s post, Estimation? It’s Complicated…

In Predicting the Unpredictable, I have a chart of how my estimates work. See the Power Law Distribution: Example for Estimation. (In that book, I also have plenty of advice about how to get reasonable estimates and what to do if your estimate is wrong.)

In my measurements with my clients and over time, I no longer buy the cone of estimation. I can’t make it work for agile or incremental approaches. In my experience, my estimates are either off by hundreds of percent, or I am very close. We discover how much I am off when the customer sees the first deliverable. (In Leprechauns of Software Engineering, also on Leanpub, Bossavit says the cone of estimation was never correct.)

For me, deliverables are key to understanding the estimate. If, by estimating, you learn about more deliverables, maybe your estimation time is useful.

Since I use agile and lean, estimating time for me is not necessarily useful. It’s much more important to get a ranked backlog, learn if I have constraints on the project, and deliver. When I deliver, we discover: changes in requirements, that we have done enough, something. My delivery incorporates feedback. The more feedback I get, the more I can help my customer understand what is actually required and what is not required. (I often work on projects where requirements change. Maybe you don’t.)

I realized that I need a part 4 that specifically talks about #noestimates and how you decide if you want to use it. Enough for this post.

Categories: Project Management

Surface new proximity-based experiences to users with Nearby

Google Code Blog - Thu, 06/09/2016 - 17:16

Posted by Akshay Kannan, Product Manager

Today we're launching Nearby on Android, a new surface for users to discover and interact with the things around them. This extends the Nearby APIs we launched last year, which make it easy to discover and communicate with other nearby devices and beacons. Earlier this year, we also started experimenting with Physical Web beacons in Chrome for Android. With Nearby, we’re taking this a step further.

Imagine pulling up a barcode scanner when you’re at the store, or discovering an audio tour while you’re exploring a museum–these are the sorts of experiences that Nearby can enable. To make this possible, we're allowing developers to associate their mobile app or a website with a beacon.

A number of developers have already been building compelling proximity-based experiences, using beacons and Nearby:

Getting started is simple. First, get some Eddystone beacons; you can order these from any of our Eddystone-certified manufacturers. Android devices and other BLE-equipped smart devices can also be configured to broadcast in the Eddystone format.

Second, configure your beacon to point to your desired experience. This can be a mobile web page using the Physical Web, or you can link directly to an experience in your app. For users who don’t have your app, you can either provide a mobile web fallback or request a direct app install.

Nearby has started rolling out to users as part of the upcoming Google Play Services release and will work on Android devices running 4.4 (KitKat) and above. Check out our developer documentation to get started. To learn more about Nearby Notifications in Android, also check out our I/O 2016 session, starting at 17:10.

Categories: Programming

Starting An Agile Effort

Looking at the map to starting an Agile effort?

What is needed to start an Agile project?  There are a number of requirements for beginning an Agile effort.  Those requirements typically include a big-picture understanding of the business need, a budget, resources, and a team.  Somewhere in that mess, someone needs to understand if there are any unchangeable constraints. A high-level view of the five categories of requirements for starting an Agile effort is:

  1. Business Need – All efforts need to begin with a goal firmly in mind.   While the absolute detail of how to achieve that goal may be discovered along the way, it is unconscionable to begin without firmly understanding the goal. Understanding the goal of the effort is the single most important requirement for starting any effort. Storytelling is a tool to develop and share the effort’s goal.
  2. Budget – In almost every organization, spending money, effort, or time (the calendar kind) on an effort means that something else does not get funded. Very few efforts are granted the luxury of unlimited funds (money or effort). All efforts require a budget.  Budgeting begins at the portfolio level, where decisions are made about which pieces of work will be addressed; the budget then flows downward to programs or release trains, where it is divided into finite pots of money.  The trickle-down typically ends at a team’s doorstep with a note saying that they have this much “money” to address a business need. The term money is used loosely, as effort is often the currency in many organizations.  If the business need or goal is the most important requirement, then having a budget is a close second.
  3. Resources – In corporate environments, resources generally include hardware, software, network resources, and physical plant (a place to work).  People are not resources. In the late 90s, I participated in large bank-merger projects.  In one of those projects, the resources that had to be planned for included renting a floor in a building (and lots of desks, chairs, phones, and stuff) and funding a route on an airline for the length of the project.
  4. People – People, often organized in teams, get the work accomplished.  Individual Agile teams should be cross-functional, self-organized and self-managed. A good team is a mixture of behaviors, capabilities and the right number of bodies. People are the third most important requirement for beginning an Agile effort.
  5. Constraints – Understanding the hard constraints any effort has to operate within is important.  Some efforts are in response to legal mandates (income tax changes, for example) or have to fit within specific hardware footprints (embedded code, for example).  Constraints are often the impetus for innovative solutions if they are known and anticipated. Note: constraints, like risks, can evolve, so they need to be revisited as an effort progresses.

There is a hierarchy of requirements.  An effort needs a goal. A goal is needed to acquire a budget. A budget is needed to acquire a team and resources. Constraints are a wildcard that can shape all of the other requirements.  Understanding and ensuring that the effort’s requirements are addressed is what is necessary for starting an Agile effort.

The next few blog entries will explore each category in greater detail.

Categories: Process Management

The Case for and Against Estimates, Part 2

In the first part of this series, I said I liked order-of-magnitude estimates. I also like targets in lieu of estimates. I’ll say more about how estimates can be useful in part 3.

In this part, I’ll discuss when I don’t like estimates.

I find estimates not useful under these conditions:

  • When the people estimating are not the people doing the work.
  • When managers use old estimates for valuing the work in the project portfolio.
  • When management wants a single date instead of a date range or confidence level.

There are more possibilities for using estimates in not-so-hot ways. These are my “favorite” examples.

Let me take each of these in turn and explain how agile specifically helps these. That’s not because I think agile is the One and Only Answer. No, it’s because of the #noestimates discussion. I have used #noestimates in a staged-delivery project and on agile projects. I have not been able to do so on iterative or serial (waterfall/phase gate) projects. Of course, with my inch-pebble philosophy, I have almost always turned an iterative or serial project into some sort of incremental project.

People Estimate on Behalf of the Project Team

We each have some form of estimation bias. I have a pretty good idea of what it takes me to finish my work. When I pair with people, sometimes it takes longer as we learn how to work with each other. Sometimes, it takes much less time than we expected. I expect a superior product when I pair, and I don’t always know how long it will take us to deliver that product. (I pair-write with any number of people during the course of the year.) Even with that lack of knowledge, we can pair for a short time and project to a reasonable estimate. (Do a little work and then re-estimate.)

When people who are not part of the project team estimate on behalf of other people, they don’t know at least these things: what it will take the real project team to deliver, how the people will work together, and how/if/when the requirements will change. I have my estimation bias. You have yours. We might learn to agree if we work together. But, if we are “experts” of some sort, we don’t know what the team will encounter and how they will handle it.

I too often see experts ignore requirements risks and the potential for requirements changes. I don’t trust these kinds of software estimates.

Now, when you talk to me about construction, I might answer that we know more about construction. We have dollars per sq. foot for houses. We have dollars per road mile for roads. And, I live in Boston, the home of the Big Dig. Every time we remodeled/rebuilt our house, it came in at just 10% over the original number. We worked hard with the builder to manage that cost.

Those projects, including the Big Dig, were worth it.

How do we make software projects worth it? By delivering value as often as possible and asking these questions:

  • Is there still risk to manage?
  • Is there more value in the backlog?
  • How much more do we want to invest?

Software is not a hard product. It is infinitely malleable. What we deliver on Monday can change the estimate for what we want to deliver on Tuesday, Wednesday and Thursday. We can’t do that with hard products.

When other people estimate, we can’t use what we learn by working together and what we have learned already about this domain. Agile helps this specifically, because we deliver often and can re-estimate the backlog if we need to do so. We understand more about the remaining risks because we deliver.

Managers Use (Old) Estimates for the Project Portfolio

I have seen managers use estimates to value projects in the project portfolio. I wrote a post about that years ago: Why Cost is the Wrong Question for Evaluating Projects in Your Project Portfolio.

Here’s the problem with old estimates. Estimates expire. Estimates are good for some time period. Not forever, but for some number of weeks. Depending on how you work, maybe the estimate is good for a couple of months. Estimates expire because things change: the team might change. The codebase and the requirements have certainly changed.

However, project cost is only one part of the equation. Value has to be another part when you think about the project portfolio. Otherwise, you fall prey to the Sunk Cost Fallacy.

You might say, “We use ROI (return on investment) as a way to value projects in the project portfolio.” Now you depend on two guesses: what it will take for you to complete the project and the sales/adoption rate for the release.

ROI is a surrogate measure of value. When I have measured the actuals (what it actually took us to finish the project and the actual revenue at three, six, nine, and twelve months out), we almost always did not meet the projected ROI. And, because we chose that project with great-looking ROI, we incurred a cost of delay for other projects. “If we don’t release this project because we are doing something else, what is the effect on our revenue/adoption/etc.?” (See Diving for Hidden Treasures to read about the different costs of delay.)

People often say, “These two projects are equal in terms of project cost. If I don’t use ROI, how can I decide between these projects?”

I have never seen two projects be truly equal, and it’s quite difficult to predict which project will be shorter. Here are some options:

  • Use Cost of Delay as a way to value the projects in the project portfolio. See Diving for Hidden Treasures for ways to see Cost of Delay. See Manage Your Project Portfolio for many other ranking ideas for the project portfolio.
  • Determine the first releasable deliverable of value for each project. How long will that take? If you do one project, release something, does that provide you enough revenue so you can go to the other project and release something there?
  • Make all the deliverables small, so, if necessary, you could flow work from both projects through one team. The team can finish a feature/deliverable and move to the next one. I recommend using a kanban board and swarming over each feature so you get maximum throughput. Once the team has finished “enough” features, decide which project to spend more time on.
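A toy comparison shows how cost of delay can rank two projects where ROI-style cost comparisons cannot. All the numbers here are invented for illustration:

```javascript
// Two hypothetical projects: how long until a release, and how much
// value per week that release produces once it ships.
const projectA = { weeksToRelease: 8, valuePerWeek: 30000 };
const projectB = { weeksToRelease: 4, valuePerWeek: 20000 };

// Doing one project first delays the other's value stream
// by the first project's entire duration.
function delayCost(first, second) {
  return second.valuePerWeek * first.weeksToRelease;
}

console.log(`A first: cost of delaying B = $${delayCost(projectA, projectB)}`); // $160000
console.log(`B first: cost of delaying A = $${delayCost(projectB, projectA)}`); // $120000
```

In this sketch, doing the shorter project B first loses less to delay, even though A produces more value per week once released. That is the kind of question cost of delay lets you ask.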

Agile helps the entire project portfolio problem because we can all see progress on an agile project: demos, product backlog burnup chart, and retrospective results. We know a lot more about what we finish and where we are headed. We can stop working on one project because we don’t leave work in an unfinished state.

Management Wants the Comfort of a Single Estimation Date

I supply a range of dates for my projects: possible, likely, pessimistic. I sometimes supply a confidence range. I have met many managers who do not want the reality of estimation. They want a single date: September 1, 2pm.

The problem is that an estimate is a guess. I can only know the exact duration or cost when I’m done with the project. I can get closer as we finish work, but I can’t know for sure months in advance. For a year-long project, I can guess as to which quarter/three month period. As we finish the project, I can spiral in on a date. By the last quarter, I can be within a couple of weeks of knowing.
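One way to produce possible/likely/pessimistic dates instead of a single date is a simple Monte Carlo simulation over per-task ranges. This is a minimal sketch with invented numbers, not a description of any particular tool:

```javascript
// Each task has an optimistic..pessimistic range in days (all invented).
const tasks = [
  { low: 2, high: 6 },
  { low: 3, high: 10 },
  { low: 1, high: 4 },
  { low: 5, high: 15 },
];

// One simulated project: draw each task uniformly within its range and sum.
function simulateOnce() {
  return tasks.reduce(
    (sum, t) => sum + t.low + Math.random() * (t.high - t.low), 0);
}

// Run many simulations, then read off percentiles of the sorted totals.
const runs = Array.from({ length: 10000 }, simulateOnce).sort((a, b) => a - b);
const percentile = p => runs[Math.floor(p * (runs.length - 1))];

console.log(`possible (10%):    ${percentile(0.1).toFixed(1)} days`);
console.log(`likely (50%):      ${percentile(0.5).toFixed(1)} days`);
console.log(`pessimistic (90%): ${percentile(0.9).toFixed(1)} days`);
```

The 90% figure is the kind of "90% confidence date" mentioned below; as work finishes, you re-run with narrower ranges and the dates spiral in.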

Managers get paid the big bucks to manage the organization with assumptions, risks, and unknowns that we explain to them. When we work on projects, it’s our job to manage our risks and deliver value. The more value we deliver, the fewer unknowns our managers have.

Agile (and incremental approaches) help us manage those unknowns. Nothing is perfect, but they are better than other approaches.

I’ve worked with several managers who wanted one date. I gave them the pessimistic date. Sometimes, I provided the 90% confidence date. Even then, there were times we had more problems than we anticipated. Meeting that date became impossible.

A single-point estimate is something we like. Unfortunately, a single-point date is often wrong. Management wants it for any number of reasons.

If one of those reasons is assurance that the team can deliver, agile provides us numerous ways to get this result without a single-point estimate: set a target, see demos, see the product backlog burnup chart.

I have nothing against estimation when used properly. These are just three examples of improper estimate use. Estimates are guesses. In Part 3, I’ll talk when estimates might be useful.

(Sorry for the length of this post. I’ll stop writing because otherwise I’ll keep adding. Sigh.)

Categories: Project Management

SE-Radio Episode 259: John Purrier on OpenStack

John Purrier talks with Jeff Meyerson about OpenStack, an open-source cloud operating system for managing compute resources. They explore infrastructure-as-a-service, platform-as-a-service, virtualization, containers, and the future of systems development and management. Cloud service providers like Amazon, Google, and Microsoft provide both infrastructure-as-a-service and platform-as-a-service. Infrastructure-as-a-service gives developers access to virtual machines, servers, and network infrastructure. […]
Categories: Programming

Daydream Labs: VR plays well with others

Google Code Blog - Tue, 06/07/2016 - 17:04

Posted by Rob Jagnow, Software Engineer, Google VR

At Daydream Labs, we pair engineers with designers to rapidly prototype virtual reality concepts, and we’ve already started to share our learnings with the VR community. This week, we focus on social. In many of our experiments, we’ve found that being in VR with others amplifies and improves experiences in VR, as long as you take a few things into account. Here’s what we’ve learned so far:

Simplicity can be powerful: Avatars (the virtual representations of people in VR) can be simplified to just a floating head with googly eyes and still convey a surprising amount of emotion, intent, and social cues. Eyes give people a location to look at and speak towards, but they also increase face-to-face communication by making even basic avatars feel more human. When we combine this with hands and a spatially located voice, it comes together to create a sense of shared presence.

Connecting the real and the virtual: Even when someone is alone in VR, you can make them feel connected. For example, you can carry on a conversation with them even if you’re not in VR. Your voice can serve as a subtle reminder that they’re spanning two spaces: the real and the virtual. This asymmetric experience can be a fun way to help ground party games where one player is in VR but other players aren’t, like charades or Pictionary.

But when someone else joins that virtual world with them, we’ve seen time and time again that the real world melts away. For most multiplayer activities, this is ideal because it makes the experience incredibly engaging.

Join the party: When you first start a VR experience with others, it can be tough to know where to begin. After all, it’s easier to join a party than to start one! Create shared goals for multi-player experiences. When you give people something to play with together, it can help them break the ice, allow them to make friends, and have more fun in VR.

You think you know somebody: Lastly, people who know each other offline immediately notice differences in stature or height in VR. We can re-calibrate environments, playing with height and scale values, to build a VR world where everyone appears to be the same height. Or we can adjust display settings to make each person feel like they’re the tallest person in the room. Height is such a powerful social cue in the real world, and we can tune these settings in VR to nudge people into having more friendly, prosocial interactions.

If you’d like to learn more about Daydream Labs and what we’ve learned so far, check out our recent Lessons Learned from VR Prototyping talk at Google I/O.

Categories: Programming

Announcing the Certification of Agencies as part of Google Developers Agency Program

Google Code Blog - Tue, 06/07/2016 - 16:58

Posted by Uttam Kumar Tripathi, Global Lead, Developer Agency Program

Back in December 2015, we had shared our initial plans to offer a unique program to software development agencies working on mobile apps.

The Agency Program is an effort by Google’s Developer Relations team to work closely with development agencies around the world and help them build high quality user experiences. It includes providing agencies with personalized training through local events and hangouts, dedicated content, priority support from product and developer relations teams, and early access to upcoming developer products.

Over the past few months, the program drew a lot of interest from hundreds of Agencies and we have since successfully launched this program in a number of countries including India, UK, Russia, Indonesia, USA and Canada.

Having worked with various agencies for several months, we have now launched Agency Program certification for those partners that have undergone the required training and have demonstrated excellence in building Android applications using our platforms. We hope this will make it easier for clients who are looking to hire an agency to make an informed decision, while also pushing the entire development-agency ecosystem to improve.

The list of our first set of certified agencies Agencies is available here.

We do plan to review and add more agencies to this list over the year and also expand the program to other countries.

Categories: Programming

Don’t Pull My Finger

NOOP.NL - Jurgen Appelo - Tue, 06/07/2016 - 11:54

As a public speaker, I get the weirdest requests.

Sometimes, event organizers want me to provide some “seed questions” that a moderator can ask me after a presentation. Usually, they use these when audience members do not immediately raise their hands when asked if they have any questions. In such a case, the moderator switches to a prearranged seed question after which the audience has usually awakened from its coma.

I don’t like giving organizers seed questions. Why should I be the one to tell them which questions they must ask me? It makes me think of a not-to-be-named family member who asked kids to pull on one of his fingers when he felt some gas coming up. And when they innocently pulled a finger, guess what happened? He thought it was hilarious.

Event organizers know their audiences better than I do. There’s no need to be lazy or to defer the bootstrapping of the Q&A to me. It is their job to make sure that we answer the important questions of their audience. And they shouldn’t care about anything that I most urgently want to get out of me.

When moderators do their job well, they generate a question or two on behalf of their audiences. And when they do, often enough they pose questions I would never have imagined, and I need to think and offer an answer fast!

That prevents me from being lazy too.

I’m not going to let anyone pull my fingers. So don’t ask. If I have something relevant to say, I’ll say it. I don’t need to have it pulled out of me. Instead, I look forward to getting surprising questions that will challenge me!

(c) 2008 Derek Bridges, Creative Commons 2.0

My new book Managing for Happiness is available from June 2016. PRE-ORDER NOW!

Managing for Happiness cover (front)

The post Don’t Pull My Finger appeared first on NOOP.NL.

Categories: Project Management

Behind the scenes: Firebass ARG Challenge

Google Code Blog - Mon, 06/06/2016 - 21:47

Originally posted on Firebase blog

Posted by Karin Levi, Firebase Marketing Manager

This year's Google I/O was an exciting time for Firebase. In addition to sharing the many innovations in our platform, we also hatched a time-traveling digital fish named Firebass.

Firebass is an Alternate Reality Game (ARG) that lives across a variety of static web pages. If you haven’t played it yet, you might want to stop reading now and go fishing. After you’ve caught the Firebass and passed the challenge, come back -- we’re going to talk about how we built Firebass.

How we began

We partnered with Instrument, a Portland-based digital creative agency, to help us to create an ARG. We chose ARG because this allowed us to utilize developers’ own software tools and ingenuity for game functionality.

Our primary objective behind Firebass was to make you laugh, while teaching you a little bit about the new version of Firebase. The payoff for us? We had a blast building it. The payoff for you? A chance to win a free ticket to I/O 2017.

To begin, we needed to establish a central character and theme. Through brainstorming and a bit of serendipity, Firebass was born. Firebass is the main character who has an instinctive desire to time-travel back through prior eras of the web. Through developing the story, we had the chance to revisit the old designs and technologies from the past that we all find memorable -- as you can imagine, this was really fun.

Getting started

We put together a functional prototype of the first puzzle to test with our own developers here at Google. This helped us gauge both the enjoyment level and the difficulty of the puzzles. Puzzle clues were created by thinking of various ways to obfuscate information that developers would be able to recognize and manipulate. Ideas included encoding information in binary, base64, or hex, inside images, and in other assets such as audio files.

The core goal with each of the puzzles was to make them logical but not too difficult; we wanted to make sure players stayed engaged. The bulk of the game’s content was stored in Firebase, which allowed us to prevent players from accessing certain game details too early by inspecting the source code. As an added bonus, this also allowed us to demonstrate a use case for Firebase remote data storage.

Driving the game forward

One of our first challenges was to find a way to communicate a story through static web pages. Our solution was to create a fake command line interface that acted as an outlet for Firebass to interact with players.

In order to ground our time travel story further, we kept the location of Firebass consistent at https://probassfinders.foo/ but changed the design with each puzzle era.

Continuing the journey

After establishing the Pro Bass Finders site and fake terminal as the centerpieces of the game, we focused on fleshing out the rest of the puzzle mechanics. Each puzzle began with the era-specific design of the Pro Bass Finders home page. We then conceived new puzzle pieces and designed additional pages to support them. An example of this was creating a fake email archive to hide additional clues.

Another clue was the QR code pieces in puzzle 2.

The QR codes demonstrate Firebase time-based read permissions and provide a way to keep players revisiting the site prior to reaching the end of puzzle 2. There were a total of three pieces of a QR code that each displayed at different times during the day. It was really fun and impressive to see all of the different ways players were able to come up with the correct answer. The full image translates to ‘Locating’, making the answer the letter ‘L’, but many players managed to solve this without needing to read the QR code. You're all smart cookies.
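Time-windowed reads like this can be expressed in Firebase Realtime Database security rules, where `now` holds the server’s current time in epoch milliseconds. The paths and timestamps below are invented for illustration, not the game’s actual rules:

```json
{
  "rules": {
    "qr_pieces": {
      "piece1": { ".read": "now > 1465286400000 && now < 1465315200000" },
      "piece2": { ".read": "now > 1465315200000 && now < 1465344000000" },
      "piece3": { ".read": "now > 1465344000000 && now < 1465372800000" }
    }
  }
}
```

Each piece becomes readable only during its window, so players had to return at different times of day to assemble the full code.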

Final part of the Puzzle

Puzzle 3 encompassed our deep nostalgia for the early web, and we did our best to authentically represent the anti-design look and feel of the 90s.

In one of the clues, we demonstrated Firebase Storage by storing an audio file remotely. Solving this required players to reference Firebase documentation to finish writing the code to retrieve the file.

// connect to Firebase Storage below
console.log('TODO: Complete connection to Firebase Storage');
var storageRef = firebase.app().storage().ref();
var file = storageRef.child('spectrogram.wav');

// TODO: Get download URL for file (https://developers.google.com/firebase/docs/storage/web/download-files)
The finale

While the contest was still active, players who completed the game were given a URL to submit their information for a chance to win a ticket to Google I/O 2017. After the contest was closed, we simply changed the final success message to provide a URL directly to the Firebass Gift Shop, a treasure in and of itself. :)

Until next time

This was an unforgettable experience with a fervently positive reaction. When puzzle 3 unlocked, server traffic increased 30x! The community response in sharing photos, Slack channels, music, jokes, posts, etc. was incredible. And all because of one fish. We can’t wait to see all the swimmer winners next year at I/O 2017. Until then, try playing the game yourself at firebase.foo. Thank you, Firebass. Long may you swim.

Categories: Programming

Auto-generating Google Forms

Google Code Blog - Mon, 06/06/2016 - 21:07

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Apps

function createForm() {
  // create & name Form
  var item = "Speaker Information Form";
  var form = FormApp.create(item);

  // single-line text field
  item = "Name, Title, Organization";
  form.addTextItem().setTitle(item);

  // multi-line "text area"
  item = "Short biography (4-6 sentences)";
  form.addParagraphTextItem().setTitle(item);

  // radio buttons
  item = "Handout format";
  var choices = ["1-Pager", "Stapled", "Soft copy (PDF)", "none"];
  form.addMultipleChoiceItem().setTitle(item).setChoiceValues(choices);

  // (multiple choice) checkboxes
  item = "Microphone preference (if any)";
  choices = ["wireless/lapel", "handheld", "podium/stand"];
  form.addCheckboxItem().setTitle(item).setChoiceValues(choices);
}

If you’re ready to get started, you can find more information, including another intro code sample, in the Google Forms reference section of the Apps Script docs. In the video, I challenge viewers to enhance the code snippet above to read in “forms data” from an outside source such as a Google Sheet, Google Doc, or even an external database (accessible via Apps Script’s JDBC Service) and generate multiple Forms from it. What else can you do with Forms?
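One way to take up that challenge is to split the work into a framework-free helper that turns raw sheet rows into form specs, plus a thin Apps Script wrapper that calls the SpreadsheetApp and FormApp services. A sketch, assuming a Sheet whose rows look like [formTitle, question1, question2, ...] — the function names and row layout are my own invention:

```javascript
// Pure helper: convert raw sheet rows into form specs.
// Skips blank title rows and empty trailing question cells.
function specsFromRows(rows) {
  return rows
    .filter(function (row) { return row[0]; })
    .map(function (row) {
      return { title: row[0], questions: row.slice(1).filter(Boolean) };
    });
}

// Apps Script only: read the first sheet and create one Form per row.
function buildForms(spreadsheetId) {
  var rows = SpreadsheetApp.openById(spreadsheetId)
      .getSheets()[0].getDataRange().getValues();
  specsFromRows(rows).forEach(function (spec) {
    var form = FormApp.create(spec.title);
    spec.questions.forEach(function (q) {
      form.addTextItem().setTitle(q);
    });
  });
}
```

Keeping specsFromRows() free of service calls means the row-handling logic can be unit-tested outside the Apps Script environment.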

One example is illustrated by this Google Docs add-on I created for users to auto-generate Google Forms from a formatted Google Doc. If you’re looking to do integration with a variety of Google services, check out this advanced Forms quickstart that uses Google Sheets, Docs, Calendar, and Gmail! Finally, Apps Script also powers add-ons for Google Forms. To learn how to write those, check out this Forms add-on quickstart.

We hope the DevByte and all these examples inspire you to create awesome tools with Google Forms and take the manual creation burden off your shoulders! If you’re new to the Launchpad Online developer series, we share technical content aimed at novice Google developers and discuss the latest tools and features to help you build your app. Please subscribe to our channel, give us your feedback below, and tell us what topics you would like to see in future episodes!

Categories: Programming

The Case for and Against Estimates, Part 1

After the article I referenced in Moving to Agile Contracts was published, there was a little kerfuffle on Twitter. Some people realized I was talking about the value of estimates and #noestimates. Some folks thought I was advocating never estimating anything.

Let me clarify my position.

I like order-of-magnitude estimates. I don’t hire people without either a not-to-exceed or an order-of-magnitude estimate. I explained how to do that in Predicting the Unpredictable: Pragmatic Approaches to Estimating Project Cost or Schedule.  That’s because I have to manage the risk of the money and the risk I won’t get what I want.

Notice there are two risks: money and value. When I need to manage both risks, I ask for order-of-magnitude estimation and frequent demos. When we did the remodel for the house we are living in now—almost a rebuild—we had an estimate from our builder. Our builder was great. He encouraged us to see the house every day to see their progress. The builder was transparent with problems. Was he truly agile? In order for him to create an estimate, we iterated on the designs for each room before he broke ground.

Construction, hardware development, mechanical products—all those “hard” products require iteration on the design before implementation. That’s because the cost of change is so high when you move to physical form.  In my experience, the cost of not iterating before you go to physical form is prohibitive.

So, what is the value of estimation for software? I have said (In Predicting) that software is learning, innovation. We learn from every software project. That makes estimation tricky, if not impossible. Can you estimate? Of course. The problem I see is in the value of the estimate. That value changes for the domain and customer.

If you have a reluctant-to-agile customer, you might want to do more estimation as you work through the project. That was the point of the Moving to Agile Contracts article. You might not even convince a customer that agile is good for them. If you get the value out of working in an agile way, great. You still get the value, even if the customer doesn’t.

[Image: example one-quarter agile roadmap]

If you have a regulated domain or a complex project that you might want to cancel, you might need more estimation as you proceed. I still like using my deliverable-based roadmaps and a not-to-exceed project cost. I would ask, “How much change do we expect?” If the deliverables are going to change every day or week, I don’t see how you can estimate and believe it. You can do a not-to-exceed for a date or cost.

In software, most of the cost is in the run rate for the project.

The image here is an example one-quarter roadmap from Agile and Lean Program Management. In a program, people often need to see further into the future than a backlog or two. I often see organizations requiring six-quarter roadmaps. That’s fine. The roadmap is a wish list. Why? Because it’s not possible to provide a reasonable estimate of that much work that far out without doing some work.

Here’s the tricky part: how much work do you need to do for estimation? I don’t know.

[Image: staged delivery life cycle]

In the Twitter conversation, Glen Alleman mentioned that Raytheon is doing a project using agile. I am pretty sure the agile Raytheon guy I know (socially) is on that project. Yes, they do 4-week iterations. They work feature-by-feature. I believe, although I am not positive, they started that project with a significant investigation period. To me, that project looks a lot more like staged delivery.

Does that mean it’s not agile? Well, staged delivery does not require the same transparency and culture change that agile does. On the other hand, does it matter? Remember, I am not religious about what you do. I want your project to succeed.

So what about #noestimates? How does that figure into this conversation?

Here are times when you might not want to bother estimating:

  • You have a fixed target. Management said, “We need this project done by that date.” In that case, get a ranked backlog and get to work. Why fight with people or waste time estimating when you have no idea what you can do? In Predicting, I say something like this, “Get a ranked backlog. Get to work. Get some data. Show progress. If you can’t deliver what they want when they want it, you now have data for a discussion.”
  • You think things will change every day or every week. Management/your sponsor says, “Here’s the ranked backlog. We want to see what you can do so we know what we want to change.” Inviting change is why we use agile. Otherwise, we could use staged delivery. Why estimate? I would use a not-to-exceed date or cost.
  • You are on the project to save the company. Get a ranked backlog and get to work. Determine how often you can release to get revenue.

I have been on all these kinds of projects. I have gotten a ranked backlog and gotten to work. I have succeeded. Oh, in one case, the company management started the project to save the company too late to make a difference. I didn’t succeed then. We needed four weeks to make a difference and had two.

I like delivering small chunks often. Yes, I use deliverables in my work that are small, often an hour or less.  I can stop when I get to the end of them and not worry about the next chunk. I am sure I do different work than you do.

That is why, as Glen says, the domain is critical. I think it’s also the customer. Maybe there are more things to consider.

In my next post, I will discuss when estimates are harmful.

Categories: Project Management

Moving to Agile Contracts

Marcus Blankenship and I wrote a follow-up piece to our first article, mentioned in Discovery Projects Work for Agile Contracts. That article was about when your client wants the benefit of agile, but wants you to estimate everything in advance and commit to a fixed price/fixed scope (and possibly fixed date) project. Fixing all of that is nuts.

The next article is Use Demos to Build Trust.

That post prompted much Twitter discussion about the purpose of estimates and trust. I’ll write a whole post on that because it deserves a thoughtful answer.

Categories: Project Management

The Inquiry Method for Test Planning

Google Testing Blog - Mon, 06/06/2016 - 14:07
by Anthony Vallone

Creating a test plan is often a complex undertaking. An ideal test plan is accomplished by applying basic principles of cost-benefit analysis and risk analysis, optimally balancing these software development factors:
  • Implementation cost: The time and complexity of implementing testable features and automated tests for specific scenarios will vary, and this affects short-term development cost.
  • Maintenance cost: Tests and test plans vary from easy to difficult to maintain, and this affects long-term development cost. When manual testing is chosen, repeated manual runs also add to long-term cost.
  • Monetary cost: Some test approaches may require billed resources.
  • Benefit: Tests are capable of preventing issues and aiding productivity by varying degrees. Also, the earlier they can catch problems in the development life-cycle, the greater the benefit.
  • Risk: The probability of failure scenarios may vary from rare to likely, and their consequences may vary from minor nuisance to catastrophic.
Effectively balancing these factors in a plan depends heavily on project criticality, implementation details, resources available, and team opinions. Many projects can achieve outstanding coverage with high-benefit, low-cost unit tests, but they may need to weigh options for larger tests and complex corner cases. Mission critical projects must minimize risk as much as possible, so they will accept higher costs and invest heavily in rigorous testing at all levels.
This guide puts the onus on the reader to find the right balance for their project. Also, it does not provide a test plan template, because templates are often too generic or too specific and quickly become outdated. Instead, it focuses on selecting the best content when writing a test plan.

Test plan vs. strategy
Before proceeding, two common methods for defining test plans need to be clarified:
  • Single test plan: Some projects have a single "test plan" that describes all implemented and planned testing for the project.
  • Single test strategy and many plans: Some projects have a "test strategy" document as well as many smaller "test plan" documents. Strategies typically cover the overall test approach and goals, while plans cover specific features or project updates.
Either of these may be embedded in and integrated with project design documents. Both of these methods work well, so choose whichever makes sense for your project. Generally speaking, stable projects benefit from a single plan, whereas rapidly changing projects are best served by infrequently changed strategies and frequently added plans.
For the purpose of this guide, I will refer to both test document types simply as "test plans". If you have multiple documents, just apply the advice below to your document aggregation.

Content selection
A good approach to creating content for your test plan is to start by listing all questions that need answers. The lists below provide a comprehensive collection of important questions that may or may not apply to your project. Go through the lists and select all that apply. By answering these questions, you will form the contents for your test plan, and you should structure your plan around the chosen content in any format your team prefers. Be sure to balance the factors as mentioned above when making decisions.

  • Do you need a test plan? If there is no project design document or a clear vision for the product, it may be too early to write a test plan.
  • Has testability been considered in the project design? Before a project gets too far into implementation, all scenarios must be designed as testable, preferably via automation. Both project design documents and test plans should comment on testability as needed.
  • Will you keep the plan up-to-date? If so, be careful about adding too much detail, otherwise it may be difficult to maintain the plan.
  • Does this quality effort overlap with other teams? If so, how have you deduplicated the work?

  • Are there any significant project risks, and how will you mitigate them? Consider:
    • Injury to people or animals
    • Security and integrity of user data
    • User privacy
    • Security of company systems
    • Hardware or property damage
    • Legal and compliance issues
    • Exposure of confidential or sensitive data
    • Data loss or corruption
    • Revenue loss
    • Unrecoverable scenarios
    • SLAs
    • Performance requirements
    • Misinforming users
    • Impact to other projects
    • Impact from other projects
    • Impact to company’s public image
    • Loss of productivity
  • What are the project’s technical vulnerabilities? Consider:
    • Features or components known to be hacky, fragile, or in great need of refactoring
    • Dependencies or platforms that frequently cause issues
    • Possibility for users to cause harm to the system
    • Trends seen in past issues

  • What does the test surface look like? Is it a simple library with one method, or a multi-platform client-server stateful system with a combinatorial explosion of use cases? Describe the design and architecture of the system in a way that highlights possible points of failure.
  • What are the features? Consider making a summary list of all features and describe how certain categories of features will be tested.
  • What will not be tested? No test suite covers every possibility. It’s best to be up-front about this and provide rationale for not testing certain cases. Examples: low risk areas that are a low priority, complex cases that are a low priority, areas covered by other teams, features not ready for testing, etc. 
  • What is covered by unit (small), integration (medium), and system (large) tests? Always test as much as possible in smaller tests, leaving fewer cases for larger tests. Describe how certain categories of test cases are best tested by each test size and provide rationale.
  • What will be tested manually vs. automated? When feasible and cost-effective, automation is usually best. Many projects can automate all testing. However, there may be good reasons to choose manual testing. Describe the types of cases that will be tested manually and provide rationale.
  • How are you covering each test category? Consider:
  • Will you use static and/or dynamic analysis tools? Both static analysis tools and dynamic analysis tools can find problems that are hard to catch in reviews and testing, so consider using them.
  • How will system components and dependencies be stubbed, mocked, faked, staged, or used normally during testing? There are good reasons to do each of these, and they each have a unique impact on coverage.
  • What builds are your tests running against? Are tests running against a build from HEAD (aka tip), a staged build, and/or a release candidate? If only from HEAD, how will you test release build cherry picks (selection of individual changelists for a release) and system configuration changes not normally seen by builds from HEAD?
  • What kind of testing will be done outside of your team? Examples:
    • Dogfooding
    • External crowdsource testing
    • Public alpha/beta versions (how will they be tested before releasing?)
    • External trusted testers
  • How are data migrations tested? You may need special testing to compare before and after migration results.
  • Do you need to be concerned with backward compatibility? You may own previously distributed clients or there may be other systems that depend on your system’s protocol, configuration, features, and behavior.
  • Do you need to test upgrade scenarios for server/client/device software or dependencies/platforms/APIs that the software utilizes?
  • Do you have line coverage goals?
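The stub/fake distinction raised in the questions above can be made concrete with a tiny sketch. All names here (greet, UserStore variants) are invented for illustration; the point is that a stub returns canned answers while a fake is a working lightweight implementation:

```javascript
// Code under test: depends on anything with a find(id) method.
function greet(userStore, id) {
  var user = userStore.find(id);
  return user ? 'Hello, ' + user.name : 'Who are you?';
}

// A stub: returns a canned answer regardless of input — no real behavior.
var stubStore = { find: function () { return { name: 'Ada' }; } };

// A fake: a genuinely working, lightweight in-memory implementation.
function InMemoryUserStore() { this.users = {}; }
InMemoryUserStore.prototype.add = function (id, name) {
  this.users[id] = { name: name };
};
InMemoryUserStore.prototype.find = function (id) {
  return this.users[id] || null;
};

var fakeStore = new InMemoryUserStore();
fakeStore.add(1, 'Grace');
```

Stubs keep small tests fast and focused; fakes exercise more realistic behavior (including the not-found path) at slightly higher maintenance cost — which is exactly the trade-off the coverage questions ask you to weigh.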

Tooling and Infrastructure
  • Do you need new test frameworks? If so, describe these or add design links in the plan.
  • Do you need a new test lab setup? If so, describe these or add design links in the plan.
  • If your project offers a service to other projects, are you providing test tools to those users? Consider providing mocks, fakes, and/or reliable staged servers for users trying to test their integration with your system.
  • For end-to-end testing, how will test infrastructure, systems under test, and other dependencies be managed? How will they be deployed? How will persistence be set-up/torn-down? How will you handle required migrations from one datacenter to another?
  • Do you need tools to help debug system or test failures? You may be able to use existing tools, or you may need to develop new ones.

  • Are there test schedule requirements? What time commitments have been made, which tests will be in place (or test feedback provided) by what dates? Are some tests important to deliver before others?
  • How are builds and tests run continuously? Most small tests will be run by continuous integration tools, but large tests may need a different approach. Alternatively, you may opt for running large tests as-needed. 
  • How will build and test results be reported and monitored?
    • Do you have a team rotation to monitor continuous integration?
    • Large tests might require monitoring by someone with expertise.
    • Do you need a dashboard for test results and other project health indicators?
    • Who will get email alerts and how?
    • Will the person monitoring tests simply use verbal communication to the team?
  • How are tests used when releasing?
    • Are they run explicitly against the release candidate, or does the release process depend only on continuous test results? 
    • If system components and dependencies are released independently, are tests run for each type of release? 
    • Will a "release blocker" bug stop the release manager(s) from actually releasing? Is there an agreement on what are the release blocking criteria?
    • When performing canary releases (aka % rollouts), how will progress be monitored and tested?
  • How will external users report bugs? Consider feedback links or other similar tools to collect and cluster reports.
  • How does bug triage work? Consider labels or categories for bugs so that they land in a triage bucket. Also make sure the teams responsible for filing bugs and/or creating the bug report template are aware of this. Are you using one bug tracker, or do you need to set up an automatic or manual import routine?
  • Do you have a policy for submitting new tests before closing bugs that could have been caught?
  • How are tests used for unsubmitted changes? If anyone can run all tests against any experimental build (a good thing), consider providing a howto.
  • How can team members create and/or debug tests? Consider providing a howto.

  • Who are the test plan readers? Some test plans are only read by a few people, while others are read by many. At a minimum, you should consider getting a review from all stakeholders (project managers, tech leads, feature owners). When writing the plan, be sure to understand the expected readers, provide them with enough background to understand the plan, and answer all questions you think they will have - even if your answer is that you don’t have an answer yet. Also consider adding contacts for the test plan, so any reader can get more information.
  • How can readers review the actual test cases? Manual cases might be in a test case management tool, in a separate document, or included in the test plan. Consider providing links to directories containing automated test cases.
  • Do you need traceability between requirements, features, and tests?
  • Do you have any general product health or quality goals and how will you measure success? Consider:
    • Release cadence
    • Number of bugs caught by users in production
    • Number of bugs caught in release testing
    • Number of open bugs over time
    • Code coverage
    • Cost of manual testing
    • Difficulty of creating new tests

Categories: Testing & QA