
Software Development Blogs: Programming, Software Testing, Agile Project Management



Understanding Software Project Size - New Lecture Posted

10x Software Development - Steve McConnell - Thu, 08/06/2015 - 21:36

I've uploaded a new lecture in my Understanding Software Projects lecture series. This lecture focuses on the critical topic of Software Size. If you've ever wondered why some early projects succeed while later similar projects fail, this lecture explains the basic dynamics that cause that. If you've wondered why Scrum projects struggle to scale, I share some insights on that topic. 

I believe this is one of my best lectures in the series so far -- and it's a very important topic. It will be free for the next week, so check it out: https://cxlearn.com.

Lectures posted so far include:  

0.0 Understanding Software Projects - Intro
     0.1 Introduction - My Background
     0.2 Reading the News
     0.3 Definitions and Notations 

1.0 The Software Lifecycle Model - Intro
     1.1 Variations in Iteration 
     1.2 Lifecycle Model - Defect Removal
     1.3 Lifecycle Model Applied to Common Methodologies 
     1.4 Lifecycle Model - Selecting an Iteration Approach  

2.0 Software Size - Introduction (New)
     2.01 Size - Examples of Size
     2.05 Size - Comments on Lines of Code
     2.1 Size - Staff Sizes 
     2.2 Size - Schedule Basics 
     2.3 Size - Debian Size Claims 

3.0 Human Variation - Introduction

Check out the lectures at http://cxlearn.com!

Are 64% of Features Really Rarely or Never Used?

Mike Cohn's Blog - Wed, 08/05/2015 - 15:00

A very oft-cited metric is that 64 percent of features in products are “rarely or never used.” The source for this claim was Jim Johnson, chairman of the Standish Group, who presented it in a keynote at the XP 2002 conference in Sardinia. The data Johnson presented can be seen in the following chart.

Johnson’s data has been repeated again and again to the extent that those citing it either don’t understand its origins or never bothered to check into them.

The misuse or perhaps just overuse of this data has been bothering me for a while, so I decided to investigate it. I was pretty sure of the facts but didn’t want to rely solely on my memory, so I got in touch with the Standish Group, and they were very helpful in clarifying the data.

The results Jim Johnson presented at XP 2002 and that have been repeated so often were based on a study of four internal applications. Yes, four applications. And, yes, all internal-use applications. No commercial products.

So, if you’re citing this data and using it to imply that every product out there contains 64 percent “rarely or never used features,” please stop. Please be clear that the study was of four internally developed projects at four companies.

#NotImplementedNoValue

Music on the page is just a potential. Like software, it changes when it is implemented.

The twelve principles that underpin the Agile Manifesto include several that link the concept of value to the delivery of working software. The focus on working software stems from one of the four values, “Working software over comprehensive documentation,” which is a reaction to projects and programs that seem to value reports and PowerPoint presentations more than putting software in the hands of users. For a typical IT organization that develops, enhances and maintains the software that the broader organization uses to do its business, value is only delivered when software can be used in production. Implementing software provides value through the following four mechanisms:

  1. Validation – In order to get to the point where software is written, tested and implemented, a number of decisions must be made. The process of implementing functional software and getting real people to use it provides a tool to validate not only the ideas that the software represents, but also the assumptions that were made to prioritize the need and build the software. Implementing and using software provides the information needed to validate the ideas and decisions made along the way.
  2. Real-life feedback – The best feedback is generated when users actually have to use the software to do their job in their day-to-day environment. Reviews and demonstrations are a great tool for generating initial feedback; however, they are artificial environments that lack the complexity of most office environments.
  3. Proof of performance – One of the most salient principles of the Agile Manifesto is that working software is the primary measure of progress. The delivery of valuable working software communicates with the wider organizational community that they are getting something of value for their investment.
  4. Revenue – In scenarios in which the software being delivered, enhanced or maintained is customer facing, it can’t generate revenue until it is in use, whether the implementation is a new software-supported product or an improvement to the user experience of an existing product.

In most scenarios, software that is both in production and being used creates value for the organization. Software that is either being worked on or sitting in a library waiting to be implemented into production might have potential value, but that potential has little real value unless it can be converted. In batteries, the longer we wait to convert potential energy into kinetic energy, the less energy exists, because the capacity of the battery decays over time. In any reasonably dynamic environment, information, like the capacity of a battery, decays over time. Software requirements and the ideas encompassed by the physical software also decay over time as the world we live and work in changes. Bottom line: If the software is not in production, we can’t get value from using it, nor can we get feedback that tells us whether the work environment it will someday run in is changing; all we have is a big ball of uncertainty. And, as we know, uncertainty reduces value.



Building IntelliJ plugins from the command line

Xebia Blog - Mon, 08/03/2015 - 13:16

For a few years already, IntelliJ IDEA has been my IDE of choice. Recently I dove into the world of plugin development for IntelliJ IDEA and was unhappily surprised. Plugin development relies entirely on IDE features. It looked hard to create a build script that does the actual plugin compilation and packaging from the command line. The JetBrains folks simply have not catered for that. Unless you're using TeamCity as your CI tool, you're out of luck.

For me it makes no sense writing code if:

  1. it cannot be compiled and packaged from the command line
  2. the code cannot be compiled and tested on a CI environment
  3. IDE configurations cannot be generated from the build script

Google did not help out a lot. Tomasz Dziurko put me in the right direction.

In order to build and test a plugin, the following needs to be in place:

  1. First of all you'll need IntelliJ IDEA. This is quite obvious. The Plugin DevKit plugins need to be installed. If you want to create a language plugin you might want to install Grammar-Kit too.
  2. An IDEA SDK needs to be registered. The SDK can point to your IntelliJ installation.

The plugin module files are only slightly different from your average project.

Update: I ran into some issues with forms and language code generation and added some updates at the end of this post.

Compiling and testing the plugin

Now for the build script. My build tool of choice is Gradle. My plugin code adheres to the default Gradle project structure.

First thing to do is to get a hold of the IntelliJ IDEA libraries in an automated way. Since the IDEA libraries are not available via Maven repos, an IntelliJ IDEA Community Edition download is probably the best option to get a hold of the libraries.

The plan is as follows: download the Linux version of IntelliJ IDEA, and extract it in a predefined location. From there, we can point to the libraries and subsequently compile and test the plugin. The libraries are Java, and as such platform independent. I picked the Linux version since it has a nice, simple file structure.

The following code snippet caters for this:

apply plugin: 'java'

// Pick the Linux version, as it is a tar.gz we can simply extract
def IDEA_SDK_URL = 'http://download.jetbrains.com/idea/ideaIC-14.0.4.tar.gz'
def IDEA_SDK_NAME = 'IntelliJ IDEA Community Edition IC-139.1603.1'

configurations {
    ideaSdk
    bundle // dependencies bundled with the plugin
}

dependencies {
    ideaSdk fileTree(dir: 'lib/sdk/', include: ['*/lib/*.jar'])

    compile configurations.ideaSdk
    compile configurations.bundle
    testCompile 'junit:junit:4.12'
    testCompile 'org.mockito:mockito-core:1.10.19'
}

// IntelliJ IDEA can still run on a Java 6 JRE, so we need to take that into account.
sourceCompatibility = 1.6
targetCompatibility = 1.6

task downloadIdeaSdk(type: Download) {
    sourceUrl = IDEA_SDK_URL
    target = file('lib/idea-sdk.tar.gz')
}

task extractIdeaSdk(type: Copy, dependsOn: [downloadIdeaSdk]) {
    def zipFile = file('lib/idea-sdk.tar.gz')
    def outputDir = file("lib/sdk")

    from tarTree(resources.gzip(zipFile))
    into outputDir
}

compileJava.dependsOn extractIdeaSdk

class Download extends DefaultTask {
    @Input
    String sourceUrl

    @OutputFile
    File target

    @TaskAction
    void download() {
       if (!target.parentFile.exists()) {
           target.parentFile.mkdirs()
       }
       ant.get(src: sourceUrl, dest: target, skipexisting: 'true')
    }
}

If parallel test execution does not work for your plugin, you'd better turn it off as follows:

test {
    // Avoid parallel execution, since the IntelliJ boilerplate is not up to that
    maxParallelForks = 1
}
The plugin deliverable

Obviously, the whole build process should be automated. That includes the packaging of the plugin. A plugin is simply a zip file with all libraries together in a lib folder.

task dist(type: Zip, dependsOn: [jar, test]) {
    from configurations.bundle
    from jar.archivePath
    rename { f -> "lib/${f}" }
    into project.name
    baseName project.name
}

build.dependsOn dist
Handling IntelliJ project files

We also need to generate IntelliJ IDEA project and module files so the plugin can live within the IDE. Telling the IDE it's dealing with a plugin opens some nice features, mainly the ability to run the plugin from within the IDE. Anton Arhipov's blog post put me on the right track.

The Gradle idea plugin helps out in creating those files. This works out of the box for your average project, but for plugins IntelliJ expects some things differently. The project files should mention that we're dealing with a plugin project, and the module file should point to the plugin.xml file required for each plugin. Also, the SDK libraries are not to be included in the module file, so I excluded those from the configuration.

The following code snippet caters for this:

apply plugin: 'idea'

idea {
    project {
        languageLevel = '1.6'
        jdkName = IDEA_SDK_NAME

        ipr {
            withXml {
                it.node.find { node ->
                    node.@name == 'ProjectRootManager'
                }.'@project-jdk-type' = 'IDEA JDK'

                logger.warn "=" * 71
                logger.warn " Configured IDEA JDK '${jdkName}'."
                logger.warn " Make sure you have it configured IntelliJ before opening the project!"
                logger.warn "=" * 71
            }
        }
    }

    module {
        scopes.COMPILE.minus = [ configurations.ideaSdk ]

        iml {
            beforeMerged { module ->
                module.dependencies.clear()
            }
            withXml {
                it.node.@type = 'PLUGIN_MODULE'
                //  <component name="DevKit.ModuleBuildProperties" url="file://$MODULE_DIR$/src/main/resources/META-INF/plugin.xml" />
                def cmp = it.node.appendNode('component')
                cmp.@name = 'DevKit.ModuleBuildProperties'
                cmp.@url = 'file://$MODULE_DIR$/src/main/resources/META-INF/plugin.xml'
            }
        }
    }
}
Put it to work!

Combining the aforementioned code snippets will result in a build script that can be run on any environment. Have a look at my idea-clock plugin for a working example.

Update 1: Forms

For an IntelliJ plugin to use forms, it turned out that some extra work has to be performed.
The difference only becomes obvious once you compare the plugin built by IntelliJ with the one built by Gradle:

  1. Include a bunch of helper classes
  2. Instrument the form classes

Including more files in the plugin was easy enough. Check out this commit to see what has to be added. Those classes are used as "helpers" for the form after instrumentation. For instrumentation, an Ant task is available. This task can be loaded in Gradle and run as a last step of compilation, as sketched below.
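Below is a rough sketch (not the exact script from the posts referenced here) of how that instrumentation step could be wired into the Gradle build. The task name, the source and output directories, and the assumption that javac2.jar and its helpers end up on the ideaSdk configuration are mine; verify them against your own project and SDK layout.

task instrumentForms(dependsOn: compileJava) {
    doLast {
        // Load IDEA's form instrumentation Ant task; the classes live in javac2.jar
        // (plus its dependencies) inside the extracted SDK's lib folder.
        ant.taskdef(name: 'instrumentIdeaExtensions',
                    classname: 'com.intellij.ant.InstrumentIdeaExtensions',
                    classpath: configurations.ideaSdk.asPath)

        // Instrument the compiled classes that back the .form files.
        ant.instrumentIdeaExtensions(srcdir: 'src/main/java',
                                     destdir: sourceSets.main.output.classesDir,
                                     classpath: sourceSets.main.compileClasspath.asPath)
    }
}

classes.dependsOn instrumentForms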

Once I knew what to look for, this post helped me out: How to manage development life cycle of IntelliJ plugins with Maven, along with this build script.

Update 2: Language code generation

The JetBrains folks promote using JFlex to build the lexer for your custom language. In order to use this from Gradle, a custom version of JFlex is needed. This was used in an early version of the FitNesse plugin.
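For completeness, here is one hedged way such a generation step might look in Gradle: a plain JavaExec task that runs the JFlex jar before compilation. The jar location, main class name, skeleton file and paths are placeholders for illustration; check them against the JFlex distribution you actually use.

task generateLexer(type: JavaExec) {
    // Run JFlex over the grammar before compiling the plugin sources.
    classpath = files('lib/jflex/jflex-patched.jar')  // assumed location of the custom JFlex jar
    main = 'JFlex.Main'                               // assumed main class; verify for your jar
    args = ['--skel', 'lib/jflex/idea-flex.skeleton', // assumed lexer skeleton file
            '-d', 'src/main/gen/com/example/mylang',  // hypothetical output package directory
            'src/main/resources/MyLang.flex']         // hypothetical grammar file
}

compileJava.dependsOn generateLexer

// Compile the generated sources along with the handwritten ones.
sourceSets.main.java.srcDir 'src/main/gen'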

SPaMCAST 353 -Learning Styles, Microservices for All, Tame Flow

Software Process and Measurement Cast - Sun, 08/02/2015 - 22:00

This week’s Software Process and Measurement Cast features three columns.  The first is our essay on learning styles.  Learning styles are useful to consider when you are trying to change the world or just an organization.  Opposites might attract in poetry and sitcoms, but opposite learning styles rarely work together well in teams without empathy and a dash of coaching. Therefore, the coach and the team need an inventory of the learning styles on the team. Models and active evaluation against a model are tools to generate knowledge about teams so they can tune how they work to maximize effectiveness.

Our second column features Gene Hughson bringing the ideas from his wonderful Form Follows Function Blog.  Gene talks about microservices and challenges the idea that they are a silver bullet.

We anchor this week’s SPaMCAST with Steve Tendon’s column discussing the TameFlow methodology and his great new book, Hyper-Productive Knowledge Work Performance.   One of the topics Steve tackles this week is the idea of knowledge workers and why a knowledge worker is different.  The differences Steve describes are key to developing a hyper-productive environment.

Call to Action!

I have a challenge for the Software Process and Measurement Cast listeners for the next few weeks. I would like you to find one person that you think would like the podcast and introduce them to the cast. This might mean sending them the URL or teaching them how to download podcasts. If you like the podcast and think it is valuable they will be thankful to you for introducing them to the Software Process and Measurement Cast. Thank you in advance!

Re-Read Saturday News

Remember that the Re-Read Saturday of The Mythical Man-Month is in full swing.  This week we tackle the essay titled “The Second-System Effect”!  Check out the new installment at Software Process and Measurement Blog.

Upcoming Events

Software Quality and Test Management 

September 13 – 18, 2015

San Diego, California

http://qualitymanagementconference.com/

I will be speaking on the impact of cognitive biases on teams!  Let me know if you are attending! If you are still deciding on attending let me know because I have a discount code!

 

Agile Development Conference East

November 8-13, 2015

Orlando, Florida

http://adceast.techwell.com/

I will be speaking on November 12th on the topic of Agile Risk!  Let me know if you are going and we will have a SPaMCAST Meetup.

Next SPaMCAST

The next Software Process and Measurement Cast features our interview with Allan Kelly.  We talked about #NoProjects and focusing on delivering a consistent flow of value.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


17 Theses on Software Estimation

10x Software Development - Steve McConnell - Sun, 08/02/2015 - 17:20

(with apologies to Martin Luther for the title)

Arriving late to the #NoEstimates discussion, I’m amazed at some of the assumptions that have gone unchallenged, and I’m also amazed at the absence of some fundamental points that no one seems to have made so far. The point of this article is to state unambiguously what I see as the arguments in favor of estimation in software and put #NoEstimates in context.  

1. Estimation is often done badly and ineffectively and in an overly time-consuming way. 

My company and I have taught upwards of 10,000 software professionals better estimation practices, and believe me, we have seen every imaginable horror story of estimation done poorly. There is no question that “estimation is often done badly” is a true observation of the state of the practice. 

2. The root cause of poor estimation is usually lack of estimation skills. 

Estimation done poorly is most often due to lack of estimation skills. Smart people using common sense are not sufficient to estimate software projects. Reading two-page blog articles on the internet is not going to teach anyone how to estimate very well. Good estimation is not that hard, once you’ve developed the skill, but it isn’t intuitive or obvious, and it requires focused self-education or training. 

3. Many comments in support of #NoEstimates demonstrate a lack of basic software estimation knowledge. 

I don’t expect most #NoEstimates advocates to agree with this thesis, but as someone who does know a lot about estimation I think it’s clear on its face. Here are some examples:

(a) Are estimation and forecasting the same thing? As far as software estimation is concerned, yes they are. (Just do a Google or Bing search of “definition of forecast”.) Estimation, forecasting, prediction--it's all the same basic activity, as far as software estimation is concerned. 

(b) Is showing someone several pictures of kitchen remodels that have been completed for $30,000 and implying that the next kitchen remodel can be completed for $30,000 estimation? Yes, it is. That’s an implementation of a technique called Reference Class Forecasting. 
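To make the mechanics concrete, here is a tiny, hedged Groovy sketch of reference class forecasting (runnable with the plain groovy command). The dollar figures are invented; the point is only that the forecast comes from a class of similar completed projects rather than from a bottom-up plan.

// Invented historical costs of completed kitchen remodels (the reference class).
def completedRemodels = [26000, 29500, 31000, 28000, 34500, 30000]

def sorted = completedRemodels.sort(false)
def median = sorted[sorted.size().intdiv(2)]

println "Reference class: ${sorted.size()} similar completed remodels"
println "Observed range: \$${sorted.first()} to \$${sorted.last()}"
println "Median, a reasonable single-point forecast: \$${median}"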

(c) Does doing a few iterations, calculating team velocity, and then using that empirical velocity data to project a completion date count as estimation? Yes, it does. Not only is it estimation, it is a really effective form of estimation. I’ve heard people argue that because velocity is empirically based, it isn’t estimation. That argument is incorrect and shows a lack of basic understanding of the nature of estimation. 
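As a minimal, hedged illustration of that velocity-based projection, here is a short Groovy sketch; the sprint numbers and backlog size are made up.

// Invented sprint history: story points completed per two-week sprint.
def velocities = [23, 27, 25, 30, 26]
def remainingBacklog = 240   // story points left in the product backlog (assumed)

def averageVelocity = velocities.sum() / (double) velocities.size()
def sprintsNeeded = Math.ceil(remainingBacklog / averageVelocity) as int

println "Average velocity: ${averageVelocity} points per sprint"
println "Projected sprints to finish the backlog: ${sprintsNeeded}"
println "Projected calendar time: about ${sprintsNeeded * 2} weeks"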

(d) Is estimation time-consuming and a waste of time? One of the most common symptoms of lack of estimation skill is spending too much time on the wrong activities. This work is often well-intentioned, but it’s common to see well-intentioned people doing more work than they need to get worse answers than they could be getting.  

4. Being able to estimate effectively is a skill that any true software professional needs to develop, even if they don’t need it on every project. 

“Estimating is problematic, therefore software professionals should not develop estimation skill” – this is a common line of reasoning in #NoEstimates. Unless a person wants to argue that the need for estimation is rare, this argument is not supported by the rest of #NoEstimates’ premises. 

If I agreed, for sake of argument, that 50% of the projects don’t need to be estimated, the other 50% of the projects would still benefit from the estimators having good estimation skills. If you’re a true software professional, you should develop estimation skill so that you can estimate competently on the 50% of projects that do require estimation. 

In practice, I think the number of projects that need estimates is much higher than 50%. 

5. Estimates serve numerous legitimate, important business purposes.

Estimates are used by businesses in numerous ways, including: 

  • Allocating budgets to projects (i.e., estimating the effort and budget of each project)
  • Making cost/benefit decisions at the project/product level, which is based on cost (software estimate) and benefit (defined feature set)
  • Deciding which projects get funded and which do not, which is often based on cost/benefit
  • Deciding which projects get funded this year vs. next year, which is often based on estimates of which projects will finish this year
  • Deciding which projects will be funded from CapEx budget and which will be funded from OpEx budget, which is based on estimates of total project effort, i.e., budget
  • Allocating staff to specific projects, i.e., estimates of how many total staff will be needed on each project
  • Allocating staff within a project to different component teams or feature teams, which is based on estimates of scope of each component or feature area
  • Allocating staff to non-project work streams (e.g., budget for a product support group, which is based on estimates for the amount of support work needed)
  • Making commitments to internal business partners (based on projects’ estimated availability dates)
  • Making commitments to the marketplace (based on estimated release dates)
  • Forecasting financials (based on when software capabilities will be completed and revenue or savings can be booked against them)
  • Tracking project progress (comparing actual progress to planned (estimated) progress)
  • Planning when staff will be available to start the next project (by estimating when staff will finish working on the current project)
  • Prioritizing specific features on a cost/benefit basis (where cost is an estimate of development effort)

These are just a subset of the many legitimate reasons that businesses request estimates from their software teams. I would be very interested to hear how #NoEstimates advocates suggest that a business would operate if you remove the ability to use estimates for each of these purposes.

The #NoEstimates response to these business needs is typically of the form, “Estimates are inaccurate and therefore not useful for these purposes” rather than, “The business doesn’t need estimates for these purposes.” 

That argument really just says that businesses are currently operating on much worse quality information than they should be, and probably making poorer decisions as a result, because the software staff are not providing very good estimates. If software staff provided more accurate estimates, the business would make better decisions in each of these areas, which would make the business stronger. 

This all supports my point that improved estimation skill should be part of the definition of being a true software professional. 

6. Part of being an effective estimator is understanding that different estimation techniques should be used for different kinds of estimates. 

One thread that runs throughout the #NoEstimates discussions is lack of clarity about whether we’re estimating before the project starts, very early in the project, or after the project is underway. The conversation is also unclear about whether the estimates are project-level estimates, task-level estimates, sprint-level estimates, or some combination. Some of the comments imply ineffective attempts to combine kinds of estimates—the most common confusion I’ve read is trying to use task-level estimates to estimate a whole project, which is another example of lack of software estimation skill. 

Effective estimation requires that the right kind of technique be applied to each different kind of estimate. Learning when to use each technique, as well as learning each technique, requires some professional skills development. 

7. Estimation and planning are not the same thing, and you can estimate things that you can’t plan. 

Many of the examples given in support of #NoEstimates are actually indictments of overly detailed waterfall planning, not estimation. The simple way to understand the distinction is to remember that planning is about “how” and estimation is about “how much.” 

Can I “estimate” a chess game, if by “estimate” I mean how each piece will move throughout the game? No, because that isn’t estimation; it’s planning; it’s “how.”

Can I estimate a chess game in the sense of “how much”? Sure. I can collect historical data on the length of chess games and know both the average length and the variation around that average and predict the length of a game. 

More to the point, estimating software projects is not analogous to estimating one chess game. It’s analogous to estimating a series of chess games. People who are not skilled in estimation often assume it’s more difficult to estimate a series of games than to estimate an individual game, but estimating the series is actually easier. Indeed, the more chess games in the set, the more accurately we can estimate the set, once you understand the math involved. 
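A hedged Groovy simulation of that claim, using an invented distribution of game lengths, shows the effect: the relative spread of the total shrinks roughly as one over the square root of the number of games.

// Invented distribution: game length averages 40 moves with a spread of 15.
def rnd = new Random(42)
def gameLength = { Math.max(10, (40 + 15 * rnd.nextGaussian()) as int) }

[1, 10, 100].each { n ->
    // Simulate many "series of n games" and measure how variable the totals are.
    def totals = (1..5000).collect { trial -> (1..n).collect { gameLength() }.sum() }
    def mean = (totals.sum() / totals.size()) as double
    def variance = (totals.collect { (it - mean) * (it - mean) }.sum() / totals.size()) as double
    printf("Series of %3d games: relative spread of total length = %.1f%%%n",
           n, 100 * Math.sqrt(variance) / mean)
}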

8. You can estimate what you don’t know, up to a point. 

In addition to estimating “how much,” you can also estimate “how uncertain.” In the #NoEstimates discussions, people throw out lots of examples along the lines of, “My project was doing unprecedented work in Area X, and therefore it was impossible to estimate the whole project.” That isn’t really true. What you would end up with in cases like that is high variability in your estimate for Area X, and a common estimation mistake would be letting X’s uncertainty apply to the whole project rather than constraining its uncertainty just to Area X. 

Most projects contain a mix of precedented and unprecedented work, or certain and uncertain work. Decomposing the work, estimating uncertainty in different areas, and building up an overall estimate from that is one way of dealing with uncertainty in estimates. 
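One hedged way to picture “decompose, estimate uncertainty per area, and roll it up” is the short Groovy sketch below. The areas, effort figures and sigmas are invented, and adding the sigmas as a root sum of squares assumes the areas are independent.

// Invented decomposition of a project into areas with individual uncertainty.
def areas = [
    [name: 'Precedented UI work',  effort: 30, sigma: 4],   // effort in staff-weeks, 1-sigma spread
    [name: 'Reporting',            effort: 20, sigma: 3],
    [name: 'Unprecedented Area X', effort: 25, sigma: 15]   // high uncertainty stays local to X
]

def totalEffort = areas*.effort.sum()
def totalSigma = Math.sqrt(areas*.sigma.collect { it * it }.sum() as double)

println "Expected effort: ${totalEffort} staff-weeks"
println "Overall uncertainty (1 sigma): about +/- ${Math.round(totalSigma)} staff-weeks"

Note how Area X dominates the overall spread without making the rest of the project unestimatable, which is exactly the point of constraining its uncertainty to Area X.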

9. Both estimation and control are needed to achieve predictability. 

Much of the writing on Agile development emphasizes project control over project estimation. I actually agree that project control is more powerful than project estimation; however, effective estimation usually plays an essential role in achieving effective control. 

To put this in Agile Manifesto-like terms:

We have come to value project control over project estimation, 
as a means of achieving predictability

As in the Agile Manifesto, we value both terms, which means we still value the term on the right. 

#NoEstimates seems to pay lip service to both terms, but the emphasis from the hashtag onward is really about discarding the term on the right. This is a case where I believe the right answer is both/and, not either/or. 

10. People use the word "estimate" sloppily. 

No doubt. Lack of understanding of estimation is not limited to people tweeting about #NoEstimates. Business partners often use the word “estimate” to refer to what would more properly be called a “planning target” or “commitment.” Further, one common mistake software professionals make is trying to create estimates when the business is really asking for a commitment, or asking for a plan to meet a target, but using the word “estimate” to ask for that. 

We have worked with many companies to achieve organizational clarity about estimates, targets, and commitments. Clarifying these terms makes a huge difference in the dynamics around creating, presenting, and using software estimates effectively. 

11. Good project-level estimation depends on good requirements, and average requirements skills are about as bad as average estimation skills. 

A common refrain in Agile development is “It’s impossible to get good requirements,” and that statement has never been true. I agree that it’s impossible to get perfect requirements, but that isn’t the same thing as getting good requirements. I would agree that “It is impossible to get good requirements if you don’t have very good requirement skills,” and in my experience that is a common case.  I would also agree that “Projects usually don’t have very good requirements,” as an empirical observation—but not as a normative statement that we should accept as inevitable. 

Like estimation skill, requirements skill is something that any true software professional should develop, and the state of the art in requirements at this time is far too advanced for even really smart people to invent everything they need to know on their own. Like estimation skill, a person is not going to learn adequate requirements skills by reading blog entries or watching short YouTube videos. Acquiring skill in requirements requires focused, book-length self-study or explicit training or both. 

Why would we care about getting good requirements if we’re Agile? Isn’t trying to get good requirements just waterfall? The answer is both yes and no. You can’t achieve good predictability of the combination of cost, schedule, and functionality if you don’t have a good definition of functionality. If your business truly doesn’t care about predictability (and some truly don’t), then letting your requirements emerge over the course of the project can be a good fit for business needs. But if your business does care about predictability, you should develop the skill to get good requirements, and then you should actually do the work to get them. You can still do the rest of the project using by-the-book Scrum, and then you’ll get the benefits of both good requirements and Scrum. 

12. The typical estimation context involves moderate volatility and a moderate level of unknowns.

Ron Jeffries writes, “It is conventional to behave as if all decent projects have mostly known requirements, low volatility, understood technology, …, and are therefore capable of being more or less readily estimated by following your favorite book.” 

I don’t know who said that, but it wasn’t me, and I agree with Ron that that statement doesn’t describe most of the projects that I have seen. 

I think it would be more true to say, “The typical software project has requirements that are knowable in principle, but that are mostly unknown in practice due to insufficient requirements skills; low volatility in most areas with high volatility in selected areas; and technology that tends to be either mostly leading edge or mostly mature; …; and are therefore amenable to having both effective requirements work and effective estimation work performed on those projects, given sufficient training in both skill sets.”

In other words, software projects are challenging, and they’re even more challenging if you don’t have the skills needed to work on them. If you have developed the right skills, the projects will still be challenging, but you’ll be able to overcome most of the challenges or all of them. 

Of course there is a small percentage of projects that do have truly unknowable requirements and across-the-board volatility. I consider those to be corner cases. It’s good to explore corner cases, but also good not to lose sight of which cases are most common. 

13. Responding to change over following a plan does not imply not having a plan. 

It’s amazing that in 2015 we’re still debating this point. Many of the #NoEstimates comments literally emphasize not having a plan, i.e., treating 100% of the project as emergent. They advocate a process—typically Scrum—but no plan beyond instantiating Scrum. 

According to the Agile Manifesto, while agile is supposed to value responding to change, it also is supposed to value following a plan. Doing no planning at all is not only inconsistent with the Agile Manifesto, it also wastes some of Scrum's capabilities. One of the amazingly powerful aspects of Scrum is that it gives you the ability to respond to change; and that doesn’t imply that you need to avoid committing to plans in the first place. 

My company and I have seen Agile adoptions shut down in some companies because an Agile team is unwilling to commit to requirements up front or refuses to estimate up front. As a strategy, that’s just dumb. If you fight your business up front about providing estimates, even if you win the argument that day, you will still get knocked down a peg in the business’s eyes. 

Instead, use your velocity to estimate how much work you can do over the course of a project, and commit to a product backlog based on your demonstrated capacity for work. Your business will like that. Then, later, when your business changes its mind—which it probably will—you’ll be able to respond to change. Your business will like that even more. Wouldn’t you rather look good twice than look bad once? 

14. Scrum provides better support for estimation than waterfall ever did, and there does not have to be a trade off between agility and predictability. 

Some of the #NoEstimates discussion seems to interpret challenges to #NoEstimates as challenges to the entire ecosystem of Agile practices, especially Scrum. Many of the comments imply that predictability comes at the expense of agility. The examples cited to support that are mostly examples of unskilled misapplications of estimation practices, so I see them as additional examples of people not understanding estimation very well. 

The idea that we have to trade off agility to achieve predictability is a false trade off. In particular, if no one had ever uttered the word “agile,” I would still want to use Scrum because of its support for estimation and predictability. 

The combination of story pointing, product backlog, velocity calculation, short iterations, just-in-time sprint planning, and timely retrospectives after each sprint creates a nearly perfect context for effective estimation. Scrum provides better support for estimation than waterfall ever did. 

If a company truly is operating in a high uncertainty environment, Scrum can be an effective approach. In the more typical case in which a company is operating in a moderate uncertainty environment, Scrum is well-equipped to deal with the moderate level of uncertainty and provide high predictability (e.g., estimation) at the same time. 

15. There are contexts where estimates provide little value. 

I don’t estimate how long it will take me to eat dinner, because I know I’m going to eat dinner regardless of what the estimate says. If I have a defect that keeps taking down my production system, the business doesn’t need an estimate for that because the issue needs to get fixed whether it takes an hour, a day, or a week. 

The most common context I see where estimates are not done on an ongoing basis and truly provide little business value is online contexts, especially mobile, where the cycle times are measured in days or shorter, the business context is highly volatile, and the mission truly is, “Always do the next most useful thing with the resources available.” 

In both these examples, however, there is a point on the scale at which estimates become valuable. If the work on the production system stretches into weeks or months, the business is going to want and need an estimate. As the mobile app matures from one person working for a few days to a team of people working for a few weeks, with more customers depending on specific functionality, the business is going to want more estimates. Enjoy the #NoEstimates context while it lasts; don’t assume that it will last forever. 

16. This is not religion. We need to get more technical and economic about software discussions. 

I’ve seen #NoEstimates advocates treat these questions of requirements volatility, estimation effectiveness, and supposed tradeoffs between agility and predictability as value-laden moral discussions in which their experience with usually-bad requirements and usually-bad estimates calls for an iterative approach like pure Scrum, rather than a front-loaded approach like Scrum with a pre-populated product backlog. In these discussions, “Waterfall” is used as an invective, where the tone of the argument is often more moral than economic. That religion isn’t unique to Agile advocates, and I’ve seen just as much religion on the non-Agile sides of various discussions. I’ve appreciated my most recent discussion with Ron Jeffries because he hasn’t done that. It would be better for the industry at large if people could stay more technical and economic more often. 

For my part, software is not religion, and the ratio of work done up front on a software project is not a moral issue. If we assume professional-level skills in agile practices, requirements, and estimation, the decision about how much work to do up front should be an economic decision based on cost of change and value of predictability. If the environment is volatile enough, then it’s a bad economic decision to do lots of up front requirements work just to have a high percentage of requirements spoil before they can be implemented. If there’s little or no business value created by predictability, that also suggests that emphasizing up front estimation work would be a bad economic decision.

On the other hand, if the business does value predictability, then how we support that predictability should also be an economic decision. If we do a lot of the requirements work up front, and some requirements spoil, but most do not, and that supports improved predictability, and the business derives value from that, that would be a good economic choice. 

The economics of these decisions are affected by the skills of the people involved. If my team is great at Scrum but poor at estimation and requirements, the economics of up front vs. emergent will tilt one way. If my team is great at estimation and requirements but poor at Scrum, the economics might tilt the other way. 

Of course, skill sets are not divinely dictated or cast in stone; they can be improved through focused self-study and training. So we can treat the question of whether we should invest in developing additional skills as an economic issue too. 

What is the cost of training staff to reach proficiency in estimation and requirements? Does the cost of achieving proficiency exceed the likely benefits that would derive from proficiency? That goes back to the question of how much the business values predictability. If the business truly places no value on predictability, there won’t be any ROI from training staff in practices that support predictability. But I do not see that as the typical case. 

My company and I can train software professionals to become proficient in both requirements and estimation in about a week. In my experience most businesses place enough value on predictability that investing a week to make that option available provides a good ROI to the business. Note: this is about making the option available, not necessarily exercising the option on every project. 

My company and I can also train software professionals to become proficient in a full complement of Scrum and other Agile technical practices in about a week. That produces a good ROI too. In any given case, I would recommend both sets of training. If I had to recommend only one or the other, sometimes I would recommend starting with the Agile practices. But I wouldn’t recommend stopping with them. 

Skills development in practices that support predictability vs. practices that support agility is not an either/or decision. A truly agile business would be able to be flexible when needed, or predictable when needed. A true software professional will be most effective when skilled in both skill sets. 

17. Agility plus predictability is better than agility alone. 

If you think your business values agility only, ask your business what it values. Businesses vary, and you might work in a business that truly does value agility over predictability or that values agility exclusively. 

In some cases, businesses will value predictability over agility. Odds are that your business actually values both agility and predictability. The point is, ask the business, don’t just assume it’s one or the other. 

I think it’s self-evident that a business that has both agility and predictability will outperform a business that has agility only. We need to get past the either/or thinking that limits us to one set of skills or the other and embrace both/and thinking that leads us to develop the full set of skills needed to become true software professionals. 


Re-Read Saturday: The Mythical Man-Month, Part 5 – The Second-System Effect

The Mythical Man-Month

In the fifth essay of The Mythical Man-Month, titled The Second-System Effect, Brooks circles back to a question he left unanswered in the essay Aristocracy, Democracy and System Design. The question was: If functional specialties are split, what bounds are left to constrain the possibility of a runaway architecture and design? The thought is that without the pressure of implementation, an architect does not have to consider constraints.

Brooks begins the essay by establishing a context to consider the second-system effect with a section titled “Interactive discipline for the architect”. All architects work within a set of constraints typically established by project stakeholders. Operating within these constraints requires self-discipline. In order to drive home the point, Brooks uses the analogy of a building architect. When designing a building, an architect works against a budget and other constraints, such as a location and local ordinances. Implementation falls to the general contractor and subcontractors. In order to test the design, the architect will ask for estimates from the contractors (analogous to the teams in a software environment). Estimates provide the architect with the feedback needed to test ideas and assumptions and to assure the project’s stakeholders that the build can be completed within the constraints. When estimates come in too high, the architect will need to either alter the design or challenge the estimates.

When an architect challenges an estimate, he or she could be seen as leveraging the power hierarchy established by separating functions (see last week’s essay). However, to successfully challenge an estimate, the architect needs to remember four points.

  1. The contractors (development personnel in software) are responsible for implementation. The architect can only suggest, not dictate, changes in implementation. Force will create a power imbalance that generates poor behaviors.
  2. When challenging an estimate, be prepared to suggest a means of implementation, but be willing to accept other ways to achieve the same goal. Recognize that if you make a suggestion before being asked, you will establish an anchoring bias and may not end up with an optimal solution.
  3. When making suggestions, make them discreetly. Quiet leadership is often most effective.
  4. The architect should be prepared to forego credit for the changes generated as estimates and constraints are negotiated. Brooks pointed out that in the end it is not about the architect, but rather about the solution.

The first part of the essay established both the context and framework for developing the self-discipline needed to control runaway design. Brooks concludes the essay by exposing the exception he observed. Brooks called this exception the second-system effect. In a nutshell, the second-system effect reflects the observation that a first work is apt to be spare, whereas second attempts tend to be overdesigned as frills and signature embellishments start to creep in. Brooks points out this behavior can often be seen in scenarios in which a favorite function or widget is continually refined even as it becomes obsolete. For example, why are designers spending precious design time on the steering wheel for the Google self-driving car? (It should be noted that the steering wheel was recently removed from the self-driving car and then put back in . . . with a brake.)

How can you avoid the second-system effect? The simplest approach would be to never hire an architect with only one design job under his or her belt, thereby avoiding the second-system effect. Unfortunately, over time that solution is a non-starter: who would replace the system architects who retire or move on to other careers? Other techniques, like ensuring everyone is aware of the effect or stealing an idea from Extreme Programming and pairing architects with more seasoned architects or business analysts, are far more effective and scalable.

Brooks provides the punch line for the essay in the first two paragraphs. A project or organization must establish an environment in which self-discipline and communication exist to reduce the potential for runaway design.

Previous installments of Re-Read Saturday for The Mythical Man-Month

Introductions and The Tar Pit

The Mythical Man-Month (The Essay)

The Surgical Team

Aristocracy, Democracy and System Design



#NoEstimates - Response to Ron Jeffries

10x Software Development - Steve McConnell - Fri, 07/31/2015 - 19:22

Ron Jeffries posted a thoughtful response to my #NoEstimates video. While I like some elements of his response, it still ultimately glosses over problems with #NoEstimates. 

I'll walk through Ron's critique and show where I think it makes good points vs. where it misses the point. 

Ron's First Remodel of my Kitchen Remodel Example

Ron describes a variation on my video's kitchen remodel story. (If Ron is modifying my original story, does that qualify as fan fic??? Cool!) He says the real #NoEstimates way to approach that kind of remodel would be for the contractor to say something like, "Let's divide your remodel up into areas, and we'll allocate $6,000 per area." The customer then says, "I need 15 linear feet of cabinets. What kind of cabinets can I get for $6,000?" Ron characterizes that as a "very answerable question." The contractor then goes through the rest of the project similarly. 

I like the idea of dividing the project into pieces, and I like the idea of budgeting each piece individually. But what makes it possible to break down the project into areas with budget amounts in each area? What makes it possible to know that we can deliver 15 linear feet of cabinets for $6,000? Ron says that question is "very answerable." What makes that question "very answerable?"  

Estimation! 

Specifically, we can answer that question because we have lots of historical data about the cost of kitchen cabinets. As Ron says, "Here are pictures of $30,000 kitchens I've done in the past." That's historical data from completed past projects. As I discuss in my estimation book, historical data is the key to good estimation in general, whether kitchen cabinets or software. In software, Ron's example would be called a "reference table of similar completed projects," which is a kind of "estimation by analogy." 

Far from supporting #NoEstimates, the example supports the value of collecting historical data so that you can use it in estimates. 

Ron's Second Remodel of My Kitchen Remodel Example

Ron presents a second modification of my scenario, this one based on the observation that kitchens involve physical material and software doesn't. "Kitchens are 'hard', Software is 'soft'."

(The whole hard vs. soft argument is a red herring. Yes, there are physical components in a kitchen remodel and there are few if any physical components in most software projects. So that's a difference. But even with the physical components in a kitchen remodel, the cost of labor is a major cost driver, just as it is in software, and more to the point, the labor cost is the primary source of uncertainty and risk in both cases. The presence of uncertainty and risk is the factor that makes estimation interesting in both cases. If there wasn't any uncertainty or risk, we could just look up the correct answer in a reference guide. Estimation would not present any challenges, and we would not need to write blog articles or create hashtags about it. So I think the contexts are more similar than different. Having said that, this issue really is beside the point.)

Ron goes on to say that, because software is soft, if the kitchen remodel was a software project we could just build it up $1,000 at a time, always doing the next most useful thing, and always leaving the kitchen in a state in which it can be used each day. The customer can inspect progress each day and give feedback on the direction. As we go along, if we see we don't really need $6,000 for a sink, we can redirect those funds to other areas, or just not spend them at all. If we get to the point where we've spent $20,000 and we're satisfied with where we are, we can just stop, and we'll have saved $10,000. 

This sounds appealing and probably works in some cases, especially in cases where the people have done the same kind of work many, many times and have a well calibrated gut feel that they can do the whole project satisfactorily for $30,000. However, it also depends on available resources exceeding the resources needed to satisfy requirements. I would love to work in an environment that had excess resources, but my experience says that resources normally fall short of what is needed to satisfy requirements. 

When resources are not sufficient to satisfy the requirements, a less idealized version of Ron's scenario would go more like this: 

The contractor gets to work, diligently working and spending $1,000 per day. The kitchen is indeed usable each day, and each day the customer agrees that the kitchen is incrementally better. After reaching the $15,000 mark, however, the customer says, "It doesn't look to me like we're halfway done with the work. I like each piece better than I did before, but we're nowhere near the end state I wanted." The contractor asks the customer for more detailed feedback and tries to adjust. The daily deliveries continue until the entire $30,000 is gone. 

The kitchen is better, and it is usable, but at the project retrospective the customer says, "None of the major parts are really what I wanted. If I'd known ahead of time that this approach would not get me what I wanted in any category, I would have said, 'Do the appliances and the countertops, and that's all.' That way I would at least have been satisfied with something. As it turned out, I'm not satisfied with anything." 

In this case, "collaboration" turned into "going down the drain together," which is not what anyone wanted. 

How do you avoid this outcome? You estimate the cost of each of the components. Or you give ranges of estimates and work with the customer to develop budgets for each area. Estimates and budgets help the customer prioritize, which is one of the more common reasons customers want estimates.  

Ron's Third Example 

Ron gives a third example in which he built a database product that no one had built before. There are ways to estimate that kind of work (more than you'd think, if you haven't received training in software estimation), but there is going to be more variability in those estimates, and if there are enough unknowns the variability might be high enough to make the estimates worthless. That's a better example of a case in which #NoEstimates might apply. But even then, I think #AskTheCustomer is a better position than #NoEstimates, or at least better than #AssumeNoEstimates, which is what #NoEstimates is often taken to imply. 

Summary

Ron's first example is based on expert estimation using historical data and directly supports #KnowWhenToEstimate. His example actually undermines #NoEstimates. 

Ron's second example assumes resources exceed what is needed to satisfy the requirements. When assumptions are adjusted to the more common condition of scarce resources, Ron's second example also supports the need for estimates. 

Ron closes with encouragement to get better at working within budgets (I agree!) and at collaborating with customers to identify budgets and similar constraints (I agree!). He also encourages getting better at "giving an idea what we can do with that slice, for a slice of the budget"--I agree again, and we can only give an idea of what we can do with that slice through estimation! 

None of this should be taken as a knock against decomposing into parts or building incrementally. Estimation by decomposition is a fundamental estimation approach. And I like the incremental emphasis in Ron's examples. It's just that, while building incrementally is good, building incrementally with predictability is even better. 

Software Development Conferences Forecast July 2015

From the Editor of Methods & Tools - Fri, 07/31/2015 - 08:20
Here is a list of software development related conferences and events on Agile project management ( Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & […]

Ready, Set and Go at Scale

Ready, Set, Go!

At the beginning of most races the starter will say something like, "Ready, set, go." At the team level, the concept of ready to develop acts as a filter to determine whether a user story is ready to be passed to the team and taken into a sprint. Ready to develop is often considered a bookend to the definition of done. By comparison, the definition of done is applicable both at the team level and at scale, when multiple teams are pursuing a common goal. In practice, ready does not scale as easily as done. Ready often requires two sets of criteria for scaled Agile projects: Ready to Develop and Ready to Go.

The most common set of ready criteria is encapsulated in Ready to Develop, which is used at the team level. A simple set of five criteria is:

  1. The story is well formed.
  2. The story fulfills the criteria encompassed by INVEST.
  3. A story must have acceptance criteria.
  4. Each story should have any external subject matter experts (not on the team) identified with contact details.
  5. There are no external dependencies that will prevent the story from being completed.

These five criteria are a great filter for a team to use to determine whether the user story they are considering for a sprint can be worked on immediately, or whether more grooming is needed. Teams will gravitate toward work they can address now rather than work that is ill-defined. The same predilection is true when viewing work being considered by a team of teams (an Agile Release Train in SAFe is an example of a team of teams). However, a team of teams needs a higher-level set of criteria to define whether they are ready to begin work. The Ready to Go criteria I use most often include the following (a sketch combining both checklists appears after the list):

  1. The teams have synchronized their development cadences. Synchronizing team cadences and agreeing on a team-of-teams cadence is a powerful tool to ensure communication and integration.
  2. A sufficiently groomed backlog exists. Identify and prepare enough work (see the team-level criteria) to begin and sustain the teams that are in place, but no more. The backlog does not need to be complete (or completely groomed) before work begins.
  3. Enough of the architecture and standards have been defined. SAFe addresses this issue by developing an architectural runway for the developers: just enough design and architecture is developed ahead of the development teams to provide the guidance they need just before they need it.
  4. Knowable constraints have been identified. Constraints typically include attributes such as due dates (for example, legal mandates can lead to hard due dates that even Agile teams need to accommodate), a fixed budget, available capabilities and perhaps physical or technical constraints. Everyone on ALL teams must be aware of the constraints they will have to perform within.
  5. The required infrastructure has been implemented. Infrastructure comprises the basic structures, tools and services needed to deliver value. If you are delivering software, your infrastructure needs could be as simple as a place to sit and consistent access to electric power, or as complex as servers, routers, networks and development tools.
  6. Teams and roles have been established (and, if needed, filled). Make sure you have organized and staffed the teams that will be involved in the effort (assuming that you are not using standing teams), and identified and trained any ancillary roles (e.g. build master or DevOps team). Organizing and staffing teams on the fly is an accident waiting to happen.
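To show how the two gates differ in scope, here is a minimal sketch that treats each checklist as data. The criterion strings paraphrase the lists above, and the helper function is my own invention rather than part of any framework.

  // Illustrative only: the two readiness gates expressed as simple checklists.
  const readyToDevelop = [
    'story is well formed',
    'story satisfies INVEST',
    'acceptance criteria are defined',
    'external subject matter experts identified with contact details',
    'no blocking external dependencies',
  ];

  const readyToGo = [
    'team cadences are synchronized',
    'sufficiently groomed backlog exists',
    'enough architecture and standards defined (architectural runway)',
    'knowable constraints identified',
    'required infrastructure implemented',
    'teams and roles established and filled',
  ];

  // A gate passes only when every criterion has been checked off.
  const passes = (criteria, checked) => criteria.every((c) => checked.has(c));

  // Example: the team of teams reviews Ready to Go before the first increment.
  const checkedOff = new Set(readyToGo.slice(0, 5)); // infrastructure still missing
  console.log(passes(readyToGo, checkedOff));        // false: not yet ready to GO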

Implementing the concept of Ready to Go for starting a scaled Agile project, release or program increment (a SAFe construct) has a different goal, and therefore different criteria, than Ready to Develop. When the starter calls out "ready," mentally run through the Ready to Go criteria to decide whether you can begin work at scale. Once you clear that hurdle, you can apply the second set of criteria, Ready to Develop. Using both, you will be more apt to sprint from the starting line when it is time to GO.


Categories: Process Management

#NoEstimates

10x Software Development - Steve McConnell - Thu, 07/30/2015 - 22:13

I've posted a YouTube video that gives my perspective on #NoEstimates. 

This is in the new Construx Brain Casts video series. 

 

The Definition of Done at Scale

While there is agreement that you should use DoD at scale, how to apply it is less clear.

The Definition of Done (DoD) is an important technique for increasing the operational effectiveness of team-level Agile. The DoD provides a team with a set of criteria that they can use to plan and bound their work. As Agile is scaled up to deliver larger, more integrated solutions, the question that is often asked is whether the concept of the DoD can still be applied. And if it is applied, does the application require another layer of done (and therefore more complexity)?

The answer to the first question is simple and straightforward. If the question is whether the Definition of Done technique can be used as Agile projects are scaled, then the answer is an unequivocal ‘yes’. In preparation for this essay I surveyed a few dozen practitioners and coaches on the topic to ensure that my use of the technique at scale wasn’t extraordinary. To a person, they all used the technique in some form. Mario Lucero, an Agile Coach in Chile, (interviewed on SPaMCAST 334) said it succinctly, “No, the use of Definition of Done doesn’t depend on how large is the project.”

While everyone agreed that the DoD makes sense in a scaled Agile environment, there is far less consensus on how to apply the technique. The divergence of opinion and practice centered on whether the teams working together continually integrate their code as part of their build management process. There are two camps. The first camp typically finds itself in organizations that integrate functions as a final step in a sprint, as a separate function outside of development, or in a separate hardening sprint. This camp generally feels that applying the Definition of Done requires a separate DoD specifically for integration. Such a DoD would include requirements for integrating functions, for testing the integration, and for architectural requirements that span teams. The second camp of respondents finds itself in environments where continuous integration is performed. In this scenario each respondent either added integration criteria to the team DoD or did nothing at all. The primary difference boiled down to whether the team members were responsible for making sure their code integrated with the overall system or whether someone else (real or perceived) was responsible.
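As a rough sketch of the difference between the two camps (the criteria below are illustrative, not drawn from the survey responses), the first camp maintains a separate integration-level DoD, while the second folds the same concerns into the team-level list:

  // Illustrative team-level DoD plus the separate integration-level DoD
  // described by the first camp; the specific criteria are hypothetical.
  const teamDoD = [
    'code reviewed',
    'unit tests pass',
    'acceptance criteria demonstrated to the product owner',
  ];

  const integrationDoD = [
    'feature integrated into the shared build',
    'cross-team integration tests pass',
    'architectural standards that span teams verified',
  ];

  // In a continuous integration shop (the second camp), the integration
  // criteria simply become part of the team-level definition instead.
  const continuousIntegrationDoD = teamDoD.concat(integrationDoD);
  console.log(continuousIntegrationDoD.length); // 6 criteria, one list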

In practice the way that DoD is practiced includes a bit of the infamous “it depends” magic. During our discussion on the topic, Luc Bourgault from Wolters Kluwer stated, “in a perfect world the definition should be same, but I think we should be accept differences when it makes sense.” Pradeep Chennavajhula, Senior Global VP at QAI, made three points:

  1. The principles and characteristics of the Definition of Done do not change with the size of the project.
  2. However, the considerations and level of detail will certainly be impacted.
  3. This may, however, create a perception that the Definition of Done varies by the size of the project.

The Definition of Done is useful for all Agile work, whether for a single team or a large scaled effort. However, how you have organized your Agile effort, particularly how integration is handled, will have more impact on how you apply the technique than size alone.


Categories: Process Management

Estimates on Split Stories Do Not Need to Equal the Original

Mike Cohn's Blog - Tue, 07/28/2015 - 15:00

It is good practice to first write large user stories (commonly known as epics) and then to split them into smaller pieces, a process known as product backlog refinement or grooming. When product backlog items are split, they are often re-estimated.

I’m often asked if the sum of the estimates on the smaller stories must equal the estimate on the original, larger story.

No.

Part of the reason for splitting the stories is to understand them better. Team members discuss the story with the product owner. As a product owner clarifies a user story, the team will know more about the work they are to do.

That improved knowledge should be reflected in any estimates they provide. If those estimates don’t sum to the same value as the original story, so be it.

But What About the Burndown?

But, I hear you asking, what about the release burndown chart? A boss, client or customer was told that a story was equal to 20 points. Now that the team split it apart, it’s become bigger.

Well, first, and I always feel compelled to say this: We should always stress to our bosses, clients and customers that estimates are estimates and not commitments.

When we told them the story would be 20 points, that meant perhaps 20, perhaps 15, perhaps 25. Perhaps even 10 or 40 if things went particularly well or poorly.

OK, you’ve probably delivered that message, and it may have gone in one ear and out the other of your boss, client or customer. So here’s something else you should be doing that can protect you against a story becoming larger when split and its parts are re-estimated.

I’ve always written and trained that the numbers in Planning Poker are best thought of as buckets of water.

You have, for example, an 8 card and a 13 card, but not a 10. If you have a story that you think is a 10, you need to estimate it as a 13. This slight rounding up (which only occurs on medium to large numbers) will mitigate the effect of stories becoming larger when split.

Consider the example of a story a team thinks is a 15. If they play Planning Poker the way I recommend, they will call that large story a 20.

Later, they split it into multiple smaller stories. Let’s say they split it into stories they estimate as 8, 8 and 5. That’s 21. That’s significantly larger than the 15 they really thought it was, but not much larger at all than the 20 they put on the story.
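A minimal sketch of that "bucket" rounding, assuming the commonly used modified-Fibonacci Planning Poker deck (the function name is mine):

  // Round a raw gut-feel estimate up to the next Planning Poker card.
  const cards = [1, 2, 3, 5, 8, 13, 20, 40, 100];

  function toBucket(rawEstimate) {
    // The smallest card that can "hold" the estimate.
    return cards.find((card) => card >= rawEstimate);
  }

  console.log(toBucket(10)); // 13: a story felt as a 10 is played as a 13
  console.log(toBucket(15)); // 20: the example from the text
  console.log(toBucket(8));  // 8:  exact matches stay as they are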

In practice, I've found this slight pessimistic bias works well to counter the natural tendency I believe many developers have to underestimate, and to provide a balance against those who will be overly shocked when an actual result overruns its estimate.

The monolithic frontend in the microservices architecture

Xebia Blog - Mon, 07/27/2015 - 16:39

When you are implementing a microservices architecture you want to keep services small. This should also apply to the frontend. If you don't, you will only reap the benefits of microservices for the backend services. An easy solution is to split your application up into separate frontends. When you have a big monolithic frontend that can’t be split up easily, you have to think about making it smaller. You can decompose the frontend into separate components independently developed by different teams.

Imagine you are working at a company that is switching from a monolithic architecture to a microservices architecture. The application you are working on is a big client-facing web application. You have recently identified a couple of self-contained features and created microservices to provide each piece of functionality. Your former monolith has been carved down to the bare essentials for providing the user interface, which is your public-facing web frontend. This microservice has only one responsibility, providing the user interface, and it can be scaled and deployed separately from the other backend services.

You are happy with the transition: individual services can fit in your head, multiple teams can work on different applications, and you are speaking at conferences about your experiences with the transition. However, you are not quite there yet: the frontend is still a monolith that spans the different backends. This means that on the frontend you still have some of the same problems you had before switching to microservices. The image below shows a simplification of the current architecture.

Single frontend

With a monolithic frontend you never get the flexibility to scale across teams as promised by microservices.

Backend teams can't deliver business value without the frontend being updated since an API without a user interface doesn't do much. More backend teams means more new features, and therefore more pressure is put on the frontend team(s) to integrate new features. To compensate for this it is possible to make the frontend team bigger or have multiple teams working on the same project. Because the frontend still has to be deployed in one go, teams cannot work independently. Changes have to be integrated in the same project and the whole project needs to be tested since a change can break other features.
Another option is to have the backend teams integrate their new features with the frontend and submit a pull request. This helps in dividing the work, but to do this effectively a lot of knowledge has to be shared across the teams to keep the code consistent and at the same quality level. This would basically mean that the teams are not working independently. With a monolithic frontend you never get the flexibility to scale across teams as promised by microservices.

Besides not being able to scale, there is also the classic overhead of separate backend and frontend teams. Each time there is a breaking change in the API of one of the services, the frontend has to be updated. Especially when a feature is added to a service, the frontend has to be updated to ensure your customers can even use the feature. If you have a frontend small enough, it can be maintained by a team that is also responsible for one or more of the services coupled to the frontend. This means that there is no overhead in cross-team communication. But because the frontend and the backend cannot be worked on independently, you are not really doing microservices. For an application that is small enough to be maintained by a single team, it is probably a good idea not to do microservices at all.

If you do have multiple teams working on your platform, but each team maintained one of several smaller frontend applications, there would be no problem. Each frontend would act as the interface to one or more services, and each of these services would have its own persistence layer. This is known as vertical decomposition. See the image below.

frontend-per-service

When splitting up your application you have to make sure you are making the right split, just as for the backend services. First you have to recognize the bounded contexts into which your domain can be split. A bounded context is a partition of the domain model with a clear boundary: within a bounded context there is high coupling, and between different bounded contexts there is low coupling. These bounded contexts are mapped to microservices within your application. This way the communication between services is also limited; in other words, you limit your API surface. This in turn limits the need to make changes in the API and ensures truly separately operating teams.

Often you are unable to separate your web application into multiple entirely separate applications. A consistent look and feel has to be maintained and the application should behave as a single application. However, the application and the development team are big enough to justify a microservices architecture. Examples of such big client-facing applications can be found in online retail, news, social networks and other online platforms.

Although a total split of your application might not be possible, it may still be possible to have multiple teams working on separate parts of the frontend as if they were entirely separate applications. Instead of splitting your web app entirely, you split it up into components, which can be maintained separately. This way you are doing a form of vertical decomposition while you still have a single, consistent web application. To achieve this you have a couple of options.

Share code

You can share code to make sure that the look and feel of the different frontends is consistent. However, you then risk coupling services via the common code. This could even result in not being able to deploy and release them separately. It will also require some coordination regarding the shared code.

Therefore, when you are going to share code, it is generally a good idea to think about the API it is going to provide. Calling your shared library "common", for example, is generally a bad idea. The name suggests that developers should put any code that could be shared by some other service into the library. "Common" is not a functional term but a technical one, which means the library doesn't focus on providing a specific piece of functionality. The result is an API without a specific goal, which will be subject to frequent change. This is especially bad for microservices, because multiple teams have to migrate to the new version whenever the API is broken.

Although sharing code between microservices has disadvantages, in practice all microservices share code by using open source libraries. Because that code is used by a lot of projects, special care is taken not to break compatibility. When you are going to share code, it is a good idea to hold your shared code to the same standard. If your library is not specific to your business, you might as well release it publicly, which encourages you to think twice about breaking the API or putting business-specific logic in the library.

Composite frontend

It is possible to compose your frontend out of different components. Each of these components could be maintained by a separate team and deployed independently of the others. Again, it is important to split along bounded contexts to limit the API surface between the components. The image below shows an example of such a composite frontend.

composite-design

Admittedly, this is an idea we already saw in portlets during the SOA age. However, in a microservices architecture you want the frontend components to be deployable fully independently, and you want a clean separation that ensures no, or only limited, two-way communication is needed between the components.

It is possible to integrate during development, at deployment time or at runtime. Each of these integration stages offers a different tradeoff between flexibility and consistency. If you want separate deployment pipelines for your components, you want a more flexible approach like runtime integration. If it is likely that different versions of components might break functionality, you need more consistency, which you get with development-time integration. Integration at deployment time could give you the same flexibility as runtime integration, if you are able to integrate different versions of components in the different environments of your build pipeline. However, this would mean creating a different deployment artifact for each environment.

Software architecture should never be a goal, but a means to an end

Combining multiple components via shared libraries into a single frontend is an example of development-time integration. However, it doesn't give you much flexibility with regard to separate deployment; it is still a classical integration technique. But since software architecture should never be a goal, but a means to an end, it can be the best solution for the problem you are trying to solve.

More flexibility can be found in runtime integration. An example of this is using AJAX to load the HTML and other dependencies of a component. The main application then only needs to know where to retrieve the component from, which is a good example of a small API surface. Of course, doing a request after page load means that users might see components loading. It also means that clients that don't execute JavaScript will not see the content at all: bots and spiders that don't execute JavaScript, real users who block JavaScript, or users of a screen reader that doesn't execute JavaScript.
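A minimal sketch of this kind of runtime integration, assuming a hypothetical /components/basket endpoint that returns an HTML fragment and a placeholder element with a matching id (both are made up for illustration):

  // Fetch a component's HTML fragment at runtime and inject it into the page.
  function loadComponent(placeholderId, componentUrl) {
    return fetch(componentUrl)
      .then((response) => response.text())
      .then((html) => {
        document.getElementById(placeholderId).innerHTML = html;
      });
  }

  // The main application only needs to know where each component lives.
  loadComponent('basket', '/components/basket');
  loadComponent('recommendations', '/components/recommendations');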

When runtime integration via JavaScript is not an option, it is also possible to integrate components using a middleware layer. This layer fetches the HTML of the different components and composes them into a full page before returning the page to the client, so clients always retrieve all of the HTML at once. An example of such middleware is Edge Side Includes in Varnish. To get more flexibility it is also possible to implement such a server yourself; an open source example is Compoxure.
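For comparison, a hand-rolled composition server could look roughly like the sketch below. It assumes Node with Express installed and a global fetch (Node 18 or later); the fragment URLs and page template are placeholders, and a real implementation such as Compoxure adds caching, error handling and timeouts.

  // Minimal server-side composition: fetch each component's HTML fragment
  // and assemble one page before answering the client.
  const express = require('express');
  const app = express();

  const fragments = {
    header: 'http://header-service/fragment',
    basket: 'http://basket-service/fragment',
  };

  app.get('/', async (req, res) => {
    const parts = {};
    // Fetch all fragments in parallel.
    await Promise.all(
      Object.entries(fragments).map(async ([name, url]) => {
        const response = await fetch(url);
        parts[name] = await response.text();
      })
    );
    res.send('<html><body>' + parts.header + parts.basket + '</body></html>');
  });

  app.listen(3000);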

Once you have your composite frontend up and running, you can start to think about the next step: optimization. Having separate components from different sources means that the client has to retrieve many resources. Since retrieving multiple resources takes longer than retrieving a single resource, you want to combine them. Again, this can be done at development time or at runtime, depending on the integration techniques you chose when decomposing your frontend.

Conclusion

When transitioning an application to a microservices architecture, you will run into issues if you keep the frontend a monolith. The goal is to achieve good vertical decomposition. What goes for the backend services goes for the frontend as well: split along bounded contexts to limit the API surface between components, and use integration techniques that avoid coupling. When you are working on a single big frontend it might be difficult to make this decomposition, but if you want to deliver faster by having multiple teams work on a microservices architecture, you cannot exclude the frontend from decomposition.

Resources

Sam Newman - From Macro to Micro: How Big Should Your Services Be?
Dan North - Microservices: software that fits in your head

Software Development Linkopedia July 2015

From the Editor of Methods & Tools - Mon, 07/27/2015 - 14:50
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about Agile retrospectives, remote teams, Agile testing, Cloud architecture, exploratory testing, software entropy, introverted software developers and Scrum myths. Blog: 7 Best Practices for Facilitating Agile Retrospectives Blog: How Pairing Powers […]

Super fast unit test execution with WallabyJS

Xebia Blog - Mon, 07/27/2015 - 11:24

Our current AngularJS project has been under development for about 2.5 years, so the number of unit tests has increased enormously. We tend to keep our coverage percentage near 100%, which has led to 4,000+ unit tests, including service specs and view specs. You may know that AngularJS, when abused a bit, is not suited for super-large applications, but since we tamed the beast and have an application with more than 16,000 lines of high-performing AngularJS code, we want to stay in control of the total development process without any performance losses.

We are using Karma Runner with Jasmine, which is fine for a small number of specs and for debugging, but running the full test suite takes up to 3 minutes on a 2.8 GHz MacBook Pro.

We test our code continuously, so we came up with a solution: split all the unit tests into several shards. This parallel execution of the unit tests decreased the execution time a lot. We will write about the details of this Karma parallelization on this blog later. Sharding helps a lot when we need to run the full unit test suite, i.e. in the pre-push hook, but during development you want quick feedback cycles about coverage and failing specs (red-green testing).
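The post does not show the sharding setup, so the configuration below is only one way it could be approximated. It assumes the glob package, a SHARD_INDEX/SHARD_COUNT pair of environment variables supplied by whatever launches the shards, and the usual Karma options; none of these names come from the authors' actual configuration.

  // karma.shard.conf.js: run only this shard's slice of the spec files.
  const glob = require('glob');

  module.exports = function (config) {
    const shardIndex = Number(process.env.SHARD_INDEX || 0);
    const shardCount = Number(process.env.SHARD_COUNT || 1);

    // Deterministically assign every spec file to exactly one shard.
    const specs = glob
      .sync('src/**/*.spec.js')
      .filter((file, i) => i % shardCount === shardIndex);

    config.set({
      frameworks: ['jasmine'],
      // Application sources first, then only this shard's specs.
      files: ['src/**/!(*.spec).js'].concat(specs),
      browsers: ['PhantomJS'],
      singleRun: true,
    });
  };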

With such a long unit test cycle, even when running in parallel, many of our developers fdescribe the specs they are working on, so that the feedback is instant. However, this is quite labor intensive, and sometimes an fdescribe is pushed accidentally.

And then... we discovered WallabyJS. It is just an ordinary test runner like Karma; even the configuration file is almost a copy of our karma.conf.js. The difference is in the details. Out of the box it runs the unit test suite in 50 seconds, thanks to its extensive use of Web Workers. Then the fun starts.

Screenshot of Wallaby In action (IntelliJ). Shamelessly grabbed from wallaby.com

I use Wallaby as an IntelliJ IDEA plugin, which adds colored annotations to the left margin of my code. Green squares indicate covered lines/statements, orange indicates partly covered code, and grey means "please write a test for this functionality or I will introduce hard-to-find bugs". Colorblind people see just kale-green squares on every line, since the default colors are not chosen very well, but the colors are adjustable via the Preferences menu.

Clicking on a square pops up a box with a list of the tests that produce the coverage. When a test fails, it also tells me why.

A dialog box showing contextual information (wallaby.com)

Since the implementation and the tests are now instrumented, finding bugs and increasing your coverage goes a lot faster. Besides that, you don't need to hassle with fdescribes and fits to run individual tests during development. Thanks to the instrumentation, Wallaby runs your tests continuously and re-runs only the tests relevant to the parts you are working on. In real time.

5 Reasons why you should test your code

Xebia Blog - Mon, 07/27/2015 - 09:37

It is just like in mathematics class: when I had to prove Thales' theorem, I wrote "Can't you see that B has a right angle?! Q.E.D.", but the teacher still gave me an F.

You want to make things work, right? So you start programming until your feature is implemented. When it is implemented, it works, so you do not need any tests. You want to proceed and make more cool features.

Suddenly feature 1 breaks, because you did something weird in some service that is reused all over your application. OK, let's fix it: keep refreshing the page until everything is stable again. This is the point where you regret that you (or even better, your teammate) did not write tests.

In this article I give you 5 reasons why you should write them.

1. Regression testing

The scenario described in the introduction is a typical example of a regression bug. Something works, but it breaks when you are looking the other way.
If you had tests with 100% code coverage, a red error would have appeared in the console or, even better, a siren would have gone off in the room where you are working.

Although there are some misconceptions about coverage, it at least tells others that there is a fully functional test suite. And it may give you a high grade when an audit company like SIG inspects your software.

100% Coverage feels so good

100% code coverage does not mean that you have tested everything.
It means that the test suite is implemented in such a way that it calls every line of the tested code, but it says nothing about the assertions made during the test run. If you want to measure whether your specs make a fair number of assertions, you have to do mutation testing.

This works as follows.

An automated task runs the test suite once. Then some parts of your code are modified: mainly conditions flipped, for-loops made shorter or longer, etc. The test suite is run a second time. If tests fail after the modifications have been made, an assertion covers that case, which is good.
However, 100% coverage does feel really good if you are an OCD-person.
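A toy illustration of the idea follows; real mutation testing tools automate the mutation and the reporting, so everything below is hand-rolled purely to show the principle.

  // Run a tiny "suite" against the original function and against a mutant
  // with a flipped condition. If the suite still passes for the mutant,
  // the assertions are too weak.
  const original = (age) => age >= 18;  // the code under test
  const mutant   = (age) => age < 18;   // the flipped-condition mutation

  const suite = [
    (isAdult) => isAdult(18) === true,
    (isAdult) => isAdult(17) === false,
  ];

  const suitePasses = (impl) => suite.every((test) => test(impl));

  console.log(suitePasses(original)); // true:  the suite is green
  console.log(suitePasses(mutant));   // false: the mutant is killed, assertions are good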

The better your test coverage and assertion density are, the higher the probability of catching regression bugs. Especially as an application grows, you may encounter a lot of regression bugs during development, and it is good to catch them there.

Suppose that a form shows a funny easter egg when the filled-in birthdate is 06-06-2006, and the line of code responsible for this behaviour is hidden in a complex method. A fellow developer may change this line, not because he is not funny, but because he just does not know it is there. A failing test notifies him immediately that he is removing your easter egg, while without a test you would only find out about the removal two years later.

Still, every application contains bugs you are unaware of. When an end user tells you about a broken page, you may find out that the link he clicked on was generated with some missing information, i.e. users//edit instead of users/24/edit.

When you find a bug, first write a (failing) test that reproduces it, then fix the bug. That particular bug will never come back. You win.
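For the broken users//edit link above, a reproducing spec might look like the sketch below. Jasmine is assumed, and editUserUrl is a made-up helper standing in for whatever actually builds the link.

  // Hypothetical URL builder with the guard that fixes the reported bug.
  function editUserUrl(user) {
    if (!user || user.id == null) {
      throw new Error('Cannot build an edit URL without a user id');
    }
    return 'users/' + user.id + '/edit';
  }

  describe('editUserUrl', () => {
    it('builds the edit link for a persisted user', () => {
      expect(editUserUrl({ id: 24 })).toBe('users/24/edit');
    });

    it('refuses to build a link when the id is missing (the reported bug)', () => {
      // Written as a failing test first; it passes once the guard exists.
      expect(() => editUserUrl({})).toThrow();
    });
  });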

2. Improve the implementation via new insights

"Premature optimization is the root of all evil" is something you hear a lot. This does not mean that you have to implement your solution pragmatically, without code reuse.

Good software craftsmanship is not only about solving a problem effectively; it is also about maintainability, durability, performance and architecture. Tests can help you with this. They force you to slow down and think.

If you start writing your tests and you have trouble with it, this may be an indication that your implementation can be improved. Furthermore, your tests make you think about input and output, corner cases and dependencies. So, do you think you understand all aspects of the super method you wrote that can handle everything? Write tests for this method and better code is guaranteed.

Test Driven Development even helps you optimize your code before you write it, but that is another discussion.

3. It saves time, really

The number one excuse not to write tests is that you do not have time for it, or that your client does not want to pay for it. Writing tests can indeed cost you some time, even if you are using boilerplate-code elimination frameworks like Mox.

However, if I ask you whether you would make other design choices if you had the chance (and time) to start over, you would probably say yes. A total codebase refactoring is a 'no go' because you cannot oversee which parts of your application will fail. If you still accept the refactoring challenge, it will at least give you a lot of headaches and cost you a lot of time, which you could have used for writing the tests. But you had no time for writing tests, right? So your crappy implementation stays.

Dilbert bugfix

A bug can always be introduced, even with well-refactored code. How many times did you say to yourself, after a day of hard work, that you spent 90% of your time finding and fixing a nasty bug? You want to write cool applications, not fix bugs.
When you have tested your code very well, 90% of the bugs you introduce are caught by your tests. Phew, that saved the day. You can focus on writing cool stuff. And tests.

In the beginning, writing tests can take up more than half of your time, but when you get the hang of it, writing tests becomes second nature. It is important that you are writing code for the long term. As an application grows, it really pays off to have tests: it saves you time, and developing becomes more fun because you are not being blocked by hard-to-find bugs.

4. Self-updating documentation

Writing clean, self-documenting code is one of the main things we adhere to. Not only for yourself, especially when you have not seen the code for a while, but also for your fellow developers. We only write comments if a piece of code is particularly hard to understand. Whatever style you prefer, it has to be clear in some way what the code does.

  // Beware! Dragons beyond this point!

Some people like to read the comments, some read the implementation itself, and some read the tests. What I like about the tests, for example when you are using a framework like Jasmine, is that they give a structured overview of all of a method's features. A separate documentation file can be as structured as you want, but the main issue with documentation is that it is never up to date. Developers do not like to write documentation, they forget to update it when a method signature changes, and eventually they stop writing docs.
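A tiny, made-up example of that structured overview: the describe and it names alone read like a feature list, and they cannot go stale without the build turning red.

  // Hypothetical spec whose names double as documentation.
  function discountFor(customerType, amount) {
    if (customerType === 'gold') return Math.max(10, amount >= 100 ? 5 : 0);
    return amount >= 100 ? 5 : 0;
  }

  describe('discount calculator', () => {
    describe('for regular customers', () => {
      it('applies no discount below 100 euro', () => {
        expect(discountFor('regular', 99)).toBe(0);
      });
      it('applies 5% from 100 euro onward', () => {
        expect(discountFor('regular', 100)).toBe(5);
      });
    });

    describe('for gold customers', () => {
      it('always applies at least 10%', () => {
        expect(discountFor('gold', 10)).toBe(10);
      });
    });
  });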

Developers also do not like to write tests, but they at least serve more purposes than docs. If you are using the test suite as documentation, your documentation is always up to date with no extra effort!

5. It is fun

Nowadays there is no strict separation between testers and developers; the developers are the testers. People who write good tests are also the best programmers. Actually, your test is also a program, so if you like programming, you should like writing tests.
The reason writing tests may feel unproductive is that it gives you the idea that you are not producing something new.

Is the build red? Fix it immediately!

However, with a modern software development approach, your tests should be an integrated part of your application. The tests can be executed automatically using build tools like Grunt and Gulp, and they may run in a continuous integration pipeline via Jenkins, for example. If you are really cool, a new deploy to production is done automatically when the tests pass and everything else is OK. With tests you have more confidence that your code is production ready.
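A minimal sketch of wiring the suite into a build tool, assuming Gulp and the karma package are installed; the task name and config path are assumptions.

  // gulpfile.js: run the Karma suite once as part of the build.
  const gulp = require('gulp');
  const { Server } = require('karma');

  gulp.task('test', (done) => {
    new Server(
      { configFile: __dirname + '/karma.conf.js', singleRun: true },
      (exitCode) => done(exitCode ? new Error('Tests failed') : undefined)
    ).start();
  });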

A lot of measurements can be generated as well, like coverage and mutation-testing scores, giving the OCD-oriented developers a big smile when everything is green and the score is 100%.

If the test suite fails, fixing it is the first priority, to keep the codebase in good shape. It takes some discipline, but when you get used to it, you have more fun developing new features and making cool stuff.

SPaMCAST 352 – Gil Broza, The Agile Mind-Set

Software Process and Measurement Cast - Sun, 07/26/2015 - 22:00

Software Process and Measurement Cast 352 features our interview with Gil Broza.  We discussed Gil’s new book The Agile Mind-Set. Do you know what the Agile Mind-Set is or how to get one?  Gil’s new book explains the concept of the Agile Mind-Set and how you can find it in order to deliver more value!

Gil Broza helps organizations, teams and individuals implement high-performance Agile principles and practices that work for them. His coaching and training clients – over 1,300 professionals in 40 companies – have delighted their customers, shipped working software on time, increased their productivity and decimated their software defects. Beyond teaching, Gil helps people overcome limiting habits, fears of change, blind spots and outdated beliefs, and reach higher levels of performance, confidence and accomplishment.

Gil is the author of The Agile Mind-Set and The Human Side of Agile: How to Help Your Team Deliver.

Gil has a M.Sc. in Computational Linguistics and a B.Sc. in Computer Science and Mathematics from the Hebrew University of Jerusalem, Israel. He is a certified NLP Master Practitioner and has studied organizational behavior and development extensively. He has written several practical papers for the Cutter IT Journal, other trade magazines, and for conferences, winning the Best Practical Paper award at XP/Agile Universe 2004. Gil co-produced the Agile Coaching stage for the “Agile 2010” and “Agile 2009” conferences.

Gil lives in Toronto, Canada.

Contact Data:
http://www.3pvantage.com/index.htm
https://leanpub.com/theagilemindset
http://thehumansideofagile.com/
https://twitter.com/gilbroza

Gil was last interviewed on SPaMCAST 210.  We discussed his first book The Human Side of Agile.

 

 

Call to Action!

I have a challenge for the Software Process and Measurement Cast listeners for the next few weeks. I would like you to find one person that you think would like the podcast and introduce them to the cast. This might mean sending them the URL or teaching them how to download podcasts. If you like the podcast and think it is valuable they will be thankful to you for introducing them to the Software Process and Measurement Cast. Thank you in advance!

Re-Read Saturday News

Remember that the Re-Read Saturday of The Mythical Man-Month is in full swing.  This week we tackle the essay titled “Aristocracy, Democracy and System Design”!

The Re-Read Saturday and other great articles can be found on the Software Process and Measurement Blog.

Remember: We just completed the Re-Read Saturday of Eliyahu M. Goldratt and Jeff Cox’s The Goal: A Process of Ongoing Improvement, which began on February 21st. What did you think? Did the re-read cause you to read The Goal for a refresher? Visit the Software Process and Measurement Blog and review the whole re-read.

Note: If you don’t have a copy of the book, buy one. If you use the link below it will support the Software Process and Measurement blog and podcast.

Dead Tree Version or Kindle Version 

Upcoming Events

Software Quality and Test Management
 
September 13 – 18, 2015
San Diego, California

http://qualitymanagementconference.com/

I will be speaking on the impact of cognitive biases on teams!  Let me know if you are attending! If you are still deciding on attending let me know because I have a discount code!

 

More on other great conferences soon!

 

Next SPaMCAST

The next Software Process and Measurement Cast features three columns. The first is our essay on learning styles. Learning styles are an interesting set of constructs that are useful to consider when you are trying to change the world, or just an organization.

We will also include Steve Tendon’s column discussing the TameFlow methodology and his great new book, Hyper-Productive Knowledge Work Performance.

Anchoring the cast will be Gene Hughson returning with an entry from his Form Follows Function column. 

 

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Categories: Process Management
