
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Google Play services 4.4

Android Developers Blog - Wed, 07/02/2014 - 20:01

A new release of Google Play services has now been rolled out to the world, and as usual we have a number of features that can make your apps better than before. This release includes a major enhancement to Maps with the introduction of Street View, as well as new features in Location, Games Services, Mobile Ads, and Wallet API.

Here are the highlights of Google Play services release 4.4:


Google Maps Android API

Starting with a much anticipated announcement for the Google Maps Android API: Introducing Street View. You can now embed Street View imagery into an activity, enabling your users to explore the world through panoramic 360-degree views. You can programmatically control the zoom and orientation (tilt and bearing) of the Street View camera, and animate camera movements over a given duration. One example of what you can do with the API is letting the user navigate forward one step.
We've also added more features to the Indoor Maps feature of the API. You can turn the default floor picker off - useful if you want to build your own. You can also detect when a new building comes into focus, and find the currently-active building and floor. Great if you want to show custom markup for the active level, for example.


Activity Recognition

And while we are on the topic of maps, let’s turn to some news in the Location API. If you have used this API before, you may have already seen its ability to detect whether the device is in a vehicle, on a bicycle, on foot, still, or tilting.

In this release, two new activity detectors have been added: running and walking. This is a great opportunity to make your app even more responsive to your users. And for those of you who have not worked with this capability before, we hardly need to spell out the cool things you can do with it. Just imagine combining this capability with features in Maps, Games Services, and other parts of Location...


Games Services Update

In the 4.3 release we introduced Game Gifts, which allows you to request gifts or wishes. And although there are no external API changes this time, the default request-sending UI has been extended to allow the user to select multiple Game Gifts recipients. For your games this means more collaboration and social engagement between your players.


Mobile Ads

For Mobile Ads, we’ve added new APIs for publishers to display in-app promo ads, which enable users to purchase advertised items directly. We’re offering app developers control over targeting specific user segments with ads, for example offering high-value users an ad for product A, or new users an ad for product B.

With these extensions, users can conveniently purchase in-app items that interest them, advertisers can reach consumers, and your app connects the dots; a win-win-win in other words.


Wallet Fragments

For the Instant Buy API, we’ve now reduced the work involved to place a Buy With Google button in an app. The WalletFragment API introduced in this release makes it extremely easy to integrate Google Wallet Instant Buy with an existing app. Just configure these fragments and add them to your app.

And that’s another release of Google Play services. The updated Google Play services SDK is now available through the Android SDK manager. Coming up in June is Google I/O, no need to say more…


For the release video, please see:
DevBytes: Google Play services 4.4

For details on the APIs, please see:
New Features in Google Play services 4.4



Join the discussion on
+Android Developers


Categories: Programming

Art, made with code, opens at London’s Barbican

Google Code Blog - Wed, 07/02/2014 - 19:30
By Paul Kinlan, Staff Developer Advocate and tinkerer

Good News Everybody! DevArt has officially opened at the Barbican’s Digital Revolution Exhibition, the biggest exploration of digital creativity ever staged in the UK.

(Images - Andrew Meredith)
Technology has long gone hand in hand with art, and with DevArt we’re showcasing the developers who use technology as their canvas and code as their raw material to create innovative, interactive digital art installations. Karsten Schmidt, Zach Lieberman, and the duo Varvara Guljajeva and Mar Canet have been commissioned by Google and the Barbican for Digital Revolution. Alongside these three commissions, a fourth - Cyril Diagne and Beatrice Lartigue - was handpicked as a result of DevArt’s global initiative to discover the interactive artists of tomorrow. You can also see their incredible art online and through our exhibition launch film here:


Play the World, 2014. Zach Lieberman [View on Github]
Using Google Compute Engine, Google Maps Geolocation API and openFrameworks, Zach has been able to find musical notes from hundreds of live radio stations around the world, resulting in a unique geo-orientated piece of music every time a visitor plays the piano at the centre of the piece.


Image by Andrew Meredith

Wishing Wall, 2014, Varvara Guljajeva & Mar Canet [View on Github]
Taking advantage of Google Compute Engine, Web Speech API, Chrome Apps, openFrameworks and node.js, Varvara and Mar are able to capture a whispered wish, and let you watch it transform before your eyes, allowing you to reach out and let it land on your hand.

Image by Andrew Meredith

Co(de) Factory, 2014, Karsten Schmidt [View on Github]
Android, Google Cloud Platform, Google Closure Compiler, WebGL, WebSockets, and YouTube have been combined by Karsten to allow anybody to create art and become an artist. It empowers people by giving them the tools to create, and offers them the chance to have their digital piece fabricated in 3D and showcased in the exhibition.

Image by Andrew Meredith

Les Métamorphoses de Mr. Kalia, 2014, Béatrice Lartigue and Cyril Diagne [View on Github]
Android, Chrome Apps, Google App Engine, node.js and openFrameworks have enabled Béatrice and Cyril to create tracking technology that transforms movement into a visual performance: visitors take on the persona of Mr. Kalia, a larger-than-life animated character that undergoes a series of surreal changes while following their every movement.

Image by Andrew Meredith

DevArt will tour the world with the Digital Revolution Exhibition for up to five years following the Barbican show in London.

Soon we’re also starting our DevArt Young Creators program — an education component of DevArt designed to inspire a new generation of coders — each workshop led by one of the DevArt interactive artists. Developed alongside the UK’s new computing curriculum, the workshops have been designed especially for students aged 9-13 who have never tried coding before. Each workshop will be developed into lesson plans in line with the UK’s new national computing curriculum and distributed to educators by arts and technology organisations.

Paul Kinlan is a Developer Advocate in the UK on the Chrome team, specialising in mobile. He lives in Liverpool and loves trying to progress the city's tech community from places like the DoES Liverpool hack-space.

Posted by Louis Gray, Googler
Categories: Programming

Why does data need to have sex?

Data needs the ability to combine with other data in new ways to reach maximum value. So data needs to have the equivalent of sex.

That's why I used sex in the title of my previous article, Data Doesn't Need To Be Free, But It Does Need To Have Sex. So it wasn't some sort of click-bait title as some have suggested.

Sex is nature's way of bringing different data sets together, that is our genome, and creating something new that has a chance to survive and thrive in changing environments.

Currently data is cloistered behind Walled Gardens and thus has far less value than it could have. How do we coax data from behind these walls? With money. So that's where the bit about "data doesn't need to be free" comes from. How do we make money? Through markets. What do we have as a product to bring to market? Data. What do services need to keep producing data as a product? Money.

So it's a virtuous circle. Services generate data from their relationship with users. That data can be sold for the money services need to make a profit. Profit keeps the service that users like running. A running service  produces even more data to continue the cycle.

Why do we even care about data having a sex?

Historically one lens we can use to look at the world is to see everything in terms of how resources have been exploited over the ages. We can see the entire human diaspora as largely being determined by the search for and exploitation of different resource reservoirs.

We live near the sea for trade and access to fisheries. Early on we lived next to rivers for water, for food, for transportation, and later for power. People move to where there is lumber to harvest, gold to mine, coal to mine, iron to mine, land to grow food, steel to process, and so on. Then we build roads, railroads, canals and ports to connect resource reservoirs to consumers.

In Nova Scotia, where I've been on vacation, a common pattern was for England and France to fight each other over land and resources. In the process they would build forts, import soldiers, build infrastructure, and make it relatively safe to trade. These forts became towns which then became economic hubs. We see these places as large cities now, like Halifax, Nova Scotia, but it's the resources that came first.

When you visit coves along the coast of Nova Scotia they may tell you with interpretive signage, spaced out along a boardwalk, about the boom and bust cycles of different fish stocks as they were discovered, exploited, and eventually fished out.

In the early days in Nova Scotia great fortunes were made on cod. Then when cod was all fished out, other resource reservoirs like sardines, halibut, and lobster were exploited. Atlantic salmon was overfished. Production moved to the Pacific, where salmon was once again overfished. Now a big product is scallops, and what were once trash fish, like redfish, are the next big thing because that's what's left.

During these cycles great fortunes were made. But when a resource runs out people move on and find another. And when that runs out people move on and keep moving on until they find a place to make a living.

Places associated with old used up resources often just fade away. Ghosts of the original economic energy that created them. As a tourist I've noticed what is mined now as a resource is the history of the people and places that were created in the process of exploiting previous resources. We call it tourism.

Data is a resource reservoir like all the other resource reservoirs we've talked about, but data is not being treated like a resource. It's as if forts and boats and fishermen all congregated to catch cod, but then didn't sell the cod on an open market. If that were the case limited wealth would have been generated, but because all these goods went to market as part of a vast value chain, a decent living was made by a great many people.

If we can see data as a resource reservoir, as natural resources run out, we'll be able to switch to unnatural resources to continue the great cycle of resource exploitation.

Will this work? I don't know. It's just a thought that seems worth exploring.

Categories: Architecture

Do You Encourage People to Bring You Problems?

One of the familiar tensions in management is how you encourage or discourage people from bringing you problems. One of my clients had a favorite saying, “Don’t bring me problems. Bring me solutions.”

I could see the problems that saying caused in the organization. He prevented people from bringing him problems until the problems were enormous. He didn’t realize that his belief that he was helping people solve their own problems was the cause of these huge problems.

How could I help?

I’d only been a consultant for a couple of years. I’d been a manager for several years, and a program manager and project manager for several years before that. I could see the system. This senior manager wasn’t really my client. I was consulting to a project manager who reported to him, not to the senior manager himself. His belief system was the root cause of many of the problems.

What could I do?

I tried coaching my project manager about what to say to his boss. That had some effect, but didn’t work well. My client, the project manager, was so dejected going into the conversation that the conversation was dead before it started. I needed to talk to the manager myself.

I thought about this first. I figured I would only get one shot before I was out on my ear. I wasn’t worried about finding more consulting—but I really wanted to help this client. Everyone was suffering.

I asked for a one-on-one with the senior manager. I explained that I wanted to discuss the project, and that the project manager was fine with this meeting. I had 30 minutes.

I knew that Charlie, this senior manager, cared about these things: how fast we could release so we could move to the next project and what the customers would see (customer perception). He thought those two things would affect sales and customer retention.

Charlie had put tremendous pressure on the project to cut corners to release faster. But that would change the customer perception of what people saw and how they would use the product. I wanted to change his mind and provide him other options.

“Hey Charlie, this time still good?”

“Yup, come on in. You’re our whiz-bang consultant, right?”

“Yes, you could call me that. My job is to help people think things through and see alternatives. That way they can solve problems on the next project without me.”

“Well, I like that. You’re kind of expensive.”

“Yes, I am. But I’m very good. That’s why you pay me. So, let’s talk about how I’m helping people solve problems.”

“I help people solve problems. I always tell them, ‘Don’t bring me problems. Bring me solutions.’ It works every time.” He actually laughed when he said this.

I waited until he was done laughing. I didn’t smile.

“You’re not smiling.” He started to look puzzled.

“Well, in my experience, when you say things like that, people don’t bring you small problems. They wait until they have no hope of solving the problem at all. Then, they have such a big problem, no one can solve the problem. Have you seen that?”

He narrowed his eyes.

“Let’s talk about what you want for this project. You want a great release in the next eight weeks, right? You want customers who will be reference accounts, right? I can help you with that.”

Now he looked really suspicious.

“Okay, how are you going to pull off this miracle? John, the project manager was in here the other day, crying about how this project was a disaster.”

“Well, the project is in trouble. John and I have been talking about this. We have some plans. We do need more people. We need you to make some decisions. We have some specific actions only you can take. John has specific actions only he can take.

“Charlie, John needs your support. You need to say things like, ‘I agree that cross-functional teams work. I agree that people need to work on just one thing at a time until they are complete. I agree that support work is separate from project work, and that we won’t ask the teams to do support work until they are done with this project.’ Can you do that? Those are specific things that John needs from you. But even those won’t get the project done in time.”

“Well, what will get the project done in time?” He practically growled at me.

“We need to consider alternatives to the way the project has been working. I’ve suggested alternatives to the teams. They’re afraid of you right now, because they don’t know which solution you will accept.”

“AFRAID? THEY’RE AFRAID OF ME?” He was screaming by this time.

“Charlie, do you realize you’re yelling at me?” I did not tell him to calm down. I knew better than that. I gave him the data.

“Oh, sorry. No. Maybe that’s why people are afraid of me.”

I grinned at him.

“You’re not afraid of me.”

“Not a chance. You and I are too much alike.” I kept smiling. “Would you like to hear some options? I like to use the Rule of Three to generate alternatives. Is it time to bring John in?”

We discussed the options with John. Remember, this is before agile. We discussed timeboxing, short milestones with criteria, inch-pebbles, yellow-sticky scheduling, and decided to go with what is now a design-to-schedule lifecycle for the rest of the project. We also decided to move some people over from support to help with testing for a few weeks.

We didn’t release in eight weeks. It took closer to twelve weeks. But the project was a lot better after that conversation. And, after I helped the project, I gained Charlie as a coaching client, which was tons of fun.

Many managers have rules about their problem solving and how to help or not help their staff. “Don’t bring me a problem. Bring me a solution” is not helpful.

That is the topic of this month’s management myth: Myth 31: I Don’t Have to Make the Difficult Choices.

When you say, “Don’t bring me a problem. Bring me a solution” you say, “I’m not going to make the hard choices. You are.” But you’re the manager. You get paid to make the difficult choices.

Telling people the answer isn’t always right. You might have to coach people. But not making decisions isn’t right either. Exploring options might be the right thing. You have to do what is right for your situation.

Go read Myth 31: I Don’t Have to Make the Difficult Choices.

Categories: Project Management

Loan calculator

Phil Trelford's Array - Wed, 07/02/2014 - 08:17

Yesterday I came across a handy loan payment calculator in C# by Jonathon Wood via Alvin Ashcraft’s Morning Dew links. The implementation appears to be idiomatic C#, using a class with mutable properties, wrapped in a host console application to display the results.

I thought it’d be fun to spend a few moments re-implementing it in F# so it can be executed in F# interactive as a script or a console application.

Rather than use a class, I’ve plumped for a record type that captures all the required fields:

/// Loan record
type Loan = {
   /// The total purchase price of the item being paid for.
   PurchasePrice : decimal
   /// The total down payment towards the item being purchased.
   DownPayment : decimal
   /// The annual interest rate to be charged on the loan
   InterestRate : double
   /// The term of the loan in months. This is the number of months
   /// that payments will be made.
   TermMonths : int
   }

 

And for the calculation simply a function:

/// Calculates monthly payment amount
let calculateMonthlyPayment (loan:Loan) =
   let monthsPerYear = 12
   let rate = (loan.InterestRate / double monthsPerYear) / 100.0
   let factor = rate + (rate / (Math.Pow(rate + 1., double loan.TermMonths) - 1.))
   let amount = loan.PurchasePrice - loan.DownPayment
   let payment = amount * decimal factor
   Math.Round(payment,2)
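
For reference, the factor here is just the standard amortization formula rearranged: with monthly rate r and term n months, r + r / ((1 + r)^n - 1) = r(1 + r)^n / ((1 + r)^n - 1), so the payment is the loan amount multiplied by r(1 + r)^n / ((1 + r)^n - 1).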

 

We can test the function immediately in F# interactive

let loan = {
   PurchasePrice = 50000M
   DownPayment = 0M
   InterestRate = 6.0
   TermMonths = 5 * 12
   }

calculateMonthlyPayment loan

 

Then a test run (which produces the same results as the original code):

let displayLoanInformation (loan:Loan) =
   printfn "Purchase Price: %M" loan.PurchasePrice
   printfn "Down Payment: %M" loan.DownPayment
   printfn "Loan Amount: %M" (loan.PurchasePrice - loan.DownPayment)
   printfn "Annual Interest Rate: %f%%" loan.InterestRate
   printfn "Term: %d months" loan.TermMonths
   printfn "Monthly Payment: %f" (calculateMonthlyPayment loan)
   printfn ""

for i in 0M .. 1000M .. 10000M do
   let loan = { loan with DownPayment = i }
   displayLoanInformation loan

 

Another option is to simply skip the record and use arguments:

/// Calculates monthly payment amount
let calculateMonthlyPayment(purchasePrice,downPayment,interestRate,months) =
   let monthsPerYear = 12
   let rate = (interestRate / double monthsPerYear) / 100.0
   let factor = rate + (rate / (Math.Pow(rate + 1.0, double months) - 1.0))
   let amount = purchasePrice - downPayment
   let payment = amount * decimal factor
   Math.Round(payment, 2)
Categories: Programming

R/plyr: ddply – Renaming the grouping/generated column when grouping by date

Mark Needham - Wed, 07/02/2014 - 07:30

On Nicole’s recommendation I’ve been having a look at R’s plyr package to see if I could simplify my meetup analysis and I started by translating my code that grouped meetup join dates by day of the week.

To refresh, the code without plyr looked like this:

library(Rneo4j)
timestampToDate <- function(x) as.POSIXct(x / 1000, origin="1970-01-01")
 
query = "MATCH (:Person)-[:HAS_MEETUP_PROFILE]->()-[:HAS_MEMBERSHIP]->(membership)-[:OF_GROUP]->(g:Group {name: \"Neo4j - London User Group\"})
         RETURN membership.joined AS joinDate"
meetupMembers = cypher(graph, query)
meetupMembers$joined <- timestampToDate(meetupMembers$joinDate)
 
dd = aggregate(meetupMembers$joined, by=list(format(meetupMembers$joined, "%A")), function(x) length(x))
colnames(dd) = c("dayOfWeek", "count")

which returns the following:

> dd
  dayOfWeek count
1    Friday   135
2    Monday   287
3  Saturday    80
4    Sunday   102
5  Thursday   187
6   Tuesday   286
7 Wednesday   211

We need to use plyr’s ddply function which takes a data frame and transforms it into another one.

To refresh, this is what the initial data frame looks like:

> meetupMembers[1:10,]
       joinDate              joined
1  1.376572e+12 2013-08-15 14:13:40
2  1.379491e+12 2013-09-18 08:55:11
3  1.349454e+12 2012-10-05 17:28:04
4  1.383127e+12 2013-10-30 09:59:03
5  1.372239e+12 2013-06-26 10:27:40
6  1.330295e+12 2012-02-26 22:27:00
7  1.379676e+12 2013-09-20 12:22:39
8  1.398462e+12 2014-04-25 22:41:19
9  1.331734e+12 2012-03-14 14:11:43
10 1.396874e+12 2014-04-07 13:32:26

Most of the examples of using ddply show how to group by a specific ‘column’ e.g. joined but I want to group by part of the value in that column and eventually came across an example which showed how to do it:

> ddply(meetupMembers, .(format(joined, "%A")), function(x) {
    count <- length(x$joined)
    data.frame(count = count)
  })
  format(joined, "%A") count
1               Friday   135
2               Monday   287
3             Saturday    80
4               Sunday   102
5             Thursday   187
6              Tuesday   286
7            Wednesday   211

Unfortunately the generated column heading for the group by key isn’t very readable and it took me way longer than it should have to work out how to name it as I wanted! This is how you do it:

> ddply(meetupMembers, .(dayOfWeek=format(joined, "%A")), function(x) {
    count <- length(x$joined)
    data.frame(count = count)
  })
  dayOfWeek count
1    Friday   135
2    Monday   287
3  Saturday    80
4    Sunday   102
5  Thursday   187
6   Tuesday   286
7 Wednesday   211

If we want to sort that in descending order by ‘count’ we can wrap that ddply in another one:

> ddply(ddply(meetupMembers, .(dayOfWeek=format(joined, "%A")), function(x) {
    count <- length(x$joined)
    data.frame(count = count)
  }), .(count = count* -1))
  dayOfWeek count
1    Monday   287
2   Tuesday   286
3 Wednesday   211
4  Thursday   187
5    Friday   135
6    Sunday   102
7  Saturday    80

From reading a bit about ddply I gather that it's slower than some other approaches, e.g. data.table, but I'm not dealing with much data so it's not an issue yet.

Once I got the hang of how it worked ddply was quite nice to work with so I think I’ll have a go at translating some of my other code to use it now.

Categories: Programming

Sprint Planning and Non-Story Related Tasks

Burn down chart

I have been asked more than once about what to do with tasks that occur during a sprint that are not directly related to a committed story.  You need to first determine 1) whether the team commits to the task, and 2) whether it is generally helpful for the team to account for the effort and burn it down.  Tasks can be categorized to help determine whether they affect capacity or need to be planned and managed at the team level.  Tasks that the team commits to performing need to be managed as part of the team’s capacity, while administrative tasks reduce capacity.

Administrative tasks.  Administrative tasks include vacations (planned), corporate meetings, meetings with human resources managers, non-project related training and others.  Classify any tasks that are not related to delivering project value as administrative tasks. One attribute of these types of tasks is that team members do not commit to them; they are levied by the organization. The effort planned for these tasks should be deducted from the capacity of the team.  For example, in a five-person team with 200 hours to spend on a project during a sprint (capacity), if one person was taking 20 hours of vacation the team’s capacity would be 180 hours.  If in addition to the vacation all five had to attend a two-hour department staff meeting (even an important staff meeting), the team’s capacity would be reduced to 170 hours.  Administrative tasks can add up; deducting them from capacity makes the impact of these tasks obvious to everyone involved with the team.  Note: in organizations that have a very high administrative burden I sometimes draw a line on the burn down chart that represents capacity before administrative tasks are removed.

Project-related non-story tasks. Project-related non-story tasks are required to deliver the project value.  This category of tasks includes backlog grooming, spikes and retrospectives.  There is a school of thought that the effort for these tasks should be deducted from the capacity.  Deducting the effort from capacity takes away the team’s impetus to manage the effort and the tasks, and takes away some of the team’s ability to self-organize and self-manage. The team should plan and commit to these tasks, therefore they are added to the backlog and burned down. This puts the onus on the team to complete the tasks and manage the time needed to complete them. For example, if our team with 170 hours of capacity planned to do a 10-hour spike and have three people perform backlog grooming for an hour each (a total of 13 hours for both), I would expect to see cards for these tasks in the backlog, and as they are completed the 13 hours would be burned down from the capacity.
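
To make the arithmetic above concrete, here is a small illustrative sketch in R, using only the numbers from the example (the split of hours is the example's, not a prescription):

capacity_raw <- 200                          # five-person team, one sprint
admin_hours  <- 20 + (2 * 5)                 # planned vacation plus a two-hour staff meeting for five people
capacity     <- capacity_raw - admin_hours   # 170 hours of sprint capacity
non_story    <- 10 + 3                       # spike plus backlog grooming, committed by the team and burned down
capacity - non_story                         # 157 hours left for story tasks once those complete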

Tasks that are under the control of the team need to be planned and burned against their capacity.  The acts of planning and accounting for the time give the team the ability to plan and control the work they commit to completing.  When tasks are planned for the team that they can’t control, deducting them from the overall capacity helps keep the team from over-committing to the work that must be delivered.


Categories: Process Management

GTAC is Almost Here!

Google Testing Blog - Tue, 07/01/2014 - 20:28
by The GTAC Committee

GTAC is just around the corner, and we’re all very busy and excited. I know we say this every year, but this is going to be the best GTAC ever! We have updated the GTAC site with important details:


If you are on the attendance list, we’ll see you on April 23rd. If not, check out the Live Stream page where you can watch the conference live and can get involved in Q&A after each talk. Perhaps your team can gather in a conference room and attend remotely.

Categories: Testing & QA

GTAC 2013 Wrap-up

Google Testing Blog - Tue, 07/01/2014 - 20:27
by The GTAC Committee

The Google Test Automation Conference (GTAC) was held last week in NYC on April 23rd & 24th. The theme for this year's conference was focused on Mobile and Media. We were fortunate to have a cross section of attendees and presenters from industry and academia. This year’s talks focused on trends we are seeing in industry combined with compelling talks on tools and infrastructure that can have a direct impact on our products. We believe we achieved a conference that was focused for engineers by engineers. GTAC 2013 demonstrated that there is a strong trend toward the emergence of test engineering as a computer science discipline across companies and academia alike.

All of the slides, video recordings, and photos are now available on the GTAC site. Thank you to all the speakers and attendees who made this event spectacular. We are already looking forward to the next GTAC. If you have suggestions for next year’s location or theme, please comment on this post. To receive GTAC updates, subscribe to the Google Testing Blog.

Here are some responses to GTAC 2013:

“My first GTAC, and one of the best conferences of any kind I've ever been to. The talks were consistently great and the chance to interact with so many experts from all over the map was priceless.” - Gareth Bowles, Netflix

“Adding my own thanks as a speaker (and consumer of the material, I learned a lot from the other speakers) -- this was amazingly well run, and had facilities that I've seen many larger conferences not provide. I got everything I wanted from attending and more!” - James Waldrop, Twitter

“This was a wonderful conference. I learned so much in two days and met some great people. Can't wait to get back to Denver and use all this newly acquired knowledge!” - Crystal Preston-Watson, Ping Identity

“GTAC is hands down the smoothest conference/event I've attended. Well done to Google and all involved.” - Alister Scott, ThoughtWorks

“Thanks and compliments for an amazingly brain activity spurring event. I returned very inspired. First day back at work and the first thing I am doing is looking into improving our build automation and speed (1 min is too long. We are not building that much, groovy is dynamic).” - Irina Muchnik, Zynx Health

Categories: Testing & QA

Abuse of Management Power: Women’s Access to Contraceptives

You might think this is a political post. It is not. This is about management power and women’s health at first. Then, it’s about employee health.

Yesterday, the US Supreme Court decided that a company could withhold contraceptive care from women, based on the company’s owner’s religious beliefs. Hobby Lobby is a privately held corporation.

Women take contraception pills for many reasons. If you have endometriosis, you take them so you can have children in the future. You save your fertility by not bleeding out every month. (There’s more to it than that. That’s the short version.)

If you are subject to ovarian cysts, birth control pills control them, too. If you are subject to the monthly “crazies” and you want to have a little control over your hormones, the pill can do wonders for you.

It’s not about sex. It’s not about pregnancy. It’s about health. It’s about power over your own body.

And, don’t get me started on the myriad reasons for having a D&C. As someone who had a blighted ovum, and had to have a D&C at 13 weeks (yes, 13 weeks), I can tell you that I don’t know anyone who goes in for an abortion who is happy about it.

It was the saddest day of my life.

I had great health care and a supportive spouse. I had grief counseling. I eventually had another child. Because, you see, a blighted ovum is not really a miscarriage. It felt like one to me. But it wasn’t really. Just ask your healthcare provider.

Maybe some women use abortion or the morning-after pill as primary contraception. It’s possible. You don’t have to like other people’s choices. That should be their choice. If you make good contraception free, women don’t have to use abortion or the morning-after pill as a primary contraception choice.

When other people remove a woman’s right to choose how she gets health care for her body, it’s the first step down an evil road. This is not about religious freedom. Yes, it’s couched in those terms now. But this is about management power.

It’s the first step towards management deciding that they can make women a subservient class and what they can do to that subservient class. Right now, that class is women and contraception. What will the next class be?

Will management decide everyone must get genetic counseling before you have a baby? Will they force you to abort a not-perfect baby because they don’t want to pay for the cost of a preemie? Or the cost of a Down Syndrome baby? What about if you have an autistic child?

Men, don’t think you’re immune from this either. What if you indulge in high-risk behavior, such as helicopter skiing? Or, if you gain too much weight? What if you need knee replacements or hip replacements?

What if you have chronic diseases? What happens if you get cancer?

What about when people get old? Will we have euthanasia?

We have health care, including contraception, as the law of the United States. I cannot believe that a non-religious company, i.e., not a church, is being allowed to flout that law. This is about management power. This is not about religion.

If they can say, “I don’t wanna” to this, what can they say, “I don’t wanna” to next?

This is the abuse of management power.

This is the first step down a very slippery slope.

Categories: Project Management

New Entreprogrammers Episodes: 18 and 19

Making the Complex Simple - John Sonmez - Tue, 07/01/2014 - 18:37

We have two new episodes this week, since I had to call an emergency meeting last week about an important life decision. Check them out here: http://entreprogrammers.com/ I can’t believe we are already at episode 19.

The post New Entreprogrammers Episodes: 18 and 19 appeared first on Simple Programmer.

Categories: Programming

Three Ways to Run (Your Business)

NOOP.NL - Jurgen Appelo - Tue, 07/01/2014 - 17:37
Three Ways to Run

I’ve learned that there are three approaches to running (your business).

The first is to create goals for yourself, maybe with a training (change) program. You start running and try to run 5 km. When the pain doesn’t kill you, you could set yourself a target for next time: 6 km. And if you survive that ordeal then maybe next time you can run 7 km. This goes on and on until either you run a marathon, or you never run again because of the shin splints, wrecked knees, and the terrible pain in your back.

The post Three Ways to Run (Your Business) appeared first on NOOP.NL.

Categories: Project Management

Economics of Iterative Software Development

Herding Cats - Glen Alleman - Tue, 07/01/2014 - 15:41

Software development is microeconomics. Microeconomics is about making decisions - choices - based on knowing something about the cost, schedule, and technical impacts of that decision. In the microeconomics paradigm, this information is part of the normal business process.

This is why conjecture that you can make decisions in the absence of estimating the impacts of those decisions ignores the principles of business. The same goes for the notion that when numbers are flying around an organization they sow the seeds of dysfunction: we need to stop and think about how business actually works. Money is used to produce value, which is then exchanged for more money. No business will survive for long in the absence of knowing the numbers contained in the balance sheet and general ledger.

The book Economics of Iterative Development should be mandatory reading for anyone thinking about making improvements in what they see as dysfunctions in their work environment. No need to run off and start inventing new untested ideas; they're right here for the using. With this knowledge comes an understanding of why estimates are needed to make decisions. In the microeconomics paradigm, making a choice is about opportunity cost: what will it cost me NOT to do something? It is about the set of choices that can be acted on given their economic behaviour, the value produced from the invested cost, the opportunities created from the cost of development, and other trade space discussions.

To make those decisions with any level of confidence, information is needed. This information is almost always about the future - return on investment, opportunity, risk reduction strategies. That information is almost always probabilistically driven by an underlying statistical process. This is the core motivation for learning to estimate - to make decisions about the future that are most advantageous for the invested cost.

That's the purpose of estimates, to support business decisions.

This decision-making process is the basis of governance, which is the structure, oversight, and management process that ensures delivery of the needed benefits of IT in a controlled way to enhance the long-term sustainable success of the enterprise.

Related articles:
  • Why Johnny Can't Estimate or the Dystopia of Poor Estimating Practices
  • The Calculus of Writing Software for Money, Part II
  • How To Estimate, If You Really Want To
  • How to "Lie" with Statistics
  • How to Deal With Complexity In Software Projects?
  • Why is Statistical Thinking Hard?
Categories: Project Management

Adding Decorated User Roles to Your User Stories

Mike Cohn's Blog - Tue, 07/01/2014 - 15:00

When writing user stories there are, of course, two parts: user and story. The user is embedded right at the front of stories when using the most common form of writing user stories:

As a <user role> I <want/can/am required to> <some goal> so that <some reason>.

Many of the user roles in a system can be considered first class users. These are the roles that are significant to the success of a system and will occur over and over in the user stories that make up a system’s product backlog.

These roles can often be displayed in a hierarchy for convenience. An example for a website that offers benefits to registered members is as follows:

[User role hierarchy diagram omitted; the roles discussed below include site visitor, registered member, premium member, trial member and former member]

In this example, some stories will be written with “As a site visitor, …” This would be appropriate for something anyone visiting the site can do, such as view a license agreement.

Other stories could be written specifically for registered members. For example, a website might have, “As a registered member, I can update my payment information.” This is appropriate to write for a registered member because it’s not appropriate for other roles such as visitors, former members, or trial members.

Stories can also be written for premium members (who are the only ones who can perhaps view certain content) or trial members (who are occasionally prompted to join).

But, beyond the first class user roles shown in a hierarchy like this, there are also other occasional users—that is, users who may be important but who aren’t really an entirely separate type of user.

First-time users can be considered an example. A first-time user is usually not really a first-class role, but first-time users can be important to the success of a product, such as the website in this example.

Usually, the best way to handle this type of user role is by adding an adjective in front of the user role. That could give us roles such as:

  • First-time visitor
  • First-time member

The former would refer to someone on their absolute first visit to the website. The latter could refer to someone who is on the site for the first time as a subscribed member.

This role could have stories such as, “As a first-time visitor, additional prompts appear on the site to direct me toward the most common starting points so that I learn how to navigate the system.”

A similar example could be “forgetful member.” Forgetful members are very unlikely to be a primary role you identify as vital to the success of your product. We should, however, be aware of them.

A forgetful member may have stories such as “As a forgetful member, I can request a password reminder so that I can log in without having to first call tech support.”

I refer to these as decorated user roles, borrowing the term from mathematics, where decorated symbols such as x-bar (x̄) and x-hat (x̂) are common.

Decorated users add the ability to further refine user roles without adding complexity to the user role model itself through the inclusion of additional first-class roles. As a fan of user stories, I hope you find the idea as helpful as I do.

In the comments, let me know what you think and please share some examples of decorated user roles you’ve used or encountered.


How combined Lean- and Agile practices will change the world as we know it

Xebia Blog - Tue, 07/01/2014 - 08:50

You might have attended our presentation about eXtreme Manufacturing this month, or Nalden's keynote last week at XebiCon 2014. There are a few epic takeaways and additions I would like to share with you in this blog post.

Epic TakeAway #1: The Learn, Unlearn and Relearn Cycle Like Nalden expressed in his inspiring keynote, one of the major things that makes him successful is being able to Learn, Unlearn and Relearn, again and again. In my opinion, this will be the key ability for every successful company in the near future. In fact, this is how nature evolves: in the end, only the species that are able to adapt to changing circumstances will survive and evolve. This mechanism is why, for example, most startups fail, but those that do survive can be extremely disruptive for non-agile organizations. The best example of this is of course WhatsApp, which beat up the telco industry by almost destroying its whole business model in only a few months. Learn more about disruptive innovation from one of my personal heroes, Harvard Professor Clayton Christensen.

Epic TakeAway #2: Unlearning Waterfall, Relearning Lean & Agile Globally, Waterfall is still the dominant method in companies and universities. Waterfall has its origins more than 40 years ago. Times have changed. A lot. A new, successful and disruptive product can be there in only a matter of days instead of (many) years. Finally, things are changing. For example, the US Department of Defense has recently embraced Lean and Agile as mandatory practices, especially Scrum. Schools and universities are also more and more adopting the Agile way of working. More on that later in this blog post.

Epic TakeAway #3: Combined Lean- and Agile practices = XM Lean practices arose in Japan in the 1980s, mainly in the manufacturing industry, Toyota being the frontrunner here. Agile practices like Scrum were first introduced in the 1990s by Ken Schwaber and Jeff Sutherland; these practices were mainly applied in the IT industry. Until recently, the manufacturing and IT worlds didn't really join forces to combine Lean and Agile practices. The WikiSpeed initiative of Joe Justice proved that combining these practices results in a hyper-productive environment, where a 100-mile-per-gallon road-legal sports car could be developed in less than 3 months. Out of this success eXtreme Manufacturing (XM) arose. Finally, a powerful combination of best practices from the manufacturing and IT worlds came together.

Epic TakeAway #4: Agile Mindset & Education Like Sir Ken Robinson and Dan Pink already described in their famous TED talks, the way most people are educated and rewarded is not suitable anymore for modern times and even conflicts with the way we are born. We learn by "failing", not by preventing it. Failing in its essence should stimulate creativity to do things better next time, not be punished. In the long run, failing (read: learning!) has more added value than short-term success, for example by chasing milestones blindly. EduScrum in the Netherlands stimulates schools and universities to apply Scrum in their daily classes in order to stimulate creativity, happiness, self-reliance and talent. The results at the schools joining this initiative are spectacular: happy students, fewer dropouts and significantly higher grades. For a prestigious project at the Delft University, Forze, the development of a hydrogen race car, the students are currently being trained and coached to apply Agile and Lean practices. These results are also more than promising: the Forze team is happier, more productive and more able to learn faster and better from setbacks. Actually, they are taking the first steps towards being anti-fragile. Due to the intercession of the Forze team members themselves, the current support by agile (Xebia) coaches is now planned to be extended to the flagship of the Delft University: the NUON solar team.

The Final Epic TakeAway In my opinion, we have reached a tipping point in the way goals should be achieved. Organizations are massively abandoning Waterfall and embracing Agile practices, like Scrum. Adding Lean practices, like Joe Justice did in his WikiSpeed project, makes Agile and Lean extremely powerful. Yes, this will even make this world a much better place. We cannot prevent natural disasters with this, but we can be anti-fragile. We cannot prevent every epidemic, but we can respond in an XM fashion by developing a vaccine in only days instead of years. This brings me finally to the missing statement of the current Agile Manifesto: We should Unlearn and Relearn before we Judge. Dare to Dream like a little kid again. Unlearn your skepticism. Companies like Boeing, Lockheed Martin and John Deere already did. Adopting XM sped up their velocity in some cases by more than 7 times.

What is Capacity in software development? - The #NoEstimates journey

Software Development Today - Vasco Duarte - Tue, 07/01/2014 - 04:00

I hear this a lot in the #NoEstimates discussion: you must estimate to know what you can deliver for a certain price, time or effort.

Actually, you don’t. There’s a different way to look at your organization and your project. Organizations and projects have an inherent capacity, that capacity is a result of many different variables - not all can be predicted. Although you can add more people to a team, you don’t actually know what the impact of that addition will be until you have some data. Estimating the impact is not going to help you, if we are to believe the track record of the software industry.

So, for me the recipe to avoid estimates is very simple: Just do it, measure it and react. Inspect and adapt - not a very new idea, but still not applied enough.

Let’s make it practical. How many of these stories or features is my team or project going to deliver in the next month? Before you can answer that question, you must find out how many stories or features your team or project has delivered in the past.

Look at this example.

How many stories is this team going to deliver in the next 10 sprints? The answer to this question is the concept of capacity (aka Process Capability). Every team, project or organization has an inherent capacity. Your job is to learn what that capacity is and limit the work to capacity! (Credit to Mary Poppendieck (PDF, slide 15) for this quote).

Why is limiting work to capacity important? That’s a topic for another post, but suffice it to say that adding more work than the available capacity, causes many stressful moments and sleepless nights; while having less work than capacity might get you and a few more people fired.

My advice is this: learn what the capacity of your project or team is. Only then will you be able to deliver, reliably and with quality, the software you are expected to deliver.

How to determine capacity?

Determining the capacity or capability of a team, organization or project is relatively simple. Here's how:

  • 1- Collect the data you have already:
    • If using timeboxes, collect the stories or features delivered(*) in each timebox
    • If using Kanban/flow, collect the stories or features delivered(*) in each week or period of 2 weeks depending on the length of the release/project
  • 2- Plot a graph with the number of stories delivered for the past N iterations, to determine if your System of Development (slideshare) is stable
  • 3- Determine the process capability by calculating the upper (average + 1 sigma) and lower (average - 1 sigma) limits of variability (see the sketch just after this list)
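
As an illustration of step 3, here is a minimal R sketch. The delivery history below is hypothetical, made up purely to show the calculation; substitute your own data:

delivered <- c(8, 11, 9, 12, 7, 10, 13, 9, 10, 11)   # stories delivered in the last 10 timeboxes (hypothetical)
avg   <- mean(delivered)
sigma <- sd(delivered)
upper <- avg + sigma   # upper capacity limit
lower <- avg - sigma   # lower capacity limit
round(c(lower = lower, average = avg, upper = upper), 1)

Work planned for a future timebox can then be sanity-checked against these limits.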

At this point you know what your team, organization or process is likely to deliver in the future. However, the capacity can change over time. This means you should regularly review the data you have and determine (see slideshare above) if you should update the capacity limits as in step 3 above.

(*): by "delivered" I mean something similar to what Scrum calls "Done". Something that is ready to go into production, even if the actual production release is done later. In my language delivered means: it has been tested and accepted in a production-like environment.

Note for the statisticians in the audience: Yes, I know that I am assuming a normal distribution of delivered items per unit of time. And yes, I know that the Weibull distribution is a more likely candidate. That's ok, this is an approximation that has value, i.e. gives us enough information to make decisions.

You can receive exclusive content (not available on the blog) on the topic of #NoEstimates, just subscribe to the #NoEstimates mailing list below. As a bonus you will get my #NoEstimates whitepaper, where I review the background and reasons for using #NoEstimates.

Picture credit: John Hammink, follow him on twitter

Measures of Central Tendency

Clouds distributed around the mean, err, horizon.

Measures of central tendency attempt to define the center of a set of data. Measures of central tendency are important for interpreting benchmarks and when single rates are used in contracts.  There are many ways to measure central tendency; however, the three most popular are the mean, median and mode.  Each of the measures of central tendency provides interesting information, and each is more or less useful in different circumstances.  Let’s explore the following simple data set.

[Sample data set table omitted]

Mean

The mean is the most popular and well known measure of central tendency.  The mean is calculated by summing the values in the sample (or population) and dividing by the total number of observations.  In the example the mean is calculated as 231.43 / 13, or 17.80.  The mean is most useful when the data points are distributed evenly around the mean or are normally distributed. The mean is highly influenced by outliers.

Advantages include:

  • Most common measure of central tendency used and therefore most quickly understood.
  • The answer is unique.

Disadvantages include:

  • Influenced by extremes (skewed data and outliers).

Median

Median is the middle observation in a set of data.  Median is affected less by outliers or skewed data.  In order to find the median (by hand) you need to arrange the data in numerical order.  Using the same data set:

[Same data set, sorted in numerical order, omitted]

The median is 18.64 (six observations above and six observations below).  Since the median is positional, it is less affected by extreme values. Therefore the median is a better reflection of central tendency for data that has outliers or is skewed.  Most project metrics include outliers and tend to be skewed; therefore the median is very valuable when evaluating software measures.

Advantages

  • Extreme values (outliers) do not affect the median as strongly as they do the mean.
  • The answer is unique.

Disadvantages

  • Not as popular as the mean.

Mode

The mode is the most frequent observation in the set of data.  The mode may not be the best measure of central tendency and may not be unique. Worse, the set may not have a mode at all.  The mode is most useful when the data is non-numeric or when you are attempting to find the most popular item in a data set. Determine the mode by counting the occurrences of each unique observation. In our example data set:

[Same data set with a count of each unique observation, omitted]

The mode in this data set is 26.43; it has two observations.  

Advantages:

  • Extreme values (outliers) do not affect the mode.

Disadvantages:

  • May be more than one answer.
  • If every value is unique the mode is useless (every value is the mode).
  • May be difficult to interpret.

Based on our test data set the three measures of central tendency return the following values:

  • Mean: 17.8
  • Median: 18.64
  • Mode: 26.43
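
As a minimal R sketch, all three measures above can be computed as follows. The vector here is hypothetical (the post's original 13 observations are not reproduced), and R has no built-in statistical mode function, so a small helper is used:

x <- c(10, 12, 14, 15, 17, 18, 19, 20, 22, 24, 26, 26, 28)   # hypothetical sample
mean(x)      # arithmetic mean
median(x)    # middle observation of the sorted data
statMode <- function(v) {
  counts <- table(v)                                 # count each unique observation
  as.numeric(names(counts)[counts == max(counts)])   # value(s) with the highest count
}
statMode(x)   # most frequent observation(s)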

Each statistic returns a different value.  The mean and median provide relatively similar values, so it would be important to understand whether the data set represents a sample or the whole population.  If the data is from a sample or could become more skewed by extreme values, the median is probably a better representation of the central tendency in this case.  If the population is evenly distributed about the mean (or is normally distributed), the mean is a better representation of central tendency. In the sample data set the mode provides little explanatory power. Understanding which measure of central tendency is being used allows change agents to better target changes, and if your contract uses metrics to determine performance, which measure of central tendency is used can have an impact.  Changing or arguing over which to use smacks of poor contracting or gaming the measure.


Categories: Process Management

R: Aggregate by different functions and join results into one data frame

Mark Needham - Mon, 06/30/2014 - 23:47

In continuing my analysis of the London Neo4j meetup group using R, I wanted to see which days of the week we organise meetups and how many people RSVP affirmatively by day.

I started out with this query which returns each event and the number of ‘yes’ RSVPs:

library(Rneo4j)
timestampToDate <- function(x) as.POSIXct(x / 1000, origin="1970-01-01")
 
query = "MATCH (g:Group {name: \"Neo4j - London User Group\"})-[:HOSTED_EVENT]->(event)<-[:TO]-({response: 'yes'})<-[:RSVPD]-()
         WHERE (event.time + event.utc_offset) < timestamp()
         RETURN event.time + event.utc_offset AS eventTime, COUNT(*) AS rsvps"
events = cypher(graph, query)
events$datetime <- timestampToDate(events$eventTime)

> events
      eventTime rsvps            datetime
1  1.314815e+12     3 2011-08-31 19:30:00
2  1.337798e+12    13 2012-05-23 19:30:00
3  1.383070e+12    29 2013-10-29 18:00:00
4  1.362474e+12     5 2013-03-05 09:00:00
5  1.369852e+12    66 2013-05-29 19:30:00
6  1.385572e+12    67 2013-11-27 17:00:00
7  1.392142e+12    35 2014-02-11 18:00:00
8  1.364321e+12    23 2013-03-26 18:00:00
9  1.372183e+12    22 2013-06-25 19:00:00
10 1.401300e+12    60 2014-05-28 19:00:00

I wanted to get a data frame which had these columns:

Day of Week | RSVPs | Number of Events

Getting the number of events for a given day was quite easy as I could use the groupBy function I wrote last time:

groupBy = function(dates, format) {
  dd = aggregate(dates, by=list(format(dates, format)), function(x) length(x))
  colnames(dd) = c("key", "count")
  dd
}
 
> groupBy(events$datetime, "%A")
        key count
1  Thursday     9
2   Tuesday    24
3 Wednesday    35

The next step is to get the sum of RSVPs by day, which we can do with the following code:

dd = aggregate(events$rsvps, by=list(format(events$datetime, "%A")), FUN=sum)
colnames(dd) = c("key", "count")

The difference between this and our previous use of the aggregate function is that we’re passing in the number of RSVPs for each event and then grouping by the day and summing up the values for each day rather than counting how many occurrences there are.

If we evaluate ‘dd’ we get the following:

> dd
        key count
1  Thursday   194
2   Tuesday   740
3 Wednesday  1467

We now have two data frames with a very similar shape, and it turns out there’s a function called merge which makes it very easy to combine them into a single one:

x = merge(groupBy(events$datetime, "%A"), dd, by = "key")
colnames(x) = c("day", "events", "rsvps")
> x
        day events rsvps
1  Thursday      9   194
2   Tuesday     24   740
3 Wednesday     35  1467

We could now choose to order our new data frame by number of events descending:

> x[order(-x$events),]
        day events rsvps
3 Wednesday     35  1467
2   Tuesday     24   740
1  Thursday      9   194

We might also add an extra column to calculate the average number of RSVPs per event for each day:

> x$rsvpsPerEvent = x$rsvps / x$events
> x
        day events rsvps rsvpsPerEvent
1  Thursday      9   194      21.55556
2   Tuesday     24   740      30.83333
3 Wednesday     35  1467      41.91429
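
As a side note, the same table can probably be built in a single pass using aggregate's formula interface, summing both the RSVP counts and a column of ones - a sketch assuming the 'events' data frame from above:

df = data.frame(day = format(events$datetime, "%A"), rsvps = events$rsvps, events = 1)
x = aggregate(cbind(rsvps, events) ~ day, data = df, FUN = sum)
x$rsvpsPerEvent = x$rsvps / x$events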

I’m still getting the hang of it, but already it seems like the combination of R and Neo4j lets us quickly get insights into our data, and I’ve barely scratched the surface!

Categories: Programming

Everything's a Remix

Herding Cats - Glen Alleman - Mon, 06/30/2014 - 23:12

In the estimating discussion there is a popular notion that we can't possibly estimate something we haven't done before. So we have to explore - using the customer's money, by the way - to discover what we don't know.

Then, when we hear that we've never done this before and that estimating is a waste of time, think about the title of this post.

Everything's a Remix

Other than inventing new physics, all software development has been done in some form or another before. The only truly original thing in the universe is the Big Bang. Everything else is derived from something that came before.

Now we may not know about this thing in the past, but that's a different story. It was done before in some form; we just didn't realize it. There are endless examples of copying ideas from the past while thinking they are innovative, new, and breakthrough. The iPad and all laptops came from Alan Kay's 1972 paper, "A Personal Computer for Children of All Ages." Even how the touch screen on the iPhone works was done before Apple announced it as the biggest breakthrough in the history of computing.

In our formal defense acquisition paradigm there are many programs that are research; the process looks like the flow below. Making estimates about the effort and duration is difficult, so blocks of money are provided to find out. But these are not product production or systems development processes. Systems Design and Development (SDD) sits between MS-B and MS-C. We don't confuse exploring with developing. Want to explore? Work on a DARPA program. Want to develop? Work post-MS-B and know something about what came before.

[Figure: DoD 5000.02 acquisition life cycle flow]

The pre-Milestone A work is to identify what capabilities will be needed in the final product. The DARPA programs I work on are even further to the left of Milestone A.

At the other end of the spectrum from this formal process, a collection of sticky notes on the wall can follow a similar flow of maturity. But the principles are still the same.

So How To Estimate in the Presence of We've Never Done This Before

  • The first thing to do is go find someone who has. Hire them, buy them dinner, pick their brain. 
  • The next would be to find an example of what you want to build and take it apart. This is what every product designer does. In the fashion business they just copy. In the software business they copy and make it better.
  • Long ago I had an idea, along with several others, of writing a book of reusable code in our domain. Algorithms that could be reused. The IBM FORTRAN Scientific Subroutine Library was our model. The remix of the code elements is now built into hardware chips for doing what we did - process signals from radar systems. The Collected Algorithms of the ACM is a similar idea.

Here's a critical concept - we can't introduce anything new until we're fluent in the language of our domain, and we do that through emulation.† This means for us to move forward we have to have done something like this in the past. So if we haven't done something like this in the past, don't know anyone who has, or can't find an example of it being done, we will have little success being innovative. As well, we will hopelessly fail in trying to estimate the cost, schedule, and probability of delivering capabilities. In other words we'll fail and blame it on the estimating process and assume that we'll be successful if we stop estimating.

So stop thinking we can't know what we don't know, and start thinking someone has done this before and we just need to find that someone, somewhere, or something. Nobody starts out being original; we need copying to get started. Once copied, transformation is the next step. With the copy we can estimate size and effort. We can then transform it into something better, and since we now know about the thing we copied, we have a reference class - yes, that famous Reference Class Forecasting used by all mature estimating shops. With the copy and its transformed item, we can then combine ideas into something new. The Alto from Xerox, and then the Xerox Star for executives, were the basis of the Lisa and the Mac.

The End

You can estimate almost anything, and every software system, if you do some homework and suspend the belief that it can't be done. Why? Because it's not your money, and those providing the money have an interest in several things about it - what will it cost, when will you be done, and, using the revenue side of the balance sheet, when will they break even on the exchange of money for value? This is the principle of every for-profit business on the planet. The not-for-profits have to pay the electric bill as well, as do the non-profits. So everyone, everywhere needs to know the cost of the value they asked us to produce BEFORE we've spent all their money and run out of time to reach the target market for that pesky break-even equation.

Anyone who tells you otherwise is not in the business of business, but just on the expense side, and that means not on the decision-making side either - just labor doing what they're told to do, which is a noble profession, but unlikely to influence how decisions are made.

The notion of decision rights is the basis of governance. When you hear about doing or not doing something without knowing who needs the information, ask who needs it, and whether it is your decision right to fulfill or not fulfill the request for that information. As my colleague, a retired NASA Cost Director, says: follow the money - that's where you find the decider.

† Everything is a Remix, Part 3, Kirby Ferguson.

Related articles:

  • Let's Stop Guessing and Learn How to Estimate
  • How To Fix Martin Fowler's Estimating Problem in 3 Easy Steps
  • How to Deal With Complexity In Software Projects?
  • Do It Right or Do It Twice
  • An Agile Estimating Story
  • Measurement of Uncertainty
  • Reference Design Concept
  • Project Finance

 

Categories: Project Management