
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Be Bold!

Making the Complex Simple - John Sonmez - Thu, 07/03/2014 - 15:00

Sometimes you have to just be bold if you want to maximize your opportunities.

The post Be Bold! appeared first on Simple Programmer.

Categories: Programming

How architecture enables kick ass teams (1): replication considered harmful?

Xebia Blog - Thu, 07/03/2014 - 11:51

At Xebia we regularly have discussions about Agile Architecture. What is it? What does it take? How should you organise it? Is it technical or organisational? And many more questions… which I won’t be answering today. What I will do today is kick off a blog series covering subjects that are often part of these heated debates. In general, what we strive for with Agile Architecture is an architecture that enables the organisation to keep moving fast, without IT being a limiting factor for realising changes. As you read this series you’ll notice one theme coming back over and over again: autonomy. Sometimes we’ll focus on the architecture of systems, sometimes on the architecture of the organisation or teams, but autonomy is the overarching theme. And if you’re familiar with Conway’s Law it should be no surprise that there is a strong correlation between team and system structure. Having a structure of teams that is completely different from your system landscape causes friction. We are convinced that striving for optimal team and system autonomy leads to an organisation that is able to quickly adapt and respond to changes.

The first subject is replication of data. This is more of a systems (landscape) issue than an organisational one, and it is definitely not the only one; more posts will follow.

We all have to deal with situations where:

  • consumers of a data retrieval service (e.g. customer account details) require this service to be highly available, or
  • compute intensive analysis must be done using the data in a system, or
  • data owned by a system must be searched in a way that is not (efficiently) supported by that system

These situations all impact the autonomy of the system owning the data. Is the system able to provide its functionality at the required quality level, or do these external requirements have negative consequences for the quality or maintainability of the service provided? Should these requirements be forced into the system, or is another approach more appropriate?

The above examples could all be solved by replicating the data into another system that is more suitable for meeting these requirements, but … replication of data is considered harmful by some. Is it really? Often-mentioned reasons not to replicate data are:

  • The replicated data will always be less accurate and timely than the original data
    True, but is this really a problem for the specific situation you’re dealing with? Sometimes you really need the latest version of a customer record, but in many situations it is no problem if the data is seconds, minutes or even hours old.
  • Business logic that interprets the data is implemented twice and needs to be maintained
    Yes, and you have to compare the costs of this against the benefits. As long as the benefits outweigh the costs, it is a good choice. You could even consider providing a library that is used in both systems.
  • System X is the authoritative source of the data and should be the only one that exposes it
    Agreed, and keeping that system as the authoritative source is good practice, but it does not mean that there cannot be read-only access to the same (replicated) data in other systems.

As you can see, it is never a black-and-white decision; you’ll have to make a balanced decision that weighs the benefits and costs of both alternatives. The gained autonomy, and the business benefits derived from it, can easily outweigh the extra development, hosting and maintenance costs of replicating data.

A few concrete examples from my own experience:

We had a situation where a CRM system instance owned data which was also required in a 24x7 emergency support process. The data was nicely exposed by a number of data retrieval services. At that organisation the CRM system deployment was such that most components were redundant, but during updates the system as a whole would still be down for several hours. This was not acceptable, given that the data was required in a 24x7 emergency support process. Making the CRM system deployment upgradable without downtime was either not possible or would cost too much.
In this situation, replicating the CRM system database to another datacenter using standard database features, and having the data retrieval services access either the replicated database or the original database (as fallback), was much cheaper than trying to make the CRM system itself highly available. The replicated database would remain accessible even while the CRM system was being upgraded. Yes, we’re bypassing the CRM system’s business logic for interpreting the data, but in this situation the logic was so simple that the costs of reimplementing and maintaining it in a new lightweight service (separate from the CRM system) were negligible.
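The read path described here can be sketched in a few lines (a minimal sketch; the function names and stub data sources are hypothetical, not from the actual system):

```python
# Minimal sketch of the fallback read path (function names and stub
# data sources are hypothetical, not from the actual CRM system).
def read_customer(customer_id, read_replica, read_primary):
    """Prefer the always-on replica; fall back to the primary."""
    try:
        return read_replica(customer_id)
    except ConnectionError:
        # Replica unreachable (e.g. during maintenance): use the original.
        return read_primary(customer_id)

# Usage with stubs standing in for real database clients:
replica_rows = {42: {"name": "Alice"}}

def from_replica(cid):
    return replica_rows[cid]

def from_primary(cid):
    return {"name": "Alice"}

print(read_customer(42, from_replica, from_primary))  # {'name': 'Alice'}
```

The point of the sketch is that the fallback logic lives in the lightweight service, so neither database needs to know about the other.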

Another example is from a telecom provider that uses a chain of fulfilment systems in which it registers all network products sold to its customers (e.g. internet access, telephony, tv). Each product instance depends on instances registered in another system, and if you drill down deep enough you’ll reach the physical network hardware ports on which it runs. The systems that registered all products used a relational model, which was fine for registration. However, questions like “if this product instance breaks, which customers are impacted?” were impossible to answer without overheating the CPUs in those systems. By publishing all changes to the registrations to a separate system, we could model the whole inventory of services as a network graph and easily analyse it without impacting the fulfilment systems. The fact that the data would be at most a few seconds old was no problem at all.
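The impact-analysis question can be sketched with a tiny reverse-dependency graph (the product and port names below are made up for illustration; the real inventory was far larger):

```python
# Sketch: given "product -> things it runs on", find everything
# affected when one node breaks by walking the reverse edges (BFS).
# The graph data here is invented for illustration.
from collections import defaultdict, deque

depends_on = {
    "internet-access-1": ["port-A"],
    "telephony-1": ["port-A"],
    "tv-1": ["port-B"],
}

# Build reverse edges: hardware -> products that depend on it.
dependents = defaultdict(list)
for product, deps in depends_on.items():
    for dep in deps:
        dependents[dep].append(product)

def impacted(broken):
    """All nodes transitively depending on `broken`."""
    seen, queue = set(), deque([broken])
    while queue:
        node = queue.popleft()
        for d in dependents[node]:
            if d not in seen:
                seen.add(d)
                queue.append(d)
    return seen

print(sorted(impacted("port-A")))  # ['internet-access-1', 'telephony-1']
```

In the replicated graph system this traversal is a cheap walk; in the relational fulfilment systems the same question meant recursive joins across several databases.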

And a last example: sometimes you want to do a full (phonetic) text search through a subset of your domain model. Relational data models quickly get you into an unmaintainable situation: your SQL queries will join many tables, contain lots of inefficient “LIKE ‘%gold%’” clauses, and leave developers with a hard time understanding what a query was actually intended to do. Replicating the data to a search engine makes searching far easier and opens up possibilities for searches that are hard to realise in a relational database.
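A minimal sketch of why the search-engine approach wins (the documents and the toy inverted index below are illustrative assumptions, not a real search engine):

```python
# A LIKE '%gold%' scan must inspect every row; an inverted index maps
# each token straight to the matching documents. Documents are made up.
from collections import defaultdict

docs = {
    1: "gold ring with diamond",
    2: "silver necklace",
    3: "white gold bracelet",
}

# Equivalent of LIKE '%gold%': full scan over all rows.
like_scan = [doc_id for doc_id, text in docs.items() if "gold" in text]

# Inverted index: token -> set of document ids, built once up front.
index = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.split():
        index[token].add(doc_id)

print(sorted(like_scan))      # [1, 3]
print(sorted(index["gold"]))  # [1, 3] - same answer, direct lookup
```

A real search engine adds tokenisation, stemming and phonetic matching on top of this idea, which is exactly the part that is painful to emulate in SQL.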

As you can see, replication of data can increase the autonomy of systems and teams, and thereby make your system landscape and organisation more agile. That is, you can realise new functionality faster and get it to your users quicker, because the coupling with other systems or teams is reduced.

In the next blog we'll discuss another subject that impacts team or system autonomy.

Teams Have a Peak Load, Revisited

Does the raft have a peak load?

Peak load is an engineering concept that has found its way into software development and maintenance conversations. Peak load is a spike over a specific period of time, not a sustainable level of performance. When applied to a software team, peak load is how much additional work a team can do for a short period of time. We previously concluded with the admonition: “The idea of pushing a team to a peak load should be used judiciously.” To which Assaf Sternberg asked, “Tom, how do you square this away with another thing that differentiates good software engineering from assembly line work – the ability to refactor/reengineer the solution in anticipation of future work to make the latter easier/faster/less risky? Over the long run, this should make it possible for the ‘functional point’ count per sprint to continue to rise (while these items would require less effort)”

Refactoring, also known as code refactoring, is the process of changing or restructuring code to be simpler, more effective or more efficient without changing the code’s functional behavior. Refactoring can also be done to make code more maintainable or extensible. The need to refactor can be inferred from the Agile principles of simplicity and emergent design. Refactoring is an integral part of development in most implementations of Agile. For example, in test-driven development the final step in the process is to refactor both the code and the design.

In order for refactoring to be effective, it needs to be a planned part of the work and needs to be done in pursuit of an overall goal that can be tested. During sprint planning, teams need to identify tasks for refactoring just as they do for other development activities. Refactoring is just another task that requires time and uses the team’s capacity. When the team plans for refactoring, it is reflected in the team’s velocity and productivity. When a team adopts the technique of refactoring, it will initially reduce their functional output, thereby reducing velocity and productivity. But over the long run, data I have collected as an Agile coach shows that productivity and velocity increase (about 5% year over year). When productivity goes up, more functionality is delivered for less effort. Refactoring is at least partially responsible for this increase.
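As a rough, hypothetical illustration of how that plays out, the arithmetic below assumes an initial dip in output and then compounds the ~5% yearly gain mentioned above (the specific numbers are invented):

```python
# Hypothetical numbers: output dips at first (refactoring consumes
# capacity), then grows ~5% year over year as the code gets easier
# to change.
def productivity_after(years, baseline=100.0, initial_dip=0.9, yearly_gain=1.05):
    """Relative functional output after `years`, starting from a dip."""
    return baseline * initial_dip * yearly_gain ** years

print(round(productivity_after(0), 1))  # 90.0 - just after adopting refactoring
print(round(productivity_after(5), 1))  # 114.9 - above the original baseline
```

Even a modest yearly gain more than pays back the initial dip within a few years, which is the trade the post is describing.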

Refactoring is done to attain a stated goal. For example, a team I worked with recently focused their refactoring efforts on maintainability (the team had developed standards for maintainability). Given that they had to implement, maintain and enhance the code as a team, maintainability improved their overall efficiency (reflected in velocity and productivity changes over time). The team developed the goal, then agreed on how to pursue it, and finally agreed on how they would know whether they were successful. A goal is important to ensure that team members do not act in an ad hoc manner.

How does, or should, refactoring impact a team’s sustainable pace and, by extension, its peak load? Refactoring does not extend the day; there are the same number of hours in your work day. What it does is help the team become more efficient and effective over time. Therefore refactoring increases velocity and productivity. This is only possible if refactoring is planned as part of the team’s normal activity and focused on achieving a goal.


Categories: Process Management

Google Play services 4.4

Android Developers Blog - Wed, 07/02/2014 - 20:01

A new release of Google Play services has now been rolled out to the world, and as usual we have a number of features that can make your apps better than before. This release includes a major enhancement to Maps with the introduction of Street View, as well as new features in Location, Games Services, Mobile Ads, and Wallet API.

Here are the highlights of Google Play services release 4.4:


Google Maps Android API

Starting with a much anticipated announcement for the Google Maps Android API: Introducing Street View. You can now embed Street View imagery into an activity enabling your users to explore the world through panoramic 360-degree views. Programmatically control the zoom and orientation (tilt and bearing) of the Street View camera, and animate the camera movements over a given duration. Here is an example of what you can do with the API, where the user navigates forward one step:
We've also added more features to the Indoor Maps feature of the API. You can turn the default floor picker off - useful if you want to build your own. You can also detect when a new building comes into focus, and find the currently-active building and floor. Great if you want to show custom markup for the active level, for example.


Activity Recognition

And while we are on the topic of maps, let’s turn to some news in the Location API. Those of you who have used this API may have seen that it can already detect whether the device is in a vehicle, on a bicycle, on foot, still, or tilting.

In this release, two new activity detectors have been added: running and walking. This is a great opportunity to make your app even more responsive to your users. And for those of you who have not worked with this capability before, we hardly need to spell out the cool things you can do with it. Just imagine combining it with features in Maps, Games Services, and other parts of Location...


Games Services Update

In the 4.3 release we introduced Game Gifts, which allows you to request gifts or wishes. And although there are no external API changes this time, the default gift-sending UI has been extended to allow the user to select multiple Game Gifts recipients. For your games this means more collaboration and social engagement between your players.


Mobile Ads

For Mobile Ads, we’ve added new APIs for publishers to display in-app promo ads, which enable users to purchase advertised items directly. We’re also giving app developers control over targeting specific user segments with ads, for example offering high-value users an ad for product A, or new users an ad for product B.

With these extensions, users can conveniently purchase in-app items that interest them, advertisers can reach consumers, and your app connects the dots; a win-win-win in other words.


Wallet Fragments

For the Instant Buy API, we’ve now reduced the work involved to place a Buy With Google button in an app. The WalletFragment API introduced in this release makes it extremely easy to integrate Google Wallet Instant Buy with an existing app. Just configure these fragments and add them to your app.

And that’s another release of Google Play services. The updated Google Play services SDK is now available through the Android SDK manager. Coming up in June is Google I/O, no need to say more…


For the release video, please see:
DevBytes: Google Play services 4.4

For details on the APIs, please see:
New Features in Google Play services 4.4



Join the discussion on
+Android Developers


Categories: Programming

Art, made with code, opens at London’s Barbican

Google Code Blog - Wed, 07/02/2014 - 19:30
By Paul Kinlan, Staff Developer Advocate and tinkerer

Good News Everybody! DevArt has officially opened at the Barbican’s Digital Revolution Exhibition, the biggest exploration of digital creativity ever staged in the UK.

(Images - Andrew Meredith)
Technology has long gone hand in hand with art, and with DevArt we’re showcasing the developers who use technology as their canvas and code as their raw material to create innovative, interactive digital art installations. Karsten Schmidt, Zach Lieberman, and the duo Varvara Guljajeva and Mar Canet have been commissioned by Google and the Barbican for Digital Revolution. Alongside these three commissions, a fourth duo - Cyril Diagne and Beatrice Lartigue - was handpicked as a result of DevArt’s global initiative to discover the interactive artists of tomorrow. You can also see their incredible art online and through our exhibition launch film here:


Play the World, 2014. Zach Lieberman [View on Github]
Using Google Compute Engine, Google Maps Geolocation API and openFrameworks, Zach has been able to find musical notes from hundreds of live radio stations around the world, resulting in a unique geo-orientated piece of music every time a visitor plays the piano at the centre of the piece.


Image by Andrew Meredith

Wishing Wall, 2014, Varvara Guljajeva & Mar Canet [View on Github]
Taking advantage of Google Compute Engine, Web Speech API, Chrome Apps, openFrameworks and node.js, Varvara and Mar are able to capture a whispered wish, and let you watch it transform before your eyes, allowing you to reach out and let it land on your hand.

Image by Andrew Meredith

Co(de) Factory, 2014, Karsten Schmidt [View on Github]
Android, Google Cloud Platform, Google Closure Compiler, WebGL, WebSockets, and YouTube have been combined by Karsten to allow anybody to create art and become an artist. It empowers people by giving them the tools to create, and offers them the chance to have their digital piece fabricated in 3D and showcased in the exhibition.

Image by Andrew Meredith

Les Métamorphoses de Mr. Kalia, 2014, Béatrice Lartigue and Cyril Diagne [View on Github]
Android, Chrome Apps, Google App Engine, node.js, and openFrameworks have enabled Béatrice and Cyril to create tracking technology that transforms movement into a visual performance, in which visitors take on the persona of Mr. Kalia, a larger-than-life animated character who undergoes a series of surreal changes while following their every movement.

Image by Andrew Meredith

DevArt will tour the world with the Digital Revolution Exhibition for up to five years following the Barbican show in London.

Soon we’re also starting our DevArt Young Creators program — an education component of DevArt designed to inspire a new generation of coders — with workshops each led by one of the DevArt interactive artists. Developed alongside the UK’s new computing curriculum, the workshops have been designed especially for students aged 9-13 who have never tried coding before. Each workshop will be developed into lesson plans in line with that curriculum and distributed to educators by arts and technology organisations.

Paul Kinlan is a Developer Advocate in the UK on the Chrome team specialising on mobile. He lives in Liverpool and loves trying to progress the city's tech community from places like DoES Liverpool hack-space.

Posted by Louis Gray, Googler
Categories: Programming

Why does data need to have sex?

Data needs the ability to combine with other data in new ways to reach maximum value. So data needs to have the equivalent of sex.

That's why I used sex in the title of my previous article, Data Doesn't Need To Be Free, But It Does Need To Have Sex. So it wasn't some sort of click-bait title as some have suggested.

Sex is nature’s way of bringing different data sets (our genomes) together and creating something new that has a chance to survive and thrive in changing environments.

Currently data is cloistered behind Walled Gardens and thus has far less value than it could have. How do we coax data from behind these walls? With money. So that's where the bit about "data doesn't need to be free" comes from. How do we make money? Through markets. What do we have as a product to bring to market? Data. What do services need to keep producing data as a product? Money.

So it's a virtuous circle. Services generate data from their relationship with users. That data can be sold for the money services need to make a profit. Profit keeps the service that users like running. A running service  produces even more data to continue the cycle.

Why do we even care about data having sex?

Historically one lens we can use to look at the world is to see everything in terms of how resources have been exploited over the ages. We can see the entire human diaspora as largely being determined by the search for and exploitation of different resource reservoirs.

We live near the sea for trade and access to fisheries. Early on we lived next to rivers for water, for food, for transportation, and later for power. People move to where there is lumber to harvest, gold to mine, coal to mine, iron to mine, land to grow food, steel to process, and so on. Then we build roads, rail roads, canals and ports to connect resource reservoirs to consumers.

In Nova Scotia, where I’ve been on vacation, a common pattern was for England and France to fight each other over land and resources. In the process they would build forts, import soldiers, build infrastructure, and make it relatively safe to trade. These forts became towns, which then became economic hubs. We see these places as large cities now, like Halifax, Nova Scotia, but it’s the resources that came first.

When you visit coves along the coast of Nova Scotia they may tell you with interpretive signage, spaced out along a boardwalk, about the boom and bust cycles of different fish stocks as they were discovered, exploited, and eventually fished out.

In the early days in Nova Scotia, great fortunes were made on cod. Then when cod was all fished out, other resource reservoirs like sardines, halibut, and lobster were exploited. Atlantic salmon was overfished. Production moved to the Pacific, where salmon was once again overfished. Now a big product is scallops, and what were once trash fish, like redfish, are now the next big thing because that’s what’s left.

During these cycles great fortunes were made. But when a resource runs out, people move on and find another. And when that runs out, people move on, and keep moving on, until they find a place to make a living.

Places associated with old used up resources often just fade away. Ghosts of the original economic energy that created them. As a tourist I've noticed what is mined now as a resource is the history of the people and places that were created in the process of exploiting previous resources. We call it tourism.

Data is a resource reservoir like all the other resource reservoirs we’ve talked about, but data is not being treated like a resource. It’s as if forts and boats and fishermen had all congregated to catch cod, but then didn’t sell the cod on an open market. If that were the case, limited wealth would have been generated; but because all these goods went to market as part of a vast value chain, a decent living was made by a great many people.

If we can see data as a resource reservoir, as natural resources run out, we'll be able to switch to unnatural resources to continue the great cycle of resource exploitation.

Will this work? I don't know. It's just a thought that seems worth exploring.

Categories: Architecture

Do You Encourage People to Bring You Problems?

One of the familiar tensions in management is how you encourage or discourage people from bringing you problems. One of my clients had a favorite saying, “Don’t bring me problems. Bring me solutions.”

I could see the problems that saying caused in the organization. He prevented people from bringing him problems until the problems were enormous. He didn’t realize that his belief that he was helping people solve their own problems was the cause of these huge problems.

How could I help?

I’d only been a consultant for a couple of years. I’d been a manager for several years, and a program manager and project manager for several years before that. I could see the system. This senior manager wasn’t really my client; I was consulting to a project manager who reported to him, not to him directly. His belief system was the root cause of many of the problems.

What could I do?

I tried coaching my project manager about what to say to his boss. That had some effect, but didn’t work well. My client, the project manager, was so dejected going into the conversation that it was dead before it started. I needed to talk to the manager myself.

I thought about this first. I figured I would only get one shot before I was out on my ear. I wasn’t worried about finding more consulting—but I really wanted to help this client. Everyone was suffering.

I asked for a one-on-one with the senior manager. I explained that I wanted to discuss the project, and that the project manager was fine with this meeting. I had 30 minutes.

I knew that Charlie, this senior manager, cared about two things: how fast we could release, so we could move on to the next project, and what the customers would see (customer perception). He thought those two things would affect sales and customer retention.

Charlie had put tremendous pressure on the project to cut corners to release faster. But cutting corners would change the customer perception: what people saw and how they would use the product. I wanted to change his mind and offer him other options.

“Hey Charlie, this time still good?”

“Yup, come on in. You’re our whiz-bang consultant, right?”

“Yes, you could call me that. My job is to help people think things through and see alternatives. That way they can solve problems on the next project without me.”

“Well, I like that. You’re kind of expensive.”

“Yes, I am. But I’m very good. That’s why you pay me. So, let’s talk about how I’m helping people solve problems.”

“I help people solve problems. I always tell them, ‘Don’t bring me problems. Bring me solutions.’ It works every time.” He actually laughed when he said this.

I waited until he was done laughing. I didn’t smile.

“You’re not smiling.” He started to look puzzled.

“Well, in my experience, when you say things like that, people don’t bring you small problems. They wait until they have no hope of solving the problem at all. Then, they have such a big problem, no one can solve the problem. Have you seen that?”

He narrowed his eyes.

“Let’s talk about what you want for this project. You want a great release in the next eight weeks, right? You want customers who will be reference accounts, right? I can help you with that.”

Now he looked really suspicious.

“Okay, how are you going to pull off this miracle? John, the project manager was in here the other day, crying about how this project was a disaster.”

“Well, the project is in trouble. John and I have been talking about this. We have some plans. We do need more people. We need you to make some decisions. We have some specific actions only you can take. John has specific actions only he can take.

“Charlie, John needs your support. You need to say things like, ‘I agree that cross-functional teams work. I agree that people need to work on just one thing at a time until they are complete. I agree that support work is separate from project work, and that we won’t ask the teams to do support work until they are done with this project.’ Can you do that? Those are specific things that John needs from you. But even those won’t get the project done in time.”

“Well, what will get the project done in time?” He practically growled at me.

“We need to consider alternatives to the way the project has been working. I’ve suggested alternatives to the teams. They’re afraid of you right now, because they don’t know which solution you will accept.”

“AFRAID? THEY’RE AFRAID OF ME?” He was screaming by this time.

“Charlie, do you realize you’re yelling at me?” I did not tell him to calm down. I knew better than that. I gave him the data.

“Oh, sorry. No. Maybe that’s why people are afraid of me.”

I grinned at him.

“You’re not afraid of me.”

“Not a chance. You and I are too much alike.” I kept smiling. “Would you like to hear some options? I like to use the Rule of Three to generate alternatives. Is it time to bring John in?”

We discussed the options with John. Remember, this was before agile. We discussed timeboxing, short milestones with criteria, inch-pebbles, and yellow-sticky scheduling, and decided to go with what is now called a design-to-schedule lifecycle for the rest of the project. We also decided to move some people over from support to help with testing for a few weeks.

We didn’t release in eight weeks. It took closer to twelve weeks. But the project was a lot better after that conversation. And, after I helped the project, I gained Charlie as a coaching client, which was tons of fun.

Many managers have rules about their problem solving and how to help or not help their staff. “Don’t bring me a problem. Bring me a solution” is not helpful.

That is the topic of this month’s management myth: Myth 31: I Don’t Have to Make the Difficult Choices.

When you say, “Don’t bring me a problem. Bring me a solution” you say, “I’m not going to make the hard choices. You are.” But you’re the manager. You get paid to make the difficult choices.

Telling people the answer isn’t always right. You might have to coach people. But not making decisions isn’t right either. Exploring options might be the right thing. You have to do what is right for your situation.

Go read Myth 31: I Don’t Have to Make the Difficult Choices.

Categories: Project Management

Loan calculator

Phil Trelford's Array - Wed, 07/02/2014 - 08:17

Yesterday I came across a handy loan payment calculator in C# by Jonathon Wood via Alvin Ashcraft’s Morning Dew links. The implementation appears to be idiomatic C# using a class, mutable properties and wrapped in a host console application to display the results.

I thought it’d be fun to spend a few moments re-implementing it in F# so it can be executed in F# interactive as a script or a console application.

Rather than use a class, I’ve plumped for a record type that captures all the required fields:

/// Loan record
type Loan = {
   /// The total purchase price of the item being paid for.
   PurchasePrice : decimal
   /// The total down payment towards the item being purchased.
   DownPayment : decimal
   /// The annual interest rate to be charged on the loan
   InterestRate : double
   /// The term of the loan in months. This is the number of months
   /// that payments will be made.
   TermMonths : int
   }

 

And for the calculation simply a function:

open System

/// Calculates monthly payment amount
let calculateMonthlyPayment (loan:Loan) =
   let monthsPerYear = 12
   let rate = (loan.InterestRate / double monthsPerYear) / 100.0
   let factor = rate + (rate / (Math.Pow(rate + 1.0, double loan.TermMonths) - 1.0))
   let amount = loan.PurchasePrice - loan.DownPayment
   let payment = amount * decimal factor
   Math.Round(payment, 2)

 

We can test the function immediately in F# interactive

let loan = {
   PurchasePrice = 50000M
   DownPayment = 0M
   InterestRate = 6.0
   TermMonths = 5 * 12
   }

calculateMonthlyPayment loan
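As a quick sanity check of the formula (not from the original post), the same amortization arithmetic sketched in Python gives the monthly payment for the loan above:

```python
import math

# Same amortization math as the F# function: the payment factor is
# r * (1+r)^n / ((1+r)^n - 1), written here as r + r / ((1+r)^n - 1).
def monthly_payment(purchase_price, down_payment, annual_rate_pct, term_months):
    rate = (annual_rate_pct / 12) / 100.0  # monthly interest rate
    factor = rate + rate / (math.pow(rate + 1.0, term_months) - 1.0)
    return round((purchase_price - down_payment) * factor, 2)

print(monthly_payment(50000, 0, 6.0, 60))  # 966.64
```

966.64 per month for 50,000 at 6% over 60 months is the standard annuity result, so the record-based and tupled F# versions should both produce it.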

 

Then a test run (which produces the same results as the original code):

let displayLoanInformation (loan:Loan) =
   printfn "Purchase Price: %M" loan.PurchasePrice
   printfn "Down Payment: %M" loan.DownPayment
   printfn "Loan Amount: %M" (loan.PurchasePrice - loan.DownPayment)
   printfn "Annual Interest Rate: %f%%" loan.InterestRate
   printfn "Term: %d months" loan.TermMonths
   printfn "Monthly Payment: %f" (calculateMonthlyPayment loan)
   printfn ""

for i in 0M .. 1000M .. 10000M do
   let loan = { loan with DownPayment = i }
   displayLoanInformation loan

 

Another option is to simply skip the record and use arguments:

/// Calculates monthly payment amount
let calculateMonthlyPayment(purchasePrice,downPayment,interestRate,months) =
   let monthsPerYear = 12
   let rate = (interestRate / double monthsPerYear) / 100.0
   let factor = rate + (rate / (Math.Pow(rate + 1.0, double months) - 1.0))
   let amount = purchasePrice - downPayment
   let payment = amount * decimal factor
   Math.Round(payment, 2)
Categories: Programming

R/plyr: ddply – Renaming the grouping/generated column when grouping by date

Mark Needham - Wed, 07/02/2014 - 07:30

On Nicole’s recommendation I’ve been having a look at R’s plyr package to see if I could simplify my meetup analysis and I started by translating my code that grouped meetup join dates by day of the week.

To refresh, the code without plyr looked like this:

library(Rneo4j)
timestampToDate <- function(x) as.POSIXct(x / 1000, origin="1970-01-01")
 
query = "MATCH (:Person)-[:HAS_MEETUP_PROFILE]->()-[:HAS_MEMBERSHIP]->(membership)-[:OF_GROUP]->(g:Group {name: \"Neo4j - London User Group\"})
         RETURN membership.joined AS joinDate"
meetupMembers = cypher(graph, query)
meetupMembers$joined <- timestampToDate(meetupMembers$joinDate)
 
dd = aggregate(meetupMembers$joined, by=list(format(meetupMembers$joined, "%A")), function(x) length(x))
colnames(dd) = c("dayOfWeek", "count")

which returns the following:

> dd
  dayOfWeek count
1    Friday   135
2    Monday   287
3  Saturday    80
4    Sunday   102
5  Thursday   187
6   Tuesday   286
7 Wednesday   211

We need to use plyr’s ddply function which takes a data frame and transforms it into another one.

To refresh, this is what the initial data frame looks like:

> meetupMembers[1:10,]
       joinDate              joined
1  1.376572e+12 2013-08-15 14:13:40
2  1.379491e+12 2013-09-18 08:55:11
3  1.349454e+12 2012-10-05 17:28:04
4  1.383127e+12 2013-10-30 09:59:03
5  1.372239e+12 2013-06-26 10:27:40
6  1.330295e+12 2012-02-26 22:27:00
7  1.379676e+12 2013-09-20 12:22:39
8  1.398462e+12 2014-04-25 22:41:19
9  1.331734e+12 2012-03-14 14:11:43
10 1.396874e+12 2014-04-07 13:32:26

Most of the examples of using ddply show how to group by a specific column, e.g. joined, but I wanted to group by part of the value in that column. I eventually came across an example which showed how to do it:

> ddply(meetupMembers, .(format(joined, "%A")), function(x) {
    count <- length(x$joined)
    data.frame(count = count)
  })
  format(joined, "%A") count
1               Friday   135
2               Monday   287
3             Saturday    80
4               Sunday   102
5             Thursday   187
6              Tuesday   286
7            Wednesday   211

Unfortunately the generated column heading for the group-by key isn’t very readable, and it took me way longer than it should have to work out how to name it as I wanted! This is how you do it:

> ddply(meetupMembers, .(dayOfWeek=format(joined, "%A")), function(x) {
    count <- length(x$joined)
    data.frame(count = count)
  })
  dayOfWeek count
1    Friday   135
2    Monday   287
3  Saturday    80
4    Sunday   102
5  Thursday   187
6   Tuesday   286
7 Wednesday   211

If we want to sort that in descending order by ‘count’, we can wrap that ddply in another one:

> ddply(ddply(meetupMembers, .(dayOfWeek=format(joined, "%A")), function(x) {
    count <- length(x$joined)
    data.frame(count = count)
  }), .(count = count* -1))
  dayOfWeek count
1    Monday   287
2   Tuesday   286
3 Wednesday   211
4  Thursday   187
5    Friday   135
6    Sunday   102
7  Saturday    80
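
For comparison, the same group-and-sort can be done in Python with pandas (a sketch, not from the original post; the sample dates below are invented stand-ins for the meetup data):

```python
import pandas as pd

# Hypothetical join dates standing in for meetupMembers$joined
df = pd.DataFrame({"joined": pd.to_datetime([
    "2013-08-15", "2013-09-16", "2012-10-05",
    "2013-10-28", "2013-06-26", "2012-02-27",
])})

# Group by day-of-week name and sort descending, like the nested ddply calls
counts = (df.groupby(df["joined"].dt.day_name())
            .size()
            .rename("count")
            .sort_values(ascending=False))
print(counts)  # Monday first with a count of 3
```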

From reading a bit about ddply I gather that it’s slower than some other approaches, e.g. data.table, but I’m not dealing with much data so it’s not an issue yet.

Once I got the hang of how it worked, ddply was quite nice to work with, so I think I’ll have a go at translating some of my other code to use it now.

Categories: Programming

Sprint Planning and Non-Story Related Tasks

Burn down chart

I have been asked more than once what to do with tasks that occur during a sprint but are not directly related to a committed story.  You first need to determine 1) whether the team commits to the task, and 2) whether it is generally helpful for the team to account for the effort and burn it down.  Tasks can be categorized to help determine whether they affect capacity or need to be planned and managed at the team level.  Tasks that the team commits to performing need to be managed as part of the team’s capacity, while administrative tasks reduce capacity.

Administrative tasks.  Administrative tasks include planned vacations, corporate meetings, meetings with human resources managers, non-project-related training, and others.  Classify any tasks that are not related to delivering project value as administrative tasks. One attribute of these tasks is that team members do not commit to them; they are levied by the organization. The effort planned for these tasks should be deducted from the capacity of the team.  For example, in a five-person team with 200 hours to spend on a project during a sprint (capacity), if one person were taking 20 hours of vacation, the team’s capacity would be 180 hours.  If in addition to the vacation all five had to attend a two-hour department staff meeting (even an important staff meeting), the team’s capacity would be reduced to 170 hours.  Administrative tasks can add up; deducting them from capacity makes the impact of these tasks obvious to everyone involved with the team.  Note: in organizations that have a very high administrative burden, I sometimes draw a line on the burn down chart that represents capacity before administrative tasks are removed. 

Project-related non-story tasks. Project-related non-story tasks are required to deliver the project value.  This category of tasks includes backlog grooming, spikes, and retrospectives.  There is a school of thought that the effort for these tasks should also be deducted from capacity, but deducting the effort from capacity takes away the team’s impetus to manage the effort and the tasks, and with it some of the team’s ability to self-organize and self-manage. The team should plan and commit to these tasks, so they are added to the backlog and burned down. This puts the onus on the team to complete the tasks and manage the time needed to complete them. As an example, if our team with 170 hours of capacity planned a 10-hour spike and had three people perform backlog grooming for an hour each (13 hours in total), I would expect to see cards for these tasks in the backlog, and as they are completed the 13 hours would be burned down from the capacity.

Tasks that are under the control of the team need to be planned and burned against their capacity.  The acts of planning and accounting for the time provide the team with the ability to plan and control the work they commit to completing.  When tasks the team can’t control are planned for them, deducting the effort from the overall capacity helps keep the team from overcommitting to work that must be delivered.   
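
The arithmetic from the examples above can be sketched as follows (numbers taken from the text):

```python
# Administrative tasks reduce capacity before the sprint starts
raw_capacity = 200                  # five people's hours for the sprint
administrative = 20 + 5 * 2         # 20h vacation + 2h staff meeting x 5 people
capacity = raw_capacity - administrative

# Project-related non-story tasks are committed to and burned down instead
committed_non_story = 10 + 3 * 1    # 10h spike + 1h grooming x 3 people
remaining = capacity - committed_non_story

print(capacity, remaining)          # 170 157
```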


Categories: Process Management

GTAC is Almost Here!

Google Testing Blog - Tue, 07/01/2014 - 20:28
by The GTAC Committee

GTAC is just around the corner, and we’re all very busy and excited. I know we say this every year, but this is going to be the best GTAC ever! We have updated the GTAC site with important details:


If you are on the attendance list, we’ll see you on April 23rd. If not, check out the Live Stream page where you can watch the conference live and get involved in Q&A after each talk. Perhaps your team can gather in a conference room and attend remotely.

Categories: Testing & QA

GTAC 2013 Wrap-up

Google Testing Blog - Tue, 07/01/2014 - 20:27
by The GTAC Committee

The Google Test Automation Conference (GTAC) was held last week in NYC on April 23rd & 24th. The theme for this year's conference was focused on Mobile and Media. We were fortunate to have a cross section of attendees and presenters from industry and academia. This year’s talks focused on trends we are seeing in industry combined with compelling talks on tools and infrastructure that can have a direct impact on our products. We believe we achieved a conference for engineers, by engineers. GTAC 2013 demonstrated that there is a strong trend toward the emergence of test engineering as a computer science discipline across companies and academia alike.

All of the slides, video recordings, and photos are now available on the GTAC site. Thank you to all the speakers and attendees who made this event spectacular. We are already looking forward to the next GTAC. If you have suggestions for next year’s location or theme, please comment on this post. To receive GTAC updates, subscribe to the Google Testing Blog.

Here are some responses to GTAC 2013:

“My first GTAC, and one of the best conferences of any kind I've ever been to. The talks were consistently great and the chance to interact with so many experts from all over the map was priceless.” - Gareth Bowles, Netflix

“Adding my own thanks as a speaker (and consumer of the material, I learned a lot from the other speakers) -- this was amazingly well run, and had facilities that I've seen many larger conferences not provide. I got everything I wanted from attending and more!” - James Waldrop, Twitter

“This was a wonderful conference. I learned so much in two days and met some great people. Can't wait to get back to Denver and use all this newly acquired knowledge!” - Crystal Preston-Watson, Ping Identity

“GTAC is hands down the smoothest conference/event I've attended. Well done to Google and all involved.” - Alister Scott, ThoughtWorks

“Thanks and compliments for an amazingly brain activity spurring event. I returned very inspired. First day back at work and the first thing I am doing is looking into improving our build automation and speed (1 min is too long. We are not building that much, groovy is dynamic).” - Irina Muchnik, Zynx Health

Categories: Testing & QA

Abuse of Management Power: Women’s Access to Contraceptives

You might think this is a political post. It is not. This is about management power and women’s health at first. Then, it’s about employee health.

Yesterday, the US Supreme Court decided that a company could withhold contraceptive care from women, based on the company’s owner’s religious beliefs. Hobby Lobby is a privately held corporation.

Women take contraception pills for many reasons. If you have endometriosis, you take them so you can have children in the future. You save your fertility by not bleeding out every month. (There’s more to it than that. That’s the short version.)

If you are subject to ovarian cysts, birth control pills control them, too. If you are subject to the monthly “crazies” and you want to have a little control over your hormones, the pill can do wonders for you.

It’s not about sex. It’s not about pregnancy. It’s about health. It’s about power over your own body.

And, don’t get me started on the myriad reasons for having a D&C. As someone who had a blighted ovum, and had to have a D&C at 13 weeks (yes, 13 weeks), I can tell you that I don’t know anyone who goes in for an abortion who is happy about it.

It was the saddest day of my life.

I had great health care and a supportive spouse. I had grief counseling. I eventually had another child. Because, you see, a blighted ovum is not really a miscarriage. It felt like one to me. But it wasn’t really. Just ask your healthcare provider.

Maybe some women use abortion or the morning-after pill as primary contraception. It’s possible. You don’t have to like other people’s choices. That should be their choice. If you make good contraception free, women don’t have to use abortion or the morning-after pill as a primary contraception choice.

When other people remove a woman’s right to choose how she gets health care for her body, it’s the first step down an evil road. This is not about religious freedom. Yes, it’s couched in those terms now. But this is about management power.

It’s the first step towards management deciding that they can make women a subservient class and what they can do to that subservient class. Right now, that class is women and contraception. What will the next class be?

Will management decide everyone must get genetic counseling before you have a baby? Will they force you to abort a not-perfect baby because they don’t want to pay for the cost of a preemie? Or the cost of a Down Syndrome baby? What about if you have an autistic child?

Men, don’t think you’re immune from this either. What if you indulge in high-risk behavior, such as helicopter skiing? Or, if you gain too much weight? What if you need knee replacements or hip replacements?

What if you have chronic diseases? What happens if you get cancer?

What about when people get old? Will we have euthanasia?

We have health care, including contraception, as the law of the United States. I cannot believe that a non-religious company, i.e., not a church, is being allowed to flout that law. This is about management power. This is not about religion.

If they can say, “I don’t wanna” to this, what can they say, “I don’t wanna” to next?

This is the abuse of management power.

This is the first step down a very slippery slope.

Categories: Project Management

New Entreprogrammers Episodes: 18 and 19

Making the Complex Simple - John Sonmez - Tue, 07/01/2014 - 18:37

We have two new episodes this week, since I had to call an emergency meeting last week about an important life decision. Check them out here: http://entreprogrammers.com/ I can’t believe we are already at episode 19.

The post New Entreprogrammers Episodes: 18 and 19 appeared first on Simple Programmer.

Categories: Programming

Three Ways to Run (Your Business)

NOOP.NL - Jurgen Appelo - Tue, 07/01/2014 - 17:37
Three Ways to Run

I’ve learned that there are three approaches to running (your business).

The first is to create goals for yourself, maybe with a training (change) program. You start running and try to run 5 km. When the pain doesn’t kill you, you could set yourself a target for next time: 6 km. And if you survive that ordeal then maybe next time you can run 7 km. This goes on and on until either you run a marathon, or you never run again because of the shin splints, wrecked knees, and the terrible pain in your back.

The post Three Ways to Run (Your Business) appeared first on NOOP.NL.

Categories: Project Management

Economics of Iterative Software Development

Herding Cats - Glen Alleman - Tue, 07/01/2014 - 15:41

Economics of Iterative Development

Software development is microeconomics. Microeconomics is about making decisions - choices - based on knowing something about the cost, schedule, and technical impacts of that decision. In the microeconomics paradigm, this information is part of the normal business process.

This is why the conjecture that you can make decisions in the absence of estimating their impacts ignores the principles of business. The same goes for the notion that when numbers are flying around in an organization they sow the seeds of dysfunction; we need to stop and think about how business actually works. Money is used to produce value, which is then exchanged for more money. No business will survive for long in the absence of knowing the numbers contained in the balance sheet and general ledger.

This book should be mandatory reading for anyone thinking about making improvements to what they see as dysfunctions in their work environment. There is no need to run off and start inventing new untested ideas; they're right here for the taking. With this knowledge comes an understanding of why estimates are needed to make decisions. In the microeconomics paradigm, making a choice is about opportunity cost: what will it cost me to not do something? The set of choices that can be acted on given their economic behaviour. The value produced from the invested cost. The opportunities created from the cost of development. And other trade-space discussions.

To make those decisions with any level of confidence, information is needed. This information is almost always about the future - return on investment, opportunity, risk-reduction strategies - and it is almost always probabilistic, driven by an underlying statistical process. This is the core motivation for learning to estimate: to make decisions about the future that are most advantageous for the invested cost.
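
As a toy illustration of such probabilistic decision information (all numbers invented, not from the post), two options can be compared by simulating their uncertain costs and returns:

```python
import random

random.seed(42)  # deterministic for the illustration

def expected_net(cost_rng, revenue_rng, trials=100_000):
    """Monte Carlo estimate of expected (revenue - cost).

    cost_rng / revenue_rng are (low, high, mode) triangular parameters.
    """
    total = 0.0
    for _ in range(trials):
        total += random.triangular(*revenue_rng) - random.triangular(*cost_rng)
    return total / trials

# Invented numbers: two candidate investments with uncertain outcomes
option_a = expected_net((80, 140, 100), (120, 200, 150))
option_b = expected_net((40, 90, 60), (70, 130, 90))
print(option_a > option_b)  # True: A has the higher expected net value
```

The point is not the simulation itself but that the comparison requires estimates of future cost and value, which is exactly the argument the post makes.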

That's the purpose of estimates, to support business decisions.

This decision-making process is the basis of governance: the structure, oversight, and management process that ensures delivery of the needed benefits of IT in a controlled way, enhancing the long-term, sustainable success of the enterprise.

Related articles:
  • Why Johnny Can't Estimate or the Dystopia of Poor Estimating Practices
  • The Calculus of Writing Software for Money, Part II
  • How To Estimate, If You Really Want To
  • How to "Lie" with Statistics
  • How to Deal With Complexity In Software Projects?
  • Why is Statistical Thinking Hard?
Categories: Project Management

Adding Decorated User Roles to Your User Stories

Mike Cohn's Blog - Tue, 07/01/2014 - 15:00

When writing user stories there are, of course, two parts: user and story. The user is embedded right at the front of stories when using the most common form of writing user stories:

As a <user role> I <want/can/am required to> <some goal> so that <some reason>.

Many of the user roles in a system can be considered first-class users. These are the roles that are significant to the success of a system and will occur over and over in the user stories that make up a system’s product backlog.

These roles can often be displayed in a hierarchy for convenience. An example for a website that offers benefits to registered members is as follows:

[User role hierarchy diagram]

In this example, some stories will be written with “As a site visitor, …” This would be appropriate for something anyone visiting the site can do, such as view a license agreement.

Other stories could be written specifically for registered members. For example, a website might have, “As a registered member, I can update my payment information.” This is appropriate to write for a registered member because it’s not appropriate for other roles such as visitors, former members, or trial members.

Stories can also be written for premium members (who are the only ones who can perhaps view certain content) or trial members (who are occasionally prompted to join).

But, beyond the first-class user roles shown in a hierarchy like this, there are also other occasional users—that is, users who may be important but who aren’t really an entirely separate type of user.

First-time users can be considered an example. A first-time user is usually not really a first-class role, but first-time users can be important to the success of a product, such as the website in this example.

Usually, the best way to handle this type of user role is by adding an adjective in front of the user role. That could give us roles such as:

  • First-time visitor
  • First-time member

The former would refer to someone on their absolute first visit to the website. The latter could refer to someone who is on the site for the first time as a subscribed member.

This role could have stories such as, “As a first-time visitor, additional prompts appear on the site to direct me toward the most common starting points so that I learn how to navigate the system.”

A similar example could be “forgetful member.” Forgetful members are very unlikely to be a primary role you identify as vital to the success of your product. We should, however, be aware of them.

A forgetful member may have stories such as “As a forgetful member, I can request a password reminder so that I can log in without having to first call tech support.”

I refer to these as decorated user roles, borrowing the term from mathematics where decorated symbols such as x-bar (x̄) and x-hat (x̂) are common.

Decorated users add the ability to further refine user roles without adding complexity to the user role model itself through the inclusion of additional first-class roles. As a fan of user stories, I hope you find the idea as helpful as I do.
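
If it helps to make the distinction concrete, a decorated role can be modeled as a plain role plus an adjective rather than a new node in the hierarchy (a hypothetical sketch, not from the original post):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Role:
    name: str

@dataclass(frozen=True)
class DecoratedRole:
    decorator: str   # the adjective, e.g. "first-time", "forgetful"
    base: Role       # the first-class role being decorated

    @property
    def name(self) -> str:
        return f"{self.decorator} {self.base.name}"

member = Role("member")
print(DecoratedRole("forgetful", member).name)  # forgetful member
```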

In the comments, let me know what you think and please share some examples of decorated user roles you’ve used or encountered.


How combined Lean- and Agile practices will change the world as we know it

Xebia Blog - Tue, 07/01/2014 - 08:50

You might have attended our presentation this month about eXtreme Manufacturing, or Nalden's keynote last week at XebiCon 2014. There are a few epic takeaways and additions I would like to share with you in this blog post.

Epic TakeAway #1: The Learn, Unlearn and Relearn Cycle

As Nalden expressed in his inspiring keynote, one of the major reasons for his success is being able to learn, unlearn and relearn, time and again. In my opinion, this will be the key ability for every successful company in the near future. In fact, this is how nature evolves: in the end, only the species that are able to adapt to changing circumstances will survive and evolve. This mechanism is why most startups fail, but those that do survive can be extremely disruptive for non-agile organizations. The best example of this is of course WhatsApp, which shook up the telco industry by almost destroying its whole business model in only a few months. Learn more about disruptive innovation from one of my personal heroes, Harvard professor Clayton Christensen.

Epic TakeAway #2: Unlearning Waterfall, Relearning Lean & Agile

Globally, Waterfall is still the dominant method in companies and universities. Waterfall has its origins more than 40 years ago, and times have changed. A lot. A new, successful and disruptive product can now appear in a matter of days instead of (many) years. Finally, things are changing. For example, the US Department of Defense has recently embraced Lean and Agile as mandatory practices, especially Scrum. Schools and universities are also increasingly adopting the Agile way of working. More on this later in this blog post.

Epic TakeAway #3: Combined Lean and Agile practices = XM

Lean practices arose in Japan in the 1980s, mainly in the manufacturing industry, with Toyota as the frontrunner. Agile practices like Scrum were first introduced in the 1990s by Ken Schwaber and Jeff Sutherland and were mainly applied in the IT industry. Until recently, the manufacturing and IT worlds hadn't really joined forces to combine Lean and Agile practices. The WikiSpeed initiative of Joe Justice proved that combining these practices results in a hyper-productive environment, in which a 100-mile-per-gallon, road-legal sports car could be developed in less than three months. Out of this success eXtreme Manufacturing (XM) arose. Finally, a powerful combination of best practices from the manufacturing and IT worlds came together.

Epic TakeAway #4: Agile Mindset & Education

As Sir Ken Robinson and Dan Pink described in their famous TED talks, the way most people are educated and rewarded is no longer suitable for modern times and even conflicts with the way we are born. We learn by "failing", not by preventing it. Failing should, in its essence, stimulate creativity to do things better next time, not be punished. In the long run, failing (read: learning!) has more added value than short-term success achieved, for example, by chasing milestones blindly. EduScrum in the Netherlands encourages schools and universities to apply Scrum in their daily classes in order to stimulate creativity, happiness, self-reliance and talent. The results at the schools joining this initiative are spectacular: happy students, fewer dropouts and significantly higher grades. For a prestigious project at Delft University, Forze, the development of a hydrogen race car, the students are currently being trained and coached to apply Agile and Lean practices. These results, too, are more than promising. The Forze team is happier, more productive and better able to learn faster from setbacks. In fact, they are taking the first steps toward being anti-fragile. At the request of the Forze team members themselves, the current support by agile (Xebia) coaches is now planned to be extended to the flagship of Delft University: the NUON solar team.

The Final Epic TakeAway

In my opinion, we have reached a tipping point in the way goals should be achieved. Organizations are massively abandoning Waterfall and embracing Agile practices like Scrum. Adding Lean practices, as Joe Justice did in his WikiSpeed project, makes the combination extremely powerful. Yes, this will even make the world a much better place. We cannot prevent natural disasters, but we can be anti-fragile. We cannot prevent every epidemic, but we can respond in an XM fashion by developing a vaccine in days instead of years. This brings me, finally, to the missing statement of the current Agile Manifesto: we should Unlearn and Relearn before we Judge. Dare to dream like a little kid again. Unlearn your skepticism. Companies like Boeing, Lockheed Martin and John Deere already did; adopting XM sped up their velocity, in some cases by more than 7 times.