
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

New Tools to Supercharge Your Games on Google Play

Android Developers Blog - Thu, 05/14/2015 - 04:11

Posted by Greg Hartrell, Senior Product Manager of Google Play Games

Everyone has a gaming-ready device in their pocket today. In fact, of the one billion Android users in more than 190 countries, three out of four of them are gamers. This allows game developers to reach a global audience and build a successful business. Over the past year, we paid out more than $7 billion to developers distributing apps and games on Google Play.

At our Developer Day during the Game Developers Conference (GDC) taking place this week, we announced a set of new features for Google Play Games and AdMob to power great gaming. Rolling out over the next few weeks, these launches can help you better measure and monetize your games.

Better measure and adapt to player needs

“Player Analytics has helped me hone in on BombSquad’s shortcomings, right the ship, and get to a point where I can financially justify making the games I want to make.”

Eric Froemling, BombSquad developer

Google Play Games is a set of services that help game developers reach and engage their audience. To further that effort, we’re introducing Player Analytics, giving developers access to powerful analytics reports to better measure overall business success and understand in-game player behavior. Launching in the next few weeks in the Google Play Developer Console, the new tool will give indie developers and big studios better insight into how their players are progressing, spending, and churning; access to critical metrics like ARPPU and sessions per user; and assistance setting daily revenue targets.

BombSquad, created by a one-person game studio in San Francisco, was able to more than double its revenue per user on Google Play after implementing design changes informed by beta testing Player Analytics.

Optimizing ads to earn the most revenue

After optimizing your game for performance, it’s important to build a smarter monetization experience tailored to each user. That’s why we’re announcing three important updates to the AdMob platform:

  • Native Ads: Currently available as a limited beta, this feature lets participating game developers show ads in their app from Google advertisers, and then customize them so that users see ads that match the visual design of the game. Atari is looking to innovate on its games, like RollerCoaster Tycoon 4 Mobile, and to more effectively engage users with this new feature.
  • In-App Purchase House Ads Beta: Game developers will be able to smartly grow their in-app purchase revenue for free. AdMob can now predict which users are more likely to spend on in-app purchases, and developers will be able to show these users customized text or display ads promoting items for sale. Currently in beta, this feature will be coming to all AdMob accounts in the next few weeks.
  • Audience Builder: A powerful tool that enables game developers to create lists of audiences based on how they use their game. They will be able to create customized experiences for users, and ultimately grow their app revenue.

"Atari creates great game experiences for our broad audience. We're happy to be partnering with Google and be the first games company to take part in the native ads beta and help monetize games in a way that enhances our users' experience."

Todd Shallbetter, Chief Operating Officer, Atari

New game experiences powered by Google

Last year, we launched Android TV as a way to bring Android into the living room, optimizing games for the big screen. The OEM ecosystem is growing, with smart TVs and micro-consoles announced by partners like Sony, TPVision/Philips and Razer.

To make gaming even more dynamic on Android TV, we’re launching the Nearby Connections API with the upcoming update of Google Play services. With this new protocol, games can seamlessly connect smartphones and tablets as second-screen controls to the game running on your TV. Beach Buggy Racing is a fun and competitive multiplayer racing game on Android TV whose developer plans to use Nearby Connections in its summer release, and we are looking forward to more living room multiplayer games taking advantage of mobile devices as second-screen controls.

At Google I/O last June, we also unveiled Google Cardboard with the goal of making virtual reality (VR) accessible to everyone. With Cardboard, we are giving game developers more opportunities to build unique and immersive experiences from nothing more than a piece of cardboard and your smartphone. The Cardboard SDKs for Android and Unity enable you to easily build VR apps or adapt your existing app for VR.

Check us out at GDC

Visit us at the Google booth #502 on the Expo floor to get hands on experience with Project Tango, Niantic Labs and Cardboard starting on Wednesday, March 4. Our teams from AdMob, AdWords, Analytics, Cloud Platform and Firebase will also be available to answer any of your product questions.

For more information on what we’re doing at GDC, please visit g.co/dev/gdc2015.

Join the discussion on

+Android Developers
Categories: Programming

Hello Places API for Android and iOS!

Android Developers Blog - Thu, 05/14/2015 - 04:09

Posted by Jen Kovnats Harrington, Product Manager, Google Maps APIs

Originally posted to Google Geo Developers blog

People don’t think of their location in terms of coordinates on a map. They want context on what shops or restaurants they’re at, and what’s around them. To help your apps speak your users’ language, we’re launching the Places API for Android, as well as opening a beta program for the Places API for iOS.

The Places API web service and JavaScript library have been available for some time. The new APIs add native support for Android and iOS devices, so you can optimize the mobile experience by taking advantage of the device’s location signals.

The Places APIs for Android and iOS bridge the gap between simple geographic locations expressed as latitude and longitude, and how people associate location with a known place. For example, you wouldn’t tell someone you were born at 25.7918359,-80.2127959. You’d simply say, “I was born in Jackson Memorial Hospital in Miami, Florida.” The Places API brings the power of Google’s global places database into your app, providing more than 100 million places, like restaurants, local businesses, hotels, museums, and other attractions.

Key features include:

  • Add a place picker: a drop-in UI widget that allows your users to specify a place (see the sketch after this list)
  • Get the place where the user is right now
  • Show detailed place information, including the place’s name, address, phone number, and website
  • Use autocomplete to save your users time and frustration typing out place names, by automatically completing them as they type
  • Make your app stand out by adding new places that are relevant to your users and seeing the places appear in Google's Places database
  • Improve the map around you by reporting the presence of a device at a particular place.
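The post itself contains no code, but here is a minimal sketch of the place picker flow from the first bullet above, assuming the PlacePicker and Place classes from the Google Play services Places library of that era; treat the exact signatures as illustrative rather than authoritative.

import android.content.Intent;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.widget.Toast;

import com.google.android.gms.location.places.Place;
import com.google.android.gms.location.places.ui.PlacePicker;

public class PickPlaceActivity extends AppCompatActivity {
    private static final int PLACE_PICKER_REQUEST = 1;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        try {
            // Launch the drop-in place picker UI supplied by Google Play services.
            PlacePicker.IntentBuilder builder = new PlacePicker.IntentBuilder();
            startActivityForResult(builder.build(this), PLACE_PICKER_REQUEST);
        } catch (Exception e) {
            // build() can fail if Google Play services is missing or needs an update.
            Toast.makeText(this, "Place picker unavailable", Toast.LENGTH_SHORT).show();
        }
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == PLACE_PICKER_REQUEST && resultCode == RESULT_OK) {
            // Read back the place the user selected in the picker.
            Place place = PlacePicker.getPlace(data, this);
            Toast.makeText(this, "Picked: " + place.getName(), Toast.LENGTH_SHORT).show();
        }
    }
}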

To get started with the Places API for Android, watch this DevByte, check out the developer documentation, and play with the demos. To apply for the Places API for iOS beta program, go here.

Join the discussion on

+Android Developers
Categories: Programming

Developing audio apps for Android Auto

Android Developers Blog - Thu, 05/14/2015 - 04:09

Posted by Joshua Gordon, Developer Advocate

Have you ever wanted to develop apps for the car, but found the variety of OEMs and proprietary platforms too big of a hurdle? Now with Android Auto, you can target a single platform supported by vehicles coming soon from 28 manufacturers.

Using familiar Android APIs, you can easily add a great in-car user experience to your existing audio apps, with just a small amount of code. If you’re new to developing for Auto, watch this DevByte for an overview of the APIs, and check out the training docs for an end-to-end tutorial.


Playback and custom controls

Custom playback controls on NPR One and iHeartRadio.

The first thing to understand about developing audio apps on Auto is that you don’t draw your user interface directly. Instead, the framework has two well-defined UIs (one for playback, one for browsing) that are created automatically. This ensures consistent behavior across audio apps for drivers, and frees you from dealing with any car specific functionalities or layouts. Although the layout is predefined, you can customize it with artwork, color themes, and custom controls.

Both NPR One and iHeartRadio customize their UI. NPR One adds controls to mark a story as interesting, to view a list of upcoming stories, and to skip to the next story. iHeartRadio adds controls to favorite stations and to like songs. Both apps store user preferences across form factors.

Because the UI is drawn by the framework, playback commands need to be relayed to your app. This is accomplished with the MediaSession callback, which has methods like onPlay() and onPause(). All car specific functionality is handled behind the scenes. For example, you don’t need to be aware if a command came from the touch screen, the steering wheel buttons, or the user’s voice.
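To make that relay concrete, here is a minimal sketch of such a callback, assuming the support library's MediaSessionCompat.Callback; the Player interface is a hypothetical stand-in for your own playback engine, not part of the Android APIs.

import android.support.v4.media.session.MediaSessionCompat;

// Playback commands from the touch screen, steering wheel buttons, or voice
// all arrive through these methods; the framework hides the source.
public class PlaybackCallback extends MediaSessionCompat.Callback {

    // Hypothetical stand-in for your app's playback engine.
    public interface Player {
        void play();
        void pause();
        void skipToNext();
    }

    private final Player player;

    public PlaybackCallback(Player player) {
        this.player = player;
    }

    @Override
    public void onPlay() {
        player.play();   // start or resume playback
    }

    @Override
    public void onPause() {
        player.pause();
    }

    @Override
    public void onSkipToNext() {
        player.skipToNext();
    }
}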

Browsing and recommendations

Browsing content on NPR One and iHeartRadio.

The browsing UI is likewise drawn by the framework. You implement the MediaBrowserService to share your content hierarchy with the framework. A content hierarchy is a collection of MediaItems that are either playable (e.g., a song, audio book, or radio station) or browsable (e.g., a favorites folder). Together, these form a tree used to display a browsable menu of your content.
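Here is a rough sketch of such a hierarchy, assuming the support library's MediaBrowserServiceCompat (the platform MediaBrowserService has the same shape); the folder and station entries are invented for illustration.

import android.os.Bundle;
import android.support.v4.media.MediaBrowserCompat.MediaItem;
import android.support.v4.media.MediaBrowserServiceCompat;
import android.support.v4.media.MediaDescriptionCompat;

import java.util.ArrayList;
import java.util.List;

// Exposes a tiny content tree: a browsable "Favorites" folder at the root,
// containing one playable station.
public class SimpleMediaService extends MediaBrowserServiceCompat {

    private static final String ROOT_ID = "root";
    private static final String FAVORITES_ID = "favorites";

    @Override
    public BrowserRoot onGetRoot(String clientPackageName, int clientUid, Bundle rootHints) {
        // Returning a non-null root allows the car's browse UI to connect.
        return new BrowserRoot(ROOT_ID, null);
    }

    @Override
    public void onLoadChildren(String parentId, Result<List<MediaItem>> result) {
        List<MediaItem> items = new ArrayList<>();
        if (ROOT_ID.equals(parentId)) {
            items.add(new MediaItem(
                    new MediaDescriptionCompat.Builder()
                            .setMediaId(FAVORITES_ID).setTitle("Favorites").build(),
                    MediaItem.FLAG_BROWSABLE));
        } else if (FAVORITES_ID.equals(parentId)) {
            items.add(new MediaItem(
                    new MediaDescriptionCompat.Builder()
                            .setMediaId("station_1").setTitle("Example Station").build(),
                    MediaItem.FLAG_PLAYABLE));
        }
        result.sendResult(items);
    }
}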

With both apps, recommendations are key. NPR One recommends a short list of in-depth stories that can be selected from the browsing menu. These improve over time based on user feedback. iHeartRadio’s browsing menu lets you pick from favorites and recommended stations, and their “For You” feature gives recommendations based on user location. The app also provides the ability to create custom stations from the browsing menu. Doing so is efficient and requires only three taps (“Create Station” -> “Rock” -> “Foo Fighters”).

When developing for the car, it’s important to quickly connect users with content to minimize distractions while driving. It’s important to note that design considerations on Android Auto are different from those on a mobile device. If you imagine a typical media player on a phone, you may picture browsable menus of “all tracks” or “all artists”. These are not ideal in the car, where the primary focus should be on the road. Both NPR One and iHeartRadio provide good examples of this, because they avoid deep menu hierarchies and lengthy browsable lists.

Voice actions for hands free operation

Voice actions (e.g., “Play KQED”) are an important part of Android Auto. You can support voice actions in your app by implementing onPlayFromSearch() in the MediaSession.Callback. Voice actions may also be used to start your app from the home screen (e.g., “Play KQED on iHeartRadio”). To enable this functionality, declare the MEDIA_PLAY_FROM_SEARCH intent filter in your manifest. For an example, see this sample app.
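A hedged sketch of that hook, again assuming MediaSessionCompat.Callback; the two private helpers are hypothetical stand-ins for your app's catalog search and playback logic, and the manifest still needs the MEDIA_PLAY_FROM_SEARCH intent filter mentioned above.

import android.os.Bundle;
import android.support.v4.media.session.MediaSessionCompat;

// Handles "Play <something> on <your app>" requests forwarded by Android Auto.
public class VoiceSearchCallback extends MediaSessionCompat.Callback {

    @Override
    public void onPlayFromSearch(String query, Bundle extras) {
        if (query == null || query.isEmpty()) {
            // An empty query means "play something"; fall back to a sensible
            // default such as the user's last station.
            playDefaultStation();
        } else {
            // Match the spoken query (e.g. "KQED") against your own catalog.
            playBestMatchFor(query);
        }
    }

    // Hypothetical helpers standing in for your app's playback logic.
    private void playDefaultStation() { /* ... */ }

    private void playBestMatchFor(String query) { /* ... */ }
}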

Next steps

NPR One and iHeartRadio are just two examples of great apps for Android Auto today. They feel like a part of the car, and look and sound great. You can extend your apps to the car today, too, and developing for Auto is easy. The framework handles the car specific functionalities for you, so you’re free to focus on making your app special. Join the discussion at http://g.co/androidautodev if you have questions or ideas to share. To get started on your app, visit developer.android.com/auto.

Join the discussion on

+Android Developers
Categories: Programming

Integrate Play data into your workflow with data exports

Android Developers Blog - Thu, 05/14/2015 - 04:08

Posted by Frederic Mayot, Google Play team

The Google Play Developer Console makes a wealth of data available to you so you have the insight needed to successfully publish, grow, and monetize your apps and games. We appreciate that some developers want to access and analyze their data beyond the visualization offered today in the Developer Console, which is why we’ve made financial information, crash data, and user reviews available for export. We're now also making all the statistics on your apps and games (installs, ratings, GCM usage, etc.) accessible via Google Cloud Storage.

New Reports section in the Google Play Developer Console

We’ve added a Reports tab to the Developer Console so that you can view and access all available data exports in one place.

A reliable way to access Google Play data

This is the easiest and most reliable way to download your Google Play Developer Console statistics. You can access all of your reports, including install statistics, reviews, crashes, and revenue.

Programmatic access to Google Play data

This new Google Cloud Storage access will open up a wealth of possibilities. For instance, you can now programmatically:

  • import install and revenue data into your in-house dashboard
  • run custom analysis
  • import crashes and ANRs into your bug tracker
  • import reviews into your CRM to monitor feedback and reply to your users

Your data is available in a Google Cloud Storage bucket, which is most easily accessed using gsutil. To get started, follow these three simple steps to access your reports:

  1. Install the gsutil tool.
    • Authenticate to your account using your Google Play Developer Console credentials.
  2. Find your reporting bucket ID on the new Reports section.
    • Your bucket ID begins with: pubsite_prod_rev (example: pubsite_prod_rev_1234567890)
  3. Use the gsutil ls command to list directories/reports and gsutil cp to copy the reports. Your reports are organized in directories by package name, as well as year and month of their creation.
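For illustration, the commands look roughly like this, using the example bucket ID above; the report directory and file name are hypothetical, so check the output of gsutil ls for the actual layout of your bucket.

# List the top of your reporting bucket, then drill into the report directories
gsutil ls gs://pubsite_prod_rev_1234567890/
gsutil ls gs://pubsite_prod_rev_1234567890/stats/installs/

# Copy one monthly install report to the current directory
gsutil cp gs://pubsite_prod_rev_1234567890/stats/installs/installs_com.example.app_201504.csv .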

Read more about exporting report data in the Google Play Developer Help Center.

Note about data ownership on Google Play and Cloud Platform: Your Google Play developer account is gaining access to a dedicated, read-only Google Cloud Storage bucket owned by Google Play. If you’re a Google Cloud Storage customer, the rest of your data is unaffected and not connected to your Google Play developer account. Google Cloud Storage customers can find out more about their data storage on the terms of service page.

Categories: Programming

Enable your messaging app for Android Auto

Android Developers Blog - Thu, 05/14/2015 - 04:06

Posted by Joshua Gordon, Developer Advocate

What if there was a way for drivers to stay connected using your messaging app, while keeping their hands on the wheel and eyes on the road?

Android Auto helps drivers stay connected, but in a more convenient way that's integrated with the car. It eliminates the need to type and read messages by replacing these activities with a voice controlled interface.

Enabling your messaging app to work with Android Auto is easy. Developers like Skype and textPlus have already done so. Check out this DevByte for an overview of the messaging APIs, and see the developer training guide for a deep dive. Read on for a look at the key steps involved.


Message notifications on the car’s display

When an Android 5.0+ phone is connected to a compatible car, users receive incoming message notifications from Auto-enabled apps on the car’s head unit display. Your app runs on the phone, but is controlled by the car. To learn more about how this works, watch the Introduction to Android Auto DevByte.

A new message notification from Skype

If your app already uses notifications to alert the user to incoming messages, it’ll be easy to extend these for Auto. It takes just a few lines of code, and you won’t have to change how your app works on the phone.

There are a couple of small differences between message notifications on Auto vs. a phone. First, on Auto, a preview of the message content isn’t shown, because messaging is driven entirely by voice. Second, message notifications are backed by a conversation object, which is simply a collection of unread messages from a particular sender.

Decorate your notification with the CarExtender to add support for the car. Next, use the UnreadConversation.Builder to create a conversation, and populate it by iterating over your app's unread messages (from a certain sender) and adding them to the conversation. Pass your conversation object to the CarExtender, and you’re done!
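Here is a minimal sketch of those steps, assuming the support-v4 NotificationCompat.CarExtender and UnreadConversation.Builder APIs of the time; the helper class and its parameters are invented for illustration, and the read PendingIntent is covered in the next section.

import android.app.PendingIntent;
import android.content.Context;
import android.support.v4.app.NotificationCompat;
import android.support.v4.app.NotificationCompat.CarExtender;
import android.support.v4.app.NotificationCompat.CarExtender.UnreadConversation;

import java.util.List;

public class CarNotifications {

    // Builds a car-ready notification from one sender's unread messages.
    // readIntent is the PendingIntent the system fires when the driver
    // listens to the conversation.
    public static NotificationCompat.Builder forConversation(
            Context context, String senderName, List<String> unreadMessages,
            long latestTimestamp, PendingIntent readIntent) {

        UnreadConversation.Builder conversation =
                new UnreadConversation.Builder(senderName)
                        .setLatestTimestamp(latestTimestamp)
                        .setReadPendingIntent(readIntent);
        for (String message : unreadMessages) {
            conversation.addMessage(message);  // oldest to newest
        }

        return new NotificationCompat.Builder(context)
                .setSmallIcon(android.R.drawable.sym_action_chat)
                .setContentTitle(senderName)
                .setContentText(unreadMessages.get(unreadMessages.size() - 1))
                .extend(new CarExtender().setUnreadConversation(conversation.build()));
    }
}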

Tap to hear messages

Tapping on a message notification plays it back on the car's sound system, via text to speech. This is handled automatically by the framework; no additional code is required. Pretty cool, right?

In order to know when the user hears a message, you provide a PendingIntent that’s triggered by the system. That’s one of just two intents you’ll need to handle to enable your app for Auto.

Reply by voice

Voice control is the real magic of Android Auto. Users reply to messages by speaking, via voice recognition. This is far faster and more natural than typing.

Enabling this functionality is as simple as adding a RemoteInput instance to your conversation objects, before you issue the notification. Speech recognition is handled entirely by the framework. The recognition result is delivered to your app as a plain text string via a second PendingIntent.

Replying to a message from textPlus by voice.
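A minimal sketch of that wiring, assuming the support-v4 RemoteInput API; the extra key, helper class, and the target of the reply PendingIntent are hypothetical.

import android.app.PendingIntent;
import android.content.Intent;
import android.os.Bundle;
import android.support.v4.app.NotificationCompat.CarExtender.UnreadConversation;
import android.support.v4.app.RemoteInput;

public class VoiceReplies {

    // Key used to read the transcribed reply text back out of the reply intent.
    public static final String EXTRA_VOICE_REPLY = "extra_voice_reply";

    // Attaches a voice reply action to the conversation builder from the
    // previous sketch. replyIntent should point at your app's reply receiver.
    public static UnreadConversation.Builder addVoiceReply(
            UnreadConversation.Builder conversation, PendingIntent replyIntent) {
        RemoteInput voiceReply = new RemoteInput.Builder(EXTRA_VOICE_REPLY)
                .setLabel("Reply by voice")
                .build();
        return conversation.setReplyAction(replyIntent, voiceReply);
    }

    // In the component handling replyIntent, the recognition result arrives
    // as plain text in the intent's RemoteInput results.
    public static CharSequence extractReply(Intent intent) {
        Bundle results = RemoteInput.getResultsFromIntent(intent);
        return results != null ? results.getCharSequence(EXTRA_VOICE_REPLY) : null;
    }
}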

Next steps

Make your messaging app more natural to use in the car by enabling it for Android Auto. Now drivers can stay connected without typing or reading messages. It just takes a few lines of code. To learn more, visit developer.android.com/auto.

Join the discussion on

+Android Developers
Categories: Programming

R: ggplot – Displaying multiple charts with a for loop

Mark Needham - Thu, 05/14/2015 - 01:17

Continuing with my analysis of the Neo4j London user group I wanted to drill into some individual meetups and see the makeup of the people attending those meetups with respect to the cohort they belong to.

I started by writing a function which would take in an event ID and output a bar chart showing the number of people who attended that event from each cohort.

We can work out the cohort that a member belongs to by querying for the first event they attended.

Our query for the most recent Intro to graphs session looks like this:

library(RNeo4j)
graph = startGraph("http://127.0.0.1:7474/db/data/")
 
eventId = "220750415"
query =  "match (g:Group {name: 'Neo4j - London User Group'})-[:HOSTED_EVENT]->
                (e {id: {id}})<-[:TO]-(rsvp {response: 'yes'})<-[:RSVPD]-(person) 
          WITH rsvp, person
          MATCH (person)-[:RSVPD]->(otherRSVP)
 
          WITH person, rsvp, otherRSVP
          ORDER BY person.id, otherRSVP.time
 
          WITH person, rsvp, COLLECT(otherRSVP)[0] AS earliestRSVP
          return rsvp.time, earliestRSVP.time,  person.id"
 
df = cypher(graph, query, id= eventId)
 
> df %>% sample_n(10)
      rsvp.time earliestRSVP.time person.id
18 1.430819e+12      1.392726e+12 130976662
95 1.430069e+12      1.430069e+12  10286388
79 1.429035e+12      1.429035e+12  38344282
64 1.428108e+12      1.412935e+12 153473172
73 1.429513e+12      1.398236e+12 143322942
19 1.430389e+12      1.430389e+12 129261842
37 1.429643e+12      1.327603e+12   9750821
49 1.429618e+12      1.429618e+12 184325696
69 1.430781e+12      1.404554e+12  67485912
1  1.430929e+12      1.430146e+12 185405773

We’re not doing anything too clever here, just using a couple of WITH clauses to order RSVPs so we can get the earliest one for each person.

Once we’ve done that we’ll tidy up the data frame so that it contains columns for the cohort in which each member attended their first event:

timestampToDate <- function(x) as.POSIXct(x / 1000, origin="1970-01-01", tz = "GMT")
 
df$time = timestampToDate(df$rsvp.time)
df$date = format(as.Date(df$time), "%Y-%m")
df$earliestTime = timestampToDate(df$earliestRSVP.time)
df$earliestDate = format(as.Date(df$earliestTime), "%Y-%m")
 
> df %>% sample_n(10)
      rsvp.time earliestRSVP.time person.id                time    date        earliestTime earliestDate
47 1.430697e+12      1.430697e+12 186893861 2015-05-03 23:47:11 2015-05 2015-05-03 23:47:11      2015-05
44 1.430924e+12      1.430924e+12 186998186 2015-05-06 14:49:44 2015-05 2015-05-06 14:49:44      2015-05
85 1.429611e+12      1.422378e+12  53761842 2015-04-21 10:13:46 2015-04 2015-01-27 16:56:02      2015-01
14 1.430125e+12      1.412690e+12   7994846 2015-04-27 09:01:58 2015-04 2014-10-07 13:57:09      2014-10
29 1.430035e+12      1.430035e+12  37719672 2015-04-26 07:59:03 2015-04 2015-04-26 07:59:03      2015-04
12 1.430855e+12      1.430855e+12 186968869 2015-05-05 19:38:10 2015-05 2015-05-05 19:38:10      2015-05
41 1.428917e+12      1.422459e+12 133623562 2015-04-13 09:20:07 2015-04 2015-01-28 15:37:40      2015-01
87 1.430927e+12      1.430927e+12 185155627 2015-05-06 15:46:59 2015-05 2015-05-06 15:46:59      2015-05
62 1.430849e+12      1.430849e+12 186965212 2015-05-05 17:56:23 2015-05 2015-05-05 17:56:23      2015-05
8  1.430237e+12      1.425567e+12 184979500 2015-04-28 15:58:23 2015-04 2015-03-05 14:45:40      2015-03

Now that we’ve got that we can group by the earliestDate cohort and then create a bar chart:

byCohort = df %>% count(earliestDate) 
 
ggplot(aes(x= earliestDate, y = n), data = byCohort) + 
    geom_bar(stat="identity", fill = "dark blue") +
    theme(axis.text.x=element_text(angle=90,hjust=1,vjust=1))

[Bar chart: attendees of this event by the cohort in which they first attended]

This is good and gives us the insight that most of the members attending this version of intro to graphs just joined the group. The event was on 7th April and most people joined in March which makes sense.

Let’s see if that trend continues over the previous two years. To do this we need to create a for loop which goes over all the Intro to Graphs events and then outputs a chart for each one.

First I pulled out the code above into a function:

plotEvent = function(eventId) {
  query =  "match (g:Group {name: 'Neo4j - London User Group'})-[:HOSTED_EVENT]->
                (e {id: {id}})<-[:TO]-(rsvp {response: 'yes'})<-[:RSVPD]-(person) 
          WITH rsvp, person
          MATCH (person)-[:RSVPD]->(otherRSVP)
 
          WITH person, rsvp, otherRSVP
          ORDER BY person.id, otherRSVP.time
 
          WITH person, rsvp, COLLECT(otherRSVP)[0] AS earliestRSVP
          return rsvp.time, earliestRSVP.time,  person.id"
 
  df = cypher(graph, query, id= eventId)
  df$time = timestampToDate(df$rsvp.time)
  df$date = format(as.Date(df$time), "%Y-%m")
  df$earliestTime = timestampToDate(df$earliestRSVP.time)
  df$earliestDate = format(as.Date(df$earliestTime), "%Y-%m")
 
  byCohort = df %>% count(earliestDate) 
 
  ggplot(aes(x= earliestDate, y = n), data = byCohort) + 
    geom_bar(stat="identity", fill = "dark blue") +
    theme(axis.text.x=element_text(angle=90,hjust=1,vjust=1))
}

We’d call it like this for the Intro to graphs meetup:

> plotEvent("220750415")

Next I tweaked the code to look up all Intro to graphs events and then loop through and output a chart for each event:

events = cypher(graph, "match (e:Event {name: 'Intro to Graphs'}) RETURN e.id ORDER BY e.time")
 
for(event in events[, 1]) {
  plotEvent(as.character(event))
}

Unfortunately that doesn’t print anything at all, which we can fix by storing our plots in a list and then displaying them afterwards:

library(gridExtra)
p = list()
for(i in 1:count(events)$n) {
  event = events[i, 1]
  p[[i]] = plotEvent(as.character(event))
}
 
do.call(grid.arrange,p)

[Grid of bar charts: one per Intro to Graphs event]

This visualisation is probably better without any of the axes, so let’s update the function to scrap those. We’ll also add the date of the event at the top of each chart, which will require a slight tweak of the query:

plotEvent = function(eventId) {
  query =  "match (g:Group {name: 'Neo4j - London User Group'})-[:HOSTED_EVENT]->
                (e {id: {id}})<-[:TO]-(rsvp {response: 'yes'})<-[:RSVPD]-(person) 
            WITH e,rsvp, person
            MATCH (person)-[:RSVPD]->(otherRSVP)
 
            WITH e,person, rsvp, otherRSVP
            ORDER BY person.id, otherRSVP.time
 
            WITH e, person, rsvp, COLLECT(otherRSVP)[0] AS earliestRSVP
            return rsvp.time, earliestRSVP.time,  person.id, e.time"
 
  df = cypher(graph, query, id= eventId)
  df$time = timestampToDate(df$rsvp.time)
  df$eventTime = timestampToDate(df$e.time)
  df$date = format(as.Date(df$time), "%Y-%m")
  df$earliestTime = timestampToDate(df$earliestRSVP.time)
  df$earliestDate = format(as.Date(df$earliestTime), "%Y-%m")
 
  byCohort = df %>% count(earliestDate) 
 
  ggplot(aes(x= earliestDate, y = n), data = byCohort) + 
    geom_bar(stat="identity", fill = "dark blue") +
    theme(axis.ticks = element_blank(), 
          axis.text.x = element_blank(), 
          axis.text.y = element_blank(),
          axis.title.x = element_blank(),
          axis.title.y = element_blank()) + 
    labs(title = df$eventTime[1])
}
[Grid of bar charts: one per Intro to Graphs event, titled with the event date]

I think this makes it a bit easier to read although I’ve made the mistake of not having all the charts representing the same scale – one to fix for next time.

We started doing the intro to graphs sessions less frequently towards the end of last year so my hypothesis was that we’d see a range of people from different cohorts RSVPing for them but that doesn’t seem to be the case. Instead it’s very dominated by people signing up close to the event.

Categories: Programming

Just Because You Say Words, It Doesn't Make Them True

Herding Cats - Glen Alleman - Wed, 05/13/2015 - 22:33

When we hear words about any topic (my favorite, of course, is all things project management), that alone doesn't make them true.

  • Earned Value is a bad idea in IT projects because it doesn't measure business value
    • Turns out this is actually true. The confusion was with the word VALUE
    • In Earned Value, Value is Budgeted Cost of Work Performed (BCWP) in the DOD vocabulary or Earned Value in the PMI vocabulary
  • Planning is a waste
    • Planning is a Strategy for the successful completion of the project
    • It'd be illogical not to have a Strategy for the success of the project
    • So we need a plan.
    • As Ben Franklin knew, "Failure to plan means we're planning to fail"
  • The Backlog is a waste and grooming the Backlog is a bigger waste
    • The backlog is a list of planned work to produce the value of the project
    • The backlog can change and this is the "change control paradigm" for agile.
    • Change control is a critical process for all non-trivial projects
    • Without change control we don't have a stable description of what "Done" looks like for the project. Without having some sense of "Done" we're on a Death March project
  • Unit Testing is a waste
    • Unit testing is the first step of Quality Assurance and Independent Verification and Validation
    • Without UT, even in the presence of a QA and IV&V process, it will be "garbage in, garbage out" for the software.
    • Assuming the developers can do the testing is naive at best on any non-trivial project
  • Decisions can be made in the presence of uncertainty without estimates
    • This violates the principles of Microeconomics 
    • Microeconomics is a branch of economics that studies the behavior of individuals and small impacting organizations in making decisions on the allocation of limited resources 
    • All projects have uncertainty - reducible and irreducible.
    • This uncertainty creates risk. This risk impacts the behaviors of the project work.
    • Making decisions - choices - in the presence of these uncertainties and resulting risks needs to assess some behavior that is probabilistic.
    • This probabilistic  behavior is driven by underlying statistical processes.

So when we hear some phrase, idea, or conjecture, ask for evidence. Ask for the domain. Ask for examples. If you hear "we're just exploring," ask who's paying for that exploring, because it is likely those words are unsubstantiated conjecture from personal experience, and not very useful outside that personal experience.

Categories: Project Management

Episode 226: Eric Evans on Domain-Driven Design at 10 Years

Eberhard Wolff talks with Eric Evans, the founder of domain-driven design (DDD), about its impact after 10 years. DDD consists of domain-modelling patterns; it has established itself as a sound approach for designing systems with complex requirements. The show covers an introduction to DDD, how the community’s understanding of DDD has changed in the last […]
Categories: Programming

New ''Understanding Software Projects'' Lectures Posted

10x Software Development - Steve McConnell - Wed, 05/13/2015 - 18:25

Two new lectures have been posted in my Understanding Software Projects lecture series at http://cxlearn.com. All the lectures that have been posted are still free (though this won't last forever). Lectures posted so far include: 

0.0 Understanding Software Projects - Intro

     0.1 Introduction - My Background (new this week)

     0.2 Reading the News

1.0 The Software Lifecycle Model - Intro

     1.1 Lifecycle Model - Defect Removal (new this week)

2.0 Software Size

Check out the lectures at http://cxlearn.com!

Understanding Software Projects - Steve McConnell

To see the Future of the Apple Watch Just Go to Disneyland



Removing friction. That’s what the Apple Watch is good at.

Many think watches are a category flop because they don’t have that obvious killer app. Like hot sauce, maybe a watch isn’t something you eat all by itself, but it gives whatever you sprinkle it on a little extra flavor?

Walk into your hotel, the system recognizes you, your room number pops up on your watch, you walk directly to your room and unlock it with your watch.

Walk into an airport, your flight displays on your watch along with directions to your terminal. To get on the plane you just flash your watch. On landing, walk to your rental car and unlock it with your watch.

A notification arrives that it’s time to leave for your meeting, traffic is bad, best get an early start.

While shopping you check with your partner if you need milk by talking directly through your watch. In the future you’ll just know if you need milk, but we’re not there yet.

You can do all these things with a phone. Google Now, for example. What the easy accessibility of the watch does in these scenarios is remove friction. It makes it natural for a complex backend system to talk to you about things it learns from you and your environment. Hiding in a pocket or a purse, a phone is too inconvenient and too general purpose. Your watch becomes a small custom viewport onto a much larger, more connected world.

After developing my own watch extension, using other extensions, and listening to a lot of discussion on the subject, it’s clear the form factor of a watch is very limiting and will always be limiting. You’ll never be able to do much UI-wise on a watch. Even the cleverest programmers can only do so much with so little screen real estate and low resource usage requirements. Instagram and Evernote simply aren’t the same on a watch.

But that’s OK. Every device has what it does well. It takes time for users and developers to explore a new device space.

What a watch does well is not so much enable new types of apps, but plug people into much larger and smarter systems. This is where the friction is removed.

Re-enchanting the World Disneyland Style
Categories: Architecture

The Fallacy of the Planning Fallacy

Herding Cats - Glen Alleman - Wed, 05/13/2015 - 15:06

The Planning Fallacy is well documented in many domains. Bent Flyvbjerg has documented this issue in one of his books, Mega Projects and Risk. But the Planning Fallacy is more complex than just the optimism bias. Many of the root causes for cost overruns are based in the politics of large projects.

The  planning fallacy is ...

...a phenomenon in which predictions about how much time will be needed to complete a future task display an optimistic bias (underestimate the time needed). This phenomenon occurs regardless of the individual's knowledge that past tasks of a similar nature have taken longer to complete than generally planned. The bias only affects predictions about one's own tasks; when outside observers predict task completion times, they show a pessimistic bias, overestimating the time needed. The planning fallacy requires that predictions of current tasks' completion times are more optimistic than the beliefs about past completion times for similar projects and that predictions of the current tasks' completion times are more optimistic than the actual time needed to complete the tasks.

The critical notion here is that the bias applies to one's own estimates.

With all that said, there is still a large body of evidence that estimating is a major problem.†

I have a colleague who is the former Cost Analysis Director of NASA. He gives three reasons projects get into cost, schedule, and technical trouble:

  1. We couldn't know - we're working in a domain where discovery is actually the case. We're inventing new physics, discovering new drugs that have never been discovered before. We're doing unprecedented development. Most people using the term "we're exploring" don't likely know what they're doing, and those paying are paying for that exploring. Ask yourself whether you're in the education business or actually in the research and development business.
  2. We didn't know - we could have known, but we just didn't want to. We couldn't afford to know. We didn't have time to know. We were incapable of knowing because we're outside our domain. Would you hire someone who didn't do his homework when it comes to providing the solution you're paying for? Probably not. Then why accept we didn't know as an excuse?
  3. We don't want to know - we could have known, but if we knew that'd be information that would cause this project to be canceled.

The Planning Fallacy

Daniel Kahneman (Princeton) and Amos Tversky (Stanford) describe it as “the tendency to underestimate the time, costs, and risks of future actions and overestimate the benefit of those actions”.  The results are time and cost overruns as well as benefit shortfalls.  The concept is not new: they coined the term in the 1970s and much research has taken place since, see the Resources below.

So the challenge is to not fall victim to this optimism bias and become a statistic in the Planning Fallacy.

How do we do that?

Here's our experience:

  • Start with a credible systems architecture with the topology of the delivered system:
    • By credible I mean not a paper drawing on the wall, but a SysML description of the system and its components. SysML tools can be had for free, alongside commercial products.
    • Defining the interactions between the components is the critical issue in discovering where the optimism hides. The Big Visible Chart from SysML needs to hang on the wall for all to see where these connections take place.
    • Without this BVC, the optimism sounds like "It's not that complicated; what could possibly be the issue with our estimates?"
    • It's the interfaces where the project goes bad. Self-contained components have problems for sure, but when connected to other components this becomes a system of systems, and the result is an N² problem.
  • Look for reference classes for the components
    • Has anyone here done this before?
    • No? Do we know anyone who knows anyone who's done this before?
    • Is there no system like this system in the world?
    • If the answer to that is NO, then we need another approach - we're inventing new physics and this project is actually a research project - act appropriately 
  • Do we have any experience doing this work in the past?
    • No? Then why would we get hired to work on this project?
    • Yes, but we've failed in the past?
      • No problem; did we learn anything?
      • Did we find the Root Cause of the past performance problems and take the corrective actions?
      • Did we follow a known process (Apollo) in that Root Cause Analysis and those corrective actions?
      • If not, you're being optimistic that the problems won't come back.
  • Do we have any sense of the Measures of the system that will drive cost?
    • Effectiveness - the operational measures of success that are closely related to the achievement of the mission or operational objectives, evaluated in the operational environment under a specific set of conditions.
    • Performance - the physical or functional attributes relating to the system operation, measured or estimated under specific conditions.
    • Key Performance Parameters - the capabilities and characteristics so significant that failure to meet them can be cause for reevaluation, reassessing, or termination of the program.
    • Technical Performance Measures - how well a system or system element is satisfying, or is expected to satisfy, a technical requirement or goal.
    • All the ...ilities
    • Without understanding these we have no real understanding of where the problems are going to be and the natural optimism comes out.
  • Do we know what technical and programmatic risks are going to be encountered in this project?
    • Do we have a risk register?
    • Do we know both the reducible and irreducible risks to the success of the project?
    • Do we have mitigation plans for the reducible risks?
    • For reducible risks without mitigation plans, do we have Management Reserve?
    • For irreducible risks do we have cost and schedule margin?
  • Do we have a Plan showing the increasing maturing of the deliverables of the project?
    • Do we know what Accomplishments must be performed to increase the maturity of the deliverable?
    • Do we know the Criteria for each Accomplishment, so we can measure the actual progress to plan?
    • Have we arranged the work to produce the deliverables in a logical network - or some other method like Kanban - that shows the dependencies between the work elements and the deliverables. 
    • This notion of dependencies is very underrated. 
      • The Kanban paradigm assumes this up front
      • Verifying there are actually NO dependencies is critical to all the processes based on having NO dependencies. 
      • It seems rare that those verifications actually take place
      • This is an Optimism Bias in the agile software development world.
  • Do we have a credible, statistically adjusted cost and schedule model for assessing the impact of any changes? (A toy sketch of such a model follows this list.)
    • I'm confident our costs will not be larger than our revenue - sure, right. Show me your probabilistic model.
    • No model? Then we're likely being optimistic and don't even know it.
    • Show Me The Numbers.
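To make that last point concrete, here is a toy sketch (not from the post) of what a statistically adjusted cost model can look like: each work element gets a three-point estimate, a Monte Carlo simulation sums sampled costs, and the model reports percentiles rather than a single optimistic number. All figures are invented for illustration.

import java.util.Arrays;
import java.util.Random;

// Toy Monte Carlo cost model: each work element gets a (low, most likely, high)
// estimate, the total cost is simulated many times, and percentiles are reported
// instead of a single point value. All numbers are illustrative.
public class CostModel {

    // Sample from a triangular distribution with the given low/mode/high.
    static double triangular(Random rng, double low, double mode, double high) {
        double u = rng.nextDouble();
        double cut = (mode - low) / (high - low);
        return u < cut
                ? low + Math.sqrt(u * (high - low) * (mode - low))
                : high - Math.sqrt((1 - u) * (high - low) * (high - mode));
    }

    public static void main(String[] args) {
        double[][] elements = {           // {low, most likely, high} in $K
                {80, 100, 180},           // component A
                {40, 60, 150},            // component B
                {20, 25, 90},             // integration of A and B
        };
        int trials = 100_000;
        double[] totals = new double[trials];
        Random rng = new Random(42);

        for (int t = 0; t < trials; t++) {
            double total = 0;
            for (double[] e : elements) {
                total += triangular(rng, e[0], e[1], e[2]);
            }
            totals[t] = total;
        }
        Arrays.sort(totals);
        System.out.printf("P50 = %.0f, P80 = %.0f (vs. 'most likely' sum = 185)%n",
                totals[trials / 2], totals[(int) (trials * 0.8)]);
    }
}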

So, with these and others, we can remove the fallacy of the Planning Fallacy.

This doesn't mean our project will be successful. Nothing can guarantee that. But the Probability of Success will be increased.

In the end we MUST know the Mission we are trying to accomplish and the units of measure of that Mission in terms meaningful to the decision makers. Without that we can't know what DONE looks like. And without that, only our optimism will carry us along until it is too late to turn back.


Anyone using the Planning Fallacy as the excuse for project failure, not planning, not estimating, not actually doing their job as a project and business manager will likely succeed in the quest for project failure and get what they deserve: late, over budget, and a gadget that doesn't work as needed.

† Please note that just because estimating is a problem in all domains, that's NO reason not to estimate. Likewise, planning being hard is no reason NOT to plan. Any suggestion that estimating or planning is not needed in the presence of an uncertain future - as there is on all projects - is willfully ignoring the principles of Microeconomics - making choices in the presence of uncertainty based on opportunity cost. To suggest otherwise confirms this ignorance.

Resources

These are some background readings on the Planning Fallacy problem, from the anchoring and adjustment point of view, that I've used over the years to inform our estimating processes for software-intensive systems. After reading through these I hope you come to a better understanding of many of the misconceptions about estimating and the fallacies of how it is done in practice.

Interestingly, there is a poster on Twitter in the #NoEstimates thread who objects when people post links to their own work or the work of others. Please do not fall prey to the notion that everyone has an equally informed opinion, unless you yourself have done all the research needed to cover the foundations of the topics. Outside resources are the very lifeblood of informed experience and the opinions that come from that experience.

  1. Kahneman, Daniel; Tversky, Amos (1979). "Intuitive prediction: biases and corrective procedures". TIMS Studies in Management Science 12: 313–327.
  2. "Exploring the Planning Fallacy" (PDF). Journal of Personality and Social Psychology. 1994. Retrieved 7 November 2014.
  3. Estimating Software Project Effort Using Analogies.
  4. Cost Estimation of Software Intensive Projects: A Survey of Current Practices
  5. "If you don't want to be late, enumerate: Unpacking Reduces the Planning Fallacy". Journal of Experimental Social Psychology. 15 October 2003. Retrieved 7 November 2014.
  6. A Causal Model for Software Cost Estimating Error, Albert L. Lederer and Jayesh Prasad, IEEE Transactions On Software Engineering, Vol. 24, No. 2, February 1998.
  7. Assuring Software Cost Estimates? Is It An Oxymoron? 2013 46th Hawaii International Conference on System Sciences.
  8. A Framework for the Analysis of Software Cost Estimating Accuracy, ISESE'06, September 21–22, 2006, Rio de Janeiro, Brazil. 
  9. "Overcoming the Planning Fallacy Through Willpower". European Journal of Social Psychology. November 2000. Retrieved 22 November 2014.
  10. Buehler, Roger; Griffin, Dale, & Ross, Michael (2002). "Inside the planning fallacy: The causes and consequences of optimistic time predictions". In Thomas Gilovich, Dale Griffin, & Daniel Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment, pp. 250–270. Cambridge, UK: Cambridge University Press.
  11. Buehler, Roger; Dale Griffin; Michael Ross (1995). "It's about time: Optimistic predictions in work and love". European Review of Social Psychology (American Psychological Association) 6: 1–32. doi:10.1080/14792779343000112.
  12. Lovallo, Dan; Daniel Kahneman (July 2003). "Delusions of Success: How Optimism Undermines Executives' Decisions". Harvard Business Review: 56–63.
  13. Buehler, Roger; Dale Griffin; Michael Ross (1994). "Exploring the "planning fallacy": Why people underestimate their task completion times". Journal of Personality and Social Psychology (American Psychological Association) 67 (3): 366–381. doi:10.1037/0022-3514.67.3.366.
  14. Buehler, Roger; Dale Griffin; Johanna Peetz (2010). "The Planning Fallacy: Cognitive, Motivational, and Social Origins" (PDF). Advances in Experimental Social Psychology (Academic Press) 43: 9.
  15. "Hourglass Is Half Full or Half Empty: Temporal Framing and the Group Planning Fallacy". Group Dynamics: Theory, Research, and Practice. September 2005. Retrieved 22 November 2014.
  16. Stephanie P. Pezzo, Mark V. Pezzo, and Eric R. Stone. "The social implications of planning: How public predictions bias future plans". Journal of Experimental Social Psychology, 2006, 221–227.
  17. "Underestimating the Duration of Future Events: Memory Incorrectly Used or Memory Bias?". American Psychological Association. September 2005. Retrieved 21 November 2014.
  18. "Focalism: A source of durability bias in affective forecasting". American Psychological Association. May 2000. Retrieved 21 November 2014.
  19. Jones, Larry R; Euske, Kenneth J (October 1991). "Strategic misrepresentation in budgeting". Journal of Public Administration Research and Theory (Oxford University Press) 1 (4): 437–460. Retrieved 11 March 2013.
  20. Taleb, Nassim (2012-11-27). Antifragile: Things That Gain from Disorder. ISBN 9781400067824.
  21. "Allocating time to future tasks: The effect of task segmentation on planning fallacy bias". Memory & Cognition. June 2008. Retrieved 7 November 2014.
  22. "No Light at the End of his Tunnel: Boston's Central Artery/Third Harbor Tunnel Project". Project on Government Oversight. 1 February 1995. Retrieved 7 November 2014.
  23. "Denver International Airport" (PDF). United States General Accounting Office. September 1995. Retrieved 7 November 2014.
  24. Lev Virine and Michael Trumper. Project Decisions: The Art and Science, Vienna, VA: Management Concepts, 2008. ISBN 978-1-56726-217-9 
    • Michael and Lev provide the Risk Management tool we use - Risky Project.
    • Risky Project is a Monte Carlo Simulation tool for reducible and irreducible risk from probability distribution functions of the uncertainty in project.
    • Which, by the way, is an actual MCS tool, not one based on bootstrapping a small number of past samples many times over.
  25. Overcoming the planning fallacy through willpower: effects of implementation intentions on actual and predicted task-completion times.
  26. Anchoring and Adjustment in Software Estimation, Jorge Aranda and Steve Easterbrook, ESEC-FSE’05, September 5–9, 2005, Lisbon, Portugal.
  27. Anchoring and Adjustment in Software Estimation, Jorge Aranda, PhD Thesis, University of Toronto, 2005.
  28. Anchoring and Adjustment in Software Project Management: An Experiment Investigation, Timothy P. Costello, Naval Post Graduate School, September 1992.
  29. Anchoring Effect, Thomas Mussweiler, Birte Englich, and Fritz Strack
  30. Anchoring, Non-Standard Preferences: How We choose by Comparing with a Nearby Reference Point.
  31. Reference points and redistributive preferences: Experimental evidence, Jimmy Charité, Raymond Fisman, and Ilyana Kuziemko
  32. Anchoring and Adjustment, (YouTube), Daniel Kahneman. This anchoring and adjustment discussion is critical to how we ask the question how much, how big, and when.
  33. Anchoring unbound, Nicholas Epley and Thomas Gilovich 

  34. Assessing Ranges and Possibilities, Decision Analysis for the Professional, Chapter 12, Strategic Decision and Risk Management, Stanford Certificate Program. 
    • This book by the way should be mandatory reading for anyone suggesting the decisions can be made in the absence of estimates.
    • They can't; don't accept claims that they can, because they can't.
  35. Attention and Effort, Daniel Kahneman, Prentice Hall, The Hebrew University of Jerusalem, 1973.
  36. Availability: A Heuristic for Judging Frequency and Probability, Amos Tversky and Daniel Kahneman.
  37. On the Reality of Cognitive Illusions, Daniel Kahneman, Princeton University, Amos Tversky, Stanford University.

  38. Efficacy of Bias Awareness in Debiasing Oil and Gas Judgments, Matthew B. Welsh, Steve H. Begg, and Reidar B. Bratvold.
  39. The framing effect and risky decisions: Examining cognitive functions with fMRI, Cleotilde Gonzalez, Jason Dana, Hideya Koshino, and Marcel Just, Journal of Economic Psychology, 26 (2005), 1-20.

  40. Discussion Note: Review of Tversky & Kahneman (1974): Judgment under uncertainty: heuristics and biases, Micheal Axelsen, UQ Business School, The University of Queensland, Brisbane, Australia.
  41. The Anchoring-and-Adjustment Heuristic, Why the Adjustments Are Insufficient, Nicholas Epley and Thomas Gilovich.

  42. Judgment under Uncertainty: Heuristics and Biases, Amos Tversky; Daniel Kahneman, Science, New Series, Vol. 185, No. 4157 (Sep. 27, 1974), pp. 1124-1131.

This should be enough to get you started and set the stage for rejecting any half-baked ideas about anchoring and adjustment, planning fallacies, no need to estimate, and the collection of other cockamamie ideas floating around the web on how to make credible decisions with other people's money.

Categories: Project Management

Quote of the Month May 2015

From the Editor of Methods & Tools - Wed, 05/13/2015 - 14:45
Agile methods ask practitioners to think, and frankly, that‘s a hard sell. It is far more comfortable to simply follow what rules are given and claim you’re “doing it by the book.” It’s easy, it’s safe from ridicule or recrimination; you won’t get fired for it. While we might publicly decry the narrow confines of a set of rules, there is safety and comfort there. But of course, to be agile – or effective – isn’t about comfort […]. And if you only pick a handful of rules that you feel ...


Champfrogs: A SketchKeynote Presentation

NOOP.NL - Jurgen Appelo - Wed, 05/13/2015 - 13:10

I have blogged before about my experiment with a sketchkeynote format. In this presentation, the entire slide deck consists of hand-made drawings. And when I present, the drawings appear on-screen one-by-one, while the item I’m talking about is always highlighted.

The post Champfrogs: A SketchKeynote Presentation appeared first on NOOP.NL.

Categories: Project Management

Attributes That Drive The Need To Scale Agile


No methodology or framework is 100% perfect, like weather prediction.

Agile was born as a synthesis of many minds and many frameworks. That synthesis yielded a single philosophy; however, since each person and framework brought a perspective to the table, that philosophy has been implemented as a wide variety of methodologies and frameworks. No methodology or framework is 100% perfect for every piece of work. Perfect is binary: when the answer is that a specific framework isn't a perfect fit, organizations need to either find another approach or tailor the approach to make it fit. Scaling is a form of tailoring. Process frameworks have always recognized the need to tailor the process to meet the needs of the organization and teams doing the work. For example, even the venerable CMMI, often lambasted for generating heavy or static processes, actually includes practices for tailoring across the model and the processes created to implement the model. In Agile, scaling often requires tailoring the processes used at the team level to support larger efforts. While all sorts of factors can drive the need to scale or tailor Agile frameworks, the size of the work is typically the single most critical driver. Other attributes that combine with size and affect how scaling is approached include complexity, risk and organizational politics.

Size of an effort is influenced by the amount of functional, non-functional and technical requirements needed to solve a specific problem. How related or integrated those requirements are will affect how work is organized, and therefore the number of teams required to deliver the work. A large, highly related effort will require coordination, which will require added processes such as Scrum-of-Scrums meetings or other forms of program management.

Complexity is made up of a huge number of attributes that include (but are not limited to) how difficult the technical problem will be to solve, whether the team has ever used the technology before, how many disciplines will be involved in solving the problem, and size. Complexity directly relates to how difficult the work will be to deliver. All things being equal, the more complex the work, the more effort will be required. A larger effort, or more disciplines needed to deliver the project, typically means more teams and more coordination. More effort, more disciplines or more people leads to a need for techniques to scale team-level Agile.

Risk is a reflection of uncertainty. Any event that could have a negative (or positive) impact that the team or organization can't predict is a risk. A risk that can have a significant impact on the work or the organization as a whole needs to be monitored and managed. Risk management techniques are commonly added to scaled Agile frameworks.

Organizational politics might be considered a specialized type of complexity. Typically, organizational politics generates a higher need for oversight and reporting than typical team-level Agile techniques provide. The added organizational requirements generate the need to add steps, processes and reviews, which will require scaling basic team-level Agile.

Not all projects are exactly the same. This is why we need to scale team-level Agile. Team-level Agile smooths out some of the required process differences through the use of techniques such as time boxes and user story grooming. However, despite these techniques, work efforts (projects, programs, releases or products) are not all the same. Size, complexity, risk and organizational politics generate a need to add steps and processes on top of team-level Agile to scale up to meet the needs of larger, more complex or riskier work.


Categories: Process Management

Sponsored Post: Learninghouse, OpenDNS, MongoDB, Internap, Aerospike, SignalFx, InMemory.Net, Couchbase, VividCortex, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • The Cloud Platform team at OpenDNS is building a PaaS for our engineering teams to build and deliver their applications. This is a well rounded team covering software, systems, and network engineering and expect your code to cut across all layers, from the network to the application. Learn More

  • At Scalyr, we're analyzing multi-gigabyte server logs in a fraction of a second. That requires serious innovation in every part of the technology stack, from frontend to backend. Help us push the envelope on low-latency browser applications, high-speed data processing, and reliable distributed systems. Help extract meaningful data from live servers and present it to users in meaningful ways. At Scalyr, you’ll learn new things, and invent a few of your own. Learn more and apply.

  • UI Engineer - AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data - AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • 90 Days. 1 Bootcamp. A whole new life. Interested in learning how to code? Concordia St. Paul's Coding Bootcamp is an intensive, fast-paced program where you learn to be a software developer. In this full-time, 12-week on-campus course, you will learn either .NET or Java and acquire the skills needed for entry-level developer positions. For more information, read the Guide to Coding Bootcamp or visit bootcamp.csp.edu.

  • June 2nd – 4th, Santa Clara: Register for the largest NoSQL event of the year, Couchbase Connect 2015, and hear how innovative companies like Cisco, TurboTax, Joyent, PayPal, Nielsen and Ryanair are using our NoSQL technology to solve today’s toughest big data challenges. Register Today.

  • The Art of Cyberwar: Security in the Age of Information. Cybercrime is an increasingly serious issue both in the United States and around the world; the estimated annual cost of global cybercrime has reached $100 billion, with over 1.5 million victims per day affected by data breaches, DDoS attacks, and more. Learn about the current state of cybercrime and the cybersecurity professionals charged with combating it in The Art of Cyberwar: Security in the Age of Information, provided by Russell Sage Online, a division of The Sage Colleges.

  • MongoDB World brings together over 2,000 developers, sysadmins, and DBAs in New York City on June 1-2 to get inspired, share ideas and get the latest insights on using MongoDB. Organizations like Salesforce, Bosch, the Knot, Chico’s, and more are taking advantage of MongoDB for a variety of ground-breaking use cases. Find out more at http://mongodbworld.com/ but hurry! Super Early Bird pricing ends on April 3.
Cool Products and Services
  • Turn chaotic logs and metrics into actionable data. Scalyr replaces all your tools for monitoring and analyzing logs and system metrics. Imagine being able to pinpoint and resolve operations issues without juggling multiple tools and tabs. Get visibility into your production systems: log aggregation, server metrics, monitoring, intelligent alerting, dashboards, and more. Trusted by companies like Codecademy and InsideSales. Learn more and get started with an easy 2-minute setup. Or see how Scalyr is different if you're looking for a Splunk alternative or Loggly alternative.

  • Instructions for implementing Redis functionality in Aerospike. Aerospike Director of Applications Engineering, Peter Milne, discusses how to obtain the semantic equivalent of Redis operations, on simple types, using Aerospike to improve scalability, reliability, and ease of use. Read more.

  • SQL for Big Data: Price-performance Advantages of Bare Metal. When building your big data infrastructure, price-performance is a critical factor to evaluate. Data-intensive workloads with the capacity to rapidly scale to hundreds of servers can escalate costs beyond your expectations. The inevitable growth of the Internet of Things (IoT) and fast big data will only lead to larger datasets, and a high-performance infrastructure and database platform will be essential to extracting business value while keeping costs under control. Read more.

  • SignalFx: just launched an advanced monitoring platform for modern applications that's already processing 10s of billions of data points per day. SignalFx lets you create custom analytics pipelines on metrics data collected from thousands or more sources to create meaningful aggregations--such as percentiles, moving averages and growth rates--within seconds of receiving data. Start a free 30-day trial!

  • InMemory.Net provides a .NET native in-memory database for analysing large amounts of data. It runs natively on .NET, and provides native .NET, COM & ODBC APIs for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex goes beyond monitoring and measures the system's work on your MySQL and PostgreSQL servers, providing unparalleled insight and query-level analysis. This unique approach ultimately enables your team to work more effectively, ship more often, and delight more customers.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

There is No Such Thing as Free

Herding Cats - Glen Alleman - Tue, 05/12/2015 - 16:34

When our children were in High School, I strongly suggested they both take an economics class. Our daughter came home one day and announced at the dinner table:

"Dad, we learned something important today in Economics class."
"What's that, dear?" I said, knowing the answer.
"There's no such thing as free," she said.

In finance, There is No Such Thing as a Free Lunch (TANSTAAFL) refers to the opportunity cost paid to make a decision. The decision to consume one product usually comes with the trade-off of giving up the consumption of something else. 

In a recent conversation, I was introduced to the notion of Extreme Contracts:

  • Very short iterations, usually one week.
  • A flat fee for every iteration.
  • Money back guarantee.

The first bullet sounds like a good idea, provided one week can actually produce testable outcomes. The second bullet means that the variable will be the produced outcomes, since the probability that all work is of the same size, same risk, same effort is likely low on any non-trivial project.

The last bullet fails to acknowledge the principle of lost opportunity cost. This is the "there is no such thing as free" part. When the delivered software is not what is needed, the cost of the software under Extreme Contracts is free. But the capabilities that were not delivered in the time frame they were needed are not free. They have a cost. This is the opportunity cost that is lost.
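
As a back-of-the-envelope illustration of that point, here is a small sketch in R with entirely invented numbers: it compares what a money-back guarantee returns with the value of a capability that did not arrive when it was needed.

# Hypothetical figures, purely for illustration
iteration_fee    = 10000   # flat fee refunded under the money-back guarantee
monthly_value    = 50000   # value the missing capability was expected to produce per month
months_delayed   = 3       # how long the organization goes without that capability

refund           = iteration_fee
opportunity_cost = monthly_value * months_delayed

opportunity_cost - refund  # 140000: the part of the loss no guarantee covers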

The basis of good project management, and especially in our domain, using Earned Value, depends on providing the needed capabilities at the needed time for the needed cost. Baked into that paradigm are all the costs, planned upfront with the appropriate confidence levels, alternatives assessment, and margins and reserves: the discovery cost, the risk mitigation cost - both reducible and irreducible - and the cost recovery from productive use of the delivered product or service on the planned date for the planned cost.
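
For readers unfamiliar with Earned Value, a minimal sketch of its two standard indices follows, again in R and again with made-up figures; the formulas (CPI = EV / AC, SPI = EV / PV) are the standard ones, but the numbers are not taken from any real project.

# Hypothetical status of a project part-way through, for illustration only
PV = 300000   # Planned Value: budgeted cost of the work scheduled to date
EV = 240000   # Earned Value: budgeted cost of the work actually performed
AC = 270000   # Actual Cost: what the performed work actually cost

CPI = EV / AC  # cost performance index ~0.89, i.e. over budget
SPI = EV / PV  # schedule performance index 0.80, i.e. behind schedule

c(CPI = CPI, SPI = SPI)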

This is the foundation of Capabilities Based Planning used in enterprise IT and software intensive systems development.

Capabilities Based Planning (v2) from Glen Alleman [slide deck]

Related articles:
  • The Reason We Plan, Schedule, Measure, and Correct
  • How We Make Decisions is as Important as What We Decide
  • The Flaw of Empirical Data Used to Make Decisions About the Future
Categories: Project Management

R: Cohort heatmap of Neo4j London meetup

Mark Needham - Tue, 05/12/2015 - 00:16

A few months ago I had a go at doing some cohort analysis of the Neo4j London meetup group which was an interesting experiment but unfortunately resulted in a chart that was completely illegible.

I wasn’t sure how to progress from there but a few days ago I came across the cohort heatmap which seemed like a better way of visualising things over time.

The underlying idea is still the same – we’re comparing different cohorts of users against each other to see whether a change or intervention we made at a certain time had any impact.

However, the way we display the cohorts changes and I think for the better.

To recap, we start with the following data frame:

# The code in this post uses dplyr, zoo and ggplot2
library(dplyr)
library(zoo)
library(ggplot2)

df = read.csv("/tmp/df.csv")
> df %>% sample_n(5)
        rsvp.time person.id                time    date
255  1.354277e+12  12228948 2012-11-30 12:05:08 2012-11
2475 1.407342e+12  19057581 2014-08-06 16:26:04 2014-08
3988 1.421769e+12  66122172 2015-01-20 15:58:02 2015-01
4411 1.419377e+12 165750262 2014-12-23 23:27:44 2014-12
1010 1.383057e+12  74602292 2013-10-29 14:24:32 2013-10

And we need to transform this into a data frame which is grouped by cohort (members who attended their first meetup in a particular month). The following code gets us there:

firstMeetup = df %>% 
  group_by(person.id) %>% 
  summarise(firstEvent = min(time), count = n()) %>% 
  arrange(desc(count))
firstMeetup$date = format(as.Date(firstMeetup$firstEvent), "%Y-%m")
 
countsForCohort = function(df, firstMeetup, cohort) {
  members = (firstMeetup %>% filter(date == cohort))$person.id
 
  attendance = df %>% 
    filter(person.id %in% members) %>% 
    count(person.id, date) %>% 
    ungroup() %>%
    count(date)
 
  allCohorts = df %>% select(date) %>% unique
  cohortAttendance = merge(allCohorts, attendance, by = "date", all.x = TRUE)  
  cohortAttendance[is.na(cohortAttendance) & cohortAttendance$date > cohort] = 0
  cohortAttendance %>% mutate(cohort = cohort, retention = n / length(members), members = n)  
}
 
cohorts = collect(df %>% select(date) %>% unique())[,1]
 
cohortAttendance = data.frame()
for(cohort in cohorts) {
  cohortAttendance = rbind(cohortAttendance,countsForCohort(df, firstMeetup, cohort))      
}
 
monthNumber = function(cohort, date) {
  cohortAsDate = as.yearmon(cohort)
  dateAsDate = as.yearmon(date)
 
  if(cohortAsDate > dateAsDate) {
    "NA"
  } else {
    paste(round((dateAsDate - cohortAsDate) * 12), sep="")
  }
}
 
cohortAttendanceWithMonthNumber = cohortAttendance %>% 
  group_by(row_number()) %>% 
  mutate(monthNumber = monthNumber(cohort, date)) %>%
  filter(monthNumber != "NA") %>%
  filter(!is.na(members)) %>%
  mutate(monthNumber = as.numeric(monthNumber)) %>% 
  arrange(monthNumber)
 
> cohortAttendanceWithMonthNumber %>% head(10)
Source: local data frame [10 x 7]
Groups: row_number()
 
      date n  cohort retention members row_number() monthNumber
1  2011-06 4 2011-06      1.00       4            1           0
2  2011-07 1 2011-06      0.25       1            2           1
3  2011-08 1 2011-06      0.25       1            3           2
4  2011-09 2 2011-06      0.50       2            4           3
5  2011-10 1 2011-06      0.25       1            5           4
6  2011-11 1 2011-06      0.25       1            6           5
7  2012-01 1 2011-06      0.25       1            7           7
8  2012-04 2 2011-06      0.50       2            8          10
9  2012-05 1 2011-06      0.25       1            9          11
10 2012-06 1 2011-06      0.25       1           10          12

Now let’s create our first heatmap.

t <- max(cohortAttendanceWithMonthNumber$members)
 
cols <- c("#e7f0fa", "#c9e2f6", "#95cbee", "#0099dc", "#4ab04a", "#ffd73e", "#eec73a", "#e29421", "#e29421", "#f05336", "#ce472e")
ggplot(cohortAttendanceWithMonthNumber, aes(y=cohort, x=date, fill=members)) +
  theme_minimal() +
  geom_tile(colour="white", linewidth=2, width=.9, height=.9) +
  scale_fill_gradientn(colours=cols, limits=c(0, t),
                       breaks=seq(0, t, by=t/4),
                       labels=c("0", round(t/4*1, 1), round(t/4*2, 1), round(t/4*3, 1), round(t/4*4, 1)),
                       guide=guide_colourbar(ticks=T, nbin=50, barheight=.5, label=T, barwidth=10)) +
  theme(legend.position='bottom', 
        legend.direction="horizontal",
        plot.title = element_text(size=20, face="bold", vjust=2),
        axis.text.x=element_text(size=8, angle=90, hjust=.5, vjust=.5, face="plain")) +
  ggtitle("Cohort Activity Heatmap (number of members who attended event)")

[Heatmap screenshot: absolute number of members per cohort per month]

‘t’ is the maximum number of members within a cohort who attended a meetup in a given month. This makes it easy to see which cohorts started with the most members but makes it difficult to compare their retention over time.

We can fix that by showing the percentage of members in the cohort who attend each month rather than using absolute values. To do that we must first add an extra column containing the percentage values:

cohortAttendanceWithMonthNumber$retentionPercentage = ifelse(!is.na(cohortAttendanceWithMonthNumber$retention),  cohortAttendanceWithMonthNumber$retention * 100, 0)
t <- max(cohortAttendanceWithMonthNumber$retentionPercentage)
 
cols <- c("#e7f0fa", "#c9e2f6", "#95cbee", "#0099dc", "#4ab04a", "#ffd73e", "#eec73a", "#e29421", "#e29421", "#f05336", "#ce472e")
ggplot(cohortAttendanceWithMonthNumber, aes(y=cohort, x=date, fill=retentionPercentage)) +
  theme_minimal() +
  geom_tile(colour="white", linewidth=2, width=.9, height=.9) +
  scale_fill_gradientn(colours=cols, limits=c(0, t),
                       breaks=seq(0, t, by=t/4),
                       labels=c("0", round(t/4*1, 1), round(t/4*2, 1), round(t/4*3, 1), round(t/4*4, 1)),
                       guide=guide_colourbar(ticks=T, nbin=50, barheight=.5, label=T, barwidth=10)) +
  theme(legend.position='bottom', 
        legend.direction="horizontal",
        plot.title = element_text(size=20, face="bold", vjust=2),
        axis.text.x=element_text(size=8, angle=90, hjust=.5, vjust=.5, face="plain")) +
  ggtitle("Cohort Activity Heatmap (number of members who attended event)")

[Heatmap screenshot: retention percentage per cohort per month]

This version allows us to compare cohorts against each other, but now we don’t have the exact numbers, which means earlier cohorts will look better since there are fewer people in them. We can get the best of both worlds by keeping this visualisation but showing the actual values inside each box:

t <- max(cohortAttendanceWithMonthNumber$retentionPercentage)
 
cols <- c("#e7f0fa", "#c9e2f6", "#95cbee", "#0099dc", "#4ab04a", "#ffd73e", "#eec73a", "#e29421", "#e29421", "#f05336", "#ce472e")
ggplot(cohortAttendanceWithMonthNumber, aes(y=cohort, x=date, fill=retentionPercentage)) +
  theme_minimal() +
  geom_tile(colour="white", linewidth=2, width=.9, height=.9) +
  scale_fill_gradientn(colours=cols, limits=c(0, t),
                       breaks=seq(0, t, by=t/4),
                       labels=c("0", round(t/4*1, 1), round(t/4*2, 1), round(t/4*3, 1), round(t/4*4, 1)),
                       guide=guide_colourbar(ticks=T, nbin=50, barheight=.5, label=T, barwidth=10)) +
  theme(legend.position='bottom', 
        legend.direction="horizontal",
        plot.title = element_text(size=20, face="bold", vjust=2),
        axis.text.x=element_text(size=8, angle=90, hjust=.5, vjust=.5, face="plain")) +
  ggtitle("Cohort Activity Heatmap (number of members who attended event)") + 
  geom_text(aes(label=members),size=3)

[Heatmap screenshot: retention percentage with member counts overlaid]

What we can learn overall is that the majority of people seem to have a passing interest and then we have a smaller percentage who will continue to come to events.

It seems like we did a better job at retaining attendees in the middle of last year – one hypothesis is that the events we ran around then were more compelling but I need to do more analysis.

Next I’m going to drill further into some of the recent events and see what cohorts the attendees came from.
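
As a rough sketch of that next step (this is not from the original analysis), one could join the attendees of a given month back to the firstMeetup data frame to see which cohort each of them belongs to; the month below is just an example value.

# Hypothetical follow-up: which cohorts did a recent month's attendees come from?
eventMonth = "2015-01"

attendeesByCohort = df %>%
  filter(date == eventMonth) %>%                                  # RSVPs in that month
  distinct(person.id) %>%                                         # one row per attendee
  inner_join(firstMeetup %>% select(person.id, cohort = date),
             by = "person.id") %>%                                # look up each attendee's cohort
  count(cohort) %>%
  arrange(desc(n))

attendeesByCohort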

Categories: Programming

How to improve Scrum team performance with Kanban

Xebia Blog - Mon, 05/11/2015 - 21:22
This blogpost has been a collaborative effort between Jeroen Willemsen and Jasper Sonnevelt

In this post we will take a look at a real life example of a Scrum team transitioning to Kanban. We will address a few pitfalls and discuss how to circumvent those. With this we provide additional insights to advance your agile way of working.

The example

At the time we joined this project, three Scrum-teams worked together to develop a large software product. The teams shared a Product Owner (PO) and backlog and just had their MVP released. Now, bug-reports started coming in and the Scrum-teams faced several problems:

  • The developers were making a lot of progress per sprint, so the PO had to put a lot of effort into the backlog to create a sufficiently sized work inventory.
  • New bugs came to light as the product started to be adopted. Filing and fixing these bugs and monitoring the progress took a lot of effort from the PO, and only fixing each bug in the next sprint reduced the team's responsiveness.
  • Prioritizing the bug fixing and the newly requested features became hard, as the PO had too many stakeholders to manage.

The Product Owner saw Kanban as a possible solution to these problems. He convinced the team to implement it in an effort to deal with them and to provide a better way of working.

The practices that were implemented included:

  • No more sprint replenishment rhythm (planning & backlog refinement);
  • A Work in Progress (WIP) limit.

As a result, the backlog quality started to deteriorate. Less information was written down about each story, and developers did not take the time to ask the PO the right questions. The WIP limit capped the amount of work the team could take on, which made them focus on the current, sometimes unclear and overly complex, stories. Because of this, developers would keep working on their current tasks, making assumptions along the way. These assumptions could lead to conflicting insights, as developers collaborated on stories while being influenced by different stakeholders. This resulted in misalignment.

All of these actions resulted in a lower burn down rate, less predictability and therefore nervous stakeholders. And after a few weeks, the PO decided to stop the migration to Kanban, and go back to Scrum.

However, the latter might not have been necessary for the team to be successful. After all, it is about how you implement Kanban, and a few key elements were missing here. In our experience, teams that are successful have also implemented:

  • Forecasting based on historical data and simulations: Scrum teams use Planning Poker and velocity to make predictions about the amount of work they will be able to do in a sprint. When the sprint cadence is let go, this becomes more difficult. Practices in the Kanban Method tell us that by using historical data in simulations, the same and probably more predictability can be achieved (a minimal simulation sketch follows this list).
  • Policies for prioritizing, WIP monitoring and handover criteria: Kanban demands a very high degree of discipline from all participants, and policies help here. In the case of this team, it would have benefited greatly from clearly defined policies about priorities and prioritization. For instance: Who can (and is allowed to) prioritize? How can the team see what has the most priority? Are there different classes of service we can identify, and how do we treat those? The same holds for WIP limits and handover criteria. We always use a Definition of Ready and Definition of Done in Scrum teams; keeping these in a Kanban team is a very good practice.
  • A feedback cycle to review data and process on a regular basis: Where Scrum demands a retrospective after every sprint, Kanban does not explicitly define such a thing; it only dictates that you should implement feedback loops. When an organization starts implementing Kanban, it is key that it reviews the implementation, data and process on a regular basis. This is crucial during the first weeks, when all participants are still getting used to the new way of working. Policies should be reviewed as well, as they might need adjustments to facilitate the team in having an optimized workflow.
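
As a minimal illustration of the forecasting practice mentioned in the first point above, the sketch below (in R, with invented throughput figures) resamples historical weekly throughput to estimate how many items a team might finish over the next four weeks; a real team would feed in its own data.

# Monte Carlo forecast from historical throughput (hypothetical numbers)
set.seed(42)
weekly_throughput = c(4, 7, 5, 6, 3, 8, 5, 6)  # items finished per week, historical
weeks_ahead       = 4                          # forecast horizon
runs              = 10000                      # number of simulated futures

simulated_totals = replicate(runs, sum(sample(weekly_throughput, weeks_ahead, replace = TRUE)))

# Quantiles answer "how many items can we commit to, and with what confidence?"
quantile(simulated_totals, probs = c(0.50, 0.15, 0.05))
# e.g. the 5% quantile is a total that ~95% of the simulated futures meet or exceed

Plotting simulated_totals (for example with hist()) gives stakeholders a forward-looking view comparable to the predictability a sprint velocity chart used to provide.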

The hard thing about Kanban

What most people think of and see when talking about Kanban is introducing WIP limits to their board, adding columns (other than To Do, In Progress, Done) and no longer using sprints. And herein lies one of the problems. For organizations that are used to working with Scrum, letting go of sprints is a very unnatural thing to do. It feels like letting go of predictability and regular feedback, and instead of making the organization a bit better, people feel like they have just taken a step back.

The hard thing about Kanban is that it doesn't provide you with a clear-cut solution to your problems like Scrum does. Kanban is a method that helps you fix the problems in your organization in a way that best fits your context. For instance: it tells you to visualize a lot of things, but only provides examples of what you could visualize. Typically, teams and organizations visualize their process and work types.

To summarize:

  • Kanban uses the current process and doesn't enforce a process of its own. Therefore it demands a higher degree of discipline, both from the team members and from the rest of the organization. If you are not aware of this, the introduction of Kanban will only lead to chaos.
  • Kanban doesn't say anything about (To Do column) replenishment frequency or demo frequency.

Recommendations:

  • Implement forecasting based on historical data and simulations, policies for prioritizing, WIP limits and handover criteria and a feedback cycle to review data and process on a regular basis.
  • Define clear policies about how to collaborate. This will help create transparency and predictability.
  • The Scrum ceremonies happen in a rhythm that is very easy for stakeholders to understand and learn. Keep that in mind when designing Kanban policies.

 

Spikes

Mike Cohn's Blog - Mon, 05/11/2015 - 18:39

Projects are, of course, undertaken to develop new functionality. We want to finish a project with more capabilities than when we began the project. However, we also undertake a project to become smarter. We would like to finish each project smarter than when we began it.

Most work will be directly related to building new features, but it is also important that teams plan for and allocate time for getting smarter. Agile teams use the term spike to refer to a time-boxed research activity.

For example, suppose you are trying to decide between competing design approaches. The product owner may decide to invest another 40 (or 4 or 400) hours into the investigation. Or the team may be making a build vs. buy decision involving a new component. A good first step would be a spike.

Because spikes are time-boxed, the investment is fixed. After the pre-determined number of hours, a decision is made. But that decision may be to invest more hours in gaining more knowledge.

Some teams opt to put spikes on the product backlog. Other teams take the approach that a spike is really part of some other product backlog item and will only expose spikes as sprint backlog items.

Either way, it is important to acknowledge the role that learning plays in a successful project.