
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

How Can I Help You Enjoy Your Job?

NOOP.NL - Jurgen Appelo - Mon, 05/18/2015 - 14:08
helping-hand

People from all over the world sign up to join the Happy Melly business network because–apparently–they think we’re doing a good job. That’s so awesome. It also increases the pressure on us to report on what we’re doing. And it makes me think harder: What can I do to help people enjoy their job?

The post How Can I Help You Enjoy Your Job? appeared first on NOOP.NL.

Categories: Project Management

SPaMCAST 342 – Gorman, Gottesdiener, Discover to Deliver Revisited

Software Process and Measurement Cast - Sun, 05/17/2015 - 22:00

Software Process and Measurement Cast 342 features our interview with Ellen Gottesdiener and Mary Gorman. We discussed their great book, Discover to Deliver: Agile Product Planning and Analysis, requirements, and Agile. Ellen and Mary provided penetrating insight into how to work with requirements in an Agile environment, from discovery to delivery and beyond.

This is the second time Ellen, Mary, and I have talked about Agile requirements. After listening to this interview, turn back the hands of time and listen to SPaMCAST 200.

Ellen Gottesdiener is an internationally recognized leader in the convergence of agile + requirements + product management + project management. She is founder and principal of EBG Consulting, which helps organizations adapt how they collaborate to improve business outcomes.

Ellen’s passion is helping people use modern product requirements practices to build valued products and great teams. She provides coaching, training, and facilitates discovery and planning workshops across diverse industries. Ellen is a world-renowned writer, speaker, and presenter. Her most recent book, co-authored with Mary Gorman, is Discover to Deliver: Agile Product Planning and Analysis. Ellen is author of two other acclaimed books: Requirements by Collaboration and The Software Requirements Memory Jogger.

Here’s where you digitally connect with Ellen: Blog | Twitter | Newsletter | LinkedIn

Mary Gorman, a leader in business analysis and requirements, is Vice President of Quality & Delivery at EBG Consulting. Mary coaches product teams, facilitates discovery workshops, and trains stakeholders in collaborative practices essential for defining high-value products. She speaks and writes for the agile, business analysis, and project management communities. Mary is co-author with Ellen Gottesdiener of Discover to Deliver: Agile Product Planning and Analysis.   

A Certified Business Analysis Professional™, Mary helped develop the IIBA®’s A Guide to the Business Analysis Body of Knowledge® and certification exam. She also served on the task force that created the PMI Professional in Business Analysis (PMI-PBA)® Examination Content Outline. You can reach Mary via: Twitter | LinkedIn

Call to action!

Reviews of the podcast help to attract new listeners. Can you write a review of the Software Process and Measurement Cast and post it on the podcatcher of your choice? Whether you listen on iTunes or any other podcatcher, a review will help to grow the podcast! Thank you in advance!

Re-Read Saturday News

The Re-Read Saturday focus on Eliyahu M. Goldratt and Jeff Cox’s The Goal: A Process of Ongoing Improvement began on February 21st. The Goal has been hugely influential because it introduced the Theory of Constraints, which is central to lean thinking. The book is written as a business novel. Visit the Software Process and Measurement Blog and catch up on the re-read.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast.

Dead Tree Version or Kindle Version 

Next up on Re-Read Saturday: The Mythical Man-Month

Get a copy now and start reading!

Upcoming Events

2015 PROFESSIONAL DEVELOPMENT & TRAINING WORKSHOP
June 9 – 12
San Diego, California
http://www.iceaaonline.com/2519-2/
I will be speaking on June 10. My presentation is titled “Agile Estimation Using Functional Metrics.”

Let me know if you are attending!

Other upcoming conferences I will be involved in include SQTM in September and BIFPUG in November. More on these great conferences next week.

Next SPaMCast

The next Software Process and Measurement Cast will feature our essay on Commitment, Part 2. Is commitment anti-Agile?  We think not!  Commitment is a core behavior for effective Agile!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself, was published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management


Neo4j: Refactoring the BBC football live text fouls graph

Mark Needham - Sun, 05/17/2015 - 12:04

Yesterday I wrote about a Neo4j graph I’ve started building which contains all the fouls committed in the Champions League game between Barcelona & Bayern Munich and surrounding meta data.

While adding other events into the graph I realised that I’d added some duplication in the model and the model could do with some refactoring to make it easier to use.

To recap, this is the model that we designed in the previous blog post:

The duplication is on the left-hand side of the model – we model a foul as being committed by one player against another and then hook the foul back into the match. By doing that we’re not using the ‘appearance’ concept, which links a player and a match together.

We can make the ‘COMMITTED_IN_MATCH’ relationship redundant by connecting the foul to appearance rather than to player. The match the foul was committed in can then be found by navigating through the appearance node.

This is what we want the graph to look like:

[Diagram: the target graph model]

We’ll move towards this new model in 3 steps:

  • Introduce the new structure alongside the existing one
  • Rewrite our queries to use the new structure
  • Remove the old structure
Introducing the new structure

First up let’s write a query to introduce the new structure.

match (foul:Foul)-[:COMMITTED_AGAINST]->(fouledPlayer),
      (foul)<-[:COMMITTED_FOUL]-(foulingPlayer),
      (foul)-[:COMMITTED_IN_MATCH]->(match:Match {id: "32683310"}),
      (foulingPlayer)-[:MADE_APPEARANCE]-(foulingPlayerApp)-[:IN_MATCH]->(match),
      (fouledPlayer)-[:MADE_APPEARANCE]-(fouledPlayerApp)-[:IN_MATCH]->(match)
MERGE (foul)<-[:COMMITTED_FOUL]-(foulingPlayerApp)
MERGE (foul)-[:COMMITTED_AGAINST]->(fouledPlayerApp)

Remember we’re not going to delete the old structure yet so that’s why there aren’t any delete statements in here.

Rewriting our queries

Now we need to update our queries to work against the new graph structure:

Where do the fouls happen?
match (match:Match {id: "32683310"})<-[:COMMITTED_IN_MATCH]-(foul)
RETURN foul.location AS location, COUNT(*) as fouls
ORDER BY fouls DESC

becomes

match (match:Match {id: "32683310"})<-[:IN_MATCH]-()<-[]-(foul:Foul)
RETURN foul.location AS location, COUNT(*) as fouls
ORDER BY fouls DESC
Who fouls the most?
match (match:Match {id: "32683310"})<-[:COMMITTED_IN_MATCH]-(foul:Foul)<-[:COMMITTED_FOUL]-(fouler:Player)
RETURN fouler.name AS fouler, COUNT(*) as fouls
ORDER BY fouls DESC
LIMIT 10;

becomes

match (match:Match {id: "32683310"})<-[:IN_MATCH]-(appearance)-[:COMMITTED_FOUL]->(foul:Foul),
      (appearance)<-[:MADE_APPEARANCE]-(fouler)
RETURN fouler.name AS fouler, COUNT(*) as fouls
ORDER BY fouls DESC
LIMIT 10
Who was fouled the most?
match (match:Match {id: "32683310"})<-[:COMMITTED_IN_MATCH]-(foul:Foul)-[:COMMITTED_AGAINST]->(fouled:Player)
RETURN fouled.name AS fouled, COUNT(*) as fouls
ORDER BY fouls DESC
LIMIT 10

becomes

match (match:Match {id: "32683310"})<-[:IN_MATCH]-(appearance)<-[:COMMITTED_AGAINST]-(foul:Foul),
      (appearance)<-[:MADE_APPEARANCE]-(fouled)
RETURN fouled.name AS fouled, COUNT(*) as fouls
ORDER BY fouls DESC
LIMIT 10
Who fouled who the most?
match (match:Match {id: "32683310"})<-[:COMMITTED_IN_MATCH]-(foul:Foul)-[:COMMITTED_AGAINST]->(fouled:Player),
      (foul)<-[:COMMITTED_FOUL]-(fouler:Player)
RETURN fouler.name AS fouler, fouled.name AS fouled, COUNT(*) as fouls
ORDER BY fouls DESC
LIMIT 10

becomes

match (match:Match {id: "32683310"}),
      (match)<-[:IN_MATCH]-(fouledApp)<-[:COMMITTED_AGAINST]-(foul:Foul)<-[:COMMITTED_FOUL]-(foulerApp)-[:IN_MATCH]->(match),
      (fouledApp)<-[:MADE_APPEARANCE]-(fouled),
      (foulerApp)<-[:MADE_APPEARANCE]-(fouler)
RETURN fouler.name AS fouler, fouled.name AS fouled, COUNT(*) as fouls
ORDER BY fouls DESC
LIMIT 10;
Which team fouled most?
match (match:Match {id: "32683310"})<-[:COMMITTED_IN_MATCH]-()<-[:COMMITTED_FOUL]-(fouler),
      (fouler)-[:MADE_APPEARANCE]-(app)-[:IN_MATCH]-(match),
      (app)-[:FOR_TEAM]->(team)
RETURN team.name, COUNT(*) as fouls
ORDER BY fouls DESC

becomes

match (match:Match {id: "32683310"})<-[:IN_MATCH]-(app:Appearance)-[:COMMITTED_FOUL]->(),
      (app)-[:FOR_TEAM]->(team)
RETURN team.name, COUNT(*) as fouls
ORDER BY fouls DESC
Worst fouler for each team
match (match:Match {id: "32683310"})<-[:COMMITTED_IN_MATCH]-(foul)<-[:COMMITTED_FOUL]-(fouler),
      (fouler)-[:MADE_APPEARANCE]-(app)-[:IN_MATCH]-(match),
      (app)-[:FOR_TEAM]->(team)
WITH team, fouler, COUNT(*) AS fouls
ORDER BY team.name, fouls DESC
WITH team, COLLECT({fouler:fouler, fouls:fouls})[0] AS topFouler
RETURN team.name, topFouler.fouler.name, topFouler.fouls;

becomes

match (match:Match {id: "32683310"})<-[:IN_MATCH]-(app:Appearance)-[:COMMITTED_FOUL]->(),
      (app)-[:FOR_TEAM]->(team),
      (fouler)-[:MADE_APPEARANCE]->(app)
WITH team, fouler, COUNT(*) AS fouls
ORDER BY team.name, fouls DESC
WITH team, COLLECT({fouler:fouler, fouls:fouls})[0] AS topFouler
RETURN team.name, topFouler.fouler.name, topFouler.fouls;
Most fouled against for each team
match (match:Match {id: "32683310"})<-[:COMMITTED_IN_MATCH]-(foul)-[:COMMITTED_AGAINST]->(fouled),
      (fouled)-[:MADE_APPEARANCE]-(app)-[:IN_MATCH]-(match),
      (app)-[:FOR_TEAM]->(team)
WITH team, fouled, COUNT(*) AS fouls
ORDER BY team.name, fouls DESC
WITH team, COLLECT({fouled:fouled, fouls:fouls})[0] AS topFouled
RETURN team.name, topFouled.fouled.name, topFouled.fouls

becomes

match (match:Match {id: "32683310"})<-[:IN_MATCH]-(app:Appearance)<-[:COMMITTED_AGAINST]-(),
      (app)-[:FOR_TEAM]->(team),
      (fouled)-[:MADE_APPEARANCE]->(app)
WITH team, fouled, COUNT(*) AS fouls
ORDER BY team.name, fouls DESC
WITH team, COLLECT({fouled:fouled, fouls:fouls})[0] AS topFouled
RETURN team.name, topFouled.fouled.name, topFouled.fouls

The early queries are made more complicated by the refactoring but the latter ones are slightly simpler. I think we need to hook some more events onto the appearance node to see whether this refactoring is worthwhile or not.

Removing the old structure

Holding judgement for now, let’s look at how we’d remove the old structure – the final step in this refactoring:

match (match:Match {id: "32683310"})<-[oldRel:COMMITTED_IN_MATCH]-(foul:Foul)
DELETE oldRel;

match (player:Player)<-[oldRel:COMMITTED_AGAINST]-(foul:Foul)
DELETE oldRel;

match (player:Player)-[oldRel:COMMITTED_FOUL]->(foul:Foul)
DELETE oldRel;

Hopefully you can see how you’d go about refactoring your own graph if you realise the model isn’t quite what you want.

Any questions/thoughts/suggestions let me know!

Categories: Programming

Re-Read Saturday: The Goal: A Process of Ongoing Improvement. Part 13


This week I attended and spoke at the CMMI Global Congress. It was a great conference, and as with most conferences, the conversations in the hallways were as interesting as the presentations (including mine). I had a lot of conversations about lean, Agile and scaling Agile, and while the attendees as a whole saw the value, there are still a few who view Agile and lean concepts with derision. These conversations, in conjunction with today’s re-read segment of The Goal, led me to consider whether much of the underlying resistance was being generated by fear, in particular the fear of discovering that what you know is no longer relevant. People facing that fear generally react in one of two ways: reinvention or rejection. In today’s segment Hilton Smyth chooses one of those options…

Part 1       Part 2       Part 3      Part 4      Part 5      Part 6      Part 7      Part 8    Part 9   Part 10   Part 11 Part 12

Chapter 31: Alex appears for the plant review, which is being chaired not by Bill Peach (Alex’s boss) but rather by Hilton Smyth, the assistant division controller. When Alex suggests that they wait for Bill Peach, Hilton indicates that Peach will not be coming and that his (Hilton’s) report will tip the scales on whether the plant stays open. The early exchanges clearly establish that Hilton does not buy into the turnaround that Alex and his team have engineered. Alex reiterates the three core findings that have driven the turnaround.

  1. Instead of balancing capacity with demand, they are focused on maintaining and improving the flow through the plant.
  2. For resources that are not bottlenecks, the level of activity from which the system is able to profit is not determined by individual capacity, but rather by some other constraint.
  3. Utilization and activation are not the same.

Hilton believes that Alex’s deviations from the tried and true formulas for batch size, capacity utilization and per unit costing are hiding problems that will cripple the plant in the future. Those tried and true formulas are central to Hilton’s perception of his own relevance, and he can’t see that, with both profits and plant throughput up and inventory down, the plant is now on very solid footing. The report to Peach will be bad.

After the meeting, Alex decides to confront Peach. Peach listens as Alex tells him that Smyth would not listen to reason. Peach summons Jons (head of sales), Ethan Frost (division controller and Smyth’s boss) and Smyth. When they are assembled, Peach announces that he, Jons and Frost have been promoted, and that Alex will also be promoted to head the division. While unstated in the book, the implication is that recent profitability and the new orders from Bucky Burnside have made quite the stir at corporate. (In my head I could hear Smyth blustering as much of his previous knowledge and experience became less relevant.)

The chapter ends with Alex reaching out to Jonah to ask for help running the division. What he receives is congratulations and advice to learn to trust his own judgment rather than needing outside support.

Chapter 32 uses Alex’s and Julie’s celebration dinner as a backdrop for a discussion of the promotion as part of a journey and of Jonah’s method of coaching. Jonah didn’t just provide answers to the questions Alex posed, but rather pushed Alex in the right direction and made him and his team work for the answers, much like the Socratic method of generating critical thinking by asking and answering questions. This journey helped Alex generate ownership of new concepts that flew in the face of what he and his team previously thought to be true. The struggle to generate answers gave Alex and his team the courage to implement their new ideas. It should be noted that the feedback from their early successes also helped generate the courage to try further experiments (this dovetails nicely with the ideas in Kotter’s Leading Change – an earlier re-read).

Remember that the summary of previous entries in the re-read of The Goal has been shifted to a new page (click here). Also, if you don’t have a copy of the book, buy one. If you use the link below it will support the Software Process and Measurement blog and podcast. Dead Tree Version or Kindle Version


Categories: Process Management

Neo4j: BBC football live text fouls graph

Mark Needham - Sat, 05/16/2015 - 22:13

I recently came across the Partially Derivative podcast and in episode 17 they describe how Kirk Goldsberry scraped a bunch of data about shots in basketball matches then ran some analysis on that data.

It got me thinking that we might be able to do something similar for football matches, and although event-based data for football matches only comes from Opta, the BBC does expose some of it in its live text feeds.

We’ll start with the Champions League match between Barcelona and Bayern Munich from last Tuesday.


Our first task is to extract the events that happened in the match along with the players involved. After we’ve got that we’ll generate a Neo4j graph and see if we can find some interesting insights.

I find the feedback cycle with this type of work is dramatically improved if we have the source data available locally so the first step was to get the BBC web page downloaded:

$ wget http://www.bbc.co.uk/sport/0/football/32683310

Next we need to write a scraper which will extract all the events. We want to get an array containing one entry for each event, where the following is an example of an event:

[Screenshot: an example foul event from the live text feed]

HTML-wise it looks like this:

[Screenshot: the HTML markup for a foul event]


I do most of my scraping work in Python so I used the Beautiful Soup library with the soupselect wrapper to get the data into CSV format ready to import into Neo4j.

It was mostly a straightforward job of finding the appropriate CSS tag and pulling out the values, although the way fouls are described on the page is a bit strange – sometimes the person fouled comes on the first row and the fouler on the next line, and sometimes vice versa.

Luckily the two parts of the foul can be joined together by matching the time, which made life easier.
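To make that concrete, here is a condensed sketch of what such a scraper can look like. The CSS selectors and text patterns below are assumptions for illustration, not the real page structure; the actual code is linked below.

# Sketch of the scraping approach, using Beautiful Soup with the soupselect
# wrapper. The selectors and text patterns are hypothetical placeholders.
import csv

from bs4 import BeautifulSoup
from soupselect import select

with open("32683310") as page:  # the file we downloaded with wget
    soup = BeautifulSoup(page.read())

fouls = {}
for event in select(soup, "div.live-text-comment"):  # hypothetical selector
    time = select(event, "span.timestamp")[0].text.strip()
    text = select(event, "p")[0].text.strip()
    entry = fouls.setdefault(time, {})  # join the foul's two halves on time
    if text.startswith("Foul by"):
        entry["fouler"] = text.replace("Foul by", "").strip()
    elif "wins a free kick" in text:
        player, _, location = text.partition("wins a free kick")
        entry["fouled"] = player.strip()
        entry["location"] = location.strip()

with open("data/events.csv", "w") as out:
    writer = csv.writer(out)
    writer.writerow(["time", "foulLocation", "fouledPlayer", "foulingPlayer"])
    for time, foul in fouls.items():
        writer.writerow([time, foul.get("location"),
                         foul.get("fouled"), foul.get("fouler")])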

The full code for the scraper is on GitHub if you want to play with it.

This is what the resulting CSV file looks like:

$ head -n 10 data/events.csv 
matchId,foulId,freeKickId,time,foulLocation,fouledPlayer,fouledPlayerTeam,foulingPlayer,foulingPlayerTeam
32683310,3,2,90:00 +0:40,in the defensive half.,Xabi Alonso,FC Bayern München,Pedro,Barcelona
32683310,9,8,84:38,on the right wing.,Rafinha,FC Bayern München,Pedro,Barcelona
32683310,12,13,83:17,in the attacking half.,Lionel Messi,Barcelona,Sebastian Rode,FC Bayern München
32683310,15,14,82:43,in the defensive half.,Sebastian Rode,FC Bayern München,Neymar,Barcelona
32683310,17,18,80:41,in the attacking half.,Pedro,Barcelona,Xabi Alonso,FC Bayern München
32683310,22,23,76:31,in the defensive half.,Neymar,Barcelona,Rafinha,FC Bayern München
32683310,25,26,75:03,in the attacking half.,Lionel Messi,Barcelona,Xabi Alonso,FC Bayern München
32683310,31,30,69:37,in the attacking half.,Bastian Schweinsteiger,FC Bayern München,Dani Alves,Barcelona
32683310,36,35,63:27,in the attacking half.,Robert Lewandowski,FC Bayern München,Ivan Rakitic,Barcelona

Now it’s time to create a graph. We’ll aim to massage the data into this model:

[Diagram: the graph model]

Next we need to write some Cypher code to get the CSV data into the graph. The full script is here, a sample of which is below:

// match
LOAD CSV WITH HEADERS FROM "file:///Users/markhneedham/projects/neo4j-bbc/data/events.csv" AS row
MERGE (:Match {id: row.matchId});
 
// teams
LOAD CSV WITH HEADERS FROM "file:///Users/markhneedham/projects/neo4j-bbc/data/events.csv" AS row
MERGE (:Team {name: row.foulingPlayerTeam});
 
LOAD CSV WITH HEADERS FROM "file:///Users/markhneedham/projects/neo4j-bbc/data/events.csv" AS row
MERGE (:Team {name: row.fouledPlayerTeam});
 
// players
LOAD CSV WITH HEADERS FROM "file:///Users/markhneedham/projects/neo4j-bbc/data/events.csv" AS row
MERGE (player:Player {id: row.foulingPlayer + "_" + row.foulingPlayerTeam})
ON CREATE SET player.name = row.foulingPlayer;
 
// appearances
LOAD CSV WITH HEADERS FROM "file:///Users/markhneedham/projects/neo4j-bbc/data/events.csv" AS row
MATCH (match:Match {id: row.matchId})
MATCH (player:Player {id: row.foulingPlayer + "_" + row.foulingPlayerTeam})
MATCH (team:Team {name: row.foulingPlayerTeam})
 
MERGE (appearance:Appearance {id: player.id + " in " + row.matchId})
MERGE (player)-[:MADE_APPEARANCE]->(appearance)
MERGE (appearance)-[:IN_MATCH]->(match)
MERGE (appearance)-[:FOR_TEAM]->(team);
 
// fouls
LOAD CSV WITH HEADERS FROM "file:///Users/markhneedham/projects/neo4j-bbc/data/events.csv" AS row
 
MATCH (foulingPlayer:Player {id:row.foulingPlayer + "_" + row.foulingPlayerTeam })
MATCH (fouledPlayer:Player {id:row.fouledPlayer + "_" + row.fouledPlayerTeam })
MATCH (match:Match {id: row.matchId})
 
MERGE (foul:Foul {eventId: row.foulId})
ON CREATE SET foul.time = row.time, foul.location = row.foulLocation
 
MERGE (foul)<-[:COMMITTED_FOUL]-(foulingPlayer)
MERGE (foul)-[:COMMITTED_AGAINST]->(fouledPlayer)
MERGE (foul)-[:COMMITTED_IN_MATCH]->(match);

We’ll use neo4j-shell to execute the script:

$ ./neo4j-community-2.2.1/bin/neo4j-shell --file import.cql

Now that we’ve got the data into Neo4j we need to come up with some questions to ask of it. I came up with the following but perhaps you can think of some others!

  • Where do the fouls happen on the pitch?
  • Who made the most fouls?
  • Who was fouled the most?
  • Who fouled who the most?
  • Which team fouled the most?
  • Who’s the worst fouler in each team?
  • Who’s the most fouled in each team?
Where do the fouls happen?
match (match:Match)<-[:COMMITTED_IN_MATCH]-(foul)
RETURN foul.location AS location, COUNT(*) as fouls
ORDER BY fouls DESC;
 
+----------------------------------+
| location                 | fouls |
+----------------------------------+
| "in the defensive half." | 12    |
| "in the attacking half." | 12    |
| "on the right wing."     | 3     |
| "on the left wing."      | 3     |
+----------------------------------+
4 rows
Who fouls the most?
match (match:Match)<-[:COMMITTED_IN_MATCH]-(foul)<-[:COMMITTED_FOUL]-(fouler)
RETURN fouler.name AS fouler, COUNT(*) as fouls
ORDER BY fouls DESC
LIMIT 10;
 
+------------------------------+
| fouler               | fouls |
+------------------------------+
| "Rafinha"            | 4     |
| "Pedro"              | 3     |
| "Medhi Benatia"      | 3     |
| "Dani Alves"         | 3     |
| "Xabi Alonso"        | 3     |
| "Javier Mascherano"  | 2     |
| "Thiago Alcántara"   | 2     |
| "Robert Lewandowski" | 2     |
| "Sebastian Rode"     | 1     |
| "Sergio Busquets"    | 1     |
+------------------------------+
10 rows
Who was fouled the most?
// who was fouled the most
match (match:Match)<-[:COMMITTED_IN_MATCH]-(foul)-[:COMMITTED_AGAINST]->(fouled)
RETURN fouled.name AS fouled, COUNT(*) as fouls
ORDER BY fouls DESC
LIMIT 10;
 
+----------------------------------+
| fouled                   | fouls |
+----------------------------------+
| "Robert Lewandowski"     | 4     |
| "Lionel Messi"           | 4     |
| "Neymar"                 | 3     |
| "Pedro"                  | 2     |
| "Xabi Alonso"            | 2     |
| "Andrés Iniesta"         | 2     |
| "Rafinha"                | 2     |
| "Bastian Schweinsteiger" | 2     |
| "Sebastian Rode"         | 1     |
| "Sergio Busquets"        | 1     |
+----------------------------------+
10 rows
Who fouled who the most?
match (match:Match)<-[:COMMITTED_IN_MATCH]-(foul)-[:COMMITTED_AGAINST]->(fouled),
      (foul)<-[:COMMITTED_FOUL]-(fouler)
RETURN fouler.name AS fouler, fouled.name AS fouled, COUNT(*) as fouls
ORDER BY fouls DESC
LIMIT 10;
 
+--------------------------------------------------------+
| fouler              | fouled                   | fouls |
+--------------------------------------------------------+
| "Javier Mascherano" | "Robert Lewandowski"     | 2     |
| "Dani Alves"        | "Bastian Schweinsteiger" | 2     |
| "Xabi Alonso"       | "Lionel Messi"           | 2     |
| "Rafinha"           | "Neymar"                 | 2     |
| "Rafinha"           | "Andrés Iniesta"         | 2     |
| "Dani Alves"        | "Xabi Alonso"            | 1     |
| "Thiago Alcántara"  | "Javier Mascherano"      | 1     |
| "Pedro"             | "Juan Bernat"            | 1     |
| "Medhi Benatia"     | "Pedro"                  | 1     |
| "Neymar"            | "Sebastian Rode"         | 1     |
+--------------------------------------------------------+
10 rows
Which team fouled the most?
match (match:Match)<-[:COMMITTED_IN_MATCH]-(foul)<-[:COMMITTED_FOUL]-(fouler),
      (fouler)-[:MADE_APPEARANCE]-(app)-[:IN_MATCH]-(match),
      (app)-[:FOR_TEAM]->(team)
RETURN team.name, COUNT(*) as fouls
ORDER BY fouls DESC
LIMIT 10;
 
+-----------------------------+
| team.name           | fouls |
+-----------------------------+
| "FC Bayern MĂĽnchen" | 18    |
| "Barcelona"         | 12    |
+-----------------------------+
2 rows
Worst fouler for each team?
match (match:Match)<-[:COMMITTED_IN_MATCH]-(foul)<-[:COMMITTED_FOUL]-(fouler),
      (fouler)-[:MADE_APPEARANCE]-(app)-[:IN_MATCH]-(match),
      (app)-[:FOR_TEAM]->(team)
WITH team, fouler, COUNT(*) AS fouls
ORDER BY team.name, fouls DESC
WITH team, COLLECT({fouler:fouler, fouls:fouls})[0] AS topFouler
RETURN team.name, topFouler.fouler.name, topFouler.fouls;
 
+---------------------------------------------------------------+
| team.name           | topFouler.fouler.name | topFouler.fouls |
+---------------------------------------------------------------+
| "FC Bayern MĂĽnchen" | "Rafinha"             | 4               |
| "Barcelona"         | "Pedro"               | 3               |
+---------------------------------------------------------------+
2 rows
Most fouled against for each team
match (match:Match)<-[:COMMITTED_IN_MATCH]-(foul)-[:COMMITTED_AGAINST]-(fouled),
      (fouled)-[:MADE_APPEARANCE]-(app)-[:IN_MATCH]-(match),
      (app)-[:FOR_TEAM]->(team)
WITH team, fouled, COUNT(*) AS fouls
ORDER BY team.name, fouls DESC
WITH team, COLLECT({fouled:fouled, fouls:fouls})[0] AS topFouled
RETURN team.name, topFouled.fouled.name, topFouled.fouls;
 
+---------------------------------------------------------------+
| team.name           | topFouled.fouled.name | topFouled.fouls |
+---------------------------------------------------------------+
| "FC Bayern MĂĽnchen" | "Robert Lewandowski"  | 4               |
| "Barcelona"         | "Lionel Messi"        | 4               |
+---------------------------------------------------------------+
2 rows

So Bayern fouled a bit more than Barca, the main forwards for each team (Messi/Lewandowski) were the most fouled players on the pitch and the fouling was mostly in the middle of the pitch.

I expect this graph will become much more interesting to query with more matches and with the other event types as well, but I haven’t got those scraped yet. The code is on GitHub if you want to play around with it and perhaps get the other events into the graph.

Categories: Programming

Multi-Repository Development

Google Testing Blog - Fri, 05/15/2015 - 22:00
Author: Patrik Höglund

As we all know, software development is a complicated activity where we develop features and applications to provide value to our users. Furthermore, any nontrivial modern software is composed of other software. For instance, the Chrome web browser pulls roughly a hundred libraries into its third_party folder when you build the browser. The most significant of these libraries is Blink, the rendering engine, but there’s also ffmpeg for image processing, skia for low-level 2D graphics, and WebRTC for real-time communication (to name a few).

Figure 1. Holy dependencies, Batman!
There are many reasons to use software libraries. Why write your own phone number parser when you can use libphonenumber, which is battle-tested by real use in Android and Chrome and available under a permissive license? Using such software frees you up to focus on the core of your software so you can deliver a unique experience to your users. On the other hand, you need to keep your application up to date with changes in the library (you want that latest bug fix, right?), and you also run a risk of such a change breaking your application. This article will examine that integration problem and how you can reduce the risks associated with it.
Updating Dependencies is Hard

The simplest solution is to check in a copy of the library, build with it, and avoid touching it as much as possible. This solution, however, can be problematic because you miss out on bug fixes and new features in the library. What if you need a new feature or bug fix that just made it in? You have a few options:
  • Update the library to its latest release. If it’s been a long time since you did this, it can be quite risky and you may have to spend significant testing resources to ensure all the accumulated changes don’t break your application. You may have to catch up to interface changes in the library as well. 
  • Cherry-pick the feature/bug fix you want into your copy of the library. This is even riskier because your cherry-picked patches may depend on other changes in the library in subtle ways. Also, you still are not up to date with the latest version. 
  • Find some way to make do without the feature or bug fix.
None of the above options are very good. Using this ad-hoc updating model can work if there’s a low volume of changes in the library and our requirements on the library don’t change very often. Even if that is the case, what will you do if a critical zero-day exploit is discovered in your socket library?

One way to mitigate the update risk is to integrate more often with your dependencies. As an extreme example, let’s look at Chrome.

In Chrome development, there’s a massive amount of change going into its dependencies. The Blink rendering engine lives in a separate code repository from the browser. Blink sees hundreds of code changes per day, and Chrome must integrate with Blink often since it’s an important part of the browser. Another example is the WebRTC implementation, where a large part of Chrome’s implementation resides in the webrtc.org repository. This article will focus on the latter because it’s the team I happen to work on.
How “Rolling” Works

The open-sourced WebRTC codebase is used by Chrome but also by a number of other companies working on WebRTC. Chrome uses a toolchain called depot_tools to manage dependencies, and there’s a checked-in text file called DEPS where dependencies are managed. It looks roughly like this:
{
# ...
'src/third_party/webrtc':
'https://chromium.googlesource.com/' +
'external/webrtc/trunk/webrtc.git' +
'@' + '5727038f572c517204e1642b8bc69b25381c4e9f',
}

The above means we should pull WebRTC from the specified git repository at the 572703... hash, similar to other dependency-provisioning frameworks. To build Chrome with a new version, we change the hash and check in a new version of the DEPS file. If the library’s API has changed, we must update Chrome to use the new API in the same patch. This process is known as rolling WebRTC to a new version.
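Mechanically, the roll itself is a one-line change to DEPS. As a rough sketch (the helper below is hypothetical; real rolls go through depot_tools and code review, not a standalone script), bumping the pinned hash could look like this:

# Sketch: bump the pinned WebRTC revision in a DEPS file.
# Assumes the "'@' + '<sha1>'" layout shown above.
import re
import sys

def roll_deps(deps_path, new_hash):
    with open(deps_path) as f:
        contents = f.read()
    # Swap the 40-character hash that follows the '@' separator.
    updated = re.sub(r"'@' \+ '[0-9a-f]{40}'",
                     "'@' + '%s'" % new_hash, contents)
    with open(deps_path, "w") as f:
        f.write(updated)

if __name__ == "__main__":
    roll_deps("DEPS", sys.argv[1])  # usage: python roll_deps.py <new-hash>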

Now the problem is that we have changed the code going into Chrome. Maybe getUserMedia has started crashing on Android, or maybe the browser no longer boots on Windows. We don’t know until we have built and run all the tests. Therefore a roll patch is subject to the same presubmit checks as any Chrome patch (i.e. many tests, on all platforms we ship on). However, roll patches can be considerably more painful and risky than other patches.


Figure 2. Life of a Roll Patch.
On the WebRTC team we found ourselves in an uncomfortable position a couple years back. Developers would make changes to the webrtc.org code and there was a fair amount of churn in the interface, which meant we would have to update Chrome to adapt to those changes. Also we frequently broke tests and WebRTC functionality in Chrome because semantic changes had unexpected consequences in Chrome. Since rolls were so risky and painful to make, they started to happen less often, which made things even worse. There could be two weeks between rolls, which meant Chrome was hit by a large number of changes in one patch.
Bots That Can See the Future: “FYI Bots”

We found a way to mitigate this, which we called FYI (for your information) bots. A bot is Chrome lingo for a continuous build machine which builds Chrome and runs tests.

All the existing Chrome bots at that point would build Chrome as specified in the DEPS file, which meant they would build the WebRTC version we had rolled to up to that point. FYI bots replace that pinned version with WebRTC HEAD, but otherwise build and run Chrome-level tests as usual. Therefore:

  • If all the FYI bots were green, we knew a roll most likely would go smoothly. 
  • If the bots didn’t compile, we knew we would have to adapt Chrome to an interface change in the next roll patch. 
  • If the bots were red, we knew we either had a bug in WebRTC or that Chrome would have to be adapted to some semantic change in WebRTC.
The FYI “waterfall” (a set of bots that builds and runs tests) is a straight copy of the main waterfall, which is expensive in resources. We could have cheated and just set up FYI bots for one platform (say, Linux), but the most expensive regressions are platform-specific, so we reckoned the extra machines and maintenance were worth it.
Making Gradual Interface Changes

This solution helped but wasn’t quite satisfactory. We initially had the policy that it was fine to break the FYI bots since we could not update Chrome to use a new interface until the new interface had actually been rolled into Chrome. This, however, often caused the FYI bots to be compile-broken for days. We quickly started to suffer from red blindness [1] and had no idea if we would break tests on the roll, especially if an interface change was made early in the roll cycle.

The solution was to move to a more careful update policy for the WebRTC API. For the more technically inclined, “careful” here means “following the API prime directive” [2]. Consider this example:
class WebRtcAmplifier {
...
int SetOutputVolume(float volume);
}

Normally we would just change the method’s signature when we needed to:
class WebRtcAmplifier {
...
int SetOutputVolume(float volume, bool allow_eleven);  // see footnote 1
}

… but this would compile-break Chrome until it could be updated. So we started doing it like this instead:
class WebRtcAmplifier {
...
int SetOutputVolume(float volume);
int SetOutputVolume2(float volume, bool allow_eleven);
}

Then we could:
  1. Roll into Chrome 
  2. Make Chrome use SetOutputVolume2 
  3. Update SetOutputVolume’s signature 
  4. Roll again and make Chrome use SetOutputVolume 
  5. Delete SetOutputVolume2
This approach requires several steps but we end up with the right interface and at no point do we break Chrome.
Results

When we implemented the above, we could fix problems as they came up rather than in big batches on each roll. We could institute the policy that the FYI bots should always be green, and that changes breaking them should be immediately rolled back. This made a huge difference. The team could work more smoothly and roll more often. This reduced our risk quite a bit, particularly when Chrome was about to cut a new version branch. Instead of doing panicked and risky rolls around a release, we could work out issues in good time and stay in control.

Another benefit of FYI bots is more granular performance tests. Before the FYI bots, it would frequently happen that a bunch of metrics regressed. However, it’s not fun to find which of the 100 patches in the roll caused the regression! With the FYI bots, we can see precisely which WebRTC revision caused the problem.
Future Work: Optimistic Auto-rolling

The final step on this ladder (short of actually merging the repositories) is auto-rolling. The Blink team implemented this with their ARB (AutoRollBot). The bot wakes up periodically and tries to do a roll patch. If it fails on the trybots, it waits and tries again later (perhaps the trybots failed because of a flake or other temporary error, or perhaps the error was real but has been fixed).
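As a minimal sketch of that loop (every helper below is a hypothetical stub standing in for real tooling such as git, code review, and the trybots; this is not the actual AutoRollBot):

# Sketch of an optimistic auto-roller's main loop. All helpers are stubs.
import time

def latest_upstream_revision():
    # Stub: in reality, ask git for the upstream repository's HEAD.
    return "5727038f572c517204e1642b8bc69b25381c4e9f"

def upload_roll_patch(revision):
    # Stub: rewrite DEPS to pin `revision` and upload a CL for review.
    return {"revision": revision}

def trybots_green(change):
    # Stub: poll the trybots until they report a verdict on the CL.
    return False

def land(change):
    # Stub: commit the CL to the main repository.
    pass

def auto_roll(poll_seconds=3600):
    while True:
        change = upload_roll_patch(latest_upstream_revision())
        if trybots_green(change):
            land(change)
        # Otherwise assume a flake or a soon-to-be-fixed breakage and
        # simply try the roll again on the next cycle.
        time.sleep(poll_seconds)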

To pull auto-rolling off, you are going to need very good tests. That goes for any roll patch (or any patch, really), but if you’re edging closer to a release and an unstoppable flood of code changes keeps breaking you, you’re not in a good place.

References

[1] Martin Fowler (May 2006), “Continuous Integration”
[2] Dani Megert, Remy Chi Jian Suen, et al. (Oct 2014), “Evolving Java-based APIs”
Footnotes
  1. We actually did have a hilarious bug in WebRTC where it was possible to set the volume to 1.1, but only 0.0-1.0 was supposed to be allowed. No, really. Thus, our WebRTC implementation must be louder than the others since everybody knows 1.1 must be louder than 1.0.

Categories: Testing & QA

Quote of the Day

Herding Cats - Glen Alleman - Fri, 05/15/2015 - 21:33

Any process that does not have provisions for its own refinement will eventually fail or be abandoned

- W. R. Corcoran, PhD, P.E., The Phoenix Handbook: The Ultimate Event Evaluation Manual for Finding Profit Improvement in Adverse Events, Nuclear Safety Review Concepts, 19 October 1997.

Categories: Project Management

Agile goes beyond Epic Levels

Xebia Blog - Fri, 05/15/2015 - 17:10

A snapshot from my personal backlog last week:

  • The Agile transformation at ING was front-page news in the Netherlands. This made us realize even more how epic this transformation and assignment actually are.
  • The Agile-built hydrogen race car from the TU Delft set an official track record on the Nürburgring. We’re proud of our guys in Delft!
  • Hanging out with Boeing’s Agile champs at their facilities in Seattle exchanging knowledge. Impressive and extremely fruitful!
  • Coaching the State of Washington on their ground breaking Agile initiatives together with my friend and fellow consultant from ScrumInc, Joe Justice.

One thing became clear to me after a week like this: something Agile is cookin’. And it’s BIG!

In this blog post I will explain why and how Agile will develop in the near future.

Introduction; what’s happening?

Humankind is currently facing the biggest era change since the 19th century. Our industries, education, technologies and society are simply no longer compliant with today’s and tomorrow’s needs. Some systems, like healthcare and the economy, are so broken they actually should be reinvented. Everything has just become too vulnerable and complex. Merely “lubricating the engine,” like quantitative easing in the economy, is not a sustainable solution anymore. As Russell Ackoff already said, you can only fix a system as a whole, not by its separate parts alone.

This reinvention will only succeed when we are able to learn and adjust our systems very rapidly. Agile, Lean and a different way of organizing ourselves can make this a reality. Lean will provide us with the right tools to do exactly what’s needed, nothing more, nothing less. But applying Lean for efficiency purposes only will not bring the innovations and creativity we need. We also need an additional catalyst and engine: Agile. It will provide us with the right mindset and tools to innovate, inspect and adapt very fast. And finally, we must reorganize ourselves around cooperation, not directive command and control; that was useful in the industrial revolution, not in our modern, complex times.

Agile’s for everything

Contrary to what most people think, Agile is not only a software development tool. You can apply it to almost everything. For example, as Xebia consultants we’ve successfully coached Agile and Lean non-IT initiatives in marketing, innovation, education, automotive, aviation and non-profit organizations. It simply works, and how: a productivity increase of 500% is no exception. But above all, team members and customers are much happier.

Agile's for everybody

At this moment, a lot of people are still unconsciously addicted to their patterns and unaware of the unlimited possibilities out there. It’s like taking a walk in the forest. You can bring your own lunch like you always do, but there is some delicious fruit out there for free! Technologies like 3D printing offer unlimited possibilities straight on your desk, where only a few years ago you needed a complicated, million-dollar machine for this. The same goes for Agile. It’s open source and out there waiting for you. It will also help you get more out of all these awesome new developments!

The maturity of Agile explained

Until recently, most agile initiatives emerged bottom up, but stalled on a misfit between Agile and (conventional) organizations.  Loads of software was produced, but could not be brought to production, simply because the whole development chain was not Agile yet. Tools like TDD and Continuous Integration improved the situation significantly, but dependencies were still not managed properly most of the time.


In the last couple of years, some good scaled agile frameworks like LeSS and SAFe emerged. They manage the dependencies better, but do not directly encourage the Agile mindset and motivation of people. In parallel, departments like HR, Control and Finance were struggling with Agile. A scaled agile framework would be implemented, but the hierarchical organization structure was not adjusted, creating a gap between fast-moving Agile teams and departments still hindered by non-Agile procedures, processes and systems.

Therefore, we see conventional organizations moving towards a more Agile, community-based model like Spotify, Google or Zappos. ING is now transforming towards a similar organization model.

Predictions for the near future

My expectation is that we will see Agile transformations continue on a much wider scale. For example, people will develop their own products in an agile fashion while using 3D printing. Governments will use Agile and Holacracy to solve issues like reshaping the economic system together with society. Or, like I observed last week, the State of Washington is using these techniques successfully to solve the challenges it is facing.

For me, it currently feels like the early Nineties again, when the Internet emerged. Back then I explained to many people that the Internet would be like electricity for them in the near future. Most people laughed and stated it was just a new way of communicating. The same applies now to the Agile mindset. It’s not just a hype or a corporate tool. It will reshape the world as we know it today.

Stuff The Internet Says On Scalability For May 15th, 2015

Hey, it's HighScalability time:


Stand atop a volcano and survey the universe. (By Shane Black & Judy Schmidt)
  • 1 million: Airbnb's room inventory; 2 billion: Telegram messages sent daily; Two billion: photos shared daily on Facebook; 10,000: sensors in every Airbus wing
  • Quotable Quotes:
    • Silicon Valley: “We’re about shaving yoctoseconds off latency for every layer in the stack,” he said. “If we rent from a public cloud, we’re using servers that are, by definition, generic and unpredictable.”
    • @liviutudor: Netflix: approx 250 Cassandra clusters over 7,000+ server instances #cloud
    • @GreylockVC: "More billion-dollar marketplaces will be created in the next five years than in the previous 20." - @simonrothman 
    • CDIXON: Exponential growth curves in the “feels gradual” phase are deceptive. There are many things happening today in technology that feel gradual and disappointing but will soon feel sudden and amazing.
    • @badnima: OH: "The gossip protocol has reached its scaling limits"
    • marcosdumay: People get pretty excited every time physicists talk about information. The bottom line is that information manipulation is just Math, viewed by a different angle.
    • Bill Janeway: There's only one way to hedge against uncertainty in venture capital...cash and control. Enough cash that when something goes wrong you can buy time to figure out what is and assess what you can do about it. 
    • zylo4747's coworker: Where's the step about preparing to have all your plans crushed and rushing shit out the door as fast as possible?
    • Martin Fowler: don't even consider microservices unless you have a system that's too complex to manage as a monolith. 
    • @postwait: Ingesting, querying, & visualizing data isn't a monitoring system. It isn't even sufficient plumbing for such a system. #srecon15europe
    • @techsummitpr: "Up to date weather conditions? It's not a marvel from Google, it's a marvel from the National Weather Service." @timoreilly #techsummitpr
    • @sovereignfund: Verified as legit: The top 25 hedge fund managers earn more than all kindergarten teachers in U.S. combined. 
    • Adrian Colyer: In their evaluation, the authors found that mixing MapReduce and memcached traffic in the same network extended memcached latency in the tail by 85x compared to an interference free network. 
    • @BenedictEvans: US ecommerce revenues 1999: $12bn 2013: $219bn
    • Gregory Hickok: the brain samples the world in rhythmic pulses, perhaps even discrete time chunks, much like the individual frames of a movie. From the brain’s perspective, experience is not continuous but quantized.
    • David Bollier: There is no master inventory of commons. They can arise whenever a community decides it wishes to manage a resource in a collective manner, with a special regard for equitable access, use and sustainability.

  • What’s Next for Moore’s Law?: I predict that Intel's 10nm process technology will use Quantum Well FETs (QWFETs) with a 3D fin geometry, InGaAs for the NFET channel, and strained Germanium for the PFET channel, enabling lower voltage and more energy efficient transistors in 2016, and the rest of the industry will follow suit at the 7nm node.

  • Don't read How to Build a Unicorn From Scratch – and Walk Away with Nothing if you are easily frightened. Years of work down the drain. **chills** To walk safely through the Valley: Focus on terms, not just valuation; Build a waterfall; Don’t do bad business deals just to get investment capital; Understand the motivations of others; Understand your own motivation.

  • How do you build a real-time chat system? Scaling Secret: Real-time Chat. Goal was to handle 50,000 simultaneous conversations. Pusher was used to deliver messages. For a database Secret used Google App Engine’s High-Replication Datastore. Some nice details on the schema and other issues. Good thread on HN where the main point of contention is should an expensive service like Pusher be used to do something so simple? Usual arguments about wasting money vs displaying your hacker plumage. 

  • Under the hood: Facebook’s cold storage system. A top to bottom reengineering to save power for infrequently accessed photos. Yes, that's cool. Each cold storage datacenter uses 1/6th the energy as a normal datacenter while storing hundreds of petabytes of data. Erasure coding is used to store data. Data is scanned every 30 days to recreate any lost data.  As capacity is added data is rebalanced to the new racks. No file system is used at all. 

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Exercise or Games? Why Not Both!

Android Developers Blog - Fri, 05/15/2015 - 07:52

Posted by Alice Ching, Google Engineer

We are pleased to announce the release of Games in Motion, an open source game sample to demonstrate how developers can make fun games using Google Fit and Android Wear. Do you ever go on a jog and feel like there is a lack of incentive to help you run better? What if you were a secret agent and had to use your speed and your nifty gadget to complete missions?


With Games in Motion, you can enhance your exercise with missions and actions on your Android Wear device, while logging your jogs to the cloud.

Games in Motion is written in the Java programming language using Android Studio. It demonstrates multiple Android technologies.

  • Android Wear bridges notifications from a phone or tablet to a paired Android Wear device. The notifications are stacked so we can show multiple stats at the same time.
  • Google Fit API collects and processes fitness data and sessions. This allows us to use the fitness data to show user progress. All exercise sessions done in Games in Motion will be recorded to Google Fit as well.
  • Google Play Games Services is used to create and unlock achievements.
  • Several different Android audio APIs are integrated.
  • JUnit tests are present for the data-driven parser, which demonstrates how unit testing can be done within Android Studio.

You can download the latest open source release from GitHub. We hope to inspire similar Android games, where multiple different form factors are combined for a fun experience.

Categories: Programming

What are Cognitive Biases?

Many of us have a cognitive bias toward eating scorpions!

Cognitive biases are patterns of behavior that reflect a deviation in judgment that occurs under particular situations. The phrase cognitive bias was introduced by Amos Tversky and Daniel Kahneman in 1972. Biases can affect how information is perceived, how teams and individuals behave and even our perception of ourselves. Biases are a part of nearly every human interaction, so we need to understand the potential biases that are in play if we are going to help teams grow and evolve.

Project teams make decisions on a continuous basis. Most decisions are made based on how the decision maker perceives the information he or she has at hand. One bias that can affect how information is perceived is the illusory correlation: the perception of a relationship between two or more variables when no relationship exists. An example would be believing that a team that works more hours a week has higher productivity, because working longer gives the perception of creating more output. The perceived relationship causes you to pay less attention to other factors, such as the higher level of effort the team is expending. There are numerous biases that affect how information is perceived, and these biases can impact the outcome of decisions or even whether we make needed decisions at all.

Biases can affect behavior. Neglect of probability is a type of cognitive bias common in IT organizations that are planning and estimating projects or managing risk. For example, most estimates should be represented as a range based on probability. Techniques like Monte Carlo analysis can be used to generate a range of probability-based estimates to address this type of bias. However, almost all estimates are represented as a single number, and regardless of all the caveats attached, the continuum of probability is ignored. Lottery ticket sales are another reflection of the neglect of probability bias: buying one ticket or ten doesn’t materially affect the probability of winning, but that does not stop those who think buying ten tickets increases their chances. In both cases neglecting probability affects how we behave and make decisions.
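As a concrete illustration, a few lines of Python are enough to turn three-point task estimates into a probability-based range; the task numbers below are invented for illustration:

# Sketch: Monte Carlo estimation over (best, most likely, worst) task
# estimates, each modeled with a triangular distribution.
import random

tasks = [
    (5, 8, 20),   # (best, most likely, worst) effort in days
    (3, 5, 10),
    (8, 13, 30),
]

def simulate(tasks, runs=10000):
    # random.triangular(low, high, mode) draws one sample per task per run.
    return sorted(
        sum(random.triangular(best, worst, likely)
            for best, likely, worst in tasks)
        for _ in range(runs)
    )

totals = simulate(tasks)
for pct in (50, 80, 95):
    print("P%d: %.1f days" % (pct, totals[len(totals) * pct // 100 - 1]))

Reporting the P50 and P80 together makes the range, and the risk, visible in a way a single number cannot.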

Biases can affect our motivation. For example, a self-serving attributional bias occurs when success is attributed to internal factors and failures are attributed to external factors. This type of bias can occur at an individual level or at the team level. While self-serving bias can improve self-esteem (or a team’s self-esteem), it can also cloud judgment by causing an overestimation of capability. For example, if a team is able to deliver more than their long-term productivity or velocity would predict, the team might then perceive that they have increased their capability to deliver. If no fundamental changes have occurred, such as an infusion of knowledge, training or new tools, the higher velocity may not be attributable to the team. A good coach will help teams examine these types of biases during retrospectives.

Biases are powerful psychological filters that affect how both individuals and teams perceive the world around them and then how they behave. Biases reflect shortcuts in how we interpret and react to stimuli. In many cases these reactions are valuable; however, they can also cause problems (as many shortcuts can). Understanding how biases impact how individuals and teams perceive the world around them can help teams make better decisions and therefore deliver value more effectively.


Categories: Process Management

Complex, Complexity, Complicated

Herding Cats - Glen Alleman - Thu, 05/14/2015 - 18:47

In the agile community it is popular to use the terms complex, complexity, and complicated interchangeably, and many times wrongly. These terms are often overloaded with an agenda used to push a process or even a method.

First some definitions

  • Complex - consisting of many different and connected parts; not easy to analyze or understand; complicated or intricate. When a system or problem is considered complex, analytical approaches, like dividing it into parts to make the problem tractable, are not sufficient, because it is the interactions of the parts that make the system complex, and without these interconnections the system no longer functions.
  • Complex System - is a functional whole, consisting of interdependent and variable parts. Unlike conventional systems, the parts need not have fixed relationships, fixed behaviors or fixed quantities, and their individual functions may be undefined in traditional terms.
  • Complicated - containing a number of hidden parts, which must be revealed separately because they do not interact. Mutual interaction of the components creates nonlinear behaviors in the system. In principle all systems are complex. The number of parts or components is irrelevant in the definition of complexity. There can be complexity - nonlinear behavior - in small systems or large systems.
  • Complexity - there is no standard definition of complexity; one view of systems suggests that simple causes can result in complex effects. Complexity as a term is generally used to characterize a system with many parts whose interactions with each other occur in multiple ways. Complexity can occur in a variety of forms:
    • Complex behaviour
    • Complex mechanisms
    • Complex situations
    • Complex systems
    • Complex data
  • Complexity Theory - states that critically interacting components self-organize to form potentially evolving structures exhibiting a hierarchy of emergent system properties. This theory takes the view that systems are best regarded as wholes, and studied as such, rejecting the traditional emphasis on simplification and reduction as inadequate techniques on which to base this sort of scientific work.

One more item we need is the types of Complexity

  • Type 1 - fixed systems, where the structure doesn't change as a function of time.
  • Type 2 - systems where time causes changes. This can be repetitive cycles or change with time.
  • Type 3 - moves beyond repetitive systems into organic systems, where change is extensive and non-cyclic in nature.
  • Type 4 - self-organizing systems, where we can combine the internal constraints of closed systems, like machines, with the creative evolution of open systems, like people.

And Now To The Point

When we hear complex, complexity, complex systems, or complex adaptive systems, pause to ask: what kind of complex are you talking about? What Type of complex system? To what system are you applying the term complex? Have you classified that system in a way that actually matches a real system?

It is common for the terms complex, complicated, and complexity to be used interchangeably, and for software development to be classified - or mis-classified - as one, two, or all three. It is also common to toss these terms around with no actual understanding of their meaning or application.

We need to move beyond buzzwords - words like Systems Thinking. Building software is part of a system. There are interacting parts that, when assembled, produce an outcome - hopefully a desired outcome. In the case of software the interacting parts are more than just the parts. Software has emergent properties: a Type 4 system, built from Type 1, 2, and 3 systems. With changes in time and uncertainty, modeling these systems requires stochastic processes. These processes depend on estimating behaviors as a starting point.

The understanding that software development is an uncertain (stochastic) process is well known, starting in the 1980s [1] with COCOMO. Later models, like the Cone of Uncertainty, made it clear that these uncertainties themselves evolve with time. The current predictive models based on stochastic processes include Monte Carlo Simulation of networks of activities, Real Options, and Bayesian Networks. Each is directly applicable to modeling software development projects.

[1] Software Engineering Economics, Barry Boehm, Prentice-Hall, 1981.

Categories: Project Management

Monte Carlo Simulation of Project Performance

Herding Cats - Glen Alleman - Thu, 05/14/2015 - 17:30

Project work is random. Most everything in the world is random. The weather, commuter traffic, the productivity of writing and testing code. Few things actually take as long as they are planned. Cost is less random, but there are variances in the cost of labor and the availability of labor. Mechanical devices have variances as well.

The exact fit of a water pump on a Toyota Camry is not the same for each pump. There is a tolerance in the mounting holes and in the volume of water pumped. This is a variance in the technical performance.

Managing in the presence of these uncertainties is part of good project management. But there are two distinct paradigms of managing in the presence of these uncertainties.

  1. We have empirical data of the variances. We have samples of the hole positions and sizes of the water pump mounting plate for the last 10,000 pumps that were installed. We have samples of how long it took to write a piece of code and the attributes of the code that are correlated to that duration. We have empirical measures.
  2. We have a theoretical model of the water pump in the form of a 3D CAD model, with the materials modeled for expansion, drilling errors of the holes, and other static and dynamic variances. We model the duration of work using a Probability Distribution Function (PDF) and a three-point estimate of the Most Likely, Pessimistic, and Optimistic durations. These can be derived from past performance, but we don't have enough actual data to produce the PDF and have a low enough sample error for our needs.

In the first case we have empirical data. In the second case we don't. There are two approaches to modeling what the system will do in terms of cost and schedule outcomes.

Bootstrapping the Empirical Data

With samples of past performance and the proper statistical assessment of those samples, we can re-sample them to produce a model of future performance. This bootstrap resampling shares the principle of the second method - Monte Carlo Simulation - but with several important differences.

  • The researcher - and we are researching what the possible outcomes might be from our model - does not know nor have any control of the Probability Distribution Function that generated the past samples. You take what you got.
  • As well, we don't have any understanding of why those samples appear as they do. They're just there. We get what we get.
  • This last piece is critical because it prevents us from defining what performance must be in place to meet some future goal. We can't tell what performance we need because we have no model of the needed performance, just samples from the past.
  • This results from the statistical condition that there is a PDF for the process that is unobserved. All we have is a few samples from this process.
  • With these few samples, we're going to resample them to produce a modeled outcome. This resampling locks in the behavior of the future using the samples from the past, which may or may not actually represent the true underlying behavior. This may be all we can do, because we don't have any theoretical model of the process.

This bootstrapping method is simple and produces a quick result. But it has issues that must be acknowledged.

  • There is a fundamental assumption that the past empirical samples represent the future. That is, the samples contained in the bootstrapped list and their resampling are also contained in all the future samples.
  • Said in a more formal way
    • If the sample of data we have from the past is a reasonable representation of the underlying population of all samples from the work process, then the distribution of parameter estimates produced from the bootstrap model on a series of resampled data sets will provide a good approximation of the distribution of those statistics in the population.
    • With this sample data and its parameters (statistical moments) we can make a good approximation of the future.
  • There are some important statistical behaviors that must be considered, starting with the assumption that future samples are statistically identical to the past samples:
    • Nothing is going to change in the future
    • The past and the future are identical statistically
    • In the project domain that is very unlikely
  • With all these conditions, for a small project with few if any interdependencies and a static work process with little variance, bootstrapping is a nice quick-and-dirty approach to forecasting (estimating the future) based on the past, as the sketch below illustrates.
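
Here is a minimal sketch of that bootstrapping approach (the past durations and remaining-work count are hypothetical); it resamples the observed data with replacement to forecast the remaining work:

    import java.util.Arrays;
    import java.util.Random;

    public class BootstrapForecast {
        public static void main(String[] args) {
            // Hypothetical observed durations (days) for past work items.
            double[] past = {3, 5, 4, 8, 2, 6, 5, 9, 4, 7};
            int remainingItems = 20;  // work items left in the project
            int trials = 10_000;
            Random rng = new Random(7);
            double[] totals = new double[trials];

            // Resample the past with replacement to model the remaining work.
            for (int t = 0; t < trials; t++) {
                double total = 0;
                for (int i = 0; i < remainingItems; i++) {
                    total += past[rng.nextInt(past.length)];
                }
                totals[t] = total;
            }
            Arrays.sort(totals);
            // The spread of totals approximates the sampling distribution,
            // assuming the future behaves exactly like the sampled past.
            System.out.printf("P50: %.0f days, P85: %.0f days%n",
                    totals[trials / 2], totals[(int) (trials * 0.85)]);
        }
    }

Note that nothing in the model explains why the past looked this way - the resampling simply replays it.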

Monte Carlo Simulation

This approach is more general and removes many of the restrictions tied to the statistical confidence of bootstrapping.

Just as a reminder, in principle both the parametric and the non-parametric bootstrap are special cases of Monte Carlo simulations used for a very specific purpose: estimate some characteristics of the sampling distribution. But like all principles, in practice there are larger differences when modeling project behaviors.

In the more general approach of Monte Carlo Simulation, the algorithm repeatedly creates random data in some way, performs some modeling with that random data, and collects some result, for example:

  • The duration of a set of independent tasks
  • The probabilistic completion date of a series of tasks connected in a network (schedule), each with a different Probability Distribution Function evolving as the project moves into the future.
  • A probabilistic cost correlated with the probabilistic schedule model. This is called the Joint Confidence Level. Both cost and schedule are random variables with time-evolving changes in their respective PDFs.

In practice, when we hear Monte Carlo simulation we are talking about a theoretical investigation, e.g. creating random data with no empirical content - or from reference classes - used to investigate whether an estimator can represent known characteristics of this random data, while the (parametric) bootstrap refers to an empirical estimation and is not necessarily a model of the underlying processes, just a small sample of observations independent from the actual processes that generated that data.

The key advantage of MCS is that we don't necessarily need past empirical data. MCS can be used to advantage if we do, but we don't need it for the Monte Carlo Simulation algorithm to work.

This approach could be used to estimate some outcome, like in the bootstrap, but also to theoretically investigate some general characteristic of a statistical estimator (cost, schedule, technical performance) which is difficult to derive from empirical data.

MCS removes the roadblock heard in many critiques of estimating - we don't have any past data on which to estimate. No problem: build a model of the work and the dependencies between that work, assign statistical parameters to the individual or collected PDFs, and run the MCS to see what comes out. The sketch below shows this on a toy network.
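
As a sketch of that idea, the toy model below (all durations and dependencies are hypothetical) runs a Monte Carlo Simulation over a four-task network where B and C run in parallel after A, and D needs both:

    import java.util.Arrays;
    import java.util.Random;

    public class NetworkSimulation {
        // Triangular sampling from a three-point estimate (a / m / b).
        static double triangular(Random rng, double a, double m, double b) {
            double u = rng.nextDouble();
            double cut = (m - a) / (b - a);
            return u < cut
                    ? a + Math.sqrt(u * (b - a) * (m - a))
                    : b - Math.sqrt((1 - u) * (b - a) * (b - m));
        }

        public static void main(String[] args) {
            Random rng = new Random(11);
            int trials = 10_000;
            double[] finish = new double[trials];
            for (int t = 0; t < trials; t++) {
                double a = triangular(rng, 4, 5, 9);   // task A
                double b = triangular(rng, 2, 3, 7);   // task B, after A
                double c = triangular(rng, 5, 6, 12);  // task C, after A, parallel to B
                double d = triangular(rng, 3, 4, 8);   // task D, after B and C
                finish[t] = a + Math.max(b, c) + d;    // critical path for this trial
            }
            Arrays.sort(finish);
            System.out.printf("80%% confidence completion: %.1f days%n",
                    finish[(int) (trials * 0.80)]);
        }
    }

Because the PDFs are explicit inputs, we can tighten or shift them to see what performance must look like for the plan to hold.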

This approach has several critical advantages:

  • The first is a restatement - we don't need empirical data, although it will add value to the modeling process.
    • This is the primary purpose of Reference Classes
    • They are the raw material for defining possible future behaviors from the past
  • We can make a judgment of what the future will be like or, most importantly, what the future MUST be like to meet our goals, run the simulation, and determine if our planned work will produce the desired result.

So Here's the Killer Difference

Bootstrapping models make several key assumptions, which may not be true in general. So they must be tested before accepting any of the outcomes.

  • The future is like the past.
  • The statistical parameters are static - they don't evolve with time. That is, the future is like the past - an unlikely prospect on any non-trivial project.
  • The sampled data is identical to the population data both in the past and in the future.

Monte Carlo Simulation models provide key value that bootstrapping can't.

  • Different Probability Distribution Functions can be assigned to work as it progresses through time
  • The shape of that PDF can be defined from past performance, or defined from the needed performance.

The critical difference between Bootstrapping and Monte Carlo Simulation is that MCS can show what future performance has to be to stay on schedule (within variance), stay on cost, and have the technical performance meet the needs of the stakeholder.

Bootstrapping can only show what the future will be like if it is like the past, not what it must be like. In Bootstrapping this future MUST be like the past. In MCS we can tune the PDFs to show what performance has to be if we are to manage to that plan. Bootstrapping is reporting yesterday's weather as tomorrow's weather - just like Steve Martin in LA Story. If tomorrow's weather turns out not to be like yesterday's weather, you're going to get wet.

MCS can forecast tomorrow's weather by assigning PDFs to future activities that are different from past activities; then we can make any needed changes in that future model to alter the weather to meet our needs. This is in fact how weather forecasts are made - with much more sophisticated models, of course - here at the National Center for Atmospheric Research in Boulder, CO.

This forecasting (estimating the future state) of possible outcomes, and the alteration of those outcomes through management actions to change dependencies, add or remove resources, provide alternatives to the plan (on ramps and off ramps of technology, for example), buy down risk, apply management reserve, assess impacts of rescoping the project, etc., is what project management is all about.

Bootstrapping is necessary but far from sufficient for any non-trivial project to show up on or before the need date (with schedule reserve), at or below the budgeted cost (with cost reserve), and have the product or service provide the needed capabilities (technical performance reserve).

Here's an example of a probabilistic forecast of project performance from an MCS (Risky Project). This picture shows the probability for cost, finish date, and duration. It is built on time-evolving PDFs assigned to each activity in a network of dependent tasks, which models the work stream needed to complete as planned.

When that future work stream is changed to meet new requirements, to correct unfavorable past performance, or to reflect changes in any or all of the underlying random variables, the MCS can show us the expected impact on key parameters of the project so management intervention can take place - since Project Management is a verb.

[Figure: probabilistic forecast of cost, finish date, and duration from Risky Project]

The connection between the Bootstrap and Monte Carlo simulation of a statistic is simple.

Both are based on repetitive sampling and then direct examination of the results.

But there are significant differences between the methods (hence the difference in names and algorithms). Bootstrapping uses the original, initial sample as the population from which to resample. Monte Carlo Simulation uses a data generation process, with known values of the parameters of the Probability Distribution Function. The common algorithm for correlating variables in MCS is Lurie-Goldberg. Monte Carlo is used to test that the results of the estimators produce desired outcomes on the project and, if not, to allow the modeler and her management to change those estimators and then manage to the changed plan.

Bootstrap can be used to estimate the variability of a statistic and the shape of its sampling distribution from past data. Then, assuming the future is like the past, we can make forecasts of throughput, completion, and other project variables.

In the end, the primary difference (and again the reason for the difference in names) is that Bootstrapping is based on unknown distributions - sampling and assessing the shape of the distribution in Bootstrapping adds no value to the outcomes - while Monte Carlo is based on known or defined distributions, usually from Reference Classes.

Categories: Project Management

The 5 Dollar Rule

Making the Complex Simple - John Sonmez - Thu, 05/14/2015 - 16:00

In this episode I share some great financial advice that can be quite beneficial if implemented as a rule in your daily life. Full transcript: John: Hey, this is John Sonmez from simpleprogrammer.com and today I’m actually going to change it up a little bit and I’m not going to do a question. I’ve been […]

The post The 5 Dollar Rule appeared first on Simple Programmer.

Categories: Programming

Google Play services 7.0 - Places Everyone!

Android Developers Blog - Thu, 05/14/2015 - 04:25

Posted by Ian Lake, Developer Advocate

Today, we’re bringing you new tools to build better apps with the completion of the rollout of Google Play services 7.0. With this release, we’re delivering improvements to location settings experiences, a brand new API for place information, new fitness data, Google Play Games, and more.

Location Settings Dialog

While the FusedLocationProviderApi combines multiple sensors to give you the optimal location, the accuracy of the location your app receives still depends greatly on what settings are enabled on the device (e.g. GPS, wifi, airplane mode, etc.). In Google Play services 7.0, we’re introducing a standard mechanism to check that the necessary location settings are enabled for a given LocationRequest to succeed. If there are possible improvements, you can display a one touch control for the user to change their settings without leaving your app.

This API provides a great opportunity to build a much better user experience, particularly if location information is critical to your app, as was the case with Google Maps when it integrated the Location Settings dialog and saw a dramatic increase in the number of users in a good location state.
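
Here's a minimal sketch of the new check, assuming an Activity with an already-connected GoogleApiClient; MyActivity and REQUEST_CHECK_SETTINGS are placeholder names:

    import android.content.IntentSender;
    import com.google.android.gms.common.api.ResultCallback;
    import com.google.android.gms.common.api.Status;
    import com.google.android.gms.location.LocationRequest;
    import com.google.android.gms.location.LocationServices;
    import com.google.android.gms.location.LocationSettingsRequest;
    import com.google.android.gms.location.LocationSettingsResult;
    import com.google.android.gms.location.LocationSettingsStatusCodes;

    // Describe the location needs of the app...
    LocationRequest request = LocationRequest.create()
            .setPriority(LocationRequest.PRIORITY_HIGH_ACCURACY);
    LocationSettingsRequest settingsRequest = new LocationSettingsRequest.Builder()
            .addLocationRequest(request)
            .build();

    // ...then ask whether the current device settings can satisfy them.
    LocationServices.SettingsApi.checkLocationSettings(googleApiClient, settingsRequest)
            .setResultCallback(new ResultCallback<LocationSettingsResult>() {
                @Override
                public void onResult(LocationSettingsResult result) {
                    Status status = result.getStatus();
                    if (status.getStatusCode()
                            == LocationSettingsStatusCodes.RESOLUTION_REQUIRED) {
                        try {
                            // One-touch dialog: the user can fix their settings
                            // without leaving the app.
                            status.startResolutionForResult(
                                    MyActivity.this, REQUEST_CHECK_SETTINGS);
                        } catch (IntentSender.SendIntentException e) {
                            // The dialog could not be shown; degrade gracefully.
                        }
                    }
                }
            });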

Places API

Location can be so much more than a latitude and longitude: the new Places API makes it easy to get details from Google’s database of places and businesses. The built-in place picker makes it easy for the user to pick their current place and provides all the relevant place details including name, address, phone number, website, and more.

If you prefer to provide your own UI, the getCurrentPlace() API returns places directly around the user’s current location. Autocomplete predictions are also provided to allow a low latency search experience directly within your app.

You can also manually add places with the addPlace() API and report that the user is at a particular place, ensuring that even the most explorative users can input and share their favorite new places.
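Here's a minimal sketch of the built-in place picker from inside an Activity; the request code is arbitrary:

    import android.content.Intent;
    import com.google.android.gms.location.places.Place;
    import com.google.android.gms.location.places.ui.PlacePicker;

    private static final int PLACE_PICKER_REQUEST = 1;

    // Launch the drop-in picker UI (build() can throw if Play services
    // needs to be installed or updated).
    void pickPlace() throws Exception {
        PlacePicker.IntentBuilder builder = new PlacePicker.IntentBuilder();
        startActivityForResult(builder.build(this), PLACE_PICKER_REQUEST);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == PLACE_PICKER_REQUEST && resultCode == RESULT_OK) {
            // The picked place carries name, address, phone number, and more.
            Place place = PlacePicker.getPlace(data, this);
            CharSequence name = place.getName();
        }
    }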

The Places API will also be available cross-platform: in a few days, you’ll be able to apply for the Places API for iOS beta program to ensure a great and consistent user experience across mobile platforms.

Google Fit

Google Fit makes building fitness apps easier with fitness-specific APIs for retrieving sensor data like current location and speed, collecting and storing activity data in Google Fit’s open platform, and automatically aggregating that data into a single view of the user’s fitness data.

In Google Play services 7.0, the previous Fitness.API that you passed into your GoogleApiClient has now been replaced with a number of APIs, matching the high level set of Google Fit Android APIs:

  • SENSORS_API to access raw sensor data via SensorsApi
  • RECORDING_API to record data via RecordingApi
  • HISTORY_API for inserting, deleting, or reading data via HistoryApi
  • SESSIONS_API for managing sessions via SessionsApi
  • BLE_API to interact with Bluetooth Low Energy devices via BleApi
  • CONFIG_API to access custom data types and settings for Google Fit via ConfigApi

This change significantly reduces the memory requirement for Google Fit enabled apps running in the background. Like always, apps built on previous versions of Google Play services will continue to work, but we strongly suggest you rebuild your Google Fit enabled apps to take advantage of this change.
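
In practice that means requesting only the API surfaces you use when building your client. A minimal sketch, assuming an Activity and the activity-read scope:

    import com.google.android.gms.common.Scopes;
    import com.google.android.gms.common.api.GoogleApiClient;
    import com.google.android.gms.common.api.Scope;
    import com.google.android.gms.fitness.Fitness;

    // Request only the Fit surfaces the app needs, instead of the old
    // catch-all Fitness.API that pulled in everything.
    GoogleApiClient client = new GoogleApiClient.Builder(this)
            .addApi(Fitness.SENSORS_API)
            .addApi(Fitness.RECORDING_API)
            .addScope(new Scope(Scopes.FITNESS_ACTIVITY_READ))
            .build();
    client.connect();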

Having all the data can be an empowering part of making meaningful changes and Google Fit is augmenting their existing data types with the addition of body fat percentage and sleep data.

Google Mobile Ads

We’ve found the integration of AdMob and Google Analytics a powerful combination for analyzing how your users really use your app since we launched Google Analytics in AdMob last year. This new release enables any Google Mobile Ads SDK implementation to automatically get Google Analytics integration, giving you the number of users and sessions, session duration, operating systems, device models, geography, and automatic screen reporting without any additional development work.

In addition, we’ve made numerous improvements across the SDK including ad request prefetching (saving battery usage and improving apparent latency) and making the SDK MRAIDv2 compliant.

Google Play Games

Announced at Game Developers Conference (GDC), we’re offering new tools to supercharge your games on Google Play. Included in Google Play services 7.0 is the Nearby Connections API, allowing games to seamlessly connect smartphones and tablets as second-screen controls to the game running on your TV.

App Indexing

App Indexing lets Google index apps just like websites, enabling Google search results to deep-link directly into your native app. We've simplified the App Indexing API to make this integration even easier for you by combining the existing view()/viewEnd() and action()/end() flows into a single start() and end() API.
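
Here's a minimal sketch of the simplified flow; the GoogleApiClient is assumed to be built with AppIndex.API, and the titles and URLs are hypothetical:

    import android.net.Uri;
    import com.google.android.gms.appindexing.Action;
    import com.google.android.gms.appindexing.AppIndex;

    // Describe the content being viewed: title, web URL, and app deep link.
    Action viewAction = Action.newAction(
            Action.TYPE_VIEW,
            "Example Recipe",
            Uri.parse("http://example.com/recipes/1"),
            Uri.parse("android-app://com.example.app/http/example.com/recipes/1"));

    AppIndex.AppIndexApi.start(client, viewAction);  // content is on screen
    // ... later, when the user navigates away:
    AppIndex.AppIndexApi.end(client, viewAction);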

Changes to GoogleApiClient

GoogleApiClient serves as the common entry point for accessing Google APIs. For this release, we’ve made retrieval of Google OAuth 2.0 tokens part of GoogleApiClient, making it much easier to request server auth codes to access Google APIs.

SDK Now Available!

You can get started developing today by downloading the Google Play services SDK from the Android SDK Manager.

To learn more about Google Play services and the APIs available to you through it, visit the Google Services section on the Android Developer site.

Join the discussion on

+Android Developers
Categories: Programming

New Tools to Supercharge Your Games on Google Play

Android Developers Blog - Thu, 05/14/2015 - 04:11

Posted by Greg Hartrell, Senior Product Manager of Google Play Games

Everyone has a gaming-ready device in their pocket today. In fact, of the one billion Android users in more than 190 countries, three out of four of them are gamers. This allows game developers to reach a global audience and build a successful business. Over the past year, we paid out more than $7 billion to developers distributing apps and games on Google Play.

At our Developer Day during the Game Developers Conference (GDC) taking place this week, we announced a set of new features for Google Play Games and AdMob to power great gaming. Rolling out over the next few weeks, these launches can help you better measure and monetize your games.

Better measure and adapt to player needs

“Player Analytics has helped me hone in on BombSquad’s shortcomings, right the ship, and get to a point where I can financially justify making the games I want to make.”

Eric Froemling, BombSquad developer

Google Play Games is a set of services that help game developers reach and engage their audience. To further that effort, we’re introducing Player Analytics, giving developers access to powerful analytics reports to better measure overall business success and understand in-game player behavior. Launching in the next few weeks in the Google Play Developer Console, the new tool will give indie developers and big studios better insight into how their players are progressing, spending, and churning; access to critical metrics like ARPPU and sessions per user; and assistance setting daily revenue targets.

BombSquad, created by a one-person game studio in San Francisco, was able to more than double its revenue per user on Google Play after implementing design changes informed during beta testing Player Analytics.

Optimizing ads to earn the most revenue

After optimizing your game for performance, it’s important to build a smarter monetization experience tailored to each user. That’s why we’re announcing three important updates to the AdMob platform:

  • Native Ads: Currently available as a limited beta, participating game developers will be able to show ads in their app from Google advertisers, and then customize them so that users see ads that match the visual design of the game. Atari is looking to innovate on its games, like RollerCoaster Tycoon 4 Mobile, and more effectively engage users with this new feature.
  • In-App Purchase House Ads Beta: Game developers will be able to smartly grow their in-app purchase revenue for free. AdMob can now predict which users are more likely to spend on in-app purchases, and developers will be able to show these users customized text or display ads promoting items for sale. Currently in beta, this feature will be coming to all AdMob accounts in the next few weeks.
  • Audience Builder: A powerful tool that enables game developers to create lists of audiences based on how they use their game. They will be able to create customized experiences for users, and ultimately grow their app revenue.

"Atari creates great game experiences for our broad audience. We're happy to be partnering with Google and be the first games company to take part in the native ads beta and help monetize games in a way that enhances our users' experience."

Todd Shallbetter, Chief Operating Officer, Atari

New game experiences powered by Google

Last year, we launched Android TV as a way to bring Android into the living room, optimizing games for the big screen. The OEM ecosystem is growing with announced SmartTVs and micro-consoles from partners like Sony, TPVision/Philips and Razer.

To make gaming even more dynamic on Android TV, we’re launching the Nearby Connections API with the upcoming update of Google Play services. With this new protocol, games can seamlessly connect smartphones and tablets as second-screen controls to the game running on your TV. Beach Buggy Racing is a fun and competitive multiplayer racing game on Android TV that plans to use Nearby Connections in their summer release, and we are looking forward to more living room multiplayer games taking advantage of mobile devices as second screen controls.

At Google I/O last June, we also unveiled Google Cardboard with the goal of making virtual reality (VR) accessible to everyone. With Cardboard, we are giving game developers more opportunities to build unique and immersive experiences from nothing more than a piece of cardboard and your smartphone. The Cardboard SDKs for Android and Unity enable you to easily build VR apps or adapt your existing app for VR.

Check us out at GDC

Visit us at the Google booth #502 on the Expo floor to get hands-on experience with Project Tango, Niantic Labs and Cardboard starting on Wednesday, March 4. Our teams from AdMob, AdWords, Analytics, Cloud Platform and Firebase will also be available to answer any of your product questions.

For more information on what we’re doing at GDC, please visit g.co/dev/gdc2015.

Join the discussion on

+Android Developers
Categories: Programming

Hello Places API for Android and iOS!

Android Developers Blog - Thu, 05/14/2015 - 04:09

Posted by Jen Kovnats Harrington, Product Manager, Google Maps APIs

Originally posted to Google Geo Developers blog

People don’t think of their location in terms of coordinates on a map. They want context on what shops or restaurants they’re at, and what’s around them. To help your apps speak your users’ language, we’re launching the Places API for Android, as well as opening a beta program for the Places API for iOS.

The Places API web service and JavaScript library have been available for some time. With native support for Android and iOS devices, the new APIs let you optimize the mobile experience by taking advantage of the device’s location signals.

The Places APIs for Android and iOS bridge the gap between simple geographic locations expressed as latitude and longitude, and how people associate location with a known place. For example, you wouldn’t tell someone you were born at 25.7918359,-80.2127959. You’d simply say, “I was born in Jackson Memorial Hospital in Miami, Florida.” The Places API brings the power of Google’s global places database into your app, providing more than 100 million places, like restaurants, local businesses, hotels, museums, and other attractions.

Key features include:

  • Add a place picker: a drop-in UI widget that allows your users to specify a place
  • Get the place where the user is right now (see the sketch after this list)
  • Show detailed place information, including the place’s name, address, phone number, and website
  • Use autocomplete to save your users time and frustration typing out place names, by automatically completing them as they type
  • Make your app stand out by adding new places that are relevant to your users and seeing the places appear in Google's Places database
  • Improve the map around you by reporting the presence of a device at a particular place.
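
As a concrete example, here's a minimal sketch of the current-place feature, assuming a connected GoogleApiClient; passing null uses no place filter:

    import com.google.android.gms.common.api.ResultCallback;
    import com.google.android.gms.location.places.PlaceLikelihood;
    import com.google.android.gms.location.places.PlaceLikelihoodBuffer;
    import com.google.android.gms.location.places.Places;

    Places.PlaceDetectionApi.getCurrentPlace(googleApiClient, null)
            .setResultCallback(new ResultCallback<PlaceLikelihoodBuffer>() {
                @Override
                public void onResult(PlaceLikelihoodBuffer likelyPlaces) {
                    for (PlaceLikelihood candidate : likelyPlaces) {
                        // Each candidate place carries a likelihood score.
                        CharSequence name = candidate.getPlace().getName();
                        float likelihood = candidate.getLikelihood();
                    }
                    likelyPlaces.release();  // buffers must be released
                }
            });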

To get started with the Places API for Android, watch this DevByte, check out the developer documentation, and play with the demos. To apply for the Places API for iOS beta program, go here.

Join the discussion on

+Android Developers
Categories: Programming

Developing audio apps for Android Auto

Android Developers Blog - Thu, 05/14/2015 - 04:09

Posted by Joshua Gordon, Developer Advocate

Have you ever wanted to develop apps for the car, but found the variety of OEMs and proprietary platforms too big of a hurdle? Now with Android Auto, you can target a single platform supported by vehicles coming soon from 28 manufacturers.

Using familiar Android APIs, you can easily add a great in-car user experience to your existing audio apps, with just a small amount of code. If you’re new to developing for Auto, watch this DevByte for an overview of the APIs, and check out the training docs for an end-to-end tutorial.


Playback and custom controls

Custom playback controls on NPR One and iHeartRadio.

The first thing to understand about developing audio apps on Auto is that you don’t draw your user interface directly. Instead, the framework has two well-defined UIs (one for playback, one for browsing) that are created automatically. This ensures consistent behavior across audio apps for drivers, and frees you from dealing with any car specific functionalities or layouts. Although the layout is predefined, you can customize it with artwork, color themes, and custom controls.

Both NPR One and iHeartRadio customize their UI. NPR One adds controls to mark a story as interesting, to view a list of upcoming stories, and to skip to the next story. iHeartRadio adds controls to favorite stations and to like songs. Both apps store user preferences across form factors.

Because the UI is drawn by the framework, playback commands need to be relayed to your app. This is accomplished with the MediaSession callback, which has methods like onPlay() and onPause(). All car specific functionality is handled behind the scenes. For example, you don’t need to be aware if a command came from the touch screen, the steering wheel buttons, or the user’s voice.
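
Here's a minimal sketch of relaying those commands with the framework's MediaSession (API 21+); the playback engine itself is assumed:

    import android.media.session.MediaSession;

    MediaSession session = new MediaSession(context, "MyAudioApp");
    session.setCallback(new MediaSession.Callback() {
        @Override
        public void onPlay() {
            // Start playback. The command may have come from the touch
            // screen, the steering wheel buttons, or voice - the source
            // is abstracted away by the framework.
        }

        @Override
        public void onPause() {
            // Pause playback.
        }
    });
    session.setActive(true);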

Browsing and recommendations

Browsing content on NPR One and iHeartRadio.

The browsing UI is likewise drawn by the framework. You implement the MediaBrowserService to share your content hierarchy with the framework. A content hierarchy is a collection of MediaItems that are either playable (e.g., a song, audio book, or radio station) or browsable (e.g., a favorites folder). Together, these form a tree used to display a browsable menu of your content.
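
A minimal sketch of sharing that hierarchy; the root id and the item population are placeholders:

    import android.media.browse.MediaBrowser.MediaItem;
    import android.os.Bundle;
    import android.service.media.MediaBrowserService;
    import java.util.ArrayList;
    import java.util.List;

    public class MyMediaService extends MediaBrowserService {
        @Override
        public BrowserRoot onGetRoot(String clientPackageName, int clientUid,
                Bundle rootHints) {
            // Hand the framework the root of your content tree.
            return new BrowserRoot("root", null);
        }

        @Override
        public void onLoadChildren(String parentId,
                Result<List<MediaItem>> result) {
            List<MediaItem> items = new ArrayList<>();
            // Populate playable (songs, stations) and browsable (folders)
            // MediaItems for this node of the tree.
            result.sendResult(items);
        }
    }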

With both apps, recommendations are key. NPR One recommends a short list of in-depth stories that can be selected from the browsing menu. These improve over time based on user feedback. iHeartRadio’s browsing menu lets you pick from favorites and recommended stations, and their “For You” feature gives recommendations based on user location. The app also provides the ability to create custom stations from the browsing menu. Doing so is efficient and requires only three taps (“Create Station” -> “Rock” -> “Foo Fighters”).

When developing for the car, it’s important to quickly connect users with content to minimize distractions while driving. It’s also important to note that design considerations on Android Auto are different than on a mobile device. If you imagine a typical media player on a phone, you may picture browsable menus of “all tracks” or “all artists”. These are not ideal in the car, where the primary focus should be on the road. Both NPR One and iHeartRadio provide good examples of this, because they avoid deep menu hierarchies and lengthy browsable lists.

Voice actions for hands free operation

Voice actions (e.g., “Play KQED”) are an important part of Android Auto. You can support voice actions in your app by implementing onPlayFromSearch() in the MediaSession.Callback. Voice actions may also be used to start your app from the home screen (e.g., “Play KQED on iHeartRadio”). To enable this functionality, declare the MEDIA_PLAY_FROM_SEARCH intent filter in your manifest. For an example, see this sample app.
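
Supporting the voice action is one more override on the same callback; resolving the query against your catalog is up to you. The manifest declares an intent filter for android.media.action.MEDIA_PLAY_FROM_SEARCH, as described above.

    // Inside your MediaSession.Callback:
    @Override
    public void onPlayFromSearch(String query, Bundle extras) {
        // e.g. the user said "Play KQED": find the best match in your
        // catalog and start playing it.
    }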

Next steps

NPR One and iHeartRadio are just two examples of great apps for Android Auto today. They feel like a part of the car, and look and sound great. You can extend your apps to the car today, too, and developing for Auto is easy. The framework handles the car specific functionalities for you, so you’re free to focus on making your app special. Join the discussion at http://g.co/androidautodev if you have questions or ideas to share. To get started on your app, visit developer.android.com/auto.

Join the discussion on

+Android Developers
Categories: Programming

Enable your messaging app for Android Auto

Android Developers Blog - Thu, 05/14/2015 - 04:06

Posted by Joshua Gordon, Developer Advocate

What if there was a way for drivers to stay connected using your messaging app, while keeping their hands on the wheel and eyes on the road?

Android Auto helps drivers stay connected, but in a more convenient way that's integrated with the car. It eliminates the need to type and read messages by replacing these activities with a voice controlled interface.

Enabling your messaging app to work with Android Auto is easy. Developers like Skype and textPlus have already done so. Check out this DevByte for an overview of the messaging APIs, and see the developer training guide for a deep dive. Read on for a look at the key steps involved.


Message notifications on the car’s display

When an Android 5.0+ phone is connected to a compatible car, users receive incoming message notifications from Auto-enabled apps on the car’s head unit display. Your app runs on the phone, but is controlled by the car. To learn more about how this works, watch the Introduction to Android Auto DevByte.

A new message notification from Skype

If your app already uses notifications to alert the user to incoming messages, it’ll be easy to extend these for Auto. It takes just a few lines of code, and you won’t have to change how your app works on the phone.

There are a couple of small differences between message notifications on Auto vs. a phone. First, on Auto, a preview of the message content isn’t shown, because messaging is driven entirely by voice. Second, message notifications are backed by a conversation object. This is simply a collection of unread messages from a particular sender.

Decorate your notification with the CarExtender to add support for the car. Next, use the UnreadConversation.Builder to create a conversation, and populate it by iterating over your app's unread messages (from a certain sender) and adding them to the conversation. Pass your conversation object to the CarExtender, and you’re done!
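
Here's a minimal sketch of those steps with the support library's NotificationCompat; the PendingIntents, the RemoteInput (shown in the snippet further below), and the icon resource are placeholders your app supplies:

    import android.app.Notification;
    import android.support.v4.app.NotificationCompat;
    import android.support.v4.app.NotificationCompat.CarExtender;
    import android.support.v4.app.NotificationCompat.CarExtender.UnreadConversation;

    // Collect the unread messages from one sender into a conversation.
    UnreadConversation conversation = new UnreadConversation.Builder("Alice")
            .addMessage("Are we still on for lunch?")        // oldest first
            .setLatestTimestamp(System.currentTimeMillis())
            .setReadPendingIntent(readPendingIntent)         // fired when heard
            .setReplyAction(replyPendingIntent, remoteInput) // voice reply
            .build();

    Notification notification = new NotificationCompat.Builder(context)
            .setSmallIcon(R.drawable.ic_message)
            .setContentTitle("Alice")
            .setContentText("Are we still on for lunch?")
            .extend(new CarExtender().setUnreadConversation(conversation))
            .build();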

Tap to hear messages

Tapping on a message notification plays it back on the car's sound system, via text to speech. This is handled automatically by the framework; no additional code is required. Pretty cool, right?

In order to know when the user hears a message, you provide a PendingIntent that’s triggered by the system. That’s one of just two intents you’ll need to handle to enable your app for Auto.

Reply by voice

Voice control is the real magic of Android Auto. Users reply to messages by speaking, via voice recognition. This is far faster and more natural than typing.

Enabling this functionality is as simple as adding a RemoteInput instance to your conversation objects, before you issue the notification. Speech recognition is handled entirely by the framework. The recognition result is delivered to your app as a plain text string via a second PendingIntent.
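
A minimal sketch of that plumbing; the result key is an arbitrary constant your app defines:

    import android.support.v4.app.RemoteInput;

    static final String EXTRA_VOICE_REPLY = "extra_voice_reply";

    // Attach this to the conversation via setReplyAction(...) above.
    RemoteInput remoteInput = new RemoteInput.Builder(EXTRA_VOICE_REPLY)
            .setLabel("Reply by voice")
            .build();

    // In the handler for the reply PendingIntent, read the transcription:
    // Bundle results = RemoteInput.getResultsFromIntent(intent);
    // CharSequence reply =
    //         (results != null) ? results.getCharSequence(EXTRA_VOICE_REPLY) : null;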

Replying to a message from textPlus by voice.

Next steps

Make your messaging app more natural to use in the car by enabling it for Android Auto. Now drivers can stay connected without typing or reading messages. It just takes a few lines of code. To learn more, visit developer.android.com/auto.

Join the discussion on

+Android Developers
Categories: Programming