
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Scaling Agile: Scrum of Scrums, Membership Revisited

Pick a direction?

A Scrum of Scrums (SoS) is a mechanism adopted to coordinate a group of teams so that they act as a team of teams. Historically, the SoS was not part of the original Scrum framework and is considered an add-on. In other words, a Scrum of Scrums is an optional technique that can be added to the more canonical Scrum framework if useful; however, because the technique is optional, the available guidance is patchy. For example, when an organization adopts a SoS, who should participate is often hotly debated. The participation debate popped up after we published Scrum of Scrums, The Basics, and I had a number of conversations with readers to discuss the topic. Consolidating those discussions suggests that the type of membership a person supports depends on what they want to get from using the SoS. All of the readers felt that the SoS should always be focused on coordination, but that coordination comes in two flavors: one focused on activities and the second on technical questions.

Coordination of Activities:  Those who believe the SoS is primarily a tool for coordinating team activities support the idea that Scrum Masters should be chosen for SoS membership. The rationale focuses on the idea that the Scrum Master is uniquely positioned to gather and communicate information related to the coordination of teams. Readers in this group feel that since Scrum Masters interact with all team members as facilitators rather than technical leaders, they are better coordinators. The alternate view, held by many Agilistas, is that Scrum Masters acting in this role violate Scrum principles and represent a prima facie reinstitution of the project manager role. Further, this view suggests that when technical issues need to be dealt with in SoS meetings populated with Scrum Masters, the Scrum Masters can’t make decisions; instead they often act as a conduit between technical team members. When the SoS becomes a conduit, a version of the classic telephone game, decision-making effectiveness is reduced.

Technical Coordination:  Fred Brooks, in his essay The Tar Pit (foreshadowing the next installment of Re-Read Saturday), suggests that software development increases in complexity as it progresses past the development of an individual program. Integrating your work with the work of others requires sharing technical information and making technical decisions that can impact more than a single person or team. A Scrum of Scrums used for technical coordination requires participants with relevant technical acumen. Participants with technical acumen generally come from the technical part of the team (developers, architects or testers).

Pushing aside the noise of whether coordinating activities is less Agile than using the SoS for technical coordination, a more pragmatic approach is to recognize that the needs of the team of teams are context driven. The type of decision the team of teams needs to make will change across a sprint or a SAFe Program Increment, therefore who should attend a SoS needs to vary. The downside to varying membership is ensuring the right people are in attendance to address the SoS’s current need. One solution I have observed is to develop a cadence for topics. For example, tackle coordination every fourth SoS gathering, with more technical topics being the focus in between. Predictability makes it easy to plan who should attend. Regardless of approach, I suggest that any SoS agree upon a mechanism to decide on the type of meeting to hold. Flexibility to identify the type of SoS will ensure the team does not fall prey to meeting schedule paralysis or the equally evil telephone game.


Categories: Process Management

R: Scraping Wimbledon draw data

Mark Needham - Fri, 06/26/2015 - 00:14

Given Wimbledon starts next week I wanted to find a data set to explore before it gets underway. Having searched around and failed to find one I had to resort to scraping the ATP World Tour’s event page which displays the matches in an easy to access format.

We’ll be using the Wimbledon 2013 draw since Andy Murray won that year! This is what the page looks like:

[Screenshot: the ATP World Tour results page for Wimbledon 2013]

Each match is in its own row of a table and each column has a class attribute which makes it really easy to scrape. We’ll be using R’s rvest again. I wrote the following script which grabs the player names, seedings and score of the match and stores everything in a data frame:

library(rvest)
library(dplyr)
library(stringr)
 
s = html_session("http://www.atpworldtour.com/en/scores/archive/wimbledon/540/2013/results")
rows = s %>% html_nodes("div#scoresResultsContent tr")
 
matches = data.frame()
for(row in rows) {  
  players = row %>% html_nodes("td.day-table-name a")
  seedings = row %>% html_nodes("td.day-table-seed")
  score = row %>% html_node("td.day-table-score a")
 
  # Rows without a score cell are the round headers that precede each batch
  # of matches; they set 'round' for the match rows that follow.
  if(!is.null(score)) {
    player1 = players[1] %>% html_text() %>% str_trim()
    # The seed cell is empty for unseeded players, hence the NA handling
    seeding1 = ifelse(!is.na(seedings[1]), seedings[1] %>% html_node("span") %>% html_text() %>% str_trim(), NA)
 
    player2 = players[2] %>% html_text() %>% str_trim()
    seeding2 = ifelse(!is.na(seedings[2]), seedings[2] %>% html_node("span") %>% html_text() %>% str_trim(), NA)
 
    matches = rbind(data.frame(winner = player1, 
                               winner_seeding = seeding1, 
                               loser = player2, 
                               loser_seeding = seeding2,
                               score = score %>% html_text() %>% str_trim(),
                               round = round), matches)
 
  } else {
    round = row %>% html_node("th") %>% html_text()
  }
}

This is what the data frame looks like:

> matches %>% sample_n(10)
               winner winner_seeding                       loser loser_seeding            score                round
61      Wayne Odesnik            (4)                Thiago Alves          <NA>            61 64 1st Round Qualifying
4     Danai Udomchoke           <NA>            Marton Fucsovics          <NA>       61 57 1210 1st Round Qualifying
233    Jerzy Janowicz           (24)                Lukasz Kubot          <NA>         75 64 64       Quarter-Finals
90       Malek Jaziri           <NA>             Illya Marchenko           (9)        674 75 64 2nd Round Qualifying
222      David Ferrer            (4)         Alexandr Dolgopolov          (26) 676 762 26 61 62          Round of 32
54  Michal Przysiezny           (11)                 Dusan Lojda          <NA>         26 63 62 1st Round Qualifying
52           Go Soeda           (13)               Nikola Mektic          <NA>            62 60 1st Round Qualifying
42    Ruben Bemelmans           (23) Jonathan Dasnieres de Veigy          <NA>            63 64 1st Round Qualifying
31        Mirza Basic           <NA>              Tsung-Hua Yang          <NA>     674 33 (RET) 1st Round Qualifying
179     Jurgen Melzer           <NA>              Julian Reister           (Q)    36 762 765 62          Round of 64

It also contains qualifying matches which I’m not so interested in. Let’s strip those out:

main_matches = matches %>% filter(!grepl("Qualifying", round)) %>% mutate(year = 2013)

We’ll also put a column in for ‘year’ so that we can handle the draws for multiple years later on.

Next I wanted to clean up the data a bit. I’d like to be able to do some queries based on the seedings of the players, but at the moment that column contains numeric values wrapped in brackets as well as some other values which indicate whether a player is a qualifier, lucky loser or wildcard entry.

I started by adding a column to store this extra information:

main_matches$winner_type = NA
main_matches$winner_type[main_matches$winner_seeding == "(WC)"] = "wildcard"
main_matches$winner_type[main_matches$winner_seeding == "(Q)"] = "qualifier"
main_matches$winner_type[main_matches$winner_seeding == "(LL)"] = "lucky loser"
 
main_matches$loser_type = NA
main_matches$loser_type[main_matches$loser_seeding == "(WC)"] = "wildcard"
main_matches$loser_type[main_matches$loser_seeding == "(Q)"] = "qualifier"
main_matches$loser_type[main_matches$loser_seeding == "(LL)"] = "lucky loser"

And then I cleaned up the existing column:

tidy_seeding = function(seeding) {
  no_brackets = gsub("\\(|\\)", "", seeding)
  # Entry-type markers (WC/Q/LL) aren't seedings, so map them to NA
  return(gsub("WC|Q|L", NA, no_brackets))
}
 
main_matches = main_matches %>% 
  mutate(winner_seeding = as.numeric(tidy_seeding(winner_seeding)),
         loser_seeding = as.numeric(tidy_seeding(loser_seeding)))

Now we can write a query against the data frame to find out when the underdog won, i.e. a player with no seeding beat a seeded player, or a lower seeded player beat a higher seeded one:

> main_matches %>%  filter((winner_seeding > loser_seeding) | (is.na(winner_seeding) & !is.na(loser_seeding)))
                  winner winner_seeding                 loser loser_seeding                  score          round year
1          Jurgen Melzer             NA         Fabio Fognini            30           675 75 63 62   Round of 128 2013
2          Bernard Tomic             NA           Sam Querrey            21       766 763 36 26 63   Round of 128 2013
3        Feliciano Lopez             NA          Gilles Simon            19             62 64 7611   Round of 128 2013
4             Ivan Dodig             NA Philipp Kohlschreiber            16 46 676 763 63 21 (RET)   Round of 128 2013
5         Viktor Troicki             NA      Janko Tipsarevic            14              63 64 765   Round of 128 2013
6         Lleyton Hewitt             NA         Stan Wawrinka            11               64 75 63   Round of 128 2013
7           Steve Darcis             NA          Rafael Nadal             5             764 768 64   Round of 128 2013
8      Fernando Verdasco             NA      Julien Benneteau            31             761 764 64    Round of 64 2013
9           Grega Zemlja             NA       Grigor Dimitrov            29       36 764 36 64 119    Round of 64 2013
10      Adrian Mannarino             NA            John Isner            18               11 (RET)    Round of 64 2013
11         Igor Sijsling             NA          Milos Raonic            17              75 64 764    Round of 64 2013
12     Kenny De Schepper             NA           Marin Cilic            10                  (W/O)    Round of 64 2013
13        Ernests Gulbis             NA    Jo-Wilfried Tsonga             6         36 63 63 (RET)    Round of 64 2013
14     Sergiy Stakhovsky             NA         Roger Federer             3         675 765 75 765    Round of 64 2013
15          Lukasz Kubot             NA          Benoit Paire            25               61 63 64    Round of 32 2013
16     Kenny De Schepper             NA           Juan Monaco            22              64 768 64    Round of 32 2013
17        Jerzy Janowicz             24       Nicolas Almagro            15              766 63 64    Round of 32 2013
18         Andreas Seppi             23         Kei Nishikori            12        36 62 674 61 64    Round of 32 2013
19         Bernard Tomic             NA       Richard Gasquet             9          767 57 75 765    Round of 32 2013
20 Juan Martin Del Potro              8          David Ferrer             4              62 64 765 Quarter-Finals 2013
21           Andy Murray              2        Novak Djokovic             1               64 75 64         Finals 2013

There are actually very few times when a lower seeded player beat a higher seeded one, but there are quite a few instances of non-seeds beating seeds. We’ve got 21 occurrences of underdogs winning out of a total of 127 matches.
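
Those numbers are easy to sanity check directly from the data frame, reusing the same filter as above:

underdog_wins = main_matches %>% 
  filter((winner_seeding > loser_seeding) | (is.na(winner_seeding) & !is.na(loser_seeding)))
 
> nrow(underdog_wins)
[1] 21
> nrow(main_matches)
[1] 127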

Let’s filter that set of rows and see which seeds lost in the first round:

> main_matches %>%  filter(round == "Round of 128" & !is.na(loser_seeding))
           winner winner_seeding                 loser loser_seeding                  score        round year
1   Jurgen Melzer             NA         Fabio Fognini            30           675 75 63 62 Round of 128 2013
2   Bernard Tomic             NA           Sam Querrey            21       766 763 36 26 63 Round of 128 2013
3 Feliciano Lopez             NA          Gilles Simon            19             62 64 7611 Round of 128 2013
4      Ivan Dodig             NA Philipp Kohlschreiber            16 46 676 763 63 21 (RET) Round of 128 2013
5  Viktor Troicki             NA      Janko Tipsarevic            14              63 64 765 Round of 128 2013
6  Lleyton Hewitt             NA         Stan Wawrinka            11               64 75 63 Round of 128 2013
7    Steve Darcis             NA          Rafael Nadal             5             764 768 64 Round of 128 2013

Rafael Nadal is the most prominent, but Stan Wawrinka also lost in the first round that year, which I’d forgotten about! Next let’s make the ’round’ column a factor with ordered levels so that we can sort matches by round:

main_matches$round = factor(main_matches$round, levels =  c("Round of 128", "Round of 64", "Round of 32", "Round of 16", "Quarter-Finals", "Semi-Finals", "Finals"))
 
> main_matches$round
...     
Levels: Round of 128 Round of 64 Round of 32 Round of 16 Quarter-Finals Semi-Finals Finals

We can now really easily work out which unseeded players went the furthest in the tournament:

> main_matches %>% filter(is.na(loser_seeding)) %>% arrange(desc(round)) %>% head(5)
             winner winner_seeding             loser loser_seeding           score          round year
1    Jerzy Janowicz             24      Lukasz Kubot            NA        75 64 64 Quarter-Finals 2013
2       Andy Murray              2 Fernando Verdasco            NA  46 36 61 64 75 Quarter-Finals 2013
3 Fernando Verdasco             NA Kenny De Schepper            NA        64 64 64    Round of 16 2013
4      Lukasz Kubot             NA  Adrian Mannarino            NA  46 63 36 63 64    Round of 16 2013
5    Jerzy Janowicz             24     Jurgen Melzer            NA 36 761 64 46 64    Round of 16 2013

Next up I thought it’d be cool to write a function which showed which round each player exited in:

round_reached = function(player, main_matches) {
  furthest_match = main_matches %>% 
    filter(winner == player | loser == player) %>% 
    arrange(desc(round)) %>% 
    head(1)  
 
    return(ifelse(furthest_match$winner == player, "Winner", as.character(furthest_match$round)))
}

Our function isn’t vectorisable – it only works if we pass in a single player at a time – so we’ll have to group the data frame by player before calling it (although I’ll show a vectorised wrapper at the end of the post). Let’s check it works by seeing how far Andy Murray and Rafael Nadal got:

> round_reached("Rafael Nadal", main_matches)
[1] "Round of 128"
> round_reached("Andy Murray", main_matches)
[1] "Winner"

Great. What about if we try it against each of the top 8 seeds?

> rbind(main_matches %>% filter(winner_seeding %in% 1:8) %>% mutate(name = winner, seeding = winner_seeding), 
        main_matches %>% filter(loser_seeding %in% 1:8) %>% mutate(name = loser, seeding = loser_seeding)) %>%
    select(name, seeding) %>%
    distinct() %>%
    arrange(seeding) %>%
    group_by(name) %>%
    mutate(round_reached = round_reached(name, main_matches))
Source: local data frame [8 x 3]
Groups: name
 
                   name seeding  round_reached
1        Novak Djokovic       1         Finals
2           Andy Murray       2         Winner
3         Roger Federer       3    Round of 64
4          David Ferrer       4 Quarter-Finals
5          Rafael Nadal       5   Round of 128
6    Jo-Wilfried Tsonga       6    Round of 64
7         Tomas Berdych       7 Quarter-Finals
8 Juan Martin Del Potro       8    Semi-Finals

Neat. Next up I want to do a comparison between the round they reached and the round you’d expect them to get to given their seeding but that’s for the weekend!
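
As an aside, base R’s Vectorize can wrap round_reached so that it applies to a whole vector of players without the group_by trick. A minimal sketch, assuming the function as defined above:

# mapply-based wrapper: vectorises over 'player', passes main_matches through
round_reached_v = Vectorize(round_reached, vectorize.args = "player")
 
# Same results as the single calls above:
round_reached_v(c("Andy Murray", "Rafael Nadal"), main_matches)
#    Andy Murray   Rafael Nadal 
#       "Winner" "Round of 128"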

I’ve put a CSV file containing all the data in this gist in case you want to play with it. I’m planning to scrape a few more years worth of data before Monday and add in some extra fields as well but in case I don’t get around to it the full script in this blog post is included in the gist as well so feel free to tweak it if tennis is your thing.

Categories: Programming

Two people coding is twice as productive, right?

Actively Lazy - Thu, 06/25/2015 - 22:14

Stands to reason, doesn’t it? If one person can make 5 widgets an hour, then two people can make 10 widgets an hour. Its just the natural way of things. You can’t argue with science.

The same is obviously true of software, isn’t it? If one developer can write 10 lines of code an hour, then clearly two can write 20 lines of code an hour. If you want more code written, just hire more developers. There’s nothing mythical about my man months.

And yet… somehow… software persists in being weird stuff.

This week I had an interesting experience. Me and one other developer have been working on a new, greenfield project. We’ve been ploughing through the work, ticking off stories at a decent rate. Only now it’s getting to that difficult stage where the original design ideas are rapidly giving way to new problems and new ideas; substantial refactoring is going on as we discuss better ways of representing our problem. This seems good and healthy.

Then I had one of those days where everywhere I turned there was a design problem. Not a single line of code could be written without me getting grumpy about the design. Worst of all, it was the code my co-worker had just finished that was showing the flaws in the original design. Cue much discussion. At one point he lamented that he could finish the task (that was blocking me from making progress) “if he could just get a 30 minute run at his computer”. It was nearly 5pm.

A day where a 30 minute spell of productive coding is hard to find is not a day where much code has been written. Oh we were productive, the design was much improved by day’s end. The code? Nothing to see here, move along, please. Were we really twice as productive that day? Hell no. I spent the entire day distracting him from completing his tasks to discuss design problems; he spent the entire day trying to merge a branch that my design refactoring had made difficult. We spent the entire day working against each other.

What could we have done differently? Well the first problem was trying to maintain two streams of development activity through the same (small) code base. We were tripping up over each other like crazy. Unwinding a few days, we probably would have got more done with just one person working. That way there would only be one narrative thread through the code, one sequence of refactoring steps at a time.

Wait, what – say that again: we would have got more done last week if only one person had been working on it. Well that’s just crazy talk, let me tell you about making widgets, boy…

I think we massively underestimate the cost of coordination and communication when building software. From the outside its very easy to miss: a quick 5 minute conversation laden with jargon. And yet… this is where the magic happens: this is where the design comes from. But if that 5 minute conversation interrupted someone’s work, the next 45 minutes could be lost while they try and reload into memory what they were working on. Pile up a few of these interruptions in your day, and no wonder it feels like you’re swimming upstream.

Clearly, what we should have been doing but weren’t was pairing. That way there would only have been one narrative thread. One sequence of ideas being applied at a time. Changes neatly serialized by there only being one keyboard.  Of course, by pairing we still could have had the design discussions – but instead they would appear at a time when we were both already stuck. There is no cost of interruption when you’re both already there, immersed in the problem. By pairing we would have stopped working against each other and created an interruption-free space for design discussions.

So in fact: two people can be more productive than one. Two people pairing is definitely better than one person working on their own. It’s made me realise that we’ve been explaining pairing all wrong: we try and justify the “cost” of pairing, as though we somehow have to explain why having two people working at the same machine really isn’t halving productivity. It’s all based on a false assumption: that two people working on different machines are twice as productive as one person working alone. Once you realise that this assumption is fundamentally flawed, the “cost” of pairing evaporates. Instead pairing removes the cost of coordination between two developers: no interruptions, no divergent ideas, no merge conflicts.

Pair programming is actually a cost-saving exercise.


Categories: Programming, Testing & QA

Everything I Learned About PM Came From an Elementary School Teacher

Herding Cats - Glen Alleman - Thu, 06/25/2015 - 18:08

Our daughter is an elementary school teacher in Austin, Texas, at a nice school: the number 2 school in Texas.

While visiting this week, we were talking about a new book a group of us are working on. While showing her the TOC, she said, “Dad, we do all that stuff (minus the finance side) every day, week, month, semester, and year. It’s not that hard. That’s what we’ve been trained to do.” OK, but talent, dedication, skill, and a gift for teaching help.

Here's how an elementary school teacher sees her job as the Project Manager of 20 young clients.

  • Plan before starting anything: it’s going to go wrong, so know that up front and be able to recognize when the train wreck is coming and get out of the way.
    • The plan is a strategy for the successful completion of the project.
    • Without the plan, you don't know how to assess progress in terms meaningful to the decision makers. Measures of cost and schedule are measures of effectiveness. Measures of stories produced or features delivered aren't measures of capabilities produced.
    • A Capabilities Based Plan is that measure. What capabilities does the customer need to accomplish the business case or fulfill a mission?
    • In education, Bloom's Taxonomy with TLOs and ELOs defines the capabilities the student will possess at the end of the course.
  • Have a notion of what done looks like, so when you get there, you can stop and move on.
    • Done is defined as possessing a capability to accomplish something.
    • Write this down in units of Effectiveness and Performance 
  • Have your Plan B always ready to go and then start thinking of Plan C when Plan B is under way. No Plan A ever lasts too long in the presence of chaos.
    • Risk management is how adults manage projects - Tim Lister
    • Adult supervision is the role of the teacher. Many times adult supervision is also the role of the project manager.
  • Make sure you’ve got all the right resources lined up and ready to spring into action when things go wrong. Classroom aides, class leaders, parents, staff all ready to go when the plan goes in the ditch.
    • Resource planning is a critical success factor for all projects.
  • Know what can go wrong before you start, steer away from trouble and trouble will stay away.
    • Risk planning is planning. Planning is strategy. 
    • Apply good risk management to all activities on the project. Perform some formal sequence of risk management. Pick one. My favorite is the SEI Continuous Risk Management process
  • Separate the trouble makers from the main stream. You know them on day one. 
    • Any good project manager can see trouble coming.
    • Isolate the troubled parts. Assign them to separate teams. Have them fix the problem so the rest of the project isn't impacted by them
  • Show up early, prepare for the work, clean up afterward, so you can start “clean” again the next day. No less than 100% complete at the end of each period of performance. If not, you’ll pay dearly for it later.
    • Being prepared is the major attribute of project success.
    • This means planning.
    • Letting things emerge is nice for small projects with low value at risk. 
  • Always ask “is this your best work?” and “did you put your name on it?” Otherwise you're creating re-work.
    • Set the highest quality standards possible
  • No crying when it doesn’t work. Redo it and get back on schedule, recess time is schedule margin - you get to stay in and finish your planned work.
    • No whining; everyone put your “big boy” pants on and do the work needed to get the job done.
  • Take a break, go outside and play, think what you’re going to do next hour. Come back and do it.
    • Have retrospectives.
    • Look back for opportunities for improvement
    • Do Root Cause Analysis to find out the "real" why things didn't work
    • Have fun while still working hard

Is This Your Best Work

Categories: Project Management

I Literally Don’t Give A Shit!

Making the Complex Simple - John Sonmez - Thu, 06/25/2015 - 16:00

In this episode, I literally don’t give a shit. Full transcript: John: Hey, John Sonmez from simpleprogrammer.com. I’ve got a bit of a long question. This is kind of an interesting one and there might be some profanity. I try to keep the content pretty clean, but I might get a little worked up about […]

The post I Literally Don’t Give A Shit! appeared first on Simple Programmer.

Categories: Programming

New Azure Billing APIs Available

ScottGu's Blog - Scott Guthrie - Thu, 06/25/2015 - 06:59

Organizations moving to the cloud can achieve significant cost savings.  But to achieve the maximum benefit you need to be able to accurately track your cloud spend in order to monitor and predict your costs. Enterprises need to be able to get detailed, granular consumption data and derive insights to effectively manage their cloud consumption.

I’m excited to announce the public preview release of two new Azure Billing APIs today: the Azure Usage API and Azure RateCard API which provide customers and partners programmatic access to their Azure consumption and pricing details:

Azure Usage API – A REST API that customers and partners can use to get their usage data for an Azure subscription. As part of this new Billing API we now correlate the usage/costs by the resource tags you can now set on your Azure resources (for example: you could assign a tag “Department abc” or “Project X” to a VM or Database in order to better track spend on a resource and charge it back to an internal group within your company). To get more details, please read the MSDN page on the Usage API. Enterprise Agreement (EA) customers can also use this API to get a more granular view into their consumption data, and to complement what they get from the EA Billing CSV.

Azure RateCard API – A REST API that customers and partners can use to get the list of the available resources they can use, along with metadata and price information about them. To get more details, please read the MSDN page on the RateCard API.

You can start taking advantage of both of these APIs today.  You can write your own custom code that uses the APIs to construct your own custom reports, or alternatively you can also now take advantage of pre-built bill tracking systems provided by our partners which already integrate the APIs into their existing solutions.

Partner Solutions

Two of our Azure Billing partners (Cloudyn and Cloud Cruiser) have already integrated the new Billing APIs into their products:

Cloudyn has integrated with Azure Billing APIs to provide IT financial management insights on cost optimization. You can read more about their integration experience in Microsoft Azure Billing APIs enable Cloudyn to Provide ITFM for Customers.

Cloud Cruiser has integrated with the Azure RateCard API to provide an estimate of what it would cost the customer to run the same workloads on Azure. They are also working on integrating with the Azure Usage API to provide insights based on the Azure consumption. You can read more about their integration in Cloud Cruiser and Microsoft Azure Billing API Integration.

You can adopt one or both of the above solutions immediately and use them to better track your Azure bill without having to write a single line of code.

[Screenshot: Cloudyn's integration enables you to view and query the breakdown of Azure usage by resource tags (e.g. “Dev/Test”, “Department abc”, “Project X”)]

[Screenshot: Cloudyn's integration showing the trend of estimated charges over time]

[Screenshot: Cloud Cruiser's integration showing the estimated cost of running a workload on Azure]

Using the Billing APIs directly

You can also use the new Billing APIs directly to write your own custom reports and billing tracking logic.  To get started with the APIs, you can leverage the code samples on Github.

The Billing APIs leverage the new Azure Resource Manager, use Azure Active Directory for authentication, and follow the Azure role-based access control policies. The code samples we’ve published show a variety of common scenarios and how to integrate this logic end to end.

Summary

The new Azure Billing APIs make it much easier to track your bill and save money.

As always, please reach out to us on the Azure Feedback forum and through the Azure MSDN forum.

Hope this helps,

Scott

Categories: Architecture, Programming

Android Developer Story: Shifty Jelly drives double-digit growth with material design and expansion to the car and wearables

Android Developers Blog - Wed, 06/24/2015 - 19:33

Posted by Lily Sheringham, Google Play team

Pocket Casts is a leading podcasting app on Google Play built by Australian-based mobile development company Shifty Jelly. The company recently achieved $1 million in sales for the first time, reaching more than 500K users.

According to co-founder Russell Ivanovic, the adoption of material design played a significant role in driving user engagement for Pocket Casts by streamlining the user experience. Moreover, users are now able to access the app beyond the smartphone -- in the car with Android Auto, on a watch with Android Wear or on the TV with Google Cast. The rapid innovation of Android features helped Pocket Casts increase sales by 30 percent.

We chatted with co-founders and Android developers Russell and Philip Simpson to learn more about how they are growing their business with Android.

Here are some of the features Pocket Casts used:

  • Material Design: Learn more about material design and how it helps you create beautiful, engaging apps.
  • Android Wear: Extend your app to Android Wear devices with enhanced notifications or a standalone wearable app.
  • Android Auto: Extend your app to an interface that’s optimized for driving with Android Auto.
  • Google Cast: Let your users cast your app’s content to Google Cast devices like Chromecast, Android TV, and speakers with Google Cast built-in.

And check out the Pocket Casts app on Google Play!

Categories: Programming

Quake® III on your TV with Cast Remote Display API

Google Code Blog - Wed, 06/24/2015 - 18:45

Posted by Leon Nicholls, Developer Programs Engineer and Antonio Fontan, Software Engineer

At Google I/O 2015 we announced the new Google Cast Remote Display APIs for Android and iOS that make it easy for mobile developers to bring graphically intensive apps or games to Google Cast receivers. Now you can use the powerful GPUs, CPUs and sensors of the mobile device in your pocket to render both a local display and a virtual one to the TV. This dual display model also allows you to design new game experiences for the display on the mobile device to show maps, game pieces and private game information.

We wanted to show you how easy it is to take an existing high performance game and run it on a Chromecast. So, we decided to port the classic Quake® III Arena open source engine to support Cast Remote Display. We reached out to id Software and they thought it was a cool idea too. When all was said and done, during our 2015 I/O session “Google Cast Remote Display APIs for Games” we were able to present the game in 720p at 60 fps!

During the demo we used a wired USB game controller to play the game, but we've also experimented with using the mobile device sensors, a bluetooth controller, a toy gun and even a dance mat as game controllers.

Since you’re probably wondering how you can do this too, here are the details of how we added Cast Remote Display to Quake. The game engine was not modified in any way and the whole process took less than a day, with most of our time spent removing UI code not needed for the demo. We started by using an existing source port of Quake III to Android which includes some usage of kwaak3 and ioquake3 source code.

Next, we registered a Remote Display App ID using the Google Cast SDK Developer Console. There’s no need to write a Cast receiver app as the Remote Display APIs are supported natively by all Google Cast receivers.

To render the local display, the existing main Activity was converted to an ActionBarActivity. To discover devices and to allow a user to select a Cast device to connect to, we added support for the Cast button using MediaRouteActionProvider. The MediaRouteActionProvider adds a Cast button to the action bar. We then set the MediaRouteSelector for the MediaRouter using the App ID we obtained and added a callback listener using MediaRouter.addCallback. We modified the existing code to display an image bitmap on the local display.

To render the remote display, we extended CastPresentation and called setContentView with the game’s existing GLSurfaceView instance. Think of the CastPresentation as the Activity for the remote display. The game audio engine was also started at that point.

Next we created a service extending CastRemoteDisplayLocalService which would then create an instance of our CastPresentation class. The service will manage the remote display even when the local app goes into the background. The service automatically provides a convenient notification to allow the user to dismiss the remote display.

Then we start our service when the MediaRouter onRouteSelected event is called by using CastRemoteDisplayLocalService.startService and stop the service when the MediaRouter onRouteUnselected event is called by using CastRemoteDisplayLocalService.stopService.

To see a more detailed description on how to use the Remote Display APIs, read our developer documentation. We have also published a sample app on GitHub that is UX compliant.

You can download the code that we used for the demo. To run the app you have to compile it using Gradle or Android Studio. You will also need to copy the "baseq3" folder from your Quake III game to the “qiii4a” folder in the root of the SD card of your Android mobile device. Your mobile device needs to have at least Android KitKat and Google Play services version 7.5.71.

With 17 million Chromecast devices sold and 1.5 billion touches of the Cast button, the opportunity for developers is huge, and it’s simple to add this extra functionality to an existing game. We're eager to see what amazing experiences you create using the Cast Remote Display APIs.

QUAKE II © 1997 and QUAKE III © 1999 id Software LLC, a ZeniMax Media company. QUAKE is a trademark or registered trademark of id Software LLC in the U.S. and/or other countries. QUAKE game assets used under license from id Software LLC. All Rights Reserved

QIII4A © 2012 n0n3m4. GNU General Public License.

Q3E © 2012 n0n3m4. GNU General Public License.

Categories: Programming

Predicting the Unpredictable is Available

I’m happy to announce that Predicting the Unpredictable: Pragmatic Approaches to Estimating Cost or Schedule is done and available. It’s available in electronic and print formats. If you need a little help explaining your estimates or how to use estimation (even #noestimate), read this book.

 

Categories: Project Management

Scaling Agile: Scrum of Scrums – The Basics

Scaling – going from one project to two (or more!)

The Scrum of Scrums (SoS) is a technique for scaling Scrum and other Agile techniques to larger efforts. On paper, the SoS is a simple extrapolation of the daily Scrum or stand-up meeting that has become a hallmark of the practice of Agile. A typical stand-up meeting occurs on a daily basis so that all of the team members can coordinate and plan daily activities. Classically the team uses the three question technique to organize the meeting. In a similar manner, a typical Scrum of Scrums acts as a focal point to help synchronize a team of teams, or even teams of teams. There are four basics of a SoS that need to be understood before any nuances can be addressed.

  1. Who Attends: There are two basic schools of thought in picking SoS attendees. The first (and most common) school suggests that the Scrum Masters attend the SoS. The Scaled Agile Framework Enterprise (SAFe) is an example of a methodology that leverages the Scrum Master during both the planning event and the program increment. Alternately, the second school takes a more egalitarian (and possibly more pragmatic) approach, allowing each team to select the attendee that can most easily convey or understand the current issues the team and the larger group are having at the time. For example, if design decisions are at the forefront, perhaps team members with the most UX acumen would make sense. In this scenario attendees would vary over time.
  2. Who Facilitates: Small SoS groups, for example a handful of teams that are co-located, may not require facilitation. The SoS can self-organize and execute the meeting with little overhead. However, as the number of participants increases (I bound SoS meetings using the same 5-9 member rule used for Scrum teams) or as the distribution of the members becomes more varied, facilitation becomes more important. Facilitators help the team use the SoS practice, ensure logistics are set up and champion the schedule. In larger efforts, a program manager often provides facilitation; if practicing SAFe, the Release Train Engineer performs the facilitation role.
  3. Frequency: The Scrum of Scrums often follows the same pattern as the daily Scrum/stand-up meeting. A second frequency pattern is risk-based; the frequency of the SoS meetings varies depending on need. Early in a project the SoS meets daily, as teams are forming and norming and as early decisions are being made. The SoS meeting then becomes twice weekly in the middle of the project and returns to daily as a release approaches.
  4. Format: Daily stand-up meetings commonly leverage the classic three question approach (What did I do? What am I going to do? and What are my blockers?). The Scrum of Scrums generally follows a VERY similar format with each participant answering the following four questions:
    1. What did my team do since the last meeting?
    2. What will my team do before we meet again?
    3. Is my team experiencing blockers?
    4. Is my team going to put something in another team’s path?

The meeting follows the same round robin approach, with the participants interacting with one another. The facilitator (if used) should never be the focal point of the meeting, nor should the SoS devolve into purely a status meeting.

The daily stand-up and the SoS are very similar meetings. Both are held to share information, coordinate activities and identify problems. The scope of the SoS meeting is broader than a single team, with the meeting providing coordination and planning activities within and across teams.

Next: Using the Scrum of Scrums to Scale Planning and Scrum of Scrum Anti-patterns


Categories: Process Management

R: Scraping the release dates of github projects

Mark Needham - Tue, 06/23/2015 - 23:34

Continuing on from my blog post about scraping Neo4j’s release dates I thought it’d be even more interesting to chart the release dates of some github projects.

In theory the release dates should be accessible through the github API but the few that I looked at weren’t returning any data so I scraped the data together.
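
For reference, the route in question is GitHub’s releases endpoint. A minimal sketch of that attempt using httr and jsonlite (for projects that only push tags it returns an empty list, which is why we fall back to scraping):

library(httr)
library(jsonlite)
 
resp = GET("https://api.github.com/repos/apache/cassandra/releases")
releases_api = fromJSON(content(resp, as = "text"))
# An empty result here confirms there's nothing to work with via the API
length(releases_api)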

We’ll be using rvest again and I first wrote the following function to extract the release versions and dates from a single page:

library(dplyr)
library(rvest)
library(stringr)
 
process_page = function(releases, session) {
  rows = session %>% html_nodes("ul.release-timeline-tags li")
 
  for(row in rows) {
    date = row %>% html_node("span.date")
    version = row %>% html_node("div.tag-info a")
 
    if(!is.null(version) && !is.null(date)) {
      date = date %>% html_text() %>% str_trim()
      version = version %>% html_text() %>% str_trim()
      releases = rbind(releases, data.frame(date = date, version = version))
    }  
  }
  return(releases)
}

Let’s try it out on the Cassandra release page and see what it comes back with:

> r = process_page(data.frame(), html_session("https://github.com/apache/cassandra/releases"))
> r
           date               version
1  Jun 22, 2015       cassandra-2.1.7
2  Jun 22, 2015      cassandra-2.0.16
3   Jun 8, 2015       cassandra-2.1.6
4   Jun 8, 2015   cassandra-2.2.0-rc1
5  May 19, 2015 cassandra-2.2.0-beta1
6  May 18, 2015      cassandra-2.0.15
7  Apr 29, 2015       cassandra-2.1.5
8   Apr 1, 2015      cassandra-2.0.14
9   Apr 1, 2015       cassandra-2.1.4
10 Mar 16, 2015      cassandra-2.0.13

That works pretty well but it’s only one page! To get all the pages we can use the follow_link function to follow the ‘Next’ link until there aren’t any more pages to process.

We end up with the following function to do this:

find_all_releases = function(starting_page) {
  s = html_session(starting_page)
  releases = data.frame()
 
  # follow_link throws an error when there is no 'Next' link left,
  # so we use that as the signal to stop paging
  next_page = TRUE
  while(next_page) {
    possibleError = tryCatch({  
      releases = process_page(releases, s)
      s = s %>% follow_link("Next") 
    }, error = function(e) { e })
 
    if(inherits(possibleError, "error")){
      next_page = FALSE
    }
  }
  return(releases)
}

Let’s try it out starting from the Cassandra page:

> cassandra = find_all_releases("https://github.com/apache/cassandra/releases")
Navigating to https://github.com/apache/cassandra/releases?after=cassandra-2.0.13
Navigating to https://github.com/apache/cassandra/releases?after=cassandra-2.0.10
Navigating to https://github.com/apache/cassandra/releases?after=cassandra-2.0.8
Navigating to https://github.com/apache/cassandra/releases?after=cassandra-1.2.13
Navigating to https://github.com/apache/cassandra/releases?after=cassandra-2.0.0-rc1
Navigating to https://github.com/apache/cassandra/releases?after=cassandra-1.2.3
Navigating to https://github.com/apache/cassandra/releases?after=cassandra-1.2.0-beta2
Navigating to https://github.com/apache/cassandra/releases?after=cassandra-1.0.10
Navigating to https://github.com/apache/cassandra/releases?after=cassandra-1.0.6
Navigating to https://github.com/apache/cassandra/releases?after=cassandra-1.0.0-rc2
Navigating to https://github.com/apache/cassandra/releases?after=cassandra-0.7.7
Navigating to https://github.com/apache/cassandra/releases?after=cassandra-0.7.4
Navigating to https://github.com/apache/cassandra/releases?after=cassandra-0.7.0-rc3
Navigating to https://github.com/apache/cassandra/releases?after=cassandra-0.6.4
Navigating to https://github.com/apache/cassandra/releases?after=cassandra-0.5.0-rc3
Navigating to https://github.com/apache/cassandra/releases?after=cassandra-0.4.0-final
 
> cassandra %>% sample_n(10)
            date               version
151 Mar 13, 2010   cassandra-0.5.0-rc2
25   Jul 3, 2014      cassandra-1.2.18
51  Jul 27, 2013       cassandra-1.2.8
21  Aug 19, 2014   cassandra-2.1.0-rc6
73  Sep 24, 2012 cassandra-1.2.0-beta1
158 Mar 13, 2010   cassandra-0.4.0-rc2
113 May 20, 2011     cassandra-0.7.6-2
15  Oct 24, 2014       cassandra-2.1.1
103 Sep 15, 2011 cassandra-1.0.0-beta1
93  Nov 29, 2011       cassandra-1.0.4

I want to plot when the different releases happened in time and in order to do that we need to create an extra column containing the ‘release series’ which we can do with the following transformation:

library(lubridate)
 
# 'series' keeps the first two version components, e.g.
# "cassandra-2.1.7" -> "cassandra-2.1"
series = function(version) {
  parts = strsplit(as.character(version), "\\.")  
  return(unlist(lapply(parts, function(p) paste(p %>% unlist %>% head(2), collapse = "."))))  
}
 
bySeries = cassandra %>%
  mutate(date2 = mdy(date), series = series(version),
         short_version = gsub("cassandra-", "", version),
         short_series = series(short_version))
 
> bySeries %>% sample_n(10)
            date               version      date2        series short_version short_series
3    Jun 8, 2015       cassandra-2.1.6 2015-06-08 cassandra-2.1         2.1.6          2.1
161 Mar 13, 2010 cassandra-0.4.0-beta1 2010-03-13 cassandra-0.4   0.4.0-beta1          0.4
62  Feb 15, 2013      cassandra-1.1.10 2013-02-15 cassandra-1.1        1.1.10          1.1
153 Mar 13, 2010 cassandra-0.5.0-beta2 2010-03-13 cassandra-0.5   0.5.0-beta2          0.5
37   Feb 7, 2014       cassandra-2.0.5 2014-02-07 cassandra-2.0         2.0.5          2.0
36   Feb 7, 2014      cassandra-1.2.15 2014-02-07 cassandra-1.2        1.2.15          1.2
29   Jun 2, 2014   cassandra-2.1.0-rc1 2014-06-02 cassandra-2.1     2.1.0-rc1          2.1
21  Aug 19, 2014   cassandra-2.1.0-rc6 2014-08-19 cassandra-2.1     2.1.0-rc6          2.1
123 Feb 16, 2011       cassandra-0.7.2 2011-02-16 cassandra-0.7         0.7.2          0.7
135  Nov 1, 2010 cassandra-0.7.0-beta3 2010-11-01 cassandra-0.7   0.7.0-beta3          0.7

Now let’s plot those releases and see what we get:

library(ggplot2)
 
ggplot(aes(x = date2, y = short_series), 
       data = bySeries %>% filter(!grepl("beta|rc", short_version))) +     
  geom_text(aes(label = short_version), hjust = 0.5, vjust = 0.5, size = 4, angle = 90) + 
  theme_bw()

[Chart: Cassandra final releases over time, by series]

An interesting thing we can see from this visualisation is how the various series of versions overlap. Most of the time there are only two series of versions overlapping, but the 1.2, 2.0 and 2.1 series all overlap, which is unusual.
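
The overlap is also easy to check numerically by computing the active date range of each series. A minimal sketch, using the same final-versions filter as the chart:

bySeries %>% 
  filter(!grepl("beta|rc", short_version)) %>% 
  group_by(short_series) %>% 
  summarise(first_release = min(date2), last_release = max(date2))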

In this chart we excluded all beta and RC versions. Let’s bring those back in and just show the last 3 versions:

ggplot(aes(x = date2, y = short_series), 
       data = bySeries %>% filter(grepl("2\\.[012]\\.|1\\.2\\.", short_version))) +     
  geom_text(aes(label = short_version), hjust = 0.5, vjust = 0.5, size = 4, angle = 90) + 
  theme_bw()

[Chart: Cassandra 1.2, 2.0 and 2.1 releases over time, including betas and RCs]

From this chart it’s clearer that the 2.0 and 2.1 series have recent releases, so there will probably be three overlapping series when the 2.2 series is released as well.

The chart is still a bit cluttered although less than before. I’m not sure of a better way of visualising this type of data so if you have any ideas do let me know!

Categories: Programming

Success and Failure (Get Your Free Celebration Grid Poster!)

NOOP.NL - Jurgen Appelo - Tue, 06/23/2015 - 22:28

Success and Failure – The Celebration Grid

The post Success and Failure (Get Your Free Celebration Grid Poster!) appeared first on NOOP.NL.

Categories: Project Management

Attach Google Drive files to Calendar events with the Calendar API

Google Code Blog - Tue, 06/23/2015 - 20:43

Posted by Iskander Akishev, Software Engineer, Google Calendar

Originally posted to the Google Apps Developer blog

The Google Calendar API allows you to create and modify events on Google Calendar. Starting today, you can use the API to also attach Google Drive files to Calendar events to make them—and your app—even more useful and integrated. With the API, you can easily attach meeting notes or add PDFs of booking confirmations to events.

Here's how you set it up:

1) Get the file information from Google Drive (e.g. via the Google Drive API):

GET https://www.googleapis.com/drive/v2/files

{
  ...
  "items": [
    {
      "kind": "drive#file",
      "id": "9oNKwQI7dkW-xHJ3eRvTO6Cp92obxs1kJsZLFRGFMz9Q",
      ...
      "alternateLink": "https://docs.google.com/presentation/d/9oNKwQI7dkW-xHJ3eRvTO6Cp92obxs1kJsZLFRGFMz9Q/edit?usp=drivesdk",
      "title": "Workout plan",
      "mimeType": "application/vnd.google-apps.presentation",
      ...
    },
    ...
  ]
}

2) Pass this information into an event modification operation using the Calendar API:

POST https://www.googleapis.com/calendar/v3/calendars/primary/events?supportsAttachments=true

{
  "summary": "Workout",
  "start": { ... },
  "end": { ... },
  ...
  "attachments": [
    {
      "fileUrl": "https://docs.google.com/presentation/d/9oNKwQI7dkW-xHJ3eRvTO6Cp92obxs1kJsZLFRGFMz9Q/edit?usp=drivesdk",
      "title": "Workout plan",
      "mimeType": "application/vnd.google-apps.presentation"
    },
    ...
  ]
}

Voilà!

You don’t need to do anything special in order to see the existing attachments - they are now always exposed as part of an event:

GET https://www.googleapis.com/calendar/v3/calendars/primary/events/ja58khmqndmulcongdge9uekm7

{
  "kind": "calendar#event",
  "id": "ja58khmqndmulcongdge9uekm7",
  "summary": "Workout",
  ...
  "attachments": [
    {
      "fileUrl": "https://docs.google.com/presentation/d/9oNKwQI7dkW-xHJ3eRvTO6Cp92obxs1kJsZLFRGFMz9Q/edit?usp=drivesdk",
      "title": "Workout plan",
      "mimeType": "application/vnd.google-apps.presentation",
      "iconLink": "https://ssl.gstatic.com/docs/doclist/images/icon_11_presentation_list.png"
    },
    ...
  ]
}

Check out the guide and reference in the Google Calendar API documentation for additional details.

For any questions related to attachments or any other Calendar API features you can reach out to us on StackOverflow.com, using the tag #google-calendar.

Categories: Programming

Fitness Apps on Android Wear

Android Developers Blog - Tue, 06/23/2015 - 19:22

Posted by Joshua Gordon, Developer Advocate

Go for a run, improve your game, and explore the great outdoors with Android Wear! Developers are creating a diverse array of fitness apps that provide everything from pace and heart rate while running, to golf tips on your favorite course, to trail maps for hiking. Let’s take a look at the features of the open and flexible Wear platform they use to create great user experiences.

Always-on stats

If your app supports always-on, you’ll never have to touch or twist your watch to activate the display. Running and want to see your pace? Glance at your wrist and it’s there! Runtastic, Endomondo, and MapMyRun use always-on to keep your stats visible, even in ambient mode. When it’s time for golf, I use Golfshot. Likewise, Golfshot uses always-on to continuously show yardage to the hole, so I never have to drop my club. Check out the doc, DevByte, and code sample to learn more.

Runtastic automatically transitions to ambient mode to conserve battery. There, it reduces the frequency at which stats are updated to about once per 10 seconds.
Maps, routes, and markers

It's encouraging to see how much ground I’ve covered when I go for a run or ride! Using the Maps API, you can show users their route, position, and place markers on the map they can tap to see more info you provide. All of this functionality is available to you using the same Maps API you’ve already worked with on Android. Check out the doc, DevByte, code sample, and blog post to learn more.

Endomondo tracks your route while you run. You can pan and zoom the map.
Google Fit

Google Fit is an open platform designed to make it easier to write fitness apps. It provides APIs to help with many common tasks. For example, you can use the Recording API to estimate how many steps the user has taken and how many calories they've burned. You can make that data available to your app via the History API, and even access it over the web via REST, without having to write your own backend. Now, Google Fit can store data from a wide variety of exercises, from running to weightlifting. Check out the DevByte and code samples to learn more.

Bluetooth Low Energy: pair with your watch

With the latest release of Android Wear, developers can now pair BLE devices directly with the Wearable. This is a great opportunity for all fitness apps -- and especially for running -- where carrying both a phone and the Wearable can be problematic. Imagine if your users could pair their heart rate straps or bicycle cadence sensors directly to their Wear device, and leave their phones at home. BLE is now supported by all Wear devices, and is supported by Google Fit. To learn more about it, check out this guide and DevByte.

Pack light with onboard GPS

When I’m running, carrying both a phone and a wearable can be a bit much. If you’re using an Android Wear device that supports onboard GPS, you can leave your phone at home! Since not all Wear devices have an onboard GPS sensor, you can use the FusedLocationProviderApi to seamlessly retrieve GPS coordinates from the phone if not available on the wearable. Check out this handy guide for more about detecting location on Wear.

RunKeeper supports onboard GPS if it’s available on your Wearable.
Sync data transparently

When I’m back home and ready for more details on my activity, I can see them by opening the app on my phone. My favorite fitness apps transparently sync data between my Wearable and phone. To learn more about syncing data between devices, watch this DevByte on the DataLayer API.

Next Steps

Android Wear gives you the tools and training you need to create exceptional fitness apps. To get started on yours, visit developer.android.com/wear and join the discussion at g.co/androidweardev.

Categories: Programming

Growing Android TV engagement with search and recommendations

Android Developers Blog - Tue, 06/23/2015 - 18:33

Posted by Maru Ahues, Media Developer Advocate

When it comes to TV, content is king. But to enjoy great content, you first need to find it. We created Android TV with that in mind: a truly smart TV should deliver interesting content to users. Today, EPIX® joins a growing list of apps that use the Android TV platform to make it easy to enjoy movies, TV shows, sports highlights, music videos and more.

Making TV Apps Searchable

Think of your favorite movie. Now try to locate it in one of your streaming apps. If you have a few apps to choose from, it might take some hunting before you can watch that movie. With Android TV, we want to make it easier to be entertained. Finding ‘Teenage Mutant Ninja Turtles’ should be as easy as picking up the remote, saying ‘Teenage Mutant Ninja Turtles’ and letting the TV find it.

Searching for ‘Teenage Mutant Ninja Turtles’ shows results from Google Play and EPIX

You can drive users directly to content within your app by making it searchable from the Android TV search interface. Join app developers like EPIX, Sky News, YouTube, and Hulu Plus who are already making content discovery a breeze.

Recommending TV Content

When users want suggestions for content, the recommendations row on Android TV helps them quickly access relevant content right from the home screen. Recommendations are based on the user’s recent and frequent usage behaviors, as well as content preferences.

Recommendations from installed apps, like EPIX, appear in the Android TV home screen

Android TV allows developers to create recommendations for movies, TV shows, music and other types of content. Your app can provide recommendations to users to help get your content noticed. As an example, EPIX shows Hollywood movies. NBA Game Time serves up basketball highlights. Washington Post offers video summaries of world events, and YouTube suggests videos based on your subscriptions and viewing history.

With less than one year since the consumer launch of Android TV, we’re already building upon a simpler, smarter and more personalized TV experience, and we can’t wait to see what you create.

Categories: Programming

You Don’t Need a Complicated Story Hierarchy

Mike Cohn's Blog - Tue, 06/23/2015 - 15:00

Consultants and tool vendors seem to have a penchant for making things complicated. It seems the more complicated we make things, the more our clients need us. And that sells tools and services, I suppose.

On the other hand, I find unnecessary complexity extremely frustrating. It’s like the novel I read this week by a first-time author. It was good, but it had too many minor characters who complicated the plot and made the book hard to follow.

The same thing happens when people introduce complicated hierarchies or taxonomies for user stories like this:

[Diagram: a multi-level story taxonomy, with levels such as epic, saga and headline]

You don’t need this. When teams are forced to use complicated taxonomies for their stories, they spend time worrying about whether a particular story is an epic, a saga or merely a headline. That discussion is like the minor character who walks into the novel and needlessly complicates the plot.

But, Mike -- I can hear you asking -- you’ve written about epics and themes before.

Yes, but those are labels. A story is a story so my recommended story taxonomy is this:

A story is a story is a story.

Some stories are big and they can be labeled as epics. I’ve used the analogy of movies before. All movies are movies but some movies are romantic comedies—that’s a label, just like epic is.

Similarly, theme refers to a group of related stories, but does not have to imply a hierarchy. Again using movies, I could have a group of spy movies that would include the James Bond movies and the Austin Powers movies. But a group of comedy movies would include Austin Powers but not James Bond.

So, again, theme and epic are labels, not an implied hierarchy. Don’t make things more complicated than they need to be. I haven’t come across any reasons to have fancy story hierarchies or taxonomies.

Selecting an Iteration Approach - New Lecture Posted

10x Software Development - Steve McConnell - Tue, 06/23/2015 - 11:17

In this week's lecture (https://cxlearn.com) I explain how the lifecycle model can be used to show the incredibly large number of variations in approaches to software projects, especially including numerous variations in kinds of iteration. I identify approaches that work if you need predictability, if you need flexibility, if you need to attack uncertainty in requirements (i.e., unknown requirements), and if you need to attack uncertainty in architecture (i.e., technical risk).  

The overarching message is that there are lots of different ways to organize the activities on a software project, and the way you organize the activities significantly affects what a project will accomplish. 

Lectures posted so far include:  

0.0 Understanding Software Projects - Intro
     0.1 Introduction - My Background
     0.2 Reading the News
     0.3 Definitions and Notations 

1.0 The Software Lifecycle Model - Intro
     1.1 Variations in Iteration 
     1.2 Lifecycle Model - Defect Removal
     1.3 Lifecycle Model Applied to Common Methodologies
     1.4 Lifecycle Model - Selecting an Iteration Approach 

2.0 Software Size
     2.05 Size - Comments on Lines of Code
     2.1 Size - Staff Sizes 
     2.2 Size - Schedule Basics 

Check out the lectures at http://cxlearn.com!

Understanding Software Projects - Steve McConnell

 


How to Become an Overnight Success

Making the Complex Simple - John Sonmez - Mon, 06/22/2015 - 16:00

It seems that everyone wants a shortcut to success. I constantly talk to programmers and wannabe entrepreneurs who are ready to give up and throw in the towel at the first sign of trouble. It reminds me of this quote by eight-time—yes, that’s right, eight-time—Mr. Olympia, Ronnie Coleman. “Everybody wants to be a bodybuilder, but […]

The post How to Become an Overnight Success appeared first on Simple Programmer.

Categories: Programming

Software Gardening, Entropy, Hybrid Requirements, Lean UX in Methods & Tools Summer 2015 issue

From the Editor of Methods & Tools - Mon, 06/22/2015 - 14:29
Methods & Tools – the free e-magazine for software developers, testers and project managers – has just published its Summer 2015 issue that discusses Software Gardening, Software Entropy, Hybrid Requirements, Lean UX and agileMantis:

* Software Gardening – Yet another crappy analogy or a reality?
* Entropy for Measuring Software Maturity
* The READ, RATT, […]