Software Development Blogs: Programming, Software Testing, Agile, Project Management

Feed aggregator

Stuff The Internet Says On Scalability For September 30th, 2016

Hey, it's HighScalability time:


Everything is a network. Map showing the global genetic interaction network of a cell. 


If you like this sort of Stuff then please support me on Patreon.
  • 18: Google can now drink and drive in Washington DC.; $10 billion: cost of a Vision Quest to Mars; 620 Gbps: DDoS attack on KrebsOnSecurity; 1 Tbps: DDoS attack on OVH; $200,000: cost of a typical cyber incident; 8 million: video training dataset labeled with 4800 labels; 180: Amazon warehouses in the US; 10: bits of info per photon; 16: GPUs in new AI killer P2 instance type;

  • Quotable Quotes:
    • @markmccaughrean: 1,000,000 people to Mars in 100 yrs. 10 people/launch? That's 3 a day, every day, for a century. 1% failure rate? One explosion every month
    • @jeremiahg: Any sufficiently advanced exploit is indistinguishable from a 400lb hacker.
    • BrianKrebs: I suggested to Mr. Wright perhaps a better comparison was that ne’er-do-wells now have a virtually limitless supply of Stormtrooper clones that can be conscripted into an attack at a moment’s notice.
    • Sonia: Academia’s not-so-subtle disdain for applied research does more than damage a few promising careers; it renders our field’s output useless, destined to collect dust on the shelves of Elsevier.
    • Monica L. Smith: Nobody builds their own infrastructure. You don’t build your own highway, train line, water pipe, your own sewer. Those are things that connect you and your household to everybody else sequentially in your neighborhood, in your region, from the city out into the broader hinterlands.
    • @olesovhcom: This botnet with 145607 cameras/dvr (1-30Mbps per IP) is able to send >1.5Tbps DDoS. Type: tcp/ack, tcp/ack+psh, tcp/syn.
    • kenrose: We see this pattern at PagerDuty over the majority of our customers. There is a definite lull in alert volume over the weekends that picks up first thing Monday morning. It's led to my personal conclusion that most production issues are caused by people, not errant hardware or systems.
    • @rseroter: "We Crammed this Monolith Into a Container and Called it a Microservice"
    • @mweagle: I really don’t want to run my own k8s in AWS, but ECS is so opaque to debug that k8s seems like a good choice.
    • Werner Vogels: We have this overarching goal which is customer centricity. Doing anything that benefits the customer gets priority above everything else. Working on eliminating all single points of failure in the company purely benefits the customer because it really improves the customer experience.
    • Cory Doctorow: The thing open source software had going for it was the Ulysses Pact...the irrevocable license. As for the failure mode of open source software: having founded an open source software company, I can tell you there are moments where it feels like your survival turns on being able to close the code you had opened when you were idealistic. There are moments of desperation when that happens.
    • @lightbend: "We've been using #Akka in production for over two years, without a single crash." -@CruiseNorwegian
    • @cloud_opinion: Monolithic -> Microservices -> "which container image?" -> "Screw it, lets do PaaS" ->  CF  or AWS?
    • Etsy: concurrency proved to be great for logical aggregation of components, and not so great for performance optimization. Better database access would be better for that.
    • Yaniv Nizan: the number of users actually contributing ad revenue in your app is a lot lower than 6.5% and much closer to the 1% or 2% that contribute revenue from In-app purchases. 
    • @reckless: Elon is basically putting on an Apple event, for going to Mars.
    • @potch: DRY: Don't Repeat Yourself / DAMP: Do Abstraction/Minimalism Pragmatically / MOIST: Maybe Only Innovate Some Times?
    • @dannysullivan: In the Facebook video metrics thing, spare a thought for the poor BuzzFeed watermelon, less viral than it thought :)
    • Addison Snell: If the promise of cloud computing is overblown, it's because of the amplification it gets from its loyal converts, enterprises who have found liberation and agility in outsourcing IT.
    • @psaffo: In 1990, the size of the US software industry was $3.2 billion -- the same size as the gourmet popcorn industry in that same year.
    • David Rosenthal: [Storage] Revenues are flat or decreasing, profits are decreasing for both companies. These do not look like companies faced by insatiable demand for their products; they look like mature companies facing increasing difficulty in scaling their technology.
    • @legind: Let's Encrypt now the 3rd largest CA, after Comodo and Symantec, comprising over 13% of the SSL cert market share 
    • @stewartbrand: “In the long run, the technology driving activities in space will be biological.” Rousing essay by Freeman Dyson.
    • @jessitron: Constructing causal ordering at the generic level of "all messages received cause all future messages sent" is expensive and also less meaningful than a business-logic-aware, conscious causal ordering. This conscious causal ordering gives us external consistency, accurate legibility, and visibility into what we know to be causal.

  • In an article light on details and written with a marketing flourish, we still learn some interesting things about the infrastructure behind Pokemon Go: Bringing Pokémon GO to life on Google Cloud. It runs on Google Cloud, Kubernetes, Google Container Engine, HTTP/S Load Balancer, and Cloud Datastore. Keep in mind Alphabet is invested in Niantic, and Ingress, the forerunner of Pokemon Go, ran on App Engine. So it sounds like a new backend implementation that had to scale from zero to the size of Twitter in a matter of weeks, with a much more complicated workload. Growth was explosive: player traffic was 50x larger than initial estimates. An implication is that the problems experienced during launch were not infrastructure related. Google, in the form of a Customer Reliability Engineer (CRE), worked closely with Niantic to make sure the infrastructure scaled. The problems must have been elsewhere in the application stack, which is perfectly understandable; that sort of load could not have been predicted. The design decisions you make for 5x expected traffic are very different than those for 50x. Nobody will spend the money or take the time to build a system for 50x. Nobody. Lots of good comments on HackerNews. Good question from ksec: would Pokemon Go even be possible in a pre-cloud era?

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Five Immutable Principles of Project Success and Project Failure

Herding Cats - Glen Alleman - Fri, 09/30/2016 - 02:43

I saw a blog post about the Top 5 Reasons Your Project Fails recently. They were all good reasons, but those reasons were symptoms, not causes. We seem to always identify the symptoms, but until we fix the cause of failure, those symptoms will return.

The symptoms were:

  1. Priorities change.
  2. Incomplete requirements.
  3. Lack of Resources.
  4. Lack of User Support.
  5. Lack of Executive Support.

But these symptoms simply reflect missing practices and processes of the 5 Immutable Principles of project success.

5 Immutable Principles

So let's look at each symptom and the principle that could have addressed it:

  1. What Does Done Look Like?
  2. What is the path to Done?
    • Without a Plan, we have no visibility to the steps needed to reach Done.
    • As Yogi Berra said, "If you don't know where you're going, you'll end up someplace else."
  3. Do we have enough time, resources, and money to get to Done?
    • Without a plan, we can't know how many resources will be needed, what kind of resources, and when they will be needed.
    • Time, resources, and money are actually random variables, drawn from an underlying population. The distribution of this population can be determined through a variety of means.
  4. What impediments will we encounter along the way to Done?
    • Risk Management is how Adults Manage Projects - Tim Lister
    • What are the risks, and what are their mitigations?
  5. How do we know we are making progress toward Done?
    • Measuring Physical Percent Complete is the foundation of all good project management
    • This physical percent complete can be represented as measures of effectiveness, measures of performance, key performance parameters, and other technical performance measures.

So with the 5 symptoms assigned to the 5 Principles, corrective actions can be put in place to avoid the outcomes.
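The claim under principle 3, that time, resources, and money are random variables drawn from an underlying population, can be made concrete with a small Monte Carlo sketch. The task distributions below are hypothetical illustrations, not data from the post:

```python
import random

def simulate_cost(n=100_000, seed=42):
    """Monte Carlo sketch: total project duration as a random variable.

    Each task's duration is drawn from a triangular distribution
    (optimistic, most likely, pessimistic) -- illustrative numbers only.
    """
    random.seed(seed)
    tasks = [(4, 6, 12), (8, 10, 20), (2, 3, 7)]  # (low, mode, high) in days
    totals = sorted(
        sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        for _ in range(n)
    )
    p50 = totals[n // 2]          # median outcome
    p80 = totals[int(n * 0.8)]    # 80th percentile outcome
    return p50, p80

p50, p80 = simulate_cost()
print(f"50% confidence: {p50:.1f} days, 80% confidence: {p80:.1f} days")
```

An "80% confidence" number from a distribution like this is the kind of answer a plan can commit to, where a single-point guess cannot.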

Related articles:
  • Want To Learn How To Estimate?
  • Capabilities Based Planning First Then Requirements
  • Risk Management is How Adults Manage Projects
  • Root Cause of Project Failure
Categories: Project Management

Agile Risk Management: Recognizing Risks

You can't capture risk with your camera. You need to have a conversation with a diverse group of stakeholders.


At a recent Q&A session I was asked: where could a person get their project risks? I stifled a smart-alecky answer that would have included driving to the grocery store, and decided that the question really being asked was: how do I go about recognizing and capturing risks? Perhaps a more boring question, but far more important. If I answered the first question, the answer would have been that risks are generated by the interaction of the project with other projects, applications, the business, technology, and the world (risk categories); pretty much the very existence of a project could be considered a risk magnet. The answer to the second question is that once you have a risk magnet (a project), you will need to ask as many different people as is feasible to recognize the possible risks. A discussion of risk is always appropriate; however, the typical meetings/events and the types of people to include in the conversation need to be planned. The discovery process typically follows the requirements/user story discovery process outlined below.

  1. Carve out time when you are developing the backlog and ask as diverse a group as possible to identify the potential problems that could get in the way of delivering the value promised by the project. Prompt the group to consider business, technical, operational and organizational factors. Diversity is incredibly important to inject different perspectives, so that the team does not fall prey to only seeing the risks they expect (a form of cognitive bias).
  2. Form a small team (consider the Three Amigos) to interview stakeholders that were not part of the planning exercise. Explain the project and use the same category prompts to generate a risk discussion.
  3. Gather risk data through surveys when the program stakeholders are geographically diverse. (Note: I have only seen this used well in very large programs with professional market research staffs.)
  4. Interview customers or potential customers. Customer interviews are not generally used as a standalone risk discovery tool, but rather as a tool to gather requirements/user stories. However, piggybacking a few questions to solicit potential risks is useful to add a diversity of thought to risk identification.
  5. Periodically ask about risks either as an agenda item or as a follow-on to standard meetings. For example, I have seen teams successfully add a five-minute follow-on to the last daily stand-up of the week in order to consider risks. A quick risk recognition session can easily be added to other standard meetings many projects have. Other standard scrum meetings that can be used to identify risks include demonstrations, retrospectives, and sprint planning. Each of these meetings provides a different perspective on the project and the team, and therefore could expose other potential risks.

The baseline answer to the question of how to recognize and capture risks is: by involving all of the project's stakeholders in a discussion of potential risks. The process of collaborative discussion will help increase diversity of thought, reducing (but NOT eliminating) the potential number of unknowns – unknowns that could impact the project's ability to deliver value.

Categories: Process Management

Android Wear 2.0 Developer Preview 3: Play Store and More

Android Developers Blog - Thu, 09/29/2016 - 18:00

Posted by Hoi Lam, Developer Advocate

Today we’re launching the third developer preview of Android Wear 2.0 with a big new addition: Google Play on Android Wear. The Play Store app makes it easy for users to find and install apps directly on the watch, helping developers like you reach more users.

Play Store features

With Play Store for Android Wear, users can browse recommended apps in the home view and search for apps using voice, keyboard, handwriting, and recommended queries, so they can find apps more easily. Users can switch between multiple accounts, be part of alpha and beta tests, and update or uninstall apps in the “My apps” view on their watch, so they can manage apps more easily. Perhaps the coolest feature: If users want an app on their watch but not on their phone, they can install only the watch app. In fact, in Android Wear 2.0, phone apps are no longer necessary. You can now build and publish watch-only apps for users to discover on Google Play.

Why an on-watch store?

We asked developers like you what you wanted most out of Android Wear, and you told us you wanted to make it easier for users to discover apps. So we ran studies with users to find out where they expected and wanted to discover apps––and they repeatedly looked for and asked for a way to discover apps right on the watch itself. Along with improvements to app discovery on the phone and web, the Play Store on the watch helps users find apps right where they need them.

Publish your apps

To make your apps available on Play Store for Android Wear, just follow these steps. You’ll need to make sure your Android Wear 2.0 apps set minSdkVersion to 24 or higher, use the runtime permissions model, and are uploaded via multi-APK using the Play Developer Console. If your app supports Android Wear 1.0, the developer guide also covers the use of product flavors in Gradle.

Download the New Android Wear companion app

To set up Developer Preview 3, you’ll need to install a beta version of the Android Wear app on your phone, flash your watch to the latest preview release, and use the phone app to add a Google Account to your watch. These steps are detailed in Download and Test with a Device. If you don’t have a watch to test on, you can use the emulator as well.

Other additions in Developer Preview 3

Developer Preview 3 also includes:
  • Complications improvements: Starting with Developer Preview 3, watch face developers will need to request RECEIVE_COMPLICATION_DATA permission before the watch face can receive complication data. We have added ComplicationHelperActivity to make this easier. In addition, watch face developers can now set default complications, including a selection of system data complications which do not require special permission (e.g. battery level and step count), as well as data providers that have whitelisted the watch face. Lastly, there are behavior changes related to ComplicationData to 1) help better differentiate various scenarios leading to “empty data” and 2) ease development by returning a default value for fields not supported by a complication type instead of throwing a runtime exception.
  • New WearableRecyclerView: This new UI component helps developers display and manipulate vertical lists of items while optimizing for round displays.
  • Inline Action for Notifications: A new API makes it easy to take action on a notification right from the stream. Developers can specify which action is displayed inline at the bottom of the notification by calling setHintDisplayActionInline:
    NotificationCompat.Action replyAction =
        new NotificationCompat.Action.Builder(R.drawable.ic_message_white_24dp,
                "Reply", replyPendingIntent)
                .extend(new NotificationCompat.Action.WearableExtender()
                        .setHintDisplayActionInline(true))
                .build();
  • Smart Reply: Android Wear now generates Smart Reply responses for MessagingStyle notifications. Smart Reply responses are generated by an entirely on-watch machine learning model using the context provided by the MessagingStyle notification, and no data is uploaded to the cloud to generate the responses.
  • And much more: Read about the complete list of changes in the Android Wear developer preview release notes.

Timeline

    We’ve gotten tons of great feedback from the developer community about Android Wear 2.0––thank you! We’ve decided to continue the preview program into early 2017, at which point the first watches will receive Android Wear 2.0. Please keep the feedback coming by filing bugs or posting in our Android Wear Developers community, and stay tuned for Android Wear Developer Preview 4.

Categories: Programming

Quote of the Day

Herding Cats - Glen Alleman - Thu, 09/29/2016 - 15:23

Never attribute to malevolence what is explicable by incompetence. - Robert J. Hanlon

When we hear about all the bad things that go wrong with projects, the misuse and abuse of data, people, tools, and processes, I get a smile when I remember Hanlon's quote.

Removing things, changing things, installing new things will not address the root cause of bad management and especially bad project management. Only replacing the people will fix the root cause when they are willfully ignorant of how to do it right. 

Related articles:
  • Architecture-Centered ERP Systems in the Manufacturing Domain
  • IT Risk Management
  • Why Guessing is not Estimating and Estimating is not Guessing
Categories: Project Management

Estimating Resources

Herding Cats - Glen Alleman - Wed, 09/28/2016 - 19:54

There's a never-ending opportunity to learn how to estimate in the presence of uncertainty. Here are some resources for informing that learning process.

When you hear that estimates are a waste (we'd rather be coding), that estimates are fiction, that we're bad at estimating, or the plethora of other excuses for not learning how to estimate, ask if that person has done the minimal homework to learn how to estimate, which is needed to make decisions in the presence of the uncertainty found on all software development projects.

Don't need estimates? Then the project is de minimis.

Related articles:
  • Want To Learn How To Estimate?
  • How We Make Decisions is as Important as What We Decide
  • Herding Cats: Where Are We Going, Doesn't Much Matter It Seems
  • Herding Cats: Estimating on Non-Trivial Software Projects
  • Taxonomy of Logical Fallacies
Categories: Project Management

How to set up Ads on your AMP Pages

Google Code Blog - Wed, 09/28/2016 - 17:03

Posted by Arudea Mahartianto, Google AMP Specialist

From conception, the open source Accelerated Mobile Pages Project has had a clear goal --- to make the mobile web experience better and faster for users. This extends beyond content to creating a user-first approach to advertising as well.

To realize this vision, the AMP team created an advertising solution that follows four core principles:

  • Faster is better - There is no reason ads in AMP can’t be as fast as the AMP document itself.
  • Beautiful matters - Ensure ads in AMP are beautiful and relevant.
  • Security is a must - Require all creatives to utilize the HTTPS protocol.
  • We’re better together - AMP isn’t about supporting a single advertising entity, but an entire industry. Success requires broad industry participation.

Ads in AMP are delivered using the amp-ad component. Using this component you can configure your ads in a number of ways such as the width, length, layout mode and ad loading strategy. Different ad networks might allow even more options.

Here is a representative example of a DoubleClick responsive ad implementation in AMP (the data-slot value is illustrative):

<amp-ad width=320 height=50
    type="doubleclick"
    data-slot="/1234567/example-slot"
    layout="responsive">
</amp-ad>
The type attribute informs the amp-ad component which ad platform to use. In this case we want DoubleClick, so the type value is doubleclick. For an above-the-fold responsive ad implementation, please use layout="fixed-height" instead and limit the ad height, so users get a fast-loading, content-focused experience from the very start.

Any attributes starting with data- in amp-ad are ad platform-specific attributes, including the data-slot attribute in the snippet above. Each ad platform will have different attributes available to configure. For example, compare the above DoubleClick example with another AMP ad example that uses the Rubicon platform:

<!-- attribute values below are illustrative placeholders -->
<amp-ad width=320 height=50
    type="rubicon"
    data-method="smartTag"
    data-account="1234"
    data-site="5678"
    data-zone="9012"
    data-size="43">
</amp-ad>

For more amp-ad implementation examples, please check out AMP By Example. You can also check out the amp-ad documentation for the complete list of supported ad networks and their configuration semantics.

The team is also developing newer, better ways to bring the benefits of AMP to the ads ecosystem with initiatives like AMP for Ads and AMP Ad Landing Pages. These solutions will enable advertisers to design creatives and ad landing pages that are more consistent with the AMP experience publishers are bringing to users. We believe this will bring us closer to the goal of making the entire mobile web experience faster and better for everybody.

Categories: Programming

How Uber Manages a Million Writes Per Second Using Mesos and Cassandra Across Multiple Datacenters

If you are Uber and you need to store the location data that is sent out every 30 seconds by both driver and rider apps, what do you do? That’s a lot of real-time data that needs to be used in real-time.

Uber’s solution is comprehensive. They built their own system that runs Cassandra on top of Mesos. It’s all explained in a good talk by Abhishek Verma, Software Engineer at Uber: Cassandra on Mesos Across Multiple Datacenters at Uber (slides).

Is this something you should do too? That’s an interesting thought that comes to mind when listening to Abhishek’s talk.

Developers have a lot of difficult choices to make these days. Should we go all in on the cloud? Which one? Isn’t it too expensive? Do we worry about lock-in? Or should we try to have it both ways and craft brew a hybrid architecture? Or should we just do it all ourselves for fear of being cloud shamed by our board for not reaching 50 percent gross margins?

Uber decided to build their own. Or rather they decided to weld together their own system by fusing together two very capable open source components. What was needed was a way to make Cassandra and Mesos work together, and that’s what Uber built.

For Uber the decision is not all that hard. They are very well financed and have access to the top talent and resources needed to create, maintain, and update these kinds of complex systems.

Since Uber’s goal is for transportation to have 99.99% availability for everyone, everywhere, it really makes sense to want to be able to control your costs as you scale to infinity and beyond.

But as you listen to the talk you realize the staggering effort that goes into making these kinds of systems. Is this really something your average shop can do? No, not really. Keep this in mind if you are one of those cloud deniers who want everyone to build all their own code on top of the barest of bare metals.

Trading money for time is often a good deal. Trading money for skill is often absolutely necessary.

Given Uber’s goal of reliability, where out of 10,000 requests only one can fail, they need to run out of multiple datacenters. Since Cassandra is proven to handle huge loads and works across datacenters, it makes sense as the database choice.  

And if you want to make transportation reliable for everyone, everywhere, you need to use your resources efficiently. That’s the idea behind using a datacenter OS like Mesos. By statistically multiplexing services on the same machines you need 30% fewer machines, which saves money. Mesos was chosen because at the time Mesos was the only product proven to work with cluster sizes of 10s of thousands of machines, which was an Uber requirement. Uber does things in the large.
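The 99.99% goal and the multi-datacenter requirement are linked by simple probability: if datacenter failures were independent, combined unavailability would multiply. A back-of-the-envelope sketch (the per-datacenter availability figure is an assumption for illustration, not an Uber number):

```python
def combined_availability(per_dc: float, n_dcs: int) -> float:
    """Probability that at least one of n independent datacenters is up.

    Assumes independent failures -- a simplification; correlated
    outages (bad deploys, DNS, fiber cuts) are the hard part in practice.
    """
    return 1 - (1 - per_dc) ** n_dcs

# A single 99.9%-available datacenter misses a 99.99% goal;
# two independent ones comfortably exceed it.
single = combined_availability(0.999, 1)
double = combined_availability(0.999, 2)
print(f"1 DC: {single:.6f}, 2 DCs: {double:.6f}")
```

The independence assumption is the weak point, which is why the replication and failover machinery described in the talk, rather than the arithmetic, is where the real engineering effort goes.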

What were some of the more interesting findings?

  • You can run stateful services in containers. Uber found there was hardly any difference, 5-10% overhead, between running Cassandra on bare metal versus running Cassandra in a container managed by Mesos.

  • Performance is good: mean read latency: 13 ms and write latency: 25 ms, and P99s look good.

  • For their largest clusters they are able to support more than a million writes/sec and ~100k reads/sec.

  • Agility is more important than performance. With this kind of architecture what Uber gets is agility. It’s very easy to create and run workloads across clusters.

Here’s my gloss of the talk:

In the Beginning
Categories: Architecture

Software Development Conferences Forecast September 2016

From the Editor of Methods & Tools - Wed, 09/28/2016 - 15:01
Here is a list of software development related conferences and events on Agile project management ( Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP), DevOps and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods […]

Discovering Your Team’s Risk Tolerance


Risk tolerance can be visualized as a curve. Above the curve, the combination of high probability and potential negative impact will prevent the team from accepting the risk; below the curve, the risk is deemed acceptable. Outside of a few psychologically damaged individuals, everyone has a risk curve whether they know it or not. On a team, everyone’s natural risk tolerance differs. Complicating the discussion is that risk tolerance changes depending on the context the person or team faces. For example, at one point in my life riding my bike down a hill at top speed to see if I could slalom stop at the bottom was an acceptable risk. I have the scars to prove I was that silly. Thinking back, I am not sure why I am alive today. My risk tolerance is different now. While reminiscing about my unsafe days as a seven-year-old is fun, what is more important is to recognize that the same lesson can be seen on teams and in organizations. This leads us to the conclusion that we must talk about risk tolerance.

Knowing and being able to predict a team’s risk tolerance is important. For example, a few years ago I was asked to assess a team that had operated like clockwork for several years, delivering superb value and quality work; recently, however, their quality had become questionable (their clients were finding significant defects in production). There had been no significant personnel changes, nor was the type of work they were doing significantly different. In the end, we determined that someone up the hierarchy had decided to remove quality from the team’s objectives (and therefore from how raises and promotions would be assessed) and doubled down on making dates and meeting budgets. The change had the unintended consequence of changing the team’s risk tolerance curve. On the surface at least, taking chances that might impact quality became less risky to the team, and therefore easier to take.

Two relatively simple ways to approach a discussion of risk tolerance are:

  1. Every team and project has an implicit risk tolerance curve; some risks are acceptable and some are not. Shifting a team or organization’s risk tolerance from implicit to explicit requires explicit discussion. In the project environment, the simplest approach is to hold an explicit discussion of risk. Specifically, ask participants to achieve consensus on whether examples of risks should be accepted or not. It is powerful for the examples to be risks that have been recognized by the team and organization in the past, peppered with a few examples that are possible but more external to the team. The discussion will tend to touch on probability and potential impact and expose the participants’ perceptions of the risk. The team must end by agreeing on whether it would accept the risk or not (accepting can include taking on mitigation tasks). While an explicit risk tolerance curve is not generated, the team will develop a clearer understanding of which risks it will tolerate and which it will not. The examples also provide a set of analogies that can be used to assess risks as they are recognized. A handy set of analogies is EXTREMELY useful for every team member (using analogies is a form of pattern recognition, which is subject to cognitive bias).
  2. A more quantitative approach uses a scheme popularized by Michael Lant (any other quantitative scheme can be used), which assesses each risk based on impact and probability to assign a number. Lant’s model equates low impact and low probability to a “1” and the highest probability and highest impact to a “25”. Based on the quantification, the team can quickly develop a consensus that any combination of impact and probability above a certain number can’t be accepted by the team. In essence, the team says that up to a certain point they can mitigate or deal with a potential risk, but after that someone outside the team needs to own the risk or indemnify the team from the potential that the risk turns into an issue, or they can’t go forward. The quantification provides a proxy for the line in the risk tolerance curve, and the rated risks can be used as a set of analogies for team members to do real-time triage of newly discovered risks.

Both of these approaches represent a mechanism for an explicit and structured discussion of risk tolerance. Both have an advantage over less structured approaches because they generate group knowledge, memory, and artifacts that can be used to capture the team’s consensus and to reinforce that memory.
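The Lant-style scoring in the second approach is simple enough to sketch end to end: impact times probability on 1-5 scales yields a 1-25 score, and the team's agreed cut-off decides what it can accept. The threshold and example risks below are hypothetical:

```python
def triage(risks, threshold=15):
    """Lant-style risk scoring: impact x probability on 1-5 scales (1..25).

    Risks scoring above the team's agreed threshold can't simply be
    accepted -- someone outside the team must own or indemnify them.
    """
    triaged = []
    for name, impact, probability in risks:
        score = impact * probability  # 1 (low/low) .. 25 (high/high)
        action = "escalate" if score > threshold else "accept/mitigate"
        triaged.append((name, score, action))
    return triaged

# Hypothetical examples, not from the article:
risks = [
    ("key vendor slips delivery", 4, 4),
    ("minor UI rework needed", 2, 3),
]
for name, score, action in triage(risks):
    print(f"{name}: {score} -> {action}")
```

The scored list doubles as the set of analogies mentioned above: a newly discovered risk can be triaged in real time by comparing it to already-rated ones.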

Categories: Process Management

SE-Radio Episode 269: Phillip Carter on F#

Eberhard Wolff talks with Phillip Carter about F#.  A multi-paradigm programming language that supports object-oriented, imperative, and functional programming, F# can be used for a broad variety of applications. It’s an especially good fit for parallel programming and DSLs. Type inference allows F# code to be type safe even if no types are declared in […]
Categories: Programming

Learn by doing with the Udacity VR Developer Nanodegree

Google Code Blog - Tue, 09/27/2016 - 18:37

Posted by Nathan Martz, Product Manager, Google VR

With Google Cardboard and Daydream, our Google VR team is working to bring virtual reality to everyone. In addition to making VR more accessible by using the smartphone in your pocket, we recently launched the Google VR SDK out of beta, with native integration for Unity and UE4, to help make it easier for more developers to join the fold.

To further support and encourage new developers to build VR experiences, we’ve partnered with Udacity to create the VR Developer Nanodegree. Students will learn how to create 3D environments, define behaviors, and make VR experiences comfortable, immersive, and performant.

Even with more than 50 million installs of Google Cardboard apps on Google Play, these are still the early days of VR. Students who complete the VR Developer Nanodegree learn by doing, and will graduate having completed a portfolio of VR experiences.

Learn more and sign up to receive VR Developer Nanodegree program updates at https://www.udacity.com/vr

Categories: Programming

Announcing the winners of the Google Play Indie Games Festival in San Francisco; Indie Games Contest coming soon to Europe

Android Developers Blog - Tue, 09/27/2016 - 18:03

Posted by Jamil Moledina, Google Play, Games Strategic Lead

Last Saturday, we hosted the first Google Play Indie Games Festival in North America, where we showcased 30 amazing games that celebrate the passion, innovation, and art of indies. After a competitive round of voting from fans and on-stage presentations to a jury of industry experts, we recognized seven finalist nominees and three winners.

Winners:
  • Bit Bit Blocks, presented by Greg Batha: Bit Bit Blocks is a cute and action-packed competitive puzzle game. Play with your friends on a single screen, or challenge yourself in single-player mode. Head-to-head puzzle play anytime, anywhere.
  • Numbo Jumbo, presented by Kaveh Daryabeygi, Wombo Combo: Numbo Jumbo is a casual mobile puzzle number game for iOS and Android. Players group numbers that add together: for example, [3, 5, 8] works because 3+5=8.
  • Orbit - Playing with Gravity, presented by Chetan Surpur & Eric Rahman, Highkey Games: ORBIT puts a gravity simulator at the heart of a puzzle game. Launch planets with a flick of your finger, and try to get them into orbit around black holes. ORBIT also features a sandbox where you can create your own universes, control time, and paint with gravity.
Finalist nominees:
  • Antihero [coming later in 2016], presented by Tim Conkling: Antihero is a "fast-paced strategy game with an (Oliver) Twist." Run a thieves' guild in a gas-lit, corrupt city. Recruit urchins, hire thugs, steal everything – and bribe, blackmail, and assassinate your opposition. Single-player and cross-platform multiplayer for desktops, tablets, and phones.
  • Armajet [coming later in 2016], presented by Nicola Geretti & Alexander Krivicich, Super Bit Machine: Armajet is a free-to-play multiplayer shooter that pits teams of players against each other in fast-paced jetpack combat. Armajet is a best-in-class mobile game designed for spectator-friendly competitive gaming on tablets and smartphones. Players compete in a modern arena shooter that’s easy to learn, but hard to master.
  • Norman's Night In: The Cave [coming later in 2016], presented by Nick Iorfino & Alex Reed, Bactrian Games: Norman's Night In is a 2D puzzle-platformer that tells the tale of Norman and his fateful fall into the world of cave. While test-driving the latest model 3c Bowling Ball, Norman finds himself lost with nothing but his loaned bball and a weird feeling that somehow he was meant to be there.
  • Parallyzed, presented by David Fox, Double Coconut: Parallyzed is an atmospheric adventure platformer with unique gameplay, set in a dark and enchanting dreamscape. You play twin sisters who have been cast into separate dimensions. Red and Blue have different attributes and talents, are deeply connected, and have the ability to swap bodies at any time.

Finalist nominees and winners also received a range of prizes, including Google I/O 2017 tickets, a Tango Development Kit, Google Cloud credits, an NVIDIA Android TV & K1 tablet, and a Razer Forge TV bundle.

Indie Games Contest coming to Europe

We’re continuing our effort to help indie game developers thrive by highlighting innovative and fun games for fans around the world. Today, we are announcing the Indie Games Contest for developers based in European countries (specific list of countries coming soon!). This is a great opportunity for indie game developers to win prizes that will help you showcase your art to industry experts and grow your business and your community of players worldwide. Make sure you don’t miss out on hearing the details by signing up here for updates.

As we shared at the festival, it’s rewarding to see how Google Play has evolved over the years. We’re now reaching over 1 billion users every month and there’s literally something for everyone. From virtual reality to family indie games, developers like you continue to inspire, provoke, and innovate through beautiful, artistic games.

Categories: Programming

Shopping made simple with Tango and WayfairView

Google Code Blog - Tue, 09/27/2016 - 16:58

Posted by Sophie Miller, Tango Business Development

Window shopping and showrooms let us imagine what that couch might look like in our living room or if that stool is the right height, but Tango can help take out the guesswork using augmented reality. Place virtual furniture in your real room, walk around, and try different colors.

Tango-enabled apps like WayfairView make it easy to visualize and rearrange new furniture in your home. We sat down with the Wayfair team to learn more about their app and see how Tango helps power new AR shopping experiences:

Google: Please tell us about your Tango app.

Mike: Wayfair offers a massive selection of products online. We believe that the ability for customers to visualize products in their living space augments our online experience and solves real customer problems such as: "Will this product fit in my space?" and "Will this match the rest of my environment?"

Why are you excited for your customers to start using WayfairView?

One of the biggest barriers that online shopping poses is the inability for a customer to get a good sense of how a product would fit in their room, and what it would look like in their living space. With WayfairView, we aim to help our customers better visualize our products - going above and beyond a flat, 2D image and providing them with an accurate 3D rendering of what the full-size item could look like in their home. Not only is this a great extension of the customer experience, it’s also a practical approach to figure out how the product fits into the user’s space before ordering it.

How did you get started developing for Tango?

I signed up to buy a dev kit in 2014 because I was personally interested in scanning 3D objects and environments. I ended up using it for a hackathon to build the first prototype of what is now WayfairView. One of my teammates, Shrenik Sadalgi, has always been interested in AR technology and had participated in Tango hackathons in years prior. He thought this particular flavor of AR, i.e., markerless AR in the form factor of a mobile device, had the potential to provide a seamless, easy user experience for Wayfair customers.

Was there something unique to the Tango platform that made it particularly appealing?

AR technology has been around for a while, but Tango is making it accessible by providing the technology in a way that is user friendly. Specifically, the Tango platform excels in accurate tracking, which allowed Wayfair’s R&D team to focus on building a great experience for our customers. No markers, no HMDs, no cords that can get tangled, but still powerful.

What were some of the challenges you faced building for Tango?

The biggest challenge Wayfair faces with AR technology is more about the experience than the device, which is in big part thanks to Tango. Our goal was to introduce an entirely new way of shopping for furniture in a way that is user friendly. Not having to worry about the inner workings of Tango helped us focus on making the furniture look as real as possible, scaling the app with our massive catalog, and getting to market in a short period of time.

What surprised you during the Tango development process?

The learning curve for Tango was minimal. We were able to get started very quickly using example code. It was pretty remarkable how the stability of the platform (primarily the tracking) kept improving over the period of time that we worked on the app.

Which platform did you build your Tango app on, and why?

We wrote the core of the app using Unity in C# - we wanted all the 2D UI to be in native Android to match the Wayfair native Android experience. This also gave us the opportunity to re-use code from the existing Wayfair Android app. We saw significant performance improvements by using native Android to create the 2D UI as well, which also makes the UI easier to update when the next UI theme of Android comes along.

What features can customers look forward to in a future WayfairView update?

We would love to add the ability to search for products by space: imagine drawing a cube in your real space and finding all products that fit the space. We also want to allow users to stack virtual products on top of each other to help them visualize how a virtual table lamp would look on top of a virtual table. Of course, we also want to make the products look even more real and add more products that can be visualized on WayfairView.

How do you think that this will change the way people shop for household goods?

WayfairView makes it easier than ever for customers to visualize online goods in their home at full scale, giving them an extra level of confidence when making an online purchase. We believe Tango has the potential to become a ubiquitous technology, just like smartphone cameras and mobile GPS. Ultimately, we anticipate that this will further accelerate the shift from brick and mortar to online.

We also imagine that WayfairView will be a very useful tool for our designers as they share their design proposal and vision with their customers.

Categories: Programming

Sponsored Post: ScaleArc, Spotify, Aerospike, Scalyr, Gusto, VividCortex, MemSQL, InMemory.Net, Zohocorp

Who's Hiring?
  • Spotify is looking for individuals passionate about infrastructure to join our Site Reliability Engineering organization. Spotify SREs design, code, and operate tools and systems to reduce the amount of time and effort necessary for our engineers to scale the world’s best music streaming product to 40 million users. We are strong believers in engineering teams taking operational responsibility for their products and work hard to support them in this. We work closely with engineers to advocate sensible, scalable systems design and share responsibility with them in diagnosing, resolving, and preventing production issues. We are looking for an SRE Engineering Manager in NYC and SREs in Boston and NYC.

  • IT Security Engineering. At Gusto we are on a mission to create a world where work empowers a better life. As Gusto's IT Security Engineer you'll shape the future of IT security and compliance. We're looking for a strong IT technical lead to manage security audits and write and implement controls. You'll also focus on our employee, network, and endpoint posture. As Gusto's first IT Security Engineer, you will be able to build the security organization with direct impact to protecting PII and ePHI. Read more and apply here.

Fun and Informative Events
  • Learn how Nielsen Marketing Cloud (NMC) leverages online machine learning and predictive personalization to drive its success in a live webinar on Tuesday, September 20 at 11 am PT / 2 pm ET. Hear from Nielsen’s Kevin Lyons, Senior VP of Data Science and Digital Technology, and Brent Keator, VP of Infrastructure, as well as from Brian Bulkowski, CTO and Co-Founder at Aerospike, as they describe the front-edge architecture and technical choices – including the Aerospike NoSQL database – that have led to NMC’s success. RSVP: https://goo.gl/xDQcu4
Cool Products and Services
  • ScaleArc's database load balancing software empowers you to “upgrade your apps” to consumer grade – the never down, always fast experience you get on Google or Amazon. Plus you need the ability to scale easily and anywhere. Find out how ScaleArc has helped companies like yours save thousands, even millions of dollars and valuable resources by eliminating downtime and avoiding app changes to scale. 

  • Scalyr is a lightning-fast log management and operational data platform.  It's a tool (actually, multiple tools) that your entire team will love.  Get visibility into your production issues without juggling multiple tabs and different services -- all of your logs, server metrics, and alerts are in your browser and at your fingertips.  Loved and used by teams at Codecademy, ReturnPath, Grab, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • InMemory.Net provides a native .NET in-memory database for analysing large amounts of data. It runs natively on .NET and provides native .NET, COM, and ODBC APIs for integration. It also has an easy-to-use language for importing data and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex measures your database servers’ work (queries), not just global counters. If you’re not monitoring query performance at a deep level, you’re missing opportunities to boost availability, turbocharge performance, ship better code faster, and ultimately delight more customers. VividCortex is a next-generation SaaS platform that helps you find and eliminate database performance problems at scale.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • ManageEngine Applications Manager: Monitor physical, virtual, and cloud applications.

  • www.site24x7.com: Monitor end-user experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below...

Categories: Architecture

The Sprint Review as a Sign-Off Meeting

Mike Cohn's Blog - Tue, 09/27/2016 - 15:00

Some teams use the sprint review as a time for product owners or key stakeholders to formally approve the product backlog items completed during the sprint. Is this a good idea?

In general, a sprint review should not be used by a team to get formal sign-off on their work from their product owner. The team and product owner should be working so closely during a sprint that the team knows what the product owner thinks of what they’ve built.

No surprises is my No. 1 rule for the sprint review.

It is absolutely acceptable for a product owner to reject the work of a team on a product backlog item. But the team should know that’s coming.

Team members should not walk into a sprint review expecting glowing praise from the product owner but then be blindsided by a litany of complaints about a feature.

But what about acceptance by a client? Can a sprint review be used for formal sign-off or acceptance in those cases?

Ideally, in cases in which a client hires a vendor to develop a product, someone at the client company would act as the product owner. And in those cases, it can be OK for formal sign-off on features to occur during the sprint review. But I’d still stick with the advice that there should be no surprises during the review.

Even though the client product owner is providing feedback to the team during the sprint, it’s possible that the product owner needs to wait to fully accept something until other stakeholders have a chance to comment on the work.

As a simple example, my daughter recently asked me if she could go on a school trip. I said it was fine with me, but--guess what--we needed to check that it was OK with her mother. That is, my wife might have had plans for our family during that time that I didn’t yet know about.

This will be a common situation for client product owners in contract development situations. The product owner interacting with the team daily may like how a feature has been built, but may need to confirm that the stakeholders he or she represents agree. Sure, we can say that the product owner should simply go ask. But that can be impractical and might best be done in a sprint review.

But in outsourced, contract development, the client doesn’t always provide the product owner. Many times, the client hires the vendor to take care of everything.

The client is, of course, the true product owner. The client will ultimately accept or reject what is developed. But, on a day-to-day basis, the client doesn’t want to be “bothered.” And so the typical solution in this case is for the vendor to appoint a product owner from someone within its own organization.

And in this case, true acceptance (or “sign off”) on product backlog items cannot happen before the sprint review. The true product owner (from the client) is not sufficiently available and engaged to accept things any more frequently.

Sure, the team may have a preliminary sign-off from their own product owner representative during the sprint. But the true, client product owner may completely reverse that decision in the actual sprint review.

So the ultimate answer depends, like so many things, upon the context in which you’re operating. And so I’ll say that I’m not too concerned by actual, formal sign-off occurring during a sprint review. But I always want to stick with a policy of no surprises during the review.

Sign off or not, as needed. But the team should always have a good idea of what’s coming before they get to the review.

What Do You Do?

What does your team do in sprint reviews? Has the product owner largely seen everything before then? Are product backlog items formally accepted during the review? Please share your thoughts in the comments below.

Improve Team Collaboration by Co-creating a Team Poster

Xebia Blog - Tue, 09/27/2016 - 14:18
Do you have a scrum team consisting of individual players? Does your team know why it exists in the first place? Do the team members know each other's personal preferences for doing the things they do? Are they aware of what they find important as a team? A Team Poster crafted by the team itself will

Agile Estimating Methods and Impact on Project Development Performance Index

Herding Cats - Glen Alleman - Tue, 09/27/2016 - 01:40

The presentation "Quantifying the Impact of Agile Practices," Larry MacCherone at the RallyOn 2013 Conference, presents some results on estimating impacts. The chart below shows 4 estimating types, including No Estimates, the sample sizes for each type and the components that make up the estimating types.

The Software Development Performance Index (SDPI) scale on the left ranges - by eyeball measurement - from 46 to 55.

Screen Shot 2016-09-16 at 8.46.28 AM

The Higher the number the better the performance of the process. The presentation speaks to the components of the index further.

But first another piece of information ...

Teams doing Full Scrum have 250% better Quality than teams doing No Estimating

But are these differences meaningful statistically?

Let's start with several reading assignments, before answering

  • How to Lie with Statistics, Darrell Huff - this is a must have book for anyone working in an environment where numbers are used to make decisions.
  • Statistics: A Very Short Introduction, David J. Hand, Oxford University Press - this is a short summary of all the other books on statistical processes sitting on my office shelf.
  • The Flaw of Averages: Why We Underestimate Risk in the Face of Uncertainty, Sam Savage - another must have book to learn that those tossing around numbers are likely unaware of the flaws in their logic.

Let's start with the numbers from the chart

Since the raw underlying data is not available, we can't do any p-value assessment of the population samples, but there is a simple question that can be asked.

Are there any statistical differences among the four SDPIs? If you look below at the quick-and-dirty assessment of the only data available, it looks like all four approaches are within single digits of each other. Not that useful, actually.

Screen Shot 2016-09-16 at 12.26.22 PM 
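To illustrate why the raw per-team data matters, here is a minimal Welch's t-test sketch. The group means are eyeballed from the chart; the spreads and sample sizes are invented for illustration, since the presentation does not publish them, so the numbers say nothing about the actual study.

```python
import math
import random
import statistics

random.seed(7)

# Hypothetical per-team SDPI samples. Means are eyeballed from the chart;
# standard deviation and sample sizes are invented assumptions.
full_scrum = [random.gauss(55, 12) for _ in range(120)]
no_estimates = [random.gauss(46, 12) for _ in range(40)]


def welch_t(a, b):
    """Welch's t statistic for two samples with unequal variances."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se


t = welch_t(full_scrum, no_estimates)
# |t| well above ~2 suggests the gap is unlikely to be noise at these
# assumed spreads and sample sizes; a small |t| suggests it could be.
print(f"t = {t:.2f}")
```

With a wider assumed spread or smaller samples, the same 9-point gap in means can fall well inside the noise, which is exactly why the missing raw data prevents any real conclusion.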

So the critical question still remains

How can you make a decision in the presence of uncertainty without estimating the impact of that decision?


Related articles:
  • The Actual Science in Management Science
  • Carl Sagan's BS Detector
  • How to Estimate Software Development
  • Statistical Significance
  • Monte Carlo Simulation of Project Performance
  • Managing by (mis)quoting Deming
  • Mr. Franklin's Advice
  • Mike Cohn's Agile Quotes
  • Flaw of Averages
Categories: Project Management

How to set up Analytics on your AMP pages

Google Code Blog - Mon, 09/26/2016 - 18:57

Originally posted on Google Analytics blog

Posted by Arudea Mahartianto, Google AMP Specialist

In the digital world, whether you’re writing stories for your loyal readers, creating content that your fans love, helping the digital community, or providing items and services for your customers, understanding your audience is at the heart of it all. Key to unlocking that information is access to tools for measuring your audience and understanding their behavior. In addition to making your page load faster, Accelerated Mobile Pages (AMP) provides multiple analytics options without compromising on performance.

You can choose to use a solution like amp-pixel that behaves like a simple tracking pixel. It uses a single URL that allows variable substitutions, so it’s very customizable. See the amp-pixel documentation for more detail.
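As a sketch, a minimal amp-pixel tag might look like the following; the endpoint URL is a placeholder, and TITLE and RANDOM are among AMP's built-in URL variable substitutions (see the amp-pixel documentation for the full list):

```
<amp-pixel src="https://example.com/track?title=TITLE&r=RANDOM"
    layout="nodisplay"></amp-pixel>
```

AMP replaces the variables at request time, so the tracking endpoint receives the page title and a cache-busting random value without any JavaScript on the page.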

The amp-analytics component, on the other hand, is a powerful solution that recognizes many types of event triggers to help you collect specific metrics. Since amp-analytics is supported by multiple analytics providers, this means you can use amp-analytics to configure multiple endpoints and data sets. AMP then manages all of the instrumentation to come up with the data specified and shares it with these analytics solution providers.

To use amp-analytics, include the component library in your document's <head>:

<script async custom-element="amp-analytics"
    src="https://cdn.ampproject.org/v0/amp-analytics-0.1.js"></script>

And then include the component as follows (for these examples, make sure to specify your own account number instead of the placeholder):

<amp-analytics type="googleanalytics">
<script type="application/json">
{
  "vars": {
    "account": "UA-YYYY-Y"
  },
  "triggers": {
    "defaultPageview": {
      "on": "visible",
      "request": "pageview",
      "vars": {
        "title": "Name of the Article"
      }
    }
  }
}
</script>
</amp-analytics>

The JSON format is super flexible for describing several different types of events, and because it contains no JavaScript code, there is less potential for mistakes.

Expanding the above example, we can add another trigger, clickOnHeader:

<amp-analytics type="googleanalytics">
<script type="application/json">
{
  "vars": {
    "account": "UA-YYYY-Y"
  },
  "triggers": {
    "defaultPageview": {
      "on": "visible",
      "request": "pageview",
      "vars": {
        "title": "Name of the Article"
      }
    },
    "clickOnHeader": {
      "on": "click",
      "selector": "#header",
      "request": "event",
      "vars": {
        "eventCategory": "examples",
        "eventAction": "clicked-header"
      }
    }
  }
}
</script>
</amp-analytics>

For a detailed description of data sets you can request, as well as the complete list of analytics providers supporting amp-analytics, check out the amp-analytics documentation. You can also see more implementation examples in the Amp By Example site.

If you want to conduct a user experience experiment on your AMP pages, such as an A/B test, you can use the amp-experiment element. Any configurations done in this element will also be exposed to amp-analytics and amp-pixel, so you can easily do a statistical analysis of your experiment.

There are still plenty of ongoing developments for AMP analytics to help you gain insights as you AMPlify the user experience on your site. Visit the AMP Project roadmap to see a summary of what the team is cooking up. If you see some features missing, please file a request on GitHub.

Categories: Programming

Quote of the Day

Herding Cats - Glen Alleman - Mon, 09/26/2016 - 16:20

The essence of mathematics is to not make simple things complicated, but to make complicated things simple - Stan Gudder 

This notion that estimating is hard, or that estimates are a waste because they are always wrong, willfully ignores the basic mathematics of making decisions in the presence of uncertainty: probability and statistics, the foundation of all decision-making. Without this understanding, there can be no credible information provided to the decision makers.

Categories: Project Management