Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

No.

NOOP.NL - Jurgen Appelo - Fri, 04/17/2015 - 23:01
Say No.

Last week, I had a nice Skype call with a reader who was seeking my advice on becoming an author and speaker, and I gave him some pointers. I normally don’t schedule calls with random people asking for a favor, but this time I made an exception. I had a good reason.

The post No. appeared first on NOOP.NL.

Categories: Project Management

Australia - July/August 2015

Coding the Architecture - Simon Brown - Fri, 04/17/2015 - 17:42

It's booked! Following on from my trip to Australia and the YOW! 2014 conference in December last year, I'll be back in Australia during July and August. The rough plan is to arrive in Perth and head east; visiting at least Melbourne, Brisbane and Sydney again. I'm hoping to schedule some user group talks and, although there probably won't be any public workshops, I'll be running a limited number of in-house 1-day workshops and/or talks along the way too.

If you're interested in me visiting your office/company during my trip, please just drop me a note at simon.brown@codingthearchitecture.com. Thanks!

Categories: Architecture

Stuff The Internet Says On Scalability For April 17th, 2015

Hey, it's HighScalability time:

A fine tribute on Silicon Valley & hilarious formula evaluating Peter Gregory's positive impact on humanity.

  • 118/196: nations becoming democracies since the mid-19th century; $70K: nice minimum wage; 70 million: monthly StackExchange visitors; 1 billion: drone-planted trees; 1,000 years: longest-exposure camera shot ever

  • Quotable Quotes:

    • @DrQz: #Performance modeling is really about spreading the guilt around.

    • @natpryce: “What do we want?” “More levels of indirection!” “When do we want it?” “Ask my IDateTimeFactoryImplBeanSingletonProxy!”

    • @BenedictEvans: In the late 90s we were euphoric about what was possible, but half what we had sucked. Now everything's amazing, but we worry about bubbles

    • Calvin Zito on Twitter: "DreamWorks Animation: One movie, 250 TB to make. 10 movies in production at one time, 500 million files per movie. Wow."

    • Twitter: Some of our biggest MySQL clusters are over a thousand servers.

    • @SaraJChipps: It's 2015: open source your shit. No one wants to steal your stupid CRUD app. We just want to learn what works and what doesn't.

    • Calvin French-Owen: And as always: peace, love, ops, analytics.

    • @Wikipedia: Cut page load by 100ms and you save Wikipedia readers 617 years of wait annually. Apply as Web Performance Engineer

    • @IBMWatson: A person can generate more than 1 million gigabytes of health-related data.

    • @allspaw: "We’ve learned that automation does not eliminate errors." (yes!)  

    • @Obdurodon: Immutable data structures solve everything, in any environment where things like memory allocators and cache misses cost nothing.

    • KaiserPro: Pixar is still battling with lots of legacy cruft. They went through a phase of hiring the best and brightest directly from MIT and the like.

    • @abt_programming: "Duplication is far cheaper than the wrong abstraction" - @sandimetz

    • @kellabyte: When I see places running 1,200 containers for fairly small systems I want to scream "WHY?!"

    • chetanahuja: One of the engineers tried running our server stack on a raspberry for a laugh.. I was gobsmacked to hear that the whole thing just worked (it's a custom networking protocol stack running in userspace) if just a bit slower than usual.

  • Chances are if something can be done with your data, it will be done. @RMac18: Snapchat is using geofilters specific to Uber's headquarters to poach engineers.

  • Why (most) High Level Languages are Slow. Exactly this by masterbuzzsaw: If manual memory management is cancer, what is manual file management, manual database connectivity, manual texture management, etc.? C# may have “saved” the world from the “horrors” of memory management, but it introduced null reference landmines and took away our beautiful deterministic C++ destructors.

  • Why NFS instead of S3/EBS? nuclearqtip with a great answer: Stateful; Mountable AND shareable; Actual directories; On-the-wire operations (I don't have to download the entire file to start reading it, and I don't have to do anything special on the client side to support this); Shared unix permission model; Tolerant of network failures; Locking!; Better caching; Big files without the hassle.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Is LeSS more than SAFe?

Xebia Blog - Fri, 04/17/2015 - 14:48

(Large) Dutch companies looking for a way to scale up the benefits their Agile teams bring mostly use the Scaled Agile Framework (SAFe) as their reference model. That model is set up to be very accessible, for managers too, and training and certified consultants are readily available. As early as 2009, Craig Larman and Bas Vodde described their experiences applying Scrum in large organizations (Nokia among them) in their books 'Scaling Lean & Agile Development' and 'Practices for Scaling Lean & Agile Development'. They named the method Large Scale Scrum, LeSS for short.
LeSS has led a quiet existence in recent years. Recently it was decided to put this valuable body of thought back in the spotlight. A third book is due this summer, the site less.works has been launched, a training tour has started, and Craig and Bas are putting in appearances at the leading conferences. Bas, for example, will give a keynote at Xebicon 2015 in Amsterdam on June 4th. Is LeSS more or less than SAFe? Or more or less SAFe?

What is LeSS?
LeSS, then, is a method for organizing a large(r) organization around Agile teams. As the name gives away, Scrum is its starting point. It comes in two flavors: 'plain' LeSS, for up to 8 teams, and LeSS Huge, for 8 teams and up. LeSS builds on mandatory rules, for example: "An Overall Retrospective is held after the Team Retrospectives to discuss cross-team and system-wide issues, and create improvement experiments. This is attended by Product Owner, ScrumMasters, Team Representatives, and managers (if there are any)." In addition, LeSS has principles (design criteria). The principles form the frame of reference for taking the right design decisions. Finally, there are the Guidelines and Experiments: the practices that have proven successful, or not, at real organizations. Beyond the basic framework, LeSS goes deeper into:

  • Structure (the organizational structure)
  • Management (the changing role of management)
  • Technical Excellence (strongly based on XP and Continuous Delivery)
  • Adoption (the transformation to the LeSS organization).

LeSS in a nutshell
The foundation of LeSS is that Large Scale Scrum = Scrum! Like SAFe, LeSS looks at how Scrum can be applied to a group of, say, 100 people. LeSS stays closest to Scrum: there is 1 sprint, with 1 Product Owner, 1 product backlog, 1 planning and 1 sprint review, in which 1 product is delivered. This differs from SAFe, which defines an inflated sprint (the Product Increment). Pulling off this single-sprint implementation requires, besides a very strong whole-product focus, a technical platform that supports it. Where SAFe pragmatically allows a gradual introduction of Agile at Scale, LeSS is stricter in its ready-at-the-start requirements. A structure must be put in place that breaks through the culture of the 'contract game': the culture of over-asking, pressure, ambiguity, surprises, and blame-driven accountability.

LeSS is more and less SAFe
The recent efforts to make LeSS accessible will undoubtedly lead to sharply increasing attention for this appealing approach to setting up Agile at Scale. LeSS is different from SAFe, although the two models, especially in their sources of inspiration, also have a lot in common.
The two models take a different tack on, for example:

  • how to apply Scrum to a cluster of teams
  • the approach to the transformation to Agile at Scale
  • how solutions are presented: SAFe prescribes the solution, LeSS gives the pros and cons of the choices

It is also striking that SAFe (with its portfolio level) explains how the connection between strategy and backlogs should be made. LeSS, by contrast, devotes more attention to the transformation (Adoption) and to Agile at very large scale (LeSS Huge).

Whether an organization chooses LeSS or SAFe will depend on what fits the organization best: what fits its ambition for change and its 'agility' at the moment of starting. Strongly 'blue' organizations will choose SAFe; organizations that dare to take a bold step towards an Agile organization will sooner choose LeSS. In both cases it pays to take note of the solutions the other method offers.

For the Upcoming Graduates

Herding Cats - Glen Alleman - Fri, 04/17/2015 - 05:36

No matter your life experience, your view of the world, whether you've had military experience or not, this is - or should be - an inspirational commencement speech. 

 

Categories: Project Management

#NoEstimates

#NoEstimates is a lightning rod.

 

Estimation is one of the lightning rod issues in software development and maintenance. Over the past few years the concept of #NoEstimates has emerged and become a movement within the Agile community. Due to its newness, #NoEstimates has several camps revolving around a central concept. This essay begins the process of identifying and defining a core set of concepts in order to have a measured discussion. A shared language across the gamut of estimating ideas, whether Agile or not, including #NoEstimates, is critical for comparing the two sets of concepts. We begin our exploration of the ideas around #NoEstimates by establishing a context that includes both classic estimation and #NoEstimates.

Classic Estimation Context: Estimation as a topic is often a synthesis of three related but different concepts: budgeting, estimation and planning. Because these three concepts are often conflated, it is important to understand the relationship between them. These are typical in a normal commercial organization, although they might be called different things depending on your business model. An estimate is a finite approximation of cost, effort and/or duration based on some basis of knowledge (this is known as a basis of estimation). The flow of activity conflated as estimation often runs from budget, to project estimate, to plan. In most organizations, the act of generating a finite approximation typically begins as a form of portfolio management in order to generate a budget for a department or group. The budgeting process helps make decisions about which pieces of work are to be done. Most organizations have a portfolio of work that is larger than they can accomplish, therefore they need a mechanism to prioritize. Most portfolio managers, whether proponents of an Agile or a classic approach, would defend using value as a key determinant of prioritization. Value requires having some type of forecast of the cost and benefit of the project over some timeframe. Once a project enters a pipeline in a classic organization, an estimate is typically generated. The estimate is generally believed to be more accurate than the original budget due to the information gathered as the project is groomed to begin. Plans break down stories into tasks, often with personnel assigned, generate an estimate of effort at the task level, and sum those estimates into higher-level estimates. Any of these steps can (but should not) be called estimation. The three-level process described above, if misused, can cause several team and organizational issues. Proponents of the #NoEstimates movement often classify these issues as estimation pathologies; we will explore these “pathologies” in later essays.

#NoEstimates Context: There are two macro camps in the #NoEstimates movement (the two camps probably reflect a continuum of ideas rather than absolutes). The first camp argues that a team should break work down into small chunks and then immediately begin completing those small chunks (doing the highest value first). The chunks build up quickly to a minimum viable product (MVP) that can generate feedback, so the team can hone its ability to deliver value. I call this camp the “Feedback’ers”, and luminaries like Woody Zuill often champion this camp. A second camp begins in a similar manner – by breaking the work into small pieces, prioritizing on value (and perhaps risk), and delivering against an MVP to generate feedback – but they also measure throughput. Throughput is a measure of how many units of work (e.g. stories or widgets) a team can deliver in a specific period of time. Continuously measuring the throughput of the team provides a tool to understand when work needs to start in order for it to be delivered within a period of time. Average throughput is used to provide the team and other stakeholders with a forecast of the future, very similar to the throughput measure used in Kanban. People like Vasco Duarte (listen to my interview with Vasco) champion the second camp, which I tend to call the “Kanban’ers”. I recently heard David Anderson, the Kanban visionary, discuss a similar #NoEstimates position using throughput as a forecasting tool. Both camps in the #NoEstimates movement eschew developing story- or task-level estimates. The major difference is the use of throughput for forecasting, which is central to bottom-up estimating and planning at the lowest level of the classic estimation continuum.
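
To make the throughput idea concrete, here is a minimal sketch of the kind of throughput-based forecast the “Kanban’ers” describe. The weekly story counts and backlog size are illustrative assumptions, not data from a real team, and Python is used purely for illustration:

# Sketch of a throughput-based forecast; all numbers are invented.
weekly_throughput = [6, 4, 7, 5, 6]  # stories completed in each of the last five weeks
average_throughput = sum(weekly_throughput) / len(weekly_throughput)

backlog_size = 42  # stories remaining for the next release
weeks_to_deliver = backlog_size / average_throughput

print("average throughput: %.1f stories/week" % average_throughput)
print("forecast: roughly %.0f weeks to deliver %d stories" % (weeks_to_deliver, backlog_size))

On a real team the average would be recalculated continuously, and a range built from the best and worst recent weeks gives more honest guidance than a single number.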

When done correctly, both #NoEstimates and classic estimation are tools to generate feedback and create guidance for the organization. In its purest form, #NoEstimates uses functionality to generate feedback and to provide guidance about what is possible. The less absolutist “Kanban’er” form of #NoEstimates uses both functional software and throughput measures as feedback and guidance tools. Classic estimation uses plans and performance against the plan to generate feedback and guidance. The goal is usually the same; it is just that the mechanisms are very different. With an established context and vocabulary, we can explore the concepts more deeply.


Categories: Process Management

R: Think Bayes – More posterior probability calculations

Mark Needham - Thu, 04/16/2015 - 21:57

As I mentioned in a post last week I’ve been reading through Think Bayes and translating some of the examples from Python to R.

After my first post Antonios suggested a more idiomatic way of writing the function in R so I thought I’d give it a try to calculate the probability that combinations of cookies had come from each bowl.

In the simplest case we have this function which takes in the names of the bowls and the likelihood scores:

f = function(names,likelihoods) {
  # Assume each option has an equal prior
  priors = rep(1, length(names)) / length(names)
 
  # create a data frame with all info you have
  dt = data.frame(names,priors,likelihoods)
 
  # calculate posterior probabilities
  dt$post = dt$priors*dt$likelihoods / sum(dt$priors*dt$likelihoods)
 
  # specify what you want the function to return
  list(names=dt$names, priors=dt$priors, likelihoods=dt$likelihoods, posteriors=dt$post)  
}

We assume a prior probability of 0.5 for each bowl.

Given the following probabilities of different cookies being in each bowl (shown here in the book’s Python)…

mixes = {
  'Bowl 1':dict(vanilla=0.75, chocolate=0.25),
  'Bowl 2':dict(vanilla=0.5, chocolate=0.5),
}

…we can simulate taking one vanilla cookie with the following parameters:

Likelihoods = c(0.75,0.5)
Names = c("Bowl 1", "Bowl 2")
res=f(Names,Likelihoods)
 
> res$posteriors[res$name == "Bowl 1"]
[1] 0.6
> res$posteriors[res$name == "Bowl 2"]
[1] 0.4
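
As a sanity check, the same number falls out of Bayes' theorem by hand: P(Bowl 1 | vanilla) = (0.5 × 0.75) / (0.5 × 0.75 + 0.5 × 0.5) = 0.375 / 0.625 = 0.6, leaving 0.4 for Bowl 2.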

If we want to simulate taking 3 vanilla cookies and 1 chocolate one we’d have the following:

Likelihoods = c((0.75 ** 3) * (0.25 ** 1), (0.5 ** 3) * (0.5 ** 1))
Names = c("Bowl 1", "Bowl 2")
res=f(Names,Likelihoods)
 
> res$posteriors[res$name == "Bowl 1"]
[1] 0.627907
> res$posteriors[res$name == "Bowl 2"]
[1] 0.372093

That’s a bit clunky and the intent of ‘3 vanilla cookies and 1 chocolate’ has been lost. I decided to refactor the code to take in a vector of cookies and calculate the likelihoods internally.

First we need to create a data structure to store the mixes of cookies in each bowl that we defined above. It turns out we can do this using a nested list:

bowl1Mix = c(0.75, 0.25)
names(bowl1Mix) = c("vanilla", "chocolate")
bowl2Mix = c(0.5, 0.5)
names(bowl2Mix) = c("vanilla", "chocolate")
Mixes = list("Bowl 1" = bowl1Mix, "Bowl 2" = bowl2Mix)
 
> Mixes
$`Bowl 1`
  vanilla chocolate 
     0.75      0.25 
 
$`Bowl 2`
  vanilla chocolate 
      0.5       0.5

Now let’s tweak our function to take in observations rather than likelihoods and then calculate those likelihoods internally:

likelihoods = function(names, mixes, observations) {
  # start each option with a likelihood of 1
  scores = rep(1, length(names))
  names(scores) = names
 
  # multiply in the probability of each observed cookie
  for(name in names) {
    for(observation in observations) {
      scores[name] = scores[name] * mixes[[name]][observation]
    }
  }
  return(scores)
}
 
f = function(names,mixes,observations) {
  # Assume each option has an equal prior
  priors = rep(1, length(names)) / length(names)
 
  # create a data frame with all info you have
  dt = data.frame(names,priors)
 
  dt$likelihoods = likelihoods(names, mixes, observations)
 
  # calculate posterior probabilities
  dt$post = dt$priors*dt$likelihoods / sum(dt$priors*dt$likelihoods)
 
  # specify what you want the function to return
  list(names=dt$names, priors=dt$priors, likelihoods=dt$likelihoods, posteriors=dt$post)  
}

And if we call that function:

Names = c("Bowl 1", "Bowl 2")
 
bowl1Mix = c(0.75, 0.25)
names(bowl1Mix) = c("vanilla", "chocolate")
bowl2Mix = c(0.5, 0.5)
names(bowl2Mix) = c("vanilla", "chocolate")
Mixes = list("Bowl 1" = bowl1Mix, "Bowl 2" = bowl2Mix)
Mixes
 
Observations = c("vanilla", "vanilla", "vanilla", "chocolate")
 
res=f(Names,Mixes,Observations)
 
> res$posteriors[res$names == "Bowl 1"]
[1] 0.627907
 
> res$posteriors[res$names == "Bowl 2"]
[1] 0.372093

Exactly the same result as before! #win

Categories: Programming

Works with Google Cardboard: creativity plus compatibility

Google Code Blog - Thu, 04/16/2015 - 18:16

Posted by Andrew Nartker, Product Manager, Google Cardboard

All of us is greater than any single one of us. That’s why we open sourced the Cardboard viewer design on day one. And why we’ve been working on virtual reality (VR) tools for manufacturers and developers ever since. We want to make VR better together, and the community continues to inspire us.

For example: what began with cardboard, velcro and some lenses has become a part of toy fairs and art shows and film festivals all over the world. There are also hundreds of Cardboard apps on Google Play, including test drives, roller coaster rides, and mountain climbs. And people keep finding new ways to bring VR into their daily lives—from campus tours to marriage proposals to vacation planning.

It’s what we dreamed about when we folded our first piece of cardboard, and combined it with a smartphone: a VR experience for everyone! And less than a year later, there’s a tremendous diversity of VR viewers and apps to choose from. To keep this creativity going, however, we also need to invest in compatibility. That’s why we’re announcing a new program called Works with Google Cardboard.

At its core, the program enables any Cardboard viewer to work well with any Cardboard app. And the result is more awesome VR for all of us.

For makers: compatibility tools, and a certification badge

These days you can find Cardboard viewers made from all sorts of materials—plastic, wood, metal, even pizza boxes. The challenge is that each viewer may have slightly different optics and dimensions, and apps actually need this info to deliver a great experience. That’s why, as part of today’s program, we’re releasing a new tool that configures any viewer for every Cardboard app, automatically.

As a manufacturer, all you need to do is define your viewer’s key parameters (like focal length, input type, and inter-lens distance), and you’ll get a QR code to place on your device. Once a user scans this code using the Google Cardboard app, all their other Cardboard VR experiences will be optimized for your viewer. And that’s it.

Starting today, manufacturers can also apply for a program certification badge. This way potential users will know, at a glance, that a VR viewer works great with Cardboard apps and games. Visit the Cardboard website to get started.

The GoggleTech C1-Glass viewer works with Google Cardboard

For developers: design guidelines and SDK updates

Whether you’re building your first VR app, or you’ve done it ten times before, creating an immersive experience comes with a unique set of design questions, like “How should I orient users at startup?” or “How do menus even work in VR?”

We’ve explored these questions (and many more) since launch, and today we’re sharing our initial learnings with the developer community. Our new design guidelines focus on overall usability, as well as common VR pitfalls, so take a look and let us know your thoughts.

Of course, we want to make it easier to design and build great apps. So today we're also updating the Cardboard SDKs for Android and Unity—including improved head tracking and drift correction. In addition, both SDKs support the Works with Google Cardboard program, so all your apps will play nice with all certified VR viewers.

For users: apps + viewers = choices

The number of Cardboard apps has quickly grown from dozens to hundreds, so we’re expanding our Google Play collection to help you find high-quality apps even faster. New categories include Music and Video, Games, and Experiences. Whether you’re blasting asteroids, or reliving the Saturday Night Live 40th Anniversary Special, there’s plenty to explore on Google Play.

New collections of Cardboard apps on Google Play

Today’s Works with Google Cardboard announcement means you’ll get the same great VR experience across a wide selection of Cardboard viewers. Find the viewer that fits you best, and then fire up your favorite apps.

For the future: Thrive Audio and Tilt Brush are joining the Google family

Most of today’s VR experiences focus on what you see, but what you hear is just as important. That’s why we’re excited to welcome the Thrive Audio team from the School of Engineering in Trinity College Dublin to Google. With their ambisonic surround sound technology, we can start bringing immersive audio to VR.

In addition, we’re thrilled to have the Tilt Brush team join our family. With its innovative approach to 3D painting, Tilt Brush won last year’s Proto Award for Best Graphical User Interface. We’re looking forward to having them at Google, and building great apps together.

Ultimately, today’s updates are about making VR better together. Join the fold, and let’s have some fun.

Categories: Programming

Announcing General Availability of Azure Premium Storage

ScottGu's Blog - Scott Guthrie - Thu, 04/16/2015 - 18:01

I’m very excited to announce the general availability release of Azure Premium Storage. It now comes with an enterprise-grade SLA and is available for everyone to use.

Microsoft Azure now offers two types of storage: Premium Storage and Standard Storage. Premium Storage stores data durably on Solid State Drives (SSDs) and provides high performance, low latency, disk storage with consistent performance delivery guarantees.

Premium Storage is ideal for I/O-sensitive workloads - and is especially great for database workloads hosted within Virtual Machines. You can attach several Premium Storage disks to a single VM, supporting up to 32 TB of disk storage per Virtual Machine and more than 64,000 IOPS per VM at less than 1 millisecond latency for read operations. This provides an incredibly fast storage option that enables you to run even more workloads in the cloud.

Using Premium Storage, Azure now offers the ability to run more demanding applications - including high-volume SQL Server, Dynamics AX, Dynamics CRM, Exchange Server, MySQL, Oracle Database, IBM DB2, MongoDB, Cassandra, and SAP solutions.

Durability

Durability of data is of utmost importance for any persistent storage option. Azure customers have critical applications that depend on the persistence of their data and high tolerance against failures. Premium Storage keeps three replicas of data within the same region, and ensures that a write operation will not be confirmed back until it has been durably replicated. This is a unique cloud capability provided only by Azure today.

In addition, you can also optionally create snapshots of your disks and copy those snapshots to a Standard GRS storage account - which enables you to maintain a geo-redundant snapshot of your data that is stored more than 400 miles away from your primary Azure region for disaster recovery purposes.

Available Regions

Premium Storage is available today in the following Azure regions:

  • West US
  • East US 2
  • West Europe
  • East China
  • Southeast Asia
  • West Japan

We will expand Premium Storage to run in all Azure regions in the near future.

Getting Started

You can easily get started with Premium Storage starting today. Simply go to the Microsoft Azure Management Portal and create a new Premium Storage account. You can do this by creating a new Storage Account and selecting the “Premium Locally Redundant” storage option (note: this option is only listed if you select a region where Premium Storage is available).

Then create a new VM and select the “DS” series of VM sizes. The DS-series of VMs are optimized to work great with Premium Storage. When you create the DS VM you can simply point it at your Premium Storage account and you’ll be all set.

Learning More

Learn more about Premium Storage from Mark Russinovich's blog post on today's release. You can also see a live 3-minute demo of Premium Storage in action by watching Mark Russinovich’s video on premium storage. In it Mark shows both a Windows Server and a Linux VM driving more than 64,000 disk IOPS with low latency against a durable drive powered by Azure Premium Storage.

Summary

We are very excited about the release of Azure Premium Storage. Premium Storage opens up so many new opportunities to use Azure to run workloads in the cloud – including migrating existing on-premises solutions.

As always, we would love to hear feedback via comments on this blog, the Azure Storage MSDN forum or send email to mastoragequestions@microsoft.com.

Hope this helps,

Scott

Categories: Architecture, Programming

Paper: Large-scale cluster management at Google with Borg

Joe Beda (@jbeda): Borg paper is finally out. Lots of reasoning for why we made various decisions in #kubernetes. Very exciting.

The hints and allusions are over. We now have everything about Google's long rumored Borg project in one iconic Google style paper: Large-scale cluster management at Google with Borg.

When Google blew our minds by audaciously treating the Datacenter as a Computer it did not go unnoticed that by analogy there must be an operating system for that datacenter/computer.

Now we have the story behind a critical part of that OS:

Google's Borg system is a cluster manager that runs hundreds of thousands of jobs, from many thousands of different applications, across a number of clusters each with up to tens of thousands of machines.

It achieves high utilization by combining admission control, efficient task-packing, over-commitment, and machine sharing with process-level performance isolation. It supports high-availability applications with runtime features that minimize fault-recovery time, and scheduling policies that reduce the probability of correlated failures. Borg simplifies life for its users by offering a declarative job specification language, name service integration, real-time job monitoring, and tools to analyze and simulate system behavior.

We present a summary of the Borg system architecture and features, important design decisions, a quantitative analysis of some of its policy decisions, and a qualitative examination of lessons learned from a decade of operational experience with it.

Virtually all of Google’s cluster workloads have switched to use Borg over the past decade. We continue to evolve it, and have applied the lessons we learned from it to Kubernetes.

The next version of Borg was called Omega, and Omega is being rolled up into Kubernetes (Greek for steersman, helmsman, or sailing master), which has been open sourced as part of Google's Cloud initiative.

Note how the world has changed. A decade ago when Google published their industry-changing BigTable and MapReduce papers they launched a thousand open source projects in response. Now we are not only seeing Google open source their software instead of others simply copying the ideas, the software has been released well in advance of the paper describing the software.

The future is still in balance. There's a huge fight going on for the future of what software will look like, how it is built, how it is distributed, and who makes the money. In the search business keeping software closed was a competitive advantage. In the age of AWS the only way to capture hearts and minds is by opening up your software. Interesting times.

Categories: Architecture

Why Your Million Dollar Idea Is Worthless

Making the Complex Simple - John Sonmez - Thu, 04/16/2015 - 16:00

In this video, I talk about why I think some multimillion dollar ideas are worthless, and how some crappy ideas are worth millions. Full transcript: John: Hey, this is John Sonmez from Simpleprogrammer.com and today I want to tell you why your million dollar idea is worthless. So that’s right. Every time that I’m talking […]

The post Why Your Million Dollar Idea Is Worthless appeared first on Simple Programmer.

Categories: Programming

Creating Better User Experiences on Google Play

Android Developers Blog - Thu, 04/16/2015 - 03:44

Posted by Eunice Kim, Product Manager for Google Play

Whether it's a way to track workouts, chart the nighttime stars, or build a new reality and battle for world domination, Google Play gives developers a platform to create engaging apps and games and build successful businesses. Key to that mission is offering users a positive experience while searching for apps and games on Google Play. Today we have two updates to improve the experience for both developers and users.

A global content rating system based on industry standards

Today we’re introducing a new age-based rating system for apps and games on Google Play. We know that people in different countries have different ideas about what content is appropriate for kids, teens and adults, so today’s announcement will help developers better label their apps for the right audience. Consistent with industry best practices, this change will give developers an easy way to communicate familiar and locally relevant content ratings to their users and help improve app discovery and engagement by letting people choose content that is right for them.

Starting now, developers can complete a content rating questionnaire for each of their apps and games to receive objective content ratings. Google Play’s new rating system includes official ratings from the International Age Rating Coalition (IARC) and its participating bodies, including the Entertainment Software Rating Board (ESRB), Pan-European Game Information (PEGI), Australian Classification Board, Unterhaltungssoftware Selbstkontrolle (USK) and Classificação Indicativa (ClassInd). Territories not covered by a specific ratings authority will display an age-based, generic rating. The process is quick, automated and free to developers. In the coming weeks, consumers worldwide will begin to see these new ratings in their local markets.

To help maintain your apps’ availability on Google Play, sign in to the Developer Console and complete the new rating questionnaire for each of your apps. Apps without a completed rating questionnaire will be marked as “Unrated” and may be blocked in certain territories or for specific users. Starting in May, all new apps and updates to existing apps will require a completed questionnaire before they can be published on Google Play.

An app review process that better protects users

Several months ago, we began reviewing apps before they are published on Google Play to better protect the community and improve the app catalog. This new process involves a team of experts who are responsible for identifying violations of our developer policies earlier in the app lifecycle. We value the rapid innovation and iteration that is unique to Google Play, and will continue to help developers get their products to market within a matter of hours after submission, rather than days or weeks. In fact, there has been no noticeable change for developers during the rollout.

To assist in this effort and provide more transparency to developers, we’ve also rolled out improvements to the way we handle publishing status. Developers now have more insight into why apps are rejected or suspended, and they can easily fix and resubmit their apps for minor policy violations.

Over the past year, we’ve paid more than $7 billion to developers and are excited to see the ecosystem grow and innovate. We’ll continue to build tools and services that foster this growth and help the developer community build successful businesses.

Join the discussion on +Android Developers
Categories: Programming

Helping developers connect with families on Google Play

Android Developers Blog - Thu, 04/16/2015 - 03:43

Posted by Eunice Kim, Product Manager, Google Play

There are thousands of Android developers creating experiences for families and children — apps and games that broaden the mind and inspire creativity. These developers, like PBS Kids, Tynker and Crayola, carefully tailor their apps to provide high-quality, age-appropriate content, from optimizing user interface design for children to building interactive features that both educate and entertain.

Google Play is committed to the success of this emerging developer community, so today we’re introducing a new program called Designed for Families, which allows developers to designate their apps and games as family-friendly. Participating apps will be eligible for upcoming family-focused experiences on Google Play that will help parents discover great, age-appropriate content and make more informed choices.

Starting now, developers can opt in their app or game through the Google Play Developer Console. From there, our team will review the submission to verify that it meets the Designed for Families program requirements. In the coming weeks, we’ll be adding new ways to promote family content to users on Google Play — we’ll have more to share on this soon.

Join the discussion on +Android Developers
Categories: Programming

Power Great Gaming with New Analytics from Play Games

Android Developers Blog - Thu, 04/16/2015 - 03:30

By Ben Frenkel, Google Play Games team

A few weeks ago at the Game Developers Conference (GDC), we announced Play Games Player Analytics, a new set of free reports to help you manage your games business and understand in-game player behavior. Today, we’re excited to make these new tools available to you in the Google Play Developer Console.

Analytics is a key component of running a game as a service, which is increasingly becoming a necessity for running a successful mobile gaming business. When you take a closer look at large developers that do this successfully, you find that they do three things really well:

  • Manage their business to revenue targets
  • Identify hot spots in their business metrics so they can continuously focus on the game updates that will drive the most impact
  • Use analytics to understand how players are progressing, spending, and churning

“With player engagement and revenue data living under one roof, developers get a level of data quality that is simply not available to smaller teams without dedicated staff. As the tools evolve, I think Google Play Games Player Analytics will finally allow indie devs to confidently make data-driven changes that actually improve revenue.”

Kevin Pazirandeh
Developer of Zombie Highway 2

With Player Analytics, we wanted to make these capabilities available to the entire developer ecosystem on Google Play in a frictionless, easy-to-use way, freeing up your precious time to create great gaming experiences. Small studios, including the makers of Zombie Highway 2 and Bombsquad, have already started to see the benefits and impact of Player Analytics on their business.

Further, if you integrate with Google Play game services, you get this set of analytics with no incremental effort. But, for a little extra work, you can also unlock another set of high-impact reports by integrating Google Play game services Events, starting with the Sources and Sinks report, which helps you balance your in-game economy.

If you already have a game integrated with Google Play game services, go check out the new reports in the Google Play Developer Console today. For everyone else, enabling Player Analytics is as simple as adding a handful of lines of code to your game to integrate Google Play game services.

Manage your business to revenue targets

Set your spend target in Player Analytics by choosing a daily goal

To help assess the health of your games business, Player Analytics enables you to select a daily in-app purchase revenue target and then assess how you're doing against that goal through the Target vs Actual report. Learn more.

Identify hot spots using benchmarks with the Business Drivers report

Ever wonder how your game’s performance stacks up against other games? Player Analytics tells you exactly how well you are doing compared to similar games in your category.

Metrics highlighted in red are below the benchmark. Arrows indicate whether a metric is trending up or down, and cells with an icon can be clicked to see more details about the underlying drivers of the change. Learn more.

Track player retention by new user cohort

In the Retention report, you can see the percentage of players that continued to play your game in the seven days after installing it.

Learn more.

See where players are spending their time, struggling, and churning with the Player Progression report

Measured by the number of achievements players have earned, the Player Progression funnel helps you identify where your players are struggling and churning to help you refine your game and, ultimately, improve retention. Add more achievements to make progression tracking more precise.

Learn more.

Manage your in-game economy with the Sources and Sinks report

The Sources and Sinks report helps you balance your in-game economy by showing the relationship between how quickly players are earning or buying and using resources.

For example, Eric Froemling, the one-man developer of BombSquad, used the Sources & Sinks report to help balance the rate at which players earned and spent tickets.

Read more about Eric’s experience with Player Analytics in his recent blog post.

To enable the Sources and Sinks report you will need to create and integrate Play game services Events that track sources of premium currency (e.g., gold coins earned), and sinks of premium currency (e.g., gold coins spent to buy in-app items).

Categories: Programming

Experimenting with Swift and UIStoryboardSegues

Xebia Blog - Wed, 04/15/2015 - 21:58

Lately I've been experimenting a lot with doing things differently in Swift. I'm still trying to find best practices and discover completely new ways of doing things. One example of this is passing objects from one view controller to another through a segue in a single line of code, which I will cover in this post.

Imagine two view controllers, a BookViewController and an AuthorViewController. Both are in the same storyboard and the BookViewController has a button that pushes the AuthorViewController on the navigation controller through a segue. To know which author we need to show on the AuthorViewController we need to pass an author object from the BookViewController to the AuthorViewController. The traditional way of doing this is giving the segue an identifier and then setting the object:

class BookViewController: UIViewController {

    var book: Book!

    override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
        if segue.identifier == "ShowAuthor" {
            let authorViewController = segue.destinationViewController as! AuthorViewController
            authorViewController.author = book.author
        }
    }
}

class AuthorViewController: UIViewController {

    var author: Author!
}

And in case we would use a modal segue that shows an AuthorViewController embedded in a navigation controller, the code would be slightly more complex:

if segue.identifier == "ShowAuthor" {
  let authorViewController = (segue.destinationViewController as! UINavigationController).viewControllers[0] as! AuthorViewController
  authorViewController.author = book.author
}

Now let's see how we can add an extension to UIStoryboardSegue that makes this a bit easier and works the same for both scenarios. Instead of checking the segue identifier we will just check on the type of the destination view controller. We assume that based on the type the same object is passed on, even when there are multiple segues going to that type.

extension UIStoryboardSegue {

    func destinationViewControllerAs<T>(cl: T.Type) -> T? {
        return destinationViewController as? T ?? (destinationViewController as? UINavigationController)?.viewControllers[0] as? T
    }
}

What we've done here is add the method destinationViewControllerAs to UIStoryboardSegue that checks if the destinationViewController is of the generic type T. If it's not, it will check if the destinationViewController is a navigation controller and if its first view controller is of type T. If it finds either one, it will return that instance of T. Since it can also be nil, the return type is an optional T.

It's now incredibly simple to pass on our author object to the AuthorViewController:

override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
  segue.destinationViewControllerAs(AuthorViewController.self)?.author = book.author
}

No need to check any identifiers anymore, and less code. Now I'm not saying that this is the best way to do it or that it's even better than the traditional way of doing things. But it does show that Swift offers us new ways of doing things, and it's worth experimenting to find best practices.

The source code of the samples and extension is available on https://github.com/lammertw/StoryboardSegueExtension.

The Google Fit Developer Challenge winners

Google Code Blog - Wed, 04/15/2015 - 17:01

Posted by Angana Ghosh, Product Manager, Google Fit

Last year, we teamed up with adidas, Polar, and Withings to invite developers to create amazing fitness apps that integrated the new Google Fit platform. The community of Google Fit developers has flourished since then and to help get them inspired, we even suggested a few ideas for new, fun, innovative fitness apps. Today, we’re announcing the twelve grand prize winners, whose apps will be promoted on Google Play.

  • 7MinGym: All you need is this app, a chair, and a wall to start benefiting from 7 minute workouts at home. You can play music from your favorite music app and cast your workout to Chromecast or Android TV.
  • Aqualert: This app reminds you to stay hydrated throughout the day and lets you track your water intake.
  • Cinch Weight Loss and Fitness: Cinch gives you detailed information about your steps taken and calories burned. The app also supports heart-rate tracking with compatible Android Wear devices.
  • FitHub: FitHub lets you track your fitness activity from multiple accounts, including Google Fit, and multiple wearable devices, including Android Wear. You can also add your friends to compare your progress!
  • FitSquad: FitSquad turns fitness into a competition. Join your friends in a squad to compare progress, track achievements, and cheer each other on.
  • Instant - Quantified Self: Instant is a lifestyle app that helps you track not only your physical activity but your digital activity too, telling you how much you’re using your phone and apps. You can also set usage limits and reminders.
  • Jump Rope Wear Counter: This simple app lets you count your jump rope skips with an Android Wear device.
  • Move it!: This app packs one neat feature – it reminds you to get up and move about if you haven’t been active in the last hour.
  • Openrider - GPS Cycling Riding: Track and map your cycle routes with Openrider.
  • Running Buddies: In this run tracking app, runners can choose to share their runs and stats with those around them, so they can find similar runners to go running with.
  • Strength: Strength is a workout tracking app that also lets you choose from a number of routines, so you can get to your workout quickly and track it without manual data entry. Schedules and rest timers come included.
  • Walkholic: Walkholic is another way to see your Google Fit walking, cycling, and running data. You can also turn on notifications if you don’t meet your own preset goals.

We saw a wide range of apps that integrated Google Fit, and both the grand prize winners and the runners-up will be receiving some great devices from our challenge partners to help with their ongoing fitness app development: the X_CELL and SPEED_CELL from adidas, a new Android Wear device, a Loop activity tracker with an H7 heart rate sensor from Polar, and a Smart Body Analyzer from Withings.

We’re thrilled these developers chose to integrate the Google Fit platform into their apps, giving users one place to keep all their fitness activities. With the user’s permission, any developer can store or read the user’s data from Google Fit and use it to build powerful and useful fitness experiences. Find out more about integrating Google Fit into your app.

Categories: Programming

Full Stack Tuning for a 100x Load Increase and 40x Better Response Times

A world that wants full stack developers also needs full stack tuners. That tuning process, or at least the outline of one, is something Ronald Bradford describes (in not quite enough detail) in Improving performance – A full stack problem.

The general philosophy is:

  • Understand where to invest your energy first; know what the return on investment can be.
  • Measure and verify every change (see the sketch below).
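
As a small illustration of the second point, the sketch below times the same request repeatedly so a change can be verified against measured response times rather than gut feel. The URL and sample count are illustrative assumptions, and Python is used purely for illustration:

import statistics
import time
from urllib.request import urlopen

def sample_response_times(url, n=30):
    # Fetch the page n times and record each response time in milliseconds.
    times = []
    for _ in range(n):
        start = time.perf_counter()
        urlopen(url).read()
        times.append((time.perf_counter() - start) * 1000)
    return times

# Run once before a tuning change and once after, then compare the numbers.
times = sample_response_times("http://example.com/")
print("median: %.0f ms" % statistics.median(times))
print("p95: %.0f ms" % sorted(times)[int(len(times) * 0.95)])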

He lists several tips for general website improvements:

Categories: Architecture

SATURN conference in Baltimore, MD

Coding the Architecture - Simon Brown - Wed, 04/15/2015 - 09:25

Following on from the CRAFT conference in Budapest next week, I'm heading straight across the water to the SATURN conference, which is taking place in Baltimore, Maryland. SATURN is much more focused on software architecture than many of the other events I attend, and I had a fantastic time when I attended in 2013, so I'm delighted to be invited back. I'm involved with a number of different things at the conference, as follows.

  • Tuesday April 28th - Microservices Trial - I'll be on a panel with Len Bass and Sam Newman, debating whether a microservices architecture is just an attractive nuisance rather than something that's actually useful.
  • Wednesday April 29th - Software architecture as code - a talk about how much of the software architectural intent remains in the code and how we can represent this architectural model as code.
  • Wednesday April 29th - Office hours - an open discussion about anything related to software architecture.
  • Thursday April 30th - My Silver Toolbox - I'll be doing a short talk in Michael Keeling's session about some of the software architecture tools I find indispensable.

SATURN 2015 brochure

This is my last scheduled trip to the US this year, so please do come and grab me if you want to chat.

Categories: Architecture

Debunking

Herding Cats - Glen Alleman - Wed, 04/15/2015 - 04:21

This Blog has been focused on improving program and project management processes for many years. Over that time I've run into several bunk ideas around projects, development, methods, and the process of managing other people's money. When that happens, the result is a post or two about the nonsense idea and the corrections to those ideas, not just from my experience but from the governance frameworks that guide our work.

A post on Tony DaSilva's blog about the Debunkers Club struck a chord. I've edited that blog's content to fit my domain, with full attribution.

This Blog is dedicated to the proposition that all information is not created equal. Much of it is endowed by its creators with certain undeniable wrongs. Misinformation is dangerous!!

There's a lot of crap floating around any business or technical field. Much of it gets passed around by well-meaning folks, but it is harmful regardless of the purity of the conveyor.

People who attempt to debunk myths, mistakes, and misinformation are often tireless in their efforts. They are also too often helpless against the avalanche of misinformation.

The Debunker Club is an experiment in professional responsibility. Anyone who's interested may join as long as they agree to the following:

  1. I would like to see less misinformation in the project management field. This includes planning, estimating, risk, execution, performance management, and development methods.
  2. I will invest some of my time in learning and seeking the truth, from sources like peer-reviewed scientific research or translations of that research.
  3. I will politely, but actively, provide feedback to those who transmit misinformation.
  4. I will be open to counter feedback, listening to understand opposing viewpoints based on facts, examples, and evidence beyond personal opinion. I will provide counter-evidence and argument when warranted.
Related articles

  • Debunker Club Works to Dispel the Corrupted Cone of Learning
  • Five Estimating Pathologies and Their Corrective Actions
  • Critical Success Factors of IT Forecasting
  • Calculating Value from Software Projects - Estimating is a Risk Reduction Process
Categories: Project Management

Personas, Scenarios and Stories, Part 3

Tom “The Brewdog” Pinternick

The goal of all software-centric projects is functional software that meets user needs and provides value. The often-trod path to functional software begins with a group of personas, moves to scenarios, and then to user stories before being translated into software or other technical requirements.

Personas: A persona represents one of the archetypical users that interacts with the system or product.

Scenarios: A scenario is a narrative that describes purpose-driven interactions between a persona(s) and the product or system.

User Stories: A user story is a brief, simple requirement statement from the user perspective.

In Personas, Scenarios and Stories, Part 2 we generated a simple scenario for the “Brewdog” Logo Glass Collection Application:

When visiting a microbrewery, Tom “Brewdog” Pinternick notices that there are logo pint glasses on sale. The brewpub has several varieties. Tom can’t remember if he has collected all of the different glasses being offered. Recognizing a potential buying opportunity, Tom opens the Logo Pint Glass app, browses the glasses he has already collected for this brewpub, and discovers that one of the glasses is new.

After a team accepts the narrative story contained in a scenario, the team needs to mine the scenario for user stories. The typical format for a user story is <persona> <goal> <benefit>. Using the scenario above, the first user story that can be generated is:

Tom “Brewdog” Pinternick wants to browse his glass collection so that he does not buy the same glass twice.

Much of the activity of this scenario happens outside of the application and provides context for the developers. For example, one inferred user story might reflect that, since many brewpubs have muted mood lighting, the application will need to be readable in low light. This is a non-functional requirement. A user story could be constructed to tackle this requirement.

Tom “Brewdog” Pinternick wants to be able to see the app in low light situations so that the application is usable in brewpubs with varied lighting schemes.

Other possible ways to tackle this (and other) non-functional requirement includes adding criteria to the definition of done or by adding specific acceptance criteria to all user interface related user stories. For example, the team could add “all screens must be readable in low light” to the definition of done or as acceptance criteria for screens.

In both cases, these proposed user stories would need to be groomed before a team accepts them into a sprint. Grooming will ensure that the story has acceptance criteria and fits the INVEST (independent, negotiable, valuable, estimable, small enough and testable) criteria we have suggested as a guide to developing quality user stories. Once a story has been created and groomed, the team can convert the story into something of value (e.g. code, hardware or some other user-facing deliverable).
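
For readers who like to see structure, here is a toy sketch of the <persona> <goal> <benefit> format with acceptance criteria attached during grooming. The field names are illustrative, not taken from any particular tool, and Python is used purely for illustration:

from dataclasses import dataclass, field

@dataclass
class UserStory:
    # <persona> wants to <goal> so that <benefit>
    persona: str
    goal: str
    benefit: str
    acceptance_criteria: list = field(default_factory=list)

    def __str__(self):
        return "%s wants to %s so that %s." % (self.persona, self.goal, self.benefit)

story = UserStory(
    persona='Tom "Brewdog" Pinternick',
    goal="browse his glass collection",
    benefit="he does not buy the same glass twice",
)
story.acceptance_criteria.append("All screens must be readable in low light")
print(story)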

I am occasionally asked why teams can’t immediately start working as soon as they think they know what they are expected to deliver. The simplest answer is that the risk of failure is too high. Unless teams spend the time and effort to find out what their customers (whether a customer is internal or external is immaterial) want before they start building, the chances are they will incur failure or rework. Agile embraces change by building in mechanisms for accepting change into the backlog based on learning and feedback. Techniques like personas, scenarios and user stories provide Agile teams with a robust, repeatable process for generating and interpreting user needs.


Categories: Process Management