Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Agile Acceptance Testing Requires A Cavalcade Of Roles

Balloon glows require more expertise than a single team!

Over the years I have heard many reasons for performing some form of user acceptance testing. Some of those reasons are somewhat humorous, such as “UAT is on the checklist, therefore we have to do it,” while some are profound, such as reducing the risk of production failures and of the product not being accepted. Regardless of the reason, acceptance testing does not happen by magic; someone has to plan and execute it. Even in the most automated environment, acceptance testing requires a personal touch, and in Agile, acceptance testing is a group affair.

The Agile literature and pundits talk a great deal about the need for Agile teams to be cross-functional. A cross-functional Agile team should include all of the relevant functional and technical expertise needed to deliver the stories it has committed to delivering. Occasionally this idea is taken too far, and teams believe they can’t or don’t need to reach beyond their boundaries for knowledge or expertise. This perception is rarely true. Agile teams often need to draw on knowledge, experience and expertise that exists outside the boundary of the team. While the scope of the effort and the techniques used in Agile user acceptance testing (AUAT) can affect the number of people and teams involved, there is typically a fairly stable set of four capabilities that actively participate in acceptance testing.

  1. The Agile Team – The team (or teams) is always actively engaged in AUAT. AUAT is not a single event, but rather is integrated directly into every step of the product life cycle. Acceptance test cases are a significant part of requirements. Techniques such as Acceptance Test Driven Development require whole team involvement.
  2. Product Owner/Product Management – The product owner is the focal point for AUAT activities. The product owner acts as a conduit for business knowledge and needs into the team. As efforts scale up to require more than a single team or for external software products, product management teams are often needed to convey the interrelationships between features, stories and teams.
  3. Subject Matter Experts/Real users – Subject matter experts (SMEs) know the ins and outs of the product, market or other area of knowledge. Involving SMEs to frame acceptance tests or to review solutions as they evolve provides the team with a ready pool of knowledge that, by definition, it does not have. Product owners or product management identify, organize and bring subject matter expertise to the team.
  4. Test Professionals/Test Coaches – AUAT is real testing, therefore everyone involved in writing and automating acceptance test cases, creating test environments and executing acceptance testing needs to understand how to test. Test coaches (and possibly test architects) are very useful in helping everyone involved in AUAT, regardless of technique, to test effectively.

Over the years, who participated in user acceptance testing was as varied as the reasons people gave for doing acceptance testing. Sometimes development teams would “perform” acceptance testing as a proxy for the users. Other times software would be thrown over the wall and SMEs and other business users would do something that approximated testing. AUAT takes a different approach and builds acceptance testing directly into the product development flow. Integrating UAT into the whole flow of development requires that even the most cross-functional team access a whole cavalcade of roles inside and outside the team, so that AUAT reduces the chance of doing the wrong thing and, at the same time, reduces the chance of doing the right thing wrong.

Categories: Process Management

Expanding our developer video channel lineup

Google Code Blog - Thu, 09/03/2015 - 19:07

Posted by Reto Meier

Starting today, the Android Developers, Chrome Developers, and Google Developers YouTube channels will host the videos that apply to each specific topic area. By subscribing to each channel, you will only be notified about content that matches your interests.

The Google Developers YouTube channel has been bringing you content across many platforms and product offerings to help inspire, inform, and delight you. Recently, we’ve been posting a variety of recurring shows that cover many broad topics across all of our developer offerings, such as Android Performance Patterns, Polycasts and Coffee With A Googler.

As we produce more and more videos, covering an ever increasing range of topics, we want to make it easier for you to find the information you need.

This means that for the Android Developers Channel, you will get content that is more focused to Android, such as Android Performance Patterns. Similarly, the Chrome Developers Channel will host more web focused content, such as Polycasts, HTTP203, Totally Tooling Tips, and New in Chrome. The Google Developers Channel will continue to broadcast broader Google Developer focused content like our DevBytes covering Google Play services releases and our Coffee With A Googler series.

We look forward to bringing you lots more video to inspire, inform, and delight. To avoid missing any of it, you can subscribe to each of our YouTube channels using the following links; also be sure to turn notifications on in YouTube’s settings (more info here) so that you get updates as we post new content:

Google Developers | Android Developers | Chrome Developers

Categories: Programming

How Agari Uses Airbnb's Airflow as a Smarter Cron

This is a guest repost by Siddharth Anand, Data Architect at Agari, on Airbnb's open source project Airflow, a workflow scheduler for data pipelines. Some think Airflow has a superior approach.

Workflow schedulers are systems that are responsible for the periodic execution of workflows in a reliable and scalable manner. Workflow schedulers are pervasive - for instance, any company that has a data warehouse, a specialized database typically used for reporting, uses a workflow scheduler to coordinate nightly data loads into the data warehouse. Of more interest to companies like Agari is the use of workflow schedulers to reliably execute complex and business-critical "big" data science workloads! Agari, an email security company that tackles the problem of phishing, is increasingly leveraging data science, machine learning, and big data practices typically seen in data-driven companies like LinkedIn, Google, and Facebook in order to meet the demands of burgeoning data and dynamism around modeling.

In a previous post, I described how we leverage AWS to build a scalable data pipeline at Agari. In this post, I discuss our need for a workflow scheduler in order to improve the reliability of our data pipelines, using the previous post's pipeline as a working example.

Scheduling Workflows @ Agari - A Smarter Cron

Categories: Architecture

How NOT to Email Famous People

Making the Complex Simple - John Sonmez - Thu, 09/03/2015 - 16:00

In this episode, I talk about emailing famous people. Full transcript: John: Hey, John Sonmez from I am still here in Amsterdam and this video really doesn’t have anything to do with Amsterdam, but I got a few emails recently and I thought I would do a response, not to the email specifically […]

The post How NOT to Email Famous People appeared first on Simple Programmer.

Categories: Programming

Decision Support is a Core Business Process

Herding Cats - Glen Alleman - Thu, 09/03/2015 - 14:51

Been on the road for two weeks straight: at a client for a week, at VMworld for a few days, then back at the client site. During this time, the primary work has been deciding how to move the existing platform and augmented software systems forward using the Accelerator paradigm.

For those not familiar with Accelerators: they are fixed-term, cohort-based programs that include mentorship and educational components and culminate in a public pitch event or demo day.

Money is given to the cohort members ($25,000 to $100,000), and mentors provide intensive advice to the members over an 8 to 12 week period, in exchange for a percentage of future equity. At the end of the cycle, the software products that result are further funded, usually through venture capital, in support of the product strategy of the firm. At this client, we're doing this to expand the code base in a rapid manner to respond to market needs that are beyond our current capacity to meet in a timely manner.

The business of funding other people to produce value for the firm mandates making decisions in the presence of uncertainty. This is everyday, normal business management. All businesses do this. We team with other businesses, we provide funds directly, others provide funds, and we put out a challenge asking those applying for the funds to provide something of value to our portfolio needs. These challenges are in support of our mission. Once selected, the cohort members participate in workshops, mentoring, coaching, architectural assessments, and other standard software development processes.

But there is always uncertainty. This uncertainty is around the knowledge we need to make decisions. Questions like:

  • How much can be developed in the allotted time for the allotted money?
  • How much effort will it take to arrive at a needed set of capabilities that can meet our needs?
  • How much testing will be needed to confirm the produced software will properly function with the existing portfolio of capabilities?

The answers to these and hundreds of other questions involve uncertainty, risks, and tradeoffs. A rational decision-making framework used to answer these questions involves estimating nearly everything before working examples are present. These estimates are based on experience, assessment, reference classes, models using metrics and measurements, some measured data, but mostly experience from the past tested in a model.

No credible decision making can be performed without estimating the impact of the decision on the future performance of our efforts.

This is so core to all business management that it has names: Managerial Finance and the Microeconomics of Software Development. The main objective of these efforts is to improve the probability of success by learning from the metrics of past efforts mapped to the current effort.

These models do not involve a single causal explanation. Instead they combine statistical inference from available data - objective factors - with other, subjective factors. In all cases, estimates are the basis of the decision-making process. The causal relationships are themselves uncertain in their connectivity and influence.

One approach to this problem is the application of Bayes Networks: probabilistic models using Directed Acyclic Graphs (DAGs) that represent a set of random variables and their conditional dependencies on each other. Bayes' Theorem provides a rational means of updating our belief in some unknown hypothesis in light of new or additional evidence - observed outcomes or metrics.
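
As a reminder of the update rule the post refers to, writing H for the hypothesis (for example, "the needed capabilities fit in the allotted time and money") and E for newly observed evidence (an outcome or metric), Bayes' Theorem gives the revised belief:

P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}

Each new observation feeds back through this rule, so the estimate of our belief in H is continually revised as evidence accumulates.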

So in the end it comes down to this simple and yet powerful observation:

Managerial Finance is based on making decisions in the presence of uncertainty. In order to make these decisions, the needed information must in many cases be estimated.

Anyone conjecturing that decisions can be made in the presence of the normal uncertainty (reducible and irreducible) of business, in the absence of estimates of the outcomes of those decisions, is willfully ignoring the core principles of the business decision support paradigm.

Related articles:
  • Architecture-Centered ERP Systems in the Manufacturing Domain
  • IT Risk Management
  • Why Guessing is not Estimating and Estimating is not Guessing
Categories: Project Management

Wow your users with Google Cast

Google Code Blog - Wed, 09/02/2015 - 23:34

Posted by Alex Danilo, Developer Advocate

When you develop applications for Google Cast, you’re building a true multi-screen experience to ‘wow’ your users and provide a unique perspective. Part of hitting that wow factor is making the app enjoyable and easy to use.

While designing the Google Cast user experience, we performed a huge amount of user testing to refine a model that works for your users in as many scenarios as possible.

The video below gives a quick explanation of the overall user experience for Google Cast enabled applications.

We’ve also produced some targeted videos to highlight important aspects of the core Google Cast design principles.

The placement of the Cast icon is one of the most important UX guidelines, since it directly affects your users’ familiarity with the ability to Cast. Watch this explanation to help understand why we designed it that way:

Another important design consideration is how the connection between your application and the Google Cast device should work and that’s covered in this short video:

When your users are connected to a Google Cast device that’s playing sound, it’s vital that they can control the audio volume easily. Here’s another video that covers volume control in Cast enabled applications:

To get more detailed information about our UX design principles, we have great documentation and a convenient UX guidelines checklist.

By following the Google Cast UX guidelines in your app, you will give your users a great interactive experience that’ll wow them and have them coming back for more!

Join fellow developers in the Cast Developers Google+ community for more tips, tricks and pointers to all kinds of development resources.

Categories: Programming

Getting Go

Phil Trelford's Array - Wed, 09/02/2015 - 22:01

Go is a programming language developed at Google, loosely based on C, adding garbage collection and built-in concurrency primitives (goroutines).

I’ve looked at Go briefly in the past, at a Strangeloop workshop in 2012 and later while reading An Introduction to Programming in Go, but until now I hadn’t really done anything with it.

Recently I’ve been hearing some positive things about Go, so I thought I’d give it a go this evening on a simple task: downloading and unzipping a Nuget package.

Some language highlights:

  • Extensive libraries of functions with good documentation
  • Named and anonymous functions
  • Multiple return values from functions
  • No semi-colons required
  • No class inheritance hierarchies
  • Easy cleanup with the defer statement

Go is easy to install (an msi in Windows) and I found Notepad and the command line compiler sufficient to complete the tasks.

Task 1: List Nuget package versions

Nuget has a web API that returns the available versions of a specified package as a JSON array.

This task was pretty easy using Go’s http and json libraries:
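
A minimal sketch of the kind of program described here; the endpoint URL and the package id are assumptions, not the post's originals, and the response is assumed to be a JSON array of version strings:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical NuGet endpoint that returns the available versions
	// of a package as a JSON array of strings.
	url := ""
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Decode the JSON array and print each version.
	var versions []string
	if err := json.NewDecoder(resp.Body).Decode(&versions); err != nil {
		panic(err)
	}
	for _, v := range versions {
		fmt.Println(v)
	}
}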

Task 2: Download a Nuget package (zip) file

Again the http package made light work of this task:
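
A minimal sketch along the same lines, assuming a hypothetical package URL (a .nupkg is an ordinary zip file):

package main

import (
	"io"
	"net/http"
	"os"
)

func main() {
	// Hypothetical package URL; substitute the package and version you want.
	url := ""
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Stream the response body straight to a local .nupkg (zip) file.
	out, err := os.Create("FSharp.Data.nupkg")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	if _, err := io.Copy(out, resp.Body); err != nil {
		panic(err)
	}
}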

Task 3: Download and Unzip a Nuget package

Yet again I simply needed to import a package, this time the zip package:
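
A sketch of the unzip step using the standard library's archive/zip package; the file name and destination directory are assumptions:

package main

import (
	"archive/zip"
	"io"
	"log"
	"os"
	"path/filepath"
)

// unzip extracts every file in the archive at src into the directory dest.
func unzip(src, dest string) error {
	r, err := zip.OpenReader(src)
	if err != nil {
		return err
	}
	defer r.Close()

	for _, f := range r.File {
		path := filepath.Join(dest, f.Name)
		if f.FileInfo().IsDir() {
			os.MkdirAll(path, 0755)
			continue
		}
		if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil {
			return err
		}
		rc, err := f.Open()
		if err != nil {
			return err
		}
		out, err := os.Create(path)
		if err != nil {
			rc.Close()
			return err
		}
		// Close explicitly inside the loop rather than deferring.
		_, err = io.Copy(out, rc)
		out.Close()
		rc.Close()
		if err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := unzip("FSharp.Data.nupkg", "FSharp.Data"); err != nil {
		log.Fatal(err)
	}
}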


For the Nuget download task, Go was easy to pick up and quite pleasant; syntax-wise it felt a little lighter than C#, but in the bigger scheme of things was still a way off the lightness that comes with F# and OCaml's type inference. That said, if I had to choose between C# and Go, then I'd be tempted to give Go another go.

Categories: Programming

Announcing the Biggest VM Sizes Available in the Cloud: New Azure GS-VM Series

ScottGu's Blog - Scott Guthrie - Wed, 09/02/2015 - 18:51

Today, we’re announcing the release of the new Azure GS-series of Virtual Machine sizes, which enable Azure Premium Storage to be used with Azure G-series VM sizes. These VM sizes are now available to use in both our US and Europe regions.

Earlier this year we released the G-series of Azure Virtual Machines – which provide the largest VM sizes offered by any public cloud provider.  They provide up to 32 cores of CPU, 448 GB of memory and 6.59 TB of local SSD-based storage.  Today’s release of the GS-series of Azure Virtual Machines enables you to use these large VMs with Azure Premium Storage – and enables you to perform up to 2,000 MB/sec of storage throughput, more than double any other public cloud provider.  Using the G5/GS5 VM size now also offers more than 20 Gbps of network bandwidth, also more than double the network throughput provided by any other public cloud provider.

These new VM offerings provide an ideal solution for your most demanding cloud-based workloads, and are great for relational databases like SQL Server, MySQL and Postgres as well as other large data warehouse solutions. You can also use the GS-series to significantly scale up the performance of enterprise applications like Dynamics AX.

The G and GS-series of VM sizes are available to use now in our West US, East US-2, and West Europe Azure regions.  You’ll see us continue to expand availability around the world in more regions in the coming months.

GS Series Size Details

The below table provides more details on the exact capabilities of the new GS-series of VM sizes:

(Table: GS-series VM size details per size, including Max Disk IOPS and Max Disk Bandwidth in MB per second.)

Creating a GS-Series Virtual Machine

Creating a new GS series VM is very easy.  Simply navigate to the Azure Preview Portal, select New(+) and choose your favorite OS or VM image type:


Click the Create button, and then click the pricing tier option and select “View All” to see the full list of VM sizes. Make sure your region is West US, East US 2, or West Europe to select the G-series or the GS-Series:


When choosing a GS-series VM size, the portal will create a storage account using Premium Azure Storage. You can select an existing Premium Storage account, as well, to use for the OS disk of the VM:


Hitting Create will launch and provision the VM.

Learn More

If you would like more information on the GS-Series VM sizes as well as other Azure VM Sizes then please visit the following page for additional details: Virtual Machine Sizes for Azure.

For more information on Premium Storage, please see: Premium Storage overview. Also, refer to Using Linux VMs with Premium Storage for more details on Linux deployments on Premium Storage.

Hope this helps,


Categories: Architecture, Programming

Building Globally Distributed, Mission Critical Applications: Lessons From the Trenches Part 2

This is Part 2 of a guest post by Kris Beevers, founder and CEO, NSONE, a purveyor of a next-gen intelligent DNS and traffic management platform. Here's Part 1.

Integration and functional testing is crucial

Unit testing is hammered home in every modern software development class.  It’s good practice. Whether you’re doing test-driven development or just banging out code, without unit tests you can’t be sure a piece of code will do what it’s supposed to unless you test it carefully, and ensure those tests keep passing as your code evolves.

In a distributed application, your systems will break even if you have the world’s best unit testing coverage. Unit testing is not enough.

You need to test the interactions between your subsystems. What if a particular piece of configuration data changes – how does that impact Subsystem A’s communication with Subsystem B? What if you changed a message format – do all the subsystems generating and handling those messages continue to talk with each other? Does a particular kind of request that depends on results from four different backend subsystems still result in a correct response after your latest code changes?

Unit tests don’t answer these questions, but integration tests do. Invest time and energy in your integration testing suite, and put a process in place for integration testing at all stages of your development and deployment process. Ideally, run integration tests on your production systems, all the time.
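
To make the distinction concrete, here is a minimal sketch (not from the original post) of an integration-style test written in Go: instead of mocking the backend away, it starts a real HTTP backend and verifies that the frontend component can actually talk to it.

// integration_test.go - a sketch of an integration-style test.
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"testing"
)

// Subsystem B: a backend that reports a value in an agreed format.
func backendHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, `{"greeting":"hello"}`)
}

// Subsystem A: a frontend that depends on the backend answering correctly.
func frontend(backendURL string) (string, error) {
	resp, err := http.Get(backendURL)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("backend returned %d", resp.StatusCode)
	}
	return "ok", nil
}

// The test exercises the real interaction between the two subsystems,
// which is exactly what unit tests of each piece in isolation miss.
func TestFrontendTalksToBackend(t *testing.T) {
	backend := httptest.NewServer(http.HandlerFunc(backendHandler))
	defer backend.Close()

	if _, err := frontend(backend.URL); err != nil {
		t.Fatalf("frontend could not talk to backend: %v", err)
	}
}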

There is no such thing as service-interrupting maintenance
Categories: Architecture

The Brilliant Magic of Edge.js

DevHawk - Harry Pierson - Wed, 09/02/2015 - 15:00

In my post relaunching DevHawk, I mentioned that the site is written entirely in C# except for about 30 lines of JavaScript. Like many modern web content systems, Hawk uses Markdown. I write blog posts in Markdown and then my publishing "tool" (frankly little more than duct tape and baling wire at this point) converts the Markdown to HTML and uploads it to Azure.

However, as I went thru and converted all my old content to Markdown, I discovered that I needed some features that aren't supported by either the original implementation or the new CommonMark project. Luckily, I discovered the markdown-it project which implements the CommonMark spec but also supports syntax extensions. Markdown-it already had extensions for all of the extra features I needed - things like syntax highlighting, footnotes and custom containers.

The only problem with using markdown-it in Hawk is that it's written in JavaScript. JavaScript is a fine language and has lots of great libraries, but I find it a chore to write significant amounts of code in JavaScript - especially async code. I did try to rewrite my blog post upload tool in JavaScript. It was much more difficult than the equivalent C# code. Maybe once promises become more widely used and async/await is available, JavaScript will feel like it has a reasonable developer experience to me. Until then, C# remains my weapon of choice.

I wasn't willing to use JavaScript for the entire publishing tool, but I still needed to use markdown-it [1]. So I started looking for a way to integrate the small amount of JavaScript code that renders Markdown into HTML in with the rest of my C# code base. I was expecting to have to setup some kind of local web service with Node.js to host the markdown-it code in and call out to it from C# with HttpClient.

But then I discovered Edge.js. Holy frak, Edge.js blew my mind.

Edge.js provides nearly seamless interop between .NET and Node.js. I was able to drop the 30 lines of JavaScript code into my C# app and call it directly. It took all of about 15 minutes to prototype and it's less than 5 lines of C# code.

Seriously, I think Tomasz Janczuk must be some kind of a wizard.

To demonstrate how simple Edge.js is to use, let me show you how I integrated markdown-it into my publishing tool. Here is a somewhat simplified version of the JavaScript code I use to render markdown in my tool using markdown-it, including syntax highlighting and some other extensions.

// highlight.js integration lifted unchanged from 
var hljs  = require('highlight.js');
var md = require('markdown-it')({
  highlight: function (str, lang) {
    if (lang && hljs.getLanguage(lang)) {
      try { 
        return hljs.highlight(lang, str).value;
      } catch (__) {}
    }

    try {
      return hljs.highlightAuto(str).value;
    } catch (__) {}

    return ''; 
  }
});

// I use a few more extensions in my publishing tool, but you get the idea

var html = md.render(markdown);

As you can see, most of the code is just setting up markdown-it and its extensions. Actually rendering the markdown is just a single line of code.

In order to call this code from C#, we need to wrap the call to md.render with a JavaScript function that follows the Node.js callback style. We pass this wrapper function back to Edge.js by returning it from the JavaScript code.

// Ain't first order functions grand? 
return function (markdown, callback) {
    var html = md.render(markdown);
    callback(null, html);
};
Note, I have to use the callback style in this case even though my code is synchronous. I suspect I'm the outlier here. There's a lot more async Node.js code out in the wild than synchronous.

To make this code available to C#, all you have to do is pass the JavaScript code into the Edge.js Func function. Edge.js includes an embedded copy of Node.js as a DLL. The Func function executes the JavaScript and wraps the returned Node.js callback function in a .NET async delegate. The .NET delegate takes an object input parameter and returns a Task<object>. The delegate input parameter is passed in as the first parameter to the JavaScript function. The second parameter passed to the callback function becomes the return value from the delegate (wrapped in a Task of course). I haven't tested it, but I assume Edge.js will convert the callback function's first parameter to a C# exception if you pass a value other than null.

It sounds complex, but it's a trivial amount of code:

// markdown-it setup code omitted for brevity
Func<object, Task<object>> _markdownItFunc = EdgeJs.Edge.Func(@"
var md = require('markdown-it')() 

return function (markdown, callback) {
    var html = md.render(markdown);
    callback(null, html);
}");

async Task<string> MarkdownItAsync(string markdown)
{
    return (string)await _markdownItFunc(markdown);
}
To make it easier to use from the rest of my C# code, I wrapped the Edge.js delegate with a statically typed C# function. This handles type checking and casting as well as provides intellisense for the rest of my app.

The only remotely negative thing I can say about Edge.js is that it doesn't support .NET Core yet. I had to build my markdown rendering tool as a "traditional" C# console app instead of a DNX Custom Command like the rest of Hawk's command line utilities. However, Luke Stratman is working on .NET Core support for Edge.js. So maybe I'll be able to migrate my markdown rendering tool to DNX sooner rather than later.

Rarely have I ever discovered such an elegant solution to a problem I was having. Edge.js simply rocks. As I said on Twitter, I owe Tomasz a beer or five. Drop me a line Tomasz and let me know when you want to collect.

  1. I also investigated what it would take to update an existing .NET Markdown implementation like CommonMark.NET or F# Formatting to support custom syntax extensions. That would have been dramatically more code than simply biting the bullet and rewriting the post upload tool in JavaScript.

Categories: Architecture, Programming

Test Your Understanding

Making the Complex Simple - John Sonmez - Wed, 09/02/2015 - 13:00

Human communication is complicated. Using words to express ideas is so fundamental to the way we operate, the amount of effort going into it is often overlooked. Learning to communicate effectively is one of the more important skills for software professionals (and anyone) when working in teams. Every time we try to communicate something verbally […]

The post Test Your Understanding appeared first on Simple Programmer.

Categories: Programming

TestInsane’s Mindmaps Are Crazy Cool

James Bach’s Blog - Wed, 09/02/2015 - 11:09

Most testing companies offer nothing to the community or the field of testing. They all seem to say they hire only the best experts, but only a very few of them are willing to back that up with evidence. Testing companies, by and large, are all the same, and the sameness is one of mediocrity and mendacity.

But there are a few exceptions. One of them is TestInsane, founded by ex-Moolyan co-founder Santosh Tuppad. This is a company to watch.

The wonderful thing about TestInsane is their mindmaps. More than 100 of them. What lovelies! Check them out. They are a fantastic public contribution! Each mindmap tackles some testing-related subject and lists many useful ideas that will help you test in that area.

I am working on a guide to bug reporting, and I found three maps on their site that are helping me cover all the issues that matter. Thank you TestInsane!

I challenge other testing companies to contribute to the craft, as well.

Note: Santosh offered me money to help promote his company. That is a reasonable request, but I don’t do those kinds of deals. If I did that even once I would lose vital credibility. I tell everyone the same thing: I am happy to work for you if you pay me, but I cannot promote you unless I believe in you, and if I believe in you I will promote you for free. As of this writing, I have not done any work for TestInsane, paid or otherwise, but it could happen in the future.

I have done paid work for Moolya and Per Scholas, both of which I gush about on a regular basis. I believe in those guys. Neither of them pays me to say good things about them, but remember, anyone who works for a company will never say bad things. There are some other testing companies I have worked for that I don’t feel comfortable endorsing, but neither will I complain about them in public (usually… mostly).

Categories: Testing & QA

Agile User Acceptance Testing Spans The Entire Product Development Life Cycle

Agile re-defines acceptance testing as a “formal description of the behavior of a software product[1].”

User acceptance testing (UAT) is a process that confirms that the output of a project meets the business needs and requirements. Classically, UAT would happen at the end of a project or release. Agile spreads UAT across the entire product development life cycle, re-defining acceptance testing as a “formal description of the behavior of a software product[1].” By redefining acceptance testing as a description of what the software does (or is supposed to do) that can be proved (the testing part), Agile makes acceptance testing more important than ever by making it integral to the entire Agile life cycle.

Agile begins the acceptance testing process as requirements are being discovered. Acceptance tests are developed as part of the requirements life cycle in an Agile project because acceptance test cases are a form of requirements in their own right. The acceptance tests are part of the overall requirements, adding depth and granularity to the brevity of the classic user story format (persona, goal, benefit). Just like user stories, there is often a hierarchy of granularity from an epic to a user story. The acceptance tests that describe a feature or epic need to be decomposed in lock step with the decomposition of features and epics into user stories. Institutionalizing the process of generating acceptance tests at the feature and epic level, and then breaking the stories and acceptance test cases down as part of grooming, is a mechanism to synchronize scaled projects (we will dive into greater detail on this topic in a later entry).

As stories are accepted into sprints and development begins, acceptance test cases become a form of executable specification. Because the acceptance test describes what the user wants the system to do, the functionality of the code can be compared to the expected outcome of the acceptance test case to guide the developer.

When development of user stories is done, the acceptance test cases provide a final feedback step to prove completion. The output of acceptance testing is a reflection of functional testing that can be replicated as part of the demo process. Typically, acceptance test cases are written by users (often product owners or subject matter experts) and reflect what the system is supposed to do for the business. Ultimately, this provides proof to the user community that the team (or teams) is delivering what is expected.

As one sprint follows another, the acceptance test cases from earlier sprints are often recast as functional regression test cases in later sprints.

Agile user acceptance testing is a direct reflection of the functional specifications that guide coding, provides a basis for demos and, finally, ensures that later changes don’t break functions that were developed and accepted in earlier sprints. UAT in an Agile project is more rigorous and timely than the classic end-of-project UAT found in waterfall projects.

[1], September 2015

Categories: Process Management

Sponsored Post: Microsoft , Librato, Surge, Redis Labs,, VoltDB, Datadog, MongoDB, SignalFx, InMemory.Net, Couchbase, VividCortex, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Microsoft’s Visual Studio Online team is building the next generation of software development tools in the cloud out in Durham, North Carolina. Come help us build innovative workflows around Git and continuous deployment, help solve the Git scale problem or help us build a best-in-class web experience. Learn more and apply.

  • VoltDB's in-memory SQL database combines streaming analytics with transaction processing in a single, horizontal scale-out platform. Customers use VoltDB to build applications that process streaming data the instant it arrives to make immediate, per-event, context-aware decisions. If you want to join our ground-breaking engineering team and make a real impact, apply here.  

  • At Scalyr, we're analyzing multi-gigabyte server logs in a fraction of a second. That requires serious innovation in every part of the technology stack, from frontend to backend. Help us push the envelope on low-latency browser applications, high-speed data processing, and reliable distributed systems. Help extract meaningful data from live servers and present it to users in meaningful ways. At Scalyr, you’ll learn new things, and invent a few of your own. Learn more and apply.

  • UI Engineer - AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data. AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (All-Levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • Surge 2015. Want to mingle with some of the leading practitioners in the scalability, performance, and web operations space? Looking for a conference that isn't just about pitching you highly polished success stories, but that actually puts an emphasis on learning from real world experiences, including failures? Surge is the conference for you.

  • Your event could be here. How cool is that?
Cool Products and Services
  • Librato, a SolarWinds Cloud company, is a hosted monitoring platform for real-time operations and performance analytics. Easily add metrics from any source using turnkey solutions such as the AWS Cloudwatch integration, or by leveraging any of over 100 open source collection agents and language bindings. Librato is loved equally by DevOps and data engineers. Start using Librato today. Full-featured and free for 30 days.

  • MongoDB Management Made Easy. Gain confidence in your backup strategy. MongoDB Cloud Manager makes protecting your mission critical data easy, without the need for custom backup scripts and storage. Start your 30 day free trial today.

  • In a recent benchmark for NoSQL databases on the AWS cloud, Redis Labs Enterprise Cluster's performance obliterated Couchbase, Cassandra and Aerospike in this real life, write-intensive use case. Full backstage pass and all the juicy details are available in this downloadable report.

  • Real-time correlation across your logs, metrics and events. just released its operations data hub into beta and we are already streaming in billions of log, metric and event data points each day. Using our streaming analytics platform, you can get real-time monitoring of your application performance, deep troubleshooting, and even product analytics. We allow you to easily aggregate logs and metrics by micro-service, calculate percentiles and moving window averages, forecast anomalies, and create interactive views for your whole organization. Try it for free, at any scale.

  • In a recent benchmark conducted on Google Compute Engine, Couchbase Server 3.0 outperformed Cassandra by 6x in resource efficiency and price/performance. The benchmark sustained over 1 million writes per second using only one-sixth as many nodes and one-third as many cores as Cassandra, resulting in 83% lower cost than Cassandra. Download Now.

  • Datadog is a monitoring service for scaling cloud infrastructures that bridges together data from servers, databases, apps and other tools. Datadog provides Dev and Ops teams with insights from their cloud environments that keep applications running smoothly. Datadog is available for a 14 day free trial at

  • Turn chaotic logs and metrics into actionable data. Scalyr replaces all your tools for monitoring and analyzing logs and system metrics. Imagine being able to pinpoint and resolve operations issues without juggling multiple tools and tabs. Get visibility into your production systems: log aggregation, server metrics, monitoring, intelligent alerting, dashboards, and more. Trusted by companies like Codecademy and InsideSales. Learn more and get started with an easy 2-minute setup. Or see how Scalyr is different if you're looking for a Splunk alternative or Sumo Logic alternative.

  • SignalFx: just launched an advanced monitoring platform for modern applications that's already processing 10s of billions of data points per day. SignalFx lets you create custom analytics pipelines on metrics data collected from thousands or more sources to create meaningful aggregations--such as percentiles, moving averages and growth rates--within seconds of receiving data. Start a free 30-day trial!

  • InMemory.Net provides a Dot Net native in-memory database for analysing large amounts of data. It runs natively on .Net, and provides native .Net, COM & ODBC APIs for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex goes beyond monitoring and measures the system's work on your MySQL and PostgreSQL servers, providing unparalleled insight and query-level analysis. This unique approach ultimately enables your team to work more effectively, ship more often, and delight more customers.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here:

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required.

  • ManageEngine Applications Manager: Monitor physical, virtual and Cloud Applications.

  • : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

Budget When You Can’t Estimate

Mike Cohn's Blog - Tue, 09/01/2015 - 15:00

I've written before that we should only estimate if having the estimate will change someone's actions. Normally, this means that the team should estimate work only if having the estimate will either:

  • enable the product owner to better prioritize the product backlog, or
  • let the product owner answer questions about when some amount of functionality can be available.

But, even if we only ask for estimates at these times, there will be some work that the team finds extremely hard or nearly impossible to estimate. Let's look at the best approach when an agile team is asked to estimate work they don't think they can reasonably estimate. But let's start with some of the reasons why an agile team may not be able to estimate well.

Why a Team Might Not Be Able to Estimate Well

This might occur for a variety of reasons. It happens frequently when a team first begins working in a new domain or with new technology.

Take a set of highly experienced team members who have worked together for years building, let's say, healthcare applications for the Web. And then move them to building banking applications for mobile devices. They can come up with some wild guesses about that new mobile banking app. But that's about all that estimate would be--a wild guess.

Similarly, team members can have a hard time estimating when they aren't really much of a team yet. If seven people are thrown together today for the first time and immediately asked to estimate a product backlog, their estimates will suck. Individuals won't know who has which skills (as opposed to who says they have those skills). They won't know how well they collaborate, and so on. After such a team works together for perhaps a few months, or maybe even just a few weeks, the quality of their estimates should improve dramatically.

There can also be fundamental reasons why an estimate is hard to produce. Some work is hard to estimate. For example, how long until cancer is cured? Or, how long will it take to design (or architect) this new system? Teams hate being asked to estimate this kind of work. In some cases, it's done when it's done, as in the cancer example. In other cases, it's done when time runs out, as in the design or architecture example.

Budgeting Instead of Estimating

In cases like these, the best thing is to approach the desire for an estimate from a different direction. Instead of asking a team, "How long will this take?" the business thinks, "How much time is this worth?" In effect, what this does is create a budget for the feature, rather than an estimate. Creating a budget for a feature is essentially applying the agile practice of timeboxing. The business says, "This feature is worth four iterations to us." In a healthy, mature agile organization, the team should be given a chance to say if they think they can do a reasonable job in that amount of time. If the team does not think it can build something the business will value in the budgeted amount of time, a discussion should ensue. What if the budget were expanded by another sprint or two? Can portions of the feature be considered optional so that the mandatory features are delivered within the budget? Establishing a budget frames this type of discussion.

What Do You Think?

What do you think of this approach? Are there times when you'd find it more useful to put a budget or timebox on a feature rather than estimate it? Have you done this in the past? I'd love to know your thoughts in the comments below.

30 Hot Books for Your Backlog (September)

NOOP.NL - Jurgen Appelo - Tue, 09/01/2015 - 14:27

30 Hot Books for Your Backlog… These are my 30 personal reading tips for September!

The post 30 Hot Books for Your Backlog (September) appeared first on NOOP.NL.

Categories: Project Management

Docker and Containers: Coffee With A Googler meets Brian Dorsey

Google Code Blog - Mon, 08/31/2015 - 22:15

Posted by Laurence Moroney, Developer Advocate

If you’ve worked with Web or cloud tech over the last 18 months, you’ll have heard about Containers and about how they let you spend more time on building software, instead of managing infrastructure. In this episode of Coffee with a Googler, we chat with Brian Dorsey about the benefits of using Containers in Google Cloud Platform for simplifying infrastructure management.

Important discussion topics covered in this episode include:

  • Containers improve the developer experience. Regardless of how large the final deployment is, they are there to make it easier for you to succeed.
  • Kubernetes, an open source project that allows you to manage containers and fleets of containers.

Brian shares an example from Julia Ferraioli who used Containers (with Docker) to configure a Minecraft server, with many plugins, and Kubernetes to manage it.

You can learn more about Google Cloud platform, including Docker and Kubernetes at the Google Cloud Platform site.

Categories: Programming

Go Ahead, Call It a Comeback

DevHawk - Harry Pierson - Mon, 08/31/2015 - 21:55

It's been a looooong time, but I finally got around to getting DevHawk back online. It's hard to believe that it's been over a year since my last post. Lots has happened in that time!

First off, I've changed jobs (again). Last year, I made the switch from program manager to dev. Unfortunately, the project I was working on was cancelled. After several months in limbo, I was reorganized into the .NET Core framework team back over in DevDiv. I've got lots of friends in DevDiv and love the open source work they are doing. But I really missed being in Windows. Earlier this year, I joined the team that builds the platform plumbing for SmartGlass. Not much to talk about publicly right now, but that will change sometime soon.

In addition to my day job in SmartGlass, I'm also pitching in to help the Microsoft Services Disaster Response team. I knew Microsoft has a long history of corporate giving. However, I was unaware of the work we do helping communities affected by natural disasters until recently. My good friend Lewis Curtis took over as Director of Microsoft Services Disaster Response last year. I'm currently helping out on some of the missions for Nepal in response to the devastating earthquake that hit there earlier this year.

Finally, I decided that I was tired of running Other Peoples Code™ on my website. So I built out a new blog engine called Hawk. It's written in C# (plus about 30 lines of JavaScript), uses ASP.NET 5 and runs on Azure. It's specifically designed for my needs - for example, it automatically redirects old DasBlog style links like But I'm happy to let other people use it and would welcome contributions. When I get a chance, I'll push the code up to GitHub.

Categories: Architecture, Programming

Making Amazon ECS Container Service as easy to use as Docker run

Xebia Blog - Mon, 08/31/2015 - 20:52

One of the reasons Docker caught fire was that it was soo easy to use. You could build and start a docker container in a matter of seconds. With Amazon ECS this is not so. You have to learn a whole new lingo (Clusters, Task definitions, Services and Tasks), spin up an ECS cluster, write a nasty looking JSON file or wrestle with a not-so-user-friendly UI before you have your container running in ECS.

In this blog we will show you that Amazon ECS can be just as fast, by presenting a small utility named ecs-docker-run which will allow you to start a Docker container almost as fast as with Docker stand-alone, by interpreting the Docker run command line options. Together with a ready-to-run CloudFormation template, you can be up and running with Amazon ECS within minutes!

ECS Lingo

Amazon ECS uses different lingo than Docker people, which causes confusion. Here is a short translation:

- Cluster - one or more Docker Hosts.
- Task Definition - A JSON representation of a docker run command line.
- Task - A running docker instance. When the instance stops, the task is finished.
- Service - A running docker instance, when it stops, it is restarted.

That is basically all there is to it (cutting a few corners and skimping on a number of details).

Once you know this, we are ready to use ecs-docker-run.

ECS Docker Run

ecs-docker-run is a simple command line utility to run docker images on Amazon ECS. To use this utility you can simply type something familiar like:

ecs-docker-run \
        --name paas-monitor \
        --env SERVICE_NAME=paas-monitor \
        --env SERVICE_TAGS=http \
        --env "MESSAGE=Hello from ECS task" \
        --env RELEASE=v10 \
        -P  \
        mvanholsteijn/paas-monitor

substituting the 'docker run' with 'ecs-docker-run'.

Under the hood, it will generate a task definition and start a container as a task on the ECS cluster. All of the following Docker run command line options are functionally supported.

-P publishes all ports by pulling and inspecting the image.
--name sets the family name of the task. If unspecified, the name will be derived from the image name.
-p adds a port publication to the task definition.
--env sets an environment variable.
--memory sets the amount of memory to allocate, defaults to 256.
--cpu-shares sets the cpu shares to allocate, defaults to 100.
--entrypoint changes the entrypoint for the container.
--link sets a container link.
-v sets the mount points for the container.
--volumes-from sets the volumes to mount.

All other Docker options are ignored as they refer to possibilities NOT available to ECS containers. The following options are added, specific for ECS:

--generate-only will only generate the task definition on standard output, without starting anything.
--run-as-service runs the task as a service; ECS will ensure that 'desired-count' tasks keep running.
--desired-count specifies the number of tasks to run (default = 1).
--cluster specifies the ECS cluster to run the task or service on (default = cluster).

Hands-on!

In order to proceed with the hands-on part, you need to have:

- jq installed
- aws CLI installed (version 1.7.44 or higher)
- aws connectivity configured
- docker connectivity configured (to a random Docker daemon).

checkout ecs-docker-run

Get the ecs-docker-run sources by typing the following command:

git clone
cd ecs-docker-run/ecs-cloudformation
import your ssh key pair

To look around on the ECS Cluster instances, import your public key into Amazon EC2, using the following command:

aws ec2 import-key-pair \
          --key-name ecs-$USER-key \
          --public-key-material  "$(ssh-keygen -y -f ~/.ssh/id_rsa)"
create the ecs cluster autoscaling group

In order to create your first cluster of 6 docker Docker Hosts, type the following command:

aws cloudformation create-stack \
        --stack-name ecs-$USER-cluster \
        --template-body "$(<ecs.json)"  \
        --capabilities CAPABILITY_IAM \
        --parameters \
                ParameterKey=KeyName,ParameterValue=ecs-$USER-key

This cluster is based upon the firstRun cloudformation definition, which is used when you follow the Amazon ECS wizard.

And wait for completion...

Wait for completion of the cluster creation, by typing the following command:

function waitOnCompletion() {
        # initialize so the loop body runs at least once
        STATUS=IN_PROGRESS
        while expr "$STATUS" : '^.*PROGRESS' > /dev/null ; do
                sleep 10
                STATUS=$(aws cloudformation describe-stacks \
                               --stack-name ecs-$USER-cluster | jq -r '.Stacks[0].StackStatus')
                echo $STATUS
        done
}

waitOnCompletion
Create the cluster

Unfortunately, CloudFormation does not yet allow you to specify the ECS cluster name, so you need to manually create the ECS cluster, by typing the following command:

aws ecs create-cluster --cluster-name ecs-$USER-cluster

You can now manage your hosts and tasks from the Amazon AWS EC2 Container Services console.

Run the paas-monitor

Finally, you are ready to run any docker image on ECS! Type the following command to start the paas-monitor.

../bin/ecs-docker-run --run-as-service \
                        --number-of-instances 3 \
                        --cluster ecs-$USER-cluster \
                        --env RELEASE=v1 \
                        --env MESSAGE="Hello from ECS" \
                        -p :80:1337 \
                        mvanholsteijn/paas-monitor
Get the DNS name of the Elastic Load Balancer

To see the application in action, you need to obtain the DNS name of the Elastic Load Balancer. Type the following commands:

# Get the Name of the ELB created by CloudFormation
ELBNAME=$(aws cloudformation describe-stacks --stack-name ecs-$USER-cluster | \
                jq -r '.Stacks[0].Outputs[] | select(.OutputKey =="EcsElbName") | .OutputValue')

# Get the DNS from of that ELB
DNSNAME=$(aws elb describe-load-balancers --load-balancer-names $ELBNAME | \
                jq -r .LoadBalancerDescriptions[].DNSName)
Open the application

Finally, we can obtain access to the application.

open http://$DNSNAME

And it should look something like this..

host release message # of calls avg response time last response time

b6ee7869a5e3:1337 v1 Hello from ECS from release v1; server call count is 82 68 45 36

4e09f76977fe:1337 v1 Hello from ECS from release v1; server call count is 68 68 41 38
65d8edd41270:1337 v1 Hello from ECS from release v1; server call count is 82 68 40 37

Perform a rolling upgrade

You can now perform a rolling upgrade of your application, by typing the following command while keeping your web browser open at http://$DNSNAME:

../bin/ecs-docker-run --run-as-service \
                        --number-of-instances 3 \
                        --cluster ecs-$USER-cluster \
                        --env RELEASE=v2 \
                        --env MESSAGE="Hello from Amazon EC2 Container Services" \
                        -p :80:1337 \
                        mvanholsteijn/paas-monitor

The result should look something like this:

host release message # of calls avg response time last response time
b6ee7869a5e3:1337 v1 Hello from ECS from release v1; server call count is 124 110 43 37
4e09f76977fe:1337 v1 Hello from ECS from release v1; server call count is 110 110 41 35
65d8edd41270:1337 v1 Hello from ECS from release v1; server call count is 124 110 40 37
ffb915ddd9eb:1337 v2 Hello from Amazon EC2 Container Services from release v2; server call count is 43 151 9942 38
8324bd94ce1b:1337 v2 Hello from Amazon EC2 Container Services from release v2; server call count is 41 41 41 38
7b8b08fc42d7:1337 v2 Hello from Amazon EC2 Container Services from release v2; server call count is 41 41 38 39

Note how the rolling upgrade is a bit crude. The old instances stop receiving requests almost immediately, while all requests seem to be loaded onto the first new instance.

You do not like the ecs-docker-run script?

If you do not like the ecs-docker-run script, do not despair. Below are the equivalent Amazon ECS commands to do it without the hocus-pocus script...

Create a task definition

This is the most difficult task: manually creating a task definition file called 'manual-paas-monitor.json' with the following content:

{
  "family": "manual-paas-monitor",
  "containerDefinitions": [
    {
      "volumesFrom": [],
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 1337
        }
      ],
      "command": [],
      "environment": [
        {
          "name": "RELEASE",
          "value": "v3"
        },
        {
          "name": "MESSAGE",
          "value": "Native ECS Command Line Deployment"
        }
      ],
      "links": [],
      "mountPoints": [],
      "essential": true,
      "memory": 256,
      "name": "paas-monitor",
      "cpu": 100,
      "image": "mvanholsteijn/paas-monitor"
    }
  ],
  "volumes": []
}
Register the task definition

Before you can start a task it has to be registered at ECS, by typing the following command:

aws ecs register-task-definition --cli-input-json "$(<manual-paas-monitor.json)"
Start a service

Now start a service based on this definition, by typing the following command:

aws ecs create-service \
     --cluster ecs-$USER-cluster \
     --service-name manual-paas-monitor \
     --task-definition manual-paas-monitor:1 \
     --desired-count 1

You should see a new row appear in your browser:

host release message # of calls avg response time last response time
....
5ec1ac73100f:1337 v3 Native ECS Command Line Deployment from release v3; server call count is 37 37 37 36

Conclusion

Amazon EC2 Container Services has a higher learning curve than plain Docker. You need to get past the lingo, the creation of an ECS cluster on Amazon EC2 and, most importantly, the creation of the cumbersome task definition file. After that it is almost as easy to use as Docker run.

In return you get all the goodies from Amazon like Autoscaling groups, Elastic Load Balancers and multi-availability zone deployments ready to use in your Docker applications. So, check ECS out!

More Info

Check out more information:

Hungry for some Big Android BBQ?

Android Developers Blog - Mon, 08/31/2015 - 18:11

Posted by Colt McAnlis, Head Performance Wrangler

The Big Android BBQ (BABBQ) is almost here and Google Developers will be there serving up a healthy portion of best practices for Android development and performance! BABBQ will be held at the Hurst Convention Center in Dallas/Ft.Worth, Texas on October 22-23, 2015.

We also have some great news! If you sign up for the event through August 25th, you will get 25% off when you use the promotional code "ANDROIDDEV25". You can also click here to use the discount.

Now, sit back, and enjoy this video of some Android cowfolk preparing for this year’s BBQ!

The Big Android BBQ is an Android combo meal with a healthy serving of everything ranging from the basics, to advanced technical dives, and best practices for developers smothered in a sweet sauce of a close knit community.

This year, we are packing in an unhealthy amount of Android Performance Patterns, followed up with the latest and greatest techniques and APIs from the Android 6.0 Marshmallow release. It’s all rounded out with code labs to let you get hands-on learning. To super-size your meal, Android Developer instructors from Udacity will be on-site to guide users through the Android Nanodegree. (Kinda like a personal-waiter at an all-you-can-learn buffet).

Also, come watch Colt McAnlis defend his BABBQ “Speechless” Crown against Silicon Valley reigning champ Chet Haase. It'll be a fist fight of humor in the heart of Texas!

You can get your tickets here, and we look forward to seeing you in October!

Join the discussion on +Android Developers

Categories: Programming