Software Development Blogs: Programming, Software Testing, Agile, Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Is build back? The Fall of the General Purpose CPU

There's a meme out there that hardware is dead. Maybe not. Hardware is becoming more specialized as the general purpose CPU can't keep up. The tick-tock cycle driven by Moore's law meant designers had a choice: build or buy. Make your own hardware to deep-inspect 1 Gbps of network traffic (for example) and release later, or use an off-the-shelf CPU and release sooner.

Now, in the anarchy of a Moore's-lawless world, it looks like build is back. Jeff Dean is giving a talk at #scaledmlconf where he talks about this trend at Google.

@jackclarkSF: Jeff Dean says Google can run its full Inception v3 image model on a phone at about 6fps. And specialized ASICs are coming.

And Mo Patel captured this slide from the talk:

Categories: Architecture

Large Programs Require Special Processes

Herding Cats - Glen Alleman - Wed, 08/03/2016 - 15:51

Acquisition Category 1 (ACAT 1) programs are large - large meaning greater than $5B, yes, Billion with a B. This Integrated Program Management Conference (IPMC) 2013 presentation addresses the issues of managing these programs in the presence of uncertainty.

Like all projects or programs, uncertainty comes in two forms - Irreducible (Aleatory) and Reducible (Epistemic). Both of these uncertainties create risk. Both are present no matter the size - from small to mega. When we hear that estimates are hard, that we're not good at estimating, that estimates are possibly misused, or any other dysfunction around estimating, these are just symptoms of the problem. Trying to fix the symptom does little to actually fix the problem. Find the root cause, fix that, and the symptom then has a chance of going away.

This may look like a unique domain, but the core principles of managing in the presence of uncertainty are immutable.

Forecasting cost and schedule performance from Glen Alleman
Categories: Project Management

Agile for Large Scale Government Programs

Herding Cats - Glen Alleman - Wed, 08/03/2016 - 01:32

It would seem counterintuitive to apply Agile (Scrum) to large Software Intensive Systems of Systems. But it's not. Here's how we do it with success.

Agile in the Government from Glen Alleman

Related articles: The Microeconomics of a Project Driven Organization, GAO Reports on ACA Site, All Project Work is Probabilistic Work
Categories: Project Management

The Case for Test Driven Development Refreshed: Cons

Just Say No!

Over and over I find that teams that use Test-Driven Development get serious results, including improved quality and faster delivery. However, not everything is light, kittens, and puppies, or everyone would be doing test-first development or one of its variants (TDD, ATDD or BDD). The costs and organizational impacts can lead organizations into bad behaviors. Costs and behavioral impacts (cons) that we explored in earlier articles on TFD and TDD include:

  • All test-first techniques require an investment in time and effort. The effort to embrace TDD, ATDD or BDD begins when teams learn the required philosophies and techniques. Learning anything new requires an investment.
  • Early in a development project, developers might be required to create stubs (components that emulate parts of a system that have not been developed) or testing harnesses (code that holds created components together before the whole system exists), which requires time and effort (often perceived to be a waste of time).
  • Effective TDD requires automated testing. This is an implied criticism of the effort, time and cost required for test automation. Test automation is important to making TDD efficient.
  • TDD, ATDD or BDD doesn't replace other forms of testing. Organizations that decide that test-first techniques should replace all types of testing are generally in for a rude awakening.
  • TDD does not help teams learn good testing. What does lead people to learn to test is the discussion of how to test, and gaining experience supported by team members with testing knowledge and experience.

Other potential behavioral changes that aren’t always anticipated include:

  • TDD is a unit testing method. Test cases should not have dependencies, either between individual tests or on system states. These types of tests require a level of coordination between stories that violates the concept of INVEST (remember, the I is for independent).
  • TDD should not be used to test implementation details. TDD focuses on testing the code-specific behavior of user stories. Implementation details and technical and non-technical attributes are typically better addressed by a definition-of-done condition, or as system or integration test cases, rather than as the unit-level test cases of TDD.
  • You can't fake it until you make it. TDD requires a team and organization to understand Agile and to be committed to practicing Agile principles. While TDD can help a team "be" Agile, just trying to do TDD without understanding Agile principles, or while trying not to be Agile (and not get caught), can lead to poor testing. One symptom of this problem can be observed when defects that the TDD test cases should have caught crop up chronically later in the development cycle as other stories build on the original code.

Embracing TDD requires effort to learn and implement. If you can't afford the time and effort, wait until you can. Embracing any of the test-first techniques will require that teams change their behavior; if that is a non-starter, learning TDD is an academic exercise. If you are not willing to invest the time and effort in automation, either hire a WHOLE lot of manual testers to pretend to be an automated solution, or wait to implement TDD until you can bite the automation bullet.


Categories: Process Management

SE-Radio Episode 264: James Phillips on Service Discovery

Charles Anderson talks with James Phillips about service discovery and Consul, an open-source service discovery tool. The discussion begins by defining what service discovery is, what data is stored in a service discovery tool, and some scenarios in which it’s used. Then they dive into some details about the components of a service discovery tool […]
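To make the "what data is stored" question concrete, here is a toy in-memory registry sketching the core register/health/lookup operations discussed in the episode. This is illustrative only; it is not Consul's actual API, which works over HTTP and adds distributed consensus, health checking, and much more:

```javascript
// A toy service registry: names map to instances, and lookups return
// only healthy instances so clients can route around failed nodes
// without any configuration change.
class ServiceRegistry {
  constructor() { this.services = new Map(); }

  register(name, address, port) {
    if (!this.services.has(name)) this.services.set(name, []);
    this.services.get(name).push({ address, port, healthy: true });
  }

  markUnhealthy(name, address) {
    for (const inst of this.services.get(name) ?? []) {
      if (inst.address === address) inst.healthy = false;
    }
  }

  lookup(name) {
    return (this.services.get(name) ?? []).filter((i) => i.healthy);
  }
}

const registry = new ServiceRegistry();
registry.register('payments', '10.0.0.5', 8080);
registry.register('payments', '10.0.0.6', 8080);
registry.markUnhealthy('payments', '10.0.0.5'); // e.g. a failed health check
console.log(registry.lookup('payments').length); // 1 healthy instance left
```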
Categories: Programming

Autotrack turns 1.0

Google Code Blog - Tue, 08/02/2016 - 22:31

Posted by Philip Walton, Developer Programs Engineer

Autotrack is a JavaScript library built for use with analytics.js that provides developers with a wide range of plugins to track the most common user interactions relevant to today's modern web.

The first version of autotrack for analytics.js was released on GitHub earlier this year, and since then the response and adoption from developers have been amazing. The project has been starred over a thousand times, and sites using autotrack are sending millions of hits to Google Analytics every single day.

Today I'm happy to announce that we've released autotrack version 1.0, which includes several new plugins, improvements to the existing plugins, and tons of new ways to customize autotrack to meet your needs.

Note: autotrack is not an official Google Analytics product and does not qualify for Google Analytics 360 support. It is maintained by members of the Google Analytics developer platform team and is primarily intended for a developer audience.

New plugins

Based on the feedback and numerous feature requests we received from developers over the past few months, we've added the following new autotrack plugins:

Impression Tracker

The impression tracker plugin allows you to track when an element is visible within the browser viewport. This lets you much more reliably determine whether a particular advertisement or call-to-action button was seen by the user.

Impression tracking has been historically tricky to implement on the web, particularly in a way that doesn't degrade the performance of your site. This plugin leverages new browser APIs that are specifically designed to track these kinds of interactions in a highly performant way.
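The visibility check at the heart of impression tracking reduces to rectangle intersection. Here is a minimal sketch of that geometry as a pure function; in the browser, IntersectionObserver (presumably one of the "new browser APIs" referred to) computes this for you asynchronously and far more efficiently:

```javascript
// Does an element's bounding rect overlap the viewport at all?
function isInViewport(rect, viewport) {
  return (
    rect.left < viewport.width &&  // not entirely off the right edge
    rect.right > 0 &&              // not entirely off the left edge
    rect.top < viewport.height &&  // not entirely below the fold
    rect.bottom > 0                // not entirely scrolled past
  );
}

const viewport = { width: 1024, height: 768 };
console.log(isInViewport({ left: 0, right: 300, top: 700, bottom: 900 }, viewport)); // true
console.log(isInViewport({ left: 0, right: 300, top: 800, bottom: 900 }, viewport)); // false
```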

Clean URL Tracker

If your analytics implementation sends pageviews to Google Analytics without modifying the URL, then you've probably experienced the problem of seeing multiple different page paths in your reports that all point to the same place. Here's an example:

  • /contact
  • /contact/
  • /contact?hl=en
  • /contact/index.html

The clean URL tracker plugin avoids this problem by letting you set your preferred URL format (e.g. strip trailing slashes, remove index.html filenames, remove query parameters, etc.), and the plugin automatically updates all page URLs based on your preference before sending them to Google Analytics.
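This kind of normalization can be sketched in plain JavaScript. The function and rules below are illustrative only, not autotrack's actual configuration options:

```javascript
// Normalize a page path: strip the query string, drop index.html, and
// remove trailing slashes, so the four variants of /contact collapse
// into one report row.
function cleanUrlPath(url) {
  let path = new URL(url, 'https://example.com').pathname; // drops ?query
  path = path.replace(/\/index\.html$/, '/');   // /contact/index.html -> /contact/
  if (path.length > 1) path = path.replace(/\/$/, ''); // /contact/ -> /contact
  return path;
}

for (const u of ['/contact', '/contact/', '/contact?hl=en', '/contact/index.html']) {
  console.log(cleanUrlPath(u)); // each prints "/contact"
}
```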

Note: setting up View Filters in your Google Analytics view settings is another way to modify the URLs sent to Google Analytics.

Page Visibility Tracker

It's becoming increasingly common for users to visit sites on the web and then leave them open in an inactive browser tab for hours or even days. And when users return to your site, they often won't reload the page, especially if your site fetches new content in the background.

If your site implements just the default JavaScript tracking snippet, these types of interactions will never be captured.

The page visibility tracker plugin takes a more modern approach to what should constitute a pageview. In addition to tracking when a page gets loaded, it also tracks when the visibility state of the page changes (i.e. when the tab goes into or comes out of the background). These additional interaction events give you more insight into how users behave on your site.
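The bookkeeping behind such tracking can be sketched as a small pure function that totals time spent visible across a sequence of visibility transitions. This is a simplified model of what the browser's visibilitychange events deliver, not the plugin's actual implementation:

```javascript
// Sum the time a page spent in the 'visible' state, given a sequence
// of (timestamp, state) transitions and the current time.
function visibleTime(events, now) {
  let total = 0;
  let visibleSince = null;
  for (const { t, state } of events) {
    if (state === 'visible' && visibleSince === null) visibleSince = t;
    if (state === 'hidden' && visibleSince !== null) {
      total += t - visibleSince;
      visibleSince = null;
    }
  }
  if (visibleSince !== null) total += now - visibleSince; // still visible
  return total;
}

const events = [
  { t: 0, state: 'visible' },   // page loaded in the foreground
  { t: 30, state: 'hidden' },   // user switched tabs
  { t: 100, state: 'visible' }, // came back later, without reloading
];
console.log(visibleTime(events, 110)); // 40: 30s before + 10s after
```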

Updates and improvements

In addition to the new plugins added to autotrack, the existing plugins have undergone some significant improvements, most notably in the ability to customize them to your needs.

All plugins that send data to Google Analytics now give you 100% control over precisely what fields get sent, allowing you to set, modify, or remove anything you want. This gives advanced users the ability to set their own custom dimensions on hits or change the interaction setting to better reflect how they choose to measure bounce rate.

Users upgrading from previous versions of autotrack should refer to the upgrade guide for a complete list of changes (note: some of the changes are incompatible with previous versions).

Who should use autotrack

Perhaps the most common question we received after the initial release of autotrack is who should use it. This was especially true of Google Tag Manager users who wanted to take advantage of some of the more advanced autotrack features.

Autotrack is a developer project intended to demonstrate and streamline some advanced tracking techniques with Google Analytics, and it's primarily intended for a developer audience. Autotrack will be a good fit for small to medium sized developer teams who already have analytics.js on their website or who prefer to manage their tracking implementation in code.

Large teams and organizations, those with more complex collaboration and testing needs, and those with tagging needs beyond just Google Analytics should instead consider using Google Tag Manager. While Google Tag Manager does not currently support custom analytics.js plugins like those that are part of autotrack, many of the same tracking techniques are easy to achieve with Tag Manager’s built-in triggers, and others may be achieved by pushing data layer events based on custom code on your site or in Custom HTML tags in Google Tag Manager. Read Google Analytics Events in the Google Tag Manager help center to learn more about automatic event tracking based on clicks and form submissions.

Next steps

If you're not already using autotrack but would like to, check out the installation and usage section of the documentation. If you already use autotrack and want to upgrade to the latest version, be sure to read the upgrade guide first.

To get a sense of what the data captured by autotrack looks like, the Google Analytics Demos & Tools site includes several reports displaying its own autotrack usage data. If you want to go deeper, the autotrack library is open source and can be a great learning resource. Have a read through the plugin source code to get a better understanding of how some of the advanced analytics.js features work.

Lastly, if you have feedback or suggestions, please let us know. You can report bugs or submit issues on GitHub.

Categories: Programming

5 Tips to help you improve game-as-a-service monetization

Android Developers Blog - Tue, 08/02/2016 - 20:26

Posted by Moonlit Wang, Partner Development Manager at Google Play Games, & Tammy Levy, Director of Product for Mobile at Kongregate

In today's world of game-as-a-service on mobile, the lifetime value of a player is a lot more complex: revenue is now the sum of many micro-transactions instead of a single purchase, as with traditional console games.

Of course you don’t need a sophisticated statistical model to understand that the more time a player invests in your game, and the more money they spend, the greater their LTV. But how can you design and improve monetization as a mobile game developer? Here are 5 tips to help you improve game-as-a-service monetization, with best practice examples from mobile games publisher, Kongregate:

1. Track player behavior metrics that have a strong and positive correlation with LTV

  • D1, D7, and D30 retention indicate how well a casual player can be converted into a committed fan.
  • Session length and frequency measure user engagement and how fun your game is.
  • Completion rate at important milestones can measure and pinpoint churn.
  • Buyer and repeat-buyer conversion represents your most valuable user segment.
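As a concrete sketch, Dn retention can be computed from raw install and activity records like this (the data shape and field names are hypothetical, not a Google Play API):

```javascript
// Dn retention: the share of users who installed on some day and were
// seen active again exactly n days later.
function retention(installs, activity, n) {
  const retained = installs.filter((u) =>
    activity.some((a) => a.user === u.user && a.day === u.day + n)
  );
  return retained.length / installs.length;
}

const installs = [
  { user: 'a', day: 0 },
  { user: 'b', day: 0 },
  { user: 'c', day: 0 },
  { user: 'd', day: 0 },
];
const activity = [
  { user: 'a', day: 1 },
  { user: 'b', day: 1 },
  { user: 'a', day: 7 },
];
console.log(retention(installs, activity, 1)); // 0.5  -> D1 retention
console.log(retention(installs, activity, 7)); // 0.25 -> D7 retention
```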

2. Optimize for long-term engagement and delight your best players

Retention is the first metric that distinguishes great games from mediocre ones. Games with higher retention rates throughout the user lifecycle consistently monetize better. Retention is king, and more importantly, long-term retention should be prioritized. Therefore, when designing your game, aim to create a sophisticated and engaging experience to delight your most committed fans.

[This chart shows the retention of top games / apps over time]
  • When considering long-term retention, focus on achieving a strong D30, but also look beyond the first 30 days. Measure long-term retention by assessing the following rates: D30 to D60, D30 to D90, and D30 to D180. The higher the rate, the stickier your game is in the long term, which will increase your LTV.
  • Players are willing to pay a fixed amount of money per hour of "fun", so think about updates when designing your game to make the content rich and fun for those who will play at very high levels and spend the most time within your game. Don't gate your players or hinder their in-game progression.
  • Use the Google Play Games Services Funnel Report to help you track milestone completion rates in your games, so you can identify drop-off points and reduce churn.
3. Increase buyer conversion through targeted offers

First-time buyer conversion is the most important, as player churn rate drops significantly after the first purchase but stays relatively flat regardless of the amount spent. Also, past purchase behavior is the best predictor of future purchases. Find your first-time and repeat-buyer conversion rates directly in the Developer Console.

  • Use A/B testing to find the price that will maximize your total revenue. Different people have different willingness to pay for a given product, and the tradeoff between price and quantity differs from product to product, so don't decrease prices blindly.
  • Tailor your in-game experience, as well as in-app purchase offers, based on the player's predicted probability of spending, using the Player Stats API, which predicts player churn and spend.

For example, in Kongregate's game Spellstone, testing two price points for a promotion called Shard Bot, which provides players with a daily "drip" of Shards (the premium currency) for 30 days, showed players had a much stronger preference for the higher-priced pack. The first pack, Shard Bot, priced at $4, granted players 5 daily shards; the second pack, the Super Shard Bot, was priced at $8 and granted players 10 daily shards.

[Two week test results showing preference for the more expensive pack, which also generated more revenue]

Kongregate decided to keep the higher-priced Super Shard Bot in the store, although both packs resulted in very similar retention rates.
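The tradeoff such a price test measures is revenue = price × buyers, so a higher price can win even with fewer buyers. A sketch with made-up numbers (not Kongregate's actual figures):

```javascript
// Pick the price variant that maximizes total revenue.
function bestPricePoint(variants) {
  return variants
    .map((v) => ({ ...v, revenue: v.price * v.buyers }))
    .reduce((best, v) => (v.revenue > best.revenue ? v : best));
}

const winner = bestPricePoint([
  { name: 'Shard Bot', price: 4, buyers: 120 },      // $480 revenue
  { name: 'Super Shard Bot', price: 8, buyers: 90 }, // $720 revenue
]);
console.log(winner.name); // Super Shard Bot
```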

4. Consider not only which monetization features to implement, but also why, when and how to do so

  • Why: "Buyer intent" is most important. Any item with a price tag should serve to enhance your players' in-game experience. For example, a new map or a new power: something exciting and additional to the free experience. Don't gate your players with a purchase-only item, as happy users mean more time spent in your game, which will lead to higher revenue. Educate users by gifting some free premium goods and currency during the tutorial, and let users experience the benefit first.
  • When: Time offers based on when users may need them. If your IAP is to continue gameplay after a timeout, surface it right when the timer ends. If your IAP is to offer premium equipment, surface it when users gear up their characters. The offer should be contextually relevant, with content that caters to the player's current status and needs in-game.

    In particular, Starter Packs or New Buyer Promos need to be well timed. Players need to understand the value and importance of all the items before they are shown the promotion. If surfaced too early, players will not feel compelled to purchase. If surfaced too late, the offer will not be compelling enough. The Starter Pack should appear within 3 to 5 sessions of install, depending on your game. Additionally, limiting its availability to 3 to 5 days will urge players to make a quicker purchase decision.

    For example, BattleHand's starter pack is surfaced around the 4th session. It is available for 36 hours and contains the following items to aid players in all areas of the game:

  • Powerful cards that have an immediate effect in battle
  • High rarity upgrade materials to upgrade your card deck
  • A generous amount of soft currency that can be used in all areas of the game
  • A generous amount of hard currency so players can purchase premium store items
  • Rare upgrade materials for Heroes
[Example starter pack offer in BattleHand]

Thanks to the strength of the promotion, over 50% of players chose the Starter Pack instead of the regular gem offerings:
  • How: There are many ways you can implement premium content and goods in your game, such as power-ups, characters, equipment, maps, hints, chapters, etc. The two most impactful monetization designs are:
      Gacha - There are many ways to design, present and balance gacha, but the key is to have randomized rewards, which allow you to sell extremely powerful items that players want without having to charge really high prices per purchase.
[Example of randomized rewards in Raid Brigade's boxes]
      LiveOps - Limited-time content on a regular cadence will also create really compelling opportunities for players to both engage further with the game and invest in it. For instance, Adventure Capitalist has been releasing regular limited-time themed events with their own spin on the permanent content, their own progression, achievements and IAP promotions.
[Example timed event for Adventure Capitalist]

Through this initiative, the game has seen regular increases in both engagement and revenue during event times without affecting the non-event periods:

[Timed events drastically increase engagement and revenue without lowering the baseline average over time]

5. Take into account local prices and pricing models

Just like different people have different willingness-to-pay, different markets have different purchasing powers.

    • Test what price points make sense for local consumers in each major market. Don't just apply an umbrella discount; find the price points that maximize total revenue.
    • Consider charm pricing, but remember it doesn't work everywhere. For example, in the United States prices always end in $x.99, but that's not the case in Japan and Korea, where rounded numbers are used. Pricing in accordance with the local norm signals to customers that you care and designed the game with them in mind. The Google Developer Console now automatically applies the local pricing conventions of each currency for you.
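Local formatting conventions like these can be handled with the standard Intl.NumberFormat API; a small sketch:

```javascript
// Charm pricing in the US vs. rounded whole-number pricing in Japan
// (the yen has no minor unit, so fractional prices make no sense there).
const usd = new Intl.NumberFormat('en-US', { style: 'currency', currency: 'USD' });
const jpy = new Intl.NumberFormat('ja-JP', { style: 'currency', currency: 'JPY' });

console.log(usd.format(3.99)); // "$3.99" -- charm price
console.log(jpy.format(400));  // a rounded whole-number yen price
```

Note that choosing the right price *points* per market is still a business decision; the formatter only takes care of presenting them in the local convention.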

Check out the Android Developer Story from games developer, Divmob, who improved their game’s monetization threefold simply by adopting sub-dollar pricing strategies. Also, learn more best practices about building for billions to get more tips on monetization.

Get the Playbook for Developers app and stay up-to-date with more features and best practices that will help you grow a successful business on Google Play.

Categories: Programming

Board Tyranny in Iterations and Flow

I was at an experience report session at Agile 2016 last week, Scaling Without Frameworks: Ultimate Experience Report. One of the authors, Daniel Vacanti, said this:

Flow focuses on unblocking work. Iterations (too often) focus on the person doing the work.

At the time, I did not know Daniel’s twitter handle. I now do. Sorry for not cc’ing you, Daniel.


Possible Scrum Board. Your first column might say “Ready”

Here’s the issue. Iteration-based agile, such as Scrum, limits work in progress by managing the scope of work the team commits to in an iteration. Scrum does not say, “Please pair, swarm or mob to get the best throughput.”

When the team walks the board asking the traditional three questions, it can feel as if people are pointing fingers: "Why aren't you done with your work?" Or, "You've been working on that forever…" Neither of those questions/comments is helpful. In Manage It! I suggested iteration-based teams change the questions to:

  • What did you complete today?
  • What are you working on now?
  • What impediments do you have?

Dan and Prateek discussed the finger-pointing, blame, and inability to estimate properly as problems. The teams decided to move to flow-based agile.


Possible Kanban board. You might have a first column, “Analysis”

In flow-based agile, the team creates a board of their flow and WIP (work in progress) limits. The visualization of the work and the WIP limits manage the scope of work for the team.

Often—and not all the time—the team learns to pair, swarm, or mob because of the WIP limits.
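The mechanism can be sketched as a board column that refuses new work once its WIP limit is reached, which is exactly what nudges a team toward finishing (and swarming on) items instead of starting more:

```javascript
// A kanban column with a WIP limit: pulling new work fails when the
// column is full, so the team must finish something to free capacity.
class Column {
  constructor(name, wipLimit) {
    this.name = name;
    this.wipLimit = wipLimit;
    this.stories = [];
  }
  pull(story) {
    if (this.stories.length >= this.wipLimit) return false; // at limit
    this.stories.push(story);
    return true;
  }
  finish(story) {
    this.stories = this.stories.filter((s) => s !== story);
  }
}

const inProgress = new Column('In Progress', 2);
console.log(inProgress.pull('story-1')); // true
console.log(inProgress.pull('story-2')); // true
console.log(inProgress.pull('story-3')); // false -- WIP limit reached
inProgress.finish('story-1');
console.log(inProgress.pull('story-3')); // true -- capacity freed by finishing
```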

Iteration-based agile and flow-based agile both manage the team’s work in progress. Sometimes, iteration-based agile is more helpful because the iterations provide a natural cadence for demos and retrospectives.

Sometimes, flow-based agile is more helpful because the team can manage interruptions to the project-based work.

Neither is better in all cases. Both have their uses. I use personal kanban inside one-week iterations to manage my work and make sure I reflect on a regular basis. (I recommend this approach in Manage Your Job Search.)

In the experience report, Daniel and Prateek Singh spoke about the problems they encountered with iteration-based agile. In iterations, the team focused on the person doing the work. People took stories alone. The team had trouble estimating the work so that it would fit into one iteration. When the team moved to flow-based agile, the stories settled into a more normalized pattern. (Their report is so interesting. I suggest you read it. Page down to the attachment and read the paper.)

The tyranny was that the people in teams each took a story alone. One person was responsible for a story. That person might have several stories open. When they walked the board, it was about that one person’s progress. The team focused on the people, not on moving stories across the board.

When they moved to flow, they focused on moving stories across the board, not the person doing the stories. They moved from one person/one story to one team/a couple of stories. Huge change.

One of the people who read that tweet was concerned that it was an unfair comparison between bad iterations and good flow. What would bad flow look like?

I've seen bad flow look like waterfall: the team does analysis, architecture, design specs, functional specs, coding, and testing, in that order. No, that's not agile. The team I'm thinking of had no WIP limits. The only good thing about their board was that they visualized the work. The architect laid down the law for every feature. The team felt as if they were straitjacketed. No fun in that team.

You can make agile work for you, regardless of whether you use iterations or kanban. You can also subvert agile regardless of what you use. It all depends on what you measure and what the management rewards. (Agile is a cultural change, not merely a project management framework.)

If you have fallen into the "everyone takes their own story" trap, consider a kanban board. If you have a ton of work in progress, consider using iterations and WIP limits to see finished features more often. If you never retrospect as a team, consider using iterations to provide a natural cadence for retrospectives.

As you think about how you use agile in your organization, know that there is no one right way for all teams. Each team needs the flexibility to design its own board and see how to manage the scope of work for a given time, and how to see the flow of finished features. I recommend you consider what iterations and flow will buy you.

Categories: Project Management

Sponsored Post: Exoscale, Host Color, Cassandra Summit, Scalyr, Gusto, LaunchDarkly, Aerospike, VividCortex, MemSQL, AiScaler, InMemory.Net

Who's Hiring?
  • IT Security Engineering. At Gusto we are on a mission to create a world where work empowers a better life. As Gusto's IT Security Engineer you'll shape the future of IT security and compliance. We're looking for a strong IT technical lead to manage security audits and write and implement controls. You'll also focus on our employee, network, and endpoint posture. As Gusto's first IT Security Engineer, you will be able to build the security organization with direct impact to protecting PII and ePHI. Read more and apply here.

Fun and Informative Events
  • Join database experts from companies like Apple, ING, Instagram, Netflix, and many more to hear about how Apache Cassandra changes how they build, deploy, and scale at Cassandra Summit 2016. This September in San Jose, California is your chance to network, get certified, and trained on the leading NoSQL, distributed database with an exclusive 20% off with  promo code - Academy20. Learn more at CassandraSummit.org

  • NoSQL Databases & Docker Containers: From Development to Deployment. What is Docker and why is it important to Developers, Admins and DevOps when they are using a NoSQL database? Find out in this on-demand webinar by Alvin Richards, VP of Product at Aerospike, the enterprise-grade NoSQL database. The video includes a demo showcasing the core Docker components (Machine, Engine, Swarm and Compose) and integration with Aerospike. See how much simpler Docker can make building and deploying multi-node, Aerospike-based applications!  
Cool Products and Services
  • Do you want a simpler public cloud provider but you still want to put real workloads into production? Exoscale gives you VMs with proper firewalling, DNS, S3-compatible storage, plus a simple UI and straightforward API. With datacenters in Switzerland, you also benefit from strict Swiss privacy laws. From just €5/$6 per month, try us free now.

  • High Availability Cloud Servers in Europe: High Availability (HA) is very important on the Cloud. It ensures business continuity and reduces application downtime. High Availability is a standard service on the European Cloud infrastructure of Host Color, active by default for all cloud servers, at no additional cost. It provides uniform, cost-effective failover protection against any outage caused by a hardware or an Operating System (OS) failure. The company uses VMware Cloud computing technology to create Public, Private & Hybrid Cloud servers. See Cloud service at Host Color Europe.

  • Dev teams are using LaunchDarkly’s Feature Flags as a Service to get unprecedented control over feature launches. LaunchDarkly allows you to cleanly separate code deployment from rollout. We make it super easy to enable functionality for whoever you want, whenever you want. See how it works.

  • Scalyr is a lightning-fast log management and operational data platform. It's a tool (actually, multiple tools) that your entire team will love. Get visibility into your production issues without juggling multiple tabs and different services -- all of your logs, server metrics and alerts are in your browser and at your fingertips. Loved and used by teams at Codecademy, ReturnPath, Grab, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • InMemory.Net provides a .NET-native in-memory database for analysing large amounts of data. It runs natively on .NET, and provides native .NET, COM & ODBC APIs for integration. It also has an easy-to-use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex measures your database servers’ work (queries), not just global counters. If you’re not monitoring query performance at a deep level, you’re missing opportunities to boost availability, turbocharge performance, ship better code faster, and ultimately delight more customers. VividCortex is a next-generation SaaS platform that helps you find and eliminate database performance problems at scale.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager: Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com: Monitor End User Experience from a global monitoring network.


If any of these items interest you there's a full description of each sponsor below...

Categories: Architecture

Why We Don't Need to Question Everything

Herding Cats - Glen Alleman - Tue, 08/02/2016 - 16:15

It's popular in some agile circles to question everything. This raises the question: is there any governance process in place? No? Then you're pretty much free to do whatever you want with the money provided to you to build software. If there is a governance process in place, then that means there are decision rights in place as well. These decision rights almost always belong to the people providing the money for you to do your work.

Questioning those governance processes, and the principles, processes, and procedures that implement them, usually starts with the owners of the governance process. If there is a mechanism for assessing the efficacy of the governance, that's where the questioning starts. Go find that place, put in your suggestions for improvement, become engaged with the Decision Rights Owners, and then provide your input.

Standing outside the governance process shouting "challenge everything" is tilting at windmills.

So when you hear that phrase, ask: do you have the right to challenge the governance process?

Related articles
  • Planning is the basis of decision making in the presence of uncertainty
  • What is Governance?
  • Why We Need Governance
Categories: Project Management

How to Setup a Highly Available Multi-AZ Cassandra Cluster on AWS EC2


This is a guest post by Alessandro Pieri, Software Architect at Stream. Try out this 5 minute interactive tutorial to learn more about Stream’s API.

Originally built by Facebook and open-sourced in 2008, Apache Cassandra is a free and open-source distributed database designed to handle large amounts of data across a large number of servers. At Stream, we use Cassandra as the primary data store for our feeds. Cassandra stands out because it’s able to:

  • Shard data automatically

  • Handle partial outages without data loss or downtime

  • Scale close to linearly

If you’re already using Cassandra, your cluster is likely configured to handle the loss of 1 or 2 nodes. However, what happens when a full availability zone goes down?

In this article you will learn how to set up Cassandra to survive a full availability zone outage. Afterwards, we will analyze how moving from a single to a multi availability zone cluster impacts availability, cost, and performance.
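Before diving in, it helps to see the shape of the end state. The sketch below is an assumed, typical configuration (the keyspace name is hypothetical, and the article's exact settings may differ): an EC2-aware snitch maps availability zones to Cassandra racks, and a NetworkTopologyStrategy keyspace spreads replicas across those racks.

```sql
-- In cassandra.yaml on each node (assumed setting):
--   endpoint_snitch: Ec2Snitch
-- Ec2Snitch reports the EC2 region (e.g. "us-east") as the data center
-- and the availability zone as the rack.

-- With replication factor 3 in a three-AZ region, NetworkTopologyStrategy
-- places one replica per rack (AZ), so a full AZ outage still leaves two
-- copies of every row.
CREATE KEYSPACE feeds
  WITH replication = {'class': 'NetworkTopologyStrategy', 'us-east': 3};
```

Reads and writes at consistency level QUORUM (2 of 3 replicas) can then keep succeeding while one zone is down.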

Recap 1: What Are Availability Zones?
Categories: Architecture

Freedom from the Inside Out

“I am no bird; and no net ensnares me: I am a free human being with an independent will.” ― Charlotte Brontë, Jane Eyre

A while back, I did an interview on how to build persona freedom from the inside out:

J.D. Meier of Source of Insight Talks About Freedom

Freedom can mean a lot of things to different people.  For me, in this interview, it was really about the freedom to live my values, choose better responses, and empower myself so that I don’t live somebody else’s story or play the blame game or take on a victim mindset.

The interview is raw, real, and unscripted.

Somehow, I think I avoided talking like a pirate, which is pretty good for me.

I try not to have a potty mouth, but it happens.  I’m only human, and I grew up on the East Coast. ;)

Anyway, I think if you haven’t heard this interview before, you will enjoy the insights and wisdom distilled.

It’s from the school of hard knocks, and I had to learn a lot of painful lessons.

The most painful lesson of all is that there is always more to learn.

Which means that rather than think of it as a finish line you are racing to, it’s about continuously growing your skills to meet your challenges.

You are an evolving being learning how to better respond to the challenges that your circumstances and environment throw your way, while you pursue your dreams.

Always remember that nature favors the flexible.

Freedom is about building that flexibility, and it’s a muscle that gets stronger the more you practice it.

Whether it’s by standing up to bullies, talking back to that little voice in your head that holds you back, or choosing to live the life you want to lead rather than the life others want you to lead.

The wonderful thing about your personal freedom is that the more you exercise it, the more you create personal victories and reference examples to draw from, so you actually build momentum.

It’s this momentum that can help you transform and transcend any area or aspect of your life to operate at a higher level.

Or, at least, you can make it a choice rather than leaving it to chance.

Take back your power, live life on your terms, and create the experience you want to create…with skill.

So much of life comes down to strategies, skills, and stories.

Use leadership moments and learning opportunities to create the stories that make you come alive.

That’s your freedom in action, and that’s how you live your freedom.

Categories: Architecture, Programming

SPaMCAST 405 – Moral License, Hazards, Change and Innovation, Assumptions, Test Scripting



Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

Software Process and Measurement Cast 405 is a cornucopia of topics!  We begin by exploring a bit of the psychology of change in four short essays. These topics are important for any change agent at any level to understand. Change at any scale is not an easy task. Change requires establishing a goal, recruiting a sponsor, acquiring a budget, developing a set of plans and then there is the part where the miracle happens and people change. The last step is always the hardest and is often akin to herding cats. Psychology and sociology have identified many of the reasons why people embrace change and innovation in different ways.  

Our second column is from Jon M. Quigley.  We have settled on a name for the column, “The Alpha-Omega of Product Development.” In this month’s column, we discuss using metrics to dispel assumptions. Metrics don’t have to add to overhead; for example, one item we discussed was using planning poker to expose assumptions and then finding tactics to address them.

Anchoring the cast, Jeremy Berriault brings the QA Corner to the Software Process and Measurement Cast.  In this installment of the QA Corner, Jeremy talks about whether test automation scripting for new functions should be tackled or not.  Jeremy has an opinion and provides advice for testing professionals on a sticky topic.  

Re-Read Saturday News

This week we continue our re-read of Kent Beck’s XP Explained, Second Edition with a discussion of Chapters 12 and 13.  This week we tackle two concepts central to XP: planning and testing, both done the XP way.  

We are exactly halfway through the book.  We will have seven more installments including an entry for reflections on the overall book.  It is time to start thinking about what is next: a re-read or a new read?  Thoughts?

Use the link to XP Explained in the show notes when you buy your copy to read along to support both the blog and podcast. Visit the Software Process and Measurement Blog (www.tcagley.wordpress.com) to catch up on past installments of Re-Read Saturday.


The next Software Process and Measurement Cast will feature an interview with Erik van Veenendaal.  We discussed the Test Maturity Model Integrated (TMMi) and why, in an Agile world, quality and testing really matter.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.


Categories: Process Management

SPaMCAST 405 - Moral License, Hazards, Change and Innovation, Assumptions, Test Scripting

Software Process and Measurement Cast - Sun, 07/31/2016 - 22:00

Software Process and Measurement Cast 405 is a cornucopia of topics!  We begin by exploring a bit of the psychology of change in four short essays. These topics are important for any change agent at any level to understand. Change at any scale is not an easy task. Change requires establishing a goal, recruiting a sponsor, acquiring a budget, developing a set of plans and then there is the part where the miracle happens and people change. The last step is always the hardest and is often akin to herding cats. Psychology and sociology have identified many of the reasons why people embrace change and innovation in different ways.  

Our second column is from Jon M. Quigley.  We have settled on a name for the column, “The Alpha-Omega of Product Development.” In this month’s column, we discuss using metrics to dispel assumptions. One item we discussed was using planning poker to expose assumptions and then to find tactics to address them.

Anchoring the cast, Jeremy Berriault brings the QA Corner to the Software Process and Measurement Cast.  In this installment of the QA Corner, Jeremy talks about whether test automation scripting for new functions should be tackled or not.  Jeremy has an opinion and provides advice for testing professionals on a sticky topic.  

Re-Read Saturday News

This week we continue our re-read of Kent Beck’s XP Explained, Second Edition with a discussion of Chapters 12 and 13.  This week we tackle two concepts central to XP: planning and testing, both done the XP way.  

We are exactly halfway through the book.  We will have seven more installments including an entry for reflections on the overall book.  It is time to start thinking about what is next: a re-read or a new read?  Thoughts?

Use the link to XP Explained in the show notes when you buy your copy to read along to support both the blog and podcast. Visit the Software Process and Measurement Blog (www.tcagley.wordpress.com) to catch up on past installments of Re-Read Saturday.


The next Software Process and Measurement Cast will feature an interview with Erik van Veenendaal.  We discussed the Test Maturity Model Integrated (TMMi) and why, in an Agile world, quality and testing really matter.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.


Categories: Process Management

Extreme Programming Explained, Second Edition: Re-Read Week 7

XP Explained Cover


With this installment of our re-read of Extreme Programming Explained, Second Edition (2005) we are exactly halfway through the book.  We will have seven more installments, including one week for an entry of overall reflections on the book. It is time to start thinking about what is next: a re-read or a new read?  Thoughts?  In this week’s installment we tackle two concepts central to XP: planning and testing, both done the XP way.

Chapter 12 Planning: Managing Scope

Beck begins chapter twelve by pointing out that the state of a team’s shared plan provides clues about the state of the relationship between people affected by the plan. This is a new statement in this edition; however, even during my first read of XP Explained the approach to planning was radical in its inclusiveness.  Well before I learned Scrum, XP planning began to rewrite how I worked with teams.  

In XP, planning is a tool to make goals and directions clear.  Plans are a tool to help a  team organize and communicate the team’s intent for addressing the future based on variables such capabilities, cost value and duration. Plans reflect what we want the future to look like rather than exactly how they will unwind. XP Explained uses the metaphor of grocery shopping with a fixed amount of money to explain planning in XP.  Doing your weekly shopping with a fixed budget constrains what you can buy. This hits quite close to home.  A few times in my life my wife and I have created a weekly budget for food in which we would take cash money out of the bank and put it into an envelope. In this scenario when the envelope is empty, we are done shopping.  If we decide while walking down the aisle of our local grocer to have a dinner of pâté de foie gras and caviar most everything else in the cart would have to go back on the shelf.  In a less extreme scenario, my wife and I decided to invite several people for hamburgers on the grill (actually a true scenario).  In order to get the extra few pounds of ground beef and stay within the budget, we traded the grade of ground beef for one with a higher fat content.  A substitution of quantity for quality. Planning in XP is no different.  XP teams plan within a set of fixed constraints.  In XP, the planning constraints include a fixed team size and a fixed weekly cycle.  The primary variable an XP team has in planning (in our example, a number of groceries they can buy) is the amount of work they can do. In software development, unlike my grilling party, tweaking quality in software development is generally a fool’s errand. Albeit if effort, date and work required are all fixed, quality is the variable that gets tweaked so the team can deliver. When teams take shortcuts that affect quality they are merely postponing work (or sloughing it off on someone else – a moral hazard).

The XP Planning process is:

  1. Develop a backlog of the work that may need to be done.
  2. Estimate the items on the backlog.
  3. Set a budget for the planning cycle (the amount of work the team can commit to delivering during the cycle).
  4. Agree on the work that will be done within the budget; as you negotiate, don’t change the estimates to fit the budget.

What gets agreed upon in the planning process will be shaped by a number of things.  In a perfect world the stories with the highest value would always be done first; however, risks, technical constraints, and even office politics can affect the order in which a team pulls work into a cycle.  That said, during planning the team should not change estimates, change the budget of work they accept, or change the duration of the cycle.  As with shopping for groceries, stretching the constraints in the short run will always have some impact down the road.
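The budget mechanics described above can be sketched in a few lines of Python. This is an illustrative sketch, not something from the book; the story names, estimates, and the `plan_cycle` helper are invented.

```python
def plan_cycle(backlog, budget):
    """Pull stories into the cycle until the budget is spent.

    `backlog` is a list of (story, estimate) pairs, already ordered by
    the team's negotiated priority. Estimates are never adjusted to make
    a story fit -- a story that doesn't fit waits for a later cycle.
    """
    plan, remaining = [], budget
    for story, estimate in backlog:
        if estimate <= remaining:
            plan.append(story)
            remaining -= estimate
    return plan

# "Yesterday's weather": budget the next cycle with the amount of work
# actually completed in the last one.
last_cycle_completed = 8
backlog = [("login page", 3), ("audit log", 5), ("report export", 4)]
print(plan_cycle(backlog, last_cycle_completed))  # ['login page', 'audit log']
```

Note that the only lever is which stories fit within the budget; neither the estimates nor the budget moves during the negotiation.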

One of the most powerful statements about the planning process is Beck’s assertion that planning is an exercise in which everyone on the team participates. Participation includes listening, speaking, and learning.  Once the plan is established, it is not carved in stone.  As teams assess their progress they generate feedback that should be used to guide the team and to make future estimates and plans more useful.  Beck references Martin Fowler’s concept of “yesterday’s weather” as a tool for improving estimation: a planning strategy in which a team uses the amount of work completed in the last cycle as the budget for the next cycle, leveraging feedback and new information.

As Chapter 12 concludes, Beck makes the case for the team to plan using analog cards placed on a wall in a team room (or another shared area).  Transparency in the process helps make sure the whole team is involved and that the tension between those who want work done and those who have to do the work is continuously tuned and balanced.

Chapter 13 Testing: Early, Often, and Automated

Defects are bad, for lots of reasons. Beck highlights the fact that defects destroy the trust needed for effective software development. I once went on a job interview where the CIO had four pagers (it was a few years ago).  When I asked about the collection of hardware on his belt, he indicated that each major department wanted to be able to get his attention when (not if) they had problems. The inability of the business to trust that they would get functional software caused everyone in the department, including the CIO, to create mechanisms for defending themselves against the eventuality that someone would make a mistake. Defects represent a HUGE time sink, and testing defects out just adds to the expense.

Two of the goals of development in XP are to reduce the occurrence of defects to an economically sustainable level and to reduce them to a level where trust can reasonably grow on the team. Reducing the occurrence of defects is a balance between the cost of the tasks needed to avoid defects and the cost to fix them after they occur; costs include money, effort, and calendar time.  The occurrence of defects makes it difficult to meet commitments between team members and with the business, while reducing them energizes everyone involved with a project.  Based on these goals, XP uses two principles as tools to increase the cost effectiveness of testing (the term testing being used in its broadest technical definition): double checking and the concept of Defect Cost Increase.

Double checking is a fairly simple concept that every person has used.  Adding a column of numbers up twice is a form of double checking, as is testing (the developer writes the code, reviews it, and then unit tests it).  Double checking works on the principle that checking the work two different ways significantly increases the chance of finding defects.

The second principle XP leverages to attain its goals for reducing defects and increasing trust is Defect Cost Increase (DCI).  DCI posits that the sooner you find a defect, the cheaper it is to fix. Defects are easier to fix when discovered early because there is less chance that developers will have forgotten where the fix should be made, and less chance that other changes will need to be rolled back (we will review this assertion in the future). DCI suggests that establishing shorter cycles and faster feedback loops will make finding and correcting defects less expensive while weeding out many residual defects.

Double checking, together with the shorter cycles and feedback loops that counter DCI, provides the basis for incorporating test-first development in XP.   Test-first development (TFD) is an approach to development in which developers do not write a single line of code until they have created the test cases needed to prove that the unit of work solves the business problem and is technically correct at a unit-test level. TFD creates a double-checking learning cycle so that defects are found quickly and the team learns fast.

Writing tests before writing code involves testers earlier in the life cycle, which gives testing resources more leverage to improve the code. There are numerous additional reasons to adopt TFD and its cousins: test-driven (TDD), acceptance test-driven (ATDD), and behavior-driven development (BDD).
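A minimal test-first cycle might look like the following sketch (the function names and numbers are hypothetical, not an example from the book): the test is written, and fails, before any production code exists; then just enough code is written to make it pass.

```python
# Step 1 (test first): write the test before any production code exists.
# Running it at this point fails with a NameError -- order_total does
# not exist yet. That failing test defines what "done" means.
def test_total_applies_discount():
    assert order_total([10.0, 20.0], discount=0.1) == 27.0

# Step 2: write just enough production code to make the test pass.
def order_total(prices, discount=0.0):
    """Sum the prices and apply a fractional discount, rounded to cents."""
    return round(sum(prices) * (1 - discount), 2)

# Step 3: run the test; it now passes, and refactoring can proceed
# safely because the double check is in place.
test_total_applies_discount()
```

The double-checking happens because the intent is expressed twice: once as a test and once as code, and the two must agree.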

Previous installments of Extreme Programming Explained, Second Edition (2005) on Re-read Saturday:

Extreme Programming Explained: Embrace Change Second Edition Week 1, Preface and Chapter 1

Week 2, Chapters 2 – 3

Week 3, Chapters 4 – 5

Week 4, Chapters 6 – 7  

Week 5, Chapters 8 – 9

Week 6, Chapters 10 – 11

Categories: Process Management

Seven Reasons why Darth Vader is a Terrible Product Manager

Xebia Blog - Sat, 07/30/2016 - 17:35
It’s not that I have run out of Samurai parallels, but I ran into a blog post called “Darth Vader - The Best Project Manager in the Galaxy,” and since it’s my sincere belief that this sword-wielding (see, there is a Samurai parallel!) manager actually displays some pretty terrible Product Management skills, here are 7

Stuff The Internet Says On Scalability For July 29th, 2016

Hey, it's HighScalability time:

Facial tats to disrupt big brother surveillance systems may actually work. Our future?


If you like this sort of Stuff then please support me on Patreon.
  • 40.4 million: iPhones sold this quarter;  7: number of times Facebook has avoided the IRS; 104: new exoplanets; 100: new brain regions found; 2x: HTTPS adoption; 

  • Quotable Quotes:
    • @mat: Apple is doomed: "the nearly $8 billion in profits this quarter is more than twice what Facebook made in 2015"
    • Bruce Schneier: The truth is that technology magnifies power in general, but the rates of adoption are different. The unorganized, the distributed, the marginal, the dissidents, the powerless, the criminal: they can make use of new technologies faster. And when those groups discovered the Internet, suddenly they had power. But when the already powerful big institutions finally figured out how to harness the Internet for their needs, they had more power to magnify. That’s the difference: the distributed were more nimble and were quicker to make use of their new power, while the institutional were slower but were able to use their power more effectively.
    • @mjasay: What AWS does for AMZN: $2.89B in revenue (up from $1.8B last year), earning 56% of Amazon profits (EPS was $1.78, up from $0.19 last year)
    • @kurtseifried: I wonder how discrete cloud billing can get? Per cpu cycle? bit moved in and out? I suspect yes.
    • Algorithms to Live By: More generally, our intuitions about rationality are too often informed by exploitation rather than exploration. When we talk about decision-making, we usually focus just on the immediate payoff of a single decision—and if you treat every decision as if it were your last, then indeed only exploitation makes sense.
    • Pinterest: As it turns out, it’s damn hard to design consistent and beautiful things at scale. 
    • @obfuscurity: OH: “god i hate having to lie about loving containers all the time”
    • @beaucronin: Leah McGuire: "Metrics are the unit tests of data science"; without them you won't know when things break and you'll be exposed #wrangleconf
    • @tsantero: OH: "Blockchain: a system that allows a bunch of non-CS people to suddenly be distributed computing experts."
    • zeveb: People want safety; they want security; they want conformity; they want power over others.
    • Richard Watson: My take-home [re Pokemon Go]: even the very best can be surprised when the scale hits the fan.
    • @xaibeha: HTTP/2: Because a hundred requests per page load is just a fact of nature.
    • mdatwood: many people have this irrational hate for Java, or they hate the Java from 10 years ago. Todays Java is fast, has tons of mature frameworks, and is probably one of the best tools to use from building a web service back end.
    • @BenedictEvans: Obvious: an iPhone has hundreds of times more compute power than the original Pentium. More important: $50 Androids in rural Africa do too
    • Dark Silicon: infeasible to operate all on-chip components at full performance at the same time due to the thermal constraints (peak temperature, spatial and temporal thermal gradients, etc.)
    • @Sneakyness: Why do people always assume that companies have scaling issues, and not that they've determined that 85% uptime is enough to make money
    • @cdixon: Alternative headline: "Alphabet invests $859M in long-term projects."
    • @xaprb: We were promised a Utopian vision with the “semantic web,” but it turns out it’s actually Feedly, IFTT, Slack, and Pocket that fulfill it.
    • Amit: Let's drop 10¢ coins and $10 bills and treat them like 50¢ coins, $2 bills, $50 bills — they exist but we don't use them widely.
    • Graham Templeton: One major advantage of life over modern engineering is power efficiency.
    • @neil_conway: @t_crayford @kellabyte >10k threads running native code + user-defined stored procedures in a single address space sounds pretty scary.

  • Niantic is looking for a Software Engineer - Server Infrastructure to help make Pokemon go. You think it's easy? Think again: Create the server infrastructure to support our hosted AR/Geo platform underpinning projects such as Pokémon GO using Java and Google Cloud. You will work on real-time indexing, querying and aggregation problems at massive scales of hundreds of millions of events per day, all on a single, coherent world-wide instance shared by millions of users.

  • DDoS attacks as a reason to bypass the kernel. Why we use the Linux kernel's TCP stack:  During some attacks we are flooded with up to 3M packets per second (pps) per server...With this scale of attack the Linux kernel is not enough for us. We must work around it. We don't use the previously mentioned "full kernel bypass", but instead we run what we call a "partial kernel bypass". With this the kernel retains the ownership of the network card, and allows us to perform a bypass only on a single "RX queue". 

  • BTW, I bought nothing on Prime Day. How AWS Powered Amazon’s Biggest Day Ever: This wave of traffic then circled the globe, arriving in Europe and the US over the course of 40 hours and generating 85 billion clickstream log entries. Orders surpassed Prime Day 2015 by more than 60% worldwide and more than 50% in the US alone. On the mobile side, more than one million customers downloaded and used the Amazon Mobile App for the first time.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Systems, Systems Engineering, Systems Thinking

Herding Cats - Glen Alleman - Fri, 07/29/2016 - 15:27

On our morning road bike ride, the conversation came around to Systems. Some of our group are, like me, techies; a few others are business people in finance and ops. The topic was: what is a system, and how does that notion impact our world? The retailer in the group had a notion of a system - grocery stores are systems that manage the entire supply chain from field to basket.

Here's the reading list that has served me well, for those interested in Systems:

  • Systems Engineering: Coping with Complexity, Richard Stevens, Peter Brook, Ken Jackson, Stuart Arnold
  • The Art of Systems Architecting, Mark Maier and Eberhardt Rechtin
  • Systems Thinking: Coping with 21st Century Problems, John Boardman and Brian Sauser
  • Systemantics: How Systems Work and Especially How They Fail, John Gall
  • The Art of Modeling Dynamic Systems: Forecasting for Chaos, Randomness and Determinism, Foster Morrison
  • Systems Thinking: Building Maps for Worlds of Systems, John Boardman and Brian Sauser
  • The Systems Bible: The Beginner's Guide to Systems Large and Small, John Gall
  • A Primer for Model-Based Systems Engineering, 2nd Edition, David Long and Zane Scott
  • Thinking in Systems: A Primer, Donella Meadows

These are all books with actionable outcomes. 

Systems of information-feedback control are fundamental to all life and human endeavor, from the slow pace of biological evolution to the launching of the latest space satellite ... Everything we do as individuals, as an industry, or as a society is done in the context of an information-feedback system. - Jay W. Forrester

Related articles
  • Systems Thinking, System Engineering, and Systems Management
  • Estimating Guidance
  • Can Enterprise Agile Be Bottom Up?
  • Essential Reading List for Managing Other People's Money
  • Systems Thinking and Capabilities Based Planning
  • Herding Cats: Systems, Systems Engineering, Systems Thinking
  • What Can Lean Learn From Systems Engineering?
Categories: Project Management

Assessing Value Produced in Exchange for the Cost to Produce the Value

Herding Cats - Glen Alleman - Fri, 07/29/2016 - 04:56

A common assertion in the Agile community is “we focus on Value over Cost.”

Both are equally needed. Both must be present to make informed decisions. Both are random variables. As random variables, both need estimates to make informed decisions.

To assess the value produced by a project we first must have targets to steer toward. A target Value must be measured in units meaningful to the decision makers. Measures of Effectiveness and Performance can monetize this Value.

Value cannot be determined without knowing the cost to produce that Value. This is fundamental to the Microeconomics of Decision making for all business processes.

The Value must be assessed using...

  • Measures of Effectiveness - operational measures of success that are closely related to the achievement of the mission or operational objectives, evaluated in the operational environment under a specific set of conditions.
  • Measures of Performance - measures that characterize physical or functional attributes relating to the system's operation, measured or estimated under specific conditions.
  • Key Performance Parameters - measures representing capabilities and characteristics so significant that failure to meet them can be cause for reevaluation, reassessment, or termination of the program.
  • Technical Performance Measures - attributes that determine how well a system or system element satisfies, or is expected to satisfy, a technical requirement or goal.

Without these measures attached to the Value there is no way to confirm that the cost to produce the Value will break even. The Return on Investment to deliver the needed Capability is, of course:

ROI = (Value - Cost)/Cost

So the numerator and the denominator must have the same units of measure - usually dollars, sometimes hours. So when we hear ...
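A quick worked example (with invented numbers) shows why the units have to match before the formula means anything:

```python
def roi(value, cost):
    """ROI = (Value - Cost) / Cost, with value and cost in the same units."""
    return (value - cost) / cost

# A capability estimated to produce $120,000 in value at a cost of $100,000:
print(f"{roi(120_000, 100_000):.0%}")  # 20%
```

If value were measured in story points and cost in dollars, the subtraction in the numerator would be meaningless, which is the point being made here.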

The focus on value is what makes the #NoEstimates idea valuable - ask: in what units of measure is that Value? Are those units of measure meaningful to the decision makers? Are those decision makers accountable for the financial performance of the firm?


Related articles
  • The Reason We Plan, Schedule, Measure, and Correct
  • Estimating Processes in Support of Economic Analysis
  • The Microeconomics of Decision Making in the Presence of Uncertainty
Categories: Project Management