
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

SE-Radio Episode 238: Linda Rising on the Agile Brain

Johannes Thönes talks to Linda Rising, author, speaker and independent consultant, about the Agile Brain. They start by talking about the fixed, talent-oriented mindset and then contrast it with the learning-oriented mindset. After establishing the terms, Linda explains how we can know which mindset we are currently in and how we can change it for ourselves and others, […]
Categories: Programming

Stuff The Internet Says On Scalability For September 11th, 2015

Hey, it's HighScalability time:

Need a challenge? Solve the code on this 17.5-foot-tall, 11,000-year-old wooden statue!

  • $100 million: amount Popcorn could have made from criminal business offers; 3.2-gigapixel: World’s Most Powerful Digital Camera; $17.3 trillion: US GDP in 2014;  700 million: Facebook time series database data points added per minute; 300PB: Facebook data stored in Hive; 5,000: Airbnb EC2 instances.

  • Quotable Quotes:
    • @jimmydivvy: NASA: Decade long flight across the solar system. Arrives within 72 seconds of predicted. No errors. Me: undefined is not a function
    • Packet Pushers~ Everyone has IOPS now. We are heading towards invisible consumption being the big deal going forward. 
    • Randy Medlin: Gonna drop $1000+ on a giant iPad, $100 on a stylus, then whine endlessly about $4.99 drawing apps.
    • Anonymous: Circuit Breaker + Real-time Monitoring + Recovery = Resiliency
    • Astrid Atkinson: I used to get paged awake at two in the morning. You go from zero to Google is down. That’s a lot to wake up to.
    • Todd Waters~ In 1979, 200MB weighed 30 lbs and took up the space of a washing machine
    • Todd Waters~ CERN spends more compute power throwing away data than storing and analyzing it
    • Rob Story:  We've clearly reached the point where SSD/RAM bandwidth have completely outpaced CPU compute.
    • Shedding light on the era of 'dark silicon': We will soon live in an era where perhaps more than 80 per cent of computer processors' transistors must be powered off  and 'remain dark' at any time to prevent the chip from overheating.
    • @diogomonica: In a container world, when someone asks about A vs B, the answer is always, A on top of B. #softwarecircus
    • Mike Curtis (Airbnb)~ 70 percent of the people who put space up for rent on Airbnb in New York City say they do so because if they didn’t, they would lose their apartments or homes
    • Mike Curtis (Airbnb)~ it would probably be on the order of 20 percent to 30 percent more expensive to operate its own datacenters than rent capacity on AWS 
    • @cloud_opinion: If John McAfee gets elected as President once, it will be impossible to uninstall him.
    • @bradurani: The greatest trick the ORM ever pulled was convincing the world the DB doesn't exist... and it's a disaster for a generation of devs
    • @coderoshi: The idea that management is the higher rung of a programmer's career ladder is like thinking that every actor wants to become a director.
    • @HiddenBrain: MT @CBinsights: A million guys walk into a Silicon Valley bar. No one buys anything. Bar is declared a massive success.
    • @Carnage4Life: Every time a developer says "temporary workaround" I remember this list. 

  • Some impressive gains from migrating Python to Go. From Python to Go: migrating our entire API: the mean response time of an API call dropped from 100ms to 10ms...We reduced the number of EC2 instances required by 85%...we can now ship a self-hosted version of Repustate that is identical to the one we host for customers.

  • Rob Story has an awesome summary of the goings-on at the Very Large DataBases Conference. His main gloss is at VLDB 2015: Concurrency, DataFlow, E-Store, but he also has day-by-day summaries up on GitHub. An amazing job and lots of concentrated insight.

  • Really wonderful article. A Life in Games: The Playful Genius of John Conway. Packed with slices of life that make me feel like I would like a little more John Conway in my life.

  • Need high availability? Here's how eBay uses Netflix Hystrix to implement the Circuit Breaker pattern; an example is given for their Secure Token service. Hystrix: a latency and fault tolerance library designed to isolate points of access to remote systems, services and 3rd party libraries, stop cascading failure and enable resilience in complex distributed systems where failure is inevitable. (A toy sketch of the pattern follows at the end of this list.)

  • Rob Pike and Naitik Shah are speaking at the Fall Gopherfest - Silicon Valley on November 18th. It's free and you might find it useful. 
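
To make the Circuit Breaker pattern concrete, here is a minimal JavaScript sketch of its core idea: fail fast while the circuit is open, then probe again after a cooldown. This is a toy illustration of the pattern only, not Hystrix's actual API; the threshold and reset values are arbitrary.

// Toy circuit breaker: after `threshold` consecutive failures the circuit
// opens and calls fail fast until `resetMs` has elapsed; the next call is
// then let through as a trial (the "half-open" state).
class CircuitBreaker {
  constructor(action, threshold = 5, resetMs = 10000) {
    this.action = action;       // the guarded async function
    this.threshold = threshold; // consecutive failures before opening
    this.resetMs = resetMs;     // cooldown before a trial call
    this.failures = 0;
    this.openedAt = null;
  }

  async call(...args) {
    if (this.openedAt !== null) {
      if (Date.now() - this.openedAt < this.resetMs) {
        throw new Error('circuit open: failing fast');
      }
      this.openedAt = null; // half-open: allow one trial call
    }
    try {
      const result = await this.action(...args);
      this.failures = 0; // success closes the circuit again
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.threshold) {
        this.openedAt = Date.now();
      }
      throw err;
    }
  }
}

// Usage (hypothetical remote call): wrap it once, call it everywhere.
// const tokenService = new CircuitBreaker(fetchSecureToken);
// const token = await tokenService.call(request);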

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

A Programmer’s Guide to Managing Stress

Making the Complex Simple - John Sonmez - Fri, 09/11/2015 - 13:00

I almost lost my mind once. It was the best thing that ever happened to me. I almost lost my mind because I wasn’t taking care of it. Way too much crunch time for an important project, coupled with some ongoing issues in my personal life, pushed my ability to cope with day-to-day life into […]

The post A Programmer’s Guide to Managing Stress appeared first on Simple Programmer.

Categories: Programming

Thoughts on Scaling Agile User Acceptance Testing

Scaling up, up, up!

Agile User Acceptance Testing (AUAT) at the team level focuses on proving that the functionality developed to solve a specific user story meets the user’s needs. Typically, stories are part of a larger “whole,” and to truly prove that a business problem has been solved, acceptance testing needs to be performed as stories are assembled into features and features into applications/systems.

Individual teams accept user stories into sprints, if they are using time boxes as in Scrum. Stories should follow the guidelines found in the INVEST mnemonic coined by Bill Wake to generate a kernel of functionality that can be delivered. Because user stories are very granular, they often do not satisfy the overall business needs of the stakeholders. Product owners and other stakeholders generally want features. During backlog grooming, epics are broken down into features and features into stories, which are then developed and assembled to satisfy the business need. A typical feature requires multiple stories (a one-to-many relationship). Two basic scenarios can be used to highlight the need to scale from story-level AUAT to feature- and system-level acceptance testing.

Scenario One: Each Story Can Stand Alone

The simplest scenario would be the situation in which a feature is just the sum of the individual stories. This means that each independent story can be assembled and that no further acceptance testing is required. In this scenario, meeting the story-level acceptance criteria would satisfy the feature-level acceptance criteria and the system-level acceptance criteria. At best, this scenario is rare.

Scenario Two: Features Represent More Than The Sum of Parts

Features often represent more than the sum of the individual stories. Even relatively simple scenarios can be more than a sum of their parts. For example, consider a feature for maintaining a customer on an application. Stories would include adding a customer, modifying a customer, deleting a customer and inquiring on a customer. The acceptance criteria for the feature would more than likely require that the functionality in each story work smoothly together, or meet a performance standard, all of which requires running an acceptance test at the feature level. Non-functional requirements are often reflected in overarching acceptance criteria captured at the feature or system level. These overarching criteria require performing AUAT at the feature and system level.
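
To make this concrete, here is a minimal sketch of what a feature-level acceptance test might look like for the customer-maintenance example. The customerService object is a hypothetical in-memory stand-in invented for illustration; a real AUAT would exercise the actual application. The point is that the test spans the stories (add, modify, inquire, delete) working together, not any single story in isolation.

const assert = require('assert');

// Hypothetical stand-in for the customer-maintenance feature.
const customerService = (() => {
  const customers = new Map();
  let nextId = 1;
  return {
    add(name) { const id = nextId++; customers.set(id, { id, name }); return id; },
    modify(id, name) { customers.get(id).name = name; },
    inquire(id) { return customers.get(id); },
    remove(id) { customers.delete(id); },
  };
})();

// Feature-level acceptance: the stories must work smoothly together.
const id = customerService.add('Ada');
customerService.modify(id, 'Ada Lovelace');
assert.strictEqual(customerService.inquire(id).name, 'Ada Lovelace');
customerService.remove(id);
assert.strictEqual(customerService.inquire(id), undefined);
console.log('feature-level acceptance criteria met');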

The discussion of executing a feature- or system-level acceptance test often generates hot debate. The debate is less about the need to get acceptance and generate feedback at the feature or system level, and more about when this type of test should be done. Deciding on “when” is often a reflection of whether the organization and teams have adopted a few critical Agile techniques.

  1. Integrated code base – All teams should be building and committing to a single code base.
  2. Continuous builds (or at least daily) – The single code base should be re-built as code is committed (or at least daily) and validated.
  3. Team synchronization – All teams working together toward a common goal (SAFe calls this an Agile release train) should begin and end their sprints at the same time.

A solution I have used for teams that meet these criteria is to coordinate the feature acceptance test through the Scrum of Scrums as the second-to-last official activity of each synchronized sprint (prior to the retrospective(s)). The feature AUAT requires team and stakeholder participation so that everyone can agree whether the criteria are met. All of these activities assume that acceptance criteria were developed for each feature as it was added to the backlog, and that overall system acceptance criteria were crafted in the team charter at the beginning of the overall effort. This ensures that delivery of functionality can move forward to release (if planned) without delays.

Where organizations have not addressed the three criteria, the response is often to implement a “hardening” sprint (also known as development-plus-one, test-after or just a testing sprint), so that the system can be assembled and tested as a whole. Problems found after stories are accepted generally require reopening stories and re-planning. Also, if work has gone forward and is being built on potentially bad code, significant rework can be required. My strong advice is to spend the time and money needed to implement the three criteria, thereby removing the need for hardening sprints.

Scaling AUAT to features that require more than a single story, team or sprint to complete is not as simple as looking at each story’s acceptance criteria. Features and the overall system will have their own acceptance criteria. Scaling is facilitated by addressing the technical aspects of Agile and synchronizing activities; however, these are only prerequisites to building layers of AUAT into the product development cycle.

Note – We have left a number of hanging issues, such as who should be involved in AUAT, and whether a truly independent story requires higher levels of AUAT. We will address these in the future. Are there other aspects of AUAT that you believe we should address on this blog?

Categories: Process Management

Get Ready for the Polymer Summit 2015

Google Code Blog - Thu, 09/10/2015 - 19:14

Posted by Taylor Savage, Product Manager, Polymer

The Polymer Summit is almost here! We’ll officially kick off live from Amsterdam at 9:00AM GMT+2 this coming Tuesday, September 15th. To get the most out of the event, make sure to check out the speaker list and talk schedule on our site.

Can’t join us in person? Don’t worry, we’ve got you covered! You can tune into the summit livestream online: we will stream the keynote and all sessions over the course of the event. If you want us to send you a reminder to tune into the livestream, sign up here. We’ll also be publishing all of the talks as videos on the Chrome Developers YouTube Channel.

We’re looking forward to seeing you in person or remotely on Tuesday. Don’t forget to join the social conversations at #PolymerSummit!

Categories: Programming

Barcode Detection in Google Play services

Android Developers Blog - Thu, 09/10/2015 - 18:40

Posted by Laurence Moroney, Developer Advocate

With the release of Google Play services 7.8, we’re excited to announce that we’ve added new Mobile Vision APIs, which provide the Barcode Scanner API to read and decode a myriad of different barcode types quickly, easily and locally.

Barcode detection

Classes for detecting and parsing bar codes are available in the com.google.android.gms.vision.barcode namespace. The BarcodeDetector class is the main workhorse -- processing Frame objects to return a SparseArray<Barcode>.

The Barcode type represents a single recognized barcode and its value. In the case of 1D barcodes such as UPC codes, this will simply be the number that is encoded in the barcode. This is available in the rawValue property, with the detected encoding type set in the format field.

For 2D barcodes that contain structured data, such as QR codes, the valueFormat field is set to the detected value type, and the corresponding data field is set. So, for example, if the URL type is detected, the constant URL will be loaded into the valueFormat, and the URL property will contain the desired value. Beyond URLs, there are lots of different data types that the QR code can support -- check them out in the documentation here.

When using the API, you can read barcodes in any orientation. They don’t always need to be straight on, and oriented upwards!

Importantly, all barcode parsing is done locally, making it really fast, and in some cases, such as PDF-417, all the information you need might be contained within the barcode itself, so you don’t need any further lookups.

You can learn more about using the API by checking out the sample on GitHub. This uses the Mobile Vision APIs along with a Camera preview to detect both faces and barcodes in the same image.

Supported Bar Code Types

The API supports both 1D and 2D bar codes, in a number of sub formats.

For 1D Bar Codes, these are:

UPC-A
UPC-E
EAN-8
EAN-13
Code 39
Code 93
Code 128
ITF
Codabar

For 2D Bar Codes, these are:

QR Code
Data Matrix
PDF 417

Learn More

It’s easy to build applications that detect bar codes with the Barcode Scanner API, and we’ve provided lots of great resources that will allow you to do so. Check them out here:

Follow the Code Lab

Read the Mobile Vision Documentation

Explore the sample

Join the discussion on +Android Developers
Categories: Programming

SE-Radio Episode 237: Go Behind the Scenes and Meet the Team

Show editor Robert Blumen begins with a history of the show, what he has been doing since he became the show editor a year ago, and where he wants the show to go in the future. The remainder of the show is a series of interviews with all of the active hosts, the founder of the […]
Categories: Programming

Not Allowed to Learn on the Job

Making the Complex Simple - John Sonmez - Thu, 09/10/2015 - 16:00

In this episode, I give my advice on training within work time. Should learning on the job be allowed? Full transcript: John: Hey, John Sonmez from simpleprogrammer.com. I got this question about training in work time. This is kind of an interesting one. I’ve gotten a few people that have mentioned things about this. I’ve […]

The post Not Allowed to Learn on the Job appeared first on Simple Programmer.

Categories: Programming

Software Development Linkopedia September 2015

From the Editor of Methods & Tools - Thu, 09/10/2015 - 13:38
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about self-organization, software quality, software development management, software architecture from a DevOps perspective, Scrum maturity, product owners’ skills and web performance. Web site: Week-end Testing Blog: Why Self-Organizing is So Hard […]

Android Developer Story: Zabob Studio and Buff Studio reach global users with Google Play

Android Developers Blog - Wed, 09/09/2015 - 22:27

Posted by Lily Sheringham, Google Play team

South Korean Games developers Zabob Studio and Buff Studio are start-ups seeking to become major players in the global mobile games industry.

Zabob Studio was set up by Kwon Dae-hyeon and his wife in 2013. This couple-run business has already published ten games, including hits ‘Zombie Judgement Day’ and ‘Infinity Dungeon.’ So far, the company has generated more than KRW ₩140M (approximately $125,000 USD) in sales revenue, with about 60 percent of the studio’s downloads coming from international markets, such as Taiwan and Brazil.

Elsewhere, Buff Studio was founded in 2014 and right from the start, its first game Buff Knight was an instant hit. It was even featured as the ‘Game of the Week’ on Google Play and was included in “30 Best Games of 2014” lists. A sequel is already in the works showing the potential of the franchise.

In this video, Kwon Dae-hyeon, CEO of Zabob Studio, and Kim Do-Hyeong, CEO of Buff Studio, talk about how Google Play services and the Google Play Developer Console have helped them maintain a competitive edge, market their games efficiently to global users and grow revenue on the platform.

Android Developer Story: Buff Studio - Reaching global users with Google Play

Android Developer Story: Zabob Studio - Growing revenue with Google Play

Check Zabob Studio apps and Buff Knight on Google Play!

We’re pleased to share that Android Developer Stories will now come with translated subtitles on YouTube in popular languages around the world. Find out how to turn on YouTube captions. To read locally translated blog posts, visit the Google developer blog in Korean.

Join the discussion on +Android Developers
Categories: Programming

Trade Stimulators and the Very Old Idea of Increasing User Engagement

Very early in my web career I was introduced to the almost mystical holy grail of web (and now app) properties: increasing user engagement.

The reason is simple. The more time people spend with your property the more stuff you can sell them. The more stuff you can sell the more value you have. Your time is money. So we design for addiction.

Famously Facebook, through the ties that bind, is the engagement leader with U.S. adults spending a stunning average of 42.1 minutes per day on Facebook. Cha-ching.

Immense resources are spent trying to make websites and apps sticky. Psychological tricks and gamification strategies are deployed with abandon to get you not to leave a website or to keep playing an app.

It turns out this is a very old idea. Casinos are designed to keep you gambling, for example. And though I’d never really thought about it before, I shouldn’t have been surprised to learn retail stores of yore used devices called trade stimulators to keep customers hanging around and spending money.

Never heard of trade stimulators? I hadn’t either until, while watching American Pickers (one of my favorite shows), I learned about this whole category of things people collect that I had no idea even existed!

Here’s an explanation of trade stimulators on the For Amusement Only EM and Bingo Pinball podcast. They are small gambling devices that were used in stores and bars. Usually it was a mini slot machine or game of chance, like a horse racing game or a dice game. It would vend you a small trinket, like a particular color of gum ball, that could be turned in to the shopkeeper for a free drink or other prize. The idea is you put money in and you keep spending money at the establishment.

Here’s a beautiful Sun Mfg 2 Wheel Bicycle Trade Stimulator from the late 1800s. The wheels spin and when the wheels stop spinning you add up the numbers by the indicators to learn what prize you've won. It could be a cigar or a drink, for example.


Here’s a Stephens Magic Beer Barrel Trade Stimulator from around 1934. Your prize is a beer. Even if you didn’t win a beer you would get a pretzel, which would of course make you thirsty so you want more beer!

Categories: Architecture

Practical Tips on Securing Your Next Technology Role

Making the Complex Simple - John Sonmez - Wed, 09/09/2015 - 13:00

Choosing your next job and moving on in your career can be one of the most important decisions in your life. As much as we’d like to think it doesn’t, our choice of job and career has a profound effect on both our happiness and life choices. Many personal relationships and families have crashed and […]

The post Practical Tips on Securing Your Next Technology Role appeared first on Simple Programmer.

Categories: Programming

Private properties in ES2015: the good, bad and ugly

Xebia Blog - Wed, 09/09/2015 - 12:16

This post is part of a series of ES2015 posts. We'll be covering new JavaScript functionality every week!

One of the new features of ECMAScript 2015 is the WeakMap. It has several uses, but one of the most promoted is to store properties that can only be retrieved by an object reference, essentially creating private properties. We'll show several different implementation approaches and compare them in terms of memory usage and performance with a 'public' properties variant.

A classic way

Let's start with an example. We want to create a Rectangle class that is provided the width and height of the rectangle when instantiated. The object provides an area() function that returns the area of the rectangle. The example should make sure that the width and height cannot be accessed directly, but both must be stored.

First, for comparison, a classic way of defining 'private' properties using the ES2015 class syntax. We simply create properties with an underscore prefix in a class. This of course doesn't hide anything, but users know that these values are internal and shouldn't let their code depend on them.

class Rectangle {
  constructor(width, height) {
    this._width = width;
    this._height = height;
  }

  area() {
    return this._width * this._height;
  }
}
We'll do a small benchmark. Let's create 100,000 Rectangle objects, access the area() function and benchmark the memory usage and speed of execution. See the end of this post on how this was benchmarked. In this case, Chrome took ~49ms and used ~8Mb of heap.

WeakMap implementation for each private property

Now, we introduce a WeakMap in the following naive implementation that uses a WeakMap per private property. The idea is to store a value using the object itself as key. In this way, only code with access to the WeakMap can reach the private data, and that should of course be only the instantiated class. A benefit of the WeakMap is that the private data in the map is garbage collected when the original object itself is deleted.

const _width = new WeakMap();
const _height = new WeakMap();

class Rectangle {
  constructor (width, height) {
    _width.set(this, width);
    _height.set(this, height);
  }

  area() {
    return _width.get(this) * _height.get(this);
  }
}
To create 100,000 Rectangle objects and access the area() function, Chrome took ~152ms and used ~22Mb of heap on my computer. We can do better.

Faster WeakMap implementation

A better approach would be to store all private data in an object for each Rectangle instance in a single WeakMap. This can reduce lookups if used properly.

const map = new WeakMap();

class Rectangle {
  constructor (width, height) {
    map.set(this, {
      width: width,
      height: height
    });
  }

  area() {
    const hidden = map.get(this);
    return hidden.width * hidden.height;
  }
}
This time, Chrome took ~89ms and used ~21Mb of heap. As expected, the code is faster because there is one fewer set and get call. Interestingly, memory usage is more or less the same, even though we're storing fewer object references. Maybe a hint at the internal implementation of a WeakMap in Chrome?

WeakMap implementation with helper functions

To improve the readability of above code, we could create a helper lib that should export two functions: initInternal and internal, in the following fashion:

const map = new WeakMap();
let initInternal = function (object) {
  let data = {};
  map.set(object, data);
  return data;
};

let internal = function (object) {
  return map.get(object);
};
Then, we can initialise and use the private vars in the following fashion:

class Rectangle {
  constructor(width, height) {
    const int = initInternal(this);
    int.width = width;
    int.height = height;
  }

  area() {
    const int = internal(this);
    return int.width * int.height;
  }
}
For the above example, Chrome took ~108ms and used ~23Mb of heap. It is a little bit slower than the direct set/get call approach, but is faster than the separate lookups.


  • The good: real private properties are now possible
  • The bad: it costs more memory and degrades performance
  • The ugly: we need helper functions to make the syntax okay-ish

WeakMap comes at both a performance and a memory usage cost (at least as tested in Chrome). Each lookup of an object reference in the map takes time, and storing data in a separate WeakMap is less efficient than storing it directly in the object itself. A rule of thumb is to do as few lookups as possible. For your project, it will be a tradeoff between real private properties with a WeakMap and the lower memory usage and higher performance of plain properties. Make sure to test your project with different implementations, and don't fall into the trap of micro-optimising too early.
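
To see what "real private properties" buys you over the underscore convention, here is a quick usage sketch (assuming the single-WeakMap Rectangle above lives in its own module, so the map itself is unreachable from outside):

let r = new Rectangle(3, 4);

console.log(r.area());       // 12: the public API still works
console.log(r.width);        // undefined: the data is not on the object
console.log(Object.keys(r)); // []: nothing leaks, unlike _width/_height

r = null; // with no other references left, the WeakMap entry can be garbage collected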

Test reference

Make sure to run Chrome with the following parameters: --enable-precise-memory-info --js-flags="--expose-gc" - this enables detailed heap memory information and exposes the gc function to trigger garbage collection.

Then, for each implementation, the following code was run:

const heapUsed = [];
const timeUsed = [];

for (let i = 1; i <= 50; i++) {
  const instances = [];
  const areas = [];

  gc(); // exposed via --js-flags="--expose-gc": start each run with a clean heap

  const t0 = performance.now();
  const m0 = performance.memory.usedJSHeapSize;

  for (let j = 1; j <= 100000; j++) {
    const rectangle = new Rectangle(i, j);
    instances.push(rectangle); // keep a reference so the heap delta is measurable
    areas.push(rectangle.area());
  }

  const t1 = performance.now();
  const m1 = performance.memory.usedJSHeapSize;

  heapUsed.push(m1 - m0);
  timeUsed.push(t1 - t0);
}

var sum = function (old, val) {
  return old + val;
};
console.log('heapUsed', heapUsed.reduce(sum, 0) / heapUsed.length);
console.log('timeUsed', timeUsed.reduce(sum, 0) / timeUsed.length);

Incorporating Agile User Acceptance Testing Into Team Activities

In goes the money and out comes the soda? It is a test!

Acceptance testing is a necessity when developing any product. My brother, the homebuilder, includes acceptance testing throughout the building process. His process includes planned and unplanned walkthroughs and backlog reviews with his clients as the house is built. He has even developed checklists for clients that have never had a custom home built. The process culminates with a final walkthrough to ensure the homeowner is happy. The process of user acceptance testing in Agile development has many similarities, including: participation by users, building UAT into how the teams and teams-of-teams work, and testing user acceptance throughout the product development life cycle.

Acceptance testing is a type of black box testing. The tester knows the inputs and has an expected result in mind, but the window into how the input is transformed is opaque. An example of a black box test for a soda machine would be putting money into the machine, pressing the selection button and getting the correct frosty beverage. The tester does not need to be aware of all the steps between hitting the selector and receiving the drink. (A minimal sketch of such a test appears after the list below.) Story-level AUAT can be incorporated into the day-to-day activity of an Agile team. Incorporating AUAT activities includes:

  1. Adding the requirement for the development of acceptance tests into the definition of ready to develop. (This will be bookended by the definition of done)
  2. Ensuring that the product owner or a well-regarded subject matter expert for the business participates in defining the acceptance criteria for stories and features.
  3. Reviewing acceptance criteria as part of the story grooming process.
  4. Using Acceptance Test Driven Development (ATDD) or other Test First Development methods. ATDD builds collaboration between the developers, testers and the business into the process by writing acceptance tests before developers begin coding.
  5. Incorporating the satisfaction of the acceptance criteria into the definition of done.
  6. Leveraging the classic Agile demo, led by the product owner for stakeholders, performed at the end of each sprint. Completed (done) stories are demonstrated and stakeholders interact with them to make sure their needs are being addressed and to solicit feedback.
  7. Performing a final AUAT step using a soft roll-out or first-use technique to collect final user feedback in a truly production environment. One of the most common problems all tests have is that they are executed in an environment that closely mirrors production. The word closely is generally the issue: until the code is run in a true production environment, what exactly will happen is unknown. The concept of first-use feedback borders on one of the more problematic approaches, that of throwing code over the wall and testing in production. This should never be the first time acceptance, integration or performance is tested; rather, treat it as a mechanism to broaden the pool of feedback available to the team.
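
As a minimal sketch of the black box idea from the soda machine analogy above: the sodaMachine function below is a made-up stand-in so the example is self-contained, and the test only checks inputs against expected outputs, never the steps in between.

const assert = require('assert');

// Made-up soda machine; prices in cents. The test treats it as a black box.
function sodaMachine(coinsInserted, selection) {
  const prices = { cola: 150, lemonade: 125 };
  if (coinsInserted < prices[selection]) {
    return { dispensed: null, change: coinsInserted }; // not enough money: refund
  }
  return { dispensed: selection, change: coinsInserted - prices[selection] };
}

// Black box acceptance tests: money in, correct frosty beverage (and change) out.
assert.deepStrictEqual(sodaMachine(200, 'cola'), { dispensed: 'cola', change: 50 });
assert.deepStrictEqual(sodaMachine(100, 'cola'), { dispensed: null, change: 100 });
console.log('soda machine acceptance tests passed');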

In a scaled Agile project, acceptance testing at the story level is a step in a larger process of planning and actions. This process typically starts by developing acceptance criteria for features and epics, which are then groomed and decomposed into stories. Once the stories are developed and combined, a final acceptance test at the system or application level is needed to ensure that what has been developed works as a whole package and meets the users’ needs.

Are there other techniques that you use to implement AUAT at the team level?

In the next blog entry we will address ideas for scaling AUAT.

Categories: Process Management

Why We Need To Estimate Software Cost and Schedule

Herding Cats - Glen Alleman - Tue, 09/08/2015 - 23:22

There is no good way to perform a software cost‐benefit analysis, breakeven analysis, or make‐or‐buy analysis without some reasonably accurate method of estimating software costs and their sensitivity to various product, project, and environmental factors. ‐ Dr. Barry Boehm

The previous post on Source Lines of Code set off a firestorm from the proponents of #NoEstimates.

I'd rather not estimate than estimate with SLOC 

or my favorite, since we work in the domains of flight avionics (command and data handling (C&DH) and guidance navigation and control (GN&C)), fire control systems, fault tolerant process control and the diagnostic coverage needed for process safety management, as well as ground data and business process systems for both aircraft and spacecraft:

I'm no longer going to fly with any company that counts LOC as (it) shows a lack of intelligence.

So the question is: where and when is estimating the source lines of code useful for making business decisions?

Embedded Software Intensive Systems

In the embedded systems business, memory is fixed and processor speed is hardwired, and many times limited by the thermal control process. Aircraft and spacecraft avionics bays have limited cooling, so getting a faster processor has repercussions beyond its purchase cost. In an aircraft, cooling must be added, increasing weight and possibly impacting the center of gravity. In a spacecraft, cooling is not done with fans and moving air. There is no air. Heat pipes and radiators are needed, again adding weight.

For those whose experience is the rapid development of small chunks of code that get released often to the customer for incremental use in the business process, and that then provide feedback for the next sliced piece of functionality, concerns such as the center of gravity, the thermal load, and the realtime critical path of the executing code (which maintains the realtime closed-loop control algorithm so we don't crash into the end of the runway or onto the surface of a distant planet) are probably not in their vocabulary.

Business and Processing Systems

For terrestrial systems, even business processing systems, the number of lines of code has a direct impact on cost and schedule. Let's start with a source code security analyzer. Those whose skill is rapidly chunking out pieces of useful functionality aren't likely to be interested in running all their code through a security analyzer before even starting the compile and checkout process.

A source code security analyzer examines source code to detect and report weaknesses that can lead to security vulnerabilities.

They are one of the last lines of defense to eliminate software vulnerabilities during development or after deployment. Like all things mission critical, there is a specification for them: Source Code Security Analysis Tool Functional Specification Version 1.1, NIST Special Publication 500-268, February 2011.

Development and Product Maintenance 

A recent hands-on experience with the need to know the SLOC comes from a refactoring effort to remove all the reflection from a code base. For those not familiar with reflection: it provides objects that describe assemblies, modules and types. Reflection dynamically creates an instance of a type, binds the type to an existing object, or gets the type from an existing object and invokes its methods or accesses its fields and properties. If you are using attributes in your code, reflection enables you to access them.

This is a clever way to build code in a rapidly changing requirements paradigm. A bit too clever in our high performance transaction processing system.

In larger production transaction processing systems, it's a way to crater the performance of the code by searching for object types on every single call for the transaction.

Removing all the reflection code structures eliminated a huge percentage of the CPU time, memory requirements and database performance impacts - along with separating all the DB logic into stored procedures - resulting in the decommissioning of large chunks of the server farm running a very large public health application.

How long is it going to take to refactor all this code? I know, let's make an estimate by counting the lines of code. Do a few conversions from the current design (reflection), and count how long they took. Divide the total lines of code (objects and their size) by that rate and we have an Estimate to Complete. Add some margin and we'll know approximately when the big pile of crappy code can get rid of the smell of running fat, slow, and error prone.
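
A back-of-the-envelope version of that arithmetic, with made-up numbers purely for illustration:

// Estimate to Complete from a timed calibration sample. All numbers invented.
const totalSloc = 120000; // lines touched by the reflection refactoring
const sampleSloc = 2000;  // lines converted in the calibration sample
const sampleDays = 4;     // how long the sample took
const margin = 1.3;       // schedule margin for surprises

const rate = sampleSloc / sampleDays;        // 500 lines/day
const etcDays = (totalSloc / rate) * margin; // 312 working days
console.log(`ETC: about ${Math.round(etcDays)} working days at ${rate} SLOC/day`);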

High Performance Embedded Mission Systems

High performance embedded systems are found everywhere. Current estimates show they outnumber desktop and server systems 100 to 1. Most of these systems have ZERO defect goals, as well as ZERO tolerance for performance shortfalls, processing disruptions, and other reset conditions.

How do we have any sense that the code base is capable of meeting these conditions? Testing of course is one way. But exhaustive testing is simply not possible. In a past life, verification and validation of the code was the method - and it is still the method. Along with that is the cyclomatic complexity assessment of the code base. This is another activity not likely to be of much interest to those producing small chunks of sliced code to rapidly satisfy the customer's emerging and possibly unknowable needs until they see it working.

So In The End

Unless we suspend the principles of Microeconomics and Managerial Finance when making management decisions in the presence of uncertainty, we're going to need to estimate the outcomes of our decisions.

This process is the basis of opportunity cost - that is, what is the cost of one decision over some others? If I make Decision A, what is the cost of NOT making Decision B or C? This LOST opportunity is the cost of choice.

Unless we suspend the principles of probability and statistics when applied to networks of interrelated work, we're not going to be able to make decisions without making estimates.

In the four examples above, from direct hands-on experience, Source Lines of Code are a good proxy for making estimates about cost and schedule, as well as about the complexity of the code base when computing the inherent reliability and fault tolerance of the applications on which our daily lives depend - from flight controls in aircraft, to process control loops in everything under computer control (including the computers themselves), to the assurance that the code we write is secure and will behave as needed.

If you hear some unsubstantiated claim that SLOC are not of any use in estimating future outcomes, ask: when you were working on a system where failure is not an option, did those paying for that system tell you they didn't need to estimate the outcomes of their decisions? Haven't worked in that environment? You may want to do some exploring of your own to see some of the many ways estimates are made, and how SLOC is one of those, in Software Intensive Systems Cost and Schedule Estimating. That document is an example of how SLOC is used in systems that are sensitive to size and performance based on the size of the code base, so take a read and possibly see something you may not have encountered before. It may not be your domain, but embedded systems outnumber desktop and server side systems 100 to 1.

One final thought about Software Intensive Systems and their impact on larger software development processes is the introduction of Agile development in these domains. Progress is being made in the integration of Agile with large systems acquisition processes. Here's a recent briefing from a domain where systems are engineered - systems we depend on to work as specified every single time.



† It's going to be a long walk for the poster of that nonsense idea. Oh yeah, those building Positive Train Control are also realtime embedded systems developers, and they use SLOC to estimate timing, testing, complexity, and many other ...ilities. Same with auto manufacturers. Maybe the Nike shoe company doesn't. So enjoy the walk. And BTW, that OP deleted his post, but worry not, I got a screen capture.


Related articles: Architecture-Centered ERP Systems in the Manufacturing Domain; IT Risk Management; Why Guessing is not Estimating and Estimating is not Guessing
Categories: Project Management

100 days of Google Dev

Google Code Blog - Tue, 09/08/2015 - 21:21

Posted by Reto Meier, Team Lead, Scalable Developer Advocacy

For the past 100 days, Google Developers has delivered a series of daily videos to keep you informed about everything you need to develop, engage and earn.

We’ve covered everything from the Android Marshmallow launch to how you can get started developing with beacons:

...and continued our coverage of everything Polymer and Geo:

Thank you for following along and learning with us about all the ways you can use Google tools to make your apps awesome. Let us know what your favourite video was using #GoogleDev100. In the meantime, check out this short sizzle reel looking back at our most memorable moments -- we hope you’ve enjoyed watching them as much as we’ve enjoyed making them:

Categories: Programming

Ask Me a Question or Suggest a Topic

Mike Cohn's Blog - Tue, 09/08/2015 - 20:25

Do you have a question on your mind or a topic you'd like me to write about?

If so, I'm trying something new. Head on over to the new ask-a-question page and submit a question. At least initially, that page won't be linked anywhere from the site's menu structure. For now, the only way to know about it is to be a loyal reader of my blog.

I'll answer the most common, important or interesting questions or topics here on the blog or on my newsletter. I can't promise I'll be able to address every topic, or answer every question that’s submitted, but I'll do my best.

My hope is that this way, I can share my opinions. But, more importantly, we can all benefit from the great community discussion that often happens here around some of the topics.

I look forward to your suggestions and questions. The new ask-a-question page is available now. So let me know what you'd like me to address.

OK, I Will Help You for Free

NOOP.NL - Jurgen Appelo - Tue, 09/08/2015 - 11:37

I am happy to do things for free. Also for you! But only when I trust you, believe in you. There is no shortage of ideas. There is a shortage of commitment (and collaboration and competence) to make things happen.

The post OK, I Will Help You for Free appeared first on NOOP.NL.

Categories: Project Management

Hawk Notes, Volume 1

DevHawk - Harry Pierson - Tue, 09/08/2015 - 06:32

This is the first in a series of blog posts about Hawk, the engine that powers this site. My plan is to make a post like this for every significant update to the site. We'll see how well that plan works.

  • I just pushed out a new version of Hawk on my website. The primary feature of this release is support for ASP.NET 5 Beta 7. I also published the source code up on GitHub. Feedback welcome!
  • As I mentioned in my post on Edge.js, the publishing tools for Hawk are little more than duct tape and baling wire at this point. Eventually, I'd like to have a dedicated tool, but for now it's a manual three-step process:
    1. Run the PublishDraft to publish a post from my draft directory to a local git repo of all my content. As part of this, I update some of the metadata and render the markdown to HTML.
    2. Run my WritePostsToAzure Custom Command to publish posts from my local git repo to Azure. I have a blog post on my custom command infrastructure in the works.
    3. Trigger a content refresh via an unpublished URL.
  • I need to trigger a content refresh because Hawk loads all of the post metadata from Azure on startup. The combined metadata for all my posts is pretty small - about 2/3 of a megabyte stored on disk as JSON. Having the data in memory makes it easy to query, as well as to support multiple post repositories (Azure storage and the file system).
  • I felt comfortable publishing the Hawk source code now because I added a secret key to the data refresh URL. Previously, the refresh URL was unsecured. I didn't think giving random, anonymous people on the Internet the ability to kick off a data refresh was a good idea, so I held off publishing source until I secured that endpoint.
  • Hawk caches blog post content and legacy comments in memory. This release also adds cache invalidation logic so that everything gets reloaded from storage on data refresh, not just the blog post metadata.
  • I don't understand what the ASP.NET team is doing with the BufferedHtmlContent class. In beta 7 it's been moved to the Common repo and published as source. However, I couldn't get it to compile because it depends on an internal [NotNull] attribute. I decided to scrap my use of BufferedHtmlContent and built out several classes that implement IHtmlContent directly instead. For example, the links at the bottom of my master layout are rendered by the SocialLink class. Frankly, I'm not sure if rolling your own IHtmlContent class for a snippet of HTML code you want to automate is a best practice. It seems like it's harder than it should be. It feels like ASP.NET needs a built-in class like BufferedHtmlContent, so I'm not sure why it's been removed.
Categories: Architecture, Programming

The Top 10 Project Management Books

"No one can whistle a symphony. It takes a whole orchestra." — H.E. Luccock

Being an effective program manager at Microsoft means knowing how to make things happen.  While being a program manager requires a lot more than project management, project management is still at the core.

Project management is the backbone of execution.

And execution is tough.  But execution is also the breeding ground of results.  Execution is what separates many teams and individuals from the people who have good ideas, and the people that actually ship them.  Great ideas die on the vine every day from lack of execution.  (Lack of execution is the same way great strategies die, too.)

If you want to learn the art and science of execution, here is a handful of books that have served me well:

  1. Agile Management for Software Engineering, by David Anderson.  David turns the Theory of Constraints into pragmatic insights for driving projects, making progress where it counts, and producing great results.   The book provides a great lens for thinking in terms of business value and how to flow value throughout the project cycle.
  2. Agile Project Management with Kanban, by Eric Brechner.  This is the ultimate guide for doing Kanban.  Rather than get bogged down in theory, it’s a fast-paced, action guide to transitioning from Scrum to Kanban, while carrying the good forward.  Eric helps you navigate the tough choices and adapt Kanban to your environment, whether it’s a small team, or a large org.  If you want to lead great projects in today’s world, and if you want to master project management, Kanban is a fundamental part of the formula and this is the book.
  3. Flawless Execution, by James D. Murphy.  James shares deep insight from how fighter pilots fly and lead successful missions, and how those same practices apply to leading teams and driving projects.   It’s among the best books at connecting strategy to execution, and showing how to get everybody’s head in the game, and how to keep learning and improving throughout the project.  This book also has a great way to set the outcomes for the week and to help people avoid getting overloaded and overwhelmed, so they can do their best work, every day.
  4. Get Them On Your Side, by Samuel B. Bacharach.  Stakeholder management is one of the secret keys to effective project management.  So many great ideas and otherwise great projects die because of poor stakeholder management.  If you don’t get people on your side, the project loses support and funding.  If you win support, everything gets easier.   This is probably the ultimate engineer’s guide to understanding politics and treating politics as a “system” so you can play the game effectively without getting swept up into it.
  5. How to Run Successful Projects III: The Silver Bullet, by Fergus O'Connell.  While  “The Silver Bullet” is a bold title, the book lives up to its name.  It cuts through all the noise of what it takes to do project management with skill.  It carves out the essential core and the high-value activities with amazing clarity so you can focus on what counts.  Whether you are a lazy project manager that just wants to focus on doing the minimum and still driving great projects, or you are a high-achiever that wants to take your project management game to the next level, this is the guide to do so.
  6. Making Things Happen: Mastering Project Management, by Scott Berkun.  This is the book that really frames out how to drive high-impact projects in the real world.  It’s a book for program managers and project managers, by a real Microsoft program manager.  It’s hard to do projects well if you don’t understand project management end-to-end.  This is that end-to-end guide, and it dives deep into all the middle.  If you want to get a taste of what it takes to ship blockbuster projects, this is the guide.
  7. Managing the Design Factory, by Donald G. Reinertsen.  This is an oldie, but goodie.   One of my former colleagues recommended this to me, early in my career.  It taught me how to think very differently and much more systematically in how to truly design a system of people that can consistently design better products.  It’s the kind of book that you can keep going back to after a life-time to truly master the art of building systems and ecosystems for shipping great things.  While it might sound  like a philosophy book, Donald does a great job of turning ideas and insight into action.  You will find yourself re-thinking and re-imagining how you build products and lead projects.
  8. Requirements-Led Project Management: Discovering David's Slingshot, by Susanne Robertson and James Robertson.  This book will add a bunch of new tools to your toolbox for depicting the problem space and better organizing the solution space.  It’s one of the best books I know for dealing with massive amounts of information and using it in meaningful ways in terms of driving projects and driving better product design.
  9. Secrets to Mastering the WBS in Real-World Projects, by Liliana Buchtik.  The ultimate tool that project managers have, that other disciplines don’t, is the Work Breakdown Structure.  The problem is, too many project managers still create activity-based Work Breakdown Structures, when they should be creating outcome-based Work Breakdown Structures.  This is the first book that I found that provided real breadth and depth in building better Work Breakdown Structures.  I also like how Liliana applies Work Breakdown Structures to Agile projects.  This is hands down the best book I’ve read on the art and science of doing Work Breakdown Structures in the real world.
  10. Strategic Project Management Made Simple: Practical Tools for Leaders and Teams, by Terry Schmidt.  This book helps you build the skills to handle really big, high-impact projects.  But it scales down to very simple projects as well.  What it does is help you really paint a vivid picture of the challenge and the solution, so that your project efforts will be worth it.  It’s an “outcome” focused approach, while a lot of project management books tend to be “activity” focused.  This is actually the book that I wish I had found out about earlier in my career – it would have helped me fast-path a lot of skills and techniques that I learned the hard way through the school of hard knocks.   The strategic aspect of the book also makes this super relevant for program managers that want to change the world.   This book shows you how to drive projects that can change the world.

Well, there you have it.   That’s my short-list of project management books that really have made a difference and that can really help you be a more effective program manager or project manager (or simply build better project management skills).

Too many people are still working on ineffective projects, getting lackluster results, slogging away, and doing too much “push” and not addressing nearly enough of the existing “pull” that’s already there.

These are the project management books that build real competence.

And where competence grows, confidence flows.

Categories: Architecture, Programming