Methods & Tools - Feed aggregator

Stuff The Internet Says On Scalability For April 11th, 2014

Hey, it's HighScalability time:

DNA nanobots deliver drugs in living cockroaches which have as much compute power as a Commodore 64
  • 40,000: # of people it takes to colonize a star system; 600,000: servers vulnerable to heartbleed
  • Quotable Quotes:
    • @laurencetratt: High frequency traders paid $300M to reduce New York <-> Chicago network transfers from 17 -> 13ms. 
    • @talios: People read for sexual arousal - jim webber #cm14
    • @viktorklang: LOL RT @giltene 2nd QOTD: @mjpt777 “Some people say ‘Thread affinity is for sissies.’Those people don’t make money.”
    • @pbailis: Reminder: eventual consistency is orthogonal to durability and data loss as long as you correctly resolve update conflicts.
    • @codinghorror: Converting a low-volume educational Discourse instance from Heroku at ~$152.50/month to Digital Ocean at $10/month.
    • @FrancescoC: Scary post on kids who can't tell the diff. between atomicity & eventual consistency architecting bitcoin exchanges 
    • @jboner: "Protocols are a lot harder to get right than APIs, and most people can't get APIs right" -  @daveathomas at #reactconf
    • @vitaliyk: “Redundancy is ambiguous because it seems like a waste if nothing unusual happens. Except that something unusual happens—usually.” @nntaleb
    • Blazes: Asynchrony * partial failure is hard.
    • David Rosenthal: I have long thought that the fundamental challenge facing system architects is to build systems that fail gradually, progressively, and slowly enough for remedial action to be effective, all the while emitting alarming noises to attract attention to impending collapse. 
    • Brian Wilson: Moral of the story: design for failure and buy the cheapest components you can. :-)

  • Just damn. DNA nanobots deliver drugs in living cockroaches: Levner and his colleagues at Bar Ilan University in Ramat-Gan, Israel, made the nanobots by exploiting the binding properties of DNA. When it meets a certain kind of protein, DNA unravels into two complementary strands. By creating particular sequences, the strands can be made to unravel on contact with specific molecules – say, those on a diseased cell. When the molecule unravels, out drops the package wrapped inside.

  • Remember those studies where a gorilla walks through the middle of a basketball game and most people don't notice? Attention blindness. A thousand eyeballs don't mean anything will be seen. That's human nature. Heartbleed -- another horrible, horrible, open-source FAIL.

  • Remember the Content Addressable Web? Your kids won't. The mobile web vs apps is another front on the battle between open and closed systems.

  • In Public Cloud Instance Pricing Wars - Detailed Context and Analysis Adrian Cockcroft takes a deep stab at making sense of the recent price cuts by Google, Amazon, and Microsoft. AWS users should migrate to the new m3, r3, c3 instances; AWS and Google instance prices are essentially the same for similar specs; Microsoft doesn't have the latest Intel CPUs and isn't pricing against like-spec'ed machines; IBM Softlayer pricing is still higher; Moore's law dictates price curves going forward.

  • Seth Lloyd: Quantum Machine Learning - QM algorithms are a win because they give exponential speedups on BigData problems. The mathematical structure of QM, because a wave can be at two places at once, is that the states of QM systems are in fact vectors in high dimensional vector spaces. The kind of transformations that happen when particles of light bounce off CDs, for example, are linear transformations on these high dimensional vector spaces. Quantum computing is the effort to exploit quantum systems to allow these linear transformations to perform the kind of calculations we want to perform. Or something like that. 

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge...

Categories: Architecture

IT Governance and Decision Rights

Herding Cats - Glen Alleman - Fri, 04/11/2014 - 16:15

In the discussions around #NoEstimates, it's finally dawned on me - after walking the book shelf in the office - that the conversation is split across a chasm: governance based organizations and non-governance based organizations. 

The same is the case for product development organizations - those producing a software product for sale or providing a service in exchange for money. There are governance based product organizations and non-governance based product organizations. 

I can't say how those are differentiated, but there is broad research on the topic of governance and business success using IT. The book on the left is a start. In this book there is a study of 300 enterprises around the world, with the following...

Companies with effective IT governance have profits that are 20% higher than other companies pursuing similar strategies. One viable explanation for this differential is that IT governance specifies accountabilities for IT-related business outcomes and helps companies align their IT investments with their business priorities. But IT governance is a mystery to key decision makers at most companies. Our research indicates that, on average, only 38% of senior managers in a company know how IT is governed. Ignorance is not bliss. In our research senior management awareness of IT governance processes proved to be the single best indicator of governance effectiveness with top performing firms having 60, 70 or 80% of senior executives aware of how IT is governed. Taking the time at senior management levels to design, implement, and communicate IT governance processes is worth the trouble—it pays off.

IT Governance is a decision rights and accountability framework for encouraging desirable behaviours in the use of IT. And I'd add the creation of IT, IT products, and IT services. Since IT is a broad domain, let's exclude development effort for things like games, phone apps, plugins, and in general items that have low value at risk. This doesn't mean they don't have high revenue, but the investment value is low. So when they don't produce their desired beneficial outcome, the degree of loss is low as well.

Assessment of IT Governance focuses on four objectives:

  1. Cost effectiveness of IT
  2. Effective use of IT for asset utilization
  3. Effective use of IT for growth
  4. Effective use of IT for business flexibility

In all four, Weill and Ross provide guidance for assessing the capabilities of IT. In all four, cost is considered a critical success factor.

Without knowing the cost of a decision, the choices presented by that decision cannot be assessed. So when we hear that #NoEstimates is about making decisions, ask if those decisions are being made in a governance based organization.

Then ask the question: who has the decision rights to make those decisions? Who has the need to know the cost of the value produced by the firm in exchange for that cost? The developers, the management of the development team, the business management of the firm, the customers of the firm?

The three dependent variables of all projects are schedule, cost, and technical performance of produced capabilities (this is a wrapper word for everything NOT time and cost). The value at risk is a good starting point for deciding whether to apply governance processes or not. If you fix one of these variables - say budget (which is a placeholder for cost until the actuals arrive) - then the other two (time and technical) are now free to vary. Estimating their behaviour will be needed to assure the ROI meets the business goals. In the governance paradigm, these three dependent variables are part of the decision making process. Not knowing one or more puts at risk the value produced by the project or work effort. It's this value at risk that is the key to determining why, how, and when to estimate. 

What are you willing to lose (risk)? If you don't need to know when you'll be done, or what you'll be able to produce on a planned day, or what that will cost, or to determine the ROI (return on investment), ROA (return on asset value), or ROE (return on equity) to some level of confidence to support your decision making - then estimating is a waste of time.

If, on the other hand, the firm or customers you work for - writing software in exchange for money - have an interest in knowing any or all of those answers to support their decision making, you're likely going to have to estimate time, cost, and produced capabilities to answer their questions.

It's not about you (the consumer of money). To find out who it is about, follow the money - those paying will tell you whether they need an estimate or not.

Related articles:
  • Back To The Future
  • Why Johnny Can't Estimate or the Dystopia of Poor Estimating Practices
  • Danger Will Robinson
  • Some more answers to the estimating questions
  • Capabilities Based Planning, Use Cases, and Agile
  • The Value of Making An Estimate
Categories: Project Management

Monoidal Event Sourcing

Think Before Coding - Jérémie Chassaing - Fri, 04/11/2014 - 03:34

I could not sleep… 3am and this idea…


Event sourcing is about fold, but there is no monoid around!


What’s a monoid?


First, for those who didn’t have a chance to see Cyrille Martaire’s fabulous explanation with beer glasses, or read the great series of posts on F# for fun and profit, here is a quick recap:


We need a set.

Let’s take a simple one, positive integers.


And an operation, let’s say +, which takes two positive integers.


It returns… a positive integer. The operation is said to be closed on the set.


Something interesting is that 3 + 8 + 2 = 13 = (3 + 8) + 2 = 3 + (8 + 2).

This is associativity: (x + y) + z = x + (y + z)


The last interesting thing is 0, the neutral element:

x + 0 = 0 + x = x


(N,+,0) is a monoid.


Let’s say it again:

(S, *, ø) is a monoid if

  • * is closed on S (* : S –> S –> S)
  • * is associative ( (x * y) * z = x * (y * z) )
  • ø is the neutral element of * ( x * ø = ø * x = x )

warning: it doesn’t need to be commutative, so x * y can be different from y * x!


Some famous monoids:

(int, +, 0)

(int, *, 1)

(lists, concat, empty list)

(strings, concat, empty string)
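
To make this concrete, here is a minimal sketch of a monoid in TypeScript (my own illustration, not from the original post): a type packaged with a closed, associative operation and its neutral element.

// A monoid: a type T with a closed, associative operation and a neutral element.
interface Monoid<T> {
  combine: (x: T, y: T) => T; // must be associative
  empty: T;                   // neutral element for combine
}

const sum: Monoid<number> = { combine: (x, y) => x + y, empty: 0 };
const concat: Monoid<string> = { combine: (x, y) => x + y, empty: "" };

// Folding a list with a monoid is just reduce:
function fold<T>(m: Monoid<T>, xs: T[]): T {
  return xs.reduce(m.combine, m.empty);
}

fold(sum, [3, 8, 2]);     // 13, however the additions are grouped
fold(concat, ["a", "b"]); // "ab"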


The link with Event Sourcing


Event sourcing is based on an application function apply : State –> Event –> State, which returns the new state based on the previous state and an event.


Current state is then:

fold apply emptyState events


(for those using C# Linq, fold is the same as .Aggregate)
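
As a concrete sketch (TypeScript, with a made-up bank-account State and Event; the post itself stays abstract), the current-state computation looks like this:

// Hypothetical event-sourced aggregate: an account balance.
type State = { balance: number };
type Event = { kind: "deposited" | "withdrawn"; amount: number };

const emptyState: State = { balance: 0 };

// apply : State -> Event -> State
function apply(state: State, event: Event): State {
  const delta = event.kind === "deposited" ? event.amount : -event.amount;
  return { balance: state.balance + delta };
}

const events: Event[] = [
  { kind: "deposited", amount: 100 },
  { kind: "withdrawn", amount: 30 },
];

// fold apply emptyState events:
const current = events.reduce(apply, emptyState); // { balance: 70 }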


Which is great because higher level functions and all…


But fold is even more powerful with monoids! For integers, fold is called sum, and the cool thing is that it’s associative!


With a monoid you can fold subsets, then combine them together afterwards (still in the right order). This is what gives map reduce its full power: move the code to the data, combine in place, then combine the results. As long as you have monoids, it works.


But apply will not be part of a monoid. It’s not closed on a set.


To make it closed on a set it should have the following signature:


apply: State –> State –> State, so we should maybe rename the function combine.


Let’s dream


Let’s imagine we have a combine operation closed on State.


Now, event sourcing goes from:


decide: Command –> State –> Event list

apply: State –> Event –> State



to:

decide: Command –> State –> Event list

convert: Event –> State

combine: State –> State –> State


the previous apply function is then just:

apply state event = combine state (convert event)


and fold distribution gives:


fold apply emptyState events = fold combine emptyState (map convert events) 


(where map applies the convert function to each item of the events list, as does .Select in Linq)


The great advantage here is that we map then fold - and fold is just another name for reduce, so this is map reduce!


Application of events can then be done in parallel by chunks and combined afterwards!
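
To see the refactoring in code, here is the same hypothetical account sketch as before (reusing its State, Event, events, apply and emptyState definitions):

// convert : Event -> State, i.e. each event becomes a small State delta.
function convert(event: Event): State {
  return { balance: event.kind === "deposited" ? event.amount : -event.amount };
}

// combine : State -> State -> State, closed and associative, with emptyState
// as its neutral element, so (State, combine, emptyState) is a monoid.
function combine(a: State, b: State): State {
  return { balance: a.balance + b.balance };
}

// fold apply emptyState events = fold combine emptyState (map convert events)
const viaApply = events.reduce(apply, emptyState);                 // { balance: 70 }
const viaMonoid = events.map(convert).reduce(combine, emptyState); // { balance: 70 }

Because combine is associative, chunks of the mapped list can be reduced independently, in parallel, and the partial results merged afterwards, as long as the order of the chunks is preserved.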


Is it just a dream?


Surely not.


Most of the tricky decisions have been taken in the decide function, which didn’t change. The apply function usually just sets state members to values present in the event, or increments/decrements values, or adds items to a list… No business decision is taken in the apply function, and most of the primitive types in state members are already monoids under their usual operations…


And a group (tuple, list…) of monoids is also a monoid under a simple composition:

if (N1, *, n1) and (N2, ¤, n2) are monoids, then N1 × N2 is a monoid with an operator <*> ( (x1, x2) <*> (y1, y2) = (x1 * y1, x2 ¤ y2) ) and a neutral element (n1, n2)…
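
Rendered with the Monoid interface from the earlier TypeScript sketch (again my own illustration):

// The product of two monoids is a monoid on pairs: combine component-wise,
// with the pair of neutral elements as the new neutral element.
function pairMonoid<A, B>(ma: Monoid<A>, mb: Monoid<B>): Monoid<[A, B]> {
  return {
    combine: ([x1, x2], [y1, y2]) => [ma.combine(x1, y1), mb.combine(x2, y2)],
    empty: [ma.empty, mb.empty],
  };
}

// e.g. tracking a running total and a log together:
const totalAndLog = pairMonoid(sum, concat);
totalAndLog.combine([2, "ab"], [3, "cd"]); // [5, "abcd"]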


To view it more easily, the convert function converts an event to a Delta, a difference of the State.


Those deltas can then be aggregated/folded to make a bigger delta.

It can then be applied to an initial value to get the result!



The idea seems quite interesting and I have never read anything about this… If anyone knows of prior study of the subject, I’m interested.


Next time we’ll see how to make monoids for some common patterns we can find in the apply function, to use them in the convert function.

Categories: Architecture, Requirements

Checklist for Process Improvement Success

I spend lots of time in airports.

I get around. Once upon a time in my life that might have been an epithet, but now it reflects a wide exposure to what works, what doesn’t work and what is clearly a cop-out. I suggest that there are five requirements for a successful process improvement program, five attributes that give a program a chance of success. They are:

  1. The best and brightest
  2. Understanding of change management
  3. The wish to change
  4. A commitment to change in dollars and cents
  5. Recognition that implementation matters
The best change programs are staffed by people who have a solid track record of success both technically and in business terms. The right candidates will have a high follow-ability quotient. Follow-ability is a combination of a number of attributes, including optimism, success, collaboration, vision and leadership.

Change management is a structured approach to shifting/transitioning individuals, teams, and organizations from a current state to a desired future state. It requires planning for selling and promoting ideas, then supporting the nascent changes until they have achieved critical mass. Make sure your process improvement group has someone trained in change management. Skills will include sales, promotion, branding, communication and organizational politics.

I have observed many change programs that were created to check a box. There was no real impetus within the organization to change. The organization must want to change, to become something different, or the best a change program can do is put lipstick on a pig.

Having a commitment to change in dollars and cents is critical. I suggest funding the process improvement program by having each of the affected groups contribute the needed budget. The word contribute means that they can choose not to renew funding if they do not get what they need. The funding linkage ensures that the funding groups stay involved and at the same time makes sure the process improvement team recognizes their customer.

Finally, the recognition that implementation matters is the capstone. Process changes, unlike spaghetti, cannot be tossed against the wall to determine if they are done. How a process change is implemented will determine whether it sticks. An implementation plan that integrates with the organizational change management plan is part of the price of admission for a successful change.
Categories: Process Management

Paper: Scalable Atomic Visibility with RAMP Transactions - Scale Linearly to 100 Servers

We are not yet at the End of History for database theory as Peter Bailis and the Database Group at UC Berkeley continue to prove with a great companion blog post to their new paper: Scalable Atomic Visibility with RAMP Transactions. I like the approach of pairing a blog post with a paper. A paper almost by definition is formal, but a blog post can help put a paper in context and give it some heart.

From the abstract:

Databases can provide scalability by partitioning data across several servers. However, multi-partition, multi-operation transactional access is often expensive, employing coordination-intensive locking, validation, or scheduling mechanisms. Accordingly, many real-world systems avoid mechanisms that provide useful semantics for multi-partition operations. This leads to incorrect behavior for a large class of applications including secondary indexing, foreign key enforcement, and materialized view maintenance. In this work, we identify a new isolation model—Read Atomic (RA) isolation—that matches the requirements of these use cases by ensuring atomic visibility: either all or none of each transaction’s updates are observed by other transactions. We present algorithms for Read Atomic Multi-Partition (RAMP) transactions that enforce atomic visibility while offering excellent scalability, guaranteed commit despite partial failures (via synchronization independence), and minimized communication between servers (via partition independence). These RAMP transactions correctly mediate atomic visibility of updates and provide readers with snapshot access to database state by using limited multi-versioning and by allowing clients to independently resolve non-atomic reads. We demonstrate that, in contrast with existing algorithms, RAMP transactions incur limited overhead—even under high contention—and scale linearly to 100 servers.

What is RAMP?

Categories: Architecture

5 Questions That Need Answers for Project Success

Herding Cats - Glen Alleman - Thu, 04/10/2014 - 16:40

5 Success Questions

These 5 questions need credible answers in units of measure meaningful to the decision makers.

  1. Capabilities Based Planning starts with a Concept of Operations for each deliverable and the system as a whole. The ConOps describes how these capabilities will be used and what benefits will be produced as a result. This question is answered through Measures of Effectiveness (MOE) for each capability. MOEs are operational measures of success related to the achievement of the mission or operational objectives, evaluated in an operational environment under a specific set of conditions.
  2. Technical and Operational requirements fulfill the needed capabilities. These requirements are assessed through their Measures of Performance (MOP), which characterize the physical or functional attributes relating the system operation, measured or estimated under specific conditions.
  3. The Integrated Master Plan and Integrated Master Schedule describe the increasing maturity of the project deliverables through Technical Performance Measures (TPM), used to determine how well each deliverable and the resulting system elements satisfy, or are expected to satisfy, a technical requirement or goal in support of the MOEs and MOPs.
  4. Assessing progress to plan starts with Earned Value Management on a periodic basis. Physical Percent Complete is determined through adherence to the planned TPMs at the time of assessment and the adjustment of Earned Value (BCWP) to forecast future performance, the Estimate at Completion for cost, and the Estimated Completion Duration for the schedule.
  5. For each key planned deliverable and the work activities to produce it, risks and their handling strategies are in place to adjust future performance assessments. Irreducible risks, such as duration and cost, are handled with margin. Reducible risks are handled with risk retirement plans. Compliance with the risk buy-down plan becomes a fifth assessment of progress to plan.

What Does All This Mean?

With these top level questions, many approaches are available, no matter what the domain or technology. But in the end, if we don't have answers, the probability of success will be reduced.

  1. If we don't have some idea of what DONE looks like in measures of effectiveness, then the project itself is in jeopardy from day one. The only way out is to pay money during the project to discover what DONE looks like. Agile does essentially this, but there are other ways. In all cases, knowing where we are going is mandatory. Exploring is the same as wandering around looking for a solution. If the customer is paying for this, the project is likely an R&D project. Even then, the "D" part of R&D has a goal to discover something useful for those paying.
  2. When we separate capabilities based planning from requirements elicitation, we are freed up to be creative, emergent, agile, and to maximize our investments. The technical and operational requirements can then be traced to the needed capabilities. This approach sets the stage for validation of each requirement, answering the question - why do we have this requirement? There is always mention of feature bloat; this is an approach to test each requirement for its contribution to the business or mission benefit of the project.
  3. The paradigm of Integrated Master Planning (IMP) provides the framework for assessing the increasing maturity of the project's deliverables. The IMP is the strategy for producing these deliverables along their path of increasing maturity to the final deliverables. Terms like preliminary, final, reviewed, accepted, installed, delivered, available, etc. are statements about the increasing maturity.
  4. Measuring physical percent complete starts with the exit criteria for all packages of work on the project. This is a paradigm of agile, but more importantly it has always been the paradigm in the defense and space domain. The foundation of this is Systems Engineering, which is just starting to appear in enterprise IT projects.
  5. "Risk Management is how adults manage projects" - Tim Lister. This says it all. If you don't have a risk register, an assessment of the probability of occurrence, impacts, handling plans, and residual risk and its impact - those risks are not going away. Each risk has to be monetized and its handling included in the budget. 


Categories: Project Management

Why You Need To Start Writing

Making the Complex Simple - John Sonmez - Thu, 04/10/2014 - 16:00

I used to hate writing. It always felt like such a strain to put my ideas on paper or in a document. Why not just say what I thought? In this video, I’ll tell you why I changed my mind about writing and why I think writing is one of the best things you can […]

The post Why You Need To Start Writing appeared first on Simple Programmer.

Categories: Programming

You Make It Happen (I Go Where People Send Me)

NOOP.NL - Jurgen Appelo - Thu, 04/10/2014 - 08:59
Management 3.0 Book Tour

I don’t decide which countries I go to. You do.

My new one-day workshop follows an important principle:
I go where people send me.
Selecting countries and cities
Every week I get questions such as “Will your book tour come to Argentina?” “When will you visit China?” and “Why are you not planning for Norway?”

And every time my answer is the same: I go where people send me. The backlog of countries is based on the readers of my mailing list.

The post You Make It Happen (I Go Where People Send Me) appeared first on NOOP.NL.

Categories: Project Management

Trust, Adaptable Culture and Coaches and the Success of Agile Implementation

Adaptable culture, not adaptable hair…

So far we have discussed three of the top factors for successful Agile implementations:

  1. Senior Management
  2. (Tied) Engagement and Early Feedback

Tied for fourth place in the list of success factors are trust, adaptable culture and coaching.

Trust was one of the surprises on the list. Trust, in this situation, means that all of the players needed to deliver value, including the team, stakeholders and management, should exhibit predictable behavior. From the team’s perspective, there needs to be trust that they won’t be held to other process standards to judge how they deliver if they adopt Agile processes. From a stakeholder and management perspective, there needs to be trust that a team will live up to the commitments they make.

An adaptable culture reflects an organization’s ability to make and accept change.  I had expected this to be higher on the list.  Implementing Agile generally requires that an organization make a substantial change to how people are managed and how work is delivered.  Those changes typically impact not only the project team, but also the business departments served by the project. Organizations that do not adapt to change well rarely make a jump into Agile painlessly. Organizations that have problems adapting will need to spend significantly more effort on organizational change management.

Coaches help teams, stakeholders and other leaders within an organization learn how to be Agile. Being Agile requires some combination of knowing specific techniques and embracing a set of organizational principles. Even in more mature Agile organizations, coaches bring new ideas to the table, different perspectives and a shot of energy. That shot of energy is important to implementing Agile and then for holding on to those new behaviors until they become muscle memory.

Change in organizations is rarely easy. Those being asked to change very rarely perceive the change as being for the better, which makes trust very difficult. Adopting Agile requires building trust between teams, the business and IT management, and vice versa. Coaching is a powerful tool to help even adaptable organizations build trust and embrace Agile as a mechanism to deliver value and as a set of principles for managing work.

Categories: Process Management

Build a map infographic with Google Maps & JavaScript

Google Code Blog - Wed, 04/09/2014 - 18:30
Cross-posted from the Geo Developers Blog

By Mark McDonald, Google GeoDevelopers Team

We recently announced the launch of the data layer in the Google Maps JavaScript API, including support for GeoJSON and declarative styling.  Today we’d like to share a technical overview explaining how you can create great looking data visualizations using Google Maps.

Here’s our end goal. Click through to interact with the live version.
Data provided by the Census Bureau Data API but is not endorsed or certified by the Census Bureau.

The map loads data from two sources: the shape outlines (polygons) are loaded from a public Google Maps Engine table and we query the US Census API for the population data. You can use the controls above the map to select a category of data to display (the "census variable"). The display is then updated to show a choropleth map shading the various US regions in proportion to the values recorded in the census.

How it works

When the map loads, it first queries the Google Maps Engine API to retrieve the polygons defining the US state boundaries and render them using the loadGeoJson method. The controls on the map are used to select a data source and then execute a query against the US Census Data API for the specified variable.

Note: At the time of writing, the data layer and functions described here require you to use the experimental (3.exp) version of the Maps API.

Loading polygons from Maps Engine

The Maps Engine API's Table.Features list method returns resources in GeoJSON format, so the API response can be loaded directly using loadGeoJson. For more information on how to use Maps Engine to read public data tables, check out the developer guide.

The only trick in the code below is setting the idPropertyName for the data that is loaded. When we load the census data we'll need a way to connect it with the Maps Engine data based on some common key. In this case we're using the 'STATE' property.

Importing data from the US Census API

The US Census Bureau provides an API for querying data in a number of ways. This post will not describe the Census API, other than to say that the data is returned in JSON format. We use the state ID, provided in the 2nd column, to look up the existing state data (using the lookupId method of google.maps.Data) and update it with the census data (using the setProperty method of google.maps.Data).

Styling the data

Data can be styled through the use of a Data.StyleOptions object or through a function that returns a Data.StyleOptions object. Here we create a choropleth map by applying a gradient to each polygon in the dataset based on the value in the census data.

In addition to the coloring, we've created an interactive element by adding events that respond to mouse activity. When you hover your mouse cursor (or finger) over a region with data, the border becomes heavier and the data card is updated with the selected value.
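
As a rough sketch of the flow just described, in TypeScript-flavored JavaScript (the GeoJSON URL, property names, and normalization are placeholders, and the id lookup is shown with google.maps.Data's getFeatureById; see the live article for the real code):

// Condensed sketch of the census choropleth flow.
declare const google: any; // Maps JavaScript API, loaded separately

const map = new google.maps.Map(document.getElementById('map'), {
  center: { lat: 40, lng: -100 },
  zoom: 4,
});

// 1. Load the state polygons; 'STATE' is the shared key used later to
//    join census rows onto their features.
map.data.loadGeoJson(
  'https://example.com/us-states.geojson', // placeholder source
  { idPropertyName: 'STATE' });

// 2. After querying the census API, attach each value to its feature.
function joinCensusRow(stateId: string, value: number): void {
  const feature = map.data.getFeatureById(stateId);
  if (feature) feature.setProperty('census_value', value);
}

// 3. Style each polygon from its data: a naive single-hue gradient.
map.data.setStyle((feature: any) => {
  const value = feature.getProperty('census_value') || 0;
  return {
    fillColor: 'green',
    fillOpacity: Math.min(value / 1e7, 1), // placeholder normalization
    strokeWeight: 1,
  };
});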
We’ve also used a custom basemap style in this example to provide some contrast to the colorful data. 
Check out Google Maps Engine if you need somewhere to store your geospatial data in the cloud, as we’ve done here. If you have any questions on using these features, check out the docs for the data layer and the Maps Engine API or head over to Stack Overflow and ask there. You can also check out this article’s permanent home, where the interactive version lives.

Posted by Louis Gray, Googler
Categories: Programming

Dart improves async and server-side performance

Google Code Blog - Wed, 04/09/2014 - 17:40
Cross-posted from the Chromium Blog

By Anders Johnsen, Google Chrome Team

Today's release of the Dart SDK version 1.3 includes a 2x performance improvement for asynchronous Dart code combined with server-side I/O operations. This puts Dart in the same league as popular server-side runtimes and allows you to build high-performance server-side Dart VM apps.

We measured request-per-second improvements using three simple HTTP benchmarks: Hello, File, and JSON. Hello, which improved by 130%, provides a measure for how many basic connections an HTTP server can handle, by simply measuring an HTTP server responding with a fixed string. The File benchmark, which simulates the server accessing and serving static content, improved by nearly 30%. Finally, as a proxy for performance of REST apps, the JSON benchmark nearly doubled in throughput. In addition to great performance, another benefit of using Dart on the server is that it allows you to use the same language and libraries on both the client and server, reducing mental context switches and improving code reuse.

The data for the chart above was collected on an Ubuntu 12.04.4 LTS machine with 8GB RAM and an Intel(R) Core(TM) i5-2400 CPU, running a single-isolate server on Dart VM versions 1.1.3, 1.2.0 and 1.3.0-dev.7.5.
The source for the benchmarks is available.
We are excited about these initial results, and we anticipate continued improvements for server-side Dart VM apps. If you're interested in learning how to build a web server with Dart, check out the new Write HTTP Clients and Servers tutorial and explore the programmer's guide to command-line apps with Dart. We hope to see what you build in our Dartisans G+ community.

Anders Johnsen is a software engineer on the Chrome team, working in the Aarhus, Denmark office. He helps Dart run in the cloud.

Posted by Louis Gray, Googler
Categories: Programming

Manage Your Job Search is Out; You Get the Presents

I am happy to announce that Manage Your Job Search is available on all platforms: electronic and print. And, you get the presents!

For one week, I am running a series of special conference calls to promote the book. Take a look at my celebration page.

I also have special pricing on Hiring Geeks That Fit as a bundle at the Pragmatic Bookshelf, leanpub, and on Amazon. Why? Because you might want to know how great managers hire.

Please do join me on the conference calls. They’ll be short, sweet, and a ton of fun.

Categories: Project Management

Using Key Performance Indicators (KPI) and Features in Project Success Measurements

Software Requirements Blog - Wed, 04/09/2014 - 12:27
As I pointed out in a prior post on project success measurements, overall project success and the success of the related IT development effort can be mutually exclusive of each other.  A business can achieve the objectives for a certain initiative regardless of whether the related IT effort succeeds or not.  Similarly, an IT initiative […]

Using Key Performance Indicators (KPI) and Features in Project Success Measurements is a post from:

Categories: Requirements

Engagement and Early Feedback and the Success of Agile Implementation

Engagement and feedback are interrelated like the bricks in the aqueduct.

In Senior Management and the Success of Agile Implementation, I described the results of a survey of what a group of experienced process improvement personnel, testers and developers felt contributes to a successful Agile implementation. Tied for second place in the survey were team engagement and generating early feedback. These two concepts are curiously inter-related.

Team engagement is a reflection of motivated and capable individuals working together.  Agile provides teams with the tools to instill unity of purpose. Working with the business on a continuous basis provides the team a clear understanding of the project’s purpose. Short iterations provide the team with a sense of progress. Self-management and retrospectives provide teams with a degree of control over how they tackle impediments.  Finally, the end-of-sprint demonstrations provide early feedback. Feedback helps reinforce the team’s sense of purpose, which reinforces motivation.

Early feedback was noted in the survey as often as team engagement. In classic software development projects, the project would progress from requirements through analysis, design, coding and testing before customers would see functional code.  Progress in these methods is conveyed through process documents (e.g. requirements documents) and status reports. On the other hand, one of the most important principles of Agile states:

Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.

Delivering functional software provides all of the project’s stakeholders with explicit proof of progress and a chance to provide feedback based on code they can execute. Early feedback increases stakeholder engagement and satisfaction, which also helps to motivate the team. As importantly, since stakeholders see incremental progress, any required course corrections are also incremental. Incremental course corrections help to ensure that when the project is complete, the most value possible has been delivered.

Team engagement and early feedback are both important to successful Agile implementations. Interestingly, both concepts are inter-twined. Feedback helps to generate engagement and motivation. As one of the respondents to the survey stated, “Agile succeeds when it instills ‘unity of purpose’ and builds a ‘community of trust’ within an organization.” Team engagement and early feedback provides a platform for Agile success.

Categories: Process Management

Data Types

Eric.Weblog() - Eric Sink - Tue, 04/08/2014 - 19:00

(This entry is part of a series. The audience: SQL Server developers. The topic: SQLite on mobile devices.)

Different types of, er, types

At the SQL language level, the biggest difference with SQLite is the way it deals with data types. There are three main differences to be aware of:

  1. There are only a few types

  2. And types are dynamic

  3. (But not entirely, because they have affinity)

  4. And type declarations are weird

Okay, so actually that's FOUR things, not three. But the third one doesn't really count, so I'm not feeling terribly obligated to cursor all the way back up to the top just to fix the word "three". Let's keep moving.

Only a few types

SQLite values can be one of the following types:


  • INTEGER

  • REAL

  • TEXT

  • BLOB

The following table shows roughly how these compare to SQL Server types:

SQL Server | SQLite | Notes
tinyint, smallint, int, bigint, bit | INTEGER | In SQLite, all integers are up to 64 bits wide (like bigint), but smaller values are stored more efficiently.
real, float | REAL | In SQLite, all floating point numbers are 64 bits wide.
char, varchar, nchar, nvarchar, text, ntext | TEXT | In SQLite, all strings are Unicode, and it doesn't care about widths on TEXT columns.
binary, varbinary, image | BLOB | Width doesn't matter here either.
decimal, numeric, money, smallmoney | INTEGER ? | These are problematic, as SQLite 3 does not have a fixed point type. (In Zumero, we handle synchronization of these by mapping them to INTEGER and handling the scaling.)
date, datetime, datetime2, datetimeoffset, smalldatetime, time | (your choice) | SQLite has no data types for dates or times. However, it does have a rich set of built-in functions for manipulating date/time values represented as text (ISO-8601 format), integer (unix time) or real (Julian day).

Types are dynamic

In SQL Server, the columns in a table are strictly typed. If you define a column to be of type smallint, then every value in that column must be a 16 bit signed integer.

In contrast, SQLite's approach might be called "dynamic typing". Quoting from its own documentation: "In SQLite, the datatype of a value is associated with the value itself, not with its container."

For example, the following code will fail on SQL Server:

CREATE TABLE [foo] (a smallint);
INSERT INTO [foo] (a) VALUES (3);
INSERT INTO [foo] (a) VALUES (3.14);
INSERT INTO [foo] (a) VALUES ('pi');

But on SQLite, it will succeed. The value in the first row is an INTEGER. The value in the second row is a REAL. The value in the third row is a TEXT string.

sqlite> SELECT a, typeof(a) FROM foo;
3|integer
3.14|real
pi|text

The column [a] is a container that simply doesn't care what you place in it.

Type affinity

Well, actually, it does care. A little.

A SQLite column does not have a type requirement, but it does have a type preference, called an affinity. I'm not going to reiterate the type affinity rules from the SQLite website here. Suffice it to say that sometimes SQLite will change the type of a value to match the affinity of the column, but you probably don't need to know this, because:

  • If you declare a column of type TEXT and always insert TEXT into it, nothing weird will happen.

  • If you declare a column of type INTEGER and always insert INTEGER into it, nothing weird will happen.

  • If you declare a column of type REAL and always insert REAL into it, nothing weird will happen.

In other words, just store values of the type that matches the column. This is the way you usually do things anyway.

Type declarations are weird

In a column declaration, SQLite has a rather funky set of rules for how it parses the type. It uses these rules to try its very best to Do The Right Thing when somebody ports SQL code from another database.

For example, all of the columns in the following table end up with TEXT affinity, which is probably what was intended:

[a] varchar(50),
[b] char(5),
[c] nchar,
[d] nvarchar(5),
[e] nvarchar(max),
[f] text

But in some cases, the rules are funky. Here are more declarations which all end up with TEXT affinity, even though none of them look right:

[a] characters,
[b] textish,
[c] charbroiled,
[d] context

And if you want to be absurd, SQLite will let you. Here's an example of a declaration of a column with INTEGER affinity:

[d] My wife and I went to Copenhagen a couple weeks ago
    to celebrate our wedding anniversary 
    and I also attended SQL Saturday while I there
    and by the way we saw
    Captain America The Winter Soldier 
    there as well which means I got to see it 
    before all my friends back here in Illinois 
    and the main reason this blog entry is late is 
    because I spent most of the following week gloating

SQLite will accept nearly anything as a type name. Column [d] ends up being an INTEGER because its ridiculously long type name contains the characters "INT" (in "Winter Soldier").

Perhaps we can agree that this "feature" could be easily abused.

There are only four types anyway. Pick a name for each type and stick to it. Once again, the official names are:


  • INTEGER

  • REAL

  • TEXT

  • BLOB

(If you want a little more latitude, you can use INT for INTEGER. Or VARCHAR for TEXT. But don't stray very far, mkay?)

Pretend like these are the only four things that SQLite will allow, and then it will never surprise you.


SQLite handles types very differently from SQL Server, but its approach is mostly a superset of your existing habits. The differences explained above might look like a big deal, but in practice, they probably won't affect you all that much.


Microservices - Not a free lunch!

This is a guest post by Benjamin Wootton, CTO of Contino, a London based consultancy specialising in applying DevOps and Continuous Delivery to software delivery projects. 

Microservices are a style of software architecture that involves delivering systems as a set of very small, granular, independent collaborating services.

Though they aren't a particularly new idea, Microservices seem to have exploded in popularity this year, with articles, conference tracks, and Twitter streams waxing lyrical about the benefits of building software systems in this style.

This popularity is partly off the back of trends such as Cloud, DevOps and Continuous Delivery coming together as enablers for this kind of approach, and partly off the back of great work at companies such as Netflix who have very visibly applied the pattern to great effect.

Let me say up front that I am a fan of the approach. Microservices architectures have lots of very real and significant benefits:

Categories: Architecture

Don't Start With Requirements Start With Capabilities

Herding Cats - Glen Alleman - Tue, 04/08/2014 - 15:08

Lots of myths are floating around about requirements elicitation and management. Starting with requirements is not how large, complex, mission critical DOD and NASA programs work. Start with capabilities. This can be directly transferred to Enterprise IT.

Here's a map of the planned capabilities for an ERP system. This figure is from Performance-Based Project Management®, where all the details of this and other principles, practices, and processes needed for project success can be found.

Here each business systems capability is outlined in the order needed to maximize the business value. In agile parlance, the customer has prioritized the deliverables. But in fact the prioritization is part of the strategic planning needed to assure that the cost investment returns the maximum value and the business maintains a positive ROI throughout the life of the project.

Capabilities Map

The first step is to identify the needed capabilities. Here are the steps:

Capabilities Based Planning

Only when we have the capabilities defined for each stage of the project can we start on the requirements. All the capabilities need to be identified and sequenced, otherwise we can't be assured the business goals can be met for the planned investment. With the planned capabilities, we can start on the requirements.

Requirements Steps

With requirements in place for each capability, we can then start development. This is done incrementally and iteratively using our favorite agile method. It doesn't matter which, as long as the approach delivers value incrementally.

Categories: Project Management

Google Play App Translation Service

Android Developers Blog - Tue, 04/08/2014 - 07:04

Posted by Ellie Powers, Google Play team

Today we are happy to announce that the App Translation Service, previewed in May at Google I/O, is now available to all developers. Every day, more than 1.5 million new Android phones and tablets around the world are turned on for the first time. Each newly activated Android device is an opportunity for you as a developer to gain a new user, but frequently, that user speaks a different language from you.

To help developers reach users in other languages, we launched the App Translation Service, which allows developers to purchase professional app translations through the Google Play Developer Console. This is part of a toolbox of localization features you can (and should!) take advantage of as you distribute your app around the world through Google Play.

We were happy to see that many developers expressed interest in the App Translation Service pilot program, and it has been well received by those who have participated so far, with many repeat customers.

Here are several examples from developers who participated in the App Translation Service pilot program: the developers of Zombie Ragdoll used this tool to launch their new game simultaneously in 20 languages in August 2013. When they combined app translation with local marketing campaigns, they found that 80% of their installs came from non-English-language users. Dating app SayHi Chat expanded into 13 additional languages using the App Translation Service. They saw 120% install growth in localized markets and improved user reviews of the professionally translated UI. The developer of card game G4A Indian Rummy found that the App Translation Service was easier to use than their previous translation methods, and saw a 300% increase in user engagement with localized apps. You can read more about these developers’ experiences with the App Translation Service in Developer Stories: Localization in Google Play.

To use the App Translation Service, you’ll want to first read the localization checklist. You’ll need to get your APK ready for translation, and select the languages to target for translation. If you’re unsure about which languages to select, Google Play can help you identify opportunities. First, review the Statistics section in the Developer Console to see where your app has users already. Does your app have a lot of installs in a certain country where you haven’t localized to their language? Are apps like yours popular in a country where your app isn’t available yet? Next, go to the Optimization Tips section in the Developer Console to make sure your APK, store listing, and graphics are consistently translated.

You’ll find the App Translation Service in the Developer Console at the bottom of the APK section — you can start a new translation or manage an existing translation here. You’ll be able to upload your app’s file of string resources, select the languages you want to translate into, select a professional translation vendor, and place your order. Pro tip: you can put your store listing text into the file you upload to the App Translation Service. You’ll be able to communicate with your translator to be sure you get a great result, and download your translated string files. After you do some localization testing, you’ll be ready to publish your newly translated app update on Google Play — with localized store listing text and graphics. Be sure to check back to see the results on your user base, and track the results of marketing campaigns in your new languages using Google Analytics integration.

Good luck! Bonne chance ! ご幸運を祈ります! 행운을 빌어요 ¡Buena suerte! Удачи! Boa Sorte!

Join the discussion on

+Android Developers
Categories: Programming

Improved App Insight by Linking Google Analytics with Google Play

Android Developers Blog - Tue, 04/08/2014 - 02:02

Posted by Ellie Powers, Google Play team

A key part of growing your app’s installed base is knowing more about your users — how they discover your app, what devices they use, what they do when they use your app, and how often they return to it. Understanding your users is now made easier through a new integration between Google Analytics and the Google Play Developer Console.

Starting today, you can link your Google Analytics account with your Google Play Developer Console to get powerful new insights into your app’s user acquisition and engagement. In Google Analytics, you’ll get a new report highlighting which campaigns are driving the most views, installs, and new users in Google Play. In the Developer Console, you’ll get new app stats that let you easily see your app’s engagement based on Analytics data.

This combined data can help you take your app business to the next level, especially if you’re using multiple campaigns or monetizing through advertisements and in-app products that depend on high engagement. Linking Google Analytics to your Developer Console is straightforward — the sections below explain the new types of data you can get and how to get started.

In Google Analytics, see your app’s Google Play referral flow

Once you’ve linked your Analytics account to your Developer Console, you’ll see a new report in Google Analytics called Google Play Referral Flow. This report details each of your campaigns and the user traffic that they drive. For each campaign, you can see how many users viewed your listing page in Google Play and how many went on to install your app and ultimately launch it on their mobile devices.

With this data you can track the effectiveness of a wide range of campaigns — such as blogs, news articles, and ad campaigns — and get insight into which marketing activities are most effective for your business. You can find the Google Play report by going to Google Analytics and clicking on Acquisitions > Google Play > Referral Flow.

In the Developer Console, see engagement data from Google Analytics

If you’re already using Google Analytics, you know how important it is to see how users are interacting with your app. How often do they launch it? How much do they do with it? What are they doing inside the app?

Once you link your Analytics account, you’ll be able to see your app’s engagement data from Google Analytics right in the Statistics page in your Developer Console. You’ll be able to select two new metrics from the drop-down menu at the top of the page:

  • Active users: the number of users who have launched your app that day
  • New users: the number of users who have launched your app for the first time that day

These engagement metrics are integrated with your other app statistics, so you can analyze them further across other dimensions, such as by country, language, device, Android version, app version, and carrier.

How to get started

To get started, you first need to integrate Google Analytics into your app. If you haven’t done this already, download the Google Analytics SDK for Android and then take a look at the developer documentation to learn how to add Analytics to your app. Once you’ve integrated Analytics into your app, upload the app to the Developer Console.

Next, you’ll need to link your Developer Console to Google Analytics. To do this, go to the Developer Console and select the app. At the bottom of the Statistics page, you’ll see directions about how to complete the linking. The process takes just a few moments.

That’s it! You can now see both the Google Play Referral Flow report in Google Analytics and the new engagement metrics in the Developer Console.

Join the discussion on

+Android Developers
Categories: Programming

Senior Management and the Success of Agile Implementation

Senior leadership needs to lead by example.

Over the past few weeks I have been asking friends and colleagues to formally answer the following question:

What are the top reasons you think an organization succeeds in implementing Agile?

The group that participated in this survey comes from a highly experienced cohort of process improvement personnel, testers and developers. Not all of the respondents were sure Agile and success belonged in the same sentence (more on that later in the week). There was a rich range of answers; however, after the first dozen responses a consensus formed. Today I would like to explore the most important success factor as reported in this survey: senior leadership support.

Senior management support was the most often mentioned factor influencing Agile success. One of the most significant nuances of senior management support was exhibiting a true understanding of Agile. In particular, senior managers must understand what Agile really is rather than falling prey to buzzword bingo. One of the respondents suggested: “I feel most senior leaders that I have dealt with don’t have a full understanding of what is needed and it trickles down to the rest of the organization.” Senior leaders need to walk the talk when it comes to Agile if they expect to implement Agile successfully. They need to prove to both team members and middle managers that they understand how Agile impacts the flow of work through a sprint and that Agile teams are expected to self-organize. Senior leaders will help pull the transition to Agile forward by asking questions that elicit proof that teams are acting Agile. For example, asking to see a team’s burn-down chart rather than a report-based status report sends a strong message that leads behavior.

In many organizations, following the process is as important as the outcome of any specific project. This is based on the presumption that precisely following the process foreshadows success. In the role of process champion, senior leaders own one of the more significant barriers to change. Senior leadership needs to give teams incentives to try new processes such as Scrum. Senior managers need to understand that Agile frameworks are scaffolds that need to be tailored to fit project needs and requirements. Providing incentives for teams to experiment will create an environment of flexibility, so that teams can decide how to address impediments as soon as they are encountered.

Teams need support from senior leadership to allow innovation, or Agile implementations will fail. Support for Agile innovation derives from the expectation of senior management that teams will use Agile techniques. These expectations need to be part of the annual goals and objectives and be in evidence in the questions that leaders ask of middle managers and project teams. Asking questions that require teams to prove they are using Agile is very powerful evidence of a senior leader’s expectations.

Categories: Process Management