
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Testing Principles: Part 2

Let these principles be a caution!


In Testing Principles Part 1: This is not Pokémon we noted that we need to strive to avoid injecting defects into systems in the first place, because while testing can find many defects, it can never find them all. The next set of principles suggests both where and when to focus our testing work, and then leaves us with a strong and sobering reminder. The principles include:

4. Early testing. Testing includes activities that execute the product (dynamic testing) and activities that review the product (static testing). In software, most people would recognize dynamic testing, which includes executing test cases and comparing results.  Static testing includes reviews and inspections in which a person (or tool in some cases) looks at the code or deliverable and compares it to requirements or another standard. Reviews and inspections can be applied to any piece of work at any time in the development life cycle. Reviewing or testing work products as soon as they are available will find defects earlier in the process and will reduce the possibility of rework.
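To make the static/dynamic distinction concrete, here is a toy sketch (in Python; the `discount` function and its planted bug are invented for illustration). The same defect can be approached either by inspecting the code as text, as a review or tool would, or by executing it and comparing results:

```python
import ast

# Hypothetical function under test; the bug is planted for illustration.
source = """
def discount(price, rate):
    return price * rate  # bug: should be price * (1 - rate)
"""

# Static testing: examine the code without executing it.
tree = ast.parse(source)
func_names = [node.name for node in ast.walk(tree) if isinstance(node, ast.FunctionDef)]
print(func_names)  # a review confirms which functions exist and reads their logic

# Dynamic testing: execute the code and compare actual results to expected ones.
namespace = {}
exec(source, namespace)
actual = namespace["discount"](100, 0.2)  # a 20% discount on 100 should leave 80
expected = 80.0
print(actual == expected)  # False -> the dynamic test has surfaced the defect
```

A human reviewer reading the source would likely spot the same bug without ever running it, which is the point of early, static testing.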

5. Defect clustering. When you find one defect in a deliverable, there will generally be others waiting to be found. The clustering can be caused by the technical or business complexity of the product, misinterpretation of a requirement or other issues (for example, left to my own devices I would get the “ie” thing wrong when spelling words – thank you, static testing tool: spell check). Given that defects tend to cluster, if the budget for testing isn’t unlimited then spending more time on areas where defects have already been found is a good risk mitigation strategy.

6. Testing is context dependent. The type of testing that will be the most effective and efficient is driven by the type of product or project. For example, a list of user stories can be reviewed or inspected but can’t be dynamically tested (words can’t be executed like code). Programmers will unit test code based on their knowledge of the structure of the code (white box testing), while product owners and other business stakeholders will test based on their understanding of how the overall product will behave (black box testing).

Principle 7 could have been included in part one, however it serves well as a final cautionary tale.

7. Absence of errors fallacy. Just because you have found and corrected all of the defects possible does not mean that the product being delivered is what the customer wants or is useful. My personal translation of this principle is that unless you build the right thing, building it right isn’t even an interesting conversation.

The seven testing principles lead us to understand that we need to focus our efforts on building the right product, using risk to target our limited resources, and finding defects as early as possible. Testing is an important part of delivering value from any project, however it is never sufficient. If you remember one concept from the seven Testing Principles, it is that we can’t test in quality or value. Those two attributes require that everyone on the project consider quality and value rather than putting that mantle on the shoulders of testers alone.

Categories: Process Management

The Infinite Space Between Words

Coding Horror - Jeff Atwood - Fri, 05/16/2014 - 20:42

Computer performance is a bit of a shell game. You're always waiting for one of four things:

  • Disk
  • CPU
  • Memory
  • Network

But which one? How long will you wait? And what will you do while you're waiting?

Did you see the movie "Her"? If not, you should. It's great. One of my favorite scenes is the AI describing just how difficult it becomes to communicate with humans:

It's like I'm reading a book… and it's a book I deeply love. But I'm reading it slowly now. So the words are really far apart and the spaces between the words are almost infinite. I can still feel you… and the words of our story… but it's in this endless space between the words that I'm finding myself now. It's a place that's not of the physical world. It's where everything else is that I didn't even know existed. I love you so much. But this is where I am now. And this is who I am now. And I need you to let me go. As much as I want to, I can't live your book any more.

I have some serious reservations about the work environment pictured in Her where everyone's spending all day creepily whispering to their computers, but there is deep fundamental truth in that one pivotal scene. That infinite space "between" what we humans feel as time is where computers spend all their time. It's an entirely different timescale.

The book Systems Performance: Enterprise and the Cloud has a great table that illustrates just how enormous these time differentials are. Just translate computer time into arbitrary seconds:

1 CPU cycle                      0.3 ns      1 s
Level 1 cache access             0.9 ns      3 s
Level 2 cache access             2.8 ns      9 s
Level 3 cache access             12.9 ns     43 s
Main memory access               120 ns      6 min
Solid-state disk I/O             50-150 μs   2-6 days
Rotational disk I/O              1-10 ms     1-12 months
Internet: SF to NYC              40 ms       4 years
Internet: SF to UK               81 ms       8 years
Internet: SF to Australia        183 ms      19 years
OS virtualization reboot         4 s         423 years
SCSI command time-out            30 s        3,000 years
Hardware virtualization reboot   40 s        4,000 years
Physical system reboot           5 m         32 millennia
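The scaling behind the table is easy to reproduce: multiply each real latency by the factor that maps one 0.3 ns CPU cycle to one human second (roughly 3.3 billion). A quick sketch in Python using a few representative rows:

```python
# Scale factor: one 0.3 ns CPU cycle maps to one human second.
SCALE = 1 / 0.3e-9  # "human seconds" per real second

latencies_seconds = {
    "1 CPU cycle": 0.3e-9,
    "Main memory access": 120e-9,
    "Rotational disk I/O": 5e-3,   # midpoint of the 1-10 ms range
    "Internet: SF to NYC": 40e-3,
}

for name, real in latencies_seconds.items():
    human = real * SCALE
    print(f"{name}: {human:,.0f} human-seconds")

# Main memory comes out to 400 human-seconds (about 6-7 minutes), and the
# SF-to-NYC round trip to about 133 million human-seconds, i.e. roughly 4 years,
# matching the table.
```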

The above Internet times are kind of optimistic. If you look at the AT&T real time US internet latency chart, the time from SF to NYC is more like 70ms. So I'd double the Internet numbers in that chart.

Latency is one thing, but it's also worth considering the cost of that bandwidth.

Speaking of the late, great Jim Gray, he also had an interesting way of explaining this. If the CPU registers are how long it takes you to fetch data from your brain, then going to disk is the equivalent of fetching data from Pluto.

He was probably referring to traditional spinning rust hard drives, so let's adjust that extreme endpoint for today:

  • Distance to Pluto: 4.67 billion miles.
  • Latest fastest spinning HDD performance (49.7) versus latest fastest PCI Express SSD (506.8). That's an improvement of 10x.
  • New distance: 467 million miles.
  • Distance to Jupiter: 500 million miles.

So instead of travelling to Pluto to get our data from disk in 1999, today we only need to travel to … Jupiter.
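The arithmetic is worth checking; a quick sketch in Python using the numbers quoted above:

```python
pluto_miles = 4.67e9   # distance to Pluto, per the post
hdd_score = 49.7       # fastest spinning HDD benchmark figure quoted above
ssd_score = 506.8      # fastest PCI Express SSD benchmark figure quoted above

speedup = ssd_score / hdd_score       # a bit over 10x faster
new_distance = pluto_miles / speedup  # around 458 million miles

print(f"{speedup:.1f}x faster, trip shrinks to {new_distance / 1e6:.0f} million miles")
# Rounding the speedup down to 10x gives the post's 467 million miles --
# roughly the distance to Jupiter either way.
```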

That's disk performance over the last decade. How much faster did CPUs, memory, and networks get in the same time frame? Would a 10x or 100x improvement really make a dent in these vast infinite spaces in time that computers deal with?

To computers, we humans work on a completely different time scale, practically geologic time. Which is completely mind-bending. The faster computers get, the bigger this time disparity grows.

Categories: Programming

The Multiple SQLite Problem

Eric.Weblog() - Eric Sink - Fri, 05/16/2014 - 19:00
Eric, why the #$%! is your SQLite PCL taking so long?

It's Google's fault. And Apple's fault.


No. Yes. Kinda. Not really.

The Multiple SQLite Problem, In a Nutshell

If your app makes use of two separate instances of the SQLite library, you can end up with a corrupted SQLite data file.

From the horse's mouth

On the SQLite website, section 2.2.1 of How to Corrupt an SQLite Database File is entitled "Multiple copies of SQLite linked into the same application", and says:

As pointed out in the previous paragraph, SQLite takes steps to work around the quirks of POSIX advisory locking. Part of that work-around involves keeping a global list (mutex protected) of open SQLite database files. But, if multiple copies of SQLite are linked into the same application, then there will be multiple instances of this global list. Database connections opened using one copy of the SQLite library will be unaware of database connections opened using the other copy, and will be unable to work around the POSIX advisory locking quirks. A close() operation on one connection might unknowingly clear the locks on a different database connection, leading to database corruption.

The scenario above sounds far-fetched. But the SQLite developers are aware of at least one commercial product that was released with exactly this bug. The vendor came to the SQLite developers seeking help in tracking down some infrequent database corruption issues they were seeing on Linux and Mac. The problem was eventually traced to the fact that the application was linking against two separate copies of SQLite. The solution was to change the application build procedures to link against just one copy of SQLite instead of two.

At its core, SQLite is written in C. It is plain-old-fashioned native/unmanaged code. If you are accessing SQLite using C#, you are doing so through some kind of a wrapper. That wrapper is loading the SQLite library from somewhere. You may not know where. You probably don't [want to] care.

This is an abstraction. And it can leak. C# is putting some distance between you and the reality of what SQLite really is. That distance can somewhat increase the likelihood of you accidentally having two instances of the SQLite library without even knowing it.

SQLite as part of the mobile OS

Both iOS and Android contain an instance of SQLite as part of the basic operating system. This is a blessing. And a curse.

Built-in SQLite is nice because your app doesn't have to include it. This makes the size of your app smaller. It avoids the need to compile SQLite as part of your build process.

But the problem is that the OS has contributed one instance of the SQLite library that you can't eliminate. It's always there. The multiple SQLite problem cannot happen if only one SQLite is available to your app. Anybody or anything which adds one is risking a plurality.

If SQLite is always in the OS, why not always use it?

Because Apple and Google do a terrible job of keeping it current.

  • iOS 7 ships with SQLite 3.7.13. That shipped in June of 2012.

  • Android ships with SQLite 3.7.11. That shipped in March of 2012.

  • Since Android users never update their devices, a large number of them are still running SQLite 3.7.4, which shipped in December of 2010. (Yes, I know the sweeping generalization in the previous sentence is unfair. I like Android a lot, but I think Google's management of the Android world has been bad enough that I'm entitled to a little crabbiness.)

If you are targeting Android or iOS and using the built-in SQLite library, you are missing out on at least TWO YEARS of excellent development work by DRH and his team. Current versions of SQLite are significantly faster, with many bug fixes, and lots of insanely cool new features. This is just one of the excellent reasons to bundle a current version of SQLite into your app instead of using the one in the OS.
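The version gap is easy to audit at runtime, because the library reports its own version (the C API exposes it via sqlite3_libversion()). A sketch using Python's standard sqlite3 module purely for illustration; the (3, 7, 11) floor is just an example threshold:

```python
import sqlite3

# Which SQLite did this wrapper actually load? The library can tell you itself.
print(sqlite3.sqlite_version)       # e.g. "3.7.13" on iOS 7's built-in copy
print(sqlite3.sqlite_version_info)  # the same version as a comparable tuple

# A startup guard against silently running an ancient build:
if sqlite3.sqlite_version_info < (3, 7, 11):
    raise RuntimeError("linked SQLite is older than expected")
```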

And as soon as you do that, there are two instances in play. You and Apple/Google have collaborated to introduce the risk of database corruption.


AFAIK, no version of Windows includes a SQLite library. This is a blessing. And a curse. For all of the opposite reasons discussed above.

In general, building a mobile app for Windows (Phone or RT or whatever) means you have to include SQLite as part of the app. And when doing so, it certainly makes sense to just use the latest version.

And that introduces another reason somebody might want to use an application-private version of SQLite instead of the one built-in to iOS or Android. If you're building a cross-platform app, you probably want all your platforms using the same version of SQLite. Have fun explaining to your QA people that your app is built on SQLite 3.8.4 on Windows and 3.7.11 on Android and 3.7.13 on iOS.

BTW, it's not clear how or if Windows platforms suffer from the data corruption risk of the multiple SQLite problem. Given that the DRH explanation talks about workarounds for quirks in POSIX file locking, it seems likely that the situation on Windows is different in significant ways. Nonetheless, even if using multiple SQLite instances on Windows platforms is safe, it is still wasteful. And sad.

SQLCipher or SEE

Mobile devices get lost or stolen. A significant portion of mobile app developers want their data encrypted on the device. And the SQLite instance built-in to iOS and Android is plain, with no support for encryption.

The usual solution to this problem is to use SQLCipher (open source, from Zetetic) or SEE (proprietary, from the authors of SQLite). Both of these are drop-in replacements for SQLite.

In other words, this is yet another reason the OS-provided SQLite library might not be sufficient.

SQLite compilation options

SQLite can be compiled in a lot of different ways. Do you want the full-text-search feature? Do you want foreign keys to be default on or off? What do you want the default thread-safety mode to be? Do you need the column metadata feature? Do you need ICU for full Unicode support in collations? The list goes on and on.

Did Apple or Google compile SQLite with the exact set of build options your app needs? Maybe. Or maybe your app just needs to have its own.
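You don't have to guess: SQLite will list the options it was built with via the compile_options pragma. A sketch using Python's standard sqlite3 module for illustration; the full-text-search check at the end is just an example of a feature an app might depend on:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Each row is one compile-time option, with the SQLITE_ prefix stripped.
options = [row[0] for row in conn.execute("PRAGMA compile_options")]
conn.close()
print(options)  # e.g. ['ENABLE_FTS4', 'THREADSAFE=1', ...] depending on the build

# Check whether a feature your app needs was compiled in:
fts_enabled = any(opt.startswith("ENABLE_FTS") for opt in options)
```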

Adding a SQLite instance without knowing it

Another way to get two SQLite instances is to add a component or library which includes one. Even if you don't know.

For example, the client side of Zumero (our mobile SQL sync product) needs to call SQLite. Should it bundle a SQLite library? Or should it always call the one in the mobile OS (when available)?

Some earlier versions of the Zumero client SDK included a SQLite instance in our Xamarin component builds. Because, why on earth would we want our code running against the archaic version of SQLite provided by Apple and Google?

And then we had a customer run into this exact problem. They called Zumero for sync. And they used Mono.Data.Sqlite for building their app.

Now we ship builds which contain no SQLite library instance, because it minimizes the likelihood of this kind of accident happening.

There are all kinds of libraries and components and SDKs out there which build on SQLite. Are they calling the instance provided by the OS? Or are they bundling one? Do you even know?

So maybe app developers should just be more careful

Knee-jerk reaction: Yes, absolutely.

Better answer: Certainly not.

App developers don't want to think about this stuff. It's a bit of esoterica that nobody cares about. Most people who started reading this blog entry gave up several paragraphs ago. The ones that are still here (both of you) are wondering why you are still reading when right now there are seven cable channels showing a rerun of Law and Order.

An increasingly easy accident

The multiple SQLite scenario is sounding less far-fetched all the time. SQLite is now one of the most widely deployed pieces of software in history. It is incredibly ubiquitous, and still growing. And people love to build abstractions on top of it.

This problem is going to get more and more common.

And it can have very significant consequences for end users.

Think of it this way

The following requirements are very typical:

  • App developers want to be using a current version of SQLite (because DRH has actually been working for the last two years).

  • App developers want their SQLite data on the mobile device to be encrypted (because even grown-ups lose mobile devices).

  • App developers want to be using the same version of SQLite on all of their mobile app platforms (because it simplifies testing).

  • App developers want no risk of data corruption (because end users don't like that kind of thing).

  • App developers want to work with abstractions, also-known-as ORMs and sync tools, also-known-as things that make their lives easier (because writing mobile apps is insanely expensive and it is important to reduce development costs).

  • App developers want to NOT have to think about anything in this blog entry (because they are paid to focus on their actual business, which is medicine or rental cars or construction, and it's 2014, so they shouldn't have to spend any time on the ramifications of quirky POSIX file locking).

Those requirements are not just typical, they are reasonable. To ask app developers to give up any of these things would be absurd.

And right now, there is NO WAY to satisfy all the requirements above. In the terminology of high school math, this is a system of equations with no solution.

To be fair

The last several weeks of "the NuGet package is almost ready" are also due to some reasons I can't blame Apple or Google or POSIX for.

When I started working on SQLitePCL.raw, I didn't know nearly enough about MSBuild or NuGet. Anything involving native code with NuGet is pretty tricky. I've spent time climbing the learning curve. My particular way of learning new technologies is to write the code three times. The commit history on GitHub contains the whole story.

Ramifications for SQLitePCL.raw

I want users of my SQLite PCL to have a great experience, so I'm spending [perhaps too much] time trying to find the sweetest subsets of the requirements above.

For example: C++/CX is actually pretty cool. I can build a single WP8 component DLL which is visible to C# while statically building SQLite itself inside. Fewer pieces. Fewer dependencies. Nice. But if anything else in the app needs direct access to SQLite, they'll have to include another instance of the library. Yuck.

Another example: I see [at least] three reasonable choices for iOS:

  • Use the SQLite provided by iOS. It's a shared library. Access it with P/Invoke, DllImport("sqlite3").

  • Bundle the latest SQLite. DllImport("__Internal"), and embed a sqlite3.a as a resource and use the MonoTouch LinkWith attribute.

  • Use the Xamarin SQLCipher component. DllImport("__Internal"), but don't bundle anything, relying on the presence of the SQLCipher component to make the link succeed.

Which one should the NuGet package assume that people want? How do people that prefer the others get a path that Just Works?

So, Eric, when will the SQLitePCL.raw NuGet package be ready?

Soon. ;-)

Bottom line

"I don't know the key to success, but the key to failure is trying to please everybody." -- Bill Cosby


Helping You Go Global with More Seamless Google Play Payments

Android Developers Blog - Fri, 05/16/2014 - 18:52

By Ibrahim Elbouchikhi, Google Play Product Manager

Sales of apps and games on Google Play are up by more than 300 percent over the past year. And today, two-thirds of Google Play purchases happen outside of the United States, with international sales continuing to climb. We’re hoping to fuel this momentum by making Google Play payments easier and more convenient for people around the world.

PayPal support

Starting today, we're making it possible for people to choose PayPal for their Google Play purchases in 12 countries, including the U.S., Germany, and Canada. When you make a purchase on Google Play in these countries, you'll find PayPal as an option in your Google Wallet; just enter your PayPal account login and you'll easily be able to make purchases. Our goal is to provide users with a frictionless payment experience, and this new integration is another example of how we work with partners from across the payments industry to deliver this to the user.

Carrier billing and Google Play gift cards in more countries

Carrier billing—which lets people charge purchases in Google Play directly to their phone bill—continues to be a popular way to pay. We’ve just expanded coverage to seven more countries for a total of 24, including Singapore, Thailand and Taiwan. That means almost half of all Google Play users have this option when making their purchases.

We’ve also made Google Play gift cards available to a total of 13 countries, including Japan and Germany.

Support for developer sales in more countries

Developers based in 13 new countries can now sell apps on Google Play (with new additions such as Indonesia, Malaysia and Turkey), bringing the total to 45 countries with support for local developers. We’ve also increased our buyer currency support to 28 new countries, making it even easier for you to tailor your pricing in 60 countries.

Nothing for you to do!

Of course, as developers, when it comes to payments, there’s nothing for you to do; we process all payments, reconcile all currencies globally, and make a monthly deposit in your designated bank account. This means you get to focus on what you do best: creating beautiful and engaging apps and games.

Visit for more information.

Per-country availability of forms of payment is summarized here.

Join the discussion on +Android Developers
Categories: Programming

Google I/O 2014: start planning your schedule

Google Code Blog - Fri, 05/16/2014 - 17:30
By Katie Miller, Google Developer Marketing

From making your apps as powerful as they can be to putting them in front of hundreds of millions of users, our focus at Google is to help you design, develop and distribute compelling experiences for your users. At Google I/O 2014, happening June 25-26 at Moscone West in San Francisco, we’re bringing you sessions and experiences ranging from design principles and techniques to the latest developer tools and implementations to developer-minded products and strategies to help distribute your app.

If you're coming in person, the schedule will give you more time to interact in the Sandbox, where partners will be on hand to demo apps built on the best of Google and open source, and where you can interact with Googlers 1:1 and in small groups. Don’t worry, though--we’ll have plenty of content online for those following along remotely! Visit the schedule on the Google I/O website (and check back often for updates). As you start your I/O planning, we want to highlight the experiences we’re working on to help you build and grow your apps:

  • Breakout sessions: This year, we’ll once again bring you a deep selection of technical content, including sessions such as "What's New in Android" and "Wearable computing with Google" from Android, Chrome and Cloud, and cross-product, cross-platform implementations. There will be a full slate of design sessions that will bring to life Google’s design principles and teach best practices, and an update on how our monetization, measurement and payment products are better suited than ever to help developers grow the reach of their applications. Sessions from Ray Kurzweil, Ignite and Women Techmakers will take the stage and make us uncomfortably excited about what is possible. The first sessions are now listed; keep checking back for more.
  • Workshops and code labs: Roll up your sleeves, dig in to hands-on experiences and code. Learn how to build better products, apply quantitative data to user experiences, and prototype new Glassware through interactive workshops on UX, experience mapping and design principles. To maximize your learning and give you more interaction with Googlers and peers, visit our coding work space, with work stations preloaded with self-paced modules. Dive into Android, Chrome, Cloud and APIs with experts on hand for guidance.
  • Connect with Googlers in the sandbox: Check out your favorite Google products and meet the Googlers who built them. From there, join a ‘Box talk or app review, ranging from conceptual prototyping, to performance testing with the latest tools, to turning your app into a successful business.
  • Learn from peers at the partner sandbox: We love to see partners build cool things with Google, and have invited a few of them to showcase inspiring integrations of what’s possible. You will be able to see demos and talk in-depth with them about how they designed, created and grew their apps.
  • Beyond Moscone, with I/O Extended: Experience I/O around the world, in an event setting, with I/O Extended. The I/O Extended events include everything from live streaming sessions from I/O to local speaker sessions and hackathons. It is great to see so many events taking place around the world, and we can't wait to see I/O Extended events have another strong year.

We look forward to seeing you next month, whether it’s in-person in San Francisco, at I/O Extended or online through the livestream!

Katherine Miller is part of the Developer Marketing team, working on session programming for Google I/O and developer research efforts. In her spare time she runs (both competitively and after her 2 children) and memorizes passages from beloved children's books.

Posted by Louis Gray, Googler
Categories: Programming

Stuff The Internet Says On Scalability For May 16th, 2014

Hey, it's HighScalability time:

Cross Section of an Undersea Cable. It's industrial art. The parts. The story.
  • 400,000,000,000: Wayback Machine pages indexed; 100 billion: Google searches per month; 10 million: Snapchat monthly user growth.
  • Quotable Quotes:
    • @duncanjw: The Great Rewrite - many apps will be rewritten not just replatformed over next 10 years says @cote #openstacksummit
    • @RFFlores: The Openstack conundrum. If you don't adopt it, you will regret it in the future. If you do adopt it, you will regret it now
    • elementai: I love Redis so much, it became like a superglue where "just enough" performance is needed to resolve a bottleneck problem, but you don't have resources to rewrite a whole thing in something fast.
    • @antirez: "when software engineering is reduced to plumbing together generic systems, software engineers lose their sense of ownership"
    • Tom Akehurst: Microservices vs. monolith is a false dichotomy.
    • @joestump: “Keep in mind that any piece of butt-based infrastructure can fail at any time. Plan your infrastructure accordingly.” Ain’t that the truth?
    • @SalesforceEng: Check out the scale of Kafka @LinkedInEng. @bonkoif says these numbers are about a month old. 3.25 million msgs/sec. 
    • Don Neufeld: The first is to look deeply into the stack of implicit assumptions I’m working with. It’s often the unspoken assumptions that are the most important ones. The second flows from the first and it’s to focus less on building the right thing and more how we’re going to meet our immediate needs.
    • Dan Gillmor: We’re in danger of losing what’s made the Internet the most important medium in history – a decentralized platform where the people at the edges of the networks – that would be you and me – don’t need permission to communicate, create and innovate.

  • If you think of a Hotel as an app, hotels have been doing in-app purchases for a long time. They lead with a teaser rate and then charge for anything that might cross a desire-money threshold. Wifi, that's extra. Gym, that's extra. The bar, a cover charge. Drinks, so so expensive. The pool, extra. A lounge by the pool is double extra extra. To go all the way hotels just need to let you stay for free and then fully monetize all the gamification points.

  • Apple: We handle hundreds of millions of active users using some of the most desirable devices on the planet and several billion iMessages/day, 40 billion push notifications/day, 16+ trillion push notifications sent to date.

  • It's a data prison for everyone! Comcast plans data caps for all customers in 5 years, could be 500GB. Or just a few 4K movies.

  • From the future of everything to the verge of extinction. The Slow Decline of Peer-to-Peer File Sharing: People have shifted their activities to streaming over file sharing. Subscribers get quality content at a reasonable price and it's dead simple to use, whereas torrenting or file sharing is a little more complicated.

  • I don't think people understand how hard this is to do in practice. European Court Lets Users Erase Records on Web. Once data is stored on tape, deleting it requires rewriting all the non-deleted data to another tape. So it's far more efficient to forget the indexes to the data than to delete the data itself. Which goes against the point, I'd imagine.

  • How is a strategy tax hands off? @parislemon: Instagram's decision to use Facebook's much worse place database over Foursquare's has made the product worse. Stupid.

  • Excellent detailed example of the SEDA architecture in action. Guide to Cassandra Thread Pools. Follow the regal message as it flows from thread pool to thread pool, transforming as it makes its way to its final resting place.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so keep on going)...

Categories: Architecture

Spike It! Article Posted

One of my clients was having trouble with estimating work they had never done before, so I wrote an article explaining spikes. That article is up on agileconnection: Need to Learn More about the Work You’re Doing? Spike It!

It was a little serendipity; I taught an estimation workshop right after I explained how to spike in email. That article almost wrote itself.

You can use a spike in any project, agile or not. Spikes are for learning.

I also explain what to do when managers say, “Use the code in the spike” and you think, “Oh no!” See for yourself.

Would-be authors: want to write an article for agileconnection? I’m interested.


Categories: Project Management

Interview with Esko Kilpi

NOOP.NL - Jurgen Appelo - Fri, 05/16/2014 - 14:21

As part of my global book tour I hope to have fascinating conversations with management & leadership experts around the world. One of them is Esko Kilpi, a researcher and consultant in Finland who has a focus on the “arts and sciences of complex interaction”. With Esko I talked about management & complexity.

The post Interview with Esko Kilpi appeared first on NOOP.NL.

Categories: Project Management

Testing Principles Part 1:  This is not Pokémon


I recently studied for and passed the International Software Testing Qualification Board’s (ISTQB) Certified Tester, Foundation Level (CTFL) exam. During my career I have been a tester, managed a test group and consulted on testing processes. During my time as a tester and a test manager, I was not explicitly aware of the seven principles of testing, however I think I understood them in my gut. Unfortunately, most of my managers and clients did not understand them, which meant they behaved in a way that never felt rational and always devolved into a discussion of why bugs made it into production. Whether you are involved in testing, developing, enhancing, supporting or managing IT projects, an understanding of the principles of testing can and should influence your professional behavior. I have broken the seven principles into two groups. Group one relates to why we can’t catch them all, and the second focuses on where we find defects. The first group includes:

  1. Testing shows the presence of defects. Stated differently, testing proves that the defects you find exist, but does not prove that there aren’t any other defects that you did not find. Understanding that testing does not prove that software or any product is defect free means that we always need to plan and mitigate the risk that we will find a defect as the development process progresses through to a production environment.
  2. Exhaustive testing is impossible. Testing all combinations of inputs, outputs and processing conditions is generally not possible (I was involved in a spirited argument at a testing conference that suggested that in very simple cases, exhaustive testing might be possible). Even if we set aside esoteric test cases, such as the possibility of a neutrino changing active memory while your software, application or product is using it, the number of possible permutations for even simple changes is eye-popping (consider calculating the number of possible combinations of a simple change with 15 independent inputs, each having 10 possible values). If exhaustive testing is not possible, then testers and test managers must use other techniques to focus the time and effort they have on what is important and risky. Developing an understanding of the potential impact and probability of problems (risk) is needed to target testing resources.
  3. Pesticide Paradox. The value of running the same type of test over and over on an application wanes over time. The metaphor of pesticide is used to draw attention to the fact that once a test finds the bugs it is designed to find (or can find – a factor of how the test is implemented), the remaining bugs will not be found by the test. Testing must be refactored over time to continue to be effective. This is why simply automating a set of tests and then running them over and over is not an adequate risk reduction strategy.
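A quick back-of-the-envelope check of the exhaustive-testing arithmetic above (15 independent inputs, 10 values each), sketched in Java for illustration only:

```java
public class ExhaustiveTesting {
    public static void main(String[] args) {
        // 15 independent inputs, each with 10 possible values
        double combinations = Math.pow(10, 15);
        System.out.println(combinations); // 1.0E15 test cases

        // Even at 1,000 automated test executions per second,
        // running them all would take tens of thousands of years.
        double seconds = combinations / 1_000;
        double years = seconds / (60.0 * 60 * 24 * 365);
        System.out.printf("%.0f years%n", years);
    }
}
```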

The first three principles of testing very forcibly remind everyone involved in developing, maintaining or supporting IT applications (hardware or software) that zero defects is aspirational, but not realistic. That understanding should put an end to the shocked disbelief or manic finger pointing when defects are discovered late in the development cycle or in production. Defects exist and will be found. Our strategy should start by first avoiding creating the defects, then focus testing (the whole range of testing, from reviews to dynamic testing) on areas of the application or change based on the risk to the business if a defect is not found, and have a plan in place for the bugs that run the gauntlet. In the world of IT everyone, developers, testers, operators and network engineers alike, needs to work together to improve quality within real-world constraints because unlike Pokémon, you are never going to catch them all.

Categories: Process Management

Using Dropwizard in combination with Elasticsearch

Gridshore - Thu, 05/15/2014 - 21:09

Dropwizard logo

How often do you start creating a new application? How often have you thought about configuring an application: where to locate a config file, how to load the file, what format to use? Another thing you regularly do is add timers to track execution time, management tools to do thread analysis, etc. From a more functional perspective, you may want a rich client-side application using AngularJS, so you need a REST backend to deliver json documents. Does this sound like something you need regularly? Then this blog post is for you. If you never need this, please keep on reading, you might like it.

In this blog post I will create an application that shows you all the available indexes in your elasticsearch cluster. Not very sexy, but I am going to use AngularJS, Dropwizard and elasticsearch. That should be enough to get a lot of you interested.

What is Dropwizard

Dropwizard is a framework that combines a lot of other frameworks that have become the de facto standard in their own domain: jersey for the REST interface, jetty as a lightweight container, jackson for json parsing, freemarker for front-end templates, Metrics for the metrics, slf4j for logging. Dropwizard has some utilities to combine these frameworks and enables you as a developer to be very productive in constructing your application. It provides building blocks like lifecycle management, Resources, Views, loading of bundles, configuration and initialization.

Time to jump in and start creating an application.

Structure of the application

The application is set up as a maven project. To start off we only need one dependency:
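The single dependency is dropwizard-core; in maven terms that is roughly the following (group id and version are assumptions for the Dropwizard 0.7.x era this post was written in):

```
<dependency>
    <groupId>io.dropwizard</groupId>
    <artifactId>dropwizard-core</artifactId>
    <version>0.7.0</version>
</dependency>
```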


If you want to follow along, you can check my github repository:

Configure your application

Every application needs configuration. In our case we need to configure how to connect to elasticsearch. In Dropwizard you extend the Configuration class and create a pojo. Using jackson and hibernate validator annotations we configure validation and serialization. In our case the configuration object looks like this:

public class DWESConfiguration extends Configuration {
    private String elasticsearchHost = "localhost:9200";

    private String clusterName = "elasticsearch";

    public String getElasticsearchHost() {
        return elasticsearchHost;
    }

    public void setElasticsearchHost(String elasticsearchHost) {
        this.elasticsearchHost = elasticsearchHost;
    }

    public String getClusterName() {
        return clusterName;
    }

    public void setClusterName(String clusterName) {
        this.clusterName = clusterName;
    }
}

Then you need to create a yml file containing the properties in the configuration as well as some nice values. In my case it looks like this:

elasticsearchHost: localhost:9300
clusterName: jc-play

How often did you start a project by creating the configuration mechanism? Usually I start with maven and quickly move on to tomcat. Not this time. We have done maven, now we have done configuration. Next up is the runner for the application.

Add the runner

This is the class we run to start the application. Internally jetty is started. We extend the Application class, using the configuration class as its generic type parameter. This is the class that initializes the complete application: used bundles are initialized, classes are created and passed to other classes.

public class DWESApplication extends Application<DWESConfiguration> {
    private static final Logger logger = LoggerFactory.getLogger(DWESApplication.class);

    public static void main(String[] args) throws Exception {
        new DWESApplication().run(args);
    }

    @Override
    public String getName() {
        return "dropwizard-elastic";
    }

    @Override
    public void initialize(Bootstrap<DWESConfiguration> dwesConfigurationBootstrap) {
    }

    @Override
    public void run(DWESConfiguration config, Environment environment) throws Exception {
        logger.info("Running the application");
    }
}

When starting this application, we have no success. A big error, because we did not register any resources.

ERROR [2014-05-14 16:58:34,174] com.sun.jersey.server.impl.application.RootResourceUriRules: 
	The ResourceConfig instance does not contain any root resource classes.
Nothing happens, we just need a resource.

Before we can return something, we need to have something to return. We create a pojo called Index that contains one property called name. For now we just return this object as a json object. The following code shows the IndexResource that handles the requests that are related to the indexes.

@Path("/index")
@Produces(MediaType.APPLICATION_JSON)
public class IndexResource {

    @GET
    @Timed
    public Index showIndexes() {
        Index index = new Index();
        index.setName("A Dummy Index");

        return index;
    }
}
The @GET, @Path and @Produces annotations are from the jersey rest library. @Timed is from the metrics library. Before starting the application we need to register our index resource with jersey.

    public void run(DWESConfiguration config, Environment environment) throws Exception {
        logger.info("Running the application");
        final IndexResource indexResource = new IndexResource();
        environment.jersey().register(indexResource);
    }

Now we can start the application using the following runner from intellij. Later on we will create the executable jar.

Running the app from IntelliJ

Run the application again, this time it works. You can browse to http://localhost:8080/index and see our dummy index as a nice json document. There is something in the logs though. I love this message, this is what you get when running the application without health checks.
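The dummy json comes from the Index pojo, which the post doesn’t show; presumably it is a simple bean along these lines (a sketch, the real class may differ):

```java
// Minimal sketch of the Index pojo: one property, serialized to json by jackson.
public class Index {
    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
```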

Creating a health check

We add a health check. Since we are creating an application interacting with elasticsearch, we create a health check for elasticsearch. Don’t think too much about how we connect to elasticsearch yet; we will get there later on.

public class ESHealthCheck extends HealthCheck {

    private ESClientManager clientManager;

    public ESHealthCheck(ESClientManager clientManager) {
        this.clientManager = clientManager;
    }

    @Override
    protected Result check() throws Exception {
        ClusterHealthResponse clusterIndexHealths = clientManager.obtainClient().admin().cluster()
                .health(new ClusterHealthRequest()).actionGet();
        switch (clusterIndexHealths.getStatus()) {
            case GREEN:
                return Result.healthy();
            case YELLOW:
                return Result.unhealthy("Cluster state is yellow, maybe replication not done? New nodes?");
            case RED:
            default:
                return Result.unhealthy("Something is very wrong with the cluster: %s", clusterIndexHealths);
        }
    }
}


Just like with the resource handler, we need to register the health check. Together with the standard http port for normal users, another port is exposed for administration. Here you can find the metrics reports like Metrics, Ping, Threads, Healthcheck.

    public void run(DWESConfiguration config, Environment environment) throws Exception {
        logger.info("Running the application");
        ESClientManager esClientManager = new ESClientManager(config.getElasticsearchHost(), config.getClusterName());

        final IndexResource indexResource = new IndexResource(esClientManager);
        environment.jersey().register(indexResource);

        final ESHealthCheck esHealthCheck = new ESHealthCheck(esClientManager);
        environment.healthChecks().register("elasticsearch", esHealthCheck);
    }

You as a reader now have an assignment: start the application and check the admin pages yourself at http://localhost:8081. We are going to connect to elasticsearch in the meantime.

Connecting to elasticsearch

We connect to elasticsearch using the transport client. This is taken care of by the ESClientManager. We make use of the Dropwizard managed classes; the lifecycle of these classes is managed by Dropwizard. From the configuration object we take the host(s) and the cluster name. Now we can obtain a client in the start method and pass this client to the classes that need it. The first class that needs it is the health check, but we already had a look at that one. Using the ESClientManager, other classes have access to the client. The Managed interface mandates the start as well as the stop method.

    public void start() throws Exception {
        Settings settings = ImmutableSettings.settingsBuilder().put("", clusterName).build();

        logger.debug("Settings used for connection to elasticsearch : {}", settings.toDelimitedString('#'));

        TransportAddress[] addresses = getTransportAddresses(host);

        logger.debug("Hosts used for transport client : {}", (Object) addresses);

        this.client = new TransportClient(settings).addTransportAddresses(addresses);
    }

    public void stop() throws Exception {
        this.client.close();
    }

We need to register our managed class with the lifecycle of the environment in the runner class.

    public void run(DWESConfiguration config, Environment environment) throws Exception {
        ESClientManager esClientManager = new ESClientManager(config.getElasticsearchHost(), config.getClusterName());
        environment.lifecycle().manage(esClientManager);
    }

Next we want to change the IndexResource to use the elasticsearch client to list all indexes.

    @GET
    @Timed
    public List<Index> showIndexes() {
        IndicesStatusResponse indices = clientManager.obtainClient().admin().indices().prepareStatus().get();

        List<Index> result = new ArrayList<>();
        for (String key : indices.getIndices().keySet()) {
            Index index = new Index();
            index.setName(key);
            result.add(index);
        }
        return result;
    }

Now we can browse to http://localhost:8080/indexes and we get back a nice json object. In my case I got this:

Creating a better view

Having this REST based interface with json documents is nice, but not if you are a human like me (well, kind of). So let us add some AngularJS magic to create a slightly better view. The following page can of course also be created with simpler view technologies, but I want to demonstrate what you can do with Dropwizard.

First we make it possible to use freemarker as a template engine. To make this work we need two additional dependencies: dropwizard-views and dropwizard-views-freemarker. The first step is a view class that knows the freemarker template to load and provides the fields that your template can read. In our case we want to expose the cluster name.
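In maven terms, the two view dependencies named above look roughly like this (group id and version are assumptions):

```
<dependency>
    <groupId>io.dropwizard</groupId>
    <artifactId>dropwizard-views</artifactId>
    <version>0.7.0</version>
</dependency>
<dependency>
    <groupId>io.dropwizard</groupId>
    <artifactId>dropwizard-views-freemarker</artifactId>
    <version>0.7.0</version>
</dependency>
```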

public class HomeView extends View {
    private final String clusterName;

    protected HomeView(String clusterName) {
        super("home.ftl"); // template file name (assumed)
        this.clusterName = clusterName;
    }

    public String getClusterName() {
        return clusterName;
    }
}

Then we have to create the freemarker template. It looks like the following code block:

<#-- @ftlvariable name="" type="nl.gridshore.dwes.HomeView" -->
<html ng-app="myApp">
<body ng-controller="IndexCtrl">
<p>Underneath a list of indexes in the cluster <strong>${clusterName?html}</strong></p>

<div ng-init="initIndexes()">
    <ul>
        <li ng-repeat="index in indexes">{{}}</li>
    </ul>
</div>

<script src="/assets/js/angular-1.2.16.min.js"></script>
<script src="/assets/js/app.js"></script>
</body>
</html>

By default you put these templates in the resources folder, using the same sub folders as the package of your view class. If you look closely you see some angularjs code; more on this later on. First we need to map a url to the view. This is done with a resource class. The following code block shows the HomeResource class that maps “/” to the HomeView.

@Path("/")
@Produces(MediaType.TEXT_HTML)
public class HomeResource {
    private String clusterName;

    public HomeResource(String clusterName) {
        this.clusterName = clusterName;
    }

    @GET
    public HomeView goHome() {
        return new HomeView(clusterName);
    }
}

Notice we now configure it to return text/html. The goHome method is annotated with @GET, so each GET request to the path “/” is mapped to the HomeView class. Now we need to tell jersey about this mapping. That is done in the runner class.

final HomeResource homeResource = new HomeResource(config.getClusterName());
environment.jersey().register(homeResource);
Using assets

The final part I want to show is how to use the assets bundle from Dropwizard to map a folder “/assets” to a part of the url. To use this bundle you have to add the following dependency in maven: dropwizard-assets. Then we can easily map the assets folder in our resources folder to the web assets folder:

    public void initialize(Bootstrap<DWESConfiguration> dwesConfigurationBootstrap) {
        dwesConfigurationBootstrap.addBundle(new ViewBundle());
        dwesConfigurationBootstrap.addBundle(new AssetsBundle("/assets/", "/assets/"));
    }
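The dropwizard-assets dependency mentioned above, in maven coordinates (group id and version assumed):

```
<dependency>
    <groupId>io.dropwizard</groupId>
    <artifactId>dropwizard-assets</artifactId>
    <version>0.7.0</version>
</dependency>
```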

That is it, now you can load the angular javascript file. My very basic sample has one angular controller. This controller uses the $http service to call our /indexes url. The result is used to show the indexes in a list view.

var myApp = angular.module('myApp', []);

myApp.controller('IndexCtrl', function ($scope, $http) {
    $scope.indexes = [];

    $scope.initIndexes = function () {
        $http.get('/indexes').success(function (data) {
            $scope.indexes = data;
        });
    };
});

And the result

the very basic screen showing the indexes


This was my first go at using Dropwizard, and I must admit I like what I have seen so far. I am not sure if I would create a big application with it; on the other hand, it is really structured. Before moving on I would need to read a bit more about the library and check all of its options. There is a lot more possible than what I have shown you here.


The post Using Dropwizard in combination with Elasticsearch appeared first on Gridshore.

Categories: Architecture, Programming

Paper: SwiftCloud: Fault-Tolerant Geo-Replication Integrated all the Way to the Client Machine

So how do you knit multiple datacenters and many thousands of phones and other clients into a single cooperating system?

Usually you don't. It's too hard. We see nascent attempts in services like Firebase and Parse. 

SwiftCloud, as described in SwiftCloud: Fault-Tolerant Geo-Replication Integrated all the Way to the Client Machine, goes two steps further by leveraging Conflict-free Replicated Data Types (CRDTs), which means "data can be replicated at multiple sites and be updated independently with the guarantee that all replicas converge to the same value. In a cloud environment, this allows a user to access the data center closer to the user, thus optimizing the latency for all users."

While we don't see these kinds of systems just yet, they are a strong candidate for how things will work in the future, efficiently using resources at every level while supporting huge numbers of cooperating users.


Categories: Architecture

45 New Books (by Top Authors) for Your Management Book Shelf

NOOP.NL - Jurgen Appelo - Thu, 05/15/2014 - 16:38

You may have seen that I supplied the data for’s Top 50 Management & Leadership Experts. People seem to like it very much.

Well, what I also did was find the answer to the question, “What are the latest book releases of the management & leadership experts?”

I discovered new books by Tim Harford, Daniel Kahneman, Dan Ariely, Henry Mintzberg, Jason Fried, Simon Sinek, Peter Senge, Malcolm Gladwell, Chip & Dan Heath, John Kotter, Clayton Christensen, and many more… My reading backlog just doubled in size!

The post 45 New Books (by Top Authors) for Your Management Book Shelf appeared first on NOOP.NL.

Categories: Project Management

Quickest Way To Pass A Job Interview

Making the Complex Simple - John Sonmez - Thu, 05/15/2014 - 16:00

Plain and simple, the quickest way to pass a job interview is to make the interviewer like you. You can do it the hard way during the interview, but in this video I talk about how to do it the easy way—before the interview even starts. My course that shows you how to create a […]

The post Quickest Way To Pass A Job Interview appeared first on Simple Programmer.

Categories: Programming

Better Management with Fewer Managers (video)

NOOP.NL - Jurgen Appelo - Thu, 05/15/2014 - 11:40

In modern businesses managers are expected to be “servant leaders” and “systems thinkers”. But nobody explains how they can do this tomorrow morning. “Empowering workers” and “delighting customers” are important principles. But none of those suggestions are concrete. Most people want to know…

The post Better Management with Fewer Managers (video) appeared first on NOOP.NL.

Categories: Project Management

Two More Myths: Teams!

Interaction of team members yields productivity.

Over the past eight days we reviewed the myths of outsourcing. The series generated more than a few responses and Skype conversations, and I missed two myths. I suspect that I have missed many more and stand ready to add to the list. The missing myths revolve around teams. Teams are a normal feature of all IT organizations, whether they build or maintain code or engineer networks. Outsourcing changes the effectiveness of any team it touches.

Myth: “Distributed teams must leverage waterfall techniques.” The rationale for this myth is often attributed to time zones, language or accent differences. But distributed teams comprised of members from both the outsourcer(s) and the outsourcee can work in a collaborative, dare I say, Agile manner. The distributed nature of the team does make working as a team significantly more difficult; this is even truer when organizations are newly mixed. Most of the potential complications impacting communications can be solved by:

  1. Identifying and focusing on a common goal,
  2. Concentrating an effort to personalize each team member, and
  3. Synchronizing the work processes.

Myth: “Distributed teams are just as efficient as those that are co-located.” We discussed the impact of distributed teams on making decisions. Distributed teams must expend significantly more effort to stay synchronized, if for no other reason than the length of the communication lines. A distributed team needs more time and effort to communicate than the same team would need if they were sitting in the same room. That extra effort typically reduces productivity. For example, if one function point of functionality required 40 hours of effort to deliver before outsourcing, and after the work was outsourced it took 30 hours to deliver a function point, the organization would have become more efficient. Anything that decreases the amount of effort needed to deliver a unit of work increases efficiency. In outsourcing scenarios, organizations often redefine efficiency to mean the ratio of output to cost. One of the primary reasons organizations choose to outsource is to lower the cost basis of work. The reduction in cost is interpreted as an increase in efficiency, which may or may not be true. For example, if a function point requires 40 hours to deliver both before and after outsourcing, the labor efficiency would not have changed. However, if the cost for the function point was originally $4,000 USD and after outsourcing the cost falls to $3,000 USD, the cost efficiency would have improved. The rationale for shifting from effort efficiency to cost efficiency in this scenario is that an organization focuses on measuring the efficiency of how it uses its own resources to deliver output. Money is the resource being used in this case (really a combination of money and internal effort to manage the outsourcer). Ignoring effort efficiency hides potential issues that could be exposed if the cost of developers between the outsourcee and outsourcer normalizes.
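The two efficiency views in the example above can be captured in a few lines (numbers taken from the text):

```java
public class OutsourcingEfficiency {
    public static void main(String[] args) {
        double effortBefore = 40.0; // hours per function point before outsourcing
        double effortAfter = 40.0;  // hours per function point after outsourcing
        double costBefore = 4000.0; // USD per function point before outsourcing
        double costAfter = 3000.0;  // USD per function point after outsourcing

        // Labor (effort) efficiency: unchanged, the ratio is 1.0
        System.out.println(effortBefore / effortAfter);

        // Cost efficiency: improved, the ratio is about 1.33
        System.out.println(costBefore / costAfter);
    }
}
```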

A note on the idea of efficiency: While measuring efficiency is important, a more important measure is effectiveness. Effectiveness measures the value delivered per unit of input (effort or cost, based on the discussion above). I am sure I have not explored all of the myths of outsourcing. What are the myths that you have seen? For example, have you seen the terms outsourcing and off-shore conflated?

Categories: Process Management

Google Says Cloud Prices Will Follow Moore’s Law: Are We All Renters Now?

After Google cut prices on their Google Cloud Platform Amazon quickly followed with their own price cuts. Even more interesting is what the future holds for pricing. The near future looks great. After that? We'll see.

Adrian Cockcroft highlights that Google thinks prices should follow Moore’s law, which means we should expect prices to halve every 18-24 months.
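Halving every 18-24 months compounds quickly; here is a quick projection of what a unit of cloud capacity would cost in six years (illustrative only):

```java
public class CloudPriceProjection {
    public static void main(String[] args) {
        // Assume the price halves every `halvingMonths` months.
        for (int halvingMonths : new int[]{18, 24}) {
            double fractionInSixYears = Math.pow(0.5, 72.0 / halvingMonths);
            System.out.printf("halving every %d months -> %.4f of today's price in six years%n",
                    halvingMonths, fractionInSixYears);
        }
        // An 18-month halving leaves 1/16 of today's price; 24-month leaves 1/8.
    }
}
```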

That's good news. Greater cost certainty means you can make much more aggressive build out plans. With the savings you can hire more people, handle more customers, and add those media rich features you thought you couldn't afford. Design is directly related to costs.

Without Google competing with Amazon there's little doubt the price reduction curve would be much less favorable.

As a late cloud entrant Google is now in a customer acquisition phase, so they are willing to pay for customers, which means lower prices are an acceptable cost of doing business. Profit and high margins are not the objective. Getting market share is what is important.

Amazon on the other hand has been reaping the higher margins earned from recurring customers. So Google's entrance into the early product life cycle phase is making Amazon eat into their margins and is forcing down prices over all.

But there's a floor to how low prices can go. Alen Peacock, co-founder of Space Monkey has an interesting graphic telling the story. This is Amazon's historical pricing for 1TB of storage in S3, graphed as a multiple of the historical pricing for 1TB of local hard disk:

Alen explains it this way:

Cloud prices do decrease over time, and have dropped significantly over the timespan shown in the graph, but this graph shows cloud storage prices as a multiple of hard disk prices. In other words, hard disk prices are dropping much faster than datacenter prices. This is because datacenters have costs other than hard disks (power, cooling, bandwidth, building infrastructure, diesel backup generators, battery backup systems, fire suppression, staffing, etc). Most of those costs do not follow Moore's Law -- in fact, energy costs are on a long trend upwards. So over time, the gap shown by the graph should continue to widen.


The economic advantages of building your own (but housed in datacenters) are there, but they aren't huge. There is also some long-term strategic advantage to building your own; e.g., GDrive dropped its price dramatically at will because Google owns their datacenters, but Dropbox couldn't do that without convincing Amazon to drop the price they pay for S3.

Costs other than hardware began dominating in datacenters several years ago, so Moore's Law-like effects are dampened. Energy and cooling costs do not follow Moore's Law, and those costs make up a huge component of the overall picture in datacenters. This is only going to get worse, barring some radical new energy production technology arriving on the scene.

What we're [Space Monkey] interested in, long term, is dropping the floor out from underneath all of these, and I think that only happens if you get out of the datacenter entirely.

As the cloud market is still growing, there will still be a fight for market share. When growth slows and the market is divided between major players, a collusionary pricing phase will take over. Cloud customers are sticky customers; it's hard to move off a cloud. The need for higher margins to justify the cash flow drain during the customer acquisition phase will reverse the favorable trends we are seeing now.

Until then it seems the economics indicate we are in a rent, not a buy world.

Related Articles 
  • IaaS Series: Cloud Storage Pricing – How Low Can They Go? - "For now it seems we can assume we’ve not seen the last of the big price reductions."
  • The Cloud Is Not Green
  • Brad Stone: “Bill Miller, the chief investment officer at Legg Mason Capital Management and a major Amazon shareholder, asked Bezos at the time about the profitability prospects for AWS. Bezos predicted they would be good over the long term but said that he didn’t want to repeat “Steve Jobs’s mistake” of pricing the iPhone in a way that was so fantastically profitable that the smartphone market became a magnet for competition.” 
Categories: Architecture

Design Your Agile Project, Part 5

This post is about what you do when you are a program manager and not everyone knows what “agile” is, when you create a new product, and when you are introducing that much cultural change. (In the book, I will talk more specifically about change and what to do. This post is the highlights.)

Project management and program management are all about managing risks. How do we bring the management of change and management of risk together in an agile project or a program?

Let’s review the principles again:

  1. Separate the business decision for product release from the software being releasable all the time. That means you want the software releasable all the time, but you don’t have to release it. I talked about this in Design Your Agile Project, Part 1.
  2. Keep the feature size small for each team, so you can see your throughput.
  3. Use the engineering practices, so you don’t incur technical debt as you proceed.
  4. Understand your potential for release frequency.

Are you doing these things now? Are they part of how you work every day? If not, you need to change.  I’m going to address what the program needs to do to succeed.

Your Program Represents the Teams

In a sense, the program will represent the state of agile for the teams. Think of it as Conway’s Law exposed. (Conway says the system design reflects the communication structure of the designers.)

You might think you need to standardize your approach to agile or lean. You might think you need to be rigid about how you move to agile.

You would be wrong about the process. You would be more correct about the engineering practices.

You need to create the most resilience for the organization. Here’s what I mean.

If you have autonomous, collaborative teams, you will have uncoupled, collaborative code. If you look at the Cynefin framework, you get that on the right side, without too much trouble. (I’m not saying this is easy. Just that it’s more possible.)

But, what if you have geographically distributed teams, or your teams are new to agile/lean, or you are still responding to requests from all of the program because the rest of the organization doesn’t quite understand agile? What happens then?

You are on the Complex or Chaotic side of the Cynefin framework. Maybe you don’t use the Good Practice that we already know for program management, right? Maybe you don’t use what we already know about for the projects, because they won’t scale for the program.

That’s why each team needs to review Part 2 and Part 3, especially if they are part of a program.

That’s why program management needs to be servant leadership at the core team level. See Which Program Team Are You Managing? Some program managers think they are managing technical teams. They might be. But, they might need to manage a core team in addition to a technical team.

What Does this Mean for a Program?
  1. If you are trying to change everything, you have many unknowns. You are not in the right side of the Cynefin framework. You are somewhere on the left side of the framework. Agile “out of the box” is not going to work.
  2. Teams need to practice being agile as a team, before they are part of a program. They can come together in service of a program. And, because each team designs its agile project, no manager can change people on a team, unless the team requests that change. No “I can move people like chess pieces” business. Managers serve the teams.
  3. Beware of hierarchies. They will slow your program. What is a hierarchy? Scrum of scrums. Hardening sprints, especially where other release teams integrate for the feature teams, can create hierarchies. Why? Because it’s too easy to say, “My part is done.”

If you are designing your agile project to be part of a program, you want to consider, “How will we make sure we deliver on a consistent basis to the rest of the program?”

This is not about maximizing throughput. This is about meeting deliverables, and making sure that the teams know about their interdependencies long enough before they have to meet them. Then, the teams can meet them.

In a program, you always have interdependencies. Always.

Design Each Team’s Project to Optimize at the Program Level

If you are part of a program, it’s not enough to design your project for your team. You have to consider the needs of the program, too.

Each team needs to ask itself, “How do we deliver what the rest of the program needs, as the program needs it?”

You might want to watch Phil Evans’ TED talk, How Data Will Transform Business. In a hierarchy, we have too-high transaction costs. (In geographically distributed teams, we have too-high transaction costs, too, but that’s a different problem.) Note how he says “Small is Beautiful.” Hehehehe. Gotta love it.

Hierarchies are slow to respond. They impose barriers where you don’t need any. The problem with programs is that they are big. You want to decrease the problems of bigness where you can. One way to do that is to decrease the effects of hierarchy. How? By not involving hierarchy whenever you can. That means using small world networks to solve problems between and among teams. That way you solve problems where the problems exist.

If I ran all the programs in the world, what would I do?

  1. Have feature teams everywhere, not geographically dispersed project teams. I prefer collocated teams, but I realize in very large programs that is not always possible. (Sigh.)
  2. Have a core program team (cross-functional business team) that runs itself using kanban. If you need a cadence, use a one- or two-week iteration for the team’s problem-solving.
  3. Have the technical program team run itself using kanban. Same idea with a problem-solving cadence.
  4. Have the project teams use their own approaches to agile and lean, recognizing that their job is to reduce batch size, get to releasable all the time, and not incur any technical debt as they proceed. The more the project teams are autonomous in their approaches to agile, the more they will collaborate with each other. The more they will feel free to explore what they can do.
  5. Have the program architect (who represents the business value to the core team) look for examples of bad implementations of Conway’s Law all the time in the product. That will help create architectural business value. Yes, there is more that the architects do.
  6. Encourage Communities of Practice for the functional areas. Encourage cross-pollination between the communities. The “plain” developers need to know what the architects are thinking, as do the testers. The developers need to know what problems the testers are solving, and so on. Organizing and facilitating CoP might be a management job. It might be a program management job. It’s a servant leadership role. It’s definitely not a command-and-control role. (“It’s Tuesday at 4pm. It’s time to learn.” Ugh. No.) The word here is “encourage,” not mandate.
  7. As a program manager, you need to be aware when people need more training in understanding deliverables or what those deliverables are. Do they understand flow? Do they understand agile? Do they understand feedback? Do the teams need coaches? Do the teams need help in project management (yes, the teams are doing project management)? Do the teams need help in agile or lean? Do the teams need help with interpersonal skills? Do the teams need help in the engineering practices that will help them deliver a clean feature every day or so into the code base?
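Points 2 and 3 lean on kanban's work-in-progress limits for the program teams' problem-solving flow. A toy sketch of that mechanic, with all names and items invented purely for illustration:

```python
# Toy sketch of a WIP-limited kanban board for a program team's
# problem-solving flow. All class and item names are illustrative,
# not a real tool's API.

class KanbanBoard:
    def __init__(self, wip_limit: int):
        self.wip_limit = wip_limit
        self.todo: list[str] = []
        self.doing: list[str] = []
        self.done: list[str] = []

    def pull(self) -> bool:
        """Pull the next problem into 'doing' only if under the WIP limit."""
        if self.todo and len(self.doing) < self.wip_limit:
            self.doing.append(self.todo.pop(0))
            return True
        return False

    def finish(self, item: str) -> None:
        self.doing.remove(item)
        self.done.append(item)

board = KanbanBoard(wip_limit=2)
board.todo = ["cross-team API mismatch", "build-server flakiness", "test-data gaps"]
board.pull()
board.pull()
assert not board.pull()  # third pull refused: the WIP limit forces focus
```

The point is the refused third pull: the team finishes a problem before starting another, which is what gives the cadence its problem-solving focus.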

Those are just your deliverables to the program. That’s nothing about what you deliver to your management team.

Keep these three words in your pocket for your teams: autonomy, collaboration, and exploration. The teams need to be autonomous. It’s their deliverables that you care about. Not the teams being in lock-step.

You care about the teams collaborating. How do you encourage that? With small features and product owners who have well-defined feature backlogs and roadmaps. The more often the teams check completed features in, the fewer collisions, and the more manageable the collisions are. You get momentum that way. I talked about momentum in Organizing an Agile Program, Part 3, Large Programs Demand Servant Leadership.

The exploration occurs when the teams (which include architects) spike or explore what the team(s) think the options are. Or, when teams talk among themselves to solve problems. Teams can first solve problems themselves. They do need a small world network and to know that you want them to solve their problems. They don’t need a hierarchy to solve problems. These people are smart. Isn’t that why you hired them?

Okay, all the previous posts in this series are:

Design Your Agile Project, Part 1

Design Your Agile Project, Part 2

Design Your Agile Project, Part 3

Design Your Agile Project, Part 4

Sorry this series took me so long. Travel and our new house interfered. Being a product owner is all-consuming.

Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Wed, 05/14/2014 - 16:15

Again and again and again – what are the facts? Shun wishful thinking, ignore divine revelation, forget what ‘the stars foretell’, avoid opinion, care not what the neighbors think, never mind the unguessable ‘verdict of history’ – what are the facts, and to how many decimal places?   You pilot always into an unknown future; facts are your single clue.  Get the facts! ~ Robert A. Heinlein (1907-1988)

So why do we hear conjecture, unsubstantiated statements, suggestions without actionable outcomes, and obvious nonsense about how to spend other people's money?

Facts must be harder to come by than it looks.

Categories: Project Management

Myths of Outsourcing: Been There, Done That! Part 2

Been there, done that, got a picture, part 2.

Myth: “Outsourcing arrangements can be managed as business as usual.” This myth is common among rookie organizations that have avoided reading the literature or gotten lucky on one or two small projects. While it might seem like a truism, outsourcing models cannot be addressed as if they were business as usual. Change is expensive and risky. Managing all sourcing options in the same manner will, in the long run, yield the same results you have today. The fact that most (if not all) outsourcing agreements are based on a contract makes this kind of work different. How the internal IT department manages outsourced work shifts from a line management model to a more disengaged model. Users trained to partially specify requirements and then ask for changes will need to learn that changes cost real money. I have seen more Agile models applied to outsourced projects. These methods hold great promise when applied to relationships between organizations with differing levels of discipline (typically the organization sourcing the work has a lower level, while the outsourcer has a higher one). The use of high-touch methods such as Extreme Programming, coupled with high-documentation methods to facilitate communication across groups, has been found useful in building bridges between different organizations. Using these modified approaches makes sense if cost is not the primary driver of the relationship.

A corollary to managing outsourcing as business as usual is the belief that all outsourcing options can be managed in a similar manner. If there were no process differences between sourcing options, the decision process would be moot (or at least very different than it is today). Process differences require different management tactics. The differences in communication methods between offshore models and staff acquisition models, for example, are significant. Process differences require different management methods to produce the cost, quality or productivity differentials we expect.

Myth: “We have used staff augmentation models before. How different could outsourcing be?” This myth is another variant of the business-as-usual myth. When internal line managers used to the staff augmentation or bodyshop models try to apply line management techniques to outsourcing, they significantly reduce efficiency and effectiveness. Acceptance of this myth causes organizations to try to over-manage their outsourcer. The over-management shows up in two typical scenarios. In the first, organizations create voluminous contracts that seek to specify (in detail) how the outsourcer is to do its work. This causes the project to lose the effectiveness and efficiency expected (if you were that good at doing and managing these types of projects, why did you outsource the work?). In the second, the firm outsourcing the work tries to manage the outsourcer’s personnel directly. Thankfully, this scenario is becoming rarer. It isn’t even worth making this mistake to learn from it.

Myth: “Our outsourcer is good at managing projects; we will let them handle it.” This is a final variant of the business-as-usual myth. Adherents to the “hands off” approach typically use the same method for internal projects. They view the outsourcer as an extension of their own staff. The approach is similar to the classic cartoon in which a blackboard full of equations leads to a box titled “a miracle occurs” before emerging in an outcome. Sourcing arrangements cannot be approached in a hands-off manner. The contractual relationship must define how the sourced projects will be managed. Management must include all parties to be effective. The organization that has placed the sourced work must intimately know the status of the project in its portfolio, regardless of who is managing the project.

Myth: “One for all and all for one.” In the ’60s, there was a saying: “different strokes for different folks.” The same holds for sourcing: a different source for different needs. For example, a package for the base system, outsourcing for core functionality enhancements and support, and an internal business team. Complex scenarios like these are required to make the IT department of the 21st century run. In the end, remember that one size fits one; a single approach does not work in all scenarios.

Myth: “All or nothing.” Nothing could be further from the truth. Multiple partial sourcing (MPS) is an effective means of addressing today’s varied IT needs. MPS is not the lowest-cost option; it requires aggressive management, nimble contracting and excellent internal project management. Large projects or programs, assembled much like Lego bricks or objects, often require multiple sourcing scenarios. Project management must combine the best of the hands-on approaches with the controls typically viewed as rigid or bureaucratic to keep all projects pointed into the wind. All-or-nothing, one-size-fits-all scenarios reduce the effectiveness of the outsourcing options.

Myth: “I can hire one firm and get rid of all of my IT headaches.” There are large outsourcing firms that can offer across-the-board solutions and leverage multiple sourcing models. The allure of one-stop shopping is very strong for some: replace a whole cog with a whole cog. Frankly, in a few cases this might be a good answer; however, in the majority of cases it is an overreaction that trades one set of problems for another. Depending on how much work gets redirected, it may leave you unable to deal with work that must be closely held. In most organizations, IT is a repository of project management skills; getting rid of everyone in IT will impair your ability to monitor projects or act as the technical interface with your clients. While the large sourcing firms can offer a dizzying array of services, it is rarely prudent to outsource your whole IT department or place all of your work in one organizational basket that you do not control.

Myth: “External sourcing is equally applicable to all types of projects.” There is an old adage: “If the only tool one has is a hammer, everything looks like a nail.” If you have no residual IT organization, go to the next myth: this one ain’t gonna help.

Some types of projects make more sense to source internally:

  1. Exploratory projects where the requirements or solutions might be unknown.
  2. Projects developing or building on proprietary knowledge. While it is highly unlikely that you can control all knowledge leaks, why tempt fate by adding more potential leak points?
  3. Projects developing core but non-proprietary work.

This final point continues to be controversial. There is an ongoing discussion within the industry suggesting that core work can and should be outsourced if it is built on non-proprietary knowledge. Passing control of the support of core work, however, can lead to a real loss of control, not just a feeling of one.

Myth: “One-time projects cannot be outsourced.” Recently, the question of whether it makes sense to outsource one-time projects reared its ugly head. Can anything really be one-time, unless liquidation occurs in the days after implementation? The working definition I have settled on is that all of the functionality is new and will never be enhanced or maintained. The questions to ask when determining whether you should outsource include:

  • Does your organization have the expertise required?
  • Do you need to build that expertise for the future?
  • Will the project use or create strategic knowledge?

The number of times you are going to do a specific project does not rise to the same importance level as these questions.  The goal is to determine whether you need to do the project for strategic reasons.

The other “one-time project” type is the “one-off” (something like what has been done before). This project type yields to the same logic. It is critical that you develop a strategy, based on the infrastructure and knowledge capital needs of the organization, to filter which projects should be considered for outsourcing. A strategic filter will ensure that sourcing decisions are goal driven rather than scenario driven.

Myth: “External sourcing options are fire and forget.” If not knowing the intimate status of an internal project would be unacceptable (arguably, it would be considered project management malpractice), why would it be okay not to know the same level of detail about projects done externally? This myth seems to go hand in hand with “they are more mature, therefore they know better.” Thankfully, I see this phenomenon less often now, perhaps because people typically don’t let it happen twice. Unwatched projects wander. Wandering projects have all sorts of problems, such as scope creep. Capers Jones has been quoted as saying that projects with over 5% scope creep per month end up in cancellation or litigation 90% of the time. A regular cadence of communication and monitoring is a simple cure for all but a systemic case of project wandering.
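The Capers Jones figure is easier to feel with a little compound-growth arithmetic. A minimal sketch (the 5%-per-month rate comes from the quote above; the time spans are illustrative):

```python
# "Small" monthly scope creep compounds like interest.
# 5%/month is the threshold Capers Jones associates with
# cancellation or litigation.

def cumulative_scope_growth(monthly_creep: float, months: int) -> float:
    """Total scope as a multiple of the original baseline."""
    return (1 + monthly_creep) ** months

for months in (6, 12, 18):
    growth = cumulative_scope_growth(0.05, months)
    print(f"{months:2d} months -> {growth:.2f}x the original scope")
```

At 5% per month, a project carries roughly 80% more scope after a year than it was planned and funded for, which is why unwatched, wandering projects so often end badly.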

Categories: Process Management

Sponsored Post: Apple, Cloudant, CopperEgg, Logentries, PagerDuty, HelloSign, CrowdStrike, Gengo, ScaleOut Software, Couchbase, MongoDB, BlueStripe, AiScaler, Aerospike, LogicMonitor, AppDynamics, ManageEngine, Site24x7

Who's Hiring?

  • Apple has multiple openings. Changing the world is all in a day's work at Apple. Imagine what you could do here.
    • Enterprise Software Engineer. Apple's Emerging Technology Services group provides a Java based SOA platform for various applications to interact with each other. The platform is designed to handle millions of messages a day with very low latency. We have an immediate opening for a talented Software Engineer in a highly visible team who is passionate about exploring emerging technologies to create elegant scalable solutions. Please apply here
    • Mobile Services Software Engineer. The Emerging Technologies/Mobile Services team is looking for a proactive and hardworking software engineer to join our team. The team is responsible for a variety of high quality and high performing mobile services and applications for internal use. Please apply here
    • Sr. Software Engineer-iOS Systems. Do you love building highly scalable, distributed web applications? Does the idea of performance tuning Java applications make your heart leap? If so, iOS Systems is looking for a highly motivated, detail-oriented, energetic individual with excellent written and oral skills who is not afraid to think outside the box and question assumptions. Please apply here
    • Senior Software Engineering Manager. As a Senior Software Engineering Manager on our team, you will be managing a team of very dedicated and talented engineers. You will be responsible for managing the development of a mobile point-of-sale system on iPod touch hardware. Please apply here.
    • Sr Software Engineer - Messaging Services. An exciting opportunity for a Software Engineer to join Apple's Messaging Services team. We build the cloud systems that power some of the busiest applications in the world, including iMessage, FaceTime and Apple Push Notifications. We handle hundreds of millions of active users using some of the most desirable devices on the planet and several billion iMessages/day, 40 billion push notifications/day, 16+ trillion push notifications sent to date. Please apply here.

  • Engine Programmer - C/C++. Wargaming|BigWorld is seeking Engine Programmers to join our team in Sydney, Australia. We offer a relocation package, Australian working visa & great salary + bonus. Your primary responsibility will be to work on our PC engine. Please apply here

  • Senior Engineer wanted for large scale, security oriented distributed systems application that offers career growth and independent work environment. Use your talents for good instead of getting people to click ads at CrowdStrike. Please apply here.

  • Ops Engineer - Are you passionate about scaling and automating cloud-based websites? Love Puppet and deployment scripts? Want to take advantage of both your sys-admin and DevOps skills? Join HelloSign as our second Ops Engineer and help us scale as we grow! Apply at

  • Human Translation Platform Gengo Seeks Sr. DevOps Engineer. Build an infrastructure capable of handling billions of translation jobs, worked on by tens of thousands of qualified translators. If you love playing with Amazon’s AWS, understand the challenges behind release-engineering, and get a kick out of analyzing log data for performance bottlenecks, please apply here.

  • UI Engineer. AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data. AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events

  • The Biggest MongoDB Event Ever Is On. Will You Be There? Join us in New York City June 23-25 for MongoDB World! The conference lineup includes Amazon CTO Werner Vogels and Cloudera Co-Founder Mike Olson for keynote addresses.  You’ll walk away with everything you need to know to build and manage modern applications. Register before April 4 to take advantage of super early bird pricing.

  • Upcoming Webinar: Practical Guide to SQL - NoSQL Migration. Avoid common pitfalls of NoSQL deployment with the best practices in this May 8 webinar with Anton Yazovskiy of Thumbtack Technology. He will review key questions to ask before migration, and differences in data modeling and architectural approaches. Finally, he will walk you through a typical application based on RDBMS and will migrate it to NoSQL step by step. Register for the webinar.
Cool Products and Services
  • The NoSQL "Family Tree" from Cloudant explains the NoSQL product landscape using an infographic. The highlights: NoSQL arose from "Big Data" (before it was called "Big Data"); NoSQL is not "One Size Fits All"; Vendor-driven versus Community-driven NoSQL.  Create a free Cloudant account and start the NoSQL goodness

  • Finally, log management and analytics can be easy, accessible across your team, and provide deep insights into data that matters across the business - from development, to operations, to business analytics. Create your free Logentries account here.

  • CopperEgg. Simple, Affordable Cloud Monitoring. CopperEgg gives you instant visibility into all of your cloud-hosted servers and applications. Cloud monitoring has never been so easy: lightweight, elastic monitoring; root cause analysis; data visualization; smart alerts. Get Started Now.

  • PagerDuty helps operations and DevOps engineers resolve problems as quickly as possible. By aggregating errors from all your IT monitoring tools, and allowing easy on-call scheduling that ensures the right alerts reach the right people, PagerDuty increases uptime and reduces on-call burnout—so that you only wake up when you have to. Thousands of companies rely on PagerDuty, including Netflix, Etsy, Heroku, and Github.

  • Aerospike Releases Client SDK for Node.js 0.10.x. This client makes it easy to build applications in Node.js that need to store and retrieve data from a high-performance Aerospike cluster. This version exposes Key-Value Store functionality - which is the core of Aerospike's In-Memory NoSQL Database. Platforms supported: CentOS 6, RHEL 6, Debian 6, Debian 7, Mac OS X, Ubuntu 12.04. Write your first app:

  • consistent: to be, or not to be. That’s the question. Is data in MongoDB consistent? It depends. It’s a trade-off between consistency and performance. However, does performance have to be sacrificed to maintain consistency? more.

  • Do Continuous MapReduce on Live Data? ScaleOut Software's hServer was built to let you hold your daily business data in-memory, update it as it changes, and concurrently run continuous MapReduce tasks on it to analyze it in real-time. We call this "stateful" analysis. To learn more check out hServer.

  • LogicMonitor is the cloud-based IT performance monitoring solution that enables companies to easily and cost-effectively monitor their entire IT infrastructure stack – storage, servers, networks, applications, virtualization, and websites – from the cloud. No firewall changes needed - start monitoring in only 15 minutes utilizing customized dashboards, trending graphs & alerting.

  • BlueStripe FactFinder Express is the ultimate tool for server monitoring and solving performance problems. Monitor URL response times and see if the problem is the application, a back-end call, a disk, or OS resources.

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.

  • ManageEngine Applications Manager: Monitor physical, virtual and Cloud Applications.

  • Site24x7: Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture