
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Training Strategies for Transformations and Change: Synthesis

Different training tools are sometimes needed!


Organizational transformations, like an Agile transformation, require the acquisition of new skills and capabilities. Gaining new skills and capabilities in an effective manner requires a training strategy. The best transformations borrow liberally from all categories of training strategies to meet the needs of the transformation program and the culture of the organization. The four major training strategies typically used in Agile (and other IT) transformations have their own strengths and weaknesses, which make each strategy better suited to some types of knowledge and skill distribution than others.

Training strategies by use.


Lectures and presentations are a ubiquitous feature of the 21st century corporate meeting. These techniques are useful for spreading awareness and, to a lesser extent, for introducing concepts. The reduced efficiency of the lecture for introducing concepts is due to trainers who are not trained educators, conference/training rooms that are not as well appointed as college lecture halls, and learners who pay only partial attention whenever possible. The partial attention problem is a reflection of the email and text messages generated by the learners' day jobs. Further difficulties occur when distributed meetings are not supported with proper telecommunications.

Active learning and experiential learning are both excellent strategies for building and supporting skills and capabilities. Each method can include games, activities, discussions and lecture components. The combination of methods for generating and conveying knowledge keeps the learners focused and involved. Involvement helps defeat the common problem of partial attention by keeping the learners busy. The scalability of the two techniques differs, which can lead to a decision to favor one technique over the other. Many transformation programs blend both strategies. For example, I recently observed a program with group learning sessions (active learning) and assignments to be done outside the class as part of the learner's day-to-day activities, which were then debriefed in community of practice sessions (experiential learning).

Mentoring is a specialized form of experience-based learning. Because mentoring is generally a one-on-one technique, it is generally not scalable for large-scale change programs; however, it is a good tool to transfer knowledge from one person to another and an excellent tool to support and maintain capabilities. Mentoring is most often used for specialized skills rather than general skills that need to be broadly distributed.

Transformation programs generally need to use more than one training strategy. Each strategy makes sense for specific scenarios. Crafting an overall strategy requires understanding which skills, capabilities and knowledge need to be fostered or built within the organization, then the distribution of the learners, the tools available and finally the organization's culture. Once you understand the requirements, the training strategy can be crafted using a mixture of the training techniques.

Categories: Process Management

Build Mobile App Services with Google Cloud Tools for Android Studio v1.0

Android Developers Blog - Fri, 12/19/2014 - 22:56

Posted by Chris Sells, Product Manager, Cloud Tools for Android Studio

Cloud Tools for Android Studio allows you to simultaneously build the service- and client-side of your mobile app. Earlier this month, we announced the release of Android Studio 1.0 that showed just how much raw functionality there is available for Android app developers. However, the client isn’t the whole picture, as most mobile apps also need one or more web services. It was for this reason that the Cloud Tools for Android Studio were created.

Cloud Tools put the power of Google App Engine in the same IDE alongside of your mobile client, giving you all the same Java language tools for both sides of your app, as well as making it far easier for you to keep them in sync as each of them changes.

Getting Started

To get started with Cloud Tools for Android Studio, add a New Module to your Android Studio project, choose Google Cloud Module and you’ll have three choices:

You can add three Google Cloud module types to your Android Studio project

The Java Servlet Module gives you a plain servlet class for you to implement as you see fit. If you’d like help building your REST endpoints with declarative routing and HTTP verbs and automatic Java object serialization to and from JSON, then you’ll want the Java Endpoints Module. If you want the power of endpoints, along with the ability to send notifications from your server to your clients, then choose Backend with Google Cloud Messaging.

Once you’re done, you’ll have your service code right next to your client code:

You can build your mobile app’s client and service code together in a single project

Not only does this make it very convenient to build and test your entire app end-to-end, but we also dropped a little extra something into your app's build.gradle file:

The android-endpoints configuration build step in your build.gradle file creates a client-side library for your server-side endpoint

The updated Gradle file will now create a library for use in your app’s client code that changes when your service API changes. This library lets you call into your service from your client and provides full code completion as you do:

The client-side endpoint library provides code completion and documentation

Instead of writing the code to create HTTP requests by hand, you can make calls via the library in a typesafe manner and the marshalling from JSON to Java will be handled for you, just like on the server-side (but in reverse, of course).

Endpoints Error Detection

Meanwhile, back on the server-side, as you make changes to your endpoints, we’re watching to make sure that they’re in good working order even before you compile by checking the attributes as you type:

Cloud Tools will detect errors in your endpoint attributes

Here, Cloud Tools have found a duplicate name in the ApiMethod attribute, which is easy to do if you’re creating a new method from an existing method.
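The kind of clash Cloud Tools catches can be illustrated in plain Java. The sketch below uses a hypothetical stand-in annotation (the real @ApiMethod lives in the Endpoints framework and has more attributes than the single name used here), plus a reflection check that finds the duplicate, much as the IDE does as you type:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.HashSet;
import java.util.Set;

public class DuplicateNameCheck {
    // Stand-in for the Endpoints @ApiMethod annotation (illustrative only).
    @Retention(RetentionPolicy.RUNTIME)
    @interface ApiMethod {
        String name();
    }

    // Copy-pasting an existing method is an easy way to end up with
    // two methods claiming the same API name.
    static class TaskEndpoint {
        @ApiMethod(name = "getTask")
        public void getTask() {}

        @ApiMethod(name = "getTask") // duplicate name: this is what gets flagged
        public void getTaskById() {}
    }

    // Returns the set of @ApiMethod names that appear more than once.
    static Set<String> duplicateNames(Class<?> cls) {
        Set<String> seen = new HashSet<>();
        Set<String> dupes = new HashSet<>();
        for (Method m : cls.getDeclaredMethods()) {
            ApiMethod a = m.getAnnotation(ApiMethod.class);
            if (a != null && !seen.add(a.name())) {
                dupes.add(a.name());
            }
        }
        return dupes;
    }

    public static void main(String[] args) {
        System.out.println(duplicateNames(TaskEndpoint.class)); // [getTask]
    }
}
```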

Creating an Endpoint from an Objectify Entity

If, as part of your endpoint implementation, you decide to take advantage of the popular Objectify library, you’ll find that Cloud Tools provides special support for you. When you right-click (or control-click on the Mac) on a file containing an Objectify entity class, you’ll get the Generate Cloud Endpoint from Java class option:

The generate Cloud Endpoint from Java class option will create a CRUD endpoint for you

If you’re running this option on a Java class that isn’t built with Objectify, then you’re going to get an endpoint with empty methods for get and insert operations that you can implement as appropriate. However, if you do this with an Objectify entity, you’ll get a fully implemented endpoint:

Cloud Tools has built-in support for generating Objectify-based cloud endpoint implementations

Using your Cloud Endpoint

As an Android developer, you’re used to deploying your client first in the emulator and then into a local device. Likewise, with the service, you’ll want to test first to your local machine and then, when you’re ready, deploy into a Google App Engine project. You can run your service app locally by simply choosing it from the Configurations menu dropdown on the toolbar and pressing the Run button:

The Configurations menu in the toolbar lets you launch your service for testing

This will build and execute your service on http://localhost:8080/ (by default) so that you can test against it with your Android app running in the emulator. Once you’re ready to deploy to Google Cloud Platform, you can do so by selecting the Deploy Module to App Engine option from the Build menu, where you’ll be able to choose the source module you want to deploy, log into your Google account and pick the target project to which you’d like to deploy:

The Deploy to App Engine dialog will use your Google credentials to enumerate your projects for you

The Cloud Tools beta required some extra copying and pasting to get the Google login to work, but all of that's gone in this release.

What’s Next?

We’re excited to get this release into your hands, so if you haven’t downloaded it yet, go download Android Studio 1.0 right now! To take advantage of Cloud Tools for Android Studio, you’ll want to sign up for a free Google Cloud Platform trial. Nothing is stopping you from building great Android apps from front to back. If you’ve got suggestions, drop us a line so that we can keep improving. We’re just getting started putting Google Cloud Platform tools in your hands. We can’t wait to see what you’ll build.

Join the discussion on

+Android Developers
Categories: Programming

Google Play game services ends year with a bang!

Android Developers Blog - Fri, 12/19/2014 - 20:31

Posted by Benjamin Frenkel, Product Manager, Play Games

In an effort to supercharge our Google Play games services (GPGS) developer tools, we’re introducing the Game services Publishing API, a revamped Unity Plugin, additional enhancements to the C++ SDK, and improved Leaderboard Tamper Protection.

Let’s dig into what’s new for developers:

Publishing API to automate game services configuration

At Google I/O this past June, the pubsite team launched the Google Play Developer Publishing APIs to automate the configuration and publishing of applications to the Play store. Game developers can now also use the Google Play game services Publishing API to automate the configuration and publishing of game services resources, starting with achievements and leaderboards.

For example, if you plan on publishing your game in multiple languages, the game services Publishing API will enable you to pull translation data from spreadsheets, CSVs, or a Content Management System (CMS) and automatically apply those translations to your achievements.
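The Publishing API call itself is a REST request, but the data-prep half of that workflow can be sketched in plain Java. Everything here is hypothetical (the CSV column layout, the achievement IDs, the locales); it only illustrates turning spreadsheet-style rows into the per-achievement translation map you would then push through the API:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class AchievementTranslations {
    // Parses CSV rows of the (hypothetical) form: achievementId,locale,title
    // into a per-achievement map of locale -> translated title.
    static Map<String, Map<String, String>> parse(List<String> csvRows) {
        Map<String, Map<String, String>> byAchievement = new LinkedHashMap<>();
        for (String row : csvRows) {
            String[] cols = row.split(",", 3); // limit 3 so titles may contain commas
            byAchievement
                .computeIfAbsent(cols[0], k -> new LinkedHashMap<>())
                .put(cols[1], cols[2]);
        }
        return byAchievement;
    }

    public static void main(String[] args) {
        List<String> rows = new ArrayList<>();
        rows.add("ach_first_win,en-US,First Victory");
        rows.add("ach_first_win,ja-JP,初勝利");
        // Each achievement now carries all of its locales in one place.
        System.out.println(parse(rows).get("ach_first_win").get("en-US"));
    }
}
```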

Early adopter Square Enix believes the game services Publishing API will be an indispensable tool to manage global game rollouts:

"Achievements are the most used feature in Google Play game services for us. As our games support more languages, achievement management has become increasingly difficult. With the game services Publishing API, we can automate this process, which is really helpful. The game services Publishing API also comes with great samples that we were able to easily customize for our needs."

– Keisuke Hata, Manager / Technical Director, SQUARE ENIX Co., Ltd.

To get started today, take a look at the developer documentation here.

Updated Unity plugin and Cross-platform C++ SDK
  • Unity plugin Saved Games support: You can now take advantage of the Saved Games feature directly from the Unity plugin, with more storage and greater discoverability through the Play Games app
  • New Unity plugin architecture: We’ve rewritten the plugin on top of our cross-platform C++ SDK to speed up feature development across SDKs and increase our responsiveness to your feedback
  • Improved Unity generated Xcode project setup: You now have a much more robust way to generate Xcode projects integrated with Google Play Game Services in Unity
  • Updated and improved Unity samples: We’ve updated our sample code to make it easier for first-time developers to integrate Google Play games services
  • C++ SDK support for iPhone 6 Plus: You can now take advantage of the out-of-box games services UI (e.g., for leaderboards and achievements) for larger form factor devices, such as the iPhone 6 Plus

We also include some important bug fixes and stability improvements. Check out the release notes for the Unity Plugin and the getting started page for the C++ SDK for more details.

Leaderboard Tamper Protection

Turn on Leaderboard Tamper Protection to automatically hide suspected tampered scores from your leaderboards. To enable tamper protection on an existing leaderboard, go to your leaderboard in the Play developer console and flip the “Leaderboard tamper protection” toggle to on. Tamper protection will be on by default for new leaderboards. Learn more.

To learn more about cleaning up previously submitted suspicious scores refer to the Google Play game services Management APIs documentation or get the web demo console for the Management API directly from github here.

In addition, if you prefer command-line tools, you can use the python-based option here. Join the discussion on

+Android Developers
Categories: Programming

Testing on the Toilet: Truth: a fluent assertion framework

Google Testing Blog - Fri, 12/19/2014 - 19:28
by Dori Reuveni and Kurt Alfred Kluever

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

As engineers, we spend most of our time reading existing code, rather than writing new code. Therefore, we must make sure we always write clean, readable code. The same goes for our tests; we need a way to clearly express our test assertions.

Truth is an open source, fluent testing framework for Java designed to make your test assertions and failure messages more readable. The fluent API makes reading (and writing) test assertions much more natural, prose-like, and discoverable in your IDE via autocomplete. For example, compare how the following assertion reads with JUnit vs. Truth:
assertEquals("March", monthMap.get(3));          // JUnit
assertThat(monthMap).containsEntry(3, "March"); // Truth
Both statements are asserting the same thing, but the assertion written with Truth can be easily read from left to right, while the JUnit example requires "mental backtracking".

Another benefit of Truth over JUnit is the addition of useful default failure messages. For example:
ImmutableSet<String> colors = ImmutableSet.of("red", "green", "blue", "yellow");
assertTrue(colors.contains("orange")); // JUnit
assertThat(colors).contains("orange"); // Truth
In this example, both assertions will fail, but JUnit will not provide a useful failure message. However, Truth will provide a clear and concise failure message:

AssertionError: <[red, green, blue, yellow]> should have contained <orange>

Truth already supports specialized assertions for most of the common JDK types (Objects, primitives, arrays, Strings, Classes, Comparables, Iterables, Collections, Lists, Sets, Maps, etc.), as well as some Guava types (Optionals). Additional support for other popular types is planned as well (Throwables, Iterators, Multimaps, UnsignedIntegers, UnsignedLongs, etc.).

Truth is also user-extensible: you can easily write a Truth subject to make fluent assertions about your own custom types. By creating your own custom subject, both your assertion API and your failure messages can be domain-specific.
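To see why a fluent API reads and fails so well, here is a deliberately tiny sketch in plain Java. This is not Truth's actual Subject API (the real framework has a richer Subject base class and factory mechanism); it only illustrates the chaining and failure-message pattern:

```java
public class FluentSketch {
    // Simplified stand-in for Truth's assertThat(...) entry point.
    static StringSubject assertThat(String actual) {
        return new StringSubject(actual);
    }

    static class StringSubject {
        private final String actual;

        StringSubject(String actual) {
            this.actual = actual;
        }

        // Reads left to right: assertThat(colors).contains("orange")
        StringSubject contains(String expected) {
            if (!actual.contains(expected)) {
                // The subject knows its own value, so the failure
                // message can be written once, for every caller.
                throw new AssertionError(
                    "<" + actual + "> should have contained <" + expected + ">");
            }
            return this; // returning this is what enables fluent chaining
        }
    }

    public static void main(String[] args) {
        assertThat("red green blue").contains("green"); // passes silently
        try {
            assertThat("red green blue").contains("orange");
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
        }
    }
}
```

A real custom subject would extend Truth's own Subject class instead of standing alone like this, but the payoff is the same: domain-specific assertions with domain-specific failure messages.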

Truth's goal is not to replace JUnit assertions, but to improve the readability of complex assertions and their failure messages. JUnit assertions and Truth assertions can (and often do) live side by side in tests.

To get started with Truth, check out

Categories: Testing & QA

Stuff The Internet Says On Scalability For December 19th, 2014

Hey, it's HighScalability time:

Brilliant & hilarious keynote to finish the day at #yow14 (Matt)

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

See You Next Year!

NOOP.NL - Jurgen Appelo - Fri, 12/19/2014 - 13:13

My book tour was crazy, exciting, ambitious, and exhausting.

The start of the tour now seems a long time ago, even though it was just six months. I had fun with 400 participants, had some great (and some not so great) travel experiences, and collected a good amount of fresh stories.

The post See You Next Year! appeared first on NOOP.NL.

Categories: Project Management

Swift self reference in inner closure

Xebia Blog - Fri, 12/19/2014 - 13:06

We all pretty much know how to safely use self within a Swift closure. But have you ever tried to use self inside a closure that's inside another closure? There is a good chance that the Swift compiler crashed (Xcode 6.1.1) without giving you an error in the editor and without any error message. So how can you solve this problem?

The basic working closure

Before we dive into the problem and solution, let's first have a look at a working code sample that only uses a single closure. We can create a simple Swift Playground to run it and validate that it works.

class Runner {
    var closures: [() -> ()] = []

    func doSomethingAsync(completion: () -> ()) {
        closures = [completion]
        completion() // call it immediately here to keep the sample simple
    }
}

class Playground {

    let runner = Runner()

    func works() {
        runner.doSomethingAsync { [weak self] in
            self?.printMessage("This works") ?? ()
        }
    }

    func printMessage(message: String) {
        println(message)
    }

    deinit {
        println("Deinit")
    }
}

struct Tester {
    var playground: Playground? = Playground()
}

var tester: Tester? = Tester()
tester?.playground?.works()
tester?.playground = nil

The doSomethingAsync method takes a closure without arguments and has return type Void. This method doesn't really do anything, but imagine it would load data from a server and then call the completion closure once it's done loading. It does however create a strong reference to the closure by adding it to the closures array. That means we are only allowed to use a weak reference of self within our closure. Otherwise the Runner would keep a strong reference to the Playground instance and neither would ever be deallocated.

Luckily all is fine and the "This works" message is printed in our playground output. Also a "Deinit" message is printed. The Tester construction is used to make sure that the playground will actually deallocate it.

The failing situation

Let's make things slightly more complex. When our first async call is finished and calls our completion closure, we want to load something more and therefore need to create another closure within the outer closure. We add the method below to our Playground class. Keep in mind that the first closure doesn't have [weak self] since we only reference self in the inner closure.

func doesntWork() {
    weak var weakRunner = runner
    runner.doSomethingAsync {

        // do some stuff for which we don't need self

        weakRunner?.doSomethingAsync { [weak self] in
            self?.printMessage("This doesn't work") ?? ()
        } ?? ()
    }
}

Just adding it already makes the compiler crash, without giving us an error in the editor. We don't even need to run it.


It gives us the following message:

Communication with the playground service was interrupted unexpectedly.
The playground service "" may have generated a crash log.

And when you have such code in your normal project, the editor also doesn't give an error, but the build will fail with a Swift Compiler Error without clear message of what's wrong:
Command /Applications/ failed with exit code 1

The solution

So how can we work around this problem? Quite simply, actually: we need to move the [weak self] to the outermost closure.

func doesWork() {
    weak var weakRunner = runner
    runner.doSomethingAsync { [weak self] in

        weakRunner?.doSomethingAsync {
            self?.printMessage("This now works again") ?? ()
        } ?? ()
    }
}

This does mean that it's possible that within the outer closure, self is not nil and in the inner closure it is nil. So don't write code like this:

    runner.doSomethingAsync { [weak self] in

        if self != nil {
            self!.printMessage("This is fine, self is not nil")

            weakRunner?.doSomethingAsync {
                self!.printMessage("This is not good, self could be nil now")
            } ?? ()
        }
    }

There is one more scenario you should be aware of. If you use an if let construction to safely unwrap self, you could create a strong reference again to self. The following sample illustrates this and will create a reference cycle since our runner will create a strong reference to the Playground instance because of the inner closure.

    runner.doSomethingAsync { [weak self] in

        if let this = self {

            weakRunner?.doSomethingAsync {
                this.printMessage("Captures a strong reference to self")
            } ?? ()
        }
    }

This, too, is easily solved by taking a weak reference to the instance again, now called this.

runner.doSomethingAsync { [weak self] in

    if let this = self {

        weakRunner?.doSomethingAsync { [weak this] in
            this?.printMessage("We're good again") ?? ()
        } ?? ()
    }
}

Most people working with Swift know that there are still quite a few bugs in it. In this case, Xcode should give us an error in the editor. If your editor doesn't complain but your Swift compiler fails, look for closures like these and correct them. Always be safe and use [weak self] references within closures.

Closed Loop Control

Herding Cats - Glen Alleman - Fri, 12/19/2014 - 04:14

Writing software for money is a Closed Loop Control System.

(Diagram: a closed loop control system. The Reference and the Sensor's measurement of the System's output are compared, and the difference is the Error Signal that drives corrective action.)

  • The Reference is our needed performance - cost, schedule, and technical - to achieve the project's goals. These goals include providing the needed business value, at the needed cost, on the needed day. Since each of these is a random variable, we can't state exactly what they are, but we can make a statement about their range and the confidence that they will fall inside that range in order to meet the business goals.
  • The System is the development process that takes money, time, and requirements and turns them into code.
  • The Sensor is the measurement system for the project. This starts with assessing the delivered code to assure it meets some intended capability, requirement, or business value. The target units of measure are defined before the work starts for any particular piece of code, so when that code is delivered we can know that it accomplished what we wanted it to do.
  • The Error Signal  comes from comparing the difference between the desired state - cost, schedule, technical, value, or any other measure - and the actual state. This error signal is then used to  adjust or take corrective action to put the system back in balance so it can achieve the desired outcome.
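The loop described by these four elements can be sketched in code. This is a toy proportional controller, assuming a single numeric performance measure and an illustrative gain of 0.5; it is a sketch of the control idea, not a prescription for steering a real project:

```java
public class ClosedLoop {
    // One pass of the loop: compare the Reference to what the Sensor
    // measured, form the Error Signal, and apply a corrective action.
    static double correct(double reference, double measured, double gain) {
        double error = reference - measured; // the Error Signal
        return measured + gain * error;      // corrective action on the System
    }

    public static void main(String[] args) {
        double reference = 100.0; // needed performance (e.g., planned output)
        double actual = 60.0;     // current state, as reported by the Sensor
        // Run the loop several times; the system converges on the reference.
        for (int i = 0; i < 5; i++) {
            actual = correct(reference, actual, 0.5);
        }
        System.out.println(actual); // 98.75
    }
}
```

Remove the comparison step (set the gain to zero) and the loop degenerates into exactly the open loop behavior described below: the system arrives wherever its uncorrected rate of progress takes it.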

Without the Desired State, the Current State, the comparison of the two, and the resulting Error Signal, the project is running open loop. We'll arrive when we arrive, at the rate of progress we are performing at, for the cost we are consuming. There is no information available to show what the needed performance of cost, schedule, or value production must be to arrive on time, on budget, and on value (or near enough to call it close).

And when you hear about control systems that don't follow the picture at the top, they're not Closed Loop. They may be Open Loop, but they are not Closed Loop.



Categories: Project Management

Training Strategies for Transformations and Change: Experiential Learning

You learn to play an instrument by practicing.


Experiential learning, often thought of as learning by doing, can play an important role in any transformation program. In this strategy learners gather knowledge from the combination of doing something, reflecting on what was done and finally generalizing the learnings into broader knowledge. The theory holds that knowledge is internalized through concrete engagement more effectively and quickly than through rote learning techniques. The basic steps of experiential learning are:

Experience – The learner is directly involved in experiences that are tied to a real world scenario. The teacher facilitates the experience. Writing your first computer program in a computer lab is an example of a concrete learning experience.

Reflection – The learner reflects on what happened and what they learned during the experience. Reflection typically includes determining what was important about the experience for future experiences. When used in a classroom, the reflection step generally includes sharing reflections and observations with the classmates (a form of a feedback loop). Demonstrating the program you wrote, reviewing the code snippets and sharing lessons learned with the class would be an example of this step.

Generalization – The learner incorporates the experience and what was learned into their broader view of how their job should be performed. The lessons learned from writing the program adds to the base of coding and problem-solving knowledge for the learner.

The flow of work through a team using Scrum can be mapped to the experiential learning model. Small slices of work are accepted into a sprint, the team solves the business problem, reflects on what was learned and then uses what was learned to determine what work will be done next. The process follows the experience, reflection, generalization flow.

There are several versions of the three stage experiential learning model. Conceptually they are all similar, the differences tend to be how the stages are broken down. For example, Northern Illinois University breaks the reflection step into reflection and “what’s important” steps.

There are several pluses and minuses I have observed in applying experiential learning in transformation programs.


Pluses:

  1. Builds on and connects theory to the real world – Theory is often a dirty word in organizations. Experiential learning allows learners to experience a concept that can then be tied back to higher-level concepts such as theory.
  2. Experiences can be manufactured – Meaningful real-life examples can be designed to generate or focus on a specific concept. When I wrote my first assembler program in the LSU computer lab, I was assigned a specific project by my TA. This was an example of experiential learning.
  3. Can be coupled with other learning techniques – Experiential learning techniques can be combined with other learning strategies to meet logistical and cultural needs. For example classic lecture methods can be combined with experiential learning. My assembler class at LSU included lecture (theory) and lab (experiential) features.
  4. Individuals can apply experiential learning outside of the classroom – Motivated learners often apply the concept of experiential learning to add skills in a non-classroom environment when the skill may not be generally applicable to the team or organization. For example, I had an employee learn to write SQL when he got frustrated waiting for the support team to write queries for him. He learned by writing simple queries and debugging the results (he also used the internet for reference).


Minuses:

  1. Not perfectly scalable – Experiential learning in the classroom or organization tends to require facilitation. Facilitating large groups either requires multiple facilitators or breaking the group up into smaller groups, which extends the time it takes to deliver the training. Without good facilitation, experiential learning is less effective (just ask my wife about my skills facilitating her experience learning to drive a stick shift).
  2. Requires careful design – Experience, if not designed or facilitated well, can lead to learning the wrong lesson or to failures that impact the learner’s motivation.
  3. Reflection and generalization steps are often overlooked – The steps after the experience are occasionally not given the focus needed to draw out the concepts that were learned and then allow them to be incorporated into the broader process of how work is performed.

Can anyone learn to ride a bicycle from a book or from a lecture? No, but you can learn to ride a bicycle using experiential learning (in reality, it might be the only way). Experiential learning lets the learner try to ride the bike, fall and skin their knees, reflect on how to improve and then try again.

Categories: Process Management

Making a performant watch face

Android Developers Blog - Thu, 12/18/2014 - 22:27

Posted by Hoi Lam, Developer Advocate, Android Wear

What’s a better holiday gift than great performance? You’ve got a great watch face idea -- now, you want to make sure the face you’re presenting to the world is one of care and attention to detail.

At the core of the watch face's process is an onDraw method for canvas operations. This allows maximum flexibility for your design, but also comes with a few performance caveats. In this blog post, we will mainly focus on performance using the real life journey of how we optimised the Santa Tracker watch face, more than doubling the number of fps (from 18 fps to 42 fps) and making the animation sub-pixel smooth.

Starting point - 18 fps

Our Santa watch face contains a number of overlapping bitmaps that are used to achieve our final image. Here's a list of them from bottom to top:

  1. Background (static)
  2. Clouds which move to the middle
  3. Tick marks (static)
  4. Santa figure and sledge (static)
  5. Santa’s hands - hours and minutes
  6. Santa’s head (static)

The journey begins with these images...

Large images kill performance (+14 fps)

Image size is critical to performance in a Wear application, especially if the images will be scaled and rotated. Wasted pixel space (like Santa’s arm here) is a common asset mistake:

Before: 584 x 584 = 341,056 pixels. After: 48 x 226 = 10,848 pixels (a 97% reduction).

It's tempting to use bitmaps from the original mock up that have the exact location of watch arms and components in absolute space. Sadly, this creates problems, like in Santa's arm here. While the arm is in the correct position, even transparent pixels increase the size of the image, which can cause performance problems due to memory fetch. You'll want to work with your design team to extract padding and rotational information from the images, and rely on the system to apply the transformations on our behalf.

Since the original image covers the entire screen, even though the bitmap is mostly transparent, the system still needs to check every pixel to see if they have been impacted. Cutting down the area results in significant gains in performance. After correcting both of the arms, the Santa watch face frame rate increased by 10 fps to 28 fps (fps up 56%). We saved another 4 fps (fps up 22%) by cropping Santa’s face and figure layer. 14 fps gained, not bad!
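The savings from cropping are easy to quantify. A quick back-of-the-envelope computation, assuming 4 bytes per pixel (the common ARGB_8888 format):

```java
public class BitmapSavings {
    public static void main(String[] args) {
        int before = 584 * 584; // full-screen asset, mostly transparent padding
        int after = 48 * 226;   // cropped to just the visible arm
        System.out.println(before); // 341056 pixels
        System.out.println(after);  // 10848 pixels
        // At 4 bytes per pixel that is roughly 1.3 MB versus 42 KB of
        // memory the system has to fetch and blend on every frame.
        System.out.println(100 - (100 * after / before)); // 97 (% reduction)
    }
}
```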

Combine Bitmaps (+7 fps)

Although it would be ideal to have the watch tick marks on top of our clouds, it actually does not make much difference visually as the clouds themselves are transparent. Therefore there is an opportunity to combine the background with the ticks.


When we combined these two views together, it meant that the watch needed to spend less time doing alpha blending operations between them, saving precious GPU time. So, consider collapsing alpha blended resources wherever we can in order to increase performance. By combining two full screen bitmaps, we were able to gain another 7 fps (fps up 39%).

Anti-alias vs FilterBitmap flags - what should you use? (+2 fps)

Android Wear watches come in all shapes and sizes. As a result, it is sometimes necessary to resize a bitmap before drawing on the screen. However, it is not always clear what options developers should select to make sure that the bitmap comes out smoothly. With canvas.drawBitmap, developers need to feed in a Paint object. There are two important options to set - they are anti-alias and FilterBitmap. Here’s our advice:

  • Anti-alias does not do anything for bitmaps. As developers, we often switch on the anti-alias option by default when creating a Paint object, but it only really makes sense for vector objects. For bitmaps, it has no impact. The hand on the left below has anti-alias switched on; the one on the right has it switched off. So turn off anti-aliasing for bitmaps to gain performance back. For our watch face, we gained another 2 fps (fps up 11%) by switching this option off.
  • Switch on FilterBitmap for all bitmap objects which are on top of other objects - this option smooths the edges when drawBitmap is called. This should not be confused with the filter option on Bitmap.createScaledBitmap for resizing bitmaps. We need both to be turned on. The bitmaps below are the magnified view of Santa’s hand. The one on the left has FilterBitmap switched off and the one on the right has FilterBitmap switched on.
Eliminate expensive calls in the onDraw loop (+3 fps)

onDraw is the most critical function call in watch faces. It's called for every drawable frame, and the actual painting process cannot move forward until it's finished. As such, your onDraw method should be as light and as performant as possible. Here are some common problems that developers run into, all of which can be avoided:

1. Do move heavy and common code to a precompute function - e.g. if you commonly grab R.array.cloudDegrees, do that in onCreate and just reference it in the onDraw loop.
2. Don't repeat the same image transform in onDraw - it's common to resize bitmaps at runtime to fit the screen size, but the screen dimensions are not available in onCreate. To avoid resizing the bitmap over and over again in onDraw, override onSurfaceChanged, where width and height information are available, and resize images there.
3. Don't allocate objects in onDraw - this leads to high memory churn, which forces garbage collection events to kick off, killing frame rates.
4. Do analyze CPU performance with a tool such as the Android Device Monitor. It's important that onDraw execution time is short and occurs at a regular period.

Following these simple rules will improve rendering performance drastically.
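Rules 1-3 boil down to one pattern: do expensive work once in the lifecycle callbacks and only read precomputed state in the draw loop. A minimal sketch, in Python as a stand-in for the Java callbacks (class and method names are illustrative, not the Android API):

```python
class WatchFaceSketch:
    """Stand-in for an Android watch face's lifecycle callbacks."""

    def __init__(self):
        self.resource_loads = 0

    def load_int_array(self):
        # Stand-in for getResources().getIntArray(...) - expensive.
        self.resource_loads += 1
        return [0, 30, 60]

    def on_create(self):
        # Rule 1: grab static resources once.
        self.cloud_degrees = self.load_int_array()

    def on_surface_changed(self, width, height):
        # Rule 2: screen size is known here, so scale bitmaps once.
        self.surface = (width, height)

    def on_draw(self):
        # Rule 3: no allocation, no resource loads - reads only.
        return self.cloud_degrees[0], self.surface

face = WatchFaceSketch()
face.on_create()
face.on_surface_changed(320, 320)
for _ in range(100):  # 100 frames
    face.on_draw()
print(face.resource_loads)  # 1 - the expensive load ran once, not 100 times
```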

In the first version, the Santa onDraw routine had a rogue line:

    int[] cloudDegrees = getResources().getIntArray(R.array.cloudDegrees);

This loads the int array from resources on every call, which is expensive. By eliminating this (loading the array once in onCreate instead), we gained another 3 fps (fps up 17%).

Sub-pixel smooth animation (-2 fps)

For those keeping count, we should be at 44 fps, so why is the end product 42 fps? The reason is a limitation of canvas.drawBitmap. Although this call takes left and top positioning settings as floats, for backwards compatibility reasons the API actually only deals with integers when the transform is purely translational. As a result, the cloud can only move in increments of a whole pixel, resulting in janky animations. In order to be sub-pixel smooth, we need to draw and then rotate, rather than having pre-rotated clouds that move towards Santa. This additional rotation costs us 2 fps. However, the effect is worthwhile, as the animation is now sub-pixel smooth.
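For anyone checking the arithmetic, all of the quoted percentages are relative to the implied 18 fps starting point (28 fps minus the first 10 fps gain):

```python
baseline = 18  # implied starting frame rate
gains = [
    ("crop the arm bitmaps", 10),                  # 56% of baseline
    ("crop Santa's face and figure", 4),           # 22%
    ("combine background and ticks", 7),           # 39%
    ("turn off anti-alias for bitmaps", 2),        # 11%
    ("hoist the resource load out of onDraw", 3),  # 17%
]

for name, fps in gains:
    print(f"{name}: +{fps} fps ({fps / baseline:.0%})")

total = baseline + sum(fps for _, fps in gains)
print(total)      # 44 fps before the sub-pixel change
print(total - 2)  # 42 fps after spending 2 fps on smoothness
```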

Before - fast but janky and wobbly

    for (int i = 0; i < mCloudBitmaps.length; i++) {
        float r = centerX - (timeElapsed / mCloudSpeeds[i]) % centerX;
        float x = centerX +
            -1 * (r * (float) Math.cos(Math.toRadians(cloudDegrees[i] + 90)));
        float y = centerY -
            r * (float) Math.sin(Math.toRadians(cloudDegrees[i] + 90));
        mCloudFilterPaints[i].setAlpha((int) (r / centerX * 255));
        Bitmap cloud = mCloudBitmaps[i];
        canvas.drawBitmap(cloud,
            x - cloud.getWidth() / 2,
            y - cloud.getHeight() / 2,
            mCloudFilterPaints[i]);
    }

After - slightly slower but sub-pixel smooth

    for (int i = 0; i < mCloudBitmaps.length; i++) {
        canvas.save();
        canvas.rotate(mCloudDegrees[i], centerX, centerY);
        float r = centerX - (timeElapsed / mCloudSpeeds[i]) % centerX;
        mCloudFilterPaints[i].setAlpha((int) (r / centerX * 255));
        canvas.drawBitmap(mCloudBitmaps[i], centerX, centerY - r,
            mCloudFilterPaints[i]);
        canvas.restore(); // undo the rotation so it does not accumulate
    }

Before: integer translation values create janky, wobbly animation. After: smooth sailing!
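The wobble in the before version is easy to reproduce numerically: truncating a float position to an integer each frame (effectively what the backwards-compatible translation path does) makes a slow-moving cloud stall for several frames and then jump a whole pixel. The speed below is a made-up value:

```python
speed = 0.4  # hypothetical cloud movement in pixels per frame

float_positions = [i * speed for i in range(8)]    # what we compute
int_positions = [int(p) for p in float_positions]  # what gets drawn

print(int_positions)  # [0, 0, 0, 1, 1, 2, 2, 2] - stalls, then whole-pixel jumps
```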

Quality on every wrist

The watch face is the most prominent UI element in Android Wear. As craftspeople, it is our responsibility to make it shine. Let's put quality on every wrist!

Categories: Programming

The Sweet Spot of Customer Demand Meets Microsoft Supply

Here’s a simple visual that I whiteboard when I lead workshops for business transformation.


The Sweet Spot is where customer “demand” meets Microsoft “supply.”

I’m not a fan of product pushers or product pushing.  I’m a fan of creating “pull.”

In order for customers to pull-through any product, platform, or service, you need to know the customer’s pains, needs, and desired outcomes.  Without customer empathy, you’re not relevant.

This is a simple visual, but a powerful one.  

When you have good representation of the voice of the customer, you can really identify problems worth solving.  It always comes down to pains, needs, opportunities, and desired outcomes.  In short, I always just say pains, needs, and desired outcomes so that people can remember it easily.

To make it real, we use scenarios to tell a simple story of a customer’s pain, needs, and desired outcomes.   We use our friends in the field working with customers to give us real stories of real pain.  

Here is an example Scenario Narrative where a company is struggling in creating products that its customers care about …


As you can see, the Current State is a pretty good story of pain, that a lot of business leaders and product owners can identify with.  For some, it’s all too real, because it is their story and they can see themselves in it.

The Desired Future State is a pretty good story of what success would look like.  It paints a pretty simple picture of what would be an ideal scenario … a future possibility.

Here is an example of a Solution Storyboard, where we paint a simple picture of that Desired Future State, or more precisely, a Future Capability Vision.     It’s this Future Capability Vision that shows how, with the right capabilities, the customer can address their pains, needs, and desired outcomes.


The beauty of this approach is that it’s product and technology agnostic.   It’s all about building capabilities.

From there, with a  good understanding of the pains, needs, and desired outcomes, it’s super easy to overlay relevant products, technologies, consulting services, etc.

And then, rather than trying to do a product “push”, it becomes a product “pull” because it connects with customers in a very deep, very real, very relevant way.

Think “pull” not “push.”

You Might Also Like

Drive Business Transformation by Reenvisioning Operations

Drive Business Transformation by Reenvisioning Your Customer Experience

Dual-Speed IT Drives Business Transformation and Improves IT-Business Relationships

How Business Leaders are Building Digital Skills

How To Build a Roadmap for Your Digital Transformation

Categories: Architecture, Programming

Use Cases 101: Let’s Take an Uber

Software Requirements Blog - - Thu, 12/18/2014 - 17:30
I was recently asked to prepare a handout giving the basics of use cases for an upcoming training session. It struck me as odd that I needed to start from square one for a model that seemed standard. Use cases, once ubiquitous, have largely been replaced by process flows and other less text-heavy models. As […]
Categories: Requirements

Agility and the essence of software architecture

Coding the Architecture - Simon Brown - Thu, 12/18/2014 - 16:44

I'm just back from the YOW! conference tour in Australia (which was amazing!) and I presented this as the closing slide for my Agility and the essence of software architecture talk, which was about how to create agile software systems in an agile way.

Agility and the essence of software architecture

You will probably have noticed that software architecture sketches/diagrams form a central part of my lightweight approach to software architecture, and I thought this slide was a nice way to summarise the various things that diagrams and the C4 model enable, plus how this helps to do just enough up-front design. The slides are available to view online/download and hopefully one of the videos will be available to watch after the holiday season.

Categories: Architecture

Manage Your Project Portfolio is Featured in Colombia’s Largest Business Newspaper

Andy Hunt, the Pragmatic Bookshelf publisher, just sent me an email telling me that Manage Your Project Portfolio is featured in La República, Colombia's "first and most important business newspaper." That's because getabstract liked it!

Okay, my book is below the fold. It's in smaller print.

And, I have to say, I’m still pretty excited.

If your organization can’t decide which projects come first, second, third, whatever, or which features come first, second, third, whatever, you should get Manage Your Project Portfolio.

Categories: Project Management

Learning Hacks

Making the Complex Simple - John Sonmez - Thu, 12/18/2014 - 16:00

In this video, I talk about some of the hacks I use to get in extra learning time in my busy day. One of the main hacks I use is to listen to audio books whenever I am in the car or working out. I also read a book while walking on the treadmill every night.

The post Learning Hacks appeared first on Simple Programmer.

Categories: Programming

The Myth and Half-Truths of "Myths and Half-Truths"

Herding Cats - Glen Alleman - Thu, 12/18/2014 - 05:51

I love it when there is a post about Myths and Half-Truths that is itself full of myths and half-truths. Original text in italics, with the myth busted or half-truth turned into actual truth below each.

Myth: If your estimates are a range of dates, you are doing a good job managing expectations.

  • Only the earlier, lower, more magical numbers will be remembered. And those will be accepted as firm commitments.

If this is the case, the communication process is broken. It's not the fault of the estimate or the estimator that Dilbert style management is in place. If this condition persists, better look for work somewhere else, because there are much bigger systemic problems there.

  • The lower bound is usually set at "the earliest date the project can possibly be completed". In other words, there is absolutely no way the work can be completed any earlier, even by a day. What are the chances of hitting that exact date? Practice shows - close to nil. 

The lower bound, most likely value, and upper bound must each have a confidence level. An early date with 10% confidence says to the reader that there is no hope of making it. But the early date alone gets stuck in the mind through confirmation bias.

Never ever communicate a date, a cost, a capability in the absence of the confidence level for that number.

Making the estimate and managing expectations have no connection to each other. Poor estimates make it difficult to manage expectations, because those receiving the estimates lose confidence in the estimator when the estimates have no connection with the actual outcomes of the project.

Half-truth: You can control the quality of your estimate by putting more work into producing this estimate.

It's not the work that is needed, it's the proper work. Never confuse effort with results. Use Reference Class Forecasting, parametric models, and similar estimating tools, along with knowledge of the underlying uncertainties of the processes, technology, and people connected with delivering the outcomes of their efforts.

  • By spending some time learning about the project, researching resources available, considering major and minor risks, one can produce a somewhat better estimate.
  • The above activities are only going to take the quality of the estimate so far.  Past a certain point, no matter how much effort goes into estimating a project, the quality of the estimate is not going to improve. Then the best bet is to simply start working on the project.

Of course this ignores the very notion of subject matter expertise. Why are you asking someone who only knows one thing - a one-trick pony - to work on your new project? This is naive hiring and management.

This would be like hiring a commercial builder with little or no understanding of modern energy-efficient building, solar power and heating, thermal windows, and high-efficiency HVAC systems, and asking him to build a LEED-compliant office building.

Why would you do that? Why would you hire software developers that had little understanding of where technology is going? Don’t they read, attend conferences, and look at the literature to see what’s coming up? 

Myth: People can learn to estimate better, as they gain experience.

People can learn to estimate as they gain skill, knowledge, and experience. All three are needed. Experience alone is necessary but far from sufficient. Experience doing it wrong doesn't lead to improvement; it only confirms that bad estimates are the outcome. There is a nearly endless supply of books, papers, and articles on how to properly estimate. Read, take a course, talk to experts, listen, and you'll be able to determine where you are going wrong. Then your experience will be of value, beyond confirming you know how to do it wrong.

  • It is possible to get better at estimating – if one keeps estimating the same task, which becomes known and familiar with experience. This is hardly ever the case in software development. Every project is different, most teams are brand new, and technology is moving along fast enough.

That's not the way to improve anything in the software development world. If you wrote code for a single function over and over again, you'd be a one-trick pony.

The notion that projects are always new is better said as projects are new to me. Fix that by finding someone who knows what the new problem is about and hiring them to help you. Like the builder above, don't embark on a project where you don't have some knowledge of what to do, how to do it, what problems will be encountered, and what their solutions are. That's a great way to waste your customer's money and join a Death March project.

    • Do not expect to produce a better estimate for your next project than you did for your last one.

Did you not keep a record of what you did last time? Were you paying attention to what happened? No wonder your outcomes are the same.

    • By the same token, do not expect a worse estimate. The quality of the estimate is going to be low, and it is going to be random.

As Inigo Montoya says: You keep using that word. I do not think it means what you think it means. All estimates are random variables. These random variables come from an underlying statistical process - a Reference Class - of uncertainty about our work. Some uncertainties are reducible, some are irreducible. Making decisions in the presence of this uncertainty is called risk management. And as Tim Lister says, Risk Management is how Adults Manage Projects.

Half-truth: it is possible to control the schedule by giving up quality.

There are trade-offs between cost, schedule, and performance - quality is a performance measure in our software-intensive systems domain, along with many other ...ilities. So certainly the schedule can be controlled by trading off quality.

  • Only for short-term, throw-away temporary projects.

This is not a myth, it's all too common. I'm currently the Director of Program Governance in a place where this decision was made long ago, and we're still paying the price for that decision.

  • For most projects, aiming for lower quality has a negative effect on the schedule.


Categories: Project Management

Training Strategies for Transformations and Change: Active Learning

Group discussion is an active learning technique.

Change is often premised on people learning new methods, frameworks or techniques.  For any change to be implemented effectively, change agents need to understand the most effective way of helping learners learn.  Active learning is a theory of teaching based on the belief that learners should take responsibility for their own learning.  Techniques that support this type of teaching exist on a continuum that ranges from merely fostering active listening, through interactive lectures, to using investigative inquiry techniques. Learning using games is a form of active learning (see examples of Agile games).  Using active learning requires understanding the four basic elements of active learning, participants' responsibilities and keys to success.

There are four basic elements of active learning that need to be worked into content delivery.

  1. Talking and listening – The act of talking about a topic helps learners organize, synthesize and reinforce what they have learned.
  2. Writing – Writing provides a mechanism for students to process information (similar to talking and listening). Writing can be used when groups are too large for group- or team-level interaction or are geographically distributed.
  3. Reading – Reading provides the entry point for new ideas and concepts. Coupling reading with other techniques such as writing (e.g. generating notes and summaries) improves learner’s ability to synthesize and incorporate new concepts.
  4. Reflecting – Reflection provides learners with time to synthesize what they have learned. For example, providing learners with time to reflect on how they would teach or answer questions about the knowledge gained in a game or exercise helps increase retention.

Both learners and teachers have responsibilities when using active learning methods. Learners have the responsibility to:

  1. Be motivated – The learner needs to have a goal for learning and be willing to expend the effort to reach that goal.
  2. Participate in the community – The learner needs to engage with other learners in games, exercises and discussions.
  3. Be able to accept, shape and manage change – Learning is change; the learner must be able to incorporate what they have learned into how they work.

While, by definition, active learning shifts the responsibility for learning to the learner, not all of the responsibility rests on the learner. Teachers and organizations have the responsibility to:

  1. Set goals – The teacher or organization needs to define or identify the desired result of the training.
  2. Design curriculum – The trainer (or curriculum designer) needs to ensure they have designed the courseware needed to guide the learner’s transformations.
  3. Provide facilitation – The trainer needs to provide encouragement and help make the learning process easier.

As a trainer in an organization pursuing a transformation, there are several keys to successfully using active learning.

  1. Use creative events (games or other exercises) that generate engagement.
  2. Incorporate active learning in a planned manner.
  3. Make sure the class understands the process being used and how it will benefit them.
  4. In longer classes, vary pair, team or group membership to help expose learners to as diverse a set of points-of-view as possible.
  5. All exercises should conclude with a readout/presentation of results to the class.  Varying the approach taken during the readout (have different people present, ask different questions) helps focus learner attention.
  6. Negotiate a signal for students to stop talking. (Best method: The hand raise, where when the teacher raises his or her hand everyone else raises their hand and stops talking.)

While randomly adding a discussion exercise at the end of a lecture module technically uses an active learning technique, it is not a reflection of an effective approach to active learning.  When building a class or curriculum that intends to use active learning, the games and exercises that are selected need to be carefully chosen to elicit the desired learning impact.

Categories: Process Management

Dynamic DNS updates with nsupdate (new and improved!)

Agile Testing - Grig Gheorghiu - Wed, 12/17/2014 - 23:14
I blogged about this topic before. This post shows a slightly different way of using nsupdate remotely against a DNS server running BIND 9 in order to programmatically update DNS records. The scenario I am describing here involves an Ubuntu 12.04 DNS server running BIND 9 and an Ubuntu 12.04 client running nsupdate against the DNS server.

1) Run ddns-confgen and specify /dev/urandom as the source of randomness and the name of the zone file you want to dynamically update via nsupdate:

$ ddns-confgen -r /dev/urandom -z

# To activate this key, place the following in named.conf, and
# in a separate keyfile on the system or systems from which nsupdate
# will be run:
key "" {
algorithm hmac-sha256;
secret "1D1niZqRvT8pNDgyrJcuCiykOQCHUL33k8ZYzmQYe/0=";
};

# Then, in the "zone" definition statement for "",
# place an "update-policy" statement like this one, adjusted as
# needed for your preferred permissions:
update-policy {
 grant zonesub ANY;
};

# After the keyfile has been placed, the following command will
# execute nsupdate using this key:
nsupdate -k <keyfile>

2) Follow the instructions in the output of ddns-confgen (above). I actually named the key just ddns-key, since I was going to use it for all the zones on my DNS server. So I added this stanza to /etc/bind/named.conf on the DNS server:

key "ddns-key" {
algorithm hmac-sha256;
secret "1D1niZqRvT8pNDgyrJcuCiykOQCHUL33k8ZYzmQYe/0=";
};

3) Allow updates when the key ddns-key is used. In my case, I added the allow-update line below to all zones that I wanted to dynamically update, not only to

zone "" {
        type master;
        file "/etc/bind/zones/";
        allow-update { key "ddns-key"; };
};

At this point I also restarted the bind9 service on my DNS server.

4) On the client box, create a text file containing nsupdate commands to be sent to the DNS server. In the example below, I want to dynamically add both an A record and a reverse DNS PTR record:

$ cat update_dns1.txt
debug yes
update add 3600 A
update add 3600 PTR

Still on the client box, create a file containing the stanza with the DDNS key generated in step 1:

$ cat ddns-key.txt
key "ddns-key" {
algorithm hmac-sha256;
secret "Wxp1uJv3SHT+R9rx96o6342KKNnjW8hjJTyxK2HYufg=";
};

5) Run nsupdate and feed it both the update_dns1.txt file containing the commands, and the ddns-key.txt file:

$ nsupdate -k ddns-key.txt -v update_dns1.txt

You should see some fairly verbose output, since the command file specifies 'debug yes'. At the same time, tail /var/log/syslog on the DNS server and make sure there are no errors.
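If you script these updates regularly, the command file is easy to generate. Here is a small Python helper; the function name and example records are mine, not part of nsupdate:

```python
def nsupdate_script(records, ttl=3600, debug=True):
    """Build the text of an nsupdate command file.

    records: iterable of (name, rdtype, value) tuples, e.g.
    ("host1.example.com.", "A", "10.0.0.5").
    """
    lines = ["debug yes"] if debug else []
    for name, rdtype, value in records:
        lines.append(f"update add {name} {ttl} {rdtype} {value}")
    lines.append("send")  # tell nsupdate to submit the batch
    return "\n".join(lines) + "\n"

print(nsupdate_script([
    ("host1.example.com.", "A", "10.0.0.5"),
    ("", "PTR", "host1.example.com."),
]))
```

Write the output to a file and feed it to nsupdate -k ddns-key.txt -v as before.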

In my case, there were some hurdles I had to overcome on the DNS server. The first one was that apparmor was installed and it wasn't allowing the creation of the journal files used to keep track of DDNS records. I saw lines like these in /var/log/syslog:

Dec 16 11:22:59 dns1 kernel: [49671335.189689] type=1400 audit(1418757779.712:12): apparmor="DENIED" operation="mknod" parent=1 profile="/usr/sbin/named" name="/etc/bind/zones/" pid=31154 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107
Dec 16 11:22:59 dns1 kernel: [49671335.306304] type=1400 audit(1418757779.828:13): apparmor="DENIED" operation="mknod" parent=1 profile="/usr/sbin/named" name="/etc/bind/zones/" pid=31153 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107

To get past this issue, I disabled apparmor for named:

# ln -s /etc/apparmor.d/usr.sbin.named /etc/apparmor.d/disable/
# service apparmor restart

The next issue was an OS permission denied (nothing to do with apparmor) when trying to create the journal files in /etc/bind/zones:

Dec 16 11:30:54 dns1 named[32640]: /etc/bind/zones/ create: permission denied
Dec 16 11:30:54 dns named[32640]: /etc/bind/zones/ create: permission denied

I got past this issue by running

# chown -R bind:bind /etc/bind/zones

At this point everything worked as expected.

The Big Problem is Medium Data

This is a guest post by Matt Hunt, who leads open source projects for Bloomberg LP R&D. 

“Big Data” systems continue to attract substantial funding, attention, and excitement. As with many new technologies, they are neither a panacea, nor even a good fit for many common uses. Yet they also hold great promise. The question is, can systems originally designed to serve hundreds of millions of requests for something like web pages also work for requests that are computationally expensive and have tight tolerances?

Modern era big data technologies are a solution to an economics problem faced by Google and other Internet giants a decade ago. Storing, indexing, and responding to searches against all web pages required tremendous amounts of disk space and computer power. Very powerful machines, fast SAN storage, and data center space were prohibitively expensive. The solution was to pack cheap commodity machines as tightly together as possible with local disks.

This addressed the space and hardware cost problem, but introduced a software challenge. Writing distributed code is hard, and with many machines comes many failures. So a framework was also required to take care of such problems automatically for the system to be viable.


Right now, we're in a transition phase in computing, a phase that began with the arrival of Hadoop and its community starting in 2004. Understanding why and how these systems were created also offers insight into some of their weaknesses.

At Bloomberg, we don't have a big data problem. What we have is a "medium data" problem -- and so does everyone else. Systems such as Hadoop and Spark are less efficient and mature for these typical low-latency enterprise uses in general. High core counts, SSDs, and large RAM footprints are common today, but many of the commodity platforms have yet to take full advantage of them, and challenges remain. A number of distributed components are further hampered by Java, which creates its own complications for low-latency performance.

A practical use case
Categories: Architecture

Training Strategies for Transformations and Change

Presentations are just one learning strategy.

How many times have you sat in a room, crowded around tables, perhaps taking notes on your laptop or maybe doing email while someone drones away about the newest process change? All significant organizational transformations require the learning and adoption of new techniques, concepts and methods. Agile transformations are no different. For example, a transformation from waterfall to Scrum will require developing an understanding of Agile concepts and Scrum techniques. Four types of high-level training strategies are often used to support process improvement, such as Agile transformations. They are:

  1. Classic Lecture/Presentation – A presenter stands in front of the class and presents information to the learners. In most organizations the classic classroom format is used in conjunction with a PowerPoint deck, which provides counterpoint and support for the presenter. The learner's role is to take notes from the lecture, from interactions between the class and presenter, and from the presentation material, and then to synthesize what they have heard. Nearly everyone in an IT department is familiar with this type of training from attending college or university. An example in my past was Psychology 101 at the University of Iowa with 500+ of my closest friends. I remember the class because it was at 8 AM and because of the people sleeping in the back row. While I do not remember anything about the material many years later, this technique is often very useful for broadly introducing concepts. This method is often hybridized with other strategies to more deeply implement techniques and methods.
  2. Active Learning – A strategy based on the belief that learners should take responsibility for their own learning. Learners are provided with a set of activities that keep them busy doing what is to be learned while analyzing, synthesizing and evaluating. Learners must do more than just listen and take notes. The teacher acts as a facilitator and helps ensure the learning activities include techniques such as Agile games, working in pairs/groups, role-playing with discussion of the material and other co-operative learning techniques. Active learning and lecture techniques are often combined.
  3. Experiential Learning – Experiential learning is learning from experience. The learner performs tasks, reflects on performance and possibly suffers the positive or negative consequences of making mistakes or being successful. The theory behind experiential learning is that for learning to be internalized, the experience needs to be engaged concretely and actively rather than theoretically or purely reflectively. Process improvement based on real-time experimentation in the Toyota Production System is a type of experiential learning.
  4. Mentoring – Mentoring is a process that uses one-on-one coaching and leadership to help a learner do concrete tasks with some form of support. Mentoring is a form of experience-based learning, most attributable to the experiential learning camp. Because mentoring is generally a one-on-one technique, it is generally not scalable for large-scale change programs.

The majority of change agents have not been educated in adult learning techniques and leverage classic presentation/lecture techniques spiced with exercises. However, each of these high-level strategies has value and can be leveraged to help build the capacity and capabilities for change.

Categories: Process Management