
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Ten at Ten Meetings

Ten at Ten meetings are a very simple tool for helping teams stay focused, stay connected, and collaborate more effectively, the Agile way.

I’ve been leading distributed teams and v-teams for years. I needed a simple way to keep everybody on the same page, expose issues, and help everybody on the team increase their awareness of results and progress, as well as unblock and break through blocking issues.

Why Ten at Ten Meetings?

When people are remote, it’s easy to feel disconnected, and it’s easy for people to start to seem like a “black box” or to “go dark.”

Ten at Ten Meetings have been my friend and have helped me help everybody on the team stay in sync and appreciate each other’s work, while finding better ways to team up and drive results collaboratively.  I believe I started Ten at Ten Meetings back in 2003 (before that, I wasn’t as consistent … I think 2003 is when I realized that a quick sync each day keeps the “black box” away.)

Overview of Ten at Ten Meetings

I’ve written about Ten at Ten Meetings before in my posts on How To Lead High-Performance Distributed Teams, How I Use Agile Results, Interview on Timeboxing for HBR (Harvard Business Review), Agile Results Works for Teams and Leaders Too,  and 10 Free Leadership Tools for Work and Life, but I thought it would be helpful to summarize some of the key information at a glance.

Here is an overview of Ten at Ten Meetings:

This is one of my favorite tools for reducing email and administration overhead and getting everybody on the same page fast.  It's simply a stand-up meeting.  I tend to have them at 10:00, and I set a limit of 10 minutes.  This way people look forward to the meeting as a way to very quickly catch up with each other, and to stay on top of what's going on, and what's important.  The way it works is I go around the (virtual) room, and each person identifies what they got done yesterday, what they're getting done today, and any help they need.  It's a fast process, although it can take practice in the beginning.  When I first started, I had to get in the habit of hanging up on people if it went past 10 minutes.  People very quickly realized that the ten minute meeting was serious.  Also, as issues came up, if they weren't fast to solve on the fly and felt like a distraction, then we had to learn to take them offline.  Eventually, this helped build a case for a recurring team meeting where we could drill deeper into recurring issues or patterns, and focus on improving overall team effectiveness.

3 Steps for Ten at Ten Meetings

Here is more of a step-by-step approach:

  1. I schedule ten minutes Monday through Thursday, at whatever time the team can agree to, but in the morning (no meetings on Friday).
  2. During the meeting, we go around and ask three simple questions:  1)  What did you get done?  2) What are you getting done today? (focused on Three Wins), and 3) Where do you need help?
  3. We focus on the process (the 3 questions) and the timebox (10 minutes) so it’s a swift meeting with great results.   We put issues that need more drill-down or exploration into a “parking lot” for follow up.  We focus the meeting on status and clarity of the work, the progress, and the impediments.

You’d be surprised at how quickly people start to pay attention to what they’re working on and on what’s worth working on.  It also helps team members very quickly see each other’s impact and results.  It also helps people raise their bar, especially when they get to hear  and experience what good looks like from their peers.

Most importantly, it shines the light on little, incremental progress, and, if you didn’t already know, progress is the key to happiness in work and life.

You Might Also Like

10 Free Leadership Tools for Work and Life

How I Use Agile Results

How To Lead High-Performance Distributed Teams

Categories: Architecture, Programming

Soft Skills is the Manning Deal of the Day!

Making the Complex Simple - John Sonmez - Thu, 09/11/2014 - 15:00

Great news! Early access to my new book Soft Skills: The Software Developer’s Life Manual is on sale today (9/11/2014) only as Manning’s deal of the day! If you’ve been thinking about getting the book, now is probably the last chance to get it at a discounted rate. Just use the code dotd091114au to get the […]

The post Soft Skills is the Manning Deal of the Day! appeared first on Simple Programmer.

Categories: Programming

Open as Many Doors as Possible

Making the Complex Simple - John Sonmez - Thu, 09/11/2014 - 15:00

Even though specialization is important, it doesn’t mean you shouldn’t strive to open up as many doors as possible in your life.

The post Open as Many Doors as Possible appeared first on Simple Programmer.

Categories: Programming

Software Development Linkopedia September 2014

From the Editor of Methods & Tools - Thu, 09/11/2014 - 09:36
Here is our monthly selection of interesting knowledge material on programming, software testing and project management.  This month you will find some interesting information and opinions about the software developer condition, scaling Agile, technical debt, behavior-driven development, Agile metrics, UX (user eXperience), NoSQL databases and software design. Blog: The Developer is Dead, Long Live the Developer Blog: Scaling Agile at Gilt Blog: Technical debt 101 Blog: Behaviour Driven Development: Tips for Writing Better Feature Files Article: Acceptance Criteria – Demonstrate Them by Drawing a House Article: Actionable Metrics At Siemens Health Services Article: Adapting Scrum to a ...

Xcode 6 GM & Learning Swift (with the help of Xebia)

Xebia Blog - Thu, 09/11/2014 - 07:32

I guess reiterating the announcements Apple made on Tuesday is not needed.

What is most interesting to me about everything that happened on Tuesday is that iOS 8 has now reached GM status and Apple has sent out the call to submit your iOS 8 uploads to iTunes Connect. iOS 8 is around the corner, about a week from now, bringing some great new features to the platform and ... Swift.

Swift

I was thinking about putting together a list of excellent links about Swift. But obviously somebody has done that already
https://github.com/Wolg/awesome-swift
(And best of all, if you find or notice an epic Swift resource out there, submit a pull request to that repo, or leave a comment on this blog post.)

If you are getting started, check out:
https://github.com/nettlep/learn-swift
It's a GitHub repo filled with extra Playgrounds to learn Swift in a hands-on manner. It elaborates a bit further on the later chapters of the Swift language book.

But the best way to learn Swift I can come up with is to join Xebia for a day (or two) and attend one of our special-purpose update trainings hosted by Daniel Steinberg on 6 and 7 November. More info on that:

Booting a Raspberry Pi B+ with the Raspbian Debian Wheezy image

Agile Testing - Grig Gheorghiu - Thu, 09/11/2014 - 01:32
It took me a while to boot my brand new Raspberry Pi B+ with a usable Linux image. I chose the Raspbian Debian Wheezy image available on the downloads page of the official raspberrypi.org site. Here are the steps I needed:

1) Bought a micro SD card. Note: DO NOT get a regular SD card for the B+ because it will not fit in the SD card slot. You need a micro SD card.

2) Inserted the SD card via an SD USB adaptor in my MacBook Pro.

3) Went to the command line and ran df to see which volume the SD card was mounted as. In my case, it was /dev/disk1s1.

4) Unmounted the SD card. I initially tried 'sudo umount /dev/disk1s1' but the system told me to use 'diskutil unmount', so the command that worked for me was:

diskutil unmount /dev/disk1s1

5) Used dd to copy the Raspbian Debian Wheezy image (which I previously downloaded) per these instructions. Important note: the target of the dd command is /dev/disk1 and NOT /dev/disk1s1. I tried initially with the latter, and the Raspberry Pi wouldn't boot (one of the symptoms that something was wrong other than the fact that nothing appeared on the monitor, was that the green light was solid and not flashing; a google search revealed that one possible cause for that was a problem with the SD card). The dd command I used was:

dd if=2014-06-20-wheezy-raspbian.img of=/dev/disk1 bs=1m

6) At this point, I inserted the micro SD card into the SD slot on the Raspberry Pi, then connected the Pi to a USB power cable, a monitor via an HDMI cable, a USB keyboard and a USB mouse. I was able to boot and change the password for the pi user. The sky is the limit next ;-)




Community of Practice: Owning The Message

Sometimes controlling the message is important!

The simplest definition of a community of practice (COP) is people connecting, encouraging each other and sharing ideas and experiences. The power of COPs is generated by the interchange between people in a way that helps both the individuals and the group achieve their goals. Who owns the message that the COP focuses on will affect how well that interchange occurs. Ownership, viewed in a black-and-white mode, generates two kinds of COPs. In the first type of COP, the group owns the message. In the second, the organization owns the message. “A Community of Practice: An Example” described a scenario in which the organization created a COP for a specific practice and made attendance mandatory. The inference in this scenario is that the organization is using the COP to deliver a message. The natural tendency is to view COPs in which the organization controls the message and membership as delivering less value.

Organizational ownership of a COP’s message and membership is generally viewed as an anti-pattern. The problem is that ownership and control of membership can impair the COP’s ability to:

  1. Connect like-minded colleagues and peers
  2. Share experiences safely if they do not conform to the organization’s message
  3. Innovate and to create new ideas that are viewed as outside-the-box.

The exercise of control constrains the COP’s focus and, in an organization implementing concepts such as self-organizing and self-managing teams (Agile concepts), sends very mixed messages to the organization.

The focus that control generates can be used to implement, reinforce and institutionalize new ideas that are being rolled out on an organizational basis. Control of message and membership can:

  1. Accelerate learning by generating focus
  2. Validate and build on existing knowledge, the organization’s message
  3. Foster collaboration and consistency of process

In the short run, this behavior may well be a beneficial mechanism to deliver and then reinforce the organization’s message.  However, the positives that the constraints generate will quickly be overwhelmed once the new idea loses its bright and shiny status.

In organizations that use top-down process improvement methods, COPs can be used to deliver a message and then to reinforce the message as implementation progresses. However, as soon as institutionalization begins, the organization should get out of the COP control business. This does not mean that support, such as providing budget and logistics, should be withdrawn.  Support does not have to equate to control. Remember that control might be effective in the short run; however, COPs in which the message and membership are explicitly controlled will not, in the long term, be able to evolve and support their members effectively.


Categories: Process Management

10 Common Server Setups For Your Web Application

If you need a good overview of different ways to setup your web service then Mitchell Anicas has written a good article for you: 5 Common Server Setups For Your Web Application.

We've even included a few additional possibilities at no extra cost.

  1. Everything on One Server. Simple. Potential for poor performance because of resource contention. Not horizontally scalable. 
  2. Separate Database Server. There's an application server and a database server. Application and database don't share resources. Can independently vertically scale each component. Increases latency because the database is a network hop away.
  3. Load Balancer (Reverse Proxy). Distribute workload across multiple servers (a toy round-robin sketch follows this list). Native horizontal scaling. Protection against DDoS attacks using rules. Adds complexity. Can be a performance bottleneck. Complicates issues like SSL termination and sticky sessions.
  4. HTTP Accelerator (Caching Reverse Proxy). Caches web responses in memory so they can be served faster. Reduces CPU load on web server. Compression reduces bandwidth requirements. Requires tuning. A low cache-hit rate could reduce performance. 
  5. Master-Slave Database Replication. Can improve read and write performance. Adds a lot of complexity and failure modes.
  6. Load Balancer + Cache + Replication. Combines load balancing of the caching servers and the application servers, along with database replication. Nice explanation in the article.
  7. Database-as-a-Service (DBaaS). Let someone else run the database for you.  RDS is one example from Amazon and there are hosted versions of many popular databases.
  8. Backend as a Service (BaaS). If you are writing a mobile application and you don't want to deal with the backend component then let someone else do it for you. Just concentrate on the mobile platform. That's hard enough. Parse and Firebase are popular examples, but there are many more.
  9. Platform as a Service (PaaS). Let someone else run most of your backend, but you get more flexibility than you have with BaaS to build your own application. Google App Engine, Heroku, and Salesforce are popular examples, but there are many more.
  10. Let Someone Else Do It. Do you really need servers at all? If you have a store then a service like Etsy saves a lot of work for very little cost. Does someone already do what you need done? Can you leverage it?
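
As a toy illustration of the load balancing in setup 3, the sketch below shows round-robin selection over a list of backends. Real load balancers (HAProxy, Nginx, ELB) add health checks, SSL termination, sticky sessions and much more; this only shows the core distribution step, and the backend addresses are made up.

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinBalancer {
    private final List<String> backends; // e.g. "10.0.0.2:8080", "10.0.0.3:8080" (made up)
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinBalancer(List<String> backends) {
        this.backends = backends;
    }

    // Returns the backend that should handle the next request.
    public String pick() {
        int index = Math.floorMod(next.getAndIncrement(), backends.size());
        return backends.get(index);
    }
}
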
Categories: Architecture

Cloud Changes the Game from Deployment to Adoption

Before the Cloud, there was a lot of focus on deployment, as if deployment was success. 

Once you shipped the project, it was time to move on to the next project.  And project success was measured in terms of “on time” and “on budget.”   If you could deploy things quickly, you were a super shipper.

Of course, what we learned was that if you simply throw things over the wall and hope they stick, it’s not very successful.

"If you build it" ... users don't always come.

It was easy to confuse shipping projects on time and on budget with business impact.  

But let's compound the problem. 

The Development Hump

The big hump of software development was the hump in the middle: a big development hump.  And that hump was followed by a big deployment hump (installing software, fixing issues, dealing with deployment hassles, etc.)

So not only were development cycles long, but deployment was tough, too.

Because development cycles were long, and deployment was so tough, it was easy to confuse effort for value.

Cloud Changes the Hump

Now, let's turn it around.

With the Cloud, deployment is simplified.  You can reach more users, and it's easier to scale.  And it's easier to be available 24x7.

Add Agile to the mix, and people ship smaller, more frequent releases.

So with smaller, more-frequent releases, and simpler deployment, some software teams have turned into shipping machines.

The Cloud shrinks the development and deployment humps.

So now the game is a lot more obvious.

Deployment doesn't mark the finish.  It starts the game.

The real game of software success is adoption.

The Adoption Hump is Where the Benefits Are

If you picture the old IT project hump, with its long development cycle in the middle, you now see shorter humps in the middle.

The big hump is now user adoption.

It’s not new.  It was always there.   But the adoption hump was hidden behind the development and deployment humps, and simply written off as “Value Leakage.”

And even if you made it over the first two humps, adoption was mostly an afterthought, since most projects did not plan or design for adoption, or allocate any resources or time to it.

And so the value leaked.

But the adoption hump is where the business benefits are.   The ROI is sitting there, gathering dust, in our "pay-for-play" world.   The value is simply waiting to be released and unleashed. 

Software solutions are sitting idle waiting for somebody to realize the value.

Accelerate Value by Accelerating Adoption

All of the benefits to the business are locked up in that adoption hump.   All of the benefits around how users will work better, faster, or cheaper, or how you will change the customer interaction experience, or how back-office systems will be better, faster, cheaper ... they are all locked up in that adoption hump.

As I said before, the key to Value Realization is adoption.  

So if you want to realize more value, drive more user adoption. 

And if you want to accelerate value, then accelerate user adoption.

In Essence …

In a Cloud world, the original humps of design, development, and deployment shrink.   But it’s not just time and effort that shrink.  Costs shrink, too.   With online platforms to build on (Infrastructure as a Service, Platforms as a Service, and Software as a Service), you don’t have to start from scratch or roll your own.   And if you adopt a configure before customize mindset, you can further reduce your costs of design and development.

Architecture moves up the stack from basic building blocks to composition.

And adoption is where the action is.  

What was an afterthought in the last generation of solutions is now front and center. 

In the new world, adoption is a planned spend, and it’s core to the success of the planned value delivery.

If you want to win the game, think “Adoption-First.”

You Might Also Like

Continuous Value Delivery the Agile Way

How Can Enterprise Architects Drive Business Value the Agile Way?

How To Use Personas and Scenarios to Drive Adoption and Realize Value

Categories: Architecture, Programming

Conference Data Sync and GCM in the Google I/O App

Android Developers Blog - Wed, 09/10/2014 - 15:57
By Bruno Oliveira, tech lead of the 2014 Google I/O mobile app

Keeping data in sync with the cloud is an important part of many applications, and the Google I/O App is no exception. To do this, we leverage the standard Android mechanism for this purpose: a Sync Adapter. Using a Sync Adapter has many benefits over using a more rudimentary mechanism such as setting up recurring alarms, because the system automatically handles the scheduling of Sync Adapters to optimize battery life.

We store the data in a local SQLite database. However, rather than having the whole application access that database directly, the application employs another standard Android mechanism to control and organize access to that data. This structure is, naturally, a Content Provider. Only the content provider's implementation has direct access to the SQLite database. All other parts of the app can only access data through the Content Resolver. This allows for a very flexible decoupling between the representation of the data in the database and the more abstract view of that data that is used throughout the app.
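
To make that decoupling concrete, here is a minimal, hypothetical sketch of how app code reads conference data through the ContentResolver rather than touching SQLite directly. The content URI and column names are placeholders for illustration, not the app's real contract definitions.

import android.content.Context;
import android.database.Cursor;
import android.net.Uri;

public class SessionTitles {
    // Hypothetical content URI; the real app defines its URIs in a contract class.
    private static final Uri SESSIONS_URI =
            Uri.parse("content://com.example.iosched/sessions");

    public static void printSessionTitles(Context context) {
        // The query goes through the ContentResolver, never the SQLite database itself.
        Cursor cursor = context.getContentResolver().query(
                SESSIONS_URI,
                new String[] {"session_id", "session_title"}, // projection (assumed columns)
                null, null,                                   // no selection
                "session_title ASC");                         // sort order
        if (cursor == null) {
            return;
        }
        try {
            while (cursor.moveToNext()) {
                System.out.println(cursor.getString(1));
            }
        } finally {
            cursor.close();
        }
    }
}
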

The I/O app maintains two main kinds of data: conference data (sessions, speakers, rooms, etc.) and user data (the user's personalized schedule). Conference data is kept up to date with a one-way sync from a set of JSON files stored in Google Cloud Storage, whereas user data goes through a two-way sync with a file stored in the user's Google Drive AppData folder.

Downloading Conference Data Efficiently

For a conference like Google I/O, conference data can be somewhat large. It consists of information about all the sessions, rooms, speakers, map locations, social hashtags, video library items and others. Downloading the whole data set repeatedly would be wasteful both in terms of battery and bandwidth, so we adopt a strategy to minimize the amount of data we download and process.

This strategy is separating the data into several different JSON files, and having them be referenced by a central master JSON file called the manifest file. The URL of the manifest file is the only URL that is hard-coded into the app (it is defined by the MANIFEST_URL constant in Config.java). Note that the I/O app uses Google Cloud Storage to store and serve these files, but any robust hosting service accessible via HTTP can be used for the same purpose.

The first part of the sync process is checking whether the manifest file has changed since the app last downloaded it, and processing it only if it's newer. This logic is implemented by the fetchConferenceDataIfNewer method in RemoteConferenceDataFetcher.

public class RemoteConferenceDataFetcher {
    // (...)
    public String[] fetchConferenceDataIfNewer(String refTimestamp) throws IOException {
        BasicHttpClient httpClient = new BasicHttpClient();
        httpClient.setRequestLogger(mQuietLogger);
        // (...)

        // Only download if data is newer than refTimestamp
        if (!TextUtils.isEmpty(refTimestamp) && TimeUtils
                .isValidFormatForIfModifiedSinceHeader(refTimestamp)) {
            httpClient.addHeader("If-Modified-Since", refTimestamp);
        }

        HttpResponse response = httpClient.get(mManifestUrl, null);
        int status = response.getStatus();
        if (status == HttpURLConnection.HTTP_OK) {
            // Data modified since we last checked -- process it!
        } else if (status == HttpURLConnection.HTTP_NOT_MODIFIED) {
            // data on the server is not newer than our data - no work to do!
            return null;
        } else {
            // (handle error)
        }
    }
    // (...)
}

Notice that we submit the HTTP If-Modified-Since header with our request, so that if the manifest hasn't changed since we last checked it, we will get an HTTP response code of HTTP_NOT_MODIFIED rather than HTTP_OK, and we will react by skipping the download and parsing process. This means that unless the manifest has changed since we last saw it, the sync process is very economical: it consists of only a single HTTP request and a short response.

The manifest file's format is straightforward: it consists of references to other JSON files that contain the relevant pieces of the conference data:

{
  "format": "iosched-json-v1",
  "data_files": [
    "past_io_videolibrary_v5.json",
    "experts_v11.json",
    "hashtags_v8.json",
    "blocks_v10.json",
    "map_v11.json",
    "keynote_v10.json",
    "partners_v2.json",
    "session_data_v2.681.json"
  ]
}

The sync process then proceeds to process each of the listed data files in order. This part is also implemented to be as economical as possible: if we detect that we already have a cached version of a specific data file, we skip it entirely and use our local cache instead. This task is done by the processManifest method.
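
As a rough illustration of that per-file caching, here is a minimal, hypothetical helper. The cache directory and the reliance on manifest entries being versioned filenames (e.g. "session_data_v2.681.json") are assumptions drawn from the description above; the actual processManifest implementation differs in its details.

import android.content.Context;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;

public class DataFileCache {
    private final File cacheDir;

    public DataFileCache(Context context) {
        cacheDir = new File(context.getCacheDir(), "conference_data");
        cacheDir.mkdirs();
    }

    // Returns the cached body for a manifest entry, or null if it still needs downloading.
    public String getCached(String fileName) throws IOException {
        File file = new File(cacheDir, fileName);
        if (!file.exists()) {
            return null; // not cached yet -- caller downloads it and then calls put()
        }
        // Manifest entries are versioned, so a file with the same name holds the same content.
        StringBuilder sb = new StringBuilder();
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(new FileInputStream(file), "UTF-8"));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                sb.append(line).append('\n');
            }
        } finally {
            reader.close();
        }
        return sb.toString();
    }

    // Stores a freshly downloaded data file so the next sync can skip it.
    public void put(String fileName, String body) throws IOException {
        Writer writer = new OutputStreamWriter(
                new FileOutputStream(new File(cacheDir, fileName)), "UTF-8");
        try {
            writer.write(body);
        } finally {
            writer.close();
        }
    }
}
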

Then, each JSON file is parsed and the entities present in each one are accumulated in memory. At the end of this process, the data is written to the Content Provider.

Issuing Content Provider Operations Efficiently

The conference data sync needs to be efficient not only in the amount of data it downloads, but also in the amount of operations it performs on the database. This must be done as economically as possible, so this step is also optimized: instead of overwriting the whole database with the new data, the Sync Adapter attempts to preserve the existing entities and only update the ones that have changed. In our tests, this optimization step reduced the total sync time from 16 seconds to around 2 seconds on our test devices.

In order to accomplish this important third layer of optimization, the application needs to know, given an entity in memory and its version in the Content Provider, whether or not we need to issue content provider operations to update that entity. Comparing the entity in memory to the entity in the database field by field is one option, but is cumbersome and slow, since it would require us to read every field. Instead, we add a field to each entity called the import hashcode. The import hashcode is a weak hash value generated from its data. For example, here is how the import hashcode for a speaker is computed:

public class Speaker {
    public String id;
    public String publicPlusId;
    public String bio;
    public String name;
    public String company;
    public String plusoneUrl;
    public String thumbnailUrl;

    public String getImportHashcode() {
        StringBuilder sb = new StringBuilder();
        sb.append("id").append(id == null ? "" : id)
                .append("publicPlusId")
                .append(publicPlusId == null ? "" : publicPlusId)
                .append("bio")
                .append(bio == null ? "" : bio)
                .append("name")
                .append(name == null ? "" : name)
                .append("company")
                .append(company == null ? "" : company)
                .append("plusoneUrl")
                .append(plusoneUrl == null ? "" : plusoneUrl)
                .append("thumbnailUrl")
                .append(thumbnailUrl == null ? "" : thumbnailUrl);
        String result = sb.toString();
        return String.format(Locale.US, "%08x%08x", 
            result.hashCode(), result.length());
    }
}

Every time an entity is updated in the database, its import hashcode is saved with it as a database column. Later, when we have a candidate for an updated version of that entity, all we need to do is compute the import hashcode of the candidate and compare it to the import hashcode of the entity in the database. If they differ, then we issue Content Provider operations to update the entity in the database. If they are the same, we skip that entity. This incremental update logic can be seen, for example, in the makeContentProviderOperations method of the SpeakersHandler class:

public class SpeakersHandler extends JSONHandler {
    private HashMap<String, Speaker> mSpeakers = new HashMap<String, Speaker>();

    // (...)
    @Override
    public void makeContentProviderOperations(ArrayList<ContentProviderOperation> list) {
        // (...)
        int updatedSpeakers = 0;
        for (Speaker speaker : mSpeakers.values()) {
            String hashCode = speaker.getImportHashcode();
            speakersToKeep.add(speaker.id);

            if (!isIncrementalUpdate || !speakerHashcodes.containsKey(speaker.id) ||
                    !speakerHashcodes.get(speaker.id).equals(hashCode)) {
                // speaker is new/updated, so issue content provider operations
                ++updatedSpeakers;
                boolean isNew = !isIncrementalUpdate || 
                    !speakerHashcodes.containsKey(speaker.id);
                buildSpeaker(isNew, speaker, list);
            }
        }

        // delete obsolete speakers
        int deletedSpeakers = 0;
        if (isIncrementalUpdate) {
            for (String speakerId : speakerHashcodes.keySet()) {
                if (!speakersToKeep.contains(speakerId)) {
                    buildDeleteOperation(speakerId, list);
                    ++deletedSpeakers;
                }
            }
        }
    }
}

The buildSpeaker and buildDeleteOperation methods (omitted here for brevity) simply build the Content Provider operations necessary to, respectively, insert/update a speaker or delete a speaker from the Content Provider. Notice that this approach means we only issue Content Provider operations to update a speaker if the import hashcode has changed. We also deal with obsolete speakers, that is, speakers that were in the database but were not referenced by the incoming data, and we issue delete operations for those speakers.
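
For context, a hypothetical sketch of what the omitted buildDeleteOperation could look like is shown below; the content URI and column name are placeholders, and the real iosched code builds these operations through its own contract and helper classes.

import android.content.ContentProviderOperation;
import android.net.Uri;

import java.util.ArrayList;

public class SpeakerOperations {
    // Hypothetical content URI for the speakers table.
    private static final Uri SPEAKERS_URI =
            Uri.parse("content://com.example.iosched/speakers");

    public static void buildDeleteOperation(String speakerId,
                                            ArrayList<ContentProviderOperation> list) {
        // Delete exactly one row, matched by the speaker's string id.
        list.add(ContentProviderOperation.newDelete(SPEAKERS_URI)
                .withSelection("speaker_id = ?", new String[] {speakerId})
                .build());
    }
}
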

Making Sync Robust

The sync adapter in the I/O app is responsible for several tasks, amongst which are the remote conference data sync, the user schedule sync and also the user feedback sync. Failures can happen in any of them because of network conditions and other factors. However, a failure in one of the tasks should not impact the execution of the other tasks. This is why we structure the sync process as a series of independent tasks, each protected by a try/catch block, as can be seen in the performSync method of the SyncHelper class:

// remote sync consists of these operations, which we try one by one (and
// tolerate individual failures on each)
final int OP_REMOTE_SYNC = 0;
final int OP_USER_SCHEDULE_SYNC = 1;
final int OP_USER_FEEDBACK_SYNC = 2;

int[] opsToPerform = userDataOnly ?
        new int[] { OP_USER_SCHEDULE_SYNC } :
        new int[] { OP_REMOTE_SYNC, OP_USER_SCHEDULE_SYNC, OP_USER_FEEDBACK_SYNC};

for (int op : opsToPerform) {
    try {
        switch (op) {
            case OP_REMOTE_SYNC:
                dataChanged |= doRemoteSync();
                break;
            case OP_USER_SCHEDULE_SYNC:
                dataChanged |= doUserScheduleSync(account.name);
                break;
            case OP_USER_FEEDBACK_SYNC:
                doUserFeedbackSync();
                break;
        }
    } catch (AuthException ex) {
        // (... handle auth error...)
    } catch (Throwable throwable) {
        // (... handle other error...)

        // Let system know an exception happened:
        if (syncResult != null && syncResult.stats != null) {
            ++syncResult.stats.numIoExceptions;
        }
    }
}

When one particular part of the sync process fails, we let the system know about it by increasing syncResult.stats.numIoExceptions. This will cause the system to retry the sync at a later time, using exponential backoff.

When Should We Sync? Enter GCM.

It's very important for users to be able to get updates about conference data in a timely manner, especially during (and in the few days leading up to) Google I/O. A naïve way to solve this problem is simply making the app poll the server repeatedly for updates. Naturally, this causes problems with bandwidth and battery consumption.

To solve this problem in a more elegant way, we use GCM (Google Cloud Messaging). Whenever there is an update to the data on the server side, the server sends a GCM message to all registered devices. Upon receipt of this GCM message, the device performs a sync to download the new conference data. The GCMIntentService class handles the incoming GCM messages:

public class GCMIntentService extends GCMBaseIntentService {
    private static final String TAG = makeLogTag("GCM");

    private static final Map<String, GCMCommand> MESSAGE_RECEIVERS;
    static {
        // Known messages and their GCM message receivers
        Map<String, GCMCommand> receivers = new HashMap<String, GCMCommand>();
        receivers.put("test", new TestCommand());
        receivers.put("announcement", new AnnouncementCommand());
        receivers.put("sync_schedule", new SyncCommand());
        receivers.put("sync_user", new SyncUserCommand());
        receivers.put("notification", new NotificationCommand());
        MESSAGE_RECEIVERS = Collections.unmodifiableMap(receivers);
    }

    // (...)

    @Override
    protected void onMessage(Context context, Intent intent) {
        String action = intent.getStringExtra("action");
        String extraData = intent.getStringExtra("extraData");
        LOGD(TAG, "Got GCM message, action=" + action + ", extraData=" + extraData);

        if (action == null) {
            LOGE(TAG, "Message received without command action");
            return;
        }

        action = action.toLowerCase();
        GCMCommand command = MESSAGE_RECEIVERS.get(action);
        if (command == null) {
            LOGE(TAG, "Unknown command received: " + action);
        } else {
            command.execute(this, action, extraData);
        }

    }
    // (...)
}

Notice that the onMessage method delivers the message to the appropriate handler depending on the GCM message's "action" field. If the action field is "sync_schedule", the application delivers the message to an instance of the SyncCommand class, which causes a sync to happen. Incidentally, notice that the implementation of the SyncCommand class allows the GCM message to specify a jitter parameter, which causes it to trigger a sync not immediately but at a random time in the future within the jitter interval. This spreads out the syncs evenly over a period of time rather than forcing all clients to sync simultaneously, and thus prevents a sudden peak in requests on the server side.
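
Here is a minimal sketch of that jitter idea, assuming the jitter interval arrives in the message payload as a number of milliseconds; the real SyncCommand parses its payload differently and triggers the sync through the app's own helpers, so treat the names below (including the content authority) as illustrative only.

import android.content.ContentResolver;
import android.content.Context;
import android.os.Bundle;
import android.os.Handler;

import java.util.Random;

public class JitteredSyncCommand {
    private static final Random RANDOM = new Random();

    // Hypothetical content authority; the real app uses its own contract's authority.
    private static final String CONTENT_AUTHORITY = "com.example.iosched";

    public void execute(final Context context, String action, String extraData) {
        long jitterMillis = parseJitterMillis(extraData); // 0 means "sync right away"
        long delay = jitterMillis > 0
                ? (long) (RANDOM.nextDouble() * jitterMillis) // random point in [0, jitter)
                : 0;

        // Spreading the syncs over the jitter window avoids a thundering herd of
        // clients hitting the server the instant the GCM message arrives.
        new Handler(context.getMainLooper()).postDelayed(new Runnable() {
            @Override
            public void run() {
                Bundle extras = new Bundle();
                extras.putBoolean(ContentResolver.SYNC_EXTRAS_MANUAL, true);
                ContentResolver.requestSync(null, CONTENT_AUTHORITY, extras);
            }
        }, delay);
    }

    private long parseJitterMillis(String extraData) {
        try {
            return extraData == null ? 0 : Long.parseLong(extraData.trim());
        } catch (NumberFormatException e) {
            return 0;
        }
    }
}
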

Syncing User Data

The I/O app allows the user to build their own personalized schedule by choosing which sessions they are interested in attending. This data must be shared across the user's Android devices, and also between the I/O website and Android. This means this data has to be stored in the cloud, in the user's Google account. We chose to use the Google Drive AppData folder for this task.

User data is synced to Google Drive by the doUserScheduleSync method of the SyncHelper class. If you dive into the source code, you will notice that this method essentially accesses the Google Drive AppData folder through the Google Drive HTTP API, then reconciles the set of sessions in the data with the set of sessions starred by the user on the device, and issues the necessary modifications to the cloud if there are locally updated sessions.

This means that if the user selects one session on their Android device and then selects another session on the I/O website, the result should be that both the Android device and the I/O website will show that both sessions are in the user's schedule.
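
Conceptually, the reconciliation boils down to a set merge over session IDs. The following hypothetical sketch shows only that core idea; the real doUserScheduleSync also handles deletions, timestamps and the Google Drive HTTP calls.

import java.util.HashSet;
import java.util.Set;

public class ScheduleMerge {

    public static class Result {
        public final Set<String> merged = new HashSet<String>();       // what both sides should end up with
        public final Set<String> toUpload = new HashSet<String>();     // starred locally, missing in Drive
        public final Set<String> toAddLocally = new HashSet<String>(); // starred in Drive, missing locally
    }

    public static Result merge(Set<String> remoteStarred, Set<String> locallyStarred) {
        Result result = new Result();
        result.merged.addAll(remoteStarred);
        result.merged.addAll(locallyStarred);

        // Sessions starred on the device but not yet in Drive must be written to the cloud.
        result.toUpload.addAll(locallyStarred);
        result.toUpload.removeAll(remoteStarred);

        // Sessions starred in Drive but not on the device must be added to the local database.
        result.toAddLocally.addAll(remoteStarred);
        result.toAddLocally.removeAll(locallyStarred);

        return result;
    }
}
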

Also, whenever the user adds or removes a session on the I/O website, the data on all their Android devices should be updated, and vice versa. To accomplish that, the I/O website sends our GCM server a notification every time the user makes a change to their schedule; the GCM server, in turn, sends a GCM message to all the devices owned by that user in order to cause them to sync their user data. The same mechanism works across the user's devices as well: when one device updates the data, it issues a GCM message to all other devices.

Conclusion

Serving fresh data is a key component of many Android apps. This article showed how the I/O app deals with the challenges of keeping the data up-to-date while minimizing network traffic and database changes, and also keeping this data in sync across different platforms and devices through the use of Google Cloud Storage, Google Drive and Google Cloud Messaging.

Categories: Programming

A Community of Practice: An Example And An Evaluation

A Community of Practice must have a common interest.

I am often asked to describe a community of practice. We define a community of practice (COP) as a group of people with a common area of interest who come together to share knowledge. The interaction and sharing of knowledge serve to generate relationships that reinforce a sense of community and support.

An example of a recently formed COP:

Organizational context:  The organization is a large multinational with four large development centers (US, Canada, India and China). Software development projects (including development, enhancements and maintenance) leverage a mix of Scrum/XP and classic plan-based methods.  Each location has an internal project management group that meets monthly and is sponsored by the company.  This group is called the Project Manager COP (PMCOP).  The PMCOP primarily meets at lunchtime at each of the corporate locations, with quarterly events early in the afternoon (Eastern time zone in Canada). Attendance is mandatory and active involvement in the PMCOP is perceived to be a career enhancement. Senior management viewed the PMCOP as extremely successful, while many participants viewed it as little more than a free lunch.

Agile COP: The organization had recently adopted Agile as an approved development and software project management approach.  A large number of Scrum masters were appointed.  Some were certified, some had experience in previous organizations and some had “read the book” (their description). The organization quickly recognized that consistency of practice was needed, implemented a package of coaching (mostly internal) and auditing, and started a COP with similar attributes to the PMCOP. Differences in implementation included more localization and self-organization.  Each location met separately and developed its own list of topics (this allowed each group to be more culturally sensitive) and each location rotated the meeting chair every three months.  Participation was still mandatory (attendance was taken and shared with senior management).

In both cases the COP included a wide range of programming, including outside presenters (live and webinar), retrospective-like sessions and practitioner sharing.

In order to determine whether a COP is positioned to be effective, we can use the four attributes from Communities of Practice to evaluate the programs.  If we use the framework to evaluate the two examples in our mini-case study, the results would show:

 

Common area of interest

  • PMCOP: Yes, project management is a specific area of interest.
  • Agile COP: Yes-ish. The COP is currently focused on Scrum masters; however, Agile can include a very broad range of practices, so other areas of focus may need to be broken out.

Process

  • PMCOP: Yes, the PMCOP has a set of procedures for membership, attendance and logistics.
  • Agile COP: The Agile COP adopted most of the PMCOP processes (the rules for who chairs the meeting and local topics were modified).

Support

  • PMCOP: Yes, the organization provided space, budget for lunch and other incidentals.
  • Agile COP: Yes, the organization provided space, budget for lunch and other incidentals. Note: in both the Agile COP and the PMCOP, the requirement to participate was considered support by the management team but not by the practitioners.

Interest

  • PMCOP: The requirement to participate makes interest hard to measure from mere attendance. We surveyed the members, which led to changes in programming to increase perceived value.
  • Agile COP: The Agile COP had not been in place long enough to judge long-term interest; however, the results from the PMCOP were used to push for more local, culturally sensitive programming.

 

Using the four common attributes needed for an effective community of practice is a good first step to determine the strengths and weaknesses of a planned-to-be-implemented community of practice.


Categories: Process Management

Chrome - Firefox WebRTC Interop Test - Pt 2

Google Testing Blog - Tue, 09/09/2014 - 21:09
by Patrik Höglund

This is the second in a series of articles about Chrome’s WebRTC Interop Test. See the first.

In the previous blog post we managed to write an automated test which got a WebRTC call between Firefox and Chrome to run. But how do we verify that the call actually worked?

Verifying the Call

Now we can launch the two browsers, but how do we figure out whether the call actually worked? If you try opening two apprtc.appspot.com tabs in the same room, you will notice the video feeds flip over using a CSS transform, your local video is relegated to a small frame, and a new big video feed with the remote video shows up. For the first version of the test, I just looked at the page in the Chrome debugger and looked for some reliable signal. As it turns out, the remoteVideo.style.opacity property will go from 0 to 1 when the call goes up and from 1 to 0 when it goes down. Since we can execute arbitrary JavaScript in the Chrome tab from the test, we can simply implement the check like this:

bool WaitForCallToComeUp(content::WebContents* tab_contents) {
  // Apprtc will set remoteVideo.style.opacity to 1 when the call comes up.
  std::string javascript =
      "window.domAutomationController.send(remoteVideo.style.opacity)";
  return test::PollingWaitUntil(javascript, "1", tab_contents);
}

Verifying Video is Playing

So getting a call up is good, but what if there is a bug where Firefox and Chrome cannot send correct video streams to each other? To check that, we needed to step up our game a bit. We decided to use our existing video detector, which looks at a video element and determines if the pixels are changing. This is a very basic check, but it’s better than nothing. To do this, we simply evaluate the .js file’s JavaScript in the context of the Chrome tab, making the functions in the file available to us. The implementation then becomes:

bool DetectRemoteVideoPlaying(content::WebContents* tab_contents) {
  if (!EvalInJavascriptFile(tab_contents, GetSourceDir().Append(
          FILE_PATH_LITERAL("chrome/test/data/webrtc/test_functions.js"))))
    return false;
  if (!EvalInJavascriptFile(tab_contents, GetSourceDir().Append(
          FILE_PATH_LITERAL("chrome/test/data/webrtc/video_detector.js"))))
    return false;

  // The remote video tag is called remoteVideo in the AppRTC code.
  StartDetectingVideo(tab_contents, "remoteVideo");
  WaitForVideoToPlay(tab_contents);
  return true;
}

where StartDetectingVideo and WaitForVideoToPlay call the corresponding JavaScript methods in video_detector.js. If the video feed is frozen and unchanging, the test will time out and fail.

What to Send in the Call

Now we can get a call up between the browsers and detect if video is playing. But what video should we send? For Chrome, we have a convenient --use-fake-device-for-media-stream flag that will make Chrome pretend there’s a webcam and present a generated video feed (which is a spinning green ball with a timestamp). This turned out to be useful since Firefox and Chrome cannot acquire the same camera at the same time, so if we didn’t use the fake device we would have two webcams plugged into the bots executing the tests!

Bots running in Chrome’s regular test infrastructure do not have either software or hardware webcams plugged into them, so this test must run on bots with webcams for Firefox to be able to acquire a camera. Fortunately, we have that in the WebRTC waterfalls in order to test that we can actually acquire hardware webcams on all platforms. We also added a check to just succeed the test when there’s no real webcam on the system since we don’t want it to fail when a dev runs it on a machine without a webcam:

if (!HasWebcamOnSystem())
  return;

It would of course be better if Firefox had a similar fake device, but to my knowledge it doesn’t.

Downloading all Code and Components

Now we have all we need to run the test and have it verify something useful. We just have the hard part left: how do we actually download all the resources we need to run this test? Recall that this is actually a three-way integration test between Chrome, Firefox and AppRTC, which requires the following:

  • The AppEngine SDK in order to bring up the local AppRTC instance, 
  • The AppRTC code itself, 
  • Chrome (already present in the checkout), and 
  • Firefox nightly.

While developing the test, I initially just hand-downloaded these and installed and hard-coded the paths. This is a very bad idea in the long run. Recall that the Chromium infrastructure is comprised of thousands and thousands of machines, and while this test will only run on perhaps 5 at a time due to its webcam requirements, we don’t want manual maintenance work whenever we replace a machine. And for that matter, we definitely don’t want to download a new Firefox by hand every night and put it on the right location on the bots! So how do we automate this?

Downloading the AppEngine SDK
First, let’s start with the easy part. We don’t really care if the AppEngine SDK is up-to-date, so a relatively stale version is fine. We could have the test download it from the authoritative source, but that’s a bad idea for a couple reasons. First, it updates outside our control. Second, there could be anti-robot measures on the page. Third, the download will likely be unreliable and fail the test occasionally.

The way we solved this was to upload a copy of the SDK to a Google storage bucket under our control and download it using the depot_tools script download_from_google_storage.py. This is a lot more reliable than an external website and will not download the SDK if we already have the right version on the bot.

Downloading the AppRTC Code
This code is on GitHub. Experience has shown that git clone commands run against GitHub will fail every now and then, and fail the test. We could write some retry mechanism, but we have found it’s better to simply mirror the git repository in Chromium’s internal mirrors, which are closer to our bots and thereby more reliable from our perspective. The pull is done by a Chromium DEPS file (which is Chromium’s dependency provisioning framework).

Downloading Firefox
It turns out that Firefox supplies handy libraries for this task. We’re using mozdownload in this script in order to download the Firefox nightly build. Unfortunately this fails every now and then so we would like to have some retry mechanism, or we could write some mechanism to “mirror” the Firefox nightly build in some location we control.

Putting it Together

With that, we have everything we need to deploy the test. You can see the final code here.

The provisioning code above was put into a separate “.gclient solution” so that regular Chrome devs and bots are not burdened with downloading hundreds of megs of SDKs and code that they will not use. When this test runs, you will first see a Chrome browser pop up, which will ensure the local apprtc instance is up. Then a Firefox browser will pop up. They will each acquire the fake device and real camera, respectively, and after a short delay the AppRTC call will come up, proving that video interop is working.

This is a complicated and expensive test, but we believe it is worth it to keep the main interop case under automation this way, especially as the spec evolves and the browsers are in varying states of implementation.

Future Work

  • Also run on Windows/Mac. 
  • Also test Opera. 
  • Interop between Chrome/Firefox mobile and desktop browsers. 
  • Also ensure audio is playing. 
  • Measure bandwidth stats, video quality, etc.


Categories: Testing & QA

Techno-BDDs: What Daft Punk Can Teach Us About Requirements

Software Requirements Blog - Seilevel.com - Tue, 09/09/2014 - 17:00
I listen to a lot of Daft Punk lately—especially while I’m running.   On one of my recent runs, my mind’s reflections upon the day’s work merged with my mental impressions of the music blaring from my earbuds, which happened to be the Daft Punk song, “Technologic.”  For those of you unfamiliar with the song, […]
Categories: Requirements

The Main Benefit of Story Points

Mike Cohn's Blog - Tue, 09/09/2014 - 15:00

If story points are an estimate of the time (effort) involved in doing something, why not just estimate directly in hours or days? Why use points at all?

There are multiple good reasons to estimate product backlog items in story points, and I cover them fully in the Agile Estimating and Planning video course, but there is one compelling reason that on its own is enough to justify the use of points.

It has to do with King Henry I who reigned between 1100 and 1135. Prior to his reign, a “yard” was a unit of measure from a person’s nose to his outstretched thumb. Just imagine all the confusion this caused with that distance being different for each person.

King Henry eventually decided a yard would always be the distance between the king’s nose and outstretched thumb. Convenient for him, but also convenient for everyone else because there was now a standard unit of measure.

You might learn that for you, a yard (as defined by the king’s arm) was a little more or less than your arm. I’d learn the same about my arm. And we’d all have a common unit of measure.

Story points are much the same. Like English yards, they allow team members with different skill levels to communicate about and agree on an estimate. As an example, imagine you and I decide to go for a run. I like to run but am very slow. You, on the other hand, are a very fast runner. You point to a trail and say, “Let’s run that trail. It’ll take 30 minutes.”

I am familiar with that trail, but being a much slower runner than you, I know it takes me 60 minutes every time I run that trail. And I tell you I’ll run that trail with you but that will take 60 minutes.

And so we argue. “30.” “60.” “30.” “60.”

We’re getting nowhere. Perhaps we compromise and call it 45 minutes. But that is possibly the worst thing we could do. We now have an estimate that is no good for either of us.

So instead of compromising on 45, we continue arguing. “30.” “60.” “30.” “60.”

Eventually you say to me, “Mike, it’s a 5-mile trail. I can run it in 30 minutes.”

And I tell you, “I agree: it’s a 5-mile trail. That takes me 60 minutes.”

The problem is that we are both right. You really can run it in 30 minutes, and it really will take me 60. When we try to put a time estimate on running this trail, we find we can’t because we work (run) at different speeds.

But, when we use a more abstract measure—in this case, miles—we can agree. You and I agree the trail is 5 miles. We just differ in how long it will take each of us to run it.

Story points serve much the same purpose. They allow individuals with differing skill sets and speeds of working to agree. Instead of a fast and slow runner, consider two programmers of differing productivity.

Like the runners, these two programmers may agree that a given user story is 5 points (rather than 5 miles). The faster programmer may be thinking it’s easy and only a day of work. The slower programmer may be thinking it will take two days of work. But they can agree to call it 5 points, as the number of points assigned to the first story is fairly arbitrary.

What’s important is once they agree that the first story is 5 points, our two programmers can then agree on subsequent estimates. If the fast programmer thinks a new user story will take two days (twice his estimate for the 5-point story), he will estimate the new story as 10 points. So will the second programmer if she thinks it will take four days (twice as long as her estimate for the 5-point story).

And so, like the distance from King Henry’s nose to his thumb, story points allow agreement among individuals who perform at different rates.


Cost, Value & Investment: How Much Will This Project Cost? Part 1

I’ve said before that you cannot use capacity planning for the project portfolio. I also said that managers often want to know how much the project will cost. Why? Because businesses have to manage costs. No one can have runaway projects. That is fair.

If you use an agile or incremental approach to your projects, you have options. You don’t have to have runaway projects. Here are two better questions:

  • How much do you want to invest before we stop?
  • How much value is this project or program worth to you?

You need to think about cost, value, and investment, not just cost, when you think about the project portfolio. If you think about cost alone, you miss the potentially great projects and features.

However, no business exists without managing costs. In fact, the cost question might be critical to your business. If you proceed without thinking about cost, you might doom your business.

Why do you want to know about cost? Do you have a contract? Does the customer need to know? A cost-bound contract is a good reason.  (If you have other reasons for needing to know cost, let me know. I am serious when I say you need to evaluate the project portfolio on value, not on cost.)

You Have a Cost-Bound Contract

I’ve talked before about providing date ranges or confidence ranges with estimates. It all depends on why you need to know. If you are trying to stay within your predicted cost-bound contract, you need a ranked backlog. If you are part of a program, everyone needs to see the roadmap. Everyone needs to see the product backlog burnup charts. You’d better have feature teams so you can create features. If you don’t, you’d better have small features.

Why? You can manage the interdependencies among and between teams more easily with small features and with a detailed roadmap. The larger the program, the smaller you want the batch size to be. Otherwise, you will waste a lot of money very fast. (The teams will create components and get caught in integration hell. That wastes money.)

Your Customer Wants to Know How Much the Project Will Cost

Why does your customer want to know? Do you have payments based on interim deliverables? If the customer needs to know, you want to build trust by delivering value, so the customer trusts you over time.

If the customer wants to contain his or her costs, you want to work by feature, delivering value. You want to share the roadmap, delivering value. You want to know what value the estimate has for your customer. You can provide an estimate for your customer, as long as you know why your customer needs it.

Some of you may think I’m being perverse, that I’m not being helpful by only saying what you could do to provide an estimate. Okay: in Part 2, I will suggest how you could take an order-of-magnitude approach to estimating a program.

Categories: Project Management

Firm foundations

Coding the Architecture - Simon Brown - Tue, 09/09/2014 - 12:32

I truly believe that a lightweight, pragmatic approach to software architecture is pivotal to successfully delivering software, and that it can complement agile approaches rather than compete against them. After all, a good architecture enables agility, and this doesn't happen by magic. But people in our industry often have a very different view. Historically, software architecture has been a discipline steeped in academia and, I think, consequently feels inaccessible to many software developers. It's also not a particularly trendy topic when compared to [microservices|Node.js|Docker|insert other new thing here].

I've been distilling the essence of software architecture over the past few years, helping software teams to understand and introduce it into the way that they work. And, for me, the absolute essence of software architecture is about building firm foundations: both for the team that is building the software and for the software itself. It's about technical leadership, creating a shared vision and stacking the chances of success in your favour.

I'm delighted to have been invited back to ASAS 2014, and my opening keynote is about what a software team can do in order to create those firm foundations. I'm also going to talk about some concrete examples of what I've done in the past, illustrating how I apply a minimal set of software architecture practices in the real world to take an idea through to working software. I'll share some examples of where this hasn't exactly gone to plan, too! I look forward to seeing you in a few weeks.

Read more...

Categories: Architecture

Running unit tests on iOS devices

Xebia Blog - Tue, 09/09/2014 - 09:48

When you run a unit test target that needs an entitlement (such as keychain access) on a device, it does not work out of the box in Xcode. You get a somewhat descriptive error in the console about a "missing entitlement". Everything works fine in the Simulator, though.

Often this is a case of the executable bundle's code signature no longer being valid because a test bundle was added/linked to the executable before deployment to the device. The easiest fix is to add a new "Run Script Build Phase" with the following content:

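# Re-sign the app bundle so its code signature is valid again after the test bundle has been linked in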
codesign --verify --force --sign "$CODE_SIGN_IDENTITY" "$CODESIGNING_FOLDER_PATH"

Now try (cleaning and) running your unit tests again. There's a good chance it will now work.

Communities of Practice

Communities are all different

Organizations are becoming increasingly diverse and distributed while at the same time pursuing mechanisms to increase collaboration between groups and consistency of knowledge and practice. A community of practice (COP) is often used as a tool to share knowledge and improve performance. Etienne Wenger-Trayner suggests that a community of practice is formed by people who engage in a process of collective learning in a shared domain of human endeavor. There are four common requirements for a community of practice to exist:

  1. Common Area of Interest – The first attribute any potential community of practice must have is a common area of interest shared by a group of people. The area needs to be specific, and one where knowledge is not viewed as proprietary but can be generated, shared, and owned by the community as a whole. When knowledge is perceived to be proprietary, it will not be shared.
  2. Process – The second must-have attribute for a community of practice is a set of defined processes. Processes that are generally required include mechanisms to attract legitimate participants and to capture and disseminate community knowledge. The existence of these processes differentiates COPs from ad-hoc meetings.
  3. Support – There is a tendency within many organizations to think of COPs either as socially driven by users and self-managing, or as tools to control, collect, and disseminate knowledge within the organization. In its purest form, neither case is perfect (we will explore why in a later essay), but in either case somebody needs to take responsibility for the COP. That role is typically known as the community manager, and it can include finding logistics support and budget, identifying speakers, capturing knowledge, and ensuring the group gets together.
  4. Interest – Perhaps the single most important attribute required for any COP to exist is an interest in interacting and sharing. Unless participants are interested in (better yet, passionate about) the topics that the community pursues, they will not participate.

A community of practice is a tool that promotes collaboration and consistency of practice, even in organizations that are becoming more distributed and diverse. Communities of practice provide a platform for people with similar interests to build on individual knowledge to create group knowledge, which helps organizations deliver more value.


Categories: Process Management

Update on Android Wear Paid Apps

Android Developers Blog - Mon, 09/08/2014 - 22:02

Update (8 September 2014): All of the issues in the post below have now been resolved from Android Studio 0.8.3 onwards, released on 21 July 2014. The Gradle wearApp rule, and the packaging documentation, were updated to use res/raw. The instructions in this blog post remain correct, and you can continue to use manual packaging if you want. You can upload Android Wear paid apps to Google Play using the standard wearApp release mechanism.

We have a workaround to enable paid apps (and other apps that use Google Play's forward-lock mechanism) on Android Wear. The assets/ directory of those apps, which contains the wearable APK, cannot be extracted or read by the wearable installer. The workaround is to place the wearable APK in the res/raw directory instead.

As per the documentation, there are two ways to package your wearable app: use the “wearApp” Gradle rule, or package the wearable app manually. For paid apps you cannot use the “wearApp” Gradle rule; the workaround is to package your app manually with the two changes described below. To manually package the wearable APK into res/raw, do the following:

  1. Copy the signed wearable app into your handheld project's res/raw directory and rename it to wearable_app.apk; it will be referred to as wearable_app.
  2. Create a res/xml/wearable_app_desc.xml file that contains the version and path information of the wearable app:
    <wearableApp package="wearable app package name">
        <versionCode>1</versionCode>
        <versionName>1.0</versionName>
        <rawPathResId>wearable_app</rawPathResId>
    </wearableApp>

    The package, versionCode, and versionName are the same as values specified in the wearable app's AndroidManifest.xml file. The rawPathResId is the static variable name of the resource. If the filename of your resource is wearable_app.apk, the static variable name would be wearable_app.

  3. Add a <meta-data> tag to your handheld app's <application> tag to reference the wearable_app_desc.xml file.
    <meta-data android:name="com.google.android.wearable.beta.app"
               android:resource="@xml/wearable_app_desc"/>
  4. Build and sign the handheld app.

We will be updating the “wearApp” Gradle rule in a future update to the Android SDK build tools to support APK embedding into res/raw. In the meantime, for paid apps you will need to follow the manual steps outlined above. We will also be updating the documentation to reflect the above workaround. We're working to make this easier for you in the future, and we apologize for the inconvenience.




Categories: Programming