Software Development Blogs: Programming, Software Testing, Agile, Project Management

The 2016 HTPC Build

Coding Horror - Jeff Atwood - Mon, 11/30/2015 - 07:42

I've loved many computers in my life, but the HTPC has always had a special place in my heart. It's the only always-on workhorse computer in our house, it is utterly silent, totally reliable, sips power, and it's at the center of our home entertainment, networking, storage, and gaming. This handy box does it all, 24/7.

I love this little machine to death; it's always been there for me and my family. The steady march of improvements in my HTPC build over the years lets me look back and see how far the old beige box PC has come in the decade I've been blogging:

  • 2005: ~$1000, 512 MB RAM, single core CPU, 80 watts idle
  • 2008: ~$520, 2 GB RAM, dual core CPU, 45 watts idle
  • 2011: ~$420, 4 GB RAM, dual core CPU + GPU, 22 watts idle
  • 2013: ~$300, 8 GB RAM, dual core CPU + GPU×2, 15 watts idle
  • 2016: ~$320, 8 GB RAM, dual core CPU + GPU×4, 10 watts idle

As expected, the per-thread performance increase from 2013's Haswell CPU to 2016's Skylake CPU is modest – 20 percent at best, and that might be rounding up. About all you can do is slap more cores in there, to very limited benefit in most applications. The 6100T I chose is dual-core plus hyperthreading, which I consider the sweet spot, but there are some other Skylake 6000 series variants at the same 35w power envelope which offer true quad-core, or quad-core plus hyperthreading – and, inevitably, a slightly lower base clock rate. So it goes.

The real story is how power consumption was reduced another 33 percent. Here's what I measured with my trusty kill-a-watt:

  • 10w idle with display off
  • 11w idle with display on
  • 13w active, standard Netflix (720p?) movie playback
  • 14w multiple torrents, display off
  • 15w 1080p video playback in MPC-HC x64
  • 40w Lego Batman 3 high detail 720p gameplay
  • 56w Prime95 full CPU load + Rthdribl full GPU load
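To put the idle figure above in perspective, here is the back-of-the-envelope arithmetic for an always-on box (the electricity price is my own illustrative assumption, not a number from the post):

```python
# Rough annual energy use for a machine idling at 10 watts, 24/7.
idle_watts = 10
hours_per_year = 24 * 365
kwh_per_year = idle_watts * hours_per_year / 1000.0  # watt-hours -> kWh

# Assuming an illustrative $0.12 per kWh electricity price.
cost_per_year = kwh_per_year * 0.12

print(kwh_per_year)               # 87.6
print(round(cost_per_year, 2))    # 10.51
```

Roughly ten dollars a year to keep it running around the clock, which is why shaving watts at idle matters far more for an HTPC than peak load numbers.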

These are impressive numbers, much better than I expected. Maybe part of it is the latest Windows 10 update which supports the new Speed Shift technology in Skylake. Speed Shift hands over CPU clockspeed control to the CPU itself, so it can ramp its internal clock up and down dramatically faster than the OS could. A Skylake CPU, with the right OS support, gets up to speed and back to idle faster, resulting in better performance and less overall power draw.

Skylake's on-board HD 530 graphics is about twice as fast as the HD 4400 that it replaces. Haswell offered the first reasonable big screen gaming GPU on an Intel CPU, but only just. 720p was mostly attainable in older games with the HD 4400, but I sometimes had to drop to medium detail settings, or lower. Two generations on, with the HD 530, even recent games like GRID Autosport, Lego Jurassic World and so on can now be played at 720p with high detail settings at consistently high framerates. It depends on the game, but a few can even be played at 1080p now with medium settings. I did have at least one saved benchmark result on the disk to compare with:

GRID 2, 1280×720, high detail defaults:

  • i3-4130T, Intel HD 4400 GPU: 32 max / 21 min / 27 avg fps
  • i3-6100T, Intel HD 530 GPU: 50 max / 32 min / 39 avg fps

Skylake is a legitimate gaming system on a chip, provided you are OK with 720p. It's tremendous fun to play Lego Batman 3 with my son.

At 720p using high detail settings, where there used to be many instances of notable slowdown, particularly in co-op, it now feels very smooth throughout. And since games are much cheaper on PC than consoles, particularly through Steam, we have access to a complete range of gaming options from new to old, from indie to mainstream – and an enormous, inexpensive back catalog.

Of course, this is still far from the performance you'd get out of a $300 video card or a $300 console. You'll never be able to play a cutting edge, high end game like GTA V or Witcher 3 on this HTPC box. But you may not need to. Steam in-home streaming has truly come into its own in the last year. I tried streaming Batman: Arkham Knight from my beefy home office computer to the HTPC at 1080p, and I was surprised to discover just how effortless it was; I couldn't detect any visual artifacts or input latency.

It's super easy to set up – just have the Steam client running on both machines at a logged in Windows desktop (can't be on the lock screen), and press the Stream button on any game that you don't have installed locally. Be careful with WiFi when streaming high resolutions, obviously, but if you're on a wired network, I found the experience is nearly identical to playing the game locally. As long as the game has native console / controller support, like Arkham Knight and Fallout 4, streaming to the big screen works great. Try it! That's how Henry and I are going to play through Just Cause 3 this Tuesday and I can't wait.

As before in 2013, I only upgraded the guts of the system, so the incremental cost is low.

That's a total of $321 for this upgrade cycle, about the cost of a new Xbox One or PS4. The i3-6100T should be a bit cheaper; according to Intel it has the same list price as the i3-6100, but suffers from weak availability. The motherboard I chose is a little more expensive, too, perhaps because it includes extras like built in WiFi and M.2 support, although I'm not using either quite yet. You might be able to source a cheaper H170 motherboard than mine.

The rest of the system has not changed much since 2013:

Populate these items to taste, pick whatever drives and mini-ITX case you prefer, but definitely stick with the PicoPSU, because removing the large, traditional case power supply makes the setup both a) much more power efficient at low wattage, and b) much roomier inside the case and easier to install, upgrade, and maintain.

I also switched to Xbox One controllers, for no really good reason other than the Xbox 360 is getting more obsolete every month, and now that my beloved Rock Band 4 is available on next-gen systems, I'm trying to slowly evict the 360s from my house.

The Windows 10 wireless Xbox One adapter does have some perks. In addition to working with the newer and slightly nicer gamepads from the Xbox One, it supports an audio stream over each controller via the controller's headset connector. But really, for the purposes of Steam gaming, any USB controller will do.

While I've been over the moon in love with my HTPC for years, and I liked the Xbox 360, I have been thoroughly unimpressed with my newly purchased Xbox One. Both the new and old UIs are hard to use, it's quite slow relative to my very snappy HTPC, and it has a ton of useless features that I don't care about, like broadcast TV support. About all the Xbox One lets you do is sometimes play next gen games at 1080p without paying $200 or $300 for a fancy video card, and let's face it – the PS4 does that slightly better. If those same games are available on PC, you'll have a better experience streaming them from a gaming PC to either a cheap Steam streaming box, or a generalist HTPC like this one.

The Xbox One and PS4 are effectively plain old PCs, built on:

  • Intel Atom class (aka slow) AMD 8-core x86 CPU
  • 8 GB RAM
  • AMD Radeon 77xx / 78xx GPUs
  • cheap commodity 512GB or 1TB hard drives (not SSDs)

The golden age of x86 gaming is well upon us. That's why the future of PC gaming is looking brighter every day. We can see it coming true in the solid GPU and idle power improvements in Skylake, riding the inevitable wave of x86 becoming the dominant (non-mobile, anyway) gaming platform for the foreseeable future.

Categories: Programming

Improvements to Sign-In with Google Play services 8.3

Android Developers Blog - Mon, 11/30/2015 - 04:18

Posted by Laurence Moroney, Developer Advocate

With Google Play services 8.3, we’ve been hard at work to provide a greatly improved sign-in experience for developers that want to build apps that sign their users in with Google. To help you better understand some of these changes, this is the first in a series of blog posts about what’s available to you as a developer. In this post, we’ll discuss the changes to the user experience, and how you can use them in your app, as well as updates to the API to make coding Sign-In with Google more straightforward. On Android Marshmallow, this new Sign-In API has removed any requirement for device permissions, so there is no need to request runtime access to the accounts on the device, as was the case with the old API.

User Experience Improvements

We’ve gotten lots of feedback from developers about the user experience of using Google’s social sign-in button. Many of you noted that it took too many steps and was confusing for users. Typically, the experience is that the user touches a sign in button, and they are asked to choose an account. If that account doesn’t have a Google+ profile, they need to create one, and after that they have to give permissions based on the type of information that the app is asking for. Finally, they get to sign in to the app.

With the new API, the default set of permissions that the app requests has been reduced to basic profile information and, optionally, email address. This opens up opportunities for a much more streamlined user experience. The first improvement is in the presentation of the button itself. We had received feedback that the Google+ branding on the Sign-In button made it feel like the user would need to share Google+ data, which most apps don't use. As such, the SignInButton has been rebranded to match the reduced scopes: it now reads 'Sign In with Google', and follows the standard Google branding for use with basic profile information.

After this, the user flow is also more straightforward. Previously, the user stepped through several screens: a Google account picker based on the email addresses registered on the device, a potential 'Create Google+ Profile' dialog, and then a permissions consent dialog.

The user experience has changed to a single step, where the user chooses their account and gives consent. If they don't have a Google+ profile, they no longer need to create one, eliminating that step. Additional consent dialogs come later and are best requested in context, so that the user understands why you might ask for access to their calendar or contacts, and they are only prompted at the time that this data is needed.

We hope that a streamlined, one-tap, non-social sign-in option, with additional OAuth permissions requested in context, will help improve your sign-in rates and make it a breeze to sign in with Google.

Check out some live apps that use the new API, including Instacart, NPR One, and Bring!

In the next post we’ll build on this by looking at some of the changes in the API to make coding apps that use Sign-In with Google even easier.


Android Studio 2.0 Preview

Android Developers Blog - Mon, 11/30/2015 - 04:10

Posted by Jamal Eason, Product Manager, Android

One of the most requested features we receive is to make app builds and deployment faster in Android Studio. Today at the Android Developer Summit, we’re announcing a preview of Android Studio 2.0 featuring Instant Run that will dramatically improve your development workflow. With Android Studio 2.0, we are also including a preview of a new GPU Profiler.

All these updates are available now in the canary release channel, so we can get your feedback. Since this initial release is a preview, you may want to download and run an additional copy of Android Studio in parallel with your current version.

New Features in Android Studio 2.0
Instant Run: Faster Build & Deploy

Android Studio’s Instant Run feature allows you to quickly see your changes running on your device or emulator.

Getting started is easy. If you create a new project with Android Studio 2.0, your project is already set up. If you have a pre-existing app, open Settings/Preferences, then go to Build, Execution, Deployment → Instant Run and click Enable Instant Run. This ensures you have the correct Gradle plugin for your project to work with Instant Run.

Enable Instant Run for Android Studio projects

Select Run as normal and Android Studio will perform normal compilation, packaging and install steps and run your app on your device or emulator. After you make edits to your source code or resources, pressing Run again will deploy your changes directly into the running app.

New Run & Stop Actions in Android Studio for Instant Run

For a more detailed guide to set up and try Instant Run, click here.

GPU Profiler

Profiling your OpenGL ES Android code is now even easier with the GPU Profiler in Android Studio. The tool is in early preview, but it is already very powerful: not only does it show details about the GL state and commands, you can also record entire sessions and walk through the GL framebuffer and textures as your app runs OpenGL ES code.

Android Studio GPU Profiler

To get started, first download the GPU Debugging Tools package from the Android Studio SDK Manager. Click here for more details about the GPU Profiler tool and how to set up your Android app project for profiling.

What's Next

This is just a taste of some of the bigger updates in this latest release of Android Studio. We'll be going through the full release in more detail at the Android Developer Summit (livestreamed on Monday and Tuesday). Over the next few weeks, we'll be showing how to take advantage of even more features in Android Studio 2.0, so be sure to check back in.

If you're interested in more Android deep technical content, we will be streaming over 16 hours of content from the inaugural Android Developer Summit over the next two days, and together with Codelabs, all of this content will be available online after the Summit concludes.

Android Studio 2.0 is available today on the Android Studio canary channel. Let us know what you think of these new features by connecting with the Android Studio development team on Google+.


Python: Parsing a JSON HTTP chunking stream

Mark Needham - Sat, 11/28/2015 - 14:56

I’ve been playing around with meetup.com’s API again and this time wanted to consume the chunked HTTP RSVP stream and filter RSVPs for events I’m interested in.

I use Python for most of my hacking these days and if HTTP requests are required the requests library is my first port of call.

I started out with the following script

import requests
import json

def stream_meetup_initial():
    uri = "http://stream.meetup.com/2/rsvps"
    response = requests.get(uri, stream = True)
    for chunk in response.iter_content(chunk_size = None):
        yield chunk

for raw_rsvp in stream_meetup_initial():
    print raw_rsvp
    try:
        rsvp = json.loads(raw_rsvp)
    except ValueError as e:
        print e

This mostly worked but I also noticed the following error from time to time:

No JSON object could be decoded

Although less frequent, I also saw errors suggesting I was trying to parse an incomplete JSON object. I tweaked the function to keep a local buffer and only yield that if the chunk ended in a new line character:

def stream_meetup_newline():
    uri = "http://stream.meetup.com/2/rsvps"
    response = requests.get(uri, stream = True)
    buffer = ""
    for chunk in response.iter_content(chunk_size = 1):
        if chunk.endswith("\n"):
            buffer += chunk
            yield buffer
            buffer = ""
        else:
            buffer += chunk

This mostly works although I’m sure I’ve seen some occasions where two JSON objects were being yielded and then the call to ‘json.loads’ failed. I haven’t been able to reproduce that though.
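One defensive way to cope with that case (my own sketch, not code from the original post) is to split each yielded buffer on newlines and parse the lines individually, skipping any fragment that fails to decode:

```python
import json

def parse_buffer(buffer):
    """Parse a buffer that may contain one or more newline-delimited
    JSON objects, skipping any fragment that fails to decode."""
    rsvps = []
    for line in buffer.splitlines():
        if not line.strip():
            continue
        try:
            rsvps.append(json.loads(line))
        except ValueError:
            pass  # incomplete fragment; a real consumer might re-buffer it
    return rsvps

# Two RSVPs arriving in a single yielded buffer:
buffer = '{"rsvp_id": 1}\n{"rsvp_id": 2}\n'
print(parse_buffer(buffer))  # [{'rsvp_id': 1}, {'rsvp_id': 2}]
```

This way a buffer holding two complete objects yields both, instead of blowing up a single `json.loads` call.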

A second read through the requests documentation made me realise I hadn’t read it very carefully the first time since we can make our lives much easier by using ‘iter_lines’ rather than ‘iter_content’:

r = requests.get('http://stream.meetup.com/2/rsvps', stream=True)
for raw_rsvp in r.iter_lines():
    if raw_rsvp:
        rsvp = json.loads(raw_rsvp)
        print rsvp

We can then process ‘rsvp’, filtering out the ones we’re interested in.
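A minimal filter might look like the following sketch; note that the `group` / `group_topics` / `urlkey` field names are my assumption about the shape of meetup.com's RSVP payload, so treat them as illustrative:

```python
# Hypothetical filter: keep only RSVPs for groups whose topics mention
# a keyword we care about.
def interesting(rsvp, keyword="python"):
    topics = rsvp.get("group", {}).get("group_topics", [])
    return any(keyword in t.get("urlkey", "") for t in topics)

rsvp = {"group": {"group_topics": [{"urlkey": "python"}, {"urlkey": "data"}]}}
print(interesting(rsvp))  # True
```

Each parsed `rsvp` can then be passed through `interesting` before any further processing.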


Calling For A New Breed Of Testing Conferences

Xebia Blog - Sat, 11/28/2015 - 14:51

The way in which testing is organized is changing rapidly. And rightfully so. Testing should no longer be treated as a separate phase; rather, it should be fully embedded within the software delivery life cycle. These developments significantly impact the role of automation in testing, team collaboration, and how the testing discipline should be cherished and continuously improved.

C'mon, guys! GO GO GO!

Testing should be reinvented and, honestly, testers may need to reinvent themselves. The proper skill set of a tester is expanding towards the technical side in which hands-on knowledge is needed. This need may be covered by competence development through training programmes, but we all know that most knowledge and inspiration is obtained on the job. Yet, we are only at the beginning of making seminars and conferences address this need for practical experience. C'mon, guys! GO GO GO!

Our first steps

Within Xebia, we aim to be one of the drivers of the change in the testing arena. As knowledge sharing is our second nature, the first ideas for "a Xebia testing conference" originated late 2014. Since the testing conference landscape is rather crowded, we felt we should come up with something unique and new. So, why not combine the need for an expanded tester skill set with an approach that is new to the testing conference arena: select often-used and promising test frameworks, and create a setting in which testers can gain practical experience and learn how to use them.

The idea matured, and was further strengthened through our consultancy experiences where we continuously train and coach testers and developers in modern testing (nicely summarized here). And by the somewhat unsatisfied feeling we were stuck with when leaving other testing conferences: sharing experiences is great, but actually experiencing it yourself during the conference is even better!

Stop the talking, start DOING!

So, we had to put our money where our mouth was: on Friday, October 2nd 2015, we had a first of a kind hands-on testing conference in Amsterdam. TestWorks Conf!

OK, what do you mean by a first-of-a-kind hands-on testing conference? Well,

  • we meticulously prepared USB sticks containing a VM image that could be loaded onto the participants' laptops (provided they didn't bring equipment from the Stonehenge era). This way, the participants would all have access to exactly the same environment, pre-loaded with all the frameworks, tools, and other settings needed to participate effectively during the conference. This also kept us from depending on the venue's wifi to download all kinds of test frameworks.
  • presentations without code did not exist -- basically, we did not allow them. Parallel sessions with talks, guided demos (about 1 hour), and 2-hour workshops allowed participants to choose their way of engaging with the conference. The 1-hour slots did prove challenging for participants trying to follow the talk and code along on their laptops at the same time. Yet all participants took home some food for thought -- and the technical environment to retrace the steps presented during the talk.
Let's gear up, all!

Close to 200 participants attended the first edition of TestWorks Conf. The attendance, the positive vibe, and the feedback strengthen our feeling that there is much room for these kinds of hands-on conferences. For sure, we'll continue with this practical setup and further tune the approach for the 2016 edition of TestWorks Conf. But, perhaps even more important, we urge other conference organizers to revisit their conference setup so that the apparent hunger for practical, technically oriented testing conferences is satisfied. Think about how the needed skill set can be augmented, and find the appropriate form to deliver it. And have fun!

Game Performance: Vertex Array Objects

Android Developers Blog - Sat, 11/28/2015 - 09:07

Posted by Shanee Nishry

Previously, we showed how you can use vertex layout qualifiers to increase the performance and determinism of your OpenGL application. In this post, we’ll show another useful technique that will help you produce increased performance and cleaner code when drawing objects.

Binding the vertex buffer

Before drawing onto the screen, you need to bind your vertex data (e.g. positions, normals, UVs) to the corresponding vertex shader attributes. To do that, you need to bind the vertex buffer, enable the generic vertex attribute, and use glVertexAttribPointer to describe the layout of the buffer.

Therefore, a draw call might look like this:


// Bind shader program, uniforms and textures
// ...

// Bind the vertex buffer
glBindBuffer( GL_ARRAY_BUFFER, vertex_buffer_object );

// Set the vertex attributes
glEnableVertexAttribArray( ATTRIBUTE_LOCATION_NORMALS );
glVertexAttribPointer( ATTRIBUTE_LOCATION_NORMALS, 3, GL_FLOAT, GL_FALSE, 32, 20 );

// Draw elements
glDrawElements( GL_TRIANGLES, count, GL_UNSIGNED_SHORT, 0 );

There are several reasons why we might not like this code very much. The first is that we need to cache the layout of the vertex buffer to enable and disable the right attributes before drawing. This means we are either hard-coding or saving some amount of data for a nearly meaningless task.

The second reason is performance. Having to tell the drivers which attributes to individually activate is suboptimal. It would be best if we could precompile this information and deliver it all at once.

Lastly, and purely for aesthetics, our draw call is cluttered by long boilerplate code. It would be nice to get rid of it.

Did you know there is another reason why someone might frown on this code? The code is making use of layout qualifiers which is great! But, since it’s already using OpenGL ES 3+, it would be even better if the code also used Geometry Instancing. By batching many instances of a mesh into a single draw call, you can really boost performance.

So how can we improve on the above code?

Vertex Array Objects (VAOs)

If you are using OpenGL ES 3 or higher, you should use Vertex Array Objects (or "VAOs") to store your vertex attribute state.

Using a VAO allows the drivers to compile the vertex description format for repeated use. In addition, this frees you from having to cache the vertex format needed for glVertexAttribPointer, and it also results in less per-draw boilerplate code.

Creating Vertex Array Objects

The first thing you need to do is create your VAO. This is created once per mesh, alongside the vertex buffer object and is done like this:


// Bind the vertex buffer object
glBindBuffer( GL_ARRAY_BUFFER, vertex_buffer_object );

// Create a VAO
GLuint vao;
glGenVertexArrays( 1, &vao );
glBindVertexArray( vao );

// Set the vertex attributes as usual
glEnableVertexAttribArray( ATTRIBUTE_LOCATION_NORMALS );
glVertexAttribPointer( ATTRIBUTE_LOCATION_NORMALS, 3, GL_FLOAT, GL_FALSE, 32, 20 );

// Unbind the VAO to avoid accidentally overwriting the state
// Skip this if you are confident your code will not do so
glBindVertexArray( 0 );

You have probably noticed that this is very similar to our previous code section except that we now have the addition of:

// Create a vertex array object
GLuint vao;
glGenVertexArrays( 1, &vao );
glBindVertexArray( vao );

These lines create and bind the VAO. All glEnableVertexAttribArray and glVertexAttribPointer calls after that are recorded in the currently bound VAO, and that greatly simplifies our per-draw procedure as all you need to do is use the newly created VAO.

Using the Vertex Array Object

The next time you want to draw using this mesh all you need to do is bind the VAO using glBindVertexArray.

// Bind shader program, uniforms and textures
// ...

// Bind Vertex Array Object
glBindVertexArray( vao );

// Draw elements
glDrawElements( GL_TRIANGLES, count, GL_UNSIGNED_SHORT, 0 );

You no longer need to go through all the vertex attributes. This makes your code cleaner, makes per-frame calls shorter and more efficient, and allows the drivers to optimize the binding stage to increase performance.

Did you notice we are no longer calling glBindBuffer? This is because calling glVertexAttribPointer while recording the VAO references the currently bound buffer even though the VAO does not record glBindBuffer calls on itself.

Want to learn more how to improve your game performance? Check out our Game Performance article series. If you are building on Android you might also be interested in the Android Performance Patterns.


New in Android Samples: Authenticating to remote servers using the Fingerprint API

Android Developers Blog - Sat, 11/28/2015 - 09:07

Posted by Takeshi Hagikura, Yuichi Araki, Developer Programs Engineer

As we announced in the previous blog post, Android 6.0 Marshmallow is now publicly available to users. Along the way, we’ve been updating our samples collection to highlight exciting new features available to developers.

This week, we’re releasing AsymmetricFingerprintDialog, a new sample demonstrating how to securely integrate with compatible fingerprint readers (like Nexus Imprint) in a client/server environment.

Let’s take a closer look at how this sample works, and talk about how it complements the FingerprintDialog sample we released earlier during the public preview.

Symmetric vs Asymmetric Keys

The Android Fingerprint API protects user privacy by keeping users’ fingerprint features carefully contained within secure hardware on the device. This guards against malicious actors, ensuring that users can safely use their fingerprint, even in untrusted applications.

Android also provides protection for application developers, providing assurances that a user’s fingerprint has been positively identified before providing access to secure data or resources. This protects against tampered applications, providing cryptographic-level security for both offline data and online interactions.

When a user activates their fingerprint reader, they’re unlocking a hardware-backed cryptographic vault. As a developer, you can choose what type of key material is stored in that vault, depending on the needs of your application:

  • Symmetric keys: Similar to a password, symmetric keys allow encrypting local data. This is a good choice for securing access to databases or offline files.
  • Asymmetric keys: Provides a key pair, comprised of a public key and a private key. The public key can be safely sent across the internet and stored on a remote server. The private key can later be used to sign data, such that the signature can be verified using the public key. Signed data cannot be tampered with, and positively identifies the original author of that data. In this way, asymmetric keys can be used for network login and authenticating online transactions. Similarly, the public key can be used to encrypt data, such that the data can only be decrypted with the private key.

This sample demonstrates how to use an asymmetric key, in the context of authenticating an online purchase. If you’re curious about using symmetric keys instead, take a look at the FingerprintDialog sample that was published earlier.

Here is a visual explanation of how the Android app, the user, and the backend fit together using the asymmetric key flow:

1. Setting Up: Creating an asymmetric keypair

First you need to create an asymmetric key pair as follows:

KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore");
keyPairGenerator.initialize(
        new KeyGenParameterSpec.Builder(KEY_NAME, KeyProperties.PURPOSE_SIGN)
                .setDigests(KeyProperties.DIGEST_SHA256)
                .setAlgorithmParameterSpec(new ECGenParameterSpec("secp256r1"))
                .setUserAuthenticationRequired(true)
                .build());
keyPairGenerator.generateKeyPair();

Note that .setUserAuthenticationRequired(true) requires that the user authenticate with a registered fingerprint to authorize every use of the private key.

Then you can retrieve the created public and private keys as follows:

KeyStore keyStore = KeyStore.getInstance("AndroidKeyStore");
keyStore.load(null);
PublicKey publicKey = keyStore.getCertificate(KEY_NAME).getPublicKey();
PrivateKey privateKey = (PrivateKey) keyStore.getKey(KEY_NAME, null);
2. Registering: Enrolling the public key with your server

Second, you need to transmit the public key to your backend so that in the future the backend can verify that transactions were authorized by the user (i.e. signed by the private key corresponding to this public key). This sample uses a fake backend implementation for reference, so it merely mimics the transmission of the public key; in real life you would transmit it over the network.

boolean enroll(String userId, String password, PublicKey publicKey);
3. Let’s Go: Signing transactions with a fingerprint

To allow the user to authenticate the transaction, e.g. to purchase an item, prompt the user to touch the fingerprint sensor.

Then start listening for a fingerprint as follows:

KeyStore keyStore = KeyStore.getInstance("AndroidKeyStore");
keyStore.load(null);
PrivateKey key = (PrivateKey) keyStore.getKey(KEY_NAME, null);
Signature signature = Signature.getInstance("SHA256withECDSA");
signature.initSign(key);
CryptoObject cryptoObject = new FingerprintManager.CryptoObject(signature);

CancellationSignal cancellationSignal = new CancellationSignal();
FingerprintManager fingerprintManager =
        context.getSystemService(FingerprintManager.class);
fingerprintManager.authenticate(cryptoObject, cancellationSignal, 0, this, null);
4. Finishing Up: Sending the data to your backend and verifying

After successful authentication, send the signed piece of data (in this sample, the contents of a purchase transaction) to the backend, like so:

Signature signature = cryptoObject.getSignature();
// Include a client nonce in the transaction so that the nonce is also signed
// by the private key and the backend can verify that the same nonce isn't
// reused, to prevent replay attacks.
Transaction transaction = new Transaction("user", 1, new SecureRandom().nextLong());
try {
    signature.update(transaction.toByteArray());
    byte[] sigBytes = signature.sign();
    // Send the transaction and signedTransaction to the dummy backend
    if (mStoreBackend.verify(transaction, sigBytes)) {
        // Transaction is verified by the server
    } else {
        // Error handling
    }
} catch (SignatureException e) {
    throw new RuntimeException(e);
}

Last, verify the signed data in the backend using the public key enrolled in step 2:

public boolean verify(Transaction transaction, byte[] transactionSignature) {
    try {
        if (mReceivedTransactions.contains(transaction)) {
            // This transaction, including its client nonce, has already been
            // processed, so reject it to prevent replay attacks.
            return false;
        }
        mReceivedTransactions.add(transaction);
        PublicKey publicKey = mPublicKeys.get(transaction.getUserId());
        Signature verificationFunction = Signature.getInstance("SHA256withECDSA");
        verificationFunction.initVerify(publicKey);
        verificationFunction.update(transaction.toByteArray());
        if (verificationFunction.verify(transactionSignature)) {
            // Transaction is verified with the public key associated with the user
            // Do some post-purchase processing in the server
            return true;
        }
    } catch (NoSuchAlgorithmException | InvalidKeyException | SignatureException e) {
        // In the real world, it would be better to send an error message to the user
    }
    return false;
}

At this point, you can assume that the user is correctly authenticated with their fingerprints because as noted in step 1, user authentication is required before every use of the private key. Let’s do the post processing in the backend and tell the user that the transaction is successful!
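The whole sign-then-verify round trip can be prototyped off-device with plain java.security classes. In this standalone sketch, the toByteArray packing is a hypothetical stand-in for the sample's Transaction serialization; everything else uses standard JDK APIs:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.SecureRandom;
import java.security.Signature;

public class RoundTripSketch {
    // Hypothetical stand-in for the sample's Transaction.toByteArray():
    // user id, item id, and client nonce packed into one buffer.
    static byte[] toByteArray(String userId, long itemId, long nonce) {
        byte[] user = userId.getBytes(StandardCharsets.UTF_8);
        return ByteBuffer.allocate(user.length + 16)
                .put(user).putLong(itemId).putLong(nonce).array();
    }

    public static void main(String[] args) throws Exception {
        KeyPairGenerator generator = KeyPairGenerator.getInstance("EC");
        generator.initialize(256);
        KeyPair pair = generator.generateKeyPair();

        // Client side: sign the transaction bytes with the private key.
        long nonce = new SecureRandom().nextLong();
        byte[] payload = toByteArray("user", 1, nonce);
        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(pair.getPrivate());
        signer.update(payload);
        byte[] sigBytes = signer.sign();

        // Backend side: verify with the enrolled public key.
        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(payload);
        System.out.println(verifier.verify(sigBytes)); // prints "true"

        // A tampered payload (e.g. a different nonce) must fail verification.
        verifier.initVerify(pair.getPublic());
        verifier.update(toByteArray("user", 1, nonce + 1));
        System.out.println(verifier.verify(sigBytes)); // prints "false"
    }
}
```

On the device, the only difference is that initSign happens against the AndroidKeyStore private key, which is what ties each signature to a fingerprint authentication.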

Other updated samples

We also have a couple of Marshmallow-related updates to the Android For Work APIs this month for you to peruse:

  • AppRestrictionEnforcer and AppRestrictionSchema: These samples were originally released when the App Restriction feature was introduced as part of the Android for Work API in Android 5.0 Lollipop. AppRestrictionEnforcer demonstrates how to set restrictions on other apps as a profile owner. AppRestrictionSchema defines some restrictions that can be controlled by AppRestrictionEnforcer. This update shows how to use two additional restriction types introduced in Android 6.0.
We hope you enjoy the updated samples. If you have any questions regarding the samples, please visit us on our GitHub page and file issues or send us pull requests.

    Categories: Programming

    Learn top tips from Kongregate to achieve success with Store Listing Experiments

    Android Developers Blog - Sat, 11/28/2015 - 09:05

    Posted by Lily Sheringham, Developer Marketing at Google Play

    Editor’s note: This is another post in our series featuring tips from developers finding success on Google Play. We recently spoke to games developer Kongregate, to find out how they use Store Listing Experiments successfully. - Ed.

    With Store Listing Experiments in the Google Play Developer Console, you can conduct A/B tests on the content of your store listing pages. Test versions of the text and graphics to see which ones perform best, based on install data.

    Kongregate increases installs by 45 percent with Store Listing Experiments

    Founded in 2006 by brother and sister Jim and Emily Greer, Kongregate is a leading mobile games publisher specializing in free to play games. Kongregate used Store Listing Experiments to test new content for the Global Assault listing page on Google Play. By testing with different audience sizes, they found a new icon that drove 92 percent more installs, while variant screenshots achieved an impressive 14 percent improvement. By picking the icons, screenshots, and text descriptions that were the most sticky with users, Kongregate saw installs increase by 45 percent on the improved page.

    Kongregate’s Mike Gordon, VP of Publishing; Peter Eykemans, Senior Producer; and Tammy Levy, Director of Product for Mobile Games, talk about how to successfully optimise mobile game listings with Store Listing Experiments.

    Kongregate’s tips for success with Store Listing Experiments

    Jeff Gurian, Sr. Director of Marketing at Kongregate also shares his do’s and don’ts on how to use experiments to convert more of your visitors, thereby increasing installs. Check them out below:

    Do’s:

    • Do start by testing your game’s icon. Icons can have the greatest impact (positive or negative) on installs, so test early!
    • Do have a question or objective in mind when designing an experiment. For example, does artwork visualizing gameplay drive more installs than artwork that doesn’t?
    • Do run experiments long enough to achieve statistical significance. How long it takes to get a result can vary due to changes in traffic sources, location of users, and other factors during testing.
    • Do pay attention to the banner, which tells you if your experiment is still “in progress.” When it has collected enough data, the banner will clearly tell you which variant won or if it was a tie.

    Don’ts:

    • Don’t test too many variables at once. It makes it harder to determine what drove results. The more variables you test, the more installs (and time) you’ll need to identify a winner.
    • Don’t test artwork only. Also test screenshot ordering, videos, and text to find what combinations increase installs.
    • Don’t target too small an audience with your experiment variants. The more users you expose to your variants, the more data you collect, the faster you get results!
    • Don’t interpret a test where the control attribute performs better than variants as a waste. You can still learn valuable lessons from what “didn’t work.” Iterate and try again!

    Learn more about how Kongregate optimized their Play Store listing with Store Listing Experiments. Learn more about Google Play products and best practices to help you grow your business globally.

    Categories: Programming

    Get your bibs ready for Big Android BBQ!

    Android Developers Blog - Sat, 11/28/2015 - 09:03

    Posted by Colt McAnlis, Senior Texas-Based Developer Advocate

    We’re excited to be involved in the Big Android BBQ (BABBQ) this year because of one thing: passion! Just like BBQ, Android development is much better when passionate people obsess over it. This year’s event is no exception.

    Take +Ian Lake for example. His passion about Android development runs so deep, he was willing to chug a whole bottle of BBQ sauce just so we’d let him represent Android Development Patterns at the conference this year. Or even +Chet Haase, who suffered a humiliating defeat during the Speechless session last year (at the hands of this charming bald guy). He loves BBQ so much that he’s willing to come back and lose again this year, just so he can convince you all that #perfmatters. Let’s not forget +Reto Meier. That mustache was stuck on his face for days. DAYS! All because he loves Android Development so much.

    When you see passion like this, you just have to be part of it. Which is why this year’s BABBQ is jam packed with awesome Google Developers content. We’re going to be talking about performance, new APIs in Marshmallow 6.0, NDK tricks, and Wear optimization. We even have a new set of code labs so that folks can get their hands on new code to use in their apps.

    Finally, we haven’t even mentioned our BABBQ attendees, yet. We’re talking about people who are so passionate about an Android development conference that they are willing to travel to Texas to be a part of it!

    If BBQ isn’t your thing, or you won’t be able to make the event in person, the Android Developers and Google Developers YouTube channels will be there in full force. We’ll be recording the sessions and posting them to Twitter and Google+ throughout the event.

    So, whether you are planning to attend in person or watch online, we want you to remain passionate about your Android development.

    Categories: Programming

    Developer tips for success with Player Analytics and Google Play games services

    Android Developers Blog - Sat, 11/28/2015 - 08:36

    Posted by Lily Sheringham, Developer Marketing at Google Play

    Editor’s note: As part of our series featuring tips from developers, we spoke to some popular game developers to find out how they use Player Analytics and Google Play game services to find success on Google Play. - Ed.

    Google Play games services, available in the Developer Console, allows you to add features such as achievements and leaderboards to your games. Google Play games services provides Player Analytics, a free games-specific analytics tool, in the Developer Console Game services tab. You can use the reports to understand how players are progressing, spending, and churning, backed by a data-driven approach.

    Bombsquad grows revenue by 140% per user with Player Analytics

    Independent developer Eric Froemling initially created the game Bombsquad as a hobby, but now relies on it as his livelihood. Last year, he switched the business model of the game from paid to free-to-play. By using Player Analytics, he was able to improve player retention and monetization in the game, achieving a 140% increase in the average revenue per daily active user (ARPDAU).

    Watch the video below to learn how Eric uses Player Analytics and the Developer Console to improve gamers’ experience, while increasing retention and monetization.

    Tips from Auxbrain for success with Google Play games services

    Kevin Pazirandeh, founder and CEO of games developer Auxbrain, creator of Zombie Highway, provides insight into how they use Google Play games services, and comments:

    “While there are a few exceptions, I have not run into a better measure of engagement, and perhaps more importantly, a measure for change in engagement, than the retention table. For the uninitiated, a daily retention table gives you the % of players who return on the nth day after their first play. Comparing retention rates of two similar games can give you an immediate signal if you are doing something right or wrong.”
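    As an illustration of the definition above, here is a small sketch (a hypothetical data model, not a Play games services API) that computes day-n retention from each player's recorded play days:

```java
import java.util.List;
import java.util.Set;

public class RetentionSketch {
    // For each player: the set of day offsets (0 = first play) on which
    // the player launched the game.
    static double retentionOnDayN(List<Set<Integer>> playersPlayDays, int n) {
        long returned = playersPlayDays.stream()
                .filter(days -> days.contains(n))
                .count();
        // Percentage of players who came back on the nth day after first play.
        return 100.0 * returned / playersPlayDays.size();
    }

    public static void main(String[] args) {
        List<Set<Integer>> players = List.of(
                Set.of(0, 1, 3),   // returned on day 1 and day 3
                Set.of(0, 1),      // returned on day 1 only
                Set.of(0),         // never returned
                Set.of(0, 3));     // returned on day 3 only
        System.out.println(retentionOnDayN(players, 1)); // prints "50.0"
        System.out.println(retentionOnDayN(players, 3)); // prints "50.0"
    }
}
```

    Comparing these percentages before and after a change, as Kevin suggests, is what turns the retention table into a signal.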

    Kevin shares his top tips on how to best use the analytics tools in Google Play games services:

    1. You get Player Analytics for free - If you’ve implemented Google Play games services in your games, check out Player Analytics under Game services in the Developer Console; you’ll find you are getting analytics data already.
    2. Never assume change is for the better - Players may not view changes in your game as the improvement you had hoped they were. So when you make a change, have a strategy for measuring the result. Where you cannot find a way to measure the change’s impact with Player Analytics, consider not making it and prioritize those changes you can measure.
    3. Use achievements and events to track player progress - If you add achievements or events you can use the Player progression report or Event viewer to track player progress. You’ll quickly find out where players are struggling or churning, and can look for ways to help move players on.
    4. Use sign-in to get more data - The more data about player behavior you collect, the more meaningful the reports in Player Analytics become. The best way to increase the data collected is to get more players signed in. Auto sign-in players, and provide a Play games services start point on the first screen (after any tutorial flow) for those who don’t sign in the first time.
    5. Track your player engagement with Retention tables - The Retention table report lets you see where players are turning away, over time. Compare retention before and after changes to understand their impact, or between similar games to see if different design decisions are turning players away earlier or later.

    Get started with Google Play Games Services or learn more about products and best practices that will help you grow your business on Google Play globally.

    Categories: Programming

    Android Developer Story: Peak Games generates majority of global revenue for popular game ‘Spades’ on Android

    Android Developers Blog - Sat, 11/28/2015 - 08:33

    Posted by Lily Sheringham, Google Play team

    Founded in 2010, Turkish mobile games developer Peak Games started developing games targeted to the local market and is now scaling globally. Their game ‘Spades Plus’ is growing in the US and the game generates over 70% of its mobile revenue from Android.

    Watch Erdem İnan, Business Intelligence and Marketing Director, and İlkin Ulaş Balkanay, Head of Android Development, explain how Peak Games improved user engagement and increased installs with Google Play Store Listing experiments and app promotion right from within the Developer Console.

    Find out more about how to run tests on your Store Listing to increase your installs and how to promote your app or game with Universal App Campaigns from the Google Play Developer Console.

    Categories: Programming

    Minimum purchase price for apps and in-app products reduced on Google Play

    Android Developers Blog - Sat, 11/28/2015 - 08:32

    Posted by Alistair Pott, Product Manager, Google Play

    Available in more than 190 countries, Google Play is a global platform for developers to build high quality apps and successful businesses. But every market has its own unique challenges and opportunities. Purchasing behavior, in particular, varies significantly between markets. So to provide developers with more flexibility, we've worked to adapt Google Play pricing options to better suit local consumers and make content more accessible.

    Following a successful pilot in India earlier this year, today, developers have the option to reduce the price of their premium titles and in-app products in 17 more countries to these new minimum thresholds:

    Countries affected:

    • Brazil: R$ 0.99 (was R$2.00)
    • Chile: CLP $200.00 (was CLP $500.00)
    • Colombia: COP$ 800.00 (was COP$ 2000.00)
    • Hungary: Ft 125.00 (was Ft 225.00)
    • Indonesia: Rp 3,000.00 (was Rp 12,000.00)
    • Malaysia: RM 1.00 (was RM 3.50)
    • Mexico: MXN$ 5.00 (was MXN$ 9.90)
    • Peru: S/. 0.99 (was S/. 3.00)
    • Philippines: ₱15.00 (was ₱43.00)
    • Poland: zł1.79 (was zł2.99)
    • Russia: руб 15.00 (was руб 30.00)
    • Saudi Arabia: ﷼0.99 (was ﷼4.00)
    • South Africa: R3.99 (was R10.00)
    • Thailand: ฿10.00 (was ฿32.00)
    • Turkey: ₺0.59 (was ₺2.00)
    • Ukraine: ₴5.00 (was ₴8.00)
    • Vietnam: ₫6,000 (was ₫21,000.00)

    You can lower the price of your apps and games right away by visiting the Google Play Developer Console and clicking on “Pricing & Distribution” or “In-app Products” for your apps.

    We hope this change allows you to reach more people around the world so that you can continue to grow your business on Google Play.

    Categories: Programming

    An updated app guide and new video tips to help you find success on Google Play

    Android Developers Blog - Sat, 11/28/2015 - 08:31

    Posted by Dom Elliott, The Google Play Apps & Games team

    Last year, we introduced our first playbook for developers, “The Secrets to App Success on Google Play”, to help you grow your app or game business; it has been downloaded more than 200,000 times. Many new features have since been announced on the platform – from Store Listing Experiments and beta testing improvements to App Invites and Smart Lock for Passwords.

    Get the second edition of “The Secrets to App Success on Google Play”

    Hot off the press, you can now download the second edition to learn about all the new tools and best practices for improving the quality of your app, growing a valuable audience, increasing engagement and retention, and earning more revenue.

    Get the book on Google Play in English now, or sign up to be notified when the booklet is released in the following languages: Bahasa Indonesia, Deutsch, español (Latinoamérica), le français, português do Brasil, tiếng Việt, русский язы́к, ไทย, 한국어, 中文 (简体), 中文 (繁體), 日本語. Based on your feedback, the guide was updated to work seamlessly in the Google Play Books app. If you prefer, you can also download a PDF version from the Android Developers website.

    New videos with tips to find success on Google Play

    To accompany the guide, watch the first two episodes in a new ten-part video series of actionable tips you can start using today to achieve your business objectives. Subscribe to the Android Developers channel on YouTube and follow +Android Developers to watch the new videos as they’re released weekly.

    Two new videos will be released each week in the ten-part series
    on the Android Developer YouTube channel.

    Let us know your feedback

    Once you’ve checked out the guide and the videos, we’d again love to hear your feedback so we can continue to improve our developer support. Please let us know what you think.

    Categories: Programming

    Protect Your Productive Time

    Making the Complex Simple - John Sonmez - Fri, 11/27/2015 - 14:00

    You only have 24 hours a day… 1,440 minutes… 86,400 seconds.  At first glance, this may seem like a lot. But it is not. There is a very limited supply of time, and it can be your friend or your enemy. And you get to choose if you are in control of your time, or […]

    The post Protect Your Productive Time appeared first on Simple Programmer.

    Categories: Programming

    10 Personal Productivity Tools from Agile Results

    “Great acts are made up of small deeds.“ -- Lao Tzu

    The best productivity tools are the ones you actually use and get results.

    I'll share some quick personal productivity tools from Agile Results, introduced in the book, Getting Results the Agile Way.

    Agile Results is a Personal Results System for work and life, and it's all about how to use your best energy for your best results.

    With that in mind, here are some quick productivity tools you can use to think better, feel better, and do better, while getting results better, faster, and easier with more fun ...


    Think in terms of Three Wins each day, each week, each month, each year.

    You can apply the Rule of 3 to life. Rather than get overwhelmed by your tasks, choose three things you want to accomplish today. This puts you in control. If nothing else, it gives you a very simple way to focus for the day. This will help you get on track and practice the art of ruthless prioritization.

    Consider the energy you have, what's the most important, what's the most valuable, and what would actually feel like a win for you and build momentum.

    To get started, right here, right now, simply write down on paper the three things you want to achieve today.


    The Monday Vision, Daily Outcomes, and Friday Reflection pattern is a simple habit for daily and weekly results.

    Monday Vision - On Monday, identify Three Wins that you want for the week.  Imagine it is Friday and you are looking back on your week: what are three results that you would be proud of?  This helps you create a simple vision for your week.

    Daily Wins - Get a Fresh Start each day.  Each day, identify Three Wins that you want for the day.  First thing in the morning, before you dive into the hustle and the bustle, step back.  Take the balcony view for your day and identify Three Wins that you want to accomplish.  This helps you create a simple vision for your day.  You can imagine three scenes from your day -- morning, noon and night -- or whatever works for you.

    One way to stay balanced here is to ask yourself both, "What do I want to accomplish?", and "What are the key things that if I don't get done ... I'm screwed?"

    Friday Reflection -- On each Friday, reflect on your week.  To do this, ask yourself two questions:

    “What are 3 things going well?”

    “What are 3 things to improve?”

    You'll find that you are either focusing on the wrong things, getting distracted, or biting off more than you can chew.  Use what you learn here as input into next week's Monday Vision, Daily Wins, Friday Reflection. 

    The real power of Friday Reflection is that you acknowledge and appreciate your Personal Victories.  If you gave your all during your workout, hats off to you.  If you pushed a bit harder to really nail your presentation, great job.

    It's also a simple way to "put a bow" on your results for the week.  Now, if your manager or somebody were to ask you what you accomplished for the week, you have a simple story of Three Wins.


    Hot Spots are a simple metaphor for thinking about what’s important.

    Think of your life like a heat map.

    Start with a simple set of categories:

    1. Mind
    2. Body
    3. Emotions
    4. Career
    5. Finance
    6. Relationships
    7. Fun

    Where do you need to spend more time or less time?

    The Hot Spot categories support each other and they are connected, and in some cases overlapping.  But they give you a very quick way to explore an area of your life. 

    It's hard to do well at work if you're having issues with relationships.  And the surprise for a lot of people is how if they take better care of their body, work gets a lot easier, and they improve their mind and emotions. 


    The Growth Mindset is a learning mindset.

    Instead of a static view of things, you approach things as experiments to learn and explore.  Failure isn't final.  Failure isn't fatal.  Instead, find the lesson and change your approach.

    By adopting a Growth Mindset, you get better and better over time.  You don't say, "I'm no good at that."  You say, "I'm getting better at that." or "I'm learning."

    With a Growth Mindset and a focus on continuous learning, you turn your days into learning opportunities.  This helps you keep your motivation going and your energy strong.

    Life-long Learners last longer :)


    Timeboxing is a way to set a time "budget."  This helps you avoid spending too much time on something, or over-investing when it's diminishing returns.

    A lot of people find they can focus in short batches.  They can't focus indefinitely, but if they know they only have to work on something for, say, 20 minutes, it helps them fully focus on the task at hand.

    If you've heard of the Pomodoro Technique, this is an example.  Set a time limit for a task, and work on the task until the buzzer goes off.

    I use Timeboxing at multiple levels.  I might Timebox a mini-project to a week or a month, rather than let it go on forever "until it is done."  By using a Timebox, I create a sense of urgency and I give myself a finish line.  That's a real key to staying motivated and refueling your momentum.

    Timeboxing can help you improve your productivity in a very simple way. For example, rather than try to figure out how long something might take, start by figuring out how much time you want to invest in it. Identify up front at what point you hit diminishing returns. This will help you cut your losses and figure out how to optimize your time.


    Each week spend more time in your strengths, and less time in your weaknesses.

    Push activities that make you weak to the first part of your day. By doing your Worst Things First, you create a glide path for the rest of the day. This is like Brian Tracy's Eat that Frog.

    Set limits.  Stuff the things that make you weak into a Timebox. For example, if the stuff that makes you weak is taking more than 20 percent of your day, then find a way to keep it within that 20 percent boundary. This might mean limiting the time or quantity.

    Sometimes you just can't get rid of the things that make you weak; in that case, balance it with more things that energize you and make you strong.

    Apply this to your week too. Push the toughest things that drain you to the start of the week to create a glide path. Do the same with people. Spend more time with people that make you strong and less time with people that make you weak. Be careful not to confuse the things that make you weak with challenges that will actually make you stronger. Grow yourself stronger over time.


    Pick one thing to improve for the month.

    Each month, pick something new; this gives you a chance to cycle through 12 things over the year. Or if necessary, you can always repeat a sprint.

    The idea is that 30 days is enough time to experiment with your results throughout the month. Because you might not see progress in the first couple of weeks while you’re learning, a month is a good chunk of time to check your progress.

    This is especially helpful if you find that you start a bunch of things but never finish.  Just focus this month on the one thing, and then next month, you can focus on the other thing, and so on.

    Each month is a Fresh Start and you get to pick a theme for the month so that everything you do accrues to something bigger.


    This is perhaps one of the most impactful ways to improve your productivity.

    Pair with people that complement your strengths.

    Pair up or team up with others that complement your preferred patterns.  If you are a Starter, pair up with a Finisher.  If you are a Thinker, pair up with a Doer.  If you are a Maximizer, pair up with a Simplifier.

    For anything, and I mean anything, that you want to do better or faster, there is somebody in the world who lives and breathes it.  And, in my experience, they are more than happy to teach you, if you just ask.

    The best way to Pair Up is to find somebody where it's a two-way exchange of value and you both get something out of it.  To do this, it helps when you really know what you bring to the table, so it's clear why you are Pairing Up.

    Ask yourself, who can you team up with to get better results?


    Chances are you have certain hours in the day or night when you are able to accomplish more.

    These are your personal Power Hours.

    Guard your Power Hours so they are available to you and try to push the bulk of your productivity within these Timeboxes. This maximizes your results while optimizing your time.

    You might find you only have a few great hours during the week where you feel you produce effective and efficient results. You may even feel “in the zone” or in your “flow” state. Gradually increase the number of Power Hours you have. You can build a powerful day, or powerful week, one power hour at a time. If you know you only have three Power Hours in a 40-hour week, see if you can set yourself up to have five Power Hours.


    Your Creative Hours are those times during the week where you feel you are at your creative best.

    This might be a Saturday morning or a Tuesday night, or maybe during weekday afternoons.

    The key is to find those times where you have enough creative space, to do your creative work.

    Just like adding power hours, you might benefit from adding more creative hours. Count how many creative hours you have during the week. If it’s not enough, schedule more and set yourself up so that they truly are creative hours. If you’re the creative type, this will be especially important. If you don’t think of yourself as very creative, then simply use your Creative Hours to explore any challenges in your life or to innovate.

    There is so much more, but I find that if you play around with these Personal Productivity Tools, you can very quickly get better results in work and life.

    If you don't know where to start, start simple:

    Ask yourself what are the Three Wins you want to accomplish today, and write those down on a piece of paper.

    That's it -- You're doing Agile Results.

    Categories: Architecture, Programming

    Should I Buy a Small House?

    Making the Complex Simple - John Sonmez - Thu, 11/26/2015 - 16:00

    In this episode, I talk about buying a small house. Full transcript: John:               Hey, John Sonmez from simpleprogrammer.com. I got a question about investing in property. I did a couple of videos on property investment since I do some real estate investment and this email comes from John and John says, “Hi John, myself and […]

    The post Should I Buy a Small House? appeared first on Simple Programmer.

    Categories: Programming

    Scheduling containers and more with Nomad

    Xebia Blog - Thu, 11/26/2015 - 11:18

    Specifically for the Dutch Docker Day on the 20th of November, HashiCorp released version 0.2.0 of Nomad, which has some awesome features such as service discovery through integration with Consul, the system scheduler, and restart policies.  HashiCorp worked hard to release version 0.2.0 on the 18th of November, and we pushed ourselves to release a self-paced, hands-on workshop. If you would like to explore and play with these latest features of Nomad, check out the workshop over at http://workshops.nauts.io.

    In this blog post (or as I experienced it: roller coaster ride), you will catch a glimpse of the work that went into creating the workshop.

    Last Friday, November the 20th, was the first edition of the Dutch Docker Day, where I helped prepare a workshop about "scheduling containers and more with Nomad". It was a great experience where attendees got to play with the new features included in 0.2.0, which nearly didn't make it into the workshop.


    When HashiCorp released Nomad during their HashiConf event at the end of September, I was really excited, as they always produce high-quality tools with a great user experience. As soon as the binary was available I downloaded it and tried to set up a cluster to see how it compared to some of its competitors. The first release already had a lot of potential but also a lot of problems. For instance: when a container failed, Nomad would report it dead, but take no action; restart policies were still but a dream.

    There were a lot of awesome features in store for the future of Nomad: integration with Consul, system jobs, batch jobs, restart policies, etc. Imagine all the possible integrations with other HashiCorp tools! I was sold. So when I was asked to prepare a workshop for the Dutch Docker Day, I jumped at the opportunity to get better acquainted with Nomad. The idea was that the attendees of the workshop, since it was a pretty new product with some quirks, would go on an explorative journey into the far reaches of the scheduler and together find its treasures and dark secrets.

    Time went by and the workshop was taking shape nicely. We had a nice setup with a cluster of machines that automatically bootstrap the Nomad cluster and set up its basic configuration. We were told that there would be a new version released before the Dutch Docker Day, but nothing appeared until the day before the event. I was both excited and terrified! The HashiCorp team worked long hours to get the new release of Nomad done in time for the Dutch Docker Day so Armon Dadgar, the CTO of HashiCorp and original creator of Nomad, could present the new features during his talk. This of course is a great thing, except for the fact that the workshop was entirely aimed at 0.1.2 and we had none of these new features incorporated into our Vagrant box. Were we going to throw all our work overboard and just start over, the night before the event?

    “Immediately following the initial release of Nomad 0.1, we knew we wanted to get Nomad 0.2 and all its enhancements into the hands of our users by Dutch Docker Day. The team put in a huge effort over the course of a month and a half to get all the new features done and to a polish people expect of HashiCorp products. Leading up to the release we ran into one crazy bug after another (surprisingly all related to serialization). After intense debugging we got it to the fit and polish we wanted the night before at 3 AM! Once the website docs were updated and the blog post written, we released Nomad 0.2. The experience was very exciting but also exhausting! We are very happy with how it turned out and hope the users are too!”

    - Alex Dadgar, HashiCorp Engineer working on Nomad

    It took until late in the evening to get an updated Vagrant box with a bootstrapped Consul cluster and the new Nomad version, in order to showcase the service discovery and Consul integration that 0.2.0 added. However, the slides for the workshop were still referencing the problems we had encountered when trying out the 0.1.0 and 0.1.2 releases, so all the slides and statements we had made about things not working, or being planned for a future release, had to be aligned with the fixes and improvements that came with the new version. After some hours of hectic editing on the morning of the event, the slides were finally updated and showcased all the glorious new features!

    Nomad simplifies operations by supporting blue/green deployments, automatically handling machine failures, and providing a single workflow to deploy applications.

    The number of new features, fixes and improvements in this release is staggering. To discover services there is no longer a need for extra tools such as Registrator: services are now automatically registered in Consul when they start and deregistered when they stop (which I first thought was a bug, because I wasn't used to Nomad actually restarting my dead containers). The system scheduler is another feature I've been missing in other schedulers for a while, as it makes it easy to schedule services (such as Consul or Sysdig) on all of the eligible nodes in the cluster.
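    To give a feel for what a system job with Consul service registration looks like, here is a rough sketch of a Nomad job file. The job name, Docker image and port label are made up, and the exact stanza syntax has changed between Nomad versions, so treat this as an illustration rather than a working spec:

```hcl
# Hypothetical system job: run one monitoring agent on every eligible
# node and let Nomad register it in Consul automatically.
job "monitoring" {
  datacenters = ["dc1"]

  # "system" schedules the task group on all eligible nodes
  type = "system"

  group "agents" {
    task "agent" {
      driver = "docker"

      config {
        image = "sysdig/sysdig"
      }

      # The service block is what 0.2.0 registers in Consul for you
      service {
        name = "sysdig"
        port = "agent"
      }

      resources {
        network {
          port "agent" {}
        }
      }
    }
  }
}
```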

    Feature                   Description                                                         0.1.2  0.2.0
    Service scheduler         Schedule a long-lived job.                                          Y      Y
    Batch scheduler           Schedule batch workloads.                                           Y      Y
    System scheduler          Schedule a long-lived job on every eligible node.                   N      Y
    Service discovery         Discover launched services in Consul.                               N      Y
    Restart policies          If and how to restart a service when it fails.                      N      Y
    Distinct host constraint  Ensure that Task Groups are running on distinct clients.            N      Y
    Raw exec driver           Run exec jobs without jailing them.                                 N      Y
    Rkt driver                A driver for running containers with Rkt.                           N      Y
    External artifacts        Download external artifacts to execute for the Exec and Raw exec drivers.  N      Y

    And numerous fixes/improvements were added in 0.2.0.

    If you would like to follow the self-paced workshop, you can find the slides, machines and scripts at http://workshops.nauts.io, together with the other workshops of the event. Please let me know about your experiences, so the workshop can be improved over time!

    I would like to thank the HashiCorp team for their amazing work on the 0.2.0 release; the speed at which they have added so many great new features and improved the stability is incredible.

    It was a lot of fun preparing the workshop for the Dutch Docker Day. Working with bleeding-edge technology is always a great way to really get to know its inner workings and quirks, and I would recommend it to anyone. Just be prepared to do some last-minute work ;)

    How To Hire A Tech Team

    Making the Complex Simple - John Sonmez - Wed, 11/25/2015 - 14:00

    As with any career, there comes a time when you have to take on more responsibility. Usually, that’s because you’ve become an established, experienced member of the team, or you’ve reached a point of respect where your seniors look to you for guiding their hiring decisions. Ultimately, they trust you. At this point, you might […]

    The post How To Hire A Tech Team appeared first on Simple Programmer.

    Categories: Programming

    jq: Cannot iterate over number / string and number cannot be added

    Mark Needham - Tue, 11/24/2015 - 01:12

    In my continued parsing of meetup.com’s JSON API I wanted to extract some information from the following JSON file:

    $ head -n40 data/members/18313232.json
    [
      {
        "status": "active",
        "city": "London",
        "name": ". .",
        "other_services": {},
        "country": "gb",
        "topics": [],
        "lon": -0.13,
        "joined": 1438866605000,
        "id": 92951932,
        "state": "17",
        "link": "http://www.meetup.com/members/92951932",
        "photo": {
          "thumb_link": "http://photos1.meetupstatic.com/photos/member/8/d/6/b/thumb_250896203.jpeg",
          "photo_id": 250896203,
          "highres_link": "http://photos1.meetupstatic.com/photos/member/8/d/6/b/highres_250896203.jpeg",
          "photo_link": "http://photos1.meetupstatic.com/photos/member/8/d/6/b/member_250896203.jpeg"
        },
        "lat": 51.49,
        "visited": 1446745707000,
        "self": {
          "common": {}
        }
      },
      {
        "status": "active",
        "city": "London",
        "name": "Abdelkader Idryssy",
        "other_services": {},
        "country": "gb",
        "topics": [
          {
            "name": "Weekend Adventures",
            "urlkey": "weekend-adventures",
            "id": 16438
          },
          {
            "name": "Community Building",
            "urlkey": "community-building",
    In particular I want to extract the member’s id, name, join date and the ids of topics they’re interested in. I started with the following jq query to try and extract those attributes:

    $ jq -r '.[] | [.id, .name, .joined, (.topics[] | .id | join(";"))] | @csv' data/members/18313232.json
    Cannot iterate over number (16438)

    Annoyingly this treats topic ids on an individual basis rather than as an array as I wanted. I tweaked the query to the following with no luck:

    $ jq -r '.[] | [.id, .name, .joined, (.topics[].id | join(";"))] | @csv' data/members/18313232.json
    Cannot iterate over number (16438)

    As a guess I decided to wrap ‘.topics[].id’ in an array literal to see if it had any impact:

    $ jq -r '.[] | [.id, .name, .joined, ([.topics[].id] | join(";"))] | @csv' data/members/18313232.json
    92951932,". .",1438866605000,""
    jq: error (at data/members/18313232.json:60013): string ("") and number (16438) cannot be added

    Woot! A different error message at least, and this one seems to be due to a type mismatch between the string we want to end up with and the array of numbers that we currently have.
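    The same kind of type mismatch is easy to reproduce outside jq. As a tiny illustration (not part of the original post), Python's str.join fails on numbers in much the same way until you convert them to strings first, which is exactly what jq's tostring does:

```python
topic_ids = [16438, 20727]

# Joining raw numbers fails, much like jq's join/1 on a number array
try:
    ";".join(topic_ids)
except TypeError as e:
    print("error:", e)

# Converting each id to a string first (the jq `tostring` trick) works
print(";".join(str(i) for i in topic_ids))
```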

    We can cast our way to victory with the ‘tostring’ function:

    $ jq -r '.[] | [.id, .name, .joined, ([.topics[].id | tostring] | join(";"))] | @csv' data/members/18313232.json
    92951932,". .",1438866605000,""
    193866304,"Abdelkader Idryssy",1445195325000,"16438;20727;15401;9760;20246;20923;3336;2767;242;58259;4417;1789;10454;20274;10232;563;25375;16433;15187;17635;26273;21808;933;7789;23884;16212;144477;15322;21067;3833;108403;20221;1201;182;15083;9696;4377;15360;18296;15121;17703;10161;1322;3880;18333;3485;15585;44584;18692;21681"
    28643052,"Abhishek Chanda",1439688955000,"646052;520302;15167;563;65735;537492;646072;537502;24959;1025832;8599;31197;24410;26118;10579;1064;189;48471;16216;18062;33089;107633;46831;20479;1423042;86258;21441;3833;21681;188;9696;58162;20398;113032;18060;29971;55324;30928;15261;58259;638;16475;27591;10107;242;109595;10470;26384;72514;1461192"
    39523062,"Adam Kinder-Jones",1438677474000,"70576;21549;3833;42277;164111;21522;93380;48471;15167;189;563;25435;87614;9696;18062;58162;10579;21681;19882;108403;128595;15582;7029"
    194119823,"Adam Lewis",1444867994000,"10209"
    14847001,"Adam Rogers",1422917313000,""
    87709042,"Adele Green",1436867381000,"15167;18062;102811;9696;30928;18060;78565;189;7029;48471;127567;10579;58162;563;3833;16216;21441;37708;209821;15401;59325;31792;21836;21900;984862;15720;17703;96823;4422;85951;87614;37428;2260;827;121802;19672;38660;84325;118991;135612;10464;1454542;17936;21549;21520;17628;148303;20398;66339;29661"
    11497937,"Adrian Bridgett",1421067940000,"30372;15046;25375;638;498;933;374;27591;18062;18060;15167;10581;16438;15672;1998;1273;713;26333;15099;15117;4422;15892;242;142180;563;31197;20479;1502;131178;15018;43256;58259;1201;7319;15940;223;8652;66493;15029;18528;23274;9696;128595;21681;17558;50709;113737"
    14151190,"adrian lawrence",1437142198000,"7029;78565;659;85951;15582;48471;9696;128595;563;10579;3833;101960;16137;1973;78566;206;223;21441;16216;108403;21681;186;1998;15731;17703;15043;16613;17885;53531;48375;16615;19646;62326;49954;933;22268;19243;37381;102811;30928;455;10358;73511;127567;106382;16573;36229;781;23981;1954"
    183557824,"Adrien Pujol",1421882756000,"108403;563;9696;21681;188;24410;1064;32743;124668;15472;21123;1486432;1500742;87614;46831;1454542;46810;166000;126177;110474"
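    For comparison, here is a small Python sketch of the same extraction; the sample data below is a trimmed, hypothetical version of the member objects shown above, not the real file:

```python
import csv
import io
import json

# Trimmed, hypothetical sample mirroring the meetup.com member JSON above
raw = """
[
  {"id": 92951932, "name": ". .", "joined": 1438866605000, "topics": []},
  {"id": 193866304, "name": "Abdelkader Idryssy", "joined": 1445195325000,
   "topics": [{"id": 16438}, {"id": 20727}]}
]
"""

members = json.loads(raw)
out = io.StringIO()
writer = csv.writer(out)
for m in members:
    # Equivalent of jq's ([.topics[].id | tostring] | join(";"))
    topic_ids = ";".join(str(t["id"]) for t in m["topics"])
    writer.writerow([m["id"], m["name"], m["joined"], topic_ids])
print(out.getvalue())
```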
    Categories: Programming

    Example Mapping - Steering the conversation

    Xebia Blog - Mon, 11/23/2015 - 19:14

    People who are familiar with BDD and ATDD already know how useful the three amigos (product owner, tester and developer) session is for talking about what the system under development is supposed to do. But somehow these refinement sessions seem to drain the group's energy. One of the problems I see is not having a clear structure for conversations.

    Example Mapping is a simple technique that can steer the conversation towards breaking down any product backlog item within 30 minutes.

    The Three Amigos

    Example Mapping is best used in so-called Three Amigos Sessions. The purpose of this session is to create a common understanding of the requirements and a shared vocabulary between the product owner and the rest of the team. During this session the product owner shares every user story by explaining the need for a change in the product. It is essential that the conversation includes multiple points of view. Testers and developers identify missing requirements or edge cases, which are addressed by describing the accepted behaviour before a feature is considered ready for development.

    In order to help you steer the conversations, here is a list of guidelines for Three Amigos Sessions:

    • Empathy: Make sure the team has the capability to help each other understand the requirements. Without empathy, and the room for it, you are lost.
    • Common understanding of the domain: Make sure that the team uses the same vocabulary (digital or physical) and speaks the same domain language.
    • Think big, but act small: Make sure all user stories are small and ready enough to make an impact.
    • Rules and examples: Make sure every user story explains the specification with rules and scenarios / examples.
    Example mapping

    The basic ingredients for Example Mapping are curiosity and a pack of post-it notes in the following colours:

    • Yellow for the user story
    • Red for questions
    • Blue for rules
    • Green for examples

    Using the following steps can help you steer the conversations towards accepted behaviour of the system under development:

    1. Understanding the problem
      Let the product owner start by writing down the user story on a yellow post-it note and have them explain the need for a change in the product. The product owner should help the team understand the problem.
    2. Challenge the problem by asking questions
      Once the team has understood the problem, the team challenges the problem by asking questions. Collect all the questions by writing them down starting with "What if ... " on red post-it notes. Place them on the right side of the user story (yellow) post-it note. We will treat this post-it note as a ticket for a specific and focussed discussion.
    3. Identifying rules
      The key here is to identify a rule for every answer given (steered by the red question post-it notes). Extract rules from the answers and write them down on blue post-it notes. Place them below the user story (yellow) post-it note. Together these basically describe the acceptance criteria of the user story. Make sure that every rule can be discussed separately; the single responsibility principle and separation of concerns apply here too.
    4. Describing situations with examples
      Once you have collected all the important rules of the user story, collect the interesting situations / examples by writing them down on green post-it notes. Place them below the corresponding rule (blue) post-it note. Make sure that the team talks about examples focussed on one single rule. Steer the discussion by asking questions like: Have we reached the boundaries of the rule? What happens when the rule fails?
    An example


    In the example above, the product owner requires a free shipping process. She wrote it down on a yellow post-it note. After collecting and answering questions, two rules were discussed and refined on blue post-it notes: the shopping cart limit and the appearance of the free shipping banner on product pages. All further discussion was steered towards the appropriate rule. Two examples were defined for the shopping cart limit and one for the free shipping banner, on green post-it notes. Besides steering the team into rule-based discussions, the team also gets a clear overview of the scope for the first iteration of the requirement.
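    As a rough illustration, the four kinds of post-its can be sketched as a small data structure. The class names and the concrete example wordings below are my own invention; the story and rules follow the free-shipping example from this post:

```python
from dataclasses import dataclass, field
from typing import List

# A minimal sketch of an Example Mapping board as data.
@dataclass
class Rule:                                             # blue post-it
    text: str
    examples: List[str] = field(default_factory=list)   # green post-its

@dataclass
class Story:                                            # yellow post-it
    title: str
    questions: List[str] = field(default_factory=list)  # red post-its
    rules: List[Rule] = field(default_factory=list)

story = Story("Free shipping")
story.questions.append("What if the cart is just below the limit?")

cart_limit = Rule("Shopping cart limit")
cart_limit.examples += [
    "Cart below the limit -> no free shipping",
    "Cart at or above the limit -> free shipping",
]
banner = Rule("Free shipping banner on product pages")
banner.examples.append("Banner is shown on every product page")

story.rules += [cart_limit, banner]

# A quick overview of the board: rules, examples, open questions
print(len(story.rules), sum(len(r.examples) for r in story.rules),
      len(story.questions))
```

    Counting the cards like this gives the same readiness signal the article describes: a story with rules that each have focused examples, plus any questions still left open.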

    Getting everyone on the same page is the key to success here. Try it a couple of times and let me know how it went.