
Software Development Blogs: Programming, Software Testing, Agile Project Management


Running unit tests on iOS devices

Xebia Blog - Tue, 09/09/2014 - 09:48

When you run a unit test target that needs an entitlement (e.g. keychain access), it does not work out of the box in Xcode: you get an error in the console about a "missing entitlement". Everything works fine in the Simulator, though.

Often this is a case of the executable bundle's code signature no longer being valid because a test bundle was added/linked to the executable before deployment to the device. The easiest fix is to add a new "Run Script Build Phase" with the following content:

codesign --verify --force --sign "$CODE_SIGN_IDENTITY" "$CODESIGNING_FOLDER_PATH"


Now try (cleaning and) running your unit tests again. There's a good chance it now works.

Update on Android Wear Paid Apps

Android Developers Blog - Mon, 09/08/2014 - 22:02

Update (8 September 2014): All of the issues in the post below have been resolved in Android Studio 0.8.3 onwards, released on 21 July 2014. The Gradle wearApp rule and the packaging documentation were updated to use res/raw. The instructions in this blog post remain correct, and you can continue to use manual packaging if you want. You can upload Android Wear paid apps to Google Play using the standard wearApp release mechanism.
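
For reference, that standard mechanism boils down to a one-line Gradle dependency in the handheld module's build file. A minimal sketch (the ':wearable' module name is an assumption about your project layout):

dependencies {
    // packages the release wearable APK inside the handheld APK
    wearApp project(':wearable')
}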

We have a workaround to enable paid apps (and other apps that use Google Play's forward-lock mechanism) on Android Wear. The assets/ directory of those apps, which contains the wearable APK, cannot be extracted or read by the wearable installer. The workaround is to place the wearable APK in the res/raw directory instead.

As per the documentation, there are two ways to package your wearable app: use the “wearApp” Gradle rule, or package the wearable app manually. For paid apps you cannot use the “wearApp” Gradle rule; the workaround is to package your app manually with the following changes. To manually package the wearable APK into res/raw, do the following:

  1. Copy the signed wearable app into your handheld project's res/raw directory and rename it to wearable_app.apk; it will be referred to as wearable_app.
  2. Create a res/xml/wearable_app_desc.xml file that contains the version and path information of the wearable app:
    <wearableApp package="wearable app package name">
        <versionCode>1</versionCode>
        <versionName>1.0</versionName>
        <rawPathResId>wearable_app</rawPathResId>
    </wearableApp>

    The package, versionCode, and versionName are the same as the values specified in the wearable app's AndroidManifest.xml file. The rawPathResId is the static variable name of the resource. If the filename of your resource is wearable_app.apk, the static variable name is wearable_app.

  3. Add a <meta-data> tag to your handheld app's <application> tag to reference the wearable_app_desc.xml file.
    <meta-data android:name="com.google.android.wearable.beta.app"
               android:resource="@xml/wearable_app_desc"/>
  4. Build and sign the handheld app.

We will be updating the “wearApp” Gradle rule in a future update to the Android SDK build tools to support APK embedding into res/raw. In the meantime, for paid apps you will need to follow the manual steps outlined above. We will also be updating the documentation to reflect the above workaround. We're working to make this easier for you in the future, and we apologize for the inconvenience.




Categories: Programming

An F# newbie using SQLite

Eric.Weblog() - Eric Sink - Mon, 09/08/2014 - 19:00

Like I said in a tweet on Friday, I'm guessing everybody's first 10,000 lines of F# are crap. That's a lot of bad code I need to write, so I figure maybe I better get started.

This blog entry is a journal of my first attempts at using F# to do some SQLite stuff. I'm using SQLitePCL.raw, which is a Portable Class Library wrapper (written in C#) allowing .NET developers to call the native SQLite library.

My program has five "stanzas":

  • ONE: Open a SQLite database and create a table with two integer columns called a and b.

  • TWO: Insert 16 rows with a going from 1 through 16.

  • THREE: Set column b equal to a squared, and look up the value of b for a=7.

  • FOUR: Loop over all the rows where b<120 and sum the a values.

  • FIVE: Close the database and print the two results.

I've got three implementations of this program to show you -- one in C# and two in F#.

C#

Here's the C# version I started with:

using System;
using System.IO;

using SQLitePCL;

public class foo
{
    // helper functions to check SQLite result codes and throw

    private static bool is_error(int rc)
    {
        return (
                (rc != raw.SQLITE_OK)
                && (rc != raw.SQLITE_ROW)
                && (rc != raw.SQLITE_DONE)
           );
    }

    private static void check(int rc)
    {
        if (is_error(rc))
        {
            throw new Exception(string.Format("{0}", rc));
        }
    }

    private static void check(sqlite3 conn, int rc)
    {
        if (is_error(rc))
        {
            throw new Exception(raw.sqlite3_errmsg(conn));
        }
    }

    private static int checkthru(sqlite3 conn, int rc)
    {
        if (is_error(rc))
        {
            throw new Exception(raw.sqlite3_errmsg(conn));
        }
        else
        {
            return rc;
        }
    }

    // MAIN program

    public static void Main()
    {
        sqlite3 conn = null;

        // ONE: open the db and create the table

        check(raw.sqlite3_open(":memory:", out conn));

        check(conn, raw.sqlite3_exec(conn, "CREATE TABLE foo (a int, b int)"));

        // TWO: insert 16 rows

        for (int i=1; i<=16; i++)
        {
            string sql = string.Format("INSERT INTO foo (a) VALUES ({0})", i);
            check(conn, raw.sqlite3_exec(conn, sql));
        }

        // THREE: set b = a squared and find b for a=7

        check(conn, raw.sqlite3_exec(conn, "UPDATE foo SET b = a * a"));

        sqlite3_stmt stmt = null;
        check(conn, raw.sqlite3_prepare_v2(conn, "SELECT b FROM foo WHERE a=?", out stmt));

        check(conn, raw.sqlite3_bind_int(stmt, 1, 7));
        check(conn, raw.sqlite3_step(stmt));
        int vsq = raw.sqlite3_column_int(stmt, 0);
        check(conn, raw.sqlite3_finalize(stmt));
        stmt = null;

        // FOUR: fetch sum(a) for all rows where b < 120

        check(conn, raw.sqlite3_prepare_v2(conn, "SELECT a FROM foo WHERE b<120", out stmt));

        int vsum = 0;

        while (raw.SQLITE_ROW == (checkthru(conn, raw.sqlite3_step(stmt))))
        {
            vsum += raw.sqlite3_column_int(stmt, 0);
        }
        
        check(conn, raw.sqlite3_finalize(stmt));
        stmt = null;

        // FIVE: close and print the results

        check(raw.sqlite3_close(conn));
        conn = null;

        Console.WriteLine("val: {0}", vsq);
        Console.WriteLine("sum: {0}", vsum);
    }
}

Notes:

  • I'm coding against the 'raw' SQLite API, which returns integer error codes rather than throwing exceptions. So I've written some little check functions which throw on any result code that signifies an error condition.

  • In the first stanza, I'm opening ":memory:" rather than an actual file on disk so that I can be sure the db starts clean.

  • In the second stanza, I'm constructing the SQL string rather than using parameter substitution. This is a bad idea for two reasons. First, parameter substitution prevents SQL injection attacks. Second, forcing SQLite to compile the same SQL statement on every loop iteration is going to hurt performance. (A sketch of the parameterized alternative appears after these notes.)

  • In the third stanza, I'm going out of my way to do this more properly, using prepare/bind/step/finalize. Ironically, this is the case where it doesn't matter as much, since I'm not looping.

  • In the fourth stanza, I specifically want to loop over the rows in C# even though I could easily just do the sum in SQL.
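
For what it's worth, here is a sketch of what a parameterized version of the second stanza might look like, reusing one compiled statement via sqlite3_reset. This is just an illustration, not part of the original program:

sqlite3_stmt ins = null;
check(conn, raw.sqlite3_prepare_v2(conn, "INSERT INTO foo (a) VALUES (?)", out ins));
for (int i=1; i<=16; i++)
{
    check(conn, raw.sqlite3_bind_int(ins, 1, i));  // bind the single parameter
    check(conn, raw.sqlite3_step(ins));            // execute the insert
    check(conn, raw.sqlite3_reset(ins));           // rewind the statement for reuse
}
check(conn, raw.sqlite3_finalize(ins));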

F#, first attempt

OK, now here's a painfully direct translation of this code to F#:

open SQLitePCL

// helper functions to check SQLite result codes and throw

let is_error rc = 
    (rc <> raw.SQLITE_OK) 
    && (rc <> raw.SQLITE_ROW) 
    && (rc <> raw.SQLITE_DONE)

let check1 rc = 
    if (is_error rc) 
    then failwith (sprintf "%d" rc) 
    else ()

let check2 conn rc = 
    if (is_error rc) 
    then failwith (raw.sqlite3_errmsg(conn)) 
    else ()

let checkthru conn rc = 
    if (is_error rc) 
    then failwith (raw.sqlite3_errmsg(conn)) 
    else rc

// MAIN program

// ONE: open the db and create the table

let (rc,conn) = raw.sqlite3_open(":memory:") 
check1 rc

check2 conn (raw.sqlite3_exec (conn, "CREATE TABLE foo (a int, b int)"))

// TWO: insert 16 rows

for i = 1 to 16 do 
    let sql = (sprintf "INSERT INTO foo (a) VALUES (%d)" i)
    check2 conn (raw.sqlite3_exec (conn, sql ))

// THREE: set b = a squared and find b for a=7

check2 conn (raw.sqlite3_exec (conn, "UPDATE foo SET b = a * a"))

let rc2,stmt = raw.sqlite3_prepare_v2(conn, "SELECT b FROM foo WHERE a=?")
check2 conn rc2

check2 conn (raw.sqlite3_bind_int(stmt, 1, 7))
check2 conn (raw.sqlite3_step(stmt))
let vsq = raw.sqlite3_column_int(stmt, 0)
check2 conn (raw.sqlite3_finalize(stmt))

// FOUR: fetch sum(a) for all rows where b < 120

let rc3,stmt2 = raw.sqlite3_prepare_v2(conn, "SELECT a FROM foo WHERE b<120")
check2 conn rc3

let mutable vsum = 0

while raw.SQLITE_ROW = ( checkthru conn (raw.sqlite3_step(stmt2)) ) do 
    vsum <- vsum + (raw.sqlite3_column_int(stmt2, 0))

check2 conn (raw.sqlite3_finalize(stmt2))

// FIVE: close and print the results

check1 (raw.sqlite3_close(conn))

printfn "val: %d" vsq
printfn "sum: %d" vsum

Notes:

  • The is_error function actually looks kind of elegant to me in this form. Note that != is spelled <>. Also there is no return keyword, as the value of the last expression just becomes the return value of the function.

  • The F# way is to use type inference. For example, in the is_error function, the rc parameter is strongly typed as an integer even though I haven't declared it that way. The F# compiler looks at the function and sees that I am comparing the parameter against raw.SQLITE_OK, which is an integer, therefore rc must be an integer as well. F# does have a syntax for declaring the type explicitly, but this is considered bad practice.

  • The check2 and checkthru functions are identical except that one returns the unit type (which is kind of like void) and the other passes the integer argument through. In C# this wouldn't matter and I could have just had the check functions return their argument when they don't throw. But F# gives warnings ("This expression should have type 'unit', but has type...") for any expression whose value is not used.

  • In C#, I overloaded check() so that I could sometimes call it with the sqlite3 connection handle and sometimes without. F# doesn't allow overloading of let-bound functions, so I wrote two versions of the function, called check1 and check2.

  • Since raw.sqlite3_open() has an out parameter, F# automatically converts this to return a tuple with two items (the actual return value is first, and the value in the out parameter is second). It took me a little while to figure out the right syntax to get the two parts into separate variables.

  • It took me even longer to figure out that calling a .NET method in F# uses a different syntax than calling a regular F# function. I was just getting used to the idea that F# wants functions to be called without parens and with the parameters separated by spaces instead of commas. But .NET methods are not F# functions. The syntax for calling a .NET method is, well, just like in C#: parens and commas. (See the side-by-side sketch after these notes.)

  • Here's another way that method calls are different in F#: When a method call is a parameter to a regular F# function, you have to enclose it in parens. That's why the call to sqlite3_exec() in the first stanza is parenthesized when I pass it to check2.

  • BTW, one of the first things I did was try to call raw.sqlite3_Open(), just to verify that F# is case-sensitive. It is.

  • F# programmers seem to pride themselves on how much they can do in a single line of code, regardless of how long it is. I originally wrote the second stanza in a single line. I only split it up so it would look better here in my blog article.

  • In the third stanza, F# wouldn't let me reuse rc ("Duplicate definition of value 'rc'") so I had to introduce rc2.

  • In the fourth stanza, I have tried to exactly mimic the behavior of the C# code, and I think I've succeeded so thoroughly that any real F# programmer will be tempted to gouge their eyes out when they see it. I've used mutable and while/do, both of which are considered a very un-functional way of doing things.

  • Bottom line: This code works and it does exactly what the original C# does. But I named the file fsharp_dreadful.fs because I think that in terms of what is considered best practices in the F# world, it's probably about as bad as it can be while still being correct.
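
Here is the side-by-side sketch of the two call syntaxes, using pieces from the program above:

// F# function: no parens, arguments separated by spaces
check2 conn rc2

// .NET method: parens and commas, just like C#
raw.sqlite3_bind_int(stmt, 1, 7) |> ignore

// a method call passed as an argument to an F# function must be parenthesized
check2 conn (raw.sqlite3_step(stmt))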

F#, somewhat less csharpy

Here's an F# version I called fsharp_less_bad.fs. It's still not very good, but I've made an attempt to do some things in a more F#-ish way.

open SQLitePCL

// helper functions to check SQLite result codes and throw

let is_error rc = 
    match rc with
    | raw.SQLITE_OK -> false
    | raw.SQLITE_ROW -> false
    | raw.SQLITE_DONE -> false
    | _ -> true

let check1 rc = 
    if (is_error rc) 
    then failwith (sprintf "%d" rc) 
    else ()

let check2 conn rc = 
    if (is_error rc) 
    then failwith (raw.sqlite3_errmsg(conn)) 
    else rc

let checkpair1 pair =
    let rc,result = pair
    check1 rc |> ignore
    result

let checkpair2 conn pair =
    let rc,result = pair
    check2 conn rc |> ignore
    result

// helper functions to wrap method calls in F# functions

let sqlite3_open name = checkpair1 (raw.sqlite3_open(name))
let sqlite3_exec conn sql = check2 conn (raw.sqlite3_exec (conn, sql)) |> ignore
let sqlite3_prepare_v2 conn sql = checkpair2 conn (raw.sqlite3_prepare_v2(conn, sql))
let sqlite3_bind_int conn stmt ndx v = check2 conn (raw.sqlite3_bind_int(stmt, ndx, v)) |> ignore
let sqlite3_step conn stmt = check2 conn (raw.sqlite3_step(stmt)) |> ignore
let sqlite3_finalize conn stmt = check2 conn (raw.sqlite3_finalize(stmt)) |> ignore
let sqlite3_close conn = check1 (raw.sqlite3_close(conn))
let sqlite3_column_int stmt ndx = raw.sqlite3_column_int(stmt, ndx)

// MAIN program

// ONE: open the db and create the table

let conn = sqlite3_open(":memory:")

// use partial application to create an exec function that already 
// has the conn parameter baked in

let exec = sqlite3_exec conn

exec "CREATE TABLE foo (a int, b int)"

// TWO: insert 16 rows

let ins x = 
    exec (sprintf "INSERT INTO foo (a) VALUES (%d)" x)

[1 .. 16] |> List.iter ins

// THREE: set b = a squared and find b for a=7

exec "UPDATE foo SET b = a * a"

let stmt = sqlite3_prepare_v2 conn "SELECT b FROM foo WHERE a=?"
sqlite3_bind_int conn stmt 1 7
sqlite3_step conn stmt
let vsq = sqlite3_column_int stmt 0
sqlite3_finalize conn stmt

// FOUR: fetch sum(a) for all rows where b < 120

let stmt2 = sqlite3_prepare_v2 conn "SELECT a FROM foo WHERE b<120"

let vsum = List.sum [ while 
    raw.SQLITE_ROW = ( check2 conn (raw.sqlite3_step(stmt2)) ) do 
        yield sqlite3_column_int stmt2 0 
    ]

sqlite3_finalize conn stmt2

// FIVE: close and print the results

sqlite3_close conn

printfn "val: %d" vsq
printfn "sum: %d" vsum

Notes:

  • I changed is_error to use pattern matching. For this very simple situation, I'm not sure this is an improvement over the simple boolean expression I had before.

  • I get the impression that typical doctrine in functional programming church is to not use exceptions, but I'm not tackling that problem here.

  • I got rid of checkthru and just made check2 return its rc parameter when it doesn't throw. This means that most of the times I call check2, I have to ignore the result or else I get a warning.

  • I added a couple of checkpair functions. These are designed to take a tuple, such as the one that comes from a .NET method with an out parameter, like sqlite3_open() or sqlite3_prepare_v2(). The checkpair function applies the appropriate check function to the first part of the tuple (the integer return code) and then returns the second part. The sort-of clever thing here is that checkpair does not care what type the second part of the tuple is. I get the impression that this sort of "things are generic by default" philosophy is a pillar of functional programming.

  • I added several functions which wrap raw.sqlite3_whatever() as a more F#-like function that looks less cluttered.

  • In the first stanza, after I get the connection open, I define an exec function using the F# partial application feature. The exec function is basically just the sqlite3_exec function except that the conn parameter has already been baked in. This allows me to use very readable syntax like exec "whatever". I considered doing this for all the functions, but I'm not really sure this design is a good idea. I just found this hammer called "partial application" so I was looking for a nail.

  • In the second stanza, I've eliminated the for loop in favor of a list operation. I defined a function called ins which inserts one row. The [1 .. 16] syntax produces a range, which is piped into List.iter.

  • The third stanza looks a lot cleaner with all the .NET method calls hidden away.

  • In the fourth stanza, I still have a while loop, but I was able to get rid of mutable. The syntax I'm using here is (I think) something called a computation expression. Basically, the stuff inside the square brackets is constructing a list with a while loop. Then List.sum is called on that list, resulting in the number I want.

Other notes

I did all this using the command-line F# tools and Mono on a Mac. I've got a command called fsharpc on my system. I'm not sure how it got there, but it probably happened when I installed Xamarin.

Since I'm not using msbuild or NuGet, I just harvested SQLitePCL.raw.dll from the SQLitePCL.raw NuGet package. The net45 version is compatible with Mono, and on a Mac, it will simply P/Invoke into the SQLite library that comes preinstalled with Mac OS X.

So the bash commands to set up my environment for this blog entry looked something like this:

mkdir fs_sqlite
cd fs_sqlite
mkdir unzipped
cd unzipped
unzip ~/Downloads/sqlitepcl.raw_needy.0.5.0.nupkg 
cd ..
cp unzipped/lib/net45/SQLitePCL.raw.dll .

Here are the build commands I used:

fs_sqlite eric$ gmcs -r:SQLitePCL.raw.dll -sdk:4.5 csharp.cs

fs_sqlite eric$ fsharpc -r:SQLitePCL.raw.dll fsharp_dreadful.fs
F# Compiler for F# 3.1 (Open Source Edition)
Freely distributed under the Apache 2.0 Open Source License

fs_sqlite eric$ fsharpc -r:SQLitePCL.raw.dll fsharp_less_bad.fs
F# Compiler for F# 3.1 (Open Source Edition)
Freely distributed under the Apache 2.0 Open Source License

fs_sqlite eric$ ls -l *.exe
-rwxr-xr-x  1 eric  staff   4608 Sep  8 15:30 csharp.exe
-rwxr-xr-x  1 eric  staff   8192 Sep  8 15:30 fsharp_dreadful.exe
-rwxr-xr-x  1 eric  staff  11776 Sep  8 15:31 fsharp_less_bad.exe

fs_sqlite eric$ mono csharp.exe
val: 49
sum: 55

fs_sqlite eric$ mono fsharp_dreadful.exe 
val: 49
sum: 55

fs_sqlite eric$ mono fsharp_less_bad.exe 
val: 49
sum: 55

BTW, I noticed that compiling F# (fsharpc) is a LOT slower than compiling C# (gmcs).

Note that the command-line flag to reference (-r:) an assembly is the same for F# as it is for C#.

Note that fsharp_dreadful.exe is bigger than csharp.exe, and the "less_bad" exe is even bigger. I suspect that generalizing these observations would be extrapolating from too little data.

C# fans may notice that I [attempted to] put more effort into the F# code. This was intentional. Making the C# version beautiful was not the point of this blog entry.

So far, my favorite site for learning F# has been http://fsharpforfunandprofit.com/

 

How To Rapidly Brainstorm and Share Ideas with Method 635

So, if you have a bunch of smart people, a bunch of bright ideas, and everybody wants to talk at the same time ... what do you do?

Or, you have a bunch of smart people, but they are quiet and nobody is sharing their bright ideas, and the squeaky wheel gets the oil ... what do you do?

Whenever you get a bunch of smart people together to change the world it helps to have some proven practices for better results.

One of the techniques a colleague shared with me recently is Method 635.  The name stands for six participants, three ideas, and five rounds in which the ideas are supplemented.

He's used Method 635 successfully to get a large room of smart people to brainstorm ideas and put their top three ideas forward.

Here's how he uses Method 635 in practice.

  1. Split the group into 6 people per table (6 people per team or table).
  2. Explain the issue or challenge to the group, so that everybody understands it. Each group of 6 writes down 3 solutions to the problem (5 minutes).
  3. Go five rounds (5 minutes per round).  During each round, each participant passes their sheet of ideas to a neighbor, who adds three new ideas or modifies three of the existing ones.
  4. At the end of the five rounds, each team votes on their top three ideas (5 minutes).  For example, you can use “impact” and “ability to execute” as criteria for voting (after all, who cares about good ideas that can't be done, or about low-value ideas that can easily be executed).
  5. Each team presents their top three ideas to the group.  You could then vote again, by a show of hands, on the top three ideas across the teams of six.

The outcome is that each person will see the original three solutions and contribute to the overall set of ideas.

By using this method, if you take 10 minutes to explain the issue, give teams 5 minutes to write down their initial set of 3 ideas, run five 5-minute rounds (25 minutes), and then allow 5 minutes to vote and 5 minutes to present, that adds up to 50 minutes: you've accomplished a lot within an hour.   Voices were heard.  Smart people contributed their ideas and got their fingerprints on the solutions.  And you've driven to consensus by first elaborating on ideas while, at the same time, driving to convergence and allowing refinement along the way.

Not bad.

All in a good day’s work, and another great example of how structuring an activity, even loosely structuring an activity, can help people bring out their best.

You Might Also Like

How To Use Six Thinking Hats

Idea to Done: How to Use a Personal Kanban for Getting Results

Workshop Planning Framework

Categories: Architecture, Programming

Xebia KnowledgeCast Episode 3

Xebia Blog - Mon, 09/08/2014 - 15:41

The Xebia KnowledgeCast is a bi-weekly podcast about software architecture, software development, lean/agile, continuous delivery, and big data. Also, we'll have some fun with stickies!

In this third episode, we get a bit more technical with me interviewing some of the most excellent programmers in the known universe: Age Mooy and Barend Garvelink. Then, I talk education and Smurfs with Femke Bender. Also, I toot my own horn a bit by providing you with a summary of my PMI Netherlands Summit session on Building Your Parachute On The Way Down. And of course, Serge will have Fun With Stickies!

It's been a while, and for those who are still listening to this feed: the Xebia KnowledgeCast has been on hold due to personal circumstances. Last year, my wife lost her job, my father and mother-in-law died, and we had to take our son out of the daycare center he was in due to the way they were treating him there. So, that had a little bit of an effect on my level of podfever. That said, I'm back! And my podfever is up!

Want to subscribe to the Xebia KnowledgeCast? Subscribe via iTunes, or use our direct rss feed.

Your feedback is appreciated. Please leave your comments in the show notes. Better yet, use the Auphonic recording app to send in a voice message as an AIFF, WAV, or FLAC file so we can put you ON the show!


Your Worst Enemy Is Yourself

Making the Complex Simple - John Sonmez - Mon, 09/08/2014 - 15:00

Here’s the thing… You could have been exactly where you want to be right now. You could have gotten the perfect job. You could have started that business you always wanted to start. You could have gotten those 6-pack abs. You could have even met the love of your life. There has only been one […]

The post Your Worst Enemy Is Yourself appeared first on Simple Programmer.

Categories: Programming

Getting started with Salt

Xebia Blog - Mon, 09/08/2014 - 09:07

A couple of days ago I had the chance to spend a full day working with Salt(stack). On my current project we are using a different configuration management tool, and my colleagues there claimed that Salt was simpler and more productive. The challenge was easily set: they claimed that a couple of people with no Salt experience, albeit with a little configuration management knowledge, would be productive in a single day.

My preparation was easy: I had to know nothing about Salt... done!

During the day I was working side by side with another colleague who knew little to nothing about Salt. When the day started, the original plan was to do a quick one-hour introduction to Salt. But as we like to dive in head first, this intro was skipped in favor of just getting started. We used an existing Vagrant box that spun up a Salt master & minion we could work on. The target was to get Salt to provision a machine for XL Deploy, complete with the customizations we were doing at our client. Think of custom plugins, logging configuration and custom libraries.

So we got cracking, running down the steps we needed to get XL Deploy installed. The steps were relatively simple, create a user & group, get the installation files from somewhere, install the server, initialize the repository and run it as a service.

The first thing I noticed is that we could simply get started. For the tasks we needed to do (downloading, unzipping etc.) we didn't need any additional states. In fact, during the whole exercise we never downloaded any additional states. Everything we needed was provided by Salt from the get-go. Granted, we weren't doing anything special, but it's not a given that everything is available.

During the day we approached the development of our Salt state like we would a shell script. We started from the top and added the steps needed. When we ran into issues with the order of the steps we'd simply move things around to get it to work. Things like creating a user before running a service as that user were easily resolved this way.

Salt uses YAML to define a state, which was fairly straightforward to use. Sometimes the naming is strange, though. For example, salt.states.archive uses the parameter "source" for its source location but "name" for its destination. It's clearly stated in the docs what each parameter is used for, but it's a strange convention nonetheless.
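
To give an idea of the shape of a state file, here is a hypothetical fragment (made-up names and paths, not our actual state):

# /srv/salt/xld/init.sls
xld-user:
  user.present:
    - name: xldeploy

xld-server:
  archive.extracted:
    - name: /opt/xld                           # the destination, despite the name
    - source: salt://xld/xl-deploy-server.zip
    - archive_format: zip
    - require:
      - user: xld-user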

We also found that the feedback provided by Salt can be scarce. On more than one occasion we'd enter a command and nothing would happen for a good while. Sometimes there would eventually be a lot of output, but sometimes there wasn't. This would be my biggest gripe with Salt: you don't always get the feedback you'd like.

Things like using templates and hierarchical data (the so-called pillars) proved easy to use. Salt uses jinja2 as its templating engine; since we only needed simple variable replacement, it's hard to comment on how useful jinja is. For our purposes it was fine. Using pillars proved equally straightforward. The only issue we encountered was that we needed to add our pillar to our machine role in the top.sls. Once we did that, we could use the pillar data where needed.
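
The pillar wiring looked roughly like this (again a hypothetical sketch with made-up names and values):

# /srv/pillar/top.sls
base:
  'xld-*':          # the machine role our pillar had to be attached to
    - xld

# /srv/pillar/xld.sls
xld:
  home: /opt/xld
  port: 4516

# referenced in a jinja template as {{ pillar['xld']['home'] }}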

The biggest (and only real) problem we encountered was getting XL Deploy to run as a service. We tried two approaches, one using the default service mechanism on Linux and the second using upstart. Upstart made it very easy to get the service started, but it wouldn't stop properly. Using the default mechanism, we couldn't get the service to start during a Salt run. When we sent it specific commands it would start (and stop) properly, but not during a run. We eventually added a post-stop script to the upstart config to make sure all the (child) processes stopped properly.

At the end of the day we had a state running that provisioned a machine with XL Deploy, including all the customizations we wanted. Salt basically did what we wanted. Apart from the service, everything went smoothly. Granted, we didn't do anything exotic and stuck to rudimentary tasks like downloading, unzipping and copying, but implementing these simple tasks remained simple and straightforward. Overall, Salt did what one might expect.

From my perspective the goal of being productive in a single day was easily achieved. Because of how straightforward it was to implement I feel confident about using Salt for more complex stuff. So, all in all Salt left a positive impression and I would like to do more with it.

How to create a knowledge worker Gemba

Software Development Today - Vasco Duarte - Mon, 09/08/2014 - 04:00

I am a big fan of the work by Jim Benson and Tonianne Barry ever since I read their book: Personal Kanban.

In this article Jim describes an idea that I would like to highlight and expand. He says: we need a knowledge worker Gemba. He goes on to describe how to create that Gemba:

  • Create a workcell for knowledge work: Where you can actually observe the team work and interact
  • Make work explicit: Without being able to visualize the work in progress, you will not be able to understand the impact of certain dynamics between the team members. Also, you will miss the necessary information that will allow you to understand the obstacles to flow in the team - what prevents value from being delivered.

These are just some steps you can take right now to understand deeply how work gets done in your team, in your organization, or by yourself if you are an independent knowledge worker. This understanding, in turn, will help you define concrete changes to the way work gets done, in a way that can be measured and understood.

I've tried the same idea for my own work and described it here. How about you? What have you tried to implement to create visibility and understanding in your work?

Are Testers still relevant?

Xebia Blog - Sun, 09/07/2014 - 22:07

Last week I talked to one of my colleagues about a tester in his team. He told me that the tester was bored, because he had nothing to do. All the developers wrote and executed their tests themselves. Which makes sense, because the tester 2.0 tries to make the team test infected.

So what happens if every developer in the team has the Testivus? Are you still relevant on the Continuous Delivery train?
Come and join the discussion at the Open Kitchen Test Automation: Are you still relevant?

Testers 1.0

Remember the days when software testers were consulted after everything was built and released for testing? Testing was a big fat QA phase, which was a project by itself. The QA department consisted of test managers analyzing the requirements first. Logical test cases were created and handed off to test executors, who created physical test cases and executed them manually. Testers discovered conflicting requirements and serious issues in technical implementations. Which is good, obviously. You don't want to deliver low-quality software. Right?

So product releases were being delayed and the QA department documented everything in a big fat test report. And we all knew it: The QA department had to do it all over again after the next release.

I remember being a tester during those days. I always asked myself: Why am I always the last one thinking about ways to break the system? Does the developer know how easily this functionality can be broken? Does the product manager know that this requirement does not make sense at all?
Everyone hated our QA department. We were portrayed as slow, always delivering bad news and holding back the delivery cycle. But the problem was not delivering the bad news. The timing was.

The way of working needed to be changed.

Testers 2.0

We started training testers to help Agile teams deliver high-quality software during development: the birth of the Tester 2.0, the Agile Tester.
These testers master the Agile Manifesto and the processes and methods that come with it. Collaboration about quality is the key here. Agile Testing is a mindset, and everyone is responsible for the quality of the product. Testers 2.0 helped teams get (more) test infected. They thought like researchers instead of quality gatekeepers. They became part of the software development and delivery teams, and they looked into possibilities to speed up testing efforts. So they practiced several exploratory testing techniques, focusing on the reasonable and required tests given the constraints of a sprint.

When we look back at several Agile teams that had a shared understanding of Agile Testing, we saw many multidisciplinary teams become two separate teams: one for developers and the other for QA / testers.
I personally never felt comfortable in those teams. Putting testers with a group of developers is not Agile Testing. Developers still left testing to the testers, and testers passively waited for developers to deploy something to be tested. At some point testers became a bottleneck again, so they invested in test automation. Testers became test automators and built their own test code in a separate code base from the development code. Test automators also picked tools that did not foster team responsibility. As a result, test automation got completely separated from product development. We found ways to help testers, test automators and developers collaborate by improving the process. But that was treating the symptom of the problem: developers were not taking responsibility for automated tests, and testers did not help developers design testable software.

We want test automators and developers to become the same people.

Testers 3.0

If you want to accelerate your business you'll need to become iterative, delivering value to production as soon as possible by doing Continuous Delivery properly. So we need multidisciplinary teams to shorten feedback loops coming from different points of view.

Testers 3.0 are therefore required to accelerate the business by working in the following areas:

Requirement inspection: Building the right thing

The tester 3.0 tries to understand the business problem behind the requirements by describing examples. It's important to reach a common understanding between the business and technical contexts, so the tester verifies the requirements as soon as possible and uses them as input for test automation, applying BDD as a technique that fosters a human-readable language. These testers work on automated acceptance tests as soon as possible.
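
As a hypothetical sketch, such an example could be captured in Cucumber's human-readable style like this:

Feature: Loyalty discount
  Scenario: Card holder gets 10% off
    Given a customer with a loyalty card
    When the customer buys a book for 20 euros
    Then the customer pays 18 euros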

Test Automation: Boosting the software delivery process

When common understanding is reached and the delivery team is ready to implement the requested feature, the tester needs programming skills to keep the acceptance tests in a clean-code state. The tester 3.0 uses appropriate Acceptance Test Driven Development tools (like Cucumber), which the whole team supports, but keeps an eye out for better, faster and easier automated testing frameworks to support the team.

At the Xebia Craftsmanship Seminar (a couple of months ago) someone asked me if testers should learn how to write code.
I told him that no one is good at everything, but the tester 3.0 has good testing skills and enough technical background to write automated acceptance tests. Continuous Delivery teams have a shared responsibility, and they automate all the boring steps like manual test scripts and performance and security tests. It's very important to know how to automate; otherwise you'll slow down the team. You'll be treated the same as anyone else in the delivery team.
Testers 3.0 get developers to think about clean code and ensure high-quality code. They look into (and keep up with) popular development frameworks and address their testability. Even the test code is continuously evaluated for quality attributes. It needs the same love and care as code that goes into production.

Living documentation: Treating tests as specifications

At some point you'll end up with a huge set of automated tests telling you everything is fine. The tester 3.0 treats these tests as specifications and tries to create living documentation, which is used for long-term requirements gathering. No one will complain about these tests while they are all green and passing. The problem starts when tests fail and no one can understand why. Testers 3.0 think about their colleagues when they write a specification or test. They clearly specify, in a human-readable language, what is being tested.
They are used to changing requirements and specifications, and they don't make a big deal out of it. They understand that stakeholders can change their minds once a product comes alive. So the tester makes sure that important decisions made during requirement inspection and development are stored and understood.

Relevant test results: Building quality into the process

Testers 3.0 focus on getting extremely fast feedback to determine the quality of software products every day. Every night. Every second.
Testers want to deploy new working software features into production more often, so they do whatever it takes to build a high-quality pipeline that decreases the quality feedback time during development.
Everyone in your company deserves to have confidence in the software delivery pipeline at any moment. Testers 3.0 think about how they communicate this feedback to the business, and they provide ways to automatically report test results on quality attributes. Testers 3.0 also ask the business to define quality: knowing everything was built right, how can they measure that they've built the right thing? What do they need to measure when the product is in production?

How to stay relevant as a Tester

So what happens when all of your teammates are completely focused on high quality software using automation?

Testing no longer requires you to manually click, copy and paste boring scripted test steps you didn't want to do in the first place. You were hired to be skeptical about everything and to make sure that all risks are addressed. It's still important to keep being a researcher for your team and to apply your curiosity accordingly.

Besides being curious, analytical and a great communicator, you need to learn how to code. Don't work harder; work smarter, by analyzing how you can automate all the boring checks so you'll have more time to discover other things with your curiosity.

Since testing drives software development, and should no longer be treated as a separate phase in the development process, it's important that teams put test automation at the center of all design decisions. We need test automation to boost software delivery by building quality sensors into every step of the process. Every day. Every night. Every second!

Do you want to discuss this topic with other Testers 3.0?  Come and join the Open Kitchen: Test Automation and get on board the Continuous Delivery Train!

 

10 High-Value Activities in the Enterprise

I was flipping back over the past year and reflecting on the high-value activities that I’ve seen across various Enterprise customers.  I think the high-value activities tend to be some variation of the following:

  1. Customer interaction (virtual, remote, connecting with experts).
  2. Product innovation and ideation.
  3. Work from anywhere on any device.
  4. Comprehensive and cross-boundary collaboration (employees, customers, and partners).
  5. Connecting with experts.
  6. Operational intelligence (predictive analytics, predictive maintenance).
  7. Cross-sell / up-sell and real-time marketing.
  8. Development and ALM in the Cloud.
  9. Protecting information and assets.
  10. Onboarding and enablement.

At first I was thinking of Porter's Value Chain (Inbound Logistics, Operations, Outbound Logistics, Marketing & Sales, Services), which does help identify where the action is.   Next, I was reviewing how, when we drive big changes with a customer, it tends to be around changing the customer experience, the employee experience, or the back-office and systems experience.

You can probably recognize how the mega-trends (Cloud, Mobile, Social, and Big Data) influence the activities above, as well as some popular trends like Consumerization of IT.

High-Value Activities in the Enterprise from the Microsoft “Transforming Our Company” Memo

I also found it helpful to review the original memo from July 11, 2013 titled Transforming Our Company.  Below are some of my favorite sections from the memo:

Via Transforming Our Company:

We will engage enterprise on all sides — investing in more high-value activities for enterprise users to do their jobs; empowering people to be productive independent of their enterprise; and building new and innovative solutions for IT professionals and developers. We will also invest in ways to provide value to businesses for their interactions with their customers, building on our strong Dynamics foundation.

Specifically, we will aim to do the following:

  • Facilitate adoption of our devices and end-user services in enterprise settings. This means embracing consumerization of IT with the vigor we pursued in the initial adoption of PCs by end users and business in the ’90s. Our family of devices must allow people to be more productive, and for them to easily use our devices for work.

  • Extend our family of devices and services for enterprise high-value activities. We have unique expertise and capacity in this space.

  • Information assurance. Going forward this will be an area of critical importance to enterprises. We are their trusted partners in this space, and we must continue to innovate for them against a changing security and compliance landscape.

  • IT management. With more IT delivered as services from the cloud, the function of IT itself will be reimagined. We are best positioned to build the tools and training for that new breed of IT professional.

  • Big data insight. Businesses have new and expanded needs and opportunities to generate, store and use their own data and the data of the Web to better serve customers, make better decisions and design better products. As our customers’ online interactions with their customers accelerate, they generate massive amounts of data, with the cloud now offering the processing power to make sense of it. We are well-positioned to reimagine data platforms for the cloud, and help unlock insight from the data.

  • Customer interaction. Organizations today value most those activities that help them fully understand their customers’ needs and help them interact and communicate with them in more responsive and personalized ways. We are well-positioned to deliver services that will enable our customers to interact as never before — to help them match their prospects to the right products and services, derive the insights so they can successfully engage with them, and even help them find and create brand evangelists.

  • Software development. Finally, developers will continue to write the apps and sites that power the world, and integrate to solve individual problems and challenges. We will support them with the simplest turnkey way to build apps, sites and cloud services, easy integration with our products, and innovation for projects of every size.”

A Story of High-Value Activities in Action

If you can’t imagine what high-value activities look like, or what business transformation would look like, then have a look at this video:

Nedbank:  Video Banking with Lync

Nedbank was a brick-and-mortar bank that wanted to go digital and not just catch up to the Cloud world, but leapfrog into the future.  According to the video description, “Nedbank initiated a program called the Integrated Channel Strategy, focusing on client centered banking experiences using Microsoft Lync. The client experience is integrated and aligned across all channels and seeks to bring about efficiencies for the bank. Video banking with Microsoft Lync gives Nedbank a competitive advantage.”

The most interesting thing about the video is not just what's possible, but that it's real and happening.

They set a new bar for the future of digital banking.

You Might Also Like

Continuous Value Delivery the Agile Way

How Can Enterprise Architects Drive Business Value the Agile Way?

How To Use Personas and Scenarios to Drive Adoption and Realize Value

Categories: Architecture, Programming

Episode 209: Josiah Carlson on Redis

Josiah Carlson discusses Redis, an in-memory single-threaded data structure server. A Redis mailing list contributor and author, Josiah talks with Robert about the differences between Redis and a key-value store, client-side versus server-side data structures, consistency models, embedding Lua scripts within the server, what you can do with Redis from an application standpoint, native locking […]
Categories: Programming

Standard Markdown is now Common Markdown

Coding Horror - Jeff Atwood - Fri, 09/05/2014 - 01:23

Let me open with an apology to John Gruber for my previous blog post.

We've been working on the Standard Markdown project for about two years now. We invited John Gruber, the original creator of Markdown, to join the project via email in November 2012, but never heard back. As we got closer to being ready for public feedback, we emailed John on August 19th with a link to the Standard Markdown spec, asking him for his feedback. Since John MacFarlane was the primary author of most of the work, we suggested that he be the one to reach out.

We then waited two weeks for a response.

There was no response, so we assumed that John Gruber was either OK with the project (and its name), or didn't care. So we proceeded.

There was lots of internal discussion about what to name our project. Strict Markdown? XMarkdown? Markdown Pro? Markdown Super Hyper Turbo Pro Alpha Diamond Edition?

As we were finalizing the name, we noticed on this podcast, at 1:15 …

… that John seemed OK with the name "GitHub Flavored Markdown". So I originally wrote the blog post and the homepage using that terminology – "Standard Flavored Markdown" – and even kept that as the title of the blog post to signify our intent. We were building Yet Another Flavor of Markdown, one designed to remove ambiguity by specifying a standard, while preserving as much as possible the spirit of Markdown and compatibility with existing documents.

Before we went live, I asked for feedback internally, and one of the bits of feedback I got was that it was inconsistent to say Standard Flavored Markdown on the homepage and blog when the spec says Standard Markdown throughout. So I changed them to match Standard Markdown, and that's what we launched with.

It was a bit of a surprise to get an email last night, addressed to both me and John MacFarlane, from John Gruber indicating that the name Standard Markdown was "infuriating".

I'm sorry the name is so infuriating. I assure you that we did not choose the name to make you, or anyone else, angry. We were simply trying to pick a name that correctly and accurately reflected our goal – to build an unambiguous flavor of Markdown. If the name we chose made inappropriate overtures about Standard Markdown being anything more than a highly specified flavor of Markdown, I apologize. Standard does have certain particular computer science meanings, as in IETF Standard, ECMA Standard. That was not our intent, it was more of an aspirational element of "what if, together, we could eventually..". What can I say? We're programmers. We name things literally. And naming is hard.

John Gruber was also very upset, and I think rightfully so, that the word Markdown was not capitalized throughout the spec. This was an oversight on our part – and also my fault because I did notice Markdown wasn't capitalized as I copied snippets of the spec to the homepage and blog post, and I definitely thought it was odd, too. You'll note that I took care to manually capitalize Markdown in the parts of the spec I copied to the blog post and home page – but I neglected to mention this to John MacFarlane as I should have. We corrected this immediately when it was brought to our attention.

John then made three requests:

  1. Rename the project.

  2. Shut down the standardmarkdown.com domain, and don't redirect it.

  3. Apologize.

All fair. Happy to do all of those things.

Starting with the name. In his email John graciously indicated that he would "probably" approve a name like "Strict Markdown" or "Pedantic Markdown". Given the very public earlier miscommunication about naming, that consideration is appreciated.

We replied with the following suggestions:

  • Compatible Markdown
  • Regular Markdown
  • Community Markdown
  • Common Markdown
  • Uniform Markdown
  • Vanilla Markdown

We haven't heard back after replying last night, and I'm not sure we ever will, so in the interest of moving ahead and avoiding conflict, we're immediately renaming the project to Common Markdown.

We hope that is an acceptable name; it was independently suggested to us several times in several different feedback areas. The intention is to avoid any unwanted overtones of ownership; we have only ever wanted to be Yet Another Flavor of Markdown.

  1. The project name change is already in progress.

  2. This is our public apology.

  3. I'll shut down the standardmarkdown.com domain as soon as I can, probably by tomorrow.

John, we deeply apologize for the miscommunication. It's our fault, and we want to fix it. But even though we made mistakes, I hope it is clear that everything we've done, we did solely out of a shared love of Markdown (and its simple, unencumbered old-school ASCII origins), and the desire to ensure the success of Markdown as a stable format for future generations.

Edit: after a long and thoughtful email from John Gruber – which is greatly appreciated – he indicated that no form of the word "Markdown" is acceptable to him in this case. We are now using the name CommonMark.

Categories: Programming

Welcome to the Swift Playground

Xebia Blog - Thu, 09/04/2014 - 19:16

Imagine... You're working on Swift code and you need to explain something to a co-worker. Easiest would be to just explain it and show the code, right? So you grab your trusty editor and type some markdown.

let it = "be awesome"

Great!

So now you have a file filled with content:

[screenshot: the raw markdown file with embedded Swift code]

But it would be better if it looked like:

[screenshot: the same content rendered as an Xcode playground]

Well you can, and it's super simple. All you need is some Markdown and:

npm install -g swift-playground-builder

After that it's a matter of running:

playground my-super-nice-markdown-with-swift-code.md
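
where the markdown file might look something like this (a made-up sample; the builder turns fenced swift code blocks into playground code sections):

# Awesome, explained

Here's all the code you need:

```swift
let it = "be awesome"
println(it)
```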

Enjoy!

More info: https://github.com/jas/swift-playground-builder

10 Big Ideas from The 7 Habits of Highly Effective People

It’s long over-do, but I finally wrote up my 10 Big Ideas from the 7 Habits of Highly Effective People.

What can I say … the book is a classic.

I remember when my Dad first recommended that I read The 7 Habits of Highly Effective People long ago.   In his experience, while Tony Robbins was more focused on the Personality Ethic, Stephen Covey at the time was more focused on the Character Ethic.  At the end of the day, they are both complementary, and one without the other is a failed strategy.

While writing 10 Big Ideas from the 7 Habits of Highly Effective People, I was a little torn on what to keep in and what to leave out.   The book is jam packed with insights, powerful patterns, and proven practices for personal change.   I remembered reading about the Law of the Harvest, where you reap what you sow.  I remembered reading about how to think Win/Win, and how that helps you change the game from a scarcity mentality to a mindset of abundance.   I remembered reading about how we can move up the stack in terms of time management if we focus less on To Dos and more on relationships and results.   I remembered reading about how if we want to be heard, we need to first seek to understand.

The 7 Habits of Highly Effective People is probably one of the most profound books on the planet when it comes to personal change and empowerment.

It’s full of mental models and big ideas.  

What I really like about Covey’s approach is that he bridged work and life.  Rather than splinter our lives, Covey found a way to integrate our lives more holistically, to combine our personal and professional lives through principles that empower us, and help us lead a more balanced life.

Here is a summary list of 10 Big Ideas from the 7 Habits of Highly Effective People:

  1. The Seven Habits of Effectiveness.
  2. The Four Quadrants of Time Management.
  3. Character Ethic vs. Personality Ethic.
  4. Increase the Gap Between Stimulus and Response.
  5. All Things are Created Twice.
  6. The Five Dimensions of Win/Win.
  7. Expand Your Circle of Influence.
  8. Principle-Centered Living.
  9. Four Generations of Time Management.
  10. Make Meaningful Deposits in the Emotional Bank Account.

In my post, I’ve summarized each one and provided one of my favorite highlights from the book that brings each idea to life.

Enjoy.

Categories: Architecture, Programming

Optimizing for Bandwidth on Apache and Nginx

Google Code Blog - Thu, 09/04/2014 - 16:27
This post originally appeared on Webmaster Central
by Jeff Kaufman, Make the Web Fast


Webmaster level: advanced

Everyone wants to use less bandwidth: hosts want lower bills, mobile users want to stay under their limits, and no one wants to wait for unnecessary bytes. The web is full of opportunities to save bandwidth: pages served without gzip, stylesheets and JavaScript served unminified, and unoptimized images, just to name a few.

So why isn't the web already optimized for bandwidth? If these savings are good for everyone, then why haven't they been fixed yet? Mostly it's just been too much hassle. Web designers are encouraged to "save for web" when exporting their artwork, but they don't always remember. JavaScript programmers don't like working with minified code because it makes debugging harder. You can set up a custom pipeline that makes sure each of these optimizations is applied to your site every time as part of your development or deployment process, but that's a lot of work.

An easy solution for web users is to use an optimizing proxy, like Chrome's. When users opt into this service their HTTP traffic goes via Google's proxy, which optimizes their page loads and cuts bandwidth usage by 50%.  While this is great for these users, it's limited to people using Chrome who turn the feature on and it can't optimize HTTPS traffic.

With Optimize for Bandwidth, the PageSpeed team is bringing this same technology to webmasters so that everyone can benefit: users of other browsers, secure sites, desktop users, and site owners who want to bring down their outbound traffic bills. Just install the PageSpeed module on your Apache or Nginx server [1], turn on Optimize for Bandwidth in your configuration, and PageSpeed will do the rest.
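
Concretely, that amounts to something like the following directives (a minimal sketch; check the PageSpeed documentation for your exact setup):

# Apache (mod_pagespeed)
ModPagespeed on
ModPagespeedRewriteLevel OptimizeForBandwidth

# Nginx (ngx_pagespeed)
pagespeed on;
pagespeed RewriteLevel OptimizeForBandwidth;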

If you later decide you're interested in PageSpeed's more advanced optimizations, from cache extension and inlining to the more aggressive image lazyloading and defer JavaScript, it's just a matter of enabling them in your PageSpeed configuration.

Learn more about installing PageSpeed or enabling Optimize for Bandwidth.
[1] If you're using a different web server, consider running PageSpeed on an Apache or Nginx proxy.  And it's all open source, with porting efforts underway for IIS, ATS, and others.

Posted by Mano Marks, Google Developer Platform Team
Categories: Programming

TypeScript Mario

Phil Trelford's Array - Thu, 09/04/2014 - 08:23

Earlier this year I had a play with Microsoft’s new compile-to-JavaScript language, TypeScript. Every man and his dog has a compile-to-JavaScript solution these days. TypeScript’s angle appears to be to provide optional static typing over JavaScript and some ES6 functionality, while compiling out to ES3 by default. It provides a class-based syntax similar to C#’s and seems to be aimed at developers attempting to scale out JavaScript-based solutions.

Last year I ported Elm’s Mario sample to F#, which ended up looking similarly concise. I tried both FunScript and WebSharper for compiling F# to JavaScript, and both worked well:

[Image: the Mario sample running in the browser]

So I thought I’d try the sample out in TypeScript as a way to get a feel for the language.

TypeScript Interfaces

In F# I defined a type for Mario using a record:

// Definition 
type mario = { x:float; y:float; vx:float; vy:float; dir:string }
// Instantiation 
let mario = { x=0.; y=0.; vx=0.; vy=0.; dir="right" }

In TypeScript I used an interface which looks pretty similar syntactically:

// Definition
interface Character {
    x: number; y: number; vx: number; vy: number; dir: string
};
// Instantiation
var mario = { x:0, y:0, vx:0, vy:0, dir:"right" };

TypeScript transcompiles this to a JavaScript associative array using object notation:

var mario = { x: 0, y: 0, vx: 0, vy: 0, dir: "right" };
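
One detail worth noting: because TypeScript's typing is structural, the annotation on the instantiation is optional; any object literal whose shape matches the interface is accepted. A quick sketch to illustrate:

// OK: the literal's shape matches the Character interface
var typedMario: Character = { x: 0, y: 0, vx: 0, vy: 0, dir: "right" };
// Would not compile: missing vx, vy, and dir
// var partial: Character = { x: 0, y: 0 };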

Composition

For me the cute part of the Elm and F# versions was using the record “with” syntax and function composition, i.e.

let jump (_,y) m = if y > 0 && m.y = 0. then  { m with vy = 5. } else m
let gravity m = if m.y > 0. then { m with vy = m.vy - 0.1 } else m
let physics m = { m with x = m.x + m.vx; y = max 0. (m.y + m.vy) }
let walk (x,_) m = 
    { m with vx = float x 
             dir = if x < 0 then "left" elif x > 0 then "right" else m.dir }

let step dir mario = mario |> physics |> walk dir |> gravity |> jump dir

I couldn’t find either of those features available out-of-the-box in TypeScript, so I resorted to imperative code with mutation and procedures:

function walk(velocity: CursorKeys.Velocity, character: Character) {
    character.vx = velocity.x;
    if (velocity.x < 0) character.dir = "left";
    else if (velocity.x > 0) character.dir = "right";
}

function jump(velocity:CursorKeys.Velocity, character:Character) {
    if (velocity.y > 0 && character.y == 0) character.vy = 5;    
}

function gravity(character: Character) {
    if (character.y > 0) character.vy -= 0.1;
}

function physics(character: Character) {
    character.x += character.vx;
    character.y = Math.max(0, character.y + character.vy);
}

function verb(character: Character): string {
    if (character.y > 0) return "jump";
    if (character.vx != 0) return "walk";
    return "stand";
}

function step(velocity: CursorKeys.Velocity, character: Character) {
    walk(velocity, character);
    jump(velocity, character);
    gravity(character);
    physics(character);
}

The only difference between the TypeScript and the resultant JavaScript is the type annotations.
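
For what it’s worth, the record “with” update can be approximated by hand. Below is a minimal sketch, assuming a hypothetical extend helper (it’s not how the sample itself is written, and more recent TypeScript could use object spread instead):

// Hypothetical helper: copy a character, overriding selected fields
function extend(base: Character, changes: {}): Character {
    var copy: any = {};
    var key: string;
    for (key in base) copy[key] = (<any>base)[key];
    for (key in changes) copy[key] = (<any>changes)[key];
    return copy;
}

// gravity and physics, rewritten in the immutable style of the F# version
function gravityI(character: Character): Character {
    return character.y > 0
        ? extend(character, { vy: character.vy - 0.1 })
        : character;
}

function physicsI(character: Character): Character {
    return extend(character, {
        x: character.x + character.vx,
        y: Math.max(0, character.y + character.vy)
    });
}

// Composition is then just nested application, right to left:
var next = gravityI(physicsI(mario));

It works, but without language support it’s noticeably noisier than the F# pipeline.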

HTML Canvas

TypeScript provides typed access to JavaScript libraries via type definition files. The majority appear to be held on a personal github repository.

Note: both FunScript and WebSharper can make use of these type definition files to provide types within F# too.

Among other things this lets you get typed access over things like the HTML canvas element albeit with some funky casts:

    var canvas = <HTMLCanvasElement> document.getElementById("canvas");
    canvas.width = w;
    canvas.height = h;

This has some value, but you do have to rely on the definition files being kept up-to-date.
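
If no definition file exists for a library, you can hand-write a minimal ambient declaration yourself. Purely as a hypothetical sketch (this is not the actual file from the sample; the semantics are inferred from the usage above), a declaration for the CursorKeys module might look like:

// cursorkeys.d.ts -- hypothetical
declare module CursorKeys {
    export interface Velocity {
        x: number; // negative for left, positive for right, 0 when idle
        y: number; // positive while the jump key is pressed
    }
}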

Conclusions

On the functional reactive side, TypeScript didn’t appear to offer much added value in comparison to Elm or F#.

To be honest, for a very small app, I couldn’t find any advantages to using TypeScript over vanilla JavaScript. I guess I’d need to build something a lot bigger to find any.

Sample source code: https://bitbucket.org/ptrelford/mario.typescript

Categories: Programming

Minimum Credible Release (MCR) and Minimum Viable Product (MVP)

A Minimum Credible Release, or MCR, is simply the minimal set of user stories that need to be implemented in order for the product increment to be worth releasing.

I don’t know exactly when Minimum Credible Release became an established practice, but I do know that we were using Minimum Credible Release as a concept back in the early 2000’s on the Microsoft patterns & practices team.  It’s how we defined the minimum scope for our project releases.

The value of the Minimum Credible Release is that it provides a baseline for the team to focus on so they can ship.   It’s a metaphorical “finish line.”   This is especially important when the team gets into the thick of things, and you start to face scope creep.

The Minimum Credible Release is also a powerful tool when it comes to communicating to stakeholders what to expect.   If you want people to invest, they need to know what to expect in terms of the minimum bar that they will get for their investment.

The Minimum Credible Release is also the hallmark of great user experience in action.  It takes great judgment to define a compelling minimal release.

A sample is worth a thousand words, so here is a visual way to think about this.  

Let’s say you had a pile of prioritized user stories, like this:

[Image: a prioritized list of user stories]

You would establish a cut line for your minimum release:

[Image: the same list with a cut line drawn for the minimum release]

Note that this is an over-simplified example to keep the focus on the idea of a list of user stories with a cut line.

And the art part is in where and how you draw the line for the release.

While you would think this is such a simple, obvious, and high-value practice, not everybody does it.

All too often there are projects that run for a period of time without a defined Minimum Credible Release.   They often turn into never-ending projects or somebody’s bitter disappointment.   If you get agreement with users about what the Minimum Credible Release will be, you have a much better chance of making your users happy.  This goes for stakeholders, too.

There is another concept that, while related, is not quite the same thing.

It’s Minimum Viable Product, or MVP.

Here is what Eric Ries, author of The Lean Startup, says about the Minimum Viable Product:

“The idea of minimum viable product is useful because you can basically say: our vision is to build a product that solves this core problem for customers and we think that for the people who are early adopters for this kind of solution, they will be the most forgiving. And they will fill in their minds the features that aren’t quite there if we give them the core, tent-pole features that point the direction of where we’re trying to go.

So, the minimum viable product is that product which has just those features (and no more) that allows you to ship a product that resonates with early adopters; some of whom will pay you money or give you feedback.”

And, here is what Josh Kaufman, author of The Personal MBA, has to say about the Minimum Viable Product:

“The Lean Startup provides many strategies for validating the worth of a business idea. One core strategy is to develop a minimum viable product – the smallest offer you can create that someone will actually buy, then offer it to real customers. If they buy, you’re in good shape. If your original idea doesn’t work, you simply ‘pivot’ and try another idea.”

So if you want happier users, better products, reduced risk, and more reliable releases, look to Minimum Credible Releases and Minimum Viable Products.

You Might Also Like

Continuous Value Delivery the Agile Way

How Can Enterprise Architects Drive Business Value the Agile Way?

How To Use Personas and Scenarios to Drive Adoption and Realize Value

Categories: Architecture, Programming

Standard Flavored Markdown

Coding Horror - Jeff Atwood - Wed, 09/03/2014 - 21:06

In 2009 I lamented the state of Markdown:

Right now we have the worst of both worlds. Lack of leadership from the top, and a bunch of fragmented, poorly coordinated community efforts to advance Markdown, none of which are officially canon. This isn't merely inconvenient for anyone trying to find accurate information about Markdown; it's actually harming the project's future.

In late 2012, David Greenspan from Meteor approached me and proposed we move forward, and a project crystallized:

I propose that Stack Exchange, GitHub, Meteor, Reddit, and any other company with lots of traffic and a strategic investment in Markdown, all work together to come up with an official Markdown specification, and standard test suites to validate Markdown implementations. We've all been working at cross purposes for too long, accidentally fragmenting Markdown while popularizing it.

We formed a small private working group with key representatives from GitHub, from Reddit, from Stack Exchange, from the open source community. We spent months hashing out the details and agreeing on the necessary changes to turn Markdown into a language you can parse without feeling like you just walked through a sewer – while preserving the simple, clear, ASCII email inspired spirit of Markdown.

We really struggled with this at Discourse, which is also based on Markdown, but an even more complex dialect than the one we built at Stack Overflow. In Discourse, you can mix three forms of markup interchangeably:

  • Markdown
  • HTML (safe subset)
  • BBCode (subset)

Discourse is primarily a JavaScript app, so naturally we needed a nice, compliant implementation of Markdown in JavaScript. Surely such a thing exists, yes? Nope. Even in 2012, we found zero JavaScript implementations of Markdown that could pass the only Markdown test suite I know of, MDTest. It isn't authoritative; it's a community-created initiative that embodies its own decisions about rendering ambiguities in Markdown, but it's all we've got. We contributed many upstream fixes to markdown.js to make it pass MDTest – but it still only passes in our locally extended version.

As an open source project ourselves, we're perfectly happy contributing upstream code to improve it for everyone. But it's an indictment of the state of the Markdown ecosystem that any remotely popular implementation wasn't already testing itself against a formal spec and test suite. But who can blame them, because it didn't exist!

Well, now it does.

It took a while, but I'm pleased to announce that Standard Markdown is now finally ready for public review.

standardmarkdown.com

It's a spec, including embedded examples, and implementations in portable C and JavaScript. We strived mightily to stay true to the spirit of Markdown in writing it. The primary author, John MacFarlane, explains in the introduction to the spec:

Because Gruber’s syntax description leaves many aspects of the syntax undetermined, writing a precise spec requires making a large number of decisions, many of them somewhat arbitrary. In making them, I have appealed to existing conventions and considerations of simplicity, readability, expressive power, and consistency. I have tried to ensure that “normal” documents in the many incompatible existing implementations of markdown will render, as far as possible, as their authors intended. And I have tried to make the rules for different elements work together harmoniously. In places where different decisions could have been made (for example, the rules governing list indentation), I have explained the rationale for my choices. In a few cases, I have departed slightly from the canonical syntax description, in ways that I think further the goals of markdown as stated in that description.

Part of my contribution to the project is to host the discussion / mailing list for Standard Markdown in a Discourse instance.

talk.standardmarkdown.com

Fortunately, Discourse itself just reached version 1.0. If the only thing Standard Markdown does is help save a few users from the continuing horror that is mailing list web UI, we all win.

What I'm most excited about is that we got a massive contribution from the one person who, in my mind, was the most perfect person in the world to work on this project: John MacFarlane. He took our feedback and wrote the entire Standard Markdown spec and both implementations.

A lot of people know of John through his Pandoc project, which is amazing in its own right, but I found out about him because he built Babelmark. I learned to refer to Babelmark extensively while working on Stack Overflow and MarkdownSharp, a C# implementation of Markdown.

Here's how crazy Markdown is: to decide what the "correct" behavior is, you provide sample Markdown input to 20+ different Markdown parsers … and then pray that some consensus emerges in all their output. That's what Babelmark does.

Consider this simple Markdown example:

# Hello there

This is a paragraph.

- one
- two
- three
- four

1. pirate
2. ninja
3. zombie

Just for that, I count fifteen different rendered outputs from 22 different Markdown parsers.
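
To give a flavor of the divergence, here's a representative (not parser-specific) pair of outputs for the bulleted list alone: some parsers render it "tight", others wrap each item in a paragraph:

<!-- One common rendering: a "tight" list -->
<ul>
<li>one</li>
<li>two</li>
</ul>

<!-- Another common rendering: a "loose" list -->
<ul>
<li><p>one</p></li>
<li><p>two</p></li>
</ul>

Multiply small disagreements like that across headings, nesting, and list interactions, and the outputs diverge quickly.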

In Markdown, we literally built a Tower of Babel.

Have I mentioned that it's a good idea for a language to have a formal specification and test suites? Maybe now you can see why that is.

Oh, and in his spare time, John is also the chair of the department of philosophy at the University of California, Berkeley. No big deal. While I don't mean to minimize the contributions of anyone to the Standard Markdown project, we all owe a special thanks to John.

Markdown is indeed everywhere. And that's a good thing. But it needs to be sane, parseable, and standard. That's the goal of Standard Markdown — but we need your help to get there. If you use Markdown on a website, ask what it would take for that site to become compatible with Standard Markdown; when you see the word "Markdown" you have the right to expect consistent rendering across all the websites you visit. If you implement Markdown, take a look at the spec, try to make your parser compatible with Standard Markdown, and discuss improvements or refinements to the spec.

Update: The project was renamed CommonMark. See my subsequent blog post.

Categories: Programming

Continuous Value Delivery the Agile Way

Continuous Value Delivery helps businesses realize the benefits from their technology investments in a continuous fashion.

Businesses these days expect at least quarterly results from their technology investments.  The beauty is, with Continuous Value Delivery they can get it, too.  

Continuous Value Delivery is a practice that makes delivering user value and business value a rapid, reliable, and repeatable process.  It’s a natural evolution from Continuous Integration and Continuous Delivery.  Continuous Value Delivery simply adds a focus on Value Realization, which addresses planning for value, driving adoption, and measuring results.

But let’s take a look at the evolution of software practices that have made it possible to provide Continuous Value Delivery in our Cloud-first, mobile-first world.

Long before there was Continuous Value Delivery, there was Continuous Integration …

Continuous Integration

Continuous Integration is a software development practice where team members integrate their work frequently.  The goal of Continuous Integration is to reduce and prevent integration problems.  In Continuous Integration, each integration is verified against tests.

Then, along came, Continuous Delivery …

Continuous Delivery

Continuous Delivery extended the idea of Continuous Integration to automate and improve the process of software delivery.  With Continuous Delivery,  software checked in on the mainline is always ready for release.  When you combine automated testing, Continuous Integration, and Continuous Delivery, it's possible to push out updates, fixes, and new releases to customers with lower risk and minimal manual overhead.

Continuous Delivery changes the model from a big bang approach, where software is shipped at the end of a long project cycle, to where software can be iteratively and incrementally shipped along the way.

This set the stage for Continuous Value Delivery …

Continuous Value Delivery

Continuous Value Delivery puts a focus on Value Realization as a first-class citizen.  

To be able to ship value on a continuous basis, you need a simple mechanism for carving value into units.  Scenarios and stories are an effective way to chunk value into useful increments.  Scenarios and stories also help with driving adoption.

For Continuous Value Delivery, you also need a way to "pull" value, as well as "push" value.   Kanbans provide an easy way to visualize the flow of value; they support a “pull” mechanism and reinforce “the voice of the customer.”    User stories provide an easy way to create a backlog or catalog of potential value that you can “push” based on priorities and user demand.

Businesses that are making the most of their technology investments are linking scenarios, backlogs, and Kanbans to their value chains and their value streams.

Value Planning Enables Continuous Value Delivery

If you want to drive continuous value to the business, then you need to plan for it.  As part of value planning, you need to identify key stakeholders in the business.    With those stakeholders, you then need to identify the business benefits they care about, along with the KPIs and value measures that matter to them.

At this stage, you also want to identify who in the business will be responsible for collecting the data and reporting the value.

Adoption is the Key to Value Realization

Adoption is the key component of Continuous Value Delivery.  After all, if you release new features, but nobody uses them, then the users won't get the new benefits.   In order to realize the value, users need to use the new features and actually change their behaviors.

So while deployment was the old bottleneck, adoption is the new bottleneck.

Users and the business can only absorb so much value at once.  In order to flow more value, you need to reduce friction around adoption, and drive consumption of technology.  You can do this through effective adoption planning, user readiness, communication plans, and measurement.

Value Measurement and Reporting

To close the loop, you want the business to acknowledge the delivery of value.   That’s where measurement and reporting come in.

From a measurement standpoint, you can use adoption and usage metrics to better understand what's being used and how much.  But that’s only part of the story.

To connect the dots back to the business impact, you need to measure changes in behavior, such as what people have stopped doing, started doing, and continue doing.   This will be an indicator of benefits being realized.

Ultimately, to show the most value to the business, you need to move the benefits up the stack.  At the lowest level, you can observe the benefits, by simply observing the changes in behavior.  If you can observe the benefits, then you should be able to measure the benefits.  And if you can measure the benefits, then you should be able to quantify the benefits.   And if you can quantify the benefits, then you should be able to associate some sort of financial amount that shows how things are being done better, faster, or cheaper.

The value reporting exercise should help inform and adjust any value planning efforts.  For example, if adoption is proving to be the bottleneck, now you can drill into where exactly the bottleneck is occurring and you can refocus efforts more effectively.

Plan, Do, Check, Act

In essence, your value realization loop is really a cycle of plan, do, check, act, where value is made explicit, and it is regarded as a first-class citizen throughout the process of Continuous Value Delivery.

That’s a way better approach than building solutions and hoping that value will come or that you’ll stumble your way into business impact.

As history shows, too many projects try to luck their way into value, and it’s far better to design for it.

Value Sprints

A Sprint is simply a unit of development in Scrum.   The idea is to provide a working increment of the solution at the end of the Sprint, that is potentially shippable.  

It’s a “timeboxed” effort.   This helps reduce risk as well as support a loop of continuous learning.  For example, a team might work in 1-week, 2-week, or 1-month Sprints.   At the end of the Sprint, you can review the progress and make any necessary adjustments to improve for the next Sprint.

In the business arena, we can think in terms of Value Sprints, where we don’t want to stop at just shipping a chunk of value.

Just shipping or deploying software and solutions does not lead to adoption.

And that’s how software and IT projects fall down.

With a Value Sprint, we want to add a few specific things to the mix to ensure appropriate Value Realization and true benefits delivery.  Specifically, we want to integrate Value Planning right up front, as part of each Sprint.   Most importantly, we want to plan and drive adoption as part of the Value Sprint.

If we can accelerate adoption, then we can accelerate time to value.

And, of course, we want to report on the value as part of the Value Sprint.

In practice, our field tells us that Value Sprints of 6-8 weeks tend to work well with the business.    Obviously, the right answer depends on your context, but it helps to know what others have been doing.   The length of the loop depends on the business cadence, as well as how well adoption can be driven in an organization, which varies drastically based on ability to execute and maturity levels.  And, for a lot of businesses, it’s important to show results within a quarterly cycle.

But what’s really important is that you don’t turn value into a long-winded run or a long shot down the line, and that you don’t simply hope that value happens.

Through Value Sprints and Continuous Value Delivery, you can create a sustainable approach where the business realizes the value from its technology investments in a more reliable way, for real business results.

And that’s how you win in the game of software today.

You Might Also Like

Blessing Sibanyoni on Value Realization

How Can Enterprise Architects Drive Business Value the Agile Way?

How To Use Personas and Scenarios to Drive Adoption and Realize Value

Categories: Architecture, Programming

The Beautiful Design Summer 2014 Collection on Google Play

Android Developers Blog - Tue, 09/02/2014 - 16:10
Posted by Marco Paglia, Android Design Team

It’s that time again! Last summer, we published the first Beautiful Design collection on Google Play, and updated it in the winter with a fresh set of beautifully crafted apps.

Since then, developers have been hard at work updating their existing apps with new design ideas, and many new apps targeted to phones and tablets have launched on Google Play sporting exquisite detail in their UIs. Some apps are even starting to incorporate elements from material design, which is great to see. We’re on the lookout for even more material design concepts applied across the Google Play ecosystem!

Today, we're refreshing the Beautiful Design collection with our latest favorite specimens of delightful design from Google Play. As a reminder, the goal of this collection is to highlight beautiful apps with masterfully crafted design details such as beautiful presentation of photos, crisp and meaningful layout and typography, and delightful yet intuitive gestures and transitions.

The newly updated Beautiful Design Summer 2014 collection includes:

Flight Track 5, whose gorgeously detailed flight info, full of maps and interactive charts, stylishly keeps you in the know.

Oyster, a book-reading app whose clean, focused reading experience and delightful discovery makes it a joy to take your library with you, wherever you go.

Gogobot, an app whose bright colors and big images make exploring your next city delightful and fun.

Lumosity, Vivino, FIFA, Duolingo, SeriesGuide, Spotify, Runtastic, Yahoo News Digest… each with delightful design details.

Airbnb, a veteran of the collection from this past winter, remains as its team continues to finesse the app.

If you’re an Android designer or developer, make sure to play with some of these apps to get a sense for the types of design details that can separate good apps from great ones. And remember to review the material design spec for ideas on how to design your next beautiful Android app!


Categories: Programming