
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Chrome - Firefox WebRTC Interop Test - Pt 2

Google Testing Blog - Tue, 09/09/2014 - 21:09
by Patrik Höglund

This is the second in a series of articles about Chrome’s WebRTC Interop Test. See the first.

In the previous blog post we managed to write an automated test which got a WebRTC call between Firefox and Chrome to run. But how do we verify that the call actually worked?

Verifying the Call
Now we can launch the two browsers, but how do we figure out whether the call actually worked? If you try opening two apprtc.appspot.com tabs in the same room, you will notice the video feeds flip over using a CSS transform: your local video is relegated to a small frame and a new big video feed with the remote video shows up. For the first version of the test, I just looked at the page in the Chrome debugger and searched for some reliable signal. As it turns out, the remoteVideo.style.opacity property goes from 0 to 1 when the call comes up and from 1 to 0 when it goes down. Since we can execute arbitrary JavaScript in the Chrome tab from the test, we can implement the check like this:

bool WaitForCallToComeUp(content::WebContents* tab_contents) {
  // Apprtc will set remoteVideo.style.opacity to 1 when the call comes up.
  std::string javascript =
      "window.domAutomationController.send(remoteVideo.style.opacity)";
  return test::PollingWaitUntil(javascript, "1", tab_contents);
}
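For readers who want the shape of the polling primitive without digging through the Chromium source, here is a minimal Python sketch of the same pattern; the eval_js helper and the timeout values are assumptions for illustration, not the actual Chromium test API.

import time

def polling_wait_until(eval_js, javascript, expected, timeout_s=20, poll_interval_s=0.25):
    # Repeatedly evaluate |javascript| in the tab (via the assumed eval_js
    # helper) until it returns |expected| or the timeout expires.
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if eval_js(javascript) == expected:
            return True
        time.sleep(poll_interval_s)
    return False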

Verifying Video is Playing
So getting a call up is good, but what if there is a bug where Firefox and Chrome cannot send correct video streams to each other? To check that, we needed to step up our game a bit. We decided to use our existing video detector, which looks at a video element and determines if the pixels are changing. This is a very basic check, but it’s better than nothing. To do this, we simply evaluate the .js file’s JavaScript in the context of the Chrome tab, making the functions in the file available to us. The implementation then becomes:

bool DetectRemoteVideoPlaying(content::WebContents* tab_contents) {
  if (!EvalInJavascriptFile(tab_contents, GetSourceDir().Append(
          FILE_PATH_LITERAL("chrome/test/data/webrtc/test_functions.js"))))
    return false;
  if (!EvalInJavascriptFile(tab_contents, GetSourceDir().Append(
          FILE_PATH_LITERAL("chrome/test/data/webrtc/video_detector.js"))))
    return false;

  // The remote video tag is called remoteVideo in the AppRTC code.
  StartDetectingVideo(tab_contents, "remoteVideo");
  WaitForVideoToPlay(tab_contents);
  return true;
}

where StartDetectingVideo and WaitForVideoToPlay call the corresponding JavaScript methods in video_detector.js. If the video feed is frozen and unchanging, the test will time out and fail.

What to Send in the Call
Now we can get a call up between the browsers and detect if video is playing. But what video should we send? For Chrome, we have a convenient --use-fake-device-for-media-stream flag that will make Chrome pretend there’s a webcam and present a generated video feed (a spinning green ball with a timestamp). This turned out to be useful since Firefox and Chrome cannot acquire the same camera at the same time; if we didn’t use the fake device we would have to plug two webcams into the bots executing the tests!

Bots running in Chrome’s regular test infrastructure do not have either software or hardware webcams plugged into them, so this test must run on bots with webcams for Firefox to be able to acquire a camera. Fortunately, we have that in the WebRTC waterfalls in order to test that we can actually acquire hardware webcams on all platforms. We also added a check to just succeed the test when there’s no real webcam on the system since we don’t want it to fail when a dev runs it on a machine without a webcam:

if (!HasWebcamOnSystem())
  return;
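HasWebcamOnSystem is part of Chromium's test support code; purely as an illustration of the idea, here is a hedged Python sketch that only knows how to check Linux, where V4L2 capture devices show up as /dev/video*. The function name and the behavior on other platforms are assumptions, not the real implementation.

import glob
import sys

def has_webcam_on_system():
    # Illustrative only: on Linux, V4L2 capture devices appear as /dev/video*.
    # Other platforms would need their own platform-specific checks.
    if sys.platform.startswith('linux'):
        return len(glob.glob('/dev/video*')) > 0
    # When we do not know how to check, assume there is no webcam.
    return False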

It would of course be better if Firefox had a similar fake device, but to my knowledge it doesn’t.

Downloading all Code and Components
Now we have all we need to run the test and have it verify something useful. We just have the hard part left: how do we actually download all the resources we need to run this test? Recall that this is actually a three-way integration test between Chrome, Firefox and AppRTC, which requires the following:

  • The AppEngine SDK in order to bring up the local AppRTC instance, 
  • The AppRTC code itself, 
  • Chrome (already present in the checkout), and 
  • Firefox nightly.

While developing the test, I initially just downloaded and installed these by hand and hard-coded the paths. This is a very bad idea in the long run. Recall that the Chromium infrastructure comprises thousands and thousands of machines, and while this test will only run on perhaps five at a time due to its webcam requirements, we don’t want manual maintenance work whenever we replace a machine. And for that matter, we definitely don’t want to download a new Firefox by hand every night and put it in the right location on the bots! So how do we automate this?

Downloading the AppEngine SDK
First, let’s start with the easy part. We don’t really care if the AppEngine SDK is up-to-date, so a relatively stale version is fine. We could have the test download it from the authoritative source, but that’s a bad idea for a couple of reasons. First, it updates outside our control. Second, there could be anti-robot measures on the page. Third, the download will likely be unreliable and fail the test occasionally.

The way we solved this was to upload a copy of the SDK to a Google storage bucket under our control and download it using the depot_tools script download_from_google_storage.py. This is a lot more reliable than an external website and will not download the SDK if we already have the right version on the bot.
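download_from_google_storage.py is part of depot_tools; as a rough sketch of the idea it implements (skip the download when the local file already matches a pinned hash), here is an illustrative Python version. The URL, destination and hash are placeholders, and this is not the real tool's interface.

import hashlib
import os
import urllib.request

def download_if_needed(url, dest, expected_sha1):
    # If the file is already present and matches the pinned SHA-1, do nothing.
    if os.path.exists(dest):
        with open(dest, 'rb') as f:
            if hashlib.sha1(f.read()).hexdigest() == expected_sha1:
                return
    # Otherwise fetch a fresh copy from the storage bucket.
    urllib.request.urlretrieve(url, dest)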

Downloading the AppRTC Code
This code is on GitHub. Experience has shown that git clone commands run against GitHub will fail every now and then, and fail the test. We could write a retry mechanism, but we have found it’s better to simply mirror the git repository in Chromium’s internal mirrors, which are closer to our bots and therefore more reliable from our perspective. The pull is done by a Chromium DEPS file (Chromium’s dependency provisioning framework).
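A DEPS file is essentially a Python file that maps checkout paths to pinned git revisions; a hypothetical fragment for a mirrored AppRTC checkout might look like the following. The path, mirror URL and revision are placeholders, not the entries actually used by the test.

# Hypothetical DEPS fragment; the path, URL and revision are illustrative only.
deps = {
    'src/chrome/test/data/webrtc/resources/apprtc':
        'https://chromium.googlesource.com/external/apprtc.git@<pinned revision>',
}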

Downloading Firefox
It turns out that Firefox supplies handy libraries for this task. We’re using mozdownload in this script in order to download the Firefox nightly build. Unfortunately this fails every now and then, so we would like to add some retry mechanism, or perhaps mirror the Firefox nightly build in a location we control.
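We have not implemented that retry yet, but a minimal sketch of what such a wrapper could look like is shown below; download_firefox_nightly stands in for whatever mozdownload call the script makes and is an assumption, not part of mozdownload's API.

import time

def download_with_retries(download_fn, attempts=3, initial_backoff_s=30):
    # Call the flaky download, retrying with a growing backoff before
    # giving up and letting the last exception propagate.
    for attempt in range(1, attempts + 1):
        try:
            return download_fn()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(initial_backoff_s * attempt)

# Usage (download_firefox_nightly is a placeholder for the real call):
# download_with_retries(download_firefox_nightly)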

Putting it Together
With that, we have everything we need to deploy the test. You can see the final code here.

The provisioning code above was put into a separate “.gclient solution” so that regular Chrome devs and bots are not burdened with downloading hundreds of megs of SDKs and code they will not use. When this test runs, you will first see a Chrome browser pop up, which will ensure the local AppRTC instance is up. Then a Firefox browser will pop up. They will acquire the fake device and the real camera, respectively, and after a short delay the AppRTC call will come up, proving that video interop is working.

This is a complicated and expensive test, but we believe it is worth it to keep the main interop case under automation this way, especially as the spec evolves and the browsers are in varying states of implementation.

Future Work

  • Also run on Windows/Mac. 
  • Also test Opera. 
  • Interop between Chrome/Firefox mobile and desktop browsers. 
  • Also ensure audio is playing. 
  • Measure bandwidth stats, video quality, etc.


Categories: Testing & QA

Techno-BDDs: What Daft Punk Can Teach Us About Requirements

Software Requirements Blog - Seilevel.com - Tue, 09/09/2014 - 17:00
I listen to a lot of Daft Punk lately—especially while I’m running.   On one of my recent runs, my mind’s reflections upon the day’s work merged with my mental impressions of the music blaring from my earbuds, which happened to be the Daft Punk song, “Technologic.”  For those of you unfamiliar with the song, […]
Categories: Requirements

The Main Benefit of Story Points

Mike Cohn's Blog - Tue, 09/09/2014 - 15:00

If story points are an estimate of the time (effort) involved in doing something, why not just estimate directly in hours or days? Why use points at all?

There are multiple good reasons to estimate product backlog items in story points, and I cover them fully in the Agile Estimating and Planning video course, but there is one compelling reason that on its own is enough to justify the use of points.

It has to do with King Henry I, who reigned from 1100 to 1135. Prior to his reign, a “yard” was a unit of measure from a person’s nose to his outstretched thumb. Just imagine all the confusion this caused with that distance being different for each person.

King Henry eventually decided a yard would always be the distance between the king’s nose and outstretched thumb. Convenient for him, but also convenient for everyone else because there was now a standard unit of measure.

You might learn that for you, a yard (as defined by the king’s arm) was a little more or less than your arm. I’d learn the same about my arm. And we’d all have a common unit of measure.

Story points are much the same. Like English yards, they allow team members with different skill levels to communicate about and agree on an estimate. As an example, imagine you and I decide to go for a run. I like to run but am very slow. You, on the other hand, are a very fast runner. You point to a trail and say, “Let’s run that trail. It’ll take 30 minutes.”

I am familiar with that trail, but being a much slower runner than you, I know it takes me 60 minutes every time I run that trail. And I tell you I’ll run that trail with you but that will take 60 minutes.

And so we argue. “30.” “60.” “30.” “60.”

We’re getting nowhere. Perhaps we compromise and call it 45 minutes. But that is possibly the worst thing we could do. We now have an estimate that is no good for either of us.

So instead of compromising on 45, we continue arguing. “30.” “60.” “30.” “60.”

Eventually you say to me, “Mike, it’s a 5-mile trail. I can run it in 30 minutes.”

And I tell you, “I agree: it’s a 5-mile trail. That takes me 60 minutes.”

The problem is that we are both right. You really can run it in 30 minutes, and it really will take me 60. When we try to put a time estimate on running this trail, we find we can’t because we work (run) at different speeds.

But, when we use a more abstract measure—in this case, miles—we can agree. You and I agree the trail is 5 miles. We just differ in how long it will take each of us to run it.

Story points serve much the same purpose. They allow individuals with differing skill sets and speeds of working to agree. Instead of a fast and slow runner, consider two programmers of differing productivity.

Like the runners, these two programmers may agree that a given user story is 5 points (rather than 5 miles). The faster programmer may be thinking it’s easy and only a day of work. The slower programmer may be thinking it will take two days of work. But they can agree to call it 5 points, as the number of points assigned to the first story is fairly arbitrary.

What’s important is that once they agree the first story is 5 points, our two programmers can then agree on subsequent estimates. If the fast programmer thinks a new user story will take two days (twice his estimate for the 5-point story), he will estimate the new story as 10 points. So will the second programmer if she thinks it will take four days (twice as long as her estimate for the 5-point story).
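To make the arithmetic concrete, here is a tiny illustrative Python sketch (not anything from the original article) showing how both programmers land on the same point value by scaling against their own time for the agreed 5-point baseline.

def estimate_points(my_time_for_new_story, my_time_for_baseline, baseline_points=5):
    # Scale the baseline's points by how long the new story feels
    # relative to the baseline, in the estimator's own units of time.
    return baseline_points * (my_time_for_new_story / my_time_for_baseline)

print(estimate_points(2, 1))  # fast programmer: baseline 1 day, new story 2 days -> 10.0
print(estimate_points(4, 2))  # slow programmer: baseline 2 days, new story 4 days -> 10.0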

And so, like the distance from King Henry’s nose to his thumb, story points allow agreement among individuals who perform at different rates.

Cost, Value & Investment: How Much Will This Project Cost? Part 1

I’ve said before that you cannot use capacity planning for the project portfolio. I also said that managers often want to know how much the project will cost. Why? Because businesses have to manage costs. No one can have runaway projects. That is fair.

If you use an agile or incremental approach to your projects, you have options. You don’t have to have runaway projects. Here are two better questions:

  • How much do you want to invest before we stop?
  • How much value is this project or program worth to you?

You need to think about cost, value, and investment, not just cost, when you think about the project portfolio. If you think only about cost, you miss the potentially great projects and features.

However, no business exists without managing costs. In fact, the cost question might be critical to your business. If you proceed without thinking about cost, you might doom your business.

Why do you want to know about cost? Do you have a contract? Does the customer need to know? A cost-bound contract is a good reason.  (If you have other reasons for needing to know cost, let me know. I am serious when I say you need to evaluate the project portfolio on value, not on cost.)

You Have a Cost-Bound Contract

I’ve talked before about providing date ranges or confidence ranges with estimates. It all depends on why you need to know. If you are trying to stay within your predicted cost-bound contract, you need a ranked backlog. If you are part of a program, everyone needs to see the roadmap. Everyone needs to see the product backlog burnup charts. You’d better have feature teams so you can create features. If you don’t, you’d better have small features.

Why? You can manage the interdependencies among and between teams more easily with small features and with a detailed roadmap. The larger the program, the smaller you want the batch size to be. Otherwise, you will waste a lot of money very fast. (The teams will create components and get caught in integration hell. That wastes money.)

Your Customer Wants to Know How Much the Project Will Cost

Why does your customer want to know? Do you have payments based on interim deliverables? If the customer needs to know, you want to build trust by delivering value, so the customer trusts you over time.

If the customer wants to contain his or her costs, you want to work by feature, delivering value. You want to share the roadmap, delivering value. You want to know what value the estimate has for your customer. You can provide an estimate for your customer, as long as you know why your customer needs it.

Some of you think I’m being perverse, that I’m not being helpful by saying what you could do to provide an estimate. Okay: in Part 2, I will suggest how you could take an order-of-magnitude approach to estimating a program.

Categories: Project Management

Firm foundations

Coding the Architecture - Simon Brown - Tue, 09/09/2014 - 12:32

I truly believe that a lightweight, pragmatic approach to software architecture is pivotal to successfully delivering software, and that it can complement agile approaches rather than compete against them. After all, a good architecture enables agility, and this doesn't happen by magic. But people in our industry often have a very different view. Historically, software architecture has been a discipline steeped in academia and, I think, it subsequently feels inaccessible to many software developers. It's also not a particularly trendy topic when compared to [microservices|Node.js|Docker|insert other new thing here].

I've been distilling the essence of software architecture over the past few years, helping software teams to understand and introduce it into the way that they work. And, for me, the absolute essence of software architecture is about building firm foundations; both in terms of the team that is building the software and for the software itself. It's about technical leadership, creating a shared vision and stacking the chances of success in your favour.

I'm delighted to have been invited back to ASAS 2014, and my opening keynote is about what a software team can do to create those firm foundations. I'm also going to talk about some concrete examples of what I've done in the past, illustrating how I apply a minimal set of software architecture practices in the real world to take an idea through to working software. I'll share some examples of where this hasn't exactly gone to plan, too! I look forward to seeing you in a few weeks.

Read more...

Categories: Architecture

Running unit tests on iOS devices

Xebia Blog - Tue, 09/09/2014 - 09:48

When running a unit test target that needs an entitlement (e.g. keychain access) on a device, it does not work out of the box in Xcode. You get a descriptive error in the console about a "missing entitlement". Everything works fine on the Simulator though.

Often this is a case of the executable bundle's code signature no longer being valid because a testing bundle was added/linked to the executable before deployment to the device. The easiest fix is to add a new "Run Script Build Phase" with the content:

codesign --verify --force --sign "$CODE_SIGN_IDENTITY" "$CODESIGNING_FOLDER_PATH"


Now try (cleaning and) running your unit tests again. There's a good chance it will work now.

Communities of Practice


Communities are all different

Organizations are becoming more diverse and distributed while at the same time pursuing mechanisms to increase collaboration between groups and consistency of knowledge and practice. A community of practice (COP) is often used as a tool to share knowledge and improve performance. Etienne Wenger-Trayner suggests that a community of practice is formed by people who engage in a process of collective learning in a shared domain of human endeavor. There are four common requirements for a community of practice to exist:

  1. Common Area of Interest – The first attribute any potential community of practice must have is a common area of interest among a group of people. The area needs to be specific and one where knowledge is not viewed as proprietary but can be generated, shared and owned by the community as a whole. When knowledge is perceived to be proprietary it will not be shared.
  2. Process – The second must-have attribute for a community of practice is a set of defined processes. Processes that are generally required include mechanisms to attract legitimate participants and to capture and disseminate community knowledge. The existence of these processes differentiates COPs from ad-hoc meetings.
  3. Support – There is a tendency within many organizations to think of COPs either as socially driven by users and self-managing, or as tools to control, collect and disseminate knowledge within the organization. In their purest form, neither case is perfect (we will explore why in a later essay), but in either case somebody needs to take responsibility for the COP. This role is typically known as the community manager. The role of the community manager can include finding logistics support and budget, identifying speakers, capturing knowledge and ensuring the group gets together.
  4. Interest – Perhaps the single most important attribute required for any COP to exist is an interest in interacting and sharing. Unless participants are interested (passionate is even better) in the topic(s) that the community pursues, they will not participate.

A community of practice promotes collaboration and consistency of practice even in organizations that are becoming more distributed and diverse. Communities of practice provide a platform for people with similar interests to build on individual knowledge to create group knowledge, which helps organizations deliver more value.


Categories: Process Management

Update on Android Wear Paid Apps

Android Developers Blog - Mon, 09/08/2014 - 22:02

Update (8 September 2014): All of the issues in the post below have now been resolved in Android Studio 0.8.3 onwards, released on 21 July 2014. The gradle wearApp rule, and the packaging documentation, were updated to use res/raw. The instructions in this blog post remain correct and you can continue to use manual packaging if you want. You can upload Android Wear paid apps to Google Play using the standard wearApp release mechanism.

We have a workaround to enable paid apps (and other apps that use Google Play's forward-lock mechanism) on Android Wear. The assets/ directory of those apps, which contains the wearable APK, cannot be extracted or read by the wearable installer. The workaround is to place the wearable APK in the res/raw directory instead.

As per the documentation, there are two ways to package your wearable app: use the “wearApp” Gradle rule to package your wearable app or manually package the wearable app. For paid apps, the workaround is to manually package your apps with the following two changes, and you cannot use the “wearApp” Gradle rule. To manually package the wearable APK into res/raw, do the following:

  1. Copy the signed wearable app into your handheld project's res/raw directory and rename it to wearable_app.apk; it will be referred to as wearable_app.
  2. Create a res/xml/wearable_app_desc.xml file that contains the version and path information of the wearable app:
    <wearableApp package="wearable app package name">
        <versionCode>1</versionCode>
        <versionName>1.0</versionName>
        <rawPathResId>wearable_app</rawPathResId>
    </wearableApp>

    The package, versionCode, and versionName are the same as values specified in the wearable app's AndroidManifest.xml file. The rawPathResId is the static variable name of the resource. If the filename of your resource is wearable_app.apk, the static variable name would be wearable_app.

  3. Add a <meta-data> tag to your handheld app's <application> tag to reference the wearable_app_desc.xml file.
    <meta-data android:name="com.google.android.wearable.beta.app"
               android:resource="@xml/wearable_app_desc"/>
  4. Build and sign the handheld app.

We will be updating the “wearApp” Gradle rule in a future update to the Android SDK build tools to support APK embedding into res/raw. In the meantime, for paid apps you will need to follow the manual steps outlined above. We will also be updating the documentation to reflect the above workaround. We're working to make this easier for you in the future, and we apologize for the inconvenience.




Categories: Programming

An F# newbie using SQLite

Eric.Weblog() - Eric Sink - Mon, 09/08/2014 - 19:00

Like I said in a tweet on Friday, I'm guessing everybody's first 10,000 lines of F# are crap. That's a lot of bad code I need to write, so I figure maybe I better get started.

This blog entry is a journal of my first attempts at using F# to do some SQLite stuff. I'm using SQLitePCL.raw, which is a Portable Class Library wrapper (written in C#) allowing .NET developers to call the native SQLite library.

My program has five "stanzas":

  • ONE: Open a SQLite database and create a table with two integer columns called a and b.

  • TWO: Insert 16 rows with a going from 1 through 16.

  • THREE: Set column b equal to a squared, and lookup the value of b for a=7.

  • FOUR: Loop over all the rows where b<120 and sum the a values.

  • FIVE: Close the database and print the two results.

I've got three implementations of this program to show you -- one in C# and two in F#.

C#

Here's the C# version I started with:

using System;
using System.IO;

using SQLitePCL;

public class foo
{
    // helper functions to check SQLite result codes and throw

    private static bool is_error(int rc)
    {
        return (
                (rc != raw.SQLITE_OK)
                && (rc != raw.SQLITE_ROW)
                && (rc != raw.SQLITE_DONE)
           );
    }

    private static void check(int rc)
    {
        if (is_error(rc))
        {
            throw new Exception(string.Format("{0}", rc));
        }
    }

    private static void check(sqlite3 conn, int rc)
    {
        if (is_error(rc))
        {
            throw new Exception(raw.sqlite3_errmsg(conn));
        }
    }

    private static int checkthru(sqlite3 conn, int rc)
    {
        if (is_error(rc))
        {
            throw new Exception(raw.sqlite3_errmsg(conn));
        }
        else
        {
            return rc;
        }
    }

    // MAIN program

    public static void Main()
    {
        sqlite3 conn = null;

        // ONE: open the db and create the table

        check(raw.sqlite3_open(":memory:", out conn));

        check(conn, raw.sqlite3_exec(conn, "CREATE TABLE foo (a int, b int)"));

        // TWO: insert 16 rows

        for (int i=1; i<=16; i++)
        {
            string sql = string.Format("INSERT INTO foo (a) VALUES ({0})", i);
            check(conn, raw.sqlite3_exec(conn, sql));
        }

        // THREE: set b = a squared and find b for a=7

        check(conn, raw.sqlite3_exec(conn, "UPDATE foo SET b = a * a"));

        sqlite3_stmt stmt = null;
        check(conn, raw.sqlite3_prepare_v2(conn, "SELECT b FROM foo WHERE a=?", out stmt));

        check(conn, raw.sqlite3_bind_int(stmt, 1, 7));
        check(conn, raw.sqlite3_step(stmt));
        int vsq = raw.sqlite3_column_int(stmt, 0);
        check(conn, raw.sqlite3_finalize(stmt));
        stmt = null;

        // FOUR: fetch sum(a) for all rows where b < 120

        check(conn, raw.sqlite3_prepare_v2(conn, "SELECT a FROM foo WHERE b<120", out stmt));

        int vsum = 0;

        while (raw.SQLITE_ROW == (checkthru(conn, raw.sqlite3_step(stmt))))
        {
            vsum += raw.sqlite3_column_int(stmt, 0);
        }
        
        check(conn, raw.sqlite3_finalize(stmt));
        stmt = null;

        // FIVE: close and print the results

        check(raw.sqlite3_close(conn));
        conn = null;

        Console.WriteLine("val: {0}", vsq);
        Console.WriteLine("sum: {0}", vsum);
    }
}

Notes:

  • I'm coding against the 'raw' SQLite API, which returns integer error codes rather than throwing exceptions. So I've written some little check functions which throw on any result code that signifies an error condition.

  • In the first stanza, I'm opening ":memory:" rather than an actual file on disk so that I can be sure the db starts clean.

  • In the second stanza, I'm constructing the SQL string rather than using parameter substitution. This is a bad idea for two reasons. First, parameter substitution eliminates SQL injection attacks. Second, forcing SQLite to compile a SQL statement inside a loop is going to cause poor performance.

  • In the third stanza, I'm going out of my way to do this more properly, using prepare/bind/step/finalize. Ironically, this is the case where it doesn't matter as much, since I'm not looping.

  • In the fourth stanza, I specifically want to loop over the rows in C# even though I could easily just do the sum in SQL.

F#, first attempt

OK, now here's a painfully direct translation of this code to F#:

open SQLitePCL

// helper functions to check SQLite result codes and throw

let is_error rc = 
    (rc <> raw.SQLITE_OK) 
    && (rc <> raw.SQLITE_ROW) 
    && (rc <> raw.SQLITE_DONE)

let check1 rc = 
    if (is_error rc) 
    then failwith (sprintf "%d" rc) 
    else ()

let check2 conn rc = 
    if (is_error rc) 
    then failwith (raw.sqlite3_errmsg(conn)) 
    else ()

let checkthru conn rc = 
    if (is_error rc) 
    then failwith (raw.sqlite3_errmsg(conn)) 
    else rc

// MAIN program

// ONE: open the db and create the table

let (rc,conn) = raw.sqlite3_open(":memory:") 
check1 rc

check2 conn (raw.sqlite3_exec (conn, "CREATE TABLE foo (a int, b int)"))

// TWO: insert 16 rows

for i = 1 to 16 do 
    let sql = (sprintf "INSERT INTO foo (a) VALUES (%d)" i)
    check2 conn (raw.sqlite3_exec (conn, sql ))

// THREE: set b = a squared and find b for a=7

check2 conn (raw.sqlite3_exec (conn, "UPDATE foo SET b = a * a"))

let rc2,stmt = raw.sqlite3_prepare_v2(conn, "SELECT b FROM foo WHERE a=?")
check2 conn rc2

check2 conn (raw.sqlite3_bind_int(stmt, 1, 7))
check2 conn (raw.sqlite3_step(stmt))
let vsq = raw.sqlite3_column_int(stmt, 0)
check2 conn (raw.sqlite3_finalize(stmt))

// FOUR: fetch sum(a) for all rows where b < 120

let rc3,stmt2 = raw.sqlite3_prepare_v2(conn, "SELECT a FROM foo WHERE b<120")
check2 conn rc3

let mutable vsum = 0

while raw.SQLITE_ROW = ( checkthru conn (raw.sqlite3_step(stmt2)) ) do 
    vsum <- vsum + (raw.sqlite3_column_int(stmt2, 0))

check2 conn (raw.sqlite3_finalize(stmt2))

// FIVE: close and print the results

check1 (raw.sqlite3_close(conn))

printfn "val: %d" vsq
printfn "sum: %d" vsum

Notes:

  • The is_error function actually looks kind of elegant to me in this form. Note that != is spelled <>. Also there is no return keyword, as the value of the last expression just becomes the return value of the function.

  • The F# way is to use type inference. For example, in the is_error function, the rc parameter is strongly typed as an integer even though I haven't declared it that way. The F# compiler looks at the function and sees that I am comparing the parameter against raw.SQLITE_OK, which is an integer, therefore rc must be an integer as well. F# does have a syntax for declaring the type explicitly, but this is considered bad practice.

  • The check2 and checkthru functions are identical except that one returns the unit type (which is kind of like void) and the other passes the integer argument through. In C# this wouldn't matter and I could have just had the check functions return their argument when they don't throw. But F# gives warnings ("This expression should have type 'unit', but has type...") for any expression whose value is not used.

  • In C#, I overloaded check() so that I could sometimes call it with the sqlite3 connection handle and sometimes without. F# doesn't do function overloading, so I did two versions of the function called check1 and check2.

  • Since raw.sqlite3_open() has an out parameter, F# automatically converts this to return a tuple with two items (the actual return value is first, and the value in the out parameter is second). It took me a little while to figure out the right syntax to get the two parts into separate variables.

  • It took me even longer to figure out that calling a .NET method in F# uses a different syntax than calling a regular F# function. I was just getting used to the idea that F# wants functions to be called without parens and with the parameters separated by spaces instead of commas. But .NET methods are not F# functions. The syntax for calling a .NET method is, well, just like in C#. Parens and commas.

  • Here's another way that method calls are different in F#: When a method call is a parameter to a regular F# function, you have to enclose it in parens. That's why the call to sqlite3_exec() in the first stanza is parenthesized when I pass it to check2.

  • BTW, one of the first things I did was try to call raw.sqlite3_Open(), just to verify that F# is case-sensitive. It is.

  • F# programmers seem to pride themselves on how much they can do in a single line of code, regardless of how long it is. I originally wrote the second stanza in a single line. I only split it up so it would look better here in my blog article.

  • In the third stanza, F# wouldn't let me reuse rc ("Duplicate definition of value 'rc'") so I had to introduce rc2.

  • In the fourth stanza, I have tried to exactly mimic the behavior of the C# code, and I think I've succeeded so thoroughly that any real F# programmer will be tempted to gouge their eyes out when they see it. I've used mutable and while/do, both of which are considered a very un-functional way of doing things.

  • Bottom line: This code works and it does exactly what the original C# does. But I named the file fsharp_dreadful.fs because I think that in terms of what is considered best practices in the F# world, it's probably about as bad as it can be while still being correct.

F#, somewhat less csharpy

Here's an F# version I called fsharp_less_bad.fs. It's still not very good, but I've made an attempt to do some things in a more F#-ish way.

open SQLitePCL

// helper functions to check SQLite result codes and throw

let is_error rc = 
    match rc with
    | raw.SQLITE_OK -> false
    | raw.SQLITE_ROW -> false
    | raw.SQLITE_DONE -> false
    | _ -> true

let check1 rc = 
    if (is_error rc) 
    then failwith (sprintf "%d" rc) 
    else ()

let check2 conn rc = 
    if (is_error rc) 
    then failwith (raw.sqlite3_errmsg(conn)) 
    else rc

let checkpair1 pair =
    let rc,result = pair
    check1 rc |> ignore
    result

let checkpair2 conn pair =
    let rc,result = pair
    check2 conn rc |> ignore
    result

// helper functions to wrap method calls in F# functions

let sqlite3_open name = checkpair1 (raw.sqlite3_open(name))
let sqlite3_exec conn sql = check2 conn (raw.sqlite3_exec (conn, sql)) |> ignore
let sqlite3_prepare_v2 conn sql = checkpair2 conn (raw.sqlite3_prepare_v2(conn, sql))
let sqlite3_bind_int conn stmt ndx v = check2 conn (raw.sqlite3_bind_int(stmt, ndx, v)) |> ignore
let sqlite3_step conn stmt = check2 conn (raw.sqlite3_step(stmt)) |> ignore
let sqlite3_finalize conn stmt = check2 conn (raw.sqlite3_finalize(stmt)) |> ignore
let sqlite3_close conn = check1 (raw.sqlite3_close(conn))
let sqlite3_column_int stmt ndx = raw.sqlite3_column_int(stmt, ndx)

// MAIN program

// ONE: open the db and create the table

let conn = sqlite3_open(":memory:")

// use partial application to create an exec function that already 
// has the conn parameter baked in

let exec = sqlite3_exec conn

exec "CREATE TABLE foo (a int, b int)"

// TWO: insert 16 rows

let ins x = 
    exec (sprintf "INSERT INTO foo (a) VALUES (%d)" x)

[1 .. 16] |> List.iter ins

// THREE: set b = a squared and find b for a=7

exec "UPDATE foo SET b = a * a"

let stmt = sqlite3_prepare_v2 conn "SELECT b FROM foo WHERE a=?"
sqlite3_bind_int conn stmt 1 7
sqlite3_step conn stmt
let vsq = sqlite3_column_int stmt 0
sqlite3_finalize conn stmt

// FOUR: fetch sum(a) for all rows where b < 120

let stmt2 = sqlite3_prepare_v2 conn "SELECT a FROM foo WHERE b<120"

let vsum = List.sum [ while 
    raw.SQLITE_ROW = ( check2 conn (raw.sqlite3_step(stmt2)) ) do 
        yield sqlite3_column_int stmt2 0 
    ]

sqlite3_finalize conn stmt2

// FIVE: close and print the results

sqlite3_close conn

printfn "val: %d" vsq
printfn "sum: %d" vsum

Notes:

  • I changed is_error to use pattern matching. For this very simple situation, I'm not sure this is an improvement over the simple boolean expression I had before.

  • I get the impression that typical doctrine in functional programming church is to not use exceptions, but I'm not tackling that problem here.

  • I got rid of checkthru and just made check2 return its rc parameter when it doesn't throw. This means most of the times I call check2 I have to ignore the result or else I get a warning.

  • I added a couple of checkpair functions. These are designed to take a tuple, such as the one that comes from a .NET method with an out parameter, like sqlite3_open() or sqlite3_prepare_v2(). The checkpair function does the appropriate check on the first part of the tuple (the integer return code) and then returns the second part. The sort-of clever thing here is that checkpair does not care what type the second part of the tuple is. I get the impression that this sort of "things are generic by default" philosophy is a pillar of functional programming.

  • I added several functions which wrap raw.sqlite3_whatever() as a more F#-like function that looks less cluttered.

  • In the first stanza, after I get the connection open, I define an exec function using the F# partial application feature. The exec function is basically just the sqlite3_exec function except that the conn parameter has already been baked in. This allows me to use very readable syntax like exec "whatever". I considered doing this for all the functions, but I'm not really sure this design is a good idea. I just found this hammer called "partial application" so I was looking for a nail.

  • In the second stanza, I've eliminated the for loop in favor of a list operation. I defined a function called ins which inserts one row. The [1 .. 16] syntax produces a range, which is piped into List.iter.

  • The third stanza looks a lot cleaner with all the .NET method calls hidden away.

  • In the fourth stanza, I still have a while loop, but I was able to get rid of mutable. The syntax I'm using here is (I think) something called a computation expression. Basically, the stuff inside the square brackets is constructing a list with a while loop. Then List.sum is called on that list, resulting in the number I want.

Other notes

I did all this using the command-line F# tools and Mono on a Mac. I've got a command called fsharpc on my system. I'm not sure how it got there, but it probably happened when I installed Xamarin.

Since I'm not using msbuild or NuGet, I just harvested SQLitePCL.raw.dll from the SQLitePCL.raw NuGet package. The net45 version is compatible with Mono, and on a Mac, it will simply P/Invoke from the SQLite library that comes preinstalled with MacOS X.

So the bash commands to setup my environment for this blog entry looked something like this:

mkdir fs_sqlite
cd fs_sqlite
mkdir unzipped
cd unzipped
unzip ~/Downloads/sqlitepcl.raw_needy.0.5.0.nupkg 
cd ..
cp unzipped/lib/net45/SQLitePCL.raw.dll .

Here are the build commands I used:

fs_sqlite eric$ gmcs -r:SQLitePCL.raw.dll -sdk:4.5 csharp.cs

fs_sqlite eric$ fsharpc -r:SQLitePCL.raw.dll fsharp_dreadful.fs
F# Compiler for F# 3.1 (Open Source Edition)
Freely distributed under the Apache 2.0 Open Source License

fs_sqlite eric$ fsharpc -r:SQLitePCL.raw.dll fsharp_less_bad.fs
F# Compiler for F# 3.1 (Open Source Edition)
Freely distributed under the Apache 2.0 Open Source License

fs_sqlite eric$ ls -l *.exe
-rwxr-xr-x  1 eric  staff   4608 Sep  8 15:30 csharp.exe
-rwxr-xr-x  1 eric  staff   8192 Sep  8 15:30 fsharp_dreadful.exe
-rwxr-xr-x  1 eric  staff  11776 Sep  8 15:31 fsharp_less_bad.exe

fs_sqlite eric$ mono csharp.exe
val: 49
sum: 55

fs_sqlite eric$ mono fsharp_dreadful.exe 
val: 49
sum: 55

fs_sqlite eric$ mono fsharp_less_bad.exe 
val: 49
sum: 55

BTW, I noticed that compiling F# (fsharpc) is a LOT slower than compiling C# (gmcs).

Note that the command-line flag to reference (-r:) an assembly is the same for F# as it is for C#.

Note that fsharp_dreadful.exe is bigger than csharp.exe, and the "less_bad" exe is even bigger. I suspect that generalizing these observations would be extrapolating from too little data.

C# fans may notice that I [attempted to] put more effort into the F# code. This was intentional. Making the C# version beautiful was not the point of this blog entry.

So far, my favorite site for learning F# has been http://fsharpforfunandprofit.com/

 

How To Rapidly Brainstorm and Share Ideas with Method 635

So, if you have a bunch of smart people, a bunch of bright ideas, and everybody wants to talk at the same time ... what do you do?

Or, you have a bunch of smart people, but they are quiet and nobody is sharing their bright ideas, and the squeaky wheel gets the oil ... what do you do?

Whenever you get a bunch of smart people together to change the world it helps to have some proven practices for better results.

One of the techniques a colleague shared with me recently is Method 635.  It stands for six participants, three ideas, and five rounds of supplements. 

He's used Method 635 successfully to get a large room of smart people to brainstorm ideas and put their top three ideas forward.

Here's how he uses Method 635 in practice.

  1. Split the group into 6 people per table (6 people per team or table).
  2. Explain the issue or challenge to the group, so that everybody understands it. Each group of 6 writes down 3 solutions to the problem (5 minutes).
  3. Go five rounds (5 minutes per round).  During each round, pass the ideas to the participant's neighbor (one of the other participants).  The participant's neighbor will add three additional ideas or modify three of the existing ones.
  4. At the end of the five rounds, each team votes on their top three ideas (5 minutes.)  For example, you can use “impact” and “ability to execute” as criteria for voting (after all, who cares about good ideas that can't be done, and who cares about low-value ideas that can easily be executed.)
  5. Each team presents their top three ideas to the group.  You could then vote again, by a show of hands, on the top three ideas across the teams of six.

The outcome is that each person will see the original three solutions and contribute to the overall set of ideas.

By using this method, if each of the 5 rounds is 5 minutes, you take 10 minutes to start by explaining the issue, you give teams 5 minutes to write down their initial set of 3 ideas, and then add another 5 minutes at the end to vote and another 5 minutes to present, you’ve accomplished a lot within an hour. Voices were heard. Smart people contributed their ideas and got their fingerprints on the solutions. And you’ve driven to consensus by first elaborating on ideas, while at the same time driving to convergence and allowing refinement along the way.
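A quick tally (illustrative Python, using the numbers above) shows the whole exercise fits comfortably inside an hour.

# Minutes per activity, taken from the walkthrough above.
explain_issue = 10
initial_ideas = 5
rounds = 5 * 5        # five rounds of five minutes each
vote = 5
present = 5

print(explain_issue + initial_ideas + rounds + vote + present)  # 50 minutes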

Not bad.

All in a good day’s work, and another great example of how structuring an activity, even loosely structuring an activity, can help people bring out their best.

You Might Also Like

How To Use Six Thinking Hats

Idea to Done: How to Use a Personal Kanban for Getting Results

Workshop Planning Framework

Categories: Architecture, Programming

How Twitter Uses Redis to Scale - 105TB RAM, 39MM QPS, 10,000+ Instances

Yao Yue has worked on Twitter’s Cache team since 2010. She recently gave a really great talk: Scaling Redis at Twitter. It’s about Redis of course, but it's not just about Redis.

Yao has worked at Twitter for a few years. She's seen some things. She’s watched the growth of the cache service at Twitter explode from it being used by just one project to nearly a hundred projects using it. That's many thousands of machines, many clusters, and many terabytes of RAM.

It's clear from her talk that she's coming from a place of real personal experience, and that shines through in the practical way she explores issues. It's a talk well worth watching.

As you might expect, Twitter has a lot of cache.

Timeline Service for one datacenter using Hybrid List:
  • ~40TB allocated heap
  • ~30MM qps
  • > 6,000 instances
Use of BTree in one datacenter:
  • ~65TB allocated heap
  • ~9MM qps
  • >4,000 instances

You'll learn more about BTree and Hybrid List later in the post.

A couple of points stood out:

  • Redis is a brilliant idea because it takes underutilized resources on servers and turns them into a valuable service.
  • Twitter specialized Redis with two new data types that fit their use cases perfectly. So they got the performance they needed, but it locked them into an older code base and made it hard to merge in new features. I have to wonder, why use Redis for this sort of thing? Just create a timeline service using your own data structures. Does Redis really add anything to the party?
  • Summarize large chunks of log data on the node, using your local CPU power, before saturating the network.
  • If you want something that’s high performance, separate the fast path, which is the data path, from the slow path, which is the command and control path. 
  • Twitter is moving towards a container environment with Mesos as the job scheduler. This is still a new approach so it's interesting to hear about how it works. One issue is the Mesos wastage problem that stems from the requirement to specify hard resource usage limits in a complicated runtime world.
  • A central cluster manager is really important to keep a cluster in a state that’s easy to understand.
  • The JVM is slow and C is fast. Their cache proxy layer is moving back to C/C++.
With that in mind, let's learn more about how Redis is used at Twitter:

Why Redis?
Categories: Architecture

Xebia KnowledgeCast Episode 3

Xebia Blog - Mon, 09/08/2014 - 15:41

The Xebia KnowledgeCast is a bi-weekly podcast about software architecture, software development, lean/agile, continuous delivery, and big data. Also, we'll have some fun with stickies!

In this third episode, we get a bit more technical, with me interviewing some of the most excellent programmers in the known universe: Age Mooy and Barend Garvelink. Then, I talk education and Smurfs with Femke Bender. Also, I toot my own horn a bit by providing you with a summary of my PMI Netherlands Summit session on Building Your Parachute On The Way Down. And of course, Serge will have Fun With Stickies!

It's been a while, and for those that are still listening to this feed: the Xebia KnowledgeCast has been on hold due to personal circumstances. Last year, my wife lost her job, my father and mother-in-law died, and we had to take our son out of the daycare center he was in due to the way they were treating him there. So, that had a little bit of an effect on my level of podfever. That said, I'm back! And my podfever is up!

Want to subscribe to the Xebia KnowledgeCast? Subscribe via iTunes, or use our direct rss feed.

Your feedback is appreciated. Please leave your comments in the shownotes. Better yet, use the Auphonic recording app to send in a voicemessage as an AIFF, WAV, or FLAC file so we can put you ON the show!

Credits

Your Worst Enemy Is Yourself

Making the Complex Simple - John Sonmez - Mon, 09/08/2014 - 15:00

Here’s the thing… You could have been exactly where you want to be right now. You could have gotten the perfect job. You could have started that business you always wanted to start. You could have gotten those 6-pack abs. You could have even met the love of your life. There has only been one […]

The post Your Worst Enemy Is Yourself appeared first on Simple Programmer.

Categories: Programming

Define Your Target Audience

NOOP.NL - Jurgen Appelo - Mon, 09/08/2014 - 14:44

The big checklist that I used while writing my new book #Workout contained the following items:

Remove use of the words “agile” and “lean”
Remove use of the words “software” and “development”
Remove use of the words “Scrum” and “Kanban”

The post Define Your Target Audience appeared first on NOOP.NL.

Categories: Project Management

Improving Time-To-Market and Business-IT Alignment

Software Requirements Blog - Seilevel.com - Mon, 09/08/2014 - 11:13
A very common area of focus we hear from organizations is the desire to decrease product Time To Market (TTM) and improve Business-IT Alignment. Projects are either running long or delivering a product that does not match the vision of the business– both resulting in significant increases of resource time and effort, and opportunity cost. There are a number […]
Categories: Requirements

Getting started with Salt

Xebia Blog - Mon, 09/08/2014 - 09:07

A couple of days ago I had the chance to spend a full day working with Salt(stack). On my current project we are using a different configuration management tool, and my colleagues there claimed that Salt was simpler and more productive. The challenge was easily set: they claimed that a couple of people with no Salt experience, albeit with a little configuration management knowledge, would be productive in a single day.

My preparation was easy: I had to know nothing about Salt... done!

During the day I was working side by side with another colleague who knew little to nothing about Salt. When the day started, the original plan was to do a quick one-hour introduction to Salt. But as we like to dive in head-first, this intro was skipped in favor of just getting started. We used an existing Vagrant box that spun up a Salt master & minion we could work on. The target was to get Salt to provision a machine for XL Deploy, complete with the customizations we were doing at our client. Think of custom plugins, logging configuration and custom libraries.

So we got cracking, running down the steps we needed to get XL Deploy installed. The steps were relatively simple, create a user & group, get the installation files from somewhere, install the server, initialize the repository and run it as a service.

The first thing I noticed is that we could simply get started. For the tasks we needed to do (downloading, unzipping etc.) we didn't need any additional states. Actually, during the whole exercise we never downloaded any additional states. Everything we needed was provided by Salt from the get-go. Granted, we weren't doing anything special, but it's not a given that everything is available.

During the day we approached the development of our Salt state like we would a shell script. We started from the top and added the steps needed. When we ran into issues with the order of the steps we'd simply move things around to get it to work. Things like creating a user before running a service as that user were easily resolved this way.

Salt uses YAML to define a state and that was fairly straightforward to use. Sometimes the naming used was strange. For example salt.state.archive uses the parameter "source" for its source location but "name" for its destination. It's clearly stated in the docs what each parameter is used for, but a strange convention nonetheless.

We also found that the feedback provided by Salt can be scarce. On more than one occasion we'd enter a command and nothing would happen for a good while. Sometimes there would eventually be a lot of output, but sometimes there wasn't. This would be my biggest gripe with Salt: you don't always get the feedback you'd like.

Things like using templates and hierarchical data (the so-called pillars) proved easy to use. Salt uses jinja2 as its templating engine; since we only needed simple variable replacement it's hard to comment on how useful jinja is. For our purposes it was fine. Using pillars proved equally straightforward. The only issue we encountered here was that we needed to add our pillar to our machine role in the top.sls. Once we did that we could use the pillar data where needed.

The biggest (and only real) problem we encountered was getting XL Deploy to run as a service. We tried two approaches: one using the default service mechanism on Linux and the second using upstart. Upstart made it very easy to get the service started, but it wouldn't stop properly. Using the default mechanism we couldn't get the service to start during a Salt run. When we sent it specific commands it would start (and stop) properly, but not during a run. We eventually added a post-stop script to the upstart config to make sure all the (child) processes stopped properly.

At the end of the day we had a state running that provisioned a machine with XL Deploy, including all the customizations we wanted. Salt basically did what we wanted. Apart from the service, everything went smoothly. Granted, we didn't do anything exotic and stuck to rudimentary tasks like downloading, unzipping and copying, but implementing these simple tasks remained simple and straightforward. Overall, Salt did what one might expect.

From my perspective the goal of being productive in a single day was easily achieved. Because of how straightforward it was to implement I feel confident about using Salt for more complex stuff. So, all in all Salt left a positive impression and I would like to do more with it.

How to create a knowledge worker Gemba

Software Development Today - Vasco Duarte - Mon, 09/08/2014 - 04:00

I am a big fan of the work by Jim Benson and Tonianne Barry ever since I read their book: Personal Kanban.

In this article Jim describes an idea that I would like to highlight and expand. He says: we need a knowledge worker Gemba. He goes on to describe how to create that Gemba:

  • Create a workcell for knowledge work: Where you can actually observe the team work and interact
  • Make work explicit: Without being able to visualize the work in progress, you will not be able to understand the impact of certain dynamics between the team members. Also, you will miss the necessary information that will allow you to understand the obstacles to flow in the team - what prevents value from being delivered.

These are just some steps you can take right now to understand deeply how work gets done in your team, your organization, or by yourself if you are an independent knowledge worker. This understanding, in turn, will help you define concrete changes to the way work gets done in a way that can be measured and understood.

I've tried the same idea for my own work and described it here. How about you? What have you tried to implement to create visibility and understanding in your work?

SPaMCAST 306 – Luis Gonçalves, No More Performance Appraisals

http://www.spamcast.net

Listen to the Software Process and Measurement Cast 306 Now

Software Process and Measurement Cast number 306 features our interview with Luis Gonçalves.  We discussed getting rid of performance appraisals.  Luis makes the case that performance appraisals hurt people and companies.

Luis’s Bio . . .

Luis Gonçalves is an Agile coach, author, speaker and blogger.

Luis has been working in the software industry since 2003, as an Agile practitioner since 2007. He has experience in integrating sequential project phases like localization into an Agile framework and pioneering Agile adoption at different companies and in different contexts.

Luis is the co-author of the book, Getting Value Out of Agile Retrospectives.

He has a technical background and is passionate about Management 3.0.

Mr Gonçalves likes to write and share ideas with the world and is a passionate blogger. Inspiration comes from his professional life and from the books he reads. Follow his blog at  http://lmsgoncalves.com/

Luis asks SPaMCAST listeners to provide feedback on his new book, Get Rid of Performance Appraisals.

His mailing list is http://eepurl.com/QGxfX, which provides the first part of his new book for free!

Next

Software Process and Measurement Cast number 307 features our essay on Agile integration testing.  Integration testing is defined as testing in which components (software and hardware) are combined to confirm that they interact according to expectations and requirements.  Good integration testing is critical to effective Agile development.

Upcoming Events

DCG Webinars:

Raise Your Game: Agile Retrospectives, September 18, 2014, 11:30 EDT. Retrospectives are a tool that the team uses to identify what they can do better. The basic process – making people feel safe and then generating ideas and solutions so that the team can decide on what they think will make the most significant improvement – puts the team in charge of how they work. When teams are responsible for their own work, they will be more committed to delivering what they promise.

Agile Risk Management – It Is Still Important! October 24, 2014, 11:30 EDT. Has the adoption of Agile techniques magically erased risk from software projects? Or, have we just changed how we recognize and manage risk? Or, more frighteningly, by changing the project environment through adopting Agile techniques, have we tricked ourselves into thinking that risk has been abolished?

Upcoming: ITMPI Webinar!

We Are All Biased!  September 16, 2014 11:00 AM – 12:30 PM EST

Register HERE

How we think and form opinions affects our work whether we are project managers, sponsors or stakeholders. In this webinar, we will examine some of the most prevalent workplace biases such as anchor bias, agreement bias and outcome bias. Strategies and tools for avoiding these pitfalls will be provided.

Upcoming Conferences:

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1.  I have a great discount code!!!! Contact me if you are interested.

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes toward bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques was co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Categories: Process Management

Are Testers still relevant?

Xebia Blog - Sun, 09/07/2014 - 22:07

Last week I talked to one of my colleagues about a tester in his team. He told me that the tester was bored because he had nothing to do: all the developers wrote and executed their tests themselves. Which makes sense, because the Tester 2.0 tries to make the team test infected.

So what happens if every developer in the team has the Testivus? Are you still relevant on the Continuous Delivery train?
Come and join the discussion at the Open Kitchen Test Automation: Are you still relevant?

Testers 1.0

Remember the days when software testers were consulted only after everything was built and released for testing. Testing was a big fat QA phase, which was a project by itself. The QA department consisted of test managers analyzing the requirements first. Logical test cases were created and handed over to test executors, who created physical test cases and executed them manually. Testers discovered conflicting requirements and serious issues in technical implementations. Which is good, obviously. You don't want to deliver low-quality software. Right?

So product releases were being delayed and the QA department documented everything in a big fat test report. And we all knew it: The QA department had to do it all over again after the next release.

I remember being a tester during those days. I always asked myself: Why am I always the last one thinking about ways to break the system? Does the developer know how easily this functionality can be broken? Does the product manager know that this requirement does not make sense at all?
Everyone hated our QA department. We were portrayed as slow, always delivering bad news and holding back the delivery cycle. But the problem was not delivering the bad news. The timing was.

The way of working needed to be changed.

Testers 2.0

We started training testers to help Agile teams deliver high quality software during development: The birth of the Tester 2.0 - The Agile Tester.
These testers master the Agile Manifesto and the processes and methods that come with it. Collaboration about quality is the key here. Agile Testing is a mindset, and everyone is responsible for the quality of the product. Testers 2.0 helped teams get (more) test infected. They thought like researchers instead of quality gatekeepers. They became part of the software development and delivery teams and looked into possibilities to speed up testing efforts. So they practiced several exploratory testing techniques, focusing on the reasonable and required tests given the constraints of a sprint.

Looking back at several Agile teams that had a shared understanding of Agile Testing, we saw many multidisciplinary teams becoming two separate teams: one for developers and the other for QA / testers.
I personally never felt comfortable in those teams. Putting testers with a group of developers is not Agile Testing. Developers still left testing to the testers, and testers passively waited for developers to deploy something to be tested. At some point testers became a bottleneck again, so they invested in test automation. Testers became test automators and built their own test code in a separate code base from the development code. Test automators also picked tools that did not foster team responsibility. As a result, Test Automation got completely separated from product development. We found ways to help testers, test automators and developers collaborate by improving the process, but that was treating the symptom of the problem: developers were not taking responsibility for automated tests, and testers did not help developers design testable software.

We want test automators and developers to become the same people.

Testers 3.0

If you want to accelerate your business you'll need to become iterative, delivering value to production as soon as possible by doing Continuous Delivery properly. So we need multidisciplinary teams to shorten feedback loops coming from different points of view.

Testers 3.0 are therefore required to accelerate the business by working in the following areas:

Requirement inspection: Building the right thing

The Tester 3.0 tries to understand the business problem behind requirements by describing examples. It's important to reach a common understanding between the business and the technical context. So the tester verifies the requirements as soon as possible and uses these examples as input for Test Automation, applying BDD as a technique in which a Human Readable Language is fostered. These testers work on automated acceptance tests as soon as possible.
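
As a rough illustration (the feature, wording and numbers are invented here, not taken from a real project), such an executable example written in a human readable language could look like this:

Feature: Free shipping for large orders

  Scenario: Order above the free-shipping threshold
    Given a customer has an order worth 120 euros
    When the customer checks out
    Then no shipping costs are charged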

Test Automation: Boosting the software delivery process

When common understanding is reached and the delivery team is ready to implement the requested feature, the tester needs programming skills to keep the acceptance tests in a clean-code state. The Tester 3.0 uses appropriate Acceptance Test Driven Development tools (like Cucumber) that the whole team supports, but keeps an eye out for better, faster and easier automated testing frameworks to support the team.

At the Xebia Craftsmanship Seminar (a couple of months ago) someone asked me if testers should learn how to write code.
I told him that no one is good at everything. But the Tester 3.0 has good testing skills and enough technical background to write automated acceptance tests. Continuous Delivery teams have a shared responsibility, and they automate all boring steps like manual test scripts, performance tests and security tests. It's very important to know how to automate; otherwise you'll slow down the team. You'll be treated the same as anyone else in the delivery team.
Testers 3.0 try to get developers to think about clean code and ensure high-quality code. They look into (and keep up with) popular development frameworks and address their testability. Even the test code is continuously evaluated for quality attributes; it needs the same love and care as code that goes into production.

Living documentation: Treating tests as specifications

At some point you'll end up with a huge set of automated tests telling you everything is fine. The Tester 3.0 treats these tests as specifications and tries to create a living document, which is used for long-term requirements gathering. No one will complain about these tests when they are all green and passing. The problem starts when tests start failing and no one can understand why. Testers 3.0 think about their colleagues when they write a specification or test: they need to clearly specify, in a Human Readable Language, what is being tested.
They are used to changing requirements and specifications, and they don't make a big deal out of it. They understand that stakeholders can change their minds once a product comes to life. So the tester makes sure that important decisions made during requirement inspection and development are stored and understood.

Relevant test results: Building quality into the process

Testers 3.0 focus on getting extremely fast feedback to determine the quality of software products every day. Every night. Every second.
Testers want to deploy new working software features into production more often. So they do whatever it takes to build a high-quality pipeline that shortens the quality feedback time during development.
Everyone in your company deserves to have confidence in the software delivery pipeline at any moment. Testers 3.0 think about how they communicate these types of feedback to the business. They provide ways to automatically report test results about quality attributes. Testers 3.0 ask the business to define quality. Knowing everything was built right, how can they measure that they've built the right thing? What do we need to measure when the product is in production?

How to stay relevant as a Tester

So what happens when all of your teammates are completely focused on high quality software using automation?

Testing no longer requires you to manually click, copy and paste through boring scripted test steps you didn't want to do in the first place. You were hired to be skeptical about everything and to make sure that all risks are addressed. It's still important to keep being a researcher for your team and to keep testing with curiosity.

Besides being curious, analytical and a great communicator, you need to learn how to code. Don't work harder. Work smarter by analyzing how you can automate all the boring checks so you'll have more time to discover other things using your curiosity.

Since testing drives software development, and should no longer be treated as a separate phase in the development process, it's important that teams put test automation at the center of all design decisions, because we need Test Automation to boost software delivery by building quality sensors into every step of the process. Every day. Every night. Every second!

Do you want to discuss this topic with other Testers 3.0?  Come and join the Open Kitchen: Test Automation and get on board the Continuous Delivery Train!