
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Quote of the Day - All Things Project Are Probabilistic

Herding Cats - Glen Alleman - Wed, 08/27/2014 - 16:28

As far as the laws of mathematics refer to reality, they are not certain, as far as they are certain, they do not refer to reality.

— Albert Einstein, quoted in Ronald Paul Sherwin, The Tao of Systems Engineering: An Engineer's Survival Guide (2014), pp. 195-197, Kindle Edition.

Whenever you hear that we can't predict the future, think again. We can always predict the future. The level of confidence in that prediction is what is in question.

When you hear that estimating is guessing, think again: that person doesn't understand probability and statistics. When you hear that we don't need to predict to make decisions, that person has very little at risk from the decision, since making decisions in the absence of knowing the possible loss ignores the principles of the microeconomics of everyday life.

Whenever you hear that we don't need to estimate the outcomes of our decisions, think again. We can skip estimating those outcomes only if they are of low enough value that we don't care about the consequences of not knowing, to some level of confidence, what happens as a result of our decision - that is, we're willing to write off our loss if we're wrong.

When we hear any conjecture involving mathematics that does not address the foundation of the mathematical principles under discussion, remember Einstein, and also remember how to apply that advice in the specific domain and context of the question, guided by Deming.

Management is Prediction 


Since management is prediction, knowing how to make predictions using statistical methods to produce a confidence interval about the probabilistic outcomes of those business decisions is part of management. When we want a seat at the table where management decisions are being made, knowing this and being able to add value to the decision process is the price of entry to that room. Otherwise we're labor sitting outside the room waiting for the decisions to be made.
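To make that concrete, here is a minimal Monte Carlo sketch in Python of the kind of confidence interval described above. The task names, the three-point estimates, and the 80% interval are hypothetical illustrations of mine, not something from the original post; the point is only that a probabilistic estimate is a distribution you can query, not a single number.

import random

# Hypothetical three-point (best, most likely, worst) duration estimates in days.
TASKS = {
    "design": (3, 5, 10),
    "build":  (8, 13, 25),
    "test":   (4, 6, 14),
}

def simulate_total_duration():
    """Draw one possible project duration from triangular distributions."""
    return sum(random.triangular(low, high, mode)
               for (low, mode, high) in TASKS.values())

def confidence_interval(trials=10_000, low_pct=10, high_pct=90):
    """Return the (low, high) percentile bounds of the simulated durations."""
    outcomes = sorted(simulate_total_duration() for _ in range(trials))
    return (outcomes[trials * low_pct // 100],
            outcomes[trials * high_pct // 100])

if __name__ == "__main__":
    low, high = confidence_interval()
    print(f"80% confidence interval for duration: {low:.1f} to {high:.1f} days")

Reporting "between those two bounds with 80% confidence" is a prediction in Deming's sense; a single-point "the project will take N days" is not.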
 

Categories: Project Management

Navigating the Way Home

Herding Cats - Glen Alleman - Wed, 08/27/2014 - 03:10

I came across a nice blog post from DelancyPlace about the navigation powers of birds, in this case the Manx shearwater.

This bird was taken from Wales to Venice, Italy, released, and found its way home in 14 days - 930 miles, over mountains.

To be able to find their way home from an unfamiliar place, birds must carry a figurative map and compass in their brains.

The map tells them where they are, and the compass tells them which direction to fly, even when they are released with no frame of reference to their home loft.

Projects Are Not Birds

As project managers, what's our map and compass? How can we navigate from the start of the project to the end, even though we haven't been on this path before?

How can we find our way Home?

We have a map. It starts with a Capabilities Based Plan. The CBP states what Done looks like in units of measure meaningful to the decision makers. These units of measure are Measures of Effectiveness and Measures of Performance.

  • Measures of Effectiveness - are operational measures of success that are closely related to the achievement of the mission or operational objectives evaluated in the operational environment, under a specific set of conditions.

  • Measures of Performance - characterize physical or functional attributes relating to the system operation, measured or estimated under specific conditions.

These measures speak to our home and the attributes of the home. The map that gets us home is the Integrated Master Plan. It shows the increasing maturity of the deliverables that implement the Measures of Performance, and those performance items enable the project to produce the needed capabilities that effectively accomplish the mission or fulfill the business need.

Below is a map of increasing value delivery for an insurance company. The map shows the path, or actually the paths, to home. Home is the ability to generate value from the money exchanged to develop the software.

Project Maturity Flow is the Incremental Delivery of Business Value

Related articles:
  • Golden Ratio
  • Managing In The Presence Uncertainty
  • Impact Mapping and Integrated Master Planning
  • We Can Know the Business Value of What We Build
  • All Project Work is Probabilistic Work
  • 5 Questions That Need Answers for Project Success
Categories: Project Management

Traceability: An Approach Mixing CMMI and Agile

Traceability becomes a tool that can bridge the gaps caused by less than perfect involvement.


Traceability is an important tool in software engineering and a core tenet of the CMMI. It is used as a tool for the management and control of requirements. Controlling and understanding the flow of requirements puts a project manager’s hand on the throttle of the project by regulating the flow of work through it. However, traceability is both hard to accomplish and requires focused application to derive value. When does the control it generates represent the proper hand on the throttle, and when is it a lead foot on the brake?

The implementation of traceability sets the stage for the struggle over processes mandated by management or the infamous “model”. Developers actively resist process when they perceive that the effort isn’t directly leading to functionality that can be delivered, and is therefore not delivering value to their customers. In the end, traceability, like insurance, is best when you don’t need the information it provides to sort out uncontrolled project changes or functionality delivered that is not related to requirements.

Identifying both the projects and the audience that can benefit from traceability is paramount for implementing and sustaining the process.  Questions that need to be asked and addressed include:

  • Is the need for control for all types of projects the same?
  • Is the value-to-effort ratio from tracing requirements the same for all projects?
  • What should be evaluated when determining whether to scale the traceability process?

Scaling is a necessary step to extract the maximum value from any methodology component, traceability included, regardless of whether the project is plan-driven or Agile. A process is needed to ensure that traceability is applied based on a balance between process, effort and complexity.

The concept of traceability acts as a lightning rod for the perceived excesses of the CMMI (and by extension all other model-based improvement methods). I will explore a possible approach for scaling traceability. My approach bridges the typical approach (leveraging matrices and requirement tools) with an approach that trades documentation for intimate user involvement. It uses a simple set of three criteria (complexity, user involvement and criticality) to determine where a project should focus its traceability effort on a continuum between documentation and involvement.

Traceability becomes a tool that can bridge the gaps caused by less than perfect involvement, a complex project, and increased criticality.  The model we will propose provides a means to apply traceability in a scaled manner so that it fits a project’s need and is not perceived as a one size fits all approach.


Categories: Process Management

Chrome - Firefox WebRTC Interop Test - Pt 1

Google Testing Blog - Tue, 08/26/2014 - 22:09
by Patrik Höglund

WebRTC enables real time peer-to-peer video and voice transfer in the browser, making it possible to build, among other things, a working video chat with a small amount of Python and JavaScript. As a web standard, it has several unusual properties which make it hard to test. A regular web standard generally accepts HTML text and yields a bitmap as output (what you see in the browser). For WebRTC, we have real-time RTP media streams on one side being sent to another WebRTC-enabled endpoint. These RTP packets have been jumping across NAT, through firewalls and perhaps through TURN servers to deliver hopefully stutter-free and low latency media.

WebRTC is probably the only web standard in which we need to test direct communication between Chrome and other browsers. Remember, WebRTC builds on peer-to-peer technology, which means we talk directly between browsers rather than through a server. Chrome, Firefox and Opera have announced support for WebRTC so far. To test interoperability, we set out to build an automated test to ensure that Chrome and Firefox can get a call up. This article describes how we implemented such a test and the tradeoffs we made along the way.

Calling in WebRTC

Setting up a WebRTC call requires passing SDP blobs over a signaling connection. These blobs contain information on the capabilities of the endpoint, such as what media formats it supports and what preferences it has (for instance, perhaps the endpoint has VP8 decoding hardware, which means the endpoint will handle VP8 more efficiently than, say, H.264). By sending these blobs the endpoints can agree on what media format they will be sending between themselves and how to traverse the network between them. Once that is done, the browsers will talk directly to each other, and nothing gets sent over the signaling connection.

Figure 1. Signaling and media connections.
How these blobs are sent is up to the application. Usually the browsers connect to some server which mediates the connection between the browsers, for instance by using a contact list or a room number. The AppRTC reference application uses room numbers to pair up browsers and sends the SDP blobs from the browsers through the AppRTC server.

Test Design

Instead of designing a new signaling solution from scratch, we chose to use the AppRTC application we already had. This has the additional benefit of testing the AppRTC code, which we are also maintaining. We could also have used the small peerconnection_server binary and some JavaScript, which would give us additional flexibility in what to test. We chose to go with AppRTC since it effectively implements the signaling for us, leading to much less test code.

We assumed we would be able to get hold of the latest nightly Firefox and be able to launch that with a given URL. For the Chrome side, we assumed we would be running in a browser test, i.e. on a complete Chrome with some test scaffolding around it. For the first sketch of the test, we imagined just connecting the browsers to the live apprtc.appspot.com with some random room number. If the call got established, we would be able to look at the remote video feed on the Chrome side and verify that video was playing (for instance using the video+canvas grab trick). Furthermore, we could verify that audio was playing, for instance by using WebRTC getStats to measure the audio track energy level.

Figure 2. Basic test design.
However, since we like tests to be hermetic, this isn’t a good design. I can see several problems. For example, the network between us and AppRTC could be unreliable. Also, what if someone has occupied myroomid? If that were the case, the test would fail and we would be none the wiser. So to make this thing work, we would have to find some way to bring up the AppRTC instance on localhost to make our test hermetic.

Bringing up AppRTC on localhost

AppRTC is a Google App Engine application. As this hello world example demonstrates, one can test applications locally with
google_appengine/dev_appserver.py apprtc_code/

So why not just call this from our test? It turns out we need to solve some complicated problems first, like how to ensure the AppEngine SDK and the AppRTC code are actually available on the executing machine, but we’ll get to that later. Let’s assume for now that stuff is just available. We can now write the browser test code to launch the local instance:
bool LaunchApprtcInstanceOnLocalhost() {
  // ... Figure out locations of SDK and apprtc code ...
  CommandLine command_line(CommandLine::NO_PROGRAM);
  EXPECT_TRUE(GetPythonCommand(&command_line));

  command_line.AppendArgPath(appengine_dev_appserver);
  command_line.AppendArgPath(apprtc_dir);
  command_line.AppendArg("--port=9999");
  command_line.AppendArg("--admin_port=9998");
  command_line.AppendArg("--skip_sdk_update_check");

  VLOG(1) << "Running " << command_line.GetCommandLineString();
  return base::LaunchProcess(command_line, base::LaunchOptions(),
                             &dev_appserver_);
}

That’s pretty straightforward [1].

Figuring out Whether the Local Server is Up

Then we ran into a very typical test problem. We have the code to get the server up, and launching the two browsers to connect to http://localhost:9999?r=some_room is easy. But how do we know when to connect? When I first ran the test, it would work sometimes and sometimes not, depending on whether the server had had time to come up.

It’s tempting in these situations to just add a sleep to give the server time to get up. Don’t do that. That will result in a test that is flaky and/or slow. In these situations we need to identify what we’re really waiting for. We could probably monitor the stdout of the dev_appserver.py and look for some message that says “Server is up!” or equivalent. However, we’re really waiting for the server to be able to serve web pages, and since we have two browsers that are really good at connecting to servers, why not use them? Consider this code.
bool LocalApprtcInstanceIsUp() {
  // Load the admin page and see if we manage to load it right.
  ui_test_utils::NavigateToURL(browser(), GURL("localhost:9998"));
  content::WebContents* tab_contents =
      browser()->tab_strip_model()->GetActiveWebContents();
  std::string javascript =
      "window.domAutomationController.send(document.title)";
  std::string result;
  if (!content::ExecuteScriptAndExtractString(tab_contents,
                                              javascript,
                                              &result))
    return false;

  return result == kTitlePageOfAppEngineAdminPage;
}

Here we ask Chrome to load the AppEngine admin page for the local server (we set the admin port to 9998 earlier, remember?) and ask it what its title is. If that title is “Instances”, the admin page has been displayed, and the server must be up. If the server isn’t up, Chrome will fail to load the page and the title will be something like “localhost:9999 is not available”.

Then, we can just do this from the test:
while (!LocalApprtcInstanceIsUp())
  VLOG(1) << "Waiting for AppRTC to come up...";

If the server never comes up, for whatever reason, the test will just time out in that loop. If it comes up we can safely proceed with the rest of the test.

Launching the Browsers

A browser window launches itself as a part of every Chromium browser test. It’s also easy for the test to control the command line switches the browser will run under.

We have less control over the Firefox browser since it is the “foreign” browser in this test, but we can still pass command-line options to it when we invoke the Firefox process. To make this easier, Mozilla provides a Python library called mozrunner. Using that we can set up a launcher python script we can invoke from the test:
from mozprofile import profile
from mozrunner import runner

WEBRTC_PREFERENCES = {
    'media.navigator.permission.disabled': True,
}

def main():
    # Set up flags, handle SIGTERM, etc
    # ...
    firefox_profile = profile.FirefoxProfile(preferences=WEBRTC_PREFERENCES)
    firefox_runner = runner.FirefoxRunner(
        profile=firefox_profile, binary=options.binary,
        cmdargs=[options.webpage])

    firefox_runner.start()

Notice that we need to pass special preferences to make Firefox accept the getUserMedia prompt. Otherwise, the test would get stuck on the prompt and we would be unable to set up a call. Alternatively, we could employ some kind of clickbot to click “Allow” on the prompt when it pops up, but that is way harder to set up.

Without going into too much detail, the code for launching the browsers becomes
GURL room_url =
    GURL(base::StringPrintf("http://localhost:9999?r=room_%d",
                            base::RandInt(0, 65536)));
content::WebContents* chrome_tab =
    OpenPageAndAcceptUserMedia(room_url);
ASSERT_TRUE(LaunchFirefoxWithUrl(room_url));

Where LaunchFirefoxWithUrl essentially runs this:
run_firefox_webrtc.py --binary /path/to/firefox --webpage http://localhost:9999?r=my_room

Now we can launch the two browsers. Next time we will look at how we actually verify that the call worked, and how we actually download all resources needed by the test in a maintainable and automated manner. Stay tuned!

[1] The explicit ports are there because the default ports collided on the bots we were running on, and --skip_sdk_update_check was added because the SDK would stop and ask us something if there was an update.

Categories: Testing & QA

Episode 208: Randy Shoup on Hiring in the Software Industry

With this episode, Software Engineering Radio begins a series of interviews on social/nontechnical aspects of working as a software engineer as Tobias Kaatz talks to Randy Shoup, former CTO at KIXEYE, about hiring in the software industry. Prior to KIXEYE, Randy worked as director of engineering at Google for the Google App Engine and as […]
Categories: Programming

Let’s Have a Difficult Conversation

Software Requirements Blog - Seilevel.com - Tue, 08/26/2014 - 17:00
No one wants to have a conversation that is going to be uncomfortable, potentially make himself seem difficult to work with, or put pressure on either party in the conversation, but sometimes it’s just unavoidable. Or is it? I have been witness to several projects lately that grossly underestimated the workload going in. This is […]
Categories: Requirements

Synchronize the Team

Xebia Blog - Tue, 08/26/2014 - 13:52

How can you, as a scrum master, improve the chances that the scrum team has a common vision and understanding of both the user story and the solution, from the start until the end of the sprint?   

The problem

The planning session is where the team should synchronize on understanding the user story and agree on how to build the solution. But there is no real validation that all the team members are on the same page. The team tends to dive into the technical details quite fast in order to identify and size the tasks. The technical details are often discussed by only a few team members and with little or no functional or business context. Once the team leaves the session, there is no guarantee that they remain synchronized when the sprint progresses. 

The only other team synchronization ritual, prescribed by the scrum process, is the daily scrum or stand-up. In most teams the daily scrum is as short as possible, avoiding semantic discussions. I also prefer the stand-ups to be short and sweet. So how can you or the team determine that the team is (still) synchronized?

Specify the story

In the planning session, after a story is considered ready enough to be pulled into the sprint, we start analyzing the story. This is the specification part, using a technique called ‘Specification by Example’. The idea is to write testable functional specifications with actual examples. We decompose the story into specifications and define the conditions of failure and success with examples, so they can be tested. Thinking of examples makes the specification more concrete and the interpretation of the requirements more specific.

Having the whole team work out the specifications and examples helps the team stay focused on the functional part of the story longer and in more detail, before shifting mindsets to the development tasks. Writing the specifications will also help to determine whether a story is ready enough. As the sprint progresses, once all the tests are green, the story should be done as far as building the functionality is concerned.

You can use a tool like FitNesse or Cucumber to write testable specifications. The tests are run against the actual code, so they provide an accurate view on the progress. When all the tests pass, the team has successfully created the functionality. In addition to the scrum board and burn down charts, the functional tests provide a good and accurate view on the sprint progress.
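To make this concrete, here is a minimal sketch of what such a testable specification might look like, written with the Python behave library rather than FitNesse or Cucumber themselves; the feature, the step wording, and the discount rule are hypothetical examples of mine, purely for illustration.

# Hypothetical feature file (features/discount.feature):
#
#   Feature: Order discount
#     Scenario: Large orders get a discount
#       Given an order of 12 items at 10.00 each
#       When the invoice is calculated
#       Then the invoice total is 108.00
#
# Step definitions (features/steps/discount_steps.py):
from behave import given, when, then

def calculate_invoice(quantity, unit_price):
    """Toy domain rule: 10% discount on orders of 10 items or more."""
    total = quantity * unit_price
    return total * 0.9 if quantity >= 10 else total

@given("an order of {quantity:d} items at {unit_price:f} each")
def step_given_order(context, quantity, unit_price):
    context.quantity = quantity
    context.unit_price = unit_price

@when("the invoice is calculated")
def step_when_calculate(context):
    context.total = calculate_invoice(context.quantity, context.unit_price)

@then("the invoice total is {expected:f}")
def step_then_total(context, expected):
    assert abs(context.total - expected) < 0.01

Because the steps execute against the real domain code, the red/green status of these scenarios gives the team the same accurate progress signal described above.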

Design the solution

Once the story has been decomposed into clear and testable specifications, we start creating a design on a whiteboard. The main goal is to create a shared, visible understanding of the solution, so avoid (technical) details to prevent big up-front designs and losing the involvement of the less technical members of the team. You can use whatever format works for your team (e.g. UML), but be sure it is comprehensible by everybody on the team.

The creation of the design, as an effort by the whole team, tends to spark discussion. Instead of relying on the consistency of non-visible mental images in the heads of team members, there is a tangible image shared with everyone.

The whiteboard design will be a good starting point for refinement as the team gains insight during the sprint. The whiteboard should always be visible and within reach of the team during the sprint. Using a whiteboard makes it easy to adapt or complement the design. You’ll notice the team standing around the whiteboard or pointing to it in discussions quite often.

The design can easily be turned into a digital artefact by taking a photo of it. A digital copy can be valuable to anyone wanting to learn the system in the future. The design could also be used in the sprint demo, should the audience be interested in a technical overview.

Conclusion

The team now leaves the sprint planning with a set of functional tests and a whiteboard design. The tests are useful to validate and synchronize on the functional goals. The whiteboard designs are useful to validate and synchronize on the technical goals. The shared understanding of the team is more visible and can be validated, throughout the sprint. The team has become more transparent.

It might be a good practice to have the developers write the specification, and the testers or analysts draw the designs on the board. This is to provoke more communication, by getting the people out of their comfort zone and forcing them to ask more questions.

There are more compelling reasons to implement (or not) something like Specification by Example or to have the team make design overviews. But it also helps the team to stay on the same page, when there are visible and testable artefacts to rely on during the sprint.

Quote of the Day

Herding Cats - Glen Alleman - Tue, 08/26/2014 - 13:16

"All the mathematical sciences are founded on relations between physical laws and laws of numbers, so that the aim of exact science is to reduce the problems of nature to the determination of quantities by operations with numbers."
— James Clerk Maxwell, On Faraday's Lines of Force, 1856

Categories: Project Management

Quote of the Month August 2014

From the Editor of Methods & Tools - Tue, 08/26/2014 - 09:55
We don’t mean that you should put on your Super Tester cape and go protect the world from bugs. There’s no room for big egos on agile teams. Your teammates share your passion for quality. Focus on the team’s goals and do what you can to help everyone do their best work. Source: Agile Testing, Lisa Crispin and Janet Gregory, Addison Wesley

Management is Prediction - W. Edwards Deming

Herding Cats - Glen Alleman - Tue, 08/26/2014 - 01:44

This statement, Management is Prediction, is the basis of the microeconomics of writing software for money, someone else's money.

Along with Daniel Boorstin's quote, The greatest obstacle to discovery is not ignorance – it is the illusion of knowledge. Boorstin was the Librarian of Congress. Another of his quotes was once the subtitle of this blog: I Write to Discover What I Think.

And here's what I think about the predictive nature of managing projects. If we don't know how to estimate the future impacts of our decisions, we have the illusion of knowledge, when in fact we have no knowledge.

There is a popular myth that estimating the future outcomes of our present-day decisions is somehow voodoo, guessing, making things up, and that our management will treat those estimates as commitments. It's clear we didn't pay attention in the probability and statistics class. Or worse yet, we have intentionally ignored what we did learn in that class, naively or deliberately ignoring the probabilistic nature of all project work.

All variables in project work are random variables. These variables are almost always coupled to each other in some way, usually a non-linear way. Fix one variable and the other two are still free to vary. Fix two variables and the third is still free to vary. Fix all three and they still vary. This is the primary reason margin is needed for a project to have a credible chance of showing up on time, on budget, and on value. These variances are irreducible, meaning they will vary no matter what you do. Fixing the budget only fixes the amount of money you've allocated to the project. It doesn't fix the cost to produce the value from the project, nor the time to produce that value.

(Figure: the three-legged picture of cost, schedule, and technical performance)
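As a rough numerical illustration of why irreducible variance forces margin (this sketch and all of its numbers are mine, not from the post), here is a small Monte Carlo comparison in Python of the probability of finishing by a committed date with and without schedule margin.

import random

def duration_samples(trials=10_000):
    """Sample the total duration (days) of three coupled, uncertain work streams."""
    samples = []
    for _ in range(trials):
        base = random.triangular(20, 45, 30)                     # core development
        integration = 0.4 * base + random.triangular(5, 20, 8)   # coupled to base
        hardening = random.triangular(3, 15, 5)
        samples.append(base + integration + hardening)
    return samples

def on_time_probability(deadline_days, samples):
    """Fraction of simulated outcomes that finish by the deadline."""
    return sum(d <= deadline_days for d in samples) / len(samples)

if __name__ == "__main__":
    samples = duration_samples()
    committed = 55   # sum of the most likely values: 30 + (0.4 * 30 + 8) + 5
    margin = 10      # hypothetical schedule margin in days
    print(f"P(on time), no margin:      {on_time_probability(committed, samples):.0%}")
    print(f"P(on time), {margin}-day margin: {on_time_probability(committed + margin, samples):.0%}")

The deterministic sum of most likely values looks achievable on paper, yet the simulation shows it is margin, not hope, that buys a credible chance of showing up on time.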

So Why Do We Estimate?

Whenever possible we should use evidence in making decisions about project variables, observing the behaviour of the variables to see what they are doing, where they are going, and what the next outcome might be.

This is the basis of the OODA Loop. Boyd was an Air Force fighter pilot and, like Jeff Sutherland of agile fame, understood that emerging situations were the norm in the skies over Vietnam, just as they are on projects.

The OODA loop is often popularized as four elements going around in a circle. The picture above is the only diagram Boyd ever drew, and Boyd's full OODA Loop describes the details. The title of the chart comes from one of our briefings for Managing in the Presence of Uncertainty. The domain where OODA is applicable is not restricted to DOD acquisition; it applies in any domain where money, time, and performance are at risk.

Just to push a little harder, when you hear about Clausewitz and Sun Tzu from agile coders, remember Boyd's comments when he spoke in public:

“Don’t be a member of Clausewitz’s school because a lot has happened since 1832,” he would warn his audiences, “and don’t be a member of Sun Tzu’s school because an awful lot has happened since 400 BC.”

So just like that 1968 reference to the Software Crisis - a lot has happened since then.

So Here's a Simple Fact

It is a fact often ignored by those wanting us to learn to decide without knowing the impacts of those decisions:

People learn better when they predict

Making an estimate about the future — predicting an outcome or impact — forces us to think ahead about those very outcomes. Making an estimate causes us to examine more deeply the system we are observing, our engagement with that system, and our reactions to it. It causes us to question our understanding of the system as it is now — in its current state — and what we would like it to be in its future state. As well, by making estimates about future outcomes, we can learn more about the management beliefs we hold as we examine the results of our predictions.

So like Deming tells us ...

Management is Prediction

Not making predictions about the impact of our decisions on the processes affected by those decisions - cost, schedule, and technical performance, shown in the three-legged picture above - ignores the principles of Deming. And when we do that, ...

... we're not managing how we spend other people's money.

The only way this notion can't be called bad management is if the amount of money at risk is low enough that those providing the money don't care if we waste it or not.

Related articles:
  • Making Estimates For Your Project Require Discipline, Skill, and Experience
  • Why We Must Learn to Estimate
  • The OODA Loop - Insight into Strategic Thinking
  • An Agile Estimating Story
  • Averages Without Variances are Meaningless - Or Worse Misleading
  • Four Critical Elements of Project Success
  • The Value of Information
  • What does a "broken OODA loop" look like?
  • How To Create Confusion About Root Causes
Categories: Project Management

Best Practices

Best practices aren’t magic and neither are goblins.

To paraphrase Edwin Starr, “Best Practices, huh, what are they good for? Absolutely nothing. Say it again . . .”

Every organization wants to use best practices. How many organizations do you know that would stand up and say we want to use average practices? Therefore a process with the moniker “best practice” on it has an allure that is hard to resist.  The problem is that one organization’s best practice is another’s average process, even if they produce the same quality and quantity of output.  Or even worse, one organization’s best practice might be beyond another organization’s reach.  The process reflects the overall organizational context.  It is possible that adopting a new process wholesale could produce output faster or better, but without tailoring, the chances are more random than many consultants would suggest. For example, just buying a configuration management tool without changing how you do configuration management will be less effective than melding the tool with your processes.  Tailoring allows you to use the process based on the attributes of the current organizational context, such as the organization’s overall size or the capabilities of the people involved.

An example of an organization’s best practice that might not translate to all of its competitors is the super-sophisticated inventory control systems used at Walmart. Would Walmart’s computer system help a local grocery store (let’s call it Hometown Grocery)? Not likely: the overhead of the same system would be beyond Hometown’s IT capabilities and budget.  However, if hundreds of Hometown Groceries banded together, the answer might be different (tailoring the process to the environmental context).  Without tailoring to the context, the best practice for Walmart would not be a best practice for our small-town grocery.

The term best practice gets thrown around as if there were a dusty old tome full of magical incantations that will solve any crisis regardless of context (assuming you are a seventh-level mage).  There are those that hold up the CMMI, ISO or SCRUM and shout (usually on email lists) that theirs is the only way.  Let’s begin by putting to rest the idea that there is a one-size-fits-all solution to every job.  There isn’t and there never was any such animal.  Any individual process, practice or step that worked wonderfully in the company down the street will not work the same way for you, especially if you try to do it the same way they did.  Software development and maintenance isn’t a chemical reaction, a Lego construct or even magic.  Best practices, what are they good for?  Fortunately a lot, if used correctly.

Best practices find their highest value as a tool for comparison, one you can use to expose the assumptions that have been used to build or evolve your own processes.  That knowledge allows you to challenge how and why you are doing any specific step and provides an opportunity for change.  How many companies have embraced the tenets of the Toyota Production System after benchmarking Toyota?

Adopting best practices without regard to your context may not yield the benefits found on the box.  If you read the small print you’d see a warning: use best practices only after reading all of the instructions and understanding your goals and your environment.  This is not to say that exemplary practices should not be aggressively studied and translated into your organization.  Ignoring new ideas because they did not grow out of your context is just as crazy as embracing best practices without understanding the context they were created in. Best practices make sense as an ideal, as a comparison that helps you understand your organization, not as plug-compatible modules.


Categories: Process Management

C# Records & Pattern Matching Proposal

Phil Trelford's Array - Mon, 08/25/2014 - 22:27

Following on from VB.Net’s new basic pattern matching support, the C# team has recently put forward a proposal for record types and pattern matching in C# which was posted in the Roslyn discussion area on CodePlex:

Pattern matching extensions for C# enable many of the benefits of algebraic data types and pattern matching from functional languages, but in a way that smoothly integrates with the feel of the underlying language. The basic features are: records, which are types whose semantic meaning is described by the shape of the data; and pattern matching, which is a new expression form that enables extremely concise multilevel decomposition of these data types. Elements of this approach are inspired by related features in the programming languages F# and Scala.

There has been a very active discussion on the forum ever since, particularly around syntax.

Background

Algebraic types and pattern matching have been a core language feature in functional-first languages like ML (early 70s), Miranda (mid 80s), Haskell (early 90s) and F# (mid 00s).

I like to think of records as part of a succession of data types in a language:

(Each entry gives the name, a description, and an F# example.)

  • Scalar (single values):
      let width = 1.0
      let height = 2.0
  • Tuple (multiple values):
      // Tuple of float * float
      let rect = (1.0, 2.0)
  • Record (multiple named fields):
      type Rect = {Width:float; Height:float}
      let rect = {Width=1.0; Height=2.0}
  • Sum type, single case (a tagged tuple):
      type Rect = Rect of float * float
      let rect = Rect(1.0,2.0)
  • Sum type, named fields (a tagged tuple with named fields):
      type Rect = Rect of width:float*height:float
      let rect = Rect(width=1.0,height=2.0)
  • Sum type, multi case (a union of tagged tuples):
      type Shape =
         | Circle of radius:float
         | Rect of width:float * height:float

Note: in F# sum types are also often referred to as discriminated unions or union types, and in functional programming circles algebraic data types tend to refer to tuples, records and sum types.

Thus in the ML family of languages records are like tuples with named fields. That is, where you use a tuple you could equally use a record instead to add clarity, but at the cost of defining a type. C#’s anonymous types fit a similar lightweight data type space, but as there is no type definition their scope is limited (pun intended).

For the most part I find myself pattern matching over tuples and sum types in F# (or in Erlang simply using tuples where the first element is the tag to give a similar effect).

Sum Types

The combination of sum types and pattern matching is for me one of the most compelling features of functional programming languages.

Sum types allow complex data structures to be succinctly modelled in just a few lines of code, for example here’s a concise definition for a generic tree:

type 'a Tree =
    | Tip
    | Node of 'a * 'a Tree * 'a Tree

Using pattern matching the values in a tree can be easily summed:

let rec sumTree tree =
    match tree with
    | Tip -> 0
    | Node(value, left, right) ->
        value + sumTree(left) + sumTree(right)

The technique scales up easily to domain models, for example here’s a concise definition for a retail store:

/// For every product, we store code, name and price
type Product = Product of Code * Name * Price

/// Different options of payment
type TenderType = Cash | Card | Voucher

/// Represents scanned entries at checkout
type LineItem = 
  | Sale of Product * Quantity
  | Cancel of int
  | Tender of Amount * TenderType

Class Hierarchies versus Pattern Matching

In class-based programming languages like C# and Java, classes are the primary data type  where (frequently mutable) data and related methods are intertwined. Hierarchies of related types are typically described via inheritance. Inheritance makes it relatively easy to add new types, but adding new methods or behaviour usually requires visiting the entire hierarchy. That said the compiler can help here by emitting an error if a required method is not implemented.

Sum types also describe related types, but data is typically separated from functions, where functions employ pattern matching to handle separate cases. This pattern matching based approach makes it easier to add new functions, but adding a new case may require visiting all existing functions. Again the compiler helps here by emitting a warning if a case is not covered.

Another subtle advantage of using sum types is being able to see the behaviour for all cases in a single place, which can be helpful for readability. This may also help when attempting to separate concerns; for example, if we want to add a method that prints to a device to a hierarchy of classes in C#, we could end up adding printer-related dependencies to all related classes. With a sum type, the printer functionality and related dependencies are more naturally encapsulated in a single module.

In F# you have the choice of class-based inheritance or sum types and can choose in-situ. In practice most people appear to use sum types most of the time.

C# Case Classes

The C# proposal starts with a simple “record” type definition:

public record class Cartesian(double x: X, double y: Y);

Which is not too dissimilar to an F# record definition, i.e.:

type Cartesian = { X: double; Y: double }

However from there it then starts to differ quite radically. The C# proposal allows a “record” to inherit from another class, in effect allowing sum types to be defined, i.e:

abstract class Expr; 
record class X() : Expr; 
record class Const(double Value) : Expr; 
record class Add(Expr Left, Expr Right) : Expr; 
record class Mult(Expr Left, Expr Right) : Expr; 
record class Neg(Expr Value) : Expr;

which allows pattern matching to be performed using an extended switch case statement:

switch (e) 
{ 
  case X(): return Const(1); 
  case Const(*): return Const(0); 
  case Add(var Left, var Right): 
    return Add(Deriv(Left), Deriv(Right)); 
  case Mult(var Left, var Right): 
    return Add(Mult(Deriv(Left), Right), Mult(Left, Deriv(Right))); 
  case Neg(var Value): 
    return Neg(Deriv(Value)); 
}

This is very similar to Scala case classes; in fact, change “record” to “case”, drop the semicolons, and voilà:

abstract class Term
case class Var(name: String) extends Term
case class Fun(arg: String, body: Term) extends Term
case class App(f: Term, v: Term) extends Term

To sum up, the proposed C# “record” classes appear to be case classes which support both single and multi case sum types.

Language Design

As someone who has to spend some of their time working in C# and who feels more productive having concise types and pattern matching in their toolbox, overall I welcome this proposal.

From my years of experience using F#, I feel it would be nice to see a simple safety feature added to what is in effect a sum type representation, so that sum types can be made exhaustive. This would allow compile time checks to ensure that all cases have been covered in a switch/case statement, and a warning given otherwise.

Then again, I feel this is quite a radical departure from the style of implementation I’ve seen in C# codebases in the wild, to the point where it’s starting to look like an entirely different language, and so this may be a feature that, if it does see the light of day, is likely to get more exposure in C# shops working on greenfield projects.

Categories: Programming

Powerful New Messaging Features with GCM

Android Developers Blog - Mon, 08/25/2014 - 18:51

By Subir Jhanb, Google Cloud Messaging team

Developers from all segments are increasingly relying on Google Cloud Messaging (GCM) to handle their messaging needs and make sure that their apps stay battery-friendly. GCM has been experiencing incredible momentum, with more than 100,000 apps registered, 700,000 QPS, and 300% QPS growth over the past year.

At Google I/O we announced the general availability of several GCM capabilities, including the GCM Cloud Connection Server, User Notifications, and a new API called Delivery Receipt. This post highlights the new features and how you can use them in your apps. You can watch these and other GCM announcements at our I/O presentation.

Two-way XMPP messaging with Cloud Connection Server

XMPP-based Cloud Connection Server (CCS) provides a persistent, asynchronous, bidirectional connection to Google servers. You can use the connection to send and receive messages between your server and your users' GCM-connected devices. Apps can now send upstream messages using CCS, without needing to manage network connections. This helps keep battery and data usage to a minimum. You can establish up to 100 XMPP connections and have up to 100 outstanding messages per connection. CCS is available for both Android and Chrome.

User notifications managed across multiple devices

Nowadays users have multiple devices and hence receive notifications multiple times. This can turn notifications from a useful feature into an annoyance. Thankfully, the GCM User Notifications API provides a convenient way to reach all devices for a user and helps you synchronise notifications, including dismissals - when the user dismisses a notification on one device, the notification disappears automatically from all the other devices. User Notifications is available on both HTTP and XMPP.

Insight into message status through delivery receipts

When sending messages to a device, a common request from developers is to get more insight into the state of the message and to know if it was delivered. This is now available using CCS with the new Delivery Receipt API. A receipt is sent as soon as the message is sent to the endpoint, and you can also use upstream messaging for app-level delivery receipts.

How to get started

If you’re already using GCM, you can take advantage of these new features right away. If you haven't used GCM yet, you’ll be surprised at how easy it is to set up — get started today! And remember, GCM is completely free no matter how big your messaging needs are.

To learn more about GCM and its new features — CCS, user notifications, and Delivery Receipt — take a look at the I/O Bytes video below and read our developer documentation.
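For readers who have not used GCM before, here is a minimal, hedged sketch in Python of the plain HTTP downstream send that predates these new features, using the documented https://android.googleapis.com/gcm/send endpoint; the API key and registration ID are placeholders, and this is an illustration rather than Google's own sample code.

import json
import urllib.request

GCM_SEND_URL = "https://android.googleapis.com/gcm/send"
API_KEY = "YOUR_SERVER_API_KEY"          # placeholder
REGISTRATION_ID = "DEVICE_REGISTRATION"  # placeholder

def send_downstream(data):
    """Send a simple data message to one device over GCM's HTTP interface."""
    payload = json.dumps({
        "registration_ids": [REGISTRATION_ID],
        "data": data,
    }).encode("utf-8")
    request = urllib.request.Request(
        GCM_SEND_URL,
        data=payload,
        headers={
            "Authorization": "key=" + API_KEY,
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    print(send_downstream({"message": "hello from the server"}))

The XMPP-based CCS described above replaces this one-shot HTTP request with a persistent, bidirectional connection, which is what makes upstream messages and delivery receipts possible.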




Categories: Programming

MixRadio Architecture - Playing with an Eclectic Mix of Services

This is a guest repost by Steve Robbins, Chief Architect at MixRadio.

At MixRadio, we offer a free music streaming service that learns from listening habits to deliver people a personalised radio station, at the single touch of a button. MixRadio marries simplicity with an incredible level of personalization, for a mobile-first approach that will help everybody, not just the avid music fan, enjoy and discover new music. It's as easy as turning on the radio, but you're in control - just one touch of Play Me provides people with their own personal radio station.

The service also offers hundreds of hand-crafted expert and celebrity mixes categorised by genre and mood for each region. You can also create your own artist mix, and mixes can be saved for offline listening during times without signal such as underground travel, as well as reducing data use and costs.

Our apps are currently available on Windows Phone, Windows 8, Nokia Asha phones and the web. We’ve spent years evolving a back-end that we’re incredibly proud of, despite being British! Here's an overview of our back-end architecture.

 

Architecture Overview
Categories: Architecture

How Can Enterprise Architects Drive Business Value the Agile Way?

An Enterprise Architect can have a tough job when it comes to driving value to the business.   With multiple stakeholders, multiple moving parts, and a rapid rate of change, delivering value is tough enough.   But what if you want to accelerate value and maximize business impact?

Enterprise Architects can borrow a few concepts from the Agile world to be much more effective in today’s world.

A Look Back at How Agile Helped Connect Development to Business Impact 


First, let’s take a brief look at traditional development and how it evolved.  Traditionally, IT departments focused on delivering value to the business by shipping big bang projects.   They would plan it, build it, test it, and then release it.   The measure of success was on time, on budget.   

Few projects ever shipped on time.  Few were ever on budget.  And very few ever met the requirements of the business.

Then along came Agile approaches and they changed the game.

One of the most important ideas was a shift away from thick requirements documentation to user stories.  Developers got customers telling stories about what they wanted the future solution to do.  For example, a user story for a sale representative might look like this:

“As a sales rep, I want to see my customer’s account information so that I can identify cross-sell and upsell opportunities.” 

The use of user stories accomplished several things.  First, user stories got the development teams talking to the business users.  Rather than throwing documents back and forth, people started having face-to-face communication to understand the user stories.  Second, user stories helped chunk bigger units of value down into smaller units of value.  Rather than a big bang project where all the value is promised at the end of some long development cycle, a development team could now ship the solution in increments, where each increment was a prioritized set of stories.  The user stories effectively created a shared language for value.

Third, it made it easier to test the delivery of value.  Now the user and the development team could test the solution against the user stories and acceptance criteria.  If the story met acceptance criteria, the user would acknowledge that the value was delivered.  In this way, the user stories created both a validation mechanism and a feedback loop for delivering and acknowledging value.

In the Agile world, bigger stories are called epics, and collections of stories are called themes.  Often a story starts off as an epic until it gets broken down into multiple stories.  What’s important here is that the collections of stories serve as a catalog of potential value.   Specifically, this catalog of stories reflects potential value with real stakeholders.  In this way, Agile helps drive customer focus and customer connection.  It’s really effective stakeholder management in action.

Agile approaches have been used in software projects large and small.  And they’ve forever changed how developers and project managers approach projects.

A Look at How Agile Can Help Enterprise Architecture Accelerate Business Value 


But how does this apply to Enterprise Architects?

As an Enterprise Architect, chances are you are responsible for achieving business outcomes.  You do this by driving business transformation.   The way you achieve business transformation is through driving capability change including business, people, and technical capabilities.

That’s a tall order.   And you need a way to chunk this up and make it meaningful to all the parties involved.

The Power of Scenarios as Units of Value for the Enterprise

This is where scenarios come into play.  Scenarios are a simple way to capture pains, needs and desired outcomes.   You can think of the desired outcome as the future capability vision.   It’s really a story that helps articulate the art of the possible.   More precisely, you can use scenarios to help build empathy with stakeholders for what value will look like, by painting a conceptual scene of the future.

An Enterprise scenario is simply a chunk of organizational change, typically about 3-5 business capabilities, 3-5 people capabilities, and 3-5 technical capabilities.

If that sounds like a lot of theory, let’s step into an example to show what it looks like in practice.

Let’s say you’re in a situation where you need to help a healthcare provider change their business.  

You can come up with a lot of scenarios, but it helps to start with the pains and needs of the business owner.  Otherwise, you might start going through a bunch of scenarios for the patients or for the doctors.  In this case, the business owner would be the Chief Medical Officer or the doctor of doctors.

Scenario: Tele-specialist for Healthcare

If we walk the pains, needs, and desired outcomes of the Chief Medical Officer, we might come up with a scenario that looks something like this, where the CURRENT STATE reflects the current pains, and needs, and the FUTURE STATE reflects the desired outcome.

CURRENT STATE

Here is an example of the CURRENT STATE portion of the scenario:

The Chief Medical Officer of Contoso Provider is struggling with increased costs and declining revenues. Costs are rising due to the Affordable Healthcare Act regulatory compliance requirements and increasing malpractice insurance premiums. Revenue is declining due to decreasing medical insurance payments per claim.

FUTURE STATE

Here is an example of the FUTURE STATE portion of the scenario:

Doctors can consult with patients, peers, and specialists from anywhere. Contoso provider's doctors can see more patients, increase accuracy of first time diagnosis, and grow revenues.



 

Storyboard for the Future Capability Vision

It helps to be able to picture what the Future Capability Vision might look like.   That’s where storyboarding can come in.  An Enterprise Architect can paint a simple scene of the future with a storyboard that shows the Future Capability Vision in action.  This practice lends itself to whiteboarding, and the beauty of a whiteboard is you can quickly elaborate where you need to, without getting mired in details.

(Figure: example storyboard of the Future Capability Vision)

As you can see in this example storyboard of the Future Capability Vision, we listed out some business benefits, which we could then drill down into relevant KPIs and value measures.  We’ve also outlined some building blocks required for this Future Capability Vision in the form of business capabilities and technical capabilities.

Now this simple approach accomplishes a lot.  It helps ensure that any technology solution actually connects back to business drivers and pains that a business decision maker actually cares about.  This gets their fingerprints on the solution concept.  And it creates a simple “flashcard” for value.  If we name the Enterprise scenario well, then we can use it as a handle to get back to the story of a better future that we created with the business.

The obvious thing this does, aside from connecting IT to the business, is it helps the business justify any investment in IT.

And all we did was walk through one Enterprise Scenario.  

But there is a lot more value to be found in the Enterprise.   We can literally explore and chunk up the value in the Enterprise if we take a step back and add another tool to our toolbelt:  the Scenario Chain.

Scenario Chain:  Chaining the Industry Scenarios to Enterprise Scenarios

The Scenario Chain is another powerful conceptual visualization tool.  It helps you quickly map out what’s happening in the marketplace in terms of industry drivers or industry scenarios.  You can then identify potential investment objectives.   These investment objectives lead to patterns of value or patterns of solutions in the Enterprise, which are effectively Enterprise scenarios.   From the Enterprise scenarios, you can then identify relevant usage scenarios.  The usage scenarios effectively represent new ways of working for the employees, or new interaction models with customers, which is effectively a change to your value stream.

(Figure: Scenario Chain)

With one simple glance, the Scenario Chain is a bird’s-eye view of how you can respond to the changing marketplace and how you can transform your business.   And, by using Enterprise scenarios, you can chunk up the change into meaningful units of value that reflect pains, needs, and desired outcomes for the business.  And, because you have the fingerprints of stakeholders from both business and IT, you’ve effectively created a shared vision for the future, that has business impact, a justification for investment, and it creates a pull-through mechanism for additional value, by driving the adoption of the usage scenarios.

Let’s elaborate on adoption and how scenarios can help accelerate business value.

Using Scenario to Drive Adoption and Accelerate Business Value

Driving adoption is a key way to realize the business value.  If nobody adopts the solution, then that’s what Gartner would call “Value Leakage.”  Value Realization really comes down to governance, measurement, and adoption.

With scenarios at your fingertips, you have a powerful way to articulate value, justify business cases, drive business transformation, and accelerate business value.   The key lies in using the scenarios as a unit of value, and focusing on scenarios as a way to drive adoption and change.

Here are three ways you can use scenarios to drive adoption and accelerate business value:

1.  Accelerate Business Adoption

One of the ways to accelerate business value is to accelerate adoption.    You can use scenarios to help enumerate specific behavior changes that need to happen to drive the adoption.   You can establish metrics and measures around specific behavior changes.   In this way, you make adoption a lot more specific, concrete, intentional, and tangible.

This approach is about doing the right things, faster.

2.  Re-Sequence the Scenarios

Another way to accelerate business value is to re-sequence the scenarios.   If your big bang is way at the end (way, way at the end), no good.  Sprinkle some of your bangs up front.   In fact, a great way to design for change is to build rolling thunder.   Put some of the scenarios up front that will get people excited about the change and directly experiencing the benefits.  Make it real.

The approach is about putting first things first.

3.  Identify Higher Value Scenarios

The third way to accelerate business value is to identify higher-value scenarios.   One of the things that happens along the way, is you start to uncover potential scenarios that you may not have seen before, and these scenarios represent orders of magnitude more value.   This is the space of serendipity.   As you learn more about users and what they value, and stakeholders and what they value, you start to connect more dots between the scenarios you can deliver and the value that can be realized (and therefore, accelerated.)

This approach is about trading up for higher value and more impact.

As you can see, Enterprise Architects can drive business value and accelerate business value realization by using scenarios and storyboarding.   It’s a simple and agile approach for connecting business and IT, and for shaping a more Agile Enterprise.

I’ll share more on this topic in future posts.   Value Realization is an art and a science and I’d like to reduce the gap between the state of the art and the state of the practice.

You Might Also Like

3 Ways to Accelerate Business Value

6 Steps for Enterprise Architecture as Strategy

Cognizant on the Next Generation Enterprise

Simple Enterprise Strategy

The Mission of Enterprise Services

The New Competitive Landscape

What Am I Doing on the Enterprise Strategy Team?

Why Have a Strategy?

Categories: Architecture, Programming

Anything Worth Doing is Worth Doing Right

Making the Complex Simple - John Sonmez - Mon, 08/25/2014 - 15:00

It seems that I am always in a rush. I find it very difficult to just do what I am doing without thinking about what is coming next or when I’ll be finished with whatever I am working on. Even as I am sitting and writing this blog post, I’m not really as immersed in […]

The post Anything Worth Doing is Worth Doing Right appeared first on Simple Programmer.

Categories: Programming

Capacity Planning and the Project Portfolio

I was problem-solving with a potential client the other day. They want to manage their project portfolio. They use Jira, so they think they can see everything everyone is doing. (I’m a little skeptical, but, okay.) They want to know how much the teams can do, so they can do capacity planning based on what the teams can do. (Red flag #1)

The worst part? They don’t have feature teams. They have component teams: front end, middleware, back end. You might, too. (Red flag #2)

Problem #1: They have a very large program, not a series of unrelated projects. They also have projects.

Problem #2: They want to use capacity planning, instead of flowing work through teams.

They are setting themselves up to optimize at the lowest level, instead of optimizing at the highest level of the organization.

If you read Manage Your Project Portfolio: Increase Your Capacity and Finish More Projects, you understand this problem. A program is a strategic collection of projects where the business value of all of the projects together is greater than that of any one project by itself. Each project has value. Yes. But all together, the program has much more value. You have to consider the program as a whole.

Don’t Predict the Project Portfolio Based on Capacity

If you are considering doing capacity planning on what the teams can do based on their estimation or previous capacity, don’t do it.

First, you can’t possibly know based on previous data. Why? Because the teams are interconnected in interesting ways.

When you have component teams, not feature teams, their interdependencies are significant and unpredictable. Your ability to predict the future based on past velocity? Zero. Nada. Zilch.

This is legacy thinking from waterfall. Well, you can try to do it this way. But you will be wrong in many dimensions:

  • You will make mistakes because of prediction based on estimation. Estimates are guesses. When you have teams using relative estimation, you have problems.
  • Your estimates will be off because of the silent interdependencies that arise from component teams. No one can predict these if you have large stories, even if you do awesome program management. The larger the stories, the more your estimates are off. The longer the planning horizon, the more your estimates are off.
  • You will miss all the great ideas for your project portfolio that arise from innovation that you can’t predict in advance. As the teams complete features, and as the product owners realize what the teams do, the teams and the product owners will have innovative ideas. You, the management team, want to be able to capitalize on this feedback.

It’s not that estimates are bad. It’s that estimates are off. The more teams you have, the less your estimates are normalized between teams. Your t-shirt sizes are not my Fibonacci numbers, are not that team’s swarming or mobbing. (It doesn’t matter if you have component teams or feature teams for this to be true.)

When you have component teams, you have the additional problem of not knowing how the interdependencies affect your estimates. Your estimates will be off, because no one’s estimates take the interdependencies into account.

You don’t want to normalize estimates among teams. You want to normalize story size. Once you make story size really small, it doesn’t matter what the estimates are.

When you  make the story size really small, the product owners are in charge of the team’s capacity and release dates. Why? Because they are in charge of the backlogs and the roadmaps.

The more a program stops trying to estimate at the low level, uses small stories, and manages interdependencies at the team level, the more momentum the program has.

The part where you gather all the projects? Do that part. You need to see all the work. Yes, that part works and helps the program see where it is going.

Use Value for the Project Portfolio

Okay, so you try to estimate the value of the features, epics, or themes in the roadmap of the project portfolio. Maybe you even use the cost of delay as Jutta and I suggest in Diving for Hidden Treasures: Finding the Real Value in Your Project Portfolio (yes, this book is still in progress). How will you know if you are correct?

You don’t. You see the demos the teams provide, and you reassess on a reasonable time basis. What’s reasonable? Not every week or two. Give the teams a chance to make progress. If people are multitasking, not more often than once every two months, or every quarter. They have to get to each project. Hint: stop the multitasking and you get tons more throughput.
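
If you want a feel for how cost of delay can rank work, here is a minimal sketch, with invented feature names and numbers, that orders items by cost of delay divided by duration (sometimes called CD3). It is only an illustration of the arithmetic, not something from the book:

;; Illustrative only: the features and the numbers are made up.
;; CD3 = cost of delay (value lost per week) / duration (weeks to deliver).
(def features
  [{:name "Feature A" :cost-of-delay 10000 :duration 4}
   {:name "Feature B" :cost-of-delay 3000  :duration 1}
   {:name "Feature C" :cost-of-delay 8000  :duration 8}])

(defn cd3 [{:keys [cost-of-delay duration]}]
  (/ cost-of-delay duration))

;; Highest CD3 first: Feature B (3000), then A (2500), then C (1000).
(reverse (sort-by cd3 features))

Even this toy example shows why the cheapest-looking item is not always the one to do first.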

Categories: Project Management

Vert.x with core.async. Handling asynchronous workflows

Xebia Blog - Mon, 08/25/2014 - 12:00

Anyone who has written code that has to coordinate complex asynchronous workflows knows it can be a real pain, especially when you limit yourself to using only callbacks directly. Various tools have arisen to tackle these issues, like Reactive Extensions and JavaScript promises.

Clojure's answer comes in the form of core.async: An implementation of CSP for both Clojure and Clojurescript. In this post I want to demonstrate how powerful core.async is under a variety of circumstances. The context will be writing a Vert.x event-handler.

Vert.x is a young, lightweight, polyglot, high-performance, event-driven application platform on top of the JVM. It has an actor-like concurrency model, where the coarse-grained actors (called verticles) can communicate over a distributed event bus. Although Vert.x is still quite young, it's sure to grow into a big player in the future of the reactive web.

Scenarios

The scenario is as follows. Our verticle registers a handler on some address and depends on 3 other verticles.

1. Composition

Imagine the new Mars rover got stuck against some Mars rock and we need to send it instructions to destroy the rock with its inbuilt laser. Also imagine that the controlling software is written with Vert.x. There is a single verticle responsible for handling the necessary steps:

  1. Use the sensor to locate the position of the rock
  2. Use the position to scan hardness of the rock
  3. Use the hardness to calibrate and fire the laser. Report back status
  4. Report success or failure to the main caller

As you can see, in each step we need the result of the previous step, meaning composition.
A straightforward callback-based approach would look something like this:

(ns example.verticle
  (:require [vertx.eventbus :as eb]))

(eb/on-message
  "console.laser"
  (fn [instructions]
    (let [reply-msg eb/*current-message*]
      (eb/send "rover.scope" (scope-msg instructions)
        (fn [coords]
          (eb/send "rover.sensor" (sensor-msg coords)
            (fn [data]
              (let [power (calibrate-laser data)]
                (eb/send "rover.laser" (laser-msg power)
                  (fn [status]
                    (eb/reply* reply-msg (parse-status status))))))))))))

A code structure quite typical of composed async functions. Now let's bring in core.async:

(ns example.verticle
  (:refer-clojure :exclude [send])
  (:require [vertx.eventbus :as eb]
            [clojure.core.async :refer [go chan put! <!]]))

(defn send [addr msg]
  (let [ch (chan 1)]
    (eb/send addr msg #(put! ch %))
    ch))

(eb/on-message
  "console.laser"
  (fn [instructions]
    (go (let [coords (<! (send "rover.scope" (scope-msg instructions)))
              data (<! (send "rover.sensor" (sensor-msg coords)))
              power (calibrate-laser data)
              status (<! (send "rover.laser" (laser-msg power)))]
          (eb/reply (parse-status status))))))

We created our own reusable send function which returns a channel on which the result of eb/send will be put. Apart from the fact that the sends are still asynchronous, the go block now reads almost like plain sequential code.

2. Concurrent requests

Another thing we might want to do is query different handlers concurrently. Although we could use composition, this is not very performant, because we do not need to wait for the reply from service-A before calling service-B.

As a concrete example, imagine we need to collect atmospheric data about some geographical area in order to make a weather forecast. The data will include the temperature, humidity and wind speed, which are requested from three different, independent services. Once all three asynchronous requests return, we can create a forecast and reply to the main caller. But how do we know when the last callback has fired? With plain callbacks we would need to keep some memory (mutable state) which is updated when each of the callbacks fires, and process the data when the last one returns.

core.async easily accommodates this scenario without adding extra mutable state for coordination inside your handlers. The state is contained in the channel.

(eb/on-message
  "forecast.report"
  (fn [coords]
    (let [ch (chan 3)]
      (eb/send "temperature.service" coords #(put! ch {:temperature %}))
      (eb/send "humidity.service" coords #(put! ch {:humidity %}))
      (eb/send "wind-speed.service" coords #(put! ch {:wind-speed %}))
      (go (let [data (merge (<! ch) (<! ch) (<! ch))
                forecast (create-forecast data)]
            (eb/reply forecast))))))

3. Fastest response

Sometimes there are multiple services at your disposal providing similar functionality and you just want the fastest one. With just a small adjustment, we can make the previous code work for this scenario as well.

(eb/on-message
  "server.request"
  (fn [msg]
    (let [ch (chan 3)]
      (eb/send "service-A" msg #(put! ch %))
      (eb/send "service-B" msg #(put! ch %))
      (eb/send "service-C" msg #(put! ch %))
      (go (eb/reply (<! ch))))))

We just take the first result on the channel and ignore the other results. After the go block has replied, there are no more takers on the channel. The results from the services that were too late are still put on the channel, but after the request finished, there are no more references to it and the channel with the results can be garbage-collected.

4. Handling timeouts and choice with alts!

We can create timeout channels that close themselves after a specified amount of time. Closed channels cannot be written to anymore, but any messages in the buffer can still be read. After that, every read will return nil.
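
As a minimal illustration of that behaviour (assuming timeout, go and <! are referred in from clojure.core.async):

(let [t-ch (timeout 1000)]
  (go
    ;; parks for ~1 second until the timeout channel closes, then returns nil
    (println "first take:" (<! t-ch))
    ;; the channel is already closed, so this returns nil immediately
    (println "second take:" (<! t-ch))))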

One thing core.async provides that most other tools don't is choice. From the examples:

One killer feature for channels over queues is the ability to wait on many channels at the same time (like a socket select). This is done with `alts!!` (ordinary threads) or `alts!` in go blocks.

This, combined with timeout channels, gives us the ability to wait on a channel up to a maximum amount of time before giving up. By adjusting example 2 a bit:

(eb/on-message
  "forecast.report"
  (fn [coords]
    ;; assumes timeout, alts! and go-loop are also referred in from clojure.core.async
    (let [ch (chan)
          t-ch (timeout 3000)]
      (eb/send "temperature.service" coords #(put! ch {:temperature %}))
      (eb/send "humidity.service" coords #(put! ch {:humidity %}))
      (eb/send "wind-speed.service" coords #(put! ch {:wind-speed %}))
      (go-loop [n 3 data {}]
        (if (pos? n)
          ;; alts! returns a [value channel] pair; a nil value means the timeout channel closed
          (if-some [result (first (alts! [ch t-ch]))]
            (recur (dec n) (merge data result))
            (eb/fail 408 "Request timed out"))
          (eb/reply (create-forecast data)))))))

This will do the same thing as before, but we will wait a total of 3s for the requests to finish; otherwise we reply with a timeout failure. Notice that we did not put the timeout parameter in the Vert.x API call of eb/send. Having a first-class timeout channel allows us to coordinate these timeouts much more easily than adding timeout parameters and failure callbacks.

Wrapping up

The above scenarios are clearly simplified to focus on the different workflows, but they should give you an idea of how to start using core.async in Vert.x.

One question that arose for me, and the original motivation for this blog post, was whether core.async plays nicely with Vert.x. Verticles are single-threaded by design, while core.async introduces background threads to dispatch go blocks or state-machine callbacks. Since the dispatched go blocks carry the correct message context, the functions eb/send, eb/reply, etc. can be called from these go blocks and all goes well.

There is of course a lot more to core.async than is shown here. But that is a story for another blog.

How Not to Standardize Testing (ISO 29119)

James Bach’s Blog - Mon, 08/25/2014 - 09:15

Many years ago I took a management class. One of the exercises we did was on achieving consensus. My group did not reach an agreement because I wouldn’t lower my standards. I wanted to discuss the matter further, but the other guys grew tired of arguing with me and declared “consensus” over my objections. This befuddled me, at first. The whole point of the exercise was to reach a common decision, and we had failed, by definition, to do that– so why declare consensus at all? It’s like getting checkmated in chess and then declaring that, well, you still won the part of the game that you cared about… the part before the checkmate.

Later I realized this is not so bizarre. What they had effectively done is ostracize me from the team. They had changed the players in the game. The remaining team did come to consensus. In the years since, I have found that changing the boundaries or membership of a community is indeed an important pillar of consensus building. I have used this tactic many times to avoid unhelpful debate. It is one reason why I say that I’m a member of the Context-Driven School of Testing. My school does not represent all schools, and the other schools do not represent mine. Therefore, we don’t need consensus with them.

Then what about ISO 29119?

The ISO organization claims to have a new standard for software testing. But ISO 29119 is not a standard for testing. It cannot be a standard for testing.

A standard for testing would have to reflect the values and practices of the world community of testers. Yet the concerns of the Context-Driven School of thought, which has been in development for at least 15 years, have been ignored and our values shredded by this so-called standard and the process used to create it. They have done this by excluding us. There are two organizations explicitly devoted to Context-Driven values (AST and ISST) and our community holds several major conferences a year. Members of our community speak at all the major practitioner conferences, and our ideas are widely cited. Some of the most famous testers in the world, including me, are Context-Driven testers. We exist, and together with the Agilists, we are the source of nearly every new idea in testing in the last decade.

The reason they have excluded us is that they know we won’t agree to any simplistic standard based on templates or simple formulae. We know those things look pretty but they don’t help. If ISO doesn’t exclude us, they worry they will never finish. They know we will challenge their evidence, and even their ethics and basic competence. This is why I say the craft is not ready for standards. It will be years before all the recognized experts in testing can come together and agree on anything substantial.

The people running the ISO effort know exactly who we are. I personally have had multiple public debates with Stuart Reid, on stage. He cannot pretend we don’t exist. He cannot pretend we are some sort of lunatic fringe. Tens of thousands of testers have watched my video lectures or bought my books. This is not a case where ISO can simply declare us to be outsiders.

The Burden of Proof

The Context-Driven community stands for excellence in testing. This is why we must reject this depraved attempt by ISO to grab power and assert control over our craft. Our craft is still an open marketplace of ideas, and it is full of strong debates. We must protect that marketplace and allow it to evolve. I want the fair chance to put my competitors out of business (or get them to change their business) with the high quality of my work. Context-Driven testing has been growing in strength and numbers over the years, whereas this ISO effort appears to be a job protection program for people who can’t stomach debate. They can’t win the debate, so they want to remake the rules.

The burden of proof is not on me or any of us to show that the standard is wrong, nor is it our job to make it right. The burden is on those who claim that the craft can be standardized to study the craft and recognize and resolve the deep differences among us. Failing that, there can be no ethical or rational basis for standardization.

This blog post puts me on record as opposing the ISO 29119 standard. Together with my colleagues, we constitute a determined and sustained and principled opposition.

Categories: Testing & QA

Docker on a raspberry pi

Xebia Blog - Mon, 08/25/2014 - 07:11

This blog describes how easy it is to use docker in combination with a Raspberry Pi. Because of docker, deploying software to the Raspberry Pi is a piece of cake.

What is a raspberry pi?
The Raspberry Pi is a credit-card sized computer that plugs into your TV and a keyboard. It is a capable little computer which can be used in electronics projects and for many of the things your desktop PC does, like spreadsheets, word processing and games. It also plays high-definition video. A Raspberry Pi runs Linux, has a 700 MHz ARM processor and 512 MB of internal memory. Last but not least, it only costs around 35 euros.

A Raspberry Pi model B

Because of its price, size and performance, the Raspberry Pi is a step towards the 'Internet of Things'. With a Raspberry Pi it is possible to control and connect almost anything to anything else. For instance, my home project is a Raspberry Pi controlling a robot.

Raspberry Pi in action

What is docker?
Docker is an open platform for developers and sysadmins to build, ship and run distributed applications. With Docker, developers can build any app in any language using any toolchain. “Dockerized” apps are completely portable and can run anywhere. A dockerized app contains the application, its environment, dependencies and even the OS.

Why combine docker and raspberry pi?
It is nice to work with a Raspberry Pi because it is a great platform to connect devices. Deploying anything, however, is kind of a pain. With dockerized apps we can develop and test our application on our own home machine, and once it works we can deploy it to the Raspberry Pi. We can do this without any pain or worries about corruption of the underlying operating system and tools. And last but not least, you can easily undo your experiments.

What is better than I expected
First of all, it was relatively easy to install Docker on the Raspberry Pi. When you use the Arch Linux operating system, Docker is already part of the package manager! I expected to have to do a lot of cross-compiling of the Docker application, because the Raspberry Pi uses an ARM architecture (instead of the default x86 architecture), but someone had already done that for me!

Second of all, there are a bunch of ready-to-use Docker images especially for the Raspberry Pi. To run dockerized applications on the Raspberry Pi you depend on base images, and these base images must also support the ARM architecture. For each situation there is an image, whether you want to run node.js, Python, Ruby or just Java.

The thing that worried me most was the performance of running containerized software on a Raspberry Pi. But it all went well and I did not notice any performance reduction. Docker requires far fewer resources than running virtual machines: a Docker process runs straight on the host, giving native CPU performance, with only a small overhead for memory and network.

What I don't like about docker on a raspberry pi
Docker's slogan 'build, ship and run any app anywhere' is not entirely valid here. You cannot build your Dockerfile on your local machine and deploy the resulting image directly to your Raspberry Pi. This is because every Dockerfile starts from a base image. For running your application on your local machine, you need an x86-based Docker image; for your Raspberry Pi you need an ARM-based image. That is a pity, because it means you can only build a Docker image for your Raspberry Pi on the Raspberry Pi itself, which is slow.

I tried several things.

  1. I used the emulator QEMU to emulate the Raspberry Pi on a fast MacBook. But, because of the inefficiency of the emulation, it is just as slow as building your Dockerfile on a Raspberry Pi.
  2. I tried cross-compiling. This wasn't possible, because the commands in your dockerfile are replayed on a running image and the running raspberry-pi image can only be run on ... a raspberry pi.

How to run a simple node.js application with docker on a raspberry pi  

Step 1: Installing Arch Linux
The first step is to install Arch Linux on an SD card for the Raspberry Pi. The preferred OS for the Raspberry Pi is a Debian-based OS, Raspbian, which is nicely configured to work with a Raspberry Pi. But in this case Arch Linux is better, because we use the OS only to run Docker on it, and Arch Linux is a much smaller and more barebones OS. The best way is to follow the steps at http://archlinuxarm.org/platforms/armv6/raspberry-pi. In my case, I use version 3.12.20-4-ARCH. In addition to the tutorial:

  1. After downloading the image, install it on a sd-card by running the command:
    sudo dd if=path_of_your_image.img of=/dev/diskn bs=1m
  2. When there is no HDMI output at boot, remove the config.txt on the SD-card. It will magically work!
  3. Login using root / root.
  4. Arch Linux will use 2 GB by default. If you have a SD-card with a higher capacity you can resize it using the following steps http://gleenders.blogspot.nl/2014/03/raspberry-pi-resizing-sd-card-root.html

Step 2: Installing a wifi dongle
In my case I wanted to connect a wireless dongle to the raspberry pi, by following these simple steps

  1. Install the wireless tools:
        pacman -Syu
        pacman -S wireless_tools
        
  2. Setup the configuration, by running:
    wifi-menu
  3. Autostart the wifi with:
        netctl list
        netctl enable wlan0-[name]
    

Because the Raspberry Pi is now connected to the network, you are able to SSH to it.
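
For example, with the placeholder address below replaced by whatever IP address your Pi received from DHCP:

ssh root@192.168.1.50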

Step 3: Installing docker
The actual install of Docker is relatively easy. There is a Docker version compatible with the ARM processor that is used in the Raspberry Pi, and it is part of the Arch Linux package manager. The packaged version is 1.0.0, while at the time of writing this blog the current Docker release is 1.1.2. The features missing from the packaged version are:

  1. Enhanced security for the LXC driver.
  2. .dockerignore support.
  3. Pause containers during docker commit.
  4. Add --tail to docker logs.

You install Docker and start it as a service on system boot with the commands:

pacman -S docker
systemctl enable docker
Installing docker with pacman
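
If you want to verify the daemon before moving on, starting the service and asking it for info should be enough (docker info is a standard Docker CLI command; the service name docker matches the package installed above):

systemctl start docker
docker info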

Step 4: Run a single nodejs application
After we've installed Docker on the Raspberry Pi, we want to run a simple node.js application. The application we will deploy is inspired by the node.js web app in the tutorial on the Docker website: https://github.com/enokd/docker-node-hello/. This node.js application prints a "hello world" in the web browser. We have to change the Dockerfile to:

# DOCKER-VERSION 1.0.0
FROM resin/rpi-raspbian

# install required packages
RUN apt-get update
RUN apt-get install -y wget dialog

# install nodejs
RUN wget http://node-arm.herokuapp.com/node_latest_armhf.deb
RUN dpkg -i node_latest_armhf.deb

COPY . /src
RUN cd /src; npm install

# run application
EXPOSE 8080
CMD ["node", "/src/index.js"]

And it works!

The webpage that runs in nodejs on a docker image on a raspberry pi

 

Just by following these four little steps, you are able to use Docker on your Raspberry Pi. Good luck!