Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Traceability: Putting the Model Into Action

Three core concepts.

My model for scaling traceability is based on an assumption that there is a relationship between customer involvement, criticality and complexity.  This yields the level of documentation required to achieve the benefits of traceability.  The model leverages an assessment of project attributes that define the three common concepts.  The concepts are:

  • Customer involvement in the project
  • Complexity of the functionality being delivered
  • Criticality of the project

A thumbnail definition of each of the three concepts begins with customer involvement, which is defined as the amount of time and effort applied to a project in a positive manner by the primary users of the project.  The second concept, complexity, is a measure of the number of project properties that are outside the normal expectations as perceived by the project team (the norm is relative to the organization or project group rather than to any external standard).  The final concept, criticality, is defined as the attributes defining the quality, state or degree of being of the highest importance (again relative to the organization or group doing the work).  We will unpack these concepts and examine them in greater detail as we peel away the layers of the model.

The Model

[Figure: the traceability scaling model]

The process for using the model is a simple set of steps:
1. Get a project (and team members)
2. Assess the project’s attributes
3. Plot the results on the model
4. Interpret the findings
5. Reassess as needed
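To make steps 2 through 4 concrete, here is a toy sketch of how an assessment might be turned into a traceability recommendation. It is not the author's actual model: the 1-5 rating scale, the scoring rule and the thresholds are all invented purely for illustration.

# Toy sketch: turn three team ratings (1-5) into a suggested
# traceability approach. Scoring rule and thresholds are invented.
def traceability_level(involvement, complexity, criticality):
    # High customer involvement reduces the documentation needed;
    # complexity and criticality increase it.
    score = (complexity + criticality) - involvement
    if score <= 2:
        return "lightweight: conversation and story cards"
    elif score <= 5:
        return "moderate: trace key requirements in a matrix"
    else:
        return "full: requirements tool with end-to-end tracing"

# Example assessment: involved customer, modest complexity, average criticality.
print(traceability_level(involvement=4, complexity=2, criticality=3))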

The model is built for project environments. Don’t have a project you say!  Get one, I tell you! Can’t get one? This model will be less useful, but not useless.

Who Is Involved And When Will They Be Involved:

Implementing the traceability model assessment works best when the team (or a relevant subset) charged with doing the work conducts the assessment of project attributes.  The use of team members turns Putt’s theory of “Competence Inversion” on its head by focusing project-level competencies on defining the impact of specific attributes.  Using a number of team members also provides a basis for consistency if assessments are performed again later in the project.

While the assessment process is best done by a cross-functional team, it can also be performed by those in the project governance structure alone.  The smaller the group involved in the assessment, the more open and honest the communication between the assessment group and the project team must be, or the exercise will be just another process inflicted on the team.  Regardless of size, the assessment team needs to include technical competence, which is especially useful when appraising complexity and is also a good tool for selling the results of the process to the rest of the project team.  Regardless of the deployment model, the diversity of thought generated in cross-functional groups provides the breadth of knowledge needed to apply the model (this suggestion is based on feedback from process users).  The use of cross-functional groups becomes even more critical for large projects and/or projects with embedded sub-projects.  In a situation where the discussion will be contentious or the participating group will be large, I suggest using a facilitator to ensure an effective outcome.

An approach I suggest for integrating the assessment process into your current methodology is to incorporate the assessment as part of your formal risk assessment.  An alternative for smaller projects is to perform the assessment process during the initial project planning activities or in a sprint zero (if used).  This will minimize the impact of yet another assessment.

In larger projects where the appraisal outcome may vary across teams or sub-projects, thoughtful discussion will be required to determine whether the lowest common denominator will drive the results or whether a mixed approach is needed.  Use of this method in the real world suggests that in large projects/programs the highest or lowest common denominator is seldom universally useful.  The need for scalability should be addressed at the level where it makes sense for the project, which may mean that sub-projects are treated differently.


Categories: Process Management

What is your next step in Continuous Delivery? Part 1

Xebia Blog - Wed, 08/27/2014 - 21:15

Continuous Delivery helps you deliver software faster, with better quality and at lower cost. Who doesn't want to deliver software faster, better and cheaper? I certainly want that!

No matter how good you are at Continuous Delivery, you can always do one step better. Even if you are as good as Google or Facebook, you can still do one step better. Myself included, I can do one step better.

But even if you are just getting started with Continuous Delivery, there is a feasible next step to take you forward.

In this series, I describe a plan that helps you determine where you are right now and what your next step should be. To be complete, I'll start at the very beginning. I expect most of you have passed the first steps already.

The steps you already took

This is the first part in the series: What is your next step in Continuous Delivery? I'll start with three steps combined in a single post, because the great majority of you have gone through these steps already.

Step 0: Your very first lines of code

Do you remember the very first lines of code you wrote? Perhaps as a student or maybe before that as a teenager? Did you use version control? Did you bring it to a test environment before going to production? I know I did not.

None of us was born with an innate skill for delivering software in a certain way. However, many of us are taught a certain way of delivering software that is still a long way from Continuous Delivery.

Step 1: Version control

At some point during your study or career, you were introduced to Version Control. I remember starting with CVS, migrating to Subversion and currently using Git. Each of these systems is an improvement over the previous one.

It is common to store the source code for your software in version control. Do you already have definitions or scripts for your infrastructure in version control? And for your automated acceptance tests or database schemas? In later steps, we'll get back to that.

Step 2: Release process

Your current release process may be far from Continuous Delivery. Despite appearances, your current release process is a useful step towards Continuous Delivery.

Even if you deliver to production less than twice a year, you are better off than a company that delivers its code unpredictably, untested and unmanaged. Or worse, a company that edits its code directly on a production machine.

In your delivery process, you have planning, control, a production-like testing environment, actual testing and maintenance after the go-live. The main difference with Continuous Delivery is the frequency and the amount of software that is released at the same time.

So yes, a release process is a productive step towards Continuous Delivery. Now let's see if we can optimize beyond this manual release process.

Step 3: Scripts

Imagine you have issues on your production server... Who do you go to for help? Do you have someone in mind?

Let me guess, you are thinking about a middle-aged guy who has been working at your organisation for 10+ years. Even if your organization is only 3 years old, I bet he's been working there for more than 10 years. Or at least, it seems like it.

My next guess is that this guy wrote some scripts to automate recurring tasks and make his life easier. Am I right?

These scripts are an important step towards Continuous Delivery. In fact, Continuous Delivery is all about automating repetitive tasks. The only thing that falls short is that these scripts are a one-man initiative. It is a good initiative, but there is no strategy behind it and it lacks management support.

If you don't have this guy working for you, then you may have a bigger step to take when continuing towards the next step of Continuous Delivery. To successfully adopt Continuous Delivery on the long run, you are going to need someone like him.
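As a concrete illustration, here is a minimal sketch of the kind of recurring-task script such a person might write; the release directory and the 30-day retention policy are hypothetical.

# Hypothetical housekeeping script: delete release artifacts older
# than KEEP_DAYS from RELEASE_DIR. Paths and policy are invented.
import os
import time

RELEASE_DIR = "/var/releases"
KEEP_DAYS = 30

def clean_old_releases():
    cutoff = time.time() - KEEP_DAYS * 24 * 3600
    for name in os.listdir(RELEASE_DIR):
        path = os.path.join(RELEASE_DIR, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            print("removed", path)

if __name__ == "__main__":
    clean_old_releases()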

Following steps

In the next parts, we will look at the following steps towards becoming a world champion at delivering software:

  • Step 4: Continuous Delivery
  • Step 5: Continuous Deployment
  • Step 6: "Hands-off"
  • Step 7: High Scalability

Stay tuned for the following posts.

How To Use Personas and Scenarios to Drive Adoption and Realize Value

Personas and scenarios can be a powerful tool for driving adoption and business value realization.

All too often, people deploy technology without fully understanding the users that it’s intended for. 

Worse, if the technology does not get used, the value does not get realized.

Keep in mind that the value is in the change.  

The change takes the form of doing something better, faster, cheaper, and behavior change is really the key to value realization.

If you deploy a technology, but nobody adopts it, then you won’t realize the value.  It’s a waste.  Or, more precisely, it’s only potential value.  It’s only potential value because nobody has used it to change their behavior to be better, faster, or cheaper with the new technology.  

In fact, you can view change in terms of behavior changes:

What should users START doing or STOP doing, in order to realize the value?

Behavior change becomes a useful yardstick for evaluating adoption and consumption of technology, and a significant proxy for value realization.

What is a Persona?

I’ve written about personas before  in Actors, Personas, and Roles, MSF Agile Persona Template, and Personas at patterns & practices, and Microsoft Research has a whitepaper called Personas: Practice and Theory.

A persona, simply defined, is a fictitious character that represents a user type.  Personas are the “who” in the organization.  You use them to create familiar faces and to inspire project teams to know their clients, as well as to build empathy and clarity around the user base.

Using personas helps characterize sets of users.  It’s a way to capture and share details about what a typical day looks like and what sorts of pains, needs, and desired outcomes the personas have as they do their work. 

You need to know how work currently gets done so that you can provide relevant changes with technology, plan for readiness, and drive adoption through specific behavior changes.

Using personas can help you realize more value, while avoiding “value leakage.”

What is a Scenario?

When it comes to users, and what they do, we're talking about usage scenarios.  A usage scenario is a story or narrative in the form of a flow.  It shows how one or more users interact with a system to achieve a goal.

You can picture usage scenarios as high-level storyboards.  Here is an example:

[Figure: example solution storyboard]

In fact, since scenario is often an overloaded term, if people get confused, I just call them Solution Storyboards.

To figure out relevant usage scenarios, we need to figure out the personas that we are creating solutions for.

Workforce Analysis with Personas

In practice, you would segment the user population, and then assign personas to the different user segments.  For example, let’s say there are 20,000 employees.  Let’s say that 3,000 of them are business managers, let’s say that 6,000 of them are sales people.  Let’s say that 1,000 of them are product development engineers.   You could create a persona named Mary to represent the business managers, a persona named Sally to represent the sales people, and a persona named Bob to represent the product development engineers.

This sounds simple, but it’s actually powerful.  If you do a good job of workforce analysis, you can better determine how many users a particular scenario is relevant for.  Now you have some numbers to work with.  This can help you quantify business impact.   This can also help you prioritize.  If a particular scenario is relevant for 10 people, but another is relevant for 1,000, you can evaluate actual numbers.
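As a quick sketch of that arithmetic, here the population figures above are combined with a scenario mapping that mirrors the table below; the two scenarios shown are just a sample of the full matrix.

# Sketch: quantify how many users each scenario touches, using the
# example population figures; mappings follow the table below.
populations = {"Mary": 3000, "Sally": 6000, "Bob": 1000,
               "Jill": 5000, "Jack": 5000}
scenarios = {
    "Scenario 2": ["Mary", "Sally"],
    "Scenario 6": ["Mary", "Sally", "Bob", "Jill", "Jack"],
}

for scenario, personas in scenarios.items():
    reach = sum(populations[p] for p in personas)
    print(scenario, "is relevant for", reach, "users")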

|                 | Persona 1 "Mary" | Persona 2 "Sally" | Persona 3 "Bob" | Persona 4 "Jill" | Persona 5 "Jack" |
| --------------- | ---------------- | ----------------- | --------------- | ---------------- | ---------------- |
| User Population | 3,000            | 6,000             | 1,000           | 5,000            | 5,000            |
| Scenario 1      | X                |                   |                 |                  |                  |
| Scenario 2      | X                | X                 |                 |                  |                  |
| Scenario 3      |                  |                   | X               |                  |                  |
| Scenario 4      |                  |                   |                 | X                | X                |
| Scenario 5      | X                |                   |                 |                  |                  |
| Scenario 6      | X                | X                 | X               | X                | X                |
| Scenario 7      | X                | X                 |                 |                  |                  |
| Scenario 8      |                  |                   | X               | X                |                  |
| Scenario 9      | X                | X                 | X               | X                | X                |
| Scenario 10     |                  | X                 |                 | X                |                  |

Analyzing a Persona

Let’s take Bob for example.  As a product development engineer, Bob designs and develops new product concepts.  He would love to collaborate better with his distributed development team, and he would love better feedback loops and interaction with real customers.

We can drill in a little bit to get a better picture of his work as a product development engineer.

Here are a few ways you can drill in:

  • A Day in the Life – We can shadow Bob for a day and get a feel for the nature of his work.  We can create  a timeline for the day and characterize the types of activities that Bob performs.
  • Knowledge and Skills - We can identify the knowledge Bob needs and the types of skills he needs to perform his job well.  We can use this as input to design more effective readiness plans.
  • Enabling Technologies –  Based on the scenario you are focused on, you can evaluate the types of technologies that Bob needs.  For example, you can identify what technologies Bob would need to connect and interact better with customers.

Another approach is to focus on the roles, responsibilities, challenges, work-style, needs and wants.  This helps you understand which solutions are appropriate, what sort of behavior changes would be involved, and how much readiness would be required for any significant change.

At the end of the day, it always comes down to building empathy, understanding, and clarity around pains, needs, and desired outcomes.

Persona Creation Process

Here’s an example of a high-level process for persona creation:

  1. Kickoff workshop
  2. Interview users
  3. Create skeletons
  4. Validate skeletons
  5. Create final personas
  6. Present final personas

Doing persona analysis is actually pretty simple.  The challenge is that people don’t do it, or they make a lot of assumptions about what people actually do and what their pains and needs really are.  When’s the last time somebody asked you what your pains and needs are, or what you need to perform your job better?

A Story of Using Personas to Create the Future of Digital Banking

In one example I know of, a large bank transformed itself by focusing on its personas and scenarios.

It started with one usage scenario:

Connect with customers wherever they are.

This scenario was driven from pain in the business.  The business was out of touch with customers, and it was operating under a legacy banking model.  This simple scenario reflected an opportunity to change how employees connect with customers (through Cloud, Mobile, and Social).

On the customer side of the equation, customers could now have virtual face-to-face communication from wherever they are.  On the employee side, it enabled a flexible work-style, helped employees pair up with each other for great customer service, and provided better touch and connection with the customers they serve.

And in the grand scheme of things, this helped transform a brick-and-mortar bank to a digital bank of the future, setting a new bar for convenience, connection, and collaboration.

Here is a video that talks through the story of one bank’s transformation to the digital banking arena:

Video: NedBank on The Future of Digital Banking

In the video, you’ll see Blessing Sibanyoni, one of Microsoft’s Enterprise Architects in action.

If you’re wondering how to change the world, you can start with personas and scenarios.

You Might Also Like

Scenarios in Practice

How I Learned to Use Scenarios to Evaluate Things

How Can Enterprise Architects Drive Business Value the Agile Way?

Business Scenarios for the Cloud

IT Scenarios for the Cloud

Categories: Architecture, Programming

The 1.2M Ops/Sec Redis Cloud Cluster Single Server Unbenchmark

This is a guest post by Itamar Haber, Chief Developers Advocate, Redis Labs.

While catching up with the world the other day, I read through the High Scalability guest post by Anshu and Rajkumar from Aerospike (great job btw). I really enjoyed the entire piece and was impressed by the heavy tweaking they did to their EC2 instance to get to the 1M mark, but I kept wondering - how would Redis do?

I could have done a full-blown benchmark. But doing a full-blown benchmark is a time- and resource-consuming ordeal. And that's without taking into account the initial difficulties of comparing apples, oranges and other sorts of fruits. A real benchmark is a trap, for it is no more than an effort deemed from inception to be backlogged. But I wanted an answer, and I wanted it quick, so I was willing to make a few sacrifices to get it. That meant doing the next best thing - an unbenchmark.

An unbenchmark is, by (my very own) definition, nothing like a benchmark (hence the name). In it, you cut every corner and relax every assumption to get a quick 'n dirty ballpark figure. Leaning heavily on the expertise of the guys in our labs, we measured the performance of our Redis Cloud software without any further optimizations. We ran our unbenchmark with the following setup:

Categories: Architecture

Quote of the Day - All Things Project Are Probabilistic

Herding Cats - Glen Alleman - Wed, 08/27/2014 - 16:28

As far as the laws of mathematics refer to reality, they are not certain, as far as they are certain, they do not refer to reality.

— Albert Einstein, quoted in Ronald Paul Sherwin, The Tao of Systems Engineering: An Engineer's Survival Guide (2014), pp. 195-197, Kindle Edition.

Whenever you hear that we can't predict the future, think again. We can always predict the future; it is the level of confidence in that prediction that is in question.

When you hear that estimating is guessing, think again: that person doesn't understand probability and statistics. When you hear that we don't need to predict to make decisions, that person has very little at risk from the decision, since making decisions in the absence of knowing the possible loss ignores the principles of the microeconomics of everyday life.

Whenever you hear that we don't need to estimate the outcomes of our decisions, think again. We can skip estimating those outcomes only if they are of low enough value that we don't care about the consequences of not knowing, to some level of confidence, what happens as a result of our decision; that is, we're willing to write off our loss if we're wrong.

When we hear any conjecture involving mathematics that does not address the foundations of the mathematical principles under discussion, remember Einstein, and also remember how to apply that advice in the specific domain and context of the question, guided by Deming.

Management is Prediction 


Since management is prediction, knowing how to make predictions using statistical methods to produce a confidence interval about the probabilistic outcomes of those business decisions is part of management. When we want a seat at the table where management decisions are being made, knowing this and being able to add value to the decision process is the price of entry to that room. Otherwise we're labor sitting outside the room waiting for the decisions to be made.
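As a minimal sketch of what that looks like in practice, here is a confidence interval computed from a handful of historical outcomes; the sample data is invented and the normal approximation is a simplification.

# Sketch: a ~95% confidence interval for expected cost overrun,
# from invented historical samples, using a normal approximation.
import math
import statistics

cost_overruns = [0.05, 0.12, 0.08, 0.20, 0.15, 0.10, 0.18, 0.07]

mean = statistics.mean(cost_overruns)
sem = statistics.stdev(cost_overruns) / math.sqrt(len(cost_overruns))
low, high = mean - 1.96 * sem, mean + 1.96 * sem
print(f"expected overrun {mean:.1%}, 95% CI [{low:.1%}, {high:.1%}]")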
 

Categories: Project Management

Navigating the Way Home

Herding Cats - Glen Alleman - Wed, 08/27/2014 - 03:10

I came across a nice blog post from DelancyPlace about the navigation powers of birds.

This bird, a Manx shearwater, was taken from Wales to Venice, Italy, and released; it found its way home in 14 days: 930 miles, over mountains.

To be able to find their way home from an unfamiliar place, birds must carry a figurative map and compass in their brains.

The map tells them where they are, and the compass tells them which direction to fly, even when they are released with no frame of reference to their home loft.

Projects Are Not Birds

As project managers, what's our map and compass? How can we navigate from the start of the project to the end, even though we haven't been on this path before?

How can we find our way Home?

We have a map. It starts with a Capabilities Based Plan. The CBP states what Done looks like in units of measure meaningful to the decision makers. These units of measure are Measures of Effectiveness and Measures of Performance.

  • Measures of Effectiveness - are operational measures of success that are closely related to the achievements of the mission or operational objectives evaluated in the operational environment, under a specific set of conditions.

  • Measures of Performance - characterize physical or functional attributes relating to the system operation, measured or estimated under specific conditions.

These measures speak to our home and the attributes of that home. The map that gets us home is the Integrated Master Plan. This shows the increasing maturity of the deliverables that implement the Measures of Performance, and those Performance items enable the project to produce the capabilities needed to effectively accomplish the mission or fulfill the business need.

This looks like a map of increasing value delivery for an insurance company. The map shows the path, or actually paths, home. Home is the ability to generate value from the exchange of money to develop the software.

Project Maturity Flow is the Incremental Delivery of Business Value

Related articles:
  • Golden Ratio
  • Managing In The Presence Uncertainty
  • Impact Mapping and Integrated Master Planning
  • We Can Know the Business Value of What We Build
  • All Project Work is Probabilistic Work
  • 5 Questions That Need Answers for Project Success
Categories: Project Management

Traceability: An Approach Mixing CMMI and Agile

Traceability becomes a tool that can bridge the gaps caused by less than perfect involvement.

Traceability is an important tool in software engineering and a core tenet of the CMMI.  It is used as a tool for the management and control of requirements. Controlling and understanding the flow of requirements puts a project manager’s hand on the throttle of the project by controlling the flow of work through it. However, traceability is both hard to accomplish and requires focused application to derive value. When does the control generated represent the proper hand on the throttle, and when is it a lead foot on a brake?

The implementation of traceability sets the stage for the struggle over processes mandated by management or the infamous “model”.  Developers actively resist process when they perceive that the effort isn’t directly leading to functionality that can be delivered and therefore, not delivering value to their customers.  In the end, traceability, like insurance, is best when you don’t need the information it provides to sort out uncontrolled project changes or delivering functionality not related to requirements.

Identifying both the projects and the audience that can benefit from traceability is paramount for implementing and sustaining the process.  Questions that need to be asked and addressed include:

  • Is the need for control for all types of projects the same?
  • Is the value-to-effort ratio from tracing requirements the same for all projects?
  • What should be evaluated when determining whether to scale the traceability process?

Scalability is a needed step to extract the maximum value from any methodology component, traceability included, regardless of whether the project is plan-driven or Agile. A process is needed to ensure that traceability occurs based on a balance between process, effort and complexity.

The concept of traceability acts as a lightning rod for the perceived excesses of the CMMI (and by extension all other model-based improvement methods).  I will explore a possible approach for scaling traceability.  My approach bridges the typical approach (leveraging matrices and requirements tools) with an approach that trades documentation for intimate user involvement. It uses a simple set of three criteria (complexity, user involvement and criticality) to determine where a project should focus its traceability effort on a continuum between documentation and involvement.

Traceability becomes a tool that can bridge the gaps caused by less than perfect involvement, a complex project, and increased criticality.  The model we will propose provides a means to apply traceability in a scaled manner so that it fits a project’s need and is not perceived as a one-size-fits-all approach.


Categories: Process Management

Chrome - Firefox WebRTC Interop Test - Pt 1

Google Testing Blog - Tue, 08/26/2014 - 22:09
by Patrik Höglund

WebRTC enables real time peer-to-peer video and voice transfer in the browser, making it possible to build, among other things, a working video chat with a small amount of Python and JavaScript. As a web standard, it has several unusual properties which makes it hard to test. A regular web standard generally accepts HTML text and yields a bitmap as output (what you see in the browser). For WebRTC, we have real-time RTP media streams on one side being sent to another WebRTC-enabled endpoint. These RTP packets have been jumping across NAT, through firewalls and perhaps through TURN servers to deliver hopefully stutter-free and low latency media.

WebRTC is probably the only web standard in which we need to test direct communication between Chrome and other browsers. Remember, WebRTC builds on peer-to-peer technology, which means we talk directly between browsers rather than through a server. Chrome, Firefox and Opera have announced support for WebRTC so far. To test interoperability, we set out to build an automated test to ensure that Chrome and Firefox can get a call up. This article describes how we implemented such a test and the tradeoffs we made along the way.

Calling in WebRTC

Setting up a WebRTC call requires passing SDP blobs over a signaling connection. These blobs contain information on the capabilities of the endpoint, such as what media formats it supports and what preferences it has (for instance, perhaps the endpoint has VP8 decoding hardware, which means the endpoint will handle VP8 more efficiently than, say, H.264). By sending these blobs the endpoints can agree on what media format they will be sending between themselves and how to traverse the network between them. Once that is done, the browsers will talk directly to each other, and nothing gets sent over the signaling connection.

Figure 1. Signaling and media connections.
How these blobs are sent is up to the application. Usually the browsers connect to some server which mediates the connection between the browsers, for instance by using a contact list or a room number. The AppRTC reference application uses room numbers to pair up browsers and sends the SDP blobs from the browsers through the AppRTC server.
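To illustrate the role the signaling server plays, here is a toy in-memory relay that pairs endpoints by room id; this is an invented sketch, not the actual AppRTC implementation.

# Toy signaling relay: browsers in the same room exchange SDP blobs
# through the server; once connected they talk peer-to-peer.
from collections import defaultdict

rooms = defaultdict(list)  # room id -> SDP blobs waiting for the peer

def post_sdp(room_id, sdp_blob):
    # Called by one endpoint to publish its offer/answer.
    rooms[room_id].append(sdp_blob)

def poll_sdp(room_id):
    # Called by the other endpoint to drain whatever its peer posted.
    blobs, rooms[room_id] = rooms[room_id], []
    return blobs

post_sdp("room_42", '{"type": "offer", "sdp": "..."}')
print(poll_sdp("room_42"))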

Test Design

Instead of designing a new signaling solution from scratch, we chose to use the AppRTC application we already had. This has the additional benefit of testing the AppRTC code, which we are also maintaining. We could also have used the small peerconnection_server binary and some JavaScript, which would give us additional flexibility in what to test. We chose to go with AppRTC since it effectively implements the signaling for us, leading to much less test code.

We assumed we would be able to get hold of the latest nightly Firefox and be able to launch that with a given URL. For the Chrome side, we assumed we would be running in a browser test, i.e. on a complete Chrome with some test scaffolding around it. For the first sketch of the test, we imagined just connecting the browsers to the live apprtc.appspot.com with some random room number. If the call got established, we would be able to look at the remote video feed on the Chrome side and verify that video was playing (for instance using the video+canvas grab trick). Furthermore, we could verify that audio was playing, for instance by using WebRTC getStats to measure the audio track energy level.

Figure 2. Basic test design.
However, since we like tests to be hermetic, this isn’t a good design. I can see several problems. For example, the network between us and AppRTC could be unreliable. Also, what if someone has occupied myroomid? If that were the case, the test would fail and we would be none the wiser. So to make this thing work, we would have to find some way to bring up the AppRTC instance on localhost to make our test hermetic.

Bringing up AppRTC on localhost

AppRTC is a Google App Engine application. As this hello world example demonstrates, one can test applications locally with
google_appengine/dev_appserver.py apprtc_code/

So why not just call this from our test? It turns out we need to solve some complicated problems first, like how to ensure the AppEngine SDK and the AppRTC code is actually available on the executing machine, but we’ll get to that later. Let’s assume for now that stuff is just available. We can now write the browser test code to launch the local instance:
bool LaunchApprtcInstanceOnLocalhost() {
  // ... Figure out locations of SDK and apprtc code ...
  CommandLine command_line(CommandLine::NO_PROGRAM);
  EXPECT_TRUE(GetPythonCommand(&command_line));

  command_line.AppendArgPath(appengine_dev_appserver);
  command_line.AppendArgPath(apprtc_dir);
  command_line.AppendArg("--port=9999");
  command_line.AppendArg("--admin_port=9998");
  command_line.AppendArg("--skip_sdk_update_check");

  VLOG(1) << "Running " << command_line.GetCommandLineString();
  return base::LaunchProcess(command_line, base::LaunchOptions(),
                             &dev_appserver_);
}

That’s pretty straightforward [1].

Figuring out Whether the Local Server is Up

Then we ran into a very typical test problem. We have the code to get the server up, and launching the two browsers to connect to http://localhost:9999?r=some_room is easy. But how do we know when to connect? When I first ran the test, it would work sometimes and sometimes not, depending on whether the server had time to come up.

It’s tempting in these situations to just add a sleep to give the server time to get up. Don’t do that. That will result in a test that is flaky and/or slow. In these situations we need to identify what we’re really waiting for. We could probably monitor the stdout of the dev_appserver.py and look for some message that says “Server is up!” or equivalent. However, we’re really waiting for the server to be able to serve web pages, and since we have two browsers that are really good at connecting to servers, why not use them? Consider this code.
bool LocalApprtcInstanceIsUp() {
  // Load the admin page and see if we manage to load it right.
  ui_test_utils::NavigateToURL(browser(), GURL("localhost:9998"));
  content::WebContents* tab_contents =
      browser()->tab_strip_model()->GetActiveWebContents();
  std::string javascript =
      "window.domAutomationController.send(document.title)";
  std::string result;
  if (!content::ExecuteScriptAndExtractString(tab_contents,
                                              javascript,
                                              &result))
    return false;

  return result == kTitlePageOfAppEngineAdminPage;
}

Here we ask Chrome to load the AppEngine admin page for the local server (we set the admin port to 9998 earlier, remember?) and ask it what its title is. If that title is “Instances”, the admin page has been displayed, and the server must be up. If the server isn’t up, Chrome will fail to load the page and the title will be something like “localhost:9999 is not available”.

Then, we can just do this from the test:
while (!LocalApprtcInstanceIsUp())
  VLOG(1) << "Waiting for AppRTC to come up...";

If the server never comes up, for whatever reason, the test will just time out in that loop. If it comes up we can safely proceed with the rest of test.

Launching the Browsers

A browser window launches itself as a part of every Chromium browser test. It’s also easy for the test to control the command line switches the browser will run under.

We have less control over the Firefox browser since it is the “foreign” browser in this test, but we can still pass command-line options to it when we invoke the Firefox process. To make this easier, Mozilla provides a Python library called mozrunner. Using that we can set up a launcher python script we can invoke from the test:
from mozprofile import profile
from mozrunner import runner

WEBRTC_PREFERENCES = {
    'media.navigator.permission.disabled': True,
}

def main():
    # Set up flags, handle SIGTERM, etc
    # ...
    firefox_profile = \
        profile.FirefoxProfile(preferences=WEBRTC_PREFERENCES)
    firefox_runner = runner.FirefoxRunner(
        profile=firefox_profile, binary=options.binary,
        cmdargs=[options.webpage])

    firefox_runner.start()
Notice that we need to pass special preferences to make Firefox accept the getUserMedia prompt. Otherwise, the test would get stuck on the prompt and we would be unable to set up a call. Alternatively, we could employ some kind of clickbot to click “Allow” on the prompt when it pops up, but that is way harder to set up.

Without going into too much detail, the code for launching the browsers becomes
GURL room_url =
    GURL(base::StringPrintf("http://localhost:9999?r=room_%d",
                            base::RandInt(0, 65536)));
content::WebContents* chrome_tab =
    OpenPageAndAcceptUserMedia(room_url);
ASSERT_TRUE(LaunchFirefoxWithUrl(room_url));

Where LaunchFirefoxWithUrl essentially runs this:
run_firefox_webrtc.py --binary /path/to/firefox --webpage http://localhost:9999?r=my_room

Now we can launch the two browsers. Next time we will look at how we actually verify that the call worked, and how we actually download all resources needed by the test in a maintainable and automated manner. Stay tuned!

[1] The explicit ports are because the default ports collided on the bots we were running on, and the --skip_sdk_update_check was because the SDK stopped and asked us something if there was an update.

Categories: Testing & QA

Episode 208: Randy Shoup on Hiring in the Software Industry

With this episode, Software Engineering Radio begins a series of interviews on social/nontechnical aspects of working as a software engineer as Tobias Kaatz talks to Randy Shoup, former CTO at KIXEYE, about hiring in the software industry. Prior to KIXEYE, Randy worked as director of engineering at Google for the Google App Engine and as […]
Categories: Programming

Let’s Have a Difficult Conversation

Software Requirements Blog - Seilevel.com - Tue, 08/26/2014 - 17:00
No one wants to have a conversation that is going to be uncomfortable, potentially make himself seem difficult to work with, or put pressure on either party in the conversation, but sometimes it’s just unavoidable. Or is it? I have been witness to several projects lately that grossly underestimated the workload going in. This is […]
Categories: Requirements

Synchronize the Team

Xebia Blog - Tue, 08/26/2014 - 13:52

How can you, as a scrum master, improve the chances that the scrum team has a common vision and understanding of both the user story and the solution, from the start until the end of the sprint?   

The problem

The planning session is where the team should synchronize on understanding the user story and agree on how to build the solution. But there is no real validation that all the team members are on the same page. The team tends to dive into the technical details quite fast in order to identify and size the tasks. The technical details are often discussed by only a few team members and with little or no functional or business context. Once the team leaves the session, there is no guarantee that they remain synchronized as the sprint progresses.

The only other team synchronization ritual, prescribed by the scrum process, is the daily scrum or stand-up. In most teams the daily scrum is as short as possible, avoiding semantic discussions. I also prefer the stand-ups to be short and sweet. So how can you or the team determine that the team is (still) synchronized?

Specify the story

In the planning session, after a story is considered ready enough to be pulled into the sprint, we start analyzing the story. This is the specification part, using a technique called ‘Specification by Example’. The idea is to write testable functional specifications with actual examples. We decompose the story into specifications and define the conditions of failure and success with examples, so they can be tested. Thinking of examples makes the specification more concrete and the interpretation of the requirements more specific.

Having the whole team work out the specifications and examples helps the team stay focussed on the functional part of the story longer and in more detail, before shifting mindsets to the development tasks.  Writing the specifications will also help determine whether a story is ready enough. Once the sprint progresses and all the tests are green, the story should be done as far as building the functionality is concerned.

You can use a tool like FitNesse or Cucumber to write testable specifications. The tests are run against the actual code, so they provide an accurate view of the progress. When all the tests pass, the team has successfully created the functionality. In addition to the scrum board and burn-down charts, the functional tests provide a good and accurate view of the sprint progress.
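As a minimal sketch of Specification by Example, here the examples are encoded as a parameterized test in plain pytest rather than FitNesse or Cucumber; the discount rule and its figures are hypothetical.

# Hypothetical spec: a loyalty discount expressed as testable examples.
import pytest

def discounted_total(order_total, loyalty_years):
    # Invented rule: 5% off after 2 loyalty years, 10% off after 5.
    if loyalty_years >= 5:
        return order_total * 0.90
    if loyalty_years >= 2:
        return order_total * 0.95
    return order_total

@pytest.mark.parametrize("total, years, expected", [
    (100.0, 0, 100.0),  # no discount
    (100.0, 2, 95.0),   # 5% discount
    (100.0, 5, 90.0),   # 10% discount
])
def test_discount_examples(total, years, expected):
    assert discounted_total(total, years) == expected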

Design the solution

Once the story has been decomposed into clear and testable specifications, we start creating a design on a whiteboard. The main goal is to create a shared, visible understanding of the solution, so avoid (technical) details to prevent big up-front designs and losing the involvement of the less technical members of the team. You can use whatever format works for your team (e.g. UML), but be sure it is comprehensible by everybody on the team.

The creation of the design, as an effort by the whole team, tends to spark discussion. Instead of relying on the consistency of non-visible mental images in the heads of team members, there is a tangible image shared with everyone.

The whiteboard design will be a good starting point for refinement as the team gains insight during the sprint. The whiteboard should always be visible and within reach of the team during the sprint. Using a whiteboard makes it easy to adapt or complement the design. You’ll notice the team standing around the whiteboard or pointing to it in discussions quite often.

The design can easily be turned into a digital artefact by taking a photo of it. A digital copy can be valuable to anyone wanting to learn the system in the future. The design could also be used in the sprint demo, should the audience be interested in a technical overview.

Conclusion

The team now leaves the sprint planning with a set of functional tests and a whiteboard design. The tests are useful to validate and synchronize on the functional goals. The whiteboard designs are useful to validate and synchronize on the technical goals. The shared understanding of the team is more visible and can be validated throughout the sprint. The team has become more transparent.

It might be a good practice to have the developers write the specification, and the testers or analysts draw the designs on the board. This is to provoke more communication, by getting the people out of their comfort zone and forcing them to ask more questions.

There are more compelling reasons to implement (or not) something like Specification by Example or to have the team make design overviews. But it also helps the team stay on the same page when there are visible and testable artefacts to rely on during the sprint.

Quote of the Day

Herding Cats - Glen Alleman - Tue, 08/26/2014 - 13:16

"All the mathematical sciences are founded on relations between physical laws and laws of numbers, so that the aim of exact science is to reduce the problems of nature to the determination of quantities by operations with numbers."
James Clerk Maxwell, On Faraday's Lines of Force, 1856

Categories: Project Management

Quote of the Month August 2014

From the Editor of Methods & Tools - Tue, 08/26/2014 - 09:55
We don’t mean that you should put on your Super Tester cape and go protect the world from bugs. There’s no room for big egos on agile teams. Your teammates share your passion for quality. Focus on the team’s goals and do what you can to help everyone do their best work. Source: Agile Testing, Lisa Crispin and Janet Gregory, Addison-Wesley

Management is Prediction - W. Edwards Deming

Herding Cats - Glen Alleman - Tue, 08/26/2014 - 01:44

This statement, Management is Prediction, is the basis of the microeconomics of writing software for money, someone else's money.

Along with Daniel Boorstin's quote, The greatest obstacle to discovery is not ignorance – it is the illusion of knowledge. Boorstin was the Librarian of Congress. Another of his quotes was once the subtitle of this blog: I Write to Discover What I Think.

And here's what I think about the predictive nature of managing projects. If we don't know how to estimate the impacts of our decisions on the future outcomes of those decisions, we have the illusion of knowledge, when in fact we have no knowledge.

There is a popular myth that estimating the future outcomes of our present-day decisions is somehow voodoo, guessing, making things up, with our management treating the results as commitments. It's clear we didn't pay attention in the probability and statistics class. Or worse yet, we have intentionally ignored what we did learn there, thinking naively, or with intent, that we can ignore the probabilistic nature of all project work.

All variables in project work are random variables. These variables are almost always coupled to each other in some way, usually a non-linear way. Fix one variable, and the other two are still free to vary. Fix two variables, and the third is still free. Fix all three, and they still vary. This is the primary reason margin is needed for a project to have a credible chance of showing up on time, on budget, and on value. These variances are irreducible, meaning they will vary no matter what you do. Fixing the budget only fixes the amount of money you've allocated to the project. It doesn't fix the cost to produce the value from the project, nor the time to produce that value.
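A small Monte Carlo sketch makes the point; the three task distributions below are invented, but they show why a credible finish date needs margin over the sum of most-likely durations.

# Sketch: simulate a three-task project where each duration is a
# random variable; compare the most-likely total with the 80th percentile.
import random

def project_duration():
    # random.triangular(low, high, mode); durations in days (invented).
    return (random.triangular(8, 20, 10) +
            random.triangular(15, 40, 20) +
            random.triangular(5, 15, 7))

samples = sorted(project_duration() for _ in range(10000))
most_likely = 10 + 20 + 7
p80 = samples[int(0.8 * len(samples))]
print(f"most likely: {most_likely} days, 80% confident: {p80:.0f} days, "
      f"margin: {p80 - most_likely:.0f} days")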

[Slide: cost, schedule, and technical performance as three coupled random variables]

So Why Do We Estimate?

Whenever possible we should use evidence in making decisions about project variables, observing the behaviour of the variables in an attempt to see what they are doing, where they are going, and what the next outcome might be.

This is the basis of the OODA Loop. Boyd was an Air Force fighter pilot, and like Jeff Sutherland of agile fame, understood that emerging situations are the norm in the skies of Vietnam, just like they are on projects.

The OODA loop is often popularized as four elements going around in a circle. The picture is the only diagram Boyd ever drew; Boyd's OODA Loop describes the details. The title of the chart above comes from one of our briefings on Managing in the Presence of Uncertainty. But the domain where OODA is applicable is not restricted to DOD acquisition; it is applicable in any domain where money, time, and performance are at risk.

Just to push a little harder, when you hear about Clausewitz and Tzu from agile coders, remember Boyd's comments when he spoke in public

“Don’t be a member of Clausewitz’s school because a lot has happened since 1832,” he would warn his audiences, “and don’t be a member of Sun Tzu’s school because an awful lot has happened since 400 BC.”

So just like the 1968 references to the Software Crisis: a lot has happened since then.

So Here's a Simple Fact

Also often ignored by some wanting us to learn to decide without knowing the impacts of those decisions.

People learn better when they predict

Making an estimate about the future, predicting an outcome or impact, forces us to think ahead about those very outcomes. Making an estimate causes us to examine more deeply the system we are observing, our engagement with that system, and our reactions to it. It causes us to question our understanding of the system as it is now, in its current state, and what we would like it to be in its future state. As well, by making estimates about future outcomes, we can learn more about the management beliefs we hold as we examine the results of our predictions.

So like Deming tells us ...

Management is Prediction

Not making predictions about the impact of our decisions on the processes affected by those decisions - cost, schedule, and technical performance, shown in the three-legged picture above - ignores the principles of Deming. And when we do that, ...

... we're not managing how we spend other people's money.

The only way this notion can't be called bad management is if the amount of money at risk is low enough that those providing the money don't care if we waste it or not.

Related articles:
  • Making Estimates For Your Project Require Discipline, Skill, and Experience
  • Why We Must Learn to Estimate
  • The OODA Loop - Insight into Strategic Thinking
  • An Agile Estimating Story
  • Averages Without Variances are Meaningless - Or Worse Misleading
  • Four Critical Elements of Project Success
  • The Value of Information
  • What does a "broken OODA loop" look like?
  • How To Create Confusion About Root Causes
Categories: Project Management

Best Practices

Best practices aren’t magic and neither are goblins.

To paraphrase Edwin Starr, “Best Practices, huh, what are they good for? Absolutely nothing! Say it again . . .”

Every organization wants to use best practices. How many organizations do you know that would stand up and say we want to use average practices? A process with the moniker “best practice” therefore has an allure that is hard to resist.  The problem is that one organization’s best practice is another’s average process, even if they produce the same quality and quantity of output.  Or even worse, one organization’s best practice might be beyond another organization entirely.  A process reflects the overall organizational context.  It is possible that adopting a new process wholesale could produce output faster or better, but without tailoring, the chances are more random than many consultants would suggest. For example, just buying a configuration management tool without changing how you do configuration management will be less effective than melding the tool with your processes.  Tailoring allows you to use the process based on the attributes of the current organizational context, such as the organization’s overall size or the capabilities of the people involved.

An example of an organization’s best practice that might not translate to all of its competitors is the super-sophisticated inventory control system used at Walmart. Would Walmart’s computer system help a local grocery store (let’s call this Hometown Grocery)? Not likely: the overhead of the same system would be beyond Hometown’s IT capabilities and budget.  However, if hundreds of Hometown Groceries banded together, the answer might be different (tailoring the process to the environmental context).  Without tailoring to the context, the best practice for Walmart would not be a best practice for our small-town grocery.

The term best practice gets thrown around as if there were a dusty old tome full of magical incantations that will solve any crisis regardless of context (assuming you are a seventh-level mage).  There are those that hold up the CMMI, ISO or Scrum and shout (usually on email lists) that they are the only way.  Let’s begin by putting to rest the idea that there is a one-size-fits-all solution to every job.  There isn’t, and there never was any such animal.  Any individual process, practice or step that worked wonderfully in the company down the street will not work the same way for you, especially if you try to do it the same way they did.  Software development and maintenance isn’t a chemical reaction, a Lego construct or even magic.  Best practices, what are they good for?  Fortunately a lot, if used correctly.

Best practices find their highest value as a comparison tool that exposes the assumptions that have been used to build or evolve your own processes.  That knowledge allows you to challenge how and why you are doing any specific step, and provides an opportunity for change.  How many companies have embraced the tenets of the Toyota Production System after benchmarking Toyota?

Adopting best practices without regard to your context may not yield the benefits found on the box.  If you read the small print you’d see a warning: use best practices only after reading all of the instructions and understanding your goals and your environment.  This is not to say that exemplary practices should not be aggressively studied and translated into your organization.  Ignoring new ideas because they did not grow out of your context is just as crazy as embracing best practices without understanding the context they were created in. Best practices make sense as an ideal, as a comparison that helps you understand your organization, not as plug-compatible modules.


Categories: Process Management

C# Records & Pattern Matching Proposal

Phil Trelford's Array - Mon, 08/25/2014 - 21:27

Following on from VB.Net’s new basic pattern matching support, the C# team has recently put forward a proposal for record types and pattern matching in C# which was posted in the Roslyn discussion area on CodePlex:

Pattern matching extensions for C# enable many of the benefits of algebraic data types and pattern matching from functional languages, but in a way that smoothly integrates with the feel of the underlying language. The basic features are: records, which are types whose semantic meaning is described by the shape of the data; and pattern matching, which is a new expression form that enables extremely concise multilevel decomposition of these data types. Elements of this approach are inspired by related features in the programming languages F# and Scala.

There has been a very active discussion on the forum ever since, particularly around syntax.

Background

Algebraic types and pattern matching have been a core language feature in functional-first languages like ML (early 70s), Miranda (mid 80s), Haskell (early 90s) and F# (mid 00s).

I like to think of records as part of a succession of data types in a language:

  • Scalar (single values):
      let width = 1.0
      let height = 2.0
  • Tuple (multiple values):
      // Tuple of float * float
      let rect = (1.0, 2.0)
  • Record (multiple named fields):
      type Rect = {Width:float; Height:float}
      let rect = {Width=1.0; Height=2.0}
  • Sum type, single case (tagged tuple):
      type Rect = Rect of float * float
      let rect = Rect(1.0, 2.0)
  • Sum type, named fields (tagged tuple with named fields):
      type Rect = Rect of width:float * height:float
      let rect = Rect(width=1.0, height=2.0)
  • Sum type, multi case (union of tagged tuples):
      type Shape =
          | Circle of radius:float
          | Rect of width:float * height:float

Note: in F# sum types are also often referred to as discriminated unions or union types, and in functional programming circles algebraic data types tend to refer to tuples, records and sum types.

Thus in the ML family of languages records are like tuples with named fields. That is, where you use a tuple you could equally use a record instead to add clarity, but at the cost of defining a type. C#’s anonymous types fit a similar lightweight data type space, but as there is no type definition their scope is limited (pun intended).

For the most part I find myself pattern matching over tuples and sum types in F# (or in Erlang simply using tuples where the first element is the tag to give a similar effect).

Sum Types

The combination of sum types and pattern matching is for me one of the most compelling features of functional programming languages.

Sum types allow complex data structures to be succinctly modelled in just a few lines of code, for example here’s a concise definition for a generic tree:

type 'a Tree =
    | Tip
    | Node of 'a * 'a Tree * 'a Tree

Using pattern matching the values in a tree can be easily summed:

let rec sumTree tree =
    match tree with
    | Tip -> 0
    | Node(value, left, right) ->
        value + sumTree(left) + sumTree(right)

The technique scales up easily to domain models, for example here’s a concise definition for a retail store:

/// For every product, we store code, name and price
type Product = Product of Code * Name * Price

/// Different options of payment
type TenderType = Cash | Card | Voucher

/// Represents scanned entries at checkout
type LineItem = 
  | Sale of Product * Quantity
  | Cancel of int
  | Tender of Amount * TenderType

Class Hierarchies versus Pattern Matching

In class-based programming languages like C# and Java, classes are the primary data type, where (frequently mutable) data and related methods are intertwined. Hierarchies of related types are typically described via inheritance. Inheritance makes it relatively easy to add new types, but adding new methods or behaviour usually requires visiting the entire hierarchy. That said, the compiler can help here by emitting an error if a required method is not implemented.

Sum types also describe related types, but data is typically separated from functions, where functions employ pattern matching to handle separate cases. This pattern matching based approach makes it easier to add new functions, but adding a new case may require visiting all existing functions. Again the compiler helps here by emitting a warning if a case is not covered.

Another subtle advantage of using sum types is being able to see the behaviour for all cases in a single place, which can be helpful for readability. This may also help when attempting to separate concerns; for example, if we want to add a method that prints to a device to a hierarchy of classes in C#, we could end up adding printer-related dependencies to all related classes. With a sum type the printer functionality and related dependencies are more naturally encapsulated in a single module.

In F# you have the choice of class-based inheritance or sum types and can choose in-situ. In practice most people appear to use sum types most of the time.

C# Case Classes

The C# proposal starts with a simple “record” type definition:

public record class Cartesian(double x: X, double y: Y);

Which is not too dissimilar to an F# record definition, i.e.:

type Cartesian = { X: double; Y: double }

However from there it then starts to differ quite radically. The C# proposal allows a “record” to inherit from another class, in effect allowing sum types to be defined, i.e:

abstract class Expr; 
record class X() : Expr; 
record class Const(double Value) : Expr; 
record class Add(Expr Left, Expr Right) : Expr; 
record class Mult(Expr Left, Expr Right) : Expr; 
record class Neg(Expr Value) : Expr;

which allows pattern matching to be performed using an extended switch case statement:

switch (e) 
{ 
  case X(): return Const(1); 
  case Const(*): return Const(0); 
  case Add(var Left, var Right): 
    return Add(Deriv(Left), Deriv(Right)); 
  case Mult(var Left, var Right): 
    return Add(Mult(Deriv(Left), Right), Mult(Left, Deriv(Right))); 
  case Neg(var Value): 
    return Neg(Deriv(Value)); 
}

This is very similar to Scala case classes, in fact change “record” to case, drop semicolons and voilà:

abstract class Term
case class Var(name: String) extends Term
case class Fun(arg: String, body: Term) extends Term
case class App(f: Term, v: Term) extends Term

To sum up, the proposed C# “record” classes appear to be case classes which support both single and multi case sum types.

Language Design

As someone who has to spend some of their time working in C# and who feels more productive having concise types and pattern matching in their toolbox, overall I welcome this proposal.

From my years of experience using F#, I feel it would be nice to see a simple safety feature included in what is in effect a sum type representation, so that sum types can be made exhaustive. This would allow compile-time checks to ensure that all cases have been covered in a switch/case statement, with a warning given otherwise.

Then again, this is quite a radical departure from the style of implementation I’ve seen in C# codebases in the wild, to the point where it’s starting to look like an entirely different language… so if this feature does see the light of day, it is likely to get more exposure in C# shops working on greenfield projects.

Categories: Programming

Powerful New Messaging Features with GCM

Android Developers Blog - Mon, 08/25/2014 - 18:51

By Subir Jhanb, Google Cloud Messaging team

Developers from all segments are increasingly relying on Google Cloud Messaging (GCM) to handle their messaging needs and make sure that their apps stay battery-friendly. GCM has been experiencing incredible momentum, with more than 100,000 apps registered, 700,000 QPS, and 300% QPS growth over the past year.

At Google I/O we announced the general availability of several GCM capabilities, including the GCM Cloud Connection Server, User Notifications, and a new API called Delivery Receipt. This post highlights the new features and how you can use them in your apps. You can watch these and other GCM announcements at our I/O presentation.

Two-way XMPP messaging with Cloud Connection Server

XMPP-based Cloud Connection Server (CCS) provides a persistent, asynchronous, bidirectional connection to Google servers. You can use the connection to send and receive messages between your server and your users' GCM-connected devices. Apps can now send upstream messages using CCS, without needing to manage network connections. This helps keep battery and data usage to a minimum. You can establish up to 100 XMPP connections and have up to 100 outstanding messages per connection. CCS is available for both Android and Chrome.
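
For a flavour of the wire format, here is a sketch of a downstream message based on the CCS documentation of the time (treat the exact stanza shape and field names as illustrative rather than authoritative): a JSON payload wrapped in an XMPP message stanza.

<message id="">
  <gcm xmlns="google:mobile:data">
    {
      "to": "DEVICE_REGISTRATION_ID",
      "message_id": "m-1001",
      "data": { "greeting": "hello" }
    }
  </gcm>
</message>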

User notifications managed across multiple devices

Nowadays users have multiple devices and hence receive the same notification multiple times. This can turn notifications from a useful feature into an annoyance. Thankfully, the GCM User Notifications API provides a convenient way to reach all of a user's devices and helps you synchronise notifications, including dismissals: when the user dismisses a notification on one device, the notification disappears automatically from all the other devices. User Notifications is available on both HTTP and XMPP.

Insight into message status through delivery receipts

When sending messages to a device, a common request from developers is to get more insight into the state of the message and to know whether it was delivered. This is now available using CCS with the new Delivery Receipt API. A receipt is sent as soon as the message is sent to the endpoint, and you can also use upstream messaging for app-level delivery receipts.
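
Based on the same CCS message format (again a sketch; the flag name is taken from the documentation of the time, so treat it as illustrative), requesting a receipt appears to be a single field on the outgoing JSON:

{
  "to": "DEVICE_REGISTRATION_ID",
  "message_id": "m-1002",
  "delivery_receipt_requested": true,
  "data": { "greeting": "hello" }
}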

How to get started

If you’re already using GCM, you can take advantage of these new features right away. If you haven't used GCM yet, you’ll be surprised at how easy it is to set up — get started today! And remember, GCM is completely free no matter how big your messaging needs are.

To learn more about GCM and its new features — CCS, user notifications, and Delivery Receipt — take a look at the I/O Bytes video and read our developer documentation.




Categories: Programming

MixRadio Architecture - Playing with an Eclectic Mix of Services

This is a guest repost by Steve Robbins, Chief Architect at MixRadio.

At MixRadio, we offer a free music streaming service that learns from listening habits to deliver people a personalised radio station, at the single touch of a button. MixRadio marries simplicity with an incredible level of personalization, for a mobile-first approach that will help everybody, not just the avid music fan, enjoy and discover new music. It's as easy as turning on the radio, but you're in control - just one touch of Play Me provides people with their own personal radio station. The service also offers hundreds of hand-crafted expert and celebrity mixes categorised by genre and mood for each region. You can also create your own artist mix, and mixes can be saved for offline listening during times without signal, such as underground travel, as well as reducing data use and costs.

Our apps are currently available on Windows Phone, Windows 8, Nokia Asha phones and the web. We’ve spent years evolving a back-end that we’re incredibly proud of, despite being British! Here's an overview of our back-end architecture.

 

Architecture Overview
Categories: Architecture

How Can Enterprise Architects Drive Business Value the Agile Way?

An Enterprise Architect can have a tough job when it comes to driving value to the business.   With multiple stakeholders, multiple moving parts, and a rapid rate of change, delivering value is tough enough.   But what if you want to accelerate value and maximize business impact?

Enterprise Architects can borrow a few concepts from the Agile world to be much more effective in today’s world.

A Look Back at How Agile Helped Connect Development to Business Impact …

First, let’s take a brief look at traditional development and how it evolved.  Traditionally, IT departments focused on delivering value to the business by shipping big bang projects.   They would plan it, build it, test it, and then release it.   The measure of success was on time, on budget.   

Few projects ever shipped on time.  Few were ever on budget.  And very few ever met the requirements of the business.

Then along came Agile approaches and they changed the game.

One of the most important ideas was a shift away from thick requirements documentation to user stories.  Developers got customers telling stories about what they wanted the future solution to do.  For example, a user story for a sales representative might look like this:

“As a sales rep, I want to see my customer’s account information so that I can identify cross-sell and upsell opportunities.” 

The use of user stories accomplished several things.   First, user stories got the development teams talking to the business users.  Rather than throwing documents back and forth, people started having face-to-face communication to understand the user stories.  Second, user stories helped chunk bigger units of value down into smaller units of value.  Rather than a big bang project where all the value is promised at the end of some long development cycle, a development team could now ship the solution in increments, where each increment was a prioritized set of stories.   The user stories effectively create a shared language for value.

Third, it made it easier to test the delivery of value.  Now the user and the development team could test the solution against the user stories and acceptance criteria.  If the story met acceptance criteria, the user would acknowledge that the value was delivered.  In this way, the user stories created both a validation mechanism and a feedback loop for delivering and acknowledging value.

In the Agile world, bigger stories are called epics, and collections of stories are called themes.  Often a story starts off as an epic until it gets broken down into multiple stories.  What’s important here is that the collections of stories serve as a catalog of potential value.   Specifically, this catalog of stories reflects potential value with real stakeholders.  In this way, Agile helps drive customer focus and customer connection.  It’s really effective stakeholder management in action.

Agile approaches have been used in software projects large and small.  And they’ve forever changed how developers and project managers approach projects.

A Look at How Agile Can Help Enterprise Architecture Accelerate Business Value …

But how does this apply to Enterprise Architects?

As an Enterprise Architect, chances are you are responsible for achieving business outcomes.  You do this by driving business transformation.   The way you achieve business transformation is through driving capability change including business, people, and technical capabilities.

That’s a tall order.   And you need a way to chunk this up and make it meaningful to all the parties involved.

The Power of Scenarios as Units of Value for the Enterprise

This is where scenarios come into play.  Scenarios are a simple way to capture pains, needs and desired outcomes.   You can think of the desired outcome as the future capability vision.   It’s really a story that helps articulate the art of the possible.   More precisely, you can use scenarios to help build empathy with stakeholders for what value will look like, by painting a conceptual scene of the future.

An Enterprise scenario is simply a chunk of organizational change, typically about 3-5 business capabilities, 3-5 people capabilities, and 3-5 technical capabilities.

If that sounds like a lot of theory, let’s step into an example to show what it looks like in practice.

Let’s say you’re in a situation where you need to help a healthcare provider change their business.  

You can come up with a lot of scenarios, but it helps to start with the pains and needs of the business owner.  Otherwise, you might start going through a bunch of scenarios for the patients or for the doctors.  In this case, the business owner would be the Chief Medical Officer or the doctor of doctors.

Scenario: Tele-specialist for Healthcare

If we walk the pains, needs, and desired outcomes of the Chief Medical Officer, we might come up with a scenario that looks something like this, where the CURRENT STATE reflects the current pains and needs, and the FUTURE STATE reflects the desired outcome.

CURRENT STATE

Here is an example of the CURRENT STATE portion of the scenario:

The Chief Medical Officer of Contoso Provider is struggling with increased costs and declining revenues. Costs are rising due to the Affordable Care Act's regulatory compliance requirements and increasing malpractice insurance premiums. Revenue is declining due to decreasing medical insurance payments per claim.

FUTURE STATE

Here is an example of the FUTURE STATE portion of the scenario:

Doctors can consult with patients, peers, and specialists from anywhere. Contoso Provider's doctors can see more patients, increase the accuracy of first-time diagnoses, and grow revenues.


[image: Tele-specialist for Healthcare scenario]

Storyboard for the Future Capability Vision

It helps to be able to picture what the Future Capability Vision might look like.   That’s where storyboarding can come in.  An Enterprise Architect can paint a simple scene of the future with a storyboard that shows the Future Capability Vision in action.  This practice lends itself to whiteboarding, and the beauty of a whiteboard is you can quickly elaborate where you need to, without getting mired in details.

[image: storyboard of the Future Capability Vision]

As you can see in this example storyboard of the Future Capability Vision, we listed out some business benefits, which we could then drill down into relevant KPIs and value measures.   We’ve also outlined some of the building blocks required for this Future Capability Vision, in the form of business capabilities and technical capabilities.

Now this simple approach accomplishes a lot.   It helps ensure that any technology solution actually connects back to business drivers and pains that a business decision maker actually cares about.   This gets their fingerprints on the solution concept.   And it creates a simple “flashcard” for value.   If we name the Enterprise scenario well, then we can use it as a handle to get back to the story of a better future that we created with the business.

The obvious thing this does, aside from connecting IT to the business, is it helps the business justify any investment in IT.

And all we did was walk through one Enterprise Scenario.  

But there is a lot more value to be found in the Enterprise.   We can literally explore and chunk up the value in the Enterprise if we take a step back and add another tool to our toolbelt:  the Scenario Chain.

Scenario Chain:  Chaining the Industry Scenarios to Enterprise Scenarios

The Scenario Chain is another powerful conceptual visualization tool.  It helps you quickly map out what’s happening in the marketplace in terms of industry drivers or industry scenarios.  You can then identify potential investment objectives.   These investment objectives lead to patterns of value or patterns of solutions in the Enterprise, which are effectively Enterprise scenarios.   From the Enterprise scenarios, you can then identify relevant usage scenarios.  The usage scenarios effectively represent new ways of working for the employees, or new interaction models with customers, which is effectively a change to your value stream.

[image: Scenario Chain, from industry scenarios to Enterprise scenarios to usage scenarios]

With one simple glance, the Scenario Chain gives a bird’s-eye view of how you can respond to the changing marketplace and how you can transform your business.   And, by using Enterprise scenarios, you can chunk up the change into meaningful units of value that reflect pains, needs, and desired outcomes for the business.   And, because you have the fingerprints of stakeholders from both business and IT, you’ve effectively created a shared vision for the future that has business impact and a justification for investment, and that creates a pull-through mechanism for additional value by driving the adoption of the usage scenarios.

Let’s elaborate on adoption and how scenarios can help accelerate business value.

Using Scenarios to Drive Adoption and Accelerate Business Value

Driving adoption is a key way to realize the business value.  If nobody adopts the solution, then that’s what Gartner would call “Value Leakage.”  Value Realization really comes down to governance, measurement, and adoption.

With scenarios at your fingertips, you have a powerful way to articulate value, justify business cases, drive business transformation, and accelerate business value.   The key lies in using the scenarios as a unit of value, and focusing on scenarios as a way to drive adoption and change.

Here are three ways you can use scenarios to drive adoption and accelerate business value:

1.  Accelerate Business Adoption

One of the ways to accelerate business value is to accelerate adoption.    You can use scenarios to help enumerate specific behavior changes that need to happen to drive the adoption.   You can establish metrics and measures around specific behavior changes.   In this way, you make adoption a lot more specific, concrete, intentional, and tangible.

This approach is about doing the right things, faster.

2.  Re-Sequence the Scenarios

Another way to accelerate business value is to re-sequence the scenarios.   If your big bang is way at the end (way, way at the end), no good.  Sprinkle some of your bangs up front.   In fact, a great way to design for change is to build rolling thunder.   Put some of the scenarios up front that will get people excited about the change and directly experiencing the benefits.  Make it real.

The approach is about putting first things first.

3.  Identify Higher Value Scenarios

The third way to accelerate business value is to identify higher-value scenarios.   One of the things that happens along the way is that you start to uncover potential scenarios that you may not have seen before, and these scenarios represent orders of magnitude more value.   This is the space of serendipity.   As you learn more about users and what they value, and stakeholders and what they value, you start to connect more dots between the scenarios you can deliver and the value that can be realized (and therefore accelerated).

This approach is about trading up for higher value and more impact.

As you can see, Enterprise Architects can drive business value and accelerate business value realization by using scenarios and storyboarding.   It’s a simple and agile approach for connecting business and IT, and for shaping a more Agile Enterprise.

I’ll share more on this topic in future posts.   Value Realization is an art and a science and I’d like to reduce the gap between the state of the art and the state of the practice.

You Might Also Like

3 Ways to Accelerate Business Value

6 Steps for Enterprise Architecture as Strategy

Cognizant on the Next Generation Enterprise

Simple Enterprise Strategy

The Mission of Enterprise Services

The New Competitive Landscape

What Am I Doing on the Enterprise Strategy Team?

Why Have a Strategy?

Categories: Architecture, Programming

Anything Worth Doing is Worth Doing Right

Making the Complex Simple - John Sonmez - Mon, 08/25/2014 - 15:00

It seems that I am always in a rush. I find it very difficult to just do what I am doing without thinking about what is coming next or when I’ll be finished with whatever I am working on. Even as I am sitting and writing this blog post, I’m not really as immersed in […]

The post Anything Worth Doing is Worth Doing Right appeared first on Simple Programmer.

Categories: Programming