
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Quote of the Day — Just a Little Process Check

Herding Cats - Glen Alleman - Fri, 08/29/2014 - 15:12

Everyone is entitled to his own opinion, but not his own facts. — Daniel Patrick Moynihan

When engaging in exchanges about complex topics like cost, schedule, and technical performance management, I always get a smile when someone says, “Oh, that problem can be solved with this simple approach,” or, “I bet that organization has no motivation whatsoever to solve the problem.”

Then the next quote is applicable...

For every complex problem there is an answer that is clear, simple, and wrong. — H. L. Mencken

Solving complex problems is hard; claiming there are simple solutions without having worked on complex problems is easy.

Related articles:
  • Complex Problems Require Better Solutions
  • The Three Elements of Project Work and Their Estimates
  • The Power of Misattributed and Misquoted Quotes
  • Is There Such a Thing As Making Decisions Without Knowing the Cost?
Categories: Project Management

The Web Search API is Retiring

Google Code Blog - Fri, 08/29/2014 - 13:00
Posted by Dan Ciruli, Product Manager


On November 1, 2010, we announced the deprecation of the Web Search API. As per our policy at the time, we supported the API for a three-year period (and beyond), but as all things come to an end, so has its deprecation window.

We are now announcing the turndown of the Web Search API. You may wish to look at our Custom Search API (note: it has a free quota of 100 queries per day).
The service will cease operations on September 29th, 2014.
Categories: Programming

R: dplyr – group_by dynamic or programmatic field / variable (Error: index out of bounds)

Mark Needham - Fri, 08/29/2014 - 10:13

In my last blog post I showed how to group timestamp-based data by week, month and quarter, and by the end we had the following code samples using dplyr and zoo:

library(RNeo4j)
library(dplyr)
library(zoo)
 
timestampToDate <- function(x) as.POSIXct(x / 1000, origin="1970-01-01", tz = "GMT")
 
query = "MATCH (:Person)-[:HAS_MEETUP_PROFILE]->()-[:HAS_MEMBERSHIP]->(membership)-[:OF_GROUP]->(g:Group {name: \"Neo4j - London User Group\"})
         RETURN membership.joined AS joinTimestamp"
meetupMembers = cypher(graph, query)
 
meetupMembers$joinDate <- timestampToDate(meetupMembers$joinTimestamp)
meetupMembers$week <- as.Date("1970-01-01")+7*trunc((meetupMembers$joinTimestamp / 1000)/(3600*24*7))
meetupMembers$monthYear <- as.Date(as.yearmon(meetupMembers$joinDate))
meetupMembers$quarterYear <- as.Date(as.yearqtr(meetupMembers$joinDate))
 
meetupMembers %.% group_by(week) %.% summarise(n = n())
meetupMembers %.% group_by(monthYear) %.% summarise(n = n())
meetupMembers %.% group_by(quarterYear) %.% summarise(n = n())

As you can see, there’s quite a bit of duplication going on – the only thing that changes in the last three lines is the name of the field that we want to group by.

I wanted to pull this code out into a function and my first attempt was this:

groupMembersBy = function(field) {
  meetupMembers %.% group_by(field) %.% summarise(n = n())
}

And now if we try to group by week:

> groupMembersBy("week")
 Error: index out of bounds

It turns out if we want to do this then we actually want the regroup function rather than group_by:

groupMembersBy = function(field) {
  meetupMembers %.% regroup(list(field)) %.% summarise(n = n())
}

And now if we group by week:

> head(groupMembersBy("week"), 20)
Source: local data frame [20 x 2]
 
         week n
1  2011-06-02 8
2  2011-06-09 4
3  2011-06-16 1
4  2011-06-30 2
5  2011-07-14 1
6  2011-07-21 1
7  2011-08-18 1
8  2011-10-13 1
9  2011-11-24 2
10 2012-01-05 1
11 2012-01-12 3
12 2012-02-09 1
13 2012-02-16 2
14 2012-02-23 4
15 2012-03-01 2
16 2012-03-08 3
17 2012-03-15 5
18 2012-03-29 1
19 2012-04-05 2
20 2012-04-19 1

Much better!
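As a side note, regroup was later deprecated and removed from dplyr, so on current versions of the library the same idea is usually written with across() and all_of() (or the .data pronoun). A minimal sketch, assuming dplyr 1.0 or later and the %>% pipe rather than %.%:

groupMembersBy = function(field) {
  # 'field' is a string naming the column to group by
  meetupMembers %>% group_by(across(all_of(field))) %>% summarise(n = n())
}
 
head(groupMembersBy("week"), 20)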

Categories: Programming

Inspirational Work Quotes at a Glance

What if your work could be your ultimate platform? … your ultimate channel for your growth and greatness?

We spend a lot of time at work. 

For some people, work is their ultimate form of self-expression.

For others, work is a curse.

Nobody stops you from using work as a chance to challenge yourself, to grow your skills, and become all that you’re capable of.

But that’s a very different mindset from seeing work as a place you have to go to, or stuff you have to do.

When you change your mind, you change your approach.  And when you change your approach, you change your results.   But rather than just try to change your mind, the ideal scenario is to expand your mind, and become more resourceful.

You can do so with quotes.

Grow Your “Work Intelligence” with Inspirational Work Quotes

In fact, you can actually build your “work intelligence.”

Here are a few ways to think about “intelligence”:

  1. the ability to learn or understand things or to deal with new or difficult situations (Merriam Webster)
  2. the more distinctions you have for a given concept, the more intelligence you have

In Rich Dad, Poor Dad, Robert Kiyosaki says, “intelligence is the ability to make finer distinctions.”   And Tony Robbins says, “intelligence is the measure of the number and the quality of the distinctions you have in a given situation.”

If you want to grow your “work intelligence”, one of the best ways is to familiarize yourself with the best inspirational quotes about work.

By drawing from the wisdom of the ages and modern sages, you can operate at a higher level and turn work from a chore into a platform for lifelong learning, a dojo for personal growth, and a chance to master your craft.

You can use inspirational quotes about work to fill your head with ideas, distinctions, and key concepts that help you unleash what you’re capable of.

To give you a giant head start and to help you build a personal library of profound knowledge, here are two work quotes collections you can draw from:

37 Inspirational Quotes for Work as Self-Expression

Inspirational Work Quotes

10 Distinct Ideas for Thinking About Your Work

Let’s practice.   This will only take a minute, and if you happen to hear the right words, the ones that are keys for you, your insight or “ah-ha” can be just the breakthrough you needed to get more out of your work, and, as a result, more out of life (or at least your moments).

Here is a sample of distinct ideas and depth that you can use to change how you perceive your work, and/or how you do your work:

  1. “Either write something worth reading or do something worth writing.” — Benjamin Franklin
  2. “You don’t get paid for the hour. You get paid for the value you bring to the hour.” — Jim Rohn
  3. “Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do.” — Steve Jobs
  4. “Measuring programming progress by lines of code is like measuring aircraft building progress by weight.” -- Bill Gates
  5. “We must each have the courage to transform as individuals. We must ask ourselves, what idea can I bring to life? What insight can I illuminate? What individual life could I change? What customer can I delight? What new skill could I learn? What team could I help build? What orthodoxy should I question?” – Satya Nadella
  6. “My work is a game, a very serious game.” — M. C. Escher
  7. “Hard work is a prison sentence only if it does not have meaning. Once it does, it becomes the kind of thing that makes you grab your wife around the waist and dance a jig.” — Malcolm Gladwell
  8. “The test of the artist does not lie in the will with which he goes to work, but in the excellence of the work he produces.” -- Thomas Aquinas
  9. “Are you bored with life? Then throw yourself into some work you believe in with all your heart, live for it, die for it, and you will find happiness that you had thought could never be yours.” — Dale Carnegie
  10. “I like work; it fascinates me. I can sit and look at it for hours.” -– Jerome K. Jerome

For more ideas, take a stroll through my inspirational work quotes.

As you can see, there are lots of ways to think about work and what it means.  At the end of the day, what matters is how you think about it, and what you make of it.  It’s either an investment, or it’s an incredible waste of time.  You can make it mundane, or you can make it matter.

The Pleasant Life, The Good Life, and The Meaningful Life

Here’s another surprise about work.   You can use work to live the good life.   According to Martin Seligman, a master in the art and science of positive psychology, there are three paths to happiness:

  1. The Pleasant Life
  2. The Good Life
  3. The Meaningful Life

In The Pleasant Life, you simply try to have as much pleasure as possible.  In The Good Life, you spend more time in your values.  In The Meaningful Life, you use your strengths in the service of something that is bigger than you are.

There are so many ways you can live your values at work and connect your work with what makes you come alive.

There are so many ways to turn what you do into service for others and become a part of something that’s bigger than you.

If you haven’t figured out how yet, then dig deeper, find a mentor, and figure it out.

You spend way too much time at work to let your influence and impact fade to black.

You Might Also Like

40 Hour Work Week at Microsoft

Agile Avoids Work About Work

How Employees Lost Empathy for Their Work, for the Customer, and for the Final Product

Satya Nadella on Live and Work a Meaningful Life

Short-Burst Work

Categories: Architecture, Programming

R: Grouping by week, month, quarter

Mark Needham - Fri, 08/29/2014 - 01:25

In my continued playing around with R and meetup data I wanted to have a look at when people joined the London Neo4j group based on week, month or quarter of the year to see when they were most likely to do so.

I started with the following query to get back the join timestamps:

library(RNeo4j)
query = "MATCH (:Person)-[:HAS_MEETUP_PROFILE]->()-[:HAS_MEMBERSHIP]->(membership)-[:OF_GROUP]->(g:Group {name: \"Neo4j - London User Group\"})
         RETURN membership.joined AS joinTimestamp"
meetupMembers = cypher(graph, query)
 
> head(meetupMembers)
      joinTimestamp
1 1.376572e+12
2 1.379491e+12
3 1.349454e+12
4 1.383127e+12
5 1.372239e+12
6 1.330295e+12

The first step was to convert the joinTimestamp into a date format that we can use in R more easily:

timestampToDate <- function(x) as.POSIXct(x / 1000, origin="1970-01-01", tz = "GMT")
meetupMembers$joinDate <- timestampToDate(meetupMembers$joinTimestamp)
 
> head(meetupMembers)
  joinTimestamp            joinDate
1  1.376572e+12 2013-08-15 13:13:40
2  1.379491e+12 2013-09-18 07:55:11
3  1.349454e+12 2012-10-05 16:28:04
4  1.383127e+12 2013-10-30 09:59:03
5  1.372239e+12 2013-06-26 09:27:40
6  1.330295e+12 2012-02-26 22:27:00

Much better!

I started off with grouping by month and quarter and came across the excellent zoo library which makes it really easy to transform dates:

library(zoo)
meetupMembers$monthYear <- as.Date(as.yearmon(meetupMembers$joinDate))
meetupMembers$quarterYear <- as.Date(as.yearqtr(meetupMembers$joinDate))
 
> head(meetupMembers)
  joinTimestamp            joinDate  monthYear quarterYear
1  1.376572e+12 2013-08-15 13:13:40 2013-08-01  2013-07-01
2  1.379491e+12 2013-09-18 07:55:11 2013-09-01  2013-07-01
3  1.349454e+12 2012-10-05 16:28:04 2012-10-01  2012-10-01
4  1.383127e+12 2013-10-30 09:59:03 2013-10-01  2013-10-01
5  1.372239e+12 2013-06-26 09:27:40 2013-06-01  2013-04-01
6  1.330295e+12 2012-02-26 22:27:00 2012-02-01  2012-01-01

The next step was to create a new data frame which grouped the data by those fields. I’ve been learning dplyr as part of Udacity’s EDA course so I thought I’d try and use that:

> head(meetupMembers %.% group_by(monthYear) %.% summarise(n = n()), 20)
 
    monthYear  n
1  2011-06-01 13
2  2011-07-01  4
3  2011-08-01  1
4  2011-10-01  1
5  2011-11-01  2
6  2012-01-01  4
7  2012-02-01  7
8  2012-03-01 11
9  2012-04-01  3
10 2012-05-01  9
11 2012-06-01  5
12 2012-07-01 16
13 2012-08-01 32
14 2012-09-01 14
15 2012-10-01 28
16 2012-11-01 31
17 2012-12-01  7
18 2013-01-01 52
19 2013-02-01 49
20 2013-03-01 22
> head(meetupMembers %.% group_by(quarterYear) %.% summarise(n = n()), 20)
 
   quarterYear   n
1   2011-04-01  13
2   2011-07-01   5
3   2011-10-01   3
4   2012-01-01  22
5   2012-04-01  17
6   2012-07-01  62
7   2012-10-01  66
8   2013-01-01 123
9   2013-04-01 139
10  2013-07-01 117
11  2013-10-01  94
12  2014-01-01 266
13  2014-04-01 359
14  2014-07-01 216

Grouping by week number is a bit trickier but we can do it with a bit of transformation on our initial timestamp:

meetupMembers$week <- as.Date("1970-01-01")+7*trunc((meetupMembers$joinTimestamp / 1000)/(3600*24*7))
 
> head(meetupMembers %.% group_by(week) %.% summarise(n = n()), 20)
 
         week n
1  2011-06-02 8
2  2011-06-09 4
3  2011-06-16 1
4  2011-06-30 2
5  2011-07-14 1
6  2011-07-21 1
7  2011-08-18 1
8  2011-10-13 1
9  2011-11-24 2
10 2012-01-05 1
11 2012-01-12 3
12 2012-02-09 1
13 2012-02-16 2
14 2012-02-23 4
15 2012-03-01 2
16 2012-03-08 3
17 2012-03-15 5
18 2012-03-29 1
19 2012-04-05 2
20 2012-04-19 1

We can then plug that data frame into ggplot if we want to track membership sign-up over time at different levels of granularity and create some bar charts or scatter plots depending on what we feel like!
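For example, a quick bar chart of sign-ups per month might look something like this (a minimal sketch assuming ggplot2 is installed; the column names match the data frame built above):

library(ggplot2)
 
byMonth = meetupMembers %.% group_by(monthYear) %.% summarise(n = n())
 
# one bar per month, bar height = number of people who joined that month
ggplot(byMonth, aes(x = monthYear, y = n)) +
  geom_bar(stat = "identity") +
  labs(x = "Month", y = "New members")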

Categories: Programming

Traceability: Assessing Customer Involvement

Ruminating on Customer Involvement


Customer involvement can be defined as the amount of time and effort applied to a project by the customers (or users) of the project.  Involvement can be both good (e.g. knowledge transfer and decision making) and bad (e.g. interference and indecision).  The goal in using the traceability model is to force the project team to predict both the quality and quantity of customer involvement as accurately as possible across the life of a project.  While the question of quality and quantity of customer involvement is important for all projects, it becomes even more important as Agile techniques are leveraged.  Customer involvement is required for the effective use of Agile techniques and to reduce the need for classic traceability.  Involvement is used to replace documentation with a combination of lighter documentation and interaction with the customer.

Quality can be unpacked to include attributes such as competence: knowledge of the problem space, knowledge of the process and the ability to make decisions that stick.  Assessing the quality attributes of involvement requires understanding how having multiple customer and/or user constituencies involved in the project outcome can change the complexity of the project.  For example, the impact of multiple customer and user constituencies on decision making, specifically the ability to make decisions correctly or on a timely basis, will influence how a project needs to be run.  Multiple constituencies complicate the ability to make decisions, which drives the need for structure.  As the number of groups increases, the number of communication nodes increases, making it more difficult to get enough people involved in a timely manner.   Although checklists are used to facilitate the model, model users should remember that knowledge of the project and project management is needed to use the model effectively.  Users of the model should not see the lists of attributes and believe that this model can be used merely as a check-the-box method.

The methodical assessment of the quantity and quality of customer involvement requires determining the leading indicators of success.  Professional experience suggests a standard set of predictors for customer involvement which are incorporated into the appraisal questions below.
These predictors are as follows:

  1. Agile methods will be used (y/n)
  2. The customer will be available more than 80% of the time (y/n)
  3. User/customer will be co-located with the project team (y/n)
  4. Project has a single primary customer (y/n)
  5. The customer has adequate business knowledge (y/n)
  6. The customer has knowledge of how development projects work (y/n)
  7. Correct business decision makers are available (y/n)
  8. Team members have a high level of interpersonal skills (y/n)
  9. Process coaches are available (y/n)

The assessment process simplifies evaluation by using a simple yes-no scale.  Gray areas like ‘maybe’ are evaluated as equivalent to a ‘no’.  While the rating scale is simple, the discussion needed to reach a yes-no decision is typically far less simple.
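As a purely illustrative aside (the tally below is my own sketch, not part of the model as described in this series), the nine yes-no answers can be captured and counted in a couple of lines of R to get a quick feel for how strongly a project can lean on customer involvement rather than documented traceability:

# hypothetical answers to the nine predictor questions, in order
answers = c(agile = "y", availability = "y", colocated = "n",
            single.customer = "y", business.knowledge = "y",
            project.knowledge = "n", decision.makers = "y",
            interpersonal.skills = "y", coaching = "n")
 
# 'maybe' or anything other than 'y' counts as a 'no'
sum(answers == "y")   # 6 of the 9 predictors present in this example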

Agile methods will be used:  The first component in the evaluation is to determine whether the project intends to use disciplined Agile methods for the project being evaluated.  The term ‘disciplined’ is used on purpose.  Agile methods like XP are a set of practices that interact to create development supported by intimate communication.  Without the discipline or without critical practices, the communication alone will not suffice.  Assessment tip:  Using a defined, agile process equates to a ‘Y’; making it up as you go equates to an ‘N’.

Customer availability (>80%):  Intense customer interaction is required to ensure effective development and to reduce reliance on classically documented traceability.  Availability is defined as the total amount of time the primary customer is available.  If customers are not available, a lack of interaction is a foregone conclusion.  I have found that agile methods (which require intense communication) tend to lose traction when customer availability drops below 80%.   Assessment Tip: Assess this attribute as a ‘Y’ if primary customer availability is above 80%.  Assess it as an ‘N’ if customer availability is below 80% (which means that, barring very special circumstances, if your customers are not around 80% of the time during the project, rate this as a ‘No’).

Co-located customer/user:  Co-location is an intimate implementation scenario of customer/user availability.  The intimacy that co-location provides can be leveraged as a replacement for documentation-based communication by using less formal techniques like white boards and sticky notes.  Assessment Tip:  Stand up and look around; if you don’t have a high probability of seeing your primary customer (unless it is lunch time), you should rate this attribute as an ‘N’.  Metaverse tools (e.g. Second Life or similar) can be used to mitigate some of the problems of disparate physical locations.

Project Has A Single Customer:  As the number of primary customers increases, the number of communication paths required for creating and deploying the project increases exponentially.  The impact that the number of customers has on communication is not linear; it can be more easily conceived of as a web.  Each node in the web will require attention (attention = communication) to coordinate activities.  Assessment Tip: Count the number of primary customers; if you need more than one finger, assess this question as an ‘N’.

Business Knowledge:  The quality and quantity of business knowledge the team has to draw upon is inversely related to the amount of documentation-based communication needed.  Availability of solid business knowledge impacts the amount of background that needs to be documented in order to establish the team’s bona fides.  It should be noted that it can be argued that sourcing long term business knowledge in human repositories is a risk.  Assessment Tip:  Assessing the quality and quantity of business knowledge will require introspection and fairly brutal honesty, but do not sell the team or yourself short.

Knowledge of How Development Projects Work:  All team members, whether they are filling a hardcore IT role or the most ancillary user role, need to understand both their project responsibilities and how they will contribute to the project.  The more intrinsically participants understand their roles and responsibilities, the less wasted effort a project will typically have to expend on non-value-added activities (like re-explaining how work is done).  Assessment Tip:  This is an area that can be addressed after assessment through training.  If team members cannot be trained or educated as to their role, appraise this attribute as an ‘N’.

Decision Makers:  The project attribute that defines “decision makers” is the process that leads to the selection of a course of action.  Most IT projects have a core set of business customers who are the decision makers for requirements and business direction.  Knowing who can make a decision (and have it stick) and then having access to them is critical.  Having a set of customers available or co-located is not effective if they are not decision makers (‘the right people’).  The perfect customer for a development project is available, co-located and can make decisions that stick (and is very apt not to be the person provided).  Assessment Tip:  This area is another that can only be answered after soul-searching introspection (i.e. thinking about it over a beer).  If your customer has to check with a nebulous puppet master before making critical decisions, then the assessment response should be an “N”.

High Level of Interpersonal Skills:  All team members must be able to interact together and perform as a team.  Insular or other behavior that is not team conducive will cause communications to pool and stagnate as team members either avoid the non-team player or the offending party holds on to information at inopportune times.  Non-team behavior within a team is bad regardless of the development methodology being used.  Assessment Tip:  Teams that have worked together and crafted a good working relationship typically can answer this as a “Y”.

Facilitation: Projects perform more consistently with coaching (and seem to deliver better solutions); however, coaching as a process has not been universally adopted.  The role that has been universally embraced is project manager (PM).  Coaches and project managers typically play two very different roles.  The PM role has an external focus and acts as the voice of the process, while the role of coach has an internal focus and acts as the voice of the team (outside vs. inside, process vs. people).  Agile methods implement the roles of coach and PM as two very different roles, even though they can co-exist.  Coaches nurture the personnel on the project, helping them to do their best (remember your last coach).  Shouldn’t the same facility be leveraged on all projects?  Assessment Tip:  Evaluate whether a coach is assigned; if yes, answer affirmatively.  If the role is not formally recognized within the group or organization, care should be taken, even if a coach is appointed.


Categories: Process Management

Training – Lessons Learned from Training a Group of Indian Analysts

Software Requirements Blog - Seilevel.com - Thu, 08/28/2014 - 17:00
I recently trained a couple of groups of analysts in India on Seilevel methodology. This was the first time we had done training in an Indian setting and honestly, I have to confess to being more than a bit apprehensive when I set out. My fears, set out in no particular order of importance, included: […]
Categories: Requirements


Managers Manage Ambiguity

I was thinking about Glen Alleman’s post, All Things Project Are Probabilistic. In it, he says,

Management is Prediction

as an inference from Deming. When I read this quote,

If you can’t describe what you are doing as a process, you don’t know what you’re doing. –Deming

I infer from Deming that managers must manage ambiguity.

Here’s where Glen and I agree. Well, I think we agree. I hope I am not putting words into Glen’s mouth. I am sure he will correct me if I am.

Managers make decisions based on uncertain data. Some of that data is predictive data.

For example, I suggest that people provide, where necessary, order-of-magnitude estimates of projects and programs. Sometimes you need those estimates. Sometimes you don’t. (Yes, I have worked on programs where we didn’t need to estimate. We needed to execute and show progress.)

Now, here’s where I suspect Glen and I disagree:

  1. Asking people for detailed estimates at the beginning of a project and expecting those estimates to be true for the entire project. First, the estimates are guesses. Second, software is about learning. If you work in an agile way, you want to incorporate learning and change into the project or program. I have some posts about estimation in this blog queue where I discuss this.
  2. Using estimation for the project portfolio. I see no point in using estimates instead of value for the project portfolio, especially if you use agile approaches to your projects. If we finish features, we can end the project at any time. We can release it. This makes software different than any other type of project. Why not exploit that difference? Value makes much more sense. You can incorporate cost of delay into value.
  3. If you use your estimate as a target, you have some predictable outcomes unless you get lucky: you will shortchange the feature by decreasing scope, incur technical debt, or increase the defects. Or all three.

What works for projects is honest status reporting, which traffic lights don’t provide. Demos provide that. Transparency about obstacles provides that. The ability to be honest about how to solve problems and work through issues provides that.

Much has changed since I last worked on a DOD project. I’m delighted to see that Glen writes that many government projects are taking more agile approaches. However, if we always work on innovative, new work, we cannot predict with perfect estimation what it will take at the beginning, or even through the project. We can better our estimates as we proceed.

We can have a process for our work. Regardless of our approach, as long as we don’t do code-and-fix, we have one. (In Manage It! Your Guide to Modern, Pragmatic Project Management, I say to choose an approach based on your context, and to choose any lifecycle except for code-and-fix.)

We can refine our estimates, if management needs them. The question is this: why does management need them? For predicting future cost for a customer? Okay, that’s reasonable. Maybe on large programs, you do an estimate every quarter for the next quarter, based on what you completed, as in released, and what’s on the roadmap. You already know what you have done. You know what your challenges were. You can do better estimates. I would even do an EQF for the entire project/program. Nobody has an open spigot of money.

But, in my experience, the agile project or program will end before you expect it to. (See the comments on Capacity Planning and the Project Portfolio.) But, the project will only end early if you evaluate features based on value and if you collaborate with your customer. The customer will say, “I have enough now. I don’t need more.” It might occur before the last expected quarter. It might occur before the last expected half-year.

That’s the real ambiguity that managers need to manage. Our estimates will not be correct. Technical leaders, project managers and product owners need to manage risks and value so the project stays on track. Managers need to ask the question: What if the project or program ends early?

Ambiguity, anyone?

Categories: Project Management

Speaking in September

Coding the Architecture - Simon Brown - Thu, 08/28/2014 - 16:01

After a lovely summer (mostly) spent in Jersey, September is right around the corner and is shaping up to be a busy month. Here's a list of the events where you'll be able to find me.

It's going to be a fun month and besides, I have to keep up my British Airways frequent flyer status somehow, right? ;-)

Categories: Architecture

CocoaHeadsNL @ Xebia on September 16th

Xebia Blog - Thu, 08/28/2014 - 11:20

On Tuesday the 16th the Dutch CocoaHeads will be visiting us. It promises to be a great night for anybody doing iOS or OSX development. The night starts at 17:00, dinner at 18:00.

Are you an iOS/OSX developer who would like to meet fellow developers? Come join the CocoaHeads on September 16th at our office. More details are on the CocoaHeadsNL meetup page.

Software Development Conferences Forecast August 2014

From the Editor of Methods & Tools - Thu, 08/28/2014 - 08:38
Here is a list of software development related conferences and events on Agile (Scrum, Lean, Kanban), software testing and software quality, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & Tools software development magazine.
  • Agile on the Beach, September 4-5 2014, Falmouth in Cornwall, UK
  • SPTechCon, September 16-19 2014, Boston, USA (receive a $200 discount on a 4 or 3-day pass with code SHAREPOINT)
  • Future of Web Apps, September 29-October 1 2014, London, ...

Traceability: Putting the Model Into Action

Three core concepts.


My model for scaling traceability is based on an assumption that there is a relationship between customer involvement, criticality and complexity.  This yields the level of documentation required to achieve the benefits of traceability.  The model leverages an assessment of project attributes that define the three common concepts.  The concepts are:

  • Customer involvement in the project
  • Complexity of the functionality being delivered
  • Criticality of the project

A thumbnail definition of each of the three concepts begins with the concept of customer involvement, which is defined as the amount of time and effort applied to a project in a positive manner by the primary users of the project.  The second concept, complexity, is a measure of the number of project properties that are outside the normal expectations as perceived by the project team (the norm is relative to the organization or project group rather than to any external standard).  The final concept, criticality, is defined as the attributes defining quality, state or degree of being of the highest importance (again relative to the organization or group doing the work).  We will unpack these concepts and examine them in greater detail as we peel away the layers of the model.

The Model


The process for using the model is a simple set of steps.
  1. Get a project (and team members)
  2. Assess the project’s attributes
  3. Plot the results on the model
  4. Interpret the findings
  5. Reassess as needed

The model is built for project environments. Don’t have a project, you say?  Get one, I tell you! Can’t get one? This model will be less useful, but not useless.

Who Is Involved And When Will They Be Involved:

Implementing the traceability model assessment works best when the team (or a relevant subset) charged with doing the work conducts the assessment of project attributes.  The use of team members acts to turn Putt’s theory of “Competence Inversion” on its head by focusing project-level competencies on defining the impact of specific attributes.  The use of a number of team members will provide a basis for consistency if assessments are performed again later in the project.

While the assessment process is best done by a cross-functional team, it can also be performed by those in the project governance structure alone.  The smaller the group that is involved in the assessment, the more open and honest the communication between the assessment group and the project team must be, or the exercise will be just another process inflicted on the team.  Regardless of the size, the assessment team needs to include technical competence.  Technical competence is especially useful when appraising complexity.  Technical competence is also a good tool to sell the results of the process to the rest of the project team.  Regardless of the deployment model, the diversity of thought generated in cross-functional groups will provide the breadth of knowledge needed to apply the model (this suggestion is based on feedback from process users).  The use of cross-functional groups becomes even more critical for large projects and/or projects with embedded sub-projects.  In a situation where the discussion will be contentious or the group participating will be large, I suggest using a facilitator to ensure an effective outcome.

An approach I suggest for integrating the assessment process into your current methodology is to incorporate the assessment as part of your formal risk assessment.  An alternative for smaller projects is to perform the assessment process during the initial project planning activities or in a sprint zero (if used).  This will minimize the impact of yet another assessment.

In larger projects where the appraisal outcome may vary across teams or sub-projects, thoughtful discussion will be required to determine whether the lowest common denominator will drive the results or whether a mixed approach is needed.  Use of this method in the real world suggests that in large projects/programs the highest or lowest common denominator is seldom universally useful.  The need for scalability should be addressed at the level where it makes sense for the project, which may mean that sub-projects are treated differently.


Categories: Process Management

What is your next step in Continuous Delivery? Part 1

Xebia Blog - Wed, 08/27/2014 - 21:15

Continuous Delivery helps you deliver software faster, with better quality and at lower cost. Who doesn't want to deliver software faster, better and cheaper? I certainly want that!

No matter how good you are at Continuous Delivery, you can always do one step better. Even if you are as good as Google or Facebook, you can still do one step better. Myself included, I can do one step better.

But also if you are just getting started with Continuous Delivery, there is a feasible step to take you forward.

In this series, I describe a plan that helps you determine where you are right now and what your next step should be. To be complete, I'll start at the very beginning. I expect most of you have passed the first steps already.

The steps you already took

This is the first part in the series: What is your next step in Continuous Delivery? I'll start with three steps combined in a single post. This is because the great majority of you have gone through these steps already.

Step 0: Your very first lines of code

Do you remember the very first lines of code you wrote? Perhaps as a student or maybe before that as a teenager? Did you use version control? Did you bring it to a test environment before going to production? I know I did not.

None of us was born with an innate skill for delivering software in a certain way. However, many of us are taught a certain way of delivering software that is still a long way from Continuous Delivery.

Step 1: Version control

At some point during your study or career, you have been introduced to Version Control. I remember starting with CVS, migrating to Subversion and I am currently using Git. Each of these systems is an improvement over the previous one.

It is common to store the source code for your software in version control. Do you already have definitions or scripts for your infrastructure in version control? And for your automated acceptance tests or database schemas? In later steps, we'll get back to that.

Step 2: Release process

Your current release process may be far from Continuous Delivery. Despite appearances, your current release process is a useful step towards Continuous Delivery.

Even if you deliver to production less than twice a year, you are better off than a company that delivers their code unpredictably, untested and unmanaged. Or worse, a company that edits their code directly on a production machine.

In your delivery process, you have planning, control, a production-like testing environment, actual testing and maintenance after the go-live. The main difference with Continuous Delivery is the frequency and the amount of software that is released at the same time.

So yes, a release process is a productive step towards Continuous Delivery. Now let's see if we can optimize beyond this manual release process.

Step 3: Scripts

Imagine you have issues on your production server... Who do you go to for help? Do you have someone in mind?

Let me guess, you are thinking about a middle-aged guy who has been working at your organisation for 10+ years. Even if your organization is only 3 years old, I bet he's been working there for more than 10 years. Or at least, it seems like it.

My next guess is that this guy wrote some scripts to automate recurring tasks and make his life easier. Am I right?

These scripts are an important step towards Continuous Delivery. In fact, Continuous Delivery is all about automating repetitive tasks. The only thing that falls short is that these scripts are a one-man initiative. It is a good initiative, but there is no strategy behind it and a lack of management support.

If you don't have this guy working for you, then you may have a bigger step to take when continuing towards the next step of Continuous Delivery. To successfully adopt Continuous Delivery in the long run, you are going to need someone like him.

Following steps

In the next parts, we will look at the following steps towards becoming a world champion at delivering software:

  • Step 4: Continuous Delivery
  • Step 5: Continuous Deployment
  • Step 6: "Hands-off"
  • Step 7: High Scalability

Stay tuned for the following posts.

How To Use Personas and Scenarios to Drive Adoption and Realize Value

Personas and scenarios can be powerful tools for driving adoption and business value realization.

All too often, people deploy technology without fully understanding the users that it’s intended for. 

Worse, if the technology does not get used, the value does not get realized.

Keep in mind that the value is in the change.  

The change takes the form of doing something better, faster, cheaper, and behavior change is really the key to value realization.

If you deploy a technology, but nobody adopts it, then you won’t realize the value.  It’s a waste.  Or, more precisely, it’s only potential value.  It’s only potential value because nobody has used it to change their behavior to be better, faster, or cheaper with the new technology.  

In fact, you can view change in terms of behavior changes:

What should users START doing or STOP doing, in order to realize the value?

Behavior change becomes a useful yardstick for evaluating adoption and consumption of technology, and a significant proxy for value realization.

What is a Persona?

I’ve written about personas before  in Actors, Personas, and Roles, MSF Agile Persona Template, and Personas at patterns & practices, and Microsoft Research has a whitepaper called Personas: Practice and Theory.

A persona, simply defined, is a fictitious character that represents a user type.  Personas are the “who” in the organization.    You use them to create familiar faces and to inspire project teams to know their clients, as well as to build empathy and clarity around the user base. 

Using personas helps characterize sets of users.  It’s a way to capture and share details about what a typical day looks like and what sorts of pains, needs, and desired outcomes the personas have as they do their work. 

You need to know how work currently gets done so that you can provide relevant changes with technology, plan for readiness, and drive adoption through specific behavior changes.

Using personas can help you realize more value, while avoiding “value leakage.”

What is a Scenario?

When it comes to users, and what they do, we're talking about usage scenarios.  A usage scenario is a story or narrative in the form of a flow.  It shows how one or more users interact with a system to achieve a goal.

You can picture usage scenarios as high-level storyboards.  Here is an example:

[Example solution storyboard]

In fact, since scenario is often an overloaded term, if people get confused, I just call them Solution Storyboards.

To figure out relevant usage scenarios, we need to figure out the personas that we are creating solutions for.

Workforce Analysis with Personas

In practice, you would segment the user population, and then assign personas to the different user segments.  For example, let’s say there are 20,000 employees.  Let’s say that 3,000 of them are business managers, 6,000 of them are sales people, and 1,000 of them are product development engineers.   You could create a persona named Mary to represent the business managers, a persona named Sally to represent the sales people, and a persona named Bob to represent the product development engineers.

This sounds simple, but it’s actually powerful.  If you do a good job of workforce analysis, you can better determine how many users a particular scenario is relevant for.  Now you have some numbers to work with.  This can help you quantify business impact.   This can also help you prioritize.  If a particular scenario is relevant for 10 people, but another is relevant for 1,000, you can evaluate actual numbers.
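To make the numbers concrete, here is a small sketch (in R, using the hypothetical populations above and three of the scenario mappings from the table below) that sums the population of each persona a scenario is relevant for, giving a rough reach figure to prioritize against:

# hypothetical persona populations from the workforce analysis
population = c(Mary = 3000, Sally = 6000, Bob = 1000, Jill = 5000, Jack = 5000)
 
# which personas each scenario is relevant for (taken from the matrix below)
scenarios = list(
  "Scenario 1" = c("Mary"),
  "Scenario 2" = c("Mary", "Sally"),
  "Scenario 6" = c("Mary", "Sally", "Bob", "Jill", "Jack")
)
 
# total number of users each scenario could reach
sapply(scenarios, function(p) sum(population[p]))
#  Scenario 1  Scenario 2  Scenario 6
#        3000        9000       20000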

                 Persona 1   Persona 2   Persona 3   Persona 4   Persona 5
                 “Mary”      “Sally”     “Bob”       “Jill”      “Jack”
User Population  3,000       6,000       1,000       5,000       5,000
Scenario 1       X           -           -           -           -
Scenario 2       X           X           -           -           -
Scenario 3       -           -           X           -           -
Scenario 4       -           -           -           X           X
Scenario 5       X           -           -           -           -
Scenario 6       X           X           X           X           X
Scenario 7       X           X           -           -           -
Scenario 8       -           -           X           X           -
Scenario 9       X           X           X           X           X
Scenario 10      -           X           -           X           -

Analyzing a Persona

Let’s take Bob for example.  As a product development engineer, Bob designs and develops new product concepts.  He would love to collaborate better with his distributed development team, and he would love better feedback loops and interaction with real customers.

We can drill in a little bit to get a better picture of his work as a product development engineer. 

Here are a few ways you can drill in:

  • A Day in the Life – We can shadow Bob for a day and get a feel for the nature of his work.  We can create  a timeline for the day and characterize the types of activities that Bob performs.
  • Knowledge and Skills - We can identify the knowledge Bob needs and the types of skills he needs to perform his job well.  We can use this as input to design more effective readiness plans.
  • Enabling Technologies –  Based on the scenario you are focused on, you can evaluate the types of technologies that Bob needs.  For example, you can identify what technologies Bob would need to connect and interact better with customers.

Another approach is to focus on the roles, responsibilities, challenges, work-style, needs and wants.  This helps you understand which solutions are appropriate, what sort of behavior changes would be involved, and how much readiness would be required for any significant change.

At the end of the day, it always comes down to building empathy, understanding, and clarity around pains, needs, and desired outcomes.

Persona Creation Process

Here’s an example of a high-level process for persona creation:

  1. Kickoff workshop
  2. Interview users
  3. Create skeletons
  4. Validate skeletons
  5. Create final personas
  6. Present final personas

Doing persona analysis is actually pretty simple.  The challenge is that people don’t do it, or they make a lot of assumptions about what people actually do and what their pains and needs really are.  When’s the last time somebody asked you what your pains and needs are, or what you need to perform your job better?

A Story of Using Personas to Create the Future of Digital Banking

In one example I know of, a large bank transformed itself by focusing on its personas and scenarios.  

It started with one usage scenario:

Connect with customers wherever they are.

This scenario was driven from pain in the business.  The business was out of touch with customers, and it was operating under a legacy banking model.   This simple scenario reflected an opportunity to change how employees connect with customers (through Cloud, Mobile, and Social).

On the customer side of the equation, customers could now have virtual face-to-face communication from wherever they are.  On the employee side, it enabled a flexible work-style, helped employees pair up with each other for great customer service, and provided better touch and connection with the customers they serve.

And in the grand scheme of things, this helped transform a brick-and-mortar bank to a digital bank of the future, setting a new bar for convenience, connection, and collaboration.

Here is a video that talks through the story of one bank’s transformation to the digital banking arena:

Video: NedBank on The Future of Digital Banking

In the video, you’ll see Blessing Sibanyoni, one of Microsoft’s Enterprise Architects in action.

If you’re wondering how to change the world, you can start with personas and scenarios.

You Might Also Like

Scenarios in Practice

How I Learned to Use Scenarios to Evaluate Things

How Can Enterprise Architects Drive Business Value the Agile Way?

Business Scenarios for the Cloud

IT Scenarios for the Cloud

Categories: Architecture, Programming

The 1.2M Ops/Sec Redis Cloud Cluster Single Server Unbenchmark

This is a guest post by Itamar Haber, Chief Developers Advocate, Redis Labs.

While catching up with the world the other day, I read through the High Scalability guest post by Anshu and Rajkumar from Aerospike (great job btw). I really enjoyed the entire piece and was impressed by the heavy tweaking that they did to their EC2 instance to get to the 1M mark, but I kept wondering - how would Redis do?

I could have done a full-blown benchmark. But doing a full-blown benchmark is a time- and resource-consuming ordeal. And that's without taking into account the initial difficulties of comparing apples, oranges and other sorts of fruits. A real benchmark is a trap, for it is no more than an effort deemed from inception to be backlogged. But I wanted an answer, and I wanted it quick, so I was willing to make a few sacrifices to get it. That meant doing the next best thing - an unbenchmark.

An unbenchmark is, by (my very own) definition, nothing like a benchmark (hence the name). In it, you cut every corner and relax every assumption to get a quick 'n dirty ballpark figure. Leaning heavily on the expertise of the guys in our labs, we measured the performance of our Redis Cloud software without any further optimizations. We ran our unbenchmark with the following setup:

Categories: Architecture

Quote of the Day - All Things Project Are Probabilistic

Herding Cats - Glen Alleman - Wed, 08/27/2014 - 16:28

As far as the laws of mathematics refer to reality, they are not certain, as far as they are certain, they do not refer to reality.

— Albert Einstein, quoted in Sherwin, Ronald Paul (2014-08-17). The Tao of Systems Engineering: An Engineer's Survival Guide (pp. 195-197). Kindle Edition.

Whenever you hear that we can't predict the future, think again. We can always predict the future. The level of confidence in that future is what is in question.

When you hear that estimating is guessing, think again. That person doesn't understand probability and statistics. When you hear that we don't need to predict to make decisions, that person has very little at risk from that decision, since making decisions in the absence of knowing the possible loss ignores the principles of the microeconomics of everyday life.

Whenever you hear that we don't need to estimate the outcomes of our decisions, think again. We don't need to estimate those outcomes only if they are of low enough value that we don't care about the consequences of not knowing, to some level of confidence, what happens as a result of our decision. We're willing to write off our loss if we're wrong.

When we hear any conjecture that involves mathematics that does not address the foundation of the mathematical principles of that discussion, remember Einstein, and also remember how to apply that advice in the specific domain and context of the question guided by Deming.

Management is Prediction 


Since management is prediction, knowing how to make predictions using statistical methods to produce a confidence interval about the probabilistic outcomes of those business decisions is part of management. When we want a seat at the table where management decisions are being made, knowing this and being able to add value to the decision process is the price of entry to that room. Otherwise we're labor sitting outside the room waiting for the decisions to be made.
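To make the point concrete, here is a small Monte Carlo sketch in R (my own illustration, not from the post). It turns hypothetical three-point task estimates into an 80% confidence interval for the total duration, which is the kind of probabilistic statement a decision maker can act on:

# inverse-CDF sampling from a triangular distribution with min a, max b, mode c
rtri = function(n, a, b, c) {
  u = runif(n)
  ifelse(u < (c - a) / (b - a),
         a + sqrt(u * (b - a) * (c - a)),
         b - sqrt((1 - u) * (b - a) * (b - c)))
}
 
# hypothetical three-point estimates (min, most likely, max) in days for four tasks
tasks = data.frame(min = c(2, 5, 1, 3), mode = c(4, 8, 2, 5), max = c(8, 15, 4, 10))
 
n = 10000
totals = rowSums(sapply(seq_len(nrow(tasks)), function(i)
  rtri(n, tasks$min[i], tasks$max[i], tasks$mode[i])))
 
quantile(totals, c(0.1, 0.9))  # 80% confidence interval for the total duration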
 

Categories: Project Management

Navigating the Way Home

Herding Cats - Glen Alleman - Wed, 08/27/2014 - 03:10

I came across a nice blog post from DelancyPlace about the navigation powers of birds, in this case a Manx shearwater.

This bird was taken from Wales to Venice, Italy, released, and found its way home in 14 days: 930 miles, over mountains.

To be able to find their way home from an unfamiliar place, birds must carry a figurative map and compass in their brains.

The map tells them where they are, and the compass tells them which direction to fly, even when they are released with no frame of reference to their home loft.

Projects Are Not Birds

As project managers, what's our map and compass? How can we navigate from the start of the project to the end, even though we haven't been on this path before?

How can we find our way Home?

We have a map. It starts with a Capabilities Based Plan. The CBP states what Done looks like in units of measure meaningful to the decision makers. These units of measure are Measures of Effectiveness and Measures of Performance.

  • Measures of Effectiveness - are operational measures of success that are closely related to the achievements of the mission or operational objectives evaluated in the operational environment, under a specific set of conditions.

  • Measures of Performance - characterize physical or functional attributes relating to the system operation, measured or estimated under specific conditions.

These measures speak to our home and the attributes of that home. The map that gets us home is the Integrated Master Plan. This shows the increasing maturity of the deliverables that implement the Measures of Performance and those Performance items that enable the project to produce the needed capabilities that effectively accomplish the mission or fulfill the business need. 

Here is what a map of increasing value delivery looks like for an insurance company. The map shows the path, or actually paths, home. Home is the ability to generate value from the exchange of money to develop the software.

Project Maturity Flow is the Incremental Delivery of Business Value

Related articles:
  • Golden Ratio
  • Managing In The Presence Uncertainty
  • Impact Mapping and Integrated Master Planning
  • We Can Know the Business Value of What We Build
  • All Project Work is Probabilistic Work
  • 5 Questions That Need Answers for Project Success
Categories: Project Management

Traceability: An Approach Mixing CMMI and Agile

Traceability becomes a tool that can bridge the gaps caused by less than perfect involvement.


Traceability is an important tool in software engineering and a core tenet of the CMMI.  It is used as a tool for the management and control of requirements. Controlling and understanding the flow of requirements puts a project manager’s hand on the throttle of the project by allowing and controlling the flow of work through a project. However, it is both hard to accomplish and requires a focused application to derive value. When does the control generated represent the proper hand on the throttle or a lead foot on the brake?

The implementation of traceability sets the stage for the struggle over processes mandated by management or the infamous “model”.  Developers actively resist process when they perceive that the effort isn’t directly leading to functionality that can be delivered, and therefore isn’t delivering value to their customers.  In the end, traceability, like insurance, is best when you don’t need the information it provides to sort out uncontrolled project changes or the delivery of functionality not related to requirements.

Identifying both the projects and the audience that can benefit from traceability is paramount for implementing and sustaining the process.  Questions that need to be asked and addressed include:

  • Is the need for control for all types of projects the same?
  • Is the value-to-effort ratio from tracing requirements the same for all projects?
  • What should be evaluated when determining whether to scale the traceability process?

Scalability is a needed step to extract the maximum value from any methodology component, traceability included, regardless of whether the project is plan-driven or Agile. A process is needed to ensure that traceability occurs based on a balance between process, effort and complexity.

The concept of traceability acts as a lightning rod for the perceived excesses of the CMMI (and by extension all other model-based improvement methods).  I will explore a possible approach for scaling traceability.  My approach bridges the typical approach (leveraging matrices and requirement tools) with an approach that trades documentation for intimate user involvement. It uses a simple set of three criteria (complexity, user involvement and criticality) to determine where a project should focus its traceability effort on a continuum between documentation and involvement.

Traceability becomes a tool that can bridge the gaps caused by less-than-perfect involvement, a complex project, and increased criticality.  The model we will propose provides a means to apply traceability in a scaled manner so that it fits a project’s needs and is not perceived as a one-size-fits-all approach.


Categories: Process Management

Chrome - Firefox WebRTC Interop Test - Pt 1

Google Testing Blog - Tue, 08/26/2014 - 22:09
by Patrik Höglund

WebRTC enables real time peer-to-peer video and voice transfer in the browser, making it possible to build, among other things, a working video chat with a small amount of Python and JavaScript. As a web standard, it has several unusual properties which makes it hard to test. A regular web standard generally accepts HTML text and yields a bitmap as output (what you see in the browser). For WebRTC, we have real-time RTP media streams on one side being sent to another WebRTC-enabled endpoint. These RTP packets have been jumping across NAT, through firewalls and perhaps through TURN servers to deliver hopefully stutter-free and low latency media.

WebRTC is probably the only web standard in which we need to test direct communication between Chrome and other browsers. Remember, WebRTC builds on peer-to-peer technology, which means we talk directly between browsers rather than through a server. Chrome, Firefox and Opera have announced support for WebRTC so far. To test interoperability, we set out to build an automated test to ensure that Chrome and Firefox can get a call up. This article describes how we implemented such a test and the tradeoffs we made along the way.

Calling in WebRTC

Setting up a WebRTC call requires passing SDP blobs over a signaling connection. These blobs contain information on the capabilities of the endpoint, such as what media formats it supports and what preferences it has (for instance, perhaps the endpoint has VP8 decoding hardware, which means the endpoint will handle VP8 more efficiently than, say, H.264). By sending these blobs the endpoints can agree on what media format they will be sending between themselves and how to traverse the network between them. Once that is done, the browsers will talk directly to each other, and nothing gets sent over the signaling connection.

Figure 1. Signaling and media connections.
How these blobs are sent is up to the application. Usually the browsers connect to some server which mediates the connection between the browsers, for instance by using a contact list or a room number. The AppRTC reference application uses room numbers to pair up browsers and sends the SDP blobs from the browsers through the AppRTC server.
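To make the offer/answer exchange a little more concrete, here is a minimal JavaScript sketch of the caller side. The signalingChannel object is a hypothetical stand-in for whatever transport (such as the AppRTC server) carries the SDP blobs between the peers, and ICE candidate exchange is omitted for brevity; this is an illustration, not the AppRTC code.

// Minimal sketch of the SDP offer/answer exchange on the caller side.
// A null config is enough for a localhost-only sketch; real calls would
// list STUN/TURN servers. 2014-era browsers may need a vendor-prefixed
// constructor such as webkitRTCPeerConnection.
var pc = new RTCPeerConnection(null);

pc.createOffer(function(offer) {
  pc.setLocalDescription(offer);
  // The SDP blob travels to the other endpoint over the signaling connection.
  signalingChannel.send(JSON.stringify(offer));
}, function(error) {
  console.log('createOffer failed: ' + error);
});

// When the peer's SDP answer arrives over signaling, install it so the
// browsers can start talking directly to each other.
signalingChannel.onmessage = function(message) {
  var answer = JSON.parse(message.data);
  pc.setRemoteDescription(new RTCSessionDescription(answer));
};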

Test Design

Instead of designing a new signaling solution from scratch, we chose to use the AppRTC application we already had. This has the additional benefit of testing the AppRTC code, which we are also maintaining. We could also have used the small peerconnection_server binary and some JavaScript, which would give us additional flexibility in what to test. We chose to go with AppRTC since it effectively implements the signaling for us, leading to much less test code.

We assumed we would be able to get hold of the latest nightly Firefox and be able to launch that with a given URL. For the Chrome side, we assumed we would be running in a browser test, i.e. on a complete Chrome with some test scaffolding around it. For the first sketch of the test, we imagined just connecting the browsers to the live apprtc.appspot.com with some random room number. If the call got established, we would be able to look at the remote video feed on the Chrome side and verify that video was playing (for instance using the video+canvas grab trick). Furthermore, we could verify that audio was playing, for instance by using WebRTC getStats to measure the audio track energy level.
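As a rough sketch (not the actual test code), the video check can grab frames from the remote video element onto a canvas and compare two snapshots, while getStats can report the audio track’s energy. The element reference, the 500 ms delay and the 'audioOutputLevel' stat name below are assumptions based on Chrome’s legacy callback-style stats API and may differ between browser versions.

// Sketch: decide whether remote video is actually updating by sampling the
// <video> element onto a canvas twice and comparing the snapshots.
function checkVideoIsPlaying(videoElement, callback) {
  var canvas = document.createElement('canvas');
  canvas.width = videoElement.videoWidth;
  canvas.height = videoElement.videoHeight;
  var context = canvas.getContext('2d');
  context.drawImage(videoElement, 0, 0);
  var firstSnapshot = canvas.toDataURL();
  setTimeout(function() {
    context.drawImage(videoElement, 0, 0);
    callback(firstSnapshot !== canvas.toDataURL());
  }, 500);  // Give the stream time to render new frames.
}

// Sketch: look for audio activity in the getStats reports (legacy Chrome API).
peerConnection.getStats(function(response) {
  response.result().forEach(function(report) {
    var level = report.stat('audioOutputLevel');
    if (level && parseInt(level, 10) > 0)
      console.log('Audio energy detected on the remote stream');
  });
});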

Figure 2. Basic test design.
However, since we like tests to be hermetic, this isn’t a good design. I can see several problems. For example, the test would break if the network between us and AppRTC were unreliable. And what if someone else had already occupied myroomid? In either case the test would fail and we would be none the wiser. So to make this work, we would have to find some way to bring up the AppRTC instance on localhost to make our test hermetic.

Bringing up AppRTC on localhost

AppRTC is a Google App Engine application. As this hello world example demonstrates, one can test applications locally with
google_appengine/dev_appserver.py apprtc_code/

So why not just call this from our test? It turns out we need to solve some complicated problems first, like how to ensure the AppEngine SDK and the AppRTC code are actually available on the executing machine, but we’ll get to that later. Let’s assume for now that this stuff is just available. We can now write the browser test code to launch the local instance:
bool LaunchApprtcInstanceOnLocalhost() {
  // ... Figure out locations of SDK and apprtc code ...
  CommandLine command_line(CommandLine::NO_PROGRAM);
  EXPECT_TRUE(GetPythonCommand(&command_line));

  command_line.AppendArgPath(appengine_dev_appserver);
  command_line.AppendArgPath(apprtc_dir);
  command_line.AppendArg("--port=9999");
  command_line.AppendArg("--admin_port=9998");
  command_line.AppendArg("--skip_sdk_update_check");

  VLOG(1) << "Running " << command_line.GetCommandLineString();
  return base::LaunchProcess(command_line, base::LaunchOptions(),
                             &dev_appserver_);
}

That’s pretty straightforward [1].

Figuring out Whether the Local Server is Up

Then we ran into a very typical test problem. We have the code to get the server up, and launching the two browsers to connect to http://localhost:9999?r=some_room is easy. But how do we know when to connect? When I first ran the test, it would sometimes work and sometimes not, depending on whether the server had had time to come up.

It’s tempting in these situations to just add a sleep to give the server time to get up. Don’t do that. That will result in a test that is flaky and/or slow. In these situations we need to identify what we’re really waiting for. We could probably monitor the stdout of the dev_appserver.py and look for some message that says “Server is up!” or equivalent. However, we’re really waiting for the server to be able to serve web pages, and since we have two browsers that are really good at connecting to servers, why not use them? Consider this code.
bool LocalApprtcInstanceIsUp() {
  // Load the admin page and see if we manage to load it right.
  ui_test_utils::NavigateToURL(browser(), GURL("localhost:9998"));
  content::WebContents* tab_contents =
      browser()->tab_strip_model()->GetActiveWebContents();
  std::string javascript =
      "window.domAutomationController.send(document.title)";
  std::string result;
  if (!content::ExecuteScriptAndExtractString(tab_contents,
                                              javascript,
                                              &result))
    return false;

  return result == kTitlePageOfAppEngineAdminPage;
}

Here we ask Chrome to load the AppEngine admin page for the local server (we set the admin port to 9998 earlier, remember?) and ask it what its title is. If that title is “Instances”, the admin page has been displayed, and the server must be up. If the server isn’t up, Chrome will fail to load the page and the title will be something like “localhost:9998 is not available”.

Then, we can just do this from the test:
while (!LocalApprtcInstanceIsUp())
  VLOG(1) << "Waiting for AppRTC to come up...";

If the server never comes up, for whatever reason, the test will just time out in that loop. If it comes up, we can safely proceed with the rest of the test.

Launching the Browsers

A browser window launches itself as a part of every Chromium browser test. It’s also easy for the test to control the command line switches the browser will run under.

We have less control over the Firefox browser since it is the “foreign” browser in this test, but we can still pass command-line options to it when we invoke the Firefox process. To make this easier, Mozilla provides a Python library called mozrunner. Using that, we can set up a launcher Python script that we can invoke from the test:
from mozprofile import profile
from mozrunner import runner

WEBRTC_PREFERENCES = {
    'media.navigator.permission.disabled': True,
}

def main():
    # Set up flags, handle SIGTERM, etc
    # ...
    firefox_profile = profile.FirefoxProfile(
        preferences=WEBRTC_PREFERENCES)
    firefox_runner = runner.FirefoxRunner(
        profile=firefox_profile, binary=options.binary,
        cmdargs=[options.webpage])

    firefox_runner.start()

Notice that we need to pass special preferences to make Firefox accept the getUserMedia prompt. Otherwise, the test would get stuck on the prompt and we would be unable to set up a call. Alternatively, we could employ some kind of clickbot to click “Allow” on the prompt when it pops up, but that is way harder to set up.

Without going into too much detail, the code for launching the browsers becomes
GURL room_url =
    GURL(base::StringPrintf("http://localhost:9999?r=room_%d",
                            base::RandInt(0, 65536)));
content::WebContents* chrome_tab =
    OpenPageAndAcceptUserMedia(room_url);
ASSERT_TRUE(LaunchFirefoxWithUrl(room_url));

Where LaunchFirefoxWithUrl essentially runs this:
run_firefox_webrtc.py --binary /path/to/firefox --webpage http://localhost:9999?r=my_room

Now we can launch the two browsers. Next time we will look at how we actually verify that the call worked, and how we actually download all resources needed by the test in a maintainable and automated manner. Stay tuned!

[1] The explicit ports are there because the default ports collided on the bots we were running on, and the --skip_sdk_update_check was added because the SDK would otherwise stop and prompt us whenever an update was available.

Categories: Testing & QA