
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

The Heavyweight Championship of Agile

Without a framework, the building falls.

Team-based frameworks, techniques and methods have dominated the discussion over the short history of Agile as a named movement. Concepts like Scrum and Extreme Programming (just two of the more prominent frameworks) caught the interest of software developers and IT management for many reasons, including the promise of customer responsiveness, faster delivery rates, increased productivity and a reaction to the implementation excesses of heavyweight frameworks, such as the CMMI. The excesses of the latter still resound in the minds of many developers, process improvement specialists and IT leaders. The fear of excess overhead and administration has dampened the discussion and exploration of techniques to scale Agile to address program-level and enterprise issues. In the past few years, three primary frameworks have emerged to vie for the heavyweight championship of Agile. They are:

  1. Dynamic Systems Development Method (DSDM). DSDM is a project delivery/system development method. Originally released in 1994, DSDM predates the Agile Manifesto and was one of the frameworks that drove the evolution of lightweight incremental development. DSDM takes a broader view of projects and integrates other lifecycle steps into the framework on top of a developer-led Scrum team. The framework has been tailored to integrate PRINCE2 (the European project management standard) as a project and program management tool. DSDM is open source. (http://www.dsdm.org/)
  2. Disciplined Agile Delivery (DAD). DAD is a framework that incorporates concepts from many of the standard techniques and frameworks (Scrum, lean, Agile modeling and Agile Unified Process, to name a few) to develop an organizational model. Given that DAD was developed and introduced after the introduction of Agile-as-a-movement, DAD explicitly reflects the Agile principles espoused in the Agile Manifesto while addressing organizational scaling concerns. DAD has been influenced and championed by Scott Ambler (Disciplined Agile Delivery: A Practitioner’s Guide to Agile, 2012) and IBM. The model is proprietary. (http://disciplinedagiledelivery.com/)
  3. Scaled Agile Framework Enterprise (SAFe). SAFe leverages Kanban as a tool to introduce organizational portfolio management, and then evolves those concepts down through programs and projects to Scrum teams. SAFe explicitly addresses the necessity for architecture to develop in lockstep with functionality. As with DSDM and DAD, SAFe’s goal is to provide a framework that scales Agile in a manner that addresses the concerns and needs of the overall business, IT and development organizations. SAFe is a proprietary framework that was originally developed by Dean Leffingwell and is now supported by the company Scaled Agile. (http://scaledagileframework.com/)

There is a huge amount of controversy in the Agile community about the need for frameworks for scaling Agile, as evidenced by shouting matches at conferences and the vitriol on message boards. However, many large commercial and governmental organizations are using DSDM, DAD and SAFe as mechanisms to scale Agile. Shouting is not going to stop what many organizations view as a valuable mechanism to deliver functionality to their customers.


Categories: Process Management

Why 'Why' Is Everything

Xebia Blog - Mon, 10/06/2014 - 20:46

The 'Why' part is perhaps the most important aspect of a user story. It links to the sprint goal, which ultimately links to the product vision and the organisation's vision.

Lately, I was reminded of the truth of this statement. My youngest son is part of a soccer team and they have training every week. Part of the training consists of exercises that use a so-called speedladder.

[Photo: a speedladder]

After the training, while driving home, I asked him what he especially liked about the training and what he wants to do differently next time. This time he answered that he didn't like the training at all. So I asked him what part he disliked: "The speedladder. It is such a stupid thing to do." Although I realised it was a poor man's answer, I told him that some parts are not that nice and he needs to accept that: practising is not always fun. I wasn't happy with the answer but couldn't think of a better one.

Some time passed, and then I overheard the trainers explaining to each other that the speedladder is for improving 'footwork', coordination, and sensory development. Then I got an idea!
I knew that his ambition is to become as good as Messi :-) so when we got home I explained this to my son and told him that the speedladder helps him improve his feints and quick moves. I noticed his twinkling eyes and he enthusiastically replied: "Dad, can we buy a speedladder so I can practise at home?". Of course I did buy one! Since then the speedladder has been his favourite part of the soccer training!

Summary

The goal, the purpose, the 'Why' is the most important thing for persons and teams. Communicating it clearly to the team is one of the most important things a product owner and organisation need to do in order to get high-performing teams.

Quote of the Day

Herding Cats - Glen Alleman - Mon, 10/06/2014 - 17:52

The unassisted hand, and the understanding left to itself, possess but little power. Effects are produced by the means of instruments and helps, which the understanding requires no less than the hand. And as instruments either promote or regulate the motion of the hand, so those that are applied to the mind prompt or protect the understanding. - Novum Organum Scientiarum (Aphorisms Concerning the Interpretation of Nature and the Kingdom of Man), Francis Bacon (1561 - 1626).

When we hear that we can make decisions in the absence of estimating the impacts of those decisions, the cost when complete, or the lost opportunity cost of making alternative decisions, think of Bacon.

He is essentially saying: show me the money.

[Slides: Control Systems, from Glen Alleman]
Categories: Project Management

Fiddling with ReactiveUI and Xamarin.Forms

Eric.Weblog() - Eric Sink - Mon, 10/06/2014 - 17:00

I'm no expert at ReactiveUI. I'm just fiddling around in the back row of a session here at Xamarin Evolve. :-)

The goal

I want an instance of my DataGrid control that is bound to a collection of objects. And I want the display to automatically update when I change a property on one of those objects.

For the sake of this quickie demo, I'm gonna add a tap handler for a cell that simply appends an asterisk to the text of that cell. That should end up causing a display update.

Something like this code snippet:


Main.SingleTap += (object sender, CellCoords e) => {
    T r = Rows [e.Row];
    ColumnInfo ci = Columns[e.Column];
    var typ = typeof(T);
    var ti = typ.GetTypeInfo();
    var p = ti.GetDeclaredProperty(ci.PropertyName);
    if (p != null)
    {
        var val = p.GetValue(r);
        p.SetValue(r, val.ToString() + "*");
    }
};

Actually, that looks complicated, because I've got some reflection code that figures out which property on my object corresponds to which column. Ignore the snippet above and think of it like this (for a tap in column 0):

Main.SingleTap += (object sender, CellCoords e) => {
    WordPair r = Rows [e.Row];
    r.en += "*";
};

Which will make more sense if you can see the class I'm using to represent a row:

public class WordPair
{
    public string en { get; set; }
    public string sp { get; set; }
}

Or rather, that's what it looked like before I started adapting it for ReactiveUI. Now it needs to notify somebody when its properties change, so it looks more like this:

public class WordPair : ReactiveObject
{
    private string _en;
    private string _sp;

    public string en {
        get { return _en; }
        set { this.RaiseAndSetIfChanged (ref _en, value); }
    }
    public string sp {
        get { return _sp; }
        set { this.RaiseAndSetIfChanged (ref _sp, value); }
    }
}

So, basically I want a DataGrid which is bound to a ReactiveList. But actually, I want it to be more generic than that. I want WordPair to be a type parameter.

So my DataGrid subclass of Xamarin.Forms.View has a type parameter for the type of the row:

public class ColumnishGrid<T> : Xamarin.Forms.View where T : class

And the ReactiveList<T> is stored in a property of that View:

public static readonly BindableProperty RowsProperty = 
    BindableProperty.Create<ColumnishGrid<T>,ReactiveList<T>>(
        p => p.Rows, null);

public ReactiveList<T> Rows {
    get { return (ReactiveList<T>)GetValue(RowsProperty); }
    set { SetValue(RowsProperty, value); } // TODO disallow invalid values
}

And the relevant portions of the code to build a Xamarin.Forms content page look like this:

var mainPage = new ContentPage {
    Content = new ColumnishGrid<WordPair> {

        ...

        Rows = new ReactiveList<WordPair> {
            new WordPair { en = "drive", sp = "conducir" },
            new WordPair { en = "speak", sp = "hablar" },
            new WordPair { en = "give", sp = "dar" },
            new WordPair { en = "be", sp = "ser" },
            new WordPair { en = "go", sp = "ir" },
            new WordPair { en = "wait", sp = "esperar" },
            new WordPair { en = "live", sp = "vivir" },
            new WordPair { en = "walk", sp = "andar" },
            new WordPair { en = "run", sp = "correr" },
            new WordPair { en = "sleep", sp = "dormir" },
            new WordPair { en = "want", sp = "querer" },
        }
    }
};

The implementation of ColumnishGrid contains the following snippet, which will be followed by further explanation:

IRowList<T> rowlist = new RowList_Bindable_ReactiveList<T>(this, RowsProperty);

IValuePerCell<string> vals = new ValuePerCell_RowList_Properties<string,T>(rowlist, propnames);

IDrawCell<IGraphics> dec = new DrawCell_Text (vals, fmt);

In DataGrid, a RowList is an interface used for binding some data type that represents a whole row.

public interface IRowList<T> : IPerCell
{
    bool get_value(int r, out T val);
}

A RowList could be an array of something (like strings), using the column number as an index. But in this case, I am using a class, with each property mapped to a column.

A ValuePerCell object is used anytime DataGrid needs, er, a value per cell (like the text to be displayed):

public interface IValuePerCell<T> : IPerCell
{
    bool get_value(int col, int row, out T val);
}

And ValuePerCell_RowList_Properties is an object which does the mapping from a column number (like 0) to a property name (like WordPair.en).
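
To make that concrete, here is a rough, hypothetical sketch of the mapping idea. This is not the actual DataGrid class: the real ValuePerCell_RowList_Properties also participates in the IPerCell change-notification plumbing, which is omitted here, and the class name below is made up.

using System.Collections.Generic;
using System.Reflection;

// Hypothetical sketch: look up the property assigned to a column and
// return its value for the given row, as a display string.
public class PropertyColumnMapper<T>
{
    private readonly IList<T> _rows;
    private readonly string[] _propnames; // index = column number

    public PropertyColumnMapper(IList<T> rows, string[] propnames)
    {
        _rows = rows;
        _propnames = propnames;
    }

    public bool TryGetCellText(int col, int row, out string val)
    {
        val = null;
        if (row < 0 || row >= _rows.Count || col < 0 || col >= _propnames.Length)
            return false;
        PropertyInfo p = typeof(T).GetTypeInfo().GetDeclaredProperty(_propnames[col]);
        if (p == null)
            return false;
        object o = p.GetValue(_rows[row]);
        val = (o == null) ? null : o.ToString();
        return val != null;
    }
}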

Then the ValuePerCell object gets handed off to DrawCell_Text, which is what actually draws the cell text on the screen.

I skipped one important thing in the snippet above, and that's RowList_Bindable_ReactiveList. Since I'm storing my ReactiveList in a property on my View, there are two separate things to listen on. First, I obviously want to listen for changes to the ReactiveList and update the display appropriately. But I also need to listen for the case where somebody replaces the entire list.

RowList_Bindable_ReactiveList handles the latter, so it has code that looks like this:

obj.PropertyChanged += (object sender, System.ComponentModel.PropertyChangedEventArgs e) => {
    if (e.PropertyName == prop.PropertyName)
    {
        ReactiveList<T> lst = (ReactiveList<T>)obj.GetValue(prop);
        _next = new RowList_ReactiveList<T>(lst);
        if (changed != null) {
            changed(this, null);
        }
        _next.changed += (object s2, CellCoords e2) => {
            if (changed != null) {
                changed(this, e2);
            }
        };
    }
};

And finally, the code which listens to the ReactiveList itself:

public RowList_ReactiveList(ReactiveList<T> rx)
{
    _rx = rx;

    _rx.ChangeTrackingEnabled = true;
    _rx.ItemChanged.Subscribe (x => {
        if (changed != null) {
            int pos = _rx.IndexOf(x.Sender);
            changed(this, new CellCoords(0, pos));
        }
    });
}

DataGrid uses a chain of drawing objects which pass notifications up the chain with a C# event. In the end, the DataGrid core panel will hear about the change, and it will trigger the renderer, which will cause a redraw.
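
As a simplified, hypothetical illustration of one link in that chain (CellCoords, WordPair and RowList_ReactiveList are the types from the snippets above; the class name and wiring here are made up for illustration):

using System;

// Sketch: each layer subscribes to the layer below it and re-raises
// the changed event upward until the core panel hears it and redraws.
public class ChainLink
{
    public event EventHandler<CellCoords> changed;

    public ChainLink(RowList_ReactiveList<WordPair> below)
    {
        below.changed += (object sender, CellCoords e) => {
            if (changed != null) {
                changed(this, e);
            }
        };
    }
}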

And that's how the word "give" ended up with an asterisk in the screen shot at the top of this blog entry.


How Clay.io Built their 10x Architecture Using AWS, Docker, HAProxy, and Lots More

This is a guest repost by Zoli Kahan from Clay.io. 

This is the first post in my new series 10x, where I share my experiences and how we do things at Clay.io to develop at scale with a small team. If you find these things interesting, we're hiring - zoli@clay.io.

The Cloud

CloudFlare

CloudFlare handles all of our DNS, and acts as a distributed caching proxy with some additional DDoS protection features. It also handles SSL.

Amazon EC2 + VPC + NAT server
Categories: Architecture

Ten Thousand Baby Steps

NOOP.NL - Jurgen Appelo - Mon, 10/06/2014 - 15:35
Euro Disco/Dance

Last week, I added the last songs to my two big Spotify playlists: Euro Dance Heaven (2,520 songs) and Euro Disco Heaven (1,695 songs). I added the first of those songs to my playlists on December 16, 2010. That’s almost four years ago.

The post Ten Thousand Baby Steps appeared first on NOOP.NL.

Categories: Project Management

Conceptual Model vs Graph Model

Mark Needham - Mon, 10/06/2014 - 08:11

We’ve started running some sessions on graph modelling in London, and during the first session it was pointed out that the process I’d described was very similar to the one used when modelling for a relational database.

I thought I'd better do some reading on the way relational models are derived, and I came across an excellent video by Joe Maguire titled ‘Data Modelers Still Have Jobs: Adjusting For the NoSQL Environment’.

Joe starts off by showing the following ‘big picture framework’ which describes the steps involved in coming up with a relational model:

[Slide: the big picture framework for relational modelling]

A couple of slides later he points out that we often blur the lines between the different stages and end up designing a model which contains a lot of implementation details:

[Slide: a model blurred with implementation details]

If, on the other hand, we compare a conceptual model with a graph model this is less of an issue as the two models map quite closely:

  • Entities -> Nodes / Labels
  • Attributes -> Properties
  • Relationships -> Relationships
  • Identifiers -> Unique Constraints

Unique Constraints don’t quite capture everything that Identifiers do, since it’s possible to create a node of a specific label without specifying the property which is uniquely constrained. Other than that, though, each concept matches one for one.
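
For example, using Neo4j 2.x syntax, the identifier-like property of the Person nodes used later in this post could be constrained like this; note that a Person node can still be created with no name at all, which is exactly the gap described above:

CREATE CONSTRAINT ON (p:Person) ASSERT p.name IS UNIQUE

CREATE (:Person)  // still succeeds - the constrained property is not required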

We often say that graphs are whiteboard friendly, by which we mean that the model you sketch on a whiteboard is the same as that stored in the database.

For example, consider the following sketch of people and their interactions with various books:

[Whiteboard sketch: people and their interactions with books]

If we were to translate that into a write query using Neo4j’s cypher query language it would look like this:

CREATE (ian:Person {name: "Ian"})
CREATE (alan:Person {name: "Alan"})
CREATE (gg:Person:Author {name: "Graham Greene"})
CREATE (jlc:Person:Author {name: "John Le Carre"})
 
CREATE (omih:Book {name: "Our Man in Havana"})
CREATE (ttsp:Book {name: "Tinker Tailor, Soldier, Spy"})
 
CREATE (gg)-[:WROTE]->(omih)
CREATE (jlc)-[:WROTE]->(ttsp)
CREATE (ian)-[:PURCHASED {date: "05-02-2011"}]->(ttsp)
CREATE (ian)-[:PURCHASED {date: "08-09-2011"}]->(omih)
CREATE (alan)-[:PURCHASED {date: "05-07-2014"}]->(ttsp)

There are a few extra brackets and the ‘CREATE’ keyword, but we haven’t lost any of the fidelity of the domain, and in my experience a non-technical / commercial person would be able to understand the query.

By contrast this article shows the steps we might take from a conceptual model describing employees, departments and unions to the eventual relational model.

If you don’t have the time to read through that, we start with this initial model…

[Diagram: initial conceptual model]

…and end up with a model that can be stored in our relational database:

[Diagram: resulting relational model]

You’ll notice we’ve lost the relationship types and they’ve been replaced by 4 foreign keys that allow us to join the different tables/sets together.

In a graph model we’d have been able to stay much closer to the conceptual model and therefore closer to the language of the business.

I’m still exploring the world of data modelling and next up for me is to read Joe’s ‘Mastering Data Modeling’ book. I’m also curious how normal forms and data redundancy apply to graphs so I’ll be looking into that as well.

Thoughts welcome, as usual!

Categories: Programming

How to create a Value Stream Map

Xebia Blog - Mon, 10/06/2014 - 08:05

Value Stream Mapping (VSM) is a -very- useful tool to gain insight into the workflow of a process. It can be used to identify both Value Adding and Non Value Adding activities in a process stream while providing handles for optimizing the process chain. The results of a VSM can be used for many occasions: from writing out a business case, to defining a prioritized list of processes to optimize within your organization, to pinpointing bottlenecks in your existing processes and gaining a common understanding of process-related issues.

When creating a VSM of your current software delivery process, you will quite possibly be amazed by the amount of waste, and therefore the room for improvement, you might find. I challenge you to try this out within your own organization. It will leave you with a very powerful tool to explain to your management the steps that need to change, as it will leave you with facts.

To quickly get you started, I wrote out some handles on how to write out a proper Value Stream Map.

In many organizations there is a tendency to ‘solely’ perform local optimizations of steps in the process (i.e. per business unit), while in reality the largest process optimizations can be gained by optimizing the areas in between the process steps, which do not add any value to the customer at all: the Non Value Adding activities. Value Stream Mapping is a great tool for optimizing the complete process chain, not just the local steps.

[Diagram: local optimizations vs. optimizing the complete process chain]

The Example - Mapping a Software Delivery Process
Many example value streams found on the internet focus on selling a mortgage, packaging objects in a factory or some logistic process. The example I will be using focuses on a typical Software Delivery Processes as we still see them today: the 'traditional' Software Delivery Process containing many manual steps.

You first need to map the 'as-is' process, as you need this to form the baseline. This baseline provides you the insight required to remove steps from the process that do not add any value to your customer and can therefore be seen as pure waste to your organization.

It is important to write out the Value Stream as a group process (a workshop), where group members represent the people that are part of the value chain as it is today*. This is the only way to spot (hidden) activities, and it will provide a common understanding of the situation today. Apart from that, failing to execute the Value Stream Mapping activity as a group process will very likely reduce the acceptance rate at the end of the day. Never write out a VSM in isolation.

Value Stream Mapping is 'a paper and pencil tool' where you should ask participants to write out the stickies and help you form the map. You yourself will not write on stickies (okay, okay, maybe sometimes… but be careful not to do the work for the group). Writing out a process should take about 4 to 6 hours, including discussions and the coffee breaks of course. So, now for the steps!

* Note that the example value stream is a simplified and fictional process based on the experience at several customers.

Step 0 Prepare
Make sure you have all materials available.

Here is a list:
- Two 4-meter strips of brown paper
- Plastic tape to attach the paper to the wall
- Square stickies in multiple colors
- Rectangular stickies in multiple colors
- Small stickies in one or two colors
- Lots of sharpies (people need to be able to pick up the pens)
- Colored ‘dot' stickies

What do you need? (the helpful colleague not depicted)

Step 1 & 2 define objectives and process steps
Make sure to work on one process at a time and start off by defining the customer objectives (the Voice of Customer). A common understanding of the VoC is important because in a later stage you will determine with the team which activities really add to this VoC and which steps do not. Quite often these objectives are defined in terms of Time, Cost and Quality. For our example, let’s say the customer would like to be able to deliver a new feature every hour, at a maximum cost of $1000 per feature and with zero defects.

First, write down the Voice of the Customer in the top right corner. Now, together with the group, determine all the actors (organizations / persons / teams) that are part of the current process and glue these actors as orange stickies to the brown paper.

Defining Voice of Customer and Process Steps

Step 3 Define activities performed within each process step
With the group, determine for each process step the activities that take place. Underneath the orange stickies, add green stickies that describe the activities that take place in a given step.

Defining activities performed in each step

Step 4 Define Work in Progress (WiP)
Now, add pink stickies in between the steps, describing the number of features / requirements / objects / activities currently in progress between actors. This is referred to as WiP - Work in Progress. Wherever there is high WiP in between steps, you have identified a bottleneck causing the process 'flow' to stop.

On top of the pink stickies with particularly high WiP levels, add a small sticky indicating what the group thinks is causing the high WiP. For instance, a document has to be distributed via internal mail, a wait is introduced for a bi-weekly meeting, or travel to another location is required. This information can later be used to optimize the process.

Note that in the workshop you should also take some time to find WiP within the activities themselves (this is not depicted in this example). Spend time on finding the causes of high WiP and add these as stickies to each activity.

Define work in process

Step 5 Identify rework
Rework is waste. Still, many times you'll see that a deliverable has to be returned to a previous step for reprocessing. Together with the group, determine where this happens and what causes the rework. A nice addition is to also write out first-time-right levels.

Identify rework

Step 6 Add additional information
Spend some time adding additional comments for activities on the green stickies. Some activities might, for instance, not be optimized, not be easy to handle, or be considered obsolete from a group perspective. Mark these comments with blue stickies next to the activity at hand.

Add additional information

Step 7 Add Process time, Wait time and Lead time and determining Process Cycle Efficiency

Now, as we have the process more or less complete, we can start adding information related to timing. In this step you would like to determine the following information:

  • process time: the real amount of time that is required to perform a task without interruptions
  • lead time: the actual time that it takes for the activity to be completed (also known as elapsed time)
  • wait time: time when no processing is done at all, for example when waiting on an 'event' like a bi-weekly meeting

(Not in picture): for every activity on a green sticky, write a small sticky with two numbers vertically aligned. The top number reflects the process time (i.e. 40 hours). The bottom number reflects the lead time (i.e. 120 hours).

(In picture): add a block diagram underneath the process, where timing information in the upper section represents total processing time for all activities and timing information in the lower section represents total lead time for all activities (just add up the timing information for the individual activities described in the previous paragraph). Also add noticeable wait time in between process steps. As a final step, to the right of this block diagram, add the totals.

Now that you have all the information on the paper, the following can be calculated:

  • Total Process Time - The total time required to actually work on activities if one could focus on the activity at hand.
  • Total Lead Time - The total time this process actually needs.
  • Process Cycle Efficiency (PCE): Total Process Time / Total Lead Time × 100%.

Add this information to the lower right corner of your brown paper. The numbers for this example are:

Total Process Time: add all numbers in the top section of the stickies: 424 hours
Process Lead Time (PLT): add all numbers in the lower section of the stickies + wait time in between steps: 1740 hours
Process Cycle Efficiency (PCE) is then: Total Process Time / Total Process Lead Time = 424 / 1740 ≈ 24%.
Note that 24% is -very- high, which is caused by using an example. Usually you’ll see a PCE of about 4 - 8% for a traditional process.

Add process, wait and lead times

Step 8 Identify Customer Value Add and Non Value Add activities
Now, categorize the tasks into two types: tasks that add value to the customer (Customer Value Add, CVA) and tasks that do not add value to the customer (Non Value Add, NVA). The NVA tasks you can again split into two categories: tasks that add business value (Business Value Add, BVA) and ‘Waste’. When optimizing a process, waste is to be eliminated completely, as it adds value neither to the customer nor to the business as a whole. But also for the activities categorized as 'BVA', you have to ask yourself whether these activities really add to the chain.

Mark CVA tasks with a green dot, BVA tasks with a blue dot and Waste with a red dot. Put the legend on the map for later reference.

When identifying CVA, NVA and BVA … force yourself to refer back to the Voice of Customer you jotted down in step 1 and think about who your customer is here. In this example, the customer is not the end user using the system, but the business. And it was the business that wanted Faster, Cheaper & Better. Now when you start to tag each individual task, give yourself some time to figure out which tasks actually add to these goals.

Determine Customer Value Add & Non Value Add

To give you some guidance on how to approach tagging each task, I’ll elaborate a bit on how I tagged the activities. Note again, this is just an example; within the workshop your team might tag differently.

Items I tagged as CVA: coding, testing (unit, static, acceptance), execution of tests and configuration of monitoring all add value to the customer (the business). Why? Because all these items relate to a faster, better (high quality through testing + monitoring) and cheaper (fewer errors through higher code quality) delivery of code.

Items I tagged as BVA: documentation, configuration of environments, deployment of VMs and installation of middleware are required to be able to deliver to the customer when using this (typical waterfall) Software Delivery Process. (Note: I do not necessarily concur with this process.) :)

Items I tagged as pure waste, not adding any value to the customer: items like getting approval, the process of getting funding (although probably required), discussing details and documenting results for later reference, or waiting for the quarterly release cycle. None of these items are required to deliver faster, cheaper or better, so in that respect they can be considered waste.

That's it (and step 9) - you've mapped your current process
So, that’s about it! The Value Stream Map is now more or less complete and contains all the relevant information required to optimize the process in a next step. Step 9 here would be: take some time to write out the items/bottlenecks that are most important or easiest to address, and discuss possible solutions internally with your team. Focus on items that you tagged as either BVA or pure waste and think of alternatives to eliminate these steps. Put your customer central, not your process! Just dropping an activity as a whole may seem somewhat radical, but sometimes good ideas just are! Note, by the way, that when you address one bottleneck, another bottleneck will pop up. There will always be a bottleneck somewhere in the process, and therefore process optimization must be seen as a continuous process.

A final tip: to be able to perform a Value Stream Mapping workshop at a customer, it might be a good idea to join a more experienced colleague first, just to get a grasp of the dynamics in such a workshop. The fact that all participants are at the same table, outlining the delivery process together and talking about it, will allow you to come up with an optimized process that each person will buy into. But still, it takes some effort to get the workshop going. Take your time, do not rush it.

For now, I hope you can use the steps above to identify the current largest bottlenecks within your own organization and get going. In a next blog, if there is sufficient interest, I will write about possible solutions to the bottlenecks in my example. If you have any ideas, just drop a line below so we can discuss! The aim for me would be to work towards a solution that caters for Continuous Delivery of Software.

Michiel Sens.

SPaMCAST 310 – Mike Burrows, Kanban from the Inside

http://www.spamcast.net

Listen to the SPaMCAST 310

Software Process and Measurement Cast 310 features our interview with Mike Burrows. This is Mike’s second visit to the Software Process and Measurement Cast.  In this visit we discussed his new book, Kanban from the Inside (Kindle).  The book lays out why Kanban is a management method built on a set of values rather than just a set of techniques. Mike explains why Kanban leads to better outcomes for projects, managers, organizations and customers!

Mike is the UK Director and Principal Consultant at David J Anderson and Associates. In a career spanning the aerospace, banking, energy and government sectors, Mike has been a global development manager, IT director and software developer. He speaks regularly at Lean/Kanban-related events in several countries, and his book Kanban from the Inside (Kindle) was published in September.

Mike’s email is mike@djaa.com
Twitter: https://twitter.com/asplake and @KanbanInside
Blog is http://positiveincline.com/index.php/about/UK

Kanban conference: http://lkuk.leankanban.com/
Kanban conference series: http://conf.leankanban.com/

Next

SPaMCAST 311 features our essay on backlog grooming. Backlog grooming is an important technique that can be used in any Agile or Lean methodology. At one point the need for backlog grooming was debated; however, most practitioners now find the practice useful. The simplest definition of backlog grooming is the preparation of the user stories or requirements to ensure they are ready to be worked on. The act of grooming and preparation can cover a wide range of specific activities and can be performed at any time. In the next podcast we get into the nuts and bolts of making your backlog better!

Upcoming Events

DCG Webinars:

Agile Risk Management – It Is Still Important! October 24, 2014, 11:30 EDT. Has the adoption of Agile techniques magically erased risk from software projects? Or have we just changed how we recognize and manage risk? Or, more frighteningly, by changing the project environment through adopting Agile techniques, have we tricked ourselves into thinking that risk has been abolished?

Upcoming Conferences:

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast receives some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management


Quote of the Day

Herding Cats - Glen Alleman - Sun, 10/05/2014 - 15:33

Nothing is too wonderful to be true, if it is consistent with the laws of nature, and in such things as these, experiment is the best test of such consistency − Michael Faraday's Diary, March 19, 1849

When we hear wonderful concepts conjectured that are untested outside personal anecdote, ask where this has worked, in what domain, under what governance framework, and what the value at risk of applying the idea was.

Categories: Project Management

Distributed Agile: Checklist of the Basics

Hydrogen is elementary!

Hand drawn chart Saturday.

In Scrum there are four-ish basic meetings. They are sprint planning, the daily stand-up, demonstrations, retrospectives and backlog grooming (the “ish” part). Whether distributed or co-located, these meetings are critical to planning, communication and controlling how Agile is typically practiced. Getting them right is not optional, especially when the Agile team is distributed. While there are specific techniques for each type of meeting (some people call them rituals), there are a few basics that can be used as a checklist. They are:

  • Schedule and invite participants. Team members are easy! Schedule all standard meetings for team members upfront for as many sprints as you are planning to have. As a team, decide on who will participate in the demo and make sure they are invited as early as possible.
  • Review the goals and rules of the meeting upfront. Don’t assume that everyone knows the goal of the meeting and the ground rules for their participation.
  • Publish an agenda. Agendas provide focus for any meeting. While an agenda for a daily stand-up might sound like overkill (and for long-term, stable teams it probably is), I either review the outline of the meeting or send the outline to all participants before the meeting starts.
  • Check the tools and connections. Distributed teams will require tools and software packages; including audio conferencing, video conferencing, screen sharing and chat software. Ensure they are on, connected and that everyone has access BEFORE the meeting starts.
  • Ensure active facilitation is available. Every meeting needs facilitation, whether active or passive. Actively facilitated meetings are typically more focused, while un-facilitated meetings tend to be less focused and more ad-hoc. Active or passive facilitation is your choice, but distributed teams should almost always choose active facilitation. If using Scrum, part of the role of the Scrum master is to act as a facilitator. The Scrum master guides the team and participants to ensure all of the meetings are effective and meet their goals.
  • Hold a meeting retrospective. Spend a few moments after each meeting to validate the goals were met and what could be done better in the future.

Agile is not magic. All Agile teams use techniques that assume the team has a common goal to guide them, and then use feedback generated through communication to stay on track. Distributed Agile teams need to pay more careful attention to the basics. The Scrum master should strive to make the tools and processes needed for all of the meetings fade into the background; for the majority of the team the end must be more important than the process.


Categories: Process Management

Tips for Error Handling with Android Wear APIs

Android Developers Blog - Sat, 10/04/2014 - 04:54

By +Wayne Piekarski, Developer Advocate, Android Wear

For developers using the Android Wear APIs in Google Play services, it is important to correctly handle all the error conditions that can occur on legacy phones or when users do not have a wearable device. This post describes the best practice in handling error conditions with the GoogleApiClient connect() method. If you do not implement this correctly, your existing application functionality may fail for non-wearable users.

There are two ways that the connect() method can return ConnectionResult.API_UNAVAILABLE for wearable support with Google Play services:

  • When requesting Wearable.API on any device running Android 4.2 (API level 17) or earlier
  • When requesting Wearable.API when no Android Wear companion application or device is available

Google Play services provides a wide range of useful features such as integration with Google Drive, Wallet, Google+, and Google Play games services (just to name a few!). During initialization, the application uses GoogleApiClient.Builder() to make calls to addApi() to request the features that are necessary. The connect() method is then called to establish a connection to the Google Play services library, and it can return error codes if any API is not available.

If you request multiple APIs from a single GoogleApiClient, such as Drive and Wear, and the Wear support returns API_UNAVAILABLE, then the Drive request will also fail. Since Wear support is not guaranteed to be available on all devices, you should make sure to use a separate client for this request.

The best practice for developers is to implement two separate GoogleApiClient connections:

  • One connection for Android Wear support, and
  • A separate connection for all of the other APIs you need

This will ensure that the functionality of your app will remain for all users, whether or not there is wearable support available on their devices, as well as on older legacy devices.
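
A minimal sketch of that pattern might look like the following (GoogleApiClient, Wearable.API, Drive.API, ConnectionResult and blockingConnect() are the Google Play services APIs mentioned above; the timeout value, the use of Drive as the "other" API, and the surrounding wiring are illustrative assumptions, and context stands for your Activity or application Context):

// Sketch: keep Wear on its own client so an API_UNAVAILABLE result
// for Wear cannot break the connection used by your other APIs.
GoogleApiClient wearClient = new GoogleApiClient.Builder(context)
        .addApi(Wearable.API)
        .build();

GoogleApiClient otherClient = new GoogleApiClient.Builder(context)
        .addApi(Drive.API) // ...plus whatever else your app needs
        .build();

// On a background thread: connect the Wear client separately and
// degrade gracefully if wearable support is missing.
ConnectionResult wearResult = wearClient.blockingConnect(30, TimeUnit.SECONDS);
boolean wearAvailable = wearResult.isSuccess();
// wearResult.getErrorCode() == ConnectionResult.API_UNAVAILABLE means
// no wearable support; the otherClient connection is unaffected.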

It's important that you implement this best practice immediately, because your current users may be affected if not handled correctly in your app.

Categories: Programming

Distributed Agile: Retrospectives

Retrospectives are reflective!

Retrospectives are the team’s chance to make their life better. Making the team’s life better may mean confronting hard truths and changing how work is done. Hard conversations require trust and safety. Trust and safety are attributes that are hard to generate remotely, especially if team members have never met each other. Facilitation and techniques tailored to distributed teams are needed to get real value from retrospectives when the team is distributed.

  1. Bring team members together. Joint retrospectives serve a number of purposes, including building relationships and trust. The combination of deeper relationships and trust will help team members tackle harder conversations when they are apart.
  2. Use collaboration tools. Many retrospective techniques generate lists and then ask participants to vote. Listing techniques work best when participants see what is being listed rather than trying to remember or reference any notes that have been taken. I have used free mind-mapping tools (such as FreeMind) and screen-sharing software to make sure everyone can see the “list.”
  3. Geographic distances can mask cultural differences. The facilitator needs to make sure he or she is aware of cultural differences (some cultures find it harder to expose and discuss problems). Differences in culture should be shared with the team before the retrospective begins. Consider adding a few minutes before beginning the retrospective to discuss cultural issues if your team has members in or from different countries or if there are glaring cultural differences. Note that the same ideas can be used to address personality differences.
  4. Use more than one facilitator. Until team members get comfortable with each other consider having a second (or third) facilitator to support the retrospective. When using multiple facilitators ensure that the facilitators understand their roles and are synchronized on the agenda.
  5. Consider assigning pre-retrospective homework. Poll team members for comments and issues before the retrospective session. The issues and comments can be shared to seed discussions, provide focus or just break the ice.

All of these suggestions presume that the retrospective has stable tele/video communication tools and that the meeting time has been negotiated. If participation suffers, first ask what the problem is; if the problem is that attending a retrospective in the middle of the night is hard, then consider an alternate meeting time (share the time zone pain).

Retrospectives are critical to help teams grow and become more effective. Retrospectives in distributed teams are harder than in co-located teams. The answer to being harder should be to use these techniques or others to facilitate communication and interaction. The answer should never be to abandon retrospectives, leave remote members out of the meeting, or hold separate-but-equal retrospectives. Remember: one team, one retrospective; but that only works well when members trust each other and feel safe to share their ideas for improvement.


Categories: Process Management

Estimating Software-Intensive Systems

Herding Cats - Glen Alleman - Fri, 10/03/2014 - 23:27

There are numerous conjectures about the waste of software project estimates. Most are based on personal opinion divorced from the business processes of writing software for money.

From the introduction of the book Estimating Software-Intensive Systems:

Good estimates are key to project (and product) success. Estimates provide information to make decisions, define feasible performance objectives, and plans. Measurements provide data to gauge adherence to performance specifications and plans, make decisions, revise designs and plans, and improve future estimates and processes.

Engineers use estimates and measurements to evaluate the feasibility and affordability of proposed products, choose among alternative designs, assess risk, and support business decisions. Engineers and planners estimate the resources needed to develop, maintain, enhance, and deploy a product. Project planners use the estimated staffing level to identify needed facilities.

Planners and managers use the resource estimates to compute project cost and schedule, and prepare budgets and plans. Estimates of product, project and process characteristics provide "baselines" to assess progress during the execution of the project. Managers compare estimates and actual values to identify deviations from the project plan and to understand the causes of the variation.

For products, engineers compare estimates of the technical baseline to observed performance to decide if the product meets its functional and operational requirements. Process capability baselines establish norms for process performance. Managers use these norms to control the process and detect compliance problems. Process engineers use capability baselines to improve the production process.

Bad estimates affect everyone associated with the project - the engineers and managers, the customer who buys the product, and sometimes even the stockholders of the company responsible for delivering the software. Incomplete or inaccurate resource estimates for a project mean that the project may not have enough time and money to complete the required work.

If you work in a domain where none of these conditions are in place, then by all means don't estimate.

If you do recognize some or all of these conditions, then here's a summary of the reasons to estimate and measure, from the book.

  • Product Size, Performance, and Quality
    • Evaluate feasibility of requirements
    • Analyze alternative product designs
    • Determine the required capacity and performance of hardware.
    • Evaluate product performance - accuracy, speed, reliability, availability, and all the ...ilities. (ACA web site missed answering this question).
    • Identify and assess technical risks
    • Provide technical baselines for tracking and controlling - this is called Closed Loop Control. Having no steering targets and no measures of actual performance assessed against desired performance is called Open Loop Control.
  • Project Effort, Cost, and Schedule - yes Virginia real business managers need to know when you'll be done, how much it will cost, and what you'll deliver on that day for that cost. And yes Virginia there is no Santa Claus
    • Determine project feasibility in terms of cost and time
    • Identify and assess project risks - Risk Management is how Adults Manage Projects  - Tim Lister
    • Negotiate achievable commitments
    • Prepare realistic plans and budgets
    • Evaluate business value - cost versus benefit is how businesses stay in business
    • Provide cost and schedule baseline for tracking and controlling
  • Process Capability and Performance
    • Predict resource consumption and efficiency
    • Establish norms of expected performance - back to the steering targets
    • Identify opportunities for improvement.


Related articles:
  • The Failure of Open Loop Thinking
  • Why Johnny Can't Estimate or the Dystopia of Poor Estimating Practices
  • An Agile Estimating Story
  • How To Make Decisions
  • Incremental Commitment Spiral Model
  • Probabilistic Cost and Schedule Processes
  • Project Finance
  • The Three Elements of Project Work and Their Estimates
  • How Not To Make Decisions Using Bad Estimates
Categories: Project Management

Componentize the web, with Polycasts!

Google Code Blog - Fri, 10/03/2014 - 18:58
Today at Google, we’re excited to announce the launch of Polycasts, a new video series to get developers up and running with Polymer and Web Components.

Web Components usher in a new era of web development, allowing developers to create encapsulated, interoperable elements that extend HTML itself. Built atop these new standards, Polymer makes it easier and faster to build Web Components, while also adding polyfill support so they work across all modern browsers.

Because Polymer and Web Components are such big changes for the platform, there’s a lot to learn, and it can be easy to get lost in the complexity. For that reason, we created Polycasts.

Polycasts are designed to be bite sized, and to teach one concept at a time. Along the way we plan to highlight best practices for not only working with Polymer, but also using the DevTools to make sure your code is performant.

We’ll be releasing new videos often over the coming weeks, initially focusing on core elements and layout. These episodes will also be embedded throughout the Polymer site, helping to augment the existing documentation. Because there’s so much to cover in the Polymer universe, we want to hear from you! What would you like to see? Feel free to shoot a tweet to @rob_dodson if you have an idea for a show, and be sure to subscribe to our YouTube channel so you’re notified when new episodes are released.

Posted by Rob Dodson, Developer Advocate
Categories: Programming

Integrating Geb with FitNesse using the Groovy ConfigSlurper

Xebia Blog - Fri, 10/03/2014 - 18:01

We've been playing around with Geb for a while now and writing tests using WebDriver and Groovy has been a delight! Geb integrates well with JUnit, TestNG, Spock, and Cucumber. All there is left to do is integrating it with FitNesse ... or not :-).

Setup Gradle and Dependencies

First we start with grabbing the gradle fitnesse classpath builder from Arjan Molenaar.
Add the following dependencies to the gradle build file:

compile 'org.codehaus.groovy:groovy-all:2.3.7'
compile 'org.gebish:geb-core:0.9.3'
compile 'org.seleniumhq.selenium:selenium-java:2.43.1'
Configure different drivers with the ConfigSlurper

Geb provides a configuration mechanism using the Groovy ConfigSlurper. It's perfect for environment sensitive configuration. Geb uses the geb.env system property to determine the environment to use. So we use the ConfigSlurper to configure different drivers.

import org.openqa.selenium.chrome.ChromeDriver
import org.openqa.selenium.firefox.FirefoxDriver

driver = { new FirefoxDriver() }

environments {
  chrome {
    driver = { new ChromeDriver() }
  }
}
FitNesse using the ConfigSlurper

We need to tweak the gradle build script to let FitNesse play nice with the ConfigSlurper. So we pass the geb.env system property as a JVM argument. Look for the gradle task "wiki" in the gradle build script and add the following lines.

def gebEnv = (System.getProperty("geb.env")) ? (System.getProperty("geb.env")) : "firefox"
jvmArgs "-Dgeb.env=${gebEnv}"

Since FitNesse spins up a separate 'service' process when you execute a test, we need to pass the geb.env system property into the COMMAND_PATTERN of FitNesse. That service needs the geb.env system property to let Geb know which environment to use. Put the following lines in the FitNesse page.

!define COMMAND_PATTERN {java -Dgeb.env=${geb.env} -cp %p %m}

Now you can control the Geb environment by specifying it on the following command line.

gradle wiki -Dgeb.env=chrome

The gradle build script will pass the geb.env system property as JVM argument when FitNesse starts up. And the COMMAND_PATTERN will pass it to the test runner service.

Want to see it in action? Sources can be found here.

Preview: DataGrid for Xamarin.Forms

Eric.Weblog() - Eric Sink - Fri, 10/03/2014 - 17:00

What is it?

It's a Xamarin.Forms grid control for displaying data in rows and columns.

Where's the code?

https://github.com/ericsink

Is this open source?

Yes. Apache License v2.

Why are you writing a grid?

Because I see an unmet need. Xamarin.Forms needs a good way to display row/column data. And it needs to be capable of handling lots (millions) of cells. And it needs to be really, really fast.

I'm one of the founders of Zumero. We're all about mobile apps for businesses dealing with data. Many of our customers are using Xamarin, and we want to be able to recommend Xamarin.Forms to them. A DataGrid is one of the pieces we need.

What are the features?
  • Scrolling, both vertical and horizontal
  • Either scroll range can be fixed, infinite, or wraparound.
  • Optional headers, top, left, right, bottom.
  • Ample flexibility in connecting to different data sources
  • Column widths can be fixed width or variable. Same for row heights.
Is this ready for use?

No, this code is still pretty rough.

Is this ready to play with and complain about?

Yes, hence its presence on GitHub. :-)

Open dg.sln in Xamarin Studio and it should build. There's a demo app (Android and iOS) you can use to try it out. The WP8 code isn't there yet, but it'll be moving in soon.

Is there a NuGet package?

Not yet.

Is the API frozen yet?

No. In fact, I'm still considering some API changes that could be described as major.

What platforms does it support?

Android and iOS are currently in decent shape. Windows Phone is in progress. (The header row was bashful and refused to cooperate for the WP8 screenshot.)

What will the API be like?

I don't know yet. In fact, I'm tempted to quibble when you say "the API", because you're speaking of it in the singular, and I think I will end up with more than one. :-)

Earlier, I described this thing as "a grid control", but it would be more accurate right now to describe it as a framework for building grid controls.

I have implemented some sample grids, mostly just to demonstrate the framework's capabilities and to experiment with what kinds of user-facing APIs would be most friendly. Examples include:

  • A grid that gets its data from an IList, where the properties of objects of class T become columns.
  • A data connector that gets its data from ReactiveList (uses ReactiveUI).
  • A grid that gets its data from a sqlite3_stmt handle (uses my SQLitePCL.raw package).
  • A grid that just draws shapes.
  • A grid that draws nothing but a cell border, but the farther you scroll, the bigger the cells get.
  • A 2x2 grid that scrolls forever and just repeats its four cells over and over.

How is this different from the layouts built into Xamarin.Forms?

This control is not a "Layout", in the Xamarin.Forms sense. It is not a subclass of Xamarin.Forms.Layout. You can't add child views to it.

If you need something to help arrange the visual elements of your UI on the screen, DataGrid is not what you want. Just use one of the Layouts. That's what they're for.

But maybe you need to display a lot of data. Maybe you have 200,000 rows. Maybe you don't know how many rows you have and you won't know until you read the last one. Maybe you have lots of columns too, so you need the ability to scroll in both directions. Maybe you need one or more header rows at the top which sync-scroll horizontally but are frozen vertically. And so on.

Those kinds of use cases are what DataGrid is aimed at.

What drawing API are you using?

Mostly I'm working with a hacked-up copy of Frank Krueger's CrossGraphics library, modified in a variety of ways.

The biggest piece of the code (in DataGrid.Core) actually doesn't care about the graphics API. That assembly contains generic classes which accept <TGraphics> as a type parameter. (As a proof of concept demo, I've got an iOS implementation built on CGContext which doesn't actually depend on Xamarin.Forms at all.)

So I can't add ANY child views to your DataGrid control?

Currently, no. I would like to add this capability in the future.

(At the moment, I'm pretty sure it is impossible to build a layout control for Xamarin.Forms unless you're a Xamarin employee. There seem to be a few important things that are not public.)

How fast is it?

Very. On my Nexus 7, a debug build of DataGrid can easily scroll a full screen of text cells at 60 frames/second. Performance on iOS is similar.

How much code is cross-platform?

Not counting the demo app or my copy of CrossGraphics, the following table shows lines of code in each combination of dependencies:

                 Portable   iOS-specific   Android-specific
 <TGraphics>     2,741      141            174
 Xamarin.Forms   633        92             81

Xamarin.Forms is [going to be] a wonderful foundation for cross-platform mobile apps.

Can I use this from Objective-C or Java?

No. It's all C#.

Why are you naming_things_with_underscores?

Sorry about that. It's a habit from my Unix days that I keep slipping back into. I'll clean up my mess soon.

What's up with IChanged? Why not IObservable<T>?

Er, yeah, remember above when I said I'm still considering some major changes? That's one of them.

Does this in any way depend on your Zumero for SQL Server product?

No, DataGrid is a standalone open source library.

But it's rather likely that our commercial products will in the future depend on DataGrid.


Stuff The Internet Says On Scalability For October 3rd, 2014

Hey, it's HighScalability time:


Is the database landscape evolving or devolving?


  • 76 million: once more through the data breach; 2016: when a zettabyte is transferred over the Internet in one year
  • Quotable Quotes:
    • @wattersjames: Words missing from the Oracle PaaS keynote: agile, continuous delivery, microservices, scalability, polyglot, open source, community #oow14
    • @samcharrington: At last count, there were over 1,000,000 million containers running in the wild. http://stats.openvz.org  @jejb_ #ccevent
    • @mappingbabel: Oracle's cloud has 30,000 computers. Google has about two million computers. Amazon over a million. Rackspace over 100,000.
    • Andrew Auernheimer: The world should have given the GNU project some money to hire developers and security auditors. Hell, it should have given Stallman a place to sleep that isn't a couch at a university. There is no f*cking justice in this world.
    • John Nagle: The right answer is to track wins and losses on delayed and non-delayed ACKs. Don't turn on ACK delay unless you're sending a lot of non-delayed ACKs closely followed by packets on which the ACK could have been piggybacked. Turn it off when a delayed ACK has to be sent. I should have pushed for this in the 1980s.
    • @neil_conway: The number of < 15 node Hadoop clusters is >> the number of > 15 node Hadoop clusters. Unfortunately not reflected in SW architecture.

  • In the meat world Google wants devices to talk to you. The Physical Web. This will be better than Apple's beacons because Apple is severely limiting the functionality of beacons by requiring IDs be baked into applications. It's a very static and controlled world. In other words, it's very Apple. By using URLs Google is supporting both the web and apps, and adding flexibility because a single app can dynamically and generically handle the interaction from any kind of device. In other words, it's very Google. Apple has the numbers though, with hundreds of millions of beacon-enabled phones in customer hands. Since it's just another protocol over BLE it should work on Apple devices as well.

  • Did Netflix survive the great AWS rebootathon? The Chaos Monkey says yes, yes they did: Out of our 2700+ production Cassandra nodes, 218 were rebooted. 22 Cassandra nodes were on hardware that did not reboot successfully. This led to those Cassandra nodes not coming back online. Our automation detected the failed nodes and replaced them all, with minimal human intervention. Netflix experienced 0 downtime that weekend. 

  • Google Compute Engine is following Moore's Law by announcing a 10% discount. Bandwidth is still expensive because networks don't care about silly laws. And margin has to come from somewhere.

  • Software is eating...well you've heard it before. Mesosphere cofounder envisions future data center as ‘one big computer’: The data center of the future will be fully virtualized, with everything from power supplies to storage devices consolidated into a single pool and managed by software, according to an executive whose company intends to lead the way.

  • Companies, startups, hacker spaces, teams are all intentional communities. People choose to work together towards some end. A consistent group killer is that people can be really sh*tty to each other. There's a lot of work that has been done around how to make intentional communities work. Holacracy is just one option. Here's a really interesting interview with Diana Leafe Christian on what makes communities work. It requires creating Community Glue, Good Process and Communication Skill, Effective Project Management, and good Governance and Decision making. Which is why most communities fail. Did you know there's even something called Non-Defensive Communication? If followed, the Internet would collapse.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Distributed Agile: Demonstrations

Demonstrations!

Demonstrations are an important tool for teams to gather the feedback that shapes the value they deliver. Demonstrations provide a platform for the team to show the stories that have been completed so the stakeholders can interact with the solution. The feedback a team receives not only ensures that the solution meets stakeholder needs, but also generates new insights and lets the team know they are on track. Demonstrations should provide value to everyone involved. Given the breadth of participation in a demo, a distributed meeting is even more likely. Techniques that support distributed demonstrations include:

  1. More written documentation: Teams, especially long-established teams, often develop shorthand expressions that convey meaning within the team but fall short before a broader audience. Written communication can be more effective at conveying meaning where body language can't be read and eye contact can't be made. Publish an agenda to guide the meeting; this will help everyone stay on track, or get back on track when the phone line drops. Capture comments and ideas on paper where everyone can see them. If using flip charts, use webcams to share the written notes. Some collaboration tools provide a notepad feature that stays resident on the screen and can be used to capture notes for all sites to reference.
  2. Prepare and practice the demo. The risk that something will go wrong with the logistics of the meeting increases exponentially with the number of sites involved. Have a plan for the demo and then practice that plan to reduce the risk that you have forgotten something. Practice will not eliminate the risk of unforeseen problems, but it will help.
  3. Replicate the demo in multiple locations. When multiple locations have large or important stakeholder populations, consider running separate demonstrations. Separate demonstrations lose some of the interaction between sites and add some overhead, but they reduce the logistical complications.
  4. Record the demo. Some sites may not be able to participate in the demo live due to their time zones or other limitations. Recording the demo lets stakeholders who could not participate in the live demo hear and see what happened and provide feedback, albeit asynchronously. Recording the demo will also give the team the ability to use the recording as documentation and reference material, which I strongly recommend.
  5. Check the network(s)! Bandwidth is generally not your friend. Make sure the network at each location can support the tools you are going to use (video, audio or other collaboration tools), and then have a fallback plan. Fallback plans should be as low-tech as practical. One team I observed actually had to fall back to scribes in two locations who kept notes on flip charts by mirroring each other (cell phones, Bluetooth headphones and whispering were employed) when the audio service they were using went down.

Demonstrations typically involve stakeholders, management and others. The team needs feedback, but it also needs a successful demo to maintain credibility within the organization. To get the most effective feedback, everyone in the demo needs to be able to hear, see and get involved. Distributed demos need to focus on facilitating interaction even more than in-person demos do; otherwise they risk being ineffective.


Categories: Process Management