
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

What Is Quality?

Mike Cohn's Blog - Tue, 10/07/2014 - 15:00

Agile teams build high-quality products. Agile team members write high-quality code. Agile teams produce functionality quickly by not sacrificing quality.

Each of these is something I’ve said before. And if you haven’t said these exact things, you’ve likely said something similar.

Quality gets mentioned a lot in discussions about agile. And so, perhaps it’s worth clarifying my definition of quality. Of course, others have thought about quality more deeply than I’m capable of. And so, I won’t be providing a new definition of quality here. But I will explain how I think of quality.

One of the leading advocates for quality was Philip Crosby. In the 1970s he proclaimed that “quality is free” because doing something right the first time at a high level of quality was cheaper than fixing it later. Crosby defined quality as “conformance to requirements.”

I never really bought into Crosby’s “conformance to requirements” approach (even before agile came around) because there was never a way to be confident the requirements were accurate. Saying something like the old Microsoft Bob was high quality because it complied with some ill-conceived requirements document never felt right to me.

Similarly, quality isn’t just being bug-free, as that suffers from the same problem.

Another approach to defining quality comes from Joseph Juran. He was one of a number of management theorists who worked in Japan in the 1950s. Juran defined quality as “fitness for use”:

"An essential requirement of these products is that they meet the needs of those members of society who will actually use them. This concept of fitness for use is universal. It applies to all goods and services, without exception. The popular term for fitness for use is Quality, and our basic definition becomes: quality means fitness for use."

This definition of quality really resonates with me. Quality is “fitness for use.” A high-quality product does what its customers want in such a way that they actually use the product. Something that conforms to ill-conceived requirements (such as Microsoft Bob) is not high quality. Something that is buggy isn’t high quality because it isn’t fit for use.

What do you think? Is quality best thought of as “conformance to requirements?” Or “fitness for use?” Or perhaps something else entirely? Please share your thoughts in the comments below.


The Complete Guide to Treadmill Desk Walking While Working

Making the Complex Simple - John Sonmez - Tue, 10/07/2014 - 14:07

Just about every day, I spend at least some portion of my day walking on a treadmill desk while doing my work. I started doing this about four years ago–and although I haven’t always been consistent with it–I’ve found it to be a very easy way to burn some extra calories during the day and […]

The post The Complete Guide to Treadmill Desk Walking While Working appeared first on Simple Programmer.

Categories: Programming

Azure: Redis Cache, Disaster Recovery to Azure, Tagging Support, Elastic Scale for SQLDB, DocDB

ScottGu's Blog - Scott Guthrie - Tue, 10/07/2014 - 06:02

Over the last few days we’ve released a number of great enhancements to Microsoft Azure.  These include:

  • Redis Cache: General Availability of Redis Cache Service
  • Site Recovery: General Availability of Disaster Recovery to Azure using Azure Site Recovery
  • Management: Tags support in the Azure Preview Portal
  • SQL DB: Public preview of Elastic Scale for Azure SQL Database (available through .NET lib, Azure service templates)
  • DocumentDB: Support for Document Explorer, Collection management and new metrics
  • Notification Hub: Support for Baidu Push Notification Service
  • Virtual Network: Support for static private IPs in the Azure Preview Portal
  • Automation updates: Active Directory authentication, PowerShell script converter, runbook gallery, hourly scheduling support

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

Redis Cache: General Availability of Redis Cache Service

I’m excited to announce the General Availability of the Azure Redis Cache. The Azure Redis Cache service provides the ability for you to use a secure/dedicated Redis cache, managed as a service by Microsoft. The Azure Redis Cache is now the distributed cache solution we recommend for Azure applications.

Redis Cache

Unlike traditional caches which deal only with key-value pairs, Redis is popular for its support of high performance data types, on which you can perform atomic operations such as appending to a string, incrementing the value in a hash, pushing to a list, computing set intersection, union and difference, or getting the member with highest ranking in a sorted set.  Other features include support for transactions, pub/sub, Lua scripting, keys with a limited time-to-live, and configuration settings to make Redis behave more like a traditional cache.

Finally, Redis has a healthy, vibrant open source ecosystem built around it. This is reflected in the diverse set of Redis clients available across multiple languages. This allows it to be used by nearly any application, running on either Windows or Linux, that you host inside of Azure.

Redis Cache Sizes and Editions

The Azure Redis Cache Service is today offered in the following sizes:  250 MB, 1 GB, 2.8 GB, 6 GB, 13 GB, 26 GB, 53 GB.  We plan to support even higher-memory options in the future.

Each Redis cache size option is also offered in two editions:

  • Basic – A single cache node, without a formal SLA, recommended for use in dev/test or non-critical workloads.
  • Standard – A multi-node, replicated cache configured in a two-node Master/Replica configuration for high-availability, and backed by an enterprise SLA.

With the Standard edition, we manage replication between the two nodes and perform an automatic failover in the case of any failure of the Master node (because of either an un-planned server failure, or in the event of planned patching maintenance). This helps ensure the availability of the cache and the data stored within it. 

Details on Azure Redis Cache pricing can be found on the Azure Cache pricing page.  Prices start as low as $17 a month.

Create a New Redis Cache and Connect to It

You can create a new instance of a Redis Cache using the Azure Preview Portal.  Simply select the New->Redis Cache item to create a new instance. 

You can then use a wide variety of programming languages and corresponding client packages to connect to the Redis Cache you’ve provisioned.  You use the same Redis client packages to connect to an Azure Redis Cache service that you’d use to connect to your own Redis instance.  The API + libraries are exactly the same.

Below we’ll use a .NET Redis client called StackExchange.Redis to connect to our Azure Redis Cache instance. First open any Visual Studio project and add the StackExchange.Redis NuGet package to it, with the NuGet package manager.  Then, obtain the cache endpoint and key respectively from the Properties blade and the Keys blade for your cache instance within the Azure Preview Portal.


Once you’ve retrieved these, create a connection instance to the cache with the code below:

var connection = StackExchange.Redis.ConnectionMultiplexer.Connect("contoso5.redis.cache.windows.net,ssl=true,password=...");

Once the connection is established, retrieve a reference to the Redis cache database, by calling the ConnectionMultiplexer.GetDatabase method.

IDatabase cache = connection.GetDatabase();

Items can be stored in and retrieved from a cache by using the StringSet and StringGet methods (or their async counterparts – StringSetAsync and StringGetAsync).

cache.StringSet("Key1", "HelloWorld");

cache.StringGet("Key1");
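An aside of mine (not part of the original post): the same StackExchange.Redis calls have async counterparts and optional expiry parameters. A minimal sketch, reusing the connection created above:

// Minimal sketch building on the snippet above; assumes the same ConnectionMultiplexer.
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

public static class CacheSample
{
    public static async Task RunAsync(ConnectionMultiplexer connection)
    {
        IDatabase cache = connection.GetDatabase();

        // Store a value with a five-minute time-to-live.
        await cache.StringSetAsync("Key2", "HelloAgain", TimeSpan.FromMinutes(5));

        // Read it back asynchronously; RedisValue converts implicitly to string.
        string value = await cache.StringGetAsync("Key2");
        Console.WriteLine(value);
    }
}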

You have now stored and retrieved a “Hello World” string from a Redis cache instance running on Azure. For an example of an end to end application using Azure Redis Cache, please check out the MVC Movie Application blog post.

Using Redis for ASP.NET Session State and Output Caching

You can also take advantage of Redis to store out-of-process ASP.NET Session State as well as to share Output Cached content across web server instances. 

For more details on using Redis for Session State, check out this blog post: ASP.NET Session State for Redis

For details on using Redis for Output Caching, check out this MSDN post: ASP.NET Output Cache for Redis

Monitoring and Alerting

Every Azure Redis cache instance has built-in monitoring support enabled by default. Currently you can track Cache Hits, Cache Misses, Get/Set Commands, Total Operations, Evicted Keys, Expired Keys, Used Memory, Used Bandwidth and Used CPU.  You can easily visualize these using the Azure Preview Portal:


You can also create alerts on metrics or events (just click the “Add Alert” button above). For example, you could create an alert rule to notify the cache administrator when the cache is seeing evictions. This in turn might signal that the cache is running hot and needs to be scaled up with more memory.

Learn more

For more information about the Azure Redis Cache, please visit the following links:

Site Recovery: Announcing the General Availability of Disaster Recovery to Azure

I’m excited to announce the general availability of the Azure Site Recovery Service’s new Disaster Recovery to Azure functionality.  The Disaster Recovery to Azure capability enables consistent replication, protection, and recovery of on-premises VMs to Microsoft Azure. With support for both Disaster Recovery and Migration to Azure, the Azure Site Recovery service now provides a simple, reliable, and cost-effective DR solution for enabling Virtual Machine replication and recovery between on-premises private clouds across different enterprise locations, or directly to the cloud with Azure.

This month’s release builds upon our recent InMage acquisition, and the integration of InMage Scout with Azure Site Recovery enables us to provide hybrid cloud business continuity solutions for any customer IT environment – regardless of whether it is Windows or Linux, running on physical servers or virtualized servers using Hyper-V, VMware or other virtualization solutions. Microsoft Azure is now the ideal destination for disaster recovery for virtually every enterprise server in the world.

In addition to enabling replication to and disaster recovery in Azure, the Azure Site Recovery service also enables the automated protection of VMs, remote health monitoring of them, no-impact disaster recovery plan testing, and single click orchestrated recovery - all backed by an enterprise-grade SLA. A new addition with this GA release is the ability to also invoke Azure Automation runbooks from within Azure Site Recovery Plans, enabling you to further automate your solutions.


Learn More about Azure Site Recovery

For more information on Azure Site Recovery, check out the recording of the Azure Site Recovery session at TechEd 2014 where we discussed the preview.  You can also visit the Azure Site Recovery forum on MSDN for additional information and to engage with the engineering team or other customers.

Once you’re ready to get started with Azure Site Recovery, check out additional pricing or product information, and sign up for a free Azure trial.

Beginning this month, Azure Backup and Azure Site Recovery will also be available in a convenient and economical promotion offer available for purchase via a Microsoft Enterprise Agreement.  Each unit of the Azure Backup & Site Recovery annual subscription offer covers protection of a single instance to Azure with Site Recovery, as well as backup of data with Azure Backup.  You can contact your Microsoft Reseller or Microsoft representative for more information.

Management: Tag Support with Resources

I’m excited to announce the support of tags in the Azure management platform and in the Azure preview portal.

Tags provide an easy way to organize your Azure resources and resource groups, by allowing you to tag your resources with name/value pairs to further categorize and view resources across resource groups and across subscriptions.  For example, you could use tags to identify which of your resources are used for “production” versus “dev/test” – and enable easy filtering/searching of the resources based on which tag you were interested in – regardless of which application or resource group they were in.

Using Tags

To get started with the new Tag support, browse to any resource or resource group in the Azure Preview Portal and click on the Tags tile on the resource.


On the Tags blade that appears, you'll see a list of any tags you've already applied. To add a new tag, simply specify a name and value and press enter. After you've added a few tags, you'll notice autocomplete options based on pre-existing tag names and values to better ensure a consistent taxonomy across your resources and to avoid common mistakes, like misspellings.


You can also use our command-line tools to tag resources. For example, the Azure PowerShell module can quickly tag all of the resources in your Azure subscription.


Once you've tagged your resources and resource groups, you can view the full list of tags across all of your subscriptions using the Browse hub.


You can also “pin” tags to your Startboard for quick access.  This provides a really easy way to quickly jump to any resource in a tag you’ve pinned:


SQL Databases: Public Preview of Elastic Scale Support

I am excited to announce the public preview of Elastic Scale for Azure SQL Database. Elastic Scale enables the data-tier of an application to scale out via industry-standard sharding practices, while significantly streamlining the development and management of your sharded cloud applications. The new capabilities are provided through .NET libraries and Azure service templates that are hosted in your own Azure subscription to manage your highly scalable applications. Elastic Scale implements the infrastructure aspects of sharding and thus allows you to instead focus on the business logic of your application.

Elastic Scale allows developers to establish a “contract” that defines where different slices of data reside across a collection of database instances.  This enables applications to easily and automatically direct transactions to the appropriate database (shard) and perform queries that cross many or all shards using simple extensions to the ADO.NET programming model. Elastic Scale also enables coordinated data movement between shards to split or merge ranges of data among different databases and satisfy common scenarios such as pulling a busy tenant into its own shard. 
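To make the data-dependent routing idea concrete, here is a minimal sketch using the Elastic Scale client library. The shard map name, table and connection strings are illustrative (not from the post), and the calls (ShardMapManagerFactory, ListShardMap, OpenConnectionForKey) reflect my reading of the preview .NET API, so treat this as a sketch rather than the definitive implementation:

// Hedged sketch of data-dependent routing with the Elastic Scale preview library.
// "CustomerIDShardMap" and the Customers table are illustrative names, not from the post.
using System.Data.SqlClient;
using Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement;

public static class ShardRoutingSample
{
    public static object GetCustomerName(string shardMapManagerConnectionString,
                                         string userConnectionString, int customerId)
    {
        // Load the shard map manager and the list shard map that holds the routing "contract".
        ShardMapManager smm = ShardMapManagerFactory.GetSqlShardMapManager(
            shardMapManagerConnectionString, ShardMapManagerLoadPolicy.Lazy);
        ListShardMap<int> shardMap = smm.GetListShardMap<int>("CustomerIDShardMap");

        // Open a connection directly against the shard that owns this customer id.
        using (SqlConnection conn = shardMap.OpenConnectionForKey(
            customerId, userConnectionString, ConnectionOptions.Validate))
        using (SqlCommand cmd = conn.CreateCommand())
        {
            cmd.CommandText = "SELECT Name FROM Customers WHERE CustomerId = @id";
            cmd.Parameters.AddWithValue("@id", customerId);
            return cmd.ExecuteScalar();
        }
    }
}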


We are also announcing the Federation Migration Utility which is available as part of the preview. This utility will help current SQL Database Federations customers migrate their Federations application to Elastic Scale without having to perform any data movement.

Get Started with the Elastic Scale preview today, and watch our Channel 9 video to learn more.

DocumentDB: Document Explorer, Collection management and new metrics

Last week we released a bunch of updates to the Azure DocumentDB service experience in the Azure Preview Portal. We continue to improve the developer and management experiences so you can be more productive and build great applications on DocumentDB. These improvements include:

  • Document Explorer: View and access JSON documents in your database account
  • Collection management: Easily add and delete collections
  • Database performance metrics and storage information: View performance metrics and storage consumed at a Database level
  • Collection performance metrics and storage information: View performance metrics and storage consumed at a Collection level
  • Support for Azure tags: Apply custom tags to DocumentDB Accounts
Document Explorer

Near the bottom of the DocumentDB Account, Database, and Collection blades, you’ll now find a new Developer Tools lens with a Document Explorer part.


This part provides you with a read-only document explorer experience. Select a database and collection within the Document Explorer and view documents within that collection.


Note that the Document Explorer will load up to the first 100 documents in the selected Collection. You can load additional documents (in batches of 100) by selecting the “Load more” option at the bottom of the Document Explorer blade. Future updates will expand Document Explorer functionality to enable document CRUD operations as well as the ability to filter documents.

Collection Management

The DocumentDB Database blade now allows you to quickly create a new Collection through the Add Collection command found on the top left of the Database blade.


Health Metrics

We’ve added a new Collection blade which exposes Collection level performance metrics and storage information. You can access this new blade by selecting a Collection from the list of Collections on the Database blade.


The Database and Collection level metrics are available via the Database and Collection blades.


As always, we’d love to hear from you about the DocumentDB features and experiences you would find most valuable within the Azure portal. You can submit your suggestions on the Microsoft Azure DocumentDB feedback forum.

Notification Hubs: support for Baidu Cloud Push

Azure Notification Hubs enable cross platform mobile push notifications for Android, iOS, Windows, Windows Phone, and Kindle devices. Thousands of customers now use Notification Hubs for instant cross platform broadcast, personalized notifications to dynamic segments of their mobile audience, or simply to reach individual customers of their mobile apps regardless of which device they use.  Today I am excited to announce support for another mobile notifications platform, Baidu Cloud Push, which will help Notification Hubs customers reach the diverse family of Android devices in China.

Delivering push notifications to Android devices in China is no easy task, due to a diverse set of app stores and push services. Pushing notifications to an Android device via Google Cloud Messaging Service (GCM) does not work, as most Android devices in China are not configured to use GCM.  To help app developers reach every Android device independent of which app store they’re configured with, Azure Notification Hubs now supports sending push notifications via the Baidu Cloud Push service.

To use Baidu from your Notification Hub, register your app with Baidu, and obtain the appropriate identifiers (UserId and ChannelId) for your application.


Then configure your Notification Hub within the Azure Management Portal with these identifiers:

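Once the hub is configured, sending a Baidu notification from back-end code should look much like the other platforms. A hedged sketch follows; the SendBaiduNativeNotificationAsync method name and the title/description JSON payload are my recollection of the SDK and tutorial from that time, so verify them against the current Notification Hubs documentation:

// Hedged sketch (my addition): send a Baidu native notification from a .NET back end.
using System.Threading.Tasks;
using Microsoft.ServiceBus.Notifications;

public static class BaiduPushSample
{
    public static async Task SendAsync(string connectionString, string hubName)
    {
        NotificationHubClient hub =
            NotificationHubClient.CreateClientFromConnectionString(connectionString, hubName);

        // Baidu native payload: a small JSON document with a title and description.
        string payload = "{\"title\":\"Hello\",\"description\":\"Hello from Azure Notification Hubs\"}";

        await hub.SendBaiduNativeNotificationAsync(payload);
    }
}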

For more details, follow the tutorial in English & Chinese. You can learn more about Push Notifications using Azure at the Notification Hubs dev center.

Virtual Machines: Instance-Level Public IPs generally available

Azure now supports the ability for you to assign public IP addresses to VMs and web or worker roles so they become directly addressable on the Internet - without having to map a virtual IP endpoint for access. With Instance-Level Public IPs, you can enable scenarios like running FTP servers in Azure and monitoring VMs directly using their IPs.

For more information, please visit the Instance-Level Public IP Addresses webpage.

Automation: Updates

Earlier this year, we introduced preview availability of Azure Automation, a service that allows you to automate the deployment, monitoring, and maintenance of your Azure resources. I am excited to announce several new features in Azure Automation:

  • Active Directory Authentication
  • PowerShell Script Converter
  • Runbook Gallery
  • Hourly Scheduling
Active Directory Authentication

We now offer an easier alternative to using certificates to authenticate from the Azure Automation service to your Azure environment. You can now authenticate to Azure using an Azure Active Directory organization identity which provides simple, credential-based authentication.

If you do not have an Active Directory user set up already, simply create a new user and provide the user with access to manage your Azure subscription. Once you have done this, create an Automation Asset with its credentials and reference the credential in your runbook. You need to do this setup only once and can then use the stored credentials going forward, greatly reducing the number of steps you need to take to start automating. You can read this blog to learn more about getting set up with Active Directory Authentication.

PowerShell Script Converter

Azure Automation now supports importing PowerShell scripts as runbooks. When a PowerShell script is imported that does not contain a single PowerShell Workflow, Automation will attempt to convert it from PowerShell script to PowerShell Workflow, and then create a runbook from the result. This allows the vast amount of PowerShell content and knowledge that exists today to be more easily leveraged in Azure Automation, despite the fact that Automation executes PowerShell Workflow and not PowerShell.

Runbook Gallery

The Runbook Gallery allows you to quickly discover Automation sample, utility, and scenario runbooks from within the Azure management portal. The Runbook Gallery consists of runbooks that can be used as is or with minor modification, and runbooks that can serve as examples of how to create your own runbooks. The Runbook Gallery features content not only by Microsoft, but also by active members of the Azure community. If you have created a runbook that you think other users may benefit from, you can share it with the community on Script Center and it will show up in the Gallery. If you are interested in learning more about the Runbook Gallery, this TechNet article describes how the Gallery works in more detail and provides information on how you can contribute.

You can access the Gallery from +New, by selecting App Services > Automation > Runbook > From Gallery.


In the Gallery wizard, you can browse for runbooks by selecting the category in the left hand pane and then view the description of the selected runbook in the right pane. You can then preview the code and finally import the runbook into your personal space:


We will be adding the ability to expand the Gallery to include PowerShell scripts in the near future. These scripts will be converted to Workflows when they are imported to your Automation Account using the new PowerShell Script Converter. This means that you will have more content to choose from and a tool to help you get your PowerShell scripts running in Azure.

Hourly Scheduling

By popular request from our users, hourly scheduling is now available in Azure Automation. This feature allows you to schedule your runbook hourly or every X hours, making it that much easier to start runbooks at a regular frequency that is smaller than a day.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

The Heavyweight Championship of Agile

Without a framework, the building falls.

Team-based frameworks, techniques and methods have dominated the discussion over the short history of Agile as a named movement. Concepts like Scrum and Extreme Programming (just two of the more prominent frameworks) caught the interest of software developers and IT management for many reasons, including the promise of customer responsiveness, faster delivery rates and increased productivity, as well as a reaction against the implementation excesses of heavyweight frameworks such as the CMMI. The excesses of the latter still resound in the minds of many developers, process improvement specialists and IT leaders. The fear of excess overhead and administration has dampened the discussion and exploration of techniques to scale Agile to address program-level and enterprise issues. In the past few years, three primary frameworks have emerged to vie for the heavyweight championship of Agile. They are:

  1. Dynamic Systems Development Method (DSDM). DSDM is a project delivery/system development method. Originally released in 1994, DSDM predates the Agile Manifesto and was one of the frameworks that drove the evolution of lightweight incremental development. DSDM takes a broader view of projects and integrates other lifecycle steps into the framework on top of a developer-led Scrum team. The framework has been tailored to integrate with PRINCE2 (the European project management standard) as a project and program management tool. DSDM is open source. (http://www.dsdm.org/)
  2. Disciplined Agile Delivery (DAD). DAD is a framework that incorporates concepts from many of the standard techniques and frameworks (Scrum, lean, Agile modeling and the Agile Unified Process, to name a few) to develop an organizational model. Given that DAD was developed and introduced after the introduction of Agile-as-a-movement, DAD explicitly reflects the Agile principles espoused in the Agile Manifesto while addressing organizational scaling concerns. DAD has been influenced and championed by Scott Ambler (Disciplined Agile Delivery: A Practitioner’s Guide to Agile Software Delivery in the Enterprise, 2012) and IBM. The model is proprietary. (http://disciplinedagiledelivery.com/)
  3. Scaled Agile Framework Enterprise (SAFe). SAFe as a framework leverages Kanban as a tool to introduce organizational portfolio management and then evolves those concepts down through programs and projects and finally to Scrum teams. SAFe explicitly addresses the need for architecture to develop in lockstep with functionality. As with DSDM and DAD, SAFe’s goal is to provide a framework to scale Agile in a manner that addresses the concerns and needs of the overall business, IT and development organizations. SAFe is a proprietary framework that was originally developed by Dean Leffingwell and is now supported by the company Scaled Agile. (http://scaledagileframework.com/)

There is a huge amount of controversy in the Agile community about the need for frameworks for scaling Agile, as evidenced by shouting matches at conferences and the vitriol on message boards. However, many large commercial and governmental organizations are using DSDM, DAD and SAFe as mechanisms to scale Agile. Shouting is not going to stop what many organizations view as a valuable mechanism to deliver functionality to their customers.


Categories: Process Management

Why 'Why' Is Everything

Xebia Blog - Mon, 10/06/2014 - 20:46

The 'Why' part is perhaps the most important aspect of a user story. It links to the sprint goal, which ultimately links to the product vision and the organisation's vision.

Lately, I got reminded of the very truth of this statement. My youngest son is part of a soccer team and they have training every week. Part of the training is a set of exercises that use a so-called speedladder.


After the training, while driving home, I asked him what he especially liked about the training and what he wants to do differently next time. This time he answered that he didn't like the training at all. So I asked him what part he disliked: "The speedladder. It is such a stupid thing to do." Although I realised it was a poor man's answer, I told him that some parts are not that nice and he needs to accept that: practising is not always fun. I wasn't happy with the answer but couldn't think of a better one.

Some time passed when I overheard the trainers explaining to each other that the speedladder is for improving the 'footwork', coordination, and sensory development. Then I got an idea!
I knew that his ambition is to become as good as Messi :-) so once we were home I explained this to my son and told him that it helps him to improve his feints and make unparalleled moves. I noticed his twinkling eyes and he enthusiastically replied: "Dad, can we buy a speedladder so I can practise at home?". Of course I did buy one! Since then the 'speedladder' is the most favourite part of the soccer training!

Summary

The goal, purpose and the 'Why' are the most important things for persons and teams. Communicating this clearly to the team is one of the most important things a product owner and organisation need to do in order to get high-performing teams.

Quote of the Day

Herding Cats - Glen Alleman - Mon, 10/06/2014 - 17:52

The unassisted hand, and the understanding left to itself, possess but little power. Effects are produced by the means of instruments and helps, which the understanding requires no less than the hand. And as instruments either promote or regulate the motion of the hand, so those that are applied to the mind prompt or protect the understanding. - Novum Organum Scientiarum (Aphorisms Concerning the Interpretation of Nature and the Kingdom of Man), Francis Bacon (1561 - 1626).

When we hear we can make decisions in the absence of estimating the impacts of those decisions, the cost when complete, or the lost opportunity cost of making alternative decisions, think of Bacon.

He essentially says show me the money.

Control systems from Glen Alleman
Categories: Project Management

Fiddling with ReactiveUI and Xamarin.Forms

Eric.Weblog() - Eric Sink - Mon, 10/06/2014 - 17:00

I'm no expert at ReactiveUI. I'm just fiddling around in the back row of a session here at Xamarin Evolve. :-)

The goal

I want an instance of my DataGrid control that is bound to a collection of objects. And I want the display to automatically update when I change a property on one of those objects.

For the sake of this quickie demo, I'm gonna add a tap handler for a cell that simply appends an asterisk to the text of that cell. That should end up causing a display update.

Something like this code snippet:


Main.SingleTap += (object sender, CellCoords e) => {
    // Look up the row object and the column metadata for the tapped cell
    T r = Rows [e.Row];
    ColumnInfo ci = Columns[e.Column];
    // Use reflection to find the property on T that backs this column
    var typ = typeof(T);
    var ti = typ.GetTypeInfo();
    var p = ti.GetDeclaredProperty(ci.PropertyName);
    if (p != null)
    {
        // Append an asterisk to the current value, which triggers a property change
        var val = p.GetValue(r);
        p.SetValue(r, val.ToString() + "*");
    }
};

Actually, that looks complicated, because I've got some reflection code that figures out which property on my object corresponds to which column. Ignore the snippet above and think of it like this (for a tap in column 0):

Main.SingleTap += (object sender, CellCoords e) => {
    WordPair r = Rows [e.Row];
    r.en += "*";
};

Which will make more sense if you can see the class I'm using to represent a row:

public class WordPair
{
    public string en { get; set; }
    public string sp { get; set; }
}

Or rather, that's what it looked like before I started adapting it for ReactiveUI. Now it needs to notify somebody when its properties change, so it looks more like this:

public class WordPair : ReactiveObject
{
    private string _en;
    private string _sp;

    public string en {
        get { return _en; }
        set { this.RaiseAndSetIfChanged (ref _en, value); }
    }
    public string sp {
        get { return _sp; }
        set { this.RaiseAndSetIfChanged (ref _sp, value); }
    }
}
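A small aside of mine (not from Eric's post): the payoff of RaiseAndSetIfChanged is that property changes become observable. A minimal sketch, assuming ReactiveUI's Changed observable on ReactiveObject and the WordPair class above:

// Tiny sketch (my addition): observing changes on a ReactiveObject-derived WordPair.
using System;
using ReactiveUI;

public static class WordPairDemo
{
    public static void Run()
    {
        var wp = new WordPair { en = "give", sp = "dar" };

        // Changed fires whenever RaiseAndSetIfChanged actually changes a property.
        wp.Changed.Subscribe(e => Console.WriteLine(e.PropertyName + " changed"));

        wp.en += "*"; // prints "en changed"
    }
}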

So, basically I want a DataGrid which is bound to a ReactiveList. But actually, I want it to be more generic than that. I want WordPair to be a type parameter.

So my DataGrid subclass of Xamarin.Forms.View has a type parameter for the type of the row:

public class ColumnishGrid<T> : Xamarin.Forms.View where T : class

And the ReactiveList<T> is stored in a property of that View:

public static readonly BindableProperty RowsProperty = 
    BindableProperty.Create<ColumnishGrid<T>,ReactiveList<T>>(
        p => p.Rows, null);

public ReactiveList<T> Rows {
    get { return (ReactiveList<T>)GetValue(RowsProperty); }
    set { SetValue(RowsProperty, value); } // TODO disallow invalid values
}

And the relevant portions of the code to build a Xamarin.Forms content page look like this:

var mainPage = new ContentPage {
    Content = new ColumnishGrid<WordPair> {

        ...

        Rows = new ReactiveList<WordPair> {
            new WordPair { en = "drive", sp = "conducir" },
            new WordPair { en = "speak", sp = "hablar" },
            new WordPair { en = "give", sp = "dar" },
            new WordPair { en = "be", sp = "ser" },
            new WordPair { en = "go", sp = "ir" },
            new WordPair { en = "wait", sp = "esperar" },
            new WordPair { en = "live", sp = "vivir" },
            new WordPair { en = "walk", sp = "andar" },
            new WordPair { en = "run", sp = "correr" },
            new WordPair { en = "sleep", sp = "dormir" },
            new WordPair { en = "want", sp = "querer" },
        }
    }
};

The implementation of ColumnishGrid contains the following snippet, which will be followed by further explanation:

IRowList<T> rowlist = new RowList_Bindable_ReactiveList<T>(this, RowsProperty);

IValuePerCell<string> vals = new ValuePerCell_RowList_Properties<string,T>(rowlist, propnames);

IDrawCell<IGraphics> dec = new DrawCell_Text (vals, fmt);

In DataGrid, a RowList is an interface used for binding some data type that represents a whole row.

public interface IRowList<T> : IPerCell
{
    bool get_value(int r, out T val);
}

A RowList could be an array of something (like strings), using the column number as an index. But in this case, I am using a class, with each property mapped to a column.

A ValuePerCell object is used anytime DataGrid needs, er, a value per cell (like the text to be displayed):

public interface IValuePerCell<T> : IPerCell
{
    bool get_value(int col, int row, out T val);
}

And ValuePerCell_RowList_Properties is an object which does the mapping from a column number (like 0) to a property name (like WordPair.en).

Then the ValuePerCell object gets handed off to DrawCell_Text, which is what actually draws the cell text on the screen.

I skipped one important thing in the snippet above, and that's RowList_Bindable_ReactiveList. Since I'm storing my ReactiveList in a property on my View, there are two separate things to listen on. First, I obviously want to listen for changes to the ReactiveList and update the display appropriately. But I also need to listen for the case where somebody replaces the entire list.

RowList_Bindable_ReactiveList handles the latter, so it has code that looks like this:

obj.PropertyChanged += (object sender, System.ComponentModel.PropertyChangedEventArgs e) => {
    if (e.PropertyName == prop.PropertyName)
    {
        ReactiveList<T> lst = (ReactiveList<T>)obj.GetValue(prop);
        _next = new RowList_ReactiveList<T>(lst);
        if (changed != null) {
            changed(this, null);
        }
        _next.changed += (object s2, CellCoords e2) => {
            if (changed != null) {
                changed(this, e2);
            }
        };
    }
};

And finally, the code which listens to the ReactiveList itself:

public RowList_ReactiveList(ReactiveList<T> rx)
{
    _rx = rx;

    _rx.ChangeTrackingEnabled = true;
    _rx.ItemChanged.Subscribe (x => {
        if (changed != null) {
            int pos = _rx.IndexOf(x.Sender);
            changed(this, new CellCoords(0, pos));
        }
    });
}

DataGrid uses a chain of drawing objects which pass notifications up the chain with a C# event. In the end, the DataGrid core panel will hear about the change, and it will trigger the renderer, which will cause a redraw.

And that's how the word "give" ended up with an asterisk in the screen shot at the top of this blog entry.

 

How Clay.io Built their 10x Architecture Using AWS, Docker, HAProxy, and Lots More

This is a guest repost by Zoli Kahan from Clay.io. 

This is the first post in my new series 10x, where I share my experiences and how we do things at Clay.io to develop at scale with a small team. If you find these things interesting, we're hiring - zoli@clay.io.

The Cloud

CloudFlare

CloudFlare handles all of our DNS, and acts as a distributed caching proxy with some additional DDOS protection features. It also handles SSL.

Amazon EC2 + VPC + NAT server
Categories: Architecture

Ten Thousand Baby Steps

NOOP.NL - Jurgen Appelo - Mon, 10/06/2014 - 15:35
Euro Disco/Dance

Last week, I added the last songs to my two big Spotify playlists: Euro Dance Heaven (2,520 songs) and Euro Disco Heaven (1,695 songs). I added the first of those songs to my playlists on December 16, 2010. That’s almost four years ago.

The post Ten Thousand Baby Steps appeared first on NOOP.NL.

Categories: Project Management

Conceptual Model vs Graph Model

Mark Needham - Mon, 10/06/2014 - 08:11

We’ve started running some sessions on graph modelling in London and during the first session it was pointed out that the process I’d described was very similar to that when modelling for a relational database.

I thought I’d better do some reading on the way relational models are derived and I came across an excellent video by Joe Maguire titled ‘Data Modelers Still Have Jobs: Adjusting For the NoSQL Environment’.

Joe starts off by showing the following ‘big picture framework’ which describes the steps involved in coming up with a relational model:


A couple of slides later he points out that we often blur the lines between the different stages and end up designing a model which contains a lot of implementation details:


If, on the other hand, we compare a conceptual model with a graph model this is less of an issue as the two models map quite closely:

  • Entities -> Nodes / Labels
  • Attributes -> Properties
  • Relationships -> Relationships
  • Identifiers -> Unique Constraints

Unique Constraints don’t quite capture everything that Identifiers do since it’s possible to create a node of a specific label without specifying the property which is uniquely constrained. Other than that though each concept matches one for one.

We often say that graphs are whiteboard friendly, by which we mean that the model you sketch on a whiteboard is the same as that stored in the database.

For example, consider the following sketch of people and their interactions with various books:


If we were to translate that into a write query using Neo4j’s cypher query language it would look like this:

CREATE (ian:Person {name: "Ian"})
CREATE (alan:Person {name: "Alan"})
CREATE (gg:Person:Author {name: "Graham Greene"})
CREATE (jlc:Person:Author {name: "John Le Carre"})
 
CREATE (omih:Book {name: "Our Man in Havana"})
CREATE (ttsp:Book {name: "Tinker Tailor, Soldier, Spy"})
 
CREATE (gg)-[:WROTE]->(omih)
CREATE (jlc)-[:WROTE]->(ttsp)
CREATE (ian)-[:PURCHASED {date: "05-02-2011"}]->(ttsp)
CREATE (ian)-[:PURCHASED {date: "08-09-2011"}]->(omih)
CREATE (alan)-[:PURCHASED {date: "05-07-2014"}]->(ttsp)

There are a few extra brackets and the ‘CREATE’ keyword, but we haven’t lost any of the fidelity of the domain, and in my experience a non-technical / commercial person would be able to understand the query.

By contrast this article shows the steps we might take from a conceptual model describing employees, departments and unions to the eventual relational model.

If you don’t have the time to read through that, we start with this initial model…


…and by the time we’ve got to a model that can be stored in our relational database:


You’ll notice we’ve lost the relationship types and they’ve been replaced by 4 foreign keys that allow us to join the different tables/sets together.

In a graph model we’d have been able to stay much closer to the conceptual model and therefore closer to the language of the business.

I’m still exploring the world of data modelling and next up for me is to read Joe’s ‘Mastering Data Modeling‘ book. I’m also curious how normal forms and data redundancy apply to graphs so I’ll be looking into that as well.

Thoughts welcome, as usual!

Categories: Programming

How to create a Value Stream Map

Xebia Blog - Mon, 10/06/2014 - 08:05

Value Stream Mapping (VSM) is a -very- useful tool to gain insight in the workflow of a process and can be used to identify both Value Adding Activities and Non Value Adding Activities in a process stream while providing handles for optimizing the process chain. The results of a VSM can be used for many occasions: from writing out a business case, to defining a prioritized list to optimize processes within your organization, to pinpointing bottlenecks in your existing processes and gain a common understanding of process related issues.

When creating a VSM of your current software delivery process you quite possibly will be amazed by the amount of waste and therefore the room for improvement you might find. I challenge you to try this out within your own organization. It will leave you with a very powerful tool to explain to your management the steps that need to change, as it will leave you with facts.

To quickly get you started, I wrote out some handles on how to write out a proper Value Stream Map.

In many organizations there is the tendency to ‘solely’ perform local optimizations to steps in the process (i.e. per Business Unit), while in reality the largest process optimizations can be gained by optimizing the areas which are in between the process steps and do not add any value to the customer at all; the Non Value Adding activities. Value Stream Mapping is a great tool for optimizing the complete process chain, not just the local steps.


The Example - Mapping a Software Delivery Process
Many example value streams found on the internet focus on selling a mortgage, packaging objects in a factory or some logistic process. The example I will be using focuses on a typical Software Delivery Process as we still see them today: the 'traditional' Software Delivery Process containing many manual steps.

You first need to map the 'as-is' process as you need this to form the baseline. This baseline provides you with the required insight to remove steps from the process that do not add any value to your customer and therefore can be seen as pure waste to your organization.

It is important to write out the Value Stream as a group process (a workshop), where group-members represent people that are part of the value chain as it is today*. This is the only way to spot (hidden) activities and will provide a common understanding of the situation today. Apart from that, failure to execute the Value Stream Mapping activity as a group process will very likely reduce the acceptance rate at the end of the day. Never write out a VSM in isolation.

Value Stream mapping is 'a paper and pencil tool’ where you should ask participants to write out the stickies and help you form the map. You yourself will not write on stickies (okay, okay, maybe sometimes … but be careful not to do the work for the group). Writing out a process should take you about 4 to 6 hours, including discussions and the coffee breaks of course. So, now for the steps!

* Note that the example value stream is a simplified and fictional process based on the experience at several customers.

Step 0 Prepare
Make sure you have all materials available.

Here is a list:
- two 4-meter strips of brown paper
- plastic tape to attach the paper to the wall
- square stickies in multiple colors
- rectangular stickies in multiple colors
- small stickies in one or two colors
- lots of sharpies (people need to be able to pick up the pens)
- colored ‘dot’ stickies

What do you need? (the helpful colleague not depicted)

Step 1 & 2 define objectives and process steps
Make sure to work on one process at a time and start off with defining the customer objectives (the Voice Of Customer). A common understanding of the VoC is important because in a later stage you will determine with the team which activities really add to this VoC and which steps do not. Quite often these objectives are defined in terms of Time, Cost and Quality. For our example, let’s say the customer would like to be able to deliver a new feature every hour, with a max cost of $1000 a feature and with zero defects.

First, write down the Voice of the Customer in the top right corner. Now, together with the group, determine all the actors (organizations / persons / teams) that are part of the current process and glue these actors as orange stickies to the brown paper.

Defining Voice of Customer and Process Steps

Step 3 Define activities performed within each process step
With the group, determine the activities that take place within each process step. Underneath the orange stickies, add green stickies that describe the activities that take place in a given step.

Defining activities performed in each step

Step 4 Define Work in Progress (WiP)
Now, add pink stickies in between the steps, describing the number of features / requirements / objects / activities currently in progress between actors. This is referred to as WiP - Work in Progress. Whenever there is high WiP between steps, you have identified a bottleneck causing the process 'flow' to stop.

On top of the pink WiP stickies containing particularly high WiP levels, add a small sticky indicating what the group thinks is causing the high WiP. For instance, a document has to be distributed via internal mail, or a wait is introduced for a bi-weekly meeting, or travel to another location is required. This information can later be used to optimize the process.

Note that in the workshop you should also take some time to find WiP within the activities themselves (this is not depicted in this example). Spend time on finding information about the causes of high WiP and add this as stickies to each activity.

Define work in process

Step 5 Identify rework
Rework is waste. Still, many times you'll see that a deliverable has to be returned to a previous step for reprocessing. Together with the group, determine where this happens and what the cause of this rework is. A nice addition is to also write out first-time-right levels.

Identify rework

Step 6 Add additional information
Spend some time adding additional comments for activities on the green stickies. Some activities might, for instance, not be optimized, not be easy to handle, or be considered obsolete from a group perspective. Mark these comments with blue stickies next to the activity at hand.

Add additional information

Step 7 Add Process time, Wait time and Lead time and determine Process Cycle Efficiency

Now, as we have the process more or less complete, we can start adding information related to timing. In this step you would like to determine the following information:

  • process time: the real amount of time that is required to perform a task without interruptions
  • lead time: the actual time that it takes for the activity to be completed (also known as elapse time)
  • wait time: time when no processing is done at all, for example when waiting on an 'event' like a bi-weekly meeting.

(Not in picture): for every activity on the green sticky, write down a small sticky with two numbers vertically aligned. The top-number reflects the process-time, (i.e. 40 hours). The bottom-number reflects the lead time (i.e. 120 hours).

(In picture): add a block diagram underneath the process, where timing information in the upper section represents the total processing time for all activities and timing information in the lower section represents the total lead time for all activities (just add up the timing information for the individual activities described in the previous paragraph). Also add noticeable wait time in-between process steps. As a final step, to the right of this block diagram, add the totals.

Now that you have all information on the paper, the following  can be calculated:

  • Total Process Time - The total time required to actually work on activities if one could focus on the activity at hand.
  • Total Lead Time - The total time this process actually needs.
  • Project Cycle Efficiency (PCE): Total Process Time / Total Lead Time × 100%.

Add this information to the lower right corner of your brown paper. The numbers for this example are:

Total Process Time: add all numbers in the top section of the stickies: 424 hours
Process Lead Time (PLT): add all numbers in the lower section of the stickies + wait time in between steps: 1740 hours
Project Cycle Efficiency (PCE) now is: Total Process Time / Total Process Lead Time = 424 / 1740 ≈ 24%.
Note that 24% is -very- high, which is caused by using an example. Usually you’ll see a PCE of about 4 - 8% for a traditional process.

Add process, wait and lead times

Step 8 Identify Customer Value Add and Non Value Add activities
Now, categorize tasks into 2 types: tasks that add value to the customer (Customer Value Add, CVA) and tasks that do not add value to the customer (Non Value Add, NVA). The NVA you can again split into two categories: tasks that add Business Value (Business Value Add, BVA) and ‘Waste’. When optimizing a process, waste is to be eliminated completely as it does not add value to the customer nor the business as a whole. But also for the activities categorized as 'BVA', you have to ask yourself whether these activities add to the chain.

Mark CVA tasks with a green dot, BVA tasks with a blue dot and Waste with a red dot. Put the legend on the map for later reference.

When identifying CVA, NVA and BVA … force yourself to refer back to the Voice of Customer you jotted down in step 1 and think about who is your customer here. In this example, the customer is not the end user using the system, but the business. And it was the business that wanted Faster, Cheaper & Better. Now when you start to tag each individual task, give yourself some time in figuring out which tasks actually add to these goals.

Determine Customer Value Add & Non Value Add

To give you some guidance on how you can approach tagging each task, I’ll elaborate a bit on how I tagged the activities. Note again, this is just an example, within the workshop your team might tag differently.

Items I tagged as CVA: coding, testing (unit, static, acceptance), execution of tests and configuration of monitoring are adding value to the customer (business). Why? Because all these items relate to a faster, better (high quality through test + monitoring) and cheaper (less errors through higher quality of code) delivery of code.

Items I tagged as BVA: documentation, configuration of environments, deployments of VMs, installation of MW are required to be able to deliver to the customer when using this (typical waterfall) Software Delivery Processes. (Note: I do not necessarily concur with this process.) :)

Items I tagged as pure Waste, not adding any value to the customer: items like getting approval, the process to get funding (although probably required), discussing details and documenting results for later reference, or waiting for the quarterly release cycle. None of these items is required to deliver faster, cheaper or better, so in that respect these items can be considered waste.

That's it (and step 9) - you've mapped your current process
So, that’s about it! The Value Stream Map is now more or less complete and contains all relevant information required to optimize the process in a next step. Step 9 here would be: take some time to write out the items/bottlenecks that are most important or easiest to address and discuss internally with your team about a solution. Focus on items that you either tagged as BVA or pure waste and think of alternatives to eliminate these steps. Put your customer central, not your process! Just dropping an activity as a whole seems somewhat radical, but sometimes good ideas just are! Note by the way that when addressing a bottleneck, another bottleneck will pop up. There will always be a bottleneck somewhere in the process and therefore process optimization must be seen as a continuous process.

A final tip: to be able to perform a Value Stream Mapping workshop at a customer, it might be a good idea to join a more experienced colleague first, just to get a grasp of what the dynamics in such a workshop are like. The fact that all participants are at the same table, outlining the delivery process together and talking about it, will allow you to come up with an optimized process that each person will buy into. But still, it takes some effort to get the workshop going. Take your time, do not rush it.

For now, I hope you can use the steps above to identify the current largest bottlenecks within your own organization and get going. In a next blog, if there is sufficient interest, I will write about possible solutions for solving the bottlenecks in my example. If you have any ideas, just drop a line below so we can discuss! The aim for me would be to work towards a solution that caters for Continuous Delivery of Software.

Michiel Sens.

SPaMCAST 310 – Mike Burrows, Kanban from the Inside

www.spamcast.net


Listen to the SPaMCAST 310

Software Process and Measurement Cast 310 features our interview with Mike Burrows. This is Mike’s second visit to the Software Process and Measurement Cast.  In this visit we discussed his new book, Kanban from the Inside (Kindle).  The book lays out why Kanban is a management method built on a set of values rather than just a set of techniques. Mike explains why Kanban leads to better outcomes for projects, managers, organizations and customers!

Mike is the UK Director and Principal Consultant at David J Anderson and Associates. In a career spanning the aerospace, banking, energy and government sectors, Mike has been a global development manager, IT director and software developer. He speaks regularly at Lean/Kanban-related events in several countries and his book Kanban from the Inside (Kindle)was published in September.

Mike’s email is mike@djaa.com
Twitter: https://twitter.com/asplake and @KanbanInside
Blog is http://positiveincline.com/index.php/about/UK

Kanban conference: http://lkuk.leankanban.com/
Kanban conference series: http://conf.leankanban.com/

Next

SPaMCAST 311 features our essay on backlog grooming. Backlog grooming is an important technique that can be used in any Agile or Lean methodology. At one point the need for backlog grooming was debated, however most practitioners now find the practice useful. The simplest definition of backlog grooming is the preparation of the user stories or requirements to ensure they are ready to be worked on. The act of grooming and preparation can cover a wide range of specific activities and can be performed at any time). In the next podcast we get into the nuts and bolts of making your backlog better!

Upcoming Events

DCG Webinars:

Agile Risk Management – It Is Still Important! October 24, 2014 11:230 EDT Has the adoption of Agile techniques magically erased risk from software projects? Or, have we just changed how we recognize and manage risk?  Or, more frighteningly, by changing the project environment through adopting Agile techniques, have we tricked ourselves into thinking that risk has been abolished?

 

Upcoming Conferences:

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many you know I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management

SPaMCAST 310 – Mike Burrows, Kanban from the Inside

Software Process and Measurement Cast - Sun, 10/05/2014 - 22:00

Software Process and Measurement Cast 310 features our interview with Mike Burrows. This is Mike’s second visit to the Software Process and Measurement Cast.  In this visit we discussed his new book, Kanban from the Inside (Kindle).  The book lays out why Kanban is a management method built on a set of values rather than just a set of techniques. Mike explains why Kanban leads to better outcomes for projects, managers, organizations and customers!

Mike is the UK Director and Principal Consultant at David J Anderson and Associates. In a career spanning the aerospace, banking, energy and government sectors, Mike has been a global development manager, IT director and software developer. He speaks regularly at Lean/Kanban-related events in several countries and his book Kanban from the Inside (Kindle) was published in September.

Mike’s email is mike@djaa.com
Twitter: https://twitter.com/asplake and @KanbanInside
Blog is http://positiveincline.com/index.php/about/UK

Kanban conference: http://lkuk.leankanban.com/
Kanban conference series: http://conf.leankanban.com/

Next

SPaMCAST 311 features our essay on backlog grooming. Backlog grooming is an important technique that can be used in any Agile or Lean methodology. At one point the need for backlog grooming was debated; however, most practitioners now find the practice useful. The simplest definition of backlog grooming is the preparation of the user stories or requirements to ensure they are ready to be worked on. The act of grooming and preparation can cover a wide range of specific activities and can be performed at any time. In the next podcast we get into the nuts and bolts of making your backlog better!

Upcoming Events

DCG Webinars:

Agile Risk Management – It Is Still Important! October 24, 2014, 11:30 EDT. Has the adoption of Agile techniques magically erased risk from software projects? Or have we just changed how we recognize and manage risk? Or, more frighteningly, by changing the project environment through adopting Agile techniques, have we tricked ourselves into thinking that risk has been abolished?

 

Upcoming Conferences:

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Categories: Process Management

Quote of the Day

Herding Cats - Glen Alleman - Sun, 10/05/2014 - 15:33

Nothing is too wonderful to be true, if it is consistent with the laws of nature, and in such things as these, experiment is the best test of such consistency − Michael Faraday's Diary, March 19, 1849

When we hear wonderful concepts conjectured that are untested beyond personal anecdote, ask where this has worked, in what domain, under what governance framework, and what value was at risk in applying the idea.

Categories: Project Management

Distributed Agile: Checklist of the Basics

Hydrogen is elementary!

Hand drawn chart Saturday.

In Scrum there are four-ish basic meetings: sprint planning, daily stand-up, demonstrations, retrospectives and backlog grooming (the “ish” part). Whether the team is distributed or co-located, these meetings are critical for planning, communication and control in the way Agile is typically practiced. Getting them right is not optional, especially when the Agile team is distributed. While there are specific techniques for each type of meeting (some people call them rituals), there are a few basics that can be used as a checklist. They are:

  • Schedule and invite participants. Team members are easy! Schedule all standard meetings for team members upfront, for as many sprints as you are planning to have. As a team, decide who will participate in the demo and make sure they are invited as early as possible.
  • Review the goals and rules of the meeting upfront. Don’t assume that everyone knows the goal of the meeting and the ground rules for their participation.
  • Publish an agenda. Agendas provide focus for any meeting. While an agenda for a daily stand-up might sound like overkill (and for long-term, stable teams it probably is), I either review the outline of the meeting or send the outline to all participants before the meeting starts.
  • Check the tools and connections. Distributed teams will require tools and software packages, including audio conferencing, video conferencing, screen sharing and chat software. Ensure they are on and connected, and that everyone has access, BEFORE the meeting starts.
  • Ensure active facilitation is available. Every meeting is facilitated in some fashion; actively facilitated meetings are typically more focused, while un-facilitated meetings tend to be less focused and more ad hoc. Active or passive facilitation is your choice, but distributed teams should almost always choose active facilitation. If using Scrum, part of the role of the Scrum master is to act as a facilitator. The Scrum master guides the team and participants to ensure all of the meetings are effective and meet their goals.
  • Hold a meeting retrospective. Spend a few moments after each meeting to validate the goals were met and what could be done better in the future.

Agile is not magic. All Agile teams use techniques that assume the team has a common goal to guide them and then use feedback generated through communication to stay on track. Distributed Agile teams need to pay more careful attention to the basics. The Scrum master should strive to make the tools and process needed for all of the meetings fade into the background; for the majority of the team the end must be more important than the process.


Categories: Process Management

Tips for Error Handling with Android Wear APIs

Android Developers Blog - Sat, 10/04/2014 - 04:54

By +Wayne Piekarski, Developer Advocate, Android Wear

For developers using the Android Wear APIs in Google Play services, it is important to correctly handle all the error conditions that can occur on legacy phones or when users do not have a wearable device. This post describes the best practice in handling error conditions with the GoogleApiClient connect() method. If you do not implement this correctly, your existing application functionality may fail for non-wearable users.

There are two ways that the connect() method can return ConnectionResult.API_UNAVAILABLE for wearable support with Google Play services:

  • When requesting Wearable.API on any device running Android 4.2 (API level 17) or earlier
  • When requesting Wearable.API when no Android Wear companion application or device is available

Google Play services provides a wide range of useful features such as integration with Google Drive, Wallet, Google+, and Google Play games services (just to name a few!). During initialization, the application uses GoogleApiClient.Builder() to make calls to addApi() to request the features that are necessary. The connect() method is then called to establish a connection to the Google Play services library, and it can return error codes if any API is not available.

If you request multiple APIs from a single GoogleApiClient, such as Drive and Wear, and the Wear support returns API_UNAVAILABLE, then the Drive request will also fail. Since Wear support is not guaranteed to be available on all devices, you should make sure to use a separate client for this request.

The best practice for developers is to implement two separate GoogleApiClient connections:

  • One connection for Android Wear support, and
  • A separate connection for all of the other APIs you need

This will ensure that your app's functionality remains available for all users, whether or not wearable support is available on their devices, as well as on older legacy devices.
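To make the two-client pattern concrete, here is a minimal sketch (a sketch only: the activity class, log tag and listener bodies are illustrative placeholders, connection callbacks are omitted for brevity, and Drive is used as the second client's API simply because it is the example named above):

```java
import android.app.Activity;
import android.os.Bundle;
import android.util.Log;

import com.google.android.gms.common.ConnectionResult;
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.drive.Drive;
import com.google.android.gms.wearable.Wearable;

public class MainActivity extends Activity {

    private static final String TAG = "WearClients";

    private GoogleApiClient mWearClient;   // Wearable.API only
    private GoogleApiClient mOtherClient;  // everything else, e.g. Drive

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Client #1: only the Wear API. If this client fails with
        // API_UNAVAILABLE (Android 4.2 or earlier, or no companion app),
        // the app simply runs without wearable features.
        mWearClient = new GoogleApiClient.Builder(this)
                .addApi(Wearable.API)
                .addOnConnectionFailedListener(result -> {
                    if (result.getErrorCode() == ConnectionResult.API_UNAVAILABLE) {
                        Log.i(TAG, "Wear not available; continuing without it");
                    }
                })
                .build();

        // Client #2: the other Google Play services APIs the app needs.
        // Its connection is not affected by the wearable check above.
        mOtherClient = new GoogleApiClient.Builder(this)
                .addApi(Drive.API)
                .addScope(Drive.SCOPE_FILE)
                .addOnConnectionFailedListener(result ->
                        Log.w(TAG, "Play services connection failed: " + result))
                .build();
    }

    @Override
    protected void onStart() {
        super.onStart();
        mWearClient.connect();
        mOtherClient.connect();
    }

    @Override
    protected void onStop() {
        mWearClient.disconnect();
        mOtherClient.disconnect();
        super.onStop();
    }
}
```

With this split, an API_UNAVAILABLE result from the wearable client only disables the watch-related features; the second client keeps the rest of the app's Google Play services functionality working.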

It's important that you implement this best practice immediately, because your current users may be affected if not handled correctly in your app.

Join the discussion on +Android Developers
Categories: Programming

Distributed Agile: Retrospectives

Retrospectives are reflective!

Retrospectives are the team’s chance to make their life better. The process of making the team’s life better may mean confronting hard truths and changing how work is done. Hard conversations require trust and safety, and those are attributes that are hard to build remotely, especially if team members have never met each other. Facilitation and techniques tailored to distributed teams are needed to get real value from retrospectives when the team is distributed.

  1. Bring team members together. Joint retrospectives serve a number of purposes, including building relationships and trust. The combination of deeper relationships and trust will help team members tackle harder conversations when they are apart.
  2. Use collaboration tools. Many retrospective techniques generate lists and then ask participants to vote. Listing techniques work best when participants see what is being listed rather than trying to remember or reference any notes that have been taken. I have used free mind-mapping tools (such as FreeMind) and screen-sharing software to make sure everyone can see the “list.”
  3. Geographic distances can mask cultural differences. The facilitator needs to make sure he or she is aware of cultural differences (some cultures find it harder to expose and discuss problems). Differences in culture should be shared with the team before the retrospective begins. Consider adding a few minutes before beginning the retrospective to discuss cultural issues if your team has members in or from different countries or if there are glaring cultural differences. Note that the same ideas can be used to address personality differences.
  4. Use more than one facilitator. Until team members get comfortable with each other consider having a second (or third) facilitator to support the retrospective. When using multiple facilitators ensure that the facilitators understand their roles and are synchronized on the agenda.
  5. Consider assigning pre-retrospective homework. Poll team members for comments and issues before the retrospective session. The issues and comments can be shared to seed discussions, provide focus or just break the ice.

All of these suggestions presume that the retrospective has stable tele/video communication tools and that the meeting time has been negotiated. If participation suffers because of poor attendance, first ask what the problem is; if the problem is that attending a retrospective in the middle of the night is hard, then consider an alternate meeting time (share the time-zone pain).

Retrospectives are critical to help teams grow and become more effective. Retrospectives in distributed teams are harder than in co-located teams. The answer to that difficulty should be to use these or other techniques to facilitate communication and interaction. The answer should never be to abandon retrospectives, leave remote members out of the meeting or hold separate but equal retrospectives. Remember: one team, one retrospective. But that only works well when members trust each other and feel safe sharing their ideas for improvement.


Categories: Process Management

Estimating Software-Intensive Systems

Herding Cats - Glen Alleman - Fri, 10/03/2014 - 23:27

Estimating SW Intensive Systems

There are numerous conjectures about the waste of software project estimates. Most are based on personal opinion, divorced from the business processes of writing software for money.

From the Introduction of the book to the left.

Good estimates are key to project (and product) success. Estimates provide information to make decisions, define feasible performance objectives, and plans. Measurements provide data to gauge adherence to performance specifications and plans, make decisions, revise designs and plans, and improve future estimates and processes.

Engineers use estimates and measurements to evaluate the feasibility and affordability of proposed products, choose among alternative designs, assess risk, and support business decisions. Engineers and planners estimate the resources needed to develop, maintain, enhance, and deploy a product. Project planners use the estimated staffing level to identify needed facilities.

Planners and managers use the resource estimates to compute project cost and schedule, and prepare budgets and plans. Estimates of product, project and process characteristics provide "baselines" to assess progress during the execution of the project. Managers compare estimates and actual values to identify deviations from the project plan and to understand the causes of the variation.

For products, engineers compare estimates of the technical baseline to observed performance to decide if the product meets its functional and operational requirements. Process capability baselines establish norms for process performance. Managers use these norms to control the process and detect compliance problems. Process engineers use capability baselines to improve the production process.

Bad estimates affect everyone associated with the project - the engineers and managers, the customer who buys the product, and sometimes even the stockholders of the company responsible for delivering the software. Incomplete or inaccurate resource estimates for a project mean that the project may not have enough time and money to complete the required work.

If you work in a domain where none of these conditions are in place, then by all means don't estimate.

If you do recognize some or all of these conditions, then here's a summary of the reasons to estimate and measure, from the book.

  • Product Size, Performance, and Quality
    • Evaluate feasibility of requirements
    • Analyze alternative product designs
    • Determine the required capacity and performance of hardware.
    • Evaluate product performance - accuracy, speed, reliability, availability, and all the ...ilities. (ACA web site missed answering this question).
    • Identify and assess technical risks
    • Provide technical baselines for tracking and controlling - this is called Closed Loop Control. Having no steering targets, and no measures of actual performance assessed against desired performance, is called Open Loop Control.
  • Project Effort, Cost, and Schedule - yes Virginia real business managers need to know when you'll be done, how much it will cost, and what you'll deliver on that day for that cost. And yes Virginia there is no Santa Claus
    • Determine project feasibility in terms of cost and time
    • Identify and assess project risks - Risk Management is how Adults Manage Projects  - Tim Lister
    • Negotiate achievable commitments
    • Prepare realistic plans and budgets
    • Evaluate business value - cost versus benefit is how businesses stay in business
    • Provide cost and schedule baseline for tracking and controlling
  • Process Capability and Performance
    • Predict resource consumption and efficiency
    • Establish norms of expected performance - back to the steering targets
    • Identify opportunities for improvement.

 

Related articles:

  • The Failure of Open Loop Thinking
  • Why Johnny Can't Estimate or the Dystopia of Poor Estimating Practices
  • An Agile Estimating Story
  • How To Make Decisions
  • Incremental Commitment Spiral Model
  • Probabilistic Cost and Schedule Processes
  • Project Finance
  • The Three Elements of Project Work and Their Estimates
  • How Not To Make Decisions Using Bad Estimates
Categories: Project Management

Componentize the web, with Polycasts!

Google Code Blog - Fri, 10/03/2014 - 18:58
Today at Google, we’re excited to announce the launch of Polycasts, a new video series to get developers up and running with Polymer and Web Components.

Web Components usher in a new era of web development, allowing developers to create encapsulated, interoperable elements that extend HTML itself. Built atop these new standards, Polymer makes it easier and faster to build Web Components, while also adding polyfill support so they work across all modern browsers.

Because Polymer and Web Components are such big changes for the platform, there’s a lot to learn, and it can be easy to get lost in the complexity. For that reason, we created Polycasts.

Polycasts are designed to be bite sized, and to teach one concept at a time. Along the way we plan to highlight best practices for not only working with Polymer, but also using the DevTools to make sure your code is performant.

We’ll be releasing new videos often over the coming weeks, initially focusing on core elements and layout. These episodes will also be embedded throughout the Polymer site, helping to augment the existing documentation. Because there’s so much to cover in the Polymer universe, we want to hear from you! What would you like to see? Feel free to shoot a tweet to @rob_dodson, if you have an idea for a show, and be sure to subscribe to our YouTube channel so you’re notified when new episodes are released.

Posted by Rob Dodson, Developer Advocate
Categories: Programming