
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Updated Cross-Platform Tools in Google Play Game Services

Android Developers Blog - Tue, 10/07/2014 - 22:51

By Ben Frenkel, Google Play Games team

Game services UIs are now updated for material design, across all of the SDKs.

Game developers, we've updated some of our popular developer tools to give you a consistent set of game services across platforms, a refreshed UI based on material design, and new tools to give you better visibility into what users are doing in your games.

Let’s take a look at the new features.

Real-time Multiplayer in the Play Games cross-platform C++ SDK

To make it easier to build cross-platform games, we’ve added Real-Time Multiplayer (RTMP) to the latest Google Play Games C++ SDK. The addition of RTMP brings the C++ SDK to feature parity with the Play services SDK on Android and the Play Games iOS SDK. Learn more »

Material Design refresh across Android, cross-platform C++, and iOS SDKs

We’ve incorporated material design into the user-interface of the latest Play Games services SDKs for Android, cross-platform C++, and iOS. This gives you a bold, colorful design that’s consistent across all of your games, for all of your users. Learn more »

New quests features and completion statistics

Quests are a popular way to increase player engagement by adding fresh content without updating your game. We’ve added some new features to quests to make them easier to implement and manage.

First, we’ve simplified quests implementations by providing out-of-the-box toasts for “quest accepted” and “quest completed” events. You can invoke these toasts from your game with just a single call, on any platform. This removes the need to create your own custom toasts, though you are still free to do so.

You also have more insight into how your quests are performing through new in-line quest stats in the Developer Console. With these stats, you can better monitor how many people are completing their quests, so you can adjust the criteria to make them easier to achieve, if needed. Learn more »

Last, we’ve eliminated the 24-hour lead-time requirement for publishing, and repeating quests can now have the same name. You now have the freedom to publish quests whenever you want, with whatever name you want.

New quest stats let you see how many users are completing their quests.

Multiplayer game statistics

Now when you add multiplayer support through Google Play game services, you get multiplayer stats for free, without having to implement a custom logging solution. You can simply visit the Developer Console to see how players are using your multiplayer integration and look at trends in overall usage. The new stats are available as tabs under the Engagement section. Learn more »

Multiplayer stats let you see trends in how players are using your app's multiplayer integration.

New game services insights and alerts

We’re continuing to expand the types of alerts we offer in the Developer Console to let you know about more types of issues that might be affecting your users' gameplay experiences. You’ll now get an alert when you have a broken implementation of real-time or turn-based multiplayer, and we’ll also notify you if your Achievements and Leaderboard implementations use too many duplicate images. Learn more »

Get Started

You can get started with all of these new features right away. Visit the Google Play game services developer site to download the updated SDKs. For migration details on the Game Services SDK for iOS, see the release notes. You can take a look at the new stats and alerts by visiting the Google Play Developer Console.

Join the discussion on +Android Developers
Categories: Programming

Unexciting Thresholds

Software Requirements Blog - Seilevel.com - Tue, 10/07/2014 - 17:00
Kano analysis, named for Professor Noriaki Kano, is helpful for figuring out what features will have the greatest sway on customer satisfaction. The approach uses five categories for considering satisfaction: Exciters (or Delighters), Performance, Threshold, Indifferent, and Reversed (or Questionable). These categories have been translated into English using various names, so you might see Kano […]
Categories: Requirements

What Is Quality?

Mike Cohn's Blog - Tue, 10/07/2014 - 15:00

Agile teams build high-quality products. Agile team members write high-quality code. Agile teams produce functionality quickly by not sacrificing quality.

Each of these is something I’ve said before. And if you haven’t said these exact things, you’ve likely said something similar.

Quality gets mentioned a lot in discussions about agile. And so, perhaps it’s worth clarifying my definition of quality. Of course, others have thought about quality more deeply than I’m capable of. And so, I won’t be providing a new definition of quality here. But I will explain how I think of quality.

One of the leading advocates for quality was Philip Crosby. In the 1970s he proclaimed that “quality is free” because doing something right the first time at a high level of quality was cheaper than fixing it later. Crosby defined quality as “conformance to requirements.”

I never really bought into Crosby’s “conformance to requirements” approach (even before agile came around) because there was never a way to be confident the requirements were accurate. Saying something like old Microsoft Bob was high quality because it complied with some ill-conceived requirements document never felt right to me.

Similarly, quality isn’t just being bug-free, as that suffers from the same problem.

Another approach to defining quality comes from Joseph Juran. He was one of a number of management theorists who worked in Japan in the 1950s. Juran defined quality as “fitness for use”:

"An essential requirement of these products is that they meet the needs of those members of society who will actually use them. This concept of fitness for use is universal. It applies to all goods and services, without exception. The popular term for fitness for use is Quality, and our basic definition becomes: quality means fitness for use."

This definition of quality really resonates with me. Quality is “fitness for use.” A high-quality product does what its customers want in such a way that they actually use the product. Something that conforms to ill-conceived requirements (such as Microsoft Bob) is not high quality. Something that is buggy isn’t high quality because it isn’t fit for use.

What do you think? Is quality best thought of as “conformance to requirements?” Or “fitness for use?” Or perhaps something else entirely? Please share your thoughts in the comments below.


The Complete Guide to Treadmill Desk Walking While Working

Making the Complex Simple - John Sonmez - Tue, 10/07/2014 - 14:07

Just about every day, I spend at least some portion of my day walking on a treadmill desk while doing my work. I started doing this about four years ago–and although I haven’t always been consistent with it–I’ve found it to be a very easy way to burn some extra calories during the day and […]

The post The Complete Guide to Treadmill Desk Walking While Working appeared first on Simple Programmer.

Categories: Programming

Azure: Redis Cache, Disaster Recovery to Azure, Tagging Support, Elastic Scale for SQLDB, DocDB

ScottGu's Blog - Scott Guthrie - Tue, 10/07/2014 - 06:02

Over the last few days we’ve released a number of great enhancements to Microsoft Azure.  These include:

  • Redis Cache: General Availability of Redis Cache Service
  • Site Recovery: General Availability of Disaster Recovery to Azure using Azure Site Recovery
  • Management: Tags support in the Azure Preview Portal
  • SQL DB: Public preview of Elastic Scale for Azure SQL Database (available through .NET lib, Azure service templates)
  • DocumentDB: Support for Document Explorer, Collection management and new metrics
  • Notification Hub: Support for Baidu Push Notification Service
  • Virtual Network: Support for static private IP support in the Azure Preview Portal
  • Automation updates: Active Directory authentication, PowerShell script converter, runbook gallery, hourly scheduling support

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

Redis Cache: General Availability of Redis Cache Service

I’m excited to announce the General Availability of the Azure Redis Cache. The Azure Redis Cache service provides the ability for you to use a secure/dedicated Redis cache, managed as a service by Microsoft. The Azure Redis Cache is now the recommended distributed cache solution we advocate for Azure applications.

Redis Cache

Unlike traditional caches which deal only with key-value pairs, Redis is popular for its support of high performance data types, on which you can perform atomic operations such as appending to a string, incrementing the value in a hash, pushing to a list, computing set intersection, union and difference, or getting the member with highest ranking in a sorted set.  Other features include support for transactions, pub/sub, Lua scripting, keys with a limited time-to-live, and configuration settings to make Redis behave more like a traditional cache.
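
As a rough, hedged sketch of a few of those data-type operations, here is what they might look like with the StackExchange.Redis client used later in this post; the key names are made up for the example, and the connection is the same one shown further below:

using System;
using StackExchange.Redis;

// Sketch only: assumes a connection like the one created later in this post.
var connection = StackExchange.Redis.ConnectionMultiplexer.Connect("contoso5.redis.cache.windows.net,ssl=true,password=...");
IDatabase db = connection.GetDatabase();

db.StringAppend("greeting", "Hello");                                   // append to a string
db.HashIncrement("user:1", "visits");                                   // increment a value in a hash
db.ListLeftPush("jobs", "job-42");                                      // push to a list
db.SetAdd("tags:a", new RedisValue[] { "redis", "cache" });
db.SetAdd("tags:b", new RedisValue[] { "cache", "azure" });
RedisValue[] common = db.SetCombine(SetOperation.Intersect, "tags:a", "tags:b");  // set intersection
db.SortedSetAdd("scores", "alice", 42);
RedisValue[] top = db.SortedSetRangeByRank("scores", 0, 0, Order.Descending);     // member with the highest ranking
db.KeyExpire("greeting", TimeSpan.FromMinutes(20));                     // key with a limited time-to-live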

Finally, Redis has a healthy, vibrant open source ecosystem built around it. This is reflected in the diverse set of Redis clients available across multiple languages. This allows it to be used by nearly any application, running on either Windows or Linux, that you host inside of Azure.

Redis Cache Sizes and Editions

The Azure Redis Cache Service is today offered in the following sizes:  250 MB, 1 GB, 2.8 GB, 6 GB, 13 GB, 26 GB, 53 GB.  We plan to support even higher-memory options in the future.

Each Redis cache size option is also offered in two editions:

  • Basic – A single cache node, without a formal SLA, recommended for use in dev/test or non-critical workloads.
  • Standard – A multi-node, replicated cache configured in a two-node Master/Replica configuration for high-availability, and backed by an enterprise SLA.

With the Standard edition, we manage replication between the two nodes and perform an automatic failover in the case of any failure of the Master node (because of either an un-planned server failure, or in the event of planned patching maintenance). This helps ensure the availability of the cache and the data stored within it. 

Details on Azure Redis Cache pricing can be found on the Azure Cache pricing page.  Prices start as low as $17 a month.

Create a New Redis Cache and Connect to It

You can create a new instance of a Redis Cache using the Azure Preview Portal.  Simply select the New->Redis Cache item to create a new instance. 

You can then use a wide variety of programming languages and corresponding client packages to connect to the Redis Cache you’ve provisioned.  You use the same Redis client packages to connect to an Azure Redis Cache that you’d use to connect to your own Redis instance.  The API + libraries are exactly the same.

Below we’ll use a .NET Redis client called StackExchange.Redis to connect to our Azure Redis Cache instance. First open any Visual Studio project and add the StackExchange.Redis NuGet package to it, with the NuGet package manager.  Then, obtain the cache endpoint and key respectively from the Properties blade and the Keys blade for your cache instance within the Azure Preview Portal.

image

Once you’ve retrieved these, create a connection instance to the cache with the code below:

var connection = StackExchange.Redis.ConnectionMultiplexer.Connect("contoso5.redis.cache.windows.net,ssl=true,password=...");

Once the connection is established, retrieve a reference to the Redis cache database, by calling the ConnectionMultiplexer.GetDatabase method.

IDatabase cache = connection.GetDatabase();

Items can be stored in and retrieved from a cache by using the StringSet and StringGet methods (or their async counterparts – StringSetAsync and StringGetAsync).

cache.StringSet("Key1", "HelloWorld");

cache.StringGet("Key1");
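
The async counterparts work the same way; a minimal sketch, inside an async method and using the same cache reference:

await cache.StringSetAsync("Key1", "HelloWorld");

string value = await cache.StringGetAsync("Key1");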

You have now stored and retrieved a “Hello World” string from a Redis cache instance running on Azure. For an example of an end to end application using Azure Redis Cache, please check out the MVC Movie Application blog post.

Using Redis for ASP.NET Session State and Output Caching

You can also take advantage of Redis to store out-of-process ASP.NET Session State as well as to share Output Cached content across web server instances. 

For more details on using Redis for Session State, check out this blog post: ASP.NET Session State for Redis

For details on using Redis for Output Caching, check out this MSDN post: ASP.NET Output Cache for Redis

Monitoring and Alerting

Every Azure Redis cache instance has built-in monitoring support enabled by default. Currently you can track Cache Hits, Cache Misses, Get/Set Commands, Total Operations, Evicted Keys, Expired Keys, Used Memory, Used Bandwidth and Used CPU.  You can easily visualize these using the Azure Preview Portal:

image

You can also create alerts on metrics or events (just click the “Add Alert” button above). For example, you could create an alert rule to notify the cache administrator when the cache is seeing evictions. This in turn might signal that the cache is running hot and needs to be scaled up with more memory.

Learn more

For more information about the Azure Redis Cache, please visit the following links:

Site Recovery: Announcing the General Availability of Disaster Recovery to Azure

I’m excited to announce the general availability of the Azure Site Recovery Service’s new Disaster Recovery to Azure functionality.  The Disaster Recovery to Azure capability enables consistent replication, protection, and recovery of on-premises VMs to Microsoft Azure. With support for both Disaster Recovery and Migration to Azure, the Azure Site Recovery service now provides a simple, reliable, and cost-effective DR solution for enabling Virtual Machine replication and recovery between on-premises private clouds across different enterprise locations, or directly to the cloud with Azure.

This month’s release builds upon our recent InMage acquisition, and the integration of InMage Scout with Azure Site Recovery enables us to provide hybrid cloud business continuity solutions for any customer IT environment – regardless of whether it is Windows or Linux, running on physical servers or virtualized servers using Hyper-V, VMware or other virtualization solutions. Microsoft Azure is now the ideal destination for disaster recovery for virtually every enterprise server in the world.

In addition to enabling replication to and disaster recovery in Azure, the Azure Site Recovery service also enables the automated protection of VMs, remote health monitoring of them, no-impact disaster recovery plan testing, and single click orchestrated recovery - all backed by an enterprise-grade SLA. A new addition with this GA release is the ability to also invoke Azure Automation runbooks from within Azure Site Recovery Plans, enabling you to further automate your solutions.

image

Learn More about Azure Site Recovery

For more information on Azure Site Recovery, check out the recording of the Azure Site Recovery session at TechEd 2014 where we discussed the preview.  You can also visit the Azure Site Recovery forum on MSDN for additional information and to engage with the engineering team or other customers.

Once you’re ready to get started with Azure Site Recovery, check out additional pricing or product information, and sign up for a free Azure trial.

Beginning this month, Azure Backup and Azure Site Recovery will also be available in a convenient and economical promotional offer available for purchase via a Microsoft Enterprise Agreement.  Each unit of the Azure Backup & Site Recovery annual subscription offer covers protection of a single instance to Azure with Site Recovery, as well as backup of data with Azure Backup.  You can contact your Microsoft Reseller or Microsoft representative for more information.

Management: Tag Support with Resources

I’m excited to announce the support of tags in the Azure management platform and in the Azure preview portal.

Tags provide an easy way to organize your Azure resources and resource groups, by allowing you to tag your resources with name/value pairs to further categorize and view resources across resource groups and across subscriptions.  For example, you could use tags to identify which of your resources are used for “production” versus “dev/test” – and enable easy filtering/searching of the resources based on which tag you were interested in – regardless of which application or resource group they were in.

Using Tags

To get started with the new Tag support, browse to any resource or resource group in the Azure Preview Portal and click on the Tags tile on the resource.

image

On the Tags blade that appears, you'll see a list of any tags you've already applied. To add a new tag, simply specify a name and value and press enter. After you've added a few tags, you'll notice autocomplete options based on pre-existing tag names and values to better ensure a consistent taxonomy across your resources and to avoid common mistakes, like misspellings.

image

You can also use our command-line tools to tag resources as well.  Below is an example of using the Azure PowerShell module to quickly tag all of the resources in your Azure subscription:

image

Once you've tagged your resources and resource groups, you can view the full list of tags across all of your subscriptions using the Browse hub.

image

You can also “pin” tags to your Startboard for quick access.  This provides a really easy way to quickly jump to any resource in a tag you’ve pinned:

image 

SQL Databases: Public Preview of Elastic Scale Support

I am excited to announce the public preview of Elastic Scale for Azure SQL Database. Elastic Scale enables the data-tier of an application to scale out via industry-standard sharding practices, while significantly streamlining the development and management of your sharded cloud applications. The new capabilities are provided through .NET libraries and Azure service templates that are hosted in your own Azure subscription to manage your highly scalable applications. Elastic Scale implements the infrastructure aspects of sharding and thus allows you to instead focus on the business logic of your application.

Elastic Scale allows developers to establish a “contract” that defines where different slices of data reside across a collection of database instances.  This enables applications to easily and automatically direct transactions to the appropriate database (shard) and perform queries that cross many or all shards using simple extensions to the ADO.NET programming model. Elastic Scale also enables coordinated data movement between shards to split or merge ranges of data among different databases and satisfy common scenarios such as pulling a busy tenant into its own shard. 
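
To make the “contract” and data-dependent routing a bit more concrete, here is a minimal, hypothetical sketch using the Elastic Scale client library from the preview; the shard map name, tenant key, query and connection strings are placeholders rather than anything prescribed by the service:

using System.Data.SqlClient;
using Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement;

// Sketch only: connection strings, names and the key value are placeholders.
string shardMapManagerConnStr = "<shard map manager database connection string>";
string shardUserConnStr = "<credentials-only connection string for the shards>";

// Load the shard map manager, which stores the "contract" mapping keys to shards.
ShardMapManager smm = ShardMapManagerFactory.GetSqlShardMapManager(
    shardMapManagerConnStr, ShardMapManagerLoadPolicy.Lazy);

// Look up the shard map that assigns each tenant id to a database (shard).
ListShardMap<int> tenants = smm.GetListShardMap<int>("TenantShardMap");

// Data-dependent routing: open a connection to whichever shard holds tenant 42.
using (SqlConnection conn = tenants.OpenConnectionForKey(42, shardUserConnStr))
using (SqlCommand cmd = conn.CreateCommand())
{
    cmd.CommandText = "SELECT COUNT(*) FROM Orders";
    int orderCount = (int)cmd.ExecuteScalar();
}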

image

We are also announcing the Federation Migration Utility which is available as part of the preview. This utility will help current SQL Database Federations customers migrate their Federations application to Elastic Scale without having to perform any data movement.

Get Started with the Elastic Scale preview today, and watch our Channel 9 video to learn more.

DocumentDB: Document Explorer, Collection management and new metrics

Last week we released a bunch of updates to the Azure DocumentDB service experience in the Azure Preview Portal. We continue to improve the developer and management experiences so you can be more productive and build great applications on DocumentDB. These improvements include:

  • Document Explorer: View and access JSON documents in your database account
  • Collection management: Easily add and delete collections
  • Database performance metrics and storage information: View performance metrics and storage consumed at a Database level
  • Collection performance metrics and storage information: View performance metrics and storage consumed at a Collection level
  • Support for Azure tags: Apply custom tags to DocumentDB Accounts
Document Explorer

Near the bottom of the DocumentDB Account, Database, and Collection blades, you’ll now find a new Developer Tools lens with a Document Explorer part.

image

This part provides you with a read-only document explorer experience. Select a database and collection within the Document Explorer and view documents within that collection.

image

Note that the Document Explorer will load up to the first 100 documents in the selected Collection. You can load additional documents (in batches of 100) by selecting the “Load more” option at the bottom of the Document Explorer blade. Future updates will expand Document Explorer functionality to enable document CRUD operations as well as the ability to filter documents.

Collection Management

The DocumentDB Database blade now allows you to quickly create a new Collection through the Add Collection command found on the top left of the Database blade.

image
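
If you prefer to manage collections from code rather than through the portal blade, a minimal, hypothetical sketch with the DocumentDB .NET client might look like the following; the endpoint, key and ids are placeholders, and the calls belong inside an async method:

using System;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

// Sketch only: endpoint URI, account key and ids are placeholders.
var client = new DocumentClient(new Uri("https://myaccount.documents.azure.com:443/"), "<account key>");

Database db = await client.CreateDatabaseAsync(new Database { Id = "registrationdb" });
DocumentCollection coll = await client.CreateDocumentCollectionAsync(
    db.SelfLink, new DocumentCollection { Id = "registrations" });

// Add a document, then delete the collection when it is no longer needed.
await client.CreateDocumentAsync(coll.SelfLink, new { id = "1", name = "example" });
await client.DeleteDocumentCollectionAsync(coll.SelfLink);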

Health Metrics

We’ve added a new Collection blade which exposes Collection level performance metrics and storage information. You can access this new blade by selecting a Collection from the list of Collections on the Database blade.

image

The Database and Collection level metrics are available via the Database and Collection blades.

image

image

As always, we’d love to hear from you about the DocumentDB features and experiences you would find most valuable within the Azure portal. You can submit your suggestions on the Microsoft Azure DocumentDB feedback forum.

Notification Hubs: support for Baidu Cloud Push

Azure Notification Hubs enable cross platform mobile push notifications for Android, iOS, Windows, Windows Phone, and Kindle devices. Thousands of customers now use Notification Hubs for instant cross platform broadcast, personalized notifications to dynamic segments of their mobile audience, or simply to reach individual customers of their mobile apps regardless of which device they use.  Today I am excited to announce support for another mobile notifications platform, Baidu Cloud Push, which will help Notification Hubs customers reach the diverse family of Android devices in China.

Delivering push notifications to Android devices in China is no easy task, due to a diverse set of app stores and push services. Pushing notifications to an Android device via Google Cloud Messaging Service (GCM) does not work, as most Android devices in China are not configured to use GCM.  To help app developers reach every Android device independent of which app store they’re configured with, Azure Notification Hubs now supports sending push notifications via the Baidu Cloud Push service.

To use Baidu from your Notification Hub, register your app with Baidu, and obtain the appropriate identifiers (UserId and ChannelId) for your application.

image

Then configure your Notification Hub within the Azure Management Portal with these identifiers:

image

For more details, follow the tutorial in English & Chinese. You can learn more about Push Notifications using Azure at the Notification Hubs dev center.

Virtual Machines: Instance-Level Public IPs generally available

Azure now supports the ability for you to assign public IP addresses to VMs and web or worker roles so they become directly addressable on the Internet - without having to map a virtual IP endpoint for access. With Instance-Level Public IPs, you can enable scenarios like running FTP servers in Azure and monitoring VMs directly using their IPs.

For more information, please visit the Instance-Level Public IP Addresses webpage.

Automation: Updates

Earlier this year, we introduced preview availability of Azure Automation, a service that allows you to automate the deployment, monitoring, and maintenance of your Azure resources. I am excited to announce several new features in Azure Automation:

  • Active Directory Authentication
  • PowerShell Script Converter
  • Runbook Gallery
  • Hourly Scheduling
Active Directory Authentication

We now offer an easier alternative to using certificates to authenticate from the Azure Automation service to your Azure environment. You can now authenticate to Azure using an Azure Active Directory organization identity which provides simple, credential-based authentication.

If you do not have an Active Directory user set up already, simply create a new user and provide the user with access to manage your Azure subscription. Once you have done this, create an Automation Asset with its credentials and reference the credential in your runbook. You need to do this setup only once and can then use the stored credentials going forward, greatly simplifying the number of steps that you need to take to start automating. You can read this blog to learn more about getting set up with Active Directory Authentication.

PowerShell Script Converter

Azure Automation now supports importing PowerShell scripts as runbooks. When a PowerShell script is imported that does not contain a single PowerShell Workflow, Automation will attempt to convert it from PowerShell script to PowerShell Workflow, and then create a runbook from the result. This allows the vast amount of PowerShell content and knowledge that exists today to be more easily leveraged in Azure Automation, despite the fact that Automation executes PowerShell Workflow and not PowerShell.

Runbook Gallery

The Runbook Gallery allows you to quickly discover Automation sample, utility, and scenario runbooks from within the Azure management portal. The Runbook Gallery consists of runbooks that can be used as is or with minor modification, and runbooks that can serve as examples of how to create your own runbooks. The Runbook Gallery features content not only by Microsoft, but also by active members of the Azure community. If you have created a runbook that you think other users may benefit from, you can share it with the community on Script Center and it will show up in the Gallery. If you are interested in learning more about the Runbook Gallery, this TechNet article describes how the Gallery works in more detail and provides information on how you can contribute.

You can access the Gallery from +New, by selecting App Services > Automation > Runbook > From Gallery.

image

In the Gallery wizard, you can browse for runbooks by selecting the category in the left hand pane and then view the description of the selected runbook in the right pane. You can then preview the code and finally import the runbook into your personal space:

image

We will be adding the ability to expand the Gallery to include PowerShell scripts in the near future. These scripts will be converted to Workflows when they are imported to your Automation Account using the new PowerShell Script Converter. This means that you will have more content to choose from and a tool to help you get your PowerShell scripts running in Azure.

Hourly Scheduling

Based on popular request from our users, hourly scheduling is now available in Azure Automation. This feature allows you to schedule your runbook hourly or every X hours, making it that much easier to start runbooks at a regular frequency that is smaller than a day.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

The Heavyweight Championship of Agile

Without a framework, the building falls.


Team-based frameworks, techniques and methods have dominated the discussion over the short history of Agile as a named movement. Concepts like Scrum and Extreme Programming (just two of the more prominent frameworks) caught the interest of software developers and IT management for many reasons, including the promise of customer responsiveness, faster delivery rates, increased productivity, and a reaction to the implementation excesses of heavyweight frameworks such as the CMMI. The excesses of the latter still resound in the minds of many developers, process improvement specialists and IT leaders. The fear of excess overhead and administration has dampened the discussion and exploration of techniques for scaling Agile to address program-level and enterprise issues.  In the past few years, three primary frameworks have emerged to vie for the heavyweight championship of Agile. They are:

  1. Dynamic Systems Development Method (DSDM). DSDM is a project delivery/system development method. Originally released in 1994, DSDM predates the Agile Manifesto and was one of the frameworks that drove the evolution of lightweight incremental development. DSDM takes a broader view of projects and integrates other lifecycle steps into the framework on top of a developer-led Scrum team. The framework has been tailored to integrate PRINCE2 (the European project management standard) as a project and program management tool. DSDM is open source. (http://www.dsdm.org/)
  2. Disciplined Agile Delivery (DAD). DAD is a framework that incorporates concepts from many of the standard techniques and frameworks (Scrum, lean, Agile modeling and Agile Unified Process, to name a few) to develop an organizational model. Given that DAD was developed and introduced after Agile became a movement, DAD explicitly reflects the Agile principles espoused in the Agile Manifesto while addressing organizational scaling concerns. DAD has been influenced and championed by Scott Ambler (Disciplined Agile Delivery: A Practitioner’s Guide to Agile, 2012) and IBM. The model is proprietary. (http://disciplinedagiledelivery.com/)
  3. Scaled Agile Framework Enterprise (SAFe). SAFe leverages Kanban as a tool to introduce organizational portfolio management and then evolves those concepts down through programs and projects to Scrum teams. SAFe explicitly addresses the necessity of architecture developing in lockstep with functionality. As with DSDM and DAD, SAFe’s goal is to provide a framework for scaling Agile in a manner that addresses the concerns and needs of the overall business, IT and development organizations. SAFe is a proprietary framework that was originally developed by Dean Leffingwell and is now supported by the company Scaled Agile. (http://scaledagileframework.com/)

There is a huge amount of controversy in the Agile community about the need for frameworks for scaling Agile, as evidenced by shouting matches at conferences and the vitriol on message boards. However, many large commercial and governmental organizations are using DSDM, DAD and SAFe as mechanisms to scale Agile.  Shouting is not going to stop what many organizations view as a valuable mechanism to deliver functionality to their customers.


Categories: Process Management

Why 'Why' Is Everything

Xebia Blog - Mon, 10/06/2014 - 20:46

The 'Why' part is perhaps the most important aspect of a user story. It links to the sprint goal, which ultimately links to the product vision and the organisation's vision.

Lately, I got reminded of the very truth of this statement. My youngest son is part of a soccer team and they have training every week. Part of the training are exercises that use a so-called speedladder.


After the training, while driving home, I asked him what he especially liked about the training and what he wants to do differently next time. This time he answered that he didn't like the training at all. So I asked him what part he disliked: "The speedladder. It is such a stupid thing to do.". Although I realised it was a poor man's answer, I told him that some parts are not that nice and he needs to accept that: practising is not always fun. I wasn't happy with the answer but couldn't think of a better one.

Some time passed when I overheard the trainers explaining to each other that the speedladder is for improving the 'footwork', coordination, and sensory development. Then I got an idea!
I knew that his ambition is to become as good as Messi :-) so when at home I explained this to my son and told him that it helps him to improve his feints and his moves. I noticed his twinkling eyes and he enthusiastically replied: "Dad, can we buy a speedladder so I can practise at home?". Of course I did buy one! Since then, the speedladder has been his favourite part of the soccer training!

Summary

The goal, purpose and the 'Why' is the most important thing for persons and teams. Communicating this clearly to the team is one of the most important things a product owner and organisation need to do in order to get high-performing teams.

Quote of the Day

Herding Cats - Glen Alleman - Mon, 10/06/2014 - 17:52

The unassisted hand, and the understanding left to itself, possess but little power. Effects are produced by the means of instruments and helps, which the understanding requires no less than the hand. And as instruments either promote or regulate the motion of the hand, so those that are applied to the mind prompt or protect the understanding - Novum Organum Scientiarum (Aphorisms concerning the Interpretation of Nature and the Kingdom of Man), Francis Bacon (1561 - 1626).

 

 

When we hear that we can make decisions in the absence of estimating the impacts of those decisions, the cost when complete, or the lost opportunity cost of making alternative decisions, think of Bacon.

He essentially says show me the money.

Control systems from Glen Alleman
Categories: Project Management

Fiddling with ReactiveUI and Xamarin.Forms

Eric.Weblog() - Eric Sink - Mon, 10/06/2014 - 17:00

I'm no expert at ReactiveUI. I'm just fiddling around in the back row of a session here at Xamarin Evolve. :-)

The goal

I want an instance of my DataGrid control that is bound to a collection of objects. And I want the display to automatically update when I change a property on one of those objects.

For the sake of this quickie demo, I'm gonna add a tap handler for a cell that simply appends an asterisk to the text of that cell. That should end up causing a display update.

Something like this code snippet:


Main.SingleTap += (object sender, CellCoords e) => {
    T r = Rows [e.Row];
    ColumnInfo ci = Columns[e.Column];
    var typ = typeof(T);
    var ti = typ.GetTypeInfo();
    var p = ti.GetDeclaredProperty(ci.PropertyName);
    if (p != null)
    {
        var val = p.GetValue(r);
        p.SetValue(r, val.ToString() + "*");
    }
};

Actually, that looks complicated, because I've got some reflection code that figures out which property on my object corresponds to which column. Ignore the snippet above and think of it like this (for a tap in column 0):

Main.SingleTap += (object sender, CellCoords e) => {
    WordPair r = Rows [e.Row];
    r.en += "*";
};

Which will make more sense if you can see the class I'm using to represent a row:

public class WordPair
{
    public string en { get; set; }
    public string sp { get; set; }
}

Or rather, that's what it looked like before I started adapting it for ReactiveUI. Now it needs to notify somebody when its properties change, so it looks more like this:

public class WordPair : ReactiveObject
{
    private string _en;
    private string _sp;

    public string en {
        get { return _en; }
        set { this.RaiseAndSetIfChanged (ref _en, value); }
    }
    public string sp {
        get { return _sp; }
        set { this.RaiseAndSetIfChanged (ref _sp, value); }
    }
}

So, basically I want a DataGrid which is bound to a ReactiveList. But actually, I want it to be more generic than that. I want WordPair to be a type parameter.

So my DataGrid subclass of Xamarin.Forms.View has a type parameter for the type of the row:

public class ColumnishGrid<T> : Xamarin.Forms.View where T : class

And the ReactiveList<T> is stored in a property of that View:

public static readonly BindableProperty RowsProperty = 
    BindableProperty.Create<ColumnishGrid<T>,ReactiveList<T>>(
        p => p.Rows, null);

public ReactiveList<T> Rows {
    get { return (ReactiveList<T>)GetValue(RowsProperty); }
    set { SetValue(RowsProperty, value); } // TODO disallow invalid values
}

And the relevant portions of the code to build a Xamarin.Forms content page look like this:

var mainPage = new ContentPage {
    Content = new ColumnishGrid<WordPair> {

        ...

        Rows = new ReactiveList<WordPair> {
            new WordPair { en = "drive", sp = "conducir" },
            new WordPair { en = "speak", sp = "hablar" },
            new WordPair { en = "give", sp = "dar" },
            new WordPair { en = "be", sp = "ser" },
            new WordPair { en = "go", sp = "ir" },
            new WordPair { en = "wait", sp = "esperar" },
            new WordPair { en = "live", sp = "vivir" },
            new WordPair { en = "walk", sp = "andar" },
            new WordPair { en = "run", sp = "correr" },
            new WordPair { en = "sleep", sp = "dormir" },
            new WordPair { en = "want", sp = "querer" },
        }
    }
};

The implementation of ColumnishGrid contains the following snippet, which will be followed by further explanation:

IRowList<T> rowlist = new RowList_Bindable_ReactiveList<T>(this, RowsProperty);

IValuePerCell<string> vals = new ValuePerCell_RowList_Properties<string,T>(rowlist, propnames);

IDrawCell<IGraphics> dec = new DrawCell_Text (vals, fmt);

In DataGrid, a RowList is an interface used for binding some data type that represents a whole row.

public interface IRowList<T> : IPerCell
{
    bool get_value(int r, out T val);
}

A RowList could be an array of something (like strings), using the column number as an index. But in this case, I am using a class, with each property mapped to a column.

A ValuePerCell object is used anytime DataGrid needs, er, a value per cell (like the text to be displayed):

public interface IValuePerCell<T> : IPerCell
{
    bool get_value(int col, int row, out T val);
}

And ValuePerCell_RowList_Properties is an object which does the mapping from a column number (like 0) to a property name (like WordPair.en).

Then the ValuePerCell object gets handed off to DrawCell_Text, which is what actually draws the cell text on the screen.
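
To make that column-to-property mapping concrete, here is a simplified, hypothetical stand-in (not the actual ValuePerCell_RowList_Properties from DataGrid) that resolves a column number to a property name and reads the value with the same reflection calls used in the tap handler at the top of this post; the IPerCell plumbing and change notifications are omitted:

using System.Collections.Generic;
using System.Reflection;

// Hypothetical sketch only: the real class also implements the IValuePerCell/IPerCell
// interfaces and raises change notifications, which are left out here.
public class PropertyCellValues<TRow> where TRow : class
{
    readonly IList<TRow> _rows;
    readonly string[] _propnames;   // one property name per column, e.g. { "en", "sp" }

    public PropertyCellValues(IList<TRow> rows, string[] propnames)
    {
        _rows = rows;
        _propnames = propnames;
    }

    public bool get_value(int col, int row, out string val)
    {
        TRow r = _rows[row];
        PropertyInfo p = typeof(TRow).GetTypeInfo().GetDeclaredProperty(_propnames[col]);
        if (r == null || p == null)
        {
            val = null;
            return false;
        }
        val = (string)p.GetValue(r);
        return true;
    }
}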

I skipped one important thing in the snippet above, and that's RowList_Bindable_ReactiveList. Since I'm storing my ReactiveList in a property on my View, there are two separate things to listen on. First, I obviously want to listen for changes to the ReactiveList and update the display appropriately. But I also need to listen for the case where somebody replaces the entire list.

RowList_Bindable_ReactiveList handles the latter, so it has code that looks like this:

obj.PropertyChanged += (object sender, System.ComponentModel.PropertyChangedEventArgs e) => {
    if (e.PropertyName == prop.PropertyName)
    {
        ReactiveList<T> lst = (ReactiveList<T>)obj.GetValue(prop);
        _next = new RowList_ReactiveList<T>(lst);
        if (changed != null) {
            changed(this, null);
        }
        _next.changed += (object s2, CellCoords e2) => {
            if (changed != null) {
                changed(this, e2);
            }
        };
    }
};

And finally, the code which listens to the ReactiveList itself:

public RowList_ReactiveList(ReactiveList<T> rx)
{
    _rx = rx;

    _rx.ChangeTrackingEnabled = true;
    _rx.ItemChanged.Subscribe (x => {
        if (changed != null) {
            int pos = _rx.IndexOf(x.Sender);
            changed(this, new CellCoords(0, pos));
        }
    });
}

DataGrid uses a chain of drawing objects which pass notifications up the chain with a C# event. In the end, the DataGrid core panel will hear about the change, and it will trigger the renderer, which will cause a redraw.

And that's how the word "give" ended up with an asterisk in the screen shot at the top of this blog entry.

 

How Clay.io Built their 10x Architecture Using AWS, Docker, HAProxy, and Lots More

This is a guest repost by Zoli Kahan from Clay.io. 

This is the first post in my new series 10x, where I share my experiences and how we do things at Clay.io to develop at scale with a small team. If you find these things interesting, we're hiring - zoli@clay.io.

The Cloud

CloudFlare

CloudFlare handles all of our DNS, and acts as a distributed caching proxy with some additional DDOS protection features. It also handles SSL.

Amazon EC2 + VPC + NAT server
Categories: Architecture

Ten Thousand Baby Steps

NOOP.NL - Jurgen Appelo - Mon, 10/06/2014 - 15:35
Euro Disco/Dance

Last week, I added the last songs to my two big Spotify playlists: Euro Dance Heaven (2,520 songs) and Euro Disco Heaven (1,695 songs). I added the first of those songs to my playlists on December 16, 2010. That’s almost four years ago.

The post Ten Thousand Baby Steps appeared first on NOOP.NL.

Categories: Project Management

Conceptual Model vs Graph Model

Mark Needham - Mon, 10/06/2014 - 08:11

We’ve started running some sessions on graph modelling in London and during the first session it was pointed out that the process I’d described was very similar to that when modelling for a relational database.

I thought I better do some reading on the way relational models are derived and I came across an excellent video by Joe Maguire titled ‘Data Modelers Still Have Jobs: Adjusting For the NoSQL Environment’.

Joe starts off by showing the following ‘big picture framework’ which describes the steps involved in coming up with a relational model:


A couple of slides later he points out that we often blur the lines between the different stages and end up designing a model which contains a lot of implementation details:


If, on the other hand, we compare a conceptual model with a graph model this is less of an issue as the two models map quite closely:

  • Entities -> Nodes / Labels
  • Attributes -> Properties
  • Relationships -> Relationships
  • Identifiers -> Unique Constraints

Unique Constraints don’t quite capture everything that Identifiers do since it’s possible to create a node of a specific label without specifying the property which is uniquely constrained. Other than that though each concept matches one for one.

We often say that graphs are whiteboard friendly, by which we mean that the model you sketch on a whiteboard is the same as the one stored in the database.

For example, consider the following sketch of people and their interactions with various books:


If we were to translate that into a write query using Neo4j’s cypher query language it would look like this:

CREATE (ian:Person {name: "Ian"})
CREATE (alan:Person {name: "Alan"})
CREATE (gg:Person:Author {name: "Graham Greene"})
CREATE (jlc:Person:Author {name: "John Le Carre"})
 
CREATE (omih:Book {name: "Our Man in Havana"})
CREATE (ttsp:Book {name: "Tinker Tailor, Soldier, Spy"})
 
CREATE (gg)-[:WROTE]->(omih)
CREATE (jlc)-[:WROTE]->(ttsp)
CREATE (ian)-[:PURCHASED {date: "05-02-2011"}]->(ttsp)
CREATE (ian)-[:PURCHASED {date: "08-09-2011"}]->(omih)
CREATE (alan)-[:PURCHASED {date: "05-07-2014"}]->(ttsp)

There are a few extra brackets and the ‘CREATE’ keyword but we haven’t lost any of the fidelity of the domain and in my experience a non-technical / commercial person would be able to understand the query.

By contrast this article shows the steps we might take from a conceptual model describing employees, departments and unions to the eventual relational model.

If you don’t have the time to read through that, we start with this initial model…


…and end up with this model that can be stored in our relational database:


You’ll notice we’ve lost the relationship types and they’ve been replaced by 4 foreign keys that allow us to join the different tables/sets together.

In a graph model we’d have been able to stay much closer to the conceptual model and therefore closer to the language of the business.

I’m still exploring the world of data modelling and next up for me is to read Joe’s ‘Mastering Data Modeling’ book. I’m also curious how normal forms and data redundancy apply to graphs so I’ll be looking into that as well.

Thoughts welcome, as usual!

Categories: Programming

How to create a Value Stream Map

Xebia Blog - Mon, 10/06/2014 - 08:05

Value Stream Mapping (VSM) is a -very- useful tool to gain insight into the workflow of a process and can be used to identify both Value Adding Activities and Non Value Adding Activities in a process stream while providing handles for optimizing the process chain. The results of a VSM can be used for many occasions: from writing out a business case, to defining a prioritized list for optimizing processes within your organization, to pinpointing bottlenecks in your existing processes and gaining a common understanding of process-related issues.

When creating a VSM of your current software delivery process you quite possibly will be amazed by the amount of waste, and therefore the room for improvement, you might find. I challenge you to try this out within your own organization. It will leave you with a very powerful tool for explaining to your management the steps that need to change, as it will leave you with facts.

To quickly get you started, I wrote out some pointers on how to create a proper Value Stream Map.

In many organizations there is the tendency to ‘solely’ perform local optimizations on steps in the process (i.e. per Business Unit), while in reality the largest process optimizations can be gained by optimizing the areas which are in between the process steps and do not add any value to the customer at all: the Non Value Adding activities. Value Stream Mapping is a great tool for optimizing the complete process chain, not just the local steps.


The Example - Mapping a Software Delivery Process
Many example value streams found on the internet focus on selling a mortgage, packaging objects in a factory or some logistic process. The example I will be using focuses on a typical Software Delivery Process as we still see it today: the 'traditional' Software Delivery Process containing many manual steps.

You first need to map the 'as-is' process as you need this to form the baseline. This baseline provides you the required insight to remove steps from the process that do not add any value to your customer and therefore can be seen as pure waste to your organization.

It is important to write out the Value Stream as a group process (a workshop), where group-members represent people that are part of the value chain as it is today*. This is the only way to spot (hidden) activities and will provide a common understanding of the situation today. Apart from that, failure to execute the Value Stream Mapping activity as a group process will very likely reduce the acceptance rate at the end of the day. Never write out a VSM in isolation.

Value Stream mapping is 'a paper and pencil tool’ where you should ask participants to write out the stickies and help you form the map. You yourself will not write on stickies (okay, okay, maybe sometimes … but be careful not to do the work for the group). Writing out a process should take you about 4 to 6 hours, including discussions and the coffee breaks of course. So, now for the steps!

* Note that the example value stream is a simplified and fictional process based on the experience at several customers.

Step 0 Prepare
Make sure you have all materials available.

Here is a list:
- two 4-meter strips of brown paper
- plastic tape to attach the paper to the wall
- square stickies in multiple colors
- rectangular stickies in multiple colors
- small stickies in one or two colors
- lots of sharpies (people need to be able to pick up the pens)
- colored ‘dot’ stickies

What do you need? (the helpful colleague not depicted)


Step 1 & 2 define objectives and process steps
Make sure to work on one process at a time and start off with defining the customer objectives (the Voice of Customer, VoC). A common understanding of the VoC is important because at a later stage you will determine with the team which activities really add to this VoC and which steps do not. Quite often these objectives are defined in terms of Time, Cost and Quality. For our example, let’s say the customer would like to be able to deliver a new feature every hour, with a max cost of $1000 a feature and with zero defects.

First, write down the Voice of the Customer in the top right corner. Now, together with the group, determine all the actors (organizations / persons / teams) that are part of the current process and glue these actors as orange stickies to the brown paper.

Defining Voice of Customer and Process Steps


Step 3 Define activities performed within each process step
With the group, determine per process step the activities that take place. Underneath the orange stickies, add green stickies that describe the activities that take place in a given step.

Defining activities performed in each step


Step 4 Define Work in Progress (WiP)
Now, add pink stickies in between the steps, describing the number of features / requirements / objects / activities that are currently in process in between actors. This is referred to as WiP - Work in Progress. Whenever there is a high WiP level in between steps, you have identified a bottleneck causing the process 'flow' to stop.

On top of the pink WiP stickies containing particularly high WiP levels, add a small sticky indicating what the group thinks is causing the high WiP. For instance, a document has to be distributed via internal mail, or a wait is introduced for a bi-weekly meeting, or travel to another location is required. This information can later be used to optimize the process.

Note that in the workshop you should also take some time to find WiP within the activities themselves (this is not depicted in this example). Spend time on finding information about the causes of high WiP and add this as stickies to each activity.

Define work in process


Step 5 Identify rework
Rework is waste. Still, many times you'll see that a deliverable is returned to a previous step for reprocessing. Together with the group, determine where this happens and what is the cause of this rework. A nice addition is to also write out first-time-right levels.

Identify rework


Step 6 Add additional information
Spend some time adding additional comments for activities on the green stickies. Some activities might, for instance, not be optimized, not be easy to handle, or be considered obsolete from a group perspective. Mark these comments with blue stickies next to the activity at hand.

Add additional information


Step 7 Add Process time, Wait time and Lead time and determining Process Cycle Efficiency

Now, as we have the process more or less complete, we can start adding information related to timing. In this step you would like to determine the following information:

  • process time: the real amount of time that is required to perform a task without interruptions
  • lead time: the actual time that it takes for the activity to be completed (also known as elapse time)
  • wait time: time when no processing is done at all, for example when waiting on an 'event' like a bi-weekly meeting.

(Not in picture): for every activity on the green sticky, write down a small sticky with two numbers vertically aligned. The top-number reflects the process-time, (i.e. 40 hours). The bottom-number reflects the lead time (i.e. 120 hours).

(In picture): add a block diagram underneath the process, where timing information in the upper section represents total processing time for all activities and timing information in the lower section represents total lead time for all activities (just add up the timing information for the individual activities as described in the previous paragraph). Also add noticeable wait time in-between process steps. As a final step, to the right of this block diagram, add the totals.

Now that you have all information on the paper, the following  can be calculated:

  • Total Process Time - The total time required to actually work on activities if one could focus on the activity at hand.
  • Total Lead Time - The total time this process actually needs.
  • Project Cycle Efficiency (PCE): Total Process Time / Total Lead Time × 100%.

Add this information to the lower right corner of your brown paper. The numbers for this example are:

Total Process Time: add all numbers in the top section of the stickies: 424 hours
Process Lead Time (PLT): add all numbers in the lower section of the stickies plus the wait time in between steps: 1740 hours
Project Cycle Efficiency (PCE): 424 / 1740 × 100% ≈ 24%
Note that 24% is -very- high, which is caused by using a simplified example. Usually you’ll see a PCE of about 4 - 8% for a traditional process.

Add process, wait and lead times


Step 8 Identify Customer Value Add and Non Value Add activities
Now, categorize tasks into 2 types: tasks that add value to the customer (Customer Value Add, CVA) and tasks that do not add value to the customer (Non Value Add, NVA). The NVA you can again split into two categories: tasks that add Business Value (Business Value Add, BVA) and ‘Waste’. When optimizing a process, waste is to be eliminated completely as it does not add value to the customer nor the business as a whole. But also for the activities categorized as 'BVA', you have to ask yourself whether these activities add to the chain.

Mark CVA tasks with a green dot, BVA tasks with a blue dot and Waste with a red dot. Put the legend on the map for later reference.

When identifying CVA, NVA and BVA, force yourself to refer back to the Voice of Customer you jotted down in step 1 and think about who your customer is here. In this example, the customer is not the end user of the system, but the business. And it was the business that wanted Faster, Cheaper & Better. So when you start to tag each individual task, take some time to figure out which tasks actually contribute to these goals.

Determine Customer Value Add & Non Value Add

To give you some guidance on how you can approach tagging each task, I'll elaborate a bit on how I tagged the activities. Note again that this is just an example; within the workshop your team might tag things differently.

Items I tagged as CVA: coding, testing (unit, static, acceptance), execution of tests and configuration of monitoring all add value to the customer (the business). Why? Because all these items relate to a faster, better (high quality through testing and monitoring) and cheaper (fewer errors through higher code quality) delivery of code.

Items I tagged as BVA: documentation, configuration of environments, deployment of VMs and installation of MW are required to be able to deliver to the customer when using this (typical waterfall) software delivery process. (Note: I do not necessarily concur with this process.) :)

Items I tagged as pure Waste, not adding any value to the customer: items like getting approval, the process of getting funding (although probably required), discussing details and documenting results for later reference, or waiting for the quarterly release cycle. None of these items are required to deliver faster, cheaper or better, so in that respect they can be considered waste.
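
Once every task is tagged, it can also help to total the process time per category, so the discussion about what to eliminate is based on hours rather than gut feeling. Here is a minimal Groovy sketch; the activities, hours and tags below are purely illustrative and not taken from this example process.

// Illustrative only: sum process time per tag (CVA, BVA, Waste)
def tagged = [
    [activity: 'Coding',                        hours: 120, tag: 'CVA'],
    [activity: 'Writing documentation',         hours:  40, tag: 'BVA'],
    [activity: 'Getting approval and funding',  hours:  16, tag: 'Waste'],
    [activity: 'Waiting for quarterly release', hours:  80, tag: 'Waste']
]

def grandTotal = tagged.sum { it.hours }
tagged.groupBy { it.tag }.each { tag, items ->
    def hours = items.sum { it.hours }
    println "${tag}: ${hours} h (${Math.round(100d * hours / grandTotal)}% of total)"
}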

That's it (and step 9) - you've mapped your current process
So, that's about it! The Value Stream Map is now more or less complete and contains all the relevant information required to optimize the process in a next step. Step 9 here would be: take some time to write out the items/bottlenecks that are most important or easiest to address, and discuss possible solutions with your team. Focus on items that you tagged as either BVA or pure waste and think of alternatives to eliminate these steps. Put your customer central, not your process! Just dropping an activity altogether may seem somewhat radical, but sometimes good ideas just are! Note, by the way, that when you address one bottleneck, another bottleneck will pop up somewhere else. There will always be a bottleneck somewhere in the process, and therefore process optimization must be seen as a continuous effort.

A final tip: before you run a Value Stream Mapping workshop at a customer yourself, it might be a good idea to join a more experienced colleague first, just to get a grasp of the dynamics of such a workshop. The fact that all participants sit at the same table, outline the delivery process together and talk about it will allow you to come up with an optimized process that everyone buys into. But it still takes some effort to get the workshop going. Take your time, do not rush it.

For now, I hope you can use the steps above to identify the largest bottlenecks within your own organization and get going. In a next blog post, if there is sufficient interest, I will write about possible solutions to the bottlenecks in my example. If you have any ideas, just drop a line below so we can discuss! My aim would be to work towards a solution that caters for Continuous Delivery of software.

Michiel Sens.

SPaMCAST 310 – Mike Burrows, Kanban from the Inside

Software Process and Measurement Cast - Sun, 10/05/2014 - 22:00

Software Process and Measurement Cast 310 features our interview with Mike Burrows. This is Mike’s second visit to the Software Process and Measurement Cast.  In this visit we discussed his new book, Kanban from the Inside (Kindle).  The book lays out why Kanban is a management method built on a set of values rather than just a set of techniques. Mike explains why Kanban leads to better outcomes for projects, managers, organizations and customers!

Mike is the UK Director and Principal Consultant at David J Anderson and Associates. In a career spanning the aerospace, banking, energy and government sectors, Mike has been a global development manager, IT director and software developer. He speaks regularly at Lean/Kanban-related events in several countries, and his book Kanban from the Inside (Kindle) was published in September.

Mike’s email is mike@djaa.com
Twitter: https://twitter.com/asplake and @KanbanInside
Blog is http://positiveincline.com/index.php/about/UK

Kanban conference: http://lkuk.leankanban.com/
Kanban conference series: http://conf.leankanban.com/

Next

SPaMCAST 311 features our essay on backlog grooming. Backlog grooming is an important technique that can be used in any Agile or Lean methodology. At one point the need for backlog grooming was debated; however, most practitioners now find the practice useful. The simplest definition of backlog grooming is the preparation of the user stories or requirements to ensure they are ready to be worked on. The act of grooming and preparation can cover a wide range of specific activities and can be performed at any time. In the next podcast we get into the nuts and bolts of making your backlog better!

Upcoming Events

DCG Webinars:

Agile Risk Management – It Is Still Important! October 24, 2014, 11:30 EDT. Has the adoption of Agile techniques magically erased risk from software projects? Or have we just changed how we recognize and manage risk? Or, more frighteningly, by changing the project environment through adopting Agile techniques, have we tricked ourselves into thinking that risk has been abolished?

 

Upcoming Conferences:

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI's mission is to pull together the expertise and educational efforts of the world's leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes towards bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chematuri and myself and published by J. Ross Publishing, has received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Categories: Process Management

Componentize the web, with Polycasts!

Google Code Blog - Fri, 10/03/2014 - 18:58
Today at Google, we’re excited to announce the launch of Polycasts, a new video series to get developers up and running with Polymer and Web Components.

Web Components usher in a new era of web development, allowing developers to create encapsulated, interoperable elements that extend HTML itself. Built atop these new standards, Polymer makes it easier and faster to build Web Components, while also adding polyfill support so they work across all modern browsers.

Because Polymer and Web Components are such big changes for the platform, there’s a lot to learn, and it can be easy to get lost in the complexity. For that reason, we created Polycasts.

Polycasts are designed to be bite-sized and to teach one concept at a time. Along the way we plan to highlight best practices not only for working with Polymer, but also for using the DevTools to make sure your code is performant.

We’ll be releasing new videos often over the coming weeks, initially focusing on core elements and layout. These episodes will also be embedded throughout the Polymer site, helping to augment the existing documentation. Because there’s so much to cover in the Polymer universe, we want to hear from you! What would you like to see? Feel free to shoot a tweet to @rob_dodson, if you have an idea for a show, and be sure to subscribe to our YouTube channel so you’re notified when new episodes are released.

Posted by Rob Dodson, Developer Advocate
Categories: Programming

Integrating Geb with FitNesse using the Groovy ConfigSlurper

Xebia Blog - Fri, 10/03/2014 - 18:01

We've been playing around with Geb for a while now, and writing tests using WebDriver and Groovy has been a delight! Geb integrates well with JUnit, TestNG, Spock, and Cucumber. All that is left to do is integrate it with FitNesse ... or not :-).

Setup Gradle and Dependencies

First we grab the Gradle FitNesse classpath builder from Arjan Molenaar.
Add the following dependencies to the Gradle build file:

compile 'org.codehaus.groovy:groovy-all:2.3.7'
compile 'org.gebish:geb-core:0.9.3'
compile 'org.seleniumhq.selenium:selenium-java:2.43.1'
Configure different drivers with the ConfigSlurper

Geb provides a configuration mechanism using the Groovy ConfigSlurper, which is perfect for environment-sensitive configuration. Geb uses the geb.env system property to determine which environment to use, so we use the ConfigSlurper to configure different drivers.

import org.openqa.selenium.chrome.ChromeDriver
import org.openqa.selenium.firefox.FirefoxDriver

driver = { new FirefoxDriver() }

environments {
  chrome {
    driver = { new ChromeDriver() }
  }
}
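
With this configuration in place, the test code itself never mentions a driver class; it simply picks up whatever driver the ConfigSlurper resolves for the current geb.env. A minimal stand-alone Geb script might look like this (the URL is just a placeholder):

import geb.Browser

// The driver comes from the ConfigSlurper configuration above, selected via the geb.env system property.
Browser.drive {
    go "http://example.com"   // placeholder URL
    assert title              // fails if the page has no title
}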
FitNesse using the ConfigSlurper

We need to tweak the gradle build script to let FitNesse play nice with the ConfigSlurper. So we pass the geb.env system property as a JVM argument. Look for the gradle task "wiki" in the gradle build script and add the following lines.

def gebEnv = System.getProperty("geb.env") ?: "firefox"
jvmArgs "-Dgeb.env=${gebEnv}"

Since FitNesse spins up a separate 'service' process when you execute a test, we need to pass the geb.env system property into the COMMAND_PATTERN of FitNesse. That service needs the geb.env system property to let Geb know which environment to use. Put the following lines in the FitNesse page.

!define COMMAND_PATTERN {java -Dgeb.env=${geb.env} -cp %p %m}

Now you can control the Geb environment by specifying it on the following command line.

gradle wiki -Dgeb.env=chrome

The gradle build script will pass the geb.env system property as JVM argument when FitNesse starts up. And the COMMAND_PATTERN will pass it to the test runner service.

Want to see it in action? Sources can be found here.

Preview: DataGrid for Xamarin.Forms

Eric.Weblog() - Eric Sink - Fri, 10/03/2014 - 17:00
Note

Update, 11 November 2014

Because the approach of this code prevents it from hosting Xamarin.Forms.View objects within cells, and because of current difficulties in getting a truly high-performance cross-platform drawing API on all three Xamarin.Forms platforms (insert unhappy face aimed at Windows Phone here), I have stopped work on this project for the time being.

However, I am currently working on another DataGrid implementation, implemented with a different approach. Sorry, nothing from my second attempt at a DataGrid has been released publicly yet. Watch my blog and Twitter for announcements.

What is it?

It's a Xamarin.Forms grid control for displaying data in rows and columns.

Where's the code?

https://github.com/ericsink

Is this open source?

Yes. Apache License v2.

Why are you writing a grid?

Because I see an unmet need. Xamarin.Forms needs a good way to display row/column data. And it needs to be capable of handling lots (millions) of cells. And it needs to be really, really fast.

I'm one of the founders of Zumero. We're all about mobile apps for businesses dealing with data. Many of our customers are using Xamarin, and we want to be able to recommend Xamarin.Forms to them. A DataGrid is one of the pieces we need.

What are the features?
  • Scrolling, both vertical and horizontal
  • Either scroll range can be fixed, infinite, or wraparound.
  • Optional headers, top, left, right, bottom.
  • Ample flexibility in connecting to different data sources
  • Column widths can be fixed width or variable. Same for row heights.
Is this ready for use?

No, this code is still pretty rough.

Is this ready to play with and complain about?

Yes, hence its presence on GitHub. :-)

Open dg.sln in Xamarin Studio and it should build. There's a demo app (Android and iOS) you can use to try it out. The WP8 code isn't there yet, but it'll be moving in soon.

Is there a NuGet package?

Not yet.

Is the API frozen yet?

No. In fact, I'm still considering some API changes that could be described as major.

What platforms does it support?

Android and iOS are currently in decent shape. Windows Phone is in progress. (The header row was bashful and refused to cooperate for the WP8 screenshot.)

What will the API be like?

I don't know yet. In fact, I'm tempted to quibble when you say "the API", because you're speaking of it in the singular, and I think I will end up with more than one. :-)

Earlier, I described this thing as "a grid control", but it would be more accurate right now to describe it as a framework for building grid controls.

I have implemented some sample grids, mostly just to demonstrate the framework's capabilities and to experiment with what kinds of user-facing APIs would be most friendly. Examples include:

  • A grid that gets its data from an IList, where the properties of objects of class T become columns.
  • A data connector that gets its data from ReactiveList (uses ReactiveUI).
  • A grid that gets its data from a sqlite3_stmt handle (uses my SQLitePCL.raw package).
  • A grid that just draws shapes.
  • A grid that draws nothing but a cell border, but the farther you scroll, the bigger the cells get.
  • A 2x2 grid that scrolls forever and just repeats its four cells over and over.
How is this different from the layouts built into Xamarin.Forms?

This control is not a "Layout", in the Xamarin.Forms sense. It is not a subclass of Xamarin.Forms.Layout. You can't add child views to it.

If you need something to help arrange the visual elements of your UI on the screen, DataGrid is not what you want. Just use one of the Layouts. That's what they're for.

But maybe you need to display a lot of data. Maybe you have 200,000 rows. Maybe you don't know how many rows you have and you won't know until you read the last one. Maybe you have lots of columns too, so you need the ability to scroll in both directions. Maybe you need one or more header rows at the top which sync-scroll horizontally but are frozen vertically. And so on.

Those kinds of use cases are what DataGrid is aimed at.

What drawing API are you using?

Mostly I'm working with a hacked-up copy of Frank Krueger's CrossGraphics library, modified in a variety of ways.

The biggest piece of the code (in DataGrid.Core) actually doesn't care about the graphics API. That assembly contains generic classes which accept <TGraphics> as a type parameter. (As a proof of concept demo, I've got an iOS implementation built on CGContext which doesn't actually depend on Xamarin.Forms at all.)

So I can't add ANY child views to your DataGrid control?

Currently, no. I would like to add this capability in the future.

(At the moment, I'm pretty sure it is impossible to build a layout control for Xamarin.Forms unless you're a Xamarin employee. There seem to be a few important things that are not public.)

How fast is it?

Very. On my Nexus 7, a debug build of DataGrid can easily scroll a full screen of text cells at 60 frames/second. Performance on iOS is similar.

How much code is cross-platform?

Not counting the demo app or my copy of CrossGraphics, the following table shows lines of code in each combination of dependencies:

                Portable    iOS-specific    Android-specific
<TGraphics>     2,741       141             174
Xamarin.Forms   633         92              81

Xamarin.Forms is [going to be] a wonderful foundation for cross-platform mobile apps.

Can I use this from Objective-C or Java?

No. It's all C#.

Why are you naming_things_with_underscores?

Sorry about that. It's a habit from my Unix days that I keep slipping back into. I'll clean up my mess soon.

What's up with IChanged? Why not IObservable<T>?

Er, yeah, remember above when I said I'm still considering some major changes? That's one of them.

Does this in any way depend on your Zumero for SQL Server product?

No, DataGrid is a standalone open source library.

But it's rather likely that our commercial products will in the future depend on DataGrid.

 

Promises in the Google APIs JavaScript Client Library

Google Code Blog - Thu, 10/02/2014 - 21:28
The JavaScript Client Library for Google APIs is now Promises/A+-conformant. Requests made using gapi.client.request, gapi.client.newBatch, and generated API methods like gapi.client.plus.people.search are also promises. You can pass response and error handlers to their then methods.

Requests can be made using the then syntax provided by Promises:
gapi.client.load('plus', 'v1').then(function() {
  gapi.client.plus.people.search({query: 'John'}).then(function(res) {
    console.log(res.result.items);
  }, function(err) {
    console.error(err.result);
  });
});
All fulfilled responses and rejected application errors passed to the handlers will have these fields:
{
  result: *,  // JSON-parsed body, or boolean false if not JSON-parseable
  body: string,
  headers: (Object.<string,string>),
  status: (?number),
  statusText: (?string)
}
The promises can also be chained, making your code more readable:
gapi.client.youtube.playlistItems.list({
  playlistId: 'PLOU2XLYxmsIIwGK7v7jg3gQvIAWJzdat_',
  part: 'snippet'
}).then(function(res) {
  return res.result.items.map(function(item) {
    return item.snippet.resourceId.videoId;
  });
}).then(function(videoIds) {
  return gapi.client.youtube.videos.list({
    id: videoIds.join(','),
    part: 'snippet,contentDetails'
  });
}).then(function(res) {
  res.result.items.forEach(function(item) {
    console.log(item);
  });
}, function(err) {
  console.error(err.result);
});
Using promises makes it easy to handle the results of API requests and offers elegant error propagation.

To learn more about promises in the library and about converting from callbacks to promises, visit Using Promises and check out our latest API reference.

Posted by Jane Park, Software Engineer
Categories: Programming