
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Azure: SQL Databases, API Management, Media Services, Websites, Role Based Access Control and More

ScottGu's Blog - Scott Guthrie - Fri, 09/12/2014 - 07:14

This week we released a major set of updates to Microsoft Azure. This week’s updates include:

  • SQL Databases: General Availability of Azure SQL Database Service Tiers
  • API Management: General Availability of our API Management Service
  • Media Services: Live Streaming, Content Protection, Faster and Cost Effective Encoding, and Media Indexer
  • Web Sites: Virtual Network integration, new scalable CMS with WordPress and updates to Web Site Backup in the Preview Portal
  • Role-based Access Control: Preview release of role-based access control for Azure Management operations
  • Alerting: General Availability of Azure Alerting and new alerts on events

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

SQL Databases: General Availability of Azure SQL Database Service Tiers

I’m happy to announce the General Availability of our new Azure SQL Database service tiers - Basic, Standard, and Premium.  The SQL Database service within Azure provides a compelling database-as-a-service offering that enables you to quickly innovate & stand up and run SQL databases without having to manage or operate VMs or infrastructure.

Today’s SQL Database Service Tiers all come with a 99.99% SLA, and databases can now grow up to 500GB in size.

Each SQL Database tier now guarantees a consistent performance level that you can depend on within your applications – avoiding the need to worry about “noisy neighbors” who might impact your performance from time to time.

Built-in point-in-time restore support now provides you with the ability to automatically re-create databases at a certain point in time (giving you much more backup flexibility and allowing you to restore to exactly the point before you accidentally did something bad to your data).

Built-in auditing support enables you to gain insight into events and changes that occur with the databases you host.

Built-in active geo-replication support, available with the premium tier, enables you to create up to 4 readable secondary databases in any Azure region.  When active geo-replication is enabled, we will ensure that all transactions committed to the database in your primary region are continuously replicated to the databases in the other regions as well:

[image]

One of the primary benefits of active geo-replication is that it provides application control over disaster recovery at a database level.  Having cross-region redundancy enables your applications to recover in the event of a disaster (e.g. a natural disaster).  The new active geo-replication support enables you to initiate/control any failovers – allowing you to shift the primary database to any of your secondary regions:

[image]

This provides a robust business continuity offering, and enables you to run mission-critical solutions in the cloud with confidence.

More Flexible Pricing

SQL Databases are now billed on a per-hour basis – allowing you to quickly create and tear down databases, and dynamically scale up or down databases even more cost effectively.

The Basic Tier supports databases up to 2GB in size and costs $4.99 for a full month of use.  The Standard Tier supports 250GB databases and now starts at $15/month (there are also higher performance Standard tiers at $30/month and $75/month). The Premium Tier supports 500GB databases as well as the active geo-replication feature and now starts at $465/month.
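
To make per-hour billing concrete, here is a back-of-the-envelope proration sketch (my own illustration, not an official pricing tool; it assumes a 31-day month and the entry-level monthly prices quoted above):

# Illustrative only: prorate the monthly tier prices quoted above to an
# hourly rate, assuming a 31-day (744-hour) month.
MONTHLY_PRICES = {"Basic": 4.99, "Standard": 15.00, "Premium": 465.00}
HOURS_PER_MONTH = 31 * 24  # 744

def cost_for_hours(tier, hours):
    return MONTHLY_PRICES[tier] / HOURS_PER_MONTH * hours

# A Standard database created for a two-day load test and then torn down:
print("$%.2f" % cost_for_hours("Standard", 48))  # ~$0.97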

The below table provides a quick look at the different tiers and functionality:

[image]

This page provides more details on how to think about DTU performance with each of the above tiers, and provides benchmark details on the number of transactions supported by each of the above service tiers and performance levels.

During the preview, we've heard from some ISVs that have a large number of databases with variable performance demands that they need the flexibility to share DTU performance resources across multiple databases, as opposed to managing tiers for each database individually.  For example, some SaaS ISVs may have a separate SQL database for each customer, and as the activity of each database varies, they want to manage a pool of resources with a defined budget across these customer databases.  We are working to enable this scenario within the new service tiers in a future service update. If you are an ISV with a similar scenario, please click here to sign up to learn more.

Learn more about SQL Databases on Azure here.

API Management Service: General Availability Release

I’m excited to announce the General Availability of the Azure API Management Service.

In my last post I discussed how API Management enables customers to securely publish APIs to developers and accelerate partner adoption.  These APIs can be used from mobile and client applications (on any device) as well as other cloud and service based applications.

The API Management service supports the ability to take any APIs you already have (either in the cloud or on-premises) and publish them for others to use.  It enables you to:

  • Throttle, rate limit and quota your APIs (a conceptual sketch of rate limiting follows this list)
  • Gain analytic insights on how your APIs are being used and by whom
  • Secure your APIs using OAuth or key-based access
  • Track the health of your APIs and quickly identify errors
  • Easily expose a developer portal for your APIs that provides documentation and test experiences to developers who want to use your APIs
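
To make the first bullet concrete, rate limiting is commonly implemented with a token bucket. The sketch below illustrates the idea only; it is not how the API Management service itself is implemented, and the numbers are arbitrary:

import time

# Conceptual token-bucket rate limiter (illustration only, not the
# API Management implementation). Each request consumes one token;
# tokens refill continuously at a fixed rate up to a burst capacity.
class TokenBucket:
    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit; an API gateway would answer HTTP 429

bucket = TokenBucket(rate_per_sec=10, burst=20)  # 10 requests/sec, bursts of 20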

Today's General Availability provides a formal SLA for Standard tier services.  We also have a Developer tier of the service that you can use, starting at just $49 per month.

OAuth support in the Developer Portal

The API Management service provides a developer console that enables a great on-boarding and interactive learning experience for developers who want to use your APIs.  The developer console enables you to easily expose documentation as well as enable developers to try/test your APIs.

With this week's GA release we are also adding support that enables API publishers to register their OAuth Authorization Servers for use in the console, which in turn allows developers to sign in with their own login credentials when interacting with your API - a critical feature for any API that supports OAuth. All of the standard authorization grant types are supported, plus scopes and default scopes.

[image]
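
For background, the sign-in flow described above is the standard OAuth 2 authorization code grant. A minimal sketch of the token-exchange step a client performs (every endpoint and credential below is a placeholder, not an API Management specific):

import requests

# Standard OAuth 2 authorization-code exchange (RFC 6749). All values
# below are placeholders for whatever authorization server the API
# publisher has registered.
TOKEN_URL = "https://auth.example.com/oauth2/token"

def exchange_code_for_token(code):
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,  # the code returned after the user signs in
        "redirect_uri": "https://developer-portal.example.com/callback",
        "client_id": "example-client-id",
        "client_secret": "example-client-secret",
    })
    resp.raise_for_status()
    # The access token is then sent as a Bearer token on API calls.
    return resp.json()["access_token"]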

For more details on how to enable OAuth 2 support with API Management and integration in the new developer portal, check out this tutorial.

Click here to learn more about the API Management service and try it out for free.

Media Services: Live Streaming, DRM, Faster Cost Effective Encoding, and Media Indexer

This week we are excited to announce the public preview of Live Streaming and Content Protection support with Azure Media Services.

The same Internet-scale streaming solution that leading international broadcasters used to live stream the 2014 Winter Olympic Games and 2014 FIFA World Cup to tens of millions of customers globally is now available in public preview to all Azure customers. This means you can now stream live events of any size with the same level of scalability, uptime, and reliability that was available to the Olympics and World Cup.

DRM Content Protection

This week Azure Media Services is also introducing a new Content Protection offering which features both static and dynamic encryption with first-party PlayReady license delivery and an AES 128-bit key delivery service.  This makes it easy to DRM-protect both your live and pre-recorded video assets – and have them be available for users to easily watch on any device or platform (Windows, Mac, iOS, Android and more).

Faster and More Cost Effective Media Encoding

This week, we are also introducing faster media encoding speeds and more cost-effective billing. Our enhanced Azure Media Encoder is designed for premium media encoding and is billed based on output GBs. Our previous encoder was billed on both input + output GBs, so the shift to output only billing will result in a substantial price reduction for all of our customers.

To help you further optimize your encoding workflows, we're introducing Basic, Standard, and Premium Encoding Reserved Units, which give you more flexibility and allow you to tailor the encoding capability you pay for to the needs of your specific workflows.

Media Indexer

Additionally, I'm happy to announce the General Availability of Azure Media Indexer, a powerful, market-differentiated content extraction service which can be used to enhance the searchability of audio and video files.  With Media Indexer you can automatically analyze your media files and index the audio and video content in them. You can learn more about it here.

More Media Partners

I’m also pleased to announce the addition this week of several media workflow partners and client players to our existing large set of media partners:

  • Azure Media Services and Telestream's Wirecast are now fully integrated, including a built-in destination that makes it quick and easy to send content from Wirecast's live streaming production software to Azure.
  • Similarly, Newtek’s Tricaster has also been integrated into the Azure platform, enabling customers to combine the high production value of Tricaster with the scalability and reliability of Azure Media Services.
  • Cires21 and Azure Media have paired up to help make monitoring the health of your live channels simple and easy, and the widely-used JW player is now fully integrated with Azure to enable you to quickly build video playback experiences across virtually all platforms.
Learn More

Visit the Azure Media Services site for more information and to get started for free.

Websites: Virtual Network Integration, new Scalable CMS with WordPress

This week we’ve also released a number of great updates to our Azure Websites service.

Virtual Network Integration

Starting this week you can now integrate your Azure Websites with Azure Virtual Networks. This support enables your Websites to access resources attached to your virtual networks.  For example: this means you can now have a Website directly connect to a database hosted in a non-public VM on a virtual network.  If your Virtual Network is connected to your on-premises network (using a Site-to-Site software VPN or ExpressRoute dedicated fiber VPN) you can also now have your Website connect to resources in your on-premises network as well.

The new Virtual Network support enables both TCP and UDP protocols and will work with your VNET DNS. Hybrid Connections and Virtual Network are compatible such that you can also mix both in the same Website.  The new virtual network support for Web Sites is being released this week in preview.  Standard web hosting plans can have up to 5 virtual networks enabled. A website can only be connected to one virtual network at a time but there is no restriction on the number of websites that can be connected to a virtual network.
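
Once the integration is configured, a quick sanity check is a plain TCP connect from your website code to the private address of a VM on the VNET. A sketch (the IP address and port below are placeholders for your own resources, e.g. a database VM listening on 1433):

import socket

# Sketch: verify that website code can reach a non-public VM over the
# virtual network. The address and port are placeholders.
def can_reach(host, port, timeout=5):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(can_reach("10.0.1.4", 1433))  # a private IP on the VNET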

You can configure a Website to use a Virtual Network using the new Preview Azure Portal (http://portal.azure.com).  Click the "Virtual Network" tile in your website to bring up a virtual network blade that you can use to either create a new virtual network or attach to an existing one you already have:

[image]

Note that an Azure Website requires that your Virtual Network has a configured gateway and Point-to-Site enabled. The option will remain grayed out in the UI above until you have enabled this.

Scalable CMS with WordPress

This week we also released support for a Scalable CMS solution with WordPress running on Azure Websites.  Scalable CMS with WordPress provides the fastest way to build an optimized and hassle-free WordPress Website. It is architected so that your WordPress site loads fast and can support millions of page views a month, and you can easily scale up or scale out as your traffic increases.

It is pre-configured to use Azure Storage, which can be used to store your site’s media library content, and can be easily configured to use the Azure CDN.  Every Scalable CMS site comes with auto-scale, staged publishing, SSL, custom domains, Webjobs, and backup and restore features of Azure Websites enabled. Scalable WordPress also allows you to use Jetpack to supercharge your WordPress site with powerful features available to WordPress.com users.

You can now easily deploy Scalable CMS with WordPress solutions on Azure via the Azure Gallery integrated within the new Azure Preview Portal (http://portal.azure.com).  When you select it within the portal it will walk you through automatically setting up and deploying a complete solution on Azure:

[image]

Scalable WordPress is ideal for Web developers, creative agencies, businesses and enterprises wanting a turn-key solution that maximizes performance of running WordPress on Azure Websites.  It's fast, simple and secure WordPress hosting on Azure Websites.

Updates to Website Backup

This week we also updated our built-in Backup feature within Azure Websites with a number of nice enhancements.  Starting today, you can now:

  • Choose the exact destination of your backups, including the specific Storage account and blob container you wish to store your backups within.
  • Choose to back up SQL databases or MySQL databases that are declared in the connection strings of the website.
  • On the restore side, you can now restore to both a new site, and to a deployment slot on a site. This makes it possible to verify your backup before you make it live.

These new capabilities make it easier than ever to have a full history of your website and its associated data.

Security: Role Based Access Control for Management of Azure

As organizations move more and more of their workloads to Azure, one of the most requested features has been the ability to control which cloud resources different employees can access and what actions they can perform on those resources.

Today, I’m excited to announce the preview release of Role Based Access Control (RBAC) support in the Azure platform. RBAC is now available in the Azure preview portal and can be used to control access in the portal or access to the Azure Resource Manager APIs. You can use this support to limit the access of users and groups by assigning them roles on Azure resources. Highlights include:

  • A subscription is no longer the access management boundary in Azure. In April, we introduced Resource Groups, a container to group resources that share a lifecycle. Now, you can grant users access on a resource group as well as on individual resources like specific Websites or VMs.
  • You can now grant access to both users and groups. RBAC is based on Azure Active Directory, so if your organization already uses groups in Azure Active Directory or Windows Server Active Directory for access management, you will be able to manage access to Azure the same way.

Below are some more details on how this works and can be enabled.

Azure Active Directory

Azure Active Directory is our directory service in the cloud.  You can create organizational tenants within Azure Active Directory and define users and groups within it – without having to have any existing Active Directory setup on-premises.

Alternatively, you can also sync (or federate) users and groups from your existing on-premises Active Directory to Azure Active Directory, and have your existing users and groups automatically be available for use in the cloud with Azure, Office 365, as well as over 2000 other SaaS based applications:

[image]

All users that access your Azure subscriptions are now present in the Azure Active Directory with which the subscription is associated. This enables you to manage what they can do as well as revoke their access to all Azure subscriptions by disabling their account in the directory.

Role Permissions

In this first preview we are pre-defining three built-in Azure roles that give you a choice of granting restricted access:

  • An Owner can perform all management operations for a resource and its child resources, including access management.
  • A Contributor can perform all management operations for a resource, including creating and deleting resources. A Contributor cannot grant access to others.
  • A Reader has read-only access to a resource and its child resources. A Reader cannot read secrets.
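
A toy model of how these three roles nest (an illustration of the rules just described, not the actual Azure implementation):

# Toy model of the three built-in roles described above (illustrative).
PERMISSIONS = {
    "Reader":      {"read"},
    "Contributor": {"read", "create", "delete", "modify"},
    "Owner":       {"read", "create", "delete", "modify", "grant_access"},
}

def is_allowed(role, action):
    return action in PERMISSIONS.get(role, set())

assert is_allowed("Contributor", "delete")
assert not is_allowed("Contributor", "grant_access")  # cannot grant access to others
assert not is_allowed("Reader", "modify")             # read-only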

In the RBAC model, users who have been configured to be the service administrator and co-administrators of an Azure subscription are mapped as belonging to the Owners role of the subscription. Their access to both the current and preview management portals remains unchanged.

Additional users and groups that you then assign to the new RBAC roles will only have those permissions, and also will only be able to manage Azure resources using the new Azure preview portal and Azure Resource Manager APIs.  RBAC is not supported in the current Azure management portal or via the older management APIs (since neither was built with role-based security in mind).

Restricting Access based on Role Based Permissions

Let’s assume that your team is using Azure for development, as well as to host the production instance of your application. When doing this you might want to separate the resources employed in development and testing from the production resources using Resource Groups.

You might want to allow everyone in your team to have a read-only view of all resources in your Azure subscription – including the ability to read and review production analytics data. You might then want to only allow certain users to have write/contributor access to the production resources.  Let’s look at how to set this up:

Step 1: Setting up Roles at the Subscription Level

We’ll begin by mapping some users to roles at the subscription level.  These will then by default be inherited by all resources and resource groups within our Azure subscription.

To set this up, open the Billing blade within the Preview Azure Portal (http://portal.azure.com), and within the Billing blade select the Azure subscription that you wish to set up roles for:

[image]

Then scroll down within the blade of the subscription you opened, and locate the Roles tile within it:

[image]

Clicking the Roles tile will bring up a blade that lists the pre-defined roles we provide by default (Owner, Contributor, Reader).  You can click any of the roles to bring up a list of the users assigned to the role.  Clicking the Add button will then allow you to search your Azure Active Directory and add either a user or group to that role.

Below I’ve opened up the default Reader role and added David and Fred to it:

[image]

Once we do this, David and Fred will be able to log into the Preview Azure Portal and will have read-only access to the resources contained within our subscription.  They will not be able to make any changes, though, nor see secrets (passwords, etc.).

Note that in addition to adding users and groups from within your directory, you can also use the Invite button above to invite users who are not currently part of your directory, but who have a Microsoft Account (e.g. scott@outlook.com), to also be mapped into a role.

Step 2: Setting up Roles at the Resource Level

Once you’ve defined the default role mappings at the subscription level, they will by default apply to all resources and resource groups contained within it. 

If you wish to scope permissions even further at just an individual resource (e.g. a VM or Website or Database) or at a resource group level (e.g. an entire application and all resources within it), you can also open up the individual resource/resource-group blade and use the Roles tile within it to further specify permissions.

For example, earlier we granted David reader role access to all resources within our Azure subscription.  Let’s now grant him contributor role access to just an individual VM within the subscription.  Once we do this he’ll be able to stop/start the VM as well as make changes to it.

To enable this, I’ve opened up the blade for the VM below.  I’ve then scrolled down the blade and found the Roles tile within the VM.  Clicking the contributor role within the Roles tile will then bring up a blade that allows me to configure which users will be contributors (meaning have read and modify permissions) for this particular VM.  Notice below how I’ve added David to this:

[image]

Using this resource/resource-group level approach enables you to have really fine-grained access control permissions on your resources.

Command Line and API Access for Azure Role Based Access Control

The enforcement of the access policies that you configure using RBAC is done by the Azure Resource Manager APIs.  Both the Azure preview portal as well as the command line tools we ship use the Resource Manager APIs to execute management operations. This ensures that access is consistently enforced regardless of what tools are used to manage Azure resources.

With this week's release we've included a number of new PowerShell APIs that enable you to automate setting up as well as controlling role based access.

Learn More about Role Based Access

Today's Role Based Access Control Preview provides a lot more flexibility in how you manage the security of your Azure resources.  It is easy to set up and configure.  And because it integrates with Azure Active Directory, you can easily sync/federate it to also integrate with the existing Active Directory configuration you might already have in your on-premises environment.

Getting started with the new Azure Role Based Access Control support is as simple as assigning the appropriate users and groups to roles on your Azure subscription or individual resources. You can read more detailed information on the concepts and capabilities of RBAC here. Your feedback on the preview features is critical for all improvements and new capabilities coming in this space, so please try out the new features and provide us your feedback.

Alerts: General Availability of Azure Alerting and new Alerts on Events support

I'm excited to announce the release of Azure Alerting to General Availability. Azure Alerting supports the ability to create alert thresholds on metrics that you are interested in, and then have Azure automatically send an email notification when that threshold is crossed. As part of the general availability release, we are removing the 10 alert rule cap per subscription.
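
Conceptually, an alert rule is just a threshold test over a metric stream plus a notification. A sketch of that idea (the names and the five-sample window are illustrative, not the Azure implementation):

# Conceptual sketch of a metric alert rule: threshold + notification.
def send_email(message):
    print(message)  # stand-in for the notification email Azure sends

def evaluate_alert(metric_values, threshold, window=5):
    # Fire when the average over the last `window` samples crosses the threshold.
    recent = metric_values[-window:]
    average = sum(recent) / len(recent)
    if average > threshold:
        send_email("Alert: average %.1f exceeded threshold %s" % (average, threshold))

cpu_percent = [40, 55, 72, 88, 91, 95]
evaluate_alert(cpu_percent, threshold=80)  # fires: the average of the last 5 is 80.2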

Alerts are available in the full Azure portal by clicking Management Services in the left navigation bar:

[image]

Also, alerting is available on most of the resources in the Azure preview portal:

[image]

You can create alerts on metrics from 8 different services today (and we are adding more all the time):

  • Cloud Services
  • Virtual Machines
  • Websites
  • Web hosting plans
  • Storage accounts
  • SQL databases
  • Redis Cache
  • DocumentDB accounts

In addition to general availability for alerts on metrics, we are also previewing the ability to create alerts on operational events. This means you can get an email if someone stops your website, if your virtual machines are deleted, or if your Azure Resource Manager template deployment fails. Like alerts on metrics, you can route these alerts to the service and co-administrators, or to a custom email address you provide.  You can configure these events on a resource in the Azure Preview Portal.  We have enabled this within the Portal for Websites – we'll be extending it to all resources in the future.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don't already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Community of Practice: Killers

A community of interest needs a common focus!

Not all communities of practice (COPs) are successful, or at least don't stay successful.  While there can be many issues that cause a COP to fail, there are three very typical problems that kill off COPs.

  1. Poor leadership – All groups have a leader that exerts influence on the direction of the group. The best COP leaders I have observed (best being defined in terms of the health of the COP) are servant leaders.  In a community of practice, the servant leader will work to empower and serve the community. Empowerment of the COP is reflected in removing impediments and coaching the team so it meets its goals of connection, encouragement and sharing.  In COPs with a poor leader, the goals of the group generally shift towards control of the message or the aggrandizement of a specific group or person.  Earlier in my career I was involved with a local SPIN (software process improvement network) group that had existed for several years.  The SPIN group was a community of practice that drew members from 20-30 companies in my area. At one point a leader emerged whose goal was to generate sales leads for himself.  Membership fell precipitously before a new leader emerged and re-organized the remnants.
  2. Lack of a common interest – A group put together without a common interest reminds me of sitting in the back of a station wagon with my four siblings on long Sunday drives in the country.  Not exactly pointless, but to be avoided if possible.  A community of practice without a common area of interest isn't a community of practice.
  3. Natural life cycle – Ideas and groups have a natural life cycle.  When a COP's purpose passes or fades, the group should either be re-purposed or shut down. As an example, the SPIN mentioned above reached its zenith during the heyday of the CMMI and faded as that framework became less popular. I have often observed that as a COP's original purpose wanes, the group seeks to preserve itself by finding a new purpose. Re-purposing often fails because the passion the group had for the original concept does not transfer.  Re-purposing works best when the ideas being pursued are a natural evolutionary path.  I recently observed a Scrum Master COP that was in transition. Scrum was institutionalized within the organization and there was a general feeling that the group had run its course unless something was done to energize the group. The group decided to begin exploring the Scaled Agile Framework as a potential extension of their common interest in Agile project and program management.

In general, a community of practice is not an institution that lasts forever. Ideas and groups follow a natural life cycle.  COPs generally hit their zenith when members get the most benefit from sharing and connecting.  The amount of benefit that a member of the community perceives they get from participation is related to the passion they have for the group. As ideas and concepts become mainstream or begin to fade, the need for a COP can also fade. As passion around the idea fades, leaders can emerge that have motives other than serving the community, which hastens the breakdown of the COP. When the need for a COP begins to fade, it is generally time to disband or re-purpose the COP.


Categories: Process Management

Cost, Value & Investment: How Much Will This Project Cost? Part 2

This post is continued from Cost, Value & Investment: How Much Will This Project Cost, Part 1

We’ve established that you need to know how much this project will cost. I’m assuming you have more than a small project.

If you have to estimate a project, please read the series starting at Estimating the Unknown: Dates or Budget, Part 1. Or, you could get Essays on Estimation. I’m in the midst of fixing it so it reads like a real book. I have more tips on estimation there.

For a program, each team does this for its own ranked backlog:

  • Take every item on the backlog and roadmap, and use whatever relative sizing approach you use now to estimate. You want to use relative sizing, because you need to estimate everything on the backlog.
  • Tip: If each item on the backlog/roadmap is about a team-day or smaller, this is easy. The farther out you go, the more uncertainty you have and the more difficult the estimation is. That's why this is a tip.
  • Walk through the entire backlog, estimating as you proceed. Don't worry about how large the features are; decide as a team whether each feature is larger than two or three team-days, and keep a count of those large features. The larger the features, the more uncertainty you have in your estimate.
  • Add up your estimate of relative points. Add up the number of large features. Now, you have a relative estimate, which based on your previous velocity means something to you. You also have a number of large features, which will decrease the confidence in that estimate.
  • Do you have 50 features, of which only five are large? Maybe you have 75% confidence in your estimate. On the other hand, maybe all your features are large. I might only have 5-10% confidence in the estimate. Why? Because the team hasn’t completed any work yet and you have no idea how long your work will take.
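
A small sketch of the bookkeeping the list above describes (the confidence values simply mirror the illustrative 75% and 5-10% figures from this post; they are judgment calls, not a formula):

# Sketch of the backlog walk described above. `estimates` holds the
# relative size of each backlog item; anything above the threshold
# counts as a "large" feature that lowers confidence.
def summarize_backlog(estimates, large_threshold=3):
    total_points = sum(estimates)
    large_count = sum(1 for e in estimates if e > large_threshold)
    fraction_large = large_count / len(estimates)
    if fraction_large <= 0.10:
        confidence = "~75%"
    elif fraction_large >= 0.90:
        confidence = "5-10%"
    else:
        confidence = "in between - judge as a team"
    return total_points, large_count, confidence

print(summarize_backlog([1, 2, 1, 5, 8, 1, 2, 13, 1, 3]))  # (37, 3, 'in between - judge as a team')
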
Technical Program with Communities of Practice

As a software program team, get together, and assess the total estimate. Why the program team? Because the program team is the cross-functional team whose job is to get the software product to done. It’s not just the software teams—it’s everyone involved in the technical program team.

Note: the teams have to trust Sally, Joe, Henry and Tom to represent them to the software program team. If the teams do not, no one has confidence in any estimate at all. The estimate is a total SWAG.

The delegates to the program team know what their estimates mean individually. Now, they “add” them together, whatever that means. Do you realize why we will call this prediction? Do Sally, Joe, Henry, and Tom have feature teams, service teams, or component teams? Do they have to add time for the experiments as they transition to agile? Do they need to gain the trust of their management? Or, are they already experienced agile feature teams?

The more experienced the teams are at agile, the better the estimate is. The more the teams are feature teams, the better the estimate is. If you are new to agile, don't have feature teams, or have a mixed program (agile and non-agile teams), you know that estimate is way off.

Is it time for the software program manager to say, "We have an initial order-of-magnitude prediction. But we haven't tested this estimate with any work, so we don't know how accurate our estimates are. Right now our confidence is about 5-10% (or whatever it is) in our estimate. We've only spent a day or so estimating, so we can spend time delivering rather than estimating. We need to do a week or two of work, deliver a working skeleton, and then we can tell you more about our prediction. We can better our prediction as we proceed. Remember, back in the waterfall days, we spent a month estimating and we were wrong. This way, you'll get to see product as we work."

You want to use the word "prediction" as much as possible, because people understand the word prediction. They hear weather predictions all the time. They know about weather predictions. But when they hear estimates of work, they assume you are correct; even if you use confidence numbers, they assume you are accurate. Use the word prediction.

Beware of These Program Estimation Traps

There are plenty of potential traps when you estimate programs. Here are some common problems:

  • The backlog is full of themes. You haven’t even gotten to epics, never mind stories. I don’t see how you can make a prediction. That’s like me saying, “I can go from Boston to China on an airplane. Yes, I can. It will take time.” I need more data: which time of year? Mid-week, weekend? Otherwise, I can only provide a ballpark, not a real estimate.
  • Worse, the backlog is full of tasks, so you don’t know the value of a story. “Fix the radio button” does not tell me the value of a story. Maybe we can eliminate the button instead of fix it.
  • The people estimating are not the ones who will do the work, so the estimate is full of estimation bias. Just because work looks easy or looks hard does not mean it is.
  • The estimate becomes a target. This never works, but managers do it all the time. “Sure, my team can do that work by the end of Q1.”
  • The people on your program multitask, so the estimate is wrong. Have you read the Cost of Delay due to Multitasking?
  • Managers think they can predict team size from the estimate. This is the problem of splitting work in the mistaken belief that more people make it easier to do more work. More people make the communications burden heavier.

Estimating a program is more difficult, because bigness makes everything harder. A better way to manage the issues of a program is to decide if it’s worth funding in the project portfolio. Then, work in an agile way. Be ready to change the order of work in the backlog, for teams and among teams.

As a program manager, you have two roles when people ask for estimates. You want to ask your sponsors these questions:

  • How much do you want to invest before we stop? Are you ready to watch the program grow as we build it?
  • What is the value of this project or program?

You want to make these requests of the teams and product owners:

  • Please produce walking skeletons (of features in the product) and build on them
  • Please produce small features, so we can see the product evolve every day

The more the sponsors see the product take shape, the less interested they will be in an overall estimate. They may ask for more specific estimates (when can you do this specific feature), which is much easier to answer.

Delivering working software builds trust. Trust obviates many needs for estimates. If your managers or customers have never had trust with a project or program team before, they will start asking for estimates. Your job is to deliver working software every day, so they stop asking.

Categories: Project Management

Managing Stakeholders' Expectations - What to do when your SMEs think your project will solve all their problems…

Software Requirements Blog - Seilevel.com - Thu, 09/11/2014 - 17:00
There are lots of times that stakeholders have unrealistic expectations and that you, as the product manager/business analyst will have to manage them so that the scope of the project doesn’t balloon out of proportion. In this blog post, however, I am going to speak to a very specific type of stakeholder expectation: that your […]
Categories: Requirements

Ten at Ten Meetings

Ten at Ten Meetings are a very simple tool for helping teams stay focused and connected, and collaborate more effectively, the Agile way.

I've been leading distributed teams and v-teams for years.   I needed a simple way to keep everybody on the same page, expose issues, and help everybody on the team increase their awareness of results and progress, as well as unblock and break through blocking issues.

Why Ten at Ten Meetings?

When people are remote, it's easy to feel disconnected, and it's easy to start to feel like different people are just a "black box" or have "gone dark."

Ten at Ten Meetings have been my friend and have helped me help everybody on the team stay in sync and appreciate each other's work, while finding better ways to team up on things, and drive to results, in a collaborative way.  I believe I started Ten at Ten Meetings back in 2003 (before that, I wasn't as consistent … I think 2003 is where I realized a quick sync each day keeps the "black box" away).

Overview of Ten at Ten Meetings

I’ve written about Ten at Ten Meetings before in my posts on How To Lead High-Performance Distributed Teams, How I Use Agile Results, Interview on Timeboxing for HBR (Harvard Business Review), Agile Results Works for Teams and Leaders Too,  and 10 Free Leadership Tools for Work and Life, but I thought it would be helpful to summarize some of the key information at a glance.

Here is an overview of Ten at Ten Meetings:

This is one of my favorite tools for reducing email and administration overhead and getting everybody on the same page fast.  It's simply a stand-up meeting.  I tend to have them at 10:00, and I set a limit of 10 minutes.  This way people look forward to the meeting as a way to very quickly catch up with each other, and to stay on top of what's going on, and what's important.

The way it works is I go around the (virtual) room, and each person identifies what they got done yesterday, what they're getting done today, and any help they need.  It's a fast process, although it can take practice in the beginning.  When I first started, I had to get in the habit of hanging up on people if it went past 10 minutes.  People very quickly realized that the ten minute meeting was serious.

Also, as issues came up, if they weren't fast to solve on the fly and felt like a distraction, then we had to learn to take them offline.  Eventually, this helped build a case for a recurring team meeting where we could drill deeper into recurring issues or patterns, and focus on improving overall team effectiveness.

3 Steps for Ten at Ten Meetings

Here is more of a step-by-step approach:

  1. I schedule ten minutes for Monday through Thursday, at whatever time the team can agree to, but in the AM. (no meetings on Friday)
  2. During the meeting, we go around and ask three simple questions:  1)  What did you get done?  2) What are you getting done today? (focused on Three Wins), and 3) Where do you need help?
  3. We focus on the process (the 3 questions) and the timebox (10 minutes) so it’s a swift meeting with great results.   We put issues that need more drill-down or exploration into a “parking lot” for follow up.  We focus the meeting on status and clarity of the work, the progress, and the impediments.

You’d be surprised at how quickly people start to pay attention to what they’re working on and on what’s worth working on.  It also helps team members very quickly see each other’s impact and results.  It also helps people raise their bar, especially when they get to hear  and experience what good looks like from their peers.

Most importantly, it shines the light on little, incremental progress, and, if you didn’t already know, progress is the key to happiness in work and life.

You Might Also Like

10 Free Leadership Tools for Work and Life

How I Use Agile Results

How To Lead High-Performance Distributed Teams

Categories: Architecture, Programming

Soft Skills is the Manning Deal of the Day!

Making the Complex Simple - John Sonmez - Thu, 09/11/2014 - 15:00

Great news! Early access to my new book Soft Skills: The Software Developer's Life Manual, is on sale today (9/11/2014) only as Manning's deal of the day! If you've been thinking about getting the book, now is probably the last chance to get it at a discounted rate. Just use the code: dotd091114au to get the […]

The post Soft Skills is the Manning Deal of the Day! appeared first on Simple Programmer.

Categories: Programming

Open as Many Doors as Possible

Making the Complex Simple - John Sonmez - Thu, 09/11/2014 - 15:00

Even though specialization is important, it doesn’t mean you shouldn’t strive to open up as many doors as possible in your life.

The post Open as Many Doors as Possible appeared first on Simple Programmer.

Categories: Programming

Software Development Linkopedia September 2014

From the Editor of Methods & Tools - Thu, 09/11/2014 - 09:36
Here is our monthly selection of interesting knowledge material on programming, software testing and project management.  This month you will find some interesting information and opinions about the software developer condition, scaling Agile, technical debt, behavior-driven development, Agile metrics, UX (user eXperience), NoSQL databases and software design. Blog: The Developer is Dead, Long Live the Developer Blog: Scaling Agile at Gilt Blog: Technical debt 101 Blog: Behaviour Driven Development: Tips for Writing Better Feature Files Article: Acceptance Criteria – Demonstrate Them by Drawing a House Article: Actionable Metrics At Siemens Health Services Article: Adapting Scrum to a ...

Xcode 6 GM & Learning Swift (with the help of Xebia)

Xebia Blog - Thu, 09/11/2014 - 07:32

I guess re-iterating the announcements Apple did on Tuesday is not needed.

What is most interesting to me about everything that happened on Tuesday is the fact that iOS 8 has now reached GM status and Apple sent the call to bring in your iOS 8 uploads to iTunes Connect. iOS 8 is around the corner, about a week from now, bringing some great new features to the platform and ... Swift.

Swift

I was thinking about putting together a list of excellent links about Swift, but obviously somebody has done that already:
https://github.com/Wolg/awesome-swift
(And best of all, if you find/notice an epic Swift resource out there, submit a pull request to that repo, or leave a comment on this blog post.)

If you are getting started, check out:
https://github.com/nettlep/learn-swift
It's a GitHub repo filled with extra Playgrounds to learn Swift in a hands-on manner. It elaborates a bit further on the later chapters of the Swift language book.

But the best way to learn Swift I can come up with is to join Xebia for a day (or two) and attend one of our special purpose update training offers hosted by Daniel Steinberg on 6 and 7 November. More info on that:

Booting a Raspberry Pi B+ with the Raspbian Debian Wheezy image

Agile Testing - Grig Gheorghiu - Thu, 09/11/2014 - 01:32
It took me a while to boot my brand new Raspberry Pi B+ with a usable Linux image. I chose the Raspbian Debian Wheezy image available on the downloads page of the official raspberrypi.org site. Here are the steps I needed:

1) Bought a micro SD card. Note: DO NOT get a regular SD card for the B+ because it will not fit in the SD card slot. You need a micro SD card.

2) Inserted the SD card via an SD USB adaptor in my MacBook Pro.

3) Went to the command line and ran df to see which volume the SD card was mounted as. In my case, it was /dev/disk1s1.

4) Unmounted the SD card. I initially tried 'sudo umount /dev/disk1s1' but the system told me to use 'diskutil unmount', so the command that worked for me was:

diskutil unmount /dev/disk1s1

5) Used dd to copy the Raspbian Debian Wheezy image (which I previously downloaded) per these instructions. Important note: the target of the dd command is /dev/disk1 and NOT /dev/disk1s1. I tried initially with the latter, and the Raspberry Pi wouldn't boot (one of the symptoms that something was wrong, other than the fact that nothing appeared on the monitor, was that the green light was solid and not flashing; a Google search revealed that one possible cause for that was a problem with the SD card). The dd command I used was:

dd if=2014-06-20-wheezy-raspbian.img of=/dev/disk1 bs=1m
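
(One extra safety step I'd suggest before running dd, not part of the original walkthrough: verify the downloaded image against the checksum published on the downloads page, so a corrupt download doesn't masquerade as a bad SD card. A small helper, assuming a SHA-1 checksum is what's published:)

import hashlib

# Compute the SHA-1 of the downloaded image and compare it by eye to
# the checksum on the downloads page before writing the card.
def sha1_of(path, chunk_size=1 << 20):
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk_size)
            if not block:
                break
            digest.update(block)
    return digest.hexdigest()

print(sha1_of("2014-06-20-wheezy-raspbian.img"))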

6) At this point, I inserted the micro SD card into the SD slot on the Raspberry Pi, then connected the Pi to a USB power cable, a monitor via an HDMI cable, a USB keyboard and a USB mouse. I was able to boot and change the password for the pi user. The sky is the limit next ;-)




Community of Practice: Owning The Message

Sometimes controlling the message is important!

The simplest definition of a community of practice (COP) is people connecting, encouraging each other and sharing ideas and experiences. The power of COPs is generated by the interchange between people in a way that helps both the individuals and the group to achieve their goals. Who owns the message that the COP focuses on will affect how well the interchange occurs. Ownership, viewed in a black and white mode, generates two kinds of COPs. In the first type of COP, the group owns the message. In the second, the organization owns the message. "A Community of Practice: An Example" described a scenario in which the organization created a COP for a specific practice and made attendance mandatory. The inference in this scenario is that the organization is using the COP to deliver a message. The natural tendency is to view COPs in which the organization controls the message and membership as delivering less value.

Organizational ownership of a COP's message and membership is generally viewed as an anti-pattern. The problem is that owning the message and controlling membership can impact the COP's ability to:

  1. Connect like-minded colleagues and peers
  2. Share experiences safely if they do not conform to the organization’s message
  3. Innovate and create new ideas that are viewed as outside-the-box.

The exercise of control constrains the COP's focus, and in an organization implementing concepts such as self-organizing and self-managing teams (Agile concepts), it sends very mixed messages to the organization.

The focus that control generates can be used to implement, reinforce and institutionalize new ideas that are being rolled out on an organizational basis. Control of message and membership can:

  1. Accelerate learning by generating focus
  2. Validate and build on existing knowledge, the organization’s message
  3. Foster collaboration and consistency of process

In the short-run this behavior may well be a beneficial mechanism to deliver and then reinforce the organization’s message.  The positives that the constraints generate will quickly be overwhelmed once the new idea loses its bright and shiny status.

In organizations that use top-down process improvement methods, COPs can be used to deliver a message and then to reinforce the message as implementation progresses. However, as soon as institutionalization begins, the organization should get out of the COP control business. This does not mean that support, such as providing budget and logistics, should be withdrawn.  Support does not have to equate to control. Remember that control might be effective in the short run; however, in the long term, COPs in which the message and membership are explicitly controlled will not be able to evolve and support their members effectively.


Categories: Process Management

10 Common Server Setups For Your Web Application

If you need a good overview of different ways to setup your web service then Mitchell Anicas has written a good article for you: 5 Common Server Setups For Your Web Application.

We've even included a few additional possibilities at no extra cost.

  1. Everything on One Server. Simple. Potential for poor performance because of resource contention. Not horizontally scalable. 
  2. Separate Database Server. There's an application server and a database server. Application and database don't share resources. Can independently vertically scale each component. Increases latency because the database is a network hop away.
  3. Load Balancer (Reverse Proxy). Distribute workload across multiple servers. Native horizontal scaling. Protection against DDoS attacks using rules. Adds complexity. Can be a performance bottleneck. Complicates issues like SSL termination and sticky sessions (a round-robin sketch follows this list).
  4. HTTP Accelerator (Caching Reverse Proxy). Caches web responses in memory so they can be served faster. Reduces CPU load on web server. Compression reduces bandwidth requirements. Requires tuning. A low cache-hit rate could reduce performance. 
  5. Master-Slave Database Replication. Can improve read and write performance. Adds a lot of complexity and failure modes.
  6. Load Balancer + Cache + Replication. Combines load balancing of the caching servers and the application servers with database replication. Nice explanation in the article.
  7. Database-as-a-Service (DBaaS). Let someone else run the database for you.  RDS is one example from Amazon and there are hosted versions of many popular databases.
  8. Backend as a Service (BaaS). If you are writing a mobile application and you don't want to deal with the backend component then let someone else do it for you. Just concentrate on the mobile platform. That's hard enough. Parse and Firebase are popular examples, but there are many more.
  9. Platform as a Service (PaaS). Let someone else run most of your backend, but you get more flexibility than you have with BaaS to build your own application. Google App Engine, Heroku, and Salesforce are popular examples, but there are many more.
  10. Let Someone Else Do It. Do you really need servers at all? If you have a store then a service like Etsy saves a lot of work for very little cost. Does someone already do what you need done? Can you leverage it?
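
To make setup 3 concrete: at its heart, a load balancer is a server-selection policy plus request forwarding. A minimal round-robin selection sketch (illustrative; real balancers such as nginx or HAProxy layer health checks, sticky sessions, and SSL termination on top):

import itertools

# Minimal round-robin backend selection, the core idea of setup 3 above.
class RoundRobinBalancer:
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self):
        return next(self._cycle)

lb = RoundRobinBalancer(["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"])
for _ in range(4):
    print(lb.pick())  # .11, .12, .13, then back to .11
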
Categories: Architecture

Cloud Changes the Game from Deployment to Adoption

Before the Cloud, there was a lot of focus on deployment, as if deployment was success. 

Once you shipped the project, it was time to move on to the next project.  And project success was measured in terms of “on time” and “on budget.”   If you could deploy things quickly, you were a super shipper.

Of course, what we learned was that if you simply throw things over the wall and hope they stick, it’s not very successful.

"If you build it" ... users don't always come.

It was easy to confuse shipping projects on time and on budget with business impact.  

But let's compound the problem. 

The Development Hump

The big hump of software development was the hump in the middle: a big development hump. And that hump was followed by a big deployment hump (installing software, fixing issues, dealing with deployment hassles, etc.)

So not only were development cycles long, but deployment was tough, too.

Because development cycles were long, and deployment was so tough, it was easy to confuse effort for value.

Cloud Changes the Hump

Now, let's turn it around.

With the Cloud, deployment is simplified.  You can reach more users, and it's easier to scale.  And it's easier to be available 24x7.

Add Agile to the mix, and people ship smaller, more frequent releases.

So with smaller, more-frequent releases, and simpler deployment, some software teams have turned into shipping machines.

The Cloud shrinks the development and deployment humps.

So now the game is a lot more obvious.

Deployment doesn't mark the finish.  It starts the game.

The real game of software success is adoption.

The Adoption Hump is Where the Benefits Are

If you picture the old IT project hump, with its long development cycle in the middle, there are now shorter humps in the middle.

The big hump is now user adoption.

It’s not new.  It was always there.   But the adoption hump was hidden beyond the development and deployment humps, and simply written off as “Value Leakage.”

And even if you made it over the first two humps, adoption was mostly an afterthought, since most projects did not plan or design for adoption, or allocate any resources or time to it.

And so the value leaked.

But the adoption hump is where the business benefits are.   The ROI is sitting there, gathering dust, in our "pay-for-play" world.   The value is simply waiting to be released and unleashed. 

Software solutions are sitting idle waiting for somebody to realize the value.

Accelerate Value by Accelerating Adoption

All of the benefits to the business are locked up in that adoption hump.   All of the benefits around how users will work better, faster, or cheaper, or how you will change the customer interaction experience, or how back-office systems will be better, faster, cheaper ... they are all locked up in that adoption hump.

As I said before, the key to Value Realization is adoption.  

So if you want to realize more value, drive more user adoption. 

And if you want to accelerate value, then accelerate user adoption.

In Essence …

In a Cloud world, the original humps of design, development, and deployment shrink.   But it’s not just time and effort that shrink.  Costs shrink, too.   With online platforms to build on (Infrastructure as a Service, Platforms as a Service, and Software as a Service), you don’t have to start from scratch or roll your own.   And if you adopt a configure before customize mindset, you can further reduce your costs of design and development.

Architecture moves up the stack from basic building blocks to composition.

And adoption is where the action is.  

What was an afterthought in the last generation of solutions is now front and center.

In the new world, adoption is a planned spend, and it’s core to the success of the planned value delivery.

If you want to win the game, think “Adoption-First.”

You Might Also Like

Continuous Value Delivery the Agile Way

How Can Enterprise Architects Drive Business Value the Agile Way?

How To Use Personas and Scenarios to Drive Adoption and Realize Value

Categories: Architecture, Programming

A Community of Practice: An Example And An Evaluation

A Community of Practice must have a common interest.

I am often asked to describe a community of practice. We define a community of practice (COP) as a group of people with a common area of interest who share knowledge. The interaction and sharing of knowledge serves to generate relationships that reinforce a sense of community and support.

An example of a recently formed COP:

Organizational context: The organization is a large multinational with four large development centers (US, Canada, India and China). Software development projects (including development, enhancements and maintenance) leverage a mix of Scrum/XP and classic plan-based methods.  Each location has an internal project management group that meets monthly and is sponsored by the company.  This group is called the Project Manager COP (PMCOP).  The PMCOP primarily meets at lunchtime at each of the corporate locations, with quarterly events early in the afternoon (Eastern time zone in Canada). Attendance is mandatory and active involvement in the PMCOP is perceived to be a career enhancement. Senior management viewed the PMCOP as extremely successful, while many participants viewed it as little more than a free lunch.

Agile COP: The organization recently adopted Agile as an approved development and software project management approach.  A large number of Scrum masters were appointed.  Some were certified, some had experience in previous organizations and some had "read the book" (their description). The organization quickly recognized that consistency of practice was needed, implemented a package of coaching (mostly internal) and auditing, and started a COP with similar attributes to the PMCOP. Differences in implementation included more localization and self-organization.  Each location met separately and developed its own list of topics (this allowed each group to be more culturally sensitive) and each location rotated the meeting chair on a 3-month basis.  Participation was still mandatory (attendance was taken and shared with senior management).

In both cases the COP included a wide range of programming, including outside presenters (live and webinar), retrospective-like sessions and practitioner sharing.

In order to determine whether a COP is positioned to be effective, we can use the four attributes from Communities of Practice to evaluate the programs.  If we use the framework to evaluate the two examples in our mini-case study, the results would show:

 

Common area of interest
  • PMCOP: Yes, project management is a specific area of interest.
  • Agile COP: Yes-ish. The COP is currently focused on Scrum masters; however, Agile can include a very broad range of practices, so other areas of focus may need to be broken out.

Process
  • PMCOP: Yes, the PMCOP has a set of procedures for membership, attendance and logistics.
  • Agile COP: The Agile COP adopted most of the PMCOP processes (the rules for who chairs the meeting and local topics were modified).

Support
  • PMCOP: Yes, the organization provided space, budget for lunch and other incidentals.
  • Agile COP: Yes, the organization provided space, budget for lunch and other incidentals. Note that in both the Agile COP and the PMCOP, the requirement to participate was considered support by the management team but not by the practitioners.

Interest
  • PMCOP: The requirement to participate makes interest hard to measure from mere attendance. We surveyed the members, which led to changes in programming to increase perceived value.
  • Agile COP: The Agile COP had not been in place long enough to judge long-term interest; however, the results from the PMCOP were used to push for more local, culturally sensitive programming.

 

Evaluating a planned community of practice against these four common attributes is a good first step toward identifying its strengths and weaknesses before it is implemented.


Categories: Process Management

Chrome - Firefox WebRTC Interop Test - Pt 2

Google Testing Blog - Tue, 09/09/2014 - 21:09
by Patrik Höglund

This is the second in a series of articles about Chrome’s WebRTC Interop Test. See the first.

In the previous blog post we managed to write an automated test which got a WebRTC call between Firefox and Chrome to run. But how do we verify that the call actually worked?

Verifying the Call
Now we can launch the two browsers, but how do we figure out whether the call actually worked? If you try opening two apprtc.appspot.com tabs in the same room, you will notice the video feeds flip over using a CSS transform, your local video is relegated to a small frame, and a new big video feed with the remote video shows up. For the first version of the test, I just looked at the page in the Chrome debugger and looked for some reliable signal. As it turns out, the remoteVideo.style.opacity property will go from 0 to 1 when the call comes up and from 1 to 0 when it goes down. Since we can execute arbitrary JavaScript in the Chrome tab from the test, we can simply implement the check like this:

bool WaitForCallToComeUp(content::WebContents* tab_contents) {
  // Apprtc will set remoteVideo.style.opacity to 1 when the call comes up.
  std::string javascript =
      "window.domAutomationController.send(remoteVideo.style.opacity)";
  return test::PollingWaitUntil(javascript, "1", tab_contents);
}

Verifying Video is Playing
So getting a call up is good, but what if there is a bug where Firefox and Chrome cannot send correct video streams to each other? To check for that, we needed to step up our game a bit. We decided to use our existing video detector, which looks at a video element and determines whether the pixels are changing. This is a very basic check, but it’s better than nothing. To do this, we simply evaluate the .js file’s JavaScript in the context of the Chrome tab, making the functions in the file available to us. The implementation then becomes:

bool DetectRemoteVideoPlaying(content::WebContents* tab_contents) {
  if (!EvalInJavascriptFile(tab_contents, GetSourceDir().Append(
          FILE_PATH_LITERAL("chrome/test/data/webrtc/test_functions.js"))))
    return false;
  if (!EvalInJavascriptFile(tab_contents, GetSourceDir().Append(
          FILE_PATH_LITERAL("chrome/test/data/webrtc/video_detector.js"))))
    return false;

  // The remote video tag is called remoteVideo in the AppRTC code.
  StartDetectingVideo(tab_contents, "remoteVideo");
  WaitForVideoToPlay(tab_contents);
  return true;
}

where StartDetectingVideo and WaitForVideoToPlay call the corresponding JavaScript methods in video_detector.js. If the video feed is frozen and unchanging, the test will time out and fail.
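For reference, the two wrappers follow the same pattern as WaitForCallToComeUp above: push a piece of JavaScript into the tab and poll for a reply. The sketch below is only an approximation of that idea; the JavaScript function names (startDetection, isVideoPlaying) and the expected "true" reply are assumptions for illustration, not the actual video_detector.js API.

void StartDetectingVideo(content::WebContents* tab_contents,
                         const std::string& video_element_id) {
  // Start the pixel-change detector on the given <video> element.
  // NOTE: startDetection is an assumed function name.
  std::string javascript = base::StringPrintf(
      "startDetection('%s')", video_element_id.c_str());
  EXPECT_TRUE(content::ExecuteScript(tab_contents, javascript));
}

void WaitForVideoToPlay(content::WebContents* tab_contents) {
  // Poll until the detector reports that frames are changing; if the feed is
  // frozen, the poll times out and the test fails. isVideoPlaying and the
  // "true" reply are likewise assumptions.
  std::string javascript =
      "window.domAutomationController.send(isVideoPlaying())";
  EXPECT_TRUE(test::PollingWaitUntil(javascript, "true", tab_contents));
}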

What to Send in the Call
Now we can get a call up between the browsers and detect whether video is playing. But what video should we send? For Chrome, we have a convenient --use-fake-device-for-media-stream flag that will make Chrome pretend there’s a webcam and present a generated video feed (a spinning green ball with a timestamp). This turned out to be useful since Firefox and Chrome cannot acquire the same camera at the same time; if we didn’t use the fake device, we would need two webcams plugged into the bots executing the tests!
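In a Chromium browser test, a flag like this is typically appended in the fixture’s SetUpCommandLine override. The snippet below is a minimal sketch of that idea; the fixture name is made up for illustration, and the flag string is simply the one mentioned above.

class WebRtcInteropTest : public InProcessBrowserTest {
 public:
  void SetUpCommandLine(base::CommandLine* command_line) override {
    // Make Chrome pretend it has a webcam and serve a generated feed
    // (the spinning green ball with a timestamp described above).
    command_line->AppendSwitch("use-fake-device-for-media-stream");
  }
};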

Bots running in Chrome’s regular test infrastructure do not have webcams (software or hardware) plugged into them, so this test must run on bots with webcams for Firefox to be able to acquire a camera. Fortunately, we have such machines in the WebRTC waterfalls, precisely so we can test that we actually acquire hardware webcams on all platforms. We also added a check to simply pass the test when there’s no real webcam on the system, since we don’t want it to fail when a dev runs it on a machine without a webcam:

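// No physical webcam on this machine (e.g. a dev workstation): pass trivially.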
if (!HasWebcamOnSystem())
  return;

It would of course be better if Firefox had a similar fake device, but to my knowledge it doesn’t.

Downloading all Code and Components
Now we have all we need to run the test and have it verify something useful. We just have the hard part left: how do we actually download all the resources we need to run this test? Recall that this is actually a three-way integration test between Chrome, Firefox and AppRTC, which requires the following:

  • The AppEngine SDK in order to bring up the local AppRTC instance, 
  • The AppRTC code itself, 
  • Chrome (already present in the checkout), and 
  • Firefox nightly.

While developing the test, I initially just downloaded and installed these by hand and hard-coded the paths. This is a very bad idea in the long run. Recall that the Chromium infrastructure comprises thousands and thousands of machines, and while this test will only run on perhaps five of them at a time due to its webcam requirements, we don’t want manual maintenance work whenever we replace a machine. And for that matter, we definitely don’t want to download a new Firefox by hand every night and put it in the right location on the bots! So how do we automate this?

Downloading the AppEngine SDK
First, let’s start with the easy part. We don’t really care if the AppEngine SDK is up to date, so a relatively stale version is fine. We could have the test download it from the authoritative source, but that’s a bad idea for a few reasons. First, it updates outside our control. Second, there could be anti-robot measures on the page. Third, the download will likely be unreliable and fail the test occasionally.

The way we solved this was to upload a copy of the SDK to a Google storage bucket under our control and download it using the depot_tools script download_from_google_storage.py. This is a lot more reliable than an external website and will not download the SDK if we already have the right version on the bot.

Downloading the AppRTC Code
This code is on GitHub. Experience has shown that git clone commands run against GitHub will fail every now and then, and fail the test with them. We could write a retry mechanism, but we have found it’s better to simply mirror the git repository in Chromium’s internal mirrors, which are closer to our bots and thereby more reliable from our perspective. The pull is then done by a Chromium DEPS file (DEPS being Chromium’s dependency provisioning framework).

Downloading Firefox
It turns out that the Firefox project supplies handy libraries for this task. We’re using mozdownload in this script in order to download the Firefox nightly build. Unfortunately this fails every now and then, so we would like to add a retry mechanism, or write some mechanism to “mirror” the Firefox nightly build in a location we control.

Putting it Together
With that, we have everything we need to deploy the test. You can see the final code here.

The provisioning code above was put into a separate “.gclient solution” so that regular Chrome devs and bots are not burdened with downloading hundreds of megabytes of SDKs and code they will not use. When this test runs, you will first see a Chrome browser pop up, which will ensure the local AppRTC instance is up. Then a Firefox browser will pop up. Chrome will acquire the fake device and Firefox the real camera, and after a short delay the AppRTC call will come up, proving that video interop is working.

This is a complicated and expensive test, but we believe it is worth it to keep the main interop case under automation this way, especially as the spec evolves and the browsers are in varying states of implementation.

Future Work

  • Also run on Windows/Mac. 
  • Also test Opera. 
  • Interop between Chrome/Firefox mobile and desktop browsers. 
  • Also ensure audio is playing. 
  • Measure bandwidth stats, video quality, etc.


Categories: Testing & QA

Techno-BDDs: What Daft Punk Can Teach Us About Requirements

Software Requirements Blog - Seilevel.com - Tue, 09/09/2014 - 17:00
I listen to a lot of Daft Punk lately—especially while I’m running.   On one of my recent runs, my mind’s reflections upon the day’s work merged with my mental impressions of the music blaring from my earbuds, which happened to be the Daft Punk song, “Technologic.”  For those of you unfamiliar with the song, […]
Categories: Requirements

The Main Benefit of Story Points

Mike Cohn's Blog - Tue, 09/09/2014 - 15:00

If story points are an estimate of the time (effort) involved in doing something, why not just estimate directly in hours or days? Why use points at all?

There are multiple good reasons to estimate product backlog items in story points, and I cover them fully in the Agile Estimating and Planning video course, but there is one compelling reason that on its own is enough to justify the use of points.

It has to do with King Henry I, who reigned from 1100 to 1135. Prior to his reign, a “yard” was a unit of measure defined as the distance from a person’s nose to his outstretched thumb. Just imagine all the confusion this caused, with that distance being different for each person.

King Henry eventually decided a yard would always be the distance between the king’s nose and outstretched thumb. Convenient for him, but also convenient for everyone else because there was now a standard unit of measure.

You might learn that for you, a yard (as defined by the king’s arm) was a little more or less than your arm. I’d learn the same about my arm. And we’d all have a common unit of measure.

Story points are much the same. Like English yards, they allow team members with different skill levels to communicate about and agree on an estimate. As an example, imagine you and I decide to go for a run. I like to run but am very slow. You, on the other hand, are a very fast runner. You point to a trail and say, “Let’s run that trail. It’ll take 30 minutes.”

I am familiar with that trail, but being a much slower runner than you, I know it takes me 60 minutes every time I run that trail. And I tell you I’ll run that trail with you but that will take 60 minutes.

And so we argue. “30.” “60.” “30.” “60.”

We’re getting nowhere. Perhaps we compromise and call it 45 minutes. But that is possibly the worst thing we could do. We now have an estimate that is no good for either of us.

So instead of compromising on 45, we continue arguing. “30.” “60.” “30.” “60.”

Eventually you say to me, “Mike, it’s a 5-mile trail. I can run it in 30 minutes.”

And I tell you, “I agree: it’s a 5-mile trail. That takes me 60 minutes.”

The problem is that we are both right. You really can run it in 30 minutes, and it really will take me 60. When we try to put a time estimate on running this trail, we find we can’t because we work (run) at different speeds.

But, when we use a more abstract measure—in this case, miles—we can agree. You and I agree the trail is 5 miles. We just differ in how long it will take each of us to run it.

Story points serve much the same purpose. They allow individuals with differing skill sets and speeds of working to agree. Instead of a fast and slow runner, consider two programmers of differing productivity.

Like the runners, these two programmers may agree that a given user story is 5 points (rather than 5 miles). The faster programmer may be thinking it’s easy and only a day of work. The slower programmer may be thinking it will take two days of work. But they can agree to call it 5 points, as the number of points assigned to the first story is fairly arbitrary.

What’s important is once they agree that the first story is 5 points, our two programmers can then agree on subsequent estimates. If the fast programmer thinks a new user story will take two days (twice his estimate for the 5-point story), he will estimate the new story as 10 points. So will the second programmer if she thinks it will take four days (twice as long as her estimate for the 5-point story).

And so, like the distance from King Henry’s nose to his thumb, story points allow agreement among individuals who perform at different rates.

Cost, Value & Investment: How Much Will This Project Cost? Part 1

I’ve said before that you cannot use capacity planning for the project portfolio. I also said that managers often want to know how much the project will cost. Why? Because businesses have to manage costs. No one can have runaway projects. That is fair.

If you use an agile or incremental approach to your projects, you have options. You don’t have to have runaway projects. Here are two better questions:

  • How much do you want to invest before we stop?
  • How much value is this project or program worth to you?

You need to think about cost, value, and investment, not just cost, when you think about the project portfolio. If you think only about cost, you miss the potentially great projects and features.

However, no business exists without managing costs. In fact, the cost question might be critical to your business. If you proceed without thinking about cost, you might doom your business.

Why do you want to know about cost? Do you have a contract? Does the customer need to know? A cost-bound contract is a good reason.  (If you have other reasons for needing to know cost, let me know. I am serious when I say you need to evaluate the project portfolio on value, not on cost.)

You Have a Cost-Bound Contract

I’ve talked before about providing date ranges or confidence ranges with estimates. It all depends on why you need to know. If you are trying to stay within your predicted cost-bound contract, you need a ranked backlog. If you are part of a program, everyone needs to see the roadmap. Everyone needs to see the product backlog burnup charts. You’d better have feature teams so you can create features. If you don’t, you’d better have small features.

Why? You can manage the interdependencies among and between teams more easily with small features and with a detailed roadmap. The larger the program, the smaller you want the batch size to be. Otherwise, you will waste a lot of money very fast. (The teams will create components and get caught in integration hell. That wastes money.)

Your Customer Wants to Know How Much the Project Will Cost

Why does your customer want to know? Do you have payments based on interim deliverables? If the customer needs to know, you want to build trust by delivering value, so the customer trusts you over time.

If the customer wants to contain his or her costs, you want to work by feature, delivering value. You want to share the roadmap, delivering value. You want to know what value the estimate has for your customer. You can provide an estimate for your customer, as long as you know why your customer needs it.

Some of you may think I’m being perverse, and that I’m not being helpful by only saying what you could do to provide an estimate. Okay: in Part 2, I will suggest how you could take an order-of-magnitude approach to estimating a program.

Categories: Project Management

Firm foundations

Coding the Architecture - Simon Brown - Tue, 09/09/2014 - 12:32

I truly believe that a lightweight, pragmatic approach to software architecture is pivotal to successfully delivering software, and that it can complement agile approaches rather than compete against them. After all, a good architecture enables agility, and that doesn't happen by magic. But people in our industry often have a very different view. Historically, software architecture has been a discipline steeped in academia and, I think, consequently feels inaccessible to many software developers. It's also not a particularly trendy topic when compared to [microservices|Node.js|Docker|insert other new thing here].

I've been distilling the essence of software architecture over the past few years, helping software teams to understand and introduce it into the way that they work. And, for me, the absolute essence of software architecture is about building firm foundations; both in terms of the team that is building the software and for the software itself. It's about technical leadership, creating a shared vision and stacking the chances of success in your favour.

I'm delighted to have been invited back to ASAS 2014, and my opening keynote is about what a software team can do to create those firm foundations. I'm also going to talk about some concrete examples of what I've done in the past, illustrating how I apply a minimal set of software architecture practices in the real world to take an idea through to working software. I'll share some examples of where this hasn't exactly gone to plan, too! I look forward to seeing you in a few weeks.

Read more...

Categories: Architecture