
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Stuff The Internet Says On Scalability For December 12th, 2014

Hey, it's HighScalability time:


We've had a wee bit of a storm in the bay area.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Extreme Engineering - Building a Rube Goldberg machine with scrum

Xebia Blog - Fri, 12/12/2014 - 15:16

Is agile usable for things other than software development? Well, we already knew the answer to that: yes!
But creating a machine in one day, with 5 teams and continuously changing members, using scrum turned out to be quite exciting!

See our report below (it's in Dutch for now)

 

Extreme engineering video

 

 

Agile Metrics: Filling The Agile Measurement Framework Quadrants

Agile Metrics Framework

What gets measured depends on the team’s and the organization’s reporting needs and the measurement goals. For instance, an organization that needs to quantitatively prove their transformation will need to consider measures (and metrics) that can be generated consistently across project teams. Organizations whose teams are standalone and do not anticipate the need for baselining or benchmarking can easily leverage team-based relative measures, such as story points. The simple metrics framework suggested here, with potential metrics, is shown below:

  1. Labor Productivity Quadrant
    1. Labor productivity is generally expressed as a functional measure of size per person month, for example: function points per person month or use case points per person month. Labor productivity is typically the measure of choice when an overall transformation program needs to prove efficiency results. These measures are easily comparable between teams and have industry benchmarks available for comparison. The drawbacks are the perceived level of effort to generate the measure and the invasiveness of the process used to generate the size component of the metric.
    2. Story completion (variants include a measure of percentage story completion) is a relatively easy metric for teams to collect and leverage. The simplest form of this measure is a simple count of the number of stories completed in a sprint (or a period of time for Kanban). Adding a time component creates a rate of completion (a metric) which can be used as a variant of velocity.
  2. Quality Quadrant
    1. Customer satisfaction is a measure of how satisfied the customers (or stakeholders) of a project are with the team’s performance or the functionality delivered. Customer satisfaction can be measured using techniques as simple as asking specific stakeholders how they feel about the project, or very formal techniques, such as surveys and calculations like Net Promoter Score. The higher the formality, the more effort will be required to collect and analyze the metric, and the more comparable the metric will be between teams and across the life of long-running projects.
    2. Delivered defects are a count of the number of defects found after the code (or other deliverable) is marked as done. This measure is generally considered one of the more important reflections of quality, because all code or other deliverables are potentially implementable after they have been marked as done, which means any defects found, regardless of by whom, could have been found in production. Defects found in production can negatively impact customer satisfaction and the overall business.
    3. Usability is typically a measure of compliance against a set of industry and/or organizational standards. Most teams build usability compliance into the definition of done; therefore, what is delivered as done is compliant. The metric is used as a mechanism to reflect progress while functionality is being built.
  3. Predictability Quadrant
    1. Velocity is the average number of units of work delivered in a sprint (or any other repeating unit of time). Typically velocity is expressed as the average number of story points a team delivers per sprint. While similar to productivity, velocity is typically used to represent team predictability. Predictability metrics can be used to generate release plans (when will some group of functionality be ready for production) and in sprint planning.
    2. Time-to-market is very similar to velocity, reflecting how fast functionality is developed and delivered. Time-to-market is generally used for plan-based (non-Agile) projects or in organizations using functional metrics (e.g. function points). An example of a time-to-market metric would be the number of function points delivered per calendar month. Note: like velocity, time-to-market metrics are generally averages and are used in planning exercises or in benchmarking.
    3. Effort burn-down is a measure of a team’s estimate of the number of hours of effort remaining in a sprint to deliver the functionality that was committed to during planning. This is almost always used as a mechanism for teams to anticipate whether they will complete the work by the end of the sprint. There are numerous variations on the effort burn-down chart, including story point, task and feature burn-down charts. In every case some measure of work is counted (e.g. hours, tasks, story points) and then, as items are completed, used up or new instances are discovered, the number remaining is decremented or incremented (a short code sketch after this list illustrates the velocity and burn-down arithmetic).
  4. Value Quadrant
    1. Business value is an estimate of the net value being delivered in a unit of work (e.g. story, epic or feature). While business value is the Holy Grail in this category, it is generally very difficult to estimate at a story or feature level; therefore tracking value tends to happen at a higher level, such as a release.
    2. Features delivered is a proxy for business value. This measure is typically a count of the number of features delivered in a sprint. Variants of this measure include counting stories or epics (larger user stories). Features and stories reflect requirements; therefore, as the number of features delivered increases, so does the value users and the product owner perceive.
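
To make the velocity and burn-down arithmetic from the list above concrete, here is a minimal sketch in Java. It is purely illustrative and not part of any framework mentioned in this post; the class name, method names and sprint numbers are made up.

import java.util.Arrays;
import java.util.List;

public class SprintMetrics {

    // Velocity: the average number of story points delivered per sprint.
    static double velocity(List<Integer> storyPointsPerSprint) {
        return storyPointsPerSprint.stream()
                .mapToInt(Integer::intValue)
                .average()
                .orElse(0.0);
    }

    // Effort burn-down: hours remaining after subtracting completed work
    // and adding any newly discovered work.
    static int hoursRemaining(int hoursLeftYesterday, int hoursCompletedToday, int hoursDiscoveredToday) {
        return hoursLeftYesterday - hoursCompletedToday + hoursDiscoveredToday;
    }

    public static void main(String[] args) {
        // Hypothetical data: story points delivered in the last four sprints.
        System.out.println(velocity(Arrays.asList(21, 18, 24, 19))); // 20.5
        // Hypothetical burn-down update for a single day of a sprint.
        System.out.println(hoursRemaining(120, 35, 8));              // 93
    }
}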

The measures and metrics noted above barely scratch the surface of what can be measured. What should be measured is dependent on the needs and goals of the team and the organization. In ALL cases the measures and metrics must be vetted to ensure they meet the Agile measurement philosophies.


Categories: Process Management

New Code Samples for Lollipop

Android Developers Blog - Thu, 12/11/2014 - 23:43

Posted by Trevor Johns, Developer Programs Engineer

With the launch of Android 5.0 Lollipop, we’ve added more than 20 new code samples demonstrating how to implement some of the great new features of this release. To access the code samples, you can easily import them in Android Studio 1.0 using the new Samples Wizard.

Go to File > Import Sample in order to browse the available samples, which include a description and preview for each. Once you’ve made your selection, select “Next” and a new project will be automatically created for you. Run the project on an emulator or device, and feel free to experiment with the code.

Samples Wizard in Android Studio 1.0; newly imported sample project in Android Studio

Alternatively, you can browse through them via the Samples browser on the developer site. Each sample has an Overview description, Project page to browse app file structure, and Download link for obtaining a ZIP file of the sample. As a third option, code samples can also be accessed in the SDK Manager by downloading the SDK samples for Android 5.0 (API 21) and importing them as existing projects into your IDE.


Sample demonstrating transition animations
Material Design

When adopting material design, you can refer to our collection of sample code highlighting material elements:

For additional help, please refer to our design checklist, list of key APIs and widgets, and documentation guide.

To view some of these material design elements in action, check out the Google I/O app source code.

Platform

Lollipop brings the most extensive update to the Android platform yet. The Overview screen allows an app to surface multiple tasks as concurrent documents. You can include enhanced notifications with this sample code, which shows you how to use the lockscreen and heads-up notification APIs.
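
As a rough illustration of what those lockscreen and heads-up notification APIs look like in practice (this is a hedged sketch, not the code from the sample itself; the helper class and its arguments are invented):

import android.content.Context;
import android.support.v4.app.NotificationCompat;
import android.support.v4.app.NotificationManagerCompat;

public class LollipopNotifications {

    // Posts a notification whose content is visible on the lockscreen and
    // which is eligible for heads-up display on Android 5.0 devices.
    public static void notifyMessage(Context context, int id, String title, String text) {
        NotificationCompat.Builder builder = new NotificationCompat.Builder(context)
                .setSmallIcon(android.R.drawable.ic_dialog_info)      // placeholder icon
                .setContentTitle(title)
                .setContentText(text)
                .setCategory(NotificationCompat.CATEGORY_MESSAGE)
                .setVisibility(NotificationCompat.VISIBILITY_PUBLIC)  // show full content on the lockscreen
                .setPriority(NotificationCompat.PRIORITY_HIGH)        // high priority plus...
                .setDefaults(NotificationCompat.DEFAULT_ALL);         // ...sound/vibration makes it a heads-up candidate

        NotificationManagerCompat.from(context).notify(id, builder.build());
    }
}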

We also introduced a new Camera API to provide developers more advanced image capture and processing capabilities. These samples detail how to use the camera preview and take photos, how to record video, and implement a real-time high-dynamic range camera viewfinder.

Elsewhere, Project Volta encourages developers to make their apps more battery-efficient with new APIs and tools. The JobScheduler sample demonstrates how you can schedule background tasks to be completed later or under specific conditions.
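
The pattern the JobScheduler sample walks through boils down to roughly the following sketch (not the sample's actual code; the job id, service class and constraints are arbitrary):

// MyJobService.java: a minimal JobService; it must also be declared in the
// manifest with android:permission="android.permission.BIND_JOB_SERVICE".
import android.app.job.JobParameters;
import android.app.job.JobService;

public class MyJobService extends JobService {
    @Override
    public boolean onStartJob(JobParameters params) {
        // Kick off the deferred work here; returning false means it is already done.
        return false;
    }

    @Override
    public boolean onStopJob(JobParameters params) {
        return false; // nothing to reschedule if the constraints are no longer met
    }
}

Scheduling it from an Activity (or any other Context) then looks something like this:

import android.app.job.JobInfo;
import android.app.job.JobScheduler;
import android.content.ComponentName;
import android.content.Context;

public class JobSchedulerHelper {
    public static void scheduleDeferredWork(Context context) {
        JobInfo job = new JobInfo.Builder(42, new ComponentName(context, MyJobService.class))
                .setRequiredNetworkType(JobInfo.NETWORK_TYPE_UNMETERED) // only on an unmetered network
                .setRequiresCharging(true)                              // only while charging
                .build();

        JobScheduler scheduler =
                (JobScheduler) context.getSystemService(Context.JOB_SCHEDULER_SERVICE);
        scheduler.schedule(job);
    }
}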

For those interested in the enterprise device administration use case, there are sample apps on setting app restrictions and creating a managed profile.

Android Wear

For Android Wear, we have a speed tracker sample to show how to take advantage of GPS support on wearables. You can browse the rest of the Android Wear samples too, and here are some highlights that demonstrate the unique capabilities of wearables, such as data synchronization, notifications, and supporting round displays:

Android TV

Extend your app for Android TV using the Leanback library described in this training guide and sample.

To try out a game that is specifically optimized for Android TV, download Pie Noon from Google Play. It’s an open-source game developed in-house at Google that supports multiple players using Bluetooth controllers or touch controls on mobile devices.

Android Auto

For the use cases highlighted in the Introduction to Android Auto DevByte, we have two code samples. The Media Browser sample (DevByte) demonstrates how easy it is to make an audio app compatible with Android Auto by using the new Lollipop media APIs, while the Messaging sample (DevByte) demonstrates how to implement notifications that support replies using speech recognition.

Google Play services

Since we’ve discussed sample resources for the Android platform and form factors, we also want to mention that there are existing samples for Google Play services. With Google Play services, your app can take advantage of the latest Google-powered APIs such as Maps, Google Fit, Google Cast, and more. Access samples in the Google Play services SDK or visit the individual pages for each API on the developer site. For game developers, you can reference the Google Play Games services samples for how to add achievements, leaderboards, and multiplayer support to your game.

Check out a sample today to help you with your development!

Join the discussion on

+Android Developers
Categories: Programming

Here’s My Management 3.0 Story. What’s Yours?

NOOP.NL - Jurgen Appelo - Thu, 12/11/2014 - 21:28
Delegation Levels

After 4 years of working by myself, I am now the proud manager of a team of seven great people. Two months ago, I had already extended my one-person company Happy Melly One by adding Lisette Sutherland (general management) and Sergey Kotlov (software development). In the last couple of weeks, I asked Lisette and Sergey to conduct job interviews with people around the world for the roles of Internet marketing, content writing, web development, and video editing.

The post Here’s My Management 3.0 Story. What’s Yours? appeared first on NOOP.NL.

Categories: Project Management

Hello World, meet our new experimental toolchain, Jack and Jill

Android Developers Blog - Thu, 12/11/2014 - 20:22

Posted by Paul Rashidi, Developer Programs Engineer

We've been working on a new toolchain for Android that’s designed to improve build times and simplify development by reducing dependencies on other tools. Today, we’re introducing you to Jack (Java Android Compiler Kit) and Jill (Jack Intermediate Library Linker), the two tools at the core of the new toolchain.

We are making an early, experimental version of Jack and Jill available for testing with non-production versions of your apps. This post describes how the toolchain works, how to configure it, and how to let us know of your feature requests and any bugs you find.

So how does it work?

When the new toolchain is enabled, Jill will translate any libraries you are referencing to a new Jack library file (.jack). This prepares them to be quickly merged with other .jack files. The Android Gradle plugin and Jack collect any .jack library files, along with your source code, and compile them into a set of dex files. During the process, Jack also handles any requested code minification. The output is then assembled into an APK file as normal. We also include support for multiple dex files, if you have enabled that support.

How do I use it?

Jack and Jill are already available in the 21.1.1+ Build Tools for Android Studio. Complementary Gradle support is also currently available in the Android 1.0.0+ Gradle plugin. To get started, all you need to do is make sure you're using these versions of the tooling and then add a single line in your build.gradle file. Perform a build of your application to receive a newly built APK.

android {
    ...
    buildToolsRevision '21.1.1'
    defaultConfig {
      // Enable the experimental Jack build tools.
      useJack = true
    }
    ...
}
If you want to build your app with both toolchains, Product Flavors are a great way to do this. Your build.gradle file might look something like the snippet below.
android {
    ...
    productFlavors {
        dev {
            ...
        }
        experimental {
            useJack = true
        }
        prod {
            ...
        }
    }
    ...
}
How do I configure my build?

We are making the transition to Jack as smooth as possible by supporting minification (shrinking and/or obfuscation), as well as repackaging (i.e. similar to tools like jarjar), while using the same input files as you are used to. Minification is available in the Gradle plugin immediately and repackaging will follow. You should continue to use the "minifyEnabled true" directive to reduce the size of your app among all other optimizations you would normally use. There are more details on our reference page (linked below) regarding the level of support for each type of optimization. We encourage you to provide feedback there if your current configuration isn't supported.

Give us your feedback

We are attempting to make the toolchain as easy to test out as possible and we're looking for your help to fine tune it. Use the reference page to find known issues, file feature requests, and report bugs. Happy building!

Join the discussion on

+Android Developers
Categories: Programming

Azure: Premium Storage, RemoteApp, SQL Database Update, Live Media Streaming, Search and More

ScottGu's Blog - Scott Guthrie - Thu, 12/11/2014 - 20:14

Today we released a number of great enhancements to Microsoft Azure. These include:

  • Premium Storage: New Premium high-performance Storage for Azure Virtual Machine workloads
  • RemoteApp: General Availability of Azure RemoteApp service
  • SQL Database: Enhancements to Azure SQL Databases
  • Media Services: General Availability of Live Channels for Media Streaming
  • Azure Search: Enhanced management experience, multi-language support and more
  • DocumentDB: Support for Bulk Add Documents and Query Syntax Highlighting
  • Site Recovery: General Availability of disaster recovery to Azure for branch offices and SMB customers
  • Azure Active Directory: General Availability of Azure Active Directory application proxy and password write back support

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

Premium Storage: High-performance Storage for Virtual Machines

I’m excited to announce the public preview of our new Azure Premium Storage offering. With the introduction of the new Premium Storage option, Azure now offers two types of durable storage: Premium Storage and Standard Storage. Premium Storage stores data durably on Solid State Drives (SSDs) and provides high performance, low latency, disk storage with consistent performance delivery guarantees.


Premium Storage is ideal for I/O-sensitive workloads - and is great for database workloads hosted within Virtual Machines.  You can optionally attach several premium storage disks to a single VM, and support up to 32 TB of disk storage per Virtual Machine and drive more than 50,000 IOPS per VM at less than 1 millisecond latency for read operations. This provides a wickedly fast storage option that enables you to run even more workloads in the cloud.

Using Premium Storage, Azure now offers the ability to "lift-and-shift" more demanding enterprise applications to the cloud - including SQL Server, Dynamics AX, Dynamics CRM, Exchange Server, MySQL, Oracle Database, IBM DB2, and SAP Business Suite solutions.

Premium Storage is now available in public preview starting today. To sign up to use the Azure Premium Storage preview, visit the Azure Preview page.

Disk Sizes and Performance

Premium Storage disks provide up to 5,000 IOPS and 200 MB/sec throughput depending on the disk size. When you create a new premium storage disk you get the option to select the disk size and performance characteristics you want based on your application performance and storage capacity needs.  For the public preview, we are offering three Premium Storage disk configurations:

Disk Types             P10          P20          P30
Disk Size              128 GB       512 GB       1 TB
IOPS per Disk          500          2300         5000
Throughput per Disk    100 MB/sec   150 MB/sec   200 MB/sec

You can maximize the performance of your VMs by attaching multiple Premium Storage disks to them (up to the network bandwidth limit available to the VM for disk traffic). To learn the disk bandwidth available for each VM size, see the Virtual Machine and Cloud Service Sizes for Azure documentation.

Durability

Durability of data is of utmost importance for any persistent storage option. Azure customers have critical applications that depend on the persistence of their data and high tolerance against failures. Premium Storage keeps three replicas of data within the same region. In addition, you can also optionally create snapshots of your disks and copy those snapshots to a Standard GRS storage account - which enables you to maintain a geo-redundant snapshot of your data that is stored at least 400 miles away from your primary Azure region.

Learn More

You can learn more about Premium Storage disks here.  To sign up to use Premium Storage, go to the Azure Preview page, and sign up for Microsoft Azure Premium Storage service using your subscription.

RemoteApp: General Availability of Azure RemoteApp

I’m excited to announce the general availability of Azure RemoteApp. Using Azure RemoteApp, you can deploy Windows desktop applications in the cloud, and provide your users with an intuitive, high-fidelity, WAN-ready remote application experience.  Users can use the cloud-hosted Windows applications you enable on their phones, tablets, or PCs - including Windows, Mac, iOS and Android based devices.  We are delivering RemoteApp with a super competitive price - you can host your user's applications in the cloud for just $10/user/month.  With today’s release, Azure RemoteApp is backed by an SLA and supported by Microsoft Support, offering the full scalability and security of the Azure cloud.

Getting Started

Setting up RemoteApp is easy. In the Azure Management Portal, select NEW -> App Services -> RemoteApp -> Quick Create. Pick a name, region, select the scale configuration plan you want to use, pick one of the standard template images, and click OK. When you do this for the first time, your 30-day free trial will also start. This is a fully featured trial, available to all Azure customers.


A RemoteApp instance is an elastic, automatically scaled, collection of Windows Server VMs that are running the Remote Desktop Session Host role and host the applications. The VMs are all created based on the template image you provide. You can provide your own template image containing your custom apps, or use one of the standard template images we provide. One of these is for Office 365 ProPlus, which you can use if you have a subscription that contains the Office 365 ProPlus service:


Once enabled, your users can quickly and easily connect to the applications you host in Azure.  They can use Windows, Mac, iOS and Android devices to connect to the RemoteApp service - enabling you to use Azure to run your Windows desktop applications anywhere in the world, on any device.

Enabling Hybrid Applications

Many business-critical Windows applications rely on on-premises infrastructure such as identity and machine management, and require access to on-premises databases and resources. Azure RemoteApp provides a hybrid deployment model that supports all of these scenarios.

  • Hybrid Management: In a hybrid RemoteApp collection, the VMs which host your applications are joined to your AD domain. Therefore, you can use on-premises management tools like Group Policy, System Center, and many other enterprise management tools that rely on AD membership.

  • Federated Identity: You can use Azure Active Directory integrated with your on-premises AD and your users can logon with their familiar corporate identities. When the Windows application starts, it is running in a fully domain-joined session, with the usual integrated authentication capabilities of a Windows domain.
  • Hybrid Networking: Windows applications in a hybrid RemoteApp collection can seamlessly access on-premises data and resources. This capability is built on Azure Virtual Networking with site-to-site VPN, providing cloud-premise virtual network connectivity. In the future, RemoteApp collections will support the full range of Azure Networking capabilities, including ExpressRoute.

Performance and Scale Configurations

With today's general availability release, we are offering two scale configurations: BASIC and STANDARD.

  • BASIC is intended for lighter, task-worker use cases, such as a single productivity application or a data-entry frontend to a line of business system.
  • STANDARD is intended for typical productivity use cases such as using Outlook, Word, Excel and other knowledge worker line of business and productivity applications.

You can select the scale configuration for your RemoteApp collection while creating it. If you want to change it later, you can do so using the SCALE tab. Your applications and settings and your user data remain intact through this change.

Pricing

We are making the RemoteApp service available at a very attractive, affordable price.  You can host a line of business Windows application for as little as $10/user per month using the BASIC configuration.

At the STANDARD level, you can host your users’ entire productivity workspace for just $15/user per month.

Learn More

A variety of resources are available on the RemoteApp overview page. You can also download the client for your device and take a test drive. Finally, the RDV Team blog discusses today’s new features in more detail.

SQL Databases: Now with SQL 2014 Features and Compatibility

Today we are making available a preview of the next-generation release of our Azure SQL Database service.  We announced some of the preview's new features earlier in November.  Today's release delivers near-complete SQL Server 2014 engine compatibility and even better performance.

Our internal benchmark tests (using over 600 million rows of data) show query performance improvements of around 5x with today's preview relative to our existing Premium Tier SQL Database offering and up to 100x performance improvements when using the new In-memory columnstore technology now supported with today's preview release.

Lots of great new features and improvements

Key highlights of today's preview include:

  • Better management of large databases. We now support heavier database workload management with parallel queries, table partitioning, online indexing, worry-free large index rebuilds with the previous 2GB size limit removed, and more alter database commands.

  • Support for more programmability capabilities: You can now build even more robust applications with CLR, T-SQL Windows functions, XML index, and change tracking support.

  • Up to 100x performance improvements with support for In-memory columnstore queries for data mart and analytic workloads.

  • Improved monitoring and troubleshooting: Extended Events (XEvents) and visibility into over 100 new table views via an expanded set of Dynamic Management Views (DMVs).

  • New S3 performance level: Today's preview introduces a new pricing option for SQL Databases. The new "S3" performance tier delivers 100 DTU of performance (twice the DTU level of the existing S2 tier) and all of the features available in the Standard tier. It enables an even more cost effective way to run applications with higher performance needs.

Learn More and Get Started

You can read more about the enhancements in today's preview on the preview getting started page.  To use today's preview, you can navigate to the SETTINGS part on the SQL Database blade in the Azure Preview Portal and upgrade to use the preview.


Try out the new features and give us your feedback!

Media Services: General Availability of Live Media Streaming

I’m very excited to announce the General Availability of Live Channels Media Streaming support.  Live Channels with Azure Media Services is the live services backbone that broadcasters such as NBC Sports have used to deliver live online media streamed events such as the English Premier League, NHL hockey, and Sunday Night Football.  A dozen international broadcasters also used it to seamlessly deliver live media streaming coverage of the 2014 Sochi Winter Olympics and 2014 FIFA World Cup.

You can now use Live Channels to stream events of your own - and scale to literally millions of users watching them.  Today's general availability release is backed by an enterprise-grade Service-Level Agreement (SLA) for all customers. 

Learn More

For more information on functionality and pricing, visit the Getting Started with Live Streaming blog post, the Media Services webpage or Media Services Pricing webpage, or the Live Streaming MSDN section.

Search: Portal Management, Multi-language support

I am happy to announce a number of highly requested features available today in Azure Search.  Azure Search provides developers with all of the features needed to build out search experiences for web and mobile applications without having to deal with the typical complexities that come with managing, tuning and scaling a real-world search service.

Azure Portal Enhancements

Helping developers set up and manage their services quickly and easily is a key goal of the Azure Management Portal. Today's release adds several new capabilities to the Azure Search support in the portal that make it even easier to get started with Azure Search and reduce the need to write code.

For example, you can now easily create a new index. In the portal, you can name the search index, set all of the fields, and assign the properties for each of them - all without writing any code:


Once you create the index, you can also now drill into usage details like document counts and storage size. You can see all of the fields associated with this index as shown below:


Index tuning is another enhancement now supported in the portal user interface. Boosting relevant items not only provides a better search experience, it also helps you achieve business objectives. For example, if you are searching a product index, you might want to boost documents where the result was found in the product name, as opposed to another document where the result was found in the product description. Or you may wish to use a scoring function that allows you to boost items that have high star ratings or that provide higher margins.

The task of tuning an index was previously only available through the API. Starting today, using the Azure Preview portal you can create or alter scoring profiles, instantly tuning the results of your search queries without having to write a line of code:

Multilanguage Support across 27 Languages

With today’s release, Azure Search now has support for 27 languages. This allows Azure Search to accommodate the unique characteristics of a given language, enabling word-breaking and text normalization to work as expected for each language. Part of this enhancement includes support for stemming in the relevant languages, reducing words to their word stems. For example, you can search for the word “runs” and find documents that say “run” or “running”.

When creating an index you can choose to include content from multiple languages, allowing you to search and return results based on the chosen language of your user. For more information, you can visit the Language Support page. Over time, we will continue to enhance multi-language support to include additional languages.

API features

Last but not least, we’ve introduced a new Azure Search Management REST API that allows you to perform common administrative tasks, such as creating new services, and scaling services to allow for additional storage or higher query-per-second rates. You can see a sample of how to use this Management API at CodePlex.

DocumentDB: Bulk Add Documents and Syntax Highlighting

DocumentDB is a NoSQL document database service designed for scalable and high performance modern applications.  DocumentDB is delivered as a fully managed service (meaning you don’t have to manage any infrastructure or VMs yourself) with an enterprise grade SLA.

We now support some nice new capabilities for Document DB in the Azure Preview Portal:

  • Add Documents: Upload existing JSON documents via Document Explorer
  • Query syntax highlighting: highlighting for DocumentDB SQL queries in Query Explorer

These features make it even easier to get started and explore DocumentDB.

Add Documents Support within the Azure Portal

The DocumentDB Document Explorer within the Azure Preview Portal now supports the uploading of existing JSON documents - which makes it easy to import and start using existing data stored in existing JSON files. Simply open Document Explorer and click the Add Document command:


In the new blade that opens, click the browse button to open a file explorer and select 1 or more JSON documents to upload. Note that Document Explorer currently supports up to 100 JSON document files in a single upload operation.


Once you’re satisfied with your selection, click the upload button. The documents will automatically be added to the Document Explorer grid and aggregate results will be displayed as the upload operation is in progress.


Once the operation has completed, you can select up to another 100 documents to upload without having to close the Add Document blade.  This makes it easy to import data into your DocumentDB databases.

Query Explorer – Syntax Highlighting

We’ve also enabled basic keyword and value highlighting within Query Explorer.


This makes it even easier to experiment and test queries against your NoSQL databases.

Please send us your feedback and suggestions on the Microsoft Azure DocumentDB feedback forum. If you haven’t tried DocumentDB yet, you can learn more about how to get started here.

Disaster Recovery: GA of DR for Branch Offices & SMB Customers

I’m excited to announce the General Availability of the Disaster Recovery (DR) to Azure for Branch offices and SMB feature in our Azure Site Recovery (ASR) service.  Today's new support enables consistent replication, protection, and recovery of Virtual Machines directly in Microsoft Azure.  With this new support we have extended the Azure Site Recovery service to become a simple, reliable & cost effective DR Solution for enabling Virtual Machine replication and recovery between Windows Server 2012 R2 and Microsoft Azure without having to deploy a System Center Virtual Machine Manager on your primary site.

These features build on top of the Hyper-V Replica technology available in Windows Server 2012 R2 and Microsoft Azure to provide remote health monitoring, no-impact recovery plan testing and single-click orchestrated recovery – all of this backed by an enterprise-grade SLA.

Verify DR Plans with Confidence

The Test Failover feature within Azure Site Recovery allows you to test your disaster recovery plans without impacting your production workload which ensures that you can perform periodic DR drills to meet your compliance objectives. You can connect to the virtual machine running in Azure via RDP after enabling the appropriate endpoints for the virtual machine running in Azure.

A Planned Failover will do a shutdown of your on-premises machine, transfer all the last changes inside the virtual machine to Azure & then bring up an instance of the VM in Azure without any data loss. An Unplanned Failover is usually triggered when your on-premises site has been hit by an unexpected disaster.

If you are looking to fail over a multi-virtual machine application, you can do so using the One-Click Orchestration using Recovery Plans feature available in Azure Site Recovery. Recovery plans make failover and failback from Azure easy and ensure that you meet your organization's Recovery Time Objective (RTO) goals.

Check out additional pricing or product information about Azure Site Recovery, and sign up for a free Azure trial and start using it today.

Active Directory: GA of Application Proxy and Password Writeback support

Today's Azure update includes some great updates to Azure Active Directory.

Azure Active Directory Application Proxy

The Azure Active Directory Application Proxy allows you to make your on-premises web applications securely accessible to users who want to use them from the cloud - and enables you to authenticate access to them using Azure AD.

You can do this without changing your applications and without having to change your DMZ configuration. Just install a lightweight connector anywhere on your network and configure access to the application in your Azure Active Directory, and you can make your SharePoint, Outlook Web Access (or any other Web application that relies on Kerberos) available to users outside your corporate network.


With today's release we added support for Kerberos Constrained Delegation. Now, once a user has authenticated to Azure Active Directory, the Azure Active Directory Application Proxy can automatically authenticate users to your on-premises application.

Password Writeback for Azure Active Directory Premium Customers

With the new Password Writeback support in Azure AD Sync, you can now configure your Active Directory system so that any time a user or administrator changes a password in Azure AD, the new password is also written back to your on-premises Active Directory as well. So, for example, when a user forgets their password to your on-premises AD, they can reset their password using the Azure AD password reset service we provide in the cloud, and then use their new password to sign on to your on-premises AD.  This makes it easier for organizations to enable a variety of self-service IT and password reset features to their employees and partners.

Preview of security questions for password reset

With today's release we’re also introducing preview support that enables you to configure security questions for password reset scenarios. This enables users to register their answers to secret questions, and then use those answers to prove who they are when they go to reset a forgotten password.

Add your own password SSO for SaaS applications

With today's release we are introducing the preview of functionality that lets you configure password-based single sign-on for any web application that has an HTML sign-in page, even for applications that are not in the Azure AD Application Gallery. You can also add any links to your users’ Azure AD Access Panel, such as deep links to specific SharePoint pages, or to web apps that use Active Directory Federation Services.

More Ways to Get AD Premium

We now support the ability to purchase Azure Active Directory Premium online at the Office 365 commerce catalogue, where you can purchase Azure AD Premium licenses for as many users as you want.  You can then easily manage your Azure Active Directory by navigating to http://aka.ms/accessAAD or through the Office administration portal.

To learn more about these new capabilities and how you can start using them, read Alex Simons’ post on the Active Directory Team Blog.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

R: data.table/dplyr/lubridate – Error in wday(date, label = TRUE, abbr = FALSE) : unused arguments (label = TRUE, abbr = FALSE)

Mark Needham - Thu, 12/11/2014 - 20:03

I spent a couple of hours playing around with data.table this evening and tried changing some code written using a data frame to use a data table instead.

I started off by building a data frame which contains all the weekends between 2010 and 2015…

> library(lubridate)
 
> library(dplyr)
 
> dates = data.frame(date = seq( dmy("01-01-2010"), to=dmy("01-01-2015"), by="day" ))
 
> dates = dates %>% filter(wday(date, label = TRUE, abbr = FALSE) %in% c("Saturday", "Sunday"))

…which works fine:

> dates %>% head()
         date
1: 2010-01-02
2: 2010-01-03
3: 2010-01-09
4: 2010-01-10
5: 2010-01-16
6: 2010-01-17

I then tried to change the code to use a data table instead which led to the following error:

> library(data.table)
 
> dates = data.table(date = seq( dmy("01-01-2010"), to=dmy("01-01-2015"), by="day" ))
 
> dates = dates %>% filter(wday(date, label = TRUE, abbr = FALSE) %in% c("Saturday", "Sunday"))
Error in wday(date, label = TRUE, abbr = FALSE) : 
  unused arguments (label = TRUE, abbr = FALSE)

I wasn’t sure what was going on so I went back to the data frame version to check if that still worked…

> dates = data.frame(date = seq( dmy("01-01-2010"), to=dmy("01-01-2015"), by="day" ))
 
> dates = dates %>% filter(wday(date, label = TRUE, abbr = FALSE) %in% c("Saturday", "Sunday"))
Error in wday(c(1262304000, 1262390400, 1262476800, 1262563200, 1262649600,  : 
  unused arguments (label = TRUE, abbr = FALSE)

…except it now didn’t work either! I decided to check what wday was referring to…

Help on topic ‘wday’ was found in the following packages:

Integer based date class
(in package data.table in library /Library/Frameworks/R.framework/Versions/3.1/Resources/library)
Get/set days component of a date-time.
(in package lubridate in library /Library/Frameworks/R.framework/Versions/3.1/Resources/library)

…and realised that data.table has its own wday function – I’d been caught out by R’s global scoping of all the things!

We can probably work around that by changing the order in which we load the various libraries, but for now I’m just prefixing the call to wday and all is well:

dates = dates %>% filter(lubridate::wday(date, label = TRUE, abbr = FALSE) %in% c("Saturday", "Sunday"))
Categories: Programming

Building a scalable geofencing API on Google’s App Engine

Google Code Blog - Thu, 12/11/2014 - 19:04
Thorsten Schaeff has been studying Computer Science and Media at the Media University in Stuttgart and the Institute of Art, Design and Technology in Dublin. For the past six months he’s been interning with the Maps for Work Team in London, researching best practice architectures for working with big geo data in the cloud.

Google’s Cloud Platform offers a great set of tools for creating easily scalable applications in the cloud. In this post, I’ll explore some of the special challenges of working with geospatial data in a cloud environment, and how Google’s Cloud can help. I’ve found that there aren’t many options to do this, especially when dealing with complicated operations like geofencing with multiple complex polygons. You can find the complete code for my approach on GitHub.

Geofencing is the procedure of identifying if a location lies within a certain fence, e.g. neighborhood boundaries, school attendance zones or even the outline of a shop in a mall. It’s particularly useful in mobile applications that need to apply this extra context to someone’s exact location. This process isn’t actually as straightforward as you’d hope: depending on the complexity of your fences, it can involve some intense calculations, and if your app gets a lot of use, you need to make sure this doesn’t impact performance.

To simplify this problem, this blog post outlines the process of creating a scalable but affordable geofencing API on Google’s App Engine.

And the best part? It’s completely free to start playing around.
Geofencing a route through NYC against 342 different polygons that resulted from converting the NYC neighbourhood tabulation areas into single-part polygons.

Getting started

To kick things off you can work through the Java Backend API Tutorial. This uses Apache Maven to manage and build the project.

If you want to dive right in you can download the finished geofencing API from my GitHub account.

The Architecture

The requirements are:

  • Storing complex fences (represented as polygons) and some metadata like a name and a description. For this I use Cloud Datastore, a scalable, fully managed, schemaless database for storing non-relational data. You can even use this to store and serve GeoJSON to your frontend.
  • Indexing these polygons for fast querying in a spatial index. I use an STR-Tree (part of JTS) which I store as a Java Object in memcache for fast access.
  • Serving results to multiple platforms through HTTP requests. To achieve this I use Google Cloud Endpoints, a set of tools and libraries that allow you to generate APIs and client libraries.

That’s all you need to get started - so let’s start cooking!

Creating the Project

To set up the project simply use Apache Maven and follow the instructions here. This creates the correct folder structure, sets up the routing in the web.xml file for use with Google’s API Explorer and creates a Java file with a sample endpoint.

Adding additional Libraries

I’m using the Java Topology Suite to find out which polygon a certain latitude-longitude-pair is in. JTS is an open source library that offers a nice set of geometric functions.

To include this library into your build path you simply add the JTS Maven dependency to the pom.xml file in your project’s root directory.

In addition I’m using the GSON library to handle JSON within the Java backend. You can basically use any JSON library you want to. If you want to use GSON, import this dependency.

Writing your Endpoints

Adding Fences to Cloud Datastore

For the sake of convenience you’re only storing the fences’ vertices and some metadata. To send and receive data through the endpoints you need an object model which you need to create in a little Java Bean called MyFence.java.


Now you need to create an endpoint called add. This endpoint expects a string for the group name, a boolean indicating whether to rebuild the spatial index, and a JSON object representing the fence’s object model. From this App Engine creates a new fence and writes it to Cloud Datastore.
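
The gist embedded in the original post isn’t reproduced here, but a minimal sketch of such a bean and add endpoint, assuming Cloud Endpoints annotations, GSON and the Datastore low-level API, could look roughly like this (class, property and entity names are my own placeholders, not necessarily those used in the GitHub project):

import com.google.api.server.spi.config.Api;
import com.google.api.server.spi.config.ApiMethod;
import com.google.api.server.spi.config.Named;
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.Text;
import com.google.gson.Gson;

// The little bean holding a fence's metadata and vertices.
class MyFence {
    private String name;
    private String description;
    private double[][] vertices; // [[lng, lat], ...]

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getDescription() { return description; }
    public void setDescription(String description) { this.description = description; }
    public double[][] getVertices() { return vertices; }
    public void setVertices(double[][] vertices) { this.vertices = vertices; }
}

@Api(name = "geofencing", version = "v1")
public class FenceEndpoint {

    // Stores a fence in Cloud Datastore under the given group; rebuilding the
    // spatial index is covered further down.
    @ApiMethod(name = "fences.add", httpMethod = ApiMethod.HttpMethod.POST)
    public void add(@Named("group") String group,
                    @Named("rebuildIndex") boolean rebuildIndex,
                    MyFence fence) {
        Entity entity = new Entity("Fence", group + "/" + fence.getName());
        entity.setProperty("group", group);
        entity.setProperty("description", fence.getDescription());
        // Store the vertices as JSON; Text avoids the length limit on plain string properties.
        entity.setProperty("vertices", new Text(new Gson().toJson(fence.getVertices())));

        DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
        datastore.put(entity);

        // if (rebuildIndex) { rebuild the STR-tree and push it to memcache (see below) }
    }
}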

Retrieving a List of our Fences

For some use cases it makes sense to fetch all the fences at once in the beginning, so you want to have an endpoint that lists all fences from a certain group.

Cloud Datastore uses internal indexes to speed up queries. If you deploy the API directly to App Engine you’re probably going to get an error message, saying that the Datastore query you’re using needs an index. The App Engine Development server can auto-generate the indexes, therefore I’d recommend testing all your endpoints on the development server before pushing it to App Engine.

Getting a Fence’s Metadata by its ID

When querying the fences you only return the ids of the fences in the result, so you need an endpoint to retrieve the metadata that corresponds to a fence’s id.

Building the Spatial Index

To speed up the geofencing queries you put the fences in an STR tree. The JTS library does most of the heavy lifting here, so you only need to fetch all your fences from the Datastore, create a polygon object for each one and add the polygon’s bounding box to the index.

You then build the index and write it to memcache for fast read access.
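
A rough sketch of that index-building step, using JTS’s STRtree and App Engine memcache (the entity layout matches the hypothetical add endpoint sketched above, and the memcache key is an assumption):

import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.Query;
import com.google.appengine.api.datastore.Text;
import com.google.appengine.api.memcache.MemcacheServiceFactory;
import com.google.gson.Gson;
import com.vividsolutions.jts.geom.Coordinate;
import com.vividsolutions.jts.geom.GeometryFactory;
import com.vividsolutions.jts.geom.Polygon;
import com.vividsolutions.jts.index.strtree.STRtree;

public class FenceIndex {

    // Fetches all fences of a group from the Datastore, indexes each polygon by
    // its bounding box in an STR-tree and caches the tree in memcache.
    public static STRtree rebuild(String group) {
        GeometryFactory factory = new GeometryFactory();
        Gson gson = new Gson();
        STRtree index = new STRtree();

        Query query = new Query("Fence").setFilter(
                new Query.FilterPredicate("group", Query.FilterOperator.EQUAL, group));
        for (Entity entity : DatastoreServiceFactory.getDatastoreService()
                .prepare(query).asIterable()) {
            double[][] vertices = gson.fromJson(
                    ((Text) entity.getProperty("vertices")).getValue(), double[][].class);

            // Vertices are assumed to form a closed ring (first point repeated at the end).
            Coordinate[] ring = new Coordinate[vertices.length];
            for (int i = 0; i < vertices.length; i++) {
                ring[i] = new Coordinate(vertices[i][0], vertices[i][1]);
            }
            Polygon polygon = factory.createPolygon(factory.createLinearRing(ring), null);
            polygon.setUserData(entity.getKey().getName()); // remember which fence this is

            index.insert(polygon.getEnvelopeInternal(), polygon); // index by bounding box
        }

        index.build(); // freeze the tree so it can be queried
        MemcacheServiceFactory.getMemcacheService().put("fence-index-" + group, index);
        return index;
    }
}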

Testing a point

You want to have an endpoint to test any latitude-longitude-pair against all your fences. This is the actual geofencing part. That way you will know whether the point falls into any of your fences, and if so, the endpoint should return the ids of the fences the point is in.

For this you first need to retrieve the index from memcache. Then query the index with the bounding box of the point which returns a list of polygons. Since the index only tests against the bounding boxes of the polygons, you need to iterate through the list and test if the point actually lies within the polygon.
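
In code, that two-step test (a coarse bounding-box query against the index, then an exact point-in-polygon check) could look roughly like this, reusing the cached index from the sketch above (again an illustrative assumption, not the project’s exact code):

import java.util.ArrayList;
import java.util.List;

import com.google.appengine.api.memcache.MemcacheServiceFactory;
import com.vividsolutions.jts.geom.Coordinate;
import com.vividsolutions.jts.geom.GeometryFactory;
import com.vividsolutions.jts.geom.Point;
import com.vividsolutions.jts.geom.Polygon;
import com.vividsolutions.jts.index.strtree.STRtree;

public class PointQuery {

    // Returns the ids of all fences in the group that contain the given location.
    public static List<String> fenceIdsContaining(String group, double lng, double lat) {
        STRtree index = (STRtree) MemcacheServiceFactory.getMemcacheService()
                .get("fence-index-" + group);
        if (index == null) {
            index = FenceIndex.rebuild(group); // cache miss: rebuild from the Datastore
        }

        Point point = new GeometryFactory().createPoint(new Coordinate(lng, lat));
        List<String> hits = new ArrayList<>();

        // Step 1: candidates whose bounding boxes contain the point.
        for (Object candidate : index.query(point.getEnvelopeInternal())) {
            Polygon polygon = (Polygon) candidate;
            // Step 2: exact containment test against the actual polygon.
            if (polygon.contains(point)) {
                hits.add((String) polygon.getUserData());
            }
        }
        return hits;
    }
}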


Querying for Polylines or Polygons

The process of testing for a point can easily be adapted to test the fences against polylines and polygons. In the case of polylines you query the index with the polyline’s bounding box and then test if the polyline actually crosses the returned fences.


When testing for a polygon you want to get back all fences that are either completely or partly contained in the polygon. Therefore you test if the returned fences are within the polygon or are not disjoint. For some use cases you only want to return fences that are completely contained within the polygon. In that case you want to delete the not disjoint test in the if clause.

Testing & Deploying to App Engine

To test or deploy your API simply follow the steps in the ‘Using Apache Maven’ tutorial.

Scalability & Pricing

The beauty of this is, since it’s running on App Engine, Google’s platform as a service offering, it scales automatically and you only pay for what you use.

If you want to ensure the best performance and great scalability, you should consider switching from the free, shared memcache to a dedicated memcache for your application. This guarantees enough capacity for your spatial index and therefore ensures enough space even for a large number of complex fences.

That’s it - that’s all you need to create a fast and scalable geofencing API.
Preview: Processing Big Spatial Data in the Cloud with Dataflow
In my next post I will show you how I geofenced more than 340 million NYC Taxi locations in the cloud using Google’s new Big Data tool called Cloud Dataflow.
Categories: Programming

Revealing Invisible Requirements

Software Requirements Blog - Seilevel.com - Thu, 12/11/2014 - 17:00
This blog post was written with Karl Wiegers based on our Software Requirements, 3rd Edition book. No matter how thorough a job you do on requirements elicitation, there is no way to be certain that you have found them all. No little green light comes on to announce “You’re done!” You should always plan on new […]
Categories: Requirements

Win Big, Lose Small

Making the Complex Simple - John Sonmez - Thu, 12/11/2014 - 16:00

In this video I talk about the idea of limiting the damage on your bad days and making your good days count the most. In life, you can learn to win big and lose small, you’ll always make forward progress. This is especially helpful if you are trying to lose weight or follow a diet. You’ll eventually mess up, but ... Read More

The post Win Big, Lose Small appeared first on Simple Programmer.

Categories: Programming

Watch Face API Now Available for Android Wear

Android Developers Blog - Thu, 12/11/2014 - 07:08

Posted by Wayne Piekarski, Developer Advocate

We’re pleased to announce that the official Android Wear Watch Face API is now available for developers. Watch faces give users even more ways to express their personal style, while creating an opportunity for developers to customize the most prominent UI feature of the watches. Watch faces have been the most requested feature from users and developers alike, and we can’t wait to see what you build for them.

An Introduction to Watch Faces for Android Wear by Timothy Jordan

Design and development

To get started, first learn about Designing Watch Faces, and then check out the Creating Watch Faces training class. The WatchFace Sample available online and in the Android Studio samples manager also provides a number of different examples to help you jump right in. For a quick overview, you can also watch the Watch Faces for Android Wear DevByte video above.

Watch faces are services that run from your wearable app, so you can provide one or multiple watch faces with a single app install. You can also choose to have configuration activities on the phone or watch, for example to let a user change between 12 and 24-hour time, or to change the watch face’s background. You can use OpenGL to provide smooth graphics, and a background service to pull in useful data like weather and calendar events. Watch faces can be analog, or digital, or display the time in some new way that hasn’t been invented yet––it’s up to you.
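
Stripped to its essentials, a watch face built with the official API is just such a service. A minimal, hedged sketch using the wearable support library's CanvasWatchFaceService might look like this (the official samples and training class are the authoritative reference; the service also needs the usual watch face declarations in the manifest):

import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Rect;
import android.support.wearable.watchface.CanvasWatchFaceService;
import android.text.format.Time;
import android.view.SurfaceHolder;

public class SketchWatchFaceService extends CanvasWatchFaceService {

    @Override
    public Engine onCreateEngine() {
        return new Engine();
    }

    private class Engine extends CanvasWatchFaceService.Engine {
        private final Paint textPaint = new Paint();
        private final Time time = new Time();

        @Override
        public void onCreate(SurfaceHolder holder) {
            super.onCreate(holder);
            textPaint.setColor(Color.WHITE);
            textPaint.setTextSize(40f);
            textPaint.setAntiAlias(true);
        }

        @Override
        public void onTimeTick() {
            super.onTimeTick();
            invalidate(); // the system ticks at least once a minute in ambient mode
        }

        @Override
        public void onAmbientModeChanged(boolean inAmbientMode) {
            super.onAmbientModeChanged(inAmbientMode);
            textPaint.setAntiAlias(!inAmbientMode); // draw more simply in ambient mode
            invalidate();
        }

        @Override
        public void onDraw(Canvas canvas, Rect bounds) {
            time.setToNow();
            canvas.drawColor(Color.BLACK);
            canvas.drawText(time.format("%H:%M"),
                    bounds.centerX() - 50, bounds.centerY(), textPaint);
        }
    }
}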

Updates to existing devices

Over the next week, the latest release of Android Wear, based on Android 5.0 and implementing API 21, will roll out to users. All Android Wear devices will be updated to Android 5.0 via an over-the-air (OTA) update. The update allows users to manage and configure watch faces in the Android Wear app on their phone, and install watch faces from Google Play. Any handheld device running Android 4.3 or later will continue to work with all Android Wear devices.

Upgrade your watch faces

Developers are incredibly resourceful and we’re impressed with the watch faces you were able to create without any documentation at all. If you’ve already built a watch face for Android Wear using an unofficial approach, you should migrate your apps to the official API. The official API ensures a consistent user experience across the platform, while giving you additional information and controls, such as letting you know when the watch enters ambient mode, allowing you to adjust the position of system UI elements, and more. Using the new API is also necessary for your app to be featured in the Watch Faces collection on Google Play.

Deployment of watch faces to Google Play

We recommend you update your apps on Google Play as soon as the Android Wear 5.0 API 21 OTA rollout is complete, which we’ll announce on the Android Wear Developers Google+ community. It’s important to wait until the OTA rollout is complete because a Watch Face requiring API 21 will not be visible on a watch running API 20. Once your user gets the OTA, then the watch face will become visible. If you want to immediately launch your updates during the OTA rollout, make sure you set minSdkVersion to 20 in your wearable app, otherwise the app will fail to install for pre-OTA users. Once the rollout is complete, please transition your existing watch faces to the new API by January 31, 2015, at which point we plan to remove support for watch faces that don't use the official API.

Android Wear apps on Google Play

Starting today, you can submit any of your apps for designation as Android Wear apps on Google Play by following the Distributing to Android Wear guidelines. If your apps follow the criteria in the Wear App Quality checklist and are accepted as Wear apps on Play, it will be easier for Android Wear users to discover them. To opt-in for Android Wear review, visit the Pricing & Distribution section of the Google Play Developer Console.

In the few short months since we’ve launched Android Wear, developers have already written thousands of apps, taking advantage of custom notifications, voice actions, and fully native Android capabilities. Thanks to you, users have infinite ways to personalize their watches, choosing from six devices, a range of watch bands, and thousands of apps. With support for custom watch faces launching today, users will have even more choices in the future. These choices are at the heart of a rich Android Wear ecosystem and as we continue to open up core features of the platform to developers, we can’t wait to see what you build next.

Join the discussion on

+Android Developers
Categories: Programming

Agile Metrics: Differences Based On Organizational Levels

Metrics Are About Prediction

There are three reasons to measure. The first is to guide specific behaviors. The second is to provide information on the status of a process. And the third is as a tool to help predict the future. At a team level it is easy to take a very narrow view of metrics and measurement; however, the organization is another significant stakeholder in the collection and consumption of metrics information. Teams and other organizational stakeholders have different metrics needs for each of the three basic reasons for measuring.  Part of maturing as an Agile organization is the development of a common understanding of metrics needs that includes the differences between groups.  Reaching a common understanding is a step toward developing the mechanisms to accommodate all of the relevant metrics needs within the organization.

 

Reason to Measure: Guide Behaviors

Agile Team Perspective: The goal of metrics and tools at a team level is to support tactical behaviors focused on the delivery of the functionality the team has committed to delivering. Metrics can be delivered with tools such as card walls (the simple metric of a card moving across the board), burn-down charts or story completion charts. These tools (also known as information radiators) provide information that teams generally find useful for guiding behaviors such as swarming, collaboration and continuous re-planning.

Organizational Perspective: The goal of measurement that guides behavior at the organizational level is to reinforce desired overall Agile behavior. The metrics needed to support and reinforce Agile behavior will evolve as an organization completes its Agile transformation. Examples of organizational metrics that guide behavior include skills/capabilities tracking (gamification, a mechanism that leverages the competitive attributes of the target audience). As the transformation matures, measurement against Agile Maturity Models can be leveraged to guide behavior.

Reason to Measure: Provide Status

Agile Team Perspective: The team shares status on a daily basis during the stand-up/Scrum meeting while leveraging tools like the card wall and burn-down charts as metrics and information radiators.

Organizational Perspective: The burn-down chart provides team-level status information that can be shared across multiple layers of the organizational hierarchy; however, team-level data tends to be seen as too granular as projects morph into programs and status is passed up the organizational hierarchy. Program-level burn-up charts and story maps provide quantifiable measurement feedback that is accessible to senior leaders.

Reason to Measure: Predict Future

Agile Team Perspective: Scrum and Scrumban teams need to be able to see the work in front of them to understand how to plan on a short-, medium- and long-term basis. Tools like burn-down charts (short term), burn-up charts (program-level view), story maps and product roadmaps (both long-term) provide a quantified view of progress.

Organizational Perspective: Organizations need to develop tactical and strategic plans that are supported by software functionality. Portfolio metrics and information radiators (story maps and product roadmaps) leverage naturally occurring data from project performance.

Different stakeholders have different measurement needs and perspectives.  Occasionally there is a suggestion that the only measurement data that Agile should generate is what the team needs.  While teams and other organizational stakeholders, such as product, IT and executive management, can (and should) use similar tools, organizational data needs extend to being able to monitor and guide the Agile transformation and other process improvement efforts. Those needs will require everyone involved to collect a wider range of data and generate different metrics.


Categories: Process Management

Google Cardboard: Seriously Fun

Google Code Blog - Wed, 12/10/2014 - 19:42
As simple as they are, cardboard boxes are pretty great. Maybe you transformed one into a fort or castle growing up. Or maybe your kids took last week’s package delivery and turned the box into a puppet theater. The best part about cardboard is that it can become anything—all you need is your imagination.

It’s this same spirit that inspired our team to turn a smartphone, and some cardboard, into a virtual reality (VR) viewer earlier this year. Suddenly, exploring the Palace of Versailles was as easy as opening an app. And the response was kind of delightful.

We’ve been working to improve Google Cardboard ever since. And today—with more than half a million Cardboard viewers in people’s hands—we've got a fresh round of updates for users, developers, and makers.
For users: more apps to enjoy, and more places to buy

There are now dozens of Cardboard-compatible apps on Google Play, and starting today we’re dedicating a new collection page to some of our favorites. These VR experiences range from test drives to live concerts to fully-immersive games, and they all have something amazing to offer. So give ‘em a try today, and download the new Cardboard app to watch the collection grow over time.
Example apps for Cardboard (clockwise, from top left):
Paul McCartney concert, Volvo Reality test drive, Proton Pulse 3D game

If you don’t have a Cardboard viewer yet, you can now pick one up from DODOcase, I Am Cardboard, Knoxlabs, and Unofficial Cardboard. And of course you can always build your own (with new specs below!).

For developers: SDKs for Android and Unity

If you’ve ever tried creating a VR application, then you’ve probably wrestled with issues like lens distortion correction, head tracking, and side-by-side rendering. It’s important to get these things right, but they can suck up all your time—time you’d rather spend on gameplay or graphics.

We want to give you that time back, so today we’re introducing Cardboard SDKs for Android and Unity. The SDKs simplify common VR development tasks so you can focus on your awesome, immersive app. And with both Android and Unity support, you can use the tools you already know and love. Download the SDKs today, and check out apps like Caaaaardboard! and Tilt Brush Gallery to see what’s already possible.

For makers: tool-specific specs, and custom viewer calibration

To help bring VR experiences to everyone, we open sourced a Cardboard viewer specification earlier this year. Since then we’ve seen all sorts of viewers from all sorts of makers, and today we’re investing in this community even further.

For starters, we’re publishing new building specs with specific cutting tools in mind. So whether you’re laser- or die-cutting your Cardboard viewers in high quantities, or carving single units with a blade, we’ve got you covered.

Once you’ve got your custom viewer, we also want to help you tailor the viewing experience to its unique optical layout. So early next year we’ll be adding a viewer calibration tool to the Cardboard SDK. You’ll be able to define your viewer’s base and focal length, for example, then have every Cardboard app adjust accordingly.

For the future: watch this space, and we’re hiring

The growth of mobile and the acceleration of open platforms like Android make it an especially exciting time for VR. There are more devices and more enthusiastic developers than ever before, and we can’t wait to see what’s next! We’re also working on a few projects ourselves, so if you’re passionate about VR, you should know we’re hiring.

Here’s to the cardboard box, and all the awesome it brings.

by Andrew Nartker, Product Manager, Google Cardboard
Categories: Programming

Finite State Machine Compiler

Phil Trelford's Array - Wed, 12/10/2014 - 18:22

Yesterday I noticed a tweet recommending an “Uncle” Bob Martin video that is intended to “demystify compilers”. I haven’t seen the video (Episode 29, SMC Parser) as it’s behind a paywall, but I did find a link on the page to a GitHub repository with a hand-rolled parser written in vanilla Java, plus a transformation step that compiles it out to either Java or C code.


The parser implementation is quite involved with a number of files covering both lexing and parsing:

I found the Java code clean but a little hard to follow, so I tried to reverse-engineer the state machine syntax from the BNF (found in comments) and some examples (found in unit tests):

Actions: Turnstile
FSM: OneCoinTurnstile
Initial: Locked
{
Locked Coin Unlocked {alarmOff unlock}
Locked Pass Locked  alarmOn
Unlocked Coin Unlocked thankyou
Unlocked Pass Locked lock
}

It reminded me a little of the state machine sample in Martin Fowler’s Domain-Specific Language book, which I recreated in F# a few years back as both internal and external DSLs.

On the train home I thought it would be fun to knock up a parser for the state machine in F# using FParsec, a parser combinator library. Parser combinators mix the lexing and parsing stages, and make light work of parsing. One of the many things I like about FParsec is that you get pretty good error messages for free from the library.

Finite State Machine Parser

I used F#’s discriminated unions to describe the AST, which quite closely resembles the core of Uncle Bob’s BNF:

type Name = string
type Event = Name
type State = Name
type Action = Name
type Transition = { OldState:State; Event:Event; NewState:State; Actions:Action list }
type Header = Header of Name * Name

The parser, using FParsec, turned out to be pretty short (less than 40 loc):

open FParsec

let pname = many1SatisfyL isLetter "name"

let pheader = pname .>> pstring ":" .>> spaces1 .>>. pname .>> spaces |>> Header

let pstate = pname .>> spaces1
let pevent = pname .>> spaces1
let paction = pname

let pactions = 
   paction |>> fun action -> [action]
   <|> between (pstring "{") (pstring "}") (many (paction .>> spaces))

let psubtransition =
   pipe3 pevent pstate pactions (fun ev ns act -> ev,ns,act)

let ptransition1 =
   pstate .>>. psubtransition
   |>> fun (os,(ev,ns,act)) -> [{OldState=os;Event=ev;NewState=ns;Actions=act}]

let ptransitionGroup =
   let psub = spaces >>. psubtransition .>> spaces
   pstate .>>. (between (pstring "{") (pstring "}") (many1 psub))
   |>> fun (os,subs) -> 
      [for (ev,ns,act) in subs -> {OldState=os;Event=ev;NewState=ns;Actions=act}]

let ptransitions =
   let ptrans = attempt ptransition1 <|> ptransitionGroup
   between (pstring "{") (pstring "}") (many (spaces >>. ptrans .>> spaces))
   |>> fun trans -> List.collect id trans

let pfsm =
   spaces >>. many pheader .>>. ptransitions .>> spaces

let parse code =
   match run pfsm code with
   | Success(result,_,_) -> result
   | Failure(msg,_,_) -> failwith msg

You can try the parser snippet out directly in F#’s interactive window.
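For completeness, here is how the turnstile definition from the top of the post can be run through the parser above. This is a quick sketch: the values in the comments are what I would expect from the AST types, inferred rather than copied from a session.

// Usage sketch: feed the OneCoinTurnstile sample to the parse function above.
let sample = """
Actions: Turnstile
FSM: OneCoinTurnstile
Initial: Locked
{
Locked Coin Unlocked {alarmOff unlock}
Locked Pass Locked  alarmOn
Unlocked Coin Unlocked thankyou
Unlocked Pass Locked lock
}
"""

let headers, transitions = parse sample
// headers     -> [Header ("Actions","Turnstile"); Header ("FSM","OneCoinTurnstile"); Header ("Initial","Locked")]
// transitions -> four Transition records, the first being
//   { OldState="Locked"; Event="Coin"; NewState="Unlocked"; Actions=["alarmOff";"unlock"] }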

Finite State Machine Compiler

I also found code for compiling out to Java and C, and ported the former:

let compile (headers,transitions) =
   let header name = 
      headers |> List.pick (function Header(key,value) when key = name -> Some value | _ -> None)
   let states = 
      transitions |> List.collect (fun trans -> [trans.OldState;trans.NewState]) |> Seq.distinct      
   let events =
      transitions |> List.map (fun trans -> trans.Event) |> Seq.distinct

   "package thePackage;\n" +
   (sprintf "public abstract class %s implements %s {\n" (header "FSM") (header "Actions")) +
   "\tpublic abstract void unhandledTransition(String state, String event);\n" +
   (sprintf "\tprivate enum State {%s}\n" (String.concat "," states)) +
   (sprintf "\tprivate enum Event {%s}\n" (String.concat "," events)) +
   (sprintf "\tprivate State state = State.%s;\n" (header "Initial")) +
   "\tprivate void setState(State s) {state = s;}\n" +
   "\tprivate void handleEvent(Event event) {\n" +
   "\t\tswitch(state) {\n" +
   (String.concat ""
      [for (oldState,ts) in transitions |> Seq.groupBy (fun t -> t.OldState) ->
         (sprintf "\t\t\tcase %s:\n" oldState) +
         "\t\t\t\tswitch(event) {\n" +
         (String.concat ""
            [for t in ts ->
               (sprintf "\t\t\t\t\tcase %s:\n" t.Event) +
               (sprintf "\t\t\t\t\t\tsetState(State.%s);\n" t.NewState)+
               (String.concat ""
                  [for a in t.Actions -> sprintf "\t\t\t\t\t\t%s();\n" a]
               ) +
               "\t\t\t\t\t\tbreak;\n"
            ]         
         ) +
         "\t\t\t\t\tdefault: unhandledTransition(state.name(), event.name()); break;\n" +
         "\t\t\t\t}\n" +
         "\t\t\t\tbreak;\n"
      ] 
   )+   
   "\t\t}\n" +
   "\t}\n" +   
   "}\n"

Again, and probably not surprisingly, this is shorter than Uncle Bob’s Java implementation.

And again, you can try the compiler snippet out in the F# interactive window.
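To round-trip the whole thing, the parsed turnstile can be piped into compile. This assumes the sample string and parse call from the earlier sketch are in scope.

// Usage sketch: compile the parsed turnstile to Java source and print it.
let javaSource = parse sample |> compile
printfn "%s" javaSource
// Expect a thePackage.OneCoinTurnstile class implementing Turnstile, with
// State and Event enums and a nested switch over state and event, mirroring
// the shape of Uncle Bob's generated code.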

Conclusion

Writing a parser using a parser combinator library (and F#) appears to be significantly easier and more succinct than writing a hand rolled parser (in Java).

Categories: Programming

Reactive prefetch speeds Google's mobile search by 100-150 milliseconds.

Increasing responsiveness by parallelizing and prefetching content using hints and dependency graphs is an old concept, but seldom do we see such a nice, tight example of the benefit as the one given by the great Ilya Grigorik in this G+ post:

The insight here is that we're initiating the fetch for the HTML and its critical resources in parallel... which requires that the page initiating the navigation knows which critical resources are being used on the target page.

This is a powerful pattern and one that you can use to accelerate your site as well. The key insight is that we are not speculatively prefetching resources and do not incur unnecessary downloads. Instead, we wait for the user to click the link and tell us exactly where they are headed, and once we know that, we tell the browser which other resources it should fetch in parallel - aka, reactive prefetch!

As you can infer, implementing the above strategy requires a lot of smarts both in the browser and within the search engine... First, we need to know the list of critical resources that may delay rendering of the destination page for every page on the web! No small feat, but the Search team has us covered - they're good like that. Next, we need a browser API that allows us to invoke the prefetch logic when the click occurs: the search page listens for the click event, and once invoked, dynamically inserts prefetch hints into the search results page. Finally, this is where Chrome comes in: as the search results page is unloaded, the browser begins fetching the hinted resources in parallel with the request for the destination page. The net result is that the critical resources are fetched much sooner, allowing the browser to render the destination page 100-150 milliseconds earlier.

Categories: Architecture

Conveniently Unburdened by Evidence

Herding Cats - Glen Alleman - Wed, 12/10/2014 - 17:32

Conveniently Unburdened by Evidence - Kate Beckett to Richard Castle

When we hear a conjecture about a topic that skips over principles of business, the economics of decision making, or the mathematics of probabilistic and statistical modeling, listen to what Kate said to Richard. 

Putting This Skepticism To Work

There are three concerns for every project manager and those funding the work of the project †

  1. Schedule - Will the project go over schedule? All projects are probabilistic endeavors; uncertainty abounds, both reducible and irreducible. Work can address the reducible uncertainty, buying down the risk associated with it. Irreducible uncertainty can only be addressed with margin: schedule margin, cost margin, technical margin.

  2. Cost - Will the project overrun its budget? Cost margin is needed to protect the project from an over-budget condition; this is called Management Reserve. But MR can only do so much: a credible estimate of the cost, and management of the work to that estimate, are also needed. Even with a credible estimate, MR and Contingency are still needed to avoid going over budget.

  3. Performance - Will the deliverables satisfy the goal(s) of the project? The technical performance of the deliverables is founded on the Measures of Effectiveness and Measures of Performance of the capabilities provided by the project. Capabilities Based Planning is the foundation of defining what DONE looks like in units of measure meaningful to the decision makers.

At the start, and up until the end of a project, the answer to each of these questions is knowable to some degree of confidence - less in the beginning and more as the project progresses. A yes answer to any or all of the questions is taken to be an undesirable outcome. These are business questions as well as technical questions, but it is the business that is most interested in the answers and in the confidence level of those answers - a simple Yes or No is not sufficient. "Yes, we have an 80% confidence of completing on or before the need date."
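A statement like "80% confidence of completing on or before the need date" usually comes out of a probabilistic model of the remaining work. The F# sketch below is a minimal Monte Carlo illustration of the idea; the three-point task estimates and the triangular distribution are assumptions made for the example, not data from any real program.

// Minimal Monte Carlo sketch: what duration can we quote with ~80% confidence,
// given three-point (min, most likely, max) estimates per remaining task?
// Task numbers are hypothetical; tasks are assumed sequential for simplicity.

let rng = System.Random(42)

// Sample a triangular distribution via inverse transform sampling
let triangular (a: float) (m: float) (b: float) =
    let u = rng.NextDouble()
    let fm = (m - a) / (b - a)
    if u < fm then a + sqrt (u * (b - a) * (m - a))
    else b - sqrt ((1.0 - u) * (b - a) * (b - m))

// (min, most likely, max) duration in days for each remaining task
let tasks = [ (5.0, 8.0, 15.0); (3.0, 4.0, 9.0); (10.0, 14.0, 25.0) ]

// Simulate many possible total durations
let trials =
    [ for _ in 1 .. 10000 -> tasks |> List.sumBy (fun (a, m, b) -> triangular a m b) ]

// The 80th percentile: the duration we could commit to with roughly 80% confidence
let p80 = (List.sort trials).[int (0.8 * float (List.length trials))]
printfn "P80 duration: %.1f days" p80

The same shape of model, with cost instead of duration, gives the confidence behind a budget answer; the point is simply that a confidence level comes from an estimate of a distribution, not from a single number.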

In The End

To provide answers to these questions before arriving at the end of the project, we need estimates. So when we answer Yes to one of the questions - which is unavoidable - we don't want to proceed in the absence of corrective actions to increase the probability of a desirable outcome. At the beginning of the project that confidence is low, since projects evolve. To provide credible answers about the confidence of arriving on time, on budget, with the needed capabilities, we must estimate not only the cost, schedule, and outcomes, but also the impact of our corrective actions.

If we fail to do this - whether through lack of knowledge or experience, or through intentional ignorance of the probabilistic nature of all projects - we've set the foundation for failure. Making decisions in the absence of estimating the cost of that decision and its resulting impact ignores - with intent - the principles of the microeconomics of decision making, including the opportunity cost of the decision. That opportunity cost must be estimated, since it will occur in the future and is usually beyond our ability to measure directly.

Ignoring opportunity cost and ignoring estimates of the future is called Open Loop Control. To increase the probability of project success we need to apply the principles of Closed Loop Control. When we manage projects with Open Loop processes, those providing the money to produce value will be disappointed.

Control systems from Glen Alleman

† Quantitative Risk Analysis for Project Management A Critical Review, Lionel Galway, WR-112-RC, February 2004

Related articles: Software Estimating for Non Trival Projects, Mike Cohn's Agile Quotes, Show Me Your Math, Estimating Guidance, Complex Project Management
Categories: Project Management

Software Development Linkopedia December 2014

From the Editor of Methods & Tools - Wed, 12/10/2014 - 15:37
Here is our monthly selection of interesting knowledge material on programming, software testing and project management.  This month you will find some interesting information and opinions about slow programming, technical career paths, Agile QA, Scrum backlog refinement meetings, being a better test manager, java BDD, mixing Waterfall and Agile, the TDD cycle and dealing with bad Java code. Blog: The Case for Slow Programming Blog: Coding, Fast and Slow: Developers and the Psychology of Overconfidence Blog: Climbing off the CTO ladder Blog: What Does QA Do on the First Day of a Sprint? Blog: Stop ...

How to Think Like a Rocket Scientist - Irreducible Complexity

Herding Cats - Glen Alleman - Wed, 12/10/2014 - 15:00

Orion launched today and was recovered after two orbits. Tests of the launch system, the pad abort system, and the heat shield were the main purposes of the flight.

I worked the proposal - after coming off the winning proposal for the Hubble Robotic Service Mission. The Crew Exploration Vehicle was the original name of the flight vehicle. The Integrated Master Plan and Integrated Master Schedule described the increasing maturity of the deliverables for the spacecraft and its flight support systems. After the contract win, I moved to the flight avionics firm and defined the IMP/IMS and project performance management processes for that major subcontractor. When you get to minute 21:17, the Tracking Data Relay Satellite is mentioned. I worked that project as a new graduate student many decades ago.

Starting back on TDRSS, agile - meaning emerging requirements, test-driven development, and direct customer feedback on short iterations - and the development process were deployed with rolling waves, 44-day-rule Work Packages, and emergent technical requirements derived from Mission Capabilities.

Here's the long version of the launch to orbit. 

 

After two orbits, Orion came home. The double boom is the sonic boom. Tests of the heat shield will confirm if it functioned properly.

 

Recently a statement was made about agile and complexity, conjecturing that if the project is too complex for a physical board - a place to put the stickies for the stories - then we've missed opportunities to simplify. Possibly not realizing that complexity, as well as complex systems, are the norm in many domains, and that managing complexity with tools - rather than manual means - is also the norm.

If your Agile planning needs are too complex for a physical board, you've probably missed opportunities to simplify / improve.

When I suggested that agile and agile tools are used to deal with complex problems in these environments, without the need to reduce that complexity, there was a conversation of sorts that suggested...

I'd be surprised to hear Orion was using a COTS Agile project management tool in a significant way

Some Necessary Complexity

On the Hubble mission, there is a Service Mission Assurance Process that reveals some of the complexity of the System of Systems found in space flight - the Interface Control processes, for example, for the payload on STS-125.

HST ICD

External knowledge of what tools were used, what processes were applied, and how the flight avionics software for Orion was converted from the 777 suite to the spacecraft suite - tested, altered to user needs, simulated, emulated, verified and validated on rolling waves, on 44-day iteration cycles - could only have been obtained if you were actually in the building, in the vendor's shop.

But there are other surprises in the business of space flight. A few good places to start include:

Beyond the outsiders' comments of surprise, inside space and defense firms agile tools from Rally, VersionOne, and JIRA are used in a wide variety of domains, from embedded systems to intelligence systems, where the requirements don't come from the users - they come from the enemy. Here's an example of agile in the INTEL business.

Maybe those surprised by the many different applications of the principles of agile - developed long before the Agile Manifesto - missed those processes in Building O6, Sepulveda Blvd, Redondo Beach, circa 1978.


In The End

There are numerous approaches to applying agile development in a wide variety of domains. I work in a domain where Systems Engineering and Earned Value Management are the starting point and Agile is used to develop code, guided by EIA-748-C and DID 81861.

In these environments, development of software is incremental and iterative, with emerging requirements and stable capabilities. These programs are complex, and tools are the basis of success for managing all the moving parts of the program. Rarely is everyone in the same room, since these are System of Systems programs. As well, Integration and Test are done by external organizations - V&V for flight safety. So many of the processes found in small commercial projects are not applicable to programs in our domain.

To suggest there is but one way to reduce complexity by putting all the stories on cards on the wall is a bit naive in the absence of establishing the external needs of the project first, then deciding what processes to apply.

Some background on applying agile in the DOD can be found at:

Domain first, Context second, Only then Process

Related articles: Systems Engineering in the Enterprise IT Domain, Estimating Guidance, Software Estimating for Non Trival Projects, Improving DOE Project Performance Using the DOD Integrated Master Plan
Categories: Project Management

When Should You Move from Iterations to Flow?

I’m writing part of the program management book, talking about how you need to keep everything small to maintain momentum. Sometimes, to keep your work small, teams move from iterations to flow.

Here are times when you might consider moving from iteration to flow:

  • The Product Owner wants to change the order of features in the iteration for business reasons, and you are already working in one- or two-week iterations. Yes, you have that much change.
  • You feel as if you have a death march agile project. You lurch from one iteration to the next, always cramming too much into an iteration. You could use more teams working on your backlog.
  • You are working on too many projects in one iteration. No one is managing the project portfolio.

This came home to me when I was coaching a program manager working on a geographically distributed program in 2009. One of the feature teams was responsible for the database that “fed” all the other feature teams. They had their own features, but the access and what the database could do was centralized in one database team. That team tried to work in iterations. They had small, one- or two-day stories. They did a great job meeting their iteration commitments. And, they always felt as if they were behind.

Why? Because they had requests backed up. The rank of the requests into that team changed faster than the iteration duration.

When they changed to flow, they were able to respond much faster to requests for the different reports, access, or whatever else the database needed to do. They were no longer a bottleneck on the program. Of course, they used continuous integration for each feature. Every day, or every other day, they updated the access into the database, or what the database was capable of doing.

The entire program regained momentum.

This is a simplified board. I’m sure your board will look different.

When you work in flow, you have a board with a fixed set of Ready items (the team’s backlog), and the team always works on the top-ranked item first. Depending on the work in progress limits, the team might take more than one item off the Ready column at a time.

The Product Owner has the capability to change any of the items in the Ready column at any time. If the item is large, the team will spend more time working on that item. It is in the Product Owner’s and the team’s interest to learn how to make small stories. That way, work moves across the board fast.
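The pull mechanics are simple enough to sketch in a few lines of F#. This is an illustrative model only - the column contents and the WIP limit of 2 are assumptions for the example, not a prescription for any particular team or tool.

// Illustrative sketch of flow: the team pulls the top-ranked Ready item
// whenever it is under its work-in-progress (WIP) limit.
let wipLimit = 2

let pullNext (ready: string list) (inProgress: string list) =
    match ready with
    | next :: rest when List.length inProgress < wipLimit ->
        // Under the WIP limit: pull the top-ranked item into In Progress
        rest, next :: inProgress
    | _ ->
        // At the limit (or nothing Ready): finish something before pulling more
        ready, inProgress

let ready = [ "Report export"; "Access for team B"; "Schema change" ]
let readyAfter, inProgressAfter = pullNext ready []
// readyAfter      -> ["Access for team B"; "Schema change"]
// inProgressAfter -> ["Report export"]

Because the team only ever takes the head of the Ready list, the Product Owner can re-rank everything below that line at any moment without disrupting work already in progress - which is exactly the flexibility the database team above needed.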

If you use a board something like this, combined with an agile roadmap, the team still has the big picture of what the product looks like. Many of us like to know what the big picture is. And, we see from the board, what we are working on in the small. However, we don’t need to do iteration planning. We take the next item off the top of the Ready list.

There is no One Right Answer as to whether you should move from iteration to flow. It depends on your circumstances. Your Product Owner needs to write stories that are small enough that the team can complete them and move on to another story. Agile is about the ability to change, right? If the team is stuck on a too-large story, it’s just as bad as being stuck in an iteration, waiting for the iteration to end.

However, if you discover, especially if you are a feature team working in a program, that you need to change what you do, or the order of what you do more often than your iterations allow, consider moving to flow. You may decide that iterations are too confining for what you need.

Categories: Project Management