
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Continuous Value Delivery the Agile Way

Continuous Value Delivery helps businesses realize the benefits from their technology investments in a continuous fashion.

Businesses these days expect at least quarterly results from their technology investments.  The beauty is, with Continuous Value Delivery they can get it, too.  

Continuous Value Delivery is a practice that makes delivering user value and business value a rapid, reliable, and repeatable process.  It’s a natural evolution from Continuous Integration and Continuous Delivery.  Continuous Value Delivery simply adds a focus on Value Realization, which addresses planning for value, driving adoption, and measuring results.

But let’s take a look at the evolution of software practices that have made it possible to provide Continuous Value Delivery in our Cloud-first, mobile-first world.

Long before there was Continuous Value Delivery, there was Continuous Integration …

Continuous Integration

Continuous Integration is a software development practice where team members integrate their work frequently.  The goal of Continuous Integration is to reduce and prevent integration problems.  In Continuous Integration, each integration is verified against tests.

Then, along came Continuous Delivery …

Continuous Delivery

Continuous Delivery extended the idea of Continuous Integration to automate and improve the process of software delivery.  With Continuous Delivery,  software checked in on the mainline is always ready for release.  When you combine automated testing, Continuous Integration, and Continuous Delivery, it's possible to push out updates, fixes, and new releases to customers with lower risk and minimal manual overhead.

Continuous Delivery changes the model from a big bang approach, where software is shipped at the end of a long project cycle, to where software can be iteratively and incrementally shipped along the way.

This set the stage for Continuous Value Delivery …

Continuous Value Delivery

Continuous Value Delivery puts a focus on Value Realization as a first-class citizen.  

To be able to ship value on a continuous basis, you need a simple mechanism for units of value.  Scenarios and stories are an effective way to chunk and carve up value into useful increments.  Scenarios and stories also help with driving adoption.

For Continuous Value Delivery, you also need a way to "pull" value, as well as "push" value.   Kanbans provide an easy way to visualize the flow of value; they support a “pull” mechanism and reinforce “the voice of the customer.”   User stories provide an easy way to create a backlog or catalog of potential value that you can “push” based on priorities and user demand.

Businesses that are making the most of their technology investments are linking scenarios, backlogs, and Kanbans to their value chains and their value streams.

Value Planning Enables Continuous Value Delivery

If you want to drive continuous value to the business, then you need to plan for it.  As part of value planning, you need to identify key stakeholders in the business.  With those stakeholders, you need to identify the business benefits they care about, along with the KPIs and value measures behind those benefits.

At this stage, you also want to identify who in the business will be responsible for collecting the data and reporting the value.

Adoption is the Key to Value Realization

Adoption is the key component of Continuous Value Delivery.  After all, if you release new features, but nobody uses them, then the users won't get the new benefits.   In order to realize the value, users need to use the new features and actually change their behaviors.

So while deployment was the old bottleneck, adoption is the new bottleneck.

Users and the business can only absorb so much value at once.  In order to flow more value, you need to reduce friction around adoption, and drive consumption of technology.  You can do this through effective adoption planning, user readiness, communication plans, and measurement.

Value Measurement and Reporting

To close the loop, you want the business to acknowledge the delivery of value.   That’s where measurement and reporting come in.

From a measurement standpoint, you can use adoption and usage metrics to better understand what's being used and how much.  But that’s only part of the story.

To connect the dots back to the business impact, you need to measure changes in behavior, such as what people have stopped doing, started doing, and continue doing.   This will be an indicator of benefits being realized.

Ultimately, to show the most value to the business, you need to move the benefits up the stack.  At the lowest level, you can observe the benefits, by simply observing the changes in behavior.  If you can observe the benefits, then you should be able to measure the benefits.  And if you can measure the benefits, then you should be able to quantify the benefits.   And if you can quantify the benefits, then you should be able to associate some sort of financial amount that shows how things are being done better, faster, or cheaper.

The value reporting exercise should help inform and adjust any value planning efforts.  For example, if adoption is proving to be the bottleneck, now you can drill into where exactly the bottleneck is occurring and you can refocus efforts more effectively.

Plan, Do, Check, Act

In essence, your value realization loop is really a cycle of plan, do, check, act, where value is made explicit, and it is regarded as a first-class citizen throughout the process of Continuous Value Delivery.

That’s a way better approach than building solutions and hoping that value will come or that you’ll stumble your way into business impact.

As history shows, too many projects try to luck their way into value, and it’s far better to design for it.

Value Sprints

A Sprint is simply a unit of development in Scrum.   The idea is to provide a working increment of the solution, one that is potentially shippable, at the end of the Sprint.

It’s a “timeboxed” effort.   This helps reduce risk as well as support a loop of continuous learning.   For example, a team might work in 1-week, 2-week or 1-month Sprints.   At the end of the Sprint, you can review the progress, and make any necessary adjustments to improve for the next Sprint.

In the business arena, we can think in terms of Value Sprints, where we don’t want to stop at just shipping a chunk of value.

Just shipping or deploying software and solutions does not lead to adoption.

And that’s how software and IT projects fall down.

With a Value Sprint, we want to add a few specific things to the mix to ensure appropriate Value Realization and true benefits delivery.  Specifically, we want to integrate Value Planning right up front, and as part of each Sprint.   Most importantly, we want to plan and drive adoption as part of the Value Sprint.

If we can accelerate adoption, then we can accelerate time to value.

And, of course, we want to report on the value as part of the Value Sprint.

In practice, field experience tells us that Value Sprints of 6-8 weeks tend to work well with the business.    Obviously, the right answer depends on your context, but it helps to know what others have been doing.   The length of the loop depends on the business cadence, as well as how well adoption can be driven in an organization, which varies drastically based on ability to execute and maturity levels.   And, for a lot of businesses, it’s important to show results within a quarterly cycle.

But what’s really important is that you don’t turn value into a long-winded run, or a long shot down the line, and that you don’t simply hope that value happens.

Through Value Sprints and Continuous Value Delivery you can create a sustainable approach where the business realizes the value from its technology investments in a more reliable way, for real business results.

And that’s how you win in the game of software today.

You Might Also Like

Blessing Sibanyoni on Value Realization

How Can Enterprise Architects Drive Business Value the Agile Way?

How To Use Personas and Scenarios to Drive Adoption and Realize Value

Categories: Architecture, Programming

The Beautiful Design Summer 2014 Collection on Google Play

Android Developers Blog - 3 hours 38 min ago
Posted by Marco Paglia, Android Design Team

It’s that time again! Last summer, we published the first Beautiful Design collection on Google Play, and updated it in the winter with a fresh set of beautifully crafted apps.

Since then, developers have been hard at work updating their existing apps with new design ideas, and many new apps targeted to phones and tablets have launched on Google Play sporting exquisite detail in their UIs. Some apps are even starting to incorporate elements from material design, which is great to see. We’re on the lookout for even more material design concepts applied across the Google Play ecosystem!

Today, we're refreshing the Beautiful Design collection with our latest favorite specimens of delightful design from Google Play. As a reminder, the goal of this collection is to highlight beautiful apps with masterfully crafted design details such as beautiful presentation of photos, crisp and meaningful layout and typography, and delightful yet intuitive gestures and transitions.

The newly updated Beautiful Design Summer 2014 collection includes:

Flight Track 5, whose gorgeously detailed flight info, full of maps and interactive charts, stylishly keeps you in the know.

Oyster, a book-reading app whose clean, focused reading experience and delightful discovery makes it a joy to take your library with you, wherever you go.

Gogobot, an app whose bright colors and big images make exploring your next city delightful and fun.

Lumosity, Vivino, FIFA, Duolingo, SeriesGuide, Spotify, Runtastic, Yahoo News Digest… each with delightful design details.

Airbnb, a veteran of the collection from this past winter, remains on the list as its team continues to finesse the app.

If you’re an Android designer or developer, make sure to play with some of these apps to get a sense for the types of design details that can separate good apps from great ones. And remember to review the material design spec for ideas on how to design your next beautiful Android app!


Categories: Programming

Runtastic on Android Wear

Google Code Blog - 3 hours 49 min ago

By Austin Robison, Product Manager, Android Wear




Fitness apps make  great additions to Android Wear. Let’s take a look at one of our favorites, Runtastic. Runtastic is a fitness app that lets you track your walks, runs, bike rides and more. With Runtastic on Android Wear, you'll see your time, distance, and calories burned at a glance on your wrist. You can also start, stop and pause your activity by touch. Tuck your phone away in a pocket or backpack and do everything on your watch.

It's challenging to build user experiences that really come alive on Android Wear because it's such a new type of device. Runtastic does a great job of showing the right information and providing just the right controls on the screen on your wrist. Let's dig into some of the Android Wear platform features that Runtastic uses to make such a great user experience.

Voice Actions

Android Wear enables developers to launch their activities with voice. Runtastic responds to “Ok Google, start running” by beginning to track a session and displaying a card with your total time. This means you can start exercising without needing to pull your phone out of a pocket or arm strap. Android Wear is all about bringing you useful information just when you need it and enabling users to quickly and easily take action.

[Image: Runtastic session card on Android Wear]

Responding to platform voice intents on Wear is as simple as declaring a standard intent filter to start an activity.  For example, to launch your activity for the “start running” voice action, add the following to your activity’s entry in your AndroidManifest.xml:

<intent-filter>
    <action android:name="vnd.google.fitness.TRACK"/>
    <category android:name="android.intent.category.DEFAULT"/>
    <data android:mimeType="vnd.google.fitness.activity/running"/>
</intent-filter>

Custom Cards

Once a user has started a run, Runtastic inserts a card in the stream as an ongoing notification to ensure it is ranked near the top of the stream during the activity. This card uses the setDisplayIntent() function to display custom UI. It provides quick, glanceable information, showing your activity time. Cool!
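As a rough sketch (not Runtastic’s actual code), an ongoing notification with a custom display intent can be built with the support library like this; the activity, icon and id names here are made up for illustration:

// Hypothetical names: SessionDisplayActivity draws the custom card UI.
Intent displayIntent = new Intent(context, SessionDisplayActivity.class);
PendingIntent displayPendingIntent =
        PendingIntent.getActivity(context, 0, displayIntent, 0);

Notification notification = new NotificationCompat.Builder(context)
        .setSmallIcon(R.drawable.ic_session)
        .setOngoing(true)  // ongoing, so the card ranks near the top of the stream
        .extend(new NotificationCompat.WearableExtender()
                .setDisplayIntent(displayPendingIntent))  // custom card UI
        .build();

NotificationManagerCompat.from(context).notify(SESSION_NOTIFICATION_ID, notification);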

When the user swipes to the right of the card to expose its actions, we see some quick and easy-to-understand options; following the Android Wear style guidelines means that Runtastic has a familiar UI and feels like a natural extension of the watch. There are actions for pausing and stopping, and an action to see more details on the run.  This last action launches a full-screen Activity where Runtastic draws a completely custom layout.



You’ll notice this data updates live; Runtastic makes use of the Wearable Data Layer API in Google Play Services to synchronize data between the phone and the watch. It's an easy-to-use API for keeping data in sync across your devices.
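As a minimal sketch of the Data Layer (the path and keys are hypothetical, and googleApiClient is assumed to be already connected), the phone side could publish session stats like this:

// Publish the current session stats; the watch picks them up via a listener.
PutDataMapRequest request = PutDataMapRequest.create("/session/stats");
request.getDataMap().putLong("elapsedMillis", elapsedMillis);
request.getDataMap().putFloat("distanceKm", distanceKm);
Wearable.DataApi.putDataItem(googleApiClient, request.asPutDataRequest());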

Background Services

When a user finishes their run, Runtastic presents them with a special summary card that appears only on the watch. In this case, the notification is generated directly on the watch by a Service. This Service uses the Data Layer to receive information about the completed activity from the phone to the watch, including an image of a map of the user’s run generated through the Google Maps API.

To show that information, the app uses Android Wear’s NotificationManager, which functions just like the NotificationManager on a phone or tablet, except that instead of creating notifications in the pull-down shade, they appear in the stream.



Runtastic's implementation on Android Wear is a perfect example of how to take advantage of wearables to make something truly useful for users. For more information on these and other great platform features, please see the developer documentation.

For more inspiring Android Wear user experiences, check out this collection on the Play Store!

Posted by Mano Marks, Google Developer Platform Team
Categories: Programming

React in modern web applications: Part 1

Xebia Blog - 7 hours 49 min ago

At Xebia we love to share knowledge! One of the ways we do this is by organizing 1-day courses during the summer. Together with Frank Visser we decided to do a training about full stack development with Node.js, AngularJS and Facebook's React. The goal of the training was to show the students how one could create a simple timesheet application. This application would use nothing but modern Javascript technologies while also teaching them best practices with regards to setting up and maintaining it.

To further share the knowledge gained during the creation of this training we'll be releasing several blog posts. In this first part we'll talk about why to use React, what React is and how you can incorporate it into your Grunt lifecycle.

This series of blog posts assumes that you're familiar with the Node.js platform and the Javascript task runner Grunt.

What is React?

ReactJS logo

React is a Javascript library made by Facebook for creating user interfaces. It is their answer to the V in MVC. As it only takes care of the user interface part of a web application, React can be (and most often will be) combined with other frameworks (e.g. AngularJS, Backbone.js, ...) for handling the M and C parts.

In case you're unfamiliar with the MVC architecture, it stands for model-view-controller and it is an architectural pattern for dividing your software into 3 parts with the goal of separating the internal representation of data from the representation shown to the actual user of the software.

Why use React?

There are quite a lot of Javascript MVC frameworks which also allow you to model your views. What are the benefits of using React instead of for example AngularJS?

What sets React apart from other Javascript MVC frameworks like AngularJS is the way React handles UI updates. To dynamically update a web UI you have to apply DOM updates whenever data in your UI changes. These DOM updates, compared to reading data from the DOM, are expensive operations which can drastically slow down your application's responsiveness if you do not minimize the number of updates you do. React took a clever approach to minimizing the number of DOM updates by using a virtual DOM diff.

In contrast to the normal DOM, which consists of nodes, the virtual DOM consists of lightweight Javascript objects that represent your different React components. This representation is used to determine the minimum number of steps required to go from the previous render to the next render. React prevents unnecessary re-renders by only re-rendering components whose state has actually changed. By calling the setState method you mark a component 'dirty', which essentially tells React to update the UI for this component. When setState is called, the component rebuilds the virtual DOM for all its children. React then compares this to the current virtual sub-tree for the same component to determine the changes, and thus finds the minimum set of DOM updates to apply.

Besides efficiently updating only sub-trees, React batches virtual DOM changes into real DOM updates. At the end of the React event loop, React will look up all components marked as dirty and re-render them.
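As a minimal sketch of this mechanism (written without JSX, which is introduced below, and assuming a DOM node with id 'counter' exists): each click calls setState, which marks the component dirty, and React only touches the DOM it needs to.

var Counter = React.createClass({
  getInitialState: function() {
    return {count: 0};
  },
  handleClick: function() {
    // setState marks this component dirty; React re-renders it at the end of
    // the event loop and applies only the minimal DOM update.
    this.setState({count: this.state.count + 1});
  },
  render: function() {
    return React.DOM.button({onClick: this.handleClick},
                            'Clicked ' + this.state.count + ' times');
  }
});

React.renderComponent(Counter(), document.getElementById('counter'));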

How does React compare to AngularJS?

It is important to note that you can perfectly mix the usage of React with other frameworks like AngularJS for creating user interfaces. You can of course also decide to only use React for the UI and keep using AngularJS for the M and C in MVC.

In our opinion, using React for simple components does not give you an advantage over using AngularJS. We believe the true strength of React lies in demanding components that re-render a lot. React tends to really outperform AngularJS (and a lot of other frameworks) when it comes to UI elements that require a lot of re-rendering. This is due to how React handles UI updates internally as explained above.

JSX

JSX is a Javascript XML syntax transform recommended for use with React; it lets you write XML-like markup directly inside your Javascript. Although JSX and React are independent technologies, JSX was built with React in mind. React works without JSX out of the box, but the React team does recommend using it. Some of the many reasons for using JSX:

  • It's easier to visualize the structure of the DOM
  • Designers are more comfortable making changes
  • It's familiar for those who have used MXML or XAML

If you decide to go for JSX you will have to compile the JSX to Javascript before running your application. Later on in this article I'll show you how you can automate this using a Grunt task. Besides Grunt there are a lot of other build tools that can compile JSX. To name a few, there are plugins for Gulp, Broccoli or Mimosa.

An example JSX file for creating a simple link looks as follows:

/** @jsx React.DOM */
var link = React.DOM.a({href: 'http://facebook.github.io/react'}, 'React');

Make sure never to forget the starting comment, or your JSX file will not be processed by the JSX transformer.

Components

With React you can construct UI views using multiple, reusable components. You can separate the different concerns of your application by creating modular components, and thus get the same benefits as when using functions and classes. You should strive to break down the common elements in your UI into reusable components; this allows you to reduce boilerplate and keep your code DRY.

You can construct component classes by calling React.createClass(). Each component has a well-defined interface and receives its data in the form of props, which are specific to that component. A component can have ownership over other components, and in React the owner of a component is the one setting the props of that component. An owner, or parent, component can access its children through this.props.children.

Using React you could create a hello world application as follows:

/** @jsx React.DOM */
var HelloWorld = React.createClass({
  render: function() {
    return <div>Hello world!</div>;
  }
});

Creating a component does not mean it will get rendered automatically. You have to define where you would like to render your different components using React.renderComponent as follows:

React.renderComponent(<HelloWorld />, targetNode);

By using for example document.getElementById or a jQuery selector you target the DOM node where you would like React to render your component and you pass it on as the targetNode parameter.
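Tying this back to the ownership story above, here is a small sketch in which a Page component owns a Greeting component, sets its props and passes children (the 'content' element id is again an assumption):

/** @jsx React.DOM */
var Greeting = React.createClass({
  render: function() {
    // Props are set by the owner; nested elements arrive as this.props.children.
    return <div>Hello {this.props.name}! {this.props.children}</div>;
  }
});

var Page = React.createClass({
  render: function() {
    return <Greeting name="world"><em>Welcome to React.</em></Greeting>;
  }
});

React.renderComponent(<Page />, document.getElementById('content'));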

Automating JSX compilation in Grunt

To automate the compilation of JSX files you will need to install the grunt-react package using Node.js' npm installer:

npm install grunt-react --save-dev

After installing the package you have to add a bit of configuration to your Gruntfile.js so that the task knows where your JSX source files are located and where and with what extension you would like to store the compiled Javascript files.

react: {
  dynamic_mappings: {
    files: [
      {
        expand: true,
        src: ['scripts/jsx/*.jsx'],
        dest: 'app/build_jsx/',
        ext: '.js'
      }
    ]
  }
}

To speed up development you can also configure the grunt-contrib-watch package to keep an eye on JSX files. Watching for JSX files will allow you to run the grunt-react task whenever you change a JSX file resulting in continuous compilation of JSX files while you develop your application. You simply specify the type of files to watch for and the task that you would like to run when one of these files changes:

watch: {
  jsx: {
    files: ['scripts/jsx/*.jsx'],
    tasks: ['react']
  }
}

Last but not least you will want to add the grunt-react task to one or more of your grunt lifecycle tasks. In our setup we added it to the serve and build tasks.

grunt.registerTask('serve', function (target) {
  if (target === 'dist') {
    return grunt.task.run(['build', 'connect:dist:keepalive']);
  }

  grunt.task.run([
    'clean:server',
    'bowerInstall',
    'react',
    'concurrent:server',
    'autoprefixer',
    'configureProxies:server',
    'connect:livereload',
    'watch'
  ]);
});

grunt.registerTask('build', [
  'clean:dist',
  'bowerInstall',
  'useminPrepare',
  'concurrent:dist',
  'autoprefixer',
  'concat',
  'react',
  'ngmin',
  'copy:dist',
  'cdnify',
  'cssmin',
  'uglify',
  'rev',
  'usemin',
  'htmlmin'
]);
Conclusion

Due to React's different approach to handling UI changes, it is highly efficient at re-rendering UI components. Besides that, it's easy to configure and integrate into your build lifecycle.

What's next?

In the next article we'll be discussing how you can use React together with AngularJS, how to deal with state in your components and how to avoid passing through your entire component hierarchy using callbacks when updating state.

Soft Skills: The Software Developer’s Life Manual, Early Access

Making the Complex Simple - John Sonmez - Mon, 09/01/2014 - 20:01

Wow, this is a pretty exciting week for me. This week, the book I’ve been laboring on is finally available for early access through Manning’s MEAP program. I spent about four months working full-time on this book. That might not seem like a lot of time, but believe me, writing a book is no easy […]

The post Soft Skills: The Software Developer’s Life Manual, Early Access appeared first on Simple Programmer.

Categories: Programming

R: ggplot – Cumulative frequency graphs

Mark Needham - Sun, 08/31/2014 - 23:10

In my continued playing around with ggplot I wanted to create a chart showing the cumulative growth of the number of members of the Neo4j London meetup group.

My initial data frame looked like this:

> head(meetupMembers)
  joinTimestamp            joinDate  monthYear quarterYear       week dayMonthYear
1  1.376572e+12 2013-08-15 13:13:40 2013-08-01  2013-07-01 2013-08-15   2013-08-15
2  1.379491e+12 2013-09-18 07:55:11 2013-09-01  2013-07-01 2013-09-12   2013-09-18
3  1.349454e+12 2012-10-05 16:28:04 2012-10-01  2012-10-01 2012-10-04   2012-10-05
4  1.383127e+12 2013-10-30 09:59:03 2013-10-01  2013-10-01 2013-10-24   2013-10-30
5  1.372239e+12 2013-06-26 09:27:40 2013-06-01  2013-04-01 2013-06-20   2013-06-26
6  1.330295e+12 2012-02-26 22:27:00 2012-02-01  2012-01-01 2012-02-23   2012-02-26

The first step was to transform the data so that I had a data frame where a row represented a day where a member joined the group. There would then be a count of how many members joined on that date.

We can do this with dplyr like so:

library(dplyr)
> head(meetupMembers %.% group_by(dayMonthYear) %.% summarise(n = n()))
Source: local data frame [6 x 2]
 
  dayMonthYear n
1   2011-06-05 7
2   2011-06-07 1
3   2011-06-10 1
4   2011-06-12 1
5   2011-06-13 1
6   2011-06-15 1

To turn that into a chart we can plug it into ggplot and use the cumsum function to generate a line showing the cumulative total:

ggplot(data = meetupMembers %.% group_by(dayMonthYear) %.% summarise(n = n()), 
       aes(x = dayMonthYear, y = n)) + 
  ylab("Number of members") +
  xlab("Date") +
  geom_line(aes(y = cumsum(n)))
[Chart: cumulative number of members by day]

Alternatively we could bring the call to cumsum forward and generate a data frame which has the cumulative total:

> head(meetupMembers %.% group_by(dayMonthYear) %.% summarise(n = n()) %.% mutate(n = cumsum(n)))
Source: local data frame [6 x 2]
 
  dayMonthYear  n
1   2011-06-05  7
2   2011-06-07  8
3   2011-06-10  9
4   2011-06-12 10
5   2011-06-13 11
6   2011-06-15 12

And if we plug that into ggplot we’ll get the same curve as before:

ggplot(data = meetupMembers %.% group_by(dayMonthYear) %.% summarise(n = n()) %.% mutate(n = cumsum(n)), 
       aes(x = dayMonthYear, y = n)) + 
  ylab("Number of members") +
  xlab("Date") +
  geom_line()

If we want the curve to be a bit smoother we can group it by quarter rather than by day:

> head(meetupMembers %.% group_by(quarterYear) %.% summarise(n = n()) %.% mutate(n = cumsum(n)))
Source: local data frame [6 x 2]
 
  quarterYear   n
1  2011-04-01  13
2  2011-07-01  18
3  2011-10-01  21
4  2012-01-01  43
5  2012-04-01  60
6  2012-07-01 122

Now let’s plug that into ggplot:

ggplot(data = meetupMembers %.% group_by(quarterYear) %.% summarise(n = n()) %.% mutate(n = cumsum(n)), 
       aes(x = quarterYear, y = n)) + 
    ylab("Number of members") +
    xlab("Date") +
    geom_line()
[Chart: cumulative number of members by quarter]
Categories: Programming

FParsec Tutorial

Phil Trelford's Array - Sun, 08/31/2014 - 16:23

Back at the start of the year, I took the F# parser combinator library FParsec out for a spin, writing an extended Small Basic compiler and later a similar parser for a subset of C#. Previously I’d been using hand-rolled parsers, for projects like TickSpec, a .Net BDD library, and Cellz, an open source spreadsheet. With FParsec you can construct a parser relatively rapidly and easily using the powerful built-in functions and F# interactive for quick feedback.

FParsec has been used in a number of interesting projects including FunScript, for parsing TypeScript definition files, and FogBugz for search queries in Kiln.

Like any library there is a bit of a learning curve, and it takes time to get up to speed before you reap the benefits. So with that in mind I put together a short hands-on tutorial that I ran at the F#unctional Londoners meetup held at Skills Matter last week.

The tutorial consisted of a short introduction to DSLs and parsing. Then a set of tasks leading to a parser for a subset of the Logo programming language. Followed by examples of scaling out to larger parsers and building a compiler backend, using Small Basic and C# as examples.

FParsec Hands On - F#unctional Londoners 2014 from Phillip Trelford

Download the tasks from: http://trelford.com/FParsecTutorial.zip

Logo programming language

One of my earliest experiences with programming was a Logo session in the 70s, when my primary school had a short-term loan of a turtle robot:

[Image: a Logo turtle robot, 1968]

The turtle, either physical or on the screen, can be controlled with simple commands like forward, left, right and repeat, e.g.

> repeat 10 [right 36 repeat 5 [forward 54 right 72]]


Abstract Syntax Tree

The abstract syntax tree (AST) for these commands can be easily described using F#’s discriminated unions type:

type arg = int
type command =
   | Forward of arg
   | Turn of arg
   | Repeat of arg * command list

Note: right and left can simply be represented as Turn with a positive or negative argument.

The main task was to use FParsec to parse the commands into AST form.

Parsing

A parser for the forward command can be easily constructed using built-in FParsec parser functions and the >>. operator to combine them:

let forward = pstring "forward" >>. spaces1 >>. pfloat

The parsed float value can be used to construct the Forward case using the |>> operator:

let pforward = forward |>> fun n -> Forward(int n)

To parse the forward or the short form fd, the <|> operator can be employed:

let pforward = (pstring "fd" <|> pstring "forward") >>. spaces1 >>. pfloat
               |>> fun n -> Forward(int n)

Parsing left and right is almost identical:

let pleft = (pstring "left" <|> pstring "lt") >>. spaces1 >>. pfloat 
            |>> fun x -> Turn(int -x)
let pright = (pstring "right" <|> pstring "rt") >>. spaces1 >>. pfloat 
             |>> fun x -> Turn(int x)

To parse a choice of commands, we can use the <|> operator again:

let pcommand = pforward <|> pleft <|> pright

To handle a sequence of commands there is the many function

let pcommands = many (pcommand .>> spaces)
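Before moving on to repeat, we can already take the parser for a spin in F# Interactive using FParsec’s run function (assuming open FParsec; output shown approximately):

> run pcommands "forward 50 right 90 fd 20";;
val it : ParserResult<command list,unit> =
  Success: [Forward 50; Turn 90; Forward 20]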

To parse the repeat command we need to parse the repeat count and a block of commands held between square brackets:

let block = between (pstring "[") (pstring "]") pcommands


let prepeat = 
    pstring "repeat" >>. spaces1 >>. pfloat .>> spaces .>>. block
    |>> fun (n, commands) -> Repeat(int n, commands)

Putting this all together we can parse a simple circle drawing function:

> repeat 36 [forward 10 right 10]

However we cannot yet parse a repeat command within a repeat block, as the command parser does not reference the repeat command.

Forward references

To separate the definition of repeat’s parser function from its implementation we can use the createParserForwardedToRef function:

let prepeat, prepeatimpl = createParserForwardedToRef ()

Then we can define the choice of commands to include repeat:

let pcommand = pforward <|> pleft <|> pright <|> prepeat

And finally define the implementation of the repeat parser that refers to itself:

prepeatimpl := 
    pstring "repeat" >>. spaces1 >>. pfloat .>> spaces .>>. block
    |>> fun (n, commands) -> Repeat(int n, commands)

Allowing us to parse nested repeats, i.e.

> repeat 10 [right 36 repeat 5 [forward 54 right 72]]

Parses to:

> Repeat (10,[Turn 36; Repeat (5,[Forward 54; Turn 72])])

Interpreter

Evaluation of a program can now be easily achieved using pattern matching over the AST:

let rec perform turtle = function
    | Forward n ->
        let r = float turtle.A * Math.PI / 180.0
        let dx, dy = float n * cos r, float n * sin r
        let x, y =  turtle.X, turtle.Y
        let x',y' = x + dx, y + dy
        drawLine (x,y) (x',y')
        { turtle with X = x'; Y = y' }
    | Turn n -> { turtle with A=turtle.A + n }
    | Repeat(n,commands) ->
        let rec repeat turtle = function
            | 0 -> turtle
            | n -> repeat (performAll turtle commands) (n-1)
        repeat turtle n
and performAll = List.fold perform

Check out this snippet for the full implementation as a script: http://fssnip.net/nM

User Commands

Logo lets you define your own commands, e.g.

>  to square
     repeat 4 [forward 50 right 90]
   end
   to flower
     repeat 36 [right 10 square]
   end
   to garden
     repeat 25 [set-random-position flower]
   end

garden

The parser can be easily extended to support this, try the snippet: http://fssnip.net/nN

Small Basic

Small Basic is a Microsoft programming language also aimed at teaching kids, and also featuring turtle functionality. At the beginning of the year I wrote a short series of posts on writing an extended compiler for Small Basic:

The series starts with an AST, internal DSL and interpreter. Then it moves on to parsing the language with FParsec and compiling the AST to IL code using Reflection.Emit. Finally the series ends with extensions for functions with arguments and support for tuples and pattern matching. It’s a fairly short hop from implementing Logo to implementing a larger language like Small Basic.

Parsing C#

A few weeks later as an experiment I knocked up an AST and parser for a fairly large subset of C#, which shares much of the imperative core of Small Basic: http://fssnip.net/lf

Check out Neil Danson’s blog on building a C# compiler in F# to see C# compiled to IL using a similar AST.

DDD North: Write your own compiler in 24 hours

If you’re interested in learning more, I’ll be speaking at DDD North in Leeds on Saturday 18th October about how to write your own compiler in 24 hours.

Categories: Programming

The Web Search API is Retiring

Google Code Blog - Fri, 08/29/2014 - 13:00
Posted by Dan Ciruli, Product Manager


On November 1, 2010, we announced the deprecation of the Web Search API. As per our policy at the time, we supported the API for a three year period (and beyond), but as all things come to an end, so has its deprecation window.

We are now announcing the turndown of the Web Search API: the service will cease operations on September 29th, 2014. You may wish to look at our Custom Search API (note: it has a free quota of 100 queries per day).
Categories: Programming

R: dplyr – group_by dynamic or programmatic field / variable (Error: index out of bounds)

Mark Needham - Fri, 08/29/2014 - 10:13

In my last blog post I showed how to group timestamp based data by week, month and quarter and by the end we had the following code samples using dplyr and zoo:

library(RNeo4j)
library(zoo)
 
timestampToDate <- function(x) as.POSIXct(x / 1000, origin="1970-01-01", tz = "GMT")
 
query = "MATCH (:Person)-[:HAS_MEETUP_PROFILE]->()-[:HAS_MEMBERSHIP]->(membership)-[:OF_GROUP]->(g:Group {name: \"Neo4j - London User Group\"})
         RETURN membership.joined AS joinTimestamp"
meetupMembers = cypher(graph, query)
 
meetupMembers$joinDate <- timestampToDate(meetupMembers$joinTimestamp)
meetupMembers$monthYear <- as.Date(as.yearmon(meetupMembers$joinDate))
meetupMembers$quarterYear <- as.Date(as.yearqtr(meetupMembers$joinDate))
 
meetupMembers %.% group_by(week) %.% summarise(n = n())
meetupMembers %.% group_by(monthYear) %.% summarise(n = n())
meetupMembers %.% group_by(quarterYear) %.% summarise(n = n())

As you can see there’s quite a bit of duplication going on – the only thing that changes in the last 3 lines is the name of the field that we want to group by.

I wanted to pull this code out into a function and my first attempt was this:

groupMembersBy = function(field) {
  meetupMembers %.% group_by(field) %.% summarise(n = n())
}

And now if we try to group by week:

> groupMembersBy("week")
 Error: index out of bounds

It turns out if we want to do this then we actually want the regroup function rather than group_by:

groupMembersBy = function(field) {
  meetupMembers %.% regroup(list(field)) %.% summarise(n = n())
}

And now if we group by week:

> head(groupMembersBy("week"), 20)
Source: local data frame [20 x 2]
 
         week n
1  2011-06-02 8
2  2011-06-09 4
3  2011-06-16 1
4  2011-06-30 2
5  2011-07-14 1
6  2011-07-21 1
7  2011-08-18 1
8  2011-10-13 1
9  2011-11-24 2
10 2012-01-05 1
11 2012-01-12 3
12 2012-02-09 1
13 2012-02-16 2
14 2012-02-23 4
15 2012-03-01 2
16 2012-03-08 3
17 2012-03-15 5
18 2012-03-29 1
19 2012-04-05 2
20 2012-04-19 1

Much better!
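
The function is now reusable for any of the date fields too. For example, grouping by month gives the same numbers as the monthly grouping from my previous post:

> head(groupMembersBy("monthYear"), 3)
Source: local data frame [3 x 2]
 
   monthYear  n
1 2011-06-01 13
2 2011-07-01  4
3 2011-08-01  1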

Categories: Programming

Inspirational Work Quotes at a Glance

What if your work could be your ultimate platform? … your ultimate channel for your growth and greatness?

We spend a lot of time at work. 

For some people, work is their ultimate form of self-expression.

For others, work is a curse.

Nobody stops you from using work as a chance to challenge yourself, to grow your skills, and become all that you’re capable of.

But that’s a very different mindset than work is a place you have to go to, or stuff you have to do.

When you change your mind, you change your approach.  And when you change your approach, you change your results.   But rather than just try to change your mind, the ideal scenario is to expand your mind, and become more resourceful.

You can do so with quotes.

Grow Your “Work Intelligence” with Inspirational Work Quotes

In fact, you can actually build your “work intelligence.”

Here are a few ways to think about “intelligence”:

  1. the ability to learn or understand things or to deal with new or difficult situations (Merriam Webster)
  2. the more distinctions you have for a given concept, the more intelligence you have

In Rich Dad, Poor Dad, Robert Kiyosaki, says, “intelligence is the ability to make finer distinctions.”   And, Tony Robbins, says “intelligence is the measure of the number and the quality of the distinctions you have in a given situation.”

If you want to grow your “work intelligence”, one of the best ways is to familiarize yourself with the best inspirational quotes about work.

By drawing from the wisdom of the ages and modern sages, you can operate at a higher level and turn work from a chore into a platform for lifelong learning, a dojo for personal growth, and a chance to master your craft.

You can use inspirational quotes about work to fill your head with ideas, distinctions, and key concepts that help you unleash what you’re capable of.

To give you a giant head start and to help you build a personal library of profound knowledge, here are two work quotes collections you can draw from:

37 Inspirational Quotes for Work as Self-Expression

Inspirational Work Quotes

10 Distinct Ideas for Thinking About Your Work

Let’s practice.   This will only take a minute, and if you happen to hear the right words, the ones that are the keys for you, your insight or “ah-ha” can be just the breakthrough you needed to get more out of your work, and, as a result, more out of life (or at least your moments).

Here is a sample of distinct ideas, with some depth, that you can use to change how you perceive your work, and/or how you do your work:

  1. “Either write something worth reading or do something worth writing.” — Benjamin Franklin
  2. “You don’t get paid for the hour. You get paid for the value you bring to the hour.” — Jim Rohn
  3. “Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do.” — Steve Jobs
  4. “Measuring programming progress by lines of code is like measuring aircraft building progress by weight.” -- Bill Gates
  5. “We must each have the courage to transform as individuals. We must ask ourselves, what idea can I bring to life? What insight can I illuminate? What individual life could I change? What customer can I delight? What new skill could I learn? What team could I help build? What orthodoxy should I question?” – Satya Nadella
  6. “My work is a game, a very serious game.” — M. C. Escher
  7. “Hard work is a prison sentence only if it does not have meaning. Once it does, it becomes the kind of thing that makes you grab your wife around the waist and dance a jig.” — Malcolm Gladwell
  8. “The test of the artist does not lie in the will with which he goes to work, but in the excellence of the work he produces.” -- Thomas Aquinas
  9. “Are you bored with life? Then throw yourself into some work you believe in with all you heart, live for it, die for it, and you will find happiness that you had thought could never be yours.” — Dale Carnegie
  10. “I like work; it fascinates me. I can sit and look at it for hours.” -– Jerome K. Jerome

For more ideas, take a stroll through my inspirational work quotes.

As you can see, there are lots of ways to think about work and what it means.  At the end of the day, what matters is how you think about it, and what you make of it.  It’s either an investment, or it’s an incredible waste of time.  You can make it mundane, or you can make it matter.

The Pleasant Life, The Good Life, and The Meaningful Life

Here’s another surprise about work.   You can use work to live the good life.   According to Martin Seligman, a master in the art and science of positive psychology, there are three paths to happiness:

  1. The Pleasant Life
  2. The Good Life
  3. The Meaningful Life

In The Pleasant Life, you simply try to have as much pleasure as possible.  In The Good Life, you spend more time in your values.  In The Meaningful Life, you use your strengths in the service of something that is bigger than you are.

There are so many ways you can live your values at work and connect your work with what makes you come alive.

There are so many ways to turn what you do into service for others and become a part of something that’s bigger than you.

If you haven’t figured out how yet, then dig deeper, find a mentor, and figure it out.

You spend way too much time at work to let your influence and impact fade to black.

You Might Also Like

40 Hour Work Week at Microsoft

Agile Avoids Work About Work

How Employees Lost Empathy for Their Work, for the Customer, and for the Final Product

Satya Nadella on Live and Work a Meaningful Life

Short-Burst Work

Categories: Architecture, Programming

R: Grouping by week, month, quarter

Mark Needham - Fri, 08/29/2014 - 01:25

In my continued playing around with R and meetup data I wanted to have a look at when people joined the London Neo4j group based on week, month or quarter of the year to see when they were most likely to do so.

I started with the following query to get back the join timestamps:

library(RNeo4j)
query = "MATCH (:Person)-[:HAS_MEETUP_PROFILE]->()-[:HAS_MEMBERSHIP]->(membership)-[:OF_GROUP]->(g:Group {name: \"Neo4j - London User Group\"})
         RETURN membership.joined AS joinTimestamp"
meetupMembers = cypher(graph, query)
 
> head(meetupMembers)
      joinTimestamp
1 1.376572e+12
2 1.379491e+12
3 1.349454e+12
4 1.383127e+12
5 1.372239e+12
6 1.330295e+12

The first step was to convert the join timestamp into a joinDate in a nicer format that we can use in R more easily:

timestampToDate <- function(x) as.POSIXct(x / 1000, origin="1970-01-01", tz = "GMT")
meetupMembers$joinDate <- timestampToDate(meetupMembers$joinTimestamp)
 
> head(meetupMembers)
  joinTimestamp            joinDate
1  1.376572e+12 2013-08-15 13:13:40
2  1.379491e+12 2013-09-18 07:55:11
3  1.349454e+12 2012-10-05 16:28:04
4  1.383127e+12 2013-10-30 09:59:03
5  1.372239e+12 2013-06-26 09:27:40
6  1.330295e+12 2012-02-26 22:27:00

Much better!

I started off with grouping by month and quarter and came across the excellent zoo library which makes it really easy to transform dates:

library(zoo)
meetupMembers$monthYear <- as.Date(as.yearmon(meetupMembers$joinDate))
meetupMembers$quarterYear <- as.Date(as.yearqtr(meetupMembers$joinDate))
 
> head(meetupMembers)
  joinTimestamp            joinDate  monthYear quarterYear
1  1.376572e+12 2013-08-15 13:13:40 2013-08-01  2013-07-01
2  1.379491e+12 2013-09-18 07:55:11 2013-09-01  2013-07-01
3  1.349454e+12 2012-10-05 16:28:04 2012-10-01  2012-10-01
4  1.383127e+12 2013-10-30 09:59:03 2013-10-01  2013-10-01
5  1.372239e+12 2013-06-26 09:27:40 2013-06-01  2013-04-01
6  1.330295e+12 2012-02-26 22:27:00 2012-02-01  2012-01-01

The next step was to create a new data frame which grouped the data by those fields. I’ve been learning dplyr as part of Udacity’s EDA course so I thought I’d try and use that:

> head(meetupMembers %.% group_by(monthYear) %.% summarise(n = n()), 20)
 
    monthYear  n
1  2011-06-01 13
2  2011-07-01  4
3  2011-08-01  1
4  2011-10-01  1
5  2011-11-01  2
6  2012-01-01  4
7  2012-02-01  7
8  2012-03-01 11
9  2012-04-01  3
10 2012-05-01  9
11 2012-06-01  5
12 2012-07-01 16
13 2012-08-01 32
14 2012-09-01 14
15 2012-10-01 28
16 2012-11-01 31
17 2012-12-01  7
18 2013-01-01 52
19 2013-02-01 49
20 2013-03-01 22
> head(meetupMembers %.% group_by(quarterYear) %.% summarise(n = n()), 20)
 
   quarterYear   n
1   2011-04-01  13
2   2011-07-01   5
3   2011-10-01   3
4   2012-01-01  22
5   2012-04-01  17
6   2012-07-01  62
7   2012-10-01  66
8   2013-01-01 123
9   2013-04-01 139
10  2013-07-01 117
11  2013-10-01  94
12  2014-01-01 266
13  2014-04-01 359
14  2014-07-01 216

Grouping by week number is a bit trickier but we can do it with a bit of transformation on our initial timestamp:

meetupMembers$week <- as.Date("1970-01-01")+7*trunc((meetupMembers$joinTimestamp / 1000)/(3600*24*7))
 
> head(meetupMembers %.% group_by(week) %.% summarise(n = n()), 20)
 
         week n
1  2011-06-02 8
2  2011-06-09 4
3  2011-06-16 1
4  2011-06-30 2
5  2011-07-14 1
6  2011-07-21 1
7  2011-08-18 1
8  2011-10-13 1
9  2011-11-24 2
10 2012-01-05 1
11 2012-01-12 3
12 2012-02-09 1
13 2012-02-16 2
14 2012-02-23 4
15 2012-03-01 2
16 2012-03-08 3
17 2012-03-15 5
18 2012-03-29 1
19 2012-04-05 2
20 2012-04-19 1

We can then plug that data frame into ggplot if we want to track membership sign-up over time at different levels of granularity, and create some bar charts or scatter plots depending on what we feel like!
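
For example, a bar chart of new members per month would be a small variation on the code above (a sketch, assuming ggplot2 is loaded):

library(ggplot2)
# New members per month, reusing the grouped data frame from above:
ggplot(data = meetupMembers %.% group_by(monthYear) %.% summarise(n = n()), 
       aes(x = monthYear, y = n)) + 
  ylab("Number of members") +
  xlab("Date") +
  geom_bar(stat = "identity")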

Categories: Programming

CocoaHeadsNL @ Xebia on September 16th

Xebia Blog - Thu, 08/28/2014 - 11:20

On Tuesday the 16th the Dutch CocoaHeads will be visiting us. It promises to be a great night for anybody doing iOS or OSX development. The night starts at 17:00, dinner at 18:00.

Are you an iOS/OSX developer who would like to meet fellow developers? Come join the CocoaHeads on September 16th at our office. More details are on the CocoaHeadsNL meetup page.

What is your next step in Continuous Delivery? Part 1

Xebia Blog - Wed, 08/27/2014 - 21:15

Continuous Delivery helps you deliver software faster, with better quality and at lower cost. Who doesn't want to deliver software faster, better and cheaper? I certainly want that!

No matter how good you are at Continuous Delivery, you can always do one step better. Even if you are as good as Google or Facebook, you can still do one step better. Myself included, I can do one step better.

But also if you are just getting started with Continuous Delivery, there is a feasible step to take you forward.

In this series, I describe a plan that helps you determine where you are right now and what your next step should be. To be complete, I'll start at the very beginning. I expect most of you have passed the first steps already.

The steps you already took

This is the first part in the series: What is your next step in Continuous Delivery? I'll start with three steps combined in a single post, because the great majority of you have gone through these steps already.

Step 0: Your very first lines of code

Do you remember the very first lines of code you wrote? Perhaps as a student or maybe before that as a teenager? Did you use version control? Did you bring it to a test environment before going to production? I know I did not.

None of us was born with an innate skill for delivering software in a certain way. However, many of us are taught a certain way of delivering software that is still a long way from Continuous Delivery.

Step 1: Version control

At some point during your study or career, you have been introduced to Version Control. I remember starting with CVS, migrating to Subversion and currently using Git. Each of these systems is an improvement over the previous one.

It is common to store the source code for your software in version control. Do you already have definitions or scripts for your infrastructure in version control? And for your automated acceptance tests or database schemas? In later steps, we'll get back to that.

Step 2: Release process

Your current release process may be far from Continuous Delivery. Despite appearances, your current release process is a useful step towards Continuous Delivery.

Even if you deliver to production less than twice a year, you are better off than a company that delivers their code unpredictably, untested and unmanaged. Or worse, a company that edits their code directly on a production machine.

In your delivery process, you have planning, control, a production-like testing environment, actual testing and maintenance after the go-live. The main difference with Continuous Delivery is the frequency and the amount of software that is released at the same time.

So yes, a release process is a productive step towards Continuous Delivery. Now let's see if we can optimize beyond this manual release process.

Step 3: Scripts

Imagine you have issues on your production server... Who do you go to for help? Do you have someone in mind?

Let me guess, you are thinking about a middle-aged guy who has been working at your organisation for 10+ years. Even if your organization is only 3 years old, I bet he's been working there for more than 10 years. Or at least, it seems like it.

My next guess is that this guy wrote some scripts to automate recurring tasks and make his life easier. Am I right?

These scripts are an important step towards Continuous Delivery. In fact, Continuous Delivery is all about automating repetitive tasks. The only thing that falls short is that these scripts are a one-man initiative. It is a good initiative, but there is no strategy behind it and a lack of management support.

If you don't have this guy working for you, then you may have a bigger step to take when continuing towards the next stage of Continuous Delivery. To successfully adopt Continuous Delivery in the long run, you are going to need someone like him.

Following steps

In the next parts, we will look at the following steps towards becoming world champion delivering software:

  • Step 4: Continuous Delivery
  • Step 5: Continuous Deployment
  • Step 6: "Hands-off"
  • Step 7: High Scalability

Stay tuned for the following posts.

How To Use Personas and Scenarios to Drive Adoption and Realize Value

Personas and scenarios can be a powerful tool for driving adoption and business value realization.

All too often, people deploy technology without fully understanding the users that it’s intended for. 

Worse, if the technology does not get used, the value does not get realized.

Keep in mind that the value is in the change.  

The change takes the form of doing something better, faster, cheaper, and behavior change is really the key to value realization.

If you deploy a technology, but nobody adopts it, then you won’t realize the value.  It’s a waste.  Or, more precisely, it’s only potential value.  It’s only potential value because nobody has used it to change their behavior to be better, faster, or cheaper with the new technology.  

In fact, you can view change in terms of behavior changes:

What should users START doing or STOP doing, in order to realize the value?

Behavior change becomes a useful yardstick for evaluating adoption and consumption of technology, and a significant proxy for value realization.

What is a Persona?

I’ve written about personas before  in Actors, Personas, and Roles, MSF Agile Persona Template, and Personas at patterns & practices, and Microsoft Research has a whitepaper called Personas: Practice and Theory.

A persona, simply defined, is a fictitious character that represents a user type.  Personas are the “who” in the organization.  You use them to create familiar faces and to inspire project teams to know their clients, as well as to build empathy and clarity around the user base. 

Using personas helps characterize sets of users.  It’s a way to capture and share details about what a typical day looks like and what sorts of pains, needs, and desired outcomes the personas have as they do their work. 

You need to know how work currently gets done so that you can provide relevant changes with technology, plan for readiness, and drive adoption through specific behavior changes.

Using personas can help you realize more value, while avoiding “value leakage.”

What is a Scenario?

When it comes to users, and what they do, we're talking about usage scenarios.  A usage scenario is a story or narrative in the form of a flow.  It shows how one or more users interact with a system to achieve a goal.

You can picture usage scenarios as high-level storyboards.  Here is an example:

[Image: example solution storyboard]

In fact, since scenario is often an overloaded term, if people get confused, I just call them Solution Storyboards.

To figure out relevant usage scenarios, we need to figure out the personas that we are creating solutions for.

Workforce Analysis with Personas

In practice, you would segment the user population, and then assign personas to the different user segments.  For example, let’s say there are 20,000 employees.  Let’s say that 3,000 of them are business managers, let’s say that 6,000 of them are sales people.  Let’s say that 1,000 of them are product development engineers.   You could create a persona named Mary to represent the business managers, a persona named Sally to represent the sales people, and a persona named Bob to represent the product development engineers.

This sounds simple, but it’s actually powerful.  If you do a good job of workforce analysis, you can better determine how many users a particular scenario is relevant for.  Now you have some numbers to work with.  This can help you quantify business impact.   This can also help you prioritize.  If a particular scenario is relevant for 10 people, but another is relevant for 1,000, you can evaluate actual numbers.

                   Persona 1   Persona 2   Persona 3   Persona 4   Persona 5
                   "Mary"      "Sally"     "Bob"       "Jill"      "Jack"
User Population    3,000       6,000       1,000       5,000       5,000
Scenario 1         X
Scenario 2         X           X
Scenario 3                                 X
Scenario 4                                             X           X
Scenario 5         X
Scenario 6         X           X           X           X           X
Scenario 7         X           X
Scenario 8                                 X           X
Scenario 9         X           X           X           X           X
Scenario 10                    X                       X

Analyzing a Persona

Let’s take Bob for example.  As a product development engineer, Bob designs and develops new product concepts.  He would love to collaborate better with his distributed development team, and he would love better feedback loops and interaction with real customers.

We can drill in a little bit to get a better picture of his work as a product development engineer. 

Here are a few ways you can drill in:

  • A Day in the Life – We can shadow Bob for a day and get a feel for the nature of his work.  We can create  a timeline for the day and characterize the types of activities that Bob performs.
  • Knowledge and Skills - We can identify the knowledge Bob needs and the types of skills he needs to perform his job well.  We can use this as input to design more effective readiness plans.
  • Enabling Technologies –  Based on the scenario you are focused on, you can evaluate the types of technologies that Bob needs.  For example, you can identify what technologies Bob would need to connect and interact better with customers.

Another approach is to focus on the roles, responsibilities, challenges, work-style, needs and wants.  This helps you understand which solutions are appropriate, what sort of behavior changes would be involved, and how much readiness would be required for any significant change.

At the end of the day, it always comes down to building empathy, understanding, and clarity around pains, needs, and desired outcomes.

Persona Creation Process

Here’s an example of a high-level process for persona creation:

  1. Kickoff workshop
  2. Interview users
  3. Create skeletons
  4. Validate skeletons
  5. Create final personas
  6. Present final personas

Doing persona analysis is actually pretty simple.  The challenge is that people don’t do it, or they make a lot of assumptions about what people actually do and what their pains and needs really are.  When’s the last time somebody asked you what your pains and needs are, or what you need to perform your job better?

A Story of Using Personas to Create the Future of Digital Banking

In one example I know of, a large bank transformed itself by focusing on its personas and scenarios.

It started with one usage scenario:

Connect with customers wherever they are.

This scenario was driven from pain in the business.  The business was out of touch with customers, and it was operating under a legacy banking model.  This simple scenario reflected an opportunity to change how employees connect with customers (through Cloud, Mobile, and Social).

On the customer side of the equation, customers could now have virtual face-to-face communication from wherever they are.  On the employee side, it enabled a flexible work-style, helped employees pair up with each other for great customer service, and provided better touch and connection with the customers they serve.

And in the grand scheme of things, this helped transform a brick-and-mortar bank to a digital bank of the future, setting a new bar for convenience, connection, and collaboration.

Here is a video that talks through the story of one bank’s transformation to the digital banking arena:

Video: NedBank on The Future of Digital Banking

In the video, you’ll see Blessing Sibanyoni, one of Microsoft’s Enterprise Architects in action.

If you’re wondering how to change the world, you can start with personas and scenarios.

You Might Also Like

Scenarios in Practice

How I Learned to Use Scenarios to Evaluate Things

How Can Enterprise Architects Drive Business Value the Agile Way?

Business Scenarios for the Cloud

IT Scenarios for the Cloud

Categories: Architecture, Programming

Episode 208: Randy Shoup on Hiring in the Software Industry

With this episode, Software Engineering Radio begins a series of interviews on social/nontechnical aspects of working as a software engineer as Tobias Kaatz talks to Randy Shoup, former CTO at KIXEYE, about hiring in the software industry. Prior to KIXEYE, Randy worked as director of engineering at Google for the Google App Engine and as […]
Categories: Programming

Synchronize the Team

Xebia Blog - Tue, 08/26/2014 - 13:52

How can you, as a scrum master, improve the chances that the scrum team has a common vision and understanding of both the user story and the solution, from the start until the end of the sprint?   

The problem

The planning session is where the team should synchronize on understanding the user story and agree on how to build the solution. But there is no real validation that all the team members are on the same page. The team tends to dive into the technical details quite fast in order to identify and size the tasks. The technical details are often discussed by only a few team members, with little or no functional or business context. Once the team leaves the session, there is no guarantee that they remain synchronized as the sprint progresses.

The only other team synchronization ritual, prescribed by the scrum process, is the daily scrum or stand-up. In most teams the daily scrum is as short as possible, avoiding semantic discussions. I also prefer the stand-ups to be short and sweet. So how can you or the team determine that the team is (still) synchronized?

Specify the story

In the planning session, after a story is considered ready enough to be pulled into the sprint, we start analyzing the story. This is the specification part, using a technique called ‘Specification by Example’. The idea is to write testable functional specifications with actual examples. We decompose the story into specifications and define the conditions of failure and success with examples, so they can be tested. Thinking of examples makes the specification more concrete and the interpretation of the requirements more specific.

Having the whole team work out the specifications and examples helps the team stay focused on the functional part of the story longer and in more detail, before shifting mindsets to the development tasks. Writing the specifications will also help to determine whether a story is ready enough. As the sprint progresses and the tests turn green, the functional part of the story can be considered done.

You can use a tool like FitNesse or Cucumber to write testable specifications. The tests are run against the actual code, so they provide an accurate view of the progress. When all the tests pass, the team has successfully created the functionality. In addition to the scrum board and burn-down charts, the functional tests provide a good and accurate view of the sprint progress.
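
As a sketch of what a testable specification can look like in code (here as xUnit-style parameterized tests in C#, standing in for a FitNesse or Cucumber table; the discount rule and all numbers are invented for illustration):

using Xunit;

// Invented example rule: orders of 100 or more get a 10% discount.
// Each InlineData row is one concrete example the whole team agreed on,
// so the specification doubles as an executable test.
public class DiscountSpecification
{
    [Theory]
    [InlineData(99, 0)]    // just below the threshold: no discount
    [InlineData(100, 10)]  // at the threshold: 10% discount
    [InlineData(250, 25)]  // well above the threshold
    public void Orders_of_100_or_more_get_a_10_percent_discount(
        int orderTotal, int expectedDiscount)
    {
        Assert.Equal(expectedDiscount, DiscountCalculator.Calculate(orderTotal));
    }
}

// A trivial implementation to keep the sketch self-contained.
public static class DiscountCalculator
{
    public static int Calculate(int orderTotal) =>
        orderTotal >= 100 ? orderTotal / 10 : 0;
}

When all the examples pass, the functional part of the story is arguably done; when one fails, the team has a concrete, shared artefact to discuss.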

Design the solution

Once the story has been decomposed into clear and testable specifications, we start creating a design on a whiteboard. The main goal is to create a shared, visible understanding of the solution, so avoid (technical) details that lead to big up-front designs and lose the involvement of the less technical members of the team. You can use whatever format works for your team (e.g. UML), but be sure it is comprehensible by everybody on the team.

The creation of the design, as an effort by the whole team, tends to spark discussion. Instead of relying on the consistency of non-visible mental images in the heads of team members, there is a tangible image shared with everyone.

The whiteboard design will be a good starting point for refinement as the team gains insight during the sprint. The whiteboard should always be visible and within reach of the team during the sprint. Using a whiteboard makes it easy to adapt or complement the design. You’ll notice the team standing around the whiteboard or pointing to it in discussions quite often.

The design can easily be turned into a digital artefact by taking a photo of it. A digital copy can be valuable to anyone wanting to learn the system in the future. The design could also be used in the sprint demo, should the audience be interested in a technical overview.

Conclusion

The team now leaves the sprint planning with a set of functional tests and a whiteboard design. The tests are useful to validate and synchronize on the functional goals. The whiteboard designs are useful to validate and synchronize on the technical goals. The shared understanding of the team is more visible and can be validated throughout the sprint. The team has become more transparent.

It might be a good practice to have the developers write the specifications, and the testers or analysts draw the designs on the board. This provokes more communication, by getting people out of their comfort zone and forcing them to ask more questions.

There are more compelling reasons to implement (or not) something like Specification by Example or to have the team make design overviews. But it also helps the team to stay on the same page, when there are visible and testable artefacts to rely on during the sprint.

C# Records & Pattern Matching Proposal

Phil Trelford's Array - Mon, 08/25/2014 - 21:27

Following on from VB.Net’s new basic pattern matching support, the C# team has recently put forward a proposal for record types and pattern matching in C# which was posted in the Roslyn discussion area on CodePlex:

Pattern matching extensions for C# enable many of the benefits of algebraic data types and pattern matching from functional languages, but in a way that smoothly integrates with the feel of the underlying language. The basic features are: records, which are types whose semantic meaning is described by the shape of the data; and pattern matching, which is a new expression form that enables extremely concise multilevel decomposition of these data types. Elements of this approach are inspired by related features in the programming languages F# and Scala.

There has been a very active discussion on the forum ever since, particularly around syntax.

Background

Algebraic types and pattern matching have been a core language feature in functional-first languages like ML (early 70s), Miranda (mid 80s), Haskell (early 90s) and F# (mid 00s).

I like to think of records as part of a succession of data types in a language:

Scalar – Single values

    let width = 1.0
    let height = 2.0

Tuple – Multiple values

    // Tuple of float * float
    let rect = (1.0, 2.0)

Record – Multiple named fields

    type Rect = {Width:float; Height:float}
    let rect = {Width=1.0; Height=2.0}

Sum type (single case) – Tagged tuple

    type Rect = Rect of float * float
    let rect = Rect(1.0, 2.0)

Sum type (named fields) – Tagged tuple with named fields

    type Rect = Rect of width:float * height:float
    let rect = Rect(width=1.0, height=2.0)

Sum type (multi case) – Union of tagged tuples

    type Shape =
        | Circle of radius:float
        | Rect of width:float * height:float

Note: in F# sum types are also often referred to as discriminated unions or union types, and in functional programming circles algebraic data types tend to refer to tuples, records and sum types.

Thus in the ML family of languages records are like tuples with named fields. That is, where you use a tuple you could equally use a record instead to add clarity, but at the cost of defining a type. C#’s anonymous types fit a similar lightweight data type space, but as there is no type definition their scope is limited (pun intended).

For the most part I find myself pattern matching over tuples and sum types in F# (or in Erlang simply using tuples where the first element is the tag to give a similar effect).

Sum Types

The combination of sum types and pattern matching is for me one of the most compelling features of functional programming languages.

Sum types allow complex data structures to be succinctly modelled in just a few lines of code, for example here’s a concise definition for a generic tree:

type 'a Tree =
    | Tip
    | Node of 'a * 'a Tree * 'a Tree

Using pattern matching the values in a tree can be easily summed:

let rec sumTree tree =
    match tree with
    | Tip -> 0
    | Node(value, left, right) ->
        value + sumTree(left) + sumTree(right)

The technique scales up easily to domain models, for example here’s a concise definition for a retail store:

/// Assumed primitive aliases so the sample compiles (the original elides them)
type Code = string
type Name = string
type Price = decimal
type Quantity = int
type Amount = decimal

/// For every product, we store code, name and price
type Product = Product of Code * Name * Price

/// Different options of payment
type TenderType = Cash | Card | Voucher

/// Represents scanned entries at checkout
type LineItem = 
  | Sale of Product * Quantity
  | Cancel of int
  | Tender of Amount * TenderType

Class Hierarchies versus Pattern Matching

In class-based programming languages like C# and Java, classes are the primary data type, where (frequently mutable) data and related methods are intertwined. Hierarchies of related types are typically described via inheritance. Inheritance makes it relatively easy to add new types, but adding new methods or behaviour usually requires visiting the entire hierarchy. That said, the compiler can help here by emitting an error if a required method is not implemented.

Sum types also describe related types, but data is typically separated from functions, where functions employ pattern matching to handle separate cases. This pattern matching based approach makes it easier to add new functions, but adding a new case may require visiting all existing functions. Again the compiler helps here by emitting a warning if a case is not covered.

Another subtle advantage of using sum types is being able to see the behaviour for all cases in a single place, which can be helpful for readability. This may also help when attempting to separate concerns: for example, if we want to add a method to print to a device to a hierarchy of classes in C#, we could end up adding printer-related dependencies to all related classes. With a sum type, the printer functionality and related dependencies are more naturally encapsulated in a single module, as the sketch below illustrates.
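
To make the trade-off concrete, here is a small sketch using an invented Shape example. It uses the type patterns that later shipped in C# 7 rather than the proposal’s record syntax, so it compiles today:

using System;

// Class-based approach: behaviour lives on each class, so adding a new
// operation (say, printing) means touching the whole hierarchy and pulling
// its dependencies into every class.
abstract class Shape
{
    public abstract double Area();
}

class Circle : Shape
{
    public double Radius;
    public override double Area() => Math.PI * Radius * Radius;
}

class Rectangle : Shape
{
    public double Width, Height;
    public override double Area() => Width * Height;
}

// Sum-type style: the data stays plain and each function handles all cases
// in one place. Note there is no exhaustiveness warning here, which is the
// safety feature discussed under Language Design below.
static class ShapeFunctions
{
    public static double Area(Shape shape)
    {
        switch (shape)
        {
            case Circle c: return Math.PI * c.Radius * c.Radius;
            case Rectangle r: return r.Width * r.Height;
            default: throw new ArgumentException("Unhandled shape");
        }
    }
}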

In F# you have the choice of class-based inheritance or sum types and can choose in-situ. In practice most people appear to use sum types most of the time.

C# Case Classes

The C# proposal starts with a simple “record” type definition:

public record class Cartesian(double x: X, double y: Y);

Which is not too dissimilar to an F# record definition, i.e.:

type Cartesian = { X: double; Y: double }

However from there it then starts to differ quite radically. The C# proposal allows a “record” to inherit from another class, in effect allowing sum types to be defined, i.e.:

abstract class Expr; 
record class X() : Expr; 
record class Const(double Value) : Expr; 
record class Add(Expr Left, Expr Right) : Expr; 
record class Mult(Expr Left, Expr Right) : Expr; 
record class Neg(Expr Value) : Expr;

which allows pattern matching to be performed using an extended switch case statement:

switch (e) 
{ 
  case X(): return Const(1); 
  case Const(*): return Const(0); 
  case Add(var Left, var Right): 
    return Add(Deriv(Left), Deriv(Right)); 
  case Mult(var Left, var Right): 
    return Add(Mult(Deriv(Left), Right), Mult(Left, Deriv(Right))); 
  case Neg(var Value): 
    return Neg(Deriv(Value)); 
}

This is very similar to Scala case classes; in fact, change “record” to “case”, drop the semicolons, and voilà:

abstract class Term
case class Var(name: String) extends Term
case class Fun(arg: String, body: Term) extends Term
case class App(f: Term, v: Term) extends Term

To sum up, the proposed C# “record” classes appear to be case classes which support both single and multi case sum types.
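
For what it’s worth, this is roughly where the language eventually went: C# 9 shipped records, and positional patterns let the proposal’s derivative example be written as a runnable sketch along these lines (modern syntax, not the proposal’s):

using System;

public abstract record Expr;
public sealed record X : Expr;
public sealed record Const(double Value) : Expr;
public sealed record Add(Expr Left, Expr Right) : Expr;
public sealed record Mult(Expr Left, Expr Right) : Expr;
public sealed record Neg(Expr Value) : Expr;

public static class Calculus
{
    public static Expr Deriv(Expr e) => e switch
    {
        X => new Const(1),
        Const => new Const(0),
        Add(var l, var r) => new Add(Deriv(l), Deriv(r)),
        Mult(var l, var r) => new Add(new Mult(Deriv(l), r),
                                      new Mult(l, Deriv(r))),
        Neg(var v) => new Neg(Deriv(v)),
        _ => throw new ArgumentException("Unknown expression")
    };
}

For example, Deriv(new Add(new X(), new Const(3))) evaluates to Add(Const(1), Const(0)).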

Language Design

As someone who has to spend some of their time working in C#, and who feels more productive having concise types and pattern matching in their toolbox, overall I welcome this proposal.

From my years of experience using F#, I feel it would be nice to see a simple safety feature included: a way to mark what is in effect a sum type representation as exhaustive. This would allow compile-time checks to ensure that all cases have been covered in a switch/case statement, with a warning given otherwise.

Then again, I feel this is quite a radical departure from the style of implementation I’ve seen in C# codebases in the wild, to the point where it’s starting to look like an entirely different language… so if this feature does see the light of day, it is likely to get more exposure in C# shops working on greenfield projects.

Categories: Programming

Powerful New Messaging Features with GCM

Android Developers Blog - Mon, 08/25/2014 - 18:51

By Subir Jhanb, Google Cloud Messaging team

Developers from all segments are increasingly relying on Google Cloud Messaging (GCM) to handle their messaging needs and make sure that their apps stay battery-friendly. GCM has been experiencing incredible momentum, with more than 100,000 apps registered, 700,000 QPS, and 300% QPS growth over the past year.

At Google I/O we announced the general availability of several GCM capabilities, including the GCM Cloud Connection Server, User Notifications, and a new API called Delivery Receipt. This post highlights the new features and how you can use them in your apps. You can watch these and other GCM announcements at our I/O presentation.

Two-way XMPP messaging with Cloud Connection Server

XMPP-based Cloud Connection Server (CCS) provides a persistent, asynchronous, bidirectional connection to Google servers. You can use the connection to send and receive messages between your server and your users' GCM-connected devices. Apps can now send upstream messages using CCS, without needing to manage network connections. This helps keep battery and data usage to a minimum. You can establish up to 100 XMPP connections and have up to 100 outstanding messages per connection. CCS is available for both Android and Chrome.
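
For a feel of the wire format, here is a minimal C# sketch that builds the JSON payload a server wraps in an XMPP stanza for CCS. It is only a sketch of the message shape: the registration id and data values are placeholders, and a real server would use a proper XMPP library to manage the connection to CCS rather than concatenating strings.

using System;

class CcsMessageSketch
{
    // Builds a downstream <message> stanza for CCS. The JSON fields shown
    // ("to", "message_id", "data", "delivery_receipt_requested") follow the
    // documented CCS message format; everything else is illustrative.
    static string BuildDownstreamStanza(string registrationId, string messageId)
    {
        string json =
            "{" +
            $"\"to\":\"{registrationId}\"," +
            $"\"message_id\":\"{messageId}\"," +
            "\"data\":{\"alert\":\"You have a new friend request\"}," +
            "\"delivery_receipt_requested\":true" +
            "}";

        return "<message id=\"\">" +
               "<gcm xmlns=\"google:mobile:data\">" + json + "</gcm>" +
               "</message>";
    }

    static void Main() =>
        Console.WriteLine(BuildDownstreamStanza("REGISTRATION_ID", "m-001"));
}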

User notifications managed across multiple devices

Nowadays users have multiple devices and hence receive the same notification multiple times. This can turn notifications from a useful feature into an annoyance. Thankfully, the GCM User Notifications API provides a convenient way to reach all devices for a user and helps you synchronise notifications, including dismissals: when the user dismisses a notification on one device, the notification disappears automatically from all the other devices. User Notifications is available on both HTTP and XMPP.
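
As a sketch of what sending to a user (rather than a single device) can look like over the HTTP endpoint, assuming you have already generated a notification_key for the user’s devices (the API key and notification key below are placeholders):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class UserNotificationSender
{
    static async Task Main()
    {
        using (var http = new HttpClient())
        {
            // Server API key from the Google Developers Console (placeholder)
            http.DefaultRequestHeaders.TryAddWithoutValidation(
                "Authorization", "key=YOUR_API_KEY");

            // Addressing the notification_key reaches all of the user's devices
            var body = "{\"to\":\"NOTIFICATION_KEY\"," +
                       "\"data\":{\"alert\":\"New message\"}}";

            var response = await http.PostAsync(
                "https://android.googleapis.com/gcm/send",
                new StringContent(body, Encoding.UTF8, "application/json"));

            Console.WriteLine(response.StatusCode);
        }
    }
}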

Insight into message status through delivery receipts

When sending messages to a device, a common request from developers is to get more insight into the state of the message and to know whether it was delivered. This is now available using CCS with the new Delivery Receipt API. A receipt is sent as soon as the message is sent to the endpoint, and you can also use upstream messaging for app-level delivery receipts.

How to get started

If you’re already using GCM, you can take advantage of these new features right away. If you haven't used GCM yet, you’ll be surprised at how easy it is to set up — get started today! And remember, GCM is completely free no matter how big your messaging needs are.

To learn more about GCM and its new features — CCS, user notifications, and Delivery Receipt — take a look at the I/O Bytes video below and read our developer documentation.




Categories: Programming

How Can Enterprise Architects Drive Business Value the Agile Way?

An Enterprise Architect can have a tough job when it comes to driving value to the business.   With multiple stakeholders, multiple moving parts, and a rapid rate of change, delivering value is tough enough.   But what if you want to accelerate value and maximize business impact?

Enterprise Architects can borrow a few concepts from the Agile world to be much more effective in today’s world.

A Look Back at How Agile Helped Connect Development to Business Impact …

First, let’s take a brief look at traditional development and how it evolved.  Traditionally, IT departments focused on delivering value to the business by shipping big bang projects.   They would plan it, build it, test it, and then release it.   The measure of success was on time, on budget.   

Few projects ever shipped on time.  Few were ever on budget.  And very few ever met the requirements of the business.

Then along came Agile approaches and they changed the game.

One of the most important ideas was a shift away from thick requirements documentation to user stories.  Developers got customers telling stories about what they wanted the future solution to do.  For example, a user story for a sales representative might look like this:

“As a sales rep, I want to see my customer’s account information so that I can identify cross-sell and upsell opportunities.” 

The use of user stories accomplished several things.   First, user stories got the development teams talking to the business users.  Rather than throwing documents back and forth, people started having face-to-face communication to understand the user stories.  Second, user stories helped chunk bigger units of value down into smaller units of value.  Rather than a big bang project where all the value is promised at the end of some long development cycle, a development team could now ship the solution in increments, where each increment was a prioritized set of stories.   The user stories effectively created a shared language for value.

Third, it made it easier to test the delivery of value.  Now the user and the development team could test the solution against the user stories and acceptance criteria.  If the story met acceptance criteria, the user would acknowledge that the value was delivered.  In this way, the user stories created both a validation mechanism and a feedback loop for delivering and acknowledging value.
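
To make the validation loop concrete, here’s a hypothetical acceptance test for the sales-rep story above, in C#.  Every type and value here is invented for illustration; the point is that the story counts as delivered when the test that encodes its acceptance criteria passes.

using System.Collections.Generic;
using Xunit;

public class SalesRepStoryTests
{
    [Fact]
    public void Sales_rep_can_see_account_info_to_spot_opportunities()
    {
        var accounts = new CustomerAccounts();
        accounts.Add("Contoso", balance: 12000m, openOpportunities: 3);

        var view = accounts.GetAccountView("Contoso");

        // Acceptance criteria: the rep can see the account, and it carries
        // enough detail to identify cross-sell and upsell opportunities.
        Assert.Equal("Contoso", view.CustomerName);
        Assert.True(view.OpenOpportunities > 0);
    }
}

public class AccountView
{
    public string CustomerName;
    public decimal Balance;
    public int OpenOpportunities;
}

public class CustomerAccounts
{
    private readonly Dictionary<string, AccountView> _views =
        new Dictionary<string, AccountView>();

    public void Add(string name, decimal balance, int openOpportunities) =>
        _views[name] = new AccountView
        {
            CustomerName = name,
            Balance = balance,
            OpenOpportunities = openOpportunities
        };

    public AccountView GetAccountView(string name) => _views[name];
}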

In the Agile world, bigger stories are called epics, and collections of stories are called themes.  Often a story starts off as an epic until it gets broken down into multiple stories.  What’s important here is that the collections of stories serve as a catalog of potential value.   Specifically, this catalog of stories reflects potential value with real stakeholders.  In this way, Agile helps drive customer focus and customer connection.  It’s really effective stakeholder management in action.

Agile approaches have been used in software projects large and small.  And they’ve forever changed how developers and project managers approach projects.

A Look at How Agile Can Help Enterprise Architecture Accelerate Business Value …

But how does this apply to Enterprise Architects?

As an Enterprise Architect, chances are you are responsible for achieving business outcomes.  You do this by driving business transformation.  You achieve business transformation by driving capability change across business, people, and technical capabilities.

That’s a tall order.   And you need a way to chunk this up and make it meaningful to all the parties involved.

The Power of Scenarios as Units of Value for the Enterprise

This is where scenarios come into play.  Scenarios are a simple way to capture pains, needs and desired outcomes.   You can think of the desired outcome as the future capability vision.   It’s really a story that helps articulate the art of the possible.   More precisely, you can use scenarios to help build empathy with stakeholders for what value will look like, by painting a conceptual scene of the future.

An Enterprise scenario is simply a chunk of organizational change, typically about 3-5 business capabilities, 3-5 people capabilities, and 3-5 technical capabilities.

If that sounds like a lot of theory, let’s step into an example to show what it looks like in practice.

Let’s say you’re in a situation where you need to help a healthcare provider change their business.  

You can come up with a lot of scenarios, but it helps to start with the pains and needs of the business owner.  Otherwise, you might start going through a bunch of scenarios for the patients or for the doctors.  In this case, the business owner would be the Chief Medical Officer or the doctor of doctors.

Scenario: Tele-specialist for Healthcare

If we walk the pains, needs, and desired outcomes of the Chief Medical Officer, we might come up with a scenario that looks something like this, where the CURRENT STATE reflects the current pains and needs, and the FUTURE STATE reflects the desired outcome.

CURRENT STATE

Here is an example of the CURRENT STATE portion of the scenario:

The Chief Medical Officer of Contoso Provider is struggling with increased costs and declining revenues. Costs are rising due to the Affordable Healthcare Act regulatory compliance requirements and increasing malpractice insurance premiums. Revenue is declining due to decreasing medical insurance payments per claim.

FUTURE STATE

Here is an example of the FUTURE STATE portion of the scenario:

Doctors can consult with patients, peers, and specialists from anywhere.  Contoso Provider’s doctors can see more patients, increase the accuracy of first-time diagnosis, and grow revenues.
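
One lightweight way to keep such scenarios as a catalog that both business and IT can work from is a simple data structure.  Here’s a sketch in C#; the type and the capability entries are invented for illustration:

using System.Collections.Generic;

public class EnterpriseScenario
{
    public string Name;                        // the "flashcard" handle
    public string CurrentState;                // pains and needs
    public string FutureState;                 // desired outcome
    public List<string> BusinessCapabilities;  // typically 3-5 entries
    public List<string> PeopleCapabilities;    // typically 3-5 entries
    public List<string> TechnicalCapabilities; // typically 3-5 entries
}

public static class ScenarioCatalog
{
    public static readonly EnterpriseScenario TeleSpecialist = new EnterpriseScenario
    {
        Name = "Tele-specialist for Healthcare",
        CurrentState = "Rising costs and declining revenues under a legacy care model.",
        FutureState = "Doctors consult with patients, peers, and specialists from anywhere.",
        BusinessCapabilities = new List<string> { "Remote consultations" },
        PeopleCapabilities = new List<string> { "Virtual collaboration skills" },
        TechnicalCapabilities = new List<string> { "Video conferencing", "Mobile access to records" }
    };
}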



Storyboard for the Future Capability Vision

It helps to be able to picture what the Future Capability Vision might look like.   That’s where storyboarding can come in.  An Enterprise Architect can paint a simple scene of the future with a storyboard that shows the Future Capability Vision in action.  This practice lends itself to whiteboarding, and the beauty of a whiteboard is you can quickly elaborate where you need to, without getting mired in details.

[Storyboard: Future Capability Vision in action]

As you can see in this example storyboard of the Future Capability Vision, we listed out some business benefits, which we could then drill down into relevant KPIs and value measures.   We’ve also outlined some building blocks required for this Future Capability Vision, in the form of business capabilities and technical capabilities.

Now this simple approach accomplishes a lot.   It helps ensure that any technology solution actually connects back to business drivers and pains that a business decision maker actually cares about.   This gets their fingerprints on the solution concept.   And it creates a simple “flashcard” for value.   If we name the Enterprise scenario well, then we can use it as a handle to get back to the story of a better future that we created with the business.

The obvious thing this does, aside from connecting IT to the business, is it helps the business justify any investment in IT.

And all we did was walk through one Enterprise Scenario.  

But there is a lot more value to be found in the Enterprise.   We can literally explore and chunk up the value in the Enterprise if we take a step back and add another tool to our toolbelt:  the Scenario Chain.

Scenario Chain:  Chaining the Industry Scenarios to Enterprise Scenarios

The Scenario Chain is another powerful conceptual visualization tool.  It helps you quickly map out what’s happening in the marketplace in terms of industry drivers or industry scenarios.  You can then identify potential investment objectives.   These investment objectives lead to patterns of value or patterns of solutions in the Enterprise, which are effectively Enterprise scenarios.   From the Enterprise scenarios, you can then identify relevant usage scenarios.  The usage scenarios represent new ways of working for the employees, or new interaction models with customers, which is effectively a change to your value stream.

[Image: Scenario Chain, from industry drivers to usage scenarios]

With one simple glance, the Scenario Chain gives you a bird’s-eye view of how you can respond to the changing marketplace and how you can transform your business.   And, by using Enterprise scenarios, you can chunk up the change into meaningful units of value that reflect pains, needs, and desired outcomes for the business.  And, because you have the fingerprints of stakeholders from both business and IT, you’ve effectively created a shared vision for the future that has business impact and a justification for investment, and that creates a pull-through mechanism for additional value by driving adoption of the usage scenarios.

Let’s elaborate on adoption and how scenarios can help accelerate business value.

Using Scenarios to Drive Adoption and Accelerate Business Value

Driving adoption is a key way to realize the business value.  If nobody adopts the solution, then that’s what Gartner would call “Value Leakage.”  Value Realization really comes down to governance, measurement, and adoption.

With scenarios at your fingertips, you have a powerful way to articulate value, justify business cases, drive business transformation, and accelerate business value.   The key lies in using the scenarios as a unit of value, and focusing on scenarios as a way to drive adoption and change.

Here are three ways you can use scenarios to drive adoption and accelerate business value:

1.  Accelerate Business Adoption

One of the ways to accelerate business value is to accelerate adoption.    You can use scenarios to help enumerate specific behavior changes that need to happen to drive the adoption.   You can establish metrics and measures around specific behavior changes.   In this way, you make adoption a lot more specific, concrete, intentional, and tangible.

This approach is about doing the right things, faster.

2.  Re-Sequence the Scenarios

Another way to accelerate business value is to re-sequence the scenarios.   If your big bang is way at the end (way, way at the end), no good.  Sprinkle some of your bangs up front.   In fact, a great way to design for change is to build rolling thunder.   Put some of the scenarios up front that will get people excited about the change and directly experiencing the benefits.  Make it real.

This approach is about putting first things first.

3.  Identify Higher Value Scenarios

The third way to accelerate business value is to identify higher-value scenarios.   One of the things that happens along the way is that you start to uncover potential scenarios that you may not have seen before, and these scenarios represent orders of magnitude more value.   This is the space of serendipity.   As you learn more about users and what they value, and stakeholders and what they value, you start to connect more dots between the scenarios you can deliver and the value that can be realized (and therefore accelerated).

This approach is about trading up for higher value and more impact.

As you can see, Enterprise Architects can drive business value and accelerate business value realization by using scenarios and storyboarding.   It’s a simple and agile approach for connecting business and IT, and for shaping a more Agile Enterprise.

I’ll share more on this topic in future posts.   Value Realization is an art and a science and I’d like to reduce the gap between the state of the art and the state of the practice.

You Might Also Like

3 Ways to Accelerate Business Value

6 Steps for Enterprise Architecture as Strategy

Cognizant on the Next Generation Enterprise

Simple Enterprise Strategy

The Mission of Enterprise Services

The New Competitive Landscape

What Am I Doing on the Enterprise Strategy Team?

Why Have a Strategy?

Categories: Architecture, Programming

Anything Worth Doing is Worth Doing Right

Making the Complex Simple - John Sonmez - Mon, 08/25/2014 - 15:00

It seems that I am always in a rush. I find it very difficult to just do what I am doing without thinking about what is coming next or when I’ll be finished with whatever I am working on. Even as I am sitting and writing this blog post, I’m not really as immersed in […]

The post Anything Worth Doing is Worth Doing Right appeared first on Simple Programmer.

Categories: Programming