
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

TypeScript Mario

Phil Trelford's Array - Thu, 09/04/2014 - 09:23

Earlier this year I had a play with Microsoft’s new compile-to-JavaScript language, TypeScript. Every man and his dog has a compile-to-JavaScript solution these days. TypeScript’s angle appears to be to provide optional static typing over JavaScript, plus some ES6 functionality, while compiling out to ES3 by default. It provides a class-based syntax similar to C#’s and seems to be aimed at developers attempting to scale out JavaScript-based solutions.

Last year I ported Elm’s Mario sample to F#, which ended up looking similarly concise. I tried both FunScript and WebSharper for compiling F# to JavaScript, and both worked well:

[Image: the Mario sample running in the browser]

So I thought I’d try the sample out in TypeScript as a way to get a feel for the language.

TypeScript Interfaces

In F# I defined a type for Mario using a record:

// Definition 
type mario = { x:float; y:float; vx:float; vy:float; dir:string }
// Instantiation 
let mario = { x=0.; y=0.; vx=0.; vy=0.; dir="right" }

In TypeScript I used an interface which looks pretty similar syntactically:

// Definition
interface Character {
    x: number; y: number; vx: number; vy: number; dir: string
};
// Instantiation
var mario = { x:0, y:0, vx:0, vy:0, dir:"right" };

TypeScript compiles this straight through to a plain JavaScript object literal:

var mario = { x: 0, y: 0, vx: 0, vy: 0, dir: "right" };

Composition

For me the cute part of the Elm and F# versions was using the record “with” syntax and function composition, i.e.

let jump (_,y) m = if y > 0 && m.y = 0. then  { m with vy = 5. } else m
let gravity m = if m.y > 0. then { m with vy = m.vy - 0.1 } else m
let physics m = { m with x = m.x + m.vx; y = max 0. (m.y + m.vy) }
let walk (x,_) m = 
    { m with vx = float x 
             dir = if x < 0 then "left" elif x > 0 then "right" else m.dir }

let step dir mario = mario |> physics |> walk dir |> gravity |> jump dir

I couldn’t find either of those features available out-of-the-box in TypeScript, so I resorted to imperative code with mutation and procedures:

function walk(velocity: CursorKeys.Velocity, character: Character) {
    character.vx = velocity.x;
    if (velocity.x < 0) character.dir = "left";
    else if (velocity.x > 0) character.dir = "right";
}

function jump(velocity:CursorKeys.Velocity, character:Character) {
    if (velocity.y > 0 && character.y == 0) character.vy = 5;    
}

function gravity(character: Character) {
    if (character.y > 0) character.vy -= 0.1;
}

function physics(character: Character) {
    character.x += character.vx;
    character.y = Math.max(0, character.y + character.vy);
}

function verb(character: Character): string {
    if (character.y > 0) return "jump";
    if (character.vx != 0) return "walk";
    return "stand";
}

function step(velocity: CursorKeys.Velocity, character: Character) {
    walk(velocity, character);
    jump(velocity, character);
    gravity(character);
    physics(character);
}

The only difference between the TypeScript and the resultant JavaScript is the type annotations.
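For completeness, F#’s record-update style can be approximated in TypeScript with a small copying helper. This is only a sketch, not something the language offered out of the box at the time: the update function below is a hypothetical helper, and its Partial<T> signature requires a much newer compiler than the one this post was written against.

// Sketch: approximating F#'s "{ m with ... }" record-update syntax.
// update copies the original object and overrides selected fields,
// so the step functions can stay pure instead of mutating in place.
function update<T>(original: T, changes: Partial<T>): T {
    var result: any = {};
    for (var key in original) result[key] = (original as any)[key];
    for (var key in changes) result[key] = (changes as any)[key];
    return result;
}

// A non-mutating gravity, analogous to the F# version:
function gravityPure(character: Character): Character {
    return character.y > 0
        ? update(character, { vy: character.vy - 0.1 })
        : character;
}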

HTML Canvas

TypeScript provides typed access to JavaScript libraries via type definition files. The majority appear to be held in a single community repository on GitHub (DefinitelyTyped, which began life as a personal repository).

Note: both FunScript and WebSharper can make use of these type definition files to provide types within F# too.
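To give a feel for what such a file contains, here is a hypothetical declaration for the CursorKeys.Velocity type used in the code above; the real definition in the sample may differ. Definition files contain declarations only, no executable code:

// Hypothetical sketch of a .d.ts declaration for the velocity type.
declare module CursorKeys {
    interface Velocity {
        x: number;  // horizontal input: negative = left, positive = right
        y: number;  // vertical input: positive = jump pressed
    }
}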

Among other things, the type system gives you typed access to the HTML canvas element, albeit with some funky casts:

    var canvas = <HTMLCanvasElement> document.getElementById("canvas");
    canvas.width = w;
    canvas.height = h;

This has some value, but you do have to rely on the definition files being kept up-to-date.
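Once the cast is in place, the rest of the canvas API is checked by the compiler via its built-in lib.d.ts. Here is a minimal sketch of a typed render step; marioImage is a hypothetical, already-loaded HTMLImageElement, and the 35-pixel offset is an arbitrary stand-in for the sprite height:

// Sketch of a typed render step; the compiler checks every call below.
var context = <CanvasRenderingContext2D> canvas.getContext("2d");
function render(character: Character, marioImage: HTMLImageElement) {
    context.clearRect(0, 0, canvas.width, canvas.height);
    // Flip the vertical axis: canvas y grows downward, the game's y grows upward.
    var screenY = canvas.height - 35 - character.y;
    context.drawImage(marioImage, character.x, screenY);
}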

Conclusions

On the functional reactive side TypeScript didn't appear to offer much value add in comparison to Elm or F#.

To be honest, for a very small app, I couldn’t find any advantages to using TypeScript over vanilla JavaScript. I guess I’d need to build something a lot bigger to find any.

Sample source code: https://bitbucket.org/ptrelford/mario.typescript

Categories: Programming

Minimum Credible Release (MCR) and Minimum Viable Product (MVP)

A Minimum Credible Release, or MCR, is simply the minimal set of user stories that need to be implemented in order for the product increment to be worth releasing.

I don’t know exactly when Minimum Credible Release became an established practice, but I do know that we were using Minimum Credible Release as a concept back in the early 2000s on the Microsoft patterns & practices team.  It’s how we defined the minimum scope for our project releases.

The value of the Minimum Credible Release is that it provides a baseline for the team to focus on so they can ship.   It’s a metaphorical “finish line.”   This is especially important when the team gets into the thick of things, and you start to face scope creep.

The Minimum Credible Release is also a powerful tool when it comes to communicating to stakeholders what to expect.   If you want people to invest, they need to know what to expect in terms of the minimum bar that they will get for their investment.

The Minimum Credible Release is also the hallmark of great user experience in action.  It takes great judgment to define a compelling minimal release.

A sample is worth a thousand words, so here is a visual way to think about this.  

Let’s say you had a pile of prioritized user stories, like this:

[Image: a prioritized list of user stories]

You would establish a cut line for your minimum release:

[Image: the prioritized list with a cut line marking the minimum release]

Note that this is an over-simplified example to keep the focus on the idea of a list of user stories with a cut line.

And the art part is in where and how you draw the line for the release.

While you would think this is such a simple, obvious, and high-value practice, not everybody does it.

All too often there are projects that run for a period of time without a defined Minimum Credible Release.   They often turn into never-ending projects or somebody’s bitter disappointment.   If you get agreement with users about what the Minimum Credible Release will be, you have a much better chance of making your users happy.  This goes for stakeholders, too.

There is another concept that, while related, I don’t think it’s the same thing.

It’s Minimum Viable Product, or MVP.

Here is what Eric Ries, author of The Lean Startup, says about the Minimum Viable Product:

“The idea of minimum viable product is useful because you can basically say: our vision is to build a product that solves this core problem for customers and we think that for the people who are early adopters for this kind of solution, they will be the most forgiving. And they will fill in their minds the features that aren’t quite there if we give them the core, tent-pole features that point the direction of where we’re trying to go.

So, the minimum viable product is that product which has just those features (and no more) that allows you to ship a product that resonates with early adopters; some of whom will pay you money or give you feedback.”

And, here is what Josh Kaufman, author of The Personal MBA, has to say about the Minimum Viable Product:

“The Lean Startup provides many strategies for validating the worth of a business idea. One core strategy is to develop a minimum viable product – the smallest offer you can create that someone will actually buy, then offer it to real customers. If they buy, you’re in good shape. If your original idea doesn’t work, you simply ‘pivot’ and try another idea.”

So if you want happier users, better products, reduced risk, and more reliable releases, look to Minimum Credible Releases and Minimum Viable Products.

You Might Also Like

Continuous Value Delivery the Agile Way

How Can Enterprise Architects Drive Business Value the Agile Way?

How To Use Personas and Scenarios to Drive Adoption and Realize Value

Categories: Architecture, Programming

A Project Sponsor Isn’t A Project Manager, Scrum Master or Product Owner!

Don’t fall into the trap of assuming that the project sponsor can fill all roles.

Project sponsors play a critical role in all projects. Sponsors are typically senior leaders in an organization, with operational roles that make playing multiple roles on a project difficult at best. Project sponsors have the bandwidth to take on the sponsor role and their day job, and no other project role; therefore project sponsors are not project managers, Scrum masters or product owners.

Project managers develop plans, report and track progress, assign work and manage resources. Sponsors, on the other hand, provide direction and access to resources. Sponsors are informed by the project manager. On large or medium sized projects, the project manager role is generally a full-time position while the sponsor (generally a member of senior management) spends the majority of his or her time managing a portion of the business rather than a specific project.

In Agile projects the roles of the project sponsor and Scrum master are sometimes confused. A Scrum master facilitates the team. The Scrum master continuously interacts with the team ironing out the interpersonal conflicts, focusing the team on the flow of work and ensuring that nothing blocks the team from achieving their sprint goals. The sponsor provides motivation and exposure for the team at a higher level. A sponsor has issues and blockages escalated to them when they are outside of the team’s span of control. As with the project manager role, the Scrum master’s role provides intimate day-to-day, hour-to-hour support for the team while the sponsor is involved when needed or called upon.

Rarely is the sponsor the product owner. The only time I have seen the two roles combined is in very small organizations or in very small projects (and it wasn’t a great idea in either case). While both roles represent the voice of the business and the organization, a sponsor typically brings significantly more hierarchical power to the table. This positional power tends to dampen important Agile behaviors such as collaboration and self-organization. The product owner role will also draw significantly on the time and focus of the project sponsor, which can cause them to take their eye off the direction of the business, with negative ramifications.

As noted in The Role of The Project Sponsor, sponsors provide teams with a goal or vision, access to resources and the political support needed to stay focused. The role can’t be played well by those in the organization who lack the sources of power, interest and resources needed to empower the project, nor by someone without the time to invest in the role. Project sponsors are typically senior leaders who are tied closely to the day-to-day operations of the organization, which makes it difficult if not impossible for them to play the role of project manager, Scrum master or product owner.


Categories: Process Management

Standard Flavored Markdown

Coding Horror - Jeff Atwood - Wed, 09/03/2014 - 21:06

In 2009 I lamented the state of Markdown:

Right now we have the worst of both worlds. Lack of leadership from the top, and a bunch of fragmented, poorly coordinated community efforts to advance Markdown, none of which are officially canon. This isn't merely inconvenient for anyone trying to find accurate information about Markdown; it's actually harming the project's future.

In late 2012, David Greenspan from Meteor approached me and proposed we move forward, and a project crystallized:

I propose that Stack Exchange, GitHub, Meteor, Reddit, and any other company with lots of traffic and a strategic investment in Markdown, all work together to come up with an official Markdown specification, and standard test suites to validate Markdown implementations. We've all been working at cross purposes for too long, accidentally fragmenting Markdown while popularizing it.

We formed a small private working group with key representatives from GitHub, from Reddit, from Stack Exchange, from the open source community. We spent months hashing out the details and agreeing on the necessary changes to turn Markdown into a language you can parse without feeling like you just walked through a sewer – while preserving the simple, clear, ASCII email inspired spirit of Markdown.

We really struggled with this at Discourse, which is also based on Markdown, but an even more complex dialect than the one we built at Stack Overflow. In Discourse, you can mix three forms of markup interchangeably:

  • Markdown
  • HTML (safe subset)
  • BBCode (subset)

Discourse is primarily a JavaScript app, so naturally we needed a nice, compliant implementation of Markdown in JavaScript. Surely such a thing exists, yes? Nope. Even in 2012, we found zero JavaScript implementations of Markdown that could pass the only Markdown test suite I know of, MDTest. It isn't authoritative; it's a community-created initiative that embodies its own decisions about rendering ambiguities in Markdown, but it's all we've got. We contributed many upstream fixes to markdown.js to make it pass MDTest – but it still only passes in our locally extended version.

As an open source project ourselves, we're perfectly happy contributing upstream code to improve it for everyone. But it's an indictment of the state of the Markdown ecosystem that any remotely popular implementation wasn't already testing itself against a formal spec and test suite. But who can blame them, because it didn't exist!

Well, now it does.

It took a while, but I'm pleased to announce that Standard Markdown is now finally ready for public review.

standardmarkdown.com

It's a spec, including embedded examples, and implementations in portable C and JavaScript. We strived mightily to stay true to the spirit of Markdown in writing it. The primary author, John MacFarlane, explains in the introduction to the spec:

Because Gruber’s syntax description leaves many aspects of the syntax undetermined, writing a precise spec requires making a large number of decisions, many of them somewhat arbitrary. In making them, I have appealed to existing conventions and considerations of simplicity, readability, expressive power, and consistency. I have tried to ensure that “normal” documents in the many incompatible existing implementations of markdown will render, as far as possible, as their authors intended. And I have tried to make the rules for different elements work together harmoniously. In places where different decisions could have been made (for example, the rules governing list indentation), I have explained the rationale for my choices. In a few cases, I have departed slightly from the canonical syntax description, in ways that I think further the goals of markdown as stated in that description.

Part of my contribution to the project is to host the discussion / mailing list for Standard Markdown in a Discourse instance.

talk.standardmarkdown.com

Fortunately, Discourse itself just reached version 1.0. If the only thing Standard Markdown does is help save a few users from the continuing horror that is mailing list web UI, we all win.

What I'm most excited about is that we got a massive contribution from the one person who, in my mind, was the most perfect person in the world to work on this project: John MacFarlane. He took our feedback and wrote the entire Standard Markdown spec and both implementations.

A lot of people know of John through his Pandoc project, which is amazing in its own right, but I found out about him because he built Babelmark. I learned to refer to Babelmark extensively while working on Stack Overflow and MarkdownSharp, a C# implementation of Markdown.

Here's how crazy Markdown is: to decide what the "correct" behavior is, you provide sample Markdown input to 20+ different Markdown parsers … and then pray that some consensus emerges in all their output. That's what Babelmark does.

Consider this simple Markdown example:

# Hello there

This is a paragraph.

- one
- two
- three
- four

1. pirate
2. ninja
3. zombie

Just for that, I count fifteen different rendered outputs from 22 different Markdown parsers.

In Markdown, we literally built a Tower of Babel.

Have I mentioned that it's a good idea for a language to have a formal specification and test suites? Maybe now you can see why that is.
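To make that concrete, here is a sketch of what a spec-driven conformance check looks like: pairs of input and required output are extracted from the spec and run through the parser under test. The parseMarkdown parameter is a hypothetical stand-in for whatever implementation you want to validate; the Standard Markdown spec embeds its examples in exactly this input/expected-output form.

// Sketch: run a parser against spec-style examples (markdown in, HTML out).
interface SpecExample {
    markdown: string;  // the input fragment from the spec
    html: string;      // the rendering the spec requires
}

function runConformance(examples: SpecExample[],
                        parseMarkdown: (src: string) => string): string[] {
    var failures: string[] = [];
    examples.forEach(function (example, i) {
        var actual = parseMarkdown(example.markdown);
        if (actual !== example.html) {
            failures.push("example " + (i + 1) + ": expected " +
                JSON.stringify(example.html) + ", got " + JSON.stringify(actual));
        }
    });
    return failures;
}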

Oh, and in his spare time, John is also the chair of the department of philosophy at the University of California, Berkeley. No big deal. While I don't mean to minimize the contributions of anyone to the Standard Markdown project, we all owe a special thanks to John.

Markdown is indeed everywhere. And that's a good thing. But it needs to be sane, parseable, and standard. That's the goal of Standard Markdown — but we need your help to get there. If you use Markdown on a website, ask what it would take for that site to become compatible with Standard Markdown; when you see the word "Markdown" you have the right to expect consistent rendering across all the websites you visit. If you implement Markdown, take a look at the spec, try to make your parser compatible with Standard Markdown, and discuss improvements or refinements to the spec.

Update: The project was renamed CommonMark. See my subsequent blog post.

Categories: Programming

SEMAT: The Essence of Software Engineering

From the Editor of Methods & Tools - Wed, 09/03/2014 - 17:37
SEMAT (Software Engineering Method and Theory) is an initiative to reshape software engineering such that software engineering qualifies as a rigorous discipline. SEMAT and Essence are big thinking for software developers. There are millions of software engineers on the planet in countless programs, projects and teams; the millions of lines of software that run the world are testament to their talents, but as a community we still find it difficult to share our best practices, truly empower our teams, seamlessly integrate software engineering into our businesses, and maintain the health of our ...

Strategy: Change the Problem

James T. Kirk's infamous gambit in Starfleet's impossible-to-win Kobayashi Maru test was to redefine the problem into a challenge he could beat.

Interestingly, an article titled Shifts In Algorithm Design, says something like the same gambit is the modern method of solving algorithmic problems.

In the past: 

I, Dick, recall the “good old days of theory.” When I first started working in theory—a sort of double meaning—I could only use deterministic methods. I needed to get the exact answer, no approximations. I had to solve the problem that I was given—no changing the problem.

 

In the good old days of theory, we got a problem, we worked on it, and sometimes we solved it. Nothing shifty, no changing the problem or modifying the goal. 

Today:
Categories: Architecture

The Role of a Project Sponsor

Sponsors champion a vision for the project!

The role of a project sponsor (or sponsors) is discussed less often than project and program managers, Scrum masters, developers or even coaches. Even in its simplest form the role of a sponsor is not only important, but also critical. Good sponsors provide funding and resources, vision and political support.

A sponsor typically provides the funding for the project or program. Whoever owns the project checkbook will, in the end, own the decisions that affect the overall budget. The budget is then translated into the people, tools and software needed to deliver value to the business by the sponsor’s lieutenants. Lieutenants can include project and program managers, the product owner, Scrum masters or technical leads. Control of the budget is also an indication that the sponsor is ultimately responsible for delivering an amount of value to the overall business. They are often judged by whether the money spent on projects delivers a return on investment. Therefore sponsors will be interested in how their money is being spent.

If sponsors are responsible both for the funding of a project and for the value the project delivers, they must also be the champions of the project’s vision. The vision represents the purpose or motivation for the project. Until it is delivered, the vision is the picture that anyone involved with the project should be able to describe. I often liken the project vision to the flag on top of the mountain that acts as a rallying point. While a sponsor does not have to conceive of every project vision, they must act as a cheerleader motivating the organization to support the team or teams.

I can’t conceive of an organizational environment, outside of a sole proprietorship, in which the day-to-day pressures of delivery don’t push project personnel and resources to focus their time on more urgent tasks (urgent and important are not the same thing).  The general level of noise and jockeying for people and resources can lead to a loss of focus or, worse yet, a premature change in direction. One of the most fundamental roles that a project sponsor has is to act as a bulwark to shield the project from pressures that could delay or pull the project apart. Standing up against the pressure that even mundane day-to-day operations can create requires the use of the sponsor’s political capital within the organization. Political capital is generated from where the sponsor is in the organization hierarchy, how critical the project is to the organization's mission, the perceived ROI of the project and the sponsor’s ability to deliver winners.

Project sponsors are generally senior figures in most companies. Sponsors are called on to champion the project's vision and then to back their words with funding and political capital. All of the assets a sponsor brings to the table are perishable; therefore sponsors will always be keenly interested in the work being done in their name.


Categories: Process Management

Stop Being Difficult! How to Deal with Passive Aggressive Stakeholders

Software Requirements Blog - Seilevel.com - Tue, 09/02/2014 - 17:00
Talking to a stakeholder, you hear “Sure. Ok.” but you sense that wasn’t what the stakeholder was thinking. That’s because you are probably dealing with a difficult stakeholder — one who likely doesn’t even mean to be one. We’ve all run into them (and some of us have been the difficult ones!). “Difficult” can look […]
Categories: Requirements

Sponsored Post: Apple, Scalyr, Tumblr, Gawker, FoundationDB, CopperEgg, Logentries, BlueStripe, AiScaler, Aerospike, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Apple has multiple openings. Changing the world is all in a day's work at Apple. Imagine what you could do here. 
    • Site Reliability Engineer. The iOS Systems team is building out a Site Reliability organization. In this role you will be expected to work hand-in-hand with the teams across all phases of the project lifecycle to support systems and to take ownership as they move from QA through integrated testing, certification and production.  Please apply here.
    • Server Software Engineer - Maps Community. As an engineer working on Maps Community services, your primary responsibility will be backend server software development for the services that power our data crowdsourcing efforts. You’ll be part of a small team working in Java and Scala to add new features and improve our core infrastructure, leveraging best-of-breed frameworks for scalable distributed computing. Please apply here

  • Make Tumblr fast, reliable and available for hundreds of millions of visitors and tens of millions of users. As a Site Reliability Engineer you are a software developer with a love of highly performant, fault-tolerant, massively distributed systems. Apply here.

  • Systems & Networking Lead at Gawker. We are looking for someone to take the initiative on the lowest layers of the Kinja platform. All the way down to power and up through hardware, networking, load-balancing, provisioning and base-configuration. The goal for this quarter is a roughly 30% capacity expansion, and the goal for next quarter will be a rolling CentOS7 upgrade as well as planning/quoting/pitching our 2015 footprint and budget. For the full job spec and to apply, click here: http://grnh.se/t8rfbw

  • FoundationDB is seeking outstanding developers to join our growing team and help us build the next generation of transactional database technology. You will work with a team of exceptional engineers with backgrounds from top CS programs and successful startups. We don’t just write software. We build our own simulations, test tools, and even languages to write better software. We are well-funded, offer competitive salaries and option grants. Interested? You can learn more here.

  • UI Engineer. AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data. AppDynamics, leader in next-generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • Your event here.
Cool Products and Services
  • Better, Faster, Cheaper: Pick Three. Scalyr is your universal tool for visibility into your production systems. Log aggregation, server metrics, monitoring, alerting, dashboards, and more. Not just “hosted grep” or “hosted graphs”; our columnar data store enables enterprise-grade functionality with sane pricing and insane performance. Trusted by in-the-know companies like Codecademy – get on board!

  • CopperEgg. Simple, Affordable Cloud Monitoring. CopperEgg gives you instant visibility into all of your cloud-hosted servers and applications. Cloud monitoring has never been so easy: lightweight, elastic monitoring; root cause analysis; data visualization; smart alerts. Get Started Now.

  • Whitepaper Clarifies ACID Support in Aerospike. In our latest whitepaper, author and Aerospike VP of Engineering & Operations, Srini Srinivasan, defines ACID support in Aerospike, and explains how Aerospike maintains high consistency by using techniques to reduce the possibility of partitions.  Read the whitepaper: http://www.aerospike.com/docs/architecture/assets/AerospikeACIDSupport.pdf.

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.  http://aiscaler.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

Continuous Value Delivery the Agile Way

Continuous Value Delivery helps businesses realize the benefits from their technology investments in a continuous fashion.

Businesses these days expect at least quarterly results from their technology investments.  The beauty is, with Continuous Value Delivery they can get it, too.  

Continuous Value Delivery is a practice that makes delivering user value and business value a rapid, reliable, and repeatable process.  It’s a natural evolution from Continuous Integration and Continuous Delivery.  Continuous Value Delivery simply adds a focus on Value Realization, which addresses planning for value, driving adoption, and measuring results.

But let’s take a look at the evolution of software practices that have made it possible to provide Continuous Value Delivery in our Cloud-first, mobile-first world.

Long before there was Continuous Value Delivery, there was Continuous Integration …

Continuous Integration

Continuous Integration is a software development practice where team members integrate their work frequently.  The goal of Continuous Integration is to reduce and prevent integration problems.  In Continuous Integration, each integration is verified against tests.

Then, along came, Continuous Delivery …

Continuous Delivery

Continuous Delivery extended the idea of Continuous Integration to automate and improve the process of software delivery.  With Continuous Delivery,  software checked in on the mainline is always ready for release.  When you combine automated testing, Continuous Integration, and Continuous Delivery, it's possible to push out updates, fixes, and new releases to customers with lower risk and minimal manual overhead.

Continuous Delivery changes the model from a big bang approach, where software is shipped at the end of a long project cycle, to where software can be iteratively and incrementally shipped along the way.

This set the stage for Continuous Value Delivery …

Continuous Value Delivery

Continuous Value Delivery puts a focus on Value Realization as a first-class citizen.  

To be able to ship value on a continuous basis, you need a simple mechanism for units of value.  Scenarios and stories are an effective way to chunk and carve up value into useful increments.  Scenarios and stories also help with driving adoption.

For Continuous Value Delivery, you also need a way to "pull" value, as well as "push" value.  Kanbans provide an easy way to visualize the flow of value, support a “pull” mechanism, and reinforce “the voice of the customer.”  User stories provide an easy way to create a backlog or catalog of potential value that you can “push” based on priorities and user demand.

Businesses that are making the most of their technology investments are linking scenarios, backlogs, and Kanbans to their value chains and their value streams.

Value Planning Enables Continuous Value Delivery

If you want to drive continuous value to the business, then you need to plan for it.  As part of value planning, you need to identify the key stakeholders in the business.  With those stakeholders, you need to identify the business benefits they care about, along with the associated KPIs and value measures.

At this stage, you also want to identify who in the business will be responsible for collecting the data and reporting the value.

Adoption is the Key to Value Realization

Adoption is the key component of Continuous Value Delivery.  After all, if you release new features, but nobody uses them, then the users won't get the new benefits.   In order to realize the value, users need to use the new features and actually change their behaviors.

So while deployment was the old bottleneck, adoption is the new bottleneck.

Users and the business can only absorb so much value at once.  In order to flow more value, you need to reduce friction around adoption, and drive consumption of technology.  You can do this through effective adoption planning, user readiness, communication plans, and measurement.

Value Measurement and Reporting

To close the loop, you want the business to acknowledge the delivery of value.   That’s where measurement and reporting come in.

From a measurement standpoint, you can use adoption and usage metrics to better understand what's being used and how much.  But that’s only part of the story.

To connect the dots back to the business impact, you need to measure changes in behavior, such as what people have stopped doing, started doing, and continue doing.   This will be an indicator of benefits being realized.

Ultimately, to show the most value to the business, you need to move the benefits up the stack.  At the lowest level, you can observe the benefits, by simply observing the changes in behavior.  If you can observe the benefits, then you should be able to measure the benefits.  And if you can measure the benefits, then you should be able to quantify the benefits.   And if you can quantify the benefits, then you should be able to associate some sort of financial amount that shows how things are being done better, faster, or cheaper.

The value reporting exercise should help inform and adjust any value planning efforts.  For example, if adoption is proving to be the bottleneck, now you can drill into where exactly the bottleneck is occurring and you can refocus efforts more effectively.

Plan, Do, Check, Act

In essence, your value realization loop is really a cycle of plan, do, check, act, where value is made explicit, and it is regarded as a first-class citizen throughout the process of Continuous Value Delivery.

That’s a way better approach than building solutions and hoping that value will come or that you’ll stumble your way into business impact.

As history shows, too many projects try to luck their way into value, and it’s far better to design for it.

Value Sprints

A Sprint is simply a unit of development in Scrum.   The idea is to provide a working increment of the solution at the end of the Sprint, that is potentially shippable.  

It’s a “timeboxed” effort.   This helps reduce risk as well as support a loop of continuous learning.  For example, a team might work in 1 week, 2 week or 1 month sprints.   At the end of the Sprint, you can review the progress, and make any necessary adjustments to improve for the next Sprint.

In the business arena, we can think in terms of Value Sprints, where we don’t want to stop at just shipping a chunk of value.

Just shipping or deploying software and solutions does not lead to adoption.

And that’s how software and IT projects fall down.

With a Value Sprint, we want to add a few specific things to the mix to ensure appropriate Value Realization and true benefits delivery.  Specifically, we want to integrate Value Planning right up front, and as part of each Sprint.  Most importantly, we want to plan and drive adoption as part of the Value Sprint.

If we can accelerate adoption, then we can accelerate time to value.

And, of course, we want to report on the value as part of the Value Sprint.

In practice, our field tells us that Value Sprints of 6-8 weeks tend to work well with the business.    Obviously, the right answer depends on your context, but it helps to know what others have been doing.   The length of the loop depends on the business cadence, as well as how well adoption can be driven in an organization, which varies drastically based on ability to execute and maturity levels.  And, for a lot of businesses, it’s important to show results within a quarterly cycle.

But what’s really important is that you don’t turn value into a long winded run, or a long shot down the line, and that you don’t simply hope that value happens.

Through Value Sprints and Continuous Value Delivery you can create a sustainable approach where the business realizes the value from its technology investments in a more reliable way, with real business results.

And that’s how you win in the game of software today.

You Might Also Like

Blessing Sibanyoni on Value Realization

How Can Enterprise Architects Drive Business Value the Agile Way?

How To Use Personas and Scenarios to Drive Adoption and Realize Value

Categories: Architecture, Programming

The Beautiful Design Summer 2014 Collection on Google Play

Android Developers Blog - Tue, 09/02/2014 - 16:10
Posted by Marco Paglia, Android Design Team

It’s that time again! Last summer, we published the first Beautiful Design collection on Google Play, and updated it in the winter with a fresh set of beautifully crafted apps.

Since then, developers have been hard at work updating their existing apps with new design ideas, and many new apps targeted to phones and tablets have launched on Google Play sporting exquisite detail in their UIs. Some apps are even starting to incorporate elements from material design, which is great to see. We’re on the lookout for even more material design concepts applied across the Google Play ecosystem!

Today, we're refreshing the Beautiful Design collection with our latest favorite specimens of delightful design from Google Play. As a reminder, the goal of this collection is to highlight beautiful apps with masterfully crafted design details such as beautiful presentation of photos, crisp and meaningful layout and typography, and delightful yet intuitive gestures and transitions.

The newly updated Beautiful Design Summer 2014 collection includes:

Flight Track 5, whose gorgeously detailed flight info, full of maps and interactive charts, stylishly keeps you in the know.

Oyster, a book-reading app whose clean, focused reading experience and delightful discovery makes it a joy to take your library with you, wherever you go.

Gogobot, an app whose bright colors and big images make exploring your next city delightful and fun.

Lumosity, Vivino, FIFA, Duolingo, SeriesGuide, Spotify, Runtastic, Yahoo News Digest… each with delightful design details.

Airbnb, a veteran of the collection from this past winter, remains as they continue to finesse their app.

If you’re an Android designer or developer, make sure to play with some of these apps to get a sense for the types of design details that can separate good apps from great ones. And remember to review the material design spec for ideas on how to design your next beautiful Android app!


Categories: Programming

Runtastic on Android Wear

Google Code Blog - Tue, 09/02/2014 - 16:00

By Austin Robison, Product Manager, Android Wear




Fitness apps make great additions to Android Wear. Let’s take a look at one of our favorites, Runtastic. Runtastic is a fitness app that lets you track your walks, runs, bike rides and more. With Runtastic on Android Wear, you'll see your time, distance, and calories burned at a glance on your wrist. You can also start, stop and pause your activity by touch. Tuck your phone away in a pocket or backpack and do everything on your watch.

It's challenging to build user experiences that really come alive on Android Wear because it's such a new type of device. Runtastic does a great job of showing the right information and providing just the right controls on the screen on your wrist. Let's dig into some of the Android Wear platform features that Runtastic uses to make such a great user experience.

Voice Actions

Android Wear enables developers to launch their activities with voice. Runtastic responds to “Ok Google, start running” by beginning to track a session and displaying a card with your total time. This means you can start exercising without needing to pull your phone out of a pocket or arm strap. Android Wear is all about bringing you useful information just when you need it and enabling users to quickly and easily take action.

[Image: Runtastic voice-action card on Android Wear]

Responding to platform voice intents on Wear is as simple as declaring a standard intent filter to start an activity.  For example, to launch your activity for the “start running” voice action, add the following to your activity’s entry in your AndroidManifest.xml:

<intent-filter>
    <action android:name="vnd.google.fitness.TRACK"/>
    <category android:name="android.intent.category.DEFAULT"/>
    <data android:mimeType="vnd.google.fitness.activity/running"/>
</intent-filter>

Custom Cards

Once a user has started a run, Runtastic inserts a card in the stream as an ongoing notification to ensure it is ranked near the top of the stream during the activity. This card uses the setDisplayIntent() function to display custom UI. It provides quick, glanceable information, showing your activity time. Cool!

When the user swipes to the right of the card to expose its actions, we see some quick and easy to understand options; following the Android Wear style guidelines means that Runtastic has a familiar UI and feels like a natural extension of the watch. There are actions for pausing, stopping, and an action to see more details on the run.  This action launches a full screen Activity where Runtastic draws a completely custom layout.



You’ll notice this data updates live; Runtastic makes use of the Wearable Data Layer API in Google Play Services to synchronize data between the phone and the watch. It's an easy-to-use API for syncing data between your devices.

Background Services

When a user finishes their run, Runtastic presents them with a special summary card that appears only on the watch. In this case, the notification is generated directly on the watch by a Service. This Service uses the Data Layer to receive information about the completed activity from the phone to the watch, including an image of a map of the user’s run generated through the Google Maps API.

To show that information, the app uses Android Wear’s NotificationManager, which functions just like the NotificationManager on a phone or tablet, except that instead of creating notifications in the pull-down shade, they appear in the stream.



Runtastic's implementation on Android Wear is a perfect example of how to take advantage of wearables to make something truly useful for users. For more information on these and other great platform features, please see the developer documentation.

For more inspiring Android Wear user experiences, check out this collection on the Play Store!

Posted by Mano Marks, Google Developer Platform Team
Categories: Programming

Story Points Are Still About Effort

Mike Cohn's Blog - Tue, 09/02/2014 - 15:00

Story points are about time. There, I’ve said it, and can’t be more clear than that. I’ve written previously about why story points are about effort, not complexity. But I want to revisit that topic here.

The primary reason for estimating product backlog items is so that predictions can be made about how much functionality can be delivered by what date. If we want to estimate what can be delivered by when, we’re talking about time. We need to estimate time. More specifically, we need to estimate effort, which is essentially the person-days (or hours) required to do something.
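To make the arithmetic concrete, here is an illustrative sketch; the backlog and velocity figures below are invented purely for illustration. Given a point-estimated backlog and an observed velocity, the delivery forecast falls out directly.

// Illustrative sketch only; the numbers are made up.
var backlogPoints = 120;
var pointsPerSprint = 20;  // observed velocity per two-week sprint
var sprintsRemaining = Math.ceil(backlogPoints / pointsPerSprint);  // 6
var weeksRemaining = sprintsRemaining * 2;  // about 12 weeks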

Estimating something other than effort may be helpful, but we can’t use it to answer questions about when a project can be delivered. For example, suppose a team were to estimate for each product backlog item how many people would be involved in delivering that item.

One item might involve only a programmer and a tester, so it is given a “two.” Another item might involve two programmers, a designer, a database engineer, and a tester. So it is given an estimate of “five.”

It is entirely possible that the product backlog item involving only two people will take significantly longer than the one involving five people. This would be the case if the two people were involved intensely for days while the five were only involved for a few hours.

We may say that the number of people involved in delivering a product backlog item is a proxy for how long the feature will take to develop. In fact, I’d suspect that if we looked at a large number of product backlog items, we would see that those involving more people do, on average, take longer than those involving fewer people.

However, I’m equally sure we’d see lots of counter-examples, like that of the five and two people above. This means that the number of people involved is not a very good proxy for the effort involved in delivering the feature.

This is the problem with equating story points with complexity. Complexity is a factor in how long a product backlog item will take to develop. But complexity is not the only factor, and it is not sufficiently explanatory that we can get by with estimating just the complexity of each product backlog item.

Instead, story points should be an estimate of how long it will take to develop a user story. Story points represent time. This has to be so because time is what our bosses, clients and customers care about. They only care about complexity to the extent it influences the amount of time something will take.

So story points represent the effort involved to deliver a product backlog item. An estimate of the effort involved can be influenced by risk, uncertainty, and complexity.

Let’s look at an example:

Suppose you and I are to walk to a building. We agree that it will take one walking point to get there. That doesn’t mean one minute, one mile or even one kilometer. We just call it one walking point. We could have called it 2, 5, 10 or a million, but let’s call it 1.

What’s nice about calling this one walking point is that you and I can agree on that estimate, even though you are going to walk there while I hobble over there on crutches. Clearly you can get there much faster than I can; yet using walking points, we can agree to call it one point.

Next, we point to another building and agree that walking to it will take two points. That is, we both think it will take us twice as long to get to.

Let’s add a third building. This building is physically the same distance as the two-point building. So we are tempted to call it a two. However, separating us from that building is a narrow walkway across a deep chasm filled with boiling lava. The walkway is just wide enough that we can traverse it if we’re extremely careful. But, one misstep, and we fall into the lava.

Even though this third building is the same physical distance as the building we previously estimated as two walking points, I want to put a higher estimate on this building because of the extra complexity in walking to it.

As long as I’m cautious, there’s no real risk of falling into the lava, but I assure you I am going to walk more slowly and deliberately across that walkway. So slow, in fact, that I’m going to estimate that building as four walking points away.

Make sense? The extra complexity has influenced my estimate.

Complexity influences an estimate, but only to the extent the extra complexity affects the effort involved in doing the work. Walking to the one-point building while singing “Gangnam Style” is probably more complex than walking there without singing. But the extra complexity of singing won’t affect the amount of time it takes me to walk there, so my estimate in this case would remain one.

Risk and uncertainty affect estimates similarly. Suppose a fourth building is also physically the same distance as the building we called a two. But in walking to that building we must cross some train tracks. And the train crosses at completely unpredictable times.

There is extra uncertainty in walking to that building—sometimes we get there in two points. Other times we get stuck waiting for the train to pass and it takes longer. On average, we might decide to estimate this building as a three.

So, story points are about time—the effort involved in doing something. Because our bosses, clients and customers want to know when something will be done, we need to estimate with something based on effort. Risk, uncertainty and complexity are factors that may influence the effort involved.

Let me know what you think in the comments below.


My New #Workout Book Is Best Selling on Amazon

NOOP.NL - Jurgen Appelo - Tue, 09/02/2014 - 14:04
[Image: #Workout book cover]

On Sunday, I uploaded the Kindle Edition of #Workout to Amazon.
On Monday, I notified all my friends on my exclusive mailing list.
On Tuesday, the book already had 26 great reviews and was listed as Best Selling in the Management category, which makes it one of the Hot New Releases.
I’m so happy! :-)

The post My New #Workout Book Is Best Selling on Amazon appeared first on NOOP.NL.

Categories: Project Management

React in modern web applications: Part 1

Xebia Blog - Tue, 09/02/2014 - 12:00

At Xebia we love to share knowledge! One of the ways we do this is by organizing 1-day courses during the summer. Together with Frank Visser we decided to do a training about full stack development with Node.js, AngularJS and Facebook's React. The goal of the training was to show the students how one could create a simple timesheet application. This application would use nothing but modern Javascript technologies while also teaching them best practices with regards to setting up and maintaining it.

To further share the knowledge gained during the creation of this training we'll be releasing several blog posts. In this first part we'll talk about why to use React, what React is and how you can incorporate it into your Grunt lifecycle.

This series of blog posts assume that you're familiar with the Node.js platform and the Javascript task runner Grunt.

What is React?

[Image: ReactJS logo]

React is a Javascript library for creating user interfaces made by Facebook. It is their answer to the V in MVC. As it only takes care of the user interface part of a web application React can be (and most often will be) combined with other frameworks (e.g. AngularJS, Backbone.js, ...) for handling the MC part.

In case you're unfamiliar with the MVC architecture, it stands for model-view-controller and it is an architectural pattern for dividing your software into 3 parts with the goal of separating the internal representation of data from the representation shown to the actual user of the software.

Why use React?

There are quite a lot of Javascript MVC frameworks which also allow you to model your views. What are the benefits of using React instead of for example AngularJS?

What sets React apart from other Javascript MVC frameworks like AngularJS is the way React handles UI updates. To dynamically update a web UI you have to apply DOM updates whenever data in your UI changes. These DOM updates, compared to reading data from the DOM, are expensive operations which can drastically slow down your application's responsiveness if you do not minimize the amount of updates you do. React took a clever approach to minimizing the amount of DOM updates by using a virtual DOM (or shadow DOM) diff.

In contrast to the normal DOM consisting of nodes the virtual DOM consists of lightweight Javascript objects that represent your different React components. This representation is used to determine the minimum amount of steps required to go from the previous render to the next render. By using an observable to check if the state has changed React prevents unnecessary re-renders. By calling the setState method you mark a component 'dirty' which essentially tells React to update the UI for this component. When setState is called the component rebuilds the virtual DOM for all its children. React will then compare this to the current virtual sub-tree for the same component to determine the changes and thus find the minimum amount of data to update.

Besides efficiently updating only sub-trees, React batches these virtual DOM changes into real DOM updates. At the end of the React event loop, React looks up all components marked as dirty and re-renders them.
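To make this concrete, here is a minimal sketch using the 2014-era React.createClass API and the React.DOM helpers (so no JSX compilation step is needed); the Counter component and its handleClick method are illustrative names, not part of React itself:

var Counter = React.createClass({
  getInitialState: function() {
    return { clicks: 0 };
  },
  handleClick: function() {
    // setState marks this component as dirty; at the end of the event
    // loop React re-renders it against the virtual DOM and applies
    // only the resulting diff to the real DOM.
    this.setState({ clicks: this.state.clicks + 1 });
  },
  render: function() {
    return React.DOM.button({ onClick: this.handleClick },
                            'Clicked ' + this.state.clicks + ' times');
  }
});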

How does React compare to AngularJS?

It is important to note that you can perfectly mix the usage of React with other frameworks like AngularJS for creating user interfaces. You can of course also decide to only use React for the UI and keep using AngularJS for the M and C in MVC.

In our opinion, using React for simple components does not give you an advantage over AngularJS. We believe the true strength of React lies in demanding components that re-render a lot: for UI elements that require a lot of re-rendering, React tends to significantly outperform AngularJS (and many other frameworks), due to the way it handles UI updates internally, as explained above.

JSX

JSX is an XML-like syntax extension to Javascript recommended for use with React. Although JSX and React are independent technologies, JSX was built with React in mind. React works without JSX out of the box, but the React team recommends using it. Some of the many reasons for using JSX:

  • It's easier to visualize the structure of the DOM
  • Designers are more comfortable making changes
  • It's familiar for those who have used MXML or XAML

If you decide to go for JSX you will have to compile it to Javascript before running your application. Later on in this article I'll show you how to automate this using a Grunt task. Besides Grunt there are plenty of other build tools that can compile JSX; to name a few, there are plugins for Gulp, Broccoli and Mimosa.

An example JSX file for creating a simple link looks as follows:

/** @jsx React.DOM */
var link = React.DOM.a({href: 'http://facebook.github.io/react'}, 'React');

Make sure never to omit the starting comment, or your JSX file will not be picked up by the JSX transformer.

Components

With React you construct UI views from multiple, reusable components. By creating modular components you can separate the different concerns of your application and get the same benefits you get from functions and classes. You should strive to break the common elements in your UI down into reusable components; this reduces boilerplate and keeps your code DRY.

You construct component classes by calling React.createClass(). Each component has a well-defined interface (its props) and can contain internal state specific to that component. A component can have ownership over other components: in React, the owner of a component is the component that sets its props. An owner, or parent, component can access its children by calling this.props.children.

Using React you could create a hello world application as follows:

/** @jsx React.DOM */
var HelloWorld = React.createClass({
  render: function() {
    return <div>Hello world!</div>;
  }
});

Creating a component does not mean it will get rendered automatically. You have to define where you would like to render your different components using React.renderComponent as follows:

React.renderComponent(<HelloWorld />, targetNode);

Using document.getElementById or a jQuery selector, for example, you select the DOM node where you would like React to render your component, and you pass it in as the targetNode parameter.
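To make the ownership relationship concrete, here is a short sketch using the same 2014-era API; the Panel and Greeting components and the 'app' element id are illustrative names, not part of React:

var Panel = React.createClass({
  render: function() {
    // this.props.children exposes whatever components the owner
    // nested inside this Panel.
    return React.DOM.div({ className: 'panel' }, this.props.children);
  }
});

var Greeting = React.createClass({
  render: function() {
    // The owner set this component's props, including props.name.
    return React.DOM.span(null, 'Hello ' + this.props.name + '!');
  }
});

// The call site below owns both Greeting components: it sets their props.
React.renderComponent(
  Panel(null, Greeting({ name: 'world' }), Greeting({ name: 'React' })),
  document.getElementById('app'));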

Automating JSX compilation in Grunt

To automate the compilation of JSX files you will need to install the grunt-react package using Node.js' npm installer:

npm install grunt-react --save-dev

After installing the package you have to add a bit of configuration to your Gruntfile.js so that the task knows where your JSX source files are located and where and with what extension you would like to store the compiled Javascript files.

react: {
  dynamic_mappings: {
    files: [
      {
        expand: true,
        src: ['scripts/jsx/*.jsx'],
        dest: 'app/build_jsx/',
        ext: '.js'
      }
    ]
  }
}

To speed up development you can also configure the grunt-contrib-watch package to keep an eye on your JSX files. Watching JSX files allows you to run the grunt-react task whenever a JSX file changes, resulting in continuous compilation of JSX files while you develop your application. You simply specify the type of files to watch and the task to run when one of these files changes:

watch: {
  jsx: {
    files: ['scripts/jsx/*.jsx'],
    tasks: ['react']
  }
}

Last but not least you will want to add the grunt-react task to one or more of your grunt lifecycle tasks. In our setup we added it to the serve and build tasks.

grunt.registerTask('serve', function (target) {
  if (target === 'dist') {
    return grunt.task.run(['build', 'connect:dist:keepalive']);
  }

  grunt.task.run([
    'clean:server',
    'bowerInstall',
    'react',
    'concurrent:server',
    'autoprefixer',
    'configureProxies:server',
    'connect:livereload',
    'watch'
  ]);
});

grunt.registerTask('build', [
  'clean:dist',
  'bowerInstall',
  'useminPrepare',
  'concurrent:dist',
  'autoprefixer',
  'concat',
  'react',
  'ngmin',
  'copy:dist',
  'cdnify',
  'cssmin',
  'uglify',
  'rev',
  'usemin',
  'htmlmin'
]);

Conclusion

Due to React's different approach to handling UI changes, it is highly efficient at re-rendering UI components. Besides that, it's easy to configure and to integrate into your build lifecycle.

What's next?

In the next article we'll be discussing how you can use React together with AngularJS, how to deal with state in your components, and how to avoid passing callbacks through your entire component hierarchy when updating state.

Free and open source example software guidebook

Coding the Architecture - Simon Brown - Tue, 09/02/2014 - 09:19

It needs a little updating (isn't that always the case!), but I've moved the example software guidebook (previously an appendix in my Software Architecture for Developers book) into a separate free and open source book on Leanpub.

techtribes.je - Software Guidebook

techtribes.je is a side-project of mine to create a content aggregator for the tech, IT and digital sector in Jersey, Channel Islands. The code behind the techtribes.je website is open source and available on GitHub. The source for the software guidebook is also open source and available on GitHub.

The techtribes.je software guidebook is based upon the concept of a software guidebook as described in my Software Architecture for Developers book; the software guidebook is a lightweight, pragmatic way to document the "big picture" of a software system. In essence, it's my simplified version of many "software architecture document" templates you'll find out there on the web.

techtribes.je - Software Guidebook is available to download for free from Leanpub. I hope you find it useful.

Categories: Architecture

Traceability: Interpreting the Model

Tallying Up the Answers:
After assessing the three components (customer involvement, criticality and complexity), count the number of “yes” and “no” answers for each model axis. Plotting the results is merely a matter of indicating the number of yes and no answers on each axis. For example, if an appraisal yields:

Customer Involvement: 8 Yes, 1 No
Criticality: 7 Yes, 2 No
Complexity: 5 Yes, 4 No

The responses could be shown graphically as:

[Figure: the example yes/no counts plotted on the three model axes]

The Traceability model is premised on the idea that as criticality and complexity increase, the need for communication intensifies, and communication becomes more difficult as customer involvement shifts from intimate to arm's length. Each component of the model influences the others to some extent; where customer involvement is high, fewer formal planning and control tools must be utilized than when involvement is lower. The relationships between the axes suggest different implementations of traceability. In a perfect world, the model would be implemented as a continuum with an infinite number of nuanced implementations of traceability. In real life, continuums are difficult to implement. Therefore, for ease of use, I suggest an implementation of the model with three basic levels of traceability (the Three Bears Approach): Papa Bear, or formal/detailed tracking; Mama Bear, or formal with function-level tracking; and Baby Bear, or informal (but disciplined)/anecdote-based tracking. The three bears analogy is not meant to be pejorative; heavy, medium and light would work as well.

Interpreting the axes:
Assemble the axes you have plotted with the zero intercept at the center (see example below).

[Figure: the three plotted axes assembled with the zero intercept at the center]

As noted earlier, I suggest three levels of traceability, ranging from agile to formal. In general, if the accumulated "No" answers exceed three on any axis, an agile approach is not appropriate. An accumulated total of 7, 8 or 9 strongly suggests that as formal an approach as possible should be used. Note that certain "No" answers are more equal than others: for example, in the Customer Involvement category, if 'Agile Methods Used' is no, it probably makes sense to raise the level of formality immediately. A future refinement of the model will create a hierarchy of questions and vary the impact of the responses based on that hierarchy. All components of the model are notional rather than carved in stone; implementing the model in specific environments will require tailoring. Apply the model through the filter of your experience. Organizational culture and experience will be most important on the cusps (the 3-4 and 6-7 yes-answer ranges).

Informal – Anecdote Based Tracing

Component Scores: No axis with more than three “No” answers.

Traceability will be accomplished through a combination of stories, test cases and, later, test results, coupled with the tight interplay between customer and developers found in agile methods. This ensures that what was planned (and nothing unplanned) is implemented, and that what was implemented is what was planned.

Moderately Formal – Function Based Tracking

Component Scores: No axis with more than six “No” answers.

The moderately formal implementation of traceability links requirements to functions (each organization needs to define the precise unit; tracing use cases can be very effective when detailed-level control is not indicated) and to test cases (development and user acceptance). This type of linkage is typically accomplished using matrices and numbering, requirements tools, or some combination of the two.

Formal – Detailed Traceability

Component Scores: One or more axes with more than six "No" answers.

The most formal version of traceability links individual, granular requirements through design components, code, test cases and results. This level of traceability provides the highest degree of control and oversight. It can be accomplished using paper and pencil for small projects; however, for projects of any size, tools are required.
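The mapping from scores to levels is simple enough to express in a few lines of code. Here is a minimal sketch of the decision rule described above, written in TypeScript; the function name and the label strings are illustrative only:

function traceabilityLevel(noAnswersPerAxis: number[]): string {
    // noAnswersPerAxis holds the count of "No" answers for each of the
    // three axes (involvement, criticality, complexity), each out of 9.
    var worst = Math.max.apply(null, noAnswersPerAxis);
    if (worst > 6) return "Formal - detailed traceability";
    if (worst > 3) return "Moderately formal - function based tracking";
    return "Informal - anecdote based tracing";
}

// The earlier example appraisal had 1, 2 and 4 "No" answers per axis:
traceabilityLevel([1, 2, 4]); // "Moderately formal - function based tracking"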

Caveats: As with all models, the proposed traceability model is a simplification of the real world, so customization is expected. Three distinct levels of traceability may be too many for some organizations, or too few for others. One implemented version of the model swings between an agile approach (primarily for web-based projects where Scrum is being practiced) and the moderately formal model for other types of projects. For the example organization, adding additional layers has been difficult without support to ensure high degrees of consistency. We found that leveraging project-level tailoring for specific nuances has been the most practical means of dealing with one-off issues.

In practice, teams have reported major benefits from using the model.

The first benefit is that using the model ensures an honest discussion of risks, complexity and customer involvement early in the life of the project. The model works best when all project team members (within reason) participate in the discussion and assessment. Facilitation is sometimes required to ensure that discussion paralysis does not occur. One organization I work with has used this mechanism as a team-building exercise.

The second benefit is that the model allows project managers, coaches and team members to define the expectations for the traceability process in a transparent, collaborative manner. The framework presented allows all parties to understand what determines where on the formality continuum your implementation will fall. It should be noted that once the scalability topic is broached for traceability, it is difficult to contain the discussion to just this topic. I applaud those who embrace the discussion, and would suggest that all project processes need to be scalable, based on a disciplined and participative process that can be applied early in a project.

Examples:

Extreme examples are easy to classify without a model, a questionnaire, or a graph. An extreme example would be a critical system where defects could be life threatening, such as a project to build an air traffic control system. The attributes of this type of project would include extremely high complexity, a large system, many groups of customers each with differing needs, and probably a hard deadline, with large penalties for missing the date or falling short on anticipated functionality. For this type of project the model recommends detailed requirements traceability as a component of the path to success. A similar example could be constructed for the model agile project, in which intimate customer involvement can substitute for detailed traceability.

A more illustrative example would be for projects that inhabit gray areas. The following example employs the model to suggest a traceability approach.

An organization (The Org) engaged a firm (WEB.CO), after evaluating a series of competitive bids, to build a new ecommerce web site. The RFP required the use of several Web 2.0 community and ecommerce functions. The customer felt they had defined the high-level requirements in the RFP. WEB.CO uses some agile techniques on all projects in which they are engaged, including user stories, two-week sprints, a coach to support the team, co-located teams and daily builds. The RFP and negotiations indicated that the customer would not be on-site and at times would have constraints on their ability to participate in the project; these early pronouncements on involvement were deemed to be non-negotiable. The contract included performance penalties that WEB.CO wished to avoid. The site was considered critical to the customer's business, and delivery was timed to coincide with the initial launch of the business. Let's consider how we would apply the questionnaire in this case.

Question  Involvement      Complexity  Criticality
1         Yes              Yes         No
2         No               Yes         No
3         No               Yes         Unknown (need to know)
4         Yes              Yes         Yes
5         Yes (Inferred)   Yes         Yes
6         Yes              Yes         No
7         Yes              Yes         No
8         Yes              Yes         No
9         Yes              Yes         Yes

Graphically the results look like:

[Figure: radar plot of the example involvement, complexity and criticality scores]

Running the numbers on the individual radar-plot axes highlights the high degree of perceived criticality for this project. The model recommends the moderate level of traceability documentation. As a final note, if this were a project I was involved in, I would keep an eye on the weakness in the involvement category: knowing there are weaknesses in customer involvement helps ensure you do not rationalize away the criticality score.


Categories: Process Management

FParsec Tutorial

Phil Trelford's Array - Sun, 08/31/2014 - 17:23

Back at the start of the year, I took the F# parser combinator library FParsec out for a spin, writing an extended Small Basic compiler and later a similar parser for a subset of C#. Previously I'd been using hand-rolled parsers for projects like TickSpec, a .Net BDD library, and Cellz, an open source spreadsheet. With FParsec you can construct a parser relatively rapidly and easily, using the powerful built-in functions and F# Interactive for quick feedback.

FParsec has been used in a number of interesting projects including FunScript, for parsing TypeScript definition files, and FogBugz for search queries in Kiln.

Like any library there is a bit of a learning curve, and it takes time to get up to speed before you reap the benefits. So with that in mind I put together a short hands-on tutorial that I ran at the F#unctional Londoners meetup held at Skills Matter last week.

The tutorial consisted of a short introduction to DSLs and parsing, then a set of tasks leading to a parser for a subset of the Logo programming language, followed by examples of scaling out to larger parsers and building a compiler back-end, using Small Basic and C# as examples.

FParsec Hands On - F#unctional Londoners 2014 from Phillip Trelford

Download the tasks from: http://trelford.com/FParsecTutorial.zip

Logo programming language

One of my earliest experiences with programming was a Logo session in the 70s, when my primary school had a short term loan of a turtle robot:

[Photo: a Logo turtle robot]

The turtle, either physical or on the screen, can be controlled with simple commands like forward, left, right and repeat, e.g.

> repeat 10 [right 36 repeat 5 [forward 54 right 72]]

[Image: the pattern drawn by the command above]

Abstract Syntax Tree

The abstract syntax tree (AST) for these commands can be easily described using an F# discriminated union type:

type arg = int
type command =
   | Forward of arg
   | Turn of arg
   | Repeat of arg * command list

Note: right and left can simply be represented as Turn with a positive or negative argument.

The main task was to use FParsec to parse the commands in to AST form.

Parsing

A parser for the forward command can be easily constructed using built-in FParsec parser functions and the >>. operator to combine them:

let forward = pstring "forward" >>. spaces1 >>. pfloat

The parsed float value can be used to construct the Forward case using the |>> operator:

let pforward = forward |>> fun n -> Forward(int n)

To parse the forward or the short form fd, the <|> operator can be employed:

let pforward = (pstring "fd" <|> pstring "forward") >>. spaces1 >>. pfloat
               |>> fun n -> Forward(int n)

Parsing left and right is almost identical; both construct a Turn command from the angle (note the short forms lt and rt):

let pleft = (pstring "left" <|> pstring "lt") >>. spaces1 >>. pfloat 
            |>> fun x -> Turn(int -x)
let pright = (pstring "right" <|> pstring "rt") >>. spaces1 >>. pfloat 
             |>> fun x -> Turn(int x)

To parse a choice of commands, we can use the <|> operator again:

let pcommand = pforward <|> pleft <|> pright

To handle a sequence of commands there is the many function:

let pcommands = many (pcommand .>> spaces)

To parse the repeat command we need to parse the repeat count and a block of commands held between square brackets:

let block = between (pstring "[") (pstring "]") pcommands


let prepeat = 
    pstring "repeat" >>. spaces1 >>. pfloat .>> spaces .>>. block
    |>> fun (n, commands) -> Repeat(int n, commands)

Putting this all together, we can parse a simple circle-drawing command:

> repeat 36 [forward 10 right 10]

However, we cannot yet parse a repeat command within a repeat block, as the command parser does not reference the repeat parser.

Forward references

To separate the definition of repeat's parser from its implementation we can use the createParserForwardedToRef function:

let prepeat, prepeatimpl = createParserForwardedToRef ()

Then we can define the choice of commands to include repeat:

let pcommand = pforward <|> pleft <|> pright <|> prepeat

And finally define the implementation of the repeat parser that refers to itself:

prepeatimpl := 
    pstring "repeat" >>. spaces1 >>. pfloat .>> spaces .>>. block
    |>> fun (n, commands) -> Repeat(int n, commands)

This allows us to parse nested repeats. For example:

> repeat 10 [right 36 repeat 5 [forward 54 right 72]]

Parses to:

> Repeat (10,[Turn 36; Repeat (5,[Forward 54; Turn 72])])

Interpreter

Evaluation of a program can now be easily achieved using pattern matching over the AST:

// turtle is a record with X and Y (position) and A (angle) fields;
// drawLine draws a line segment (see the full script linked below)
let rec perform turtle = function
    | Forward n ->
        let r = float turtle.A * Math.PI / 180.0
        let dx, dy = float n * cos r, float n * sin r
        let x, y =  turtle.X, turtle.Y
        let x',y' = x + dx, y + dy
        drawLine (x,y) (x',y')
        { turtle with X = x'; Y = y' }
    | Turn n -> { turtle with A=turtle.A + n }
    | Repeat(n,commands) ->
        let rec repeat turtle = function
            | 0 -> turtle
            | n -> repeat (performAll turtle commands) (n-1)
        repeat turtle n
and performAll = List.fold perform

Check out this snippet for the full implementation as a script: http://fssnip.net/nM

User Commands

Logo lets you define your own commands, e.g.

>  to square
     repeat 4 [forward 50 right 90]
   end
   to flower
     repeat 36 [right 10 square]
   end
   to garden
     repeat 25 [set-random-position flower]
   end

garden

The parser can be easily extended to support this, try the snippet: http://fssnip.net/nN

Small Basic

Small Basic is a Microsoft programming language, also aimed at teaching kids and also featuring turtle functionality. At the beginning of the year I wrote a short series of posts on writing an extended compiler for Small Basic:

The series starts with an AST, internal DSL and interpreter, then moves on to parsing the language with FParsec and compiling the AST to IL code using Reflection.Emit. Finally, the series ends with extensions for functions with arguments and support for tuples and pattern matching.

It's a fairly short hop from implementing Logo to implementing a larger language like Small Basic.

Parsing C#

A few weeks later, as an experiment, I knocked up an AST and parser for a fairly large subset of C#, which shares much of the imperative core of Small Basic: http://fssnip.net/lf

Check out Neil Danson’s blog on building a C# compiler in F# to see C# compiled to IL using a similar AST.

DDD North: Write your own compiler in 24 hours

If you’re interested in learning more, I’ll be speaking at DDD North in Leeds on Saturday 18th October about how to write your own compiler in 24 hours.

Categories: Programming

Why do I use Leanpub?

Coding the Architecture - Simon Brown - Sat, 08/30/2014 - 11:35

There's been some interesting discussion over the past few days about Leanpub, both on Twitter and blogs. Jurgen Appelo posted Why I Don't Use Leanpub and Peter Armstrong responded. I think the biggest selling points of Leanpub as a publishing platform from an author's perspective may have been lost in the discussion. So, here's my take on why I use Leanpub for Software Architecture for Developers.

Some history

I pitched my book idea to a number of traditional publishing companies in 2008 and none of them were very interested. "Nice idea, but it won't sell" was the basic summary. A few years later I decided to self-publish my book instead and I was about to head down the route of creating PDF and EPUB versions using a combination of Pages and iBooks Author on the Mac. Why? Because I love books like Garr Reynolds' Presentation Zen and I wanted to do something similar. At first I considered simply giving the book away for free on my website but, after Googling around for self-publishing options, I stumbled across Leanpub. Despite the Leanpub bookstore being fairly sparse at the start of 2012, the platform piqued my interest and the rest is history.

The headline: book creation, publishing, sales and distribution as a service

I use Leanpub because it allows me to focus on writing content. Period. The platform takes care of creating and selling e-books in a number of different formats. I can write some Markdown, sync the files via Dropbox and publish a new version of my book within minutes.

Typesetting and layout

I frequently get asked for advice about whether Leanpub is a good platform for somebody to write a book. The number one question to ask is whether you have specific typesetting/layout needs. If you want to produce a "Presentation Zen" style book or if having control of your layout is important to you, then Leanpub isn't for you. If, however, you want to write a traditional book that mostly consists of words, then Leanpub is definitely worth taking a look at.

Leanpub uses a slightly customised version of Markdown, which is a super-simple language for writing content. Here's an example of a Markdown file from my book, and you can see the result in the online sample of my book. Leanpub does allow you to tweak things like PDF page size, font size, page breaking, section numbering, etc but you're not going to get pixel perfect typesetting. I think that Leanpub actually does a pretty fantastic job of creating good looking PDF, EPUB and MOBI format ebooks based upon the very minimal Markdown. This is especially true when you consider the huge range of ebook reader software across PCs, Macs, Android devices, Apple devices, Kindles, etc. Plus the readers themselves can mess with the fonts/font sizes too.

Book formatting on Leanpub

It's like building my own server at Rackspace versus using a "Platform as a Service" such as Cloud Foundry. You need to make a decision about the trade-off between control and simplicity/convenience. Since authoring isn't my full-time job and I have lots of other stuff to be getting on with, I'm more than happy to supply the content and let Leanpub take care of everything else for me.

Toolchain

My toolchain as a Leanpub author is incredibly simple: Dropbox and Mou. From a structural perspective, I have one Markdown file per essay and that's basically it. Leanpub does now provide support for using GitHub to store your content and I can see the potential for a simple Leanpub-aware authoring tool, but it's not rocket science. And to prove the point, a number of non-technical people here in Jersey have books on Leanpub too (e.g. Thrive with The Hive and a number of books by Richard Rolfe).

Iterative and incremental delivery

Before starting, I'd already decided that I'd like to write the book as a collection of short essays and this was cemented by the fact that Leanpub allows me to publish an in-progress ebook. I took an iterative and incremental approach to publishing the book. Rather than starting with essay number one and progressing in order, I tried to initially create a minimum viable book that covered the basics. I then fleshed out the content with additional essays once this skeleton was in place, revisiting and iterating upon earlier essays as necessary. I signed up for Leanpub in January 2012 and clicked the "Publish" button four weeks later. That first version of my book was only about ten pages in length but I started selling copies immediately.

Variable pricing and coupons

Another thing that I love about Leanpub is that it gives you full control over how you price your book. Pricing is a balancing act between readership and royalties, but I like that I'm in control of it. My book started out at $4.99 and, as content was added, that price increased. The book currently has a minimum price of $20 and a recommended price of $30. I can even create coupons for reduced-price or free copies too. There's some human psychology that I don't understand here, but not everybody pays the minimum price. Far from it; I've had a good number of people pay more than the recommended price too. Leanpub provides all of the raw data, so you can analyse it as needed.

An incubator for books

As I've already mentioned, I pitched my book idea to a bunch of regular publishing companies and they weren't interested. Fast-forward a few years and my book is currently the "bestselling" book on Leanpub this week, fifth by lifetime earnings and twelfth in terms of number of copies sold. I've used quotes around "bestselling" because Jurgen did. ;-)

Leanpub bestsellers

In his blog post, Peter Armstrong emphasises that Leanpub is a platform for publishing in-progress ebooks, especially because you can publish using an iterative and incremental approach. For this reason, I think that Leanpub is a fantastic way for authors to prove an idea and get some concrete feedback in terms of sales. Put simply, Leanpub is a fantastic incubator for books. I know of a number of books that were started on Leanpub and have since been taken on by traditional publishing companies. I've had a number of offers too, including some for commercial translations. Sure, there are other ways to publish in-progress ebooks, but Leanpub makes this super-easy and the barrier to entry is incredibly low.

The future for my book?

What does the future hold for my book then? I'm not sure that electronic products are ever really "finished" and, although I consider my book to be "version 1", I do have some additional content lined up. When I publish it, thanks to the Leanpub platform, all of my existing readers will get the updates for free.

I've so far turned down the offers that I've had from publishing companies, primarily because they can't compete in terms of royalties and I'm unconvinced that they will be able to significantly boost readership numbers. Leanpub is happy for authors to sell their books through other channels (e.g. Amazon) but, again, I'm unconvinced that simply putting the book onto Amazon will yield an increased readership. I do know of books on the Kindle store that haven't sold a single copy, so I take "Amazon is bigger and therefore better" arguments with a pinch of salt.

What I do know is that I'm extremely happy with the return on my investment. I'm not going to tell you how much I've earned, but a naive calculation of $17.50 (my royalty on a $20 sale) x 4,600 (the total number of readers) is a little high but gets you into the right ballpark. In summary, Leanpub allows me to focus on content, takes care of pretty much everything else, and gives me an amazing author royalty as a result. This is why I use Leanpub.

Categories: Architecture