
Software Development Blogs: Programming, Software Testing, Agile Project Management


Strategy: Change the Problem

James T. Kirk's famous gambit in Starfleet's impossible-to-win Kobayashi Maru test was to redefine the problem into a challenge he could beat.

Interestingly, an article titled Shifts In Algorithm Design suggests that much the same gambit has become the modern method of solving algorithmic problems.

In the past: 

I, Dick, recall the “good old days of theory.” When I first started working in theory—a sort of double meaning—I could only use deterministic methods. I needed to get the exact answer, no approximations. I had to solve the problem that I was given—no changing the problem.

 

In the good old days of theory, we got a problem, we worked on it, and sometimes we solved it. Nothing shifty, no changing the problem or modifying the goal. 

Today:
Categories: Architecture


Continuous Value Delivery the Agile Way

Continuous Value Delivery helps businesses realize the benefits from their technology investments in a continuous fashion.

Businesses these days expect at least quarterly results from their technology investments.  The beauty is, with Continuous Value Delivery they can get it, too.  

Continuous Value Delivery is a practice that makes delivering user value and business value a rapid, reliable, and repeatable process.  It’s a natural evolution from Continuous Integration and Continuous Delivery.  Continuous Value Delivery simply adds a focus on Value Realization, which addresses planning for value, driving adoption, and measuring results.

But let’s take a look at the evolution of software practices that have made it possible to provide Continuous Value Delivery in our Cloud-first, mobile-first world.

Long before there was Continuous Value Delivery, there was Continuous Integration …

Continuous Integration

Continuous Integration is a software development practice where team members integrate their work frequently.  The goal of Continuous Integration is to reduce and prevent integration problems.  In Continuous Integration, each integration is verified against tests.

Then, along came, Continuous Delivery …

Continuous Delivery

Continuous Delivery extended the idea of Continuous Integration to automate and improve the process of software delivery.  With Continuous Delivery,  software checked in on the mainline is always ready for release.  When you combine automated testing, Continuous Integration, and Continuous Delivery, it's possible to push out updates, fixes, and new releases to customers with lower risk and minimal manual overhead.

Continuous Delivery changes the model from a big bang approach, where software is shipped at the end of a long project cycle, to where software can be iteratively and incrementally shipped along the way.

This set the stage for Continuous Value Delivery …

Continuous Value Delivery

Continuous Value Delivery puts a focus on Value Realization as a first-class citizen.  

To be able to ship value on a continuous basis, you need a simple mechanism for defining units of value.  Scenarios and stories are an effective way to chunk and carve up value into useful increments.  Scenarios and stories also help with driving adoption.

For Continuous Value Delivery, you also need a way to "pull" value, as well as "push" value.   Kanbans provide an easy way to visualize the flow of value; they support a “pull” mechanism and reinforce “the voice of the customer.”    User stories provide an easy way to create a backlog or catalog of potential value that you can “push” based on priorities and user demand.

Businesses that are making the most of their technology investments are linking scenarios, backlogs, and Kanbans to their value chains and their value streams.

Value Planning Enables Continuous Value Delivery

If you want to drive continuous value to the business, then you need to plan for it.  As part of value planning, you need to identify key stakeholders in the business.    With those stakeholders, you need to identify the business benefits they care about, along with the corresponding KPIs and value measures.

At this stage, you also want to identify who in the business will be responsible for collecting the data and reporting the value.

Adoption is the Key to Value Realization

Adoption is the key component of Continuous Value Delivery.  After all, if you release new features, but nobody uses them, then the users won't get the new benefits.   In order to realize the value, users need to use the new features and actually change their behaviors.

So while deployment was the old bottleneck, adoption is the new bottleneck.

Users and the business can only absorb so much value at once.  In order to flow more value, you need to reduce friction around adoption, and drive consumption of technology.  You can do this through effective adoption planning, user readiness, communication plans, and measurement.

Value Measurement and Reporting

To close the loop, you want the business to acknowledge the delivery of value.   That’s where measurement and reporting come in.

From a measurement standpoint, you can use adoption and usage metrics to better understand what's being used and how much.  But that’s only part of the story.

To connect the dots back to the business impact, you need to measure changes in behavior, such as what people have stopped doing, started doing, and continue doing.   This will be an indicator of benefits being realized.

Ultimately, to show the most value to the business, you need to move the benefits up the stack.  At the lowest level, you can observe the benefits, by simply observing the changes in behavior.  If you can observe the benefits, then you should be able to measure the benefits.  And if you can measure the benefits, then you should be able to quantify the benefits.   And if you can quantify the benefits, then you should be able to associate some sort of financial amount that shows how things are being done better, faster, or cheaper.

The value reporting exercise should help inform and adjust any value planning efforts.  For example, if adoption is proving to be the bottleneck, now you can drill into where exactly the bottleneck is occurring and you can refocus efforts more effectively.

Plan, Do, Check, Act

In essence, your value realization loop is really a cycle of plan, do, check, act, where value is made explicit, and it is regarded as a first-class citizen throughout the process of Continuous Value Delivery.

That’s a way better approach than building solutions and hoping that value will come or that you’ll stumble your way into business impact.

As history shows, too many projects try to luck their way into value, and it’s far better to design for it.

Value Sprints

A Sprint is simply a unit of development in Scrum.   The idea is to provide, at the end of the Sprint, a working increment of the solution that is potentially shippable.

It’s a “timeboxed” effort.   This helps reduce risk as well as support a loop of continuous learning.  For example, a team might work in one-week, two-week or one-month Sprints.   At the end of the Sprint, you can review the progress, and make any necessary adjustments to improve for the next Sprint.

In the business arena, we can think in terms of Value Sprints, where we don’t want to stop at just shipping a chunk of value.

Just shipping or deploying software and solutions does not lead to adoption.

And that’s how software and IT projects fall down.

With a Value Sprint, we want to add a few specific things to the mix to ensure appropriate Value Realization and true benefits delivery.  Specifically, we want to integrate Value Planning right up front, and as part of each Sprint.   Most importantly, we want to plan and drive adoption as part of the Value Sprint.

If we can accelerate adoption, then we can accelerate time to value.

And, of course, we want to report on the value as part of the Value Sprint.

In practice, our field tells us that Value Sprints of 6-8 weeks tend to work well with the business.    Obviously, the right answer depends on your context, but it helps to know what others have been doing.   The length of the loop depends on the business cadence, as well as how well adoption can be driven in an organization, which varies drastically based on ability to execute and maturity levels.  And, for a lot of businesses, it’s important to show results within a quarterly cycle.

But what’s really important is that you don’t turn value delivery into a long-winded run, or a long shot down the line, and that you don’t simply hope that value happens.

Through Value Sprints and Continuous Value Delivery, you can create a sustainable approach where the business realizes the value from its technology investments in a more reliable way, for real business results.

And that’s how you win in the game of software today.

You Might Also Like

Blessing Sibanyoni on Value Realization

How Can Enterprise Architects Drive Business Value the Agile Way?

How To Use Personas and Scenarios to Drive Adoption and Realize Value

Categories: Architecture, Programming

React in modern web applications: Part 1

Xebia Blog - Tue, 09/02/2014 - 12:00

At Xebia we love to share knowledge! One of the ways we do this is by organizing 1-day courses during the summer. Together with Frank Visser, we decided to create a training course on full stack development with Node.js, AngularJS and Facebook's React. The goal of the training was to show the students how to create a simple timesheet application. This application would use nothing but modern Javascript technologies, while also teaching best practices for setting up and maintaining it.

To further share the knowledge gained during the creation of this training we'll be releasing several blog posts. In this first part we'll talk about why to use React, what React is and how you can incorporate it into your Grunt lifecycle.

This series of blog posts assumes that you're familiar with the Node.js platform and the Javascript task runner Grunt.

What is React?

ReactJS logo

React is a Javascript library, made by Facebook, for creating user interfaces. It is their answer to the V in MVC. As it only takes care of the user interface part of a web application, React can be (and most often will be) combined with other frameworks (e.g. AngularJS, Backbone.js, ...) for handling the M and C parts.

In case you're unfamiliar with the MVC architecture, it stands for model-view-controller and it is an architectural pattern for dividing your software into 3 parts with the goal of separating the internal representation of data from the representation shown to the actual user of the software.

Why use React?

There are quite a lot of Javascript MVC frameworks which also allow you to model your views. What are the benefits of using React instead of, for example, AngularJS?

What sets React apart from other Javascript MVC frameworks like AngularJS is the way React handles UI updates. To dynamically update a web UI, you have to apply DOM updates whenever data in your UI changes. These DOM updates, compared to reading data from the DOM, are expensive operations which can drastically slow down your application's responsiveness if you do not minimize the number of updates you do. React takes a clever approach to minimizing the number of DOM updates by using a virtual DOM diff (React's virtual DOM is distinct from the browser's shadow DOM, although the two are sometimes conflated).

In contrast to the normal DOM, which consists of nodes, the virtual DOM consists of lightweight Javascript objects that represent your different React components. This representation is used to determine the minimum number of steps required to go from the previous render to the next. By checking whether the state has changed, React prevents unnecessary re-renders. By calling the setState method you mark a component 'dirty', which essentially tells React to update the UI for this component. When setState is called, the component rebuilds the virtual DOM for all its children. React then compares this to the current virtual sub-tree for the same component to determine the changes, and thus the minimum set of real DOM updates to apply.

Besides efficiently updating only sub-trees, React batches these virtual DOM changes into real DOM updates. At the end of the React event loop, React will look up all components marked as dirty and re-render them.
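To make this concrete, here is a minimal sketch of a stateful component (the component name is made up for illustration; it uses the React.createClass API current at the time of writing, plus the JSX syntax introduced in the next section):

/** @jsx React.DOM */
var Counter = React.createClass({
  getInitialState: function() {
    return {count: 0};
  },
  handleClick: function() {
    // setState marks this component 'dirty'; at the end of the event loop
    // React re-renders it, diffs the new virtual DOM against the previous
    // one, and applies the minimal set of real DOM updates.
    this.setState({count: this.state.count + 1});
  },
  render: function() {
    return <button onClick={this.handleClick}>Clicked {this.state.count} times</button>;
  }
});

React.renderComponent(<Counter />, document.getElementById('counter'));

Clicking the button only patches the text node that actually changed; the button element itself is left untouched.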

How does React compare to AngularJS?

It is important to note that you can perfectly mix the usage of React with other frameworks like AngularJS for creating user interfaces. You can of course also decide to only use React for the UI and keep using AngularJS for the M and C in MVC.

In our opinion, using React for simple components does not give you an advantage over using AngularJS. We believe the true strength of React lies in demanding components that re-render a lot. React tends to really outperform AngularJS (and a lot of other frameworks) when it comes to UI elements that require a lot of re-rendering. This is due to how React handles UI updates internally as explained above.

JSX

JSX is a Javascript XML syntax transform recommended for use with React. It lets you write declarative, HTML-like markup directly inside your Javascript code. Although JSX and React are independent technologies, JSX was built with React in mind. React works without JSX out of the box, but the React team recommends using it. Some of the many reasons for using JSX:

  • It's easier to visualize the structure of the DOM
  • Designers are more comfortable making changes
  • It's familiar for those who have used MXML or XAML

If you decide to go for JSX you will have to compile the JSX to Javascript before running your application. Later on in this article I'll show you how you can automate this using a Grunt task. Besides Grunt there are a lot of other build tools that can compile JSX. To name a few, there are plugins for Gulp, Broccoli or Mimosa.

An example JSX file for creating a simple link looks as follows:

/** @jsx React.DOM */
var link = React.DOM.a({href: 'http://facebook.github.io/react'}, 'React');

Make sure you never forget the starting comment, or your JSX file will not be picked up by the JSX transformer.

Components

With React you can construct UI views using multiple, reusable components. You can separate the different concerns of your application by creating modular components, and thus get the same benefits as when using functions and classes. You should strive to break down the common elements in your UI into reusable components; this allows you to reduce boilerplate and keep things DRY.

You can construct component classes by calling React.createClass(). Each component has a well-defined interface and can receive data (in the form of props) specific to that component, as well as hold its own internal state. A component can have ownership over other components; in React, the owner of a component is the one setting the props of that component. An owner, or parent component, can access its children through this.props.children.

Using React you could create a hello world application as follows:

/** @jsx React.DOM */
var HelloWorld = React.createClass({
  render: function() {
    return <div>Hello world!</div>;
  }
});

Creating a component does not mean it will get rendered automatically. You have to define where you would like to render your different components using React.renderComponent as follows:

React.renderComponent(<HelloWorld />, targetNode);

By using, for example, document.getElementById or a jQuery selector, you target the DOM node where you would like React to render your component, and you pass that node in as the targetNode parameter.
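Putting these pieces together, here is a short sketch (the component names are invented for illustration) of an owner setting props on its children, a container rendering this.props.children, and the result being rendered into a target node:

/** @jsx React.DOM */
var Avatar = React.createClass({
  render: function() {
    // 'username' is a prop, set by the owner (Profile) below.
    return <span className="avatar">{this.props.username}</span>;
  }
});

var Panel = React.createClass({
  render: function() {
    // A container can render whatever its owner nested inside it.
    return <div className="panel">{this.props.children}</div>;
  }
});

var Profile = React.createClass({
  render: function() {
    // Profile owns Avatar and Panel because it sets their props here.
    return <Panel><Avatar username="jdoe" /></Panel>;
  }
});

React.renderComponent(<Profile />, document.getElementById('app'));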

Automating JSX compilation in Grunt

To automate the compilation of JSX files you will need to install the grunt-react package using Node.js' npm installer:

npm install grunt-react --save-dev

After installing the package you have to add a bit of configuration to your Gruntfile.js so that the task knows where your JSX source files are located and where and with what extension you would like to store the compiled Javascript files.

react: {
  dynamic_mappings: {
    files: [
      {
        expand: true,
        src: ['scripts/jsx/*.jsx'],
        dest: 'app/build_jsx/',
        ext: '.js'
      }
    ]
  }
}

To speed up development you can also configure the grunt-contrib-watch package to keep an eye on JSX files. Watching for JSX files will allow you to run the grunt-react task whenever you change a JSX file resulting in continuous compilation of JSX files while you develop your application. You simply specify the type of files to watch for and the task that you would like to run when one of these files changes:

watch: {
  jsx: {
    files: ['scripts/jsx/*.jsx'],
    tasks: ['react']
  }
}

Last but not least you will want to add the grunt-react task to one or more of your grunt lifecycle tasks. In our setup we added it to the serve and build tasks.

grunt.registerTask('serve', function (target) {
  if (target === 'dist') {
    return grunt.task.run(['build', 'connect:dist:keepalive']);
  }

  grunt.task.run([
    'clean:server',
    'bowerInstall',
    'react',
    'concurrent:server',
    'autoprefixer',
    'configureProxies:server',
    'connect:livereload',
    'watch'
  ]);
});

grunt.registerTask('build', [
  'clean:dist',
  'bowerInstall',
  'useminPrepare',
  'concurrent:dist',
  'autoprefixer',
  'concat',
  'react',
  'ngmin',
  'copy:dist',
  'cdnify',
  'cssmin',
  'uglify',
  'rev',
  'usemin',
  'htmlmin'
]);
Conclusion

Due to React's different approach to handling UI changes, it is highly efficient at re-rendering UI components. Besides that, it is easy to configure and to integrate into your build lifecycle.

What's next?

In the next article we'll be discussing how you can use React together with AngularJS, how to deal with state in your components and how to avoid passing through your entire component hierarchy using callbacks when updating state.

Free and open source example software guidebook

Coding the Architecture - Simon Brown - Tue, 09/02/2014 - 09:19

It needs a little updating (isn't that always the case!), but I've moved the example software guidebook (previously an appendix in my Software Architecture for Developers book) into a separate free and open source book on Leanpub.

techtribes.je - Software Guidebook

techtribes.je is a side-project of mine to create a content aggregator for the tech, IT and digital sector in Jersey, Channel Islands. The code behind the techtribes.je website is open source and available on GitHub. The source for the software guidebook is also open source and available on GitHub.

The techtribes.je software guidebook is based upon the concept of a software guidebook as described in my Software Architecture for Developers book; the software guidebook is a lightweight, pragmatic way to document the "big picture" of a software system. In essence, it's my simplified version of many "software architecture document" templates you'll find out there on the web.

techtribes.je - Software Guidebook is available to download for free from Leanpub. I hope you find it useful.

Categories: Architecture

Let's Build Maker Cities for Maker People Around New Resources Like Bandwidth, Compute, and Atomically-Precise Manufacturing

TL;DR: There’s a lot of unused space in North America. Yet cities like San Francisco are becoming ever more expensive because of a bubble created by high tech jobs that seemingly can be done anywhere. Historically, cities have been built around resources that provide some service to humans. The age of infrastructure rising around physical resources is declining while the age of digital resource exploitation is rising. Cities are still valuable because they are amazing idea and problem-solving machines. How about we create thousands of new Maker Cities in the vast emptiness that is North America and build them around digital resources like bandwidth, compute power, Atomically-Precise Manufacturing (APM), and all things future and bright?

Observation Number One: There’s lots of empty space out there.
Categories: Architecture

Why do I use Leanpub?

Coding the Architecture - Simon Brown - Sat, 08/30/2014 - 11:35

There's been some interesting discussion over the past few days about Leanpub, both on Twitter and blogs. Jurgen Appelo posted Why I Don't Use Leanpub and Peter Armstrong responded. I think the biggest selling points of Leanpub as a publishing platform from an author's perspective may have been lost in the discussion. So, here's my take on why I use Leanpub for Software Architecture for Developers.

Some history

I pitched my book idea to a number of traditional publishing companies in 2008 and none of them were very interested. "Nice idea, but it won't sell" was the basic summary. A few years later I decided to self-publish my book instead and I was about to head down the route of creating PDF and EPUB versions using a combination of Pages and iBooks Author on the Mac. Why? Because I love books like Garr Reynolds' Presentation Zen and I wanted to do something similar. At first I considered simply giving the book away for free on my website but, after Googling around for self-publishing options, I stumbled across Leanpub. Despite the Leanpub bookstore being fairly sparse at the start of 2012, the platform piqued my interest and the rest is history.

The headline: book creation, publishing, sales and distribution as a service

I use Leanpub because it allows me to focus on writing content. Period. The platform takes care of creating and selling e-books in a number of different formats. I can write some Markdown, sync the files via Dropbox and publish a new version of my book within minutes.

Typesetting and layout

I frequently get asked for advice about whether Leanpub is a good platform for somebody to write a book. The number one question to ask is whether you have specific typesetting/layout needs. If you want to produce a "Presentation Zen" style book or if having control of your layout is important to you, then Leanpub isn't for you. If, however, you want to write a traditional book that mostly consists of words, then Leanpub is definitely worth taking a look at.

Leanpub uses a slightly customised version of Markdown, which is a super-simple language for writing content. Here's an example of a Markdown file from my book, and you can see the result in the online sample of my book. Leanpub does allow you to tweak things like PDF page size, font size, page breaking, section numbering, etc but you're not going to get pixel perfect typesetting. I think that Leanpub actually does a pretty fantastic job of creating good looking PDF, EPUB and MOBI format ebooks based upon the very minimal Markdown. This is especially true when you consider the huge range of ebook reader software across PCs, Macs, Android devices, Apple devices, Kindles, etc. Plus the readers themselves can mess with the fonts/font sizes too.
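For a flavor of the format, here is a small hypothetical snippet: plain Markdown headings and emphasis, plus an aside block from Leanpub's dialect.

# Part I: Foundations

## What is software architecture?

Software architecture is about the *important* stuff,
whatever that happens to be.

A> Lines starting with "A>" are a Leanpub extension that
A> renders as an aside block in the generated book.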

Book formatting on Leanpub

It's like building my own server at Rackspace versus using a "Platform as a Service" such as Cloud Foundry. You need to make a decision about the trade-off between control and simplicity/convenience. Since authoring isn't my full-time job and I have lots of other stuff to be getting on with, I'm more than happy to supply the content and let Leanpub take care of everything else for me.

Toolchain

My toolchain as a Leanpub author is incredibly simple: Dropbox and Mou. From a structural perspective, I have one Markdown file per essay and that's basically it. Leanpub does now provide support for using GitHub to store your content and I can see the potential for a simple Leanpub-aware authoring tool, but it's not rocket science. And to prove the point, a number of non-technical people here in Jersey have books on Leanpub too (e.g. Thrive with The Hive and a number of books by Richard Rolfe).

Iterative and incremental delivery

Before starting, I'd already decided that I'd like to write the book as a collection of short essays and this was cemented by the fact that Leanpub allows me to publish an in-progress ebook. I took an iterative and incremental approach to publishing the book. Rather than starting with essay number one and progressing in order, I tried to initially create a minimum viable book that covered the basics. I then fleshed out the content with additional essays once this skeleton was in place, revisiting and iterating upon earlier essays as necessary. I signed up for Leanpub in January 2012 and clicked the "Publish" button four weeks later. That first version of my book was only about ten pages in length but I started selling copies immediately.

Variable pricing and coupons

Another thing that I love about Leanpub is that it gives you full control over how you price your book. The whole pricing thing is a balancing act between readership and royalties, but I like that I'm in control of this. My book started out at $4.99 and, as content was added, that price increased. The book currently has a minimum price of $20 and a recommended price of $30. I can even create coupons for reduced price or free copies too. There's some human psychology that I don't understand here, but not everybody pays the minimum price. Far from it, and I've had a good number of people pay more than the recommended price too. Leanpub provides all of the raw data, so you can analyse it as needed.

An incubator for books

As I've already mentioned, I pitched my book idea to a bunch of regular publishing companies and they weren't interested. Fast-forward a few years and my book is currently the "bestselling" book on Leanpub this week, fifth by lifetime earnings and twelfth in terms of number of copies sold. I've used quotes around "bestselling" because Jurgen did. ;-)

Leanpub bestsellers

In his blog post, Peter Armstrong emphasises that Leanpub is a platform for publishing in-progress ebooks, especially because you can publish using an iterative and incremental approach. For this reason, I think that Leanpub is a fantastic way for authors to prove an idea and get some concrete feedback in terms of sales. Put simply, Leanpub is a fantastic incubator for books. I know of a number of books that started on Leanpub and have since been taken on by traditional publishing companies. I've had a number of offers too, including some for commercial translations. Sure, there are other ways to publish in-progress ebooks, but Leanpub makes this super-easy and the barrier to entry is incredibly low.

The future for my book?

What does the future hold for my book then? I'm not sure that electronic products are ever really "finished" and, although I consider my book to be "version 1", I do have some additional content lined up. And when I publish it, thanks to the Leanpub platform, all of my existing readers will get the updates for free.

I've so far turned down the offers that I've had from publishing companies, primarily because they can't compete in terms of royalties and I'm unconvinced that they will be able to significantly boost readership numbers. Leanpub is happy for authors to sell their books through other channels (e.g. Amazon) but, again, I'm unconvinced that simply putting the book onto Amazon will yield an increased readership. I do know of books on the Kindle store that haven't sold a single copy, so I take "Amazon is bigger and therefore better" arguments with a pinch of salt.

What I do know is that I'm extremely happy with the return on my investment. I'm not going to tell you how much I've earned, but a naive calculation of $17.50 (my royalty on a $20 sale) x 4,600 (the total number of readers) is a little high but gets you into the right ballpark. In summary, Leanpub allows me to focus on content, takes care of pretty much everything else and gives me an amazing author royalty as a result. This is why I use Leanpub.

Categories: Architecture

Stuff The Internet Says On Scalability For August 29th, 2014

Hey, it's HighScalability time:


In your best Carl Sagan voice...Billions and Billions of Habitable Planets.
  • Quotable Quotes:
    • @Kurt_Vonnegut: Another flaw in the human character is that everybody wants to build and nobody wants to do maintenance.
    • @neil_conway: "The paucity of innovation in calculating join predicate selectivities is truly astounding."
    • @KentBeck: power law walks into a bar. bartender says, "i've seen a hundred power laws. nobody orders anything." power law says, "1000 beers, please".
    • @CompSciFact: RT @jfcloutier: Prolog: thinking in proofs Erlang: thinking in processes UML: wishful thinking

  • For your acoustic listening pleasure let me present...The Orbiting Vibes playing Scaling Doesn't Matter. I don't quite understand how it relates to scaling, but my deep learning algorithm likes it. 

  • The Rise of the Algorithm. Another interesting podcast with James Allworth and Ben Thompson. Much pondering of how to finance content. Do you trust content with embedded affiliate links? Do you trust content written by writers judged on their friendliness to advertisers? Why trust at all is the bigger question. Facebook is the soft news advertisers love. Twitter is the hard news advertisers avoid. A traditional newspaper combined both. Humans are the new horses. < Capitalism doesn't care if people are employed anymore than it cared about horses being employed. Employment is simply a byproduct of inefficient processes. The Faith that the future will provide is deliciously ironic given the rigorous rationalism underlying most of the episodes.

  • Great reading list for Berkeley CS286: Implementation of Database Systems, Fall 2014. 

  • Is it just me or is it totally weird that all the spy systems use the same diagrams that any other project would use? It makes it seem so...normal. The Surveillance Engine: How the NSA Built Its Own Secret Google.

The Mathematics of Herding Sheep. Little border collie Annie embodies a very smart algorithm for herding sheep:  When sheep become dispersed beyond a certain point, dogs put their effort into rounding them up, reintroducing predatory pressure into the herd, which responds according to selfish herd principles, bunching tightly into a more cohesive unit. < What's so disturbing is how well this algorithm works with people.

Inside Google's Secret Drone-Delivery Program. What I really want are pick-up drones, where I send my drone to pick stuff up. Or are pick-up and delivery cars a better bet? Though I can see swarms of drones delivering larger objects in parts that self-assemble.

  • Lambda Architecture at Indix: "break down the various stages in your data pipeline into the layers of the architecture and choose technologies and frameworks that satisfy the specific requirements of each layer."

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Inspirational Work Quotes at a Glance

What if your work could be your ultimate platform? … your ultimate channel for your growth and greatness?

We spend a lot of time at work. 

For some people, work is their ultimate form of self-expression.

For others, work is a curse.

Nobody stops you from using work as a chance to challenge yourself, to grow your skills, and become all that you’re capable of.

But that’s a very different mindset than work is a place you have to go to, or stuff you have to do.

When you change your mind, you change your approach.  And when you change your approach, you change your results.   But rather than just try to change your mind, the ideal scenario is to expand your mind, and become more resourceful.

You can do so with quotes.

Grow Your “Work Intelligence” with Inspirational Work Quotes

In fact, you can actually build your “work intelligence.”

Here are a few ways to think about “intelligence”:

  1. the ability to learn or understand things or to deal with new or difficult situations (Merriam Webster)
  2. the more distinctions you have for a given concept, the more intelligence you have

In Rich Dad, Poor Dad, Robert Kiyosaki says, “intelligence is the ability to make finer distinctions.”   And Tony Robbins says, “intelligence is the measure of the number and the quality of the distinctions you have in a given situation.”

If you want to grow your “work intelligence”, one of the best ways is to familiarize yourself with the best inspirational quotes about work.

By drawing from the wisdom of the ages and modern sages, you can operate at a higher level and turn work from a chore into a platform for lifelong learning, a dojo for personal growth, and a chance to master your craft.

You can use inspirational quotes about work to fill your head with ideas, distinctions, and key concepts that help you unleash what you’re capable of.

To give you a giant head start and to help you build a personal library of profound knowledge, here are two work quotes collections you can draw from:

37 Inspirational Quotes for Work as Self-Expression

Inspirational Work Quotes

10 Distinct Ideas for Thinking About Your Work

Let’s practice.   This will only take a minute, and if you happen to hear the right words, the ones that are keys for you, the insight or “ah-ha” can be just the breakthrough you needed to get more out of your work and, as a result, more out of life (or at least your moments).

Here is a sample of distinct ideas that you can use to change how you perceive your work, and how you do your work:

  1. “Either write something worth reading or do something worth writing.” — Benjamin Franklin
  2. “You don’t get paid for the hour. You get paid for the value you bring to the hour.” — Jim Rohn
  3. “Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do.” — Steve Jobs
  4. “Measuring programming progress by lines of code is like measuring aircraft building progress by weight.” — Bill Gates
  5. “We must each have the courage to transform as individuals. We must ask ourselves, what idea can I bring to life? What insight can I illuminate? What individual life could I change? What customer can I delight? What new skill could I learn? What team could I help build? What orthodoxy should I question?” — Satya Nadella
  6. “My work is a game, a very serious game.” — M. C. Escher
  7. “Hard work is a prison sentence only if it does not have meaning. Once it does, it becomes the kind of thing that makes you grab your wife around the waist and dance a jig.” — Malcolm Gladwell
  8. “The test of the artist does not lie in the will with which he goes to work, but in the excellence of the work he produces.” — Thomas Aquinas
  9. “Are you bored with life? Then throw yourself into some work you believe in with all your heart, live for it, die for it, and you will find happiness that you had thought could never be yours.” — Dale Carnegie
  10. “I like work; it fascinates me. I can sit and look at it for hours.” — Jerome K. Jerome

For more ideas, take a stroll through my inspirational work quotes.

As you can see, there are lots of ways to think about work and what it means.  At the end of the day, what matters is how you think about it, and what you make of it.  It’s either an investment, or it’s an incredible waste of time.  You can make it mundane, or you can make it matter.

The Pleasant Life, The Good Life, and The Meaningful Life

Here’s another surprise about work.   You can use work to live the good life.   According to Martin Seligman, a master in the art and science of positive psychology, there are three paths to happiness:

  1. The Pleasant Life
  2. The Good Life
  3. The Meaningful Life

In The Pleasant Life, you simply try to have as much pleasure as possible.  In The Good Life, you spend more time in your values.  In The Meaningful Life, you use your strengths in the service of something that is bigger than you are.

There are so many ways you can live your values at work and connect your work with what makes you come alive.

There are so many ways to turn what you do into service for others and become a part of something that’s bigger than you.

If you haven’t figured out how yet, then dig deeper, find a mentor, and figure it out.

You spend way too much time at work to let your influence and impact fade to black.

You Might Also Like

40 Hour Work Week at Microsoft

Agile Avoids Work About Work

How Employees Lost Empathy for Their Work, for the Customer, and for the Final Product

Satya Nadella on Live and Work a Meaningful Life

Short-Burst Work

Categories: Architecture, Programming

Speaking in September

Coding the Architecture - Simon Brown - Thu, 08/28/2014 - 16:01

After a lovely summer (mostly) spent in Jersey, September is right around the corner and is shaping up to be a busy month. Here's a list of the events where you'll be able to find me.

It's going to be a fun month and besides, I have to keep up my British Airways frequent flyer status somehow, right? ;-)

Categories: Architecture

CocoaHeadsNL @ Xebia on September 16th

Xebia Blog - Thu, 08/28/2014 - 11:20

On Tuesday the 16th the Dutch CocoaHeads will be visiting us. It promises to be a great night for anybody doing iOS or OSX development. The night starts at 17:00, dinner at 18:00.

Are you an iOS/OSX developer who would like to meet fellow developers? Come join the CocoaHeads on September 16th at our office. More details are on the CocoaHeadsNL meetup page.

What is your next step in Continuous Delivery? Part 1

Xebia Blog - Wed, 08/27/2014 - 21:15

Continuous Delivery helps you deliver software faster, with better quality and at lower cost. Who doesn't want to deliver software faster, better and cheaper? I certainly want that!

No matter how good you are at Continuous Delivery, you can always do one step better. Even if you are as good as Google or Facebook, you can still do one step better. Myself included, I can do one step better.

But also if you are just getting started with Continuous Delivery, there is a feasible step to take you forward.

In this series, I describe a plan that helps you determine where you are right now and what your next step should be. To be complete, I'll start at the very beginning. I expect most of you have passed the first steps already.

The steps you already took

This is the first part in the series: What is your next step in Continuous Delivery? I'll start with three steps combined in a single post, because the great majority of you have gone through these steps already.

Step 0: Your very first lines of code

Do you remember the very first lines of code you wrote? Perhaps as a student or maybe before that as a teenager? Did you use version control? Did you bring it to a test environment before going to production? I know I did not.

None of us was born with an innate skill for delivering software in a certain way. However, many of us are taught a certain way of delivering software that is still a long way from Continuous Delivery.

Step 1: Version control

At some point during your study or career, you were introduced to Version Control. I remember starting with CVS, migrating to Subversion and currently using Git. Each of these systems is an improvement over the previous one.

It is common to store the source code for your software in version control. Do you already have definitions or scripts for your infrastructure in version control? And for your automated acceptance tests or database schemas? In later steps, we'll get back to that.

Step 2: Release process

Your current release process may be far from Continuous Delivery. Despite appearances, your current release process is a useful step towards Continuous Delivery.

Even if you deliver to production less than twice a year, you are better off than a company that delivers its code unpredictably, untested and unmanaged. Or worse, a company that edits its code directly on a production machine.

In your delivery process, you have planning, control, a production-like testing environment, actual testing and maintenance after the go-live. The main difference with Continuous Delivery is the frequency and the amount of software that is released at the same time.

So yes, a release process is a productive step towards Continuous Delivery. Now let's see if we can optimize beyond this manual release process.

Step 3: Scripts

Imagine you have issues on your production server... Who do you go to for help? Do you have someone in mind?

Let me guess, you are thinking about a middle-aged guy who has been working at your organisation for 10+ years. Even if your organization is only 3 years old, I bet he's been working there for more than 10 years. Or at least, it seems like it.

My next guess is that this guy wrote some scripts to automate recurring tasks and make his life easier. Am I right?

These scripts are an important step towards Continuous Delivery. In fact, Continuous Delivery is all about automating repetitive tasks. The only thing that falls short is that these scripts are a one-man initiative. It is a good initiative, but there is no strategy behind it and a lack of management support.

If you don't have this guy working for you, then you may have a bigger step to take when continuing towards the next step of Continuous Delivery. To successfully adopt Continuous Delivery in the long run, you are going to need someone like him.

Following steps

In the next parts, we will look at the following steps towards becoming a world champion at delivering software:

  • Step 4: Continuous Delivery
  • Step 5: Continuous Deployment
  • Step 6: "Hands-off"
  • Step 7: High Scalability

Stay tuned for the following posts.

How To Use Personas and Scenarios to Drive Adoption and Realize Value

Personas and scenarios can be a powerful tool for driving adoption and business value realization.

All too often, people deploy technology without fully understanding the users that it’s intended for. 

Worse, if the technology does not get used, the value does not get realized.

Keep in mind that the value is in the change.  

The change takes the form of doing something better, faster, cheaper, and behavior change is really the key to value realization.

If you deploy a technology, but nobody adopts it, then you won’t realize the value.  It’s a waste.  Or, more precisely, it’s only potential value.  It’s only potential value because nobody has used it to change their behavior to be better, faster, or cheaper with the new technology.  

In fact, you can view change in terms of behavior changes:

What should users START doing or STOP doing, in order to realize the value?

Behavior change becomes a useful yardstick for evaluating adoption and consumption of technology, and a significant proxy for value realization.

What is a Persona?

I’ve written about personas before  in Actors, Personas, and Roles, MSF Agile Persona Template, and Personas at patterns & practices, and Microsoft Research has a whitepaper called Personas: Practice and Theory.

A persona, simply defined, is a fictitious character that represents a type of user.  Personas are the “who” in the organization.    You use them to create familiar faces, to inspire project teams to know their clients, and to build empathy and clarity around the user base.

Using personas helps characterize sets of users.  It’s a way to capture and share details about what a typical day looks like and what sorts of pains, needs, and desired outcomes the personas have as they do their work. 

You need to know how work currently gets done so that you can provide relevant changes with technology, plan for readiness, and drive adoption through specific behavior changes.

Using personas can help you realize more value, while avoiding “value leakage.”

What is a Scenario?

When it comes to users, and what they do, we're talking about usage scenarios.  A usage scenario is a story or narrative in the form of a flow.  It shows how one or more users interact with a system to achieve a goal.

You can picture usage scenarios as high-level storyboards.


In fact, since scenario is often an overloaded term, if people get confused, I just call them Solution Storyboards.

To figure out relevant usage scenarios, we need to figure out the personas that we are creating solutions for.

Workforce Analysis with Personas

In practice, you would segment the user population, and then assign personas to the different user segments.  For example, let’s say there are 20,000 employees: 3,000 of them are business managers, 6,000 are sales people, and 1,000 are product development engineers.   You could create a persona named Mary to represent the business managers, a persona named Sally to represent the sales people, and a persona named Bob to represent the product development engineers.

This sounds simple, but it’s actually powerful.  If you do a good job of workforce analysis, you can better determine how many users a particular scenario is relevant for.  Now you have some numbers to work with.  This can help you quantify business impact.   This can also help you prioritize.  If a particular scenario is relevant for 10 people, but another is relevant for 1,000, you can evaluate actual numbers.
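As a quick illustrative sketch in Javascript (the segment sizes come from the example above, and the scenario-to-persona mappings from the matrix below), you could tally scenario reach like this:

var segmentSizes = {Mary: 3000, Sally: 6000, Bob: 1000, Jill: 5000, Jack: 5000};

// Which personas is each scenario relevant for? (See the matrix below.)
var scenarios = {
  'Scenario 2': ['Mary', 'Sally'],
  'Scenario 6': ['Mary', 'Sally', 'Bob', 'Jill', 'Jack'],
  'Scenario 8': ['Bob', 'Jill']
};

// Reach = total users a scenario is relevant for; use it to prioritize.
Object.keys(scenarios).forEach(function(name) {
  var reach = scenarios[name].reduce(function(sum, persona) {
    return sum + segmentSizes[persona];
  }, 0);
  console.log(name + ' is relevant for ' + reach + ' users');
});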

                 Persona 1   Persona 2   Persona 3   Persona 4   Persona 5
                 “Mary”      “Sally”     “Bob”       “Jill”      “Jack”
User Population  3,000       6,000       1,000       5,000       5,000
Scenario 1       X
Scenario 2       X           X
Scenario 3                               X
Scenario 4                                           X           X
Scenario 5       X
Scenario 6       X           X           X           X           X
Scenario 7       X           X
Scenario 8                               X           X
Scenario 9       X           X           X           X           X
Scenario 10                  X                       X

Analyzing a Persona

Let’s take Bob for example.  As a product development engineer, Bob designs and develops new product concepts.  He would love to collaborate better with his distributed development team, and he would love better feedback loops and interaction with real customers.

We can drill in a little bit to get a better picture of his work as a product development engineer.

Here are a few ways you can drill in:

  • A Day in the Life – We can shadow Bob for a day and get a feel for the nature of his work.  We can create  a timeline for the day and characterize the types of activities that Bob performs.
  • Knowledge and Skills - We can identify the knowledge Bob needs and the types of skills he needs to perform his job well.  We can use this as input to design more effective readiness plans.
  • Enabling Technologies –  Based on the scenario you are focused on, you can evaluate the types of technologies that Bob needs.  For example, you can identify what technologies Bob would need to connect and interact better with customers.

Another approach is to focus on the roles, responsibilities, challenges, work-style, needs and wants.  This helps you understand which solutions are appropriate, what sort of behavior changes would be involved, and how much readiness would be required for any significant change.

At the end of the day, it always comes down to building empathy, understanding, and clarity around pains, needs, and desired outcomes.

Persona Creation Process

Here’s an example of a high-level process for persona creation:

  1. Kickoff workshop
  2. Interview users
  3. Create skeletons
  4. Validate skeletons
  5. Create final personas
  6. Present final personas

Doing persona analysis is actually pretty simple.  The challenge is that people don’t do it, or they make a lot of assumptions about what people actually do and what their pains and needs really are.  When’s the last time somebody asked you what your pains and needs are, or what you need to perform your job better?

A Story of Using Personas to Create the Future of Digital Banking

In one example I know of, a large bank transformed itself by focusing on its personas and scenarios.

It started with one usage scenario:

Connect with customers wherever they are.

This scenario was driven by pain in the business.  The business was out of touch with customers, and it was operating under a legacy banking model.   This simple scenario reflected an opportunity to change how employees connect with customers (through Cloud, Mobile, and Social).

On the customer side of the equation, customers could now have virtual face-to-face communication from wherever they are.  On the employee side, it enabled a flexible work-style, helped employees pair up with each other for great customer service, and provided better touch and connection with the customers they serve.

And in the grand scheme of things, this helped transform a brick-and-mortar bank to a digital bank of the future, setting a new bar for convenience, connection, and collaboration.

Here is a video that talks through the story of one bank’s transformation to the digital banking arena:

Video: NedBank on The Future of Digital Banking

In the video, you’ll see Blessing Sibanyoni, one of Microsoft’s Enterprise Architects in action.

If you’re wondering how to change the world, you can start with personas and scenarios.

You Might Also Like

Scenarios in Practice

How I Learned to Use Scenarios to Evaluate Things

How Can Enterprise Architects Drive Business Value the Agile Way?

Business Scenarios for the Cloud

IT Scenarios for the Cloud

Categories: Architecture, Programming

The 1.2M Ops/Sec Redis Cloud Cluster Single Server Unbenchmark

This is a guest post by Itamar Haber, Chief Developers Advocate, Redis Labs.

While catching up with the world the other day, I read through the High Scalability guest post by Anshu and Rajkumar's from Aerospike (great job btw). I really enjoyed the entire piece and was impressed by the heavy tweaking that they did to their EC2 instance to get to the 1M mark, but I kept wondering - how would Redis do?

I could have done a full-blown benchmark. But doing a full-blown benchmark is a time- and resource-consuming ordeal. And that's without taking into account the initial difficulties of comparing apples, oranges and other sorts of fruits. A real benchmark is a trap, for it is no more than an effort deemed from inception to be backlogged. But I wanted an answer, and I wanted it quick, so I was willing to make a few sacrifices to get it. That meant doing the next best thing - an unbenchmark.

An unbenchmark is, by (my very own) definition, nothing like a benchmark (hence the name). In it, you cut every corner and relax every assumption to get a quick 'n dirty ballpark figure. Leaning heavily on the expertise of the guys in our labs, we measured the performance of our Redis Cloud software without any further optimizations. We ran our unbenchmark with the following setup:

Categories: Architecture

Synchronize the Team

Xebia Blog - Tue, 08/26/2014 - 13:52

How can you, as a scrum master, improve the chances that the scrum team has a common vision and understanding of both the user story and the solution, from the start until the end of the sprint?   

The problem

The planning session is where the team should synchronize on understanding the user story and agree on how to build the solution. But there is no real validation that all the team members are on the same page. The team tends to dive into the technical details quite fast in order to identify and size the tasks. The technical details are often discussed by only a few team members and with little or no functional or business context. Once the team leaves the session, there is no guarantee that they remain synchronized when the sprint progresses. 

The only other team synchronization ritual, prescribed by the scrum process, is the daily scrum or stand-up. In most teams the daily scrum is as short as possible, avoiding semantic discussions. I also prefer the stand-ups to be short and sweet. So how can you or the team determine that the team is (still) synchronized?

Specify the story

In the planning session, after a story is considered ready enough to be pulled into the sprint, we start analyzing the story. This is the specification part, using a technique called ‘Specification by Example’. The idea is to write testable functional specifications with actual examples. We decompose the story into specifications and define the conditions of failure and success with examples, so they can be tested. Thinking of examples makes the specification more concrete and the interpretation of the requirements more specific.

Having the whole team work out the specifications and examples helps the team stay focused on the functional part of the story longer and in more detail, before shifting mindsets to the development tasks.  Writing the specifications will also help to determine whether a story is ready enough. As the sprint progresses, once all the tests are green, the story is done as far as building the functionality is concerned.

You can use a tool like FitNesse or Cucumber to write testable specifications. The tests are run against the actual code, so they provide an accurate view on the progress. When all the tests pass, the team has successfully created the functionality. In addition to the scrum board and burn down charts, the functional tests provide a good and accurate view on the sprint progress.
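To make this concrete, here is a minimal sketch of what an example-based specification can look like as an executable test. It uses plain clojure.test rather than FitNesse or Cucumber, and the discount rule, names and values are all hypothetical, purely for illustration:

(ns example.discount-spec
  (:require [clojure.test :refer [deftest is testing]]))

;; Hypothetical function under test: the discount rate for a given order total.
(defn discount [order-total]
  (cond
    (>= order-total 100) 0.10
    (>= order-total 50)  0.05
    :else                0.0))

;; The specification, written as concrete examples of success and failure conditions.
(deftest discount-specification
  (testing "orders of 100 or more get a 10% discount"
    (is (= 0.10 (discount 100)))
    (is (= 0.10 (discount 250))))
  (testing "orders from 50 up to 99 get a 5% discount"
    (is (= 0.05 (discount 50)))
    (is (= 0.05 (discount 99))))
  (testing "smaller orders get no discount"
    (is (= 0.0 (discount 49)))))

Green tests then mean the agreed examples behave as specified, which is exactly the accurate progress view described above.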

Design the solution

Once the story has been decomposed into clear and testable specifications, we start creating a design on a whiteboard. The main goal is to create a shared, visible understanding of the solution, so avoid (technical) details to prevent big up-front designs and losing the involvement of the less technical members of the team. You can use whatever format works for your team (e.g. UML), but be sure it is comprehensible by everybody on the team.

The creation of the design, as an effort by the whole team, tends to spark discussion. Instead of relying on the consistency of non-visible mental images in the heads of team members, there is a tangible image shared with everyone.

The whiteboard design will be a good starting point for refinement as the team gains insight during the sprint. The whiteboard should always be visible and within reach of the team during the sprint. Using a whiteboard makes it easy to adapt or complement the design. You’ll notice the team standing around the whiteboard or pointing to it in discussions quite often.

The design can easily be turned into a digital artefact by taking a photo of it. A digital copy can be valuable to anyone wanting to learn the system in the future. The design could also be used in the sprint demo, should the audience be interested in a technical overview.

Conclusion

The team now leaves the sprint planning with a set of functional tests and a whiteboard design. The tests are useful to validate and synchronize on the functional goals. The whiteboard designs are useful to validate and synchronize on the technical goals. The shared understanding of the team is more visible and can be validated throughout the sprint. The team has become more transparent.

It might be a good practice to have the developers write the specifications, and the testers or analysts draw the designs on the board. This provokes more communication, by getting people out of their comfort zones and forcing them to ask more questions.

There are more compelling reasons to implement (or not) something like Specification by Example or to have the team make design overviews. But it also helps the team to stay on the same page when there are visible and testable artefacts to rely on during the sprint.

MixRadio Architecture - Playing with an Eclectic Mix of Services

This is a guest repost by Steve Robbins, Chief Architect at MixRadio.

At MixRadio, we offer a free music streaming service that learns from listening habits to deliver people a personalised radio station, at the single touch of a button. MixRadio marries simplicity with an incredible level of personalization, for a mobile-first approach that will help everybody, not just the avid music fan, enjoy and discover new music. It's as easy as turning on the radio, but you're in control - just one touch of Play Me provides people with their own personal radio station.

The service also offers hundreds of hand-crafted expert and celebrity mixes categorised by genre and mood for each region. You can also create your own artist mix, and mixes can be saved for offline listening during times without signal such as underground travel, as well as reducing data use and costs.

Our apps are currently available on Windows Phone, Windows 8, Nokia Asha phones and the web. We’ve spent years evolving a back-end that we’re incredibly proud of, despite being British! Here's an overview of our back-end architecture.

 

Architecture Overview
Categories: Architecture

How Can Enterprise Architects Drive Business Value the Agile Way?

An Enterprise Architect can have a tough job when it comes to driving value to the business.   With multiple stakeholders, multiple moving parts, and a rapid rate of change, delivering value is tough enough.   But what if you want to accelerate value and maximize business impact?

Enterprise Architects can borrow a few concepts from the Agile world to be much more effective in today’s world.

A Look Back at How Agile Helped Connect Development to Business Impact …

First, let’s take a brief look at traditional development and how it evolved.  Traditionally, IT departments focused on delivering value to the business by shipping big bang projects.   They would plan it, build it, test it, and then release it.   The measure of success was on time, on budget.   

Few projects ever shipped on time.  Few were ever on budget.  And very few ever met the requirements of the business.

Then along came Agile approaches and they changed the game.

One of the most important ideas was a shift away from thick requirements documentation to user stories.  Developers got customers telling stories about what they wanted the future solution to do.  For example, a user story for a sales representative might look like this:

“As a sales rep, I want to see my customer’s account information so that I can identify cross-sell and upsell opportunities.” 

The use of user stories accomplished several things.   First, user stories got the development teams talking to the business users.  Rather than throwing documents back and forth, people started having face-to-face communication to understand the user stories.  Second, user stories helped chunk bigger units of value down into smaller units of value.  Rather than a big bang project where all the value is promised at the end of some long development cycle, a development team could now ship the solution in increments, where each increment was a prioritized set of stories.   The user stories effectively create a shared language for value.

Third, it made it easier to test the delivery of value.  Now the user and the development team could test the solution against the user stories and acceptance criteria.  If the story met acceptance criteria, the user would acknowledge that the value was delivered.  In this way, the user stories created both a validation mechanism and a feedback loop for delivering and acknowledging value.

In the Agile world, bigger stories are called epics, and collections of stories are called themes.  Often a story starts off as an epic until it gets broken down into multiple stories.  What’s important here is that the collections of stories serve as a catalog of potential value.   Specifically, this catalog of stories reflects potential value with real stakeholders.  In this way, Agile helps drive customer focus and customer connection.  It’s really effective stakeholder management in action.

Agile approaches have been used in software projects large and small.  And they’ve forever changed how developers and project managers approach projects.

A Look at How Agile Can Help Enterprise Architecture Accelerate Business Value …

But how does this apply to Enterprise Architects?

As an Enterprise Architect, chances are you are responsible for achieving business outcomes.  You do this by driving business transformation.   The way you achieve business transformation is through driving capability change including business, people, and technical capabilities.

That’s a tall order.   And you need a way to chunk this up and make it meaningful to all the parties involved.

The Power of Scenarios as Units of Value for the Enterprise

This is where scenarios come into play.  Scenarios are a simple way to capture pains, needs and desired outcomes.   You can think of the desired outcome as the future capability vision.   It’s really a story that helps articulate the art of the possible.   More precisely, you can use scenarios to help build empathy with stakeholders for what value will look like, by painting a conceptual scene of the future.

An Enterprise scenario is simply a chunk of organizational change, typically about 3-5 business capabilities, 3-5 people capabilities, and 3-5 technical capabilities.

If that sounds like a lot of theory, let’s step into an example to show what it looks like in practice.

Let’s say you’re in a situation where you need to help a healthcare provider change their business.  

You can come up with a lot of scenarios, but it helps to start with the pains and needs of the business owner.  Otherwise, you might start going through a bunch of scenarios for the patients or for the doctors.  In this case, the business owner would be the Chief Medical Officer or the doctor of doctors.

Scenario: Tele-specialist for Healthcare

If we walk the pains, needs, and desired outcomes of the Chief Medical Officer, we might come up with a scenario that looks something like this, where the CURRENT STATE reflects the current pains and needs, and the FUTURE STATE reflects the desired outcome.

CURRENT STATE

Here is an example of the CURRENT STATE portion of the scenario:

The Chief Medical Officer of Contoso Provider is struggling with increased costs and declining revenues. Costs are rising due to Affordable Care Act regulatory compliance requirements and increasing malpractice insurance premiums. Revenue is declining due to decreasing medical insurance payments per claim.

FUTURE STATE

Here is an example of the FUTURE STATE portion of the scenario:

Doctors can consult with patients, peers, and specialists from anywhere. Contoso Provider's doctors can see more patients, increase the accuracy of first-time diagnoses, and grow revenues.


[Image: the Tele-specialist for Healthcare scenario, showing the CURRENT STATE and FUTURE STATE]

Storyboard for the Future Capability Vision

It helps to be able to picture what the Future Capability Vision might look like.   That’s where storyboarding can come in.  An Enterprise Architect can paint a simple scene of the future with a storyboard that shows the Future Capability Vision in action.  This practice lends itself to whiteboarding, and the beauty of a whiteboard is you can quickly elaborate where you need to, without getting mired in details.

[Image: storyboard of the Future Capability Vision]

As you can see in this example storyboard of the Future Capability Vision, we listed out some business benefits, which we could then drill down into relevant KPIs and value measures.   We’ve also outlined some of the building blocks required for this Future Capability Vision, in the form of business capabilities and technical capabilities.

Now this simple approach accomplishes a lot.   It helps ensure that any technology solution connects back to business drivers and pains that a business decision maker actually cares about.   This gets their fingerprints on the solution concept.   And it creates a simple “flashcard” for value.   If we name the Enterprise scenario well, then we can use it as a handle to get back to the story of a better future that we created with the business.

The obvious thing this does, aside from connecting IT to the business, is it helps the business justify any investment in IT.

And all we did was walk through one Enterprise Scenario.  

But there is a lot more value to be found in the Enterprise.   We can literally explore and chunk up the value in the Enterprise if we take a step back and add another tool to our toolbelt:  the Scenario Chain.

Scenario Chain:  Chaining the Industry Scenarios to Enterprise Scenarios

The Scenario Chain is another powerful conceptual visualization tool.  It helps you quickly map out what’s happening in the marketplace in terms of industry drivers or industry scenarios.  You can then identify potential investment objectives.   These investment objectives lead to patterns of value or patterns of solutions in the Enterprise, which are effectively Enterprise scenarios.   From the Enterprise scenarios, you can then identify relevant usage scenarios.  The usage scenarios effectively represent new ways of working for the employees, or new interaction models with customers, which is effectively a change to your value stream.

[Image: the Scenario Chain, linking industry scenarios to Enterprise scenarios and usage scenarios]

With one simple glance, the Scenario Chain gives you a bird’s-eye view of how you can respond to the changing marketplace and how you can transform your business.   And, by using Enterprise scenarios, you can chunk up the change into meaningful units of value that reflect pains, needs, and desired outcomes for the business.   And, because you have the fingerprints of stakeholders from both business and IT, you’ve effectively created a shared vision for the future that has business impact, a justification for investment, and a pull-through mechanism for additional value, by driving the adoption of the usage scenarios.

Let’s elaborate on adoption and how scenarios can help accelerate business value.

Using Scenarios to Drive Adoption and Accelerate Business Value

Driving adoption is a key way to realize the business value.  If nobody adopts the solution, then that’s what Gartner would call “Value Leakage.”  Value Realization really comes down to governance, measurement, and adoption.

With scenarios at your fingertips, you have a powerful way to articulate value, justify business cases, drive business transformation, and accelerate business value.   The key lies in using the scenarios as a unit of value, and focusing on scenarios as a way to drive adoption and change.

Here are three ways you can use scenarios to drive adoption and accelerate business value:

1.  Accelerate Business Adoption

One of the ways to accelerate business value is to accelerate adoption.    You can use scenarios to help enumerate specific behavior changes that need to happen to drive the adoption.   You can establish metrics and measures around specific behavior changes.   In this way, you make adoption a lot more specific, concrete, intentional, and tangible.

This approach is about doing the right things, faster.

2.  Re-Sequence the Scenarios

Another way to accelerate business value is to re-sequence the scenarios.   If your big bang is way at the end (way, way at the end), no good.  Sprinkle some of your bangs up front.   In fact, a great way to design for change is to build rolling thunder.   Put some of the scenarios up front that will get people excited about the change and directly experiencing the benefits.  Make it real.

The approach is about putting first things first.

3.  Identify Higher Value Scenarios

The third way to accelerate business value is to identify higher-value scenarios.   One of the things that happens along the way is that you start to uncover potential scenarios that you may not have seen before, and these scenarios represent orders of magnitude more value.   This is the space of serendipity.   As you learn more about users and what they value, and stakeholders and what they value, you start to connect more dots between the scenarios you can deliver and the value that can be realized (and, therefore, accelerated).

This approach is about trading up for higher value and more impact.

As you can see, Enterprise Architects can drive business value and accelerate business value realization by using scenarios and storyboarding.   It’s a simple and agile approach for connecting business and IT, and for shaping a more Agile Enterprise.

I’ll share more on this topic in future posts.   Value Realization is an art and a science and I’d like to reduce the gap between the state of the art and the state of the practice.

You Might Also Like

3 Ways to Accelerate Business Value

6 Steps for Enterprise Architecture as Strategy

Cognizant on the Next Generation Enterprise

Simple Enterprise Strategy

The Mission of Enterprise Services

The New Competitive Landscape

What Am I Doing on the Enterprise Strategy Team?

Why Have a Strategy?

Categories: Architecture, Programming

Vert.x with core.async. Handling asynchronous workflows

Xebia Blog - Mon, 08/25/2014 - 12:00

Anyone who has written code that has to coordinate complex asynchronous workflows knows it can be a real pain, especially when you limit yourself to using only callbacks directly. Various tools have arisen to tackle these issues, like Reactive Extensions and JavaScript promises.

Clojure's answer comes in the form of core.async: an implementation of CSP for both Clojure and ClojureScript. In this post I want to demonstrate how powerful core.async is under a variety of circumstances. The context will be writing a Vert.x event handler.

Vert.x is a young, lightweight, polyglot, high-performance, event-driven application platform on top of the JVM. It has an actor-like concurrency model, where the coarse-grained actors (called verticles) can communicate over a distributed event bus. Although Vert.x is still quite young, it's sure to grow into a big player in the future of the reactive web.

Scenarios

The scenario is as follows. Our verticle registers a handler on some address and depends on 3 other verticles.

1. Composition

Imagine the new Mars rover got stuck against some Mars rock and we need to send it instructions to destroy the rock with its inbuilt laser. Also imagine that the controlling software is written with Vert.x. There is a single verticle responsible for handling the necessary steps:

  1. Use the sensor to locate the position of the rock
  2. Use the position to scan the hardness of the rock
  3. Use the hardness to calibrate and fire the laser, and report back the status
  4. Report success or failure to the main caller

As you can see, in each step we need the result of the previous step, meaning composition.
A straightforward callback-based approach would look something like this:

(ns example.verticle
  (:require [vertx.eventbus :as eb]))

(eb/on-message
  "console.laser"
  (fn [instructions]
    (let [reply-msg eb/*current-message*]
      (eb/send "rover.scope" (scope-msg instructions)
        (fn [coords]
          (eb/send "rover.sensor" (sensor-msg coords)
            (fn [data]
              (let [power (calibrate-laser data)]
                (eb/send "rover.laser" (laser-msg power)
                  (fn [status]
                    (eb/reply* reply-msg (parse-status status))))))))))))

A code structure quite typical of composed async functions. Now let's bring in core.async:

(ns example.verticle
  (:refer-clojure :exclude [send])
  (:require [vertx.eventbus :as eb]
            ;; go-loop, alts! and timeout are used in the later examples
            [clojure.core.async :refer [go go-loop chan put! <! alts! timeout]]))

(defn send [addr msg]
  (let [ch (chan 1)]
    (eb/send addr msg #(put! ch %))
    ch))

(eb/on-message
  "console.laser"
  (fn [instructions]
    (go (let [coords (<! (send "rover.scope" (scope-msg instructions)))
              data (<! (send "rover.sensor" (sensor-msg coords)))
              power (calibrate-laser data)
              status (<! (send "rover.laser" (laser-msg power)))]
          (eb/reply (parse-status status))))))

We created our own reusable send function, which returns a channel on which the result of eb/send will be put. Apart from being shorter, the go block now reads like plain sequential code.

2. Concurrent requests

Another thing we might want to do is query different handlers concurrently. Although we could use composition, this is not very performant, as we do not need to wait for the reply from service-A in order to call service-B.

As a concrete example, imagine we need to collect atmospheric data about some geographical area in order to make a weather forecast. The data will include the temperature, humidity and wind speed, which are requested from three different independent services. Once all three asynchronous requests return, we can create a forecast and reply to the main caller. But how do we know when the last callback has fired? We need to keep some memory (mutable state) which is updated when each of the callbacks fires, and process the data when the last one returns.

core.async easily accommodates this scenario without adding extra mutable state for coordination inside your handlers. The state is contained in the channel.

(eb/on-message
  "forecast.report"
  (fn [coords]
    (let [ch (chan 3)]
      (eb/send "temperature.service" coords #(put! ch {:temperature %}))
      (eb/send "humidity.service" coords #(put! ch {:humidity %}))
      (eb/send "wind-speed.service" coords #(put! ch {:wind-speed %}))
      (go (let [data (merge (<! ch) (<! ch) (<! ch))
                forecast (create-forecast data)]
            (eb/reply forecast))))))

3. Fastest response

Sometimes there are multiple services at your disposal providing similar functionality and you just want the fastest one. With just a small adjustment, we can make the previous code work for this scenario as well.

(eb/on-message
  "server.request"
  (fn [msg]
    (let [ch (chan 3)]
      (eb/send "service-A" msg #(put! ch %))
      (eb/send "service-B" msg #(put! ch %))
      (eb/send "service-C" msg #(put! ch %))
      (go (eb/reply (<! ch))))))

We just take the first result from the channel and ignore the other results. After the go block has replied, there are no more takers on the channel. The results from the services that were too late are still put on the channel, but after the request has finished there are no more references to it, and the channel with the results can be garbage-collected.

4. Handling timeouts and choice with alts!

We can create timeout channels that close themselves after a specified amount of time. Closed channels cannot be written to anymore, but any messages in the buffer can still be read. After that, every read will return nil.
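As a minimal sketch of that behaviour (the 1500 ms duration is arbitrary):

(require '[clojure.core.async :refer [go timeout <!]])

(go
  ;; <! parks until the timeout channel closes; once it is closed,
  ;; every subsequent read from it returns nil immediately.
  (<! (timeout 1500))
  (println "1.5 seconds elapsed"))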

One thing core.async provides that most other tools don't is choice. From the examples:

One killer feature for channels over queues is the ability to wait on many channels at the same time (like a socket select). This is done with `alts!!` (ordinary threads) or `alts!` in go blocks.

This, combined with timeout channels, gives us the ability to wait on a channel up to a maximum amount of time before giving up. Adjusting example 2 a bit:

(eb/on-message
  "forecast.report"
  (fn [coords]
    (let [ch (chan)
          t-ch (timeout 3000)]
      (eb/send "temperature.service" coords #(put! ch {:temperature %}))
      (eb/send "humidity.service" coords #(put! ch {:humidity %}))
      (eb/send "wind-speed.service" coords #(put! ch {:wind-speed %}))
      (go-loop [n 3 data {}]
        (if (pos? n)
          ;; alts! returns a [value channel] pair; if the value came from
          ;; the timeout channel, we have waited long enough and give up.
          (let [[result port] (alts! [ch t-ch])]
            (if (= port t-ch)
              (eb/fail 408 "Request timed out")
              (recur (dec n) (merge data result))))
          (eb/reply (create-forecast data)))))))

This will do the same thing as before, but we will wait a total of 3s for the requests to finish; otherwise we reply with a timeout failure. Notice that we did not put a timeout parameter in the vert.x API call of eb/send. Having a first-class timeout channel allows us to coordinate these timeouts much more easily than adding timeout parameters and failure callbacks.

Wrapping up

The above scenarios are clearly simplified to focus on the different workflows, but they should give you an idea of how to start using core.async in Vert.x.

One question that arose for me, and the original motivation for this blog post, is whether core.async can play nicely with Vert.x. Verticles are single-threaded by design, while core.async introduces background threads to dispatch go blocks or state machine callbacks. Since the dispatched go blocks carry the correct message context, the functions eb/send, eb/reply, etc. can be called from these go blocks and all goes well.

There is of course a lot more to core.async than is shown here. But that is a story for another blog.

Docker on a raspberry pi

Xebia Blog - Mon, 08/25/2014 - 07:11

This blog describes how easy it is to use docker in combination with a Raspberry Pi. Because of docker, deploying software to the Raspberry Pi is a piece of cake.

What is a raspberry pi?
The Raspberry Pi is a credit-card sized computer that plugs into your TV and a keyboard. It is a capable little computer which can be used in electronics projects and for many things that your desktop PC does, like spreadsheets, word-processing and games. It also plays high-definition video. A raspberry pi runs Linux, has a 700 MHz ARM processor and 512 MB of internal memory. Last but not least, it only costs around 35 euros.

[Image: a raspberry pi version B]

Because of its price, size and performance, the raspberry pi is a step towards the 'Internet of Things' principle. With a raspberry pi it is possible to control and connect everything to everything. Take, for instance, my home project: a raspberry pi controlling a robot.

[Image: Raspberry Pi in action]

What is docker?
Docker is an open platform for developers and sysadmins to build, ship and run distributed applications. With Docker, developers can build any app in any language using any toolchain. “Dockerized” apps are completely portable and can run anywhere. A dockerized app contains the application, its environment, dependencies and even the OS.

Why combine docker and raspberry pi?
It is nice to work with a Raspberry Pi because it is a great platform to connect devices. Deploying anything, however, is kind of a pain. With dockerized apps we can develop and test our application on our own home machine; once it works, we can deploy it to the raspberry pi. We can do this without any pain or worries about corruption of the underlying operating system and tools. And last but not least, you can easily undo your tryouts.

What is better than I expected
First of all, it was relatively easy to install docker on the raspberry pi. When you use the Arch Linux operating system, docker is already part of the package manager! I expected to have to do a lot of cross-compiling of the docker application, because the raspberry pi uses an ARM architecture (instead of the default x86 architecture), but someone had already done this for me!

Second of all, there are a bunch of ready-to-use docker images especially for the raspberry pi. To run dockerized applications on the raspberry pi you depend on base images, and these base images must also support the ARM architecture. For each situation there is an image, whether you want to run node.js, python, ruby or just java.
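For example, the ARM base image used in step 4 below can be pulled directly on the pi:

docker pull resin/rpi-raspbian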

What worried me most was the performance of running virtualized software on a raspberry pi. But it all went well and I did not notice any performance reduction. Docker requires far fewer resources than running virtual machines: a docker process runs straight on the host, giving native CPU performance, with only a small overhead for memory and network.

What I don't like about docker on a raspberry pi
The docker slogan to 'build, ship and run any app anywhere' is not entirely valid. You cannot develop your Dockerfile on your local machine and deploy the same application directly to your raspberry pi. This is because each dockerfile starts from a core image: for running your application on your local machine you need an x86-based docker image, while for your raspberry pi you need an ARM-based image. That is a pity, because it means you can only build your docker image for your Raspberry Pi on the raspberry pi itself, which is slow.

I tried several things.

  1. I used the emulator QEMU to emulate the raspberry pi on a fast MacBook. But, because of the inefficiency of the emulation, it is just as slow as building your dockerfile on a raspberry pi.
  2. I tried cross-compiling. This wasn't possible, because the commands in your dockerfile are replayed on a running image and the running raspberry-pi image can only be run on ... a raspberry pi.

How to run a simple node.js application with docker on a raspberry pi  

Step 1: Installing Arch Linux
The first step is to install arch linux on an SD card for the raspberry pi. The preferred OS for the raspberry pi is a debian-based OS, Raspbian, which is nicely configured to work with a raspberry pi. But in this case Arch Linux is better, because we use the OS only to run docker on it; Arch Linux is a much smaller and more barebones OS. The best way is to follow the steps at http://archlinuxarm.org/platforms/armv6/raspberry-pi. In my case, I use version 3.12.20-4-ARCH. In addition to the tutorial:

  1. After downloading the image, install it on an SD card by running the command:
    sudo dd if=path_of_your_image.img of=/dev/diskn bs=1m
  2. If there is no HDMI output at boot, remove the config.txt from the SD card. It will magically work!
  3. Login using root / root.
  4. Arch Linux will use 2 GB by default. If you have an SD card with a higher capacity, you can resize it using the steps at http://gleenders.blogspot.nl/2014/03/raspberry-pi-resizing-sd-card-root.html

Step 2: Installing a wifi dongle
In my case I wanted to connect a wireless dongle to the raspberry pi, by following these simple steps:

  1. Install the wireless tools:
        pacman -Syu
        pacman -S wireless_tools
        
  2. Setup the configuration, by running:
    wifi-menu
  3. Autostart the wifi with:
        netctl list
        netctl enable wlan0-[name]
    

Because the raspberry pi is now connected to the network you are able to SSH to it.

Step 3: Installing docker
The actual install of docker is relatively easy. There is a docker version compatible with the ARM processor that is used within the Raspberry Pi. This docker is part of the package manager of Arch Linux; the packaged version is 1.0.0, while at the time of writing this blog docker has released version 1.1.2. The features missing from the packaged version are:

  1. Enhanced security for the LXC driver.
  2. .dockerignore support.
  3. Pause containers during docker commit.
  4. Add --tail to docker logs.

You install docker and start it as a service on system boot with the commands:

pacman -S docker
systemctl enable docker
[Image: installing docker with pacman]

Step 4: Run a single nodejs application
After we've installed docker on the raspberry pi, we want to run a simple nodejs application. The application we will deploy is inspired by the nodejs web app in the tutorial on the docker website: https://github.com/enokd/docker-node-hello/. This nodejs application prints a "hello world" in the web browser. We have to change the dockerfile to:

# DOCKER-VERSION 1.0.0
FROM resin/rpi-raspbian

# install required packages
RUN apt-get update
RUN apt-get install -y wget dialog

# install nodejs
RUN wget http://node-arm.herokuapp.com/node_latest_armhf.deb
RUN dpkg -i node_latest_armhf.deb

COPY . /src
RUN cd /src; npm install

# run application
EXPOSE 8080
CMD ["node", "/src/index.js"]

And it works!

[Image: the webpage that runs in nodejs on a docker image on a raspberry pi]

 

After just four little steps, you are able to use docker on your raspberry pi! Good luck!

 

C4 model poster

Coding the Architecture - Simon Brown - Sun, 08/24/2014 - 23:20

A few people have recently asked me for a poster/cheat sheet/quick reference of the C4 model that I use for communicating and diagramming software systems. You may have seen an old copy floating around the blog, but I've made a few updates and you can grab the new version from http://static.codingthearchitecture.com/c4.pdf (PDF, A3 size).

Software architecture and the C4 model

Enjoy!

Categories: Architecture