
Software Development Blogs: Programming, Software Testing, Agile Project Management


Architecture

Sketching API Connections

Coding the Architecture - Simon Brown - Tue, 10/28/2014 - 14:52

Daniel Bryant, Simon and I recently had a discussion about how to represent system communication with external APIs. The requirement to integrate with external APIs is now extremely common, but it's not immediately obvious how to show them clearly in architectural diagrams.


How to Represent an External System?

The first thing we discussed was what symbol to use for a system supplying an API. Traditionally, UML has used the Actor (stick man) symbol to represent a "user or any other system that interacts with the subject" (UML Superstructure Specification, v2.1.2). Therefore a system providing an API may look like this:


[Figure: the UML Actor symbol]


I've found that this symbol tends to confuse those who aren't well versed in UML, as most people assume that the Actor symbol always represents a *person* rather than a system. Sometimes this is stereotyped to make it more obvious, e.g.:


[Figure: an Actor symbol with a stereotype]


However, the symbol is visually strong and tends to overpower the stereotype. Therefore I prefer to use a stereotyped box for an external system supplying an API. Let's compare two context diagrams, one using boxes and one using stick actors.


[Figure: two context diagrams, one using stereotyped boxes and one using stick actors]

In which diagram is it more obvious which elements are systems and which are people?

Note that ArchiMate has a specific symbol for Application Service that can be used to represent an API:


[Figure: Application Service notation from the Open Group's ArchiMate 2.1 Specification]


An API or the System that Supplies it?

Whatever symbol we choose, what we've done is to show the *system* rather than the actual API. The API is a definition of a service provided by the system in question. How should we provide more details about the API?

There are a number of ways we could do this, but my preference is to give details of the API on the connector (the line connecting two elements/boxes). In C4, the guidelines for a container diagram include listing protocol information on the connector, and an API can be viewed as the layer above the protocol. For example:


[Figure: a container diagram with API and protocol details on the connector]


Multiple APIs per External System

Many API providers supply multiple services/APIs. (I'm not referring to different operations within an API, but to multiple sets of operations in different APIs, which may even use different underlying protocols.) For example, a financial marketplace may have APIs that do the following:

  1. Allow a bulk, batch download of static data (such as details of companies listed on a stock market) via XML over HTTP.
  2. Supply real-time, low-latency updates of market prices via bespoke messages over UDP.
  3. Allow entry of trades via industry-standard FpML over a queuing system.
  4. Supply a bulk, batch download of trades for end-of-day reconciliation via FpML over HTTP.

Two of the services use the same protocol (XML over HTTP) but have very different content and usage. One of the APIs (market data) constantly supplies information after the user subscribes, and the last service involves the user supplying all the information with no acknowledgment (although it should reconcile at end of day).
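
To make the distinction concrete, here is a minimal Java sketch of how a consumer might see these as four entirely separate APIs rather than operations on one API. All names here are hypothetical, invented for illustration only:

import java.time.LocalDate;
import java.util.List;

// 1. Bulk, batch download of static data (XML over HTTP).
interface StaticDataApi {
    List<String> downloadListedCompanies(); // e.g. parsed from an XML payload
}

// 2. Real-time, low-latency market prices (bespoke messages over UDP).
interface MarketDataApi {
    void subscribe(String instrument, PriceListener listener); // pushed after subscription
}

interface PriceListener {
    void onPrice(String instrument, double price);
}

// 3. Trade entry (FpML over a queuing system); no acknowledgment.
interface TradeEntryApi {
    void submitTrade(String fpmlDocument);
}

// 4. End-of-day reconciliation download (FpML over HTTP).
interface ReconciliationApi {
    List<String> downloadTradesFor(LocalDate tradeDate);
}

Each interface could sit on a different connector in the diagram, which is exactly why a single unlabeled line to the marketplace loses information.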

There are multiple ways of showing this. We could:

  1. Have a single service element, list the APIs on it and have all components linking to it.
  2. Show each service/API as a separate box and connect the components that use the individual service to the relevant box.
  3. Show a single service element with multiple connections. Each connection is labeled and represents an API.
  4. Use a Port and Connector style notation to represent each API from the service provider. Provide a key for the ports.
  5. Use a UML style 'cup and ball' notation to define interfaces and their usage.

Some examples are below:


A Single Service element and simple description


[Figure: a single service element with a simple description]


In the above diagram the containers are stating what they are using but contain no information about how to use the APIs. We don't know if it is a single API (with different operations) or anything about the mechanisms used to transport the data. This isn't very useful for anyone implementing a solution or resolving operational issues.


Single Service box with descriptive connectors


[Figure: a single service box with descriptive connectors]

In this diagram there is a single service box with descriptive connectors. It shows all the information, so it is much more useful as a diagnostic or implementation tool. However, it does look quite crowded.


Services/APIs shown as separate boxes


[Figure: the services/APIs shown as separate boxes]


Here the external system has its services/APIs shown as separate boxes. This contains all the information, but might be mistaken for a definition of the internal structure of the external system. We want to show the services it provides, even though we know nothing about its internal structure.


Using Ports to Represent APIs


[Figure: the services/APIs shown as ports on the external system, with a separate key]


In the above diagram the services/APIs are shown as 'ports' on the external system and the details have been moved into a separate key/table. This is less likely to be mistaken as showing any internal structure of the external service. (Note that I could also have shown outgoing ports from the Brokerage System.)


UML Interfaces


[Figure: UML-style provided and required interfaces ('cup and ball')]

This final diagram uses a UML-style interface provider and requirer. This is a clean diagram, but it requires the reader to know what the cup-and-ball notation means (although I could have explained this in the key).


Conclusion

Any of these solutions could be appropriate depending on the complexity of the API set you are trying to represent. I'd suggest starting with a simple representation (i.e. fully labeled connections) and moving to a more complex one if needed BUT remember to use a key to explain any elements you use!

Categories: Architecture

Software architecture vs code (DevDay 2014)

Coding the Architecture - Simon Brown - Mon, 10/27/2014 - 19:10

I had the pleasure of delivering the closing keynote at the DevDay 2014 conference in Krakow, Poland last month. It's a one day event, with a bias towards the .NET platform, and one of my favourite conferences from this year. Beautiful city, fantastic crowd and top-notch hospitality. If you get the chance to attend next year, do it!

If you missed it, you can find videos of the talks to watch online. Here's mine called Software architecture vs code. It covers the conflict between software architecture and code, how we can resolve this, the benefits of doing so, fishing and a call for donations to charity every time you write public class without thinking. Enjoy!

p.s. I've written about some of these same topics on the blog ... for example, Modularity and testability and Software architecture vs code. My Structurizr project is starting to put some of this into practice too.

Categories: Architecture

Microservices in Production - the Good, the Bad, the it Works

This is a guest repost written by Andrew Harmel-Law on his real-world experiences with Microservices. The original article can be found here.

It’s reached the point where it’s even a cliche to state “there’s a lot written about Microservices these days.” But despite this, here’s another post on the topic. Why does the internet need another? Please bear with me…

We’re doing Microservices. We’re doing it based on a mash-up of some “Netflix Cloud” (as it seems to be becoming known - we just call it “Archaius / Hystrix”), a gloop of Codahale Metrics, a splash of Spring Boot, and a lot of Camel, gluing everything together. We’ve even found time to make a bit of Open Source ourselves - archaius-spring-adapter - and also contribute some stuff back.

Let's be clear; when I say we’re “doing Microservices”, I mean we’ve got some running; today; under load; in our Production environment. And they’re running nicely. We’ve also got a lot more coming down the dev-pipe.

All the time we’ve been crafting these we’ve been doing our homework. We’ve followed the great debate, some contributions to which came from within Capgemini itself, and other less-high-profile contributions from our very own manager. It’s been clear for a while that, while there is a lot of heat and light generated in this debate, there are also a lot of valid inputs that we should be bearing in mind.

Despite this, the Microservices architectural style is still definitely in the honeymoon period, which translates personally into the following: whenever I see a new post on the topic from a Developer I respect, my heart sinks a little as I open it and read… Have they discovered the fatal flaw in all of this that everyone else has so far missed? Have they put their finger on the unique aspect that means 99% of us will never realise the benefits of this new approach and that we’re all off on a wild goose chase? Have they proven that Netflix really are unicorns and that the rest of us are just dreaming?

Despite all this we’re persisting. Despite questioning every decision we make in this area far more than we normally would, Microservices still feel right to us for a whole host of reasons. In the rest of this post I hope to point out some of the subtleties which might have eluded you as you’ve researched and fiddled, and to highlight some of the old “givens” which might not be “givens” any more.

The Good
Categories: Architecture

How to create Java microservices with Dropwizard

Xebia Blog - Mon, 10/27/2014 - 15:10

On Tuesday October 14th the Amsterdam Middleware Meetup experimented with Dropwizard. The idea was to find out what this technology is about, where it could be useful and what the alternatives are. So below I’ll give you an overview of Dropwizard and compare it to Spring Boot.
The Dropwizard website claims:

Dropwizard pulls together stable, mature libraries from the Java ecosystem into a simple, light-weight package that lets you focus on getting things done.

I’ll discuss each of these claims below.

Stable and mature
Dropwizard uses Jetty, Jersey, Jackson and Metrics as its most important frameworks, but also a host of other stuff like Guava, Liquibase and Joda Time. The latest Dropwizard release is version 0.7.1, released on June 20th 2014. It depends on these versions of some core libraries:
Jetty - 9.2.3.v20140905 - September 2014
Jackson - 2.4.1 - June 2014
Jersey - 2.11 - July 2014

The list shows that stable != out-of-date, which is fine of course. The versions of the core libraries used are recent, though. I guess ‘stable’ means libraries with a long history.

Simple
The components of a Dropwizard application are shown below (taken from the tutorial at http://dropwizard.io/getting-started.html); a sketch of how they fit together in code follows the list:
[Figure: Dropwizard components overview]

  1. Application (HelloWorldApplication.java): the application's main method, responsible for startup.
  2. Configuration (HelloWorldConfiguration.java): sets configuration for an environment; this is where you may set hostnames for systems the application depends on, or usernames.
  3. Data object (Saying.java): the representation that the resource returns.
  4. Resource (HelloWorldResource.java): the service implementation entry point.
  5. Health check (TemplateHealthCheck.java): runtime tests that show whether the application still works.
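
Here is a minimal sketch of how these pieces fit together, based on the getting-started tutorial (details approximate for the 0.7.x API):

import io.dropwizard.Application;
import io.dropwizard.setup.Bootstrap;
import io.dropwizard.setup.Environment;

public class HelloWorldApplication extends Application<HelloWorldConfiguration> {

    // 1. The application's main method, responsible for startup.
    public static void main(String[] args) throws Exception {
        new HelloWorldApplication().run(args);
    }

    @Override
    public void initialize(Bootstrap<HelloWorldConfiguration> bootstrap) {
        // bundles (assets, migrations, etc.) would be registered here
    }

    @Override
    public void run(HelloWorldConfiguration configuration, Environment environment) {
        // 4. Register the resource: the service implementation entry point.
        environment.jersey().register(new HelloWorldResource(
                configuration.getTemplate(), configuration.getDefaultName()));
        // 5. Register the health check: a runtime test that shows the app still works.
        environment.healthChecks().register("template",
                new TemplateHealthCheck(configuration.getTemplate()));
    }
}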

Light weight
We did some experiments trying to answer the question whether Dropwizard applications are light weight. The table below summarizes some of the sizes of deployments and tools.
Tomcat installation: 14 MB
Tomcat lib folder only: 7 MB
Jetty installation: 14.6 MB
Jetty inside the Dropwizard jar: 5.4 MB
Dropwizard tutorial example: 10 MB
Dropwizard extended example: 20 MB
Dropwizard Hibernate classes in package: 5 MB

A Tomcat or Jetty installation takes about 14 MB, but if you count only the lib folder the size goes down to about 7 MB. The Jetty folder in Dropwizard, however, is only 5.4 MB. Apparently Dropwizard managed to strip away some code you don’t really need (or it is packaged somewhere else; we didn’t look into that).
Building the tutorial results in a 10 MB jar, so if you would run a webapp in its own Tomcat container, switching to Dropwizard saves quite a bit. On the other hand, deployment size isn’t all that important if we’re still talking < 50 MB.
Compared to your default Weblogic install (513 MB, Weblogic-only on OSX) however, savings are humongous (but this is also true when you compare Weblogic to Tomcat or Jetty).

Productivity
We tried to run the build for the tutorial application (dropwizard-example in the dropwizard project on GitHub). This works fine and takes about 8 seconds, using mocks for external connections. One option to explore would be to run tests against a deployed application. What we’re used to is that deploying an application for testing takes lots of time and resources, but starting a Dropwizard app is quite cheap. Therefore it would be possible to run an integration test of services at the end of a build. This would be quite hard to do with e.g. Weblogic or Websphere.
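
For instance, the dropwizard-testing module provides a JUnit rule that boots the full application (embedded Jetty included) for the duration of a test class. A minimal sketch, reusing the tutorial's class names and assuming a hello-world.yml config file on the test classpath:

import io.dropwizard.testing.junit.DropwizardAppRule;
import org.junit.ClassRule;
import org.junit.Test;

import javax.ws.rs.client.ClientBuilder;

import static org.junit.Assert.assertEquals;

public class HelloWorldIntegrationTest {

    // Starts the real application once for this test class, on a local port.
    @ClassRule
    public static final DropwizardAppRule<HelloWorldConfiguration> APP =
            new DropwizardAppRule<>(HelloWorldApplication.class, "hello-world.yml");

    @Test
    public void helloWorldEndpointResponds() {
        // Hit the running service over HTTP, just like a real client would.
        int status = ClientBuilder.newClient()
                .target("http://localhost:" + APP.getLocalPort() + "/hello-world")
                .request()
                .get()
                .getStatus();
        assertEquals(200, status);
    }
}

Because startup takes seconds rather than minutes, a test like this can comfortably run at the end of every build.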

Spring boot
Spring Boot is interesting, as is the discussion around the differences between Spring Boot and Dropwizard. See https://groups.google.com/forum/#!topic/dropwizard-user/vH1h2PgC8bU

The official Spring Boot website says: Spring Boot makes it easy to create stand-alone, production-grade Spring based Applications that you can "just run". We take an opinionated view of the Spring platform and third-party libraries so you can get started with minimum fuss. Most Spring Boot applications need very little Spring configuration.
It’s good to see a platform change according to new insights, but still, I remember Rod Johnson saying some ten years ago that J2EE was bloated and complex and that Spring was the answer. Now it seems we need Spring Boot to make Spring simple? Or is it just that we don’t need application servers anymore to divide resources among processes?

Dropwizard and Docker
Finally we experimented with running Dropwizard in a Docker container. This can be done with limited effort because Dropwizard applications have such a small number of dependencies. Thomas Kruitbosch will report on this later.

References
Spring Boot: http://projects.spring.io/spring-boot/
Dropwizard: http://dropwizard.io/

100 Top Agile Blogs

Luis Goncalves has put together a list called the 100 Top Agile Blogs:

If you don't know Luis, he lives and breathes driving adoption of Agile practices.

Luis is also an Agile Coach, Co-Author, Speaker, and Blogger.  He is also the co-founder of a MeetUp group called High Performing Teams, and he is a certified Scrum Master and Product Owner.

Here is a preview of the list of top 100 Agile Blogs:

[Image: preview of the list]

For the rest of the list, check out 100 Top Agile Blogs.

Lists like these are a great way to discover blogs you may not be aware of.  

While there will be a bunch of blogs you already know, chances are, with that many at a glance, there will be at least a few new ones you can add to your reading list.

Categories: Architecture, Programming

Big changes

Gridshore - Sun, 10/26/2014 - 08:27

The first 10 years of my career I worked as a consultant for Capgemini and Accenture. I learned a lot in that time. One of the things I learned was that I wanted something else. I wanted to do something with more impact and more responsibility, together with people who wanted to do challenging projects. Not purely to climb the career ladder, but because they like doing cool stuff. Therefore I left Accenture to become part of a company called JTeam. That was over 6 years ago.

I started as Chief Architect at JTeam. The goal was to become a leader to the other architects and to create a team together with Bram. At that time I was lucky that Allard joined me. We share a lot of ideas, which makes it easier to set goals and accomplish them. I got to know some very good people at JTeam; too bad that some of them left, but that is life.

After a few years bigger changes took place. Leonard left and Steven took over, and the shift to a company that needed to grow started. We took over two companies (Funk and Neteffect), so we now had all disciplines of software development available, from front-end to operations. As the company grew some things had to change, and I got more involved in arranging things like internships, tech events, partnerships and human resource management.

We moved into a bigger building and had better opportunities. One of those opportunities was a search solution created by Shay Banon. Then Steven left: together with Shay he founded Elasticsearch. We were acquired by Trifork. In this change we lost most of our search expertise, because all of our search people joined the Elasticsearch initiative. Someone had to pick up search at Trifork, and that was me, together with Bram.

For over 2 years I invested a lot of time in learning, mainly about Elasticsearch. I created a number of workshops/trainings and got involved with multiple customers that needed search. I have given trainings at a number of customers, to groups varying between 2 and 15 people. In general they were all really pleased with them.

Having so much focus for a while gave me a lot of time to think. I did not need to think about next steps for the company; I just needed to get more knowledgeable about Elasticsearch. In that time I started out on a journey to find out what I wanted. I talked to my management about it and thought about it a lot myself. Then, right before the summer holiday, I had dinner with two people I know through the NLJUG, Hans and Bert. We had a very nice talk, and in the end they offered me an opportunity that I really had to think hard about. It was really interesting: a challenge, not really a technical challenge, but more an experience that is hard to find. During the summer holiday I convinced myself this was a very interesting direction, and I took the next step.

I had a lunch meeting with my soon-to-be business partner Sander. After around 30 minutes it already felt good. I really feel the energy of creating something new; I feel inspired again. This is the feeling I had been missing for a while. In September we were told that Bram was leaving Trifork. Since he is the person who got me into JTeam back in the day, it felt weird. I understand his reasons to go out and try to start something new. Bram leaving resulted in a vacancy for a CTO, and the management team decided to approach Allard for this role. This was a surprise to me, but a very nice opportunity for Allard, and I know he is going to do a good job. At the end of September, Sander and I presented the draft business plan to the board of Luminis. That afternoon hands were shaken. It was then that I made the final call and decided to resign from my job at Trifork and take this new opportunity at Luminis.

I feel sad about leaving some people behind. I am going to miss the morning talks in the car with Allard about everything related to the company. I am going to miss doing projects with Roberto (we are a hell of a team). I am going to miss Byron for his capabilities (you make me feel proud that I guided your first steps within Trifork). I am going to miss chasing customers with Henk (we did a good job this past year), and I am going to miss Daphne and the after-lunch walks. To all of you and all the others at Trifork: it is a small world …


Together with Sander, and with the help of all the others at Luminis, we are going to start Luminis Amsterdam. This is going to be a challenge for me, but together with Sander I feel we are going to make it happen. I feel confident that the big changes to come will be good changes.

The post Big changes appeared first on Gridshore.

Categories: Architecture, Programming

Stuff The Internet Says On Scalability For October 24th, 2014

Hey, it's HighScalability time:


This is an ultrasound powered brain implant! (65nm GP CMOS technology, high speed, low power (100 µW))
  • 70: percentage of the world's transactions processed using COBOL.
  • Quotable Quotes:
    • John Siracusa: Apple has shown that it wants to succeed more than it fears being seen as a follower.
    • @Dries: "99% of Warren Buffett's wealth was built after his 50th birthday."
    • @Pinboard: It is insane to run a bookmarking site on AWS at any kind of scale. Unless you are competing with me, in which case it’s a great idea—do it!
    • @dvellante: I sound like a broken record but AWS has the scale to make infrastructure outsourcing marginal costs track SW curve 
    • @BrentO: LOL RT @SQLPerfTips: "guess which problem you are more likely to have - needing joins, or scaling beyond facebook?"
    • @astorrs: Legacy systems? Yes they're still relevant. ~20x the number of transactions as Google searches @IBM #DOES14 
    • @SoberBuildEng: "It was all the Agile guys' fault at the beginning.Y'know, if the toilet overflowed, it was 'What, are those Agile guys in there?!'" #DOES14
    • @cshl1: #DOES14  @netflix "$1.8M revenue / employee" << folks, this is an amazing number
    • Isaac Asimov: Probably more inhibiting than anything else is a feeling of responsibility. The great ideas of the ages have come from people who weren’t paid to have great ideas, but were paid to be teachers or patent clerks or petty officials, or were not paid at all. The great ideas came as side issues.

  • With Fabric, can Twitter mend the broken threads of developer trust? A good start would be removing 3rd party client user limit caps. Not sure a kit of many colors will do it.

  • Not only do I wish I had said this, I wish I had even almost thought it. tjradcliffe: I distinguish between two types of puzzles: human-made (which I call puzzles) and everything else (which I call problems.) In those terms, I hate puzzles and love problems. Puzzles are contrived by humans and are generally as much psychology problems as anything else. They basically require you to think like the human who created them, and they have bizarre and arbitrary constraints that are totally unlike the real world, where, as Feyerabend told us, "Anything goes."

  • David Rosenthal with a great look at Facebook's Warm Storage: 9 [BLOB] types have dropped by 2 orders of magnitude within 8 months...the vast majority of the BLOBs generate I/O rates at least 2 orders of magnitude less than recently generated BLOBs...Within a data center it uses erasure coding...Between data centers it uses XOR coding...When fully deployed, this will save 87PB of storage...heterogeneity as a way of avoiding correlated failures.

  • Gene Tene on whether it's a CPU-bound future: I don't think CPU speed is a problem. The CPUs and main RAM channels are still (by far) the highest performing parts of our systems. For example, yes, you can move ~10-20Gbps over various links today (wired or wifi, "disk" (ssd) or network), but a single Xeon chip today can sustain well over 10x that bandwidth in random access to DRAM. A single chip has more than enough CPU bandwidth to stream through that data, too. E.g. a single current Haswell core can move more than that 10-20Gbps in/out of its cache levels. And even relatively low-end chips (e.g. laptops) will have 4 or more of these cores on a single chip these days. < BTW, a great thread if you are interested in latency issues.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

The Future of IT Leaders

I’ll need to elaborate on this at some point, to share what I’ve experienced across lots of businesses large and small, as well as some of the biggest businesses on the planet, as they transform themselves for the digital economy.

Meanwhile, here is an interesting read in CIO Straight Talk magazine.

In their words, "CIO Straight Talk is a series of "straight talking" articles from senior IT executives and leading companies and government and nonprofit organizations."

This first edition is focused on learning, failing and learning in the Second Machine Age, and features two non-practitioner experts on current topics:

“Andrew McAfee, co-author of the New York Times bestseller The Second Machine Age, cofounder of MIT’s Initiative on the Digital Economy and Principal Research Scientist at MIT Sloan School of Management, talks about ‘The CIO’s role in the enterprise of the future.’ Says McAfee: ‘The overall trend is that companies of all stripes will need, proportionately, many fewer people in IT. Those who remain will be very highly valued, very highly skilled, very important… Enterprises are going to need someone to help them navigate the second machine age… I think that if the CIO plays her cards right, this can absolutely be her role in the enterprise.’”

“Michelle Gallen, the CEO of Shhmooze, a social networking start-up, talks about failure, not to be confused with Failure Lite – ‘I failed. How nice. I learned so much’ – often hailed breezily by management experts as something everyone should experience and every company should encourage. Real failure, according to this serial entrepreneur, isn’t pretty. Says Gallen: ‘I don’t think you learn without failing… In the start-up world, innovation is the ability to take an idea and turn it into an invoice. Lots of larger business organizations also rely on cash flow to keep them alive, and therefore innovation has to be monetized. If you’re Apple or Microsoft, you’ve got a war chest, and you can actually allow failure. A lot of companies can’t actually afford it. It’s quite an expensive hobby, failing.’”

So there you have it -- failure is an expensive hobby and the few IT leaders left in organizations will be very highly valued, very highly skilled, and very important.

There’s more to the story and I’ll share what I’ve learned over the past few years helping companies cross the Cloud chasm and accelerate their digital transformation.

Categories: Architecture, Programming

Software architecture sketching in Iceland

Coding the Architecture - Simon Brown - Thu, 10/23/2014 - 10:46

I'll be in Iceland next month for the Agile Iceland 2014 conference, which I'm really looking forward to as everybody tells me that Iceland is a fantastic country to visit. While in Iceland, I'll also be running my 1-day software architecture sketching workshop on the 6th of November. If you're interested in learning how to communicate the design of your software in a simple yet effective way without using lots of complex UML diagrams, please do join me. Everybody who attends will get a copy of my Software Architecture for Developers ebook too. :-)

Categories: Architecture

Paper: Actor Model of Computation: Scalable Robust Information Systems

With Reactive Systems becoming the new old hotness, it will help to have a thorough grounding in the Actor Model. Here's a good start. Carl Hewitt in Actor Model of Computation: Scalable Robust Information Systems gives a very thorough and relatively concise explanation of the Actor model.

Here's the abstract.

The Actor model is a mathematical theory that treats "Actors" as the universal primitives of concurrent digital computation. The model has been used both as a framework for a theoretical understanding of concurrency, and as the theoretical basis for several practical implementations of concurrent systems. Unlike previous models of computation, the Actor model was inspired by physical laws. It was also influenced by the programming languages Lisp, Simula 67 and Smalltalk-72, as well as ideas for Petri Nets, capability-based systems and packet switching. The advent of massive concurrency through client-cloud computing and many-core computer architectures has galvanized interest in the Actor model.


Actor technology will see significant application for integrating all kinds of digital information for individuals, groups, and organizations so their information usefully links together. Information integration needs to make use of the following information system principles:
    * Persistence: Information is collected and indexed.
    * Concurrency: Work proceeds interactively and concurrently, overlapping in time.
    * Quasi-commutativity: Information can be used regardless of whether it initiates new work or becomes relevant to ongoing work.
    * Sponsorship: Sponsors provide resources for computation, i.e., processing, storage, and communications.
    * Pluralism: Information is heterogeneous, overlapping and often inconsistent.
    * Provenance: The provenance of information is carefully tracked and recorded.

The Actor Model is intended to provide a foundation for inconsistency robust information integration.
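
To ground the idea, here is a toy sketch in Java of the core Actor mechanics (an illustration only, not Hewitt's formal model and not a production library): each actor owns private state and a mailbox, processes one message at a time, and interacts with the world only through asynchronous sends.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A toy actor: private state, a mailbox, message-at-a-time processing.
class CounterActor {
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    private int count = 0; // touched only by the actor's own thread

    CounterActor() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String message = mailbox.take(); // block until a message arrives
                    if ("increment".equals(message)) count++;
                    else if ("print".equals(message)) System.out.println("count = " + count);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Asynchronous send: returns immediately, never waits for processing.
    void send(String message) {
        mailbox.add(message);
    }
}

Sending a CounterActor a stream of "increment" messages from many threads is safe without locks, because all state changes happen sequentially inside the actor.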

Categories: Architecture

Think a Series of Sprints, Not Marathons

When you drive business change and digital initiatives with Cloud, Mobile, Social, and Big Data (and Internet of Things), successful businesses think a series of sprints, not marathons.

Successful businesses go digital by transforming their customer experiences, their employee experiences, and their back-office experiences through rapid prototyping, building proofs-of-concept, testing pilots, and going to production.  It’s a fast cycle of prototype –> POC –> pilot –> production.

These short cycles create rapid learning loops, build momentum, and help adapt for change.

In the book, Leading Digital: Turning Technology into Business Transformation, George Westerman, Didier Bonnet, and Andrew McAfee, share some of their lessons learned in driving digital initiatives and agile transformation.

The Digital World Moves Quickly

Avoid Big Up Front Design.  Whenever there is a big lag time between designing it, developing it, and using it, you’re introducing more risk.  You’re breaking feedback loops.  You’re falling into the pit of analysis paralysis.   Focus on “just enough design” so that you can test what works and what doesn’t, and respond accordingly.

Via Leading Digital:

“The digital world moves quickly.  The rapid pace of technology innovation today does not lend itself to multiyear planning and waterfall development methods common in the ERP era.  Markets change, new technologies become mainstream, and disruptive entrants begin courting your customers.  Your roadmap will need to be nimble enough to recognize these changes, adapt for them, and course-correct.”

Keep a Vision in Mind and Build on Success Along the Way

Hold on to the vision and use that to guide you as you test your ideas and implement them, without getting bogged down.

Via Leading Digital:

“To design an agile transformation, borrow an approach that has become common among today's leading software companies.  Keep people committed to the end goal, but pace your initiatives as short sprints of effort.  Create prototype solutions, and experiment with new technologies or approaches.  Evaluate the results, and incorporate the results into your evolving roadmap.  Adam Brotman, Starbucks CDO, explained the iterative process: 'We didn't have all the answers, but we started thinking about other things we could do ... I think it worked not to go too far, too fast, but to keep a vision in mind and keep building on success along the way.'”

Test Ideas, Save Time, Adapt to Changes

Short cycle times help you respond to market change and adapt as you learn what works and what doesn’t.

Via Leading Digital:

“The test-and-learn approach will require some new ways of working in its own right, but it enjoys some distinct advantages.  By marketing ideas quickly before they go to scale, this approach saves time and money.  Its short cycle times also make it more adaptive to external changes.  Finally, it enables your transformation to sustain momentum through small, incremental successes, rather than the big-bang approach of long-term programs.”

When it comes to your digital strategy and driving business transformation, drive your business change the agile way.

You Might Also Like

10 Ways to Make Agile Design More Effective

Building Better Business Cases for Digital Initiatives

Cloud Changes the Game from Deployment to Adoption

How Digital is Changing Physical Experiences

McKinsey on Unleashing the Value of Big Data Analytics

Categories: Architecture, Programming

Comparing solutions

Coding the Architecture - Simon Brown - Wed, 10/22/2014 - 14:38

Here's a little snippet that my class really picked up on yesterday. During the training course, we get people into groups and ask them to design a solution based upon some simple requirements. The deliverable is "one or more diagrams to describe your solution". Aside from answering a few questions about the business domain and the environment, that's pretty much all the guidance that groups get.

As you can probably imagine, the resulting diagrams are all very different. Some are very high-level, others very low-level. Some show static structure, others show runtime and behavioural views. Some show technology, others don't. Without a consistent approach, these differing diagrams make it hard for people to understand the solutions being presented to them. But furthermore, the differing diagrams make it really hard to compare solutions too.

As I've said before, I don't actually teach people to draw pictures. What I do instead is to teach people how to think about, and therefore describe, their software using a simple set of abstractions. This is my C4 model. With these abstractions in place, groups then redraw their diagrams. Despite the notations still differing between the groups, the solutions are much easier to understand. The solutions are much easier to compare too, because of the consistency in the way they are being described. A common set of abstractions is much more important than a common notation.
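
To illustrate with a sketch (hypothetical code, not Structurizr's actual API): the point is that everyone describes their solution with the same few abstractions and explicit, labeled relationships, whatever notation they then draw.

import java.util.ArrayList;
import java.util.List;

// A minimal, hypothetical model of C4-style abstractions: named elements
// plus labeled relationships. The shared vocabulary, not the notation,
// is what makes two solutions comparable.
class Element {
    final String name;
    final String description;
    final List<String> relationships = new ArrayList<>();

    Element(String name, String description) {
        this.name = name;
        this.description = description;
    }

    void uses(Element destination, String protocolAndPurpose) {
        relationships.add(name + " -> " + destination.name + " [" + protocolAndPurpose + "]");
    }
}

public class C4Sketch {
    public static void main(String[] args) {
        Element webApp = new Element("Web Application", "Spring MVC container");
        Element database = new Element("Database", "MySQL container");
        webApp.uses(database, "JDBC: reads/writes customer data");
        webApp.relationships.forEach(System.out::println); // the diagram's connector labels
    }
}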

Categories: Architecture

Facebook Mobile Drops Pull For Push-based Snapshot + Delta Model

We've learned mobile is different. In If You're Programming A Cell Phone Like A Server You're Doing It Wrong we learned programming for a mobile platform is its own specialty. In How Facebook Makes Mobile Work At Scale For All Phones, On All Screens, On All Networks we learned bandwidth on mobile networks is a precious resource. 

Given all that, how do you design a protocol to sync state (think messages, comments, etc.) between mobile nodes and the global state holding servers located in a datacenter?

Facebook recently wrote about their new solution to this problem in Building Mobile-First Infrastructure for Messenger. They were able to reduce bandwidth usage by 40% and reduce by 20% the terror of hitting send on a phone.

That's a big win...that came from a protocol change.

Facebook Messenger went from a traditional notification-triggered full state pull to a push-based snapshot + delta model.
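
In outline (a hypothetical sketch of the idea only, not Facebook's actual protocol): instead of every notification triggering a full re-download of conversation state, the server pushes one snapshot and then streams small, ordered deltas that the client applies locally.

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of push-based snapshot + delta sync versus
// notification-triggered full-state pull. Not Facebook's actual protocol.
class ConversationClient {
    private final List<String> messages = new ArrayList<>();
    private long version = -1;

    // Old model: every notification causes the entire state to cross the network.
    void onNotificationFullPull(List<String> fullState, long serverVersion) {
        messages.clear();
        messages.addAll(fullState);
        version = serverVersion;
    }

    // New model: one snapshot up front...
    void onSnapshot(List<String> snapshot, long serverVersion) {
        messages.clear();
        messages.addAll(snapshot);
        version = serverVersion;
    }

    // ...then only small deltas are pushed, applied in order.
    void onDelta(String newMessage, long serverVersion) {
        if (serverVersion == version + 1) {
            messages.add(newMessage); // only the new message crosses the network
            version = serverVersion;
        } else {
            // Gap detected (a delta was missed): ask the server for a fresh snapshot.
        }
    }
}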

Categories: Architecture

How to deploy a Docker application into production on Amazon AWS

Xebia Blog - Fri, 10/17/2014 - 17:00

Docker reached production status a few months ago. But having the container technology alone is not enough. You need a complete platform infrastructure before you can deploy your Docker application in production. Amazon AWS offers exactly that: a production quality platform that offers capacity provisioning, load balancing, scaling, and application health monitoring for Docker applications.

In this blog, you will learn how to deploy a Docker application to production in five easy steps.

For demonstration purposes, you are going to use the node.js application that was built for CloudFoundry and used to demonstrate Deis in a previous post. A truly useful app, of which the sources are available on GitHub.

1. Create a Dockerfile

The first thing you need to do is create a Dockerfile to build an image. This is quite simple: you install the node.js and npm packages, copy the source files and install the JavaScript modules.

# DOCKER-VERSION 1.0
FROM    ubuntu:latest
#
# Install nodejs npm
#
RUN apt-get update
RUN apt-get install -y nodejs npm
#
# add application sources
#
COPY . /app
RUN cd /app; npm install
#
# Expose the default port
#
EXPOSE  5000
#
# Start command
#
CMD ["nodejs", "/app/web.js"]
2. Test your Docker application

Now you can create the Docker image and test it.

$ docker build -t sample-nodejs-cf .
$ docker run -d -p 5000:5000 sample-nodejs-cf

Point your browser at http://localhost:5000, click the 'start' button and Presto!

3. Zip the sources

Now that you know the instance works, you zip the source files. The image will be built on Amazon AWS based on your Dockerfile.

$ zip -r /tmp/sample-nodejs-cf-srcs.zip .
4. Deploy Docker application to Amazon AWS

Now you install and configure the Amazon AWS command line interface (CLI) and deploy the Docker source files to Elastic Beanstalk. You could do this all manually, but here you use the deploy-to-aws.sh script that I created.

$ deploy-to-aws.sh \
         sample-nodejs-cf \
         /tmp/sample-nodejs-cf-srcs.zip \
         demo-env

After about 8-10 minutes your application is running. The output should look like this:

INFO: creating application sample-nodejs-cf
INFO: Creating environment demo-env for sample-nodejs-cf
INFO: Uploading sample-nodejs-cf-srcs.zip for sample-nodejs-cf, version 1412948762.
upload: ./sample-nodejs-cf-srcs.zip to s3://elasticbeanstalk-us-east-1-233211978703/1412948762-sample-nodejs-cf-srcs.zip
INFO: Creating version 1412948762 of application sample-nodejs-cf
INFO: demo-env in status Launching, waiting to get to Ready..
...
INFO: demo-env in status Launching, waiting to get to Ready..
INFO: Updating environment demo-env with version 1412948762 of sample-nodejs-cf
INFO: demo-env in status Updating, waiting to get to Ready..
...
INFO: demo-env in status Updating, waiting to get to Ready..
INFO: Version 1412948762 of sample-nodejs-cf deployed in environment
INFO: current status is Ready, goto http://demo-env-vm2tqi3qk4.elasticbeanstalk.com
5. Test your Docker application on the internet!

Your application is now available on the Internet. Browse to the designated URL and click on start. When you increase the number of instances at Amazon, they will appear in the application. When you deploy a new version of the application, you can observe how the new version appears without any errors in the client application.

For more information, go to Amazon Elastic Beanstalk adds Docker support and Dockerizing a Node.js Web App.

Stuff The Internet Says On Scalability For October 17th, 2014

Hey, it's HighScalability time:


What could this be? Swarms of drones painting 3D light sculptures against the night sky!
  • Quotable Quotes:
    • Visnja Zeljeznjak: Steve Jobs' product pricing formula: cost of materials x 3 + 33%
    • Benedict Evans: We now have over 2bn iOS and Android devices on earth, and this will grow in the next few years to well over 3bn.
    • @ClearStoryData: It's true! Avg beer drinker attracts 4.4% more Mosquitos than water drinker #Strataconf
    • Leslie Lamport: The core idea of the problem of that notion of causality came about because of my familiarity with special relativity...where whether one event could causally affect another depended on whether or not information from one could physically reach the other.
    • @laurelatoreilly: Fascinating session about cargo ships going dark to shift market prices #IoT #strataconf "your decisions are only as good as your data"
    • @muratdemirbas: Distributed/decentralized coordination is expensive & hard to scale. Centralized coordination is cheap & scales easily using hierarchies.
    • @froidianslip: ”Kafka is awesome. We heard it cures cancer." -- @gwenshap #Strataconf
    • @timoreilly: RT @grapealope: The self-driving car has 6000 sensors, and takes readings at 4Hz. That's a lot of data. @MCSrivas #strataconf #MapR
    • @froidianslip: Love the paraphrase borrowed from Ray Bradbury, "Any sufficiently complex configuration is indistinguishable from code." #Strataconf
    • @matei_zaharia: Spark shatters MapReduce's 100 TB and 1 PB sort records... with 10x fewer nodes
    • @msallstr: “Synchronous calls in this environment are the crystal meth of programming” @mjpt777 on the new Reactive Manifesto
    • @postwait: “If you put them under enough stress, perfectly rational people will panic and start believing in science” #priceless
    • Ilya Grigorik: It's great to see access from mobile is around 30% faster compared to last year.
    • @ryandotsmith: Recently migrated an async system to SQS. Much simple. Tiny latency. Here is the code (maybe a gem?)

  • People just don't appreciate the power of messy. The problematic culture of "Worse is Better". There's an implied notion here that people can't recognize better when they see it. Better is not a platonic ideal. It can't be proved by argument. Better, like evolution, is something that works itself out in practice. Like evolution, Worse is Better is an algorithm for stepping through a possibility space by jumping from one working phenotype to the next more adapted working phenotype. And for many, that's better. Not Ideal, but Better.

  • The Times They Are a-Changin'. Docker and Microsoft partner to drive adoption of distributed applications. What's the goal? nickstinemates: Package your Windows app in a docker container, use same tooling you would otherwise use to deploy to a docker engine running on a Windows host. Package your Linux app in a docker container, use same tooling you would otherwise use to deploy to a docker engine running on a Linux host.

  • Leandro Pereira writes a fine autobiography in Life of a HTTP request, as seen by my toy web server. All the stages of life are there. Socket creation. Acceptance. Scheduling. Coroutines. Reading requests. Parsing requests. All the way to the reply and the death of the connection. A lot to learn if you want to look at the simplified internals of a service.

  • Wonderful talk: Call Me Maybe: Carly Rae Jepsen and the Perils of Network Partitions. Kyle Kingsbury takes a detailed look at different partition problems in different databases. There are split brains. Masters dying. Lost data. General network mayhem. It's great. The lesson: what's written down in the marketing documentation is not always what you get. Test your application and see what really happens. The world is not simple. A dumb solution where you understand the failure modes can be a good choice.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Then When Given

Xebia Blog - Fri, 10/17/2014 - 14:50

People who practice ATDD all know how frustrating it can be to write automated examples, especially when you get stuck overthinking the preconditions of examples.

This post describes an alternative approach to writing acceptance tests: write them backwards!

Imagine that you are building the very first online phone book. We need to define an acceptance test for viewing the location of a florist. Using the Given-When-Then formula you would probably describe the behaviour like this:


Given I am on the online phone book homepage
When I type “Florist” in the business type field
And I click …
...

Most of the time you will be discussing and describing details that have nothing to do with viewing the location of a florist. To avoid this, write down the Then clause of the formula first.
Make sure the Then clause contains an observable result.


Then I see the location “Floriststreet 123”

Next, we will try to answer the following question: What caused the Then clause?
Make sure the When clause contains an actor and an action.


When I click “View map” of the search result
Then I see the location “Floriststreet 123”

The last thing we will need to do is answer the following question: Why can I perform that action?
Make sure the Given clause contains a simple precondition.


Given I see a search result for florist “Floral Designs”
When I click “View map” of the search result
Then I see the location “Floriststreet 123”

You might have noticed that I left out certain parts, such as the user going to the homepage and selecting UI objects in the search area. They were not worth mentioning in the Given-When-Then formula. Too many details make us lose focus on what we really want to check. The essence of this acceptance test is clicking on the link "View map" and exposing the location to the user.

Try it a couple of times and let me know how it went.

Testing CDN and geolocation with webpagetest.org

Agile Testing - Grig Gheorghiu - Wed, 10/15/2014 - 19:31
Assume you want to migrate example.mycompany.com to a new CDN provider. Eventually you'll have to point example.mycompany.com as a CNAME to a domain name handled by the CDN provider; let's call it example.cdnprovider.com. To test this setup before you put it in production, the usual way is to get an IP address corresponding to example.cdnprovider.com, then associate example.mycompany.com with that IP address in your local /etc/hosts file.

This works well for testing most of the functionality of your web site, but it doesn't work when you want to test geolocation-specific features such as displaying the currency based on the users's country of origin. For this, you can use a nifty feature from the amazing free service WebPageTest.

On the main page of WebPageTest, you can specify the test location from a dropdown containing a generous list of locations across the globe. To fake your DNS setting and point example.mycompany.com at the CDN provider, you can specify something like this in the Script tab:

setDNSName example.mycompany.com example.cdnprovider.com
navigate http://example.mycompany.com

This will effectively associate the page you want to test with the CDN provider-specified URL, so you will hit the CDN first from the location you chose.

Building Better Business Cases for Digital Initiatives

It’s hard to drive digital initiatives and business transformation if you can’t create the business case.  Stakeholders want to know what their investment is supposed to get them.

One of the simplest ways to think about business cases is to think in terms of stakeholders, benefits, KPIs, costs, and risks over time frames.

While that’s the basic frame, there’s a bit of art and science when it comes to building effective business cases, especially when it involves transformational change.

Lucky for us, in the book, Leading Digital: Turning Technology into Business Transformation, George Westerman, Didier Bonnet, and Andrew McAfee, share some of their lessons learned in building better business cases for digital initiatives.

What I like about their guidance is that it matches my experience.

Link Operational Changes to Tangible Business Benefits

The more you can link your roadmap to benefits that people care about and can measure, the better off you are.

Via Leading Digital:

“You need initiative-based business cases that establish a clear link from the operational changes in your roadmap to tangible business benefits.  You will need to involve employees on the front lines to help validate how operational changes will contribute to strategic goals.”

Work Out the Costs, the Benefits, and the Timing of Return

On a good note, the same building blocks that apply to any business case, apply to digital initiatives.

Via Leading Digital:

“The basic building blocks of a business case for digital initiatives are the same as for any business case.  Your team needs to work out the costs, the benefits, and the timing of the return.  But digital transformation is still uncharted territory.  The cost side of the equation is easier, but benefits can be difficult to quantify, even when, intuitively, they seem crystal clear.”

Start with What You Know

Building a business case is an art and a science.   To avoid getting lost in analysis paralysis, start with what you know.

Via Leading Digital:

“Building a business case for digital initiatives is both an art and a science.  With so many unknowns, you'll need to take a pragmatic approach to investments in light of what you know and what you don't know.

Start with what you know, where you have most of the information you need to support a robust cost-benefit analysis.  A few lessons learned from our Digital Masters can be useful.”

Don’t Build Your Business Case as a Series of Technology Investments

If you only consider the technology part of the story, you’ll miss the bigger picture.  Digital initiatives involve organizational change management as well as process change.  A digital initiative is really a change in terms of people, process, and technology, and adoption is a big deal.

Via Leading Digital:

“Don't build your business case as a series of technology investments.  You will miss a big part of the costs.  Cost the adoption efforts--digital skill building, organizational change, communication, and training--as well as the deployment of the technology.  You won't realize the full benefits--or possibly any benefits--without them.”

Frame the Benefits in Terms of Business Outcomes

If you don’t work backwards from the end-in-mind, you might not get there.  You need clarity on the business outcomes so that you can chunk up the right path to get there, while flowing continuous value along the way.

Via Leading Digital:

“Frame the benefits in terms of the business outcomes you want to reach.  These outcomes can be the achievement of goals or the fixing of problems--that is, outcomes that drive more customer value, higher revenue, or a better cost position.  Then define the tangible business impact and work backward into the levers and metrics that will indicate what 'good' looks like.  For instance, if one of your investments is supposed to increase digital customer engagement, your outcome might be increasing engagement-to-sales conversion.  Then work back into the main metrics that drive this outcome, for example, visits, likes, inquiries, ratings, reorders, and the like.

When the business impact of an initiative is not totally clear, look at companies that have already made similar investments.  Your technology vendors can also be a rich, if somewhat biased, source of business cases for some digital investments.”

Run Small Pilots, Evaluate Results, and Refine Your Approach

To reduce risk, start with pilots to live and learn.   This will help you make informed decisions as part of your business case development.

Via Leading Digital:

“But, whatever you do, some digital investment cases will be trickier to justify, be they investments in emerging technologies or cutting-edge practices.  For example, what is the value of gamifying your brand's social communities?  For these types of investment opportunities, experiment with a test-and-learn approach.  State your measures of success, run small pilots, evaluate results, and refine your approach.  Several useful tools and methods exist, such as hypothesis-driven experiments with control groups, or A/B testing.  The successes (and failures) of small experiments can then become the benefits rationale to invest at greater scale.  Whatever the method, use an analytical approach; the quality of your estimated return depends on it.

Translating your vision into strategic goals and building an actionable roadmap is the first step in focusing your investment.  It will galvanize the organization into action.  But if you needed to be an architect to develop your vision, you need to be a plumber to develop your roadmap.  Be prepared to get your hands dirty.”

While practice makes perfect, business cases aren’t about perfect.  Their job is to help you get the right investment from stakeholders so you can work on the right things, at the right time, to make the right impact.

You Might Also Like

Cloud Changes the Game from Deployment to Adoption

How Digital is Changing Physical Experiences

McKinsey on Unleashing the Value of Big Data Analytics

Categories: Architecture, Programming

Using a SSD Cache in Front of EBS Boosted Throughput by 50%, for Free

Using EBS has lots of advantages--reliability, snapshotting, resizing--but overcoming the performance problems by using Provisioned IOPS is expensive. 

Swrve, an integrated marketing and A/B testing and optimization platform for mobile apps, did something clever. They are using c3.xlarge EC2 instances, which have two 40GB SSD devices per instance, as a cache.

They found through testing that a 4-way RAID-0 stripe along with enhanceio effectively increased throughput by over 50%, for free, with no filesystem corruption problems.

How is it free? "We were planning on upgrading to the C3 class of instance anyway, and sticking with EBS as the backing store. Once you’re using an instance which has SSD ephemeral storage, there are no additional fees to use that hardware."

For great analysis, lots of juicy details, graphs, and configuration commands, please take a look at How we increased our EC2 event throughput by 50%, for free.

Categories: Architecture

Docker and Microsoft: Integrating Docker with Windows Server and Microsoft Azure

ScottGu's Blog - Scott Guthrie - Wed, 10/15/2014 - 14:30

I’m excited to announce today that Microsoft is partnering with Docker, Inc to enable great container-based development experiences on Linux, Windows Server and Microsoft Azure.

Docker is an open platform that enables developers and administrators to build, ship, and run distributed applications. Consisting of Docker Engine, a lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows, Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments.

Earlier this year, Microsoft released support for Docker containers with Linux on Azure.  This support integrates with the Azure VM agent extensibility model and Azure command-line tools, and makes it easy to deploy the latest and greatest Docker Engine in Azure VMs and then deploy Docker-based images within them.

Docker Support for Windows Server + Docker Hub integration with Microsoft Azure

Today, I’m excited to announce that we are working with Docker, Inc to extend our support for Docker much further.  Specifically, I’m excited to announce that:

1) Microsoft and Docker are integrating the open-source Docker Engine with the next release of Windows Server.  This release of Windows Server will include new container isolation technology, and support running both .NET and other application types (Node.js, Java, C++, etc) within these containers.  Developers and organizations will be able to use Docker to create distributed, container-based applications for Windows Server that leverage the Docker ecosystem of users, applications and tools.  It will also enable a new class of distributed applications built with Docker that use Linux and Windows Server images together.


2) We will support the Docker client natively on Windows.  Developers and administrators running Windows will be able to use the same standard Docker client and interface to deploy and manage Docker based solutions with both Linux and Windows Server environments.


3) Docker for Windows Server container images will be available in the Docker Hub alongside the Docker for Linux container images available today.  This will enable developers and administrators to easily share and automate application workflows using both Windows Server and Linux Docker images.

4) We will integrate Docker Hub with the Microsoft Azure Gallery and Azure Management Portal.  This will make it trivially easy to deploy and run both Linux and Windows Server based Docker images in Microsoft Azure.

5) Microsoft is contributing code to Docker’s Open Orchestration APIs.  These APIs provide a portable way to create multi-container Docker applications that can be deployed into any datacenter or cloud provider environment. This support will allow a developer or administrator using the Docker command line client to launch either Linux or Windows Server based Docker applications directly into Microsoft Azure from his or her development machine.

Exciting Opportunities Ahead

At Microsoft we continue to be inspired by technologies that can dramatically improve how quickly teams can bring new solutions to market. The partnership we are announcing with Docker today will enable developers and administrators to use the best container tools available for both Linux and Windows Server based applications, and to run all of these solutions within Microsoft Azure.  We are looking forward to seeing the great applications you build with them.

You can learn more about today’s announcements here and here.

Hope this helps,

Scott

Categories: Architecture, Programming