
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Architecture

Let's Build Maker Cities for Maker People Around New Resources Like Bandwidth, Compute, and Atomically-Precise Manufacturing

TL;DR: There’s a lot of unused space in North America. Yet cities like San Francisco are becoming ever more expensive because of a bubble created by high tech jobs that seemingly could be done anywhere. Historically, cities have been built around resources that provide some service to humans. The age of infrastructure rising around physical resources is declining, while the age of digital resource exploitation is rising. Cities are still valuable because they are amazing idea and problem solving machines. How about we create thousands of new Maker Cities in the vast emptiness that is North America and build them around digital resources like bandwidth, compute power, Atomically-Precise Manufacturing (APM), and all things future and bright?

Observation Number One: There’s lots of empty space out there.
Categories: Architecture

Why do I use Leanpub?

Coding the Architecture - Simon Brown - Sat, 08/30/2014 - 11:35

There's been some interesting discussion over the past few days about Leanpub, both on Twitter and blogs. Jurgen Appelo posted Why I Don't Use Leanpub and Peter Armstrong responded. I think the biggest selling points of Leanpub as a publishing platform from an author's perspective may have been lost in the discussion. So, here's my take on why I use Leanpub for Software Architecture for Developers.

Some history

I pitched my book idea to a number of traditional publishing companies in 2008 and none of them were very interested. "Nice idea, but it won't sell" was the basic summary. A few years later I decided to self-publish my book instead and I was about to head down the route of creating PDF and EPUB versions using a combination of Pages and iBooks Author on the Mac. Why? Because I love books like Garr Reynolds' Presentation Zen and I wanted to do something similar. At first I considered simply giving the book away for free on my website but, after Googling around for self-publishing options, I stumbled across Leanpub. Despite the Leanpub bookstore being fairly sparse at the start of 2012, the platform piqued my interest and the rest is history.

The headline: book creation, publishing, sales and distribution as a service

I use Leanpub because it allows me to focus on writing content. Period. The platform takes care of creating and selling e-books in a number of different formats. I can write some Markdown, sync the files via Dropbox and publish a new version of my book within minutes.

Typesetting and layout

I frequently get asked for advice about whether Leanpub is a good platform for somebody to write a book. The number one question to ask is whether you have specific typesetting/layout needs. If you want to produce a "Presentation Zen" style book or if having control of your layout is important to you, then Leanpub isn't for you. If, however, you want to write a traditional book that mostly consists of words, then Leanpub is definitely worth taking a look at.

Leanpub uses a slightly customised version of Markdown, which is a super-simple language for writing content. Here's an example of a Markdown file from my book, and you can see the result in the online sample of my book. Leanpub does allow you to tweak things like PDF page size, font size, page breaking, section numbering, etc., but you're not going to get pixel-perfect typesetting. I think that Leanpub actually does a pretty fantastic job of creating good looking PDF, EPUB and MOBI format ebooks based upon the very minimal Markdown, especially when you consider that all ebook reader software is different and the readers themselves can mess with the fonts/font sizes anyway.
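To give a feel for the format, here is a minimal sketch of what a chapter source file could look like. It is plain, generic Markdown rather than an excerpt from the book, and Leanpub layers its own extensions (page breaks, asides, and so on) on top of this:

# A chapter title

An ordinary paragraph of prose.

## A section within the chapter

* A bullet point
* Another bullet point

> A quoted passage, which ends up as a blockquote in the generated PDF, EPUB and MOBI files.

Leanpub turns a set of files like this, synced via Dropbox or GitHub, into the finished ebook formats.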

Book formatting on Leanpub

It's like building my own server at Rackspace versus using a "Platform as a Service" such as Cloud Foundry. You need to make a decision about the trade-off between control and simplicity/convenience. Since authoring isn't my full-time job and I have lots of other stuff to be getting on with, I'm more than happy to supply the content and let Leanpub take care of everything else for me.

Toolchain

My toolchain as a Leanpub author is incredibly simple: Dropbox and Mou. From a structural perspective, I have one Markdown file per essay and that's basically it. Leanpub does now provide support for using GitHub to store your content and I can see the potential for a simple Leanpub-aware authoring tool, but it's not rocket science. And to prove the point, a number of non-technical people here in Jersey have books on Leanpub too (e.g. Thrive with The Hive and a number of books by Richard Rolfe).

Iterative and incremental delivery

Before starting, I'd already decided that I'd like to write the book as a collection of short essays and this was cemented by the fact that Leanpub allows me to publish an in-progress ebook. I took an iterative and incremental approach to publishing the book. Rather than starting with essay number one and progressing in order, I tried to initially create a minimum viable book that covered the basics. I then fleshed out the content with additional essays once this skeleton was in place, revisiting and iterating upon earlier essays as necessary. I signed up for Leanpub in January 2012 and clicked the "Publish" button four weeks later. That first version of my book was only about ten pages in length but I started selling copies immediately.

Variable pricing and coupons

Another thing that I love about Leanpub is that it gives you full control over how you price your book. The whole pricing thing is a balancing act between readership and royalties, but I like that I'm in control of this. My book started out at $4.99 and, as content was added, that price increased. The book currently has a minimum price of $20 and a recommended price of $30. I can even create coupons for reduced price or free copies too. There's some human psychology that I don't understand here, but not everybody pays the minimum price. Far from it, and I've had a good number of people pay more than the recommended price too. Leanpub provides all of the raw data, so you can analyse it as needed.

An incubator for books

As I've already mentioned, I pitched my book idea to a bunch of regular publishing companies and they weren't interested. Fast-forward a few years and my book is currently the "bestselling" book on Leanpub this week, fifth by lifetime earnings and twelfth in terms of number of copies sold. I've used quotes around "bestselling" because Jurgen did. ;-)

Leanpub bestsellers

In his blog post, Peter Armstrong emphasises that Leanpub is a platform for publishing in-progress ebooks, especially because you can publish using an iterative and incremental approach. For this reason, I think that Leanpub is a fantastic way for authors to prove an idea and get some concrete feedback in terms of sales. Put simply, Leanpub is a fantastic incubator for books. I know of a number of books that were started on Leanpub and have since been taken on by traditional publishing companies. I've had a number of offers too, including some for commercial translations. Sure, there are other ways to publish in-progress ebooks, but Leanpub makes this super-easy and the barrier to entry is incredibly low.

The future for my book?

What does the future hold for my book then? I'm not sure that electronic products are ever really "finished" and, although I consider my book to be "version 1", I do have some additional content that is being lined up. And when I do this, thanks to the Leanpub platform, all of my existing readers will get the updates for free.

I've so far turned down the offers that I've had from publishing companies, primarily because they can't compete in terms of royalties and I'm unconvinced that they will be able to significantly boost readership numbers. Leanpub is happy for authors to sell their books through other channels (e.g. Amazon) but, again, I'm unconvinced that simply putting the book onto Amazon will yield an increased readership. I do know of books on the Kindle store that haven't sold a single copy, so I take "Amazon is bigger and therefore better" arguments with a pinch of salt.

What I do know is that I'm extremely happy with the return on my investment. I'm not going to tell you how much I've earned, but a naive calculation of $17.50 (my royalty on a $20 sale) x 4,600 (the total number of readers) is a little high but gets you into the right ballpark. In summary, Leanpub allows me to focus on content, takes care of pretty much everything else and gives me an amazing author royalty as a result. This is why I use Leanpub.

Categories: Architecture

Stuff The Internet Says On Scalability For August 29th, 2014

Hey, it's HighScalability time:


In your best Carl Sagan voice...Billions and Billions of Habitable Planets.
  • Quotable Quotes:
    • @Kurt_Vonnegut: Another flaw in the human character is that everybody wants to build and nobody wants to do maintenance.
    • @neil_conway: "The paucity of innovation in calculating join predicate selectivities is truly astounding."
    • @KentBeck: power law walks into a bar. bartender says, "i've seen a hundred power laws. nobody orders anything." power law says, "1000 beers, please".
    • @CompSciFact: RT @jfcloutier: Prolog: thinking in proofs Erlang: thinking in processes UML: wishful thinking

  • For your acoustic listening pleasure let me present...The Orbiting Vibes playing Scaling Doesn't Matter. I don't quite understand how it relates to scaling, but my deep learning algorithm likes it. 

  • The Rise of the Algorithm. Another interesting podcast with James Allworth and Ben Thompson. Much pondering of how to finance content. Do you trust content with embedded affiliate links? Do you trust content written by writers judged on their friendliness to advertisers? Why trust at all is the bigger question. Facebook is the soft news advertisers love. Twitter is the hard news advertisers avoid. A traditional newspaper combined both. Humans are the new horses. < Capitalism doesn't care if people are employed any more than it cared about horses being employed. Employment is simply a byproduct of inefficient processes. The faith that the future will provide is deliciously ironic given the rigorous rationalism underlying most of the episodes.

  • Great reading list for Berkeley CS286: Implementation of Database Systems, Fall 2014. 

  • Is it just me or is it totally weird that all the spy systems use the same diagrams that any other project would use? It makes it seem so...normal. The Surveillance Engine: How the NSA Built Its Own Secret Google.

  • The Mathematics of Herding Sheep. Little border collie Annie embodies a very smart algorithm to herd sheep: when sheep become dispersed beyond a certain point, dogs put their effort into rounding them up, reintroducing predatory pressure into the herd, which responds according to selfish herd principles, bunching tightly into a more cohesive unit. < What's so disturbing is how well this algorithm works with people.

  • Inside Google's Secret Drone-Delivery Program. What I really want are pick-up drones, where I send my drone to pick stuff up. Or are pick-up and delivery cars a better bet? Though I can see swarms of drones delivering larger objects in parts that self-assemble.

  • Lambda Architecture at Indix: "break down the various stages in your data pipeline into the layers of the architecture and choose technologies and frameworks that satisfy the specific requirements of each layer."

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Inspirational Work Quotes at a Glance

What if your work could be your ultimate platform? … your ultimate channel for your growth and greatness?

We spend a lot of time at work. 

For some people, work is their ultimate form of self-expression.

For others, work is a curse.

Nobody stops you from using work as a chance to challenge yourself, to grow your skills, and become all that you’re capable of.

But that’s a very different mindset than work is a place you have to go to, or stuff you have to do.

When you change your mind, you change your approach.  And when you change your approach, you change your results.   But rather than just try to change your mind, the ideal scenario is to expand your mind, and become more resourceful.

You can do so with quotes.

Grow Your “Work Intelligence” with Inspirational Work Quotes

In fact, you can actually build your “work intelligence.”

Here are a few ways to think about “intelligence”:

  1. the ability to learn or understand things or to deal with new or difficult situations (Merriam Webster)
  2. the more distinctions you have for a given concept, the more intelligence you have

In Rich Dad, Poor Dad, Robert Kiyosaki says, “intelligence is the ability to make finer distinctions.”  And Tony Robbins says, “intelligence is the measure of the number and the quality of the distinctions you have in a given situation.”

If you want to grow your “work intelligence”, one of the best ways is to familiarize yourself with the best inspirational quotes about work.

By drawing from the wisdom of the ages and modern sages, you can operate at a higher level and turn work from a chore into a platform for lifelong learning, a dojo for personal growth, and a chance to master your craft.

You can use inspirational quotes about work to fill your head with ideas, distinctions, and key concepts that help you unleash what you’re capable of.

To give you a giant head start and to help you build a personal library of profound knowledge, here are two work quotes collections you can draw from:

37 Inspirational Quotes for Work as Self-Expression

Inspirational Work Quotes

10 Distinct Ideas for Thinking About Your Work

Let’s practice.  This will only take a minute, and if you happen to hear the right words, the ones that are the keys for you, your insight or “ah-ha” can be just the breakthrough you needed to get more out of your work and, as a result, more out of life (or at least your moments).

Here is a sample of distinct ideas, with some depth, that you can use to change how you perceive your work and/or how you do your work:

  1. “Either write something worth reading or do something worth writing.” — Benjamin Franklin
  2. “You don’t get paid for the hour. You get paid for the value you bring to the hour.” — Jim Rohn
  3. “Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do.” — Steve Jobs
  4. “Measuring programming progress by lines of code is like measuring aircraft building progress by weight.” -- Bill Gates
  5. “We must each have the courage to transform as individuals. We must ask ourselves, what idea can I bring to life? What insight can I illuminate? What individual life could I change? What customer can I delight? What new skill could I learn? What team could I help build? What orthodoxy should I question?” – Satya Nadella
  6. “My work is a game, a very serious game.” — M. C. Escher
  7. “Hard work is a prison sentence only if it does not have meaning. Once it does, it becomes the kind of thing that makes you grab your wife around the waist and dance a jig.” — Malcolm Gladwell
  8. “The test of the artist does not lie in the will with which he goes to work, but in the excellence of the work he produces.” -- Thomas Aquinas
  9. “Are you bored with life? Then throw yourself into some work you believe in with all you heart, live for it, die for it, and you will find happiness that you had thought could never be yours.” — Dale Carnegie
  10. “I like work; it fascinates me. I can sit and look at it for hours.” — Jerome K. Jerome

For more ideas, take a stroll through my inspirational work quotes.

As you can see, there are lots of ways to think about work and what it means.  At the end of the day, what matters is how you think about it, and what you make of it.  It’s either an investment, or it’s an incredible waste of time.  You can make it mundane, or you can make it matter.

The Pleasant Life, The Good Life, and The Meaningful Life

Here’s another surprise about work.   You can use work to live the good life.   According to Martin Seligman, a master in the art and science of positive psychology, there are three paths to happiness:

  1. The Pleasant Life
  2. The Good Life
  3. The Meaningful Life

In The Pleasant Life, you simply try to have as much pleasure as possible.  In The Good Life, you spend more time in your values.  In The Meaningful Life, you use your strengths in the service of something that is bigger than you are.

There are so many ways you can live your values at work and connect your work with what makes you come alive.

There are so many ways to turn what you do into service for others and become a part of something that’s bigger than you.

If you haven’t figured out how yet, then dig deeper, find a mentor, and figure it out.

You spend way too much time at work to let your influence and impact fade to black.

You Might Also Like

40 Hour Work Week at Microsoft

Agile Avoids Work About Work

How Employees Lost Empathy for Their Work, for the Customer, and for the Final Product

Satya Nadella on Live and Work a Meaningful Life

Short-Burst Work

Categories: Architecture, Programming

Speaking in September

Coding the Architecture - Simon Brown - Thu, 08/28/2014 - 16:01

After a lovely summer (mostly) spent in Jersey, September is right around the corner and is shaping up to be a busy month. Here's a list of the events where you'll be able to find me.

It's going to be a fun month and besides, I have to keep up my British Airways frequent flyer status somehow, right? ;-)

Categories: Architecture

CocoaHeadsNL @ Xebia on September 16th

Xebia Blog - Thu, 08/28/2014 - 11:20

On Tuesday the 16th the Dutch CocoaHeads will be visiting us. It promises to be a great night for anybody doing iOS or OSX development. The night starts at 17:00, dinner at 18:00.

Are you an iOS/OSX developer who would like to meet fellow developers? Come join the CocoaHeads on September 16th at our office. More details are on the CocoaHeadsNL meetup page.

What is your next step in Continuous Delivery? Part 1

Xebia Blog - Wed, 08/27/2014 - 21:15

Continuous Delivery helps you deliver software faster, with better quality and at lower cost. Who doesn't want to deliver software faster, better and cheaper? I certainly want that!

No matter how good you are at Continuous Delivery, you can always do one step better. Even if you are as good as Google or Facebook, you can still do one step better. Myself included, I can do one step better.

But also if you are just getting started with Continuous Delivery, there is a feasible step to take you forward.

In this series, I describe a plan that helps you determine where you are right now and what your next step should be. To be complete, I'll start at the very beginning. I expect most of you have passed the first steps already.

The steps you already took

This is the first part in the series: What is your next step in Continuous Delivery? I'll start with three steps combined in a single post. This is because the great majority of you have gone through these steps already.

Step 0: Your very first lines of code

Do you remember the very first lines of code you wrote? Perhaps as a student or maybe before that as a teenager? Did you use version control? Did you bring it to a test environment before going to production? I know I did not.

None of us was born with an innate skill for delivering software in a certain way. However, many of us are taught a certain way of delivering software that is still a long way from Continuous Delivery.

Step 1: Version control

At some point during your studies or career, you have been introduced to Version Control. I remember starting with CVS, migrating to Subversion and I am currently using Git. Each of these systems is an improvement over the previous one.

It is common to store the source code for your software in version control. Do you already have definitions or scripts for your infrastructure in version control? And for your automated acceptance tests or database schemas? In later steps, we'll get back to that.

Step 2: Release process

Your current release process may be far from Continuous Delivery. Despite appearances, your current release process is a useful step towards Continuous Delivery.

Even if you deliver to production less than twice a year, you are better off than a company that delivers their code unpredictably, untested and unmanaged. Or worse, a company that edits their code directly on a production machine.

In your delivery process, you have planning, control, a production-like testing environment, actual testing and maintenance after the go-live. The main difference with Continuous Delivery is the frequency and the amount of software that is released at the same time.

So yes, a release process is a productive step towards Continuous Delivery. Now let's see if we can optimize beyond this manual release process.

Step 3: Scripts

Imagine you have issues on your production server... Who do you go to for help? Do you have someone in mind?

Let me guess, you are thinking about a middle-aged guy who has been working at your organisation for 10+ years. Even if your organization is only 3 years old, I bet he's been working there for more than 10 years. Or at least, it seems like it.

My next guess is that this guy wrote some scripts to automate recurring tasks and make his life easier. Am I right?

These scripts are an important step towards Continuous Delivery. In fact, Continuous Delivery is all about automating repetitive tasks. The only thing that falls short is that these scripts are a one-man initiative. It is a good initiative, but there is no strategy behind it and a lack of management support.

If you don't have this guy working for you, then you may have a bigger step to take when continuing towards the next step of Continuous Delivery. To successfully adopt Continuous Delivery in the long run, you are going to need someone like him.

Following steps

In the next parts, we will look at the following steps towards becoming world champion delivering software:

  • Step 4: Continuous Delivery
  • Step 5: Continuous Deployment
  • Step 6: "Hands-off"
  • Step 7: High Scalability

Stay tuned for the following posts.

How To Use Personas and Scenarios to Drive Adoption and Realize Value

Personas and scenarios can be a powerful tool for driving adoption and business value realization.

All too often, people deploy technology without fully understanding the users that it’s intended for. 

Worse, if the technology does not get used, the value does not get realized.

Keep in mind that the value is in the change.  

The change takes the form of doing something better, faster, cheaper, and behavior change is really the key to value realization.

If you deploy a technology, but nobody adopts it, then you won’t realize the value.  It’s a waste.  Or, more precisely, it’s only potential value.  It’s only potential value because nobody has used it to change their behavior to be better, faster, or cheaper with the new technology.  

In fact, you can view change in terms of behavior changes:

What should users START doing or STOP doing, in order to realize the value?

Behavior change becomes a useful yardstick for evaluating adoption and consumption of technology, and a significant proxy for value realization.

What is a Persona?

I’ve written about personas before  in Actors, Personas, and Roles, MSF Agile Persona Template, and Personas at patterns & practices, and Microsoft Research has a whitepaper called Personas: Practice and Theory.

A persona, simply defined, is a fictitious character that represents a user type.  Personas are the “who” in the organization.  You use them to create familiar faces and to inspire project teams to know their clients, as well as to build empathy and clarity around the user base.

Using personas helps characterize sets of users.  It’s a way to capture and share details about what a typical day looks like and what sorts of pains, needs, and desired outcomes the personas have as they do their work. 

You need to know how work currently gets done so that you can provide relevant changes with technology, plan for readiness, and drive adoption through specific behavior changes.

Using personas can help you realize more value, while avoiding “value leakage.”

What is a Scenario?

When it comes to users, and what they do, we're talking about usage scenarios.  A usage scenario is a story or narrative in the form of a flow.  It shows how one or more users interact with a system to achieve a goal.

You can picture usage scenarios as high-level storyboards.  Here is an example:

[Example usage scenario storyboard]

In fact, since scenario is often an overloaded term, if people get confused, I just call them Solution Storyboards.

To figure out relevant usage scenarios, we need to figure out the personas that we are creating solutions for.

Workforce Analysis with Personas

In practice, you would segment the user population, and then assign personas to the different user segments.  For example, let’s say there are 20,000 employees: 3,000 of them are business managers, 6,000 of them are sales people, and 1,000 of them are product development engineers.  You could create a persona named Mary to represent the business managers, a persona named Sally to represent the sales people, and a persona named Bob to represent the product development engineers.

This sounds simple, but it’s actually powerful.  If you do a good job of workforce analysis, you can better determine how many users a particular scenario is relevant for.  Now you have some numbers to work with.  This can help you quantify business impact.   This can also help you prioritize.  If a particular scenario is relevant for 10 people, but another is relevant for 1,000, you can evaluate actual numbers.
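As a quick worked example using the segment sizes below: a scenario that is relevant to both the business managers and the sales people reaches roughly 3,000 + 6,000 = 9,000 employees, while a scenario that only touches the product development engineers reaches about 1,000. Those are exactly the kinds of numbers that make prioritization and business impact estimates concrete.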

                  Persona 1   Persona 2   Persona 3   Persona 4   Persona 5
                  "Mary"      "Sally"     "Bob"       "Jill"      "Jack"
User Population   3,000       6,000       1,000       5,000       5,000
Scenario 1        X
Scenario 2        X           X
Scenario 3                                X
Scenario 4                                            X           X
Scenario 5        X
Scenario 6        X           X           X           X           X
Scenario 7        X           X
Scenario 8                                X           X
Scenario 9        X           X           X           X           X
Scenario 10                   X                       X

Analyzing a Persona

Let’s take Bob for example.  As a product development engineer, Bob designs and develops new product concepts.  He would love to collaborate better with his distributed development team, and he would love better feedback loops and interaction with real customers.

We can drill in a little bit to get a better picture of his work as a product development engineer.

Here are a few ways you can drill in:

  • A Day in the Life – We can shadow Bob for a day and get a feel for the nature of his work.  We can create  a timeline for the day and characterize the types of activities that Bob performs.
  • Knowledge and Skills - We can identify the knowledge Bob needs and the types of skills he needs to perform his job well.  We can use this as input to design more effective readiness plans.
  • Enabling Technologies –  Based on the scenario you are focused on, you can evaluate the types of technologies that Bob needs.  For example, you can identify what technologies Bob would need to connect and interact better with customers.

Another approach is to focus on the roles, responsibilities, challenges, work-style, needs and wants.  This helps you understand which solutions are appropriate, what sort of behavior changes would be involved, and how much readiness would be required for any significant change.

At the end of the day, it always comes down to building empathy, understanding, and clarity around pains, needs, and desired outcomes.

Persona Creation Process

Here’s an example of a high-level process for persona creation:

  1. Kickoff workshop
  2. Interview users
  3. Create skeletons
  4. Validate skeletons
  5. Create final personas
  6. Present final personas

Doing persona analysis is actually pretty simple.  The challenge is that people don’t do it, or they make a lot of assumptions about what people actually do and what their pains and needs really are.  When’s the last time somebody asked you what your pains and needs are, or what you need to perform your job better?

A Story of Using Personas to Create the Future of Digital Banking

In one example, I know of a large bank that transformed itself by focusing on its personas and scenarios.

It started with one usage scenario:

Connect with customers wherever they are.

This scenario was driven from pain in the business. The business was out of touch with customers, and it was operating under a legacy banking model.  This simple scenario reflected an opportunity to change how employees connect with customers (through Cloud, Mobile, and Social).

On the customer side of the equation, customers could now have virtual face-to-face communication from wherever they are.  On the employee side, it enabled a flexible work-style, helped employees pair up with each other for great customer service, and provided better touch and connection with the customers they serve.

And in the grand scheme of things, this helped transform a brick-and-mortar bank to a digital bank of the future, setting a new bar for convenience, connection, and collaboration.

Here is a video that talks through the story of one bank’s transformation to the digital banking arena:

Video: NedBank on The Future of Digital Banking

In the video, you’ll see Blessing Sibanyoni, one of Microsoft’s Enterprise Architects in action.

If you’re wondering how to change the world, you can start with personas and scenarios.

You Might Also Like

Scenarios in Practice

How I Learned to Use Scenarios to Evaluate Things

How Can Enterprise Architects Drive Business Value the Agile Way?

Business Scenarios for the Cloud

IT Scenarios for the Cloud

Categories: Architecture, Programming

The 1.2M Ops/Sec Redis Cloud Cluster Single Server Unbenchmark

This is a guest post by Itamar Haber, Chief Developers Advocate, Redis Labs.

While catching up with the world the other day, I read through the High Scalability guest post by Anshu and Rajkumar from Aerospike (great job btw). I really enjoyed the entire piece and was impressed by the heavy tweaking that they did to their EC2 instance to get to the 1M mark, but I kept wondering - how would Redis do?

I could have done a full-blown benchmark. But doing a full-blown benchmark is a time- and resource-consuming ordeal. And that's without taking into account the initial difficulties of comparing apples, oranges and other sorts of fruits. A real benchmark is a trap, for it is no more than an effort deemed from inception to be backlogged. But I wanted an answer, and I wanted it quick, so I was willing to make a few sacrifices to get it. That meant doing the next best thing - an unbenchmark.

An unbenchmark is, by (my very own) definition, nothing like a benchmark (hence the name). In it, you cut every corner and relax every assumption to get a quick 'n dirty ballpark figure. Leaning heavily on the expertise of the guys in our labs, we measured the performance of our Redis Cloud software without any further optimizations. We ran our unbenchmark with the following setup:

Categories: Architecture

Synchronize the Team

Xebia Blog - Tue, 08/26/2014 - 13:52

How can you, as a scrum master, improve the chances that the scrum team has a common vision and understanding of both the user story and the solution, from the start until the end of the sprint?   

The problem

The planning session is where the team should synchronize on understanding the user story and agree on how to build the solution. But there is no real validation that all the team members are on the same page. The team tends to dive into the technical details quite fast in order to identify and size the tasks. The technical details are often discussed by only a few team members and with little or no functional or business context. Once the team leaves the session, there is no guarantee that they remain synchronized when the sprint progresses. 

The only other team synchronization ritual, prescribed by the scrum process, is the daily scrum or stand-up. In most teams the daily scrum is as short as possible, avoiding semantic discussions. I also prefer the stand-ups to be short and sweet. So how can you or the team determine that the team is (still) synchronized?

Specify the story

In the planning session, after a story is considered ready enough be to pulled into the sprint, we start analyzing the story. This is the specification part, using a technique called ‘Specification by Example’. The idea is to write testable functional specifications with actual examples. We decompose the story into specifications and define the conditions of failure and success with examples, so they can be tested. Thinking of examples makes the specification more concrete and the interpretation of the requirements more specific.

Having the whole team work out the specifications and examples helps the team to stay focused on the functional part of the story longer and in more detail, before shifting mindsets to the development tasks.  Writing the specifications will also help to determine whether a story is ready enough. As the sprint progresses, once all the tests are green, the functionality-building part of the story should be done.

You can use a tool like FitNesse or Cucumber to write testable specifications. The tests are run against the actual code, so they provide an accurate view on the progress. When all the tests pass, the team has successfully created the functionality. In addition to the scrum board and burn down charts, the functional tests provide a good and accurate view on the sprint progress.
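To make that concrete, here is an illustrative Cucumber-style specification by example; the feature, the numbers and the step wording are invented for this sketch rather than taken from the team described above:

Feature: Order discounts

  Scenario: Gold customers get 10% discount on orders over 100 euro
    Given a gold customer
    When the customer places an order of 120 euro
    Then the order total is 108 euro

Because the examples are concrete, they double as automated acceptance tests that everybody on the team, technical or not, can read.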

Design the solution

Once the story has been decomposed into clear and testable specifications, we start creating a design on a whiteboard. The main goal is to create a shared, visible understanding of the solution, so avoid (technical) details to prevent big up-front designs and losing the involvement of the less technical members of the team. You can use whatever format works for your team (e.g. UML), but be sure it is comprehensible by everybody on the team.

The creation of the design, as an effort by the whole team, tends to spark discussion. Instead of relying on the consistency of non-visible mental images in the heads of team members, there is a tangible image shared with everyone.

The whiteboard design will be a good starting point for refinement as the team gains insight during the sprint. The whiteboard should always be visible and within reach of the team during the sprint. Using a whiteboard makes it easy to adapt or complement the design. You’ll notice the team standing around the whiteboard or pointing to it in discussions quite often.

The design can easily be turned into a digital artefact by taking a photo of it. A digital copy can be valuable to anyone wanting to learn the system in the future. The design could also be used in the sprint demo, should the audience be interested in a technical overview.

Conclusion

The team now leaves the sprint planning with a set of functional tests and a whiteboard design. The tests are useful to validate and synchronize on the functional goals. The whiteboard designs are useful to validate and synchronize on the technical goals. The shared understanding of the team is more visible and can be validated, throughout the sprint. The team has become more transparent.

It might be a good practice to have the developers write the specification, and the testers or analysts draw the designs on the board. This is to provoke more communication, by getting the people out of their comfort zone and forcing them to ask more questions.

There are more compelling reasons to implement (or not) something like Specification by Example or to have the team make design overviews. But it also helps the team to stay on the same page, when there are visible and testable artefacts to rely on during the sprint.

MixRadio Architecture - Playing with an Eclectic Mix of Services

This is a guest repost by Steve Robbins, Chief Architect at MixRadio.

At MixRadio, we offer a free music streaming service that learns from listening habits to deliver people a personalised radio station, at the single touch of a button. MixRadio marries simplicity with an incredible level of personalization, for a mobile-first approach that will help everybody, not just the avid music fan, enjoy and discover new music. It's as easy as turning on the radio, but you're in control - just one touch of Play Me provides people with their own personal radio station.

The service also offers hundreds of hand-crafted expert and celebrity mixes categorised by genre and mood for each region. You can also create your own artist mix and mixes can be saved for offline listening during times without signal such as underground travel, as well as reducing data use and costs.

Our apps are currently available on Windows Phone, Windows 8, Nokia Asha phones and the web. We’ve spent years evolving a back-end that we’re incredibly proud of, despite being British! Here's an overview of our back-end architecture.

 

Architecture Overview
Categories: Architecture

How Can Enterprise Architects Drive Business Value the Agile Way?

An Enterprise Architect can have a tough job when it comes to driving value to the business.   With multiple stakeholders, multiple moving parts, and a rapid rate of change, delivering value is tough enough.   But what if you want to accelerate value and maximize business impact?

Enterprise Architects can borrow a few concepts from the Agile world to be much more effective in today’s world.

A Look Back at How Agile Helped Connect Development to Business Impact …

First, let’s take a brief look at traditional development and how it evolved.  Traditionally, IT departments focused on delivering value to the business by shipping big bang projects.   They would plan it, build it, test it, and then release it.   The measure of success was on time, on budget.   

Few projects ever shipped on time.  Few were ever on budget.  And very few ever met the requirements of the business.

Then along came Agile approaches and they changed the game.

One of the most important ideas was a shift away from thick requirements documentation to user stories.  Developers got customers telling stories about what they wanted the future solution to do.  For example, a user story for a sale representative might look like this:

“As a sales rep, I want to see my customer’s account information so that I can identify cross-sell and upsell opportunities.” 

The use of user stories accomplished several things.  First, user stories got the development teams talking to the business users.  Rather than throwing documents back and forth, people started having face-to-face communication to understand the user stories.  Second, user stories helped chunk bigger units of value down into smaller units of value.  Rather than a big bang project where all the value is promised at the end of some long development cycle, a development team could now ship the solution in increments, where each increment was a prioritized set of stories.  The user stories effectively create a shared language for value.

Third, it made it easier to test the delivery of value.  Now the user and the development team could test the solution against the user stories and acceptance criteria.  If the story met acceptance criteria, the user would acknowledge that the value was delivered.  In this way, the user stories created both a validation mechanism and a feedback loop for delivering and acknowledging value.

In the Agile world, bigger stories are called epics, and collections of stories are called themes.  Often a story starts off as an epic until it gets broken down into multiple stories.  What’s important here is that the collections of stories serve as a catalog of potential value.   Specifically, this catalog of stories reflects potential value with real stakeholders.  In this way, Agile helps drive customer focus and customer connection.  It’s really effective stakeholder management in action.

Agile approaches have been used in software projects large and small.  And they’ve forever changed how developers and project managers approach projects.

A Look at How Agile Can Help Enterprise Architecture Accelerate Business Value …

But how does this apply to Enterprise Architects?

As an Enterprise Architect, chances are you are responsible for achieving business outcomes.  You do this by driving business transformation.   The way you achieve business transformation is through driving capability change including business, people, and technical capabilities.

That’s a tall order.   And you need a way to chunk this up and make it meaningful to all the parties involved.

The Power of Scenarios as Units of Value for the Enterprise

This is where scenarios come into play.  Scenarios are a simple way to capture pains, needs and desired outcomes.   You can think of the desired outcome as the future capability vision.   It’s really a story that helps articulate the art of the possible.   More precisely, you can use scenarios to help build empathy with stakeholders for what value will look like, by painting a conceptual scene of the future.

An Enterprise scenario is simply a chunk of organizational change, typically about 3-5 business capabilities, 3-5 people capabilities, and 3-5 technical capabilities.

If that sounds like a lot of theory, let’s step into an example to show what it looks like in practice.

Let’s say you’re in a situation where you need to help a healthcare provider change their business.  

You can come up with a lot of scenarios, but it helps to start with the pains and needs of the business owner.  Otherwise, you might start going through a bunch of scenarios for the patients or for the doctors.  In this case, the business owner would be the Chief Medical Officer or the doctor of doctors.

Scenario: Tele-specialist for Healthcare

If we walk the pains, needs, and desired outcomes of the Chief Medical Officer, we might come up with a scenario that looks something like this, where the CURRENT STATE reflects the current pains, and needs, and the FUTURE STATE reflects the desired outcome.

CURRENT STATE

Here is an example of the CURRENT STATE portion of the scenario:

The Chief Medical Officer of Contoso Provider is struggling with increased costs and declining revenues. Costs are rising due to the Affordable Healthcare Act regulatory compliance requirements and increasing malpractice insurance premiums. Revenue is declining due to decreasing medical insurance payments per claim.

FUTURE STATE

Here is an example of the FUTURE STATE portion of the scenario:

Doctors can consult with patients, peers, and specialists from anywhere. Contoso provider's doctors can see more patients, increase accuracy of first time diagnosis, and grow revenues.


[Illustration: Tele-specialist for Healthcare scenario]

 

Storyboard for the Future Capability Vision

It helps to be able to picture what the Future Capability Vision might look like.   That’s where storyboarding can come in.  An Enterprise Architect can paint a simple scene of the future with a storyboard that shows the Future Capability Vision in action.  This practice lends itself to whiteboarding, and the beauty of a whiteboard is you can quickly elaborate where you need to, without getting mired in details.

[Storyboard of the Future Capability Vision]

As you can see in this example storyboard of the Future Capability Vision, we listed out some business benefits, which we could then drill down into relevant KPIs and value measures.   We’ve also outlined some building blocks required for this Future Capability Vision in the form of business capabilities and technical capabilities.

Now this simple approach accomplishes a lot.   It helps ensure that any technology solution actually connects back to business drivers and pains that a business decision maker actually cares about.   This gets their fingerprints on the solution concept.   And it creates a simple “flashcard” for value.   If we name the Enterprise scenario well, then we can use it as a handle to get back to the story we created with the business of a better future.

The obvious thing this does, aside from connecting IT to the business, is it helps the business justify any investment in IT.

And all we did was walk through one Enterprise Scenario.  

But there is a lot more value to be found in the Enterprise.   We can literally explore and chunk up the value in the Enterprise if we take a step back and add another tool to our toolbelt:  the Scenario Chain.

Scenario Chain:  Chaining the Industry Scenarios to Enterprise Scenarios

The Scenario Chain is another powerful conceptual visualization tool.  It helps you quickly map out what’s happening in the marketplace in terms of industry drivers or industry scenarios.  You can then identify potential investment objectives.   These investment objectives lead to patterns of value or patterns of solutions in the Enterprise, which are effectively Enterprise scenarios.   From the Enterprise scenarios, you can then identify relevant usage scenarios.  The usage scenarios effectively represent new ways of working for the employees, or new interaction models with customers, which is effectively a change to your value stream.

[Scenario Chain diagram]

With one simple glance, the Scenario Chain is a bird’s-eye view of how you can respond to the changing marketplace and how you can transform your business.   And, by using Enterprise scenarios, you can chunk up the change into meaningful units of value that reflect pains, needs, and desired outcomes for the business.  And, because you have the fingerprints of stakeholders from both business and IT, you’ve effectively created a shared vision for the future that has business impact, a justification for investment, and a pull-through mechanism for additional value, by driving the adoption of the usage scenarios.

Let’s elaborate on adoption and how scenarios can help accelerate business value.

Using Scenario to Drive Adoption and Accelerate Business Value

Driving adoption is a key way to realize the business value.  If nobody adopts the solution, then that’s what Gartner would call “Value Leakage.”  Value Realization really comes down to governance, measurement, and adoption.

With scenarios at your fingertips, you have a powerful way to articulate value, justify business cases, drive business transformation, and accelerate business value.   The key lies in using the scenarios as a unit of value, and focusing on scenarios as a way to drive adoption and change.

Here are three ways you can use scenarios to drive adoption and accelerate business value:

1.  Accelerate Business Adoption

One of the ways to accelerate business value is to accelerate adoption.    You can use scenarios to help enumerate specific behavior changes that need to happen to drive the adoption.   You can establish metrics and measures around specific behavior changes.   In this way, you make adoption a lot more specific, concrete, intentional, and tangible.

This approach is about doing the right things, faster.

2.  Re-Sequence the Scenarios

Another way to accelerate business value is to re-sequence the scenarios.   If your big bang is way at the end (way, way at the end), no good.  Sprinkle some of your bangs up front.   In fact, a great way to design for change is to build rolling thunder.   Put some of the scenarios up front that will get people excited about the change and directly experiencing the benefits.  Make it real.

The approach is about putting first things first.

3.  Identify Higher Value Scenarios

The third way to accelerate business value is to identify higher-value scenarios.   One of the things that happens along the way, is you start to uncover potential scenarios that you may not have seen before, and these scenarios represent orders of magnitude more value.   This is the space of serendipity.   As you learn more about users and what they value, and stakeholders and what they value, you start to connect more dots between the scenarios you can deliver and the value that can be realized (and therefore, accelerated.)

This approach is about trading up for higher value and more impact.

As you can see, Enterprise Architects can drive business value and accelerate business value realization by using scenarios and storyboarding.   It’s a simple and agile approach for connecting business and IT, and for shaping a more Agile Enterprise.

I’ll share more on this topic in future posts.   Value Realization is an art and a science and I’d like to reduce the gap between the state of the art and the state of the practice.

You Might Also Like

3 Ways to Accelerate Business Value

6 Steps for Enterprise Architecture as Strategy

Cognizant on the Next Generation Enterprise

Simple Enterprise Strategy

The Mission of Enterprise Services

The New Competitive Landscape

What Am I Doing on the Enterprise Strategy Team?

Why Have a Strategy?

Categories: Architecture, Programming

Vert.x with core.async. Handling asynchronous workflows

Xebia Blog - Mon, 08/25/2014 - 12:00

Anyone who has written code that has to coordinate complex asynchronous workflows knows it can be a real pain, especially when you limit yourself to using only callbacks directly. Various tools have arisen to tackle these issues, like Reactive Extensions and Javascript promises.

Clojure's answer comes in the form of core.async: An implementation of CSP for both Clojure and Clojurescript. In this post I want to demonstrate how powerful core.async is under a variety of circumstances. The context will be writing a Vert.x event-handler.

Vert.x is a young, light-weight, polyglot, high-performance, event-driven application platform on top of the JVM. It has an actor-like concurrency model, where the coarse-grained actors (called verticles) can communicate over a distributed event bus. Although Vert.x is still quite young, it's sure to grow as a big player in the future of the reactive web.

Scenarios

The scenario is as follows. Our verticle registers a handler on some address and depends on 3 other verticles.

1. Composition

Imagine the new Mars rover got stuck against some Mars rock and we need to send it instructions to destroy the rock with its inbuilt laser. Also imagine that the controlling software is written with Vert.x. There is a single verticle responsible for handling the necessary steps:

  1. Use the sensor to locate the position of the rock
  2. Use the position to scan hardness of the rock
  3. Use the hardness to calibrate and fire the laser. Report back status
  4. Report success or failure to the main caller

As you can see, in each step we need the result of the previous step, meaning composition.
A straightforward callback-based approach would look something like this:

(ns example.verticle
  (:require [vertx.eventbus :as eb]))

(eb/on-message
  "console.laser"
  (fn [instructions]
    (let [reply-msg eb/*current-message*]
      (eb/send "rover.scope" (scope-msg instructions)
        (fn [coords]
          (eb/send "rover.sensor" (sensor-msg coords)
            (fn [data]
              (let [power (calibrate-laser data)]
                (eb/send "rover.laser" (laser-msg power)
                  (fn [status]
                    (eb/reply* reply-msg (parse-status status))))))))))))

A code structure quite typical of composed async functions. Now let's bring in core.async:

(ns example.verticle
  (:refer-clojure :exclude [send])
  (:require [vertx.eventbus :as eb]
            [clojure.core.async :refer [go chan put! <!]]))

(defn send [addr msg]
  (let [ch (chan 1)]
    (eb/send addr msg #(put! ch %))
    ch))

(eb/on-message
  "console.laser"
  (fn [instructions]
    (go (let [coords (<! (send "rover.scope" (scope-msg instructions)))
              data (<! (send "rover.sensor" (sensor-msg coords)))
              power (calibrate-laser data)
              status (<! (send "rover.laser" (laser-msg power)))]
          (eb/reply (parse-status status))))))

We created our own reusable send function which returns a channel on which the result of eb/send will be put. Apart from the takes (<!) inside the go block, the code now reads almost like ordinary sequential code.

2. Concurrent requests

Another thing we might want to do is query different handlers concurrently. Although we could use composition, this would not be very performant: we do not need to wait for the reply from service-A in order to call service-B.

As a concrete example, imagine we need to collect atmospheric data about some geographical area in order to make a weather forecast. The data will include the temperature, humidity and wind speed which are requested from three different independent services. Once all three asynchronous requests return, we can create a forecast and reply to the main caller. But how do we know when the last callback is fired? We need to keep some memory (mutable state) which is updated when each of the callback fires and process the data when the last one returns.

core.async easily accommodates this scenario without adding extra mutable state for coordination inside your handlers. The state is contained in the channel.

(eb/on-message
  "forecast.report"
  (fn [coords]
    (let [ch (chan 3)]
      (eb/send "temperature.service" coords #(put! ch {:temperature %}))
      (eb/send "humidity.service" coords #(put! ch {:humidity %}))
      (eb/send "wind-speed.service" coords #(put! ch {:wind-speed %}))
      (go (let [data (merge (<! ch) (<! ch) (<! ch))
                forecast (create-forecast data)]
            (eb/reply forecast))))))

3. Fastest response

Sometimes there are multiple services at your disposal providing similar functionality and you just want the fastest one. With just a small adjustment, we can make the previous code work for this scenario as well.

(eb/on-message
  "server.request"
  (fn [msg]
    (let [ch (chan 3)]
      (eb/send "service-A" msg #(put! ch %))
      (eb/send "service-B" msg #(put! ch %))
      (eb/send "service-C" msg #(put! ch %))
      (go (eb/reply (<! ch))))))

We just take the first result on the channel and ignore the other results. After the go block has replied, there are no more takers on the channel. The results from the services that were too late are still put on the channel, but after the request finished, there are no more references to it and the channel with the results can be garbage-collected.

4. Handling timeouts and choice with alts!

We can create timeout channels that close themselves after a specified amount of time. Closed channels can not be written to anymore, but any messages in the buffer can still be read. After that, every read will return nil.

One thing core.async provides that most other tools don't is choice. From the examples:

One killer feature for channels over queues is the ability to wait on many channels at the same time (like a socket select). This is done with `alts!!` (ordinary threads) or `alts!` in go blocks.

This, combined with timeout channels, gives the ability to wait on a channel up to a maximum amount of time before giving up. By adjusting example 2 a bit:

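;; Note: this snippet additionally assumes timeout, go-loop and alts! are referred from clojure.core.async.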
(eb/on-message
  "forecast.report"
  (fn [coords]
    (let [ch (chan)
          t-ch (timeout 3000)]
      (eb/send "temperature.service" coords #(put! ch {:temperature %}))
      (eb/send "humidity.service" coords #(put! ch {:humidity %}))
      (eb/send "wind-speed.service" coords #(put! ch {:wind-speed %}))
      (go-loop [n 3 data {}]
        (if (pos? n)
          (if-some [result (first (alts! [ch t-ch]))] ; alts! returns [value channel]; a nil value means the timeout channel closed
            (recur (dec n) (merge data result))
            (eb/fail 408 "Request timed out"))
          (eb/reply (create-forecast data)))))))

This will do the same thing as before, but we will wait a total of 3s for the requests to finish, otherwise we reply with a timeout failure. Notice that we did not put the timeout parameter in the vert.x API call of eb/send. Having a first-class timeout channel allows us to coordinate these timeouts more easily than adding timeout parameters and failure-callbacks.

Wrapping up

The above scenarios are clearly simplified to focus on the different workflows, but they should give you an idea of how to start using core.async in Vert.x.

One question that arose for me, and the original motivation for this blog post, was whether core.async can play nicely with Vert.x. Verticles are single-threaded by design, while core.async introduces background threads to dispatch go blocks (state machine callbacks). Since the dispatched go blocks carry the correct message context, the functions eb/send, eb/reply, etc. can be called from these go blocks and all goes well.

There is of course a lot more to core.async than is shown here. But that is a story for another blog.

Docker on a raspberry pi

Xebia Blog - Mon, 08/25/2014 - 07:11

This blog describes how easy it is to use docker in combination with a Raspberry Pi. Because of docker, deploying software to the Raspberry Pi is a piece of cake.

What is a raspberry pi?
The Raspberry Pi is a credit-card sized computer that plugs into your TV and a keyboard. It is a capable little computer which can be used in electronics projects and for many of the things that your desktop PC does, like spreadsheets, word-processing and games. It also plays high-definition video. A Raspberry Pi runs Linux, has a 700 MHz ARM processor and 512 MB of internal memory. Last but not least, it only costs around 35 euros.

A raspberry pi version B

Because of its price, size and performance, the Raspberry Pi is a step towards the 'Internet of Things'. With a Raspberry Pi it is possible to control and connect almost anything to anything. For instance, my home project is a Raspberry Pi controlling a robot.

Raspberry Pi in action

What is docker?
Docker is an open platform for developers and sysadmins to build, ship and run distributed applications. With Docker, developers can build any app in any language using any toolchain. “Dockerized” apps are completely portable and can run anywhere. A dockerized app contains the application, its environment, dependencies and even the OS.

Why combine docker and raspberry pi?
It is nice to work with a Raspberry Pi because it is a great platform for connecting devices. Deploying anything, however, is kind of a pain. With dockerized apps we can develop and test our application on our own machine and, when it works, deploy it to the Raspberry Pi. We can do this without any pain or worries about corrupting the underlying operating system and tools. And last but not least, you can easily undo your experiments.

What went better than I expected
First of all, it was relatively easy to install docker on the Raspberry Pi. If you use the Arch Linux operating system, docker is already part of the package manager! I expected to have to do a lot of cross-compiling of the docker application, because the Raspberry Pi uses an ARM architecture (instead of the default x86 architecture), but someone had already done this for me!

Second of all, there are a bunch of ready-to-use docker images built especially for the Raspberry Pi. To run dockerized applications on the Raspberry Pi you depend on base images, and these base images must also support the ARM architecture. There is an image for almost every situation, whether you want to run node.js, Python, Ruby or just Java; an example is shown below.
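
For example, pulling an ARM base image and poking around in it interactively might look like this (resin/rpi-raspbian is the base image used later in this post; treat the exact commands as a sketch):

docker pull resin/rpi-raspbian
docker run -it resin/rpi-raspbian /bin/bash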

What worried me most was the performance of running virtualized software on a Raspberry Pi. But it all went well and I did not notice any performance reduction. Docker requires far fewer resources than running virtual machines: a docker process runs straight on the host, giving native CPU performance, with only a small overhead for memory and network.

What I don't like about docker on a raspberry pi
Docker's slogan to 'build, ship and run any app anywhere' is not entirely valid. You cannot develop your Dockerfile on your local machine and deploy the same application directly to your Raspberry Pi. This is because each Dockerfile starts from a base image. For running your application on your local machine you need an x86-based docker image; for your Raspberry Pi you need an ARM-based image. That is a pity, because it means you can only build your docker image for the Raspberry Pi on the Raspberry Pi itself, which is slow.

I tried several things.

  1. I used the emulator QEMU to emulate the raspberry pi on a fast Macbook. But, because of the inefficiency of the emulation, it is just as slow as building your dockerfile on a raspberry pi.
  2. I tried cross-compiling. This wasn't possible, because the commands in your dockerfile are replayed on a running image and the running raspberry-pi image can only be run on ... a raspberry pi.

How to run a simple node.js application with docker on a raspberry pi  

Step 1: Installing Arch Linux
The first step is to install Arch Linux on an SD card for the Raspberry Pi. The usual OS for the Raspberry Pi is Raspbian, a Debian-based distribution that is nicely configured to work with the Raspberry Pi. In this case, however, Arch Linux is the better choice because we use the OS only to run docker on it; Arch Linux is much smaller and more barebones. The best way to install it is to follow the steps at http://archlinuxarm.org/platforms/armv6/raspberry-pi. In my case, I use version 3.12.20-4-ARCH. In addition to the tutorial:

  1. After downloading the image, install it on an SD card by running the command:
    sudo dd if=path_of_your_image.img of=/dev/diskn bs=1m
  2. If there is no HDMI output at boot, remove the config.txt on the SD card. It will magically work!
  3. Log in using root / root.
  4. Arch Linux will use 2 GB by default. If you have an SD card with a higher capacity, you can resize the root partition by following the steps at http://gleenders.blogspot.nl/2014/03/raspberry-pi-resizing-sd-card-root.html

Step 2: Installing a wifi dongle
In my case I wanted to connect a wireless dongle to the Raspberry Pi, which is done by following these simple steps:

  1. Install the wireless tools:
        pacman -Syu
        pacman -S wireless_tools
        
  2. Setup the configuration, by running:
    wifi-menu
  3. Autostart the wifi with:
        netctl list
        netctl enable wlan0-[name]
    

Now that the Raspberry Pi is connected to the network, you are able to SSH into it.

Step 3: Installing docker
The actual install of docker is relatively easy. There is a docker version compatible with the ARM processor that is used within the Raspberry Pi, and it is part of the Arch Linux package manager. The packaged version is 1.0.0, while at the time of writing this blog the latest docker release is version 1.1.2. The missing features are:

  1. Enhanced security for the LXC driver.
  2. .dockerignore support.
  3. Pause containers during docker commit.
  4. Add --tail to docker logs.

You can install docker and start it as a service on system boot with the commands:

pacman -S docker
systemctl enable docker

Installing docker with pacman
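
Before moving on it is worth checking that the docker daemon is actually up; a quick sanity check (not part of the original steps) could look like:

systemctl start docker
docker info      # prints images, containers and driver details if the daemon is running
docker version   # client and server should both report version 1.0.0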

Step 4: Run a single nodejs application
After we've installed docker on the Raspberry Pi, we want to run a simple nodejs application. The application we will deploy is inspired by the nodejs web app in the tutorial on the docker website: https://github.com/enokd/docker-node-hello/. This nodejs application prints a "hello world" in the web browser. We have to change the Dockerfile to:

# DOCKER-VERSION 1.0.0
FROM resin/rpi-raspbian

# install required packages
RUN apt-get update
RUN apt-get install -y wget dialog

# install nodejs
RUN wget http://node-arm.herokuapp.com/node_latest_armhf.deb
RUN dpkg -i node_latest_armhf.deb

COPY . /src
RUN cd /src; npm install

# run application
EXPOSE 8080
CMD ["node", "/src/index.js"]

And it works!

The webpage that runs in nodejs on a docker image on a raspberry pi

 

In just four little steps, you are able to use docker on your Raspberry Pi! Good luck!

 

C4 model poster

Coding the Architecture - Simon Brown - Sun, 08/24/2014 - 23:20

A few people have recently asked me for a poster/cheat sheet/quick reference of the C4 model that I use for communicating and diagramming software systems. You may have seen an old copy floating around the blog, but I've made a few updates and you can grab the new version from http://static.codingthearchitecture.com/c4.pdf (PDF, A3 size).

Software architecture and the C4 model

Enjoy!

Categories: Architecture

Xebia IT Architects Innovation Day

Xebia Blog - Sat, 08/23/2014 - 17:51

Friday August 22nd was Xebia’s first Innovation Day. We spent a full day experimenting with technology. I helped organize the day for XITA, Xebia’s IT Architects department (Hmm. Department doesn’t feel quite right to describe what we are, but anyway). Innovation days are intended to inspire as well as educate. We split up into small teams and each focused on a particular technology. Below is a list of project teams:

• Docker-izing enterprise software
• Run a highly available web application across multiple CoreOS nodes using Kubernetes
• Application architecture (team 1)
• Application architecture (team 2)
• Replace Puppet with Salt
• Scale "infinitely" with Apache Mesos

In the coming weeks we will publish what we learned in separate blogs.

First Xebia Innovation Day

CRC - Components, Responsibilities, Collaborations

Coding the Architecture - Simon Brown - Sat, 08/23/2014 - 09:46

I was reading Dan North Visits Osper and was pleasantly surprised to see Dan mention CRC modelling. CRC is a great technique for the process of designing software, particularly when used in a group/workshop environment. It's not a technique that many people seem to know about nowadays though.

A Google search will yield lots of good explanations on the web, but basically CRC is about helping to identify the classes needed to implement a particular feature, use case, user story, etc. You walk through the feature from the start and, whenever you identify a candidate class, you write its name on a 6x4 index card, additionally annotating the card with the responsibilities of the class. Every card represents a separate class. As you progress through the feature, you identify more classes and create additional cards, annotating the cards with responsibilities and also keeping a note of which classes are collaborating with one another. Of course, you can also refactor your design by splitting classes out or combining them as you progress. When you're done, you can dry-run your feature by walking through the classes (e.g. A calls B to do X, which in turn requests Y from C, etc). A sample card is sketched below.
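
As an illustration, a card might end up looking something like this (the class and its entries are invented for the example, not taken from a real system):

Class: OrderService

Responsibilities:                    Collaborators:
- Accept and validate new orders     - CustomerRepository
- Track the status of an order       - PaymentGateway
- Publish order status events        - NotificationService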

Much of what you'll read about CRC on the web discusses how the technique is useful for teaching OO design at the class level, but I like using it at the component level when faced with architecting a software system given a blank sheet of paper. The same principles apply, but you're identifying components (or services, microservices, etc) rather than classes. When you've done this for a number of significant use cases, you end up with a decent set of CRC cards representing the core components of your software system.

From this, you can start to draw some architecture diagrams using something like my C4 model. Since the cards represent components, you can simply lay out the cards on paper, draw lines between them and you have a component diagram. Each of those components needs to be running in an execution environment (e.g. a web application, database, mobile app, etc). If you draw boxes around groups of components to represent these execution environments, you have a containers diagram. Step up one level further and you can create a simple system context diagram to show how your system interacts with the outside world. My Simple Sketches for Diagramming your Software Architecture article provides more information about the C4 model and the resulting diagrams, but hopefully you get the idea.

CRC then ... yes, it's a great technique for collaborative design, particularly when applied at the component level. And it's a nice starting point for creating software architecture diagrams too.

Categories: Architecture

Stuff The Internet Says On Scalability For August 22nd, 2014

Hey, it's HighScalability time:


Exterminate! 1,024 small, mobile, three-legged machines that can move and communicate using infrared laser beams.
  • 1.6 billion: facts in Google's Knowledge Vault built by bots; 100: lightning strikes every second
  • Quotable Quotes:
    • @stevendborrelli: There's a common feeling here at #MesosCon that we at the beginning of a massive shift in the way we manage infrastructure.
    • @deanwampler: 2000 machine service will see > 10 machine crashes per day. Failure is normal. (Google) #Mesoscon
    • @peakscale: "not everything revolves around docker" /booted from room immediately
    • @deanwampler: Twitter has most of their critical infrastructure on Mesos, O(10^4) machines, O(10^5) tasks, O(10^0) SREs supporting it. #Mesoscon
    • @adrianco: Dig yourself a big data hole, then drown in your data lake...
    • bbulkow:  I saw huge Go uptake at OSCON. I met one guy doing log processing easily at 1M records per minute on a single amazon instance, and knew it would scale.
    • @julian_dunn: clearly, running Netflix on a mainframe would have avoided this problem

  • Programming is the new way in an old tradition of using new ideas to explain old mysteries. Take the new Theory of Everything, doesn't it sound a lot like OO programming?: According to constructor theory, the most fundamental components of reality are entities—“constructors”—that perform particular tasks, accompanied by a set of laws that define which tasks are actually possible for a constructor to carry out. Then there's Our Mathematical Universe, which posits that the attributes of objects are the objects: all physical properties of an electron, say, can be described mathematically; therefore, to him, an electron is itself a mathematical structure. Any data modeler knows how faulty this conceit is. We only model our view relative to a problem, not universally. Modelers also have another intuition, that all attributes arise out of relationships between entities and that entities may themselves not have attributes. So maybe physics and programming have something to do with each other after all?

  • Love this. Multi-Datacenter Cassandra on 32 Raspberry Pi’s. Over the top lobby theatrics is a signature Silicon Valley move.

  • Computation is all around us. Jellyfish Use Novel Search Strategy: instead of using a consistent Lévy walk approach, barrel jellyfish also employ a bouncing technique to locate prey. These large jellies ride the currents to a new depth in search of food. If a meal is not located in the new location, the creature rides the currents back to its original location. 

  • Two months early. 300k under budget. Building a custom CMS using a Javascript based Single Page App (SPA), a Clojure back end, a set of small Clojure based micro services sitting on top of MongoDB, hosted in Rackspace.

  • While Twitter may not fight against the impersonation of certain Journalism professors, it does fight spam with a large sword. Here's how that sword of righteousness was forged: Fighting spam with BotMaker. The main challenge: applying rules defined using their own rule language with a low latency. Spam is detected in three stages: real-time, before the tweet enters the system; near real-time, on the write path; periodic, in the background. The result: a 40% reduction in spam and faster response time to new spam attacks.

  • An architecture of small apps. A PHP/Symfony CMS called Megatron takes 10 seconds to render a page. Pervasive slowness leads to constant problems with cache clearing, timeouts, server spin ups and downs, cache warmup. What to do? As an answer an internal Yammer conversation on different options is shared. The major issue is dumping their CMS for a microservices based approach. Interesting discussion that covers a lot of ground.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Azure: New DocumentDB NoSQL Service, New Search Service, New SQL AlwaysOn VM Template, and more

ScottGu's Blog - Scott Guthrie - Thu, 08/21/2014 - 21:39

Today we released a major set of updates to Microsoft Azure. Today’s updates include:

  • DocumentDB: Preview of a New NoSQL Document Service for Azure
  • Search: Preview of a New Search-as-a-Service offering for Azure
  • Virtual Machines: Portal support for SQL Server AlwaysOn + community-driven VMs
  • Web Sites: Support for Web Jobs and Web Site processes in the Preview Portal
  • Azure Insights: General Availability of Microsoft Azure Monitoring Services Management Library
  • API Management: Support for API Management REST APIs

All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them:

DocumentDB: Announcing a New NoSQL Document Service for Azure

I’m excited to announce the preview of our new DocumentDB service - a NoSQL document database service designed for scalable and high performance modern applications.  DocumentDB is delivered as a fully managed service (meaning you don’t have to manage any infrastructure or VMs yourself) with an enterprise grade SLA.

As a NoSQL store, DocumentDB is truly schema-free. It allows you to store and query any JSON document, regardless of schema. The service provides built-in automatic indexing support – which means you can write JSON documents to the store and immediately query them using a familiar document oriented SQL query grammar. You can optionally extend the query grammar to perform service side evaluation of user defined functions (UDFs) written in server-side JavaScript as well. 

DocumentDB is designed to linearly scale to meet the needs of your application. The DocumentDB service is purchased in capacity units, each offering a reservation of high performance storage and dedicated performance throughput. Capacity units can be easily added or removed via the Azure portal or REST based management API based on your scale needs. This allows you to elastically scale databases in fine grained increments with predictable performance and no application downtime simply by increasing or decreasing capacity units.

Over the last year, we have used DocumentDB internally within Microsoft for several high-profile services.  We now have DocumentDB databases that are each 100s of TBs in size, each processing millions of complex DocumentDB queries per day, with predictable performance of low single digit ms latency.  DocumentDB provides a great way to scale applications and solutions like this to an incredible size.

DocumentDB also enables you to tune performance further by customizing the index policies and consistency levels you want for a particular application or scenario, making it an incredibly flexible and powerful data service for your applications.   For queries and read operations, DocumentDB offers four distinct consistency levels - Strong, Bounded Staleness, Session, and Eventual. These consistency levels allow you to make sound tradeoffs between consistency and performance. Each consistency level is backed by a predictable performance level ensuring you can achieve reliable results for your application.

DocumentDB has made a significant bet on ubiquitous formats like JSON, HTTP and REST – which makes it easy to start taking advantage of from any Web or Mobile applications. With today’s release we are also distributing .NET, Node.js, JavaScript and Python SDKs. The service can also be accessed through RESTful HTTP interfaces and is simple to manage through the Azure preview portal.

Provisioning a DocumentDB account

To get started with DocumentDB you provision a new database account. To do this, use the new Azure Preview Portal (http://portal.azure.com), click the Azure gallery and select the Data, storage, cache + backup category, and locate the DocumentDB gallery item.


Once you select the DocumentDB item, choose the Create command to bring up the Create blade for it.

In the create blade, specify the name of the service you wish to create, the amount of capacity you wish to scale your DocumentDB instance to, and the location around the world that you want to deploy it (e.g. the West US Azure region):


Once provisioning is complete, you can start to manage your DocumentDB account by clicking the new instance icon on your Azure portal dashboard. 


The keys tile can be used to retrieve the security keys to use to access the DocumentDB service programmatically.

Developing with DocumentDB

DocumentDB provides a number of different ways to program against it. You can use the REST API directly over HTTPS, or you can choose from either the .NET, Node.js, JavaScript or Python client SDKs.

The JSON data I am going to use for this example are two families:

// AndersonFamily.json file

{
    "id": "AndersenFamily",
    "lastName": "Andersen",
    "parents": [
        { "firstName": "Thomas" },
        { "firstName": "Mary Kay" }
    ],
    "children": [
        { "firstName": "John", "gender": "male", "grade": 7 }
    ],
    "pets": [
        { "givenName": "Fluffy" }
    ],
    "address": { "country": "USA", "state": "WA", "city": "Seattle" }
}

and

// WakefieldFamily.json file

{
    "id": "WakefieldFamily",
    "parents": [
        { "familyName": "Wakefield", "givenName": "Robin" },
        { "familyName": "Miller", "givenName": "Ben" }
    ],
    "children": [
        {
            "familyName": "Wakefield",
            "givenName": "Jesse",
            "gender": "female",
            "grade": 1
        },
        {
            "familyName": "Miller",
            "givenName": "Lisa",
            "gender": "female",
            "grade": 8
        }
    ],
    "pets": [
        { "givenName": "Goofy" },
        { "givenName": "Shadow" }
    ],
    "address": { "country": "USA", "state": "NY", "county": "Manhattan", "city": "NY" }
}

Using the NuGet package manager in Visual Studio, I can search for and install the DocumentDB .NET package into any .NET application. With the URI and Authentication Keys for the DocumentDB service that I retrieved earlier from the Azure Management portal, I can then connect to the DocumentDB service I just provisioned, create a Database, create a Collection, Insert some JSON documents and immediately start querying for them:

using (client = new DocumentClient(new Uri(endpoint), authKey))
{
    var database = new Database { Id = "ScottsDemoDB" };
    database = await client.CreateDatabaseAsync(database);

    var collection = new DocumentCollection { Id = "Families" };
    collection = await client.CreateDocumentCollectionAsync(database.SelfLink, collection);

    //DocumentDB supports strongly typed POCO objects and also dynamic objects
    dynamic andersonFamily = JsonConvert.DeserializeObject(File.ReadAllText(@".\Data\AndersonFamily.json"));
    dynamic wakefieldFamily = JsonConvert.DeserializeObject(File.ReadAllText(@".\Data\WakefieldFamily.json"));

    //persist the documents in DocumentDB
    await client.CreateDocumentAsync(collection.SelfLink, andersonFamily);
    await client.CreateDocumentAsync(collection.SelfLink, wakefieldFamily);

    //very simple query returning the full JSON document matching a simple WHERE clause
    var query = client.CreateDocumentQuery(collection.SelfLink, "SELECT * FROM Families f WHERE f.id = 'AndersenFamily'");
    var family = query.AsEnumerable().FirstOrDefault();

    Console.WriteLine("The Anderson family have the following pets:");
    foreach (var pet in family.pets)
    {
        Console.WriteLine(pet.givenName);
    }

    //select JUST the child record out of the Family record where the child's gender is male
    query = client.CreateDocumentQuery(collection.DocumentsLink, "SELECT * FROM c IN Families.children WHERE c.gender='male'");
    var child = query.AsEnumerable().FirstOrDefault();

    Console.WriteLine("The Andersons have a son named {0} in grade {1} ", child.firstName, child.grade);

    //cleanup test database
    await client.DeleteDatabaseAsync(database.SelfLink);
}

As you can see above – the .NET API for DocumentDB fully supports the .NET async pattern, which makes it ideal for use with applications you want to scale well. 

Server-side JavaScript Stored Procedures

If I want to perform some updates affecting multiple documents within a transaction, I can define a stored procedure using JavaScript that swaps pets between families. In this scenario it would be important to ensure that one family didn’t end up with all the pets and another ended up with none due to something unexpected happening. Therefore if an error occurred during the swap process, it would be crucial that the database roll back the transaction and leave things in a consistent state. I can do this with the following stored procedure that I run within the DocumentDB service:

function SwapPets(family1Id, family2Id) {
    var context = getContext();
    var collection = context.getCollection();
    var response = context.getResponse();

    collection.queryDocuments(collection.getSelfLink(), 'SELECT * FROM Families f where f.id  = "' + family1Id + '"', {},
    function (err, documents, responseOptions) {
        var family1 = documents[0];

        collection.queryDocuments(collection.getSelfLink(), 'SELECT * FROM Families f where f.id = "' + family2Id + '"', {},
        function (err2, documents2, responseOptions2) {
            var family2 = documents2[0];

            var itemSave = family1.pets;
            family1.pets = family2.pets;
            family2.pets = itemSave;

            collection.replaceDocument(family1._self, family1,
                function (err, docReplaced) {
                    collection.replaceDocument(family2._self, family2, {});
                });

            response.setBody(true);
        });
    });
}

If an exception is thrown in the JavaScript function, due to, for instance, a concurrency violation when updating a record, the transaction is rolled back and the system is returned to the state it was in before the function began.

It’s easy to register the stored procedure in code like below (for example: in a deployment script or app startup code):

    //register a stored procedure
    StoredProcedure storedProcedure = new StoredProcedure
    {
        Id = "SwapPets",
        Body = File.ReadAllText(@".\JS\SwapPets.js")
    };

    storedProcedure = await client.CreateStoredProcedureAsync(collection.SelfLink, storedProcedure);

And just as easy to execute the stored procedure from within your application:

    //execute stored procedure passing in the two family documents involved in the pet swap
    dynamic result = await client.ExecuteStoredProcedureAsync<dynamic>(storedProcedure.SelfLink, "AndersenFamily", "WakefieldFamily");

If we checked the pets now linked to the Anderson Family we’d see they have been swapped.

Learning More

It’s really easy to get started with DocumentDB and create a simple working application in a couple of minutes.  The above was but one simple example of how to start using it.  Because DocumentDB is schema-less you can use it with literally any JSON document.  Because it performs automatic indexing on every JSON document stored within it, you get screaming performance when querying those JSON documents later. Because it scales linearly with consistent performance, it is ideal for applications you think might get large.

You can learn more about DocumentDB from the new DocumentDB development center here.

Search: Announcing preview of new Search as a Service for Azure

I’m excited to announce the preview of our new Azure Search service.  Azure Search makes it easy for developers to add great search experiences to any web or mobile application.   

Azure Search provides developers with all of the features needed to build out their search experience without having to deal with the typical complexities that come with managing, tuning and scaling a real-world search service. It is delivered as a fully managed service with an enterprise grade SLA. We also are releasing a Free tier of the service today that enables you to use it with small-scale solutions on Azure at no cost.

Provisioning a Search Service

To get started, let’s create a new search service.  In the Azure Preview Portal (http://portal.azure.com), navigate to the Azure Gallery, and choose the Data storage, cache + backup category, and locate the Azure Search gallery item.


Locate the “Search” service icon and select Create to create an instance of the service:


You can choose from two Pricing Tier options: Standard which provides dedicated capacity for your search service, and a Free option that allows every Azure subscription to get a free small search service in a shared environment.

The standard tier can be easily scaled up or down and provides dedicated capacity guarantees to ensure that search performance is predictable for your application.  It also supports the ability to index 10s of millions of documents with lots of indexes.

The free tier is limited to 10,000 documents, up to 3 indexes and has no dedicated capacity guarantees. However it is also totally free, and also provides a great way to learn and experiment with all of the features of Azure Search.

Managing your Azure Search service

After provisioning your Search service, you will land in the Search blade within the portal - which allows you to manage the service, view usage data and tune the performance of the service:


I can click on the Scale tile above to bring up the details of the number of resources allocated to my search service. If I had created a Standard search service, I could use this to increase the number of replicas allocated to my service to support more searches per second (or to provide higher availability) and the number of partitions to give me support for higher numbers of documents within my search service.

Creating a Search Index

Now that the search service is created, I need to create a search index that will hold the documents (data) that will be searched. To get started, I need two pieces of information from the Azure Portal, the service URL to access my Azure Search service (accessed via the Properties tile) and the Admin Key to authenticate against the service (accessed via the Keys tile).


Using this search service URL and admin key, I can start using the search service APIs to create an index and later upload data and issue search requests. I will be sending HTTP requests against the API using that key, so I’ll setup a .NET HttpClient object to do this as follows:

HttpClient client = new HttpClient();
client.DefaultRequestHeaders.Add("api-key", "19F1BACDCD154F4D3918504CBF24CA1F");

I’ll start by creating the search index. In this case I want an index I can use to search for contacts in my dataset, so I want searchable fields for their names and tags; I also want to track the last contact date (so I can filter or sort on that later on) and their address as a lat/long location so I can use it in filters as well. To make things easy I will be using JSON.NET (to do this, add the NuGet package to your VS project) to serialize objects to JSON.

var index = new
{
    name = "contacts",
    fields = new[]
    {
        new { name = "id", type = "Edm.String", key = true },
        new { name = "fullname", type = "Edm.String", key = false },
        new { name = "tags", type = "Collection(Edm.String)", key = false },
        new { name = "lastcontacted", type = "Edm.DateTimeOffset", key = false },
        new { name = "worklocation", type = "Edm.GeographyPoint", key = false },
    }
};

var response = client.PostAsync("https://scottgu-dev.search.windows.net/indexes/?api-version=2014-07-31-Preview",
                                new StringContent(JsonConvert.SerializeObject(index), Encoding.UTF8, "application/json")).Result;
response.EnsureSuccessStatusCode();

You can run this code as part of your deployment code or as part of application initialization.

Populating a Search Index

Azure Search uses a push API for indexing data. You can call this API with batches of up to 1000 documents to be indexed at a time. Since it’s your code that pushes data into the index, the original data may be anywhere: in a SQL Database in Azure, DocumentDb database, blob/table storage, etc.  You can even populate it with data stored on-premises or in a non-Azure cloud provider.

Note that indexing is rarely a one-time operation. You will probably have an initial set of data to load from your data source, but then you will want to push new documents as well as update and delete existing ones. If you use Azure Websites, this is a natural scenario for Webjobs that can run your indexing code regularly in the background.

Regardless of where you host it, the code to index data needs to pull data from the source and push it into Azure Search. In the example below I’m just making up data, but you can see how I could be using the result of a SQL or LINQ query or anything that produces a set of objects that match the index fields we identified above.

var batch = new
{
    value = new[]
    {
        new
        {
            id = "221",
            fullname = "Jay Adams",
            tags = new string[] { "work" },
            lastcontacted = DateTimeOffset.UtcNow,
            worklocation = new
            {
                type = "Point",
                coordinates = new [] { -122.131577, 47.678581 }
            }
        },
        new
        {
            id = "714",
            fullname = "Catherine Abel",
            tags = new string[] { "work", "personal" },
            lastcontacted = DateTimeOffset.UtcNow,
            worklocation = new
            {
                type = "Point",
                coordinates = new [] { -121.825579, 47.1419814}
            }
        }
    }
};

var response = client.PostAsync("https://scottgu-dev.search.windows.net/indexes/contacts/docs/index?api-version=2014-07-31-Preview",
                                new StringContent(JsonConvert.SerializeObject(batch), Encoding.UTF8, "application/json")).Result;
response.EnsureSuccessStatusCode();

Searching an Index

After creating an index and populating it with data, I can now issue search requests against the index. Searches are simple HTTP GET requests against the index, and responses contain the data we originally uploaded as well as accompanying scoring information.

I can do a simple search by executing the code below, where searchText is a string containing the user input, something like abel work for example:

var response = client.GetAsync("https://scottgu-dev.search.windows.net/indexes/contacts/docs?api-version=2014-07-31-Preview&search=" + Uri.EscapeDataString(searchText)).Result;
response.EnsureSuccessStatusCode();

dynamic results = JsonConvert.DeserializeObject(response.Content.ReadAsStringAsync().Result);

foreach (var result in results.value)
{
    Console.WriteLine("FullName:" + result.fullname + " score:" + (double)result["@search.score"]);
}

Learning More

The above is just a simple scenario of what you can do.  There are a lot of other things we could do with searches. For example, I can use query string options to filter, sort, project and page over the results. I can use hit-highlighting and faceting to create a richer way to navigate results and suggestions to implement auto-complete within my web or mobile UI.
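
For instance, a filtered, sorted and paged query could look roughly like the sketch below; the $filter/$orderby/$top/$skip parameters follow the OData-style query syntax of the 2014-07-31-Preview API version used above, and the filter expression itself is just an illustrative assumption:

// a hedged sketch only: the filter field/value below are assumptions for illustration
var url = "https://scottgu-dev.search.windows.net/indexes/contacts/docs" +
          "?api-version=2014-07-31-Preview" +
          "&search=" + Uri.EscapeDataString(searchText) +
          "&$filter=" + Uri.EscapeDataString("lastcontacted ge 2014-01-01T00:00:00Z") +
          "&$orderby=" + Uri.EscapeDataString("lastcontacted desc") +
          "&$top=10&$skip=0";

var response = client.GetAsync(url).Result;
response.EnsureSuccessStatusCode();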

In this example, I used the default ranking model, which uses statistics of the indexed text and search string to compute scores. You can also author your own scoring profiles that model scores in ways that match the needs of your application.

Check out the Azure Search documentation for more details on how to get started, and some of the more advanced use-cases you can take advantage of. With the free tier now available at no cost to every Azure subscriber, there is no longer any reason not to have Search fully integrated within your applications.

Virtual Machines: Support for SQL Server AlwaysOn, VM Depot images

Last month we added support for managing VMs within the Azure Preview Portal (http://portal.azure.com).  We also released built-in portal support that enables you to easily create multi-VM SharePoint Server Farms as well as a slew of additional Azure Certified VM images.  You can learn more about these updates in my last blog post.

Today, I’m excited to announce new support for automatically deploying SQL Server VMs with AlwaysOn configured, as well as integrated portal support for community supported VM Depot images.

SQL Server AlwaysOn Template

AlwaysOn Availability Groups, released in SQL Server 2012 and enhanced in SQL Server 2014, guarantee high availability for mission-critical workloads. Last year we started supporting SQL Availability Groups on Azure Infrastructure Services. In such a configuration, two SQL replicas (primary and secondary), each in its own Azure VM, are configured for automatic failover, and a listener (DNS name) is configured for client connectivity. Other components required are a file share witness to guarantee quorum in the configuration to avoid “split brain” scenarios, and a domain controller to join all VMs to the same domain. The SQL as well as the domain controller replicas are each deployed to an availability set to ensure they are in different Azure failure and upgrade domains.

Prior to today’s release, setting up the Availability Group configuration could be tedious and time consuming. We have dramatically simplified this experience through a new SQL Server AlwaysOn template in the Azure Gallery. This template fully automates the configuration of a highly available SQL Server deployment on Azure Infrastructure Services using an Availability Group.

You can find the template by navigating to the Azure Gallery within the Azure Preview Portal (http://portal.azure.com), selecting the Virtual Machine category on the left and selecting the SQL Server 2014 AlwaysOn gallery item. In the gallery details page, select Create. All you need is to provide some basic configuration information such as the administrator credentials for the VMs and the rest of the settings are defaulted for you. You may consider changing the defaults for Listener name as this is what your applications will use to connect to SQL Server.


Upon creation, 5 VMs are created in the resource group: 2 VMs for the SQL Server replicas, 2 VMs for the Domain Controller replicas, and 1 VM for the file share witness.

Once created, you can RDP to one of the SQL Server VMs to see the Availability Group configuration as depicted below:


Try out the SQL Server AlwaysOn template in the Azure Preview Portal today and give us your feedback!

VM Depot in Azure Gallery

Community-driven VM Depot images have been supported on the Azure platform for a couple of years now. But prior to today’s release they weren’t fully integrated into the mainline user experience.

Today, I’m excited to announce that we have integrated community VMs  into the Azure Preview Portal and the Azure gallery. With this release, you will find close to 300 pre-configured Virtual Machine images for Microsoft Azure.

Using these images, fully functional Virtual Machines can be deployed in the Preview Portal in minutes and customized for specific use cases. Starting from base operating system distributions (such as Debian, Ubuntu, CentOS, Suse and FreeBSD) through developer stacks (such as LAMP, Ruby on Rails, Node and Django), to complete applications (such as Wordpress, Drupal and Apache Solr), there is something for everyone in VM Depot.

Try out the VM Depot images in the Azure gallery from within the Virtual Machine category.

Web Sites: WebJobs and Process Management in the Preview Portal

Starting with today’s Azure release, Web Site WebJobs are now supported in the Azure Preview Portal. You can also now drill into your Web Sites and monitor the health of any processes running within them (both to host your web code as well as your web jobs).

Web Site WebJobs

Using WebJobs, you can now run any code within your Azure Web Sites – and do so in a way that is readily parallelizable, globally scalable, and complete with remote debugging, full VS support and an optional SDK to facilitate authoring. For more information about the power of WebJobs, visit Azure WebJobs recommended resources.

With today’s Azure release, we now support two types of Webjobs: on Demand and Continuous.  To use WebJobs in the preview portal, navigate to your web site and select the WebJobs tile within the Web Site blade. Notice that the part also now shows the count of WebJobs available.


By drilling into the tile, you can view existing WebJobs as well as create new OnDemand or Continuous WebJobs. Scheduled WebJobs are not yet supported in the preview portal, but expect to see this in the near future.

Web Site Processes

I’m excited to announce a new feature in the Azure Web Sites experience in the Preview Portal - Websites Processes. Using Websites Process you can enumerate the different instances of your site, browse through the different processes on each instance, and even drill down to the handles and modules associated with each process. You can then check for detailed information like version, language and more.


In addition, you also get rich monitoring for CPU, Working Set and Thread count at the process level.  Just like with Task Manager for Windows, data collection begins when you open the Websites Processes blade, and stops when you close it.


This feature is especially useful when your site has been scaled out and is misbehaving in some specific instances but not in others. You can quickly identify runaway processes, find open file handles, and even kill a specific process instance.

Monitoring and Management SDK: Programmatic Access to Monitoring Data

The Azure Management Portal provides built-in monitoring and management support that makes it easy for you to track the health of your applications and solutions deployed within Azure.

If you want to programmatically access monitoring and management features in Azure, you can also now use our .NET SDK from Nuget. We are releasing this SDK to general availability today, so you can now use it for your production services!

For example, if you want to build your own custom dashboard that shows metric data from across your services, you can get that metric data via the SDK:

// Create the metrics client by obtaining the certificate with the specified thumbprint.
MetricsClient metricsClient = new MetricsClient(new CertificateCloudCredentials(SubscriptionId, GetStoreCertificate(Thumbprint)));

// Build the resource ID string.
string resourceId = ResourceIdBuilder.BuildWebSiteResourceId("webtest-group-WestUSwebspace", "webtests-site");

// Get the metric definitions.
MetricDefinitionCollection metricDefinitions = metricsClient.MetricDefinitions.List(resourceId, null, null).MetricDefinitionCollection;

// Display the available metric definitions.
Console.WriteLine("Choose metrics (comma separated) to list:");
int count = 0;
foreach (MetricDefinition metricDefinition in metricDefinitions.Value)
{
    Console.WriteLine(count + ":" + metricDefinition.DisplayName);
    count++;
}

// Ask the user which metrics they are interested in.
var desiredMetrics = Console.ReadLine().Split(',').Select(x => metricDefinitions.Value.ToArray()[Convert.ToInt32(x.Trim())]);

// Get the metric values for the last 20 minutes.
MetricValueSetCollection values = metricsClient.MetricValues.List(
    resourceId,
    desiredMetrics.Select(x => x.Name).ToList(),
    "",
    desiredMetrics.First().MetricAvailabilities.Select(x => x.TimeGrain).Min(),
    DateTime.UtcNow - TimeSpan.FromMinutes(20),
    DateTime.UtcNow
).MetricValueSetCollection;

// Display the metric values to the user.
foreach (MetricValueSet valueSet in values.Value)
{
    Console.WriteLine(valueSet.DisplayName + " for the past 20 minutes:");
    foreach (MetricValue metricValue in valueSet.MetricValues)
    {
        Console.WriteLine(metricValue.Timestamp + "\t" + metricValue.Average);
    }
}

Console.Write("Press any key to continue:");
Console.ReadKey();

We support metrics for a variety of services with the monitoring SDK:

Service             Typical metrics                                      Frequencies
Cloud services      CPU, Network, Disk                                   5 min, 1 hr, 12 hrs
Virtual machines    CPU, Network, Disk                                   5 min, 1 hr, 12 hrs
Websites            Requests, Errors, Memory, Response time, Data out    1 min, 1 hr
Mobile Services     API Calls, Data Out, SQL performance                 1 hr
Storage             Requests, Success rate, End2End latency              1 min, 1 hr
Service Bus         Messages, Errors, Queue length, Requests             5 min
HDInsight           Containers, Apps running                             15 min

If you’d like to manage advanced autoscale settings that aren’t possible to do in the Portal, you can also do that via the SDK. For example, you can construct autoscale based on custom metrics – you can autoscale by anything that is returned from MetricDefinitions.

All of the documentation on the SDK is available on MSDN.

API Management: Support for Services REST API

We launched the Azure API Management service into preview in May of this year.  The API Management service enables  customers to quickly and securely publish APIs to partners, the public development community, and even internal developers.

Today, I’m excited to announce the availability of the API Management REST API which opens up a large number of new scenarios. It can be used to manage APIs, products, subscriptions, users and groups in addition to accessing your API analytics. In fact, virtually any management operation available in the Management API Portal is now accessible programmatically - opening up a host of integration and automation scenarios, including directly monetizing an API with your commerce provider of choice, taking over user or subscription management, automating API deployments and more.

We've even provided an additional SAS (Shared Access Signature) security option. An integrated experience in the publisher portal allows you to generate SAS tokens - so securely calling your API service couldn’t be easier. In just three easy steps:

  1. Enable the API on the System Settings page on the Publisher Portal
  2. Acquire a time-limited access token either manually or programmatically
  3. Start sending requests to the API, providing the token with every request


See the REST API reference for full details.

Delegation of user registration and product subscription

The new API Management REST API makes it easy to automate and integrate other processes with API management. Many customers integrating in this way already have a user account system and would prefer to use this existing resource, instead of the built-in functionality provided by the Developer Portal. This feature, called Delegation, enables your existing website or backend to own the user data, manage subscriptions and seamlessly integrate with API Management's dynamically generated API documentation.


It's easy to enable Delegation: in the Publisher Portal navigate to the Delegation section and enable Delegated Sign-in and Sign up, provide the endpoint URL and validation key and you're good to go. For more details, check out the how-to guide.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign-up for a free trial and start using all of the above features today. Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at:twitter.com/scottgu

Categories: Architecture, Programming

Two lessons on haproxy checks and swap space

Agile Testing - Grig Gheorghiu - Thu, 08/21/2014 - 02:03
Let's assume you want to host a Wordpress site which is not going to get a lot of traffic. You want to use EC2 for this. You still want as much fault tolerance as you can get at a decent price, so you create an Elastic Load Balancer endpoint which points to 2 (smallish) EC2 instances running haproxy, with each haproxy instance pointing in turn to 2 (not-so-smallish) EC2 instances running Wordpress (Apache + MySQL).

You choose to run haproxy behind the ELB because it gives you more flexibility in terms of load balancing algorithms, health checks, redirections etc. Within haproxy, one of the Wordpress servers is marked as a backup for the other, so it only gets hit by haproxy when the primary one goes down. On this secondary Wordpress instance you set up MySQL to be a slave of the primary instance's MySQL.

Here are two things (at least) that you need to make sure you have in this scenario:

1) Make sure you specify the httpchk option in haproxy.cfg, otherwise the primary server will not be marked as down even if Apache goes down. So you should have something like:

backend servers-http
  server s1 10.0.1.1:80 weight 1 maxconn 5000 check port 80
  server s2 10.0.1.2:80 backup weight 1 maxconn 5000 check port 80
  option httpchk GET /

2) Make sure you have swap space in case the memory on the Wordpress instances gets exhausted, in which case random processes will be killed by the OOM killer (and one of those processes can be mysqld). By default, there is no swap space when you spin up an Ubuntu EC2 instance. Here's how to set up a 2 GB swapfile:

dd if=/dev/zero of=/swapfile1 bs=1024 count=2097152
mkswap /swapfile1
chmod 0600 /swapfile1
swapon /swapfile1
echo "/swapfile1 swap swap defaults 0 0" >> /etc/fstab
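
To confirm that the swapfile is active (a quick sanity check, not part of the original recipe):

swapon -s    # or: free -m

Both should now show the 2 GB of swap.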
I hope these two things will help you if you're not already doing them ;-)