
Architecture

Mocking a REST backend for your AngularJS / Grunt web application

Xebia Blog - Thu, 06/26/2014 - 17:15

Anyone who has ever developed a web application knows that a lot of time is spent in a browser checking that everything works and looks good, and you want to make sure it looks good in all possible situations. For a single-page application built with a framework such as AngularJS, which gets all its data from a REST backend, this means you should verify your front-end against different responses from your backend. For a small application with primarily GET requests to display data, you might get away with testing against your real (development) backend. But for large and complex applications, you need to mock your backend.

In this post I'll explain in detail how you can solve this by mocking GET requests for an AngularJS web application that's built using Grunt.

In our current project, we're building a new mobile front-end for an existing web application. Very convenient, since the backend already exists with all the REST services that we need. An even bigger convenience is that the team that built the existing web application also built an entire mock implementation of the backend. This mock implementation gives standard responses for every possible request. Great for our Protractor end-to-end tests! (Perhaps another post about that another day.) But this mock implementation is not so great for the non-standard scenarios. Think of error messages, incomplete data, large numbers or an unusual currency. How can we make sure our UI displays these kinds of cases correctly? We usually cover all these cases in our unit tests, but sometimes you just want to see it right in front of you as well. So we started building a simple solution right inside our Grunt configuration.

To make this solution work, we need to make sure that all our REST requests go through the Grunt web server layer. Our web application is served by Grunt on localhost port 9000. This is the standard configuration that Yeoman generates (you really should use Yeoman to scaffold your project). Our development backend also runs on localhost, but on port 5000. In our web application we want to make all REST calls using the `/api` path, so we need to forward all requests to http://localhost:9000/api to our backend at http://localhost:5000/api. We can do this by adding middleware to the connect:livereload configuration of our Gruntfile.

livereload: {
  options: {
    open: true,
    middleware: function (connect, options) {
      return [
        require('connect-modrewrite')(['^/api http://localhost:5000/api [P L]']),

        /* The lines below are generated by Yeoman */
        connect.static('.tmp'),
        connect().use(
          '/bower_components',
          connect.static('./bower_components')
        ),
        connect.static(appConfig.app)
      ];
    }
  }
},

Do the same for the connect:test section as well.

Since we're using 'connect-modrewrite' here, we'll have to add this to our project:

npm install connect-modrewrite --save-dev

With this configuration, every request starting with http://localhost:9000/api will be passed on to http://localhost:5000/api, so we can just use /api in our AngularJS application. Now that we have this working, we can write some custom middleware to mock some of our requests.
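For reference, this is roughly what a call through the proxy looks like from the application side. A minimal sketch, assuming a hypothetical UserService (the module, service and response shape are illustrative, not from the original project):

// Hypothetical AngularJS service: '/api/user' is resolved against the Grunt
// server on port 9000 and proxied to http://localhost:5000/api/user.
angular.module('myApp').factory('UserService', ['$http', function ($http) {
  return {
    getUser: function () {
      return $http.get('/api/user').then(function (response) {
        return response.data; // e.g. {"id": 1, "name": "Bob"}
      });
    }
  };
}]);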

Let's say we have a GET request /api/user returning some JSON data:

{"id": 1, "name":"Bob"}

Now we'd like to see what happens with our application in case the name is missing:

{"id": 1}

It would be nice if we could send a simple POST request to change the response of all subsequent calls. Something like this:

curl -X POST -d '{"id": 1}' http://localhost:9000/mock/api/user

We prefix the path that we want to mock with /mock so that we know when a request is meant to define a mock. Let's see how we can implement this. In the same Gruntfile that contains our middleware configuration, we add a new function that will help us mock our requests.

var mocks = {}; // mocked response bodies, keyed by request path
function captureMock() {
  return function (req, res, next) {

    // match on POST requests starting with /mock
    if (req.method === 'POST' && req.url.indexOf('/mock') === 0) {

      // everything after /mock is the path that we need to mock
      var path = req.url.substring(5);

      var body = '';
      req.on('data', function (data) {
        body += data;
      });
      req.on('end', function () {

        mocks[path] = body;

        res.writeHead(200);
        res.end();
      });
    } else {
      next();
    }
  };
}

And we need to add the above function to our middleware configuration:

middleware: function (connect, options) {
  return [
    captureMock(),
    require('connect-modrewrite')(['^/api http://localhost:5000/api [P L]']),

    connect.static('.tmp'),
    connect().use(
      '/bower_components',
      connect.static('./bower_components')
    ),
    connect.static(appConfig.app)
  ];
}

Our function will be called for each incoming request. It treats each POST request starting with /mock as a request to define a mock response. It then stores the body in the mocks object with the path as key. So if we execute our curl POST request, we end up with something like this in our mocks object:

mocks['/api/user'] = '{"id": 1}';

Next we need to actually return this data for requests to http://localhost:9000/api/user. Let's make a new function for that.

function mock() {
  return function (req, res, next) {
    var mockedResponse = mocks[req.url];
    if (mockedResponse) {
      res.writeHead(200);
      res.write(mockedResponse);
      res.end();
    } else {
      next();
    }
  };
}

And also add it to our middleware.

  ...
  captureMock(),
  mock(),
  require('connect-modrewrite')(['^/api http://localhost:5000/api [P L]']),
  ...
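With both functions wired into the middleware, the round trip can be verified from the command line: the POST registers the mock, and a subsequent GET to the real path returns it:

curl -X POST -d '{"id": 1}' http://localhost:9000/mock/api/user
curl http://localhost:9000/api/user
{"id": 1}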

Great, we now have a simple mocking solution in just a few lines of code that allows us to send simple POST requests to our server with the responses we want to mock. However, it can only return a 200 status code and it cannot differentiate between HTTP methods like GET, PUT, POST and DELETE. Let's change our functions a bit to support that functionality as well.

var mocks = {
  GET: {},
  PUT: {},
  POST: {},
  PATCH: {},
  DELETE: {}
};

function captureMock() {
  return function (req, res, next) {
    if (req.method === 'POST' && req.url.indexOf('/mock') === 0) {
      var path = req.url.substring(5);

      var body = '';
      req.on('data', function (data) {
        body += data;
      });
      req.on('end', function () {

        var headers = {
          'Content-Type': req.headers['content-type']
        };
        // any header prefixed with mock-header- is passed through
        // to the mocked response, without the prefix
        for (var key in req.headers) {
          if (req.headers.hasOwnProperty(key)) {
            if (key.indexOf('mock-header-') === 0) {
              headers[key.substring(12)] = req.headers[key];
            }
          }
        }

        mocks[req.headers['mock-method'] || 'GET'][path] = {
          body: body,
          // writeHead expects a numeric status code
          responseCode: parseInt(req.headers['mock-response'], 10) || 200,
          headers: headers
        };

        res.writeHead(200);
        res.end();
      });
    } else {
      next();
    }
  };
}

function mock() {
  return function (req, res, next) {
    // look up a mocked response for this method and path, if one was registered
    var methodMocks = mocks[req.method] || {};
    var mockedResponse = methodMocks[req.url];
    if (mockedResponse) {
      res.writeHead(mockedResponse.responseCode, mockedResponse.headers);
      res.write(mockedResponse.body);
      res.end();
    } else {
      next();
    }
  };
}

We can now create more advanced mocks:

curl -X POST \
    -H "mock-method: DELETE" \
    -H "mock-response: 403" \
    -H "Content-type: application/json" \
    -H "mock-header-Last-Modified: Tue, 15 Nov 1994 12:45:26 GMT" \
    -d '{"error": "Not authorized"}' http://localhost:9000/mock/api/user

curl -D - -X DELETE http://localhost:9000/api/user
HTTP/1.1 403 Forbidden
Content-Type: application/json
last-modified: Tue, 15 Nov 1994 12:45:26 GMT
Date: Wed, 18 Jun 2014 13:39:30 GMT
Connection: keep-alive
Transfer-Encoding: chunked

{"error": "Not authorized"}

Since we thought this would be useful for other developers, we decided to make all of this available as an open-source library on GitHub and npm.

To add this to your project, just install with npm:

npm install mock-rest-request --save-dev

And of course add it to your middleware configuration:

middleware: function (connect, options) {
  var mockRequests = require('mock-rest-request');
  return [
    mockRequests(),
    
    connect.static('.tmp'),
    connect().use(
      '/bower_components',
      connect.static('./bower_components')
    ),
    connect.static(appConfig.app)
  ];
}

The New Competitive Landscape

"All men can see these tactics whereby I conquer, but what none can see is the strategy out of which victory is evolved." -- Sun Tzu

If it feels like strategy cycles are shrinking, they are.

If it feels like competition is even more intense, it is.

If it feels like you are balancing between competing in the world and collaborating with the world, you are.

In the book, The Future of Management, Gary Hamel and Bill Breen share a great depiction of this new world of competition and the emerging business landscape.

Strategy Cycles are Shrinking

Strategy cycles are shrinking and innovation is the only effective response.

Via The Future of Management:

“In a world where strategy life cycles are shrinking, innovation is the only way a company can renew its lease on success.  It's also the only way it can survive in a world of bare-knuckle competition.”

Fortifications are Collapsing

What previously kept people out of the game, no longer works.

Via The Future of Management:

“In decades past, many companies were insulated from the fierce winds of Schumpeterian competition.  Regulatory barriers, patent protection, distribution monopolies, disempowered customers, proprietary standards, scale advantages, import protection, and capital hurdles were bulwarks that protected industry incumbents from the margin-crushing impact of Darwinian competition.  Today, many of the fortifications are collapsing.”

Upstarts No Longer Have to Build a Global Infrastructure to Reach a Worldwide Market

Any startup can reach the world, without having to build their own massive data center to do so.

Via The Future of Management:

“Deregulation and trade liberalization are reducing the barriers to entry in industries as diverse as banking, air transport, and telecommunications.  The power of the Web means upstarts no longer have to build a global infrastructure to reach a worldwide market.  This has allowed companies like Google, eBay, and MySpace to scale their businesses freakishly fast.”

The Disintegration of Large Companies and New Entrants Start Strong

There are global resource pools of top talent available to startups.

Via The Future of Management:

“The disintegration of large companies, via deverticalization and outsourcing has also helped new entrants.  In turning out more and more of their activities to third-party contractors, incumbents have created thousands of 'arms suppliers' that are willing to sell their services to anyone.  By tapping into this global supplier base of designers, brand consultants, and contract manufacturers, new entrants can emerge from the womb nearly full-grown.” 

Ultra-Low-Cost Competition and Less Ignorant Consumers

With smarter consumers and ultra-low-cost competition, it’s tough to compete.

Via The Future of Management:

“Incumbents must also contend with a growing horde of ultra-low-cost competitors - companies like Huawei, the Chinese telecom equipment maker that pays its engineers a starting salary of just $8,500 per year.  Not all cut-price competition comes from China and India.  Ikea, Zara, Ryanair, and AirAsia are just a few of the companies that have radically reinvented industry cost structures.  Web-empowered customers are also hammering down margins.  Before the Internet, most consumers couldn't be sure whether they were getting the best deal on their home mortgage, credit card debt, or auto loan.  This lack of enlightenment buttressed margins.  But consumers are becoming less ignorant by the day.  One U.K. Web site encourages customers to enter the details of their most-used credit cards, including current balances, and then shows them exactly how much they will save by switching to a card with better payment terms.  In addition, the Internet is zeroing-out transaction costs.  The commissions earned by market makers of all kinds -- dealers, brokers, and agents -- are falling off a cliff, or soon will be.”

Distribution Monopolies are Under Attack

You can build your own fan base and reach the world.

Via The Future of Management:

“Distribution monopolies -- another source of friction -- are under attack.  Unlike the publishers of newspapers and magazines, bloggers don't need a physical distribution network to reach their readers.  Similarly, new bands don't have to kiss up to record company reps when they can build a fan base via social networking sites like MySpace.”

Collapsing Entry Barriers and Customer Power Squeeze Margins

Customers have a lot more choice and power now.

Via The Future of Management:

“Collapsing entry barriers, hyper efficient competitors, customer power -- these forces will be squeezing margins for years to come.  In this harsh new world, every company will be faced with a stark choice: either set the fires of innovation ablaze, or be ready to scrape out a mean existence in a world where seabed labor costs are the only difference between making money and going bust.”

What’s the solution?

Innovation.

Innovation is the way to play, and it’s the way to stay in the game.

Innovation is how you reinvent your success, reimagine a new future, and change what you're capable of, so you can compete more effectively in today's ever-changing world.


Categories: Architecture, Programming

The Secret of Scaling: You Can't Linearly Scale Effort with Capacity

The title is a paraphrase of something Raymond Blum, who leads a team of Site Reliability Engineers at Google, said in his talk How Google Backs Up the Internet. I thought it a powerful enough idea that it should be pulled out on its own:

Mr. Blum explained that common backup strategies don't work for Google for a very googly-sounding reason: typically they scale effort with capacity.

If backing up twice as much data requires twice as much stuff to do it, where stuff is time, energy, space, etc., it won't work; it doesn't scale.

You have to find efficiencies so that capacity can scale faster than the effort needed to support that capacity.

A different plan is needed when making the jump from backing up one exabyte to backing up two exabytes.
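As a toy illustration of the difference (the constants below are invented for the example, and obviously not Google's real numbers):

// Effort that scales linearly with capacity vs. effort that grows only
// logarithmically because automation absorbs each doubling of the data.
function linearEffort(exabytes) {
  return exabytes * 10; // say, 10 engineer-days per exabyte
}

function sublinearEffort(exabytes) {
  return 10 * (Math.log(exabytes + 1) / Math.LN2); // cost per doubling, not per byte
}

[1, 2, 4, 8, 16].forEach(function (eb) {
  console.log(eb + ' EB: linear=' + linearEffort(eb) +
    ' days, sublinear=' + sublinearEffort(eb).toFixed(1) + ' days');
});

Under the linear model the effort doubles every time the data does; under the sublinear model each doubling costs a roughly constant increment, which is the only kind of curve that survives at exabyte scale.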

When you hear the idea of not scaling effort with capacity it sounds so obvious that it doesn't warrant much further thought. But it's actually a profound notion, worthy of better treatment than I'm giving it here.

Categories: Architecture

How Agile accelerates your business

Xebia Blog - Wed, 06/25/2014 - 10:11

This drawing explains how agility accelerates your business. It is free to use and distribute. Should you have any questions regarding the subjects mentioned, feel free to get in touch.
[Drawing: how agility accelerates your business]

Software architecture as code

Coding the Architecture - Simon Brown - Tue, 06/24/2014 - 21:22

If you've been following the blog, you will have seen a couple of posts recently about the alignment of software architecture and code. Software architecture vs code talks about the typical gap between how we think about the software architecture vs the code that we write, while An architecturally-evident coding style shows an example of how to ensure that the code does reflect those architectural concepts. The basic summary of the story so far is that things get much easier to understand if your architectural ideas map simply and explicitly into the code.

Regular readers will also know that I'm a big fan of using diagrams to visualise and communicate the architecture of a software system, and this "big picture" view of the world is often hard to see from the thousands of lines of code that make up our software systems. One of the things that I teach people during my sketching workshops is how to sketch out a software system using a small number of simple diagrams, each at a different level of abstraction. This is based upon my C4 model, which you can find an introduction to at Simple sketches for diagramming your software architecture. The feedback from people using this model has been great, and many have a follow-up question of "what tooling would you recommend?". My answer has typically been "Visio or OmniGraffle", but it's obvious that there's an opportunity here.

Representing the software architecture model in code

I've had a lot of different ideas over the past few months for how to create what is essentially a lightweight modelling tool, and for some reason all of these ideas came together last week while I was at the GOTO Amsterdam conference. I'm not sure why, but I had a number of conversations that inspired me in different ways, so I skipped one of the talks to throw some code together and test out some ideas. This is basically what I came up with...

It's a description of the context and container levels of my C4 model for the techtribes.je system. Hopefully it doesn't need too much explanation if you're familiar with the model, although there are some ways in which the code can be made simpler and more fluent. Since this is code though, we can easily constrain the model and version it. This approach works well for the high-level architectural concepts because there are very few of them, plus it's hard to extract this information from the code. But I don't want to start crafting up a large amount of code to describe the components that reside in each container, particularly as there are potentially lots of them and I'm unsure of the exact relationships between them.

Scanning the codebase for components

If your code does reflect your architecture (i.e. you're using an architecturally-evident coding style), the obvious solution is to just scan the codebase for those components, and use those to automatically populate the model. How do we signify what a "component" is? In Java, we can use annotations...

Identifying those components is then a matter of scanning the source or the compiled bytecode. I've played around with this idea on and off for a few months, using a combination of Java annotations along with annotation processors and libraries including Scannotation, Javassist and JDepend. The Reflections library on Google Code makes this easy to do, and now I have a simple Java program that looks for my component annotation on classes in the classpath and automatically adds those to the model. As for the dependencies between components, again this is fairly straightforward to do with Reflections. I have a bunch of other annotations too, for example to represent dependencies between a component and a container or software system, but the principle is still the same - the architecturally significant elements and their dependencies can mostly be embedded in the code.

Creating some views

The model itself is useful, but ideally I want to look at that model from different angles, much like the diagrams that I teach people to draw when they attend my sketching workshop. After a little thought about what this means and what each view is constrained to show, I created a simple domain model to represent the context, container and component views...

Again, this is all in code so it's quick to create, versionable and very customisable.

Exporting the model

Now that I have a model of my software system and a number of views that I'd like to see, I could do with drawing some pictures. I could create a diagramming tool in Java that reads the model directly, but perhaps a better approach is to serialize the object model out to an external format so that other tools can use it. And that's what I did, courtesy of the Jackson library. The resulting JSON file is over 600 lines long (you can see it here), but don't forget most of this has been generated automatically by Java code scanning for components and their dependencies.

Visualising the views

The last question is how to visualise the information contained in the model and there are a number of ways to do this. I'd really like somebody to build a Google Maps or Prezi-style diagramming tool where you can pinch-zoom in and out to see different views of the model, but my UI skills leave something to be desired in that area. For the meantime, I've thrown together a simple diagramming tool using HTML 5, CSS and JavaScript that takes a JSON string and visualises the views contained within it. My vision here is to create a lightweight model visualisation tool rather than a Visio clone where you have to draw everything yourself. I've deployed this app on Pivotal Web Services and you can try it for yourself. You'll have to drag the boxes around to lay out the elements and it's not very pretty, but the concept works. The screenshot that follows shows the techtribes.je context diagram.

[Screenshot: the techtribes.je context diagram]

Thoughts?

All of the C4 model Java code is open source and sitting on GitHub. This is only a few hours of work so far and there are no tests, so think of this as a prototype more than anything else at the moment. I really like the simplicity of capturing a software architecture model in code, and using an architecturally-evident coding style allows you to create large chunks of that model automatically. This also opens up the door to some other opportunities such as automated build plugins, lightweight documentation tooling, etc. Caveats apply with the applicability of this to all software systems, but I'm excited at the possibilities. Thoughts?

Categories: Architecture


Performance at Scale: SSDs, Silver Bullets, and Serialization

This is a guest post by Aaron Sullivan, Director & Principal Engineer at Rackspace.

We all love a silver bullet. Over the last few years, if I were to split the outcomes that I see with Rackspace customers who start using SSDs, the majority of the outcomes fall under two scenarios. The first scenario is a silver bullet—adding SSDs creates near-miraculous performance improvements. The second scenario (the most common) is typically a case of the bullet being fired at the wrong target—the results fall well short of expectations.

With the second scenario, the file system, data stores, and processes frequently become destabilized. These demoralizing results, however, usually occur when customers are trying to speed up the wrong thing.

A common phenomenon at the heart of the disappointing SSD outcomes is serialization. Despite the fact that most servers have parallel processors (e.g. multicore, multi-socket), parallel memory systems (e.g. NUMA, multi-channel memory controllers), parallel storage systems (e.g. disk striping, NAND), and multithreaded software, transactions still must happen in a certain order. For some parts of your software and system design, processing goes step by step. Step 1. Then step 2. Then step 3. That’s serialization.

And just because some parts of your software or systems are inherently parallel doesn’t mean that those parts aren’t serialized behind other parts. Some systems may be capable of receiving and processing thousands of discrete requests simultaneously in one part, only to wait behind some other, serialized part. Software developers and systems architects have dealt with this in a variety of ways. Multi-tier web architecture was conceived, in part, to deal with this problem. More recently, database sharding also helps to address this problem. But making some parts of a system parallel doesn’t mean all parts are parallel. And some things, even after being explicitly enhanced (and marketed) for parallelism, still contain some elements of serialization.

How far back does this problem go? It has been with us in computing since the inception of parallel computing, going back at least as far as the 1960s(1). Over the last ten years, exceptional improvements have been made in parallel memory systems, distributed database and storage systems, multicore CPUs, GPUs, and so on. The improvements often follow after the introduction of a new innovation in hardware. So, with SSDs, we’re peering at the same basic problem through a new lens. And improvements haven’t just focused on improving the SSD, itself. Our whole conception of storage software stacks is changing, along with it. But, as you’ll see later, even if we made the whole storage stack thousands of times faster than it is today, serialization will still be a problem. We’re always finding ways to deal with the issue, but rarely can we make it go away.
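One classical way to put numbers on that residual serialization (my framing, not the author's) is Amdahl's law: if a fraction s of a workload is inherently serial, the best possible speedup with n parallel units is 1 / (s + (1 - s) / n). A quick sketch:

// Amdahl's law: the serial fraction caps the achievable speedup no matter
// how much parallel hardware (or how fast an SSD) you add. 5% is illustrative.
function maxSpeedup(serialFraction, units) {
  return 1 / (serialFraction + (1 - serialFraction) / units);
}

[1, 10, 100, 1000].forEach(function (n) {
  console.log(n + ' units: at most ' + maxSpeedup(0.05, n).toFixed(1) + 'x faster');
});

Even with 1000 parallel units, a 5 percent serial fraction caps the speedup near 20x, which is exactly why speeding up only the parallel parts so often disappoints.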

Parallelization and Serialization
Categories: Architecture

How to verify Web Service State in a Protractor Test

Xebia Blog - Sat, 06/21/2014 - 08:24

Sometimes it can be useful to verify the state of a web service in an end-to-end test. In my case, I was testing a web application that used a third-party JavaScript plugin to log page views to a REST service. I wanted to have some tests to verify that all our web pages included the plugin, and that it communicated properly with the REST service when a new page was opened.
Because the web pages were written with AngularJS, Protractor was our framework of choice for our end-to-end test suite. But how do you verify web service state in Protractor?

My first draft of a Protractor test looked like this:

var businessMonitoring = require('../util/businessMonitoring.js');
var wizard = require('./../pageobjects/wizard.js');

describe('Business Monitoring', function() {
  it('should log the page name of every page view in the wizard', function() {
    wizard.open();
    expect(wizard.activeStepNumber.getText()).toBe('1');

    // We opened the first page of the wizard and we expect it to have been logged
    expect(businessMonitoring.getMonitoredPageName()).toBe('/wizard/introduction');

    wizard.nextButton.click();
    expect(wizard.completeStep.getAttribute('class')).toContain('active');
    // We have clicked the 'next' button so the 'completed' page has opened,
    // this should have been logged as well
    expect(businessMonitoring.getMonitoredPageName()).toBe('/wizard/completed');
  });
});

The next thing I had to write was the businessMonitoring.js script, which should somehow contact the REST service to verify that the correct page name was logged.
First I needed a simple plugin to make HTTP requests. I found the 'request' npm package, which provides a simple API to make an HTTP request like this:

var request = require('request');

var executeRequest = function(method, url) {
  var defer = protractor.promise.defer();
  
  // method can be 'GET', 'POST' or 'PUT'
  request({uri: url, method: method, json: true}, function(error, response, body) {

    if (error || response.statusCode >= 400) {
      defer.reject({
        error : error,
        message : response
      });
    } else {
      defer.fulfill(body);
    }
  });

  // Return a promise so the caller can wait on it for the request to complete
  return defer.promise;
};

Then I completed the businessMonitoring.js script with a method that gets the last request from the REST service, using the request plugin.
It looked like this:

var businessMonitoring = exports; 

<.. the request wrapper with the executeRequest method is included here, left out for brevity ..>

businessMonitoring.getMonitoredPageName = function () {

    var defer = protractor.promise.defer();

    executeRequest('GET', 'lastRequest')  // Calls the method which was defined above
      .then(function success(data) {
        defer.fulfill(data.url);
      }, function error(e) {
        defer.reject('Error when calling BusinessMonitoring web service: ' + e);
      });

    return defer.promise;
 };

It just fires a GET request at the REST service to see which page was logged. It is an Ajax call, so the result is not immediately available and a promise is returned instead.
But when I plugged the script into my Protractor test, it didn't work. I could see that the requests to the REST service were made, but they were made immediately, before any of my end-to-end tests were executed.
How come?

The reason is that Protractor uses the WebDriverJS framework to handle its control flow. Statements like expect(), which we use in our Protractor tests, don't execute their assertions immediately; instead they put their assertions on a queue. WebDriverJS first fills the queue with all assertions and other statements from the test, and then it executes the commands on the queue. Click here for a more extensive explanation of the WebDriverJS control flow.
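A contrived sketch of that behaviour (the page and log messages are made up for illustration):

// Illustration only: the first console.log runs while Protractor is still
// building its command queue; the one inside then() runs during the test.
it('demonstrates the WebDriverJS control flow', function () {
  console.log('runs immediately, at queue-build time');
  browser.get('/wizard').then(function () {
    console.log('runs later, when the queue is executed');
  });
});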

That means that all statements in Protractor tests need to return promises; otherwise they execute immediately, while Protractor is still building its test queue. And that's what happened with my first implementation of the businessMonitoring mock.
The solution is to let getMonitoredPageName return its promise from within another promise, like this:

var businessMonitoring = exports; 

businessMonitoring.getMonitoredPageName = function () {
  // Return a promise that will execute the rest call,
  // so that the call is only done when the controlflow queue is executed.
  var deferredExecutor = protractor.promise.defer();

  deferredExecutor.then(function() {
    var defer = protractor.promise.defer();

    executeRequest('GET', 'lastRequest')
      .then(function success(data) {
        defer.fulfill(data.url);
      }, function error(e) {
        defer.reject('Error when calling BusinessMonitoring mock: ' + e);
      });

    return defer.promise;
  });

  return deferredExecutor;
};

Protractor takes care of resolving all the promises, so the code in my Protractor test did not have to be changed.

Concordion without the JUnit Code

Xebia Blog - Fri, 06/20/2014 - 20:58

Concordion is a framework to support Behaviour-Driven Development. It is based on JUnit to run tests and on HTML enriched with a little Concordion syntax to call fixture methods and make assertions on the test outcome. I won't describe Concordion itself because it is well documented here: http://concordion.org/.
Instead I'll describe a small utility class I've created to avoid code duplication. Concordion requires a JUnit class for each test. The utility described below allows you to run all Concordion tests without writing a JUnit class for each test.

In Concordion you specify test cases and expected outcomes in an HTML file. Each HTML file is accompanied by a Java class of the same name that has the @RunWith(ConcordionRunner.class) annotation. This Java class is comparable to FitNesse's fixture classes. Here you create methods that are used in the HTML file to connect the test case to the business code.

In my particular use case the team ended up writing lots of mostly empty Java files. The system under test processed XML message files, so all a test needed to do was call a single method to hand the XML to the business code and then validate the results. Each Java class was basically the same, except for its name.

To avoid this duplication I created a class that uses the Javassist library to generate a Java class on the fly and run it as a JUnit test. You can find my code on GitHub:

git clone git@github.com:jvermeir/concordionDemo

ConcordionRunner generates a class using a template. The template can be really simple, like in my example where FixtureTemplate extends MyFixture. MyFixture holds all fixture code needed to connect the test to the application under test. This is where we would put all fixture code necessary to call a service using an XML message; in the example there's just the single getGreeting() method. HelloWorldAgain.html is the actual Concordion test. It shows the call to getGreeting(), which is a method of MyFixture.

The dependencies are like this:

  • FixtureTemplate extends MyFixture
  • YourTest.html uses Concordion
  • Example uses ConcordionRunner, which uses JUnitCore

The example in Example.java shows how to use ConcordionRunner to execute a test. This could easily be extended to recursively go through a directory and execute all tests found. Note that Example writes the generated class to a file; this may help in troubleshooting but isn't strictly necessary.

Now it would be nice to adapt the Eclipse plugin so you can right-click the HTML file and run it as a Concordion test without adding a unit test.

Time is The Great Equalizer

Time really is the great equalizer.

I was reading an article by Dr. Donald E. Wemore, a time management specialist, and here’s what he had to say:

"Time is the great equalizer for all of us. We all have 24 hours in a day, 7 days a week, yielding 168 hours per week. Take out 56 hours for sleep (we do spend about a third of our week dead) and we are down to 112 hours to achieve all the results we desire. We cannot save time (ever have any time left over on a Sunday night that you could lop over to the next week?); it can only be spent. And there are only two ways to spend our time: we can spend it wisely, or, not so wisely."

Well put.

And what’s his recommendation to manage time better?

Work smarter, not harder.

In my experience, that’s the only approach that works.

If you find yourself struggling too much, there’s a good chance your time management strategies are off.

Don’t keep throwing time and energy at things if it’s not working.

Change your approach.

The fastest thing you can change in any situation is you.


Categories: Architecture, Programming

How I Explained My Job to My Grandmother

Well, she wasn’t my grandmother, but you get the idea.

I was trying to explain to somebody that’s in a very different job, what my job is all about.

Here’s what I said …

As far as my day job goes, I do complex, complicated things.

I'm in the business of business transformation.

I help large Enterprises get ahead in the world through technology and innovation.

I help Enterprises change their capabilities -- their business capabilities, technology capabilities, and people capabilities. 

It’s all about capabilities.

This involves figuring out their current state, their desired future state, the gaps between, the ROI of addressing the gaps, and then a Roadmap for making it happen.  

The interesting thing I've learned, though, is how much business transformation applies to personal transformation.

It's all about figuring out your unique service and contribution to the world -- your unique value -- and then optimizing your strengths to realize your potential and do what you do best in a way that's valued -- where you can both generate value, as well as capture the value -- and lift the world to a better place.

Interestingly, she said she got it, it made sense, and it sounded inspiring.

What a relief.

Categories: Architecture, Programming

Introduction to Agile Presentation

I gave an Introduction to Agile talk recently:

Introduction to Agile Presentation (Slideshow)

I kept it focused on three simple things:

  1. What is Agile and the Agile Mindset (the Values and Principles)
  2. A rapid tour of the big 3 (Extreme Programming, Scrum, and Lean)
  3. Build a shared vocabulary and simple mental models so teams can hit the ground running and work more effectively with each other.

The big takeaway that I wanted the audience to have was that it’s a journey, but a very powerful one.

It’s a very healthy way to create an organization that embraces agility, empowers people, and ships stuff that customers care about.

In fact, the most powerful aspect of going Agile is that you create a learning organization.

The system and ecosystem you are in can quickly improve if you simply embrace change and focus on learning as a way of driving both continuous improvement and growing capability.

So many things get a lot better over time, if they get a little better every day.

This was actually my first real talk on Agile and Agile development.  I’ve done lots of talks on Getting Results the Agile Way, and lots of other topics from security to performance to application architecture to team development and the Cloud.  But this was the first time a group asked me to share what I learned from Agile development in patterns & practices.

It was actually fun.

As part of the talk, I shared some of my favorite takeaways and insights from the Agile world.

I’ll be sure to share some of these insights in future posts.

For now, if there is one thing to take away, it’s a reminder from David Anderson (Agile Management):

“Don’t do Agile.  Embrace agility.”

Way to be.

I shared my slides on SlideShare at Introduction to Agile Presentation (Slides) to help you learn the language, draw the visuals, and spread the word.

I’ll try to share more of my slides in the future, now that SlideShare seems to be a bit more robust.


Categories: Architecture, Programming

Deploying a Node.js app to Docker on CoreOS using Deis

Xebia Blog - Wed, 06/18/2014 - 17:00

The world of on-premise private PaaSes is changing rapidly. A few years ago, we were building on-premise private PaaSes based upon the existing infrastructure, using Puppet as an automation tool to quickly provision new application servers. We provided a self-service portal where development teams could get any type of server in any type of environment running within minutes. We created a virtual server for each application to keep things manageable, which of course is quite resource intensive.

Since June 9th, Docker has been declared production ready, so this opens the option of provisioning lightweight containers to the teams instead of full virtual machines. This will increase the speed of provisioning even further, while reducing the cost of creating a platform and minimising resource consumption.

To illustrate how easy life is becoming, we are going to deploy a node.js application originally written for CloudFoundry to Docker on a CoreOS cluster. This hands-on experiment is based on MacOS, Vagrant and VirtualBox.

Step 1. Installing  etcdctl and fleetctl

Before you start, you need to install etcdctl and fleetctl on your host. etcd is a nifty distributed key-value store, while fleet manages the deployment of (Docker) services to a CoreOS cluster.
$ brew install go etcdctl
$ git clone https://github.com/coreos/fleet.git
$ cd fleet && ./build && mv bin/fleetctl /usr/local/bin

 

Step 2. Install the Deis Command Line Client

To control the PaaS you need to install the Deis command line client:

$ brew install python
$ sudo pip install deis

Step 3. Build the platform

Deis provides all the facilities for building, deploying and managing applications.

$ git clone https://github.com/deis/deis.git
$ cd deis
$ vagrant up

$ ssh-add ~/.vagrant.d/insecure_private_key
$ export DOCKER_HOST=tcp://172.17.8.100:4243
$ export FLEETCTL_TUNNEL=172.17.8.100
$ make pull

Step 4. Start the platform

Now all is set to start the platform:

$ make run

After this run has completed, you can see with the list-units command that the 7 components of the Deis architecture have been started: the builder, the cache, the controller, the database, the logger, the registry and the router. This architecture looks quite similar to that of CloudFoundry.

$ fleetctl list-units

UNIT                     STATE     LOAD    ACTIVE  SUB      DESC             MACHINE
deis-builder.service     launched  loaded  active  running  deis-builder     79874bde.../172.17.8.100
deis-cache.service       launched  loaded  active  running  deis-cache       79874bde.../172.17.8.100
deis-controller.service  launched  loaded  active  running  deis-controller  79874bde.../172.17.8.100
deis-database.service    launched  loaded  active  running  deis-database    79874bde.../172.17.8.100
deis-logger.service      launched  loaded  active  running  deis-logger      79874bde.../172.17.8.100
deis-registry.service    launched  loaded  active  running  deis-registry    79874bde.../172.17.8.100
deis-router.1.service    launched  loaded  active  running  deis-router      79874bde.../172.17.8.100

Alternatively, you can inspect the result by looking inside the virtual machine:

$ vagrant ssh -c "docker ps"

Now that we have our platform running, we can start using it!

Step 5. Register a new user to Deis and add the public ssh key

$ deis register 172.17.8.100:8000 \
     --username=mark \
     --password=goddesses \
     --email=mark.van.holsteijn@..com
$ deis keys:add ~/.ssh/id_rsa.pub

Step 6. Create a Cluster

Create an application cluster under the domain 'dev.172.17.8.100.xip.io'. The --hosts option specifies all hosts in the cluster; the only host available at this moment is 172.17.8.100.

$ deis clusters:create dev  dev.172.17.8.100.xip.io \
        --hosts=172.17.8.100 \
        --auth=~/.vagrant.d/insecure_private_key

Step 7. Get the app

We created a simple but effective node.js application that shows you what happens when you scale or push a new version of the application.

$ git clone git@github.com:mvanholsteijn/sample_nodejs_cf.git
$ cd sample_nodejs_cf
$ deis apps:create appmon --cluster=dev
$ deis config:set RELEASE=deis-v1
$ git push deis master

Step 8. Open your application

Voilà! Your application is running. Now click on 'start monitoring'.

$ deis apps:open 

You should see something like this:

[Screenshot: app-mon-1]

Step 9. Scaling your application

To see scaling in action, type the following command:

$ deis ps:scale web=4

It will start 3 new containers, which will show up in the list.

[Screenshot: app-mon-4]

 

Step 10. Upgrading your application

Now make a change to the application, for instance change the message to 'Greetings from Deis release', and push your change:

$ git commit -a -m "Message changed"
$ git push deis master

After a while you will see the following on your monitor!

[Screenshot: app-mon-8]

 

Step 11. Looking at CoreOS

You can use fleetctl again to look at the new services that have been added to the platform!

 

$ fleetctl list-units

UNIT                               STATE     LOAD    ACTIVE  SUB      DESC                       MACHINE
app-mon_v7.web.1-announce.service  launched  loaded  active  running  app-mon_v7.web.1 announce  79874bde.../172.17.8.100
app-mon_v7.web.1-log.service       launched  loaded  active  running  app-mon_v7.web.1 log       79874bde.../172.17.8.100
app-mon_v7.web.1.service           launched  loaded  active  running  app-mon_v7.web.1           79874bde.../172.17.8.100
app-mon_v7.web.2-announce.service  launched  loaded  active  running  app-mon_v7.web.2 announce  79874bde.../172.17.8.100
app-mon_v7.web.2-log.service       launched  loaded  active  running  app-mon_v7.web.2 log       79874bde.../172.17.8.100
app-mon_v7.web.2.service           launched  loaded  active  running  app-mon_v7.web.2           79874bde.../172.17.8.100
app-mon_v7.web.3-announce.service  launched  loaded  active  running  app-mon_v7.web.3 announce  79874bde.../172.17.8.100
app-mon_v7.web.3-log.service       launched  loaded  active  running  app-mon_v7.web.3 log       79874bde.../172.17.8.100
app-mon_v7.web.3.service           launched  loaded  active  running  app-mon_v7.web.3           79874bde.../172.17.8.100
app-mon_v7.web.4-announce.service  launched  loaded  active  running  app-mon_v7.web.4 announce  79874bde.../172.17.8.100
app-mon_v7.web.4-log.service       launched  loaded  active  running  app-mon_v7.web.4 log       79874bde.../172.17.8.100
app-mon_v7.web.4.service           launched  loaded  active  running  app-mon_v7.web.4           79874bde.../172.17.8.100
deis-builder.service               launched  loaded  active  running  deis-builder               79874bde.../172.17.8.100
deis-cache.service                 launched  loaded  active  running  deis-cache                 79874bde.../172.17.8.100
deis-controller.service            launched  loaded  active  running  deis-controller            79874bde.../172.17.8.100
deis-database.service              launched  loaded  active  running  deis-database              79874bde.../172.17.8.100
deis-logger.service                launched  loaded  active  running  deis-logger                79874bde.../172.17.8.100
deis-registry.service              launched  loaded  active  running  deis-registry              79874bde.../172.17.8.100
deis-router.1.service              launched  loaded  active  running  deis-router                79874bde.../172.17.8.100

 

Conclusion

Deis is a very simple and easy-to-use way to create a PaaS using Docker and CoreOS. The node.js application we created could be deployed using Deis without a single modification. We will be diving into Deis and CoreOS in more detail in future posts!

One Change at a Time

Xebia Blog - Tue, 06/17/2014 - 08:22

One of the twelve Agile principles states "At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly" [1]. Many Agile teams have a retrospective every two weeks that results in concrete actions to be executed in the next time period (sprint). There are many types of tuning and adjustment that a team can do, for example actions that improve the flow of work, automation of tasks, or team cooperation.

Is it a good habit for retrospectives to focus on the same type of improvement, or should the team alternate the type of improvements? In this blog I will look into the effect of multiple consecutive actions that affect the flow of work.

The simulation is inspired by the GetKanban game [2].

An Experiment

Ideally one would set up an experiment in which two exactly equivalent teams are compared. One team performs consecutive actions to improve the flow of work. The other team makes just one adjustment to improve the flow and focuses subsequent improvements on other areas. Meanwhile the flow of both teams is measured, and after a certain period of time one verifies whether the first or the second team achieved better results in terms of flow.

Such an experiment is (very) difficult to perform in practice. In this blog I will study the same question by means of simulation.

Simulation

For the purpose of the simulation I consider a team consisting of three specialists: 1 designer, 1 developer, and 1 tester. The team uses a kanban process to achieve flow. See the picture below for the starting situation.


[Image: the kanban board at the start of the simulation]

In the simulation, at the beginning of each working day it is determined how much work will be completed by the team. During the simulation the average cycle time is measured. The initial Work in Progress (WiP) limits are set to 3 for each column, as indicated by the red '3's in the picture.

The average amount of work done by the 'team' and the average effort of one work item are such that, on average, it takes one card about 5.5 days to complete a single column.

At the end of each work day, cards are pulled into the next columns (if allowed by the WiP limits). The policy is to always pull in as much work as allowed, so the columns are maximally filled. Furthermore, the backlog is assumed to always have enough user stories ready to be pulled into the 'design' column. This very much resembles developing a new product, when the backlog holds more than enough stories.

The system starts with a clean board, all columns empty. After letting the system run for 75 simulated work days, we trigger a policy change: the WiP limit for the 'design' column is increased from 3 to 5. After this policy change the system runs for another 100 work days.

From the chart showing the average cycle time we will be able to study the effect of WiP-limit adjustments.

Note:

The simulation assumes a simple uniform distribution for both the amount of work done by the team and the effort assigned to a work item. I assume this is OK for the purpose of this blog. A consequence of this is that the result probably can't be scaled. For instance, the situation in which a column in the picture above represents a whole scrum team is not covered, since a more complex probability distribution would be needed instead of the uniform one.
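For readers who want to experiment themselves, the sketch below shows the shape of such a simulation in JavaScript (runnable with node). It follows the set-up described above, but the concrete parameters of the two uniform distributions are my own assumptions, and I omit the 'ready' column for brevity, so the numbers it produces will differ from the chart below.

// A minimal sketch, not the exact simulation used for the chart below.
var COLUMNS = ['design', 'develop', 'test'];  // one specialist per column
var wip = { design: 3, develop: 3, test: 3 }; // initial WiP limits

function uniform(min, max) { return min + Math.random() * (max - min); }

// Each card needs a uniformly distributed amount of effort per column.
function newCard(day) {
  var effort = {};
  COLUMNS.forEach(function (col) { effort[col] = uniform(3, 8); });
  return { start: day, effort: effort };
}

function simulate(days, changeDay) {
  var board = { design: [], develop: [], test: [] };
  var cycleTimes = [];
  for (var day = 0; day < days; day++) {
    if (day === changeDay) { wip.design = 5; } // the policy change

    // Each specialist completes a uniformly distributed amount of work,
    // divided over the cards currently in his or her column.
    COLUMNS.forEach(function (col) {
      var capacity = uniform(2, 6);
      board[col].forEach(function (card) {
        var spend = Math.min(capacity, card.effort[col]);
        card.effort[col] -= spend;
        capacity -= spend;
      });
    });

    // At the end of the day, pull finished cards into the next column,
    // downstream first so that freed slots can be refilled immediately.
    [['test', null], ['develop', 'test'], ['design', 'develop']]
      .forEach(function (pair) {
        var col = pair[0], next = pair[1];
        board[col].slice().forEach(function (card) {
          if (card.effort[col] > 0) { return; } // not finished yet
          if (next === null) { // done: record the cycle time
            board[col].splice(board[col].indexOf(card), 1);
            cycleTimes.push(day - card.start + 1);
          } else if (board[next].length < wip[next]) { // WiP limit allows pull
            board[col].splice(board[col].indexOf(card), 1);
            board[next].push(card);
          }
        });
      });

    // The backlog always has enough stories ready to fill 'design'.
    while (board.design.length < wip.design) { board.design.push(newCard(day)); }
  }
  return cycleTimes;
}

var cycleTimes = simulate(175, 75);
var avg = cycleTimes.reduce(function (a, b) { return a + b; }, 0) / cycleTimes.length;
console.log('average cycle time: ' + avg.toFixed(1) + ' days');

Plotting a moving average of cycleTimes against the completion day reproduces the qualitative behaviour discussed below.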

Results

The picture below shows the result of running the experiment.

 

[Chart: average cycle time per simulated work day; the WiP limit for 'design' is raised from 3 to 5 at day 75]

 

After the start it takes the system a little over 40 work days to reach the stable state of an average cycle time of about 24(*) days. This is the cycle time one would expect. Remember, the 'ready' column has a size of 3 and the other columns get work done, so one would expect a cycle time of around 4 times 5.5 = 22 days, which is close to 24.

At day 75 the WiP limit is changed. As can be inferred from the picture, the cycle time only starts to rise at day 100; it takes about one cycle time (24 days) for the system to respond. The new stable state is reached at day 145 with an average cycle time of around 30(**) days. It takes 70 days(!) to reach the new equilibrium.

The chart shows the following interesting features:

  1. It takes roughly 2 times the (new) average cycle time to reach the equilibrium state,
  2. The response time (when one begins to notice an effect of the policy change) is about the length of the average cycle time.

(*) One can calculate (using transition matrices) the theoretical average cycle time for this system to be 24 days.

(**) Similarly, the theoretical average cycle time of the system after the policy change is 31 days.

 

Conclusion

In this blog we have seen that when a team makes adjustments that affect the flow, the system needs time to get to its new stable state. Until this state has been reached any new tuning of the flow is questionable. Simulations show that the time it takes to reach the new stable state is about 2 times the average cycle time.

For scrum teams that have 2-week sprints, the system may need about 2 months before new tuning of the flow becomes effective. Meanwhile, the team can very well focus on other improvements, e.g. retrospectives that focus on team aspects or on collaboration with the team's environment.

Moreover, don't expect to see any change in measurements of e.g. cycle time within one average cycle time after making a flow-affecting change.

To summarise, after making flow-affecting changes (e.g. increasing or decreasing WiP limits):

  • Let the system run for at least the duration of the average cycle time so it has time to respond to the change,
  • After it responds, notice the effect of the change,
  • If the effect is positive, let the system run for another duration of the average cycle time, to get to the new stable state,
  • If the effect is negative, do something else, e.g. go back to the old state, and remember that the system needs to respond to this as well!
References

[1] Agile manifesto, http://agilemanifesto.org/principles.html

[2] GetKanban, http://getkanban.com

Migrating to XtraDB Cluster in EC2 at PagerDuty

This is a guest post by Doug Barth, a software generalist who has currently found himself doing operations work at PagerDuty. Prior to joining PagerDuty, he worked for Signal in Chicago and Orbitz, an online travel company.

A little over six months ago, PagerDuty switched its production MySQL database to XtraDB Cluster running inside EC2. Here's the story of why and how we made the change.

How the Database Looked Before

PagerDuty's MySQL database was a fairly typical deployment. We had:

  • A pair of Percona Servers writing data to a DRBD-backed volume.

  • EBS disks for both the primary and secondary servers backing the DRBD volume.

  • Two synchronous copies of the production database. (In the event of a failure of the primary, we wanted to be able to quickly switch to the secondary server without losing any data.)

  • A number of asynchronous replication slaves for disaster recovery, backups and accidental modification recovery.

Problems With the Old Setup
Categories: Architecture

Combining Salt with Docker

Xebia Blog - Sat, 06/14/2014 - 10:59

You could use Salt to build and run Docker containers, but that is not how I use it here. This blog post is about Docker containers that run Salt minions, which is just an experiment. The use case? Suppose you have several containers that run a particular piece of middleware, and this middleware needs a security update, e.g. an OpenSSL hotfix, that must be applied immediately.

 

The Dockerfile

In order to build a container you have to write down the container description in a file called Dockerfile. Here is the Dockerfile:

#-------
# Standard heading stuff

FROM centos
MAINTAINER No Reply noreply@xebia.com

# Do Salt install stuff and squeeze in a master.conf snippet that tells the minion
# to contact the master specified.

RUN rpm -Uvh http://ftp.linux.ncsu.edu/pub/epel/6/i386/epel-release-6-8.noarch.rpm
RUN yum install -y salt-minion --enablerepo=epel-testing
# mkdir -p succeeds even when the directory already exists; a bare
# test-and-create construct would fail the build in that case.
RUN mkdir -p /etc/salt/minion.d
ADD ./master.conf /etc/salt/minion.d/master.conf

# Run the Salt Minion and do not detach from the terminal.
# This is important because the Docker container will exit whenever
# the CMD process exits.

CMD /usr/bin/salt-minion
#-------

 

Build the image

Time to run the Dockerfile through docker. The command is:

$ docker build --rm=true -t salt-minion .

provided that you run this command in the directory where the Dockerfile and master.conf reside. Docker creates an image with the tag ‘salt-minion’ and throws away all intermediate images after a successful build.

 

Run a container

The command is:

$ docker run -d salt-minion

and Docker returns:

aab154310ba6452ba2c686d15b1e3ca5fd85124d38c7935f1200d33b3a3e7ced

The Salt minion in the container starts and searches for a Salt master to connect to, defined by the configuration setting “master” in the file /etc/salt/minion.d/master.conf. You might want to run the Salt master in “auto_accept” mode so that minion keys are accepted automatically. Docker assigns a container id to the running container; that is the magic key that docker reports as the result of the run command.
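For completeness, a minimal master.conf could look like the snippet below. The IP address is a placeholder for your own Salt master, and auto_accept is configured on the master itself rather than in this file:

# /etc/salt/minion.d/master.conf (added to the container)
master: 192.168.1.10

# /etc/salt/master (on the Salt master host)
auto_accept: True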

The following command shows the running container:

$ docker ps
CONTAINER ID        IMAGE                COMMAND                CREATED             STATUS              NAMES
273a6b77a8fa        salt-minion:latest   /bin/sh -c /etc/rc.l   3 seconds ago       Up 3 seconds        distracted_lumiere
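Before applying any states, you can verify on the Salt master that the minion's key was accepted and that it responds. Something along these lines should work (the minion id and output will differ per setup):

$ salt-key -L
$ salt '*' test.ping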

 

Apply the hotfix

There you are: the Salt minion is controlled by your Salt master. Provided that you have a state module that contains the OpenSSL hotfix, you can now easily update all Docker nodes to include it:

salt \* state.sls openssl-hotfix
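What such a state module contains is out of scope for this post, but a hypothetical, minimal openssl-hotfix.sls might be no more than an instruction to upgrade the openssl package to the latest version:

# /srv/salt/openssl-hotfix.sls
openssl-hotfix:
  pkg.latest:
    - name: openssl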

That is all there is to it.

Setting up the hostname in Ubuntu

Agile Testing - Grig Gheorghiu - Fri, 06/13/2014 - 23:04
Most people recommend setting up the hostname on a Linux box so that:

1) running 'hostname' returns the short name (i.e. myhost)
2) running 'hostname -f' returns the FQDN (i.e. myhost.prod.example.com)
3) running 'hostname -d' returns the domain name (i.e. prod.example.com)

After experimenting a bit and also finding this helpful Server Fault post, here's what we did to achieve this (we did it via Chef recipes, but it amounts to the same thing):

  • make sure we have the short name in /etc/hostname:
myhost

(also run 'hostname myhost' at the command line)
  • make sure we have the FQDN as the first entry associated with the IP of the server in /etc/hosts:
10.0.1.10 myhost.prod.example.com myhost myhost.prod
  • make sure we have the domain name set up as the search domain in /etc/resolv.conf:
search prod.example.com

Reboot the box when you're done to make sure all of this survives reboots.
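If you prefer a script over Chef recipes, a rough equivalent of the steps above could look like the sketch below. The hostname, domain and IP are of course placeholders, and it assumes /etc/resolv.conf is not managed by resolvconf (in which case the search domain belongs in its configuration instead):

#!/bin/sh
# Hypothetical helper applying the three settings described above.
echo 'myhost' > /etc/hostname
hostname myhost
# The FQDN must be the first name after the IP in /etc/hosts:
grep -q 'myhost.prod.example.com' /etc/hosts || \
    echo '10.0.1.10 myhost.prod.example.com myhost myhost.prod' >> /etc/hosts
# Set up the search domain:
grep -q '^search prod.example.com' /etc/resolv.conf || \
    echo 'search prod.example.com' >> /etc/resolv.conf
# Verify:
hostname; hostname -f; hostname -d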


Simplicity is the Ultimate Enabler

“Everything should be made as simple as possible, but not simpler.” – Albert Einstein

Simplicity is among the ultimate pursuits.  It’s one of the most efficient and effective tools in your toolbox.  I used simplicity as the basis for my personal results system, Agile Results, and it’s served me well for more than a decade.

And yet, simplicity still isn’t treated as a first-class citizen.

It’s almost always considered as an afterthought.  And, by then, it’s too little, too late.

In the book, Simple Architectures for Complex Enterprises (Developer Best Practices), Roger Sessions shares his insights on how simplicity is the ultimate enabler to solving the myriad of problems that complexity creates.

Complex Problems Do Not Require Complex Solutions

Simplicity is the only thing that actually works.

Via Simple Architectures for Complex Enterprises (Developer Best Practices):

“So yes, the problems are complex.  But complex problems do not ipso facto require complex solutions.  Au contraire!  The basic premise of this book is that simple solutions are the only solutions to complex problems that work.  The complex solutions are simply too complex.”

Simplicity is the Antidote to Complexity

It sounds obvious but it’s true.  You can’t solve a problem with the same complexity that got you there in the first place.

Via Simple Architectures for Complex Enterprises (Developer Best Practices):

“The antidote to complexity is simplicity.  Replace complexity with simplicity and the battle is three-quarters over.  Of course, replacing complexity with simplicity is not necessarily simple.” 

Focus on Simplicity as a Core Value

If you want to achieve simplicity, you first have to explicitly focus on it as a core value.

Via Simple Architectures for Complex Enterprises (Developer Best Practices):

“The first thing you need to do to achieve simplicity is focus on simplicity as a core value.  We all discuss the importance of agility, security, performance, and reliability of IT systems as if they are the most important of all requirements.  We need to hold simplicity to as high a standard as we hold these other features.  We need to understand what makes architectures simple with as much critical reasoning as we use to understand what makes architectures secure, fast, or reliable.  In fact, I argue that simplicity is not merely the equal of these other characteristics; it is superior to all of them.  It is, in many ways, the ultimate enabler.”

A Security Example

Complex systems work against security.

Via Simple Architectures for Complex Enterprises (Developer Best Practices):

“Take security for example.  Simple systems that lack security can be made secure.  Complex systems that appear to be secure usually aren't.  And complex systems that aren't secure are virtually impossible to make either simple or secure.”

An Agility Example

Complexity works against agility, and agility is the key to lasting solutions.

Via Simple Architectures for Complex Enterprises (Developer Best Practices):

“Consider agility.  Simple systems, with their well-defined and minimal interactions, can be put together in new ways that were never considered when these systems were first created.  Complex systems can never be used in an agile way.  They are simply too complex.  And, of course, retrospectively making them simple is almost impossible.”

Nobody Ever Considers Simplicity as a Critical Feature

And that’s the problem.

Via Simple Architectures for Complex Enterprises (Developer Best Practices):

“Yet, despite the importance of simplicity as a core system requirement, simplicity is almost never considered in architectural planning, development, or reviews.  I recently finished a number of speaking engagements.  I spoke to more than 100 enterprise architects, CIOs, and CTOs spanning many organizations and countries.  In each presentation, I asked if anybody in the audience had ever considered simplicity as a critical architectural feature for any projects on which they had participated. Not one person had. Ever.”

The Quest for Simplicity is Never Over

Simplicity is a quest.  And the quest is never over.  Simplicity is an ongoing pursuit and it’s a dynamic one.  It’s not a one-time event, and it’s not static.

Via Simple Architectures for Complex Enterprises (Developer Best Practices):

“The quest for simplicity is never over.  Even systems that are designed from the beginning with simplicity in mind (rare systems, indeed!) will find themselves under a never-ending attack. A quick tweak for performance here, a quick tweak for interoperability there, and before you know it, a system that was beautifully simple two years ago has deteriorated into a mass of incomprehensibility.”

Simplicity is your ultimate sword for hacking your way through complexity … in work … in life … in systems … and ecosystems.

Wield it wisely.

You Might Also Like

10 Ways to Make Information More Useful

Reduce Complexity, Cost, and Time

Simple Enterprise Strategy

Categories: Architecture, Programming

Sponsored Post: Apple, Netflix, Salesforce, Blizzard Entertainment, Cloudant, CopperEgg, Logentries, Wargaming.net, PagerDuty, HelloSign, CrowdStrike, Gengo, ScaleOut Software, Couchbase, MongoDB, BlueStripe, AiScaler, Aerospike, LogicMonitor, AppDynamics

Who's Hiring?

  • Apple has multiple openings. Changing the world is all in a day's work at Apple. Imagine what you could do here.
    • Enterprise Software Engineer. Apple's Emerging Technology Services group provides a Java based SOA platform for various applications to interact with each other. The platform is designed to handle millions of messages a day with very low latency. We have an immediate opening for a talented Software Engineer in a highly visible team who is passionate about exploring emerging technologies to create elegant scalable solutions. Please apply here
    • Mobile Services Software Engineer. The Emerging Technologies/Mobile Services team is looking for a proactive and hardworking software engineer to join our team. The team is responsible for a variety of high quality and high performing mobile services and applications for internal use. Please apply here
    • Sr. Software Engineer-iOS Systems. Do you love building highly scalable, distributed web applications? Does the idea of performance tuning Java applications make your heart leap? If so, iOS Systems is looking for a highly motivated, detail-oriented, energetic individual with excellent written and oral skills who is not afraid to think outside the box and question assumptions. Please apply here
    • Senior Software Engineering Manager. As a Senior Software Engineering Manager on our team, you will be managing a team of very dedicated and talented engineers. You will be responsible for managing the development of a mobile point-of-sale system on iPod touch hardware. Please apply here.
    • Sr Software Engineer - Messaging Services. An exciting opportunity for a Software Engineer to join Apple's Messaging Services team. We build the cloud systems that power some of the busiest applications in the world, including iMessage, FaceTime and Apple Push Notifications. We handle hundreds of millions of active users using some of the most desirable devices on the planet and several billion iMessages/day, 40 billion push notifications/day, 16+ trillion push notifications sent to date. Please apply here.
    • Senior Software Engineer. Join Apple's Internet Applications Team, within the Information Systems and Technology group, as a Senior Software Engineer. Be involved in challenging and fast paced projects supporting Apple's business by delivering Java based IS Systems. Please apply here.
    • Sr Software Engineer. Join Apple's Internet Applications Team, within the Information Systems and Technology group, as a Senior Software Engineer. Be involved in challenging and fast paced projects supporting Apple's business by delivering Java based IS Systems. Please apply here.
    • Senior Payment Engineer. As a Software Engineer on our team, you will be responsible for working with cross-functional teams and developing Java server-based solutions to address business and technological needs. You will be helping design and build next generation retail solutions. You will be reviewing design and code developed by others on the team. You will build services and integrate with both internal as well as external services in a SOA environment. You will design and develop frameworks to be used by a large community of developers within the organization. Please apply here.

  • The Salesforce.com Core Application Performance team is seeking talented and experienced software engineers to focus on system reliability and performance, developing solutions for our multi-tenant, on-demand cloud computing system. Ideal candidate is an experienced Java developer, likes solving real-world performance and scalability challenges and building new monitoring and analysis solutions to make our site more reliable, scalable and responsive. Please apply here.

  • Sr. Software Engineer - Distributed Systems. The Membership platform is at the heart of the Netflix product, supporting functions like customer identity, personalized profiles, experimentation, and more. Are you someone who loves to dig into data structure optimization, parallel execution, smart throttling and graceful degradation, SYN and accept queue configuration, and the like? Is the availability vs. consistency tradeoff in a distributed system too obvious to you? Do you have an opinion about asynchronous execution and distributed coordination? Come join us.

  • Java Software Engineers of all levels, your time is now. Blizzard Entertainment is leveling up its Battle.net team, and we want to hear from experienced and enthusiastic engineers who want to join them on their quest to produce the most epic customer-facing site experiences possible. As a Battle.net engineer, you'll be responsible for creating new (and improving existing) applications in a high-load, high-availability environment. Please apply here.

  • Engine Programmer - C/C++. Wargaming|BigWorld is seeking Engine Programmers to join our team in Sydney, Australia. We offer a relocation package, Australian working visa & great salary + bonus. Your primary responsibility will be to work on our PC engine. Please apply here

  • Senior Engineer wanted for large scale, security oriented distributed systems application that offers career growth and independent work environment. Use your talents for good instead of getting people to click ads at CrowdStrike. Please apply here.

  • Ops Engineer - Are you passionate about scaling and automating cloud-based websites? Love Puppet and deployment scripts? Want to take advantage of both your sys-admin and DevOps skills? Join HelloSign as our second Ops Engineer and help us scale as we grow! Apply at http://www.hellosign.com/info/jobs

  • Human Translation Platform Gengo Seeks Sr. DevOps Engineer. Build an infrastructure capable of handling billions of translation jobs, worked on by tens of thousands of qualified translators. If you love playing with Amazon’s AWS, understand the challenges behind release-engineering, and get a kick out of analyzing log data for performance bottlenecks, please apply here.

  • UI Engineer. AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data. AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of the software that manages application architectures. Apply here.
Fun and Informative Events

  • The Biggest MongoDB Event Ever Is On. Will You Be There? Join us in New York City June 23-25 for MongoDB World! The conference lineup includes Amazon CTO Werner Vogels and Cloudera Co-Founder Mike Olson for keynote addresses.  You’ll walk away with everything you need to know to build and manage modern applications. Register before April 4 to take advantage of super early bird pricing.

  • Upcoming Webinar: Practical Guide to SQL - NoSQL Migration. Avoid common pitfalls of NoSQL deployment with the best practices in this May 8 webinar with Anton Yazovskiy of Thumbtack Technology. He will review key questions to ask before migration, and differences in data modeling and architectural approaches. Finally, he will walk you through a typical application based on RDBMS and will migrate it to NoSQL step by step. Register for the webinar.
Cool Products and Services
  • The NoSQL "Family Tree" from Cloudant explains the NoSQL product landscape using an infographic. The highlights: NoSQL arose from "Big Data" (before it was called "Big Data"); NoSQL is not "One Size Fits All"; Vendor-driven versus Community-driven NoSQL.  Create a free Cloudant account and start the NoSQL goodness

  • Finally, log management and analytics can be easy, accessible across your team, and provide deep insights into data that matters across the business - from development, to operations, to business analytics. Create your free Logentries account here.

  • CopperEgg. Simple, Affordable Cloud Monitoring. CopperEgg gives you instant visibility into all of your cloud-hosted servers and applications. Cloud monitoring has never been so easy: lightweight, elastic monitoring; root cause analysis; data visualization; smart alerts. Get Started Now.

  • PagerDuty helps operations and DevOps engineers resolve problems as quickly as possible. By aggregating errors from all your IT monitoring tools, and allowing easy on-call scheduling that ensures the right alerts reach the right people, PagerDuty increases uptime and reduces on-call burnout—so that you only wake up when you have to. Thousands of companies rely on PagerDuty, including Netflix, Etsy, Heroku, and Github.

  • Aerospike Releases Client SDK for Node.js 0.10.x. This client makes it easy to build applications in Node.js that need to store and retrieve data from a high-performance Aerospike cluster. This version exposes Key-Value Store functionality - which is the core of Aerospike's In-Memory NoSQL Database. Platforms supported: CentOS 6, RHEL 6, Debian 6, Debian7, Mac OS X, Ubuntu 12.04. Write your first app: https://github.com/aerospike/aerospike-client-nodejs.

  • consistent: to be, or not to be. That’s the question. Is data in MongoDB consistent? It depends. It’s a trade-off between consistency and performance. However, does performance have to be sacrificed to maintain consistency? more.

  • Do Continuous MapReduce on Live Data? ScaleOut Software's hServer was built to let you hold your daily business data in-memory, update it as it changes, and concurrently run continuous MapReduce tasks on it to analyze it in real-time. We call this "stateful" analysis. To learn more check out hServer.

  • LogicMonitor is the cloud-based IT performance monitoring solution that enables companies to easily and cost-effectively monitor their entire IT infrastructure stack – storage, servers, networks, applications, virtualization, and websites – from the cloud. No firewall changes needed - start monitoring in only 15 minutes utilizing customized dashboards, trending graphs & alerting.

  • BlueStripe FactFinder Express is the ultimate tool for server monitoring and solving performance problems. Monitor URL response times and see if the problem is the application, a back-end call, a disk, or OS resources.

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.  http://aiscaler.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

Engineer Your Own Luck

“Chance favors the prepared mind.” - Louis Pasteur

Are you feeling lucky?

If you’re an engineer or a developer, you’ll appreciate the idea that you can design for luck, or stack the deck in your favor.

How do you do this?

As Harry Golden said, "The only thing that overcomes hard luck is hard work."

While I believe in hard work, I also believe in working smarter.

Luck is the same game.

It’s a game of skill.

And, success is a numbers game. 

You have to stay in long enough to get “lucky” over time.   That means finding a sustainable approach and using a sustainable system.  It means not going all in without testing your assumptions and taking the risk out of it.   It means taking emotion out of the equation, taking calculated risks, minimizing your exposure, and focusing on skills.

That’s why Agile methods and Lean approaches can help you outpace your failures.

Because they are test-driven and focus on continuous learning.

And because they focus on capacity and capability versus burnout or blowup.

So if you aren’t feeling the type of luck you’d like to see more of in your life, go back to the basics, and design for it.

The funny thing about luck is that the less you depend on it, the more of it you get.

BTW – Agile Results and Getting Results the Agile Way continue to help people “get lucky.”  Recently, I heard a story where a social worker shared Getting Results the Agile Way with two girls living off the streets.  They are off drugs now, have jobs, and are buying homes.   I’m not doing the story justice, but it’s great to hear about people turning their lives around and these kinds of life changing things that a simple system for meaningful results can help achieve.

It’s not luck. 

It’s desire, determination, and effective strategies applied in a sustainable way.

The Agile way.

Categories: Architecture, Programming