Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

April 5, 2013: This Week at Engine Yard

Engine Yard Blog - Fri, 04/05/2013 - 18:01

Welcome to our first “This Week at Engine Yard” update! This is where we plan to keep you apprised of the weekly events, engineering work, and various interesting articles that we host/push/read every week.

Let me know what you think, and if there’s anything else you’d like to see highlighted!

--Tasha Drew, Product Manager

Engineering Updates

Our new Gentoo distribution has been released into early access. We’ve tested our latest distribution against many typical use cases but we would love you to try out your applications on this stack and send us any feedback.

Check out what new features we have in Early Access and Limited Access (aka behind a feature flag). Highlights include: Chef 10, Gentoo 12.11, Riak Clusters, ELBs, and many others. We love customer feedback on these features, so please let us know what you think in our early access feedback forum!

Data Data Data

Postgres had a big security update released; we immediately posted an update and strongly recommend that you upgrade!

We had a customer set up the first node.js + Riak environment we’ve had on Engine Yard’s Cloud product! We are thrilled to see customers using this in its early access phase and looking forward to continuing to improve Riak as a product as we move towards making it GA.

Social Calendar (Come say hi!)

Distill Conference’s CFP is closing in a few days! Submit a proposal if you have something to share, and enjoy San Francisco’s Indian summer, development, fun, and/or whiskey.

In other Distill news, we just announced that Nolan Bushnell, the founder of Atari and Chuck E. Cheese, will be a keynote speaker!

Coming up next week:

Engine Yard’s own Eamon Leonard will be giving the keynote at Whisky Web II at Airth Castle in Scotland.

We’re sponsoring the awesomely fun LessConf.

The always charming Long Nguyen from our Buffalo, New York office will be presenting at WNY Ruby Brigade’s Tuesday meetup on “Lessons Learned from Rock Climbing.”

Our Dublin office will be actively populating Thursday’s PubStandards, as well as hosting Tuesday’s Dublin Riak! meetup.

Our tech writer Keri Meredith will be attending Basho’s Write the Docs conference in Portland.

This week:

Engine Yard sponsored Mountain West Ruby Conf, where platform engineer Shai Rosenfeld presented “Testing HTTP APIs with Ruby.”

For those who prefer beach to mountain time, platform engineer Jacob Burkhart presented at Ancient City Ruby with a cautionary tale of “How to Fail at Background Jobs.”

Rounding out what can only be described as a super busy conference week, application engineer PJ Hagerty presented on “Ruby Groups: Think Locally - Act Globally,” capitalizing upon his own experiences growing the WNY Ruby Brigade at Ruby Midwest.

Our Portland office hosted the PDX Women Who Hack and an initial planning meeting for a Portland branch of CoderDojo.

Our Dublin office hosted Ireland’s inaugural PostgreSQL user group (check out the meetup!), and the DevOps Ireland Dublin Monitoring Meetup.

Articles of Interest

We’re increasing our support of Open Source and Rubinius and welcoming Dirkjan Bussink as we sponsor him to work on that project full time!

Jake Luer wrote a comprehensive blog post about how to deliver iOS push notifications using Node.js.



Categories: Programming

Engine Yard Expands Support For Rubinius

Engine Yard Blog - Thu, 04/04/2013 - 21:13

I am very pleased to announce that Engine Yard is sponsoring Dirkjan Bussink of Critical Codes to work on Rubinius.

Engine Yard has been a generous supporter of open source Ruby projects, including multiple Ruby implementations and Ruby on Rails, for many years. Indeed, they originally hired Evan Phoenix, the creator of Rubinius, in 2007, and have sponsored my work on Rubinius since 2008. Their sponsorship improves all aspects of the Ruby community, for developers writing Ruby code and for people everywhere who use applications written in Ruby or Rails. I'd like to thank Engine Yard for making Ruby and other open source technologies better for everyone.

Dirkjan has been a contributor to numerous open source projects, and to Rubinius in particular, for many years. He is eager, helpful and all around a joy to work with. We are lucky to have him helping with Rubinius.

With the accolades and appreciation dispensed, I'd like to cover some of what is coming for Rubinius.

Rubinius is an implementation of Ruby. At present, it supports 1.8.7 and 1.9.3 language modes, with support for Ruby 2.0 coming soon. Rubinius is a drop-in replacement for MRI (Matz's Ruby Interpreter), including support for C extensions. Rubinius includes a modern generational garbage collector, an LLVM-based just-in-time compiler to native code, and full support for multi-core and multi-CPU hardware with no global interpreter lock.

We are working toward the 2.0 final release for Rubinius. Dirkjan recently visited the Engine Yard office in Portland, OR for a week so we could talk about current and future development in person. I blogged summaries of our discussions: Welcome Dirkjan! and PDX Summit Recap. If you are interested in technical aspects of Rubinius, please see those posts.

Rubinius is available as an Early Access feature in Engine Yard Cloud. If you are currently using Engine Yard Cloud and are interested in learning more about how Rubinius may benefit you, please contact us. There are professional services available to help evaluate the benefits of Rubinius. Engine Yard also offers a free trial if you are not currently using Cloud. We are also working on Rubinius support in other platforms. Dirkjan is available to contract to assist evaluating and migrating to Rubinius.

The future is concurrent. We see this every day with the industry's use of technologies like Erlang, Clojure, and Node.js. Rubinius has been built from the beginning to bring Ruby into this concurrent world.

We will be writing more about the technology in Rubinius in the coming weeks. In the meantime, try your application, library, or gem on Rubinius. And don't forget to test on Rubinius on Travis CI. That provides us invaluable feedback. If you have a moment, drop by our #rubinius IRC channel and say hello to Dirkjan.

Categories: Programming

What Happens When You Bring Atari, Chuck E. Cheese and Engine Yard Together?

Engine Yard Blog - Tue, 04/02/2013 - 18:37

When I think about Atari I'm immediately brought back to my childhood and the many hours spent hunched over my Atari console while gazing into its beautiful 128-color graphics.  In 1972, Nolan Bushnell co-founded Atari and released Pong with his partner Ted Dabney, one of the first video games to reach mainstream popularity.  In 1977 they went on to release the now famous Atari 2600, forever changing the lives of hundreds of millions of gamers and credited by many with creating the video game industry.  Following Atari, Nolan went on to found many ventures including Catalyst Technologies, the first technology incubator, Etak, the first car navigation system, Androbot, a personal robotics company and Chuck E. Cheese!  Nolan is a fearless technology pioneer, entrepreneur and scientist.  His latest venture, Brainrush, is an educational software company that uses video game technology with real brain science, in a way that Nolan believes will fundamentally change education.  Nolan also just released his latest book.


I had the good fortune of meeting Nolan through his son Brent, also a lifelong engineer and entrepreneur.  Brent brings together his passions for education and live amusement to inspire kids to be makers.  Using software, art, and hardware he creates projects for clients ranging from Google to Disney, and conferences to hotels.  His creations leap beyond the digital screen and into the real world.

We are thrilled to announce Nolan and Brent Bushnell as keynote speakers at Distill, Engine Yard's inaugural developer conference, where they will talk about technology, entrepreneurship and always pushing boundaries.  Here is a little more information on Nolan and Brent:

Nolan Bushnell is best known as the founder of Atari Corporation and Chuck E. Cheese Pizza Time Theater.  Mr. Bushnell is passionate about enhancing and improving the educational process by integrating the latest in brain science, and truly enjoys motivating and inspiring others with his views on entrepreneurship, creativity, innovation and education.  Currently, Mr. Bushnell is devoting his talents to fixing education with his new company, Brainrush.  His beta software is teaching academic subjects at over 10 times the speed in classrooms with over 90% retention. He uses video game metrics to addict learners to academic subjects.  Over the years, Bushnell has garnered many accolades and distinctions.  He was named ASI 1997 Man of the Year, inducted into the Video Game Hall of Fame, inducted into the Consumer Electronics Association Hall of Fame and named one of Newsweek’s “50 Men That Changed America.”  He is also highlighted as one of Silicon Valley’s entrepreneurial icons in “The Revolutionaries” display at the renowned Tech Museum of Innovation in San Jose, California.

Brent plays at making technology fun.  He is currently the CEO of Two Bit Circus, an LA-based idea factory focused on education and amusement.  Previously he was the on-camera inventor for the ABC TV show Extreme Makeover: Home Edition, a founder of Doppelgames, a mobile game platform company that sold to Handmade Mobile in 2012, and a founder of Syyn Labs, a creative engineering collective responsible for the hit OK Go Rube Goldberg machine music video and other large-scale spectacles.  His particular passions include group games, out-of-home entertainment, and inspiring kids via programs such as NFTE.

Distill is a conference to explore the development of inspired applications. The call for papers is now open and tickets go on sale in April.

Categories: Programming

Delivering iOS Push Notifications with Node.js

Engine Yard Blog - Mon, 04/01/2013 - 23:45

Jake Luer is a Node.js developer and consultant focused on building the next generation of mobile and web applications. He is logicalparadox on GitHub and @jakeluer on Twitter.

Mobile is an incredibly important strategy when building applications in today's ecosystem. One of the major challenges facing all application builders, whether start-ups or enterprises, is keeping users engaged. Notifications are the first step in a long checklist of tactics that can be used to do just that.

In today's tutorial we will be building a small Node.js application that covers all of the basics of working with the Apple Push Notification (APN) service. This includes connecting to Apple's unique streaming API, sending several types of notifications, and listening to the unsubscribe feedback service.

This is a JavaScript/Node.js focused tutorial so it does not cover any iOS (Objective-C) programming. However, since we want to be able to test our notifications with an actual iPhone, a sample iOS project has also been prepared and released under an open-source MIT license.

Time Required: ~2-3 hours

Tools Required:

  • Node.js v0.8 or v0.10.
  • Code editor of preference for JavaScript files.
  • iOS device (push notifications cannot be sent to the simulator).
  • Xcode (for working with the sample iOS app).
  • Apple Developer Account with iOS Provisioning Portal access.
Introduction to Apple Push Notification Service

The Apple Push Notification service is actually a set of services that developers can use to send notifications from their server (the provider) to iOS devices. This flow diagram from Apple's documentation illustrates this best.

APN Flow

In actuality, the APN service is two separate components that provide different benefits to a provider. Implementing both in a production application is required.

1. Gateway Component: The gateway component is the TLS connection that a provider should establish to send messages for Apple to process and then forward on to devices. Apple recommends that all providers maintain an "always-on" connection with the gateway service even if no messages are expected to be sent for long periods of time. The service implements a very specific protocol and will disconnect in the event of an error. The Node.js module we are using today handles all binary encoding and implements a number of systems to ease the burden of possible reconnects.

2. Feedback Component: The feedback component is the TLS connection that a provider should occasionally establish to download a list of devices which are no longer accepting notifications for a specific application. All providers will need to implement a feedback workflow before going to production, as Apple monitors a provider's usage of this service to ensure it is not sending unnecessary notifications. The Node.js module we will be using makes it really easy to automate your feedback workflow.
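The auto-reconnect behavior mentioned for the gateway component is often implemented as a backoff schedule. Here is a generic standalone sketch of that idea; the function name and numbers are hypothetical and this is not apnagent's actual reconnect policy:

```javascript
// Toy exponential backoff schedule for gateway reconnect attempts.
// baseMs and maxAttempts are illustrative values, not apnagent defaults.
function backoffDelays(baseMs, maxAttempts) {
  var delays = [];
  for (var i = 0; i < maxAttempts; i++) {
    delays.push(baseMs * Math.pow(2, i)); // double the wait after each failure
  }
  return delays;
}

console.log(backoffDelays(100, 4)); // [ 100, 200, 400, 800 ]
```

Spacing the retries out like this keeps a misbehaving provider from hammering Apple's gateway after a disconnect.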

Sample iOS Application

Before we get into creating a Node.js application we need to prepare for the moment we want to send a notification to an actual device. For this tutorial we will be using a sample iOS application that will allow us to inspect the notifications that are received by the device. We won't need to do any Objective-C coding, but we do need to configure the application in Xcode so we can run it on our device.

Furthermore, prior to using APN Agent we will need SSL certificates to establish a secure connection with the APN or Feedback service. In addition to creating our new application and provisioning profile, we will also walk through generating these certificates in a format that APN Agent accepts.

For this section you will need an Apple Developer's Account with access to the iOS Provisioning Portal and the latest version of Xcode installed on your local Mac development machine.

1. Application: Log in to the Apple iOS Provisioning Portal and create a new App ID by selecting "App IDs" from the side menu and then the "New App ID" button from the top right. You will need to specify a Bundle Identifier; I suggest using apnagent as the app-name segment of this bundle ID. For example, mine is com.qualiancy.apnagent. You will need to remember this for later.

Create Application

2. Enable APN: From the applications list for your newly created application select "Configure" from the action column. Check the box for "Enable for Apple Push Notification service".


3. Configure: Select "Configure" for development environment. Follow the wizard's instructions for generating a CSR file. Make sure to save the CSR file in a safe place so that you can reuse it for future certificates.


4. Generate: After you have uploaded your CSR it will generate an "Apple Push Notification service SSL Certificate". Download it to a safe place.


5. Import: Once downloaded, locate the file with Finder and double-click to import the certificate into "Keychain Access". Use the filters on the left to locate your newly imported certificate if it is not visible. It will be listed under the "login" keychain in the "Certificates" category. Once located, right-click and select "Export".


6. Export: When the export dialog appears be sure that the "File Format" is set to ".p12". Name the file according to the environment, such as playground-dev.p12, and save it to a safe place. You will be prompted to enter a password to secure the exported item. This is optional, so leave it empty for this tutorial. You will then be asked for your login password.

7. Provision: Head back to the iOS Provisioning Portal and select "Provisioning" from the left menu and then the "New Profile" button in the top right to create a new Development provision. Make sure to select the correct App ID and Device. Once created, download the .mobileprovision file and double-click it in Finder to load it into Xcode.

Note: If you have never done a provision before you may not have any "Certificates" or "Devices" listed when you go to create a Provisioning Profile. Consult Apple's documentation or the "Development Provisioning Assistant" from the iOS Provisioning Portal home page to fill in these missing pieces.

8. Clone: Next you will need to clone the apnagent-ios repository and open apnagent.xcodeproj in Xcode.

git clone
open apnagent-ios/apnagent.xcodeproj

9. Configure Project: The final step is to configure the Xcode project to use your mobile provisioning profile. Under the "Build Settings" for apnagent, change the User-Defined BUNDLE_ID setting to the bundle identifier you specified earlier. Then select the "Code Signing Identity" for that bundle identifier.

Configure Project

10. Run: To make sure you have everything configured correctly, we are going to run the application. Connect your device, then in the top-left corner of Xcode make sure your device is selected for "Scheme". Then click "Run". If you do not get any build errors and the Xcode log displays your device token, you have configured everything correctly.

Device Token

Node.js Module: APN Agent

APN Agent is a Node.js module that I developed to facilitate sending notifications via the APN service. It is production ready and includes a number of a features that make it easy for developers to implement mobile notifications quickly. It also contains several mock helpers which can assist in testing an application or provide feature parity during development.

The major features you can expect to cover today are:

  • Maintaining a persistent connection with the APN gateway and configuring the system for auto-reconnect and error mitigation.
  • Using the chainable message builder to customize outbound messages for all scenarios that Apple accepts.
  • Using the feedback service to flag a device as no longer accepting push notifications.

This tutorial will cover a lot of ground to get a simple application together but might skim over topics that are only relevant in larger deployments. Keep an eye out for links to sections of the module documentation for a deep-dive into certain subjects.

Create Node.js Project

Now that we have our certificate we can begin work on the Node.js application. Today's application will be called apnagent-playground, and it will focus only on how to send APN messages, as opposed to building a fully fleshed out user-centric application. By the end of this section we will accomplish:

  1. Establish a connection with the APN gateway service.
  2. Send a simple "Hello world" message to a device.
  3. Explore the many different options for messages that can be sent.
  4. Learn how to mitigate errors that might occur in multi-user applications.
Project Skeleton

Download Project Skeleton (zip)

Here is the file structure we will be working with:

├── _cert
│   └── pfx.p12
├── agent
│   ├── _header.js
│   └── hello.js
├── feedback
│   ├── live.js
│   └── mock.js
├── device.js
└── package.json

1. Certificate: The first thing you will need to do is move the pfx.p12 file you generated earlier into the _cert folder.

2. package.json: Next you will need to populate the package.json file. We will be working with apnagent version 1.0.x. Though this project is stable, when adding apnagent to your own project I encourage you to check the apnagent source-code change log for anything that might have changed since this release.

Here are the important parts of the package.json for those who did not download the skeleton.

{
  "private": true,
  "name": "apnagent-playground",
  "version": "0.0.0",
  "dependencies": {
    "apnagent": "1.0.x"
  }
}

Once your package.json file is populated run npm install to grab the dependencies.

3. Device Token: Since we will be sending messages to an actual device we need to have its token easily accessible. Assuming you have the apnagent iOS application open in Xcode, "Run" the application on your connected device. When the application opens it will display the device token in the Xcode log. Copy and paste it into the device.js file. Mine looks like this:

module.exports = "<a1b56d2c 08f621d8 7060da2b c3887246 f17bb200 89a9d44b fb91c7d0 97416b30>";
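The angle brackets and spaces in that raw Xcode format are fine to paste as-is; the token is normalized to plain hex before use. As a rough illustration of what such normalization involves (a sketch, not apnagent's actual code):

```javascript
// Strip angle brackets and whitespace from an Xcode-logged device token,
// leaving the plain hex string. Illustrative only.
function normalizeToken(raw) {
  return raw.replace(/[<>\s]/g, '').toLowerCase();
}

console.log(normalizeToken('<a1b56d2c 08f621d8>')); // a1b56d2c08f621d8
```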
Making the Connection

Now that we have the skeleton configured and our dependencies installed we can focus on establishing a connection. The first file we are going to work with is agent/_header.js. This file will handle loading our credentials and establishing a connection with the APN service. The first thing we need is to require all of our dependencies. We will construct a new apnagent.Agent and assign it to module.exports so we can access it from all of our different playground scenarios.

// Locate your certificate
var join = require('path').join
  , pfx = join(__dirname, '../_cert/pfx.p12');

// Create a new agent
var apnagent = require('apnagent')
  , agent = module.exports = new apnagent.Agent();

Now that we have created our agent we need to configure it with our authentication certificates and environment details.

// set our credentials
agent.set('pfx file', pfx);

// our credentials were for development
agent.enable('sandbox');

For more configuration options such as modifying the reconnect time or using different types of credentials, view the agent configuration documentation.

Finally, we need to establish our connection. apnagent uses custom Error objects whenever possible to best describe the context of a given problem. When using the .connect() method, a possible custom error is the "GatewayAuthorizationError", which could occur if Apple does not accept your credentials or if apnagent has a problem loading them. You can check for apnagent-specific errors by checking the .name property of the Error object.

agent.connect(function (err) {
  // gracefully handle auth problems
  if (err && err.name === 'GatewayAuthorizationError') {
    console.log('Authentication Error: %s', err.message);
    process.exit(1);
  }

  // handle any other err (not likely)
  else if (err) {
    throw err;
  }

  // it worked!
  var env = agent.enabled('sandbox')
    ? 'sandbox'
    : 'production';

  console.log('apnagent [%s] gateway connected', env);
});
Now we can test our connection by running the _header.js file. If you receive any message other than "gateway connected" you should revisit the previous steps to ensure you have everything configured successfully. Once you confirm a connection press CTRL-C to stop the process.

$ node agent/_header.js
# apnagent [sandbox] gateway connected

To see this file in full, view it on GitHub: agent/_header.js.

Sending Your First Notification

Now that we have a connection we can send our first message. We will be using a separate file in the agent folder for each message scenario. Our first one is agent/hello.js.

First we need to import our header and device. You will need to do this for all scenarios.

var agent = require('./_header')
  , device = require('../device');

Requiring the _header file will automatically connect to the APN service. Now we can create our first message using the .createMessage() method from our agent. This will create a new message and provide a chainable API to specify message properties. Once we specify all our properties for that message we invoke .send() to dispatch it.

agent.createMessage()
  .device(device)
  .alert('Hello Universe!')
  .send();

To see this file in full, view it on GitHub: agent/hello.js.

Now we need to run this scenario. Make sure that apnagent-ios is running on your device, then:

$ node agent/hello.js

Within moments you should see your notification received:

Screenshot Hello Universe

If you don't receive a notification on your device jump a few paragraphs down to the "Error Mitigation" section for code on how to debug these kinds of issues.

Other Types of Notifications: Badge Numbers

In this next scenario we will set the badge number. The code is rather simple for agent/badge.js:

// Create a badge message
agent.createMessage()
  .device(device)
  .alert('Time to set the badge number to 3.')
  .badge(3)
  .send();

View on GitHub: agent/badge.js.

Keep in mind that different versions of iOS handle badge number messages differently. In iOS 6, the badge will not be displayed automatically if the application is in the foreground. By including an alert body we can see the icon badge change and also inspect the payload in apnagent-ios.

Try sending the message while the application is in different states.

Badge Screenshot

Custom Payload Variables

One of the strongest features of the APN gateway service is the ability to include custom variables in your messages. Even though you should not rely on APNs for mission critical information, custom variables provide a way to associate an incoming message with something in your data store.

agent.createMessage()
  .device(device)
  .alert('Custom variables')
  .set('id_1', 12345)
  .set('id_2', 'abcdef')
  .send();

View on GitHub: agent/custom.js.

The .set() method allows you to include your own key/value pairs. These pairs will then be available to the receiving client application.

Custom Screenshot
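To picture what Apple actually receives, here is a sketch of the JSON payload for the message above, following Apple's published payload format: custom keys sit at the top level, beside the reserved `aps` dictionary.

```javascript
// Sketch of the notification payload for the custom-variable message.
// The `aps` dictionary is reserved by Apple; custom keys live alongside it.
var payload = {
  aps: { alert: 'Custom variables' },
  id_1: 12345,
  id_2: 'abcdef'
};

console.log(JSON.stringify(payload));
// {"aps":{"alert":"Custom variables"},"id_1":12345,"id_2":"abcdef"}
```

This is why your custom key names must not collide with `aps`, and why the receiving client can read them straight off the notification dictionary.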

Message Expiration

By default all messages have an expiration value of 0 (zero). This indicates to Apple that if you cannot deliver the message immediately after processing then it should be discarded. For example, if the default is kept then messages to devices which are off or out of service range would not be delivered.

Though useful in some application contexts, there are many cases where this default is not: a social networking application may wish to deliver a message at any time, or a calendar application may need to announce an event occurring within the next hour. For these cases you may modify the default expiration value or change it on a per-message basis.

Here is our agent/expires.js scenario.

// set default to one day
agent.set('expires', '1d');

// send using default 1d
agent.createMessage()
  .device(device)
  .alert('You were invited to a new event.')
  .send();

// use custom for 1 hour
agent.createMessage()
  .device(device)
  .expires('1h')
  .alert('New Event @ 4pm')
  .send();

// set custom no expiration (deliver immediately or discard)
agent.createMessage()
  .device(device)
  .expires(0)
  .alert('Event happening now!')
  .send();

View on GitHub: agent/expires.js.
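The duration shorthand used above ('1h', '1d') is just a compact way of writing seconds. A toy converter shows the idea; this is an illustration, and apnagent's own parser may support more units:

```javascript
// Convert shorthand durations like '30s', '1h' or '1d' into seconds.
function toSeconds(str) {
  var units = { s: 1, m: 60, h: 3600, d: 86400 };
  var match = /^(\d+)([smhd])$/.exec(str);
  if (!match) throw new Error('unrecognized duration: ' + str);
  return parseInt(match[1], 10) * units[match[2]];
}

console.log(toSeconds('1h')); // 3600
console.log(toSeconds('1d')); // 86400
```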

Error Mitigation

One behavior of the APN service is that it does not respond to every message sent to confirm it has been received. Instead, it only responds on an error, specifying what the error was and which message had the problem. Furthermore, when an error occurs the service will disconnect and flush its backlog of received data, refusing to process further messages until a clean connection is made. Any message that was dispatched through the outbound socket after the invalid message will need to be sent again once a new connection has been established in order for it to be delivered. Don't panic! apnagent handles all of this for you.

As you might have noticed in the .createMessage() examples above, a callback was not specified for the .send() method, though the API allows one to be set. This callback is invoked when a message has been successfully encoded for transfer over the wire; but since the APN service does not confirm that every message has been successfully parsed, managing a callback flow can be tricky. Instead, any errors that the APN service reports will be emitted as the event message:error. Code best demonstrates all of the possible scenarios.

This goes in our agent/_header.js file before we make a connection.

agent.on('message:error', function (err, msg) {
  switch (err.name) {
    // This error occurs when Apple reports an issue parsing the message.
    case 'GatewayNotificationError':
      console.log('[message:error] GatewayNotificationError: %s', err.message);

      // The err.code is the number that Apple reports.
      // Example: 8 means the token supplied is invalid or not subscribed
      // to notifications for your application.
      if (err.code === 8) {
        console.log('    > %s', msg.device().toString());
        // In production you should flag this token as invalid and not
        // send any further messages to it until you confirm validity.
      }
      break;

    // This happens when apnagent has a problem encoding the message for transfer.
    case 'SerializationError':
      console.log('[message:error] SerializationError: %s', err.message);
      break;

    // unlikely, but could occur if trying to send over a dead socket
    default:
      console.log('[message:error] other error: %s', err.message);
  }
});
As you can see there is a lot that can go on here; too much to cover in this article. For more information view Apple's APNs documentation for all possible response codes.

Using the Feedback Service

The Feedback Service is the method by which Apple informs a developer which devices should no longer receive push notifications. The primary reason to cease notifications is that the application has been uninstalled from the device.

Working with Feedback Events

APN Agent's Feedback Interface will periodically connect to the APN Feedback Service and download a list of devices that should be marked for unsubscription. Each "row" in the download consists of the device token and the timestamp at which the uninstall occurred. I recommend that when you gather a token from a device you also store the timestamp of the most recent time that token was reported. This allows you to compare the timestamps to determine if the application was reinstalled after the feedback unsubscribe notice.
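The timestamp comparison that recommendation implies can be sketched in plain JavaScript (the function and variable names here are hypothetical):

```javascript
// Decide whether a feedback row should deactivate a stored token.
// lastSeen: when the device last registered its token with our server (ms).
// unsubbedAt: the uninstall timestamp Apple reported (ms).
function shouldDeactivate(lastSeen, unsubbedAt) {
  // only deactivate if the device has NOT re-registered since the unsub
  return lastSeen <= unsubbedAt;
}

var unsub = Date.parse('2013-04-01T12:00:00Z');
console.log(shouldDeactivate(Date.parse('2013-04-01T11:00:00Z'), unsub)); // true
console.log(shouldDeactivate(Date.parse('2013-04-01T13:00:00Z'), unsub)); // false
```

A device seen after the unsub timestamp means the app was reinstalled, so the token stays active.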

There is one "gotcha" that developers should be aware of. The connection that the device maintains to the APN service is disconnected when there are no applications installed that are configured to receive push notifications. The side-effect is that if your application is the last one to be uninstalled the device will NOT notify the APN Feedback service that it was uninstalled. In production, this is highly unlikely to occur, but if you are developing an application and using the sandbox connection, and are the only sandbox application, this behavior will also occur. You have been warned!

Making the Connection

Luckily, APN Agent also has a Mock Feedback interface so we can easily simulate feedback behavior and test our code.

var apnagent = require('apnagent')
  , feedback = new apnagent.MockFeedback();

feedback.set('interval', '30s'); // default is 30 minutes
feedback.connect();

The .connect() method for the apnagent.MockFeedback simulates the same behavior as the real apnagent.Feedback. It will perform the initial connection and retrieve the unsubscribed list. Each row will be parsed and added to the to-be-processed queue. Once Apple has finished sending the list they will disconnect and the Feedback interface will schedule the next download to occur after the set interval time has elapsed.

Handling Unsubscribed Devices

Once the Feedback interface has received a list of devices it will place each response into a throttled processing queue. Since we have no idea how long this list will be, and reacting to feedback is not as mission-critical as responding to an HTTP request, this throttled queue helps us avoid bottlenecks in any of our Node application's finite resources. By default this queue will process up to ten items in parallel, but for our testing we are going to change the concurrency to 1.

feedback.set('concurrency', 1);
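To make the throttling concrete, here is a minimal standalone sketch of a bounded-concurrency queue. It is a simplification for illustration, not apnagent's implementation:

```javascript
// Minimal throttled queue: runs at most `concurrency` workers at once.
function ThrottledQueue(worker, concurrency) {
  this.worker = worker;
  this.concurrency = concurrency;
  this.items = [];
  this.active = 0;
}

ThrottledQueue.prototype.push = function (item) {
  this.items.push(item);
  this._drain();
};

ThrottledQueue.prototype._drain = function () {
  var self = this;
  while (self.active < self.concurrency && self.items.length) {
    self.active++;
    self.worker(self.items.shift(), function done() {
      self.active--;
      self._drain(); // a finished worker frees a slot for the next item
    });
  }
};

// demo: concurrency of 1 processes items strictly in order
var order = [];
var queue = new ThrottledQueue(function (item, done) {
  order.push(item);
  done(); // synchronous here for the demo; real workers call done() when async work finishes
}, 1);

[1, 2, 3].forEach(function (n) { queue.push(n); });
console.log(order); // [ 1, 2, 3 ]
```

Capping `active` at `concurrency` is what keeps a long feedback download from monopolizing database connections or CPU.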

Now we need to instruct the feedback service how to handle any device that has been marked as unsubscribed. The following example is pseudo-code so we can't run it directly. You are welcome to adapt it for your database of choice.

/**
 * @param {apnagent.Device} device token
 * @param {Date} timestamp of unsub
 * @param {Function} done callback
 */

feedback.use(function (device, timestamp, done) {
  var token = device.toString()
    , ts = timestamp.getTime();

  // pseudo db code
  db.query('devices', { token: token, active: true }, function (err, devices) {
    if (err) return done(); // bail
    if (!devices.length) return done(); // no devices match

    // async forEach (pseudo-code helper; use your async library of choice)
    forEachAsync(devices, function (device, next) {
      // this device hasn't pinged our api since it unsubscribed
      if (device.get('timestamp') <= ts) {
        device.set('active', false);
        device.save(next);
      }

      // we have seen this device recently so we don't need to deactivate it
      else {
        next();
      }
    }, done);
  });
});
Testing Feedback Events

Live feedback events are difficult to trigger, as they require the right conditions, and Apple's black-box logic might not recognize those conditions for some time. That is why we are using the MockFeedback class for our example. To make it easy to test these scenarios we can push in our own device-timestamp pairs.

Here is an example that will unsubscribe your device:

// pull in your device
var device = require('../device');

// unsub it as of 30 minutes ago
feedback.unsub(device, '-30m');

This will not invoke our .use() handler immediately. Since MockFeedback fully emulates the live Feedback class, it will wait until the next simulated connection to the feedback service. Since we changed our interval to 30 seconds, we won't have to wait very long.
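The duration strings used above ('30s', '-30m') are shorthand for millisecond values. A toy converter shows what they boil down to; the semantics here are assumed for illustration and may differ from apnagent's actual parser.

```javascript
// Toy duration-string parser: '30s' -> 30000, '-30m' -> -1800000.
// Assumed semantics for illustration; apnagent's parser may differ.
var UNITS = { ms: 1, s: 1000, m: 60 * 1000, h: 60 * 60 * 1000 };

function duration(str) {
  var match = /^(-?\d+)(ms|s|m|h)$/.exec(str);
  if (!match) throw new Error('bad duration: ' + str);
  return parseInt(match[1], 10) * UNITS[match[2]];
}

console.log(duration('30s'));  // 30000
console.log(duration('-30m')); // -1800000
```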

If you have adapted this example to use your own database then you can run it to see what happens. If you would like to see a full-featured example, the apnagent-playground repository has this in its master branch.

Closing Remarks

Today's tutorial covered a lot of ground: connecting to APNs, sending messages, and handling feedback. If you are ready to take this to the next step, the apnagent documentation is the best place to start. For example, there is a full express.js application that implements the MockAgent or live Agent depending on environment, which can serve as the foundation of many production applications.

Please let me know if you have any questions in the comments below. Alternatively, if you run into specific issues with any of the code used in this tutorial, please report them in the project's GitHub issues.




Categories: Programming

PDX Drinkup Tonight!

Engine Yard Blog - Thu, 03/28/2013 - 19:29

Portland area developers, we're excited to invite you over to the Engine Yard PDX offices tonight for drinks from 6-8pm. We’ll have lots of tasty beverages and snacks, as well as great company, with the likes of our engineering lead Amy Woodward, marathoner extraordinaire Matt Whiteley and marketing guru Mark Gaydos, among many more. Afterward, we’ll be heading over to the GitHub drinkup (8pm and on) which is conveniently stumbling distance from the office. For more information, check this out.


We'll be sponsoring and attending BarCamp PDX - Friday evening and Saturday. BarCamp is a great workshop-like event with content generated entirely by its participants. Topics often focus on but are not limited to early-stage web applications, and related open source technologies, social protocols, open data formats and other DIY/hacker/open culture themes. Register for it here:

We will also be hosting the PDX Women Who Hack on Sunday afternoon. Women Who Hack is an awesome organization for women of all programming experience levels who just want to hack on projects together. All languages and platforms are welcome.

And finally, Wednesday is the first organizational meeting for the Portland CoderDojo. Unfamiliar with CoderDojo? It’s a free and open organization committed to teaching young people how to program. If you're interested in helping get the Dojo off the ground, come join Colin Dabritz and Amy Woodward:



Categories: Programming

RVM Autolibs: Automatic Dependency Handling and Ruby 2.0

Engine Yard Blog - Wed, 03/27/2013 - 20:10

Last month marked a very important milestone for Rubyists: the release of Ruby 2.0.0. It comes with a new RubyGems and new dependencies, including OpenSSL. Previously, RVM did not do much dependency resolution beyond installing LibYAML, which RubyGems requires to function properly. The situation changes with OpenSSL, as it is a bigger dependency. Initially, for Ruby 2.0.0-rc1, RVM installed OpenSSL itself. However, compiling OpenSSL is not as easy a task as compiling LibYAML, and it duplicates the effort of distribution maintainers who already ship a working OpenSSL.

A new approach

To make this work, RVM takes a new approach: it will now work with the system package manager to install required libraries. This is no easy feat, as different systems have different names for packages, with some of them available by default and some not available at all.

It’s easy when it’s easy

It's easy to use an existing package manager on any system that has one. The trouble begins when a distribution does not have a default package manager, which is the case for OS X. There, a number of package managers exist, but none of them is popular enough to be the de facto standard. With this in mind, RVM needs to find an existing package manager, and install one when none is available.

Sensible defaults

When autolibs was first added, RVM assumed users wanted to have all the work done for them. Unfortunately we quickly hit the reality that some users know better and still prefer to install dependencies manually. There had to be a compromise to fit both needs. In the end, RVM will by default detect available libraries and fail if they are not available. Users now have the option to switch to other modes, including "do it all for me" and "let me do it myself".

Do it all for me

Users who want the libraries installed automatically can use autolibs mode 4, aka enable. This tells RVM to find a package manager (installing one if necessary), install all dependencies, and finally use them for compiling rubies. If no package manager is available (on OS X), Homebrew will be installed. However, users can select which package manager to install with the autolibs modes osx_port, osx_fink and smf. The smf package manager is the lesser-known SM Framework from RailsInstaller.

For systems with a default package manager, mode 4 is the same as mode 3, which means install missing packages.

Let me do it myself

For users that do not want RVM to do things automatically, there are two modes that will come in handy. Mode 1 instructs RVM to pick the libraries and just show warnings if they are missing. When even the automatic detection is too much, it can be turned off with mode 0. Unfortunately there is a caveat: because the code is more dynamic, there is no longer a static list of what is required. Which libraries are picked depends on the current system state. So if users do not use the automated modes (3 or 4), RVM can only report what is missing on the current machine, not all the dependencies that might be required on similar distributions.

Some tricks

To install RVM with Ruby, Ruby on Rails and all the required libraries (aka. the poor man’s RailsInstaller):

 \curl -L | bash -s stable --rails --autolibs=enable

To use RVM in deployment, where sudo requires extra handling, as in Capistrano:

    task :install_requirements do
      sudo "rvm --autolibs=4 requirements #{rvm_ruby_string}"
    end

    task :install_ruby do
      run "rvm --autolibs=1 install #{rvm_ruby_string}"
    end

You can find more details about autolibs in our docs.

Let us know

We have been testing the autolibs code for some time now, but as always, bringing it to a wider audience creates new cases, uncovers new flaws, or just creates possible misunderstandings. We are eager to get those fixed, so please report issues to RVM's issue tracker or talk to us on IRC.

Thanks for using RVM, and may the autolibs feature improve your Ruby experience.

Other Announcements

Officially opening RVM 2.0 work

RVM 1.19 was the last release to include new features (autolibs); all new feature requests will be deferred to RVM 2.0. We will still provide support, fix bugs, and update software versions until RVM 2.0 is released and marked stable. But to allow work on RVM 2.0 to proceed, we need to freeze the feature set available in RVM 1.x.

Updates to the website!

RVM has long had an unorganized website that simply accumulates information, and it has become hard to use for both maintainers and users. Since we are opening up development on RVM 2.0, we are also opening up development on a brand new site! We hope to clean up and simplify the way you interact with the site, implement a cleaner design using Twitter's Bootstrap, and make the documentation more like man pages so that it can be ported back and forth between RVM and the website, making everything more seamless not only for us, but for users too.


Categories: Programming

Be the Expert on Your Own Experience

Engine Yard Blog - Tue, 03/26/2013 - 18:09

There are dozens of tech conferences happening this year, and I’d like to encourage you to submit talks and proposals. Some of my colleagues have told me that they are disinclined to submit proposals because they feel like they lack expertise. This all-too-common feeling prevents people from sharing interesting and fresh perspectives at events, and I’d like that to change.

When I first began submitting proposals to conferences, I carefully crafted one very specific talk proposal. It took me about a week. I then proceeded to rewrite the same proposal over and over again as I applied to (and got rejected by) various conferences.

In mid-2012, I applied to Cascadia Ruby, and for the first time ever I submitted more than one proposal. Maybe you could even call it "rage-proposing": Here's a proposal, here's another proposal, how about this other random topic? Take that!

I showed my proposals (current and past) to my co-workers and the responses were all the same. I was told that they were "too vague", "too broad", and "unfocused". I was told that it seemed like I was trying to cover too much stuff in a single talk.

I appreciated the honest feedback, but it was frustrating because I had worked so hard on them. And if I took the extra time to narrow a proposal down to exactly what I would talk about, then the inevitable rejection that followed would be even worse.

So I said to myself: "I'm gonna write a proposal on something they know nothing about: surfing!"

So it came to pass that the first talk I ever gave at a tech conference had absolutely no technical content (except for a slide where I made a poor comparison between being a beginner surfer "kook" and writing a really disgusting method body).

So--what is the lesson here?

Talk about your experiences. Don’t try to reverse-engineer a talk based on what you expect people to be interested in. Propose a talk that speaks to your passions.

I felt pretty fortunate at Cascadia because I was certain I'd be the expert on surfing compared to a room full of programmers. However, I haven't had the same luxury at subsequent talks.

Which brings me to the next lesson:

Instead of being the expert on a topic, be the expert on your own experience. That's it. Who isn't the expert on his or her own experience? (People with amnesia, maybe).

And that's what I'll be doing in a few weeks when I speak at Ancient City Ruby about "How to Fail at Background Jobs". Yep, I've experienced a lot of fails. To the rest of you reading this, I foolishly promise to provide feedback on your rejections to whatever extent I come in contact with your proposals as a tiny cog helping with Engine Yard’s upcoming conference, Distill.

In conclusion, prepare lots of talk proposals and submit them everywhere. Especially to Distill!

As an aside, I'd like to thank Michał Taszycki of the Railsberry conference for his e-mail explaining why all FIVE of my talk proposals were rejected. Railsberry was the only conference ever to do this, and each of those one-sentence explanations went a long way in helping me improve future proposals.

Distill is a conference to explore the development of inspired applications. The call for papers is now open and tickets go on sale in April.

Categories: Programming

In Case You Missed It: February’s Node.js Meetup

Engine Yard Blog - Fri, 03/22/2013 - 19:01

Recently, we were pleased to host the San Francisco Node.js/Serverside Javascripters meetup at Engine Yard. Didn’t make it yourself? Not to worry--we’ve got video of three awesome presentations given by local Node.js experts. Dig in and enjoy!

1) Matt Pardee of StrongLoop presents "From Inception to Launch in 4 Weeks: A practical, real-world discussion of building StrongLoop's site entirely in Node." This talk addresses which architecture, modules, and hosting StrongLoop chose and how 3rd-party integrations were implemented. Caveats and pitfalls are also discussed.

2) Giovanni Donelli of Essence App presents "Indy Web Dev/Designer Node: A case study on how to design your app with Node.js." He reviews how he designed and developed an app using Node and deployed it to the cloud. This presentation is targeted at solo designers and independent developers who already have some experience in app design and are trying to understand how to take their app from a device to the cloud.

3) Finally, Daniel Imrie-Situnayake of Green Dot presents "Within the Whale: a story of enterprise Node advocacy from the inside out. How we're promoting Node.js within Green Dot, a large company with a lot at stake." An insightful case study of Node in the enterprise.

If you’d like to learn more about deploying a Node.js app to Engine Yard, check out these resources for best practices and FAQs.

Categories: Programming

James Whelton, Co-Founder of CoderDojo, to Keynote Distill

Engine Yard Blog - Wed, 03/20/2013 - 17:52

I first met James Whelton in 2011, just as he was launching CoderDojo in Ireland. At that time, I saw huge potential in his vision for educating a new generation of developers through free coding clubs worldwide. What further inspired me about James was that he was unencumbered by the magnitude of what he was trying to accomplish and the resources and commitments it would take to accomplish it. Today, just two years later, there are 130 dojos across 22 countries with 10,000 kids learning to code for free each week. One student, 13-year-old Harry Moran, developed Pizzabot, a game that debuted at the top of the iPhone paid downloads chart in Ireland, beating out Angry Birds!

We are very pleased to announce James as one of our keynote speakers at Distill, Engine Yard's inaugural developer conference, where he will talk about inspiring others, dreaming big, reaching your goals and striving for more. Here is a little more information on James:

James Whelton hails from Cork, Ireland. A 20-year-old developer and social entrepreneur, he is passionate about using technology to improve the world and about making the opportunity to experience the awesomeness of coding available to young people everywhere. With a background in iOS and web development, he has ventured into everything from building Twitter-powered doors and proximity-controlled kettles to realtime services and teaching 11 year olds Objective-C. He was named a Forbes 30 Under 30 in 2013 for social entrepreneurship and Irish Software Association Person of the Year 2012. He likes hacking the system, using code to achieve big things, piña coladas and getting caught in the rain.

Distill is a conference to explore the development of inspired applications. The call for papers is now open and tickets go on sale in April.

Categories: Programming

Introducing Gentoo 12.11, the New & Improved Engine Yard Distribution

Engine Yard Blog - Tue, 03/19/2013 - 18:35

The distribution is one of the most crucial components of the Engine Yard stack. Much has changed since the company was founded, and the distribution needed to change with it. On behalf of the Distribution Team, including Gordon Malm, Kirk Haines and myself, I am pleased to announce the Early Availability of the new Engine Yard distribution. Even with the changes made, the team has worked hard to keep the underlying system close to what users are already familiar with. With this in mind, I'd like to take the time to point out the main changes which we feel are beneficial to you, our customer.

Enhanced Ruby Support

While Engine Yard has recently added support for a number of new languages, including PHP and Node.js, it retains a strong Ruby presence. Since Ruby was first released, many new implementations have come out, and with them the need to better support existing and future implementations. The new distribution's Ruby architecture improves support for these implementations through a more modular backend.

To make for an even more customized experience for users, RVM is now available on all new distribution installations. A big thanks to Michal Papis, the RVM lead, who has been instrumental in helping make this happen. This has been a request from many customers, and we’re excited to be able to deliver on it.

More User Focus

Work on the new distribution allowed the team to start with a cleaner slate, which meant that more focused, user-centered customizations could be made. Packages such as Nginx and PHP were re-evaluated to ensure that they were customized to fit the needs of the majority of our customers. Supported versions of major packages were re-evaluated as well, allowing our support team to support the new distribution more efficiently. Finally, the Linux kernel has been updated to the 3.4 series and its configuration options have been re-evaluated. One of the most prominent changes is the move to ext4 as the default filesystem.

Hardened Toolchain

There has been substantial progress in the area of compiler-based security over the years. The new distribution utilizes a hardened toolchain to provide the benefits of this effort. These protections include:

  • Stack Smashing Protection (SSP) for mitigation against stack overflows

  • Position Independent Executables (PIEs) for mitigation against attacks requiring executable code be located at a specific address

  • FORTIFY_SOURCE for mitigation against attacks resulting from the overflow of fixed length buffers and some format-string attacks

  • RELRO for mitigation of attacks against various sections in an ELF binary

  • BIND_NOW for mitigation of attacks that rely on loading shared objects at the time of function execution

These changes help to provide additional security for the system, reducing the possible attack vectors that could be utilized by an exploit.

Improved Testing

Testing an operating system is an extremely difficult process, and requires constant adaptation. Work on the new distribution has led to the creation of more runtime tests for ensuring the reliability of the system. Core packages that had test suites were evaluated to ensure as much code-level reliability as possible. I would in particular like to thank the Engine Yard QA team, who have played an instrumental role in helping us toward this goal. Testing is, however, a constant effort, and we look forward to continuing to improve the quality of the testing process.


These are just a few of the many improvements that have been made to the new distribution to better help serve our customers. Our work does not end here however, and we look forward to improving our processes even further to better serve your needs. On behalf of the distribution team we thank you for being Engine Yard customers, and look forward to working with you now and in the future.

To get started with early access for the new distribution, please refer to the Use Engine Yard Gentoo 12.11 Early Access documentation on the Engine Yard website.

Categories: Programming

All About Cloud Storage

Engine Yard Blog - Fri, 03/15/2013 - 18:50

With the rise of social apps like Facebook, Instagram, YouTube and more, managing user generated content has become a growing challenge.  Amazon S3, Google Storage, Rackspace Cloud Files, and other similar services have sprung up to help application developers solve a common problem: scalable asset storage.  And of course, they all utilize "the cloud"!

The Problem

Popular social applications, scientific applications, media generating applications and more are able to generate massive amounts of data in a small amount of time.  Here are just a few examples:

  • 72 hours of video are uploaded every minute by YouTube users. (source)
  • 20 million photos are uploaded to SnapChat every day. (source)
  • Pinterest has stored over 8 billion objects and 410 terabytes of data since their launch in 2009. (source)
  • Twitter generates roughly 12 terabytes of data per day. (source)

When your application begins to store massive amounts of user generated data, your team will inevitably need to decide where to spend its engineering effort in relation to that data.  If your application is engineered to store assets on your own hardware and infrastructure, your team will spend plenty of time and money storing and serving those assets efficiently.  Alternatively, an application can easily store assets with a cloud storage provider.  Choosing this route allows application content to scale almost limitlessly while you pay only for the resources and space needed to store and serve the assets. In effect, cloud storage frees up your team's engineering time to focus on creating unique application features, rather than reinventing the file storage wheel when scalability becomes an issue.

When should you consider using cloud based storage for your application?

  • When user generated assets are part of your application. Does your application accept uploads from your users?  Does your application generate files serverside?  If your application is going to accept uploads from users or generate content stored on the filesystem, you will likely want to consider using a cloud storage provider sooner rather than later.
  • When your application is likely to grow beyond a single server. If your application is small enough to run on a dedicated single server or web host, and you don't expect it to grow beyond that single server, it doesn't make sense to use cloud storage for your assets.  Simply store them on the local file system and call it a day. If, however, you expect any growth that would require you to run more than one application server, you will immediately reap the benefits of storing your assets in the cloud.  By storing your assets in the cloud, you can horizontally scale your service to as many front-end application servers as your heart desires without the need to replicate your file system assets to any new servers.  Because your assets are stored centrally with a cloud service, they will be accessible from a given hostname, no matter how many application servers your application runs.
  • When it's more cost-effective for your team to focus on business-critical application features rather than engineering a scalable file storage system. If you are strapped for either time or money, and you expect your application to grow, you can't go wrong with cloud storage.  Cloud storage gives you the flexibility to get up and running quickly, scale your storage to the growing needs of your application, and only pay for the storage and resources you use.  This in turn allows you to focus less on hardware costs, operations and configuration for storing assets, and more on developing your business.

Integration & Access

Most of the leaders in online cloud storage provide API access to their platform allowing developers to integrate web-scale asset storage and file access within their applications.  Below we’ll look at some code examples using an SDK or library to store assets on Amazon S3.  Many libraries and SDKs make setting the storage provider a breeze, allowing you to easily deploy file storage on many of the popular providers.

Ruby & CarrierWave

Code examples below have been adapted from the CarrierWave github repository.

  • Install CarrierWave: gem install carrierwave or in your gemfile gem 'carrierwave'
  • Install Fog: gem install fog or in your gemfile gem "fog", "~> 1.3.1"
  • In an initialization file, add the following:
    CarrierWave.configure do |config|
      config.fog_credentials = {
        :provider               => 'AWS',       # required
        :aws_access_key_id      => 'xxx',       # required
        :aws_secret_access_key  => 'yyy',       # required
        :region                 => 'eu-west-1'  # optional, defaults to 'us-east-1'
      }
      config.fog_directory  = 'name_of_directory'
      config.fog_public     = false
      config.fog_attributes = {'Cache-Control'=>'max-age=315576000'}
      config.asset_host     = ''  # e.g. your CDN host
    end
  • Create your uploader class:
    class AvatarUploader < CarrierWave::Uploader::Base
      storage :fog
    end
  • Using your uploader directly:
    uploader =!(my_file)
  • Using your uploader with ActiveRecord:
  • Add a field to your database table and require CarrierWave:
    add_column :users, :avatar, :string in a database migration file
    require 'carrierwave/orm/activerecord' in your model file.
  • Mount your uploader on your model:
    class User < ActiveRecord::Base
      mount_uploader :avatar, AvatarUploader
    end
  • Work with your model and files:
    u =
    u.avatar = params[:file]
    u.avatar ='somewhere')!
    u.avatar.url # => '/url/to/file.png'
    u.avatar.current_path # => 'path/to/file.png'
    u.avatar.identifier # => 'file.png'

Here are some CarrierWave examples for uploading to Amazon S3, Rackspace Cloud Files and Google Storage, as well as gems for using CarrierWave with other ORMs like DataMapper, Mongoid and Sequel.


PHP & AWS SDK

Amazon provides a PHP SDK to work with AWS APIs and services.  For this code example we will be using instructions straight from the SDK repository README and sample code.

  • Copy the contents of and add your credentials as instructed in the file.
  • Move your file to ~/.aws/sdk/
  • Make sure that getenv('HOME') points to your user directory. If not you'll need to set putenv('HOME=<your-user-directory>')
  • // Instantiate the AmazonS3 class
    $s3 = new AmazonS3();

    // Create a bucket to upload to
    $bucket = 'YOUR-BUCKET-NAME' . strtolower($s3->key);
    if (!$s3->if_bucket_exists($bucket)) {
      $response = $s3->create_bucket($bucket, AmazonS3::REGION_US_E1);
      if (!$response->isOK()) die('Could not create `' . $bucket . '`.');
    }

    // Download a public object.
    $response = $s3->get_object('aws-sdk-for-php', 'some/path-to-file.ext', array(
      'fileDownload' => './local/path-to-file.ext'
    ));

    // Upload an object.
    $response = $s3->create_object($bucket, 'some/path-to-file.ext', array(
      'fileUpload' => './local/path-to-file.ext'
    ));

Node & Knox

For Node.js I have adapted example code from the Knox Amazon S3 client on GitHub.

  • // Configure the client
    var knox = require('knox');
    var client = knox.createClient({
        key: '<api-key-here>'
      , secret: '<secret-here>'
      , bucket: 'BUCKET-NAME'
    });

    // Putting a file on S3
    client.putFile('some/path-to-file.ext', 'bucket/file-name.ext', function(err, res){
      // Logic
    });

    // Getting a file from S3
    client.get('/some/path-to-file.ext').on('response', function(res){
      res.on('data', function(chunk){
        // Logic
      });
    }).end();

    // Deleting a file on S3
    client.del('/some/path-to-file.ext').on('response', function(res){
      // Logic
    }).end();

As you can see in the previous code examples, working with the S3 APIs is very straightforward, and there are plenty of libraries readily available for most languages.  I definitely recommend taking a hard look at using a cloud storage provider for your next project.  You'll save time not reinventing file storage solutions, reap the benefits of focusing on developing your application, and have practically unlimited storage scalability as you need it.

Categories: Programming

Welcome Riak 1.3 to Engine Yard

Engine Yard Blog - Wed, 03/13/2013 - 00:06

Hello friends! A few months ago we rolled out support for Riak, Basho’s distributed key/value database, on the Engine Yard stack. Today we want to share with you the improvements we’ve made to the product, including the release of a brand new version, 1.3. We hope you are as excited about this release as we are. It includes major enhancements and features.

We want to make it easier than ever to get started with Riak on our platform and we’ve made a video to help you. Follow Edward’s screencast and you’ll have a production-ready cluster in a matter of minutes.

Introducing Riak from Engine Yard on Vimeo.

Why you should use/upgrade to Riak 1.3

Here is a summary of the enhancements included in version 1.3 that you definitely want to be aware of. For more information check Basho’s Introducing Riak 1.3 documentation.

Active Anti-Entropy

Riak is designed to be highly available. This means that the database understands and can survive (normally catastrophic) events like node failures and data inconsistencies.  These inconsistencies, often referred to as 'entropy', can arise due to failure modes, concurrent updates, and physical data loss or corruption. Ironically, these events are not uncommon in distributed systems, and the cloud is the most pervasive example.

Riak already had several features for repairing this “entropy” (lost or corrupted data), but they required user intervention. Riak 1.3 introduces active anti-entropy (AAE) to solve this problem automatically. The AAE process repairs cluster entropy on an ongoing basis by using data replicated in other nodes to heal itself. This feature is enabled by default on Riak 1.3 clusters.

Improved MapReduce

Riak supports MapReduce queries in both JavaScript and Erlang for aggregation and analytics tasks. In Riak 1.3, tunable backpressure is extended to the MapReduce sink to prevent problems at endpoint processes. (Backpressure keeps Riak processes from being overwhelmed and prevents memory consumption from getting out of control.)

New file system default on EBS-backed Riak clusters

Now when creating an EBS-backed Riak cluster, we'll format your volume using the ext4 file system instead of ext3. This change enhances I/O performance and durability.

Riak Haproxy enabled in utility instances

In addition to application instances, utility instances can now address Riak cluster nodes using HAProxy.

And there is more! Riak 1.3 provides Advanced Multi-Datacenter Replication Capabilities (available for Riak Enterprise customers).  This release also provides better performance, more TCP connections and easier configuration.

Give it a try!

With 500 free hours, you can try Riak 1.3 without any hassle.  We have you covered! All of our Riak installations come with full support from us at Engine Yard and from our partner Basho.

Categories: Programming

Hack your bundle for fun and profit

Engine Yard Blog - Fri, 03/08/2013 - 01:44

Note: Friend of Engine Yard André Arko, a member of Bundler core, wrote this insightful piece for us about little-known tricks in Bundler. Check out his own site here.

Bundler has turned out to be an amazingly useful tool for installing and tracking gems that a Ruby project needs--so useful, in fact, that nearly every Ruby project uses it. Even though it shows up practically everywhere, most people don’t know about Bundler’s built-in tools and helpers. In an attempt to increase awareness (and Ruby developer productivity), I’m going to tell you about them.

Install, update, and outdated

You probably already know this, but I’m going to summarize for the people who are just getting started and don’t know yet. Run bundle install to install the bundle that’s requested by your project. If you’ve just run git pull and there are new gems? bundle install. If you’ve just added new gems or changed gem versions in the Gemfile? bundle install. It might seem like you want to bundle update, but that won’t just install gems — it will try to upgrade every single gem in your bundle. That’s usually a disaster unless you really meant to do it.

The update command is for when gems you use have been updated, and you want your bundle to have the newest versions that your Gemfile will allow. Run bundle outdated to print a list of gems that could be upgraded. If you want to upgrade a specific gem, run bundle update GEM, or run bundle update to update everything. After the update finishes, make sure all your tests pass before you commit your new Gemfile.lock!

Show and open

Most people know about bundle show, which prints the full path to the location where a gem is installed (probably because it’s called out in the success message after installing!). Far more useful, however, is the bundle open command, which will open the gem itself directly into your EDITOR. Here’s a minimalist demo:

$ bundle install
Fetching gem metadata from
Resolving dependencies...
Installing rack (1.5.2)
Using bundler (1.3.1)
Your bundle is complete! Use `bundle show [gemname]` to see where a bundled gem is installed.
$ echo $EDITOR
mate -w
$ bundle open rack

That’s all you need to get the installed copy of rack open in your editor. Being able to edit gems without having to look for them can be an amazing debugging tool. It makes it possible to insert print or debugger statements in a few seconds. If you do change your gems, though, be sure to reset them afterwards! There will be a pristine command soon, but for now, just run bundle exec gem pristine to restore the gems that you edited.


The show command still has one more trick up its sleeve, though: bundle show --paths. Printing a list of paths may not sound terribly useful, but it makes it trivial to search through the source of every gem in your bundle. Want to know where ActionDispatch::RemoteIp is defined? It’s a one-liner:

$ grep ActionDispatch::RemoteIp `bundle show --paths`

Whether you use grep, ack, or ag, it’s very easy to set up a shell function that allows you to search the current bundle in just a few characters. Here’s mine:

function back () {
  ack "$@" `bundle show --paths`
}

With that function, searching becomes even easier:

$ back ActionDispatch::RemoteIp

Binstubs

One of the most annoying things about using Bundler is the way that you (probably) have to run bundle exec whatever anytime you want to run a command. One of the easiest ways around that is installing Bundler binstubs. By running bundle binstubs GEM, you can generate stubs in the bin/ directory. Those stubs will load your bundle, and the correct version of the gem, before running the command. Here's an example of setting up a binstub for rspec.

$ bundle binstubs rspec-core
$ bin/rspec spec
No examples found.
Finished in 0.00006 seconds
0 examples, 0 failures

Use binstubs for commands that you run often, or for commands that you might want to run from (say) a cronjob. Since the binstubs don't have to load as much code, they even run faster. Rails 4 adopts binstubs as an official convention, and ships with bin/rails and bin/rake, which are both set up to always run for that specific application.

Creating a Gemfile

I've seen some complaints recently that it's too much work to type a source line when creating a new Gemfile. Happily, Bundler will do that for you! When you're starting a new project, you can create a new Gemfile with the default source already set by running a single command:

$ bundle init

At this point, you're ready to add gems and install away!
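
The generated file is tiny; it looks roughly like this (the exact header comment varies by Bundler version):

```ruby
# A Gemfile created by `bundle init`
source "https://rubygems.org"

# gem "rails"
```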

Git local gems

A lot of people ask how they can use Bundler to modify and commit to a gem in their Gemfile. Thanks to work led by José Valim, Bundler 1.2 allows this in a pretty elegant way. With one setting, you can load your own git clone in development, but deploying to production will simply check out the last commit you used.

Here’s how to set up a git local copy of rack:

$ echo "gem 'rack', :github => 'rack/rack', :branch => 'master'" >> Gemfile
$ bundle config local.rack ~/sw/gems/rack
$ bundle show rack

Now that it’s set up, you can edit the code your application will use, but still commit in that repository as often as you like. Pretty sweet.

Ruby versions

Another feature of Bundler 1.2 is ruby version requirements. If you know that your application only works with one version of ruby, you can require that version. Just add one line to your Gemfile specifying the version number as a string.

ruby '1.9.3'

Now Bundler will raise an exception if you try to run your application on a different version of ruby. Never worry about accidentally using the wrong version while developing again!
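
Under the hood, this kind of check builds on RubyGems’ version classes, which Bundler uses throughout. Here is a minimal sketch (not Bundler’s actual implementation; the helper name is invented) of enforcing a version requirement:

```ruby
require "rubygems"

# Returns true when the running Ruby version satisfies the requirement
# string. A bare version like "1.9.3" means "exactly 1.9.3".
def ruby_version_ok?(required, running)
  Gem::Requirement.new(required).satisfied_by?(Gem::Version.new(running))
end

puts ruby_version_ok?("1.9.3", "1.9.3")  # true: exact match
puts ruby_version_ok?("1.9.3", "2.0.0")  # false: wrong version
```

Gem::Requirement also understands operators like ">= 1.9.3" and "~> 1.9", which is why the same machinery works for gem version constraints in your Gemfile.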

Dependency graph

Bundler uses your Gemfile to create what is technically called a “dependency graph”, where many gems have various dependencies on each other. It can be pretty cool to see that dependency graph drawn as a literal graph, and that’s what the bundle viz command does. You need to install GraphViz and the ruby-graphviz gem.

$ brew install graphviz
$ gem install ruby-graphviz
$ bundle viz

Once you’ve done that, though, you get a pretty picture that’s a lot of fun to look at. Here’s the graph for a Gemfile that just contains the Rails gem.
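
If you’d rather explore the graph programmatically, here is a hedged Ruby sketch of the kind of walk bundle viz performs. The gems and edges below are invented, and this is not how bundle viz is actually implemented — it’s just a depth-first traversal over a name-to-dependencies map:

```ruby
# Hypothetical dependency edges: gem name => its direct dependencies.
DEPS = {
  "rails"         => ["actionpack", "activerecord"],
  "actionpack"    => ["rack"],
  "activerecord"  => ["activesupport"],
  "rack"          => [],
  "activesupport" => [],
}

# Depth-first walk that collects each parent -> child edge, visiting
# every gem at most once.
def edges_from(root, deps, seen = {})
  return [] if seen[root]
  seen[root] = true
  deps.fetch(root, []).flat_map do |child|
    [[root, child]] + edges_from(child, deps, seen)
  end
end

edges_from("rails", DEPS).each { |parent, child| puts "#{parent} -> #{child}" }
```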


IRB in your bundle

I have one final handy tip before the big finale: the console command. Running bundle console will open an IRB prompt for you, but it will also load your entire bundle and all the gems in it beforehand. If you want to try experimenting with the gems you use, but don’t have the Rails gem to give you a Rails console, this is a great alternative.

$ bundle console
>> Rack::Server.new
=> #<Rack::Server:0x007fb439037970 @options=nil>

Creating a new gem

Finally, what I think is the biggest and most useful feature of Bundler after installing things. Since Bundler exists to manage gems, the Bundler team is very motivated to make it easy to create and manage gems. It’s really, really easy. You can create a directory with the skeleton of a new gem just by running bundle gem NAME. You’ll get a directory with a gemspec, readme, and lib file to drop your code into. Once you’ve added your code, you can install the gem on your own system to try it out just by running rake install. Once you’re happy with the gem and want to share it with others, pushing a new version of your gem is as easy as rake release. As a side benefit, gems created this way can also easily be used as git gems. That means you (and anyone else using your gem) can fork, edit, and bundle any commit they want to.
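
The generated skeleton looks roughly like this (the file list is abbreviated and varies by Bundler version; mygem is a hypothetical name):

```shell
$ bundle gem mygem
      create  mygem/Gemfile
      create  mygem/Rakefile
      create  mygem/LICENSE.txt
      create  mygem/README.md
      create  mygem/.gitignore
      create  mygem/mygem.gemspec
      create  mygem/lib/mygem.rb
      create  mygem/lib/mygem/version.rb
```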

Step 3: Profit

Now that you know all of the handy stuff Bundler will do for you, I suggest trying it out! Search your bundle, create a gem, edit it with git locals, and release it. As far as I’m concerned, the absolute best thing about Bundler is that it makes it easier for everyone to share useful code, and collaborate to make Ruby better for everyone.

Categories: Programming

Facebook Has An Architectural Governance Challenge

Just to be clear, I don't work for Facebook, I have no active engagements with Facebook, my story here is my own and does not necessarily represent that of IBM. I'd spent a little time at Facebook some time ago, I've talked with a few of its principal developers, and I've studied its architecture. That being said:

Facebook has a looming architectural governance challenge.

When I last visited the company, they had only a hundred or so developers, the bulk of whom fit cozily in one large war room. Honestly, it was nearly indistinguishable from a Really Nice college computer lab: nice work desks, great workstations, places where you could fuel up with caffeine and sugar. Dinner was served right there, so you never needed to leave. Were I a twenty-something with only a dog and a futon to my name, it would have been geek heaven. The code base at the time was, by my estimate, small enough that it was grokable, and the major functional bits were not so large and were sufficiently loosely coupled such that development could proceed along nearly independent threads of progress.

I'll reserve my opinions of Facebook's development and architectural maturity for now. But, I read with interest this article that reports that Facebook plans to double in size in the coming year.

Oh my, the changes they are a comin'.

Let's be clear, there are certain limited conditions under which the maxim "give me PHP and a place to stand, and I will move the world" holds true. Those conditions include having a) a modest code base b) with no legacy friction c) growth, acceptance, and limited competition that mask inefficiencies, d) a hyper energetic, manically focused group of developers e) who all fit pretty much in the same room. Relax any of those constraints, and Developing Really Really Hard just doesn't cut it any more.

Consider: the moment you break a development organization across offices, you introduce communication and coordination challenges. Add the crossing of time zones, and unless you've got some governance in place, architectural rot will slowly creep in and the flaws in your development culture will be magnified. The subtly different development cultures that will evolve in each office will yield subtly different textures of code; it's kind of like the evolutionary drift on which Darwin reported. If your architecture is well-structured, well-syndicated, and well-governed, you can more easily split the work across groups; if your architecture is poorly-structured, held in the tribal memory of only a few, and ungoverned, then you can rely on heroics for a while, but that's unsustainable. Your heroes will dig in, burn out, or cash out.

Just to be clear, I'm not picking on Facebook. What's happening here is a story that every group that's at the threshold of complexity must cross. If you are outsourcing to India or China or across the city, if you are growing your staff to the point where the important architectural decisions no longer will fit in One Guy's Head, if you no longer have the time to just rewrite everything, if your growing customer base grows increasingly intolerant of capricious changes, then, like it or not, you've got to inject more discipline.

Now, I'm not advocating extreme, high ceremony measures. As a start, there are some fundamentals that will go a long way: establish a well-instrumented and well-automated build and release system; use some collaboration tools that channel work but also allow for serendipitous connections; codify and syndicate the system's load-bearing walls/architectural decisions; create a culture of patterns and refactoring.

Remind your developers that what they do, each of them, is valued; remind your developers there is more to life than coding.

It will be interesting to watch how Facebook metabolizes this growth. Some organizations are successful in so doing; many are not. But I really do wish Facebook success. If they thought the past few years were interesting times, my message to them is that the really interesting times are only now beginning. And I hope they enjoy the journey.
Categories: Architecture

How Watson Works

Earlier this year, I conducted an archeological dig on Watson. I applied the techniques I've developed for the Handbook, which involve the use of the UML, Philippe Kruchten's 4+1 View Model, and IBM's Rational Software Architect. The fruits of this work have proven to be useful as groups other than Watson's original developers begin to transform the Watson code base for use in other domains.

You can watch my presentation at IBM Innovate on How Watson Works here.
Categories: Architecture

Books on Computing

Over the past several years, I've immersed myself in the literature of the history and the implications of computing. All told, I've consumed over two hundred books, almost one hundred documentaries, and countless articles and websites - and I have a couple of hundred more books yet to metabolize. I've begun to name the resources I've studied here and so offer them up for your reading pleasure.

I've just begun to enter my collection of books - what you see there now at the time of this blog is just a small number of the books that currently surround me in my geek cave - so stay tuned as this list grows. If you have any particular favorites you think I should study, please let me know.
Categories: Architecture

The Computing Priesthood

At one time, computing was a priesthood, then it became personal; now it is social, but it is becoming more human.

In the early days of modern computing - the 40s, 50s and 60s - computing was a priesthood. Only a few were allowed to commune directly with the machine; all others would give their punched card offerings to the anointed, who would in turn genuflect before their card readers and perform their rituals amid the flashing of lights, the clicking of relays, and the whirring of fans and motors. If the offering was well-received, the anointed would call the communicants forward and in solemn silence hand them printed manuscripts, whose signs and symbols would be studied with fevered brow.

But there arose in the world heretics, the Martin Luthers of computing, who demanded that those glass walls and raised floors be brought down. Most of these heretics cried out for reformation because they once had a personal revelation with a machine; from time to time, a secular individual was allowed full access to an otherwise sacred machine, and therein would experience an epiphany that it was the machines who should serve the individual, not the reverse. Their heresy spread organically until it became dogma. The computer was now personal.

But no computer is an island entire of itself; every computer is a piece of the continent, a part of the main. And so it passed that the computer, while still personal, became social, connected to other computers that were in turn connected to yet others, bringing along their users who delighted in the unexpected consequences of this network effect. We all became part of the web of computed humanity, able to weave our own personal threads in a way that added to this glorious tapestry whose patterns made manifest the noise and the glitter of a frantic global conversation.

It is as if we have created a universe, then as its creators, made the choice to step inside and live within it. And yet, though connected, we remain restless. We now strive to craft devices that amplify us, that look like us, that mimic our intelligence.

Dr. Jeffrey McKee has noted that "every species is a transitional species." It is indeed so; in the co-evolution of computing and humanity, both are in transition. It is no surprise, therefore, that we now turn to re-create computing in our own image, and in that journey we are equally transformed.
Categories: Architecture


No matter what future we may envision, that future relies on software-intensive systems that have not yet been written.

You can now follow me on Twitter.
Categories: Architecture

There Were Giants Upon the Earth

Steve Jobs. Dennis Ritchie. John McCarthy. Tony Sale.

These are men who - save for Steve Jobs - were little known outside the technical community, but without whom computing as we know it today would not be. Dennis created Unix and C; John invented Lisp; Tony continued the legacy of Bletchley Park, where Turing and others toiled in extreme secrecy but whose efforts shortened World War II by two years.

All pioneers of computing.

They will be missed.
Categories: Architecture

Steve Jobs

This generation, this world, was graced with the brilliance of Steve Jobs, a man of integrity who irreversibly changed the nature of computing for the good. His passion for simplicity, elegance, and beauty - even in the invisible - was and is an inspiration for all software developers.

Quote of the day:

Almost everything - all external expectations, all pride, all fear of embarrassment or failure - these things just fall away in the face of death, leaving only what is truly important. Remembering that you are going to die is the best way I know to avoid the trap of thinking you have something to lose. You are already naked. There is no reason not to follow your heart.
Steve Jobs
Categories: Architecture