Software Development Blogs: Programming, Software Testing, Agile, Project Management

Feed aggregator

Crafting a README

Engine Yard Blog - Fri, 04/26/2013 - 19:23

When looking at a project for the first time, the README file is often the first place users go for information on how to work with a program or library. For developers it's often a challenge to figure out what to put in the README, as there is uncertainty about what users expect when reading it. This article introduces a template that I often use for writing README files, based on experience both writing packages and using them. Examples are given in a generic text format, but it is recommended to look into Markdown if the project is intended for sites such as GitHub.

What Does It Do?

The first part should state plain and simple what the project does. If it's meant to replace another project, it should state what the shortcomings are of the other software that caused a different package to be necessary. It should also list any features that make the package stand out.

= Introduction =

This is a Ruby library to interface with the FooBar API. It was created because only Python bindings were previously available. Some notable features:

* OAuth authentication support
* SSL communication
* Results caching to lower API hits
* Supports versions 1.0 and 2.0 of the API

What Is Needed To Use It?

Perhaps the most important item from a software packaging perspective is what is needed to use the package. Unless the package targets a specific OS, dependencies should point to each project's homepage rather than to distribution-specific packages. Distribution-specific packages can, however, be listed in addition to the base requirements so users don't have to search for the right package on their particular distribution.

If the package requires compilation (a Ruby library against C bindings, for example), the build requirements should be provided as well. For packages that run under interpreted languages, the supported language runtime versions should also be indicated (Ruby 1.8/1.9, Python 2.7/3.2, Java 6/7, etc.)

Finally, any modules, libraries, etc. should be listed if they are bound to configuration options and not bundled with the language runtime.

= Requirements =

This code has been run and tested on Ruby 1.8 and Ruby 1.9.

== External Deps ==

* curb (https://github.com/taf2/curb) for curl based calls (allows for setting of custom headers)
* nokogiri (http://nokogiri.org/) for parsing the XML response
* sqlite-ruby (http://sqlite-ruby.rubyforge.org/) for cache storage

== Standard Library Deps ==

* OpenSSL for cryptography functionality
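For Ruby packages, the runtime requirements above are usually mirrored in the gemspec so that installers resolve them automatically. A minimal sketch, assuming the gem and dependency names from the example (the version numbers are invented for illustration):

```ruby
# foobar.gemspec (sketch) -- mirrors the README's requirements section
spec = Gem::Specification.new do |s|
  s.name    = 'foobar'
  s.version = '0.1.0'
  s.summary = 'Ruby bindings for the FooBar API'
  s.required_ruby_version = '>= 1.8.7'  # tested on 1.8 and 1.9

  # External deps listed in the README
  s.add_dependency 'curb'
  s.add_dependency 'nokogiri'
  s.add_dependency 'sqlite-ruby'
end
```

Keeping the README and gemspec lists in sync means a user who reads one never gets surprised by the other.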

How Do I Install It?

Installation may be as simple as a command such as gem install foobar. However, the user may wish to do a source installation, so it's important to show instructions for that as well. Recommended installation instructions for various distributions can also be added if a packaged version of the project is available.

= Installation =

This package is available in RubyGems and can be installed with:

   gem install foobar

For users working with the source from GitHub, you can run:

   rake install

Which will build and install the gem (you may need sudo/root permissions). You can also choose to build the gem manually if you want:

   rake build

Ubuntu users can install this package by executing:

   sudo apt-get install ruby-foobar

Note: If you use Bundler to create a gem through bundle gem, it will generate much of this README content for you.

How Do I Test It?

It's beneficial to both the user and the developer to have a method of testing. This allows users to ensure basic functionality for reporting bugs. It also gives the developer a place to point users to for filtering out any local environment issues preventing the package from working. This should list the commands necessary to test the package.

= Tests =

An RSpec test suite is available to ensure proper API functionality. Note that it uses the staging version of the API, not the production version (to prevent hitting API limits should something go wrong). Tests are set as the default rake target, so you can run them by simply executing `rake`.
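One common way to wire this up is an RSpec rake task in the Rakefile. A sketch, assuming the rspec gem is installed (the spec file pattern is just the conventional default, not tied to any particular project):

```ruby
# Rakefile (sketch): make the RSpec suite the default `rake` target
require 'rspec/core/rake_task'

RSpec::Core::RakeTask.new(:spec) do |t|
  t.pattern = 'spec/**/*_spec.rb'  # where the examples live
end

task :default => :spec  # a bare `rake` now runs the tests
```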

Where Can I Find More Information?

Here is where the project website should be listed. This could be a dedicated site with its own domain name, a listing on RubyDoc.info, or something as simple as a GitHub repository link. It should also include ways to build API documentation if there is any.

= More Information =

More information can be found on the [project website on GitHub](http://github.com/myuser/myproject). There is extensive usage documentation available [on the wiki](https://github.com/myuser/myproject/wiki).

== API Documentation ==

The main API is documented with yardoc, and can be built with a rake task:

   rake yard

from here you can use the yard server to browse the individual gem docs from the source root:

   yard server

or optionally you can run the main yard gem documentation server:

   yard server --gems

and docs can be viewed at `http://localhost:8808/`.

How Do I Use It?

This is by far one of the most important sections. Users often want to see a small piece of code to get them started on basic usage. This can be a simple connection and data loop, or it can be more extensive and show multiple examples for popular usage. Any examples in the source directory should be noted as well.

= Example Usage =

The following shows how to connect to the API and print a list of users:

   # -*- encoding: utf-8 -*-
   require 'foobar'

   api = Foobar::Api.new('[key]', '[secret]')
   api.get_users.each do |user|
     puts "User: #{user.name}"
   end

What Are The License Terms?

This section should list the location of the LICENSE file, as well as what type of license it is. It’s especially important to note cases where there are multiple licenses, or an alternative commercial license available.

= License =

This project is licensed under the MIT license, a copy of which can be found in the LICENSE file.

How Do I Get Support?

For those who want support, the necessary procedures should be explained. This could be anything from a mailing list to pull requests.

= Support =

Users looking for support should file an issue on the GitHub issue tracker (https://github.com/myuser/mypackage/issues), or file a pull request (https://github.com/myuser/mypackage/pulls) if they have a fix available. Those who wish to contribute directly to the project can contact me at <user@email.com> to talk about getting repository access. Support is also available on IRC (#foobar @ Freenode).

This concludes a look at crafting a README file so users can better understand a project. Note that these are what I would consider guidelines, so projects may choose to add more content or remove sections based on individual needs. However, a detailed and well thought out README can go a long way towards encouraging users to try a package, and can even help entice contributors.

Categories: Programming

Announcing: Distill Speakers and Ticket Sales

Engine Yard Blog - Thu, 04/25/2013 - 18:06

We’re thrilled to announce that the Distill website and speaker lineup are live. The first batch of tickets is now officially on sale!

Our vision for this event, first and foremost, is to provide a distillation of best practices, new technologies and progressive methods currently on the rise in software development. Our desire is to create a special forum where these ideas can be shared with an engaged audience of like-minded developers and artists. We received hundreds of amazing submissions and narrowed them down to the luminaries that comprise our excellent lineup. The talks will range from user experience to mobile development, to the Internet of Things and beyond. We’re excited to be able to bring in speakers from Ireland, Italy, Germany and other far-flung locations to inspire our attendees to change the world with their creations. Take a look at the lineup of Distill speakers here.

In addition, we are pleased to welcome Brent and Nolan Bushnell, James Whelton and Michael Lopp as our keynote speakers. They’re sure to inspire you with their depth of experience and captivating stories about the challenges and rewards of entrepreneurship, technology and education. But that’s not all--we’ll be announcing another keynote speaker in the weeks to come.

This two-day event will take place at The Winery SF on Treasure Island in San Francisco. Shuttles will transport you to and from the venue daily, so you can hang out in comfort and style. Distill is about education, cross-pollination and community--it is our hope that you forge new relationships with your fellow attendees and leave the event feeling enriched, edified and inspired. Stay tuned for more announcements--we’ve got plenty more tricks up our sleeve and we can’t wait to share them with you!

The first batch of tickets is now on sale here. There is a limited quantity of first batch tickets so get yours now. Trust us, you don’t want to miss this.


Categories: Programming

The Thinker: Michael Lopp to Keynote Distill

Engine Yard Blog - Tue, 04/23/2013 - 18:00

While we as a community spend time thinking about how to write great code, minimize bugs, determine the right database schema, anticipate platform shifts and more, Michael Lopp is thinking about the mindset of the developers in our community.  Michael thinks deeply about the problems developers face in their everyday lives, how to help developers identify their true goals, what makes them happy and how to achieve happiness, how to lead and how to follow and much more.  Michael has spoken at every FunConf I have hosted since its inception and has consistently given our attendees a more authentic, meaningful way to think about our lives, what we do and the implications.  Michael is one of the great thinkers in our community and we are very pleased to have him be a keynote speaker at Distill.  Here's a little more information on Michael:

Michael Lopp is a director at Palantir Technologies, a Silicon Valley software company dedicated to radicalizing the way the world interacts with data. Before joining Palantir, Lopp was part of the senior leadership team at Apple for nine years, where he led essential parts of the Mac OS X engineering team and subsequently managed the engineering team responsible for Apple's Online Store. Prior to Apple, he worked in engineering leadership at notable Silicon Valley companies such as Netscape, Symantec, and Borland. Lopp is a noted author in Silicon Valley; his blog, “Rands In Repose,” and his books, Managing Humans and Being Geek, are part of a new management and engineering canon.

Distill is a conference to explore the development of inspired applications. Tickets go on sale this week.

Categories: Programming

April 19, 2013: This Week at Engine Yard

Engine Yard Blog - Fri, 04/19/2013 - 19:04

Our hearts and thoughts are with Boston, Waco, and Chicago.

--Tasha Drew, Product Manager

Engineering Updates

Rails 4.0 beta1 (rails-4.0.0.beta1) is now in GA on our platform!

PHP is available in Early Access on Engine Yard Cloud! Learn all about using PHP with Engine Yard Cloud.

Do you PagerDuty? We do! We find it so useful and critical to maintaining a robust on-call system that we extended it to all of our Premium Support customers. This week our Operations Manager, Jamie Bleichner, announced an even deeper integration for Engine Yard’s Premium Support offering, now offering ZenDesk, NewRelic, Pingdom, Splunk, Nagios, and many other integrations out-of-the-box.

Social Calendar (Come say hi!)

Tuesday, April 23rd: Dublin, Ireland: the Mobile App Development Ireland meetup group is meeting to have an iOS development overview class.

Wednesday, April 24th - Friday, April 26th: Chef Conf, San Francisco! Engine Yard is sponsoring and a bunch of us are attending! Hope to see you there.

Thursday, April 25th: Dublin, Ireland: Node.js Dublin will be meeting with two speakers, Dominic Tarr covering “Streams in Node.js,” and Richard Rodger reporting on “The anatomy of an app.”

Thursday, April 25th: PDX Coder Dojo: K-12 students and their parents can play, explore, and learn about coding and building software!

Articles of Interest

An in-depth article all about PHP on Engine Yard Cloud, by Ireland’s product manager extraordinaire Noah Slater!

Treehouse taught us how to work with iOS core and open source frameworks.

Categories: Programming

PHP on Engine Yard Cloud in Early Access

Engine Yard Blog - Wed, 04/17/2013 - 17:55

I’m excited to announce that PHP for Engine Yard Cloud is now in Early Access.

An Early Access release means that the feature is almost ready, and we’re opening up for people to help us test. When that testing is done, this feature is released as General Availability, and the result will be a unified service offering for PHP, Node.js, and Ruby applications.

To access this feature, navigate to the early access section from the toolbar:

Locate and enable the PHP feature:

From here, deploying a PHP application should be just like deploying any other application, though we’ve updated the user interface a little to accommodate multiple languages.

The new app screen now has an “Application Language” dropdown:

Notice also that if you select PHP, you are asked to configure your web root.

If you just want to play around with this and help us test, we recommend you try our sample PHP app for now. This is just a public repository on GitHub that you may fork and modify if you want to test further. (Or submit pull requests if you think they might help new users!)

From there, you can configure your environment as usual:

Note that PHP-FPM is the only application server stack we support for the time being.

Once this is done, and you have booted your environment, you should see:

Once that is done, click on “Visit your application” and you should see:

And voila! PHP on Engine Yard Cloud!

We hope you’re as excited about this as we are. We have a few more things we want to add to this before we make a General Availability release. And we’re hoping that you’ll take some time to test the release and let us know about any problems or feature requests you have.

If you have any issues or questions about this Early Access feature, use the Access Feature Feedback forum, or open a support ticket.

For more information, see the documentation.


Categories: Programming

Don’t Reinvent the Wheel: Working with iOS Core and Open Source Frameworks

Engine Yard Blog - Tue, 04/16/2013 - 22:43

Note: Our friends at Treehouse wrote this great article about mobile development for us. Check them out here.

"We're different!" This is a mantra many organizations will trot out to justify striking out on their own path with some new technology, design, or process. And sometimes it's true. But the question they need to ask next is, "Are we so different as to justify two to four times as much work, a delay in getting to market, and increased maintenance costs for the indefinite future?"

Nowhere is this more apparent than the app ecosystem of the mobile computing world. You can't blame a developer or organization for wanting to be unique, because having a user interface/experience that stands out could be the differentiator between getting your app featured and making a profit or ending up in the scrap heap of apps. But in general, it's better to explore every nook and cranny of the core iOS frameworks and to scour the open source libraries available on GitHub to see what components you can reuse to write your app more quickly, and hopefully with fewer bugs and crashes. My own practical experience backs this up.

A Quick Case Study

I worked on an app as part of a team where we decided to implement a custom navigation system based on the premise that many of our screens followed a similar format:


Sample mockup created using Moqups

On each screen there was a title, an optional subtitle, an optional image, and two or more rows that users could tap on to navigate through different paths. We would store the entire navigation structure of the app in a property list (or "plist") file, design a basic layout in Interface Builder and programmatically add views as needed using a custom View Controller. Each screen would have a few standard properties in the plist file which would be used to drive the behavior of the base View Controller.

The idea sounded great on (electronic) paper. But there were two problems.

  1. One size did not fit all. We tried to make a reusable ViewController that could handle many of the screens in our app. There were, of course, special cases that could not be built by our basic implementation, but more frustrating were those screens that were just similar enough that we added one extra piece of data in the plist file and one extra condition and method in the base ViewController. These one-offs quickly added up and really affected how "standardized" our data was in the plist file.
  2. The challenge of navigating screens in an iPhone app had already been tackled by others, such as the TTNavigator [1] project in an open source library from Facebook called Three20.

TTNavigator wasn't a perfect match for what we wanted to achieve with our custom navigation system. But that is not the point. There wasn't anything special about our approach or, more importantly, the end result. We thought it would make our lives as developers easier, but it ended up taking more time to develop and test and was harder for newcomers to the team to pick up.

What We Should Have Done

Looking back, we should have taken some time to investigate the landscape when we were designing the architecture of the app. In other words, after the user interface and experience were defined by our business and design teams, and we were planning for how development would proceed, we should have looked at what options were available for the types of problems we were trying to solve. Instead, we narrowed in pretty quickly on our design decision and started writing code before fully understanding how it would affect development.

A Better Approach

So as a developer eager to start working on your next great app, how do you even know where to start? Below you will find a practical guide to discovering some of the more common frameworks available that can make your life as an iOS developer so much easier.

Get to Know the iOS Core Frameworks

You don't know what you don't know. In app development (and software development in general) it's easy to focus on the problems at hand and lose touch with the updates made to the platform around you. It is nearly impossible to stay up to date with all the latest features of iOS with each release. Unless...

Check out the release notes for each new version of the SDK. For example, iOS 6 included a pull-to-refresh class called UIRefreshControl that could be used to replace custom or open source solutions like PullToRefresh.

Take a half hour every now and again to review the wealth of components available in the core iOS frameworks. Maybe devote your lunch hour the first Monday of each month to read up on some documentation and release notes. The Apple Developer site has documentation, sample code, and resources for each framework, and this Frameworks page can be sorted in reverse chronological order, so you can easily see what has been added or updated recently.


The sheer number of frameworks is overwhelming at first, but you can narrow things down by concentrating on the areas you are likely to use in professional or personal projects. For example, if an upcoming app is going to be heavy on audio, get familiar with the Audio & Video Starting Point guide and search for "audio" in this list of guides.

Discover Open Source Software (GitHub and Friends)

Good programmers know what to write. Great ones know what to rewrite (and reuse). - Eric S. Raymond

This sums up how we should feel about reusing other people's code. When I was younger I used to prefer writing things from scratch so I could better understand them and have more control. There is some merit to that regarding the "better understanding" part, but I have learned it's clearly better to use and potentially adapt open source projects when they are available.

Here is a quick rundown of some of the potential advantages and disadvantages of using open source software:

Advantages:

* Fewer bugs (well-tested code)
* More functionality than building yourself
* Often see and learn best practices
* Speed of development
* Making the world a better place

Disadvantages:

* Support from the open source community might end
* Might be hard to include in your project
* Possible conflicts with other libraries in your project
* Might have to conform to an open license
* Might be replaced in a new iOS SDK release (ex. PullToRefresh -> UIRefreshControl)

If you have never used an open source framework, there are a ton of useful repositories available on GitHub for iOS development. One way to see which ones are popular and useful is to take a look at the Most Watched tab in the Objective-C section.


It's useful to keep tabs on what is available on GitHub so you know where you can fit it in with your development. But even if you don't stay up to date with the latest repositories, you can easily search for functionality you want in an app. For example, imagine that you want to include a side menu like the one made popular by the Facebook app. You could read up or figure out how to implement it on your own (it's not technically very difficult). Or you could spend a minute searching on GitHub: a quick search for "ios facebook side" brings up a list of implementations, including the popular ViewDeck project.

And of course, you are not just limited to GitHub (though there are a ton of resources there). I know from experience that this multitude of options can be overwhelming. There are some really helpful sites, though, that do the work for you and curate lists of the best open source software available for iOS development.

For the "pragmatic iOS developer", there is a list of custom frameworks at iosframeworks.com. There is also a really good App Dev Wiki that contains helpful lists of frameworks, design patterns, and other resources for iOS and mobile development in general. The libraries are categorized for easy discovery, and many of the more popular libraries and tools are listed there. And for custom UI controls, check out www.cocoacontrols.com. You can even subscribe to a weekly newsletter to passively stay up to date with the latest custom controls.

Bootstraps and Boilerplates

One of the more recent developments in app programming is the advent of very useful templates, bootstraps, and boilerplates (choose your own trendy label). These projects, fashioned after the popular Twitter Bootstrap and HTML5 Boilerplate projects of web development fame, attempt to take away some of the pain of setting up a new project for the first time. For iOS you can use the iOS Boilerplate, which is an Xcode project that incorporates some of the more popular third-party libraries to get you up and running quickly and painlessly.

Share the Love

Don't be afraid to share your ideas with other developers! My greatest accomplishments have all been with the assistance of people smarter than me, and I have been exposed to so much simply by talking with other developers. Whether it's a local meetup of iOS developers like CocoaHeads or a major conference like WWDC, spending some time with people tackling the same problems as you is a great way to learn about new tools and solutions. When I am at a talk or conference I am not trying to retain every line of code that flashes across a projector screen. Rather, I seek out presentations and discussions about things I want to be exposed to or things I feel comfortable sharing about. It doesn't require a huge amount of time, but it's a great way to stay current with trends and learn what is available to help become a better developer.

And while you're at it, contribute back to those open source projects! It used to be that open source code was tightly controlled by programmer-gatekeepers who could make it difficult and intimidating to contribute your work. But with the rise of GitHub and more and more people developing and using open source software, most projects are truly meritocracies where, if you have value to add, you can add it. Fork the code, make your improvements, submit a pull request, and you're done! It's never been easier.

It's All About Efficiency

Don’t reinvent the wheel, unless you plan on learning more about wheels. - Jeff Atwood

One way to be a "better" developer is to be more efficient. And this gets back to the heart of my argument. After a few years of mobile development I have seen both sides of this coin. Writing components from scratch, even with a talented team, can lead to headaches, missed deadlines, and escalating complexity. On the other hand, pulling in a popular open source framework that takes advantage of the latest platform technology and follows software development best practices can lead to a more maintainable app and a more enjoyable development experience. That extra time lets you tackle more interesting problems about how your app looks and functions, which is ultimately what makes for a delightful user experience.

So the next time you fire up Xcode and click on "Create a New Xcode Project", take a minute, or a day, or a few weeks to investigate your options and think about how you can make your life as a developer easier. And make your app better! We are now in the GitHub Generation, and there are so many exciting things we can do because of it. What a wonderful time to be a software developer!

1. Three20 hasn't been updated in more than a year and isn't nearly as useful as before. If you're interested in URL-based routing for app navigation, check out SOCKit.

Categories: Programming

PagerDuty Integration Enhancements

Engine Yard Blog - Mon, 04/15/2013 - 23:29

Last June, Engine Yard launched a partnership with PagerDuty, a notification service that allows Premium Support customers to receive pages, phone calls, emails, or mobile push alerts, routed through an on-call schedule they configure, based on events triggered in our ticketing system.

As we continually strive to improve our customers’ experience, the Engine Yard Support team is happy to announce that this partnership has taken another step forward. We have opened up more integration capabilities with our free PagerDuty offering.

You will now be able to set up integration points with various systems such as New Relic, Pingdom, Splunk, Nagios, and many more, in addition to the existing Zendesk integration. This even includes any of your own in-house software, as PagerDuty works with any system that can send email or make a simple HTTP API call.
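For in-house tooling, that HTTP integration can be as small as building a JSON event and making one POST. A sketch in Ruby (stdlib only); the endpoint URL and field names are assumptions based on PagerDuty's generic events integration at the time of writing, so verify them against the current PagerDuty documentation:

```ruby
require 'json'
require 'net/http'
require 'uri'

# Build a trigger-event payload for PagerDuty's generic events API.
# NOTE: field names are assumptions -- check PagerDuty's integration docs.
def build_event(service_key, description)
  {
    'service_key' => service_key,   # from your PagerDuty service settings
    'event_type'  => 'trigger',
    'description' => description
  }.to_json
end

# POST the payload over HTTPS (endpoint URL is an assumption).
def send_event(payload)
  uri  = URI('https://events.pagerduty.com/generic/2010-04-15/create_event.json')
  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = true
  http.post(uri.path, payload, 'Content-Type' => 'application/json')
end

payload = build_event('YOUR-SERVICE-KEY', 'Disk usage above 90% on app1')
# send_event(payload)  # uncomment to actually page someone
```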

Please see here for more information on PagerDuty integrations.

If you are a current Premium Support customer and have questions, or have yet to take advantage of the PagerDuty integration, please file a ticket and we will be happy to get you set up.

If you have not signed up for Premium Support yet, you can review our offering and sign up details here.

We are excited to be able to provide new monitoring and alerting integrations which will help our Premium Support customers respond to issues even more quickly!

Categories: Programming

Tools are a prerequisite for efficient and effective QA

We now live in a world where testing and quality are becoming more and more important. Last month I had a meeting with senior management at my company and made the statement that "quality is user experience"; in other words, "without the right amount of quality, the user experience will always be low". I think most people in QA and testing will agree with me on that. Even organizations agree on that. Why, then, do we still see so many failures in the software around us? Why do we still create software without the needed quality?

For one, because it's not possible to test 100% of anything! A known issue in QA, but that's not the answer we're looking for. I think the answer is that we still rely too much on old-fashioned manual (functional) testing. As I explained in an earlier blog, we need to move past that. Testing is part of IT and needs to showcase itself as a highly versatile profession. We need to be able to save money, deliver higher quality, shorten time to market, and go live with as few bugs as possible.

How can we do that? There are multiple ways to answer that, but one thing will always be part of the answer: test automation, or industrialization. Tools should be a prerequisite for efficient and effective QA. The question should not be whether to use them, but why not to use them.

Why not use test tools?

The need for test automation has never been higher than now, with Agile approaches within the software development lifecycle. New-generation test tools are easy to use, low cost, or both. Examples I favor are the new Tricentis TOSCA™ Testsuite, Worksoft Certify©, and the SOASTA® Platform, but also the open source tool Selenium. And QA, and IT as a whole, need to go further: use tools not only to automate test execution, performance testing, and security testing, but even more for test specification.

The upcoming modelization of IT enables the usage of tools even further. We can create models and use them to specify test cases (with the help of special tools), create requirements, generate code, and more. IT can benefit from this modelization to help the business go further and achieve its goals. I've written about a good example of this in this blog on fully automated testing.

The tools are the prerequisite, but how can you learn more about them? Well, if you are in the Netherlands at the end of June, you could go to the Test Automation Day. They just published the program on their site to enable you to learn more about test automation.

Categories: Testing & QA

April 12, 2013: This Week at Engine Yard

Engine Yard Blog - Fri, 04/12/2013 - 17:42

Have you remembered to do your taxes? Here’s what we’ve been up to this week!

--Tasha Drew, Product Manager

Engineering Updates

The engineyard gem got a couple bumps this week by maintainer and platform engineer extraordinaire Martin Emde.

The dashboard UI is being updated to prepare to allow customers to select from multiple languages when creating a new app. Application configuration options also now change based on language selection; Engine Yard recommends different application server stacks, for example, if you’re running Node.js. Read all about it, and some other enhancements, in our release notes!

Bug hunting: After observing some issues with booting servers in AWS US-East-1, platform engineer and crowd favorite Josh Lane realized that AWS has subtly changed its address-attach behavior, and DNS name changes are now even more “eventually consistent.” Updates to our code were made to handle this change, and we’ve added more diagnostic checks to catch similar changes more quickly in the future.

Rails 4.0 is in early access! Let us know what you think in the early access feature feedback forum.

Data Data Data

Lead data engineer Ines Sombra is working on her Ricon East presentation, and we hope to see you there in New York! Ping us if you’d like a Friends of Engine Yard discount code. :)

Work continues on our early access Riak on Cloud offering as we move towards the GA launch.

Social Calendar (Come say hi!)

The CFP for the Distill Conference has closed. Thank you all so much for your submissions! Our reviewers are checking them out and, given such a great response, making some tough calls. Don’t forget to give your song requests to application support engineer PJ Hagerty.

Tuesday, April 16th: Enjoy beer, pizza, and PHP at our Dublin office! Talks will cover adding realtime features to PHP apps with Redis, Node and Socket.io (Clay Smith); how TheJournal.ie was scaled (John Needham); and the intricacies of version controlling your infrastructure (our own Ross Duggan).

Wednesday, April 17th: The one and only PJ Hagerty is continuing his world tour, taking “Ruby Groups: Act Locally - Think Globally” to Rhode Island’s Ruby Group!

Thursday, April 18th: Open Data Ireland, exploring "Commercial Exploitation of Open Data for Private Gains,” will be meeting at our Dublin office.

Coming up next week

Ticketing maintenance: Saturday, April 13th from 5:30 PM to 6:30 PM (Pacific Time). If you have any issues contacting us via our ticketing system, please call us at 1-866-518-9273 or contact us via IRC (#engineyard channel on irc.freenode.net -- web client: http://webchat.freenode.net/). More info

Articles of Interest

Platform engineer and surfer supreme Jacob Burkhart returns from his Ancient Ruby exploits and shares “What happened to the Rails 4 Queue API?” on our blog.

Application support engineer and PHP enthusiast Davey Shafik takes on learning Ruby on Rails for a Distill project and shares his lessons and revelations along the way.

The New Yorker reports that Windows 8 has crashed the North Korean missile control computers, and Kim Jong-Un may be declaring war on Microsoft, leading to a complex array of emotional responses around the office.


Categories: Programming

What happened to the Rails 4 Queue API?

Engine Yard Blog - Thu, 04/11/2013 - 19:39

The Queue API in Rails 4 is supposed to be an abstraction layer for background processing. It ships with a basic implementation, but developers are expected to swap out the default backend with something more production ready like Resque or Sidekiq. This standardization should then allow Rails plug-ins (and Rails itself) to perform work asynchronously where it makes sense without having to worry about supporting all of the popular backends.

In preparation for my talk titled "How to fail at Background Jobs", I've been following activity on the Rails 4 Queue API. Recently, the Queue API was removed from the master branch, and pushed off until Rails 4.1 at the earliest.

What follows is my third party attempt to report on why. My main source of information is this commit on GitHub, but I'll also attempt to draw some conclusions based on my own experience with queueing systems.

Another interesting source of information comes in the comments with the very first commit to add a Queueing API to Rails: https://github.com/rails/rails/commit/adff4a706a5d7ad18ef05303461e1a0d848bd662

Basically, I see three failures with the Queue API as currently implemented in the "jobs" branch: https://github.com/rails/rails/tree/jobs

1. The API

The API as implemented is a "nice idea", but it's actually very un-Rails-like when compared to things like ActiveRecord.

Here's an example of enqueuing a job in the existing implementation:

class SignupNotification

  def initialize(user)
    @user = user
  end

  def run
    puts "Notifying #{@user.name}..."
  end
end

Rails.queue.push(SignupNotification.new(user))  # enqueue it via the Queue API

For illustrative purposes only, a more Rails-like API might look like this:

class SignupNotification
  connect_to_queue :important_jobs

  def run(user_id)
    user = User.find(user_id)
    puts "Notifying #{user.name}..."
  end
end



The name of the queue should be a concern of the job, not of the place that enqueued it. Imagine if you wanted to change the queue name: you’d have to change every enqueueing call site to reference the new name.

Also, notice we have to do a little extra work in our implementation of run to fetch the user by ID, instead of having our queueing system Marshal it for us. This leads me to the next failure...

2. Marshal vs. JSON

Jobs are generally run in a different process from the one where they were enqueued, and that means serialization. The simplest choice would seem to be Ruby's built-in Marshal. Rails took the approach of Marshalling the entire job object, while most other libraries serialize the job's class name and arguments to JSON. It's a best practice in most other systems to store as little information as possible about the job in the queue itself: a queue is an ordering system; information should be stored in a database.

The Marshal approach is a slightly nicer API for the developer, but quickly breaks down in practice. Care must be taken not to Marshal objects with too many relationships to other objects, or Procs (which cannot be Marshalled in Ruby unless you are using the niche implementation MagLev).

Finally, Marshalling is not as nice for Ops. Monitoring a running queue in production is much easier when you can inspect the contents of jobs, and JSON is a much more portable format.
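
To make the contrast concrete, here is a minimal sketch; the class name and arguments are illustrative, not taken from Rails or Resque:

```ruby
require "json"

# A Resque-style payload: just the class name and the arguments.
payload = JSON.generate("class" => "SignupNotification", "args" => [42])
puts payload  # human-readable, inspectable with any queue-monitoring tool

# A Marshal dump of the same data is an opaque binary blob.
blob = Marshal.dump(["SignupNotification", [42]])
puts blob.bytes.take(4).inspect  # not something you can eyeball in production
```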

3. Solving the Wrong Problem

It seems one of the major goals of the Rails 4 Queue is to always send e-mails in the background. We could debate whether action_mailer really belongs as part of a Model-View-Controller framework in the first place, but I digress.

Let me re-word that a bit: One of the major goals of Rails 4 Queue is to ensure that the sending of e-mails does not adversely impact web response time.

Generally, this sort of thing is done using a background job system like Resque: you make a job that sends your e-mail. But Rails core thinks we can do better than that: we don't need a background job system if we can just make our web application server do the work after it has finished sending the response to the client.

Here's some terribly ugly and hacky code to demonstrate my point.

Example using thin: https://gist.github.com/jacobo/5164180

Example using Unicorn: https://gist.github.com/jacobo/5164192

If you run these Rack apps and hit them with curl, you'll see that the "e-mail processing" does not interfere with the client receiving a response. But it does tie up these single-threaded web servers: they won't serve the next request until they have finished with the previous after-request job.
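
The gists boil down to the following pattern. Here is a stripped-down sketch of my own (it is not the gist code, and a StringIO stands in for the client socket): the response bytes are written first, but the handler — and with it a single-threaded server — stays busy afterwards:

```ruby
require "stringio"

def send_email
  sleep 0.1  # stand-in for slow e-mail delivery
end

def handle_request(out)
  out.write("HTTP/1.1 200 OK\r\n\r\nhello")  # the client already has its response...
  send_email                                 # ...but this handler is still busy,
end                                          # so the next request has to wait

out = StringIO.new
started = Time.now
handle_request(out)
elapsed = Time.now - started

puts out.string.end_with?("hello")  # the full response was written
puts elapsed >= 0.1                 # yet the handler blocked on the "e-mail"
```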

Another approach might be to use threads, but unless you are on JRuby or Rubinius, you would likely slow down your response processing: the e-mail-sending thread will start executing and using up processing power that would otherwise be used to generate the response.

The only good way to solve this problem is to make changes to Rack itself, but I've yet to see a proposal on exactly what these might be.

In Conclusion

I'm hoping to see the discussion continue. Maybe there's even an opportunity for other community members to step up and propose ideas about what the Rails 4 Queue API should look like. I think getting this right and shipping it will be a huge win for Rails developers everywhere who are currently duplicating effort working on a myriad of background job processing extensions and customizations coupled to their current backend queueing library.

Categories: Programming

Learning Rails (and Ruby)

Engine Yard Blog - Mon, 04/08/2013 - 22:40

I know PHP. I mean, I really know PHP. Not just the syntax, or the idioms and idiosyncrasies, but why. I can tell you why something works the way it does, under the hood; and I was probably around when the decision was made to do it that way. Thirteen years with any language is a long time.

But it hasn't always been PHP. Two years into my PHP journey, I took a small detour and taught myself ColdFusion, which had just transitioned to running on top of the Java EE platform. Which also meant that I dug into Java because you could extend ColdFusion with Java components.

And then of course there was the inevitable delving into JavaScript; add in a healthy dose of CSS, semantic web technologies (RDF, OWL, and SPARQL), XML, XPath, and XSL (XSL-FO and XSLT), and let's not forget SQL. Heck, I can write (and have written) DTDs!

More recently, after starting to work for Engine Yard on the Orchestra PHP Platform, I learned Python. (Yes, we use Python for parts of our PHP stack. Why? It's the best tool for the job.)

I didn't list all of those keywords to make myself sound fancy; I did it because I think it's fair to say that, at this point, I qualify as a polyglot.

I have always explored new tools (be they daemons, utilities, libraries, languages or services) and judged them on a few criteria:

  • How well is the tool written?

  • What is its security track record?

  • How many open bugs does it have?

  • How has the community responded to previous issues (are they open, friendly, courteous, prompt)?

  • Does it have enough features for what I need?

  • Does it have too many features for what I need?

Ultimately, it comes down to: Is it the right tool for the task?

Because of this, ultimately when I come to write a web site, PHP is my tool of choice. Know thy tool well, and it shall treat you well.

Then along came Engine Yard, and I was exposed to just a ton of fantastic engineers who happen to choose Ruby as their tool of choice.

Even still, more than a year after working for Engine Yard, I had yet to pick up Ruby, or Rails. Sure, I've read a lot of Ruby code, for code reviews and out of interest for how something has been done. I've even hacked a little on some Rails stuff, but it was mostly copy-and-paste-and-hum-a-few-bars.

Then along came Distill, and we needed a site. With about 3 weeks to work on it, while still working on other tasks, and with no requirements for technology choice, I would normally have just picked up PHP, and probably Zend Framework 2 and knocked it out in a few days.

Instead, given that I'd been discussing Distill, and what we want to achieve (a focus on solutions, not technologies), for months, I decided to get into the spirit with what looked like an opportunity to try out Rails (and Ruby). This was a small project, with a limited feature set and scope, so I could quickly fall back to PHP if I ran into too many issues. Luckily, surrounded by literally dozens of fantastic, experienced developers, I had a lot of folks I could ask questions of — but as you'll see, I didn't really need much help.

Implementation Details

This blog post is not meant to focus on how I learned Ruby or Rails, but what I learned from the experience. However, I did want to cover some of this too.

Coming from PHP, I definitely encountered some WTFs:

  • Parentheses are optional on method calls, and often not used; but in some cases (such as nested calls) you need them.

  • There are lots of ways to do "if not". These include: if !<condition>, if not <condition> and unless <condition>.

  • Method names can contain ? and !, and there is a convention of ending method names that return a boolean with ?. It's not an operator, it's part of the name, e.g. foo.empty?. Methods ending in ! usually indicate that they modify the object they are called upon, e.g. foo.downcase! modifies foo, while foo.downcase returns the result; for this reason they are known as destructive methods.

  • Returns can be implicit: a method returns the result of the last statement run (and most code I've seen does this).
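
A quick toy example tying those idioms together (my own, not code from the Distill site):

```ruby
name = "davey"

def shout(s)
  s.upcase  # implicit return: the last expression is the result
end

puts shout(name) unless name.empty?  # `unless`, plus the predicate method `empty?`

name.upcase!  # destructive method: modifies name in place
puts name
```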

Then you get something like this (actual code at some point for the Distill website. It may have changed by the time it goes live):

class Speaker < ActiveRecord::Base
  belongs_to :user
  has_many :proposals

  attr_accessible :user, :bio, :email, :name, :id, :website, :photo

  has_attached_file :photo, :styles => { :medium => "300x300>", :thumb => "120x120>" }
  validates_attachment_content_type :photo, :content_type => /^image\/(png|gif|jpeg)/
  validates :bio, :email, :name, :photo, :presence => true
  validates :email, :format => {
    :with => /\A[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]+\z/,
    :message => "Must be a valid email address"
  }

end
Let's break this down:

  • Line 1: We define a class, Speaker that extends (<) the Base class in (::) the ActiveRecord module.

  • Lines 2, 3, 5, 7, 8, 9, and 10 are all method calls

  • Line 15: we close the class definition (end)

"But wait, you said method calls?" Indeed I did! "That's crazy talk! We're still in the class definition!"

First, it's important to note that Ruby has an implicit self, which is like self:: in PHP in that it calls methods statically (that's the easiest equivalency). This means that belongs_to :user can also be expressed as self.belongs_to :user. What's weird here is that these methods (which are inherited) are being called during the definition of the class. Such methods can be defined (e.g. def self.foo) and called (after definition) within the definition of that same class, or inherited from its parent. These methods modify the class object itself.

Aside: while writing this blog post, I actually fully realized what the previous section means, and I had a tweet exchange with fellow Engine Yarder @mkb (which you can see here) that helped solidify what's going on. To summarize: classes are defined by executing code; this means you can programmatically define classes, and you can even work with a class during its definition.

So what I thought was a property (validates), somehow magically defined twice, is actually a method call - there's that lack of parentheses on method calls that I mentioned earlier.
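
A toy version of this mechanism (illustrative names only; this is not ActiveRecord's real implementation) might look like:

```ruby
class Base
  # A class method; subclasses inherit it and can call it in their class body.
  def self.belongs_to(name)
    define_method(name) { "the #{name}" }  # mutates the class being defined
  end
end

class Speaker < Base
  belongs_to :user  # runs *now*, while the class body is being executed
end

puts Speaker.new.user
```

Running this prints "the user": the macro call during the class body defined an ordinary instance method.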

So what happened?

Well, I built a website. A secure, readable (code), usable website. Nothing more than I could have built in PHP, but there were several things that I did that sort of blew my mind.

For the majority of this app, we're talking simple CRUD: show a form, take the input, store the input, and display it later. There were no complex data structures that I had to worry about or anything. Ruby/Rails or PHP/Zend Framework 2: it didn't really matter. I could probably have written this as a bash script.

I read through most of the Getting Started with Rails guide, adapting it to my needs as I went along.

The two parts I thought would be the most challenging:

  1. OAuth with multiple backends (Github, Facebook, and Twitter)

  2. Storing uploaded images on S3.

OAuth with Multiple Backends

To solve this challenge, I used the time-honored practice of Googling. In doing so, I stumbled across Devise and OmniAuth: two gems that implement user authentication and OAuth, respectively.

Integrating these gems for someone who barely knows Ruby or Rails was actually quite tricky, but I got it done and was quite amazed! It didn't just handle the OAuth, it handled routes, views, forms, database schema - pretty much everything. I did have to write custom handlers to deal with pushing the user into the database and handling the data sent back by the service (e.g. name/email), but nothing too difficult.

S3 Image Uploads

Again, I solved this with Google, and came up with the Paperclip gem. A file "attachment" extension to ActiveRecord.

Through Paperclip, I went from not knowing how to implement file uploads in Ruby/Rails to handling them, storing the details in the database, creating multiple thumbnails, and pushing the files to S3, all within minutes. Essentially, after config (which is just specifying the S3 storage adapter, the credentials, and the path) and a rake call, these few lines handled everything:

has_attached_file :photo, :styles => { :medium => "300x300>", :thumb => "120x120>" }
validates_attachment_content_type :photo, :content_type => /^image\/(png|gif|jpeg)/

This names the attachment as "photo", specifies the versions we want (300x300 and 120x120) and validates that they uploaded a png, gif or jpeg.
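
For reference, the config step mentioned above looks roughly like this. This is a hedged sketch built from Paperclip's documented options (:storage, :s3_credentials, :path); the credentials file and path pattern are placeholders, not the site's actual configuration:

```ruby
has_attached_file :photo,
  :styles         => { :medium => "300x300>", :thumb => "120x120>" },
  :storage        => :s3,
  :s3_credentials => "#{Rails.root}/config/s3.yml",  # access key, secret, bucket
  :path           => ":attachment/:id/:style/:filename"
```

This fragment lives in the model, replacing the plain has_attached_file call shown above.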

What did I get out of this?

Well, I still have a ton to learn. Ruby isn't just a different syntax, which was (mostly) my experience with Python. That being said, I feel I can now intelligently talk about some of the benefits of Rails compared to its PHP counterparts.

One of the most significant advantages is the library of amazing gems that work out of the box with Rails and can bring a lot more than more generic PHP libraries, both due to the widespread usage of Rails in the Ruby community and the fact that folks tend to stick to its standard tooling. For example, there is an OAuth component for Zend Framework 2, but it doesn't presume that you're using ZF2 as your controller/router and set up the routes, nor does it generate views, or hook into a specific auth mechanism or database adapter. This, I believe, is what makes Rails (and therefore Ruby) a great tool for rapid development. (If you’re hunting for gems, The Ruby Toolbox lists gems by what they do, and their popularity, which is quite handy!)

I also think we need to separate the framework from the language. Just like PHP, or Python, Ruby is a general purpose language. PHP however was built from the ground up with a primary focus on running in a web environment. But that really doesn't mean much more than it just provides easy access to the web environment (GET/POST/Cookies, built in session handling, PUT/POST raw data, server environment, etc). What this means to me is that I could have built the Distill site in any of these three.

One major factor PHP has going for it on the web is that its shared-nothing architecture is great for horizontal scaling, and it seems better at handling concurrency out of the box — though Ruby is making great strides in this area (with projects like Rubinius and JRuby). That isn't to say Ruby or Python can't, or don't, scale; it's just that scaling them sits further along the learning curve, because there's more to getting it right. Of course, working with (and deploying on) the Engine Yard Cloud means this was a non-issue for me.

So, the language doesn't matter all that much, but what about the framework? Could I have built the same website using Zend Framework 2? Yes. Would it have been easier? In some aspects, specifically the fact I don't know Ruby or Rails that well, sure. However I don't think I could have built out the OAuth and S3 storage as covered here as quickly and easily given all other factors were equal. Now, going back to Zend Framework 2, I find I’m doing a lot more busy-work, such as generating forms, scaffolding, schema updates, etc, at least as a starting point.

Does this mean I'm switching to Ruby/Rails? Unlikely. PHP is still the preferred ink in my pen, simply because knowing it so well means it's an effortless tool to transform my ideas into reality.

Will I turn to Ruby/Rails again? Maybe not for my own projects — I tend to work with friends from the PHP community. But when I'm working with my excellent teammates at Engine Yard, absolutely — for me, the strongest thing Ruby/Rails has going for it, is the community knowledge I have access to. No matter the answer to this question however, learning a new language — and more importantly, learning best practices with that language — hopefully makes me a better developer.

It is very easy to latch on to a language, and a community, and think we are learning because we’re looking at peripheral technologies like database servers, cache storage and web services... and we are learning, but it’s through a single point of view (“The PHP Way” or “The Ruby Way”); a set of blinders based on everything we are comfortable with.

Getting out of your comfort zone, learning a new language (the core part of what brings all of our technologies together), and capturing and discussing ideas with other communities is where you’ll really grow your ability to solve problems: with the best solutions, and the best tools. So get out to meetups or conferences, listen to podcasts, and read articles on other languages, even if you’re not interested in using that language.

I'm excited to have finally put enough time into learning enough about this great tool to work on some amazing technology with my fellow Engine Yarders, and to be able to apply the general concepts I've learned throughout my journey into another medium with more people.

 Distill is a conference to explore the development of inspired applications. The call for papers is now open and tickets go on sale in April.

Categories: Programming

April 5, 2013: This Week at Engine Yard

Engine Yard Blog - Fri, 04/05/2013 - 18:01

Welcome to our first “This Week at Engine Yard” update! This is where we plan to keep you apprised of the weekly events, engineering work, and various interesting articles that we host/push/read every week.

Let me know what you think, and if there’s anything else you’d like to see highlighted!

--Tasha Drew, Product Manager

Engineering Updates

Our new Gentoo distribution has been released into early access. We’ve tested our latest distribution against many typical use cases but we would love you to try out your applications on this stack and send us any feedback.

Check out what new features we have in Early Access and Limited Access (aka behind a feature flag). Highlights include: Chef 10, Gentoo 12.11, Riak Clusters, ELBs, and many others. We love customer feedback on these features, so please let us know what you think in our early access feedback forum!

Data Data Data

Postgres had a big security update released; we immediately posted an update and strongly recommend that you upgrade!

We had a customer set up the first node.js + Riak environment we’ve had on Engine Yard’s Cloud product! We are thrilled to see customers using this in its early access phase and looking forward to continuing to improve Riak as a product as we move towards making it GA.

Social Calendar (Come say hi!)

Distill Conference’s CFP is closing in a few days! Send in a proposal if you have something to share and enjoy San Francisco’s Indian summer, developing, fun, and/or whiskey.

In other Distill news, we just announced that Nolan Bushnell, the founder of Atari and Chuck E. Cheese, will be a keynote speaker!

Coming up next week:

Engine Yard’s own Eamon Leonard will be giving the keynote at Whisky Web II at Airth Castle in Scotland.

We’re sponsoring the awesomely fun LessConf.

The always charming Long Nguyen from our Buffalo, New York office will be presenting at WNY Ruby Brigade’s Tuesday meetup on “Lessons Learned from Rock Climbing.”

Our Dublin office will be actively populating Thursday’s PubStandards, as well as hosting Tuesday’s Dublin Riak! meetup.

Our tech writer Keri Meredith will be attending Basho’s Write the Docs conference in Portland.

This week:

Engine Yard sponsored Mountain West Ruby Conf, where platform engineer Shai Rosenfeld presented “Testing HTTP APIs with Ruby.”

For those who prefer beach to mountain time, platform engineer Jacob Burkhart  presented at Ancient City Ruby with a cautionary tale of “How to Fail at Background Jobs.”

Rounding out what can only be described as a super busy conference week, application engineer PJ Hagerty presented on “Ruby Groups: Think Locally - Act Globally,” capitalizing upon his own experiences growing the WNY Ruby Brigade at Ruby Midwest.

Our Portland office hosted the PDX Women Who Hack and an initial planning meeting for a Portland branch of CoderDojo.

Our Dublin office hosted Ireland’s inaugural PostgreSQL user group (check out the meetup!), and the DevOps Ireland Dublin Monitoring Meetup.

Articles of Interest

We’re increasing our support of Open Source and Rubinius and welcoming Dirkjan Bussink as we sponsor him to work on that project full time!

Jake Luer wrote a comprehensive blog all about how to deliver iOS push notifications using node.js.



Categories: Programming

Engine Yard Expands Support For Rubinius

Engine Yard Blog - Thu, 04/04/2013 - 21:13

I am very pleased to announce that Engine Yard is sponsoring Dirkjan Bussink of Critical Codes to work on Rubinius.

Engine Yard has been a generous supporter of open source Ruby projects, including multiple Ruby implementations and Ruby on Rails, for many years. Indeed, they originally hired Evan Phoenix, the creator of Rubinius, in 2007, and have sponsored my work on Rubinius since 2008. Their sponsorship improves all aspects of the Ruby community, for developers writing Ruby code and for people everywhere who use applications written in Ruby or Rails. I'd like to thank Engine Yard for making Ruby and other open source technologies better for everyone.

Dirkjan has been a contributor to numerous open source projects, and to Rubinius in particular, for many years. He is eager, helpful and all around a joy to work with. We are lucky to have him helping with Rubinius.

With the accolades and appreciation dispensed, I'd like to cover some of what is coming for Rubinius.

Rubinius is an implementation of Ruby. At present, it supports 1.8.7 and 1.9.3 language modes, with support for Ruby 2.0 coming soon. Rubinius is a drop-in replacement for MRI (Matz's Ruby Implementation), including support for C-extensions. Rubinius includes a modern, generational garbage collector, just-in-time compiler to native code using LLVM, and full support for multi-core and multi-CPU hardware with no global interpreter lock.

We are working toward the 2.0 final release for Rubinius. Dirkjan recently visited the Engine Yard office in Portland, OR for a week so we could talk about current and future development in person. I blogged summaries of our discussions: Welcome Dirkjan! and PDX Summit Recap. If you are interested in technical aspects of Rubinius, please see those posts.

Rubinius is available as an Early Access feature in Engine Yard Cloud. If you are currently using Engine Yard Cloud and are interested in learning more about how Rubinius may benefit you, please contact us. There are professional services available to help evaluate the benefits of Rubinius. Engine Yard also offers a free trial if you are not currently using Cloud. We are also working on Rubinius support in other platforms. Dirkjan is available to contract to assist evaluating and migrating to Rubinius.

The future is concurrent. We see this every day with industry's use of technologies like Erlang, Clojure, and Node.js. Rubinius has been built from the beginning to bring Ruby into this concurrent world.

We will be writing more about the technology in Rubinius in the coming weeks. In the meantime, try your application, library, or gem on Rubinius. And don't forget to test on Rubinius on Travis CI. That provides us invaluable feedback. If you have a moment, drop by our #rubinius IRC channel and say hello to Dirkjan.

Categories: Programming

What Happens When You Bring Atari, Chuck E. Cheese and Engine Yard Together?

Engine Yard Blog - Tue, 04/02/2013 - 18:37

When I think about Atari, I'm immediately brought back to my childhood and the many hours spent hunched over my Atari console, gazing into its beautiful 128-color graphics.  In 1972, Nolan Bushnell co-founded Atari with his partner Ted Dabney and released Pong, one of the first video games to reach mainstream popularity.  In 1977 they went on to release the now-famous Atari 2600, forever changing the lives of hundreds of millions of gamers and credited by many with creating the video game industry.  Following Atari, Nolan went on to found many ventures, including Catalyst Technologies, the first technology incubator; Etak, the first car navigation system; Androbot, a personal robotics company; and Chuck E. Cheese!  Nolan is a fearless technology pioneer, entrepreneur and scientist.  His latest venture, Brainrush, is an educational software company that pairs video game technology with real brain science, in a way that Nolan believes will fundamentally change education.  Nolan also just released his latest book.


I had the good fortune of meeting Nolan through his son Brent, also a lifelong engineer and entrepreneur.  Brent brings together his passions for education and live amusement to inspire kids to be makers.  Using software, art, and hardware he creates projects for clients ranging from Google to Disney, and conferences to hotels.  His creations leap beyond the digital screen and into the real world.

We are thrilled to announce Nolan and Brent Bushnell as keynote speakers at Distill, Engine Yard's inaugural developer conference, where they will talk about technology, entrepreneurship and always pushing boundaries.  Here is a little more information on Nolan and Brent:

Nolan Bushnell is best known as the founder of Atari Corporation and Chuck E. Cheese Pizza Time Theater.  Mr. Bushnell is passionate about enhancing and improving the educational process by integrating the latest in brain science, and truly enjoys motivating and inspiring others with his views on entrepreneurship, creativity, innovation and education.  Currently, Mr. Bushnell is devoting his talents to fixing education with his new company, Brainrush.  His beta software is teaching academic subjects at over 10 times the speed in classrooms with over 90% retention. He uses video game metrics to addict learners to academic subjects.  Over the years, Bushnell has garnered many accolades and distinctions.  He was named ASI 1997 Man of the Year, inducted into the Video Game Hall of Fame, inducted into the Consumer Electronics Association Hall of Fame and named one of Newsweek’s “50 Men That Changed America.”  He is also highlighted as one of Silicon Valley’s entrepreneurial icons in “The Revolutionaries” display at the renowned Tech Museum of Innovation in San Jose, California.

Brent plays at making technology fun.  He is currently the CEO of Two Bit Circus, an LA-based idea factory focused on education and amusement.  Previously he was the on-camera inventor for the ABC TV show Extreme Makeover: Home Edition; a founder of Doppelgames, a mobile game platform company that sold to Handmade Mobile in 2012; and a founder of Syyn Labs, a creative engineering collective responsible for the hit OK Go Rube Goldberg machine music video and other large-scale spectacles.  His particular passions include group games, out-of-home entertainment, and inspiring kids via programs such as NFTE.

Distill is a conference to explore the development of inspired applications. The call for papers is now open and tickets go on sale in April.

Categories: Programming

Delivering iOS Push Notifications with Node.js

Engine Yard Blog - Mon, 04/01/2013 - 23:45

Jake Luer is a Node.js developer and consultant focused on building the next generation of mobile and web applications. He is logicalparadox on GitHub and @jakeluer on Twitter.

Mobile is an incredibly important strategy when building applications in today's ecosystem. One of the major challenges facing all application builders, whether start-ups or enterprise, is keeping users engaged. Notifications are the first step in a long checklist of tactics that can be used to do just that.

In today's tutorial we will be building a small Node.js application that covers all of the basics of working with the Apple Push Notification (APN) service. This will include connecting to Apple's unique streaming API, sending several types of notifications, and listening to the unsubscribe feedback service.

This is a JavaScript/Node.js focused tutorial so it does not cover any iOS (Objective-C) programming. However, since we want to be able to test our notifications with an actual iPhone, a sample iOS project has also been prepared and released under an open-source MIT license.

Time Required: ~2-3 hours

Tools Required:

  • Node.js v0.8 or v0.10
  • Code editor of preference for JavaScript files
  • iOS device (push notifications cannot be sent to the simulator)
  • Xcode (for working with the sample iOS app)
  • Apple Developer Account with iOS Provisioning Portal access
Introduction to Apple Push Notification Service

The Apple Push Notification service is actually a set of services that developers can use to send notifications from their server (the provider) to iOS devices. This flow diagram from Apple's documentation illustrates this best.

APN Flow

In actuality, the APN service is two separate components that provide different benefits to a provider. Implementing both is required in a production application.

1. Gateway Component: The gateway component is the TLS connection that a provider should establish to send messages for Apple to process and then forward on to devices. Apple recommends that all providers maintain an "always-on" connection with the gateway service even if no messages are expected to be sent for long periods of time. The service implements a very specific protocol and will disconnect in the event of an error. The Node.js module we are using today handles all binary encoding and implements a number of systems to ease the burden of possible reconnects.

2. Feedback Component: The feedback component is the TLS connection that a provider should occasionally establish to download a list of devices which are no longer accepting notifications for a specific application. All providers will need to implement a feedback workflow before going to production, as Apple monitors a provider's usage of this service to ensure it is not sending unnecessary notifications. The Node.js module we will be using makes it really easy to automate your feedback workflow.
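To make the "very specific protocol" of the gateway concrete, here is a rough sketch of an enhanced-format notification frame from Apple's legacy binary provider interface. This is illustrative only (apnagent builds these frames for you; this is not its source), and the payload limit of 256 bytes reflects the limit at the time of writing:

```javascript
// Rough sketch of the *enhanced* binary notification frame a provider
// writes to the gateway socket. apnagent encodes these for you.
function encodeNotification(id, expiry, token, payload) {
  var json = Buffer.from(JSON.stringify(payload));
  var frame = Buffer.alloc(1 + 4 + 4 + 2 + token.length + 2 + json.length);
  var pos = 0;

  frame.writeUInt8(1, pos); pos += 1;               // command: 1 = enhanced
  frame.writeUInt32BE(id, pos); pos += 4;           // provider-chosen identifier
  frame.writeUInt32BE(expiry, pos); pos += 4;       // epoch seconds, 0 = discard
  frame.writeUInt16BE(token.length, pos); pos += 2; // token length (32)
  token.copy(frame, pos); pos += token.length;      // 32-byte device token
  frame.writeUInt16BE(json.length, pos); pos += 2;  // payload length
  json.copy(frame, pos);                            // JSON payload (max 256 bytes)

  return frame;
}

var token = Buffer.alloc(32, 0xab); // fake 32-byte token for illustration
var frame = encodeNotification(1, 0, token, { aps: { alert: 'Hi' } });
```

When Apple rejects a frame, it responds with an error referencing the identifier field above, which is how the module can tell you exactly which message failed.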

Sample iOS Application

Before we get into creating a Node.js application we need to prepare for the moment we want to send a notification to an actual device. For this tutorial we will be using a sample iOS application that will allow us to inspect the notifications that are received by the device. We won't need to do any Objective-C coding, but we do need to configure the application in Xcode so we can run it on our device.

Furthermore, prior to using APN Agent we will need SSL certificates to establish a secure connection with the APN or Feedback service. In addition to creating our new application and provisioning profile, we will also walk through generating these certificates in a format that APN Agent accepts.

For this section you will need an Apple Developer Account with access to the iOS Provisioning Portal and the latest version of Xcode installed on your local Mac development machine.

1. Application: Log in to the Apple iOS Provisioning Portal and create a new App ID by selecting "App IDs" from the side menu and then the "New App ID" button from the top right. You will need to specify a Bundle Identifier; I suggest using apnagent as the app-name segment of this bundle ID. For example, mine is com.qualiancy.apnagent. You will need to remember this for later.

Create Application

2. Enable APN: From the applications list for your newly created application select "Configure" from the action column. Check the box for "Enable for Apple Push Notification service".


3. Configure: Select "Configure" for the development environment. Follow the wizard's instructions for generating a CSR file. Make sure to save the CSR file in a safe place so that you can reuse it for future certificates.


4. Generate: After you have uploaded your CSR it will generate an "Apple Push Notification service SSL Certificate". Download it to a safe place.


5. Import: Once downloaded, locate the file with Finder and double-click to import the certificate into "Keychain Access". Use the filters on the left to locate your newly imported certificate if it is not visible. It will be listed under the "login" keychain in the "Certificates" category. Once located, right-click and select "Export".


6. Export: When the export dialog appears be sure that the "File Format" is set to ".p12". Name the file according to the environment, such as playground-dev.p12, and save it to a safe place. You will be prompted to enter a password to secure the exported item. This is optional, so leave it empty for this tutorial. You will then be asked for your login password.

7. Provision: Head back to the iOS Provisioning Portal and select "Provisioning" from the left menu and then the "New Profile" button in the top right to create a new Development provisioning profile. Make sure to select the correct App ID and Device. Once created, download the .mobileprovision file and double-click it in Finder to load it into Xcode.

Note: If you have never done a provision before you may not have any "Certificates" or "Devices" listed when you go to create a Provisioning Profile. Consult Apple's documentation or the "Development Provisioning Assistant" from the iOS Provisioning Portal home page to fill in these missing pieces.

8. Clone: Next you will need to clone the apnagent-ios repository and open apnagent.xcodeproj in Xcode.

git clone https://github.com/logicalparadox/apnagent-ios.git
open apnagent-ios/apnagent.xcodeproj

9. Configure Project: The final step is to configure the Xcode project to use your mobile provisioning profile. Under the "Build Settings" for apnagent change the User-Defined BUNDLE_ID setting to the bundle identifier you specified earlier. Then select the "Code Signing Identity" for that bundle identifier.

Configure Project

10. Run: To make sure you have everything configured correctly we are going to run the application. Connect your device, then in the top-left corner of Xcode make sure your device is selected for "Scheme". Then click "Run". If you do not get any build errors and the Xcode log displays your device token, you have configured everything correctly.

Device Token

Node.js Module: APN Agent

APN Agent is a Node.js module that I developed to facilitate sending notifications via the APN service. It is production ready and includes a number of features that make it easy for developers to implement mobile notifications quickly. It also contains several mock helpers which can assist in testing an application or provide feature parity during development.

The major features you can expect to cover today are:

  • Maintaining a persistent connection with the APN gateway and configuring the system for auto-reconnect and error mitigation.
  • Using the chainable message builder to customize outbound messages for all scenarios that Apple accepts.
  • Using the feedback service to flag a device as no longer accepting push notifications.

This tutorial will cover a lot of ground to get a simple application together but might skim over topics that are only relevant in larger deployments. Keep an eye out for links to sections of the module documentation for a deep-dive into certain subjects.

Create Node.js Project

Now that we have our certificate we can begin work on the Node.js application. Today's application will be called apnagent-playground and it will focus only on how to send APN messages, as opposed to building a fully fleshed-out user-centric application. In this section we will:

  1. Establish a connection with the APN gateway service.
  2. Send a simple "Hello world" message to a device.
  3. Explore the many different options for messages that can be sent.
  4. Learn how to mitigate errors that might occur in multi-user applications.
Project Skeleton

Download Project Skeleton (zip)

Here is the file structure we will be working with:

├── _certs
│   └── pfx.p12
├── agent
│   ├── _header.js
│   └── hello.js
├── feedback
│   ├── live.js
│   └── mock.js
├── device.js
└── package.json

1. Certificate: The first thing you will need to do is move the pfx.p12 file you generated earlier into the _certs folder.

2. package.json: Next you will need to populate the package.json file. We will be working with apnagent version 1.0.x. Though this project is stable, when adding apnagent to your own project I encourage you to check the apnagent source-code change log for anything that might have changed since this release.

Here are the important parts of the package.json for those who did not download the skeleton.

{
  "private": true,
  "name": "apnagent-playground",
  "version": "0.0.0",
  "dependencies": {
    "apnagent": "1.0.x"
  }
}

Once your package.json file is populated run npm install to grab the dependencies.

3. Device Token: Since we will be sending messages to an actual device we need to have it easily accessible. Assuming you have the apnagent iOS application open in Xcode, "Run" the application on your connected device. When the application opens it will display the device token in the Xcode log. Copy and paste it into the device.js file. Mine looks like this:

module.exports = "<a1b56d2c 08f621d8 7060da2b c3887246 f17bb200 89a9d44b fb91c7d0 97416b30>";
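apnagent accepts the bracketed, space-separated token exactly as Xcode logs it, so pasting it verbatim works. For illustration only, here is a hypothetical helper (not apnagent's API) showing what that normalization amounts to:

```javascript
// Hypothetical helper: turn the token string that Xcode logs, angle
// brackets and spaces included, into the raw 32-byte buffer that the
// APN binary interface expects.
function tokenToBuffer(str) {
  var hex = str.replace(/[^0-9a-f]/gi, ''); // strip "<", ">", and spaces
  return Buffer.from(hex, 'hex');
}

var raw = tokenToBuffer('<a1b56d2c 08f621d8 7060da2b c3887246 f17bb200 89a9d44b fb91c7d0 97416b30>');
```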
Making the Connection

Now that we have the skeleton configured and our dependencies installed we can focus on establishing a connection. The first file we are going to work with is agent/_header.js. This file will handle loading our credentials and establishing a connection with the APN service. The first thing we need is to require all of our dependencies. We will construct a new apnagent.Agent and assign it to module.exports so we can access it from all of our different playground scenarios.

// Locate your certificate
var join = require('path').join
  , pfx = join(__dirname, '../_certs/pfx.p12');

// Create a new agent
var apnagent = require('apnagent')
  , agent = module.exports = new apnagent.Agent();

Now that we have created our agent we need to configure it with our authentication certificates and environment details.

// set our credentials
agent.set('pfx file', pfx);

// our credentials were for development
agent.enable('sandbox');

For more configuration options such as modifying the reconnect time or using different types of credentials, view the agent configuration documentation.

Finally, we need to establish our connection. apnagent uses custom Error objects whenever possible to best describe the context of a given problem. When using the .connect() method, a possible custom error is the "GatewayAuthorizationError", which could occur if Apple does not accept your credentials or if apnagent has a problem loading them. You can check for apnagent-specific errors by checking the .name property of the Error object.
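Discriminating errors by .name rather than instanceof is a common Node.js convention for custom errors, because it survives module boundaries. A minimal sketch of the pattern (illustrative only; not apnagent's actual implementation):

```javascript
// Minimal named custom error, checked by .name the same way apnagent's
// errors are (not apnagent's actual implementation).
function GatewayAuthorizationError(msg) {
  this.name = 'GatewayAuthorizationError';
  this.message = msg;
}
GatewayAuthorizationError.prototype = Object.create(Error.prototype);

var err = new GatewayAuthorizationError('certificate rejected');
// err.name identifies the error type; err is still instanceof Error
```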

agent.connect(function (err) {
  // gracefully handle auth problems
  if (err && err.name === 'GatewayAuthorizationError') {
    console.log('Authentication Error: %s', err.message);
    process.exit(1);
  }

  // handle any other err (not likely)
  else if (err) {
    throw err;
  }

  // it worked!
  var env = agent.enabled('sandbox')
    ? 'sandbox'
    : 'production';

  console.log('apnagent [%s] gateway connected', env);
});

Now we can test our connection by running the _header.js file. If you receive any message other than "gateway connected" you should revisit the previous steps to ensure you have everything configured successfully. Once you confirm a connection press CTRL-C to stop the process.

$ node agent/_header.js
# apnagent [sandbox] gateway connected

To see this file in full, view it on GitHub: agent/_header.js.

Sending Your First Notification

Now that we have a connection we can send our first message. We will be using a separate file in the agent folder for each message scenario. Our first one is agent/hello.js.

First we need to import our header and device. You will need to do this for all scenarios.

var agent = require('./_header')
  , device = require('../device');

Requiring the _header file will automatically connect to the APN service. Now we can create our first message using the .createMessage() method from our agent. This will create a new message and provide a chainable API to specify message properties. Once we specify all our properties for that message we invoke .send() to dispatch it.

agent.createMessage()
  .device(device)
  .alert('Hello Universe!')
  .send();

To see this file in full, view it on GitHub: agent/hello.js.

Now we need to run this scenario. Make sure that apnagent-ios is running on your device, then:

$ node agent/hello.js

Within moments you should see your notification received:

Screenshot Hello Universe

If you don't receive a notification on your device jump a few paragraphs down to the "Error Mitigation" section for code on how to debug these kinds of issues.

Other Types of Notifications: Badge Numbers

In this next scenario we will set the badge number. The code is rather simple for agent/badge.js:

// Create a badge message
agent.createMessage()
  .device(device)
  .alert('Time to set the badge number to 3.')
  .badge(3)
  .send();

View on GitHub: agent/badge.js.

Keep in mind that different versions of iOS handle badge number messages differently. In iOS v6, the badge will not be displayed automatically if the application is in the foreground. By including an alert body we can not only see the icon badge change but also inspect the payload in apnagent-ios.

Try sending the message while the application is in different states.

Badge Screenshot

Custom Payload Variables

One of the strongest features of the APN gateway service is the ability to include custom variables in your messages. Even though you should not rely on APNs for mission critical information, custom variables provide a way to associate an incoming message with something in your data store.

agent.createMessage()
  .device(device)
  .alert('Custom variables')
  .set('id_1', 12345)
  .set('id_2', 'abcdef')
  .send();

View on GitHub: agent/custom.js.

The .set() method allows you to include your own key/value pairs. These pairs will then be available to the receiving client application.
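To see why these pairs are readable client-side, it helps to look at the general shape of the payload APNs delivers: Apple's reserved keys live under the aps dictionary, and custom pairs sit alongside it at the top level. A sketch of the payload for the message above (apnagent assembles this for you):

```javascript
// The delivered payload shape: reserved keys under "aps", custom
// .set() pairs at the top level of the same dictionary.
var payload = {
  aps: { alert: 'Custom variables' },
  id_1: 12345,
  id_2: 'abcdef'
};

// a client reads custom values straight off the dictionary
var recordId = payload.id_1;
```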

Custom Screenshot

Message Expiration

By default all messages have an expiration value of 0 (zero). This indicates to Apple that if you cannot deliver the message immediately after processing then it should be discarded. For example, if the default is kept then messages to devices which are off or out of service range would not be delivered.

Though useful in some application contexts, there are many cases where this is not the desired behavior. A social networking application may wish to deliver a message at any time, and a calendar application may want a reminder delivered any time within the hour before an event. For these cases you may modify the default expiration value or change it on a per-message basis.
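Under the hood, a shorthand duration like '1d' becomes an absolute epoch-seconds expiry in the notification frame. A rough sketch of that conversion (illustrative only; not apnagent's actual parser):

```javascript
// Illustrative: convert a shorthand duration into the absolute
// epoch-seconds expiry the APN service expects (0 = deliver now or discard).
function toExpiry(shorthand, nowMs) {
  if (!shorthand) return 0; // no expiration: immediate-or-discard semantics
  var units = { s: 1, m: 60, h: 3600, d: 86400 };
  var n = parseInt(shorthand, 10);
  var unit = shorthand.slice(-1);
  return Math.floor(nowMs / 1000) + n * units[unit];
}

var expiry = toExpiry('1h', Date.now()); // one hour from now, in epoch seconds
```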

Here is our agent/expires.js scenario.

// set default to one day
agent.set('expires', '1d');

// send using default 1d
agent.createMessage()
  .device(device)
  .alert('You were invited to a new event.')
  .send();

// use custom for 1 hour
agent.createMessage()
  .device(device)
  .expires('1h')
  .alert('New Event @ 4pm')
  .send();

// set custom no expiration
agent.createMessage()
  .device(device)
  .expires(false)
  .alert('Event happening now!')
  .send();

View on GitHub: agent/expires.js.

Error Mitigation

One behavior of the APN service is that it does not respond for every message sent to confirm it has been received. Instead, it only responds on an error, specifying which error occurred on which message. Furthermore, when an error occurs the service will disconnect and flush its backlog of received data, refusing to process further until a clean connection is made. Any message that was dispatched through the outbound socket after the invalid message will need to be sent again once a new connection has been established. Don't panic! apnagent handles all of this for you.

As you might have noticed, in the .createMessage() examples above a callback was not specified for the .send() method, though the API allows for one to be set. This callback is invoked when a message has been successfully encoded for transfer over the wire; but since the APN service does not confirm that every message has been successfully parsed, managing a callback flow can be tricky. Instead, any errors that the APN service reports will be emitted as the message:error event. Code best demonstrates all of the possible scenarios.

This goes in our agent/_header.js file before we make a connection.

agent.on('message:error', function (err, msg) {
  switch (err.name) {
    // This error occurs when Apple reports an issue parsing the message.
    case 'GatewayNotificationError':
      console.log('[message:error] GatewayNotificationError: %s', err.message);

      // The err.code is the number that Apple reports.
      // Example: 8 means the token supplied is invalid or not subscribed
      // to notifications for your application.
      if (err.code === 8) {
        console.log('    > %s', msg.device().toString());
        // In production you should flag this token as invalid and not
        // send any further messages to it until you confirm validity
      }

      break;

    // This happens when apnagent has a problem encoding the message for transfer
    case 'SerializationError':
      console.log('[message:error] SerializationError: %s', err.message);
      break;

    // unlikely, but could occur if trying to send over a dead socket
    default:
      console.log('[message:error] other error: %s', err.message);
      break;
  }
});

As you can see there is a lot that can go on here; too much to cover in this article. For more information view Apple's APNs documentation for all possible response codes.

Using the Feedback Service

The Feedback Service is the method by which Apple informs a developer which devices should no longer receive push notifications. The primary reason to cease notifications is that the application has been uninstalled from the device.

Working with Feedback Events

APN Agent's Feedback Interface will periodically connect to the APN Feedback Service and download a list of devices that should be marked as unsubscribed. Each "row" in the download consists of the device token and the timestamp at which the uninstall occurred. I recommend that when you gather a token from a device you also store the timestamp of the most recent time that token was reported. This allows you to compare the timestamps to determine if the application was reinstalled since the feedback unsubscribe notice.
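On the wire, each row of the feedback download is a fixed-layout binary tuple: a 4-byte big-endian epoch timestamp, a 2-byte token length, and the token bytes themselves. A sketch of decoding one tuple (apnagent handles this internally; the tuple below is fabricated for illustration):

```javascript
// Decode one feedback tuple: 4-byte timestamp, 2-byte token length,
// then the device token itself.
function parseFeedbackTuple(buf) {
  var time = buf.readUInt32BE(0);
  var len = buf.readUInt16BE(4);
  return {
    timestamp: new Date(time * 1000),
    token: buf.slice(6, 6 + len).toString('hex')
  };
}

// build a fake tuple to decode
var tuple = Buffer.alloc(38);
tuple.writeUInt32BE(1364774400, 0); // 2013-04-01T00:00:00Z
tuple.writeUInt16BE(32, 4);
Buffer.alloc(32, 0xab).copy(tuple, 6);

var row = parseFeedbackTuple(tuple);
```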

There is one "gotcha" that developers should be aware of. The connection that the device maintains to the APN service is disconnected when there are no applications installed that are configured to receive push notifications. The side effect is that if your application is the last one to be uninstalled, the device will NOT notify the APN Feedback service that it was uninstalled. In production this is highly unlikely to occur, but if you are developing an application using the sandbox connection and yours is the only sandbox application on the device, this behavior will occur as well. You have been warned!

Making the Connection

Luckily, APN Agent also has a Mock Feedback interface so we can easily simulate feedback behavior and test our code.

var apnagent = require('apnagent')
  , feedback = new apnagent.MockFeedback();

// poll every 30 seconds instead of the default 30 minutes
feedback.set('interval', '30s');

The .connect() method for the apnagent.MockFeedback simulates the same behavior as the real apnagent.Feedback. It will perform the initial connection and retrieve the unsubscribed list. Each row will be parsed and added to the to-be-processed queue. Once Apple has finished sending the list they will disconnect and the Feedback interface will schedule the next download to occur after the set interval time has elapsed.

Handling Unsubscribed Devices

Once the Feedback interface has received a list of devices it will place each response into a throttled processing queue. Since we have no idea how long this list will be, and reacting to feedback is not as mission-critical as responding to an HTTP request, this throttled queue helps us avoid bottlenecks in any of our Node application's finite resources. By default this queue will process up to ten items in parallel, but for our testing we are going to change the concurrency to 1.

feedback.set('concurrency', 1);
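The throttled queue itself is a generally useful pattern. As a minimal sketch of what concurrency-limited processing means (illustrative only; not apnagent's implementation):

```javascript
// Minimal concurrency-limited queue in the spirit of the throttled
// feedback processor (not apnagent's actual implementation).
function makeQueue(worker, concurrency) {
  var tasks = [], running = 0;

  function drain() {
    // start tasks until we hit the concurrency limit or run out
    while (running < concurrency && tasks.length) {
      running += 1;
      worker(tasks.shift(), function done() {
        running -= 1;
        drain();
      });
    }
  }

  return { push: function (task) { tasks.push(task); drain(); } };
}

var seen = [];
var queue = makeQueue(function (item, done) { seen.push(item); done(); }, 1);
['a', 'b', 'c'].forEach(function (t) { queue.push(t); });
```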

Now we need to instruct the feedback service how to handle any device that has been marked as unsubscribed. The following example is pseudo-code so we can't run it directly. You are welcome to adapt it for your database of choice.

/**
 * @param {apnagent.Device} device token
 * @param {Date} timestamp of unsub
 * @param {Function} done callback
 */

feedback.use(function (device, timestamp, done) {
  var token = device.toString()
    , ts = timestamp.getTime();

  // pseudo db code
  db.query('devices', { token: token, active: true }, function (err, devices) {
    if (err) return done(); // bail
    if (!devices.length) return done(); // no devices match

    // pseudo async forEach helper (not Array#forEach)
    forEach(devices, function (device, next) {
      // this device hasn't pinged our api since it unsubscribed
      if (device.get('timestamp') <= ts) {
        device.set('active', false);
        device.save(next); // pseudo persistence call
      }

      // we have seen this device recently so we don't need to deactivate it
      else {
        next();
      }
    }, done);
  });
});
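The async forEach in the pseudo-code above is not JavaScript's built-in Array#forEach: it iterates with a per-item continuation and calls done once the list is exhausted. A minimal version of such a helper (hypothetical, for illustration):

```javascript
// Minimal async forEach: call iterator(item, next) for each item in
// series, then invoke done once every item has finished.
function asyncForEach(items, iterator, done) {
  var i = 0;
  (function step() {
    if (i >= items.length) return done();
    iterator(items[i++], step);
  })();
}

var visited = [];
asyncForEach(['a', 'b'], function (item, next) {
  visited.push(item);
  next();
}, function () {
  visited.push('done');
});
```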
Testing Feedback Events

Live feedback events are difficult to trigger, as they require the right conditions and Apple's black-box logic might not recognize those conditions for some time. That is why we are using the MockFeedback class for our example. To make it easy to test these scenarios we can push in our own device-timestamp pairs.

Here is an example that will unsubscribe your device:

// pull in your device
var device = require('../device');

// unsub it as of 30 minutes ago
feedback.unsub(device, '-30m');

This will not invoke our .use() statement immediately. Since it fully emulates the live Feedback class, it will wait until the next simulated connection to the feedback service. Since we changed our interval to 30 seconds we won't have to wait very long.

If you have adapted this example to use your own database then you can run it to see what happens. If you would like to see a full-featured example, the apnagent-playground repository has this in its master branch.

Closing Remarks

Today's tutorial covered a lot of ground: connecting to APNs, sending messages, and handling feedback. If you are ready to take this to the next step, the apnagent documentation is the best place to start. For example, there is a full express.js application that implements the MockAgent or live Agent depending on environment, which can serve as the foundation of many production applications.

Please let me know if you have any questions in the comments below. Alternatively, if you run into specific issues with any of the code used in this tutorial, please report them on the relevant project's GitHub Issues.




Categories: Programming

PDX Drinkup Tonight!

Engine Yard Blog - Thu, 03/28/2013 - 19:29

Portland area developers, we're excited to invite you over to the Engine Yard PDX offices tonight for drinks from 6-8pm. We’ll have lots of tasty beverages and snacks, as well as great company, with the likes of our engineering lead Amy Woodward, marathoner extraordinaire Matt Whiteley and marketing guru Mark Gaydos, among many more. Afterward, we’ll be heading over to the GitHub drinkup (8pm and on) which is conveniently stumbling distance from the office. For more information, check this out.


We'll be sponsoring and attending BarCamp PDX - Friday evening and Saturday. BarCamp is a great workshop-like event with content generated entirely by its participants. Topics often focus on, but are not limited to, early-stage web applications and related open source technologies, social protocols, open data formats, and other DIY/hacker/open culture themes. Register for it here: http://barcampportland7.eventbrite.com/

We will also be hosting the PDX Women Who Hack on Sunday afternoon. Women Who Hack is an awesome organization for women of all programming experience levels who just want to hack on projects together. All languages and platforms are welcome.

women who hack

And finally, Wednesday is the first organizational meeting for the Portland CoderDojo. Unfamiliar with CoderDojo? It’s a free and open organization committed to teaching young people how to program. If you're interested in helping get the Dojo off the ground, come join Colin Dabritz and Amy Woodward:  http://calagator.org/events/1250463896

pretty office


Categories: Programming

RVM Autolibs: Automatic Dependency Handling and Ruby 2.0

Engine Yard Blog - Wed, 03/27/2013 - 20:10

Last month marked a very important milestone for Rubyists: the release of Ruby 2.0.0. It comes with a new RubyGems and new dependencies, including OpenSSL. Previously, RVM did not do much to resolve dependencies beyond installing LibYAML, which is required for RubyGems to function properly. The situation changes with OpenSSL, as it is a much bigger dependency. Initially, for Ruby 2.0.0-rc1, RVM was compiling OpenSSL itself. However, compiling OpenSSL is not as easy a task as compiling LibYAML, and it duplicates the effort distribution maintainers already put into shipping a working OpenSSL.

A new approach

To make this work, RVM takes a new approach. It will now work with the system package manager to install required libraries. This is no easy feat, as different systems have different names for packages, with some of them being available by default and some not available at all.

It’s easy when it’s easy

It’s easy to use an existing package manager on any of the systems that have one. The trouble begins when a distribution does not have a default package manager, which is the case for OS X. There are a number of package managers available for it, but none is popular enough to be the de facto standard. With this in mind, RVM needs to find an existing package manager, or install one when none is available.

Sensible defaults

When autolibs was first added, RVM assumed users wanted to have all the work done for them. Unfortunately, we quickly hit the reality that some users know better and still prefer to install dependencies manually. There had to be a compromise to fit both needs. In the end, RVM will by default detect available libraries and fail if they are not available. Users now have the option to switch to other modes, including “do it all for me” and “let me do it myself”.

Do it all for me

Users who want the libraries installed automatically can use autolibs mode 4, aka enable. This tells RVM to find a package manager (installing one if necessary), install all dependencies, and finally use them when compiling rubies. If no package manager is available (on OS X), Homebrew will be installed. However, users can select which package manager should be installed with the autolibs modes osx_port, osx_fink and smf. The smf package manager is the lesser-known SM Framework from RailsInstaller.

For systems with a default package manager, mode 4 is the same as mode 3, which means install missing packages.

Let me do it myself

For users that do not want RVM to do this automatically, there are two modes that will come in handy. Mode 1 instructs RVM to pick the libraries and just show warnings if they are missing. When even the automatic detection is too much, it can be turned off with mode 0. Unfortunately, there is a caveat: because the code is now more dynamic, there is no longer a static list showing everything that is required. Some libraries are picked depending on the current system state, so if users do not use the automated modes (3 or 4), RVM can only report what is missing, not all the dependencies that might be required on similar distributions.

Some tricks

To install RVM with Ruby, Ruby on Rails and all the required libraries (aka. the poor man’s RailsInstaller):

 \curl -L https://get.rvm.io | bash -s stable --rails --autolibs=enable

To use rvm in deployment where sudo requires extra handling like in capistrano:

    task :install_requirements do
      sudo "rvm --autolibs=4 requirements #{rvm_ruby_string}"
    end

    task :install_ruby do
      run "rvm --autolibs=1 install #{rvm_ruby_string}"
    end

You can find more details about autolibs in our docs https://rvm.io/rvm/autolibs.

Let us know

We have been testing the autolibs code for some time now, but as always, bringing it to a wider audience surfaces new cases, detects new flaws, or just creates possible misunderstandings. We want to get those fixed, so please report issues to RVM’s issue tracker (https://github.com/wayneeseguin/rvm/issues) or talk to us on IRC (http://webchat.freenode.net/?channels=rvm).

Thanks for using RVM, and may the autolibs feature improve your Ruby experience.

Other Announcements

Officially opening RVM 2.0 work

RVM 1.19 was the last release in which we included new features (autolibs); all new feature requests will be deferred to RVM 2.0. We will still provide support, fix bugs, and update all software versions until RVM 2.0 is released and marked stable. But to allow work on RVM 2.0 to proceed, we need to freeze the feature set available in RVM 1.x.

Updates to the website!

RVM has long had a disorganized website that simply accumulates information and has become hard to navigate for both maintainers and users. Since we are opening up development on RVM 2.0, we are also opening up development on a brand new site! We hope to clean up and simplify the way you interact with the site, implement a cleaner design using Twitter’s Bootstrap, and make the documentation more like man pages so it can be ported back and forth between RVM and the website, making everything more seamless not only for us, but for users as well.


Categories: Programming

Be the Expert on Your Own Experience

Engine Yard Blog - Tue, 03/26/2013 - 18:09

There are dozens of tech conferences happening this year, and I’d like to encourage you to submit talks and proposals. Some of my colleagues have told me that they are disinclined to submit proposals because they feel like they lack expertise. This all-too-common feeling prevents people from sharing interesting and fresh perspectives at events, and I’d like that to change.

When I first began submitting proposals to conferences, I carefully crafted one very specific talk proposal. It took me about a week. I then proceeded to rewrite the same proposal over and over again as I applied to (and got rejected by) various conferences.

In mid-2012, I applied to Cascadia Ruby, and for the first time ever I submitted more than one proposal. Maybe you could even call it "rage-proposing": Here's a proposal, here's another proposal, how about this other random topic? Take that!

I showed my proposals (current and past) to my co-workers and the responses were all the same. I was told that they were "too vague", "too broad", and "unfocused". I was told that it seemed like I was trying to cover too much stuff in a single talk.

I appreciated the honest feedback, but it was frustrating because I had worked so hard on them. And if I took the extra time to narrow my proposal down to exactly what I would talk about, then the inevitable rejection that followed would be even worse.

So I said to myself: "I'm gonna write a proposal on something they know nothing about: surfing!"

So it came to pass that the first talk I ever gave at a tech conference had absolutely no technical content (except for a slide where I made a poor comparison between being a beginner surfer "kook" to writing a really disgusting method body).

So--what is the lesson here?

Talk about your experiences. Don’t try to reverse-engineer a talk based on what you expect people to be interested in. Propose a talk that speaks to your passions.

I felt pretty fortunate at Cascadia because I was certain I'd be the expert on surfing compared to a room full of programmers. However, I haven't had the same luxury at subsequent talks.

Which brings me to the next lesson:

Instead of being the expert on a topic, be the expert on your own experience. That's it. Who isn't the expert on his or her own experience? (People with amnesia, maybe.)

And that's what I'll be doing in a few weeks when I speak at Ancient City Ruby about "How to Fail at Background Jobs". Yep, I've experienced a lot of fails. To the rest of you reading this, I foolishly promise to provide feedback on your rejections to whatever extent I come in contact with your proposals as a tiny cog helping with Engine Yard’s upcoming conference, Distill.

In conclusion, prepare lots of talk proposals and submit them everywhere. Especially to Distill!

As an aside, I'd like to thank Michał Taszycki of the Railsberry conference for his e-mail explaining why all FIVE of my talk proposals were rejected. Railsberry was the only conference ever to do this, and each of those one-sentence explanations went a long way in helping me improve future proposals.

Distill is a conference to explore the development of inspired applications. The call for papers is now open and tickets go on sale in April.

Categories: Programming

In Case You Missed It: February’s Node.js Meetup

Engine Yard Blog - Fri, 03/22/2013 - 19:01

Recently, we were pleased to host the San Francisco Node.js/Serverside Javascripters meetup at Engine Yard. Didn't make it yourself? Not to worry: we've got video of three awesome presentations given by local Node.js experts. Dig in and enjoy!

1) Matt Pardee of StrongLoop presents "From Inception to Launch in 4 Weeks: A practical, real-world discussion of building StrongLoop's site entirely in Node." This talk addresses which architecture, modules, and hosting StrongLoop chose and how 3rd-party integrations were implemented. Caveats and pitfalls are also discussed.

2) Giovanni Donelli of Essence App presents "Indy Web Dev/Designer Node: A case study on how to design your app with Node.js." Review how he designed and developed an app using Node and deployed it to the cloud. This presentation is targeted at solo designers and independent developers who already have some experience in app design and are trying to understand how to take their app from a device to the cloud.

3) Finally, Daniel Imrie-Situnayake of Green Dot presents "Within the Whale: a story of enterprise Node advocacy from the inside out. How we're promoting Node.js within Green Dot, a large company with a lot at stake." An insightful use case of Node in the enterprise.

If you’d like to learn more about deploying a Node.js app to Engine Yard, check out these resources for best practices and FAQs.

Categories: Programming

James Whelton, Co-Founder of CoderDojo, to Keynote Distill

Engine Yard Blog - Wed, 03/20/2013 - 17:52

I first met James Whelton in 2011 as he was just launching CoderDojo in Ireland. At that time, I saw huge potential in his vision for educating a new generation of developers through free coding clubs worldwide. What further inspired me about James was that he was unencumbered by the magnitude of what he was trying to accomplish and the resources and commitments it would take to accomplish it. Today, just two years later, there are 130 dojos across 22 countries with 10,000 kids learning to code for free each week. One student, 13-year-old Harry Moran, developed Pizzabot, a game that debuted at the top of the iPhone paid downloads charts in Ireland, beating out Angry Birds!

We are very pleased to announce James as one of our keynote speakers at Distill, Engine Yard's inaugural developer conference, where he will talk about inspiring others, dreaming big, reaching your goals, and striving for more. Here is a little more information on James:

James Whelton hails from Cork, Ireland. A 20-year-old developer and social entrepreneur, he is passionate about using technology to improve the world and making the opportunity to experience the awesomeness of coding available to young people everywhere. With a background in iOS and web development, he's ventured into everything from building Twitter-powered doors to proximity-controlled kettles to realtime services to teaching 11-year-olds Objective-C. He was named a Forbes 30 Under 30 in 2013 for social entrepreneurship and Irish Software Association Person of the Year 2012. He likes hacking the system, using code to achieve big things, piña coladas, and getting caught in the rain.

Distill is a conference to explore the development of inspired applications. The call for papers is now open and tickets go on sale in April.

Categories: Programming