
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

May 10, 2013: This Week at Engine Yard

Engine Yard Blog - Sat, 05/11/2013 - 01:53

I’m heading to Ricon East with our lead data engineer, Ines, and dapper platform engineers Josh and Thom! Come say hi!

For our PHP friends, my counterpart Josh Hamilton will be at php[tek] with Davey and PJ, who would also enjoy a friendly “wassup!”

--Tasha Drew, Product Manager

Engineering Updates

Application takeover preferences are now in Early Access. Customers who need a non-standard application takeover scenario can now choose between boot options or disable takeover entirely within the UI.

We have made great progress towards availability of provisioned IOPS on volumes for legacy instances (Riak clusters have had this feature for a while). We are making the feature available to customers in Limited Access this week. We still have some work to do on improving the UX and providing documentation before making it more widely available -- please open a ticket with support if you are interested in checking it out before its Early Access release.

Data Data Data

We continue to enhance the behavior of new Clusters. Rolling backups will be the way to permanently archive data stored in a Riak cluster.

Rolling backups extract data one node at a time while your cluster continues to serve requests. You will be able to manage the extracted backup files from the UI and even see which nodes they came from!

Social Calendar (Come say hi!)

Monday May 13 - Tuesday May 14th: Engine Yard is sponsoring the lightning talks at Ricon East 2013! We will also have a bunch of people in attendance. Come say hi!

Tuesday May 14th - Friday May 17th: php[tek]!!! Davey Shafik will be giving a talk, and we will have a product manager and engineers on hand to join in the festivities.

Tuesday May 14th, San Francisco Office: Product Lover's: PM Fast-Track: What do Product Managers really do?

Tuesday May 14th, Buffalo Office: WNY Ruby: May we all enjoy our Rubies!

Tuesday May 14th, Dublin, Ireland Office: Crafthouse #003: looking at the various ways in which we learn web design and investigate ways to improve upon them.

Wednesday May 15th, PDX Office: Coder Dojo PDX, K-12 Coder Night

Wednesday May 15th, Buffalo Office: Girl Develop It Meetup: Code & Coffee Night!

Thursday May 16th, Buffalo Office: Database Seminar, May is for MongoDB

Thursday May 16th - Friday May 17th: NodePDX

Thursday May 16th, Dublin, Ireland Office: UXPA Ireland: My favorite UX tool!

Articles of Interest

Did you miss Chef Conf? Check out all the videos here!

The East Coast of the USA is about to be overrun by billions of cicadas. Humans will be outnumbered approximately 600:1.

Davey Shafik takes a deep dive into the authentication choices he investigated while building out the Distill website.

Categories: Programming

Authentication: Not necessarily a social activity

Engine Yard Blog - Thu, 05/09/2013 - 21:18

For Distill, Engine Yard's developer conference, we chose to use social authentication to reduce the barrier to entry for our call for papers. We supported Twitter, Facebook and Github.

While developing the site, my concern was that the registration flow be simple, and that it actually work. Once we launched the site, I realized that I had trouble remembering which provider I had signed up with.

Maybe that's just me (I am terribly forgetful!), but I imagine at least a few other people had this issue. Sure, on the backend we can link multiple accounts, but that means users went through the registration process multiple times. This is not optimal.

For those interested in the numbers, here is how the providers stacked up:

  • Github: 59%
  • Twitter: 38%
  • Facebook: 3%

Why did we make this choice? Probably the same reason everyone else does:

Users don't want, or need another fricken login to remember, just use Facebook/Twitter/Google+/LinkedIn/Github/Yahoo!/...

This is the primary argument for using social auth. And let's be honest, who wants to be in charge of Yet Another Login System?

But is social auth the best option? Let's explore that.

In favor of social auth for our users, we have:

  • One less password to remember
  • Possible to revoke access
  • Automagic integration with my online social presence (that I can control… if I know how!)
  • Users are often already logged into their social sites, so they don't even see a login screen — a few redirects and it can be avoided entirely.

In favor of our bespoke system... it's the same thing we've been doing for years.

What does Social Auth mean to our users?

Let's break down what all of these things really mean.

One Less Password

Is this a good thing? Just like using the same password for everything, using the same social account for everything is not necessarily a good thing.

Sure, we can use an 80 character, expanded charset password on that social account, but "guessing" your username and password isn't necessarily the only way in. Server security breaches, man-in-the-middle attacks (mitigated by proper application of SSL), phishing, and social engineering attacks are still out there! Just like with a bespoke system.

I personally use 1Password as a tool to maintain a list of the literally hundreds of accounts I have, and their respective login credentials. I generate a random password for every site I sign up for, and never even think about remembering it.

I have my password database stored in Dropbox (which, yes, means I'm trusting Dropbox's security) so it's available on all my devices, and it can even function standalone with a built-in web interface!

There are plenty of other free tools that will let you do the same thing (e.g. KeePass Password Safe).

For me the one-less-password argument holds little water.

Possible to revoke access

This one is an argument I rarely hear, but is quite important. Most websites (though it's changing, as people get on the free-data bandwagon) do not allow you to delete your account. And there's a good reason for this: Your data is valuable to the website, even if you're not using it! (Remember: if something is free, you [and your data] are what brings value to the business.)

With social auth, you can not only revoke permissions entirely (denying access to your social data that they haven't yet collected), you can revoke permissions partially (assuming the social site implements that).

I think this is quite important.

Automatic Integration with Social Presence

Social integration is arguably the main reason for even using social authentication (other than the lazy factor), and it can definitely bring value to our experience.

The point of being social is sharing things with people, and good online experiences are things we want to share. Making that quick and easy benefits both the user and the business — it's word-of-mouth advertising, and it's priceless.

However, a lot of users don't want to be social. Either with your site in particular ("I don't know this site, why would I want it to see my stuff?") or in general ("I'm a grumpy curmudgeon" or "I don't want the government to spy on me!").

The general answer to this problem is to make social auth optional. Users can sign up for a bespoke account, sign up via social auth, or both.

Unfortunately, I find that when multiple signup options are available — be that bespoke + one social auth, bespoke + multiple social auths, or just multiple social auths — I forget which one I've used. Did I sign in with Github? Or did I create a new account?

1Password does help me in some regard here, because it would have my bespoke credentials available, but with multiple social auth options? It's no help.

Automatic Login

Automatic login is arguably not a good thing — it can allow anyone with access to the computer to not only access the social platform (because you're already logged in), but who knows what else.

Think about the people who leave their Facebook logged in at the Apple Store. Apparently nobody has yet realized you can look at the list of apps they have authenticated with, then simply visit each site and choose to log in with Facebook. Suddenly you have access to their Pandora, Instagram, Klout, and the hundred other apps they've authorized!

Now, it is possible as a developer to require login, at least with Facebook auth — but we usually want it to be easy for our users and don't bother with it!

What about a middle ground?

Is there a middle ground? We've already discussed the pitfalls of providing bespoke + social auth(s), so what else can we do?

I think the best middle ground is to provide bespoke authentication and then, behind that, allow for social connection expressly for the purpose of being social. That is, make it optional.

We can get some of the benefits of social authentication by allowing users who have connected their social accounts to use them for password resets, rather than email. Simply ask them to re-verify their social account, and once they have, you can direct them straight to the password reset form — no emails getting lost, no tokens, simple.

One final option is to use social authentication without requesting access to more than the user's basic information — in particular, without requesting write permissions. Then, later, you can ask for write permissions should the user wish to use that aspect of your site.

Being Socially Responsible — A Social Contract

I have, over the course of thinking about these things, decided how I'm going to interact with my users socially.

  1. Social integration is always optional.
  2. Permissions are granted only as-needed. Ask for the minimum permissions to do the requested action, and no more. When more permissions are needed, ask again.
  3. Always give the user the final say on what gets posted — I will always allow my users to edit the message, and never append automated text — except the URL to the thing they are sharing.
  4. Never automatically post anything. This is really part of the previous one, but it's better to be explicit.

This is my social contract; it will be publicly posted on my site and presented when users look at the option to connect their social accounts. I think this is the responsible way to interact with my users, and to let them interact with folks who will hopefully become my users.

What about you? What's your social contract look like?

We recently announced the Distill speaker lineup and first batch ticket sales.  To learn more, visit

Categories: Programming

Identify and Resolve Issues through Proactive Log Management

Engine Yard Blog - Tue, 05/07/2013 - 20:47

Proactively managing logs can be critical to identifying and resolving issues within an application environment. We're excited to announce that Logentries is now available as an Engine Yard Add-on. Engine Yard customers can try it now for free. More information about Logentries on Engine Yard Cloud can be found here.

Through Logentries, users can monitor logs in real time and get an easy-to-understand view across all their application logs. Logs are analysed and visualised so that you can make sense of large volumes of log data and quickly spot and resolve system warnings or errors. Logentries can also be applied from a business analytics perspective, to understand how many users registered, logged in, made payments, and more over particular time periods.


Engine Yard Cloud customers can get started for free today, including 7 days of storage and a 1GB indexing limit.

Logging metrics is vital to checking the heartbeat of your business. When it comes to logging, it helps to know what you should be looking for. For a more complete explanation of the individual logs you should be concerned with when monitoring your Engine Yard setup, view our blog post Digging Into Engine Yard Logs.

If you are an Engine Yard customer, follow these steps to set up Logentries on your Engine Yard apps:

1. Head to (login required) or navigate to "Logentries" under “Add-ons” in Engine Yard Cloud

2. Click "Set it up"

3. Sign up and follow the instructions for updating your code and deploying

And that’s it! Get ready to enjoy all the benefits that come with getting more insight into your application through proactive log management.

Categories: Programming

May 3, 2013: This Week at Engine Yard

Engine Yard Blog - Fri, 05/03/2013 - 20:29

We’ve finalized some major under-the-hood upgrades at Engine Yard this week that should start showing themselves in public facing features within the next few months! In the meantime, this is what you can actively check out.

--Tasha Drew, Product Manager

Engineering Updates

Improvements to ELB handling are live and in production! Updates include better error handling for a smoother integration and experience.

We have removed Passenger 2 as an option for customers booting new environments because it's really old. Any customer with an environment assigned to the Passenger 2 application server stack has the feature flag enabled and will continue to see it as an option. You are also encouraged to upgrade for all the awesome benefits of Passenger 3.

Engine Yard Cloud customers can now file tickets directly through the Cloud dashboard.

We had a bunch of other minor bumps you can read about in our release notes.

Data Data Data

Riak has been bumped to 1.3.1 as it reaches the last few weeks of its early access phase!

Social Calendar (Come say hi!)

Tuesday, May 7th: Engine Yard’s Dublin, Ireland office will be hosting the second Postgres User Group meetup with Greg Stark, a long-time Postgres contributor and committer as the speaker.

Thursday, May 9th: Coder Dojo in PDX continues to plan how to help teach kids and their parents about how to learn about and explore coding and software. Everyone is encouraged to grab a laptop and jump in!

Thursday, May 9th: Pub Standards in Dublin, Ireland welcomes any and all in-town developers, designers, founders, and people-who-like-to-build-stuff to stop by the Bull & Castle for a beer and a chat.

Articles of Interest

Pricing updates went live, and customers can expect to take advantage of reduced instance pricing on their April bill!

Our friends at TMX posted a thoughtful piece, “In Search of Software Quality.”

Pacific Coast Support team lead and all around awesome guy Ralph Bankston (who sadly has no twitter handle for me to link to) has gone in-depth about how to troubleshoot cron jobs.

Categories: Programming

Troubleshooting Common Problems with Cron Jobs at Engine Yard

Engine Yard Blog - Thu, 05/02/2013 - 22:34

Cron jobs are a basic Unix tool used to run specific commands at specific times. This can be anything from deleting files to starting a script that processes payments in your application, without you having to remember to start the script manually. The most common questions we receive about cron jobs concern: verifying the time at which a cron job is supposed to run, environment and path issues while running rake tasks, and unexpected cron output. Here are examples and solutions to some of these common cron problems.


The most common question we receive is how to verify the time at which a cron job is supposed to run. An important first step in that process is to verify the time zone the server is currently set to. Cron runs based on the system time. Our servers default to UTC, but some of our older servers run on Pacific Time, so you need to ensure you have the correct time zone. You can verify this by checking /etc/localtime or by running date.
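As a quick check from a shell (a minimal sketch; the output will vary by server):

```shell
# Cron interprets schedules in the system time zone.
# Print the full date, then just the zone abbreviation (e.g. UTC or PDT).
date
date +%Z

# The time zone file the system is configured with:
ls -l /etc/localtime
```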

The five scheduling positions are: minute (0 - 59), hour (0 - 23), day of the month (1 - 31), month (1 - 12), and day of the week (0 - 6, with Sunday being 0). A shorthand for this that can be added to the top of a crontab is # min hr DoM m DoW.

*    *    *    *    *    command to be executed
┬    ┬    ┬    ┬    ┬
│    │    │    │    └───── day of week (0 - 7) (0 or 7 is Sunday, or use names)
│    │    │    └────────── month (1 - 12)
│    │    └─────────────── day of month (1 - 31)
│    └──────────────────── hour (0 - 23)
└────────────────────────── minute (0 - 59)


You can also use *, the wildcard, for every possible value of one of the five scheduling fields, and */n to run at fixed intervals. There are also websites that can check the timing for you.
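For example (illustrative crontab entries; the script paths are hypothetical):

```
# Run every 15 minutes
*/15 * * * * /usr/local/bin/cleanup.sh

# Run at 02:30 every Monday
30 2 * * 1 /usr/local/bin/weekly_report.sh

# Run at the top of every hour, Monday through Friday
0 * * * 1-5 /usr/local/bin/sync.sh
```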

Environment and Path Issues

A problem we often see is not setting the proper path when running a rake command. If you are running a rake task, you'll want to make sure you set the environment and, if needed, the path correctly.

Example: A rake task that may work with system gems but not with bundler because of a path and environment issue.

Deploy User Crontab:
30 1 * * * rake ts:index

This code will only index Sphinx if you are using system gems and not Bundler. If you are using Bundler you will want to make sure you are either using bundle exec or calling the binstub executables directly within the application.

Example: A rake task that works.

Deploy User Crontab:
30 1 * * * cd /data/appname/current && RAILS_ENV=production bundle exec rake ts:index

This command calls both the correct path and also sets the RAILS_ENV environment variable so you get expected results based on the Rails environment running. In some instances you may have to specify the full path to rake in the bundled gems which is /usr/local/bin/bundle exec /data/appname/current/ey_bundler_binstubs/rake.

Cron Output

We commonly see cron jobs whose output is either not handled at all or not handled in the expected manner. The choices for cron job output are: no output, a log file of what happened during the rake task, or errors only. The first step in deciding proper output handling is whether you want cron to notify you of anything or whether your command will handle it internally. If you do nothing when creating your cron job and there is output, cron will attempt to send an email; if ssmtp mail is not configured on your instance, the output is written to the dead.letter file instead. If you do not want any output saved from the cron job, append >/dev/null 2>&1 to your command to send both stdout and stderr to /dev/null (/dev/null is a device that discards any data sent to it).
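The redirection operators themselves can be tried out in a plain shell session (a small demo, not Engine Yard specific; the log path is arbitrary):

```shell
# >file sends stdout to the file; 2>&1 then sends stderr to the same place.
{ echo "task output"; echo "task error" 1>&2; } > /tmp/cron_demo.log 2>&1

# Both lines end up in the log:
cat /tmp/cron_demo.log

# Appending >/dev/null 2>&1 to a command instead discards both streams entirely.
```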

Another option is to capture the output of a rake task run with --trace in a verbose log by redirecting it with > /data/deploy/appname/current/log/ts_rake.log 2>&1. The cron job for that would look like this:

30 1 * * * cd /data/appname/current && RAILS_ENV=production bundle exec rake ts:index --trace > /data/deploy/appname/current/log/ts_rake.log 2>&1

The log file will also need to either exist and be writable by the deploy user running the cron job or the user will need write permission to the directory that contains the log file.

It is possible to send the output by email by not discarding standard output with >/dev/null 2>&1. As stated previously, our systems are not set up to send email; that will need to be set up before mail can be delivered.

Cron runs are recorded by default into /var/log/syslog. You can run sudo grep cron /var/log/syslog to look at the cron jobs that have run during the current day. You can check older days by going through the older log files, which are rotated daily.

Cron on Engine Yard

Cron jobs are great for scheduled tasks. There are two important things to remember about running applications on Engine Yard Cloud. The first is that the application master or solo instance is the only instance in an environment on which the dashboard will install cron jobs. This is something to keep in mind if all of your application instances need to run the script, or if the job should be run on a utility instance. The second is that when an application master takeover is initiated, the newly promoted application master does not have the full contents of the previous application master: when a takeover occurs, the cron jobs from the dashboard have not yet been put in place. Pressing the Apply button in the dashboard will properly install the cron jobs from your dashboard onto your new application master.

Categories: Programming

Announcing Lower Engine Yard Pricing: Starting at 5¢/hr!

Engine Yard Blog - Tue, 04/30/2013 - 17:29

Today, we are announcing new lower prices for Engine Yard Cloud including a promotional >50% reduction to our entry level price.  Now, for just 5¢/hour (or $36/month) you can get started with our complete cloud application platform, allowing you to focus on software innovation while we deploy and scale your app in the Cloud.  This new entry price point is below the on-demand price for the bare infrastructure alone - making it more compelling than ever to get complete access to all features of the Engine Yard Cloud application platform and to our acclaimed devops and app support.

In addition to this new promotional price for our entry-level small instance size, we’re announcing varying reductions to all price points with an overall average of 15%.  Some of our most aggressive reductions are to our largest instance sizes, popular for the most demanding commercial-grade applications.  See all the new prices by visiting our updated Cloud Pricing page.  Best of all, the changes are effective April 1st, 2013 so the new pricing will be reflected in your upcoming April bill.

While we continue to add new features and make our cloud architecture more flexible, we are also excited to extend cost reductions as we grow and scale. We hope to give you the industry’s absolute best value with an incredible feature set and unmatched support. We’re also excited about the continuing pace of our growth, thanks to new customers seeing the benefits of our application cloud and our awesome existing customers continuing to innovate and scale their apps with us. Our wicked devops support (as we’re often affectionately called) continues to stand at the ready as a part of your team, and our development team continues to bring you new capabilities every week (check out our new blog series, “This Week at Engine Yard”). With these new price reductions we’re reinforcing our dedication to giving you, our customers, the absolute best value with Engine Yard Cloud!

Not using Engine Yard Cloud yet?  Try it absolutely free for 500 hours.   

Categories: Programming

In Search of Software Quality

Engine Yard Blog - Mon, 04/29/2013 - 18:30

Note: Our friends at TMX wrote this piece about building high quality software and with their permission, we're reposting it here.

I started writing this article about 6 months after our launch. Things had quieted down a bit by then, and I had the opportunity to think through some of the lessons learned and bounce my ideas off the rest of the team. The time was right to start writing lessons for posterity.

I’ve been creating software professionally for over a decade with many different organizations, and never before have I worked with a codebase built to such a high engineering standard. I’m amazed at how many things we had done right, and every day we reap the rewards. It is my hope that these ideas will prove useful to others as they strive to build high-quality software.

But before we can talk about how to achieve software quality, there are three issues we must address first:

The first is just what do I mean by “Quality”? To me, it’s something that is well designed, well engineered, and well built, be it software or anything else. While we may quibble over some details, I believe there are certain universal virtues that underlie high quality software:

  • Correctness: Above and beyond anything else, it must do what it’s designed to do.
  • Robustness: Good software is robust. It handles bad input well, it fails gracefully, it’s resistant to partial failure.
  • Simplicity: Build as simple as you can, but no simpler.
  • Extensibility: In the real world, ever-changing requirements are a fact of life.
  • Scalability: It should be built with provisions for growth, but beware of premature optimization.
  • Transparency: Being able to glance into a running program is a godsend, and will save you much sleep.
  • Elegance: As Eric Raymond wisely noted, the ability to comprehend complexity is intimately tied to the sense of esthetics. Ugly code is plain hard to read, and therefore hard to understand. That makes it a constant source of bugs.

The second question concerns our motivations. Why do we need quality anyway? The simplistic idea that quality is an end in and of itself is incorrect. The real goal is to create value, of which quality is just one aspect. High quality code is easy to debug, easy to refactor and easy to build on top of, which makes it easier to add new functionality. We want quality because it makes it cheaper to add value in the long run.

Unfortunately, as with all things, there are tradeoffs to building high quality software, and this brings us to our third question. Just how much quality do I need? Because there is no escaping the fact that quality has costs. It therefore behooves you to decide just how much quality you need, and live with the consequences. Among other concerns, you must think through the consequences of potential bugs and you must consider the expected lifetime of your codebase. The answers depend on your particular problem domain. What’s needed for a bank would be wrong for a smartphone game, and vice versa. At the end of the day, teams with perfect code never ship.

With that out of the way, you can think of the following list as a distillation of the ideas behind the development process we have evolved. By no means am I implying that we have discovered the only true path. That would be hubristic. But these practices have worked for us, and worked really well.

  • Fast iterations: Boyd’s Law of Iteration states that the speed of iteration is more important than quality of iteration. This is doubly so in software, because marginal deployment costs (rolling out a new release) are in most cases negligible.
  • Automated tests: If you’re not writing unit tests, you can stop reading right now and download whatever is the in-vogue unit test framework for your language of choice. I have just saved your sanity. If you already are, congratulations. You’re on the right path. But to really take you to the next level, unit tests are not enough.
    • You want integration tests, and ideally you want a Continuous Integration (CI) environment.
    • You want empirical measurements of your tests. You need to know for a fact what code is covered and what isn’t. You want to know how long your tests take to run. Now, what level of coverage is acceptable depends on what you’re building. Since in our case it’s financial software, we decided early on to strive for 100%. Last time I checked, we’re averaging a little above 99%. In your case, it may make sense to settle for less.
  • Documentation: No matter how well written and logical your code is, you still want it documented and commented. Your code tells you what it’s doing. Your comments tell you what it should be doing. If the two are out of congruence, you want to detect that as early as possible. Using automated tools to measure documentation coverage is highly recommended.
  • Peer review: I cannot emphasize enough the value brought on by code reviews. From day one, we have had mandatory review by at least one person of any code before it was permitted upstream (in fact, as part of our process, the reviewer merges in the code), and I cannot count the number of times potentially catastrophic lossage was averted. Peer review is our primary mechanism to make sure code is up to par. As an added benefit, it familiarizes engineers with different parts of the code base, and trains them to read code.
  • Metrics and automated code analysis: Static analysis is an invaluable tool. Not only will it report potential bugs, it alerts you to various code smells. Is there too much cyclomatic complexity in one part of the system? Is there too much churn somewhere? Are the levels of test coverage and documentation falling? No coverage in a critical area? It pays to have answers to these questions. Taking this a step further, you want to track this information over time. That way if you detect a negative trend you can take action before it becomes a problem.
  • Iterative, analytical design: A bad design with good test coverage is still a bad design. Conceptual flaws are hard to fix on a live system, especially where you have to deal with persistent data. Thus, if you model your data correctly up-front, you will save yourself much grief. Comparatively speaking, inefficient algorithms are much easier to tackle. Now, there are two points worth noting. First, proper data modeling depends on having accurate requirements, both technical and business. In fact, I will argue that 90% of design is figuring out the requirements. Second, remember to keep the fast iterations. Iterative refinement is key every step of the way, especially when defining requirements with the business stakeholders.
  • Sober assessment of technical debt: Do not take on technical debt without explicitly slating and prioritizing its paydown. Like any other debt, technical debt carries interest, and it behooves you to track how much you’re carrying on your books. Mounting technical debt means you’re building on a substandard foundation, and the longer you wait, the more expensive it will be to excise. Now, keep in mind that not all technical debt is the same. An undocumented constant is mildly annoying but trivial to fix. A bad internal API or antipattern is very annoying, will negatively impact development time, and is much harder to fix. A bad external API has been known to reduce strong men to tears and is very hard to fight. Evaluate and prioritize accordingly.
  • Discipline: This is by far the most important point, so I saved it for last. For these ideas to do you good, your organization needs to practice them consistently, and ideally, you want to do it from day one. The goal is to develop within the organization a culture of discipline, where doing the right thing is the natural, automatic choice.

At first glance, this list may seem daunting. Fear not. There is a powerful advantage working in our favor in that these practices feed off each other, and almost paradoxically, the whole really is greater than the sum of its parts. For example, when I’m reviewing code for a new feature, I know that a) because we have fast iterations, the amount of code I’m looking at is small and comprehensible, b) because it’s covered by automated tests and we have complete test coverage that it is likely correct and I can be reasonably sure there are no regressions, c) because it’s documented it’s easier to understand what the author was trying to do, d) the static analysis tools have identified some of the areas I need to take a closer look at. As a result, not only is the reviewer’s job easier, there is actually more value to the review since there won’t be wasted time dealing with the truly egregious flaws and the reviewer can focus on the sort of issues only a second pair of eyes can detect. There is a genuine synergy here, one that can help take the development organization to the next level.

Categories: Programming

April 26, 2013: This Week at Engine Yard

Engine Yard Blog - Sat, 04/27/2013 - 00:25

We are thrilled to announce that the Distill site is up and running and tickets for the conference are officially on sale!

At Distill, on Treasure Island in San Francisco, our goal is to create a unique developer’s conference with world-class content, focusing on education, cross-pollination, and community.

See the speakers page for our current lineup, with more to come!

--Tasha Drew, Product Manager

Engineering Updates

You can now sign up for Engine Yard Cloud in Japanese!

We are working on polishing up our new Ruby 2.0 stack and plan to have it in early access shortly.

Social Calendar (Come say hi!)

Monday, April 29th - Thursday May 2nd: RailsConf!!! We’re excited to be a sponsor and have the inimitable Shai Rosenfeld (Testing HTTP APIs in Ruby) and ultra suave Edward Chiu (Engine Yard Cloud) presenting.

Articles of Interest

The question is finally answered: Do all Americans have pet eagles?

Categories: Programming

Crafting a README

Engine Yard Blog - Fri, 04/26/2013 - 19:23

When looking at a project for the first time, the README is often the first place users go for information on how to work with a program or library. For developers, it can be a challenge to decide what to put in the README, since it is not always clear what readers expect to find there. This article introduces a template I often use for writing README files, based on experience both writing packages and using them. Examples are given in a generic text format, but it is worth looking into Markdown if the project is intended for sites such as GitHub.

What Does It Do?

The first part should state plain and simple what the project does. If it's meant to replace another project, it should state what the shortcomings are of the other software that caused a different package to be necessary. It should also list any features that make the package stand out.

= Introduction =

This is a Ruby library to interface with the FooBar API. It was created because only Python bindings were previously available. Some notable features:

* OAuth authentication support
* SSL communication
* Results caching to lower API hits
* Supports version 1.0 and 2.0 of the API

What Is Needed To Use It?

Perhaps one of the most important items, from a software packaging perspective, is what is needed to use the package. Unless the package targets a specific OS, dependencies should point to the project homepage and not to a distribution-specific package. However, distribution-specific packages can be listed in addition to the base requirements, so users don't have to hunt for the right package on their particular distribution.

If the package requires compilation (a Ruby library against C bindings, for example), the build requirements should be provided as well. For packages that run under interpreted languages, the required language runtime version should also be indicated (Ruby 1.8/1.9, Python 2.7/3.2, Java 6/7, etc.)

Finally, any modules, libraries, and the like should be listed if they are bound to configuration options rather than bundled with the language runtime.

= Requirements =

This code has been run and tested on Ruby 1.8 and Ruby 1.9.

== External Deps ==

* curb - for curl-based calls (allows setting custom headers)
* nokogiri - for parsing the XML response
* sqlite-ruby - for cache storage

== Standard Library Deps ==

* OpenSSL for cryptography functionality
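A requirements list like the one above goes stale easily. As a side note, a project can report which of its README-listed libraries are actually loadable; this is a hedged sketch (the `missing_dependencies` helper is my own, not part of any library):

```ruby
# Report which of a project's listed dependencies fail to load, so
# installation instructions can point users at exactly what is missing.
def missing_dependencies(libs)
  libs.reject do |lib|
    begin
      require lib
      true   # loaded fine -- not missing
    rescue LoadError
      false  # keep it in the "missing" list
    end
  end
end

# 'json' ships with Ruby; the second name is deliberately bogus.
puts missing_dependencies(%w[json definitely_not_a_real_gem])
# prints: definitely_not_a_real_gem
```

A library could run such a check on load and fail with a message pointing back at the Requirements section, rather than letting users hit a bare `LoadError`.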

How Do I Install It?

Installation may be a simple command, such as gem install foobar. However, some users may wish to install from source, so instructions for that should be shown as well. Recommended installation instructions for various distributions can be added if a packaged version of the project is available.

= Installation =

This package is available in RubyGems and can be installed with:

   gem install foobar

For users working with the source from GitHub, you can run:

   rake install

This builds and installs the gem (you may need sudo/root permissions). You can also choose to build the gem manually:

   rake build

Ubuntu users can install this package by executing:

   sudo apt-get install ruby-foobar

Note: If you use Bundler to create a gem via bundle gem, it will generate much of this README content for you.

How Do I Test It?

It's beneficial to both the user and the developer to have a method of testing. This allows users to ensure basic functionality for reporting bugs. It also gives the developer a place to point users to for filtering out any local environment issues preventing the package from working. This should list the commands necessary to test the package.

= Tests =

An RSpec test suite is available to ensure proper API functionality. Note that it uses the staging version of the API, not the production version (to avoid hitting API limits should something go wrong). The tests are the default rake target, so you can run them by simply executing `rake`.
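The sample above assumes RSpec; for projects that prefer zero external test dependencies, Ruby's bundled Minitest offers the same run-`rake`-and-see-results workflow. A minimal sketch (the FooBar module here is a stand-in for the library under test):

```ruby
# A minimal Minitest suite; Minitest ships with Ruby, so no extra gem
# is needed. FooBar is an illustrative stand-in for the real library.
require 'minitest/autorun'

module FooBar
  def self.greet(name)
    "Hello, #{name}!"
  end
end

class TestFooBar < Minitest::Test
  def test_greet_includes_name
    assert_equal 'Hello, Alice!', FooBar.greet('Alice')
  end
end
```

Either way, the README's job is the same: tell users the one command that runs the whole suite.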

Where Can I Find More Information?

Here is where the project website should be listed. This could be a dedicated site with its own domain name or something as simple as a GitHub repository link. It should also explain how to build the API documentation, if there is any.

= More Information =

More information can be found on the project website on GitHub. There is extensive usage documentation available on the wiki.

== API Documentation ==

The main API is documented with yardoc, and can be built with a rake task:

   rake yard

From here you can use the yard server to browse the individual gem docs from the source root:

   yard server

or optionally you can run the main yard gem documentation server:

   yard server --gems

and docs can be viewed at `http://localhost:8808/`.

How Do I Use It?

This is one of the most important sections. Users often want to see a small piece of code to get started with basic usage. This can be a simple connection and data loop, or something more extensive showing multiple examples of popular usage. Any examples in the source directory should be noted as well.

= Example Usage =

The following shows how to connect to the API and print a list of users:

   # -*- encoding: utf-8 -*-
   require 'foobar'

   api ='[key]', '[secret]')
   api.GetUsers().each do |user|
     puts "User: #{}"
   end

What Are The License Terms?

This section should list the location of the LICENSE file, as well as what type of license it is. It’s especially important to note cases where there are multiple licenses, or an alternative commercial license available.

= License =

This project is licensed under the MIT license, a copy of which can be found in the LICENSE file.

How Do I Get Support?

For those who want support, the necessary procedures should be explained. This could be anything from a mailing list to pull requests.

= Support =

Users looking for support should file an issue on the GitHub issue tracker, or open a pull request if they have a fix available. Those who wish to contribute directly to the project can contact me to discuss getting repository access. Support is also available on IRC (#foobar on Freenode).

This concludes our look at crafting a README file that helps users better understand a project. Note that these are guidelines rather than rules, so projects may choose to add more content or remove sections based on individual needs. However, a detailed and well-thought-out README can go a long way towards encouraging users to try a package, and can even help entice contributors.
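To put the template into practice, the section headings discussed above can even be stubbed out mechanically when starting a new project. A small sketch (the heading list mirrors this article's template):

```ruby
# Generate a stub README using the section headings from this article's
# template, in the same `= Heading =` style used in the examples above.
SECTIONS = [
  'Introduction', 'Requirements', 'Installation', 'Tests',
  'More Information', 'Example Usage', 'License', 'Support'
].freeze

def readme_skeleton
  SECTIONS.map { |name| "= #{name} =\n\nTODO: fill in.\n" }.join("\n")
end

puts readme_skeleton
```

Starting from a skeleton like this makes it harder to forget a section than starting from a blank file.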

Categories: Programming

Announcing: Distill Speakers and Ticket Sales

Engine Yard Blog - Thu, 04/25/2013 - 18:06

We’re thrilled to announce that the Distill website and speaker lineup is live. The first batch of tickets is now officially on sale!

Our vision for this event, first and foremost, is to provide a distillation of best practices, new technologies and progressive methods currently on the rise in software development. Our desire is to create a special forum where these ideas can be shared with an engaged audience of like-minded developers and artists. We received hundreds of amazing submissions and narrowed them down to the luminaries that comprise our excellent lineup. The talks will range from user experience to mobile development, to the Internet of Things and beyond. We’re excited to be able to bring in speakers from Ireland, Italy, Germany and other far-flung locations to inspire our attendees to change the world with their creations. Take a look at the lineup of Distill speakers here.

In addition, we are pleased to welcome Brent and Nolan Bushnell, James Whelton and Michael Lopp as our keynote speakers. They’re sure to inspire you with their depth of experience and captivating stories about the challenges and rewards of entrepreneurship, technology and education. But that’s not all--we’ll be announcing another keynote speaker in the weeks to come.

This two-day event will take place at The Winery SF on Treasure Island in San Francisco. Shuttles will transport you to and from the venue daily, so you can hang out in comfort and style. Distill is about education, cross-pollination and community--it is our hope that you forge new relationships with your fellow attendees and leave the event feeling enriched, edified and inspired. Stay tuned for more announcements--we’ve got plenty more tricks up our sleeve and we can’t wait to share them with you!

The first batch of tickets is now on sale here. There is a limited quantity of first batch tickets so get yours now. Trust us, you don’t want to miss this.


Categories: Programming

The Thinker: Michael Lopp to Keynote Distill

Engine Yard Blog - Tue, 04/23/2013 - 18:00

While we as a community spend time thinking about how to write great code, minimize bugs, determine the right database schema, anticipate platform shifts and more, Michael Lopp is thinking about the mindset of the developers in our community.  Michael thinks deeply about the problems developers face in their everyday lives, how to help developers identify their true goals, what makes them happy and how to achieve happiness, how to lead and how to follow and much more.  Michael has spoken at every FunConf I have hosted since its inception and has consistently given our attendees a more authentic, meaningful way to think about our lives, what we do and the implications.  Michael is one of the great thinkers in our community and we are very pleased to have him be a keynote speaker at Distill.  Here's a little more information on Michael:

Michael Lopp is a director at Palantir Technologies, a Silicon Valley software company dedicated to radicalizing the way the world interacts with data. Before joining Palantir, Lopp was part of the senior leadership team at Apple for nine years, where he led essential parts of the Mac OS X engineering team and subsequently managed the engineering team responsible for Apple's Online Store. Prior to Apple, he worked in engineering leadership at notable Silicon Valley companies such as Netscape, Symantec, and Borland. Lopp is a noted author in Silicon Valley; his blog, “Rands in Repose,” and his books, Managing Humans and Being Geek, are part of a new management and engineering canon.

Distill is a conference to explore the development of inspired applications. Tickets go on sale this week.

Categories: Programming

April 19, 2013: This Week at Engine Yard

Engine Yard Blog - Fri, 04/19/2013 - 19:04

Our hearts and thoughts are with Boston, Waco, and Chicago.

--Tasha Drew, Product Manager

Engineering Updates

Rails 4.0 beta1 (rails-4.0.0.beta1) is now in GA on our platform!

PHP is available in Early Access on Engine Yard Cloud! Learn all about using PHP with Engine Yard Cloud.

Do you PagerDuty? We do! We find it so useful and critical to maintaining a robust on-call system that we extended it to all of our Premium Support customers. This week our Operations Manager, Jamie Bleichner, announced an even deeper integration for Engine Yard’s Premium Support offering, which now includes Zendesk, New Relic, Pingdom, Splunk, Nagios, and many other integrations out of the box.

Social Calendar (Come say hi!)

Tuesday, April 23rd: Dublin, Ireland: the Mobile App Development Ireland meetup group is meeting to have an iOS development overview class.

Wednesday, April 24th - Friday, April 26th: Chef Conf, San Francisco! Engine Yard is sponsoring and a bunch of us are attending! Hope to see you there.

Thursday, April 25th: Dublin, Ireland: Node.js Dublin will be meeting with two speakers, Dominic Tarr covering “Streams in Node.js,” and Richard Roger presenting “The anatomy of an app.”

Thursday, April 25th: PDX Coder Dojo: K-12 students and their parents can play, explore, and learn about coding and building software!

Articles of Interest

An in-depth article all about PHP on Engine Yard Cloud, by Ireland’s product manager extraordinaire Noah Slater!

Treehouse taught us how to work with iOS core and open source frameworks.

Categories: Programming

Scrum Trainer / Senior Fellow Position Available

10x Software Development - Steve McConnell - Wed, 04/17/2013 - 19:38

If you're a highly qualified Scrum Professional, check out our opening for a Scrum Trainer / Senior Fellow. Here is a brief description (follow the link for more details):

Travel the World, Help Teams Adopt Scrum, and Reach Their Full Potential

Share your hard-won lessons learned with others. Work with a staff of world-class software experts including Steve McConnell, author of Code Complete and other software industry classics. Become a part of Construx Software, a company recognized multiple times as being the best small company to work for in Washington state.

Requirements for Scrum Trainer/Consultant

We are looking for candidates who have:

  • A minimum of 10 years of broad and deep experience in software development, including deep subject matter expertise in Scrum.
  • Broad and deep knowledge of current software development in-the-trenches practice, research, and literature.
  • Excellent verbal communication skills including the ability to present to groups of professionals.
  • “Leadership” level understanding of at least two of the following areas: Agile Development, Software Project Management, Software Requirements, Software Process, Software Maintenance, Software Design, Software Construction, Software Test, Software Quality, Software Configuration Management, and Software Tools and Methods.
  • The ability to work both independently and as part of a collaborative team.
  • Willingness to commit to providing excellent service quality.
  • Willingness to spend approximately 50% of your time traveling to client locations in North America, with occasional international trips.
  • An ongoing personal commitment to learning from clients, co-workers, publications, and other sources.

Preferred but not required:

  • Training experience and/or public speaking experience.
  • A four-year degree from an accredited university.
  • Industry certifications including Certified Scrum Trainer, Certified Scrum Coach, Certified Scrum Practitioner, Certified Scrum Master, and Professional Scrum Master.
  • A record of conference presentations.
  • A record of published work in refereed journals, blogs, and/or popular trade publications.

No Training Experience?

Our primary interest is your depth of technical expertise. If you are technically qualified, Construx will provide deep support for developing your training and presentation skills.

Why Construx?

Construx Software is an established industry leader in software development best practices, providing consulting and training services to leading companies worldwide. Construx management has created an environment that empowers employees to perform at their highest levels while maintaining a healthy work-life balance. Low turnover, consistent profitability, and an exceptional work force are reasons this company has been named the best small company to work for in Washington state multiple times. Steve McConnell, Construx CEO, said of his thoughts upon founding the company 17 years ago: "I wanted to create a company that I personally would want to work in the rest of my career."

For more details, and to contact us or apply for the position, please visit here.


PHP on Engine Yard Cloud in Early Access

Engine Yard Blog - Wed, 04/17/2013 - 17:55

I’m excited to announce that PHP for Engine Yard Cloud is now in Early Access.

An Early Access release means that the feature is almost ready, and we’re opening up for people to help us test. When that testing is done, this feature is released as General Availability, and the result will be a unified service offering for PHP, Node.js, and Ruby applications.

To access this feature, navigate to the early access section from the toolbar:

Locate and enable the PHP feature:

From here, deploying a PHP application should be just like deploying any other application, though we’ve updated the user interface a little to accommodate multiple languages.

The new app screen now has an “Application Language” dropdown:

Notice also that if you select PHP, you are asked to configure your web root.

If you just want to play around with this and help us test, we recommend trying our sample PHP app for now. This is a public repository on GitHub that you may fork and modify if you want to test further. (Or submit pull requests if you think they might help new users!)

From there, you can configure your environment as usual:

Note that PHP-FPM is the only application server stack we support for the time being.

Once this is done, and you have booted your environment, you should see:

Once that is done, click on “Visit your application” and you should see:

And voila! PHP on Engine Yard Cloud!

We hope you’re as excited about this as we are. We have a few more things we want to add to this before we make a General Availability release. And we’re hoping that you’ll take some time to test the release and let us know about any problems or feature requests you have.

If you have any issues or questions about this Early Access feature, use the Access Feature Feedback forum, or open a support ticket.

For more information, see the documentation.


Categories: Programming

Don’t Reinvent the Wheel: Working with iOS Core and Open Source Frameworks

Engine Yard Blog - Tue, 04/16/2013 - 22:43

Note: Our friends at Treehouse wrote this great article about mobile development for us. Check them out here.

"We're different!" This is a mantra many organizations will trot out to justify striking out on their own path with some new technology, design, or process. And sometimes it's true. But the question they need to ask next is, "Are we so different as to justify two to four times as much work, a delay in getting to market, and increased maintenance costs for the indefinite future?"

Nowhere is this more apparent than the app ecosystem of the mobile computing world. You can't blame a developer or organization for wanting to be unique, because having a user interface/experience that stands out could be the differentiator between getting your app featured and making a profit or ending up in the scrap heap of apps. But in general, it's better to explore every nook and cranny of the core iOS frameworks and to scour the open source libraries available on GitHub to see what components you can reuse to write your app more quickly, and hopefully with fewer bugs and crashes. My own practical experience backs this up.

A Quick Case Study

I worked on an app as part of a team where we decided to implement a custom navigation system based on the premise that many of our screens followed a similar format:


Sample mockup created using Moqups

On each screen there was a title, an optional subtitle, an optional image, and two or more rows that users could tap on to navigate through different paths. We would store the entire navigation structure of the app in a property list (or "plist") file, design a basic layout in Interface Builder and programmatically add views as needed using a custom View Controller. Each screen would have a few standard properties in the plist file which would be used to drive the behavior of the base View Controller.

The idea sounded great on (electronic) paper. But there were two problems.

  1. One size did not fit all. We tried to make a reusable ViewController that could handle many of the screens in our app. There were, of course, special cases that could not be built by our basic implementation, but more frustrating were those screens that were just similar enough that we added one extra piece of data in the plist file and one extra condition and method in the base ViewController. These one-offs quickly added up and really affected how "standardized" our data was in the plist file.
  2. The challenge of navigating screens in an iPhone app had already been tackled by others, such as the TTNavigator[1] project in an open source library from Facebook called Three20.

TTNavigator wasn't a perfect match for what we wanted to achieve with our custom navigation system. But that is not the point. There wasn't anything special about our approach or, more importantly, the end result. We thought it would make our lives as developers easier, but it ended up taking more time to develop and test and was harder for newcomers to the team to pick up.

What We Should Have Done

Looking back, we should have taken some time to investigate the landscape when we were designing the architecture of the app. In other words, after the user interface and experience were defined by our business and design teams, and we were planning for how development would proceed, we should have looked at what options were available for the types of problems we were trying to solve. Instead, we narrowed in pretty quickly on our design decision and started writing code before fully understanding how it would affect development.

A Better Approach

So as a developer eager to start working on your next great app, how do you even know where to start? Below you will find a practical guide to discovering some of the more common frameworks available that can make your life as an iOS developer so much easier.

Get to Know the iOS Core Frameworks

You don't know what you don't know. In app development (and software development in general) it's easy to focus on the problems at hand and lose touch with the updates made to the platform around you. It is nearly impossible to stay up to date with all the latest features of iOS with each release. Unless...

Check out the release notes for each new version of the SDK. For example, iOS 6 included a pull-to-refresh class called UIRefreshControl that could be used to replace custom or open source solutions like PullToRefresh.

Take a half hour every now and again to review the wealth of components available in the core iOS frameworks. Maybe devote your lunch hour the first Monday of each month to read up on some documentation and release notes. The Apple Developer site has documentation, sample code, and resources for each framework, and this Frameworks page can be sorted in reverse chronological order, so you can easily see what has been added or updated recently.


The sheer number of frameworks is overwhelming at first, but you can narrow things down by concentrating on the areas you are likely to use in professional or personal projects. For example, if an upcoming app is going to be heavy on audio, get familiar with the Audio & Video Starting Point guide and search for "audio" in this list of guides.

Discover Open Source Software (GitHub and Friends)

Good programmers know what to write. Great ones know what to rewrite (and reuse). - Eric S. Raymond

This sums up how we should feel about reusing other people's code. When I was younger I used to prefer writing things from scratch so I could better understand them and have more control. There is some merit to that regarding the "better understanding" part, but I have learned it's clearly better to use and potentially adapt open source projects when they are available.

Here is a quick rundown of some of the potential advantages and disadvantages of using open source software:

Advantages:

* Fewer bugs (well-tested code)
* More functionality than building it yourself
* Often see and learn best practices
* Speed of development
* Making the world a better place

Disadvantages:

* Support from the open source community might end
* Might be hard to include in your project
* Possible conflicts with other libraries in your project
* Might have to conform to an open license
* Might be replaced in a new iOS SDK release (ex. PullToRefresh -> UIRefreshControl)

If you have never used an open source framework, there are a ton of useful repositories available on GitHub for iOS development. One way to see which ones are popular and useful is to take a look at the Most Watched tab in the Objective-C section.


It's useful to keep tabs on what is available on GitHub so you know where you can fit it in with your development. But even if you don't stay up to date with the latest repositories, you can easily search for functionality you want in an app. For example, imagine that you want to include a side menu like the one made popular by the Facebook app. You could read up or figure out how to implement it on your own (it's not technically very difficult). Or you could spend a minute searching on GitHub: a quick search for "ios facebook side" brings up a list of implementations, including the popular ViewDeck project.

And of course, you are not just limited to GitHub (though there are a ton of resources there). I know from experience that this multitude of options can be overwhelming. There are some really helpful sites, though, that do the work for you and curate lists of the best open source software available for iOS development.

For the "pragmatic iOS developer", there is a curated list of custom frameworks. There is also a really good App Dev Wiki with helpful lists of frameworks, design patterns, and other resources for iOS and mobile development in general; the libraries are categorized for easy discovery, and many of the more popular libraries and tools are listed there. For custom UI controls, there is a dedicated catalog site with a weekly newsletter you can subscribe to, letting you passively stay up to date with the latest custom controls.

Bootstraps and Boilerplates

One of the more recent developments in app programming is the advent of very useful templates, bootstraps, and boilerplates (choose your own trendy label). These projects, fashioned after the popular Twitter Bootstrap and HTML5 Boilerplate projects of web development fame, attempt to take away some of the pain of setting up a new project for the first time. For iOS you can use the iOS Boilerplate, which is an Xcode project that incorporates some of the more popular third-party libraries to get you up and running quickly and painlessly.

Share the Love

Don't be afraid to share your ideas with other developers! My greatest accomplishments have all been with the assistance of people smarter than me, and I have been exposed to so much simply by talking with other developers. Whether it's a local meetup of iOS developers like CocoaHeads or a major conference like WWDC, spending some time with people tackling the same problems as you is a great way to learn about new tools and solutions. When I am at a talk or conference, I am not trying to retain every line of code that flashes across a projector screen. Rather, I seek out presentations and discussions about things I want to be exposed to or feel comfortable sharing about. It doesn't require a huge amount of time, but it's a great way to stay current with trends and learn what is available to help you become a better developer.

And while you're at it, contribute back to those open source projects! It used to be that open source code was tightly controlled by programmer-gatekeepers who could make it difficult and intimidating to contribute your work. But with the rise of GitHub and more and more people developing and using open source software, most projects are truly meritocracies where, if you have value to add, you can add it. Fork the code, make your improvements, submit a pull request, and you're done! It's never been easier.

It's All About Efficiency

Don’t reinvent the wheel, unless you plan on learning more about wheels. - Jeff Atwood

One way to be a "better" developer is to be more efficient. And this gets back to the heart of my argument. After a few years of mobile development I have seen both sides of this coin. Writing components from scratch, even with a talented team, can lead to headaches, missed deadlines, and escalating complexity. On the other hand, pulling in a popular open source framework that takes advantage of the latest platform technology and follows software development best practices can lead to a more maintainable app and a more enjoyable development experience. That extra time lets you tackle more interesting problems about how your app looks and functions, which is ultimately what makes for a delightful user experience.

So the next time you fire up Xcode and click on "Create a New Xcode Project", take a minute, or a day, or a few weeks to investigate your options and think about how you can make your life as a developer easier. And make your app better! We are now in the GitHub Generation, and there are so many exciting things we can do because of it. What a wonderful time to be a software developer!

1. Three20 hasn't been updated in more than a year and isn't nearly as useful as before. If you're interested in URL-based routing for app navigation, check out SOCKit.

Categories: Programming

PagerDuty Integration Enhancements

Engine Yard Blog - Mon, 04/15/2013 - 23:29

Last June Engine Yard launched a partnership with PagerDuty, a notification service that lets Premium Support customers receive pages, phone calls, emails, or mobile push alerts, based on events triggered in our ticketing system and an on-call schedule they configure.

As we continually strive to improve our customers’ experience, the Engine Yard Support team is happy to announce that this partnership has taken another step forward. We have opened up more integration capabilities with our free PagerDuty offering.

You will now be able to set up integration points with varying systems such as New Relic, Pingdom, Splunk, Nagios, and many more, in addition to the existing Zendesk integration.  This even includes any of your own in-house software, as PagerDuty works with any system that can send email or make a simple HTTP API call.
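To illustrate the "simple HTTP API call" path for in-house software, here is a hedged sketch of what triggering a PagerDuty incident from Ruby might look like against PagerDuty's generic events endpoint of that era. The service key and description are placeholders, and the actual HTTP call is left commented out; treat this as an assumption-laden sketch, not official integration code.

```ruby
require "json"
require "uri"
require "net/http"

# Hypothetical sketch: build a "trigger" event payload for PagerDuty.
# "SERVICE_KEY" is a placeholder you would copy from your PagerDuty service.
def pagerduty_trigger_payload(service_key, description)
  JSON.generate(
    "service_key" => service_key,
    "event_type"  => "trigger",
    "description" => description
  )
end

payload = pagerduty_trigger_payload("SERVICE_KEY", "Disk usage above 90%")
uri = URI("https://events.pagerduty.com/generic/2010-04-15/create_event.json")
# Net::HTTP.post(uri, payload, "Content-Type" => "application/json")
```

Anything that can build JSON and make an HTTP POST — a cron job, a deploy script, a monitoring check — can open an incident this way.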

Please see here for more information on PagerDuty integrations.

If you are a current Premium Support customer and have questions, or have yet to take advantage of the PagerDuty integration, please file a ticket and we will be happy to get you set up.

If you have not signed up for Premium Support yet, you can review our offering and sign up details here.

We are excited to be able to provide new monitoring and alerting integrations which will help our Premium Support customers respond to issues even more quickly!

Categories: Programming

Tools are a prerequisite for efficient and effective QA

We now live in a world where testing and quality are becoming more and more important. Last month I had a meeting with senior management in my company and made the statement that "quality is user experience"; in other words, without the right amount of quality the user experience will always be low. I think most people in QA and testing will agree with me on that, and even organizations agree. Why, then, do we still see so many failures in the software around us? Why do we still create software without the needed quality?

For one, because it's not possible to test 100% of an application! That is a known issue in QA, but it's not the answer we're looking for. I think the answer is that we still rely too much on old-fashioned manual (functional) testing. As I explained in an earlier blog, we need to move past that. Testing is part of IT and needs to showcase itself as a highly versatile profession. We need to be able to save money, deliver higher quality, shorten time to market, and go live with as few bugs as possible.

How can we do that? There are multiple ways to answer that, but one thing will always be part of the answer: test automation, or industrialization. Tools should be a prerequisite for efficient and effective QA. The question should not be whether to use them, but why not to use them.

Why not use test tools?

The need for test automation has never been as high as it is now, with Agile approaches throughout the software development lifecycle. New-generation test tools are easy to use, low cost, or both. Examples I favor are the new Tricentis TOSCA™ Testsuite, Worksoft Certify©, and the SOASTA® Platform, but also the open source tool Selenium. And QA, and IT as a whole, need to go further: use tools not only to automate test execution, performance testing, and security testing, but even more for test specification.

The upcoming modelization of IT extends the usefulness of tools even further. We can create models and use them (with the help of specialized tools) to specify test cases, create requirements, generate code, and more. IT can benefit from this modelization to help the business go further and achieve its goals. I've written about a good example of this in this blog on fully automated testing.

The tools are the prerequisite, but how can you learn more about them? Well, if you are in the Netherlands at the end of June, you can go to the Test Automation Day. They just published the program on their site so you can learn more about test automation.

Categories: Testing & QA

April 12, 2013: This Week at Engine Yard

Engine Yard Blog - Fri, 04/12/2013 - 17:42

Have you remembered to do your taxes? Here’s what we’ve been up to this week!

--Tasha Drew, Product Manager

Engineering Updates

The engineyard gem got a couple bumps this week by maintainer and platform engineer extraordinaire Martin Emde.

The dashboard UI is being updated to prepare to allow customers to select multiple languages when creating a new app.  Application configuration options are also now changing based on language selection. Engine Yard recommends different application server stacks, for example, if you’re running Node.js. Read all about it and some other enhancements in our release notes!

Bug hunting: After observing some issues with booting servers in AWS US-East-1, platform engineer and crowd favorite Josh Lane realized that AWS has subtly changed address attach behavior, and DNS name changes are now even more "eventually consistent." We updated our code to handle this change, and added more diagnostic checks to catch similar changes more quickly in the future.

Rails 4.0 is in early access! Let us know what you think in the early access feature feedback forum.

Data Data Data

Lead data engineer Ines Sombra is working on her Ricon East presentation, and we hope to see you there in New York! Ping us if you’d like a Friends of Engine Yard discount code. :)

Work continues on our early access Riak on Cloud offering as we move towards the GA launch.

Social Calendar (Come say hi!)

The CFP for the Distill Conference has closed. Thank you all so much for your submissions! With such a great response, our reviewers have some tough calls to make. Don't forget to give your song requests to application support engineer PJ Hagerty.

Tuesday, April 16th: Enjoy beer, pizza, and php at our Dublin office! Clay Smith will cover adding realtime features to PHP apps with Redis and Node; John Needham will discuss how he worked to scale; and our own Ross Duggan will dive into the intricacies of version controlling your infrastructure.

Wednesday, April 17th: The one and only PJ Hagerty is continuing his world tour, taking “Ruby Groups: Act Locally - Think Globally” to Rhode Island’s Ruby Group!

Thursday, April 18th: Open Data Ireland, exploring "Commercial Exploitation of Open Data for Private Gains,” will be meeting at our Dublin office.

Coming up next week

Ticketing maintenance: Saturday, April 13th from 5:30 PM to 6:30 PM (Pacific Time). If you have any issues contacting us via our ticketing system during this window, please call us at 1-866-518-9273 or reach us via IRC (#engineyard channel).

Articles of Interest

Platform engineer and surfer supreme Jacob Burkhart returns from his Ancient Ruby exploits and shares “What happened to the Rails 4 Queue API?” on our blog.

Application support engineer and php enthusiast Davey Shafik takes on learning Ruby on Rails for a Distill project and shares his lessons and revelations along the way.

The New Yorker reports that Windows 8 has crashed the North Korean missile control computers, and Kim Jong-Un may be declaring war on Microsoft, leading to a complex array of emotional responses around the office.


Categories: Programming

What happened to the Rails 4 Queue API?

Engine Yard Blog - Thu, 04/11/2013 - 19:39

The Queue API in Rails 4 is supposed to be an abstraction layer for background processing. It ships with a basic implementation, but developers are expected to swap out the default backend with something more production ready like Resque or Sidekiq. This standardization should then allow Rails plug-ins (and Rails itself) to perform work asynchronously where it makes sense without having to worry about supporting all of the popular backends.

In preparation for my talk titled "How to fail at Background Jobs", I've been following activity on the Rails 4 Queue API. Recently, the Queue API was removed from the master branch, and pushed off until Rails 4.1 at the earliest.

What follows is my third party attempt to report on why. My main source of information is this commit on GitHub, but I'll also attempt to draw some conclusions based on my own experience with queueing systems.

Another interesting source of information is the comment thread on the very first commit to add a queueing API to Rails.

Basically, I see three failures with the Queue API as currently implemented in the "jobs" branch:

1. The API

The API as implemented is a "nice idea", but it's actually very un-Rails-like when compared to things like ActiveRecord.

Here's an example of enqueuing a job in the existing implementation:

class SignupNotification

  def initialize(user)
    @user = user
  end

  def run
    puts "Notifying #{@user.email}..." # interpolated attribute reconstructed; the original was lost
  end
end

# enqueued via the jobs-branch API, e.g.:
# Rails.queue.push(SignupNotification.new(user))

For illustrative purposes only, a more Rails-like API might look like this:

class SignupNotification
  connect_to_queue :important_jobs

  def run(user_id)
    user = User.find(user_id)
    puts "Notifying #{user.email}..." # interpolated attribute reconstructed; the original was lost
  end
end


The name of the queue should be a concern of the job, not of the place that enqueued it. Imagine if you wanted to change the queue name: you'd have to change every enqueueing call site to reference the new name.

Also, notice we have to do a little extra work in our implementation of run to fetch the user by ID instead of having our queueing system Marshal it for us. This leads me to the next failure...

2. Marshal vs. JSON

Jobs are generally run in a different process from the one where they were enqueued. This means serialization. The simplest choice for doing this would seem to be Ruby's built-in Marshal. Rails took the approach of marshalling an entire job object, while most other libraries serialize the job class name and arguments to JSON. It's a best practice in most other systems to store as little information as possible about the job in the queue itself; a queue is an ordering system, and information should be stored in a database.

The Marshal approach is a slightly nicer API for the developer, but quickly breaks down in practice. Care must be taken not to marshal objects with too many relationships to other objects, or Procs (which cannot be marshalled in Ruby unless you are using the niche implementation MagLev).

Finally, marshalling is not as nice for Ops. Monitoring a running queue in production is much easier when you can easily inspect the contents of jobs, and JSON is a much more portable format.
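A minimal sketch of the JSON convention described above, in the style of libraries like Resque: only the job class name and primitive arguments go into the queue, and the worker reconstitutes everything else. The class and argument here are stand-ins of my own, not code from any particular library.

```ruby
require "json"

# Stand-in job class; a real one would do User.find(user_id) and notify.
class SignupNotification
  def run(user_id)
    user_id
  end
end

# Enqueue side: store only the class name and primitive args.
payload = JSON.generate("class" => "SignupNotification", "args" => [42])

# Worker side: the payload is trivially inspectable in production and
# parseable from any language or Ops tool.
job = JSON.parse(payload)
result = Object.const_get(job["class"]).new.run(*job["args"])
```

Contrast this with a marshalled blob, which is opaque to anything that isn't the same Ruby process with the same classes loaded.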

3. Solving the Wrong Problem

It seems one of the major goals of the Rails 4 Queue is to always send e-mails in the background. We could debate whether action_mailer really belongs as part of a Model-View-Controller framework in the first place, but I digress.

Let me re-word that a bit: One of the major goals of Rails 4 Queue is to ensure that the sending of e-mails does not adversely impact web response time.

Generally, this sort of thing is done using a background jobs system like Resque: you make a job that sends your e-mail. But Rails core thinks we can do better than that: we don't need a background job system if we can just make our web application server do the work after it has finished sending the response to the client.

Here's some terribly ugly and hacky code to demonstrate my point.

Example using thin:

Example using Unicorn:
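The inline examples did not survive, so here is a hedged reconstruction of the same trick as a bare Rack app (the class and names are my own): a response body whose #close hook does the "e-mail sending" after the body has been streamed, relying on the fact that servers like Thin and Unicorn call #close once the response is out.

```ruby
# Hypothetical reconstruction of the lost thin/unicorn examples.
# The work runs after the client has its full response, but it still
# ties up this single-threaded worker until it finishes.
class AfterResponseBody
  def initialize(chunks, &work)
    @chunks, @work = chunks, work
  end

  def each(&blk)
    @chunks.each(&blk)
  end

  def close
    @work.call # the server calls #close after sending the response
  end
end

events = []
app = lambda do |env|
  body = AfterResponseBody.new(["Hello!\n"]) { events << :email_sent }
  [200, { "Content-Type" => "text/plain" }, body]
end

# Simulate what the server does: call the app, stream the body, close it.
status, headers, body = app.call({})
body.each { |chunk| events << :chunk_sent }
body.close
```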

If you run these rack apps and hit them with curl, you'll see that the "e-mail processing" does not interfere with the client receiving a response. But it does tie up these single-threaded web servers: they won't serve the next request until they have finished the previous request's after-response job.

Another approach might be to use threads, but unless you are on JRuby or Rubinius, you would likely slow down your response processing, as the e-mail-sending thread will start executing and using up processing power that would otherwise be used to generate the response.
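For comparison, the thread approach is a one-liner (a minimal sketch with names of my own; on MRI the global VM lock means this thread still competes with request processing, which is exactly the drawback just described):

```ruby
# Minimal sketch: hand the e-mail job to a background Thread so the
# response can return immediately. On MRI the GVL means this thread
# still competes for CPU with the thread generating the response.
def deliver_async(&mail_job)
  Thread.new { mail_job.call }
end

thread = deliver_async { :email_sent } # stand-in for real mail delivery
result = thread.value                  # joins; here only so we can observe the job
```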

The only good way to solve this problem is to make changes to Rack itself, but I've yet to see a proposal on exactly what these might be.

In Conclusion

I'm hoping to see the discussion continue. Maybe there's even an opportunity for other community members to step up and propose ideas about what the Rails 4 Queue API should look like. I think getting this right and shipping it will be a huge win for Rails developers everywhere who are currently duplicating effort working on a myriad of background job processing extensions and customizations coupled to their current backend queueing library.

Categories: Programming