Software Development Blogs: Programming, Software Testing, Agile Project Management

Feed aggregator

A Conversation About Testing in PHP

Engine Yard Blog - Wed, 05/22/2013 - 21:45

We are proud to sponsor Chris Hartjes and Ed Finkler's Development Hell podcast series where they record their freewheeling, uncensored discussions on programming the web, so future generations can learn from their failures.

Read on to get the lowdown on different testing tools and their relative merits. Check it out as Ed and Chris weep for the future, come to some interesting conclusions, and get their hands dirty so you don't have to.

To hear more from Chris and Ed, tune in to their podcast, /dev/hell.

Ed and Chris had a little chat about testing in PHP.

Chris: Okay, so today's topic is PHP testing.

Ed: Word up

Chris: Now, Ed, I know that for the most part you are not a big fan of the mainstream PHP testing tools.

Ed: Yes, that's true

Chris: So what is it that you don't like about them?

Ed: I guess realistically my complaints are aimed at PHPUnit. It's very powerful and very complete from what I can tell, but I think it's difficult to pick up and I think that difficulty makes people less likely to use it. Because it's by far the best known testing tool, I think that tends to limit the use of unit testing, period, in PHP. That's not necessarily PHPUnit's fault per se. I just think it's the situation we're in. I think the documentation, the setup, and just obtaining PHPUnit are a challenge, particularly when compared to unit testing options I've seen in other languages. Python, for example, has a simple but effective unit testing library built into the core.

Chris: So, when you say "difficult to pick up", is it because tests look like this?

<?php 
class Labels
{
    public $db;

    /**
     * @param GrumpyDb $db
     */
    public function __construct($db)
    {
        $this->db = $db;
    }

    /**
     * Turns label values like codingStandardsSuck into
     * CODING_STANDARDS_SUCK
     */
    public function screamingSnakeLabels()
    {
        $results = $this->db->query("SELECT name FROM labels");
        $labels = array();
        foreach ($results as $result) {
            $labels[] = $this->_camelToScreamingSnake($result);
        }
        return $labels;
    }

    /**
     * Converts a camelCase string into SCREAMING_SNAKE_CASE
     *
     * @param string $value
     */
    protected function _camelToScreamingSnake($value)
    {
        $result = preg_replace_callback(
            '/[A-Z]/',
            function ($match) {
                return "_" . strtolower($match[0]);
            },
            $value
        );
        return strtoupper($result);
    }
}

class DevhellTest extends PHPUnit_Framework_TestCase
{
    public function testShowEdHow()
    {
        $db = $this->getMockBuilder('GrumpyDb')
            ->disableOriginalConstructor()
            ->setMethods(array('query'))
            ->getMock();
        $db->expects($this->once())
            ->method('query')
            ->will($this->returnValue(array('devHell', 'camelCase')));
        $label = new Labels($db);
        $expectedResults = array('DEV_HELL', 'CAMEL_CASE');
        $testResults = $label->screamingSnakeLabels();
        $this->assertEquals(
            $expectedResults,
            $testResults,
            "Labels were not correctly coverted to screaming snake case"
        );
    }
}

Chris: Maybe it's because I've worked with it a lot, but all I see is some boilerplate and then a few statements that seem pretty intuitive to me.

Ed: I think boilerplate is part of the issue. I think that's intimidating. Tools can mitigate that to some extent, but I don't think it eliminates the problem entirely. I just don't think writing a simple test should be anything more than a couple lines of code. Then you can build upon that iteratively as you need. I think that approach of starting simply and building up your set of tests really helps you understand what's going on, and I think it makes testing a lot more accessible to people who haven't done it before. A lot of testing framework docs I see throw a ton of nomenclature out at the reader. I think if you don't already understand that nomenclature, you won't understand what's up.
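
For reference, a minimal PHPUnit test along the lines Ed is describing might look like this (the Calculator class is hypothetical, used only to show the shape of the test):

<?php
class CalculatorTest extends PHPUnit_Framework_TestCase
{
    public function testAddingTwoNumbers()
    {
        // One object under test, one assertion -- that's the whole test
        $calculator = new Calculator();
        $this->assertEquals(4, $calculator->add(2, 2));
    }
}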

Chris: So when you say 'nomenclature', you're talking about things like what exactly? Assertions and mocks?

Ed: Knowing how to mock that stuff up is pretty complex. In my experience the majority of people who work with PHP don't have a lot of formal training and even if they do, it often doesn't cover testing concepts. Like, what's a "unit?" What's an assertion? What's a mock or a stub?

Chris: I weep for the future, Ed. A unit is a small amount of code that you're trying to test. In PHP, that's usually one object. An assertion is simply a statement that "I am saying that the following is true", whatever that assertion happens to be. I do agree that there is lots of confusion about what a mock or a stub is, so in my book I devote a chapter to explaining those things.
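
(An aside for readers puzzling over that distinction: here is a rough sketch using the same mock-building calls as the test above, assumed to run inside a PHPUnit_Framework_TestCase method. The difference is whether the test asserts anything about how the collaborator is called.)

<?php
// A stub only supplies a canned answer; the test makes no claim about
// whether or how often query() is actually called.
$stub = $this->getMockBuilder('GrumpyDb')
    ->disableOriginalConstructor()
    ->setMethods(array('query'))
    ->getMock();
$stub->expects($this->any())
    ->method('query')
    ->will($this->returnValue(array('devHell')));

// A mock carries an expectation that is itself an assertion: the test
// fails if query() is not called exactly once.
$mock = $this->getMockBuilder('GrumpyDb')
    ->disableOriginalConstructor()
    ->setMethods(array('query'))
    ->getMock();
$mock->expects($this->once())
    ->method('query')
    ->will($this->returnValue(array('devHell')));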

Ed: So I know what that stuff is (although I get confused about the diff between a mock and stub). But the real problem is that in order to write tests, you have to already know how to program, and that in itself is super-intimidating for people. PHP has a very shallow learning curve: the time between learning and becoming productive in some way is very short. That's certainly one of the reasons PHP is so popular. We need, I think, to mirror that in how we present testing, and make it easy to get into. It shouldn't be something that is terribly complex to set up and do.

Chris: In that light, I understand the motivation to develop your own testing tools, but I still think PHPUnit is the way to go. So many people use it and there are so many resources available to learn it that picking it up isn't as difficult as I think you're making it out to be. Alternatively, I think the Behavior-Driven Development (BDD) model that Behat offers is appealing, and easier to pick up than the xUnit style. Behat combined with Mink is a solid alternative to PHPUnit.

Ed: If you are doing acceptance testing (meaning that you only care that the application as a whole is working), I don't think you can go wrong with being able to write tests that look like this:

Feature:
    Scenario: Main page loads
    Given I am on "/index.php"
    Then I should see "Lies I Told My Kids"

    Scenario: Empty form fields trigger errors
    Given I am on "/index.php"
    When I press "submitButton"
    Then I should see "You submitted an invalid e-mail address"

    Scenario: Missing description triggers errors
    Given I am on "/index.php"
    When I fill in "email" with "test@domain.com"
    And I press "submitButton"
    Then I should see "You submitted a blank description"

Chris: The Behat and Mink combo can let you create some very interesting acceptance tests, and it even provides you with tools that will tell you when you will have to write your own helpers to supplement what they can provide. It took me a few days to figure out Behat's way of doing things, but once I did I was able to create some very interesting tests, even ones where JavaScript (long the bane of automated acceptance testing) was being used.

If your mind doesn't align well with unit testing, then something like Behat is definitely the way to go. There's something neat about watching PHP run Behat, which in turn opens up a browser and starts acting like a user, hopefully using your application correctly.
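
To give a sense of what those custom helpers look like, here is a rough sketch of a Mink-backed step definition; the step wording and CSS selector are made up for illustration:

<?php
use Behat\MinkExtension\Context\MinkContext;

class FeatureContext extends MinkContext
{
    /**
     * A custom step to supplement the built-in Mink steps.
     *
     * @Then /^I should see (\d+) labels in the list$/
     */
    public function iShouldSeeLabelsInTheList($count)
    {
        // Mink hands us the current page regardless of which driver
        // (Goutte, Selenium, etc.) is actually running the browser.
        $items = $this->getSession()->getPage()->findAll('css', 'ul.labels li');

        if (count($items) != (int) $count) {
            throw new \Exception(
                sprintf('Expected %d labels, found %d', $count, count($items))
            );
        }
    }
}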

Ed: Ultimately, though, a lot of the problem with testing in PHP is that PHP's insane flexibility makes it super easy to write code that you cannot test. That, and PHP is almost always working in concert with other systems, like a web server, so it can be tough to know what you can easily test inside the CLI and what you'd need a different approach for.

To write testable code, you really have to be thinking about testing when you write your code. It takes a bit of time to get used to that, but I think it's very doable. In much the same way, it's taken us a long time to make security a first-order concern in PHP development, but I think we've done a decent job of that. We need to do that for testing as well.

Chris: If only, Ed. If only.
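
A footnote on Ed's point about writing testable code: the single biggest habit is passing collaborators in rather than creating them inline, just as the Labels class above accepts its $db in the constructor. Here is a small sketch of the contrast (the SmtpMailer and Signup classes are hypothetical):

<?php
// Hard to test: the dependency is created inside the method, so a test
// cannot substitute a fake and will hit the real mail server.
class SignupUntestable
{
    public function register($email)
    {
        $mailer = new SmtpMailer('smtp.example.com');
        $mailer->send($email, 'Welcome!');
    }
}

// Easier to test: the collaborator is injected, so a test can pass in a
// stub or mock instead of the real thing.
class Signup
{
    protected $mailer;

    public function __construct($mailer)
    {
        $this->mailer = $mailer;
    }

    public function register($email)
    {
        $this->mailer->send($email, 'Welcome!');
    }
}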

Categories: Programming

May 17, 2013: This Week at Engine Yard

Engine Yard Blog - Fri, 05/17/2013 - 19:36

I spent this week with the team of engineers who made Riak on Engine Yard Cloud possible, attending RICON East: all Distributed Systems, all the time. Later in the week we took advantage of being in New York City to visit local customers and discuss the various features we’re working on and field any technical, product, and data questions.

Both our engineering and product teams love incorporating customer feedback into our direction. Speaking of which -- if you’re in San Francisco, I’m organizing customer UX feedback sessions! Hit me up :)

--Tasha Drew, Product Manager

Engineering Updates

PHP is now GA on Engine Yard Cloud! Per Product Manager Noah Slater: “PHP has been an important part of Engine Yard’s growing family since the acquisition of Orchestra in 2011. And now, PHP on Engine Yard Cloud represents the culmination of our efforts to deliver the industry’s best Platform as a Service for PHP developers. The result of this work is a unified service offering for PHP, Node.js, and Ruby applications.” Read all about the GA launch announced by Davey Shafik at php[tek] in Chicago this week!

Data Data Data

Riak and Clusters are live! See our blog post for more info - https://blog.engineyard.com/2013/riak-is-ga-engine-yard

A cluster is a new way to organize and manage instances that share a specific function. Clusters take much of the functionality that was once placed at the environment level and move it down to the cluster level. One environment can have many clusters, and each cluster can run different cookbooks and be in different regions.

We drove the cluster model hand in hand with our productization of Riak on Cloud because the distributed model of Riak paired perfectly with where we wanted to drive the future of our platform. We can now take this underlying work and begin to re-productize other offerings to take advantage of its flexibility in many ways.

Social Calendar (Come say hi!)

Tuesday May 20th: Engine Yard Dublin hosts the PHP meetup, where Eugene Kenny of Adverts.ie discusses his "Developer Toolbox", and then Matthew Weier O'Phinney of Zend Framework & Nate Abele of Lithium go head to head on the subject of frameworks.

Wednesday May 21st: Engine Yard’s San Francisco HQ will be hosting the monthly Riak meetup! Lead data engineer and fan favorite Ines Sombra will be presenting about Riak on Engine Yard Cloud, followed by Basho’s Mark Phillips discussing Riak CS.

Wednesday May 21st: Our PDX office will be hosting Coder Dojo for students K-12 to learn about software! Grab a ticket and bring your parents for some software fun.

Thursday May 22nd: Engine Yard Dublin plays host to Open Data Ireland, “Give us our health data!”

Friday May 23rd: In which I talk about myself in the 3rd person? Tasha Drew will be speaking at Cloud East in Cambridge, UK, about deployments in the cloud, including various strategies we at Engine Yard see for environments of different sizes -- and concluding with sharing our own deployment strategy.

Articles of Interest 

Lightweight screenshot and annotation tool http://glui.me/ has gained some fans in our office!

Engine Yard friend Daragh Curran, Head of Product Engineering at Intercom, shared an awesome blog post here. “Shipping brings life to your team, to your product, and to your customers. Shipping is your company’s heartbeat.”

Categories: Programming

Shipping is your company’s heartbeat

Engine Yard Blog - Thu, 05/16/2013 - 12:38

Note: Engine Yard friend Daragh Curran, Head of Product Engineering at Intercom, has graciously let us post this great piece about code deployment on our blog. Check it out on their own blog here.

Software only becomes valuable when you ship it to customers. Before then it's just a costly accumulation of hard work and assumptions.

Shipping unlocks a feedback loop that confirms or challenges those assumptions. It makes new things possible for your customers, and gives you the opportunity to focus on the next thing.

Shipping brings life to your team, to your product, and to your customers. Shipping is your company's heartbeat.

Shipping will try to kill you

The scramble to get that one last feature done, the late nights, the compromises, the sinking feeling when we realise something major is broken, the post-mortems… It's agony, but if it was easy everyone would do it. Shipping exposes mistakes. We're nervous about it, and our natural reaction is to do it reluctantly and infrequently, which actually carries higher risk, causing more reluctance in the future.

The cost of shipping is approaching zero

Not too long ago, shipping software involved actual ships, disks, and printed manuals. It happened perhaps once a year. Bug fixes weren't automatic over the internet like today. Everything was slower and more controlled. The cost of shipping was massive, the consequence of a mistake was large. Today, the cost of shipping has approached zero. Most people can deploy in seconds or minutes with a single command or button click. With a little thought you can do that without your customers noticing, and with automated monitoring you'll find out immediately if something goes wrong.

Despite the cost of shipping approaching zero, many people still ship software guided by very old habits.

Shipping cadence defines your company

The cadence at which you ship defines your company. A yearly cadence results in a very structured approach to the design->build->test cycle. A few months are spent building, while the rest is spent fixing. Engineers can join and leave before seeing their hard work end up in the hands of customers. The approach to design becomes one of anticipating all possible needs, rather than focusing and iterating on the important ones.

Obstacles downstream propagate upstream

An obstacle downstream propagates upstream. If you're not allowed to implement new ideas, you stop having them.
- Paul Graham

The right approach to shipping has a positive influence on your company's productivity and your team's happiness & job satisfaction. Shipping infrequently is an obstacle. Ship slow, and you'll introduce challenges that push you to ship even slower. Ship frequently, and see positive effects everywhere in your company. For example, let's examine how behaviour changes along with shipping frequency while handling a simple request from a customer.

Time to production behavior

Let's say a customer gets in touch to say "No matter what I do, I cannot save my name correctly, I think it doesn't like hyphens". In a company where you ship continuously, you see this and think: Simple — I'll tweak a test and a regex pattern, get a quick code review from my buddy beside me, merge to mainline, and 1 minute later when it's deployed to production, reply to the customer: "Sorry about this, it's fixed now, thanks for letting us know". They'll reply: "Wow, thanks for fixing so quickly". High fives all around!

If we stretch the time to production (TTP) out a little, even to 10 minutes, the behaviour changes. You either do the same, but reply saying it'll be fixed with our next deploy (probably 10 minutes) - or you wait, so that you can communicate with certainty. The waiting is time where you'll shift focus to something else, but have the baggage of having to follow up. Perhaps you'll think, I'll have a quick coffee, then move on to something else afterwards. Even though your deployments are entirely automated, you lose time because of waiting and losing focus.

Customer support shipping

If TTP is hours, the behaviour changes again. No longer can you say with certainty when the change will be out there, so you're tempted to batch it up with other similar small changes. You postpone replying until you get time to do it, sometimes forgetting about it. You're less likely to take prompt action, wowing the customer, and you pay some mental cost for having it on a todo list. Since getting to production takes hours now, your team will start restricting itself to morning-only deploys, so miss that slot and there are further delays.

If TTP is days, it exacerbates that further - perhaps you'll reply "Thanks for letting us know. We'll fix this in our next sprint". It gets bundled in with a whole load of other small, low-priority items, and you spend more time debating estimates and priorities than the first person took to fix it and reply to the customer. Miss the beginning-of-week deploy window and there's further slippage. Larger releases bring higher risk: you'll tell your customer it's fixed, only to roll it back later because of a separate change. Your bug database gets bigger and bigger, with little details that you'll probably never fix.

When TTP is weeks, it exacerbates that even further - perhaps you'll reply "Sorry about this, I'll let the development team know" or something equally lame from your customer’s standpoint. Deep down you realise nothing will be fixed, and the job of talking to customers becomes a cost or hassle, rather than an opportunity to improve your product and nurture happy, loyal customers.

Shipping continuously

Better approaches to writing or testing software help us iterate more quickly and confidently, but the benefits are quite local to engineering teams. Continuous shipping on the other hand, touches all parts of your company, as do the benefits, and the behaviours it enables and encourages.

LinkedIn's transition to continuous deployment is linked to their recent financial success.

Good products are a side effect of combining good people with an idea in an environment that helps those people to kick ass. Your attitude to shipping is a big part of the environment you create.

Shipping breathes life into how we think. The feedback loop helps us learn, gain confidence in making quick decisions, and build momentum. Momentum in product improvements excites and engages our customers. Quickly seeing the benefits of our hard work motivates us to do more. Building a team where people can work hard and move fast attracts others to join you - hiring gets easier.

Shipping continuously isn't an achievement you unlock and then move on. You've got to constantly obsess about it. If you believe in the benefits it brings, you'll be driven to shrink 20 minutes down to 1 minute or less, you'll consider 'ability to ship' as an equal to 'does it scale' when building new systems. And you'll do that because of all the life it breathes into your company and your product.

Shipping is your company's heartbeat.

Categories: Programming

Riak is GA on Engine Yard Cloud

Engine Yard Blog - Tue, 05/14/2013 - 20:56

Hello from NYC! We stopped by RICON East to share great news. We are thrilled to announce the General Availability of Riak on Engine Yard Cloud.

Riak is our first highly available, non-relational database and the first component of our stack to use a new cluster provisioning model. Riak exemplifies the future of Engine Yard and you should totally check it out! Here’s why.

Highlights of Using Riak on Engine Yard Cloud

Riak’s use case primarily fits applications with loosely structured data where even seconds of downtime are unacceptable. Riak has a key/value data model and is completely data agnostic, meaning you can store anything you want in a value (media, JSON, XML, text, etc.).

Riak is masterless. You can send writes to any node in the cluster and data will be appropriately stored, even in the case of individual node failures. Riak also supports tunable consistency, allowing you to make the datastore more strict on certain types of data and more responsive on others.
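
As a rough illustration of tunable consistency (assuming Riak's classic /riak/<bucket>/<key> HTTP interface on the default port, with a hypothetical "sessions" bucket):

<?php
// Read a key with a read quorum of 3: Riak must hear from three replicas
// before answering. Drop r to 1 for a faster, less strict read.
$url = 'http://localhost:8098/riak/sessions/user42?r=3';

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$value  = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

if ($status === 200) {
    echo 'Read with quorum of 3: ' . $value . PHP_EOL;
}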

Painless Installation, Management, and Support

We have invested in simplifying Riak's installation and configuration to make the learning curve less steep. In one easy step you can define the flavor and size of your cluster, the location of your data (EBS, ephemeral, etc.), optimize your cluster by selecting desired backends, and even enable full text search.

Once your cluster is up and running you can painlessly grow it if you need to add capacity. Removing nodes is also a trivial operation. If for any reason you want to archive your entire cluster, you can easily do this, too.

Riak clusters come with the fantastic support you have come to expect from Engine Yard. As partners of Riak's makers, Basho, we can quickly escalate tickets on your behalf when they require extra engineering insight.

A Whole New (Clustered) World

The cluster model used by Riak evolves the deployment topology of  environments. Environments become more flexible with the ability to specify zero to many clusters per environment, and have all clusters properly deployed and balanced within availability zones in your region. We are also working on the ability to have clusters within a single environment provisioned in a different region.

As of today, clusters are exposed to all customers.


We will be migrating individual stack components to our new cluster model. All supported databases will be re-done and acquire the provisioning features you see in Riak. We are very excited about what we'll be releasing over the next few months.

Introducing Cluster Behaviors

The cluster provisioning model also allows us to express cluster-specific behaviors and act upon them in a scheduled way (or on demand). For example: all Riak clusters have access to rolling backups as their first supported behavior.

With rolling backups we can archive the entire contents of a cluster one node at a time without compromising its overall performance and ability to respond to requests.  We will be introducing new behaviors (like rolling snapshots) very soon.

Things You Must Know

To prepare for the migration of legacy components to clusters we have decided to change the way environments update. We have pushed stack responsibilities down to the cluster level. This means that clusters are now responsible for managing their stacks and updates which gives us greater granularity and flexibility (it’s a great thing, we promise!).

An important thing to note is that environment-wide custom Chef runs will no longer be applied to cluster instances. Clusters are isolated from system-wide versions of Chef as they carry their own stack and updates.

What Comes Next?

Here are a few things we have in store as we continue to evolve Riak and clusters:

We want to make Riak’s management tasks more intuitive than ever, so we will roll out enhancements to the environment page and overall cluster user experience. We are also working towards improving cluster monitoring and alerting.

Enhancements to instance booting times are in the pipeline. You will be able to go from zero to a fully running cluster faster than ever!

Where Can I Learn More?

Our documentation has been updated and it’s a great place to get started. We will be leveraging Basho’s excellent Riak documentation, too.

If you are in San Francisco, we will be giving a tour of Riak on Engine Yard on May 22nd. Come ask questions! We’ll hand out a few gifts to the best ones.

http://www.meetup.com/San-Francisco-Riak-Meetup/events/118840422/

Still Have Not Tried Riak?

Riak is available on all trial accounts. Simply sign up, boot up a cluster, and you’ll be able to experiment with it.

Also feel free to open a Support ticket if you are wondering if Riak is a good fit for your application.  We love hearing from our customers and want your feedback.

Categories: Programming

Announcing PHP on Engine Yard Cloud

Engine Yard Blog - Tue, 05/14/2013 - 15:45

We’re excited to announce the general availability of PHP on Engine Yard Cloud.

PHP has been an important part of Engine Yard’s growing family since the acquisition of Orchestra in 2011. And now, PHP on Engine Yard Cloud represents the culmination of our efforts to deliver the industry’s best Platform as a Service for PHP developers. The result of this work is a unified service offering for PHP, Node.js, and Ruby applications.

With PHP on Engine Yard Cloud, users get a proven, robust platform on which they can both horizontally and vertically scale applications – including content, media, e-commerce, and more. As a highly configurable PaaS, Engine Yard Cloud gives PHP developers – from enterprises to digital agencies to SMBs – a wider range of instance sizes, a fully curated PHP stack, and advanced automation and orchestration features such as database replication and failover.

Whether deploying a simple WordPress blog or an advanced MySQL-backed web application, developers get a range of control over configuration, deployment and management of their application environments, including full root access on virtual servers and the flexibility of using custom Chef recipes to control and automate entire environments, regardless of size.

Get Started With Our Lowest Entry-Level Cost Ever

We recently announced several big price reductions including a new entry level price that gives you a dedicated EC2 small instance for $0.05 per hour. That's an average of $36.50 per month — almost 50 percent less than the original price! This means you can immediately start using Engine Yard Cloud to deploy your PHP applications at an entry level cost so low, it's less than the cost of a basic application on Orchestra.

What’s more, if you haven't already made use of the free trial, you can login to Engine Yard Cloud with your existing login and claim your free 500 hours to get started!

Want to try it out? Head over to our documentation and give things a whirl.

What Does This Mean for Orchestra Customers?

We plan to retire Orchestra later this year, as we have already communicated to our Orchestra customers. In fact, we are already working with some customers to help them migrate to Cloud. And if you haven't already migrated, there are several reasons why you might want to try PHP on Engine Yard Cloud right away.

Some of the benefits of PHP on Engine Yard Cloud:

  • Choose the dedicated instance sizes you need
  • Run your database in your environment. No more third party providers required!
  • More control over your deployments
  • SSH access. Logs. Debugging.
  • Automated backups and snapshots of your environment
  • Stop and start environments

If you haven't migrated yet, you can open a support ticket and we will work with you on the migration. Or you can read more about our plans in the unification FAQ.

Thanks

We know we couldn’t have gotten this far without the support from this community, so we’d like to say a big “THANK YOU” to everyone involved. The whole Orchestra team is now working on Engine Yard Cloud. And we hope you’re as excited as we are about the expanded PHP service with more deployment choices, increased flexibility, better management, and — as always — the industry’s best support included.

Please note: GA features will go live at 1 pm PST today.

Categories: Programming

Mobile Application Privacy: 10 Tips to Protect Your Brand, Reputation and Customer Data

Engine Yard Blog - Mon, 05/13/2013 - 17:23

Note: Engine Yard friend Himanshu Dwivedi, CEO/Founder of Data Theorem, has graciously let us post this great piece about mobile security on our blog.

Mobile privacy is one of the hot topics lately. Every week there seems to be another article about a mobile app or platform having a privacy issue. Indeed, Path was recently hit with an $800,000 fine by the FTC for breaches of privacy. The focus on privacy extends across application platforms, even to BlackBerry, which has been well regarded for its security measures. The tricky part of providing privacy for your customers in your mobile application is that there are many kinds of data that can be stored, as well as many places this data can be stored. Here are the top 10 tips to ensure privacy for your customers as you develop your mobile application.

Please Don't:

1) Use UDID and equivalents

This feature has been deprecated in iOS 6.0 but is still widely pulled by applications. It is an advertiser's dream, but it tracks users without their knowledge, which raises concerns with privacy advocates.

Recommendation: Generate an app-specific random session ID that can't be cross-linked by other applications.

2) Be careless with geo-location

Treat geo-location tags and data as sensitive. This means that you don't want to send information over the network in clear-text. Another thing to watch for is storing this information client side where there's a cross-link with identifiers, making it vulnerable to hackers.

Recommendation: If you're going to use geo-location, be sure to store it server side, but remove it from these systems over time. Long-term storage can imply that you are tracking your customers over a long period of time, including where they have been, which is a big no-no from a privacy perspective.

3) Do not access contact lists without permission

While contact lists are a great way to get more users, especially for applications with a "cold start" problem, privacy advocates don't like it. The case of Path demonstrates that storing contact lists server side and cross-linking them is a bad idea, and one that may get you slapped with a fine.

Recommendation: Accessing the contact list is one thing, but storing it server-side is a big no-no. Do not access a user's contact list without permission, even if it would make for a better user experience.

Watch Out for…

4) The Copy/Paste function

In iOS you want to be aware of the pasteboard, as information stored in the clipboard (aka UIPasteboard) is accessible to all other applications on the device. This is important because applications have been written specifically to monitor the clipboard.

Recommendation: Even though it's a UX trade-off, disable the pasteboard, especially for sensitive fields: usernames, passwords, phone numbers, and addresses.

5) Cache.db

The cache.db file was introduced between iOS 4.x and 5. The information stored in the cache.db database file is not encrypted and is accessible to anyone who has access to the device. There isn't a great deal of documentation on this, so developers often don't realize what information cache.db is collecting.

Recommendation: Turn off caching using NSURLCache settings.

6) Auto-Correct

iOS caches each non-numeric keyboard tap for the auto-correction feature to work, and this caching cannot be disabled system-wide by any application. Since almost every non-numeric keystroke is stored, it's important to disable auto-correction for confidential fields such as address, mother's maiden name, city of birth, etc.

Recommendation: Set the autocorrectionType property to UITextAutocorrectionTypeNo.

7) Backgrounded Screen

A backgrounded screenshot is taken when the application changes state, such as on an incoming call. All applications on the device have access to the screenshot, which may include confidential data. Luckily this is an easy fix.

Recommendation: Detect state transitions of the application, especially applicationDidEnterBackground. When the app is backgrounded, display a splash screen that consists only of non-sensitive data such as the organization's logo.

Remember

8) Keychain

There are keychain dumper tools that can dump everything out of the keychain. If you want data to be secure at all costs, you shouldn't put it in the keychain.

Recommendation: Many items in the Keychain are accessible, just not out of the box.

9) Encryption

If the key to decrypt data is stored client side on the device, it only slows down attackers with physical access but does not protect the data 100%. For offline mode, decryption keys are often stored on the device itself.

Recommendation: To truly encrypt data, ensure the decryption key (private key) is stored server side only.

10) SD Cards

External storage (e.g. SD cards) has no file permission protection, which means the data is accessible to all applications (for copying, tampering, etc.).

Recommendation: Do not store any private/confidential data on external SD cards. If data needs to be shared between apps on a device, store the data server-side and allow access to it using client-side authentication/authorization tokens. Be aware that the sandbox model does not apply to the SD card on either Android or Windows Phone.

With the widespread adoption of mobile applications, ensuring the security and privacy of your application and customer data is paramount. By applying these tips, you will help ensure your application is up-to-date with the latest in mobile security and privacy.

Categories: Programming

May 10, 2013: This Week at Engine Yard

Engine Yard Blog - Sat, 05/11/2013 - 01:53

I’m heading to Ricon East with our lead data engineer, Ines, and dapper platform engineers Josh and Thom! Come say hi!

For our PHP friends, my counterpart, Josh Hamilton, will be at php[tek] with Davey and PJ, who would also enjoy a friendly “wassup!”

--Tasha Drew, Product Manager

Engineering Updates

Application takeover preferences are now in Early Access. Customers who need a non-standard application takeover scenario can now select between boot options or disable takeover entirely within the UI.

We have made great progress towards availability of provisioned IOPS on volumes for legacy instances (Riak clusters have had this feature for a while). We are making the feature available to customers in Limited Access this week. We do have some more work to do on improving the UX and providing documentation before making it more widely available, so please open a ticket with support if you are interested in checking it out before its Early Access release.

Data Data Data

We continue to enhance the behavior of new Clusters. Rolling backups will be the way to permanently archive data stored in a Riak cluster.

Rolling backups extract data one node at a time while your cluster continues to serve requests. You will be able to manage the extracted backup files from the UI and even see which nodes they came from!

Social Calendar (Come say hi!)

Monday May 13 - Tuesday May 14th: Engine Yard is sponsoring the lightning talks at Ricon East 2013! We will also have a bunch of people in attendance. Come say hi!

Tuesday May 14th - Friday May 17th: php[tek]!!! Davey Shafik will be giving a talk, and we will have a product manager and engineers on hand to join in the festivities.

Tuesday May 14th, San Francisco Office: Product Lovers: PM Fast-Track: What do Product Managers really do?

Tuesday May 14th, Buffalo Office: WNY Ruby: May we all enjoy our Rubies!

Tuesday May 14th, Dublin, Ireland Office: Crafthouse #003: looking at the various ways in which we learn web design and investigating ways to improve upon them.

Wednesday May 15th, PDX Office: Coder Dojo PDX, K-12 Coder Night

Wednesday May 15th, Buffalo Office: Girl Develop It Meetup: Code & Coffee Night!

Thursday May 16th, Buffalo Office: Database Seminar, May is for MongoDB

Thursday May 16th - Friday May 17th: NodePDX

Thursday May 16th, Dublin, Ireland Office: UXPA Ireland: My favorite UX tool!

Articles of Interest

Did you miss Chef Conf? Check out all the videos here! http://www.youtube.com/playlist?list=PLrmstJpucjzXNMLcI5X-EjirpDd-SITd3

The East Coast of the USA is about to be overrun with billions of cicadas. Humans will be outnumbered approximately 600:1.

Davey Shafik dives into the authentication choices he investigated as he built out the Distill website.

Categories: Programming

Authentication: Not necessarily a social activity

Engine Yard Blog - Thu, 05/09/2013 - 21:18

For Distill, Engine Yard's developer conference, we chose to use social authentication to reduce the barrier to entry for our call for papers. We supported Twitter, Facebook and Github.

While developing the site, my concern was that the registration flow be simple, and that it actually work. Once we launched the site, I realized that I had trouble remembering which provider I had signed up with.

Maybe that's just me (I am terribly forgetful!), but I imagine at least a few other people had this issue. Sure, on the backend we can link multiple accounts, but that means users went through the registration process multiple times. This is not optimal.

For those interested in the numbers, here is how the providers stacked up:

  • Github: 59%
  • Twitter: 38%
  • Facebook: 3%

Why did we make this choice? Probably the same reason everyone else does:

Users don't want, or need, another fricken login to remember; just use Facebook/Twitter/Google+/LinkedIn/Github/Yahoo!/...

This is the primary argument for using social auth. And let's be honest, who wants to be in charge of Yet Another Login System?

But is social auth the best option? Let's explore that.

In favor of social auth for our users, we have:

  • One less password to remember
  • Possible to revoke access
  • Automagic integration with my online social presence (that I can control… if I know how!)
  • Users are often already logged into their social sites, so they don't even see a login screen; a few redirects and it can be avoided entirely.

In favor of our bespoke system... it's the same thing we've been doing for years.

What does Social Auth mean to our users?

Let's break down what all of these things really mean.

One Less Password

Is this a good thing? Just like using the same password for everything, using the same social account for everything is not necessarily a good thing.

Sure, we can use an 80 character, expanded charset password on that social account, but "guessing" your username and password isn't necessarily the only way in. Server security breaches, man-in-the-middle attacks (negated by proper application of SSL), phishing, and social engineering attacks are still out there! Just like with a bespoke system.

I personally use 1Password as a tool to maintain a list of the literally hundreds of accounts I have, and their respective login credentials. I generate a random password for every site I sign up for, and never even think about remembering it.

I have my password database stored in Dropbox (which, yes, means I'm trusting Dropbox's security) so it's available on all my devices, and it can even function standalone with a built-in web interface!

There are plenty of other tools, even free ones, that will allow you to do the same thing (e.g. KeePass Password Safe).

For me the one-less-password argument holds little water.

Possible to revoke access

This one is an argument I rarely hear, but is quite important. Most websites (though it's changing, as people get on the free-data bandwagon) do not allow you to delete your account. And there's a good reason for this: Your data is valuable to the website, even if you're not using it! (Remember: if something is free, you [and your data] are what brings value to the business.)

With social auth, you can not only revoke permissions entirely (denying access to your social data that they haven't yet collected), you can revoke permissions partially (assuming the social site implements that).

I think this is quite important.

Automatic Integration with Social Presence

Social integration is arguably the main reason for even using social authentication (other than the lazy factor), and it can definitely bring value to our experience.

The point of being social is sharing things with people, and good online experiences are things we want to share. Making that quick and easy is beneficial to both the user and the business: it's word-of-mouth advertising, and it's priceless.

However, a lot of users don't want to be social. Either with your site in particular ("I don't know this site, why would I want it to see my stuff?") or in general ("I'm a grumpy curmudgeon" or "I don't want the government to spy on me!").

The general answer to this problem is to make social auth optional. Users can sign up for a bespoke account, or sign up via social auth. Or both.

Unfortunately, I find that when multiple signup options are available — be that bespoke + one social auth, bespoke + multiple social auths, or just multiple social auths — I forget which one I've used. Did I sign in with Github? Or did I create a new account?

1Password does help me in some regard here, because it would have my bespoke credentials available, but with multiple social auth options? It's no help.

Automatic Login

Automatic login is arguably not such a good thing: it can allow anyone with access to the computer not only to access the social platform (because you're already logged in), but who knows what else.

Think about the people who leave their Facebook logged in at the Apple Store; apparently nobody has yet realized that you can look at the list of apps they have authenticated with, then simply visit one of those sites and choose to log in with Facebook. Suddenly you have access to their Pandora, Instagram, Klout, and 100 other apps they've authorized!

Now, it is possible as a developer to require login, at least with Facebook auth — but we usually want it to be easy for our users and don't bother with it!

What about a middle ground?

Is there a middle ground? We've already discussed the pitfalls of providing bespoke + social auth(s), so what else can we do?

I think the best middle ground is to provide bespoke authentication and then, behind that, allow for social connection expressly for the purpose of being social. That is, make it optional.

We can get some of the benefits of social authentication by allowing users who have connected their social accounts to use them for password resets, rather than email. Simply ask them to re-verify their social account and, once they have, direct them straight to the password reset form: no emails getting lost, no tokens, simple.

One final option is to use social authentication without getting access to more than the user's basic information; in particular, you can do it without requesting write permissions. Then, later, you can ask for write permissions should the user wish to use that aspect of your site.
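
As a rough sketch of that minimal-permission approach (the endpoint and scope names here follow Facebook's OAuth dialog as one example; substitute your provider's equivalents):

<?php
// First login: ask only to identify the user -- no write permissions.
$params = array(
    'client_id'    => 'YOUR_APP_ID',
    'redirect_uri' => 'https://example.com/auth/callback',
    'scope'        => 'email',
);
$loginUrl = 'https://www.facebook.com/dialog/oauth?' . http_build_query($params);

// Later, only when the user chooses to share something, send them back
// through the dialog asking for the extra permission, so they can see
// exactly why it is needed.
$params['scope'] = 'email,publish_actions';
$shareUrl = 'https://www.facebook.com/dialog/oauth?' . http_build_query($params);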

Being Socially Responsible — A Social Contract

I have, over the course of thinking about these things, decided how I'm going to interact with my users socially.

  1. Social integration is always optional.
  2. Permissions are granted only as-needed. Ask for the minimum permissions to do the requested action, and no more. When more permissions are needed, ask again.
  3. Always give the user the final say on what gets posted — I will always allow my users to edit the message, and never insert automated text on the end, except the URL to the thing they are sharing.
  4. Never automatically post anything. This is really part of the previous one, but it's better to be explicit.

This is my social contract. It will be publicly posted on my site and presented when users look at the option to connect their social accounts. I think this is the responsible way to interact with my users, and to let them interact with folks who will hopefully become my users.

What about you? What's your social contract look like?

We recently announced the Distill speaker lineup and first batch ticket sales.  To learn more, visit distill.engineyard.com

Categories: Programming

Identify and Resolve Issues through Proactive Log Management

Engine Yard Blog - Tue, 05/07/2013 - 20:47

Proactively managing logs can be critical to identifying and resolving issues within an application environment. We're excited to announce that Logentries is now available as an Engine Yard Add-on. Engine Yard customers can try it now for free. More information about Logentries on Engine Yard Cloud can be found here.

Through Logentries, users can monitor logs in real time and get an easy-to-understand view across all their application logs. Logs are analysed and visualised so that you can make sense of large volumes of log data and quickly see and resolve system warnings or errors. Logentries can also be applied from a business analytics perspective to understand how many users registered, logged in, made payments and more over particular time periods.



Engine Yard Cloud customers can get started for free today, including 7 days of storage and a 1GB indexing limit.

Logging metrics is vital to checking the heartbeat of your business. When it comes to logging, it helps to know what you should be looking for. For a more complete explanation of the individual logs you should be concerned with when monitoring your Engine Yard setup, view our blog post Digging Into Engine Yard Logs.

If you are an Engine Yard customer, follow these steps to set up Logentries on your Engine Yard apps:

1. Head to https://cloud.engineyard.com/addons/logentries (login required) or navigate to "Logentries" under “Add-ons” in Engine Yard Cloud

2. Click "Set it up"

3. Sign up and follow the instructions for updating your code and deploying



And that’s it! Get ready to enjoy all the benefits that come with getting more insight into your application through proactive log management.

Categories: Programming

May 3, 2013: This Week at Engine Yard

Engine Yard Blog - Fri, 05/03/2013 - 20:29

We’ve finalized some major under-the-hood upgrades at Engine Yard this week that should start showing themselves in public facing features within the next few months! In the meantime, this is what you can actively check out.

--Tasha Drew, Product Manager

Engineering Updates

Improvements to ELB handling are live and in production! Updates include better error handling for a smoother integration and experience.

We have removed Passenger 2 as an option for customers booting new environments because it’s really old. Any customers with an environment assigned to the Passenger 2 application server stack have the feature flag enabled and will continue to see it as an option. You are also encouraged to upgrade for all the awesome benefits of Passenger 3.

Engine Yard Cloud customers can now file tickets directly through the Cloud dashboard.

We had a bunch of other minor bumps you can read about in our release notes.

Data Data Data

Riak has been bumped to 1.3.1 as it reaches the last few weeks of its early access phase!

Social Calendar (Come say hi!)

Tuesday, May 7th: Engine Yard’s Dublin, Ireland office will be hosting the second Postgres User Group meetup with Greg Stark, a long-time Postgres contributor and committer as the speaker.

Thursday, May 9th: Coder Dojo in PDX continues to plan how to help kids and their parents learn about and explore coding and software. Everyone is encouraged to grab a laptop and jump in!

Thursday, May 9th: Pub Standards in Dublin, Ireland welcomes any and all in-town developers, designers, founders, and people-who-like-to-build-stuff to stop by the Bull & Castle for a beer and a chat.

Articles of Interest

Pricing updates went live, and customers can expect to take advantage of reduced instance pricing on their April bill!

Our friends at TMX posted a thoughtful piece, “In Search of Software Quality.”

Pacific Coast Support team lead and all around awesome guy Ralph Bankston (who sadly has no twitter handle for me to link to) has gone in-depth about how to troubleshoot cron jobs.

Categories: Programming

Troubleshooting Common Problems with Cron Jobs at Engine Yard

Engine Yard Blog - Thu, 05/02/2013 - 22:34

Cron jobs are a basic Unix tool used to run specific commands at specific times. This can be anything from deleting files to starting a script that processes payments in your application, without you having to remember to start the script manually. The most common questions we receive about cron jobs concern verifying the time at which a cron job is supposed to run, environment and path issues while running rake tasks, and unexpected cron output. Here are examples and solutions to some of these common problems.

Timing

The most common question we receive is how to verify the time that a cron job is supposed to run. An important first step in that process is to verify the time zone the server is currently set to.  Cron runs based on the system time. Our servers default to UTC but some of our older servers are running on Pacific Time so you need to ensure you have the correct time zone. You can verify this by either checking /etc/localtime or typing date.

The five scheduling fields are: minute (0 - 59), hour (0 - 23), day of the month (1 - 31), month (1 - 12), and day of the week (0 - 6, with Sunday being 0). A shorthand reminder that can be added to the top of a crontab is # min hr DoM m DoW.

*    *    *    *    *    command to be executed
┬    ┬    ┬    ┬    ┬
│    │    │    │    │
│    │    │    │    └───── day of week (0 - 7) (0 or 7 is Sunday, or use names)
│    │    │    └────────── month (1 - 12)
│    │    └─────────────── day of month (1 - 31)
│    └──────────────────── hour (0 - 23)
└───────────────────────── minute (0 - 59)

(Wikipedia)

You can also use *, the wildcard that matches every possible value of a field, and */n step values to run a job at regular intervals; for example, */15 in the minute field runs the job every 15 minutes. There are also websites that can check the timing, such as http://www.generateit.net/cron-job/ and http://cronwtf.github.com/.

Environment and Path Issues

A problem we see is not using the proper path when running a rake command. If you are running a rake task, you’ll want to make sure you set the environment correctly and, if needed, the path.

Example: A rake task that may work with system gems but not with bundler because of a path and environment issue.

Deploy User Crontab:
30 1 * * * rake ts:index

This crontab entry will only index Sphinx if you are using system gems and not Bundler. If you are using Bundler, you will want to make sure you are either using bundle exec or calling the binstub executables directly within the application.

Example: A rake task that works.

Deploy User Crontab:
30 1 * * * cd /data/appname/current && RAILS_ENV=production bundle exec rake ts:index

This command changes to the correct path and also sets the RAILS_ENV environment variable, so you get the expected results for the Rails environment that is running. In some instances you may have to specify the full path to rake in the bundled gems, which is /usr/local/bin/bundle exec /data/appname/current/ey_bundler_binstubs/rake.

Cron Output

We commonly see cron jobs whose output is not handled at all, or not handled in the expected manner. The choices for cron job output are to have no output, to create a log file of what happened during the task, or to capture only errors. The first step in deciding on output handling is whether you want cron to notify you of anything or whether your command will handle it internally. If you do nothing when creating your cron job and the command produces output, cron will attempt to send it as email; if ssmtp mail is not configured on your instance, the output is written to the dead.letter file instead. If you do not want any output saved from the cron job, append >/dev/null 2>&1 to your command to send it to /dev/null (/dev/null is a device that discards any data written to it).

Another option is to capture the output of a rake task run with --trace in a verbose log by appending > /data/deploy/appname/current/log/ts_rake.log 2>&1 to the command. The cron job for that would look like this:

30 1 * * * cd /data/appname/current && RAILS_ENV=production bundle exec rake ts:index --trace > /data/deploy/appname/current/log/ts_rake.log 2>&1

The log file will also need to either exist and be writable by the deploy user running the cron job or the user will need write permission to the directory that contains the log file.

It is possible to send the output by email by not redirecting standard output with >/dev/null 2>&1. As stated previously, our systems are not set up to send e-mail by default; that will need to be configured before mail can be delivered.

Each cron run is recorded by default in /var/log/syslog. You can run sudo grep cron /var/log/syslog to see the cron jobs that have run during the current day, and check older days by going through the older log files, which are rotated daily.

Cron on Engine Yard

Cron jobs are great for scheduled tasks. There are two important things to remember about running applications on Engine Yard Cloud. The first is that the application master or solo instance is the only instance in an environment on which the dashboard will install cron jobs. This is something to keep in mind if all of your application instances need to run the script, or if the job should run on a utility instance. The second is that when an application master takeover is initiated, the newly promoted application master doesn’t have the full contents of the previous application master; at that point the cron jobs from the dashboard have not yet been put in place. Pressing the Apply button in the dashboard will properly install the cron jobs from your dashboard on your new application master.

Categories: Programming

Announcing Lower Engine Yard Pricing: Starting at 5¢/hr!

Engine Yard Blog - Tue, 04/30/2013 - 17:29

Today, we are announcing new lower prices for Engine Yard Cloud including a promotional >50% reduction to our entry level price.  Now, for just 5¢/hour (or $36/month) you can get started with our complete cloud application platform, allowing you to focus on software innovation while we deploy and scale your app in the Cloud.  This new entry price point is below the on-demand price for the bare infrastructure alone - making it more compelling than ever to get complete access to all features of the Engine Yard Cloud application platform and to our acclaimed devops and app support.

In addition to this new promotional price for our entry-level small instance size, we’re announcing varying reductions to all price points with an overall average of 15%.  Some of our most aggressive reductions are to our largest instance sizes, popular for the most demanding commercial-grade applications.  See all the new prices by visiting our updated Cloud Pricing page.  Best of all, the changes are effective April 1st, 2013 so the new pricing will be reflected in your upcoming April bill.

While we continue to add new features and make our cloud architecture more flexible, we are also excited to extend cost reductions as we grow and scale. We hope to give you the industry’s absolute best value with an incredible feature set and unmatched support. We’re also excited about the continuing pace of our growth thanks to new customers seeing the benefits of our application cloud and our awesome existing customers continuing to innovate and scale their apps with us. Our wicked devops support (as we’re often affectionately called) continues to stand at the ready as part of your team, and our development team continues to bring you new capabilities every week (check out our new blog series, “This Week at Engine Yard”). With these new price reductions we’re reinforcing our dedication to giving you, our customers, the absolute best value with Engine Yard Cloud!

Not using Engine Yard Cloud yet?  Try it absolutely free for 500 hours.   

Categories: Programming

In Search of Software Quality

Engine Yard Blog - Mon, 04/29/2013 - 18:30

Note: Our friends at TMX wrote this piece about building high quality software and with their permission, we're reposting it here.

I started writing this article about 6 months after our launch. Things had quieted down a bit by then, and I had the opportunity to think through some of the lessons learned and bounce my ideas off the rest of the team. The time was right to start writing lessons for posterity.

I’ve been creating software professionally for over a decade with many different organizations, and never before have I worked with a codebase built to such a high engineering standard. I’m amazed at how many things we had done right, and every day we reap the rewards. It is my hope that these ideas will prove useful to others as they strive to build high-quality software.

But before we can talk about how to achieve software quality, there are three issues we must address first:

The first is just what do I mean by “Quality”? To me, it’s something that is well designed, well engineered, and well built, be it software or anything else. While we may quibble over some details, I believe there are certain universal virtues that underlie high quality software:

  • Correctness: Above and beyond anything else, it must do what it’s designed to do.
  • Robustness: Good software is robust. It handles bad input well, it fails gracefully, it’s resistant to partial failure.
  • Simplicity: Build as simple as you can, but no simpler.
  • Extensibility: In the real world, ever-changing requirements are a fact of life.
  • Scalability: It should be built with provisions for growth, but beware of premature optimization.
  • Transparency: Being able to glance into a running program is a godsend, and will save you much sleep.
  • Elegance: As Eric Raymond wisely noted, the ability to comprehend complexity is intimately tied to the sense of esthetics. Ugly code is plain hard to read, and therefore hard to understand. That makes it a constant source of bugs.

The second question concerns our motivations. Why do we need quality anyway? The simplistic idea that quality is an end in and of itself is incorrect. The real goal is to create value, of which quality is just one aspect. High quality code is easy to debug, easy to refactor and easy to build on top of, which makes it easier to add new functionality. We want quality because it makes it cheaper to add value in the long run.

Unfortunately, as with all things, there are tradeoffs to building high-quality software, and this brings us to our third question: just how much quality do I need? There is no escaping the fact that quality has costs, so it behooves you to decide just how much quality you need and live with the consequences. Among other concerns, you must think through the consequences of potential bugs, and you must consider the expected lifetime of your codebase. The answers depend on your particular problem domain: what’s needed for a bank would be wrong for a smartphone game, and vice versa. At the end of the day, teams with perfect code never ship.

With that out of the way, you can think of the following list as a distillation of the ideas behind the development process we have evolved. By no means am I implying that we have discovered the only true path. That would be hubristic. But these practices have worked for us, and worked really well.

  • Fast iterations: Boyd’s Law of Iteration states that the speed of iteration is more important than quality of iteration. This is doubly so in software, because marginal deployment costs (rolling out a new release) are in most cases negligible.
  • Automated tests: If you’re not writing unit tests, you can stop reading right now and download whatever is the in-vogue unit test framework for your language of choice. I have just saved your sanity. If you already are, congratulations. You’re on the right path. But to really take you to the next level, unit tests are not enough.
    • You want integration tests, and ideally you want a Continuous Integration (CI) environment.
    • You want empirical measurements of your tests. You need to know for a fact what code is covered and what isn’t. You want to know how long your tests take to run. Now, what level of coverage is acceptable depends on what you’re building. Since in our case it’s financial software, we decided early on to strive for 100%. Last time I checked, we’re averaging a little above 99%. In your case, it may make sense to settle for less. (A minimal test-and-coverage setup is sketched just after this list.)
  • Documentation: No matter how well written and logical your code is, you still want it documented and commented. Your code tells you what it’s doing. Your comments tell you what it should be doing. If the two are out of congruence, you want to detect that as early as possible. Using automated tools to measure documentation coverage is highly recommended.
  • Peer review: I cannot emphasize enough the value brought on by code reviews. From day one, we have had mandatory review by at least one person of any code before it was permitted upstream (in fact, as part of our process, the reviewer merges in the code), and I cannot count the number of times potentially catastrophic lossage was averted. Peer review is our primary mechanism to make sure code is up to par. As an added benefit, it familiarizes engineers with different parts of the code base, and trains them to read code.
  • Metrics and automated code analysis: Static analysis is an invaluable tool. Not only will it report potential bugs, it alerts you to various code smells. Is there too much cyclomatic complexity in one part of the system? Is there too much churn somewhere? Are the levels of test coverage and documentation falling? No coverage in a critical area? It pays to have answers to these questions. Taking this a step further, you want to track this information over time. That way if you detect a negative trend you can take action before it becomes a problem.
  • Iterative, analytical design: A bad design with good test coverage is still a bad design. Conceptual flaws are hard to fix on a live system, especially where you have to deal with persistent data. Thus, if you model your data correctly up-front, you will save yourself much grief. Comparatively speaking, inefficient algorithms are much easier to tackle. Now, there are two points worth noting. First, proper data modeling depends on having accurate requirements, both technical and business. In fact, I will argue that 90% of design is figuring out the requirements. Second, remember to keep the fast iterations. Iterative refinement is key every step of the way, especially when defining requirements with the business stakeholders.
  • Sober assessment of technical debt: Do not take on technical debt without explicitly slating and prioritizing its paydown. Like any other debt, technical debt carries interest, and it behooves you to track how much you’re carrying on your books. Mounting technical debt means you’re building on a substandard foundation, and the longer you wait, the more expensive it will be to excise. Now, keep in mind that not all technical debt is the same. An undocumented constant is mildly annoying but trivial to fix. A bad internal API or antipattern is very annoying, will negatively impact development time, and is much harder to fix. A bad external API has been known to reduce strong men to tears and is very hard to fight. Evaluate and prioritize accordingly.
  • Discipline: This is by far the most important point, so I saved it for last. For these ideas to do you good, your organization needs to practice them consistently, and ideally, you want to do it from day one. The goal is to develop within the organization a culture of discipline, where doing the right thing is the natural, automatic choice.
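As a concrete illustration of the automated-tests and coverage bullets above, here is a minimal Ruby sketch using Minitest and SimpleCov; the Temperature class, file paths, and assertion are hypothetical, and the report SimpleCov writes to coverage/ supplies the empirical coverage measurement mentioned earlier.

    # test/temperature_test.rb -- one unit test plus coverage measurement.
    # All file names and the Temperature class are hypothetical.
    require 'simplecov'
    SimpleCov.start            # start coverage before loading the code under test

    require 'minitest/autorun'
    require_relative '../lib/temperature'   # hypothetical code under test

    class TemperatureTest < Minitest::Test
      def test_converts_celsius_to_fahrenheit
        assert_in_delta 212.0, Temperature.c_to_f(100), 0.001
      end
    end

Run it with ruby test/temperature_test.rb; wiring it into a CI environment is then just a matter of running the same command on every push.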

At first glance, this list may seem daunting. Fear not. There is a powerful advantage working in our favor: these practices feed off each other, and, almost paradoxically, the whole really is greater than the sum of its parts. For example, when I’m reviewing code for a new feature, I know that a) because we have fast iterations, the amount of code I’m looking at is small and comprehensible; b) because it’s covered by automated tests and we have complete test coverage, it is likely correct and I can be reasonably sure there are no regressions; c) because it’s documented, it’s easier to understand what the author was trying to do; and d) the static analysis tools have identified some of the areas I need to look at more closely. As a result, not only is the reviewer’s job easier, there is actually more value to the review, since no time is wasted dealing with truly egregious flaws and the reviewer can focus on the sort of issues only a second pair of eyes can detect. There is a genuine synergy here, one that can help take the development organization to the next level.

Categories: Programming

April 26, 2013: This Week at Engine Yard

Engine Yard Blog - Sat, 04/27/2013 - 00:25

We are thrilled to announce that the Distill site is up and running and tickets for the conference are officially on sale!

At Distill, on Treasure Island in San Francisco, our goal is to create a unique developer’s conference with world-class content, focusing on education, cross-pollination, and community.

See the speakers page for our current lineup, with more to come!

--Tasha Drew, Product Manager

Engineering Updates

You can now sign up for Engine Yard Cloud in Japanese!

We are working on polishing up our new Ruby 2.0 stack and plan to have it in early access shortly.

Social Calendar (Come say hi!)

Monday, April 29th - Thursday May 2nd: RailsConf!!! We’re excited to be a sponsor and have the inimitable Shai Rosenfeld (Testing HTTP APIs in Ruby) and ultra suave Edward Chiu (Engine Yard Cloud) presenting.

Articles of Interest

The question is finally answered: Do all Americans have pet eagles? http://imgur.com/gallery/yqjdEkB

Categories: Programming

Crafting a README

Engine Yard Blog - Fri, 04/26/2013 - 19:23

When looking at a project for the first time, the README file is often the first place users go for information on how to work with a program or library. For developers, it's often a challenge to figure out what to put in the README file, because it's unclear what users expect to find there. This article introduces a template that I often use for writing README files, based on experience both writing packages and using them. Examples are given in a generic text format, but it is recommended to look into Markdown if the project is intended for sites such as GitHub.

What Does It Do?

The first part should state, plainly and simply, what the project does. If it's meant to replace another project, it should state the shortcomings of the other software that made a different package necessary. It should also list any features that make the package stand out.

= Introduction =

This is a Ruby library to interface with the FooBar API. It was created because only Python bindings were available to interface with the API. Some notable features:

* OAuth authentication support
* SSL communication
* Results caching to lower API hits
* Supports version 1.0 and 2.0 of the API

What Is Needed To Use It?

Perhaps one of the most important items from a software packaging perspective is what is needed to use the package. Unless the package targets a specific OS, dependencies should point to each project's homepage rather than a distribution-specific package. However, distribution-specific packages can be listed in addition to the base requirements so users don't have to search around for the package on their particular distribution.

If the package requires compilation (a Ruby library with C extensions, for example), the build requirements should be provided as well. For packages that run under interpreted languages, the supported language runtime versions should also be indicated (Ruby 1.8/1.9, Python 2.7/3.2, Java 6/7, etc.).

Finally, any modules, libraries, and so on should be listed if they are bound to configuration options and not bundled with the language runtime.

= Requirements =

This code has been run and tested on Ruby 1.8 and Ruby 1.9.

== External Deps ==

* curb (https://github.com/taf2/curb) for curl based calls (allows for setting of custom headers)
* nokogiri (http://nokogiri.org/) for parsing the XML response
* sqlite-ruby (http://sqlite-ruby.rubyforge.org/) for cache storage

== Standard Library Deps ==

* OpenSSL for cryptography functionality

How Do I Install It?

Installation may be as simple as a command such as gem install foobar. However, the user may wish to do a source installation, so it's important to show instructions for that as well. Recommended installation instructions for various distributions can also be added if a packaged version of the project is available.

= Installation =

This package is available in RubyGems and can be installed with:

   gem install foobar

For users working with the source from GitHub, you can run:

   rake install

This will build and install the gem (you may need sudo/root permissions). You can also choose to build the gem manually if you want:

   rake build

Ubuntu users can install this package by executing:

   sudo apt-get install ruby-foobar

Note: If you use Bundler to create a gem with `bundle gem`, it will generate much of this README content for you.

How Do I Test It?

It's beneficial to both the user and the developer to have a method of testing. It lets users verify basic functionality before reporting bugs, and it gives the developer a place to point users to when filtering out local environment issues that prevent the package from working. This section should list the commands necessary to test the package.

= Tests =

An RSpec test suite is available to ensure proper API functionality. Do note that it uses the staging version of the API, not the production version (to prevent hitting API limits should something go wrong). Tests are set as the default rake target, so you can run them by simply executing `rake`.
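For reference, one way to wire the suite up as the default rake target is a Rakefile along these lines, a minimal sketch assuming the project uses RSpec as in the hypothetical foobar example:

    # Rakefile -- sketch of making the RSpec suite the default `rake` target.
    require 'rspec/core/rake_task'

    RSpec::Core::RakeTask.new(:spec)   # defines the `rake spec` task

    task default: :spec                # plain `rake` now runs the suite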

Where Can I Find More Information?

Here is where the project website should be listed. This could be a dedicated site with its own domain name, a listing on RubyDoc.info, or something as simple as a GitHub repository link. It should also include ways to build API documentation if there is any.

= More Information =

More information can be found on the [project website on GitHub](http://github.com/myuser/myproject). There is extensive usage documentation available [on the wiki](https://github.com/myuser/myproject/wiki).

== API Documentation ==

The main API is documented with yardoc, and can be built with a rake task:

   rake yard

from here you can use the yard server to browse the individual gem docs from the source root:

   yard server

or optionally you can run the main yard gem documentation server:

   yard server --gems

and docs can be viewed at `http://localhost:8808/`.

How Do I Use It?

This is by far one of the most important sections. Users often want to see a small piece of code to get them started on basic usage. This can be a simple connection and data loop, or it can be more extensive and show multiple examples for popular usage. Any examples in the source directory should be noted as well.

= Example Usage =

The following shows how to connect to the API and print a list of users:

   # -*- encoding: utf-8 -*-
   require 'foobar'

   api = Foobar::Api.new('[key]','[secret]')
   api.GetUsers().each do |user|
       puts "User: #{user.name}"
   end

What Are The License Terms?

This section should list the location of the LICENSE file, as well as what type of license it is. It’s especially important to note cases where there are multiple licenses, or an alternative commercial license available.

= License =

This project is licensed under the MIT license, a copy of which can be found in the LICENSE file.

How Do I Get Support?

For those who want support, the necessary procedures should be explained. This could be anything from a mailing list to pull requests.

= Support =

Users looking for support should file an issue on the GitHub issue tracking page (https://github.com/myuser/mypackage/issues), or open a pull request (https://github.com/myuser/mypackage/pulls) if they have a fix available. Those who wish to contribute directly to the project can contact me at <user@email.com> about getting repository access. Support is also available on IRC (#foobar @ Freenode).

Conclusion

This concludes our look at crafting a README file that helps users better understand a project. Note that these are what I would consider guidelines, so projects may choose to add more content or drop sections based on individual needs. However, it's also important to note that a detailed, well-thought-out README can go a long way towards encouraging users to try a package out, and can even help entice contributors.

Categories: Programming

Announcing: Distill Speakers and Ticket Sales

Engine Yard Blog - Thu, 04/25/2013 - 18:06

We’re thrilled to announce that the Distill website and speaker lineup are live. The first batch of tickets is now officially on sale!

Our vision for this event, first and foremost, is to provide a distillation of best practices, new technologies and progressive methods currently on the rise in software development. Our desire is to create a special forum where these ideas can be shared with an engaged audience of like-minded developers and artists. We received hundreds of amazing submissions and narrowed them down to the luminaries that comprise our excellent lineup. The talks will range from user experience to mobile development, to the Internet of Things and beyond. We’re excited to be able to bring in speakers from Ireland, Italy, Germany and other far-flung locations to inspire our attendees to change the world with their creations. Take a look at the lineup of Distill speakers here.

In addition, we are pleased to welcome Brent and Nolan Bushnell, James Whelton and Michael Lopp as our keynote speakers. They’re sure to inspire you with their depth of experience and captivating stories about the challenges and rewards of entrepreneurship, technology and education. But that’s not all--we’ll be announcing another keynote speaker in the weeks to come.

This two-day event will take place at The Winery SF on Treasure Island in San Francisco. Shuttles will transport you to and from the venue daily, so you can hang out in comfort and style. Distill is about education, cross-pollination and community--it is our hope that you forge new relationships with your fellow attendees and leave the event feeling enriched, edified and inspired. Stay tuned for more announcements--we’ve got plenty more tricks up our sleeve and we can’t wait to share them with you!

The first batch of tickets is now on sale here. There is a limited quantity of first-batch tickets, so get yours now. Trust us, you don’t want to miss this.

 

Categories: Programming

The Thinker: Michael Lopp to Keynote Distill

Engine Yard Blog - Tue, 04/23/2013 - 18:00

While we as a community spend time thinking about how to write great code, minimize bugs, determine the right database schema, anticipate platform shifts and more, Michael Lopp is thinking about the mindset of the developers in our community.  Michael thinks deeply about the problems developers face in their everyday lives, how to help developers identify their true goals, what makes them happy and how to achieve happiness, how to lead and how to follow and much more.  Michael has spoken at every FunConf I have hosted since its inception and has consistently given our attendees a more authentic, meaningful way to think about our lives, what we do and the implications.  Michael is one of the great thinkers in our community and we are very pleased to have him be a keynote speaker at Distill.  Here's a little more information on Michael:

Michael Lopp is a director at Palantir Technologies, a Silicon Valley software company dedicated to radicalizing the way the world interacts with data. Before joining Palantir, Lopp was part of the senior leadership team at Apple for nine years, where he led essential parts of the Mac OS X engineering team and subsequently managed the engineering team responsible for Apple's Online Store. Prior to Apple, he worked in engineering leadership at notable Silicon Valley companies such as Netscape, Symantec, and Borland. Lopp is a noted author in Silicon Valley; his blog, “Rands in Repose,” and his books, Managing Humans and Being Geek, are part of a new management and engineering canon.

Distill is a conference to explore the development of inspired applications. Tickets go on sale this week.

Categories: Programming

April 19, 2013: This Week at Engine Yard

Engine Yard Blog - Fri, 04/19/2013 - 19:04

Our hearts and thoughts are with Boston, Waco, and Chicago.

--Tasha Drew, Product Manager

Engineering Updates

Rails 4.0 beta1 (rails-4.0.0.beta1) is now in GA on our platform!

PHP is available in Early Access on Engine Yard Cloud! Learn all about using PHP with Engine Yard Cloud.

Do you PagerDuty? We do! We find it so useful and critical to maintaining a robust on-call system that we extended it to all of our Premium Support customers. This week our Operations Manager, Jamie Bleichner, announced even deeper integration for Engine Yard’s Premium Support offering, with Zendesk, New Relic, Pingdom, Splunk, Nagios, and many other integrations available out of the box.

Social Calendar (Come say hi!)

Tuesday, April 23rd: Dublin, Ireland: the Mobile App Development Ireland meetup group is hosting an iOS development overview class.

Wednesday, April 24th - Friday, April 26th: Chef Conf, San Francisco! Engine Yard is sponsoring and a bunch of us are attending! Hope to see you there.

Thursday, April 25th: Dublin, Ireland: Node.js Dublin will be meeting with two speakers, Dominic Tarr covering “Streams in Node.js,” and Richard Rodger reporting on “The anatomy of an app.”

Thursday, April 25th: PDX Coder Dojo: K-12 students and their parents can play, explore, and learn about coding and building software!

Articles of Interest

An in-depth article all about PHP on Engine Yard Cloud, by Ireland’s product manager extraordinaire Noah Slater!

Treehouse taught us how to work with iOS core and open source frameworks.

Categories: Programming

Scrum Trainer / Senior Fellow Position Available

10x Software Development - Steve McConnell - Wed, 04/17/2013 - 19:38

If you're a highly qualified Scrum Professional, check out our opening for a Scrum Trainer / Senior Fellow. Here is a brief description (follow the link for more details):

Travel the World, Help Teams Adopt Scrum, and Reach Their Full Potential

Share your hard-won lessons learned with others. Work with a staff of world-class software experts including Steve McConnell, author of Code Complete and other software industry classics. Become a part of Construx Software, a company recognized multiple times as being the best small company to work for in Washington state.

Requirements for Scrum Trainer/Consultant

We are looking for candidates who have:

  • A minimum of 10 years of broad and deep experience in software development, including deep subject matter expertise in Scrum.
  • Broad and deep knowledge of current software development in-the-trenches practice, research, and literature.
  • Excellent verbal communication skills including the ability to present to groups of professionals.
  • “Leadership” level understanding of at least two of the following areas: Agile Development, Software Project Management, Software Requirements, Software Process, Software Maintenance, Software Design, Software Construction, Software Test, Software Quality, Software Configuration Management, and Software Tools and Methods.
  • The ability to work both independently and as part of a collaborative team.
  • Willingness to commit to providing excellent service quality.
  • Willingness to spend approximately 50% of your time traveling to client locations in North America, with occasional international trips.
  • An ongoing personal commitment to learning from clients, co-workers, publications, and other sources.

Preferred but not required:

  • Training experience and/or public speaking experience.
  • A four-year degree from an accredited university.
  • Industry certifications including Certified Scrum Trainer, Certified Scrum Coach, Certified Scrum Practitioner, Certified Scrum Master, and Professional Scrum Master.
  • A record of conference presentations.
  • A record of published work in refereed journals, blogs, and/or popular trade publications.

No Training Experience?

Our primary interest is your depth of technical expertise. If you are technically qualified, Construx will provide deep support for developing your training and presentation skills.

Why Construx?

Construx Software is an established industry leader in software development best practices, providing consulting and training services to leading companies worldwide. Construx management has created an environment that empowers employees to perform at their highest levels while maintaining a healthy work-life balance. Low turnover, consistent profitability, and an exceptional workforce are reasons this company has been named the best small company to work for in Washington state multiple times. Steve McConnell, Construx CEO, said of his thoughts upon founding the company 17 years ago: "I wanted to create a company that I personally would want to work in the rest of my career."

For more details, and to contact us or apply for the position, please visit here.
