Software Development Blogs: Programming, Software Testing, Agile, Project Management

Feed aggregator

All About Cloud Storage

Engine Yard Blog - Fri, 03/15/2013 - 18:50

With the rise of social apps like Facebook, Instagram, and YouTube, managing user-generated content has become a growing challenge.  Amazon S3, Google Storage, Rackspace Cloud Files, and other similar services have sprung up to help application developers solve a common problem - scalable asset storage.  And of course, they all utilize “the cloud”!

The Problem

Popular social applications, scientific applications, media-generating applications, and more are able to generate massive amounts of data in a small amount of time.  Here are just a few examples:

  • 72 hours of video are uploaded every minute by YouTube users. (source)
  • 20 million photos are uploaded to Snapchat every day. (source)
  • Pinterest has stored over 8 billion objects and 410 terabytes of data since their launch in 2009. (source)
  • Twitter generates roughly 12 terabytes of data per day. (source)

When your application begins to store massive amounts of user generated data, your team will inevitably need to decide where to spend its engineering effort in relation to that data.  If your application is engineered to store assets on your own hardware/infrastructure, your team will spend plenty of time and money related to storing and serving your assets efficiently.  Alternately, an application can easily store assets with a cloud storage provider.  Choosing this route allows application content to scale almost limitlessly while only paying for the resources and space needed to store and serve the assets. In effect, cloud storage frees up your team’s engineering time to focus more on creating unique application features, rather than reinventing the file storage wheel when scalability becomes an issue.
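
To make the “only pay for what you use” tradeoff concrete, here is a toy Ruby comparison of usage-based storage pricing against a flat self-hosted cost. Every number here is a hypothetical placeholder, not any provider’s actual rate:

```ruby
# Toy cost comparison. Both figures are hypothetical placeholders --
# check a real provider's rate card before drawing any conclusions.
PRICE_PER_GB_MONTH  = 0.10   # usage-based cloud storage rate (illustrative)
FLAT_SERVER_MONTHLY = 200.0  # flat cost of running your own storage box (illustrative)

def cloud_storage_cost(gigabytes)
  gigabytes * PRICE_PER_GB_MONTH
end

# Usage-based pricing scales with what you actually store:
[10, 500, 5_000].each do |gb|
  puts format('%6d GB: cloud $%0.2f vs self-hosted flat $%0.2f',
              gb, cloud_storage_cost(gb), FLAT_SERVER_MONTHLY)
end
```

The crossover point depends entirely on real prices and on the operations time a self-hosted setup consumes, which is the harder cost to estimate.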

When should you consider using cloud based storage for your application?

  • When user generated assets are part of your application. Does your application accept uploads from your users?  Does your application generate files server-side?  If your application is going to accept uploads from users or generate content stored on the filesystem, you will likely want to consider using a cloud storage provider sooner rather than later.
  • When your application is likely to grow beyond a single server. If your application is small enough to run on a single dedicated server or web host, and you don’t expect it to grow beyond that single server, it doesn’t make sense to use cloud storage for your assets.  Simply store them on the local file system and call it a day.  If, however, you expect any growth from your application that would require you to run more than one application server, you will immediately reap the benefits of storing your assets in the cloud.  By storing your assets in the cloud, you can horizontally scale your service to as many front-end application servers as your heart desires, without the need to replicate your file system assets to any new servers.  Because your assets are stored centrally with a cloud service, they will be accessible from a given hostname no matter how many application servers your application runs.
  • When it’s more cost-effective for your team to focus on business-critical application features rather than engineering a scalable file storage system. If you are strapped for either time or money, and you expect your application to grow, you can’t go wrong with cloud storage.  Cloud storage gives you the flexibility to get up and running quickly, scale your storage to the growing needs of your application, and only pay for the storage and resources you use.  This in turn allows you to focus less on hardware costs, operations, and configuration for storing assets, and more on developing your business.

Integration & Access

Most of the leaders in online cloud storage provide API access to their platforms, allowing developers to integrate web-scale asset storage and file access within their applications.  Below we’ll look at some code examples using an SDK or library to store assets on Amazon S3.  Many libraries and SDKs make setting the storage provider a breeze, allowing you to easily deploy file storage on many of the popular providers.

Ruby & CarrierWave

Code examples below have been adapted from the CarrierWave GitHub repository.

  • Install CarrierWave: gem install carrierwave or in your Gemfile gem 'carrierwave'
  • Install Fog: gem install fog or in your Gemfile gem "fog", "~> 1.3.1"
  • In an initialization file add the following:
    CarrierWave.configure do |config|
      config.fog_credentials = {
        :provider               => 'AWS',       # required
        :aws_access_key_id      => 'xxx',       # required
        :aws_secret_access_key  => 'yyy',       # required
        :region                 => 'eu-west-1'  # optional, defaults to 'us-east-1'
      }
      config.fog_directory  = 'name_of_directory'
      config.fog_public     = false
      config.fog_attributes = {'Cache-Control' => 'max-age=315576000'}
      config.asset_host     = 'https://assets.example.com'
    end
  • Create your uploader class:
    class AvatarUploader < CarrierWave::Uploader::Base
      storage :fog
    end
  • Using your uploader directly:
    uploader = AvatarUploader.new
  • Using your uploader with ActiveRecord:
  • Add a field to your database table and require CarrierWave:
    add_column :users, :avatar, :string in a database migration file
    require 'carrierwave/orm/activerecord' in your model file.
  • Mount your uploader to your model:
    class User < ActiveRecord::Base
      mount_uploader :avatar, AvatarUploader
    end
  • Work with your model and files:
    u = User.new
    u.avatar = params[:file]
    u.avatar = File.open('somewhere')
    u.avatar.url # => '/url/to/file.png'
    u.avatar.current_path # => 'path/to/file.png'
    u.avatar.identifier # => 'file.png'

Here are some CarrierWave examples for uploading to Amazon S3, Rackspace Cloud Files, and Google Storage.  And there are gems for use with other ORMs like DataMapper, Mongoid, and Sequel.


PHP & the AWS SDK

Amazon provides a PHP SDK to work with AWS APIs and services.  For this code example, we will use instructions straight from the SDK repository README and sample code.

  • Copy the contents of config-sample.inc.php and add your credentials as instructed in the file.
  • Move your file to ~/.aws/sdk/config.inc.php.
  • Make sure that getenv('HOME') points to your user directory. If not, you'll need to set putenv('HOME=<your-user-directory>').
  • // Instantiate the AmazonS3 class
    $s3 = new AmazonS3();

    // Create a bucket to upload to
    $bucket = 'YOUR-BUCKET-NAME' . strtolower($s3->key);
    if (!$s3->if_bucket_exists($bucket)) {
        $response = $s3->create_bucket($bucket, AmazonS3::REGION_US_E1);
        if (!$response->isOK()) die('Could not create `' . $bucket . '`.');
    }

    // Download a public object.
    $response = $s3->get_object('aws-sdk-for-php', 'some/path-to-file.ext', array(
        'fileDownload' => './local/path-to-file.ext'
    ));

    // Upload an object.
    $response = $s3->create_object($bucket, 'some/path-to-file.ext', array(
        'fileUpload' => './local/path-to-file.ext'
    ));

Node & Knox

For Node.js I have adapted example code from the Knox Amazon S3 client on GitHub.

  • // Configure the client
    var knox = require('knox');
    var client = knox.createClient({
        key: '<api-key-here>'
      , secret: '<secret-here>'
      , bucket: 'BUCKET-NAME'
    });

    // Putting a file on S3
    client.putFile('some/path-to-file.ext', 'bucket/file-name.ext', function(err, res){
      // Logic
    });

    // Getting a file from S3
    client.get('/some/path-to-file.ext').on('response', function(res){
      res.on('data', function(chunk){
        // Logic
      });
    }).end();

    // Deleting a file on S3
    client.del('/some/path-to-file.ext').on('response', function(res){
      // Logic
    }).end();

As you can see in the previous code examples, working with the S3 API is very straightforward, and there are plenty of libraries readily available for most languages.  I definitely recommend taking a hard look at using a cloud storage provider for your next project.  You’ll save time by not reinventing file storage solutions, reap the benefits of focusing on developing your application, and have practically unlimited storage scalability as you need it.

Categories: Programming

Welcome Riak 1.3 to Engine Yard

Engine Yard Blog - Wed, 03/13/2013 - 00:06

Hello friends! A few months ago we rolled out support for Riak, Basho’s distributed key/value database, on the Engine Yard stack. Today we want to share with you the improvements we’ve made to the product, including the release of a brand new version, 1.3. We hope you are as excited about this release as we are. It includes major enhancements and features.

We want to make it easier than ever to get started with Riak on our platform and we’ve made a video to help you. Follow Edward’s screencast and you’ll have a production-ready cluster in a matter of minutes.

Introducing Riak from Engine Yard on Vimeo.

Why you should use/upgrade to Riak 1.3

Here is a summary of the enhancements included in version 1.3 that you definitely want to be aware of. For more information check Basho’s Introducing Riak 1.3 documentation.

Active Anti-Entropy

Riak is designed to be highly available. This means that the database understands and can survive (normally catastrophic) events like node failures and data inconsistencies.  These inconsistencies, often referred to as ‘entropy’, can arise due to failure modes, concurrent updates, and physical data loss or corruption. Ironically, these events are not uncommon in distributed systems, and the cloud is the most pervasive example.

Riak already had several features for repairing this “entropy” (lost or corrupted data), but they required user intervention. Riak 1.3 introduces active anti-entropy (AAE) to solve this problem automatically. The AAE process repairs cluster entropy on an ongoing basis by using data replicated in other nodes to heal itself. This feature is enabled by default on Riak 1.3 clusters.
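
As a toy illustration of the idea behind anti-entropy (this is not Riak’s actual implementation, which compares hash trees over its key space), the sketch below detects a key whose copies have diverged across two replicas by comparing per-key digests, then repairs it from the healthy copy. The data and key names are invented for the example:

```ruby
require 'digest'

# Two replicas of the same logical key/value data; replica_b has
# suffered corruption on one key (illustrative data, not Riak internals).
replica_a = { 'user:1' => 'alice', 'user:2' => 'bob',  'user:3' => 'carol' }
replica_b = { 'user:1' => 'alice', 'user:2' => '#$%!', 'user:3' => 'carol' }

# Detect divergence by comparing per-key digests rather than full values.
def divergent_keys(a, b)
  (a.keys | b.keys).select do |k|
    Digest::SHA256.hexdigest(a[k].to_s) != Digest::SHA256.hexdigest(b[k].to_s)
  end
end

# "Repair" by copying the intact replica's value over the corrupted one.
divergent_keys(replica_a, replica_b).each { |k| replica_b[k] = replica_a[k] }
```

Real AAE does this continuously and in both directions, using read-repair semantics to decide which copy wins; the digest comparison is the part this sketch captures.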

Improved MapReduce

Riak supports MapReduce queries in both JavaScript and Erlang for aggregation and analytics tasks. In Riak 1.3, tunable backpressure is extended to the MapReduce sink to prevent problems at endpoint processes. (Backpressure keeps Riak processes from being overwhelmed and it also prevents memory consumption getting out of control.)
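
Backpressure itself is a general pattern: a bounded buffer forces a fast producer to block until a slow consumer catches up, which caps memory use. Here is a minimal Ruby illustration of that pattern (not Riak’s Erlang internals):

```ruby
# A SizedQueue blocks the producer once it holds `max` items,
# so a slow consumer naturally throttles a fast producer.
queue   = SizedQueue.new(5)   # buffer capped at 5 in-flight items
results = []

consumer = Thread.new do
  while (item = queue.pop) != :done
    sleep 0.001               # simulate a slow sink
    results << item * 2
  end
end

100.times { |i| queue.push(i) }  # push blocks whenever the buffer is full
queue.push(:done)                # sentinel to stop the consumer
consumer.join
```

At no point do more than five items sit in memory between the two threads, no matter how fast the producer runs.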

New file system default on EBS-backed Riak clusters

Now when creating an EBS-backed Riak cluster, we’ll format your volume using the ext4 file system instead of ext3. This change enhances I/O performance and durability.

Riak HAProxy enabled in utility instances

In addition to application instances, we now allow utility instances to address Riak cluster nodes using HAProxy.

And there is more! Riak 1.3 provides Advanced Multi-Datacenter Replication Capabilities (available for Riak Enterprise customers).  This release also provides better performance, more TCP connections and easier configuration.

Give it a try!

With 500 hours for free you can try Riak 1.3 without any hassle.  We have you covered! All of our Riak installations come with full support from us at Engine Yard and from our partner Basho.

Categories: Programming

Hack your bundle for fun and profit

Engine Yard Blog - Fri, 03/08/2013 - 01:44

Note: Friend of Engine Yard André Arko, a member of Bundler core, wrote this insightful piece for us about little-known tricks in Bundler. Check out his own site here.

Bundler has turned out to be an amazingly useful tool for installing and tracking gems that a Ruby project needs--so useful, in fact, that nearly every Ruby project uses it. Even though it shows up practically everywhere, most people don’t know about Bundler’s built-in tools and helpers. In an attempt to increase awareness (and Ruby developer productivity), I’m going to tell you about them.

Install, update, and outdated

You probably already know this, but I’m going to summarize for the people who are just getting started and don’t know yet. Run bundle install to install the bundle that’s requested by your project. If you’ve just run git pull and there are new gems? bundle install. If you’ve just added new gems or changed gem versions in the Gemfile? bundle install. It might seem like you want to bundle update, but that won’t just install gems — it will try to upgrade every single gem in your bundle. That’s usually a disaster unless you really meant to do it.

The update command is for when gems you use have been updated, and you want your bundle to have the newest versions that your Gemfile will allow. Run bundle outdated to print a list of gems that could be upgraded. If you want to upgrade a specific gem, run bundle update GEM, or run bundle update to update everything. After the update finishes, make sure all your tests pass before you commit your new Gemfile.lock!

Show and open

Most people know about bundle show, which prints the full path to the location where a gem is installed (probably because it’s called out in the success message after installing!). Far more useful, however, is the bundle open command, which will open the gem itself directly into your EDITOR. Here’s a minimalist demo:

$ bundle install
Fetching gem metadata from https://rubygems.org/..........
Resolving dependencies...
Installing rack (1.5.2)
Using bundler (1.3.1)
Your bundle is complete! Use `bundle show [gemname]` to see where a bundled gem is installed.
$ echo $EDITOR
mate -w
$ bundle open rack

That’s all you need to get the installed copy of rack open in your editor. Being able to edit gems without having to look for them can be an amazing debugging tool. It makes it possible to insert print or debugger statements in a few seconds. If you do change your gems, though, be sure to reset them afterwards! There will be a pristine command soon, but for now, just run bundle exec gem pristine to restore the gems that you edited.


The show command still has one more trick up its sleeve, though: bundle show --paths. Printing a list of paths may not sound terribly useful, but it makes it trivial to search through the source of every gem in your bundle. Want to know where ActionDispatch::RemoteIp is defined? It’s a one-liner:

$ grep ActionDispatch::RemoteIp `bundle show --paths`

Whether you use grep, ack, or ag, it’s very easy to set up a shell function that allows you to search the current bundle in just a few characters. Here’s mine:

function back () {
  ack "$@" `bundle show --paths`
}

With that function, searching becomes even easier:

$ back ActionDispatch::RemoteIp

Binstubs

One of the most annoying things about using Bundler is the way that you (probably) have to run bundle exec whatever anytime you want to run a command. One of the easiest ways around that is installing Bundler binstubs. By running bundle binstubs GEM, you can generate stubs in the bin/ directory. Those stubs will load your bundle, and the correct version of the gem, before running the command. Here's an example of setting up a binstub for rspec.

$ bundle binstubs rspec-core
$ bin/rspec spec
No examples found.
Finished in 0.00006 seconds
0 examples, 0 failures

Use binstubs for commands that you run often, or for commands that you might want to run from (say) a cronjob. Since the binstubs don't have to load as much code, they even run faster. Rails 4 adopts binstubs as an official convention, and ships with bin/rails and bin/rake, which are both set up to always run for that specific application.

Creating a Gemfile

I've seen some complaints recently that it's too much work to type source 'https://rubygems.org/' when creating a new Gemfile. Happily, Bundler will do that for you! When you're starting a new project, you can create a new Gemfile with Rubygems.org as the source by running a single command:

$ bundle init

At this point, you're ready to add gems and install away!

Git local gems

A lot of people ask how they can use Bundler to modify and commit to a gem in their Gemfile. Thanks to work led by José Valim, Bundler 1.2 allows this, in a pretty elegant way. With one setting, you can load your own git clone in development, but deploying to production will simply check out the last commit you used.

Here’s how to set up a git local copy of rack:

$ echo "gem 'rack', :github => 'rack/rack', :branch => 'master'" >> Gemfile
$ bundle config local.rack ~/sw/gems/rack
$ bundle show rack

Now that it’s set up, you can edit the code your application will use, but still commit in that repository as often as you like. Pretty sweet.

Ruby versions

Another feature of Bundler 1.2 is ruby version requirements. If you know that your application only works with one version of ruby, you can require that version. Just add one line to your Gemfile specifying the version number as a string.

ruby '1.9.3'

Now Bundler will raise an exception if you try to run your application on a different version of ruby. Never worry about accidentally using the wrong version while developing again!
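
Under the hood, the idea is simple: compare the declared version against the running interpreter and fail loudly on a mismatch. Here is a minimal sketch of that kind of guard; the method name and error wording are mine, not Bundler’s actual code:

```ruby
# A toy version guard mimicking the *idea* of Bundler's `ruby` directive.
# (Bundler's real check also handles engines, patchlevels, and requirements.)
def require_ruby!(expected)
  actual = RUBY_VERSION
  unless actual.start_with?(expected)
    raise "Your Ruby version is #{actual}, but your Gemfile specified #{expected}"
  end
  actual
end

require_ruby!(RUBY_VERSION)  # matches the current interpreter, so no error
```

Calling it with any other version string raises, which is exactly the behavior you want before an app boots on the wrong interpreter.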

Dependency graph

Bundler uses your Gemfile to create what is technically called a “dependency graph”, where many gems have various dependencies on each other. It can be pretty cool to see that dependency graph drawn as a literal graph, and that’s what the bundle viz command does. You’ll need to install GraphViz and the ruby-graphviz gem first.

$ brew install graphviz
$ gem install ruby-graphviz
$ bundle viz

Once you’ve done that, though, you get a pretty picture that’s a lot of fun to look at. Here’s the graph for a Gemfile that just contains the Rails gem.


IRB in your bundle

I have one final handy tip before the big finale: the console command. Running bundle console will open an IRB prompt for you, but it will also load your entire bundle and all the gems in it beforehand. If you want to try experimenting with the gems you use, but don’t have the Rails gem to give you a Rails console, this is a great alternative.

$ bundle console
>> Rack::Server.new
=> #<Rack::Server:0x007fb439037970 @options=nil>

Creating a new gem

Finally, what I think is the biggest and most useful feature of Bundler after installing things. Since Bundler exists to manage gems, the Bundler team is very motivated to make it easy to create and manage gems. It’s really, really easy. You can create a directory with the skeleton of a new gem just by running bundle gem NAME. You’ll get a directory with a gemspec, readme, and lib file to drop your code into. Once you’ve added your code, you can install the gem on your own system to try it out just by running rake install. Once you’re happy with the gem and want to share it with others, pushing a new version of your gem to rubygems.org is as easy as rake release. As a side benefit, gems created this way can also easily be used as git gems. That means you (and anyone else using your gem) can fork, edit, and bundle any commit they want to.
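
The gemspec that bundle gem scaffolds for you is itself just Ruby. The sketch below shows roughly the shape of a minimal gemspec; the gem name, version, and other values are illustrative placeholders, not actual generated output:

```ruby
require 'rubygems'

# A minimal gemspec of the kind `bundle gem` scaffolds for you.
# All names and values here are illustrative placeholders.
spec = Gem::Specification.new do |s|
  s.name          = 'my_new_gem'
  s.version       = '0.0.1'
  s.authors       = ['Your Name']
  s.summary       = 'One-line summary of what the gem does'
  s.files         = ['lib/my_new_gem.rb']
  s.require_paths = ['lib']
end

puts "#{spec.name} #{spec.version}"
```

Drop your code under lib/, fill in the real metadata, and the rake install / rake release tasks mentioned above take it from there.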

Step 3: Profit

Now that you know all of the handy stuff Bundler will do for you, I suggest trying it out! Search your bundle, create a gem, edit it with git locals, and release it to rubygems.org. As far as I’m concerned, the absolute best thing about Bundler is that it makes it easier for everyone to share useful code, and collaborate to make Ruby better for everyone.

Categories: Programming

New book review ‘SOA Made Simple’ coming soon.

Ben Wilcock - Tue, 02/05/2013 - 14:24

My review copy has arrived and I’ll be reviewing it just as soon as I can, but in the meantime if you’d like more information about this new book go to http://bit.ly/wpNF1J

Categories: Architecture, Programming

Moving to Berlin and auf wiedersehen to friends

Chad Fowler - Tue, 01/08/2013 - 19:39

It’s hard to believe it’s been almost two years since InfoEther was acquired by LivingSocial. Since then, we’ve built the strongest development team I’ve ever known. We’ve set records for e-commerce transaction volume. We’ve grown at an incredible pace, both as a business and as a technology team. We’ve shipped a lot of software, and made millions of people’s lives more interesting in the process.  I’ve had the privilege to work with some of the most admired engineers in our industry. I’m proud of the team Aaron Batalion (from whom I’ve learned a ton about running a consumer internet product) had assembled before InfoEther arrived, and of the team that ultimately grew from that foundation.

As I mentioned when I announced our Hungry Academy program, for me personally the experience at LivingSocial has been intense. As I said in that post, “These last 8 months at LivingSocial have been the best 4 years of my career.” That holds true today.

Playing the role of Senior Vice President of Technology and serving on the senior executive team at LivingSocial has been a rewarding learning experience.  I’m humbled by the talent and experience of every member of that team, and since my first day have been awed by Tim O’Shaughnessy’s business sense and natural leadership ability.  I look back on my career, and a handful of teachers and mentors stand out that have had a significant impact on me. Tim is now in that very short list.

All the while, though, I’ve known that I would eventually go back to a more hands-on role, personally building products and solving technical problems. I am, after all, a Passionate Programmer.

Kelly and I have also had a goal to (before we get too old to fully enjoy it) live overseas again.  Our time in India was a huge influence on both of us, personally and professionally, and we’ve long since hoped to gain a similar experience in a different part of the world, specifically Europe.

So that’s what we’re going to do.  I’m taking a role as CTO of a technology startup in one of our favorite cities: Berlin.  I’ll be working hands-on to develop cross-platform software with a small, talented team of engineers, designers, and product managers. I’ll be transitioning from my role at LivingSocial for the remainder of January and will relocate to Berlin, starting the new job in mid-February.

As excited as I am to move to the next adventure, it’s always sad leaving a great company like LivingSocial.  I’ve made some friendships that will last forever, and I’ll miss the team immensely. I know, however, that they’re in good hands and that 2013 is going to be a fantastic year for the business. 

Categories: Programming

Leaving a Legacy System revisited

Chad Fowler - Mon, 10/29/2012 - 11:45

Early last year, I posted about how we’ve unfortunately turned the word “legacy” into a bad word in the software industry.

At Nordic Ruby and again at Aloha Ruby Conf this year I turned this idea into an exploratory presentation. I think it turned out pretty well and is an important topic. Watch it and let me know what you think!

Categories: Programming

Facebook Has An Architectural Governance Challenge

Just to be clear, I don't work for Facebook, I have no active engagements with Facebook, my story here is my own and does not necessarily represent that of IBM. I'd spent a little time at Facebook some time ago, I've talked with a few of its principal developers, and I've studied its architecture. That being said:

Facebook has a looming architectural governance challenge.

When I last visited the company, they had only a hundred or so developers, the bulk of whom fit cozily in one large war room. Honestly, it was nearly indistinguishable from a Really Nice college computer lab: nice work desks, great workstations, places where you could fuel up with caffeine and sugar. Dinner was served right there, so you never needed to leave. Were I a twenty-something with only a dog and a futon to my name, it would have been geek heaven. The code base at the time was, by my estimate, small enough that it was grokable, and the major functional bits were not so large and were sufficiently loosely coupled such that development could proceed along nearly independent threads of progress.

I'll reserve my opinions of Facebook's development and architectural maturity for now. But, I read with interest this article that reports that Facebook plans to double in size in the coming year.

Oh my, the changes they are a comin'.

Let's be clear, there are certain limited conditions under which the maxim "give me PHP and a place to stand, and I will move the world" holds true. Those conditions include having a) a modest code base b) with no legacy friction c) growth and acceptance and limited competition that masks inefficiencies, d) a hyper energetic, manically focused group of developers e) who all fit pretty much in the same room. Relax any of those constraints, and Developing Really Really Hard just doesn't cut it any more.

Consider: the moment you break a development organization across offices, you introduce communication and coordination challenges. Add the crossing of time zones, and unless you've got some governance in place, architectural rot will slowly creep in and the flaws in your development culture will be magnified. The subtly different development cultures that will evolve in each office will yield subtly different textures of code; it's kind of like the evolutionary drift on which Darwin reported. If your architecture is well-structured, well-syndicated, and well-governed, you can more easily split the work across groups; if your architecture is poorly-structured, held in the tribal memory of only a few, and ungoverned, then you can rely on heroics for a while, but that's unsustainable. Your heroes will dig in, burn out, or cash out.

Just to be clear, I'm not picking on Facebook. What's happening here is a story that every group that's at the threshold of complexity must cross. If you are outsourcing to India or China or across the city, if you are growing your staff to the point where the important architectural decisions no longer will fit in One Guy's Head, if you no longer have the time to just rewrite everything, if your growing customer base grows increasingly intolerant of capricious changes, then, like it or not, you've got to inject more discipline.

Now, I'm not advocating extreme, high ceremony measures. As a start, there are some fundamentals that will go a long way: establish a well-instrumented and well-automated build and release system; use some collaboration tools that channel work but also allow for serendipitous connections; codify and syndicate the system's load-bearing walls/architectural decisions; create a culture of patterns and refactoring.

Remind your developers that what they do, each of them, is valued; remind your developers there is more to life than coding.

It will be interesting to watch how Facebook metabolizes this growth. Some organizations are successful in so doing; many are not. But I really do wish Facebook success. If they thought the past few years were interesting times, my message to them is that the really interesting times are only now beginning. And I hope they enjoy the journey.
Categories: Architecture

How Watson Works

Earlier this year, I conducted an archeological dig on Watson. I applied the techniques I've developed for the Handbook, which involve the use of the UML, Philippe Kruchten's 4+1 View Model, and IBM's Rational Software Architect. The fruits of this work have proven to be useful as groups other than Watson's original developers begin to transform the Watson code base for use in other domains.

You can watch my presentation at IBM Innovate on How Watson Works here.
Categories: Architecture

Books on Computing

Over the past several years, I've immersed myself in the literature of the history and the implications of computing. All told, I've consumed over two hundred books, almost one hundred documentaries, and countless articles and websites - and I have a couple of hundred more books yet to metabolize. I've begun to name the resources I've studied here and so offer them up for your reading pleasure.

I've just begun to enter my collection of books - what you see there now at the time of this blog is just a small number of the books that currently surround me in my geek cave - so stay tuned as this list grows. If you have any particular favorites you think I should study, please let me know.
Categories: Architecture

The Computing Priesthood

At one time, computing was a priesthood, then it became personal; now it is social, but it is becoming more human.

In the early days of modern computing - the 40s, 50s and 60s - computing was a priesthood. Only a few were allowed to commune directly with the machine; all others would give their punched card offerings to the anointed, who would in turn genuflect before their card readers and perform their rituals amid the flashing of lights, the clicking of relays, and the whirring of fans and motors. If the offering was well-received, the anointed would call the communicants forward and in solemn silence hand them printed manuscripts, whose signs and symbols would be studied with fevered brow.

But there arose in the world heretics, the Martin Luthers of computing, who demanded that those glass walls and raised floors be brought down. Most of these heretics cried out for reformation because they once had a personal revelation with a machine; from time to time, a secular individual was allowed full access to an otherwise sacred machine, and therein would experience an epiphany that it was the machines who should serve the individual, not the reverse. Their heresy spread organically until it became dogma. The computer was now personal.

But no computer is an island entire of itself; every computer is a piece of the continent, a part of the main. And so it passed that the computer, while still personal, became social, connected to other computers that were in turn connected to yet others, bringing along their users who delighted in the unexpected consequences of this network effect. We all became part of the web of computed humanity, able to weave our own personal threads in a way that added to this glorious tapestry whose patterns made manifest the noise and the glitter of a frantic global conversation.

It is as if we have created a universe, then as its creators, made the choice to step inside and live within it. And yet, though connected, we remain restless. We now strive to craft devices that amplify us, that look like us, that mimic our intelligence.

Dr. Jeffrey McKee has noted that "every species is a transitional species." It is indeed so; in the co-evolution of computing and humanity, both are in transition. It is no surprise, therefore, that we now turn to re-create computing in our own image, and in that journey we are equally transformed.
Categories: Architecture


No matter what future we may envision, that future relies on software-intensive systems that have not yet been written.

You can now follow me on Twitter.
Categories: Architecture

There Were Giants Upon the Earth

Steve Jobs. Dennis Ritchie. John McCarthy. Tony Sale.

These are men who - save for Steve Jobs - were little known outside the technical community, but without whom computing as we know it today would not be. Dennis created Unix and C; John invented Lisp; Tony continued the legacy of Bletchley Park, where Turing and others toiled in extreme secrecy but whose efforts shortened World War II by two years.

All pioneers of computing.

They will be missed.
Categories: Architecture

Steve Jobs

This generation, this world, was graced with the brilliance of Steve Jobs, a man of integrity who irreversibly changed the nature of computing for the good. His passion for simplicity, elegance, and beauty - even in the invisible - was and is an inspiration for all software developers.

Quote of the day:

Almost everything - all external expectations, all pride, all fear of embarrassment or failure - these things just fall away in the face of death, leaving only what is truly important. Remembering that you are going to die is the best way I know to avoid the trap of thinking you have something to lose. You are already naked. There is no reason not to follow your heart.
Steve Jobs
Categories: Architecture