
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Architecture

Day 1 of 7 Days of Agile Results - Sunday (Getting Started)

Your Outcome: Learn how to get started quickly with Agile Results, even if you've never read the book.

Welcome to day 1 of 7 Days of Agile Results.  By following along each day, you can learn how to use Agile Results as a powerful system to easily triple your productivity.

Agile Results is the productivity system introduced in my best-selling time management book, Getting Results the Agile Way.

You don't have to invest a lot of time to get a lot of benefit.  You don't even need any ramp-up time.  You can just get started now and implement other concepts from the book as time goes by, when you are ready for them.

The main idea is to get benefits now vs. later.

Here are some of the big benefits you'll gain from using Agile Results as your productivity system:

  1. You'll automatically master time management, motivation, and personal productivity by using proven practices.
  2. You'll improve and amplify your impact by focusing on your top 3 Wins for the week.
  3. You'll avoid getting overloaded and overwhelmed by your day by using 3 Wins each day to focus and prioritize your time, energy, and attention.
  4. You'll reduce your stress and improve your peace of mind by generating a vision for your week and for each day.
  5. You'll improve your satisfaction each day by knowing that you actually got the important things done.
  6. You'll quickly learn how to improve your productivity through continuous learning, feedback, and changing your approach.
  7. You'll learn how to embrace change and get better results in any situation.

With that in mind, let's take a quick look at how Agile Results works:

  1. On Mondays, write down 3 outcomes or "Wins" that you want for the week. (This is called Monday Vision.)
  2. Each day, write down 3 outcomes or "Wins" that you want for each day. (This is called Daily Outcomes.)
  3. On Fridays, ask yourself, "What are 3 things going well?" and "What are 3 things to improve?", and write your answers down. (This is called Friday Reflection.)

This is the Monday Vision, Daily Wins, Friday Reflection pattern in Agile Results.  It’s a simple pattern for weekly results.  By focusing on 3 Wins each week, and each day, you apply concentrated effort to your most important results.  Focus is your force multiplier.

It might sound incredibly simple, but that's the idea.  Agile Results really is a simple system for meaningful results.

Simple, but powerful. 

When you identify 3 outcomes or "Wins", what you are doing is:

  1. Prioritizing among your multiple, competing priorities.
  2. Focusing on your highest value results.
  3. Creating a way to more consistently produce better and better results.

What you are actually doing is learning how to identify and produce continuous value … the agile way.

And, as you will learn, value is the ultimate short-cut.

Assignment

  1. Read a few case studies and testimonials to understand some of the results that people achieve using Agile Results.
  2. Add a reminder to your calendar to test-drive Agile Results each day for this week.
Categories: Architecture, Programming

Stuff The Internet Says On Scalability For January 17th, 2014

Hey, it's HighScalability time:


From the stunning Scale of the Universe - Interactive Flash Animation
  • $7 trillion: US spend on patrolling oil sea-lanes; 82 billion: files served by MaxCDN in 5 months
  • Quotable Quotes: 
    • @StephenFleming: "Money doesn’t solve scaling problems, but the actual solutions to scaling problems always cost money." http://daringfireball.net/2014/01/googles_acquisition_of_nest
    • David Rosenthal: Robert Puttnam in Making Democracy Work and Bowling Alone has shown the vast difference in economic success between high-trust and low-trust societies.
    • @kylefox: That's a huge advantage of SaaS businesses: you can be liberal with refunds & goodwill credits w/o impacting the bottom line much.
    • Thomas B. Roberts: That’s the essence of science: Ask the impertinent question, and you are on your way to pertinent science.
    • Benjamin K. Bergen: Simulation is an iceberg. By consciously reflecting, as you just have been doing, you can see the tip—the intentional, conscious imagery. But many of the same brain processes are engaged, invisibly and unbeknownst to you, beneath the surface during much of your waking and sleeping life. Simulation is the creation of mental experiences of perception and action in the absence of their external manifestation.

  • Urbane apps are the future: 80% of the world's population will be in cities by 2045

  • Knossos: Redis and linearizability. Kyle Kingsbury delivers an amazingly in-depth, model-based analysis of "a hypothetical linearizable system built on top of Redis WAIT and a strong coordinator." The lesson: don't get Kyle mad.

  • If a dead startup had a spirit, this is what it would look like: About Everpix. A truly fine memorial. 

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge...

Categories: Architecture

Windows Azure: Staging Publishing Support for Web Sites, Monitoring Improvements, Hyper-V Recovery Manager GA, and PCI Compliance

ScottGu's Blog - Scott Guthrie - Thu, 01/16/2014 - 20:53

This morning we released another great set of enhancements to Windows Azure.  Today’s new capabilities and announcements include:

  • Web Sites: Staged Publishing Support and Always On Support
  • Monitoring Improvements: Web Sites + SQL Database Alerts
  • Hyper-V Recovery Manager: General Availability Release
  • Mobile Services: Support for SenchaTouch
  • PCI Compliance: Windows Azure Now Validated for PCI DSS Compliance

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

Web Sites: Staged Publishing Support

With today’s release, you can now enable staged publishing to your Windows Azure Web Sites.  This new feature is really powerful, and enables you to deploy updates of your web apps/sites to a staging version of the site that can be accessed via a URL that is different from your main site.  You can use this staged site to test your site/app deployment and then, when ready, instantaneously swap the content and configuration between the live site and the staging version. 

This new feature enables you to deploy changes with more confidence.  And it ensures that your site is never in an inconsistent state (where some files have been updated and others not) - now you can immediately swap all changes to all of the files in one shot.

Enabling Staged Publishing Support

To set up staged publishing, go to the DASHBOARD tab of a web site and click Enable staged publishing in the quick glance section:

[screenshot]

Clicking this link will cause Azure to create a new staging version of the web-site and link it to the existing site.  This linkage is represented in the navigation of the Windows Azure Management Portal – the staging site will show up as a sub-node of the primary site:

[screenshot]

If you look closely at the name of the staging site, you’ll notice that its URL by default is sitename-staging (e.g. if the primary site name was “scottgu”, the staging site would be “scottgu-staging”):

[screenshot]

You can optionally map any custom DNS name you want to the staging site (using either a CNAME or an A record) – just like you would for a normal site.  So your staging domain doesn’t have to have an azurewebsites.net extension.  In the scenario above I could remap the staging domain to be staging.scottgu.com, or scottgu-staging.com, or even foobar.scottgu.com if I wanted to. 

The staging URL doesn’t change between deployments of a site – so you can configure a custom DNS name once, and then use it across all subsequent deployments.  You can also optionally enable SSL on the staging site and upload an SSL certificate to use with the staging domain (ensuring you can fully test/validate your SSL scenarios before swapping live).

Configuring the Staging Site

You can click on the staging site to manage it just like any other site:

[screenshot]

The SCALE and LINKED RESOURCES tabs are disabled for staging sites, but all other tabs work as expected.  You can use the CONFIGURE tab to set configuration settings like database and application connection strings (if you set these at the site level they override anything you might have in a web.config file).

One thing you’ll also notice when you open the staging site is that there is a new SWAP button in the bottom command-bar of it – we’ll talk about how to use that in a little bit.

Deploying to the Staging Site

Deploying a new instance of your web-app/site to the staging site is really easy.  Simply deploy to it just like you would any normal site.  You can use FTP, the built-in “Publish” dialog inside Visual Studio, Web Deploy or Git, TFS, VS Online, GitHub, BitBucket, DropBox or any of the other deployment mechanisms we already support.  You configure these just like you would for a normal site.

Below I’m going to use the built-in VS publish wizard to publish a new version of the site to the staging site:

[screenshot]

Once this new version of the app is deployed to the staging site we can access a page in it using the staging domain (in this case http://scottgu-staging):

[screenshot]

Note that the new version of the site we deployed is only in the staging site.  This means that if we hit the primary site domain (in this case http://scottgu) we wouldn’t see this new “V2” update - it would instead show any older version that had been previously deployed:

[screenshot]

This allows us to do final testing and validation of the staging version without impacting users visiting the live production site.

Swapping Deployments

At some point we’ll be ready to roll our staged version to be the live production site version.  Doing this is easy – all we need to do is push the SWAP button within the command-bar of either our live site or staging site using the Windows Azure Portal (you can also automate this from the command-line or via a REST call):

[screenshot]

When we push the SWAP button we’ll be prompted with a confirmation dialog explaining what is about to happen:

[screenshot]

If we confirm we want to proceed with the swap, Azure will swap the content of the live site (in this case http://scottgu) with the newer content in the staging site (in this case http://scottgu-staging).  This will take place immediately – and ensures that all of the files are swapped in a single shot (so that you never have mismatched files).

Some settings from the staged version will automatically copy to the production version – including things like connection string overrides, handler mappings, and other settings you might have configured.  Other settings like the DNS endpoints, SSL bindings, etc will not change (ensuring that you don’t need to worry about SSL certs used for the staging domain overriding the production URL cert, etc).

Once the swap is complete (the command takes only a few seconds to execute), you’ll find that the content that was previously in the staging site is now in the live production site:

[screenshot]

And the content that had been in the older live version of the site is now in the staging site.  Having the older content available in the staging site is useful – as it allows you to quickly swap it back to the previous site if you discover an issue with the version that you just deployed (just click the SWAP button again to do this).  Once you are sure the new version is fine you can just overwrite the staging site again with V3 of your app and repeat the process again.

Deployment with Confidence

We think you’ll find that the new staged publishing feature is both easy to use and very powerful, and enables you to handle deployments of your sites with an industrial strength workflow.

Web Sites: Always On Support

One of the other useful Web Site features that we are introducing today is a feature we call “Always On”.  When Always On is enabled on a site, Windows Azure will automatically ping your Web Site regularly to ensure that the Web Site is always active and in a warm/running state.  This is useful to ensure that a site is always responsive (and that the app domain or worker process has not paged out due to lack of external HTTP requests). 

It is also useful as a way to keep a Web Site active for scenarios where you want to run background code within it, irrespective of whether it is actively processing external HTTP customer requests.  We have another new feature we are enabling this week called “Web Jobs” that makes it really easy to write this background code and run it within a Web Site. I’ll blog more about this feature and how to use it in the next few days.

You can enable Always On support for Web Sites running in Standard mode by navigating to the CONFIGURE tab within the portal, and then toggling the Always On button that is now within it:

[screenshot]

Monitoring Improvements: Web Sites + SQL Database Alerts

With almost every release we make improvements to the monitoring functionality of Azure services. Today’s update brings two nice new improvements:

  1. Metrics updated every minute for Windows Azure Web Sites
  2. Alerting for more metrics from Windows Azure Websites and Windows Azure SQL Databases

Monitoring Data Every Minute

With today’s release we are now updating the statistics on the monitoring dashboard of a Web Site every minute, so you can get much fresher information on exactly how your website is being used (prior to today the granularity was not as fine-grained):

[screenshot]

Viewing data at this higher granularity can make it easier to observe changes to your website as they happen. No additional configuration is required to get data every minute – it is now automatically enabled for all Azure Websites.

Expanding Alerting

When you create alerts you can now choose between six different services:

  • Cloud Service
  • Mobile Service
  • SQL Database (New Today!)
  • Storage
  • Virtual Machine
  • Web Site (More Metrics Today!)

To get started with Alerting, click on the Management Services extension on the left navigation tab of the Windows Azure Management Portal:

[screenshot]

Then, click the Add Rule button in the command bar at the bottom of the screen. This will open a wizard for creating an alert rule. You can see all of the services that now support alerts:

[screenshot]

New Web Site Alert Metrics

With today’s release we are adding the ability to alert on any metric that you see for a Web Site in the portal (previously we only supported alerts on Uptime and Response Time metrics). Today’s new metrics include support for setting threshold alerts for errors as well as CPU time and total requests:

[screenshot]

The CPU time and Data Out metric alerts are particularly useful for Free or Shared websites – you can now use these alerts to email you if you’re getting close to exceeding your quotas for a free or shared website (and need to scale up instances).

New SQL Alert Metrics

With today’s release you can also now define alerts for your SQL Databases. For Web and Business tier databases you can set up alert metrics for the Storage of the database.  There are also now additional metrics and alerts for SQL Database Premium (which is currently in preview), such as CPU Cores and IOPS.

Once you’ve set up these new alerts, they behave just like alerts for other services. You’ll be informed when they cross the thresholds you establish, and you can see the recent alert history in the dashboard:

[screenshot]

Windows Azure Hyper-V Recovery Manager: General Availability Release

I’m excited to announce the General Availability of Windows Azure Hyper-V Recovery Manager (HRM). This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios.

Windows Azure Hyper-V Recovery Manager helps protect your on-premises applications and services by orchestrating the protection and recovery of Virtual Machines running in a System Center Virtual Machine Manager 2012 R2 or System Center Virtual Machine Manager 2012 SP1 private cloud to a secondary location. With simplified configuration, automated protection, continuous health monitoring and orchestrated recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and recover applications accurately, consistently, and with minimal downtime.

[screenshot]

The service leverages the Hyper-V Replica technology available in Windows Server 2012 and Windows Server 2012 R2 to orchestrate the protection and recovery of Hyper-V Virtual Machines from one on-premises site to another. Application data always travels over your on-premises replication channel. Only the metadata needed for orchestration (such as the names of logical clouds, virtual machines, networks, etc.) is sent to Azure. All traffic sent to/from Azure is encrypted.

Getting Started

To get started, use the Windows Azure Management Portal to create a Hyper-V Recovery Manager Vault. Browse to Data Services > Recovery Services and click New to create a New Hyper-V Recovery Manager Vault. You can name the vault and specify a region where you would like the vault to be created.

[screenshot]

Once the Hyper-V Recovery Manager vault is created, you’ll be presented with a simple tutorial that will help guide you on how to register your SCVMM Servers and configure protection and recovery of Virtual Machines.

[screenshot]

To learn more about setting up Hyper-V Recovery Manager in your deployment follow our detailed step-by-step guide.

Key Benefits of Hyper-V Recovery Manager

Hyper-V Recovery Manager offers the following key benefits that differentiate it from other disaster recovery solutions:

  • Simple Setup and Configuration: HRM dramatically simplifies configuration and management operations across a large number of Hyper-V hosts, Virtual Machines and data-centers.
  • Automated Protection: HRM leverages the capabilities of Windows Server and System Center to provide on-going replication of VMs and ensures protection throughout the lifecycle of a VM.
  • Remote Monitoring: HRM leverages the power and reach of Azure to provide a remote monitoring and DR management service that can be accessed from anywhere.
  • Orchestrated Recovery: Recovery Plans enable automated DR orchestration by sequencing the failover of different application tiers, with customization via scripts and manual actions.

New Improvements

The Hyper-V Recovery Manager service has been enhanced since the initial October Preview with several nice improvements:

  • Improved Failback Support: The Failback support has been improved in scenarios where the primary host cluster has been rebuilt after an outage.
  • Support for Kerberos based Authentication: Cloud configuration now allows selecting Kerberos based authentication for Hyper-V Replica. This is useful in scenarios where customers want to use 3rd party WAN optimization and compression and have AD trust available between primary and secondary sites.
  • Support for Upgrade from VMM 2012 SP1 to VMM 2012 R2: HRM service now supports upgrades from VMM 2012 SP1 to VMM 2012 R2.
  • Improved Scale: The UI and service have been enhanced for better scale support.

Please visit the Windows Azure web site for more information on Hyper-V Recovery Manager. You can also refer to the additional product documentation. You can visit the HRM forum on MSDN for additional information and to engage with other customers.

Mobile Services: Support for SenchaTouch

I’m excited to announce that in partnership with our friends at Sencha, we are today adding support for SenchaTouch to Windows Azure Mobile Services. SenchaTouch is a well-known HTML/JavaScript-based development framework for building cross-platform mobile apps and web sites. With today’s addition, you can easily use Mobile Services with your SenchaTouch app.

You can download the Windows Azure extension for Sencha here, configure the Sencha loader with the location of the Azure extension, and add the Azure package to your app.json file:

{ name : "Basic", requires : [ "touch-azure"]}

Once you have the Azure extension added to your Sencha project, you can connect your Sencha app to your Mobile Service simply by adding the following initialization code:

Ext.application({
    name: 'Basic',
    requires: ['Ext.azure.Azure'],

    // Mobile Services connection settings for this app
    azure: {
        appKey: 'myazureservice-access-key',
        appUrl: 'myazure-service.azure-mobile.net'
    },

    launch: function () {
        // Call Azure initialization
        Ext.Azure.init(this.config.azure);
    }
});

From here on, you can data-bind your data model to Azure Mobile Services, authenticate users, and use push notifications. Follow this detailed Getting Started tutorial to get started with SenchaTouch and Mobile Services. Read more detailed documentation at the Mobile Services Sencha extension resources page.

Windows Azure Now Validated for PCI DSS Compliance

We are very excited to announce that Windows Azure has been validated for compliance with the Payment Card Industry (PCI) Data Security Standards (DSS) by an independent Qualified Security Assessor (QSA).

The PCI DSS is the global standard that any organization of any size must adhere to in order to accept payment cards, and to store, process, and/or transmit cardholder data. By providing PCI DSS validated infrastructure and platform services, Windows Azure delivers a compliant platform for you to run your own secure and compliant applications. You can now achieve PCI DSS certification for those applications using Windows Azure.

To assist customers in achieving PCI DSS certification, Microsoft is making the Windows Azure PCI Attestation of Compliance and Windows Azure Customer PCI Guide available for immediate download.

Visit the Trust Center for a full list of in scope features or for more information on Windows Azure security and compliance.

Summary

Today’s release includes a bunch of great features that enable you to build even better cloud solutions.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Documentation Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Vedis - An Embedded Implementation of Redis Supporting Terabyte Sized Databases

I don't know about you, but when I first learned about Redis my initial thought was wow, why hasn't anyone done this before? My next thought was why put this functionality in a separate process? Why not just embed it in your own server code and skip the network path completely? Especially in a Service Oriented Architecture there's no need for an extra hop or extra software installation and configuration.

Now you can embed Redis-like code directly into your server with Vedis - an embeddable datastore C library built with over 70 commands similar in concept to Redis, but without the networking layer, since Vedis runs in the same process as the host application. It's transactional, cross-platform, thread-safe, key-value, supports terabyte-sized databases, has a GPL-like license (which isn't great for commercial apps), and supports an on-disk as well as an in-memory datastore.
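To make the embedding idea concrete, here is a minimal C sketch of what using Vedis in-process looks like. It is based on the Vedis C API as documented by Symisc (vedis_open, vedis_exec, vedis_exec_result, vedis_value_to_string, vedis_close); treat it as an illustrative sketch and check the headers of the release you download, since names and signatures may differ slightly.

#include <stdio.h>
#include "vedis.h"

int main(void)
{
    vedis *pStore;         /* datastore handle */
    vedis_value *pResult;  /* result of the last executed command */

    /* ":mem:" opens a private in-memory datastore; pass a file path for an on-disk store. */
    if (vedis_open(&pStore, ":mem:") != VEDIS_OK) {
        return 1;
    }

    /* Redis-like commands, executed in-process - no sockets, no extra network hop. */
    vedis_exec(pStore, "SET greeting 'Hello Vedis'", -1);
    vedis_exec(pStore, "GET greeting", -1);

    /* Fetch and print the result of the last executed command. */
    vedis_exec_result(pStore, &pResult);
    printf("%s\n", vedis_value_to_string(pResult, 0));

    vedis_close(pStore);
    return 0;
}

The same code can back an on-disk datastore simply by passing a file path to vedis_open instead of ":mem:", which is where the terabyte-sized database support becomes relevant.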

More about Vedis:

Categories: Architecture

Announcing The Texas Chapter of the Association of Enterprise Architects

Mike Walker's Blog - Wed, 01/15/2014 - 06:31

[image: Texas Association of Enterprise Architects]

I would like to be the first to announce the Texas Chapter of the Association of Enterprise Architects!

The rest of the leadership team and I have been working very hard to create this chapter. This post provides a quick overview of the Global Association of Enterprise Architects and the Texas Chapter of the AEA, along with an invite to the first meeting in Austin, Texas.

 

Introduction to the Global Association of Enterprise Architects

This chapter is part of a greater global organization called the Association of Enterprise Architects. The Association of Enterprise Architects (AEA) is the only definitive professional organization for Enterprise Architects, with over 35,000 professionals worldwide.

Its goals are:

  • Raise the status of the EA profession
  • Foster an already vibrant EA community
  • Provide access to EA thought leadership
  • Increase the market value of professionals

 

Introduction of the Texas AEA

These are all great goals, and I wanted to bring this level of professionalism to Texas. In this state there is a vibrant community of professionals, vendors and consortiums offering thought leadership, visions for the future, and proven practices based on the reality of today. Building on the Global AEA, the Texas chapter has the following objectives:

[image: Texas Association of Enterprise Architects]

 

This chapter will strive to have the following characteristics to ensure a vibrant and highly interactive community forum. We will be…

  • Regular. With quarterly and monthly meetings starting first in Austin, TX and then branching out to Houston, Dallas and San Antonio.
  • Impactful. Through full-day events we hope to draw in top speakers from the architecture community.
  • Highly Interactive. These meetings and events focus first on the community and take the best from conferences and user groups.
  • High Quality. The key to a vibrant and sustainable community is making sure there are high-quality discussions and interactions.

We have been lucky with some top-notch sponsorship support and I would like to personally thank all these vendors for helping to make this happen! Without your support it would be difficult for us to get this moving.

 

[image: Texas Association of Enterprise Architects]

 

 

Join Us!

So what’s left to say? Come on and join us in this professional network. The rest of the Texas AEA members and I would be honored to have you join.

You can join by coming to our first meeting in Austin, Texas on January 30th. You can find the invite here.

 

If you’re still not sold, below are a few reasons why I think it’s important for all of you to consider:

  • The Network. There are already 700 members of the AEA here without an established chapter.
  • Access. You get access not only to the Texas chapter but to others as well, along with job boards and other collaborative features.
  • Amplification. Amplify YOUR voice in the community. Show your and your company’s leadership in the market.
  • SIGs. Special interest groups are a great way to get smart and passionate people together to create exciting, innovative new ideas.
  • And much more…
Categories: Architecture

SharePoint VPS solution

Microsoft SharePoint is an ideal solution for companies who have multiple offices and staff members who are on the move. Using SharePoint, documents and other materials can be easily shared with both colleagues and managers. Other features include advanced document management, which allows users to virtually check out a document, modify it or just read it, then check in the document again. This allows managers/company owners to see exactly when their staff members are working and just what they are doing. When combined with a highly customizable workflow management system and group calendars, SharePoint can improve the way in which your company functions and operates.

However, many organizations fail with their SharePoint implementation. So with this article, we are trying to make it simpler for organizations' in-house IT administrators to implement SharePoint in a virtual server environment.

Here we are going to cover the following key points:

Categories: Architecture

Ask HS: Design and Implementation of scalable services?

We have written agents that are deployed/distributed across the network. The agents send data every 15 seconds, maybe even every 5 seconds. We are working on a service/system to which all agents can post data/tuples with a marginal payload. Up to a 5% drop rate is acceptable. Ultimately the data will be segregated and stored in a DBMS (currently we are using MySQL).

Questions I am looking for answers to:

1. Client/Server Communication: Agent(s) can post data. The status of sending the data is not that important. But there are cases where the agent(s) need to be notified if the server-side system generates an event based on the data sent.

- A lot of advice on the internet suggests using a message bus (ActiveMQ) for async communication. Multicast and UDP are the alternatives.

2. Persistence: After some evaluation, the data is to be stored in a DBMS.

- The end result of processing the data is an aggregated record, for which MySQL looks scalable. But the volume of data is growing exponentially. We are considering HBase as an option.

We are looking for alternatives for the above two scenarios and would like to get expert advice.

Categories: Architecture

Geolocation detection with haproxy

Agile Testing - Grig Gheorghiu - Tue, 01/14/2014 - 00:33
A useful feature for a web application is the ability to detect the user's country of origin based on their source IP address. This used not to be possible in haproxy unless you applied Cyril Bonté's geolocation patches (see the end of this blog post for how exactly to do that if you don't want to live on the bleeding edge of haproxy). However, the latest development version of haproxy (which is 1.5-dev21 at this time) contains geolocation detection functionality.

Here's how to use the geolocation detection feature of haproxy:

1) Generate text file which maps IP address ranges to ISO country codes

This is done using Cyril's haproxy-geoip utility, which is available in his geolocation patches. Here's how to locate and run this utility:
  • clone patch git repo: git clone https://github.com/cbonte/haproxy-patches.git
  • the haproxy-geoip script is now available in haproxy-patches/geolocation/tools
    • for the script to run, you need to have the funzip utility available on your system (it's part of the unzip package in Ubuntu)
    • you also need the iprange binary, which you can 'make' from its source file available in the haproxy-1.5-dev21/contrib/iprange directory; once you generate the binary, copy it somewhere in your PATH so that haproxy-geoip can locate it
  • run haproxy-geoip, which prints its output (IP ranges associated to ISO country codes) to stdout, and capture stdout to a file: haproxy-geoip > geolocation.txt
  • copy geolocation.txt to /etc/haproxy
2) Set custom HTTP header based on geolocation
For this, haproxy provides the map_ip function, which locates the source IP (the predefined 'src' variable in the line below) in the IP range in geolocation.txt and returns the ISO country code. We assign this country code to the custom X-Country HTTP header:
http-request set-header X-Country %[src, map_ip(/etc/haproxy/geolocation.txt)]
If you didn't want to map the source IP to a country code, but instead wanted to inspect the value of an HTTP header such as X-Forwarded-For, you could do this:
http-request set-header X-Country %[req.hdr_ip(X-Forwarded-For,-1), map_ip(/etc/haproxy/geolocation.txt)]
3) Use geolocation in ACLs
Let's assume that if the country detected via geolocation is not US, then you want to send the user to a different backend. You can do that with an ACL. Note that we compare the HTTP header X-Country which we already set above to the string 'US' using the '-m str' string matching functionality of haproxy, and we also specify that we want a case insensitive comparison with '-i US':
acl acl_geoloc_us req.hdr(X-Country) -m str -i US
use_backend www-backend-non-us if !acl_geoloc_us
If you didn't want to set the custom HTTP header, you could use the map_ip function directly in the definition of the ACL, like this:
acl acl_geoloc_us %[src, map_ip(/etc/haproxy/geolocation.txt)] -m str -i US
use_backend www-backend-non-us if !acl_geoloc_us
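Putting the geolocation pieces together, a minimal sketch of a frontend and its backends might look like the example below. Note that the backend names, bind port and server addresses are placeholders for illustration; only the geolocation-related lines come from the snippets above.

frontend www
    bind *:80
    mode http
    # tag each request with the country derived from the source IP
    http-request set-header X-Country %[src, map_ip(/etc/haproxy/geolocation.txt)]
    # send non-US visitors to a dedicated backend
    acl acl_geoloc_us req.hdr(X-Country) -m str -i US
    use_backend www-backend-non-us if !acl_geoloc_us
    default_backend www-backend

backend www-backend
    mode http
    server web1 10.0.0.10:80

backend www-backend-non-us
    mode http
    server web2 10.0.0.11:80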
Speaking of ACLs, here's an example of defining ACLs based on the existence of a cookie and based on the value of the cookie then choosing a backend based on those ACLs:
acl acl_cookie_country req.cook_cnt(country_code) eq 1
acl acl_cookie_country_us req.cook(country_code) -m str -i US
use_backend www-backend-non-us if acl_cookie_country !acl_cookie_country_us
And now for something completely different...which is what I mentioned in the beginning of this post: 
How to use the haproxy geolocation patches with the current stable (1.4) version of haproxy
a) Patch haproxy source code with gelocation patches, compile and install haproxy:
  • clone patch git repo: git clone https://github.com/cbonte/haproxy-patches.git
  • change to haproxy-1.4.24 directory
  • copy haproxy-1.4-geolocation.patch to the root of haproxy-1.4.24
  • apply the patch: patch -p1 < haproxy-1.4-geolocation.patch
  • make clean
  • make TARGET=linux26
  • make install
b) Generate text file which maps IP address ranges to ISO country codes
  • install funzip: apt-get install unzip
  • create iprange binary
    • cd haproxy-1.4.24/contrib/iprange
    • make
    • the iprange binary will be created in the same folder. copy that to /usr/local/sbin
  • haproxy-geoip is located here: haproxy-patches/geolocation/tools
  • haproxy-geoip > geolocation.txt
  • copy geolocation.txt to /etc/haproxy 
c) Obtain country code based on source IP and use it in ACL
This is done via the special 'geolocate' statement and the 'geoloc' variable added to the haproxy configuration syntax by the geolocation patch:

geolocate src /etc/haproxy/geolocation.txt
acl acl-au geoloc eq AU
use_backend www-backend-au if acl-au

If instead of the source IP you want to map the value of the X-Forwarded-For header to a country, use:
geolocate hdr_ip(X-Forwarded-For,-1) /etc/haproxy/geolocation.txt

If you wanted to redirect to another location instead of using an ACL, use:
redirect location http://some.location.example.com:4567 if { geoloc AU }

That's it for now. I want to thank Cyril Bonté, the author of the geolocation patches, and Willy Tarreau, the author of haproxy, for their invaluable help and their amazingly fast responses to my emails. It's a pleasure to deal with such open source developers passionate about the software they produce.  Also thanks to my colleagues Zmer Andranigian for working on getting version 1.4 of haproxy to work with geolocation, and Jeff Roberts for working on getting 1.5-dev21 to work.
One last thing: haproxy-1.5-dev21 has been very stable in production for us, but of course test it thoroughly before deploying it in your environment.

NYTimes Architecture: No Head, No Master, No Single Point of Failure

Michael Laing, a Systems Architect at NYTimes, gave this great description of their use of RabbitMQ and their overall architecture on the RabbitMQ mailing list. The closing sentiment marks this as definitely an architecture to learn from:

Although it may seem complex, Fabrik has simple components and is mostly principles and plumbing. The key point to grasp is that there is no head, no master, no single point of failure. As I write this I can see components failing (not RabbitMQ), and we are fixing them so they are more reliable. But the system doesn't fail, users can connect, and messages are delivered, regardless - all within design parameters.

Since it's short, to the point, and I couldn't say it better, I'll just reproduce several original sources here:

Categories: Architecture

Stuff The Internet Says On Scalability For January 10th, 2014

Hey, it's HighScalability time:


Run pneumatic tubes alongside optical fiber cables and we unite the digital and material worlds.
  • 1 billion: searches on DuckDuckGo in 2013
  • Quotable Quotes: 
    • pg: We [Hacker News] currently get just over 200k uniques and just under 2m page views on weekdays (less on weekends). 
      • rtm: New server: one Xeon E5-2690 chip, 2.9 GHz, 8 cores total, 32 GB RAM.
    • Kyle Vanhemert: The graph shows the site’s [Reddit] beginnings in the primordial muck of porn and programming.
    • Drake Baer: But it's not about knowing the most people possible. Instead of being about size, a successful network is about shape.
    • @computionist: Basically when you cache early to scale you've admitted you have no idea what you're doing.
    • Norvig's Law: Any technology that surpasses 50% penetration will never double again.  
    • mbell: Keep in mind that a single modern physical server that is decently configured (12-16 cores, 128GB of ram) is the processing equivalent of about 16 EC2 m1.large instances.
    • @dakami: Learned databases because grep|wc -l got slow. Now I find out that's pretty much map/reduce.
    • Martin Thompson: I think "Futures" are a poor substitute for being pure event driven and using state machines. Futures make failure cases very ugly very quickly and they are really just methadone for trying to wean programmers off synchronous designs :-) 
    • @wattersjames: Can your PaaS automate Google Compute Engine? If it can, you will find that it can create VMs in only 35 seconds, at most any scale.
    • Peter M. Hoffmann: Considering the inherent drive of matter to form ever more complex structures, life seems inevitable.

  • The marker for a new generation, like kids who will never know a card catalogue, Kodak film, pay phones, phone books, VHS tapes, typewriters, or floppy disks: Co-Founder Of Snapchat Admits He's Never Owned A Physical Server

  • Want the secret of becoming a hugely popular site? Make it fast and it will become popular. It's science. Are Popular Websites Faster?: No matter what distribution of websites you look at – the more popular websites trend faster. Even the slowest popular website is much faster than those that are less popular. On the web, the top 500 sites are nearly 1s faster (by the median), and on mobile it is closer to 1.5s faster. 

  • In 1956 we may not have had BigData, but BigStorage was definitely in. Amazing picture of IBM's 5 mega-byte drive weighing in at more than 2,000 pounds.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge...

Categories: Architecture

Extreme Goal Setting for 2014

When’s the last time you went for your personal Epic Win?   If it’s been a while, no worries.  Let’s go big this year.

I’ll give you the tools.

I realize time and again, that Bruce Lee was so right when he said, “To hell with circumstances; I create opportunities.”  Similarly, William B. Sprague told us, “Do not wait to strike till the iron is hot; but make it hot by striking.” 

And, Peter Drucker said, “The best way to predict the future is to create it.”   Similarly, Alan Kay said, "The best way to predict the future is to invent it."

Well then?  Game on!

By the way, if you’re not feeling very inspired, check out my 37 Inspirational Quotes That Will Change Your Life, my Motivational Quotes, or my Inspirational Quotes.  They are intense, and I bet you can find your favorite three.

As I’ve been diving deep into goal setting and goal planning, I’ve put together a set of deep dive posts that will give you a very in-depth look at how to set and achieve any goal you want.   Here is my roundup so far:

Brian Tracy on 12 Steps to Set and Achieve Any Goal

Brian Tracy on the Best Times for Writing and Reviewing Your Goals

Commit to Your Best Year Ever

Goal Setting vs. Goal Planning

How To Find Your Major Definite Purpose

How To Use 3 Wins for the Year to Have Your Best Year Ever

The Power of Annual Reviews for Achieving Your Goals and Realizing Your Potential

What Do You Want to Spend More Time Doing?

Zig Ziglar on Setting Goals

Hopefully, my posts on goal setting and goal planning save you many hours (if not days, weeks, etc.) of time, effort, and frustration on trying to figure out how to really set and achieve your goals.   If you only read one post, at least read Goal Setting vs. Goal Planning because this will put you well ahead of the majority of people who regularly don’t achieve their goals.

In terms of actions, if there is one thing to decide, make it Commit to Your Best Year Ever.

Enjoy and best wishes for your greatest year ever and a powerful 2014.

Categories: Architecture, Programming

Under Snowden's Light Software Architecture Choices Become Murky

Adrian Cockcroft on the future of Cloud, Open Source, SaaS and the End of Enterprise Computing:

Most big enterprise companies are actively working on their AWS rollout now. Most of them are also trying to get an in-house cloud to work, with varying amounts of success, but even the best private clouds are still years behind the feature set of public clouds, which has a big impact on the agility and speed of product development.

While the Snowden revelations have tattered the thin veil of trust secreting Big Brother from We the People, they may also be driving a fascinating new tension in architecture choices between Cloud Native (scale-out, IaaS), Amazon Native (rich service dependencies), and Enterprise Native (raw hardware, scale-up).

This tension became evident in a recent HipChat interview where HipChat, makers of an AWS based SaaS chat product, were busy creating an on-premises version of their product that could operate behind the firewall in enterprise datacenters. This is consistent with other products from Atlassian in that they do offer hosted services as well as installable services, but it is also an indication of customer concerns over privacy and security.

The result is a potential shattering of backend architectures into many fragments like we’ve seen on the front-end. On the front-end you can develop for iOS, Android, HTML5, Windows, OSX, and so on. Any choice you make is like declaring for a cold war power in a winner take all battle for your development resources. Unifying this mess has been the refuge of cloud based services over HTTP. Now that safe place is threatened.

To see why...

Categories: Architecture

Sponsored Post: Netflix, Logentries, Host Color, Booking, Apple, ScaleOut, MongoDB, BlueStripe, AiScaler, Aerospike, LogicMonitor, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Apple is hiring for multiple positions. Imagine what you could do here. At Apple, great ideas have a way of becoming great products, services, and customer experiences very quickly.
    • Sr Software Engineer. The iOS Systems Team is looking for a Software Engineer to work on operations, tools development and support of worldwide iOS Device sales and activations. Please apply here
    • Sr. Security Software Developer. We are looking for an excellent programmer who's done extensive security programming. This individual will participate in various security projects from the start to the end. In addition to security concepts, it's important to have intricate knowledge of different flavors of Unix operating systems to develop code that's compact and optimal. Familiarity with key exchange protocols, authentication protocols and crypto analysis is a plus. Please apply here.
    • Senior Server Side Engineer. The Emerging Technology team is looking for a highly motivated, detail-oriented, energetic individual with experience in a variety of big data technologies.  You will be part of a fast growing, cohesive team with many exciting responsibilities related to Big Data, including developing scalable, robust systems that will gather, process and store large amounts of data, and defining/developing Big Data technologies for Apple internal and customer-facing applications. Please apply here.
    • Senior Engineer: Emerging Technology. Apple’s Emerging Technology group is looking for a senior engineer passionate about exploring emerging technologies to create paradigm shifting cloud based solutions. Please apply here

  • The Netflix Cloud Performance Team is hiring. Help tackle the more complex scalability challenges emerging on the cloud today, wrangling tens of thousands of instances, handling billions of requests a day. We are searching for a Senior Performance Architect and a Senior Cloud Performance Tools Engineer

  • We need awesome people @ Booking.com - We want YOU! Come design next generation interfaces, solve critical scalability problems, and hack on one of the largest Perl codebases. Apply: http://www.booking.com/jobs.en-us.html

  • UI Engineer. AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data. AppDynamics, a leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • Your amazing event here.
Cool Products and Services
  • Log management made easy with Logentries. Billions of log events analyzed every day to unlock insights from the log data that matters to you. Simply powerful search, tagging, alerts, live tail and more for all of your log data. Automated AWS log collection and analytics, including CloudWatch events.

  • HostColor Cloud Servers based on OpenNebula Cloud automation IaaS. Cloud Start features: 256 MB RAM; 1000 GB Premium Bandwidth; 10 GB fast, RAID 10 protected Storage; 1 CPU Core; /30 IPv4 (4 IPs) and /48 IPv6 subnet; it costs only $9.95/mo. Clients can choose an OS (CentOS is the default).

  • LogicMonitor is the cloud-based IT performance monitoring solution that enables companies to easily and cost-effectively monitor their entire IT infrastructure stack – storage, servers, networks, applications, virtualization, and websites – from the cloud. No firewall changes needed - start monitoring in only 15 minutes utilizing customized dashboards, trending graphs & alerting

  • Rapidly Develop Hadoop MapReduce Code. With ScaleOut hServer™ you can use a subset of your Hadoop data and run your MapReduce code in seconds for fast code development. You don’t need to load and manage the Hadoop software stack - it's a self-contained Hadoop MapReduce execution environment. To learn more check out www.scaleoutsoftware.com/prototypehadoop/

  • MongoDB Backup Free Usage Tier Announced. We're pleased to introduce the free usage tier to MongoDB Management Service (MMS). MMS Backup provides point-in-time recovery for replica sets and consistent snapshots for sharded systems with minimal performance impact. Start backing up today at mms.mongodb.com.

  • BlueStripe FactFinder Express is the ultimate tool for server monitoring and solving performance problems. Monitor URL response times and see if the problem is the application, a back-end call, a disk, or OS resources.

  • Aerospike Capacity Planning Kit. Download the Capacity Planning Kit to determine your database storage capacity and node requirements. The kit includes a step-by-step Capacity Planning Guide and a Planning worksheet. Free download.

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.  http://aiscaler.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

How HipChat Stores and Indexes Billions of Messages Using ElasticSearch and Redis

This article is from an interview with Zuhaib Siddique, a production engineer at HipChat, makers of group chat and IM for teams.

HipChat started in an unusual space, one you might not think would have much promise, enterprise group messaging, but as we are learning there is gold in them there enterprise hills. Which is why Atlassian, makers of well-thought-of tools like JIRA and Confluence, acquired HipChat in 2012.

And in a tale not often heard, the resources and connections of a larger parent have actually helped HipChat enter an exponential growth cycle. Having reached the 1.2 billion message storage mark they are now doubling the number of messages sent, stored, and indexed every few months.

That kind of growth puts a lot of pressure on a once adequate infrastructure. HipChat exhibited a common scaling pattern. Start simple, experience traffic spikes, and then think what do we do now? Using bigger computers is usually the first and best answer. And they did that. That gave them some breathing room to figure out what to do next. On AWS, after a certain inflection point, you start going Cloud Native, that is, scaling horizontally. And that’s what they did.

But there's a twist to the story. Security concerns have driven the development of an on-premises version of HipChat in addition to its cloud/SaaS version. We'll talk more about this interesting development in a post later this week.

While HipChat isn’t Google scale, there is good stuff to learn from HipChat about how they index and search billions of messages in a timely manner, which is the key difference between something like IRC and HipChat. Indexing and storing messages under load while not losing messages is the challenge.

This is the road that HipChat took, what can you learn? Let’s see…

Categories: Architecture

Disaster Recovery and Planning

Coding the Architecture - Simon Brown - Mon, 01/06/2014 - 10:51

Maybe software developers are naturally optimistic, but in my experience they rarely consider system failure or disaster scenarios when designing software. Failures are varied, ranging from the likely (local disk failure) to the rare (tsunami), and from low impact to fatal (where fatal may mean the death of people or the bankruptcy of a business).

Failure planning broadly fits into the following areas:

  1. Avoiding failure
  2. Failing safely
  3. Failure recovery
  4. Disaster Recovery

Avoiding failure is what a software architect is most likely to think about at design time. This may involve a number of High Availability (HA) techniques and tools, including redundant servers, distributed databases or real-time replication of data and state. This usually involves removing any single point of failure, but you should be careful to not just consider the software and the hardware that it immediately runs on - you should also remove any single dependency on infrastructure such as power (battery backup, generators or multiple power supplies) or telecoms (multiple wired connections, satellite or radio backups etc).

Failing safely is a complex topic that I touched on recently and may not apply to your problem domain (although you should always consider if it does).

Failure recovery usually goes hand-in-hand with High Availability and ensures that when single components are lost they can be re-created/started to join the system. There is no point in having redundancy if components cannot be recovered as you will eventually lose enough components for the system to fail!

Disaster Recovery Scenarios and Planning

However, the main topic I want to discuss here is disaster recovery. This is the process that a system and its operators have to execute in order to recreate a fully operational system after a disaster scenario. This differs from a failure in that the entire system (potentially all the components but at least enough to render it inoperable) stops working. As I stated earlier, many software architects don't consider these scenarios but they can include:

  1. Power surge leading to a datacenter failure
  2. Flooding leading to the damage of machines located in a building's basement
  3. Communications failure leading to loss of connectivity
  4. Disgruntled staff member deliberately corrupting a system
  5. Staff member accidentally shutting down the system
  6. Terrorist action leading to the loss of a building with no re-entry possible

These are usually classified into either natural or man-made disasters. Importantly these are likely to cause outright system failure and require some manual intervention - the system will not automatically recover. Therefore an organisation should have a Disaster Recovery (DR) Plan for the operational staff to follow when this occurs.

A disaster recovery plan should consider a range of scenarios and give very clear and precise instructions on what to do for each of them. In the event of a disaster scenario the staff members are likely to be stressed and not thinking as clearly as they would otherwise. Keep any steps required simple and don't worry about stating the obvious or being patronising - remember that the staff executing the plan may not be the usual maintainers of the system.

Please remember that 'cloud hosted' systems still require disaster recovery plans! Your provider could have issues and you are still affected by scenarios that involve corrupt data and disgruntled staff. Can you roll-back your data store to a known point in the past before corruption?

Strategies

The aims and actions of any recovery will depend on the scenario that occurs. Therefore the scenarios listed should each refer to a strategy which contains some actions.

Before any strategy is executed you need to be able to detect the event has occurred. This may sound obvious but a common mistake is to have insufficient monitoring in place to actually detect it. Once detected there needs to be comprehensive notification in place so that all systems and people are aware that actions are now required.

For each strategy there has to be an aim for the actions. For example, do you wish to try to bring up a complete system with all data (no data loss) or do you just need something up and running? Perhaps missing data can be imported at a later time or maybe some permanent data-loss is tolerated? Does the recovered system have to provide full functionality or is an emergency subset sufficient?

This is hugely dependent on the problem domain and scenario, but the key metrics are recovery point objectives (RPO) and recovery time objectives (RTO), along with level of service. For example, an RPO of one hour means you can tolerate losing at most the last hour of data, while an RTO of four hours means the system must be operational again within four hours of the disaster. Your RPO and RTO are key non-functional (quality) requirements and should be listed in your software architecture document. These metrics should influence your replication and backup strategies and the necessary actions.

Business Continuity

The disaster recovery plans for the IT systems are actually a subset of the broader 'business continuity' plans (BCP) that an organisation should have. These cover all the aspects of keeping an organisation running in the event of a disaster. BCP plans also include manual processes, staff coverage, building access etc. You need to make sure that the IT disaster recovery plan fits into the business continuity plan and that you state the dependencies between them.

There are a range of official standards covering Business Continuity Planning such as ISO22301, ISO22313 and ISO27031. Depending on your business and location you might have a legal obligation to comply with these or other local standards. I would strongly recommend that you investigate whether your organisation needs to be compliant - if you fail to do so then there could be legal consequences.

This is a complex topic which I have only touched upon - if it raises concerns then you may have a lot of work to do! If you don't know where to start then I'd suggest getting your team together and running a risk storming workshop.

Categories: Architecture

Speedy FitNesse roundtrips during development

Xebia Blog - Mon, 01/06/2014 - 10:11

FitNesse is an acceptance testing framework. It allows business users, testers and developers to collaborate on executable specifications (for example in BDD style and/or implementing Specification by Example), and allows for testing both the back-end and the front-end. Aside from partly automating acceptance testing and as a tool to help build a common understanding between developers and business users, a selection of the tests from a FitNesse test suite often doubles as a regression test suite.

In contrast to unit tests, FitNesse tests should usually be focused but still exercise a feature in an 'end-to-end' way. It is not uncommon for a FitNesse test to, for example, start mocked versions of external systems, initialize a Spring context and connect to a real test database rather than an in-memory one.
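As a rough illustration (the table, package and class names here are hypothetical, not taken from a real project), a SLIM decision table on a wiki page maps its input columns onto setters and its output columns (the ones ending in '?') onto methods of a plain Java fixture class, which in turn calls the code under test:

|import|
|com.example.fixtures|

|Discount Calculator|
|order amount|discount percentage?|
|100|0|
|1500|5|

package com.example.fixtures; // hypothetical package

public class DiscountCalculator {
    private double orderAmount;

    // Input column "order amount" maps onto this setter.
    public void setOrderAmount(double orderAmount) {
        this.orderAmount = orderAmount;
    }

    // Output column "discount percentage?" maps onto this method.
    // In a real test this would delegate to the production code under test.
    public int discountPercentage() {
        return orderAmount >= 1000 ? 5 : 0;
    }
}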

Running FitNesse during development

The downside of end-to-end testing is that setting up all this context makes running a single test locally relatively slow. This is part of the reason you should keep the testing pyramid in mind while writing tests, and write tests at the lowest possible level (though not lower).

Still, when used correctly, FitNesse tests can provide enormous value. Luckily, versions of FitNesse since 06-01-2014 make it relatively easy to significantly reduce this round-trip time.

A bit of background

Most modern FitNesse tests are written using the SLIM test system. When executing a test, a separate 'service' process is spun up to actually run the code under test ('fixtures' and code-under-test). This has a couple of advantages: the classpath of the service process can be kept relatively clean - in fact, you can even use a service process written in an entirely different language, such as .NET or Ruby, as long as it implements the SLIM protocol.

In the common case of using the Java SLIM service, however, this means spinning up a JVM, loading your classes into the classloader, and possibly performing additional tasks such as initializing part of your backend and mocking services. This can take a while, and it slows down your development round-trip, making FitNesse less pleasant to work with.

How to speed up your FitNesse round-trip times

One way to tremendously speed up test round-trip times is to start the SlimService manually and keep it running, instead of initializing the complete context every time you run a test. When done from your IDE, this also allows you to take advantage of selective reloading of updated classes and to set breakpoints easily.

To use FitNesse locally in this way, put the FitNesse non-standalone jar on your classpath and start the main method of fitnesse.slim.SlimService with parameters like '-d 8090': '-d' prevents the SlimService from shutting down after the first test disconnects, and '8090' specifies the port number on which to listen.

Example: java -cp *yourclasspath* fitnesse.slim.SlimService -d 8090

Now, when starting the FitNesse web UI, use the 'slim.port' property to specify the port to connect to and set 'slim.pool.size' to '1'; FitNesse will then connect to the already-running SLIM service instead of spinning up a new process each time.

Example: java -Dslim.port=8090 -Dslim.pool.size=1 -jar fitnesse-standalone.jar -p 8000 -e 0 

We've seen the time it takes to re-run one of our tests drop from a typical ~15 seconds to about 2-3 seconds. This is not only a productivity improvement; more importantly, it makes it much more pleasant to use FitNesse tests where they make sense.

The Most Effective CIOs in 2014

I was reading The Fruits of Innovation: Top 10 IT Trends in 2014, by Mark Harris.

Harris had this to say about the evolving role of the CIO:

“In the end, these leaders are now tasked to accurately manage, predict, execute and justify. Hence, the CIO’s role will evolve. Previously, CIOs were mostly technologists that were measured almost exclusively by availability and uptime. The CIO’s job was all about crafting a level of IT services that the company could count on, and the budgeting process needed to do so was mostly a formality.”

Harris had this to say about the best qualities in a CIO:

“The most effective CIOs in 2014 will be business managers that understand the wealth of technology options now available, the costs associated with each as well as the business value of each of the various services they are chartered to deliver. He or she will map out a plan that delivers just the right amount of service within their agreed business plan. Email, for instance, may have an entirely different value to a company than their online store, so the means to deliver these diverse services will need to be different. It is the 2014 CIO’s job to empower their organizations to deliver just the right services at just the right cost.”

That matches what I’ve been seeing.

CIOs need business acumen and the ability to connect IT to business impact.

Another way to think of it is, the CIO needs to help accelerate and realize business value from IT investments.

Value Realization is hot.

You Might Also Like

Stephen Kell on Value Realization

Blessing Sibanyoni on Value Realization

Paul Lidbetter on Value Realization

Martin Sykes on Value Realization

Mark Bestauros on Value Realization

Graham Doig on Value Realization

Categories: Architecture, Programming

How To Have Your Best Year Ever

There’s a little trick I learned about how to have your best year ever:

Commit to Your Best Year Ever

That’s it.

And, it actually works.

When you decide to have your best year ever, and you make it a mission, you find a way to make it happen.

You embrace the challenges and the changes that come your way.

You make better choices throughout the year, in a way that moves you towards your best year ever.

A while back, our team did exactly that.  We decided we wanted to make the coming year our best year ever.  We wanted a year we could look back on, and know that we gave it our best shot.  We wanted a year that mattered.  And we were willing to work for it.

And, it worked like a champ.

In fact, most of us got our best reviews at Microsoft.  Ever.

It’s not like it’s magic.  It works because it sets the stage.  It sets the stage for great expectations.  And, when you expect more, from yourself, or from the world, you start to look for and leverage more opportunities to make that come true.

It also helps you roll with the punches.  You find ways to turn negative situations around into more positive ones.  You find ways to take setbacks as learning opportunities to experience your greatest growth.  You look for ways to turn ordinary events into extraordinary adventures.

And when you get knocked down, you try again.  Because you're on a mission.

When you make it a mission to have your best year ever, you stretch yourself a little more.  You try new things.  You take old things to new heights.

But there’s a very important distinction here.   You have to own the decision. 

It has to be your choice.   YOU have to choose it so that you internalize it, and actually believe it, so that you actually act on it.

Otherwise, it’s just a neat idea, but you won’t live it.

And if you don’t live it, it won’t happen.

But, as soon as you decide that no matter what, this will be YOUR best year ever, you unleash your most resourceful self.

If you’ve forgotten what it’s like to go for the epic win, then watch this TED talk and my notes:

Go For the Epic Win

Best wishes for your best year.

Ever.

Categories: Architecture, Programming

Event Sourcing. Draw it

Think Before Coding - Jérémie Chassaing - Sat, 01/04/2014 - 22:58

Here is a drawing to show the interaction between the Decide and Apply functions:
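The drawing itself is not reproduced here, but the shape of the interaction can be sketched in code. This is a generic Java illustration rather than the author's own example; the type names are placeholders:

import java.util.List;

// Sketch of the two event-sourcing functions the drawing illustrates.
// State, Command and Event are placeholders for your domain types.
interface Decider<State, Command, Event> {

    // Decide: given the current state and an incoming command,
    // produce the events describing what happened (possibly none).
    List<Event> decide(State state, Command command);

    // Apply: given the current state and one event, return the new state.
    // Replaying the stored events through apply rebuilds the current state.
    State apply(State state, Event event);
}

Decide is the only place where business rules accept or reject a command; apply never fails and only folds events into state, which is what makes replaying an event stream deterministic.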

Categories: Architecture, Requirements

Stuff The Internet Says On Scalability For January 3rd, 2014

Hey, it's HighScalability time, can you handle the truth?


Should software architectures include parasites? They increase diversity and complexity in the food web.
  • 10 Million: classic hockey stick growth pattern for GitHub repositories
  • Quotable Quotes:
    • Seymour Cray: A supercomputer is a device for turning compute-bound problems into IO-bound problems.
    • Robert Sapolsky: And why is self-organization so beautiful to my atheistic self? Because if complex, adaptive systems don’t require a blue print, they don’t require a blue print maker. If they don’t require lightning bolts, they don’t require Someone hurtling lightning bolts.
    • @swardley: Asked for a history of PaaS? From memory, public launch - Zimki ('06), BungeeLabs ('06), Heroku ('07), GAE ('08), CloudFoundry ('11) ...
    • @neil_conway: If you're designing scalable systems, you should understand backpressure and build mechanisms to support it.
    • Scott Aaronson: ...the brain is not a quantum computer. A quantum computer is good at factoring integers, discrete logarithms, simulating quantum physics, modest speedups for some combinatorial algorithms, none of these have obvious survival value. The things we are good at are not the same thing quantum computers are good at.
    • @rbranson: Scaling down is way cooler than scaling up.
    • @rbranson: The i2 EC2 instances are a huge deal. Instagram could have put off sharding for 6 more months, would have had 3x the staff to do it.
    • @mraleph: often devs still approach performance of JS code as if they are riding a horse cart but the horse had long been replaced with fusion reactor
  • Now we know the cost of bandwidth: Netflix’s new plan: save a buck with SD-only streaming
  • Massively interesting Stack Overflow thread on Why is processing a sorted array faster than an unsorted array? Compilers may grant a hidden boon or turn traitor with a deep deceit. How do you tell? It's about branch prediction (a rough sketch of the effect follows this list).
  • Can your database scale to 1000 cores? Nope. Concurrency Control in the Many-core Era: Scalability and Limitations: We conclude that rather than pursuing incremental solutions, many-core chips may require a completely redesigned DBMS architecture that is built from ground up and is tightly coupled with the hardware.
  • Not all SSDs are created equal. Power-Loss-Protected SSDs Tested: Only Intel S3500 Passes, with a follow up. If data on your SSD can't survive a power outage it ain't worth a damn.
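As promised above, a rough and deliberately unscientific sketch of the branch-prediction effect behind that Stack Overflow question. The numbers you get will vary, and a modern JIT may narrow the gap by compiling the branch to a conditional move:

import java.util.Arrays;
import java.util.Random;

// Sums values >= 128 over the same data, unsorted and then sorted.
// The data-dependent branch is nearly random in the first case and
// almost perfectly predictable in the second, which is the whole point.
public class BranchPredictionDemo {
    public static void main(String[] args) {
        int[] data = new Random(42).ints(1_000_000, 0, 256).toArray();

        long unsortedMs = timeSum(data);   // branch outcome is essentially random
        Arrays.sort(data);
        long sortedMs = timeSum(data);     // branch becomes highly predictable

        System.out.printf("unsorted: %d ms, sorted: %d ms%n", unsortedMs, sortedMs);
    }

    static long timeSum(int[] data) {
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < 100; i++) {        // repeat to make the difference visible
            for (int value : data) {
                if (value >= 128) {            // the data-dependent branch
                    sum += value;
                }
            }
        }
        if (sum == 42) System.out.println("unlikely"); // keep 'sum' alive for the JIT
        return (System.nanoTime() - start) / 1_000_000;
    }
}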

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge...

Categories: Architecture