Software Development Blogs: Programming, Software Testing, Agile, Project Management


Teach the World Your Skills in a Mobile-First, Cloud-First World

“Be steady and well-ordered in your life so that you can be fierce and original in your work.”  -- Gustave Flaubert

An important aspect of personal effectiveness and career development is learning business skills for a technology-centric world.

I know a lot of developers figuring out how to share their expertise in a mobile-first, cloud-first world.  Some are creating software services, some are selling online courses, some are writing books, and some are building digital products.  It’s how they share and scale their expertise with the world while doing what they love.

In each case, the underlying pattern is the same:

"Write once, share many." 

It’s how you scale.  It’s how you amplify your impact.  It’s a simple way to combine passion + purpose + profit.

With our mobile-first, cloud-first world, and so much technology at your fingertips to help with automation, it’s time to learn better business skills and how to stay relevant in an ever-changing market.

But the challenge is, how do you actually start?

On the consumer side ...
In a mobile-first, cloud-first world, users want the ability to consume information anywhere, anytime, from any device.

On the producer side ...
Producers want the ability to easily create digital products that they can share with the world -- and automate the process as much as possible. 

I've researched and tested a lot of ways to share your experience in a way that works in a mobile-first, cloud-first world.  I’ve gone through a lot of people, programs, processes, and tools.  Ultimately, the proven practice for building high-end digital products is building courses, and teaching courses is the easiest way to get started.  And Dr. Cha~zay is one of the best in the world at teaching people how to teach the world what they love.

I have a brilliant and deep guest post by Dr. Cha~zay on how to teach courses in a mobile-first, cloud-first world:

Teach the World What You Love

You could very much change your future, or your kid’s future, or your friend’s future, or that of whoever you know who needs to figure out new ways to teach in a mobile-first, cloud-first world.

The sooner you start doing, testing, and experimenting, the sooner you figure out what working in a Digital Economy could mean to you, your family, and your friends in a mobile-first, cloud-first world.

The world changes. 

Do you?

Categories: Architecture, Programming

Docker to the on-premise rescue

Xebia Blog - Wed, 11/18/2015 - 10:18

During the second day at Dockercon EU 2015 in Barcelona, Docker introduced the missing glue, which they call the "Containers as a Service Platform". With a focus on both public cloud and on-premise, this is a great addition to the ecosystem. For this blogpost I would like to focus on the Run part of Docker's "Build-Ship-Run" motto, with an emphasis on on-premise. To realize this, Docker launched the Docker Universal Control Plane, the project formerly known as Orca.

I got to play with version 0.4.0 of the software during a hands-on lab, and I will try to summarize what I've learned.

Easy installation

Of course the installation is done by launching Docker containers on one or more hosts, so you will need to provision your hosts with the Docker Engine. After that you can launch an `orca-bootstrap` container to install, uninstall, or add an Orca controller. The `orca-bootstrap` script will generate a Swarm Root CA and an Orca Root CA and deploy the necessary Orca containers (more on this in the next section), after which you can log in to the Docker Universal Control Plane. Adding a second Orca controller is as simple as running `orca-bootstrap` with a join parameter and specifying the existing Orca controller.


Let's talk a bit about the technical parts; keep in mind that I'm not the creator of this product. There are 7 containers running after you have successfully run the `orca-bootstrap` installer. You have the Orca controller itself, listening on port 443, which is your main entry point to Docker UCP. There are 2 cfssl containers, one for the Orca CA and one for the Swarm CA. Then you have the Swarm containers (Manager and Agent) and the key-value store, for which Docker chose etcd. Finally, there is an orca-proxy container, whose port 12376 redirects to the Swarm Manager. I'm not sure why this is yet; maybe we will find out in the beta.

From the frontend (which we will discuss next) you can download a 'bundle', which is a zip file containing the TLS parts and a  sourceable environment file containing:

export DOCKER_CERT_PATH=$(pwd)
export DOCKER_HOST=tcp://orca_controller_ip:443
# Run this command from within this directory to configure your shell:
# eval $(env.sh)
# This admin cert will also work directly against Swarm and the individual
# engine proxies for troubleshooting.  After sourcing this env file, use
# "docker info" to discover the location of Swarm managers and engines.
# and use the --host option to override $DOCKER_HOST

As you can see, it also works directly against Swarm manager and Engine to troubleshoot. Running `docker version` with this environment returns:

Client:
 Version:      1.9.0
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   76d6bc9
 Built:        Tue Nov  3 17:43:42 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      orca/0.4.0
 API version:  1.21
 Go version:   go1.5
 Git commit:   56afff6
 OS/Arch:      linux/amd64


Okay, so when I opened up the frontend it looked pretty familiar, and I was trying to remember where I'd seen it before. After a look at the source, I found an ng-app parameter in the html tag named shipyard. The GUI is based on the Shipyard project, which is cool because it was already a well-functioning management tool built upon Docker Swarm and the Docker API. People familiar with Shipyard already know the functionality, so let me quickly sum up what it can do and what it looks like in Docker UCP.

Dashboard overview.

Application expanded; quickly start/stop/restart/destroy/inspect a running container.

Application overview; graphs of resource usage, with container IDs that can be included or excluded from the graph.

Containers overview; multi-select containers and execute actions.

Ability to quickly inspect logs.

Ability to exec into the container to debug/troubleshoot etc.

Secrets Management & Authentication/Authorization

So, in this hands-on lab there were a few things that were not ready yet. Eventually it will be possible to hook up Docker UCP to an existing LDAP directory, but I was not able to test this yet. Once fully implemented, you can hook it up to your existing RBAC system and give teams the authorization they need.

There was also a demo showing off a secret management tool, which also was not yet available. I guess this is what the key-value store is used for as well. Basically you can store a secret at a path such as secret/prod/redis and then access it by running a container with a label like:

docker run -ti --rm --label com.docker.secret.scope=secret/prod

Now you can access the secret within the container in the file /secret/prod/redis.

Now what?

A lot of new things are being added to the ecosystem, which is certainly going to help some customers adopt Docker and bring it into production. I like that Docker thought of the on-premise customers and delivers them an experience equal to that of the cloud users. As this is an early version they need feedback from users, so if you are able to test it, please do so in order to make it a better product. They said they are already working on multi-tenancy, for instance, but no timelines were given.

If you would like to sign up for the beta of Docker Universal Control Plane, you can sign up at this page: https://www.docker.com/try-ducp



Why I like golang: a programming autobiography

Agile Testing - Grig Gheorghiu - Mon, 11/16/2015 - 19:07
Tried my hand at writing a story on Medium.

9ish Low Latency Strategies for SaaS Companies

Achieving very low latencies takes special engineering, but if you are a SaaS company, latencies of a few hundred milliseconds are possible for complex business logic using standard technologies like load balancers, queues, JVMs, and REST APIs.

Itai Frenkel, a software engineer at Forter, which provides a Fraud Prevention Decision as a Service, shows how in an excellent article: 9.5 Low Latency Decision as a Service Design Patterns.

While any article on latency will have some familiar suggestions, Itai goes into some new territory you can really learn from. The full article is rich with detail, so you'll want to read it, but here's a short gloss:

Categories: Architecture

Video: Software architecture as code

Coding the Architecture - Simon Brown - Mon, 11/16/2015 - 12:52

I presented a new version of my "Software architecture as code" talk at the Devoxx Belgium 2015 conference last week, and the video is already online. If you're interested in how to communicate software architecture without using tools like Microsoft Visio, you might find it interesting.

The slides are also available to view online and download. Enjoy!

Categories: Architecture

How Facebook's Safety Check Works

I noticed on Facebook during this horrible tragedy in Paris that there was some worry because not everyone had checked in using Safety Check (video). So I thought people might want to know a little more about how Safety Check works.

If a friend or family member hasn't checked in yet, it doesn't mean anything bad has happened to them. Please keep that in mind. Safety Check is a good system, but not a perfect system, so keep your hopes up.

This is a really short version, there's a longer article if you are interested.

When is Safety Check Triggered?
  • Before the Paris attack Safety Check was only activated for natural disasters. Paris was the first time it was activated for human disasters and they will be doing it more in the future. As a sign of this policy change, Safety Check has been activated for the recent bombing in Nigeria.

How Does Safety Check Work?
  • If you are in an area impacted by a disaster Facebook will send you a push notification asking if you are OK. 

  • Tapping the “I’m Safe” button marks you as safe.

  • All your friends are notified that you are safe.

  • Friends can also see a list of all the people impacted by the disaster and how they are doing.

How is the impacted area selected?
  • Since Facebook only has city-level location for most users, declaring the area isn't as hard as drawing on a map. Facebook usually selects a number of cities, regions, states, or countries that are affected by the crisis.

  • Facebook always allows people to declare themselves into the crisis (or out) in case the geolocation prediction is inaccurate. This means Facebook can be a bit more selective with the geographic area, since they want a pretty high signal with the notifications. Notification click-through and conversion rates are used as downstream signals on how well a launch went.

  • For something like Paris, Facebook selected the whole city and launched. Especially with the media reporting "Paris terror attacks," this seemed like a good fit.

How do you build the pool of people impacted by a disaster in a certain area?
  • Building a geoindex is the obvious solution, but it has weaknesses.

  • People are constantly moving so the index will be stale.

  • A geoindex of 1.5 billion people is huge and would take a lot of resources they didn’t have. Remember, this is a small team without a lot of resources trying to implement a solution.

  • Instead of keeping a data pipeline that’s rarely used active all of the time, the solution should work only when there is an incident. This requires being able to make a query that is dynamic and instant.

  • Facebook does not have GPS-level location information for the majority of its user base (only those that turn on the nearby friends feature), so they use the same IP2Geo prediction algorithms that Google and other web companies use -- essentially determining city level location based on IP address.

The solution leveraged the shape of the social graph and its properties:
  • When there’s a disaster, say an earthquake in Nepal, a hook for Safety Check is turned on in every single news feed load.

  • When people check their news feed the hook executes. If the person checking their news feed is not in Nepal then nothing happens.

  • The magic happens when someone in Nepal checks their news feed.

  • Safety Check fans out to all their friends on their social graph. If a friend is in the same area then a push notification is sent asking if they are OK.

  • The process keeps repeating recursively. For every friend found in the disaster area a job is spawned to check their friends. Notifications are sent as needed.
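The fan-out described above can be sketched in a few lines. This is a hypothetical illustration, not Facebook's code; `friends_of`, `in_area`, and `notify` are assumed stand-ins for the real social graph and notification services:

```python
def safety_check_fanout(seed_user, friends_of, in_area, notify):
    """Starting from one user seen in the disaster area, notify affected
    friends and recursively explore *their* friends."""
    seen = {seed_user}
    stack = [seed_user]
    while stack:
        user = stack.pop()
        for friend in friends_of(user):
            if friend in seen:
                continue
            seen.add(friend)
            if in_area(friend):       # selective exploration: only friends
                notify(friend)        # in the area are notified, and only
                stack.append(friend)  # they are expanded further
    return seen
```

In production, each expansion would be spawned as a distributed async job rather than a local stack pop, which is where the free parallelism comes from.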

In Practice this Solution Was Very Effective
  • At the end of the day it's really just DFS (Depth First Search) with seen state and selective exploration.

  • The product experience feels live and instant because the algorithm is so fast at finding people. Everyone in the same room, for example, will appear to get their notifications at the same time. Why?

  • Using the news feed gives a random sampling of users that is biased towards the most active users with the most friends. And it filters out inactive users, saving billions of rows of computation that need not be performed.

  • The graph is dense and interconnected. Six Degrees of Kevin Bacon is wrong, at least on Facebook: the average distance between any two of Facebook’s 1.5 billion users is 4.74 edges. Sorry, Kevin. With 1.5 billion users, the whole graph can be explored within 5 hops. Most people can be efficiently reached by following the social graph.

  • There’s a lot of parallelism for free using a social graph approach. Friends can be assigned to different machines and processed in parallel. As can their friends, and so on.

  • Isn't it possible to use something like Hadoop/Hive/Presto to simply get a list of all users in Paris on demand? Hive and Hadoop are offline. It can take ~45 minutes to execute a query on Facebook's entire user table (even longer if it involves joins), and at certain times of the day it's slower (during work hours, usually). Not only that, but once the query executes, some engineer has to copy and paste the results into a script that would likely run on one machine. Doing this as a distributed async job allowed for a lot more flexibility. Even better, it's possible to change the geographic area as the algorithm runs, and those changes are reflected immediately.

  • The cost of searching for the users in the area directly correlates with the geographic size of the crisis. A smaller crisis ends up being fairly cheap, whereas larger crises end up checking on a larger and larger portion of the userbase, until 100% of the user base is reached. For Nepal, a big disaster, ~1B profiles were checked. For some smaller launches only ~100k profiles were checked. Had an index been used, or an offline job that did joins and filters, the cost would be constant no matter how small the crisis.
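The 5-hop figure above admits a quick sanity check, assuming an average branching factor of about 200 friends per user (my assumption for illustration, not a number from the post):

```python
# Ignoring overlap between friend lists (which only shortens real
# distances), roughly b**k users are reachable within k hops, so we
# want the smallest k with b**k >= 1.5 billion.
b = 200                   # assumed average friend count
users = 1_500_000_000
k, reach = 0, 1
while reach < users:
    reach *= b
    k += 1
print(k)  # -> 4, consistent with an average distance of 4.74 edges
```

Even with generous allowance for overlap in friend lists, a branching factor in the low hundreds reaches the whole graph within a handful of hops.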

On HackerNews

Categories: Architecture

Stuff The Internet Says On Scalability For November 13th, 2015

Hey, it's HighScalability time:

Gorgeous picture of where microbes live in species. Humans have the most. (M. WARDEH ET AL)

  • 14.3 billion: Alibaba single day sales; 1.55 billion: Facebook monthly active users; 6 billion: Snapchat video views per day; unlimited: now defined as 300 GB by Comcast; 80km: circumference of China's proposed supercollider; 500: alien worlds visualized; 50: future sensors per acre on farms; 1 million: Instagram requests per second.

  • Quotable Quotes:
    • Adam Savage~ Lesson learned: do not test fire rockets indoors.
    • dave_sullivan: I'm going to say something unpopular, but horizontally-scaled deep learning is overkill for most applications. Can anyone here present a use case where they have personally needed horizontal scaling because a Titan X couldn't fit what they were trying to do? 
    • @bcantrill: Question I've been posing at #KubeCon: are we near Peak Confusion in the container space? Consensus: no -- confusion still accelerating!
    • @PeterGleick: When I was born, CO2 levels were  ~300 ppm. This week may be the last time anyone alive will see less than 400 ppm. 
    • @patio11: "So I'm clear on this: our business is to employ people who can't actually do worthwhile work, train them up, then hand to competition?"
    • Settlement-Size: This finding reveals that incipient forms of hierarchical settlement structure may have preceded socioeconomic complexity in human societies
    • wingolog: for a project to be technically cohesive, it needs to be socially cohesive as well; anything else is magical thinking.
    • @mjpt777: Damn! @toddlmontgomery has got Aeron C++ IPC to go at over 30m msg/sec. Java is struggling to keep up.
    • Tim O'Reilly: While technological unemployment is a real phenomenon, I think it's far more important to look at the financial incentives we've put in place for companies to cut workers and the cost of labor. If you're a public company whose management compensation is tied your stock price, it's easy to make short term decisions that are good for your pocketbook but bad long term for both the company and for society as a whole.
    • @RichardDawkins: Evolution is "Descent with modification". Languages, computers and fashions evolve. Solar systems, mountains and embryos don't. They develop
    • @Grady_Booch: Dispatches from a programmer in the year 2065: "How do you expect me to fit 'Hello, World' into only a terabyte of memory?" via Joe Marasco
    • @huntchr: I find #Zookeeper to be the Achilles Heal of a few otherwise interesting projects e.g. #kafka, #mesos.
    • Robert Scoble~ Facebook Live was bringing 10x more viewers than Twitter/Periscope
    • cryptoz: I've always wondered about this. Presumably the people leading big oil companies are not dumb idiots; so why wouldn't they take this knowledge and prepare in advance?

  • Waze is using data from sources you may not expect. Robert Scoble: How about Waze? I witnessed an accident one day on the highway near my house. Two lane road. The map turned red within 30 seconds of the accident. How did that happen? Well, it turns out cell phone companies (Verizon, in particular, in the United States) gather real time data from cell phones. Your phone knows how fast it’s going. In fact, today, Waze shows you that it knows. Verizon sells that data (anonymized) to Google, which then uses that data to put the red line on your map.

  • If email had been done really right in the early days, we wouldn't need half the social networks or messaging apps we have today. Almost everything we see is a reimplementation of email. Gmail, We Need To Talk.

  • Don Norman and Bruce Tognazzini, prophets from Apple's time in the wilderness, don't much like the new religion. They stand before the temple shaking fists at blasphemy. How Apple Is Giving Design A Bad Name: Apple is destroying design. Worse, it is revitalizing the old belief that design is only about making things look pretty. No, not so! Design is a way of thinking, of determining people’s true, underlying needs, and then delivering products and services that help them. Design combines an understanding of people, technology, society, and business. 

  • There's a new vision of the Internet out there and it's built around the idea of Named Data Networking (NDN). It's an evolution from today’s host-centric network architecture IP to a data-centric network architecture. Luminaries like Van Jacobson like the idea. Packet Pushers with good coverage in Show 262 – Future of Networking – Dave Ward. Dave Ward is the CTO of Engineering and Chief Architect at Cisco. For me, make the pipes dumb, fast, and secure. Everything else is emergent.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Are your node modules secure?

Xebia Blog - Fri, 11/13/2015 - 13:05

With over 200k packages, npm is the world's largest registry of open source packages. It serves several million downloads each month. The popularity of npm is a direct result of the popularity of JavaScript. Originally npm was the package manager for Node.js, the server-side JavaScript runtime. Since Node.js developers mostly follow the Unix philosophy, the npm registry contains many very small libraries tailored to a specific purpose. Since the introduction of Browserify, many of these libraries suddenly became suitable for use in the web browser. It has made npm not only the package manager for Node.js, but for the entire JavaScript ecosystem. This is why npm is not an abbreviation of Node Package Manager, but a recursive backronym for "npm is not an acronym". Wow.

If you do any serious JavaScript development, you cannot go without libraries, so npm is an indispensable resource. Any project of meaningful size is quickly going to rely on several dozen libraries. Considering that these libraries often have a handful of dependencies of their own, your application indirectly depends on hundreds of packages. Most of the time this works out quite well, but sometimes things aren't that great. It turns out that keeping all of these dependencies up to date can be quite a challenge. Even if you frequently check your dependencies for updates, there's no guarantee that your dependencies' authors will do the same. With the pace at which new JavaScript packages are being released, it's close to impossible to keep everything up to date at all times.

Most of the time it's not a problem to rely on an older version of a package. If your package works fine with an outdated dependency, there's no compelling reason to upgrade. Why fix something that isn't broken? Unfortunately, it's not so easy to tell if it is. Your package may have been broken without your knowledge. The problem is in the definition of "broken". You could consider it to mean your application doesn't work in some way, but what about the non-functionals? Did you consider the fact that you may be relying on packages that introduce security vulnerabilities into your system?

Like any software, Node.js and JavaScript aren't immune to security issues. You could even consider JavaScript inherently less secure because of its dynamic nature. The Node Security Project exists to address this issue. It keeps a database of known security vulnerabilities in the Node ecosystem and allows anyone to report them. Although NSP provides a command line tool to check your dependencies for vulnerabilities, a new company called Snyk has recently released a tool to do the same and more. Snyk, short for "so now you know", finds security vulnerabilities in your entire dependency tree based on the NSP database and other sources. Its CLI tool is incredibly simple to install and use. Just `npm install snyk` and off you go. You can run it against your own project, or against any npm package:

> snyk test azure

✗ Vulnerability found on validator@3.1.0
Info: https://app.snyk.io/vuln/npm:validator:20130705
From: azure@0.10.6 > azure-arm-website@0.10.0 > azure-common@0.9.12 > validator@~3.1.0
No direct dependency upgrade can address this issue.
Run `snyk protect -i` to patch this vulnerability
Alternatively, manually upgrade deep dependency validator@~3.1.0 to validator@3.2.0


Tested azure for known vulnerabilities, found 32 vulnerabilities.

It turns out the Node.js library for Azure isn't quite secure. Snyk can automatically patch the vulnerability for you, but the real solution is to update the azure-common package to use the newer version of validator. As you see, most of the security issues reported by Snyk have already been fixed by the authors of the affected library. That's the real reason to keep your dependencies up to date.

I think of Snyk as just another type of code quality check. Just like with your unit tests, your build should fail if you've accidentally added an insecure dependency. A really simple way to enforce this is to use a pre-commit hook in your package.json:

"scripts": {
  "lint": "eslint src test",
  "snyk": "snyk test",
  "test": "mocha test/spec"
},
"pre-commit": ["lint", "test", "snyk"]
The pre-commit hook will automatically be executed when you try to commit to your Git repository. It will run the specified npm scripts and if any of them fail, abort the commit. It must be noted that, by default, Snyk will only test your production dependencies. If you want it to also test your devDependencies you can run it with the `--dev` flag.

Personal Effectiveness Toolbox

“Private Victory precedes Public Victory. Algebra comes before calculus.” – Stephen Covey

At last.  It’s here.  It’s my Personal Effectiveness Toolbox:

Personal Effectiveness Toolbox

It’s the real deal.  This is my hand-picked collection of principles, patterns, practices, and tools to help you make the most of what you’ve got.

My Personal Effectiveness Toolbox is a roundup of the best-of-the-best resources that help you in key areas of your life, including the following Hot Spots:

  1. Mind
  2. Body
  3. Emotions
  4. Career
  5. Finance
  6. Relationships
  7. Fun
Get Your Edge in Work and Life (Your Unfair Advantage)

If you want to get an edge in work and life, Personal Effectiveness Toolbox will help you do exactly that.  I mentor a lot of people inside and outside of Microsoft, so I am always looking for the best resources and tools that actually work.  I’ve personally spent many thousands of dollars on programs and tested them in the real world against extreme challenges.

I wasted a lot of money.

But I also found a lot of incredible and amazing products that actually worked.  I found people and products and tools that provide real insight and led to real breakthroughs.

The Best Personal Effectiveness Resources in the World

My Personal Effectiveness Toolbox is the ultimate collection of programs, tools, and books that help you succeed in all areas of your life.  I’ve organized the resources into the following categories:

Achievement Systems, Beliefs / Limits / Mindsets, Blogging, Body / Fitness / Health, Book Writing, Business / Startups / Passive Income, Career, Confidence, Creativity, Finance, Goals, Emotional Intelligence, Interpersonal Skills, Leadership, Mind / Intellectual Horsepower, Motivation, Personal Development, Productivity, Relationships.

I’ve also tried to address some common scenarios and issues.

Build Passive Income Skills for a Mobile-First, Cloud-First World

One scenario I see a lot is people looking to achieve financial freedom.  They either want to have a backup for their day job, or perhaps pursue other opportunities on their own terms.  Or they simply want to try their hand at generating passive income.  The beauty is that in today’s world, you can combine your purpose, passion, and profit, and sell what you know to the world.

But the challenge is that it can be a confusing path, and there is a lot to learn.  I went through a lot of books, courses, and programs that were a big letdown.  But along the way, I did find some resources that really did help.  For example, I regularly recommend SBI! (Site Build It) to friends and family as a way to get started.  I also recommend the Teleseminar Mastery Course as an effective way to create an online course on your favorite subject and get paid for doing what you love.  To give them a handle on how to think about passive income, financial freedom, and building businesses in today’s mobile-first, cloud-first world, I have them start with Six-Figure Second Income.  It’s one of the best books I’ve read that puts it all together, explains things in plain English, and puts things like digital information products and a digital economy in perspective.

Improve Your Personal Effectiveness with a Personal Achievement System

Another scenario I see is that too many people struggle with goals, motivation, and productivity.  While you can attack these individually, I’m a fan of building a strong foundation by putting a personal achievement system in place.  If you have an achievement system you can count on, you amplify your chance of success.  It also helps you with continuous learning.  And a good personal achievement system helps you get much better over time.

While there are a lot of systems out there, if I had to pick the best starting point, I would say it’s Tony Robbins’ Personal Power II.  It’s the most hard-core personal development and personal excellence program I know.  You’ll learn more about your body, brain, and emotions than from a lifetime of reading.  You’ll learn how to rapidly model success and accelerate your learning curve.  It’s the same program I used to go from nearly last in a class of 197 students to #3.  I still can’t believe it.  Just about every day I recall some aspect of Personal Power II and apply it in some shape or form.  It’s one of Tony’s greatest gifts to the world, ever.

Note that just because I’m talking about Tony Robbins’ particular program doesn’t mean I limited myself to his programs.  In fact, I also included a reference to Brian Tracy’s Success Master Academy.  It’s also one of the best programs available that really gives you a well-rounded foundation for achieving your dreams.

Master Emotional Intelligence

Emotional Intelligence is often the difference that makes the difference in work and life.  While Emotional Intelligence won’t guarantee your success, the absence of it can almost guarantee you will struggle.  You will be at a disadvantage compared to those with EQ.  But the very good news is that Emotional Intelligence is a skill you can learn.  You can practice it every day.  And you can learn it on your own.  The place to start is Daniel Goleman’s classic book, Emotional Intelligence.

If you are wondering what Stephen Covey meant when he wanted us to increase the gap between stimulus and response, and to respond to our challenges vs. react, that’s exactly where Emotional Intelligence comes in.

Achieve Your Goals

Believe it or not, goals are your friend.  If your goals aren't working for you, the problem is that you have "impotent goals," as Tony Robbins would say.  Or perhaps Zig Ziglar said it best: "People do not wander around and then find themselves at the top of Mount Everest."

Goals help you prioritize, focus, and know when you are done.  They help you make trade-offs in how much time to spend on something, or even when to spend time on something.  They also help you establish markers along the way so you can feel a sense of progress and they help you with your motivation.

But all of the goodness of goals depends on knowing how to really set and achieve them with skill.  The good news is, goals have been around a very long time, way longer than you or me, and many people before us have learned how to really use goals to their advantage.

And the beauty is that nothing stops us from using all those lessons learned in goal setting.  The art and science of effective goal setting is well known and well published.  You just need to know where to look.  While I have gone through many, many goal-setting courses and exercises, I would say that one of the best, most thorough programs that really gives you a rock-solid foundation is Brian Tracy's Goal Mastery for Personal and Financial Achievement.  It is an advanced system that not only covers the basics, but dives deep into how to really create compelling goals and make them happen.

Write Your Book

One goal a lot of people have is to write a book.  In fact, many people I know want to write their first book.  I've included a link to Brian Tracy's 20-Step Author Quick Start Guide, which is one of the most thorough guides that walks you through the process of writing and publishing your book.  Brian Tracy is a world-renowned author and is one of the best to learn from.

You can write your book to share your experience.  You can also use books as a way to help your career or to establish your expertise.  You can also use your book as a way to help build your financial fitness.  And you can use your book writing process as a way to dive much deeper into a topic you love.

Call to Action

I could go on, but at this point, I’m just going to ask you to do three things:

  1. Bookmark my Personal Effectiveness Toolbox.  It will serve you for years to come (and I will continue to update it.)
  2. Review the programs and tools and test which ones you think will help you the most.
  3. Last, but not least (and perhaps most important), share the Personal Effectiveness Toolbox with 10 of your friends.  Karma just might surprise you with a big fat kiss.  But all Karma aside, you can really help your friends and family the same way I do with some of the best tools in the world.

Here’s to getting everything you want, and then some, as well as helping more people achieve their dreams.

Categories: Architecture, Programming

Sponsored Post: StatusPage.io, Digit, iStreamPlanet, Instrumental, Redis Labs, Jut.io, SignalFx, InMemory.Net, VividCortex, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Senior Devops Engineer - StatusPage.io is looking for a senior devops engineer to help us in making the internet more transparent around downtime. Your mission: help us create a fast, scalable infrastructure that can be deployed to quickly and reliably.

  • Digit Game Studios, Ireland's largest game development studio, is looking for game server engineers to work on existing and new mobile 3D MMO games. Our most recent project in development is based on an iconic AAA IP, and therefore we expect very high DAU & CCU numbers. If you are passionate about games and experienced in creating low-latency architectures and/or highly scalable but consistent solutions, then talk to us and apply here.

  • As a Networking & Systems Software Engineer at iStreamPlanet you’ll be driving the design and implementation of a high-throughput video distribution system. Our cloud-based approach to video streaming requires terabytes of high-definition video routed throughout the world. You will work in a highly-collaborative, agile environment that thrives on success and eats big challenges for lunch. Please apply here.

  • As a Scalable Storage Software Engineer at iStreamPlanet you’ll be driving the design and implementation of numerous storage systems including software services, analytics and video archival. Our cloud-based approach to world-wide video streaming requires performant, scalable, and reliable storage and processing of data. You will work on small, collaborative teams to solve big problems, where you can see the impact of your work on the business. Please apply here.

  • At Scalyr, we're analyzing multi-gigabyte server logs in a fraction of a second. That requires serious innovation in every part of the technology stack, from frontend to backend. Help us push the envelope on low-latency browser applications, high-speed data processing, and reliable distributed systems. Help extract meaningful data from live servers and present it to users in meaningful ways. At Scalyr, you’ll learn new things, and invent a few of your own. Learn more and apply.

  • UI Engineer - AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data - AppDynamics, leader in next-generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • Your event could be here. How cool is that?
Cool Products and Services
  • Instrumental is a hosted real-time application monitoring platform. In the words of one of our customers: "Instrumental is the first place we look when an issue occurs. Graphite was always the last place we looked." - Dan M

  • Real-time correlation across your logs, metrics and events.  Jut.io just released its operations data hub into beta and we are already streaming in billions of log, metric and event data points each day. Using our streaming analytics platform, you can get real-time monitoring of your application performance, deep troubleshooting, and even product analytics. We allow you to easily aggregate logs and metrics by micro-service, calculate percentiles and moving window averages, forecast anomalies, and create interactive views for your whole organization. Try it for free, at any scale.

  • Turn chaotic logs and metrics into actionable data. Scalyr replaces all your tools for monitoring and analyzing logs and system metrics. Imagine being able to pinpoint and resolve operations issues without juggling multiple tools and tabs. Get visibility into your production systems: log aggregation, server metrics, monitoring, intelligent alerting, dashboards, and more. Trusted by companies like Codecademy and InsideSales. Learn more and get started with an easy 2-minute setup. Or see how Scalyr is different if you're looking for a Splunk alternative or Sumo Logic alternative.

  • SignalFx: just launched an advanced monitoring platform for modern applications that's already processing 10s of billions of data points per day. SignalFx lets you create custom analytics pipelines on metrics data collected from thousands or more sources to create meaningful aggregations--such as percentiles, moving averages and growth rates--within seconds of receiving data. Start a free 30-day trial!

  • InMemory.Net provides a Dot Net native in memory database for analysing large amounts of data. It runs natively on .Net, and provides a native .Net, COM & ODBC apis for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex goes beyond monitoring and measures the system's work on your servers, providing unparalleled insight and query-level analysis. This unique approach ultimately enables your team to work more effectively, ship more often, and delight more customers.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

A 360 Degree View of the Entire Netflix Stack

This is a guest repost by Chris Ueland, creator of Scale Scale, with a creative high level view of the Netflix stack.

As we research and dig deeper into scaling, we keep running into Netflix. They are very public with their stories. This post is a round up that we put together with Bryan’s help. We collected info from all over the internet. If you’d like to reach out with more info, we’ll append this post. Otherwise, please enjoy!

–Chris / ScaleScale / MaxCDN

A look at what we think is interesting about how Netflix Scales
Categories: Architecture

Stuff The Internet Says On Scalability For November 6th, 2015

Hey, it's HighScalability time:

Cool genealogy of Relational Database Management Systems.
  • 9,000: Artifacts Uncovered in California Desert; 400 Million: LinkedIn members; 100: CEOs have more retirement assets than 41% of American families; $160B: worth of AWS; 12,000: potential age of oldest oral history; fungi: world's largest miners 

  • Quotable Quotes:
    • @jaykreps: Someone tell @TheEconomist that people claiming you can build Facebook on top of a p2p blockchain are totally high.
    • Larry Page: I think my job is to create a scale that we haven't quite seen from other companies. How we invest all that capital, and so on.
    • Tiquor: I like how one of the oldest concepts in programming, the ifdef, has now become (if you read the press) a "revolutionary idea" created by Facebook and apparently the core of a company's business. I'm only being a little sarcastic.
    • @DrQz: +1 Data comes from the Devil, only models come from God. 
    • @DakarMoto: Great talk by @adrianco today quote of the day "i'm getting bored with #microservices, and I’m getting very interested in #teraservices.”
    • @adrianco: Early #teraservices enablers - Diablo Memory1 DIMMs, 2TB AWS X1 instances, in-memory databases and analytics...
    • @PatrickMcFadin: Average DRAM Contract Price Sank Nearly 10% in Oct Due to Ongoing Supply Glut. How long before 1T memory is min?
    • @leftside: "Netflix is a monitoring system that sometimes shows people movies." --@adrianco #RICON15
    • Linus: So I really see no reason for this kind of complete idiotic crap.
    • Jeremy Hsu: In theory, the new architecture could pack about 25 million physical qubits within an array that’s 150 micrometers by 150 µm. 
    • @alexkishinevsky: Just done AWS API Gateway HTTPS API, AWS Lambda function to process data straight into AWS Kinesis. So cool, so different than ever before.
    • @highway_62: @GreatDismal Food physics and candy scaling is a real thing. Expectations and ratios get blown. Mouth feel changes.
    • @randybias:  #5 you can’t get automation scaling without relative homogeneity (homologous) and that’s why the webscale people succeeded
    • Brian Biles: Behind it all: VMs won.  The only thing that kept this [Server Centric Storage is Killing Arrays] from happening a long time ago was OS proliferation on physical servers in the “Open Systems” years.  Simplifying storage for old OS’s required consolidative arrays with arbitrated-standard protocols.
    • @paulcbetts: This disk is writing at almost 1GB/sec and reading at ~2.2GB/sec. I remember in 2005 when I thought my HD reading at 60MB/sec was hot shit.
    • @merv: One of computing’s biggest challenges for architects and designers: scaling is not distributed uniformly in time or space.

  • To Zookeeper or not to Zookeeper? This is one of the questions debated on an energetic mechanical-sympathy thread. Some say Zookeeper is unreliable and difficult to manage. Others say Zookeeper works great if carefully tended. If you need a gossip/discovery service, there are alternatives: JGroups, Raft, Consul, Copycat.

  • Algorithms are as capable of tyranny as any other entity wielding power. Twins denied driver’s permit because DMV can’t tell them apart

  • Odd thought. What if Twitter took stock or options as payment for apps that want to use Twitter as a platform (not Fabric)? The current user caps would effectively be the free tier. If you want to go above that you can pay. Or you can exchange stock or options for service. This prevents the Yahoo problem of being King Makers, that is when Google becomes more valuable than you. It gives Twitter potential for growth. It aligns incentives because Twitter will be invested in the success of apps that use it. And it gives apps skin in the game. Although Twitter has to recognize the value of the stock they receive as revenue, they can offset that against previous losses.

  • One of the best stories ever told. Her Code Got Humans on the Moon—And Invented Software Itself: MARGARET HAMILTON WASN’T supposed to invent the modern concept of software and land men on the moon...But the Apollo space program came along. And Hamilton stayed in the lab to lead an epic feat of engineering that would help change the future of what was humanly—and digitally—possible. 

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Notes on testing in golang

Agile Testing - Grig Gheorghiu - Fri, 11/06/2015 - 00:00
I've been doing a lot of testing of APIs written in the Go programming language in the last few months. It's been FUN! Writing code in Golang is also very fun. Can't have good code quality without tests though, so I am going to focus on the testing part in this post.

Unit testing
Go is a "batteries included" type of language, just like Python, so naturally it comes with its own testing package, which provides support for automated execution of unit tests. Here's an excerpt from its documentation:
Package testing provides support for automated testing of Go packages. It is intended to be used in concert with the "go test" command, which automates execution of any function of the form

func TestXxx(*testing.T)

where Xxx can be any alphanumeric string (but the first letter must not be in [a-z]) and serves to identify the test routine.

The functionality offered by the testing package is fairly bare-bones though, so I've actually been using another package called testify, which provides test suites and more friendly assertions.

Whether you're using testing or a 3rd party package such as testify, the Go way of writing unit tests is to include them in a file ending with _test.go in the same directory as your code under test. For example, if you have a file called customers.go which deals with customer management business logic, you would write unit tests for that code and put them in file called customers_test.go in the same directory as customers.go. Then, when you run the "go test" command in that same directory, your unit tests will be automatically run. In fact, "go test" will discover all tests in files named *_test.go and run them. You can find more details on Go unit testing in the Testing section of the "How to Write Go Code" article.
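As a minimal illustration of this convention (the FullName function and its test are hypothetical, but the TestXxx signature and the *_test.go discovery are exactly what 'go test' expects):

```go
package main

import "testing"

// FullName is a stand-in for real business logic that would live in
// a file like customers.go.
func FullName(first, last string) string {
	return first + " " + last
}

// TestFullName would live alongside it in customers_test.go.
// It matches the required func TestXxx(*testing.T) form, so
// 'go test' discovers and runs it automatically.
func TestFullName(t *testing.T) {
	got := FullName("Ada", "Lovelace")
	if got != "Ada Lovelace" {
		t.Errorf("FullName() = %q, want %q", got, "Ada Lovelace")
	}
}
```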

Integration testing
I'll give some examples of how I organize my integration tests. Let's take again the example of testing an API that deals with the management of customers. An integration test, by definition, will hit the API endpoint from the outside, via HTTP. This is in contrast with a unit test, which will test the business logic of the API handler internally and will live, as I said above, in the same package as the API code.

For my integration tests, I usually create a directory per set of endpoints that I want to test, something like core-api for example. In there I drop a file called main.go where I set some constants used throughout my tests:

package main

import (
        "fmt"
)

const API_VERSION = "v2"
const API_HOST = "myapi.example.com"
const API_PORT = 8000
const API_PROTO = "http"
const API_INIT_KEY = "some_init_key"
const API_SECRET_KEY = "some_secret_key"
const TEST_PHONE_NUMBER = "+15555550000"
const DEBUG = true

func init() {
        fmt.Printf("API_PROTO:%s; API_HOST:%s; API_PORT:%d\n", API_PROTO, API_HOST, API_PORT)
}

func main() {
}

For integration tests related to the customer API, I create a file called customer_test.go with the following boilerplate:

package main

import (
        "fmt"
        "testing"

        "github.com/stretchr/testify/suite"
)

// Define the suite, and absorb the built-in basic suite
// functionality from testify - including a T() method which
// returns the current testing context
type CustomerTestSuite struct {
        suite.Suite
        apiURL          string
        testPhoneNumber string
}

// Set up variables used in all tests
// this method is called before each test
func (suite *CustomerTestSuite) SetupTest() {
        suite.apiURL = fmt.Sprintf("%s://%s:%d/%s/customers", API_PROTO, API_HOST, API_PORT, API_VERSION)
        suite.testPhoneNumber = TEST_PHONE_NUMBER
}

// Tear down variables used in all tests
// this method is called after each test
func (suite *CustomerTestSuite) TearDownTest() {
}

// In order for 'go test' to run this suite, we need to create
// a normal test function and pass our suite to suite.Run
func TestCustomerTestSuite(t *testing.T) {
        suite.Run(t, new(CustomerTestSuite))
}

By using the testify package, I am able to define a test suite, a struct I call CustomerTestSuite which contains a testify suite.Suite as an anonymous field. Golang uses composition over inheritance, and the effect of embedding a suite.Suite in my test suite is that I can define methods such as SetupTest and TearDownTest on my CustomerTestSuite. I do the common set up for all test functions in SetupTest (which is called before each test function is executed), and the common tear down for all test functions in TearDownTest (which is called after each test function is executed).

In the example above, I set some variables in SetupTest which I will use in every test function I'll define. Here is an example of a test function:

func (suite *CustomerTestSuite) TestCreateCustomerNewEmailNewPhone() {
        url := suite.apiURL
        random_email_addr := fmt.Sprintf("test-user%d@example.com", common.RandomInt(1, 1000000))
        phone_num := suite.testPhoneNumber
        status_code, json_data := create_customer(url, phone_num, random_email_addr)

        customer_id := get_nested_item_property(json_data, "customer", "id")

        assert_success_response(suite.T(), status_code, json_data)
        assert.NotEmpty(suite.T(), customer_id, "customer id should not be empty")
}

The actual HTTP call to the backend API that I want to test happens inside the create_customer function, which I defined in a separate utils.go file:

func create_customer(url, phone_num, email_addr string) (int, map[string]interface{}) {
        fmt.Printf("Sending request to %s\n", url)

        payload := map[string]string{
                "phone_num":  phone_num,
                "email_addr": email_addr,
        }
        ro := &grequests.RequestOptions{}
        ro.JSON = payload

        var resp *grequests.Response
        resp, _ = grequests.Post(url, ro)

        var json_data map[string]interface{}
        status_code := resp.StatusCode
        err := resp.JSON(&json_data)
        if err != nil {
                fmt.Println("Unable to coerce to JSON", err)
                return 0, nil
        }

        return status_code, json_data
}

Notice that I use the grequests package, which is a Golang port of the Python Requests package. Using grequests allows me to encapsulate the HTTP request and response in a sane way, and to easily deal with JSON.

To go back to the TestCreateCustomerNewEmailNewPhone test function, once I get back the response from the API call to create a customer, I call another helper function called assert_success_response, which uses the assert package from testify in order to verify that the HTTP response code was 200 and that certain JSON parameters that we send back with every response (such as error_msg, error_code, req_id) are what we expect them to be:

func assert_success_response(testobj *testing.T, status_code int, json_data map[string]interface{}) {
        assert.Equal(testobj, 200, status_code, "HTTP status code should be 200")
        assert.Equal(testobj, 0.0, json_data["error_code"], "error_code should be 0")
        assert.Empty(testobj, json_data["error_msg"], "error_msg should be empty")
        assert.NotEmpty(testobj, json_data["req_id"], "req_id should not be empty")
        assert.Equal(testobj, true, json_data["success"], "success should be true")
}

To actually run the integration test, I run the usual 'go test' command inside the directory containing my test files.

This pattern has served me well in creating an ever-growing collection of integration tests against our API endpoints.

Test coverage
Part of Golang's "batteries included" series of tools is a test coverage tool. To use it, you first need to run 'go test' with various coverage options. Here is a shell script we use to produce our test coverage numbers:


#!/bin/bash
# Run all of our go unit-like tests from each package

set -e
set -x

# Coverage output files (these paths are illustrative)
CTMP=/tmp/coverage.tmp.out
CREAL=/tmp/coverage.real.out
CMERGE=/tmp/coverage.merged.out

cp /dev/null $CTMP
cp /dev/null $CREAL
cp /dev/null $CMERGE

go test -v -coverprofile=$CTMP -covermode=count -parallel=9 ./auth
cat $CTMP > $CREAL

go test -v -coverprofile=$CTMP -covermode=count -parallel=9 ./customers
cat $CTMP | tail -n+2 >> $CREAL

# Finally run all the go integration tests

go test -v -coverprofile=$CTMP -covermode=count -coverpkg=./auth,./customers ./all_test.go
cat $CTMP | tail -n+2 >> $CREAL

rm $CTMP

# Merge the coverage report from unit tests and integration tests

cd $GOPATH/src/core_api/
cat $CREAL | go run ../samples/mergecover/main.go >> $CMERGE

set +x

echo "You can run the following to view the full coverage report::::"
echo "go tool cover -func=$CMERGE"
echo "You can run the following to generate the html coverage report::::"
echo "go tool cover -html=$CMERGE -o coverage.html"

The first section of the bash script above runs 'go test' in covermode=count against every sub-package we have (auth, customers etc). It combines the coverprofile output files (CTMP) into a single file (CREAL).

The second section runs the integration tests by calling 'go test' in covermode=count, with coverpkg=[comma-separated list of our packages], against a file called all_test.go. This file starts an HTTP server exposing our APIs, then hits our APIs by calling 'go test' from within the integration test directory.

The coverage numbers from the unit tests and integration tests are then merged into the CMERGE file by running the mergecover tool.

At this point, you can generate an html file via go tool cover -html=$CMERGE -o coverage.html, then inspect coverage.html in a browser. Aim for more than 80% coverage for each package under test.

Robot Framework and the keyword-driven approach to test automation - Part 1 of 3

Xebia Blog - Wed, 11/04/2015 - 21:55

Hans Buwalda is generally credited with the introduction of the keyword-driven paradigm of functional test automation, initially calling it the 'action word' approach.

This approach tackled certain fundamental problems pertaining to the efficiency of the process of creating test code (mainly the lack of reuse) and the maintainability, readability and robustness of that code. Problems surrounding these aspects frequently led to failed automation efforts. The keyword-driven framework therefore was (and is) a quantum leap forward, providing a solution to these problems by facilitating the application of modularity, abstraction and other design patterns to the automation code.

Robot Framework (RF) can be regarded as the epitome of this type of automation framework. Our first post on the RF concentrated on the high-level design of the platform. In this second of our three-part series of introductory-level posts, we will take a closer look at what the keyword-driven approach to test automation is all about.

This second post will itself be divided into three parts. In part 1, we will look at the position of the keyword-driven approach within the history of test automation frameworks. In part 2 we will delve into the concept of a 'keyword'. Finally, in part 3, we will look at how the keyword-driven approach is implemented by the RF.

A short history of test automation frameworks

In order to get a first, overall impression of the nature and advantages of the keyword-driven approach to test automation, let's have a look at the different framework generations that make up the history of test automation platforms. In doing so, we'll come across some of the differences, similarities and interdependencies between these various types of frameworks.

Please note that this history is partially a recap of similar (and sometimes more elaborate) genealogies in existing books, articles and blog posts. Nevertheless, I want to present it here as part of my comprehensive RF introduction. Moreover, I intend to give my own spin to the categorization of frameworks, by arguing that hybrid, MBT (Model-based testing) and Agile frameworks have no place in this lineage and that Scriptless Frameworks are not really scriptless.

Having said that, let's take a closer look at the various types of automation frameworks. The methodological and technological evolution of automated functional testing platforms is often divided into the following stages.

Linear frameworks (such as record & playback frameworks)

The code that is written or generated is procedural in nature, lacking both control structures (for run-time decision making and the reuse of code sequences) and calling structures (for the  reuse of code modules). To wit, code consists of long sequences of statements that are executed one after the other.

Test cases are thus implemented as typically large, monolithic blocks of static code, mixing what Gojko Adzic calls the technical activity level, user interface workflow level and business rule level. That is, the procedural test case consists of a series of lowest-level statements that implements and 'represents' all three levels. Up until the advance of the keyword-driven approach, this mixing was a staple of test automation.

It will be clear that such code lacks reusability, maintainability, readability and several other critical code qualities. The linear framework therefore always injected major drawbacks and limitations into an automation solution.

Amongst the many other examples (of such drawbacks and limitations) is that adding even the slightest variation on an existing test case was labour-intensive and led to more code to maintain.  Due to the lack of control structures, any alternative functional flow had to be implemented through an additional test case, since the test code had no way of deciding dynamically (i.e. at run time) which alternative set of test actions to execute.  This was worsened by the fact that test cases could not be made generic due to the lack of data-driven capabilities.  Consequently, each test case had to be implemented through a dedicated, separate script, even in those cases where the functional flow and (thus) the required test actions were completely identical and only the data input had to be varied.

The lack of control structures also meant less robustness, stability and reliability of the code, since custom error detection and recovery logic could not be implemented and, therefore, run-time error handling was completely lacking.

It was also hard to understand the purpose of test cases.  Business stakeholders especially were dependent on documentation provided with each test case, and this documentation needed to be maintained as well.

With the advent of each subsequent framework generation, this situation would gradually be improved upon.

Structured/scripted frameworks

The automation code now featured control/flow structures (logic), such as for-loops and conditional  statements, making the code maybe even harder to read while not improving much upon maintainability and reusability.

Of course, this approach did provide flexibility and power to the automation engineer. It also prevented code duplication to some extent, on the one hand due to the reuse of blocks of statements through looping constructs and on the other hand because alternative functional flows could now be handled by the same test case through decisions/branching. Additionally, robustness and stability was greatly improved upon because through the control structures routines for error detection and handling could be implemented.

Data-driven frameworks

Data is now extracted from the code, tremendously improving upon automation efficiency by increased levels of both reusability and maintainability.

The code is made generic by having the data passed into the code by way of argument(s). The data itself persists either within the framework (e.g. in lists or tables) or outside the framework (e.g. in databases, spread sheets or text files).

The automation platform is thereby capable of having the same script iterate through sets of data items, allowing for a truly data-driven test design. Variations on test cases can now be defined by simply specifying the various sets of parameters that a certain script is to be executed with. For example, a login routine that is repeatedly executed with different sets of credentials to test all relevant login scenarios.
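In code, this pattern amounts to one generic routine iterating over rows of test data. A minimal sketch in Go (the language of the testing post above; the login function and the credential data are made up for illustration):

```go
package main

import "fmt"

// login is a hypothetical routine under test: it accepts exactly one
// valid set of credentials.
func login(user, pass string) bool {
	return user == "alice" && pass == "s3cret"
}

// credentialCase is one row of test data for the generic login check.
type credentialCase struct {
	user, pass string
	want       bool
}

// checkLogin is the single generic script: it iterates over the data
// rows and reports how many expectations failed.
func checkLogin(cases []credentialCase) (failures int) {
	for _, c := range cases {
		if got := login(c.user, c.pass); got != c.want {
			fmt.Printf("login(%q, %q) = %v, want %v\n", c.user, c.pass, got, c.want)
			failures++
		}
	}
	return failures
}
```

Adding a login scenario now means adding a data row, not writing another script.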

Through this approach, the 1-1 relation between test case and test script could be abandoned for the first time. Multiple test cases, that require identical sequences of test actions but different input conditions, could be implemented through one test script. This increased the level of data coverage immensely, while simultaneously reducing the number of scripts to maintain.

Of course, the data-driven approach to test design comes with a trade-off between efficiency and readability. Nevertheless, it is now possible to very quickly and efficiently extend test coverage. For instance through applying boundary-value analysis and equivalence partitioning to the test designs.

Where the structured frameworks added efficiency to the testing of variations in functional flows, the data-driven framework added efficiency to the testing of variations in data-flows.

Keyword-driven frameworks (sometimes called modularity frameworks)

Reusable blocks of code statements are now extracted into one or more layers of lower-level test functions. These functions are called 'keywords'. Consequently, there are now at least two layers in the automation code: the higher-level test cases and the lower-level, reusable test functions (keywords).

Test cases can now call keywords, thereby reusing the involved keyword logic and abstracting away the technical complexities of the automation solution.
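A minimal sketch of this layering in plain Python (all function and element names are hypothetical; a real framework wires these layers together and would call an actual UI driver at the bottom):

```python
# --- Lowest layer: technical automation code (imagine real driver calls) ---
def click(element):
    return f"clicked {element}"

def type_text(element, text):
    return f"typed '{text}' into {element}"

# --- Keyword layer: reusable, domain-level building blocks ---
def open_login_page(log):
    log.append(click("login-link"))

def enter_credentials(log, user, password):
    log.append(type_text("username-field", user))
    log.append(type_text("password-field", password))

def submit_login(log):
    log.append(click("login-button"))

# --- Test case layer: reads as a sequence of keywords, no technical detail ---
def test_valid_login():
    log = []
    open_login_page(log)
    enter_credentials(log, "alice", "secret")
    submit_login(log)
    return log

print(test_valid_login())
```

The test case layer stays readable for non-programmers, while changes to the SUT's technical interface only touch the lowest layer.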

The keywords can live in code (e.g. Python, Java, .NET) or can be created through a (proprietary) scripting language. A combination of coding and scripting is also possible. In that case the scripted functions typically reuse (and are implemented through) the lower-level, coded functions.

By facilitating modularization and abstraction, the keyword-driven framework dramatically improves the reusability and maintainability of the test code as well as the readability of both test code and test cases.

More on the nature and advantages of the keyword-driven framework in part 2 of this second post.

Scriptless frameworks

There is a substantial level of controversy surrounding this type of framework. That is, there is a lot of polarized discussion on the validity of the claims that tool vendors make with regard to the capabilities and benefits of their tools.

The principal claim seems to be that these frameworks automatically generate the reusable, structured code modules as featured in the keyword-driven approach. No scripting required.

I must admit to not having a lot of knowledge of (let alone hands-on experience with) these frameworks. But based on what I have seen and read (and heard from people who actually use them), it appears to me that the 'scriptless' part amounts to nothing more than the automated generation of an (admittedly advanced) object/UI map through some sort of code scanning process. For example, the controls (i.e. GUI elements such as edit fields, buttons, etc.) of some part of an application's GUI may be inventoried (in terms of all kinds of properties) and then be made available for the application of some sort of operation (e.g. click or input) or assertion (e.g. isEnabled, hasValue). The available types of operations and assertions per GUI element are thus part of the object map. But all of this hardly constitutes a truly scriptless approach, since the logic that these actions must be embedded in (to model the workflow of the SUT) still needs to be scripted and/or coded.

Hybrid frameworks

Sometimes hybrid frameworks are mentioned as the follow-up generation of the keyword-driven approach, but, since keyword-driven frameworks are always data-driven as well, this is a superfluous category. In general, each generation subsumes the essential features of its predecessors while adding new features. So, for example, keyword-driven means structured and data-driven as well.

Note though that certain modern, keyword-driven frameworks are not inherently structured, since they do not feature a (proprietary) scripting language in which to create (mid-level) structured functions, but only support coding such functions at the lowest level in e.g. Java.

For instance, the 'scenario' tables of FitNesse, although often used as a middle layer between test cases (specifications) and the fixtures implemented in code, cannot hold structured functions, since FitNesse does not provide control structures of any form. Therefore, the scenario table contains linear 'code'. For a long time, these tables weren't even able to provide return values.

As a side note: of all the keyword-driven frameworks, RF features the most mature, complete and powerful scripting engine. We will go into details of the RF scripting facilities in a later post. Of course, to what extent such scripting should be applied to test code development is subject to discussion. This discussion centers mainly around the impact of scripting on the risks that apply to the involved automation efforts. Again, a later RF post will focus on that discussion.

Model-based testing frameworks

Similarly, Model-based testing (MBT) frameworks are sometimes mentioned as preceding the scriptless framework generation. But the specific capabilities of this type of tool pertain to the automated generation of highly formalized (logical) test designs with a specific form of coverage. Therefore, in my opinion, model-based frameworks do not belong in this genealogy at all.

With MBT frameworks, test cases are derived from a model, such as a finite state machine, that represents the (behavior of the) SUT. The model, which serves as input to the framework, is sometimes created dynamically as well. However, the model-based approach to test design is, in itself, agnostic towards the question of how to execute the generated set of cases. It might be automated or it might be manual. Accordingly, there are three 'deployment models' for the execution of a model-based test design.

Only seldom can the generated tests be executed automatically, namely in the case of the so-called 'online testing' deployment scheme. The logical test cases are first dynamically made concrete (physical) and are then executed. In that case, the execution is typically performed by an execution engine that is part of the same framework (as is the case with e.g. Tricentis Tosca). However, depending on the product technology and interfaces, customizations (extensions) to the execution engine could be required.

At times the created test cases are candidates for automated execution. This is the case with the so-called 'offline generation of executable tests' deployment scheme, which generates the test cases as machine-readable code modules, e.g. a set of Java or Python classes. These can then be integrated into the test automation code created on platforms such as Cucumber or RF and subsequently be executed by them.

Most often though the generated test designs adhere to the so-called 'offline generation of manually deployable tests' deployment scheme. In that case, the framework output is a set of human-readable test cases that are to be executed manually.

The fact that model-based test designs can occasionally be executed directly by an MBT framework component (namely in one of the three deployment schemes) is the reason that these frameworks are sometimes (and erroneously) listed in the genealogy of test automation frameworks.

Agile/ATDD/BDD frameworks

Finally, Agile, ATDD (Acceptance Test Driven Development) or BDD (Behavior Driven Development) frameworks are sometimes mentioned as well. However, this is not so much a category unto itself, but rather, at most, an added feature to (or on top of) modern, keyword-driven frameworks. Frameworks such as the Robot Framework or FitNesse allow for high-level, business-readable, BDD-style test designs or specifications, for instance a test design applying the Gherkin syntax of Given-When-Then.

These BDD-specific features thus add to the frameworks the capability of facilitating specification and collaboration. That is, they enable collaborative specification or specification by example. Some frameworks, such as Cucumber or JBehave actually define, understand and promote themselves in terms of this specific usage. In other words, although they can be used in a 'purely' keyword- and data-driven manner as well, they stress the fact that the main use case for these frameworks is 'specification by example'. They position themselves as advocates of the BDD philosophy and, as such, want to stimulate and evangelize BDD. From the JBehave web site: "JBehave is a framework for Behaviour-Driven Development (BDD). BDD is an evolution of test-driven development (TDD) and acceptance-test driven design. It shifts the vocabulary from being test-based to behaviour-based, and positions itself as a design philosophy." [italics MH]

It is precisely through their ability to create modularized and data-driven test automation code that the keyword-driven frameworks facilitate and enable a layered, generic test design and thus the ATDD/BDD practices of creating executable specifications and of specifying by example. These specifications or examples need to be business-readable and, consequently, require all of the implementation details and complexities to be hidden in lower layers of the automation solution. They must also be able to allow for variations on test cases (i.e. examples), for instance in terms of boundary cases or invalid data. The keyword-driven mechanisms, mainly those of code extraction and data extraction, thus form the technical preconditions for the methodological and conceptual shift from the 'test-based vocabulary' to the 'behavior-based vocabulary'. ATDD/BDD practices, methods and techniques such as collaborative design, Gherkin, living documentation, etc. can be applied by putting those keyword-driven features that form the key advantages of this type of framework to their best use. Creating, for instance, high-level scenarios in Gherkin is nothing more than a (somewhat) new form of usage of a keyword-driven framework and a (slightly) different spin on the keyword-driven approach. Although not originally devised and designed with these usages in mind, the keyword-driven frameworks proved to be a perfect match for such practices.
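A toy sketch of how a Given-When-Then scenario can be bound to keywords underneath. Everything here is hypothetical and hand-rolled for illustration; real BDD frameworks such as Cucumber or JBehave provide this step-binding machinery (with pattern matching, parameters, reporting, etc.):

```python
# Registry mapping business-readable step phrases to keyword functions.
steps = {}

def step(phrase):
    """Decorator that binds a step phrase to its implementing keyword."""
    def register(func):
        steps[phrase] = func
        return func
    return register

@step("a registered user")
def given_registered_user(ctx):
    ctx["users"] = {"alice": "secret"}

@step("she logs in with valid credentials")
def when_valid_login(ctx):
    ctx["logged_in"] = ctx["users"].get("alice") == "secret"

@step("she sees her dashboard")
def then_dashboard(ctx):
    assert ctx["logged_in"]

def run_scenario(phrases):
    """Execute a scenario: look up each phrase and run its keyword."""
    ctx = {}
    for phrase in phrases:
        steps[phrase](ctx)
    return ctx

ctx = run_scenario([
    "a registered user",                     # Given
    "she logs in with valid credentials",    # When
    "she sees her dashboard",                # Then
])
print(ctx["logged_in"])  # -> True
```

The scenario itself reads as plain business language; all implementation detail lives in the keyword layer below it.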

In conclusion, although Agile, ATDD and BDD as methodological paradigms or conceptual frameworks are new, the phrase 'Agile/ATDD/BDD test automation frameworks' merely refers to a variation on the application of the keyword-driven approach and on the usage of the involved frameworks. This usage can, however, be regarded as applying the keyword-driven approach to the fullest extent and as bringing it to its logical conclusion.

At the end of the day, applying ATDD/BDD-style test designs comes down to layering these designs along the lines of the (already mentioned) distinction made by Gojko Adzic. This layering is made possible by the keyword-driven nature of the utilized frameworks and, consequently, always implies a keyword-driven approach.


We now have a better understanding of the position that the keyword-driven approach holds in relation to other types of test automation frameworks and of some of its unique advantages.

As a side-effect, we have also gained some knowledge concerning the evolution of test automation frameworks as well as concerning the problem space that they were developed in.

And finally, we have seen that certain framework 'types' do not belong in the pantheon of functional test automation frameworks.

Building upon this context, we will now give a brief, condensed analysis of the notion of a 'keyword' in part 2 of this three-part post.

Strategy: Avoid Lots of Little Files

I've been bitten by this one. It happens when you quite naturally use the file system as a quick and dirty database. A directory is a lot like a table and a file name looks a lot like a key. You can store many-to-one relationships via subdirectories. And the path to a file makes a handy quick lookup key. 

The problem is a file system isn't a database. That realization doesn't hit until you reach a threshold where there are actually lots of files. Everything works perfectly until then.

When the threshold is hit, iterating a directory becomes very slow, because most file system directory data structures are not optimized for the lots-of-small-files case. Even opening a file becomes slow.

According to Steve Gibson on Security Now (@16:10), 1Password ran into this problem. 1Password stored every item in the vault in an individual file. This allowed standard file-syncing technology to be used to update only the changed files. Updating a password changes just one file, so only that file is synced.

Steve thinks this is a design mistake, but the approach makes perfect sense. It's simple and robust, which is good design given what I assume was the original, reasonable expectation of relatively small vaults.

The problem is that the file approach doesn't scale to larger vaults with thousands of files for thousands of web sites. Interestingly, decrypting files was not the bottleneck; the overhead of opening files became the problem. The slowdown came from the elaborate security checks the OS makes to validate whether a process has the rights to open a file.

The new version of 1Password uses a UUID to shard items into one of 16 files based on the first hex digit of the UUID. Given good random number generation, the files should grow more or less equally as items are added. Problem solved. Would this be your first solution when first building a product? Probably not.
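A quick Python sketch of such a sharding scheme. The file-name pattern is invented for illustration; only the "one of 16 files, keyed by the first digit of the UUID" rule comes from the description above.

```python
import uuid
from collections import Counter

def shard_filename(item_id):
    """Map a UUID to one of 16 shard files via its first hex digit."""
    return f"vault-{item_id.hex[0]}.dat"

# With good random UUIDs, items spread roughly evenly across the 16 shards.
counts = Counter(shard_filename(uuid.uuid4()) for _ in range(16000))
print(len(counts))       # 16 distinct shard files
print(min(counts.values()), max(counts.values()))  # roughly 1000 each
```

Sixteen files keeps per-sync granularity reasonable (a changed item only rewrites one shard) while avoiding the open-file overhead of thousands of tiny files.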

Apologies to 1Password if this is not a correct characterization of their situation, but even if wrong, the lesson still remains.

Categories: Architecture

Agile, but still really not Agile? What Pipeline Automation can do for you. Part 3.

Xebia Blog - Tue, 11/03/2015 - 13:07

Organizations are adopting Agile, and teams are delivering on a feature-by-feature basis, producing business value at the end of every sprint. Quite possibly this is also the case in your organization. But do these features actually reach your customer at the same pace and generate business value straight away? And while we are at it: are you able to actually use feedback from your customer and apply it in the very next sprint?

Possibly your answer is "No", which I see very often. Many companies have adopted the Agile way of working in their lines of business, but for some reason 'old problems' just do not seem to go away...

Hence the question:

"Do you fully capitalize on the benefits provided by working in an Agile manner?"

Straightforward Software Delivery Pipeline automation might help you with that.

In this post I hope to inspire you to think about how Software Development Pipeline automation can help your company to move forward and take the next steps towards becoming a truly Agile company. Not just a company adopting Agile principles, but a company that is really positioned to respond to the ever changing environment that is our marketplace today. To explain this, I take the Agile Manifesto as a starting point and work from there.

In my previous posts (post 1, post 2), I addressed Agile Principles 1 to 4 and 5 to 8. Below, I'll explain how automation can help you with Agile Principles 9 to 12.


Agile Principle 9: Continuous attention to technical excellence and good design enhances agility.

In Agile teams, technical excellence is achieved by complete openness of design, transparency on design implications, reacting to new realities and using feedback loops to continuously enhance the product. However, many Agile teams still seem to operate in the blind when it comes to feedback on build, deployment, runtime and customer experience.

Automation enhances the opportunity to implement feedback loops into the Software Delivery Process. Whatever is automated can be monitored and immediately provides insight into the actual state of your pipeline. Think of things like trend information, test results, statistics, and current health status at the press of a button.

Accumulating actual measurement data is an important step; pulling this data up to an abstract level for the complete team to understand in the blink of an eye is another. Go that extra mile and use dashboarding techniques to make the data visible. Not only is it fun to do, it is very helpful in making project status come alive.


Agile Principle 10: Simplicity--the art of maximizing the amount of work not done--is essential.

Many of us may know the quote: "Everything should be made as simple as possible, but no simpler". For "Simplicity--the art of maximizing the amount of work not done", wastes like 'over-processing' and 'over-production' are minimized by showing the product to the Product Owner as soon as possible and at frequent intervals, preventing gold plating and build-up of features in the pipeline.

Of course, the Product Owner is important, but the most important stakeholder is the customer. To get feedback from the customer, you need to get new features not only to your demo environment, but all the way to production. Automating the build, deploy, test and provisioning processes helps organizations achieve that goal.

Full automation of your software delivery pipeline provides a great mechanism for minimizing waste and maximizing throughput all the way into production. It will help you to determine when you start gold plating and position you to start doing things that really matter to your customer.

Did you know that, according to a Standish report, more than 50% of the functionality in software is rarely or never used? These aren't just marginally valued features; many are no-value features. Imagine what can be achieved when we actually know what is used and what is not.


Agile Principle 11: The best architectures, requirements, and designs emerge from self-organizing teams.

Traditionally, engineering projects were populated with specialists, based on the concept of dividing labor and thereby pushing specialists to focus on their field of expertise. The interaction designer designs the UI, the architect creates the required architectural model, the database administrator sets up the database, the integrator works on integration logic, and so forth. Everyone was working on an assigned activity, but as soon as the components were put together, nothing seemed to work.

In Agile teams it is not about a person performing a specific task; it is about the team delivering fully functional slices of the product. When a slice fails, the team fails. Working together when designing components will help find an optimal overall solution instead of many optimized sub-solutions that need to be glued together at a later stage. For this to work you need an environment that emits immediate feedback on the complete solution, and this is where build, test and deployment automation come into play. Whenever a developer checks in code, the code is to be added to the complete codebase (not just the developer's laptop) and is to be tested, deployed and verified in the actual runtime environment as well. Working with full slices gives a team the opportunity to experiment as a whole and start doing the right things.


Agile Principle 12: At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

To improve is to change; to be perfect is to change often. Self-learning teams and adjusting to new realities are key for Agile teams. However, in many organizations teams remain shielded from important transmitters of feedback like customer usage, runtime (operational) data, test results (quality) and so on.

The concept of Continuous Delivery is largely based upon receiving reliable feedback and using that feedback to improve. It is all about enabling the team to do the right things and do things right. An important aspect here is that information should not become biased. In order to steer correctly, actual measurement data needs to be accumulated and represented to the team in an understandable way. Automation of the Software Delivery process allows the organization to gather real data, perform real measurements on real events and act accordingly. This is how one can start acting on reality instead of hypotheses.


Maybe this article is starting to sound a bit like Steve Ballmer's mantra for developers (oh my goshhhh), but then for "Automation". Just give it five minutes... You work Agile, but are your customers really seeing the difference? Do they now have a better product, delivered faster? If the answer is no, what could help you with this? Instantly?


Michiel Sens.

Bringing Agile to the Next Level

Xebia Blog - Mon, 11/02/2015 - 22:22

I finished my last post with the statement that Agile will be applied on a much wider scale in the near future: within governmental organizations, industry, startups, on a personal level, you name it. But how? In my next posts I will deep-dive into this exciting story lying in front of us, in five steps:

Blogpost/Step I: Creating Awareness & Distributing Agile Knowledge
Change is a chance, not a threat. Understanding and applying the Agile mindset and toolsets will help everyone ride the wave of change with more pleasure and success. This is the main reason why I've joined initiatives like Nederland Kantelt, EduScrum, Wikispeed and Delft University's D.R.E.A.M. Hall.

Blogpost/Step II: Fit Agile for Purpose
The Agile Manifesto was originally written for software. Lots of variants of the manifesto have emerged over the last couple of years, serving different sectors and products. This is a good thing, as long as the core values of the Agile Manifesto are respected.

However, Agile is not applicable to everything. For example, Boeing will never apply Scrum directly for producing critical systems. They're applying Scrum for less critical parts and R&D processes. For determining the right approach they use the Cynefin framework. In this post I will explain this framework, making it a lot easier to see where you could apply Agile and where you should be careful.

Blogpost/Step III: Creating a Credible Purpose or ‚ÄúWhy‚ÄĚ
You can implement a new framework or organization, hire the brightest minds and have loads of capital; in the end it all boils down to real passion and belief. Every purpose should be spot on in hitting the center of the Golden Circle. But how do you create one?

Blogpost/Step IV: Breaking the Status Quo and Igniting Entrepreneurship
Many corporate organizations are busy implementing (or have implemented) existing frameworks like SAFe or successful Agile models from companies like Netflix and Spotify. But the culture change which goes with it is the most important step. How to spark a startup mentality in your organization? How to create real autonomy?

Blogpost/Step V: Creating Organic Organizations
Many Agile implementations do not transform organizations into being intrinsically Agile. To enable this, organizations should evolve organically, like a Holacracy. They will become stronger and stronger through setbacks and uncertain circumstances. Organic organizations will be more resilient and anti-fragile. In fact, it's exactly how nature works. But how can you work towards this ideal situation?

Paper: Coordination Avoidance in Distributed Databases By Peter Bailis

Peter Bailis has released the work of a lifetime; his dissertation is now available online: Coordination Avoidance in Distributed Databases.

The topic Peter is addressing is summed up nicely by his thesis statement: 

Many semantic requirements of database-backed applications can be efficiently enforced without coordination, thus improving scalability, latency, and availability.

I'd like to say I've read the entire dissertation and can offer cogent, insightful analysis, but that would be a lie. I have, though, watched several of Peter's videos (see Related Articles). He's doing important and interesting work that, as much university research has done, may change the future of what everyone is doing.

From the introduction:

The rise of Internet-scale geo-replicated services has led to upheaval in the design of modern data management systems. Given the availability, latency, and throughput penalties associated with classic mechanisms such as serializable transactions, a broad class of systems (e.g., “NoSQL”) has sought weaker alternatives that reduce the use of expensive coordination during system operation, often at the cost of application integrity. When can we safely forego the cost of this expensive coordination, and when must we pay the price?

In this thesis, we investigate the potential for coordination avoidance—the use of as little coordination as possible while ensuring application integrity—in several modern data-intensive domains. We demonstrate how to leverage the semantic requirements of applications in data serving, transaction processing, and web services to enable more efficient distributed algorithms and system designs. The resulting prototype systems demonstrate regular order-of-magnitude speedups compared to their traditional, coordinated counterparts on a variety of tasks, including referential integrity and index maintenance, transaction execution under common isolation models, and database constraint enforcement. A range of open source applications and systems exhibit similar results.

Related Articles 
Categories: Architecture

How Shopify Scales to Handle Flash Sales from Kanye West and the Superbowl

This is a guest repost by Christophe Limpalair, creator of Scale Your Code.

In this article, we take a look at methods used by Shopify to make their platform resilient. Not only is this interesting to read about, but it can also be practical and help you with your own applications.

Shopify's Scaling Challenges

Shopify, an ecommerce solution, handles about 300 million unique visitors a month, but as you'll see, these 300M people don't show up in an evenly distributed fashion.

One of their biggest challenges is what they call "flash sales". These flash sales happen when tremendously popular stores sell something at a specific time.

For example, Kanye West might sell new shoes. Combined with Kim Kardashian, they have a following of 50 million people on Twitter alone.

They also have customers who advertise during the Superbowl. Because of this, they have no idea how much traffic to expect. It could be 200,000 people showing up at 3:00 for a special sale that ends within a few hours.

How does Shopify scale to these sudden increases in traffic? Even if they can't scale that well for a particular sale, how can they make sure it doesn't affect other stores? This is what we will discuss in the next sections, after briefly explaining Shopify's architecture for context.

Shopify's Architecture
Categories: Architecture

3 Fights Each Day

"Sometimes the prize is not worth the costs. The means by which we achieve victory are as important as the victory itself."
― Brandon Sanderson

Every day presents us with new challenges, whether it's a personal struggle, a challenge at work, or something that requires you to stand and deliver.

To find your strength.

To summon your courage, or find your motivation, or to dig deep and give it your all.

Adapt, Adjust, Or Avoid Situations

Sometimes you wonder whether the struggle is worth it. Then other times you break through. And other times you wonder why it was even a struggle at all.

The struggle is your growth. And every struggle is a chance for personal growth and self-actualization. It's also a chance to really build your self-awareness.

For example, how well can you read a situation and anticipate how well you will do? In every situation, you can either Adapt, Adjust, or Avoid. Adapt means you change yourself for the situation. Adjust means you change the situation to better suit you. And Avoid means stay away from situations where you would be like a fish out of water. If you don't like roller coasters, then don't get on them.

So every situation is a great opportunity to gain insight into yourself, as well as to learn how to read situations, and people, much better. And the faster you adapt, the more fit you will be to survive, and ultimately thrive.

Nature favors the flexible.

The 3 Fights We Fight Each Day

But aside from Adapting, Adjusting, and Avoiding situations, it also helps to have a simple mental model to frame your challenges each day. A former Navy SEAL frames it for us really well. He says we fight 3 fights each day:

  1. Inside you
  2. The enemy
  3. The "system"

Maybe you can relate? Each day you wake up, your first fight is with yourself. Can you summon your best energy? Can you get into your most resourceful state? Can you find your motivation? Can you drop a bad habit, or add a good one? Can you get into your best frame of mind to tackle the challenges before you?

Winning this fight sets the stage for the rest.

The second fight is what most people would consider the actual fight. It's the challenge you are up against. Maybe it's winning a deal. Maybe it's doing your workout. Maybe it's completing an assignment or task at work. Either way, if you lost your first fight, this one is going to be even tougher.

The third fight is with the "system." Everybody operates within a system. It might be your politics, policies, or procedures. You might be in a school, a corporation, an institution, on a team, or within an organization. Either way, there are rules and expectations. There are ways for things to be done. Sometimes they work with you. Sometimes they work against you. And herein lies the fight.

In my latest post, I share some simple ways from our Navy SEAL friends to survive and thrive against these 3 fights:

3 Fights We Fight Each Day

You can read it quickly. But use the tools inside to actually practice and prepare, so you can respond better to your most challenging situations. If you practice the breathing techniques and the techniques for visualization, you will be using the same tools that the world's best athletes, the Navy SEALs, the best execs, and the highest achievers use... to do more, be more, and achieve more... in work and life.

Categories: Architecture, Programming