
Software Development Blogs: Programming, Software Testing, Agile Project Management


Architecture

Sponsored Post: Surge, Redis Labs, Jut.io, VoltDB, Datadog, Power Admin, MongoDB, SignalFx, InMemory.Net, Couchbase, VividCortex, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • VoltDB's in-memory SQL database combines streaming analytics with transaction processing in a single, horizontal scale-out platform. Customers use VoltDB to build applications that process streaming data the instant it arrives to make immediate, per-event, context-aware decisions. If you want to join our ground-breaking engineering team and make a real impact, apply here.  

  • At Scalyr, we're analyzing multi-gigabyte server logs in a fraction of a second. That requires serious innovation in every part of the technology stack, from frontend to backend. Help us push the envelope on low-latency browser applications, high-speed data processing, and reliable distributed systems. Help extract meaningful data from live servers and present it to users in meaningful ways. At Scalyr, you’ll learn new things, and invent a few of your own. Learn more and apply.

  • UI Engineer. AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data. AppDynamics, a leader in next-generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • Surge 2015. Want to mingle with some of the leading practitioners in the scalability, performance, and web operations space? Looking for a conference that isn't just about pitching you highly polished success stories, but that actually puts an emphasis on learning from real world experiences, including failures? Surge is the conference for you.

  • Your event could be here. How cool is that?
Cool Products and Services
  • MongoDB Management Made Easy. Gain confidence in your backup strategy. MongoDB Cloud Manager makes protecting your mission critical data easy, without the need for custom backup scripts and storage. Start your 30 day free trial today.

  • In a recent benchmark for NoSQL databases on the AWS cloud, Redis Labs Enterprise Cluster's performance obliterated Couchbase, Cassandra and Aerospike in this real-life, write-intensive use case. A full backstage pass and all the juicy details are available in this downloadable report.

  • Real-time correlation across your logs, metrics and events.  Jut.io just released its operations data hub into beta and we are already streaming in billions of log, metric and event data points each day. Using our streaming analytics platform, you can get real-time monitoring of your application performance, deep troubleshooting, and even product analytics. We allow you to easily aggregate logs and metrics by micro-service, calculate percentiles and moving window averages, forecast anomalies, and create interactive views for your whole organization. Try it for free, at any scale.

  • In a recent benchmark conducted on Google Compute Engine, Couchbase Server 3.0 outperformed Cassandra by 6x in resource efficiency and price/performance. The benchmark sustained over 1 million writes per second using only one-sixth as many nodes and one-third as many cores as Cassandra, resulting in 83% lower cost than Cassandra. Download Now.

  • Datadog is a monitoring service for scaling cloud infrastructures that bridges together data from servers, databases, apps and other tools. Datadog provides Dev and Ops teams with insights from their cloud environments that keep applications running smoothly. Datadog is available for a 14 day free trial at datadoghq.com.

  • Here's a little quiz for you: What do these companies all have in common? Symantec, RiteAid, CarMax, NASA, Comcast, Chevron, HSBC, Sauder Woodworking, Syracuse University, USDA, and many, many more? Maybe you guessed it? Yep! They are all customers who use and trust our software, PA Server Monitor, as their monitoring solution. Try it out for yourself and see why we’re trusted by so many. Click here for your free, 30-Day instant trial download!

  • Turn chaotic logs and metrics into actionable data. Scalyr replaces all your tools for monitoring and analyzing logs and system metrics. Imagine being able to pinpoint and resolve operations issues without juggling multiple tools and tabs. Get visibility into your production systems: log aggregation, server metrics, monitoring, intelligent alerting, dashboards, and more. Trusted by companies like Codecademy and InsideSales. Learn more and get started with an easy 2-minute setup. Or see how Scalyr is different if you're looking for a Splunk alternative or Loggly alternative.

  • SignalFx: just launched an advanced monitoring platform for modern applications that's already processing 10s of billions of data points per day. SignalFx lets you create custom analytics pipelines on metrics data collected from thousands or more sources to create meaningful aggregations--such as percentiles, moving averages and growth rates--within seconds of receiving data. Start a free 30-day trial!

  • InMemory.Net provides a .NET-native in-memory database for analysing large amounts of data. It runs natively on .NET and provides native .NET, COM and ODBC APIs for integration. It also has an easy-to-use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex goes beyond monitoring and measures the system's work on your MySQL and PostgreSQL servers, providing unparalleled insight and query-level analysis. This unique approach ultimately enables your team to work more effectively, ship more often, and delight more customers.

  • MemSQL provides a distributed in-memory database for high-value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost-effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30-day trial here: http://www.memsql.com/

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager: Monitor physical, virtual and cloud applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.


Categories: Architecture

Notes on setting up an ELK stack and logstash-forwarder

Agile Testing - Grig Gheorghiu - Tue, 08/04/2015 - 00:22
I set up the ELK stack a while ago and I want to jot down some notes on installing and configuring it.  I was going to write "before I forget how to do it", but that's not true anymore, because I have ansible playbooks and roles for this setup. As I said before, using ansible as executable documentation has been working really well for me. I still need to write this blog post though just so I refresh my memory about the bigger picture of ELK when I revisit it next.

Some notes:

  • Used Jeff Geerling's ansible-role-logstash for the main setup of the ELK server I have
  • Used logstash-forwarder (used to be called lumberjack) on all servers that need to send their logs to the ELK server
  • Wrapped the installation and configuration of logstash-forwarder into a simple ansible role which installs the .deb file for this package and copies over a templatized logstash-forwarder.conf file; here is my ansible template for this file
  • Customized the lumberjack input config file on the ELK server (still called lumberjack, but actually used in conjunction with the logstash-forwarder agents running on each box that sends its logs to ELK); here is my /etc/logstash/conf.d/01-lumberjack-input.conf file
  • Added my app-specific config file on the ELK server; here is my /etc/logstash/conf.d/20-app.conf file with a few things to note
    • the grok stanza applies the 'valid' tag only to the lines that match the APPLOGLINE pattern (see below for more on this pattern)
    • the 'payload' field of any line that matches the APPLOGLINE pattern is parsed as JSON; this is nice because I can change the names of the fields in the JSON object in the log file and all these fields will be individually shown in ELK
    • all lines that are not tagged as 'valid' will be dropped (see the filter sketch after this list)
  • Created a file called myapp in the /opt/logstash/patterns directory on the ELK server; this file contains all my app-specific patterns referenced in the 20-app.conf file above, in this example just 1 pattern: 
    • APPLOGLINE \[myapp\] %{TIMESTAMP_ISO8601:timestamp}Z\+00:000 \[%{WORD:severity}\] \[myresponse\] \[%{NUMBER:response}\] %{GREEDYDATA:payload}
    • this pattern uses predefined logstash patterns such as TIMESTAMP_ISO8601, WORD, NUMBER and GREEDYDATA
    • note the last field called payload; this is the JSON payload that gets parsed by logstash
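
Putting those pieces together, here is a minimal sketch of what the filter section of a 20-app.conf-style file could look like. The exact file isn't reproduced in this post, so the directory, field and tag names are illustrative:

filter {
  grok {
    # Assumes the custom pattern file lives in the patterns directory mentioned above
    patterns_dir => [ "/opt/logstash/patterns" ]
    match => [ "message", "%{APPLOGLINE}" ]
    add_tag => [ "valid" ]
  }
  # Lines that didn't match APPLOGLINE never got the 'valid' tag, so drop them
  if "valid" not in [tags] {
    drop { }
  }
  # Parse the trailing payload field as JSON so its keys become searchable fields
  json {
    source => "payload"
  }
}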



Seven of the Nastiest Anti-patterns in Microservices

Daniel Bryant gave an energetic talk at Devoxx UK 2015 on lessons learned from over five years of experience with microservice based projects. The talk: The Seven Deadly Sins of Microservices: Redux (video, slides).

If you don't want to risk your immortal API then be sure to avoid:

  1. Lust - using the latest and greatest tech with the idea it will solve all your problems. It won't. Do you really need microservices at all? If you do go microservices do you really need new tech in your stack? Choose boring technology. Know why you are choosing something. A monolith can perform better and because a monolith can be developed faster it may also be the correct choice in proving your business case 
  2. Gluttony - excessive communication protocols. Projects often have a crazy number of protocols for gluing parts together. Standardize on the glue across an organization. Choose one synchronous and one asynchronous protocol. Don't gold-plate.
  3. Greed - all your service are belong to us. Do not underestimate the impact moving to a microservice approach will have on your organization. Your business organization needs to change to take advantage of microservices. Typically orgs will have silos between Dev, QA, and Ops with even more silos inside each silo like front-end, middleware, and database. Use cross functional teams like Spotify, Amazon, and Gilt. Connect rather than divide your company. 
  4. Sloth - creating a distributed monolith. If you can't deploy your services independently then they aren't microservices. Decouple. Transform data at a less central part of the stack. Some options are schema-first design and consumer-driven contracts.
  5. Wrath - blowing up when bad things happen. Bad things happen all the time so you need to test. Microservices are inherently distributed so you have network problems to deal with that weren't a problem in a monolith. The book Release It! has a lot of good fault tolerance patterns. Operationally you need to implement continuous delivery, agile, and devops. Test for failures using real life disaster scenarios testing, live injection failure testing, and something like Netflix's Simian Army.
  6. Envy - the shared single domain fallacy. A lot of time has been spent building and perfecting the model of a single domain. There's one big database with a unified schema. Microservices decompose a system along different lines and that can cause contention in an organization. Reports can be generated using pull by service or data pumps with events. 
  7. Pride - testing in the world of transience. Does your stuff really work? We all make mistakes. Think testing at the developer level, operational level, and business level. Surprisingly little has been written about testing microservices. Invest in your build pipeline testing. Some tools: Serenity BDD, WireMock/Saboteur, Jenkins Performance Plugin. Testing in production is an emerging idea with companies that deploy many microservices.
Categories: Architecture

Building IntelliJ plugins from the command line

Xebia Blog - Mon, 08/03/2015 - 13:16

For a few years already, IntelliJ IDEA has been my IDE of choice. Recently I dove into the world of plugin development for IntelliJ IDEA and was unhappily surprised. Plugin development relies entirely on IDE features. It looked hard to do the actual plugin compilation and packaging from a build script. The JetBrains folks simply have not catered for that. Unless you're using TeamCity as your CI tool, you're out of luck.

For me it makes no sense writing code if:

  1. it can not be compiled and packaged from the command line
  2. the code can not be compiled and tested on a CI environment
  3. IDE configurations can not be generated from the build script

Google did not help out a lot. Tomasz Dziurko put me in the right direction.

In order to build and test a plugin, the following needs to be in place:

  1. First of all you'll need IntelliJ IDEA. This is quite obvious. The Plugin DevKit plugins need to be installed. If you want to create a language plugin you might want to install Grammar-Kit too.
  2. An IDEA SDK needs to be registered. The SDK can point to your IntelliJ installation.

The plugin module files are only slightly different from your average project.

Update: I ran into some issues with forms and language code generation and added some updates at the end of this post.

Compiling and testing the plugin

Now for the build script. My build tool of choice is Gradle. My plugin code adheres to the default Gradle project structure.

First thing to do is to get a hold of the IntelliJ IDEA libraries in an automated way. Since the IDEA libraries are not available via Maven repos, an IntelliJ IDEA Community Edition download is probably the best option to get a hold of the libraries.

The plan is as follows: download the Linux version of IntelliJ IDEA, and extract it in a predefined location. From there, we can point to the libraries and subsequently compile and test the plugin. The libraries are Java, and as such platform independent. I picked the Linux version since it has a nice, simple file structure.

The following code snippet caters for this:

apply plugin: 'java'

// Pick the Linux version, as it is a tar.gz we can simply extract
def IDEA_SDK_URL = 'http://download.jetbrains.com/idea/ideaIC-14.0.4.tar.gz'
def IDEA_SDK_NAME = 'IntelliJ IDEA Community Edition IC-139.1603.1'

configurations {
    ideaSdk
    bundle // dependencies bundled with the plugin
}

dependencies {
    ideaSdk fileTree(dir: 'lib/sdk/', include: ['*/lib/*.jar'])

    compile configurations.ideaSdk
    compile configurations.bundle
    testCompile 'junit:junit:4.12'
    testCompile 'org.mockito:mockito-core:1.10.19'
}

// IntelliJ IDEA can still run on a Java 6 JRE, so we need to take that into account.
sourceCompatibility = 1.6
targetCompatibility = 1.6

task downloadIdeaSdk(type: Download) {
    sourceUrl = IDEA_SDK_URL
    target = file('lib/idea-sdk.tar.gz')
}

task extractIdeaSdk(type: Copy, dependsOn: [downloadIdeaSdk]) {
    def zipFile = file('lib/idea-sdk.tar.gz')
    def outputDir = file("lib/sdk")

    from tarTree(resources.gzip(zipFile))
    into outputDir
}

compileJava.dependsOn extractIdeaSdk

class Download extends DefaultTask {
    @Input
    String sourceUrl

    @OutputFile
    File target

    @TaskAction
    void download() {
       if (!target.parentFile.exists()) {
           target.parentFile.mkdirs()
       }
       ant.get(src: sourceUrl, dest: target, skipexisting: 'true')
    }
}

If parallel test execution does not work for your plugin, you'd better turn it off as follows:

test {
    // Avoid parallel execution, since the IntelliJ boilerplate is not up to that
    maxParallelForks = 1
}
The plugin deliverable

Obviously, the whole build process should be automated. That includes the packaging of the plugin. A plugin is simply a zip file with all libraries together in a lib folder.

task dist(type: Zip, dependsOn: [jar, test]) {
    from configurations.bundle
    from jar.archivePath
    rename { f -> "lib/${f}" }
    into project.name
    baseName project.name
}

build.dependsOn dist
Handling IntelliJ project files

We also need to generate IntelliJ IDEA project and module files so the plugin can live within the IDE. Telling the IDE it's dealing with a plugin opens some nice features, mainly the ability to run the plugin from within the IDE. Anton Arhipov's blog post put me on the right track.

The Gradle idea plugin helps out in creating those files. This works out of the box for your average project, but for plugins IntelliJ expects some things differently. The project files should mention that we're dealing with a plugin project and the module file should point to the plugin.xml file required for each plugin. Also, the SDK libraries are not to be included in the module file; so, I excluded those from the configuration.

The following code snippet caters for this:

apply plugin: 'idea'

idea {
    project {
        languageLevel = '1.6'
        jdkName = IDEA_SDK_NAME

        ipr {
            withXml {
                it.node.find { node ->
                    node.@name == 'ProjectRootManager'
                }.'@project-jdk-type' = 'IDEA JDK'

                logger.warn "=" * 71
                logger.warn " Configured IDEA JDK '${jdkName}'."
                logger.warn " Make sure you have it configured in IntelliJ before opening the project!"
                logger.warn "=" * 71
            }
        }
    }

    module {
        scopes.COMPILE.minus = [ configurations.ideaSdk ]

        iml {
            beforeMerged { module ->
                module.dependencies.clear()
            }
            withXml {
                it.node.@type = 'PLUGIN_MODULE'
                //  <component name="DevKit.ModuleBuildProperties" url="file://$MODULE_DIR$/src/main/resources/META-INF/plugin.xml" />
                def cmp = it.node.appendNode('component')
                cmp.@name = 'DevKit.ModuleBuildProperties'
                cmp.@url = 'file://$MODULE_DIR$/src/main/resources/META-INF/plugin.xml'
            }
        }
    }
}
Put it to work!

Combining the aforementioned code snippets will result in a build script that can be run on any environment. Have a look at my idea-clock plugin for a working example.

Update 1: Forms

For an IntelliJ plugin to use forms, it turns out some extra work has to be performed.
This difference only becomes obvious once you compare the plugin built by IntelliJ with the one built by Gradle:

  1. Include a bunch of helper classes
  2. Instrument the form classes

Including more files in the plugin was easy enough. Check out this commit to see what has to be added. Those classes are used as "helpers" for the form after instrumentation. For instrumentation an Ant task is available. This task can be loaded in Gradle and used as a last step of compilation.
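
For illustration, here is a hedged sketch of how such an Ant task could be wired into Gradle. The task and class names (instrumentIdeaExtensions, com.intellij.ant.InstrumentIdeaExtensions, shipped with the SDK libraries) are assumptions based on IntelliJ's own Ant build, not something this post confirms:

// Hypothetical sketch: load IntelliJ's form instrumentation Ant task from the
// extracted SDK and run it over the compiled classes as a last compilation step.
task instrumentForms(dependsOn: compileJava) {
    doLast {
        ant.taskdef(name: 'instrumentIdeaExtensions',
                classname: 'com.intellij.ant.InstrumentIdeaExtensions',
                classpath: configurations.ideaSdk.asPath)
        ant.instrumentIdeaExtensions(srcdir: 'src/main/java',
                destdir: compileJava.destinationDir,
                classpath: sourceSets.main.compileClasspath.asPath)
    }
}

classes.dependsOn instrumentForms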

Once I knew what to look for, this post helped me out: How to manage development life cycle of IntelliJ plugins with Maven, along with this build script.

Update 2: Language code generation

The JetBrains folks promote using JFlex to build the lexer for your custom language. In order to use this from Gradle, a custom version of JFlex needs to be used. This was used in an early version of the FitNesse plugin.

Stuff The Internet Says On Scalability For July 31st, 2015

Hey, it's HighScalability time:


Where does IBM's Watson or Google Translate fit? (SciencePorn)
  • 40Tb/s: Bandwidth for Windows 10 launch; 4.04B: Facebook Q2 revenue; 37M: Americans who don't use the web;
  • Quotable Quotes:
    • @BoredElonMusk: We would have already discovered Earth 6.0 if NASA got the same budget as the DOD.
    • David Blight~ Something I've always believed as a historian and more and more it seems true to me is what really moves history, or brings about change in rather sudden and almost always unpredictable ways, is events. 
    • Quentyn Kennemer: Tom Brady replaces Android with iPhone, gets suspended 4 games
    • @BenedictEvans: Apple Maps has ~300m users to iOS GMaps 100m, of 4-500m iPhones. Spotify has 20m paying & 70m free users. And then there’s YouTube
    • Ben: Some scale problems should go unsolved. No. Most scale problems should go unsolved.
    • @mikedicarlo: 3.5 million Redis ops per/sec across our cluster. Wondering how that compares with other production deployments out there. 
    • @Carnage4Life: $1 billion valuation for a caller ID app with $800K in revenues? Unicorn valuations are officially meaningless 

  • Is shooting a trespasser filming a video of your potentially intimate moments considered a crime? Kentucky man shoots down drone hovering over his backyard

  • Death through premature scaling. Larry Berman determined this was the cause of death of RewardMe, his once scrappy startup. In the next turn of the wheel the dharma is: Be a 1-man growth team; Get customers online as opposed to through a long sales cycle; Don’t hold inventory; Focus on product and support. The new enlightenment: Don’t scale until you’re ready for it. Cash is king, and you need to extend your runway as long as possible until you’ve found product-market fit.

  • What about scaling for the rest of us? That's the topic addressed in Scaling Ruby Apps to 1000 Requests per Minute - A Beginner's Guide. A very good resource. It explains the path of a request through Heroku, dispels some myths (like the idea that scaling up makes a system faster), explains queue time, and covers other good stuff.

  • Not quite as sexy as Zero Point energy, but 3D Xpoint memory sounds pretty cool: Intel and Micron have unveiled what appears to be the holy grail of memory. Called 3D XPoint (pronounced "cross point"), this is an entirely new type of non-volatile memory, with roughly 1,000 times the performance and 1,000 times the endurance of conventional NAND flash, while also being 10 times denser than conventional DRAM.

  • So what is 3D XPoint Memory really? Here's a great analysis at DailyTech by Jason Mick. More than analysis, it's a detective story. Jason puts together clues from history and recently filed patents to deduce that this new wonder RAM is most likely PRAM, or Phase-change Memory, which stores data "in the form of a phase change to a tiny atomic-level structure." Jason thinks that in many usage scenarios "it may be possible to run exclusively off PRAM." Forgetting just got even harder.

  • Damn. I may die after all. The Brain vs Deep Learning Part I: Computational Complexity — Or Why the Singularity Is Nowhere Near: My model shows that it can be estimated that the brain operates at least 10^21 operations per second. With current rates of growth in computational power we could achieve supercomputers with brain-like capabilities by the year 2037, but estimates after the year 2080 seem more realistic.

  • It has always struck me that telcos, who desperately want to get into the cloud business where they are just an also-ran, control some of the most desired potential colo space in the world: cell towers. Turn those towers into location-aware clouds and we can really get some revolutionary edge computing going on. Transiting traffic back to a centralized cloud is such a waste. Could 'Supercomputing at the Edge' provide a scalable platform for new mobile services?

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

How Debugging is Like Hunting Serial Killers

Warning: A quote I use in this article is quite graphic. That's the power of the writing, but if you are at all squirmy you may want to turn back now.

Debugging requires a particular sympathy for the machine. You must be able to run the machine and networks of machines in your mind while simulating what-ifs based on mere wisps of insight.

There's another process that is surprisingly similar to debugging: hunting down serial killers.

I ran across this parallel while reading Mindhunter: Inside the FBI's Elite Serial Crime Unit by John E. Douglas, an FBI profiler whose specialty is the dark debugging of twisted human minds.

Here's how John describes profiling:

You have to be able to re-create the crime scene in your head. You need to know as much as you can about the victim so that you can imagine how she might have reacted. You have to be able to put yourself in her place as the attacker threatens her with a gun or a knife, a rock, his fists, or whatever. You have to be able to feel her fear as he approaches her. You have to be able to feel her pain as he rapes her or beats her or cuts her. You have to try to imagine what she was going through when he tortured her for his sexual gratification. You have to understand what it’s like to scream in terror and agony, realizing that it won’t help, that it won’t get him to stop. You have to know what it was like. And that is a heavy burden to have to carry.

Serial killers are like bugs in the societal machine. They hide. They blend in. They can pass for "normal" which makes them tough to find. They attack weakness causing untold damage until caught. And they will keep causing damage until caught. They are always hunting for opportunity.

After reading the book I'm quite grateful that the only bugs I've had to deal with are of the computer variety. The human bugs are very very scary.

Here are some other quotes from the book you may also appreciate:

Categories: Architecture

What is Insight?

"A moment's insight is sometimes worth a life's experience." -- Oliver Wendell Holmes, Sr.

Some say we’re in the Age of Insight.  Others say insight is the new currency in the Digital Economy.

And still others say that insight is the backbone of innovation.

Either way, we use “insight” an awful lot without talking about what insight actually is.

So, what is insight?

I thought it was time to finally do a deeper dive on what insight actually is.  Here is my elaboration of “insight” on Sources of Insight:

Insight

You can think of it as “insight explained.”

The simple way that I think of insight, or those “ah ha” moments, is by remembering a question Ward Cunningham uses a lot:

“What did you learn that you didn’t expect?” or “What surprised you?”

Ward uses these questions to reveal insights, rather than have somebody tell him a bunch of obvious or uneventful things he already knows.  For example, if you ask somebody what they learned at their presentation training, they’ll tell you that they learned how to present more effectively, speak more confidently, and communicate their ideas better.

No kidding.

But if you instead ask them, “What did you learn that you didn’t expect?” they might actually reveal some insight and say something more like this:

“Even though we say don’t shoot the messenger all the time, you ARE the message.”

Or

“If you win the heart, the mind follows.”

It’s the non-obvious stuff that surprises you (at least at first).  Or sometimes, insight strikes us as something that should have been obvious all along and becomes the new obvious, or the new normal.

Ward used this insights gathering technique to more effectively share software patterns.  He wanted stories and insights from people, rather than descriptions of the obvious.

I’ve used it myself over the years and it really helps get to deeper truths.  If you are a truth seeker or a lover of insights, you’ll enjoy how you can tease out more insights, just by changing your questions.   For example, if you have kids, don’t ask, “How was your day?”   Ask them, “What was the favorite part of your day?” or “What did you learn that surprised you?”

Wow, I know this is a short post, but I almost left without defining insight.

According to the dictionary, insight is “The capacity to gain an accurate and deep intuitive understanding of a person or thing.”   Or you may see insight explained as inner sight, mental vision, or wisdom.

I like Edward de Bono’s simple description of insight as “Eureka moments.”

Some people count steps in their day.  I count my “ah-ha” moments.  After all, the most important ingredient of effective ideation and innovation is …yep, you guessed it – insight!

For a deeper dive on the power of insight, read my page on Insight Explained, on SourcesOfInsight.com.

Categories: Architecture, Programming

A Well Known But Forgotten Trick: Object Pooling

This is a guest repost by Alex Petrov. Find the original article here.

Most problems are quite straightforward to solve: when something is slow, you can either optimize it or parallelize it. When you hit a throughput barrier, you partition the workload across more workers. But when you face problems that involve garbage collection pauses, or you simply hit the limits of the virtual machine you're working with, it gets much harder to fix them.

When you're working on top of a VM, you may face things that are simply out of your control, namely time drifts and latency. Thankfully, there are enough battle-tested solutions that require only a bit of understanding of how the JVM works.

If you can serve 10K requests per second while conforming to certain performance parameters (memory and CPU), it doesn't automatically mean that you'll be able to scale linearly up to 20K. If you're allocating too many objects on the heap, or wasting CPU cycles on something that can be avoided, you'll eventually hit the wall.

The simplest (yet underrated) way of saving on memory allocations is object pooling. Even though the concept sounds similar to pooling connections and socket descriptors, there's a slight difference.

When we're talking about socket descriptors, we have a limited, rather small (tens, hundreds, or at most thousands) number of descriptors to go through. These resources are pooled because of the high initialization cost (establishing a connection, performing a handshake over the network, memory-mapping a file, or whatever else). In this article we'll talk about pooling larger numbers of short-lived objects which are not so expensive to initialize, in order to save allocation and deallocation costs and avoid memory fragmentation.
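
To make the idea concrete, here is a minimal, single-threaded object pool sketch in Java; real pools typically add a capacity bound and thread safety:

import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Supplier;

public final class ObjectPool<T> {
    private final Queue<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;

    public ObjectPool(Supplier<T> factory) {
        this.factory = factory;
    }

    // Reuse a pooled instance when one is available, otherwise allocate a new one.
    public T borrow() {
        T item = free.poll();
        return item != null ? item : factory.get();
    }

    // Hand an instance back to the pool; the caller must reset its state first.
    public void release(T item) {
        free.offer(item);
    }
}

A caller would create one pool per object type, e.g. new ObjectPool<>(StringBuilder::new), borrow an instance per request, and release it when done instead of leaving it to the garbage collector.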

Object Pooling
Categories: Architecture

Algolia's Fury Road to a Worldwide API Part 3

The most frequent questions we answer for developers and devops are about our architecture and how we achieve such high availability. Some of them are very skeptical about high availability with bare metal servers, while others are skeptical about how we distribute data worldwide. However, the question I prefer is “How is it possible for a startup to build an infrastructure like this?” It is true that our current architecture is impressive for a young company:

  • Our high-end dedicated machines are hosted in 13 worldwide regions with 25 data-centers

  • Our master-master setup replicates our search engine on at least 3 different machines

  • We process over 6 billion queries per month

  • We receive and handle over 20 billion write operations per month

Just like Rome wasn't built in a day, our infrastructure wasn't either. This series of posts will explore the 15 instrumental steps we took when building our infrastructure. I will even discuss our outages and bugs so that you can understand how we used them to improve our architecture.

The first blog post of this series focused on our early days in beta and the second post on the first 18 months of the service, including our first outages. In this last post, I will describe how we transformed our "startup" architecture into something new that was able to meet the expectations of big public companies.

Step 11: February 2015 Launch of our Synchronized Worldwide infrastructure
Categories: Architecture

The monolithic frontend in the microservices architecture

Xebia Blog - Mon, 07/27/2015 - 16:39

When you are implementing a microservices architecture you want to keep services small. This should also apply to the frontend. If you don't, you will only reap the benefits of microservices for the backend services. An easy solution is to split your application up into separate frontends. When you have a big monolithic frontend that can’t be split up easily, you have to think about making it smaller. You can decompose the frontend into separate components independently developed by different teams.

Imagine you are working at a company that is switching from a monolithic architecture to a microservices architecture. The application you are working on is a big client-facing web application. You have recently identified a couple of self-contained features and created microservices to provide each functionality. Your former monolith has been carved down to the bare essentials for providing the user interface, which is your public-facing web frontend. This microservice has only one functionality, which is providing the user interface. It can be scaled and deployed separately from the other backend services.

You are happy with the transition: individual services can fit in your head, multiple teams can work on different applications, and you are speaking at conferences about your experiences with the transition. However you’re not quite there yet: the frontend is still a monolith that spans the different backends. This means that on the frontend you still have some of the same problems you had before switching to microservices. The image below shows a simplification of the current architecture.

Single frontend

With a monolithic frontend you never get the flexibility to scale across teams as promised by microservices.

Backend teams can't deliver business value without the frontend being updated since an API without a user interface doesn't do much. More backend teams means more new features, and therefore more pressure is put on the frontend team(s) to integrate new features. To compensate for this it is possible to make the frontend team bigger or have multiple teams working on the same project. Because the frontend still has to be deployed in one go, teams cannot work independently. Changes have to be integrated in the same project and the whole project needs to be tested since a change can break other features.
Another option is to have the backend teams integrate their new features with the frontend and submit a pull request. This helps in dividing the work, but to do this effectively a lot of knowledge has to be shared across the teams to get the code consistent and on the same quality level. This would basically mean that the teams are not working independently. With a monolithic frontend you never get the flexibility to scale across teams as promised by microservices.

Besides not being able to scale, there is also the classical overhead of a separate backend and frontend team. Each time there is a breaking change in the API of one of the services, the frontend has to be updated. Especially when a feature is added to a service, the frontend has to be updated to ensure your customers can even use the feature. If you have a frontend small enough, it can be maintained by a team which is also responsible for one or more services coupled to the frontend. This means that there is no overhead in cross-team communication. But because the frontend and the backend cannot be worked on independently, you are not really doing microservices. For an application which is small enough to be maintained by a single team, it is probably a good idea not to do microservices.

If you do have multiple teams working on your platform but had multiple smaller frontend applications instead, there would be no problem. Each frontend would act as the interface to one or more services. Each of these services will have their own persistence layer. This is known as vertical decomposition. See the image below.

frontend-per-service

When splitting up your application you have to make sure you are making the right split, which is the same as for the backend services. First you have to recognize bounded contexts in which your domain can be split. A bounded context is a partition of the domain model with a clear boundary. Within the bounded context there is high coupling and between different bounded contexts there is low coupling. These bounded contexts will be mapped to micro services within your application. This way the communication between services is also limited. In other words you limit your API surface. This in turn will limit the need to make changes in the API and ensure truly separately operating teams.

Often you are unable to separate your web application into multiple entirely separate applications. A consistent look and feel has to be maintained and the application should behave as single application. However the application and the development team are big enough to justify a microservices architecture. Examples of such big client facing applications can be found in online retail, news, social networks or other online platforms.

Although a total split of your application might not be possible, it might be possible to have multiple teams working on separate parts of the frontend as if they were entirely separate applications. Instead of splitting your web app entirely you are splitting it up in components, which can be maintained separately. This way you are doing a form of vertical decomposition while you still have a single consistent web application. To achieve this you have a couple of options.

Share code

You can share code to make sure that the look and feel of the different frontends is consistent. However then you risk coupling services via the common code. This could even result in not being able to deploy and release separately. It will also require some coordination regarding the shared code.

Therefore when you are going to share code it is generally a good idea to think about the API that it’s going to provide. Calling your shared library “common”, for example, is generally a bad idea. The name suggests developers should put any code which can be shared by some other service in the library. Common is not a functional term, but a technical term. This means that the library doesn’t focus on providing a specific functionality. This will result in an API without a specific goal, which will be subject to change often. This is especially bad for microservices, where multiple teams have to migrate to the new version when the API has been broken.

Although sharing code between microservices has disadvantages, generally all microservices will share code by using open source libraries. Because this code is always used by a lot of projects, special care is given to not breaking compatibility. When you’re going to share code it is a good idea to hold your shared code to the same standards. When your library is not specific to your business, you might as well release it publicly; that encourages you to think twice about breaking the API or putting business-specific logic in the library.

Composite frontend

It is possible to compose your frontend out of different components. Each of these components could be maintained by a separate team and deployed independent of each other. Again it is important to split along bounded contexts to limit the API surface between the components. The image below shows an example of such a composite frontend.

composite-design

Admittedly this is an idea we already saw in portlets during the SOA age. However, in a microservices architecture you want the frontend components to be able to deploy fully independently and you want to make sure you do a clean separation which ensures there is no or only limited two way communication needed between the components.

It is possible to integrate during development, deployment or at runtime. At each of these integration stages there are different tradeoffs between flexibility and consistency. If you want to have separate deployment pipelines for your components, you want to have a more flexible approach like runtime integration. If it is more likely different versions of components might break functionality, you need more consistency. You would get this at development time integration. Integration at deployment time could give you the same flexibility as runtime integration, if you are able to integrate different versions of components on different environments of your build pipeline. However this would mean creating a different deployment artifact for each environment.

Software architecture should never be a goal, but a means to an end

Combining multiple components via shared libraries into a single frontend is an example of development time integration. However it doesn't give you much flexibility in regards of separate deployment. It is still a classical integration technique. But since software architecture should never be a goal, but a means to an end, it can be the best solution for the problem you are trying to solve.

More flexibility can be found in runtime integration. An example of this is using AJAX to load html and other dependencies of a component. Then the main application only needs to know where to retrieve the component from. This is a good example of a small API surface. Of course doing a request after page load means that the users might see components loading. It also means that clients that don’t execute javascript will not see the content at all. Examples are bots / spiders that don’t execute javascript, real users who are blocking javascript or using a screenreader that doesn’t execute javascript.
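
As a sketch of such runtime integration (the function name and URLs here are made up for illustration), the shell page only needs to know where to fetch each component's fragment:

// Hypothetical sketch: fetch a component's HTML fragment and inject it into the page.
function loadComponent(name, targetId) {
  var xhr = new XMLHttpRequest();
  xhr.onload = function () {
    document.getElementById(targetId).innerHTML = xhr.responseText;
  };
  xhr.open('GET', '/components/' + name + '/fragment');
  xhr.send();
}

loadComponent('recommendations', 'recommendations-panel');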

When runtime integration via javascript is not an option it is also possible to integrate components using a middleware layer. This layer fetches the html of the different components and composes them into a full page before returning the page to the client. This means that clients will always retrieve all of the html at once. An example of such middleware are the Edge Side Includes of Varnish. To get more flexibility it is also possible to manually implement a server which does this. An open source example of such a server is Compoxure.
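
With Edge Side Includes, the page served to the client contains placeholders that the middleware resolves before the response leaves the server. A hypothetical fragment (the URLs are illustrative, and Varnish must be configured to process ESI for the response):

<body>
  <h1>My shop</h1>
  <esi:include src="/components/catalog/fragment" />
  <esi:include src="/components/recommendations/fragment" />
</body>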

Once you have your composite frontend up and running you can start to think about the next step: optimization. Having separate components from different sources means that many resources have to be retrieved by the client. Since retrieving multiple resources takes longer than retrieving a single resource, you want to combine resources. Again this can be done at development time or at runtime, depending on the integration techniques you chose when decomposing your frontend.

Conclusion

When transitioning an application to a microservices architecture you will run into issues if you keep the frontend a monolith. The goal is to achieve good vertical decomposition. What goes for the backend services goes for the frontend as well: Split into bounded contexts to limit the API surface between components, and use integration techniques that avoid coupling. When you are working on single big frontend it might be difficult to make this decomposition, but when you want to deliver faster by using multiple teams working on a microservices architecture, you cannot exclude the frontend from decomposition.

Resources

Sam Newman - From Macro to Micro: How Big Should Your Services Be?
Dan North - Microservices: software that fits in your head

Super fast unit test execution with WallabyJS

Xebia Blog - Mon, 07/27/2015 - 11:24

Our current AngularJS project has been under development for about 2.5 years, so the number of unit tests has increased enormously. We tend to have a coverage percentage near 100%, which led to 4000+ unit tests. These include service specs and view specs. You may know that AngularJS - when abused a bit - is not suited for super large applications, but since we tamed the beast and have an application with more than 16,000 lines of high-performing AngularJS code, we want to stay in control of the total development process without any performance losses.

We are using Karma Runner with Jasmine, which is fine for a small number of specs and for debugging, but running the full test suite takes up to 3 minutes on a 2.8Ghz MacBook Pro.

We are testing our code continuously, so we came up with a solution to split all the unit tests into several shards. This parallel execution of the unit tests decreased the execution time a lot. We will write about the details of this Karma parallelization on this blog later. Sharding helped us a lot when we want to run the full unit test suite, i.e. when using it in the pre-push hook, but during development you want quick feedback cycles about coverage and failing specs (red-green testing).

With such a long unit test cycle, even when running in parallel, many of our developers are fdescribe-ing the specs on which they are working, so that the feedback is instant. However, this is quite labor intensive and sometimes an fdescribe is pushed accidentally.

And then.... we discovered WallabyJS. It is just an ordinary test runner like Karma. Even the configuration file is almost a copy of our karma.conf.js.
The difference is in the details. Out of the box it runs the unit test suite in 50 secs, thanks to the extensive use of Web Workers. Then the fun starts.

Screenshot of Wallaby in action (IntelliJ). Shamelessly grabbed from wallaby.com

I use Wallaby as IntelliJ IDEA plugin, which adds colored annotations to the left margin of my code. Green squares indicate covered lines/statements, orange give me partly covered code and grey means "please write a test for this functionality or I introduce hard to find bugs". Colorblind people see just kale green squares on every line, since the default colors are not chosen very well, but these colors are adjustable via the Preferences menu.

Clicking on a square pops up a box with a list of the tests that induce the coverage. When a test fails, it also tells me why.


A dialog box showing contextual information (wallaby.com)

Since the implementation and the tests are now instrumented, finding bugs and increasing your coverage goes a lot faster. Besides that, you don't need to hassle with fdescribes and fits to run individual tests during development. Thanks to the instrumentation, Wallaby runs your tests continuously and re-runs only the relevant tests for the parts that you are working on. In real time.

5 Reasons why you should test your code

Xebia Blog - Mon, 07/27/2015 - 09:37

It is just like in mathematics class: when I had to write a proof for Thales’ theorem, I wrote “Can’t you see that B has a right angle?! Q.E.D.”, but the teacher still gave me an F grade.

You want to make things work, right? So you start programming until your feature is implemented. When it is implemented, it works, so you do not need any tests. You want to proceed and make more cool features.

Suddenly feature 1 breaks, because you did something weird in some service that is reused all over your application. Ok, let’s fix it, keep refreshing the page until everything is stable again. This is the point in time where you regret that you (or even better, your teammate) did not write tests.

In this article I give you 5 reasons why you should write them.

1. Regression testing

The scenario described in the introduction is a typical example of a regression bug. Something works, but it breaks when you are looking the other way.
If you had tests with 100% code coverage, a red error would have appeared in the console or – even better – a siren would have gone off in the room where you are working.

Although there are some misconceptions about coverage, it at least tells others that there is a fully functional test suite. And it may give you a high grade when an audit company like SIG inspects your software.


100% Coverage feels so good

100% code coverage does not mean that you have tested everything.
It means that the test suite is implemented in such a way that it calls every line of the tested code, but it says nothing about the assertions made during the test run. If you want to measure whether your specs do a fair amount of assertions, you have to do mutation testing.

This works as follows.

An automated task runs the test suite once. Then some parts of your code are modified, mainly conditions flipped, for loops made shorter or longer, etc., and the test suite is run a second time. If tests fail after the modifications have been made, an assertion covers that case, which is good.
However, 100% coverage does feel really good if you are an OCD person.
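
A tiny illustration of the idea, as a Jasmine-style spec (the function and values are made up):

// Implementation under test.
function isAdult(age) {
  return age >= 18;
}

// A mutation run might flip '>=' into '>'. This spec kills that mutant,
// because isAdult(18) would then return false.
describe('isAdult', function () {
  it('treats exactly 18 as adult', function () {
    expect(isAdult(18)).toBe(true);
  });
});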

The better your test coverage and assertion density are, the higher the probability of catching regression bugs. Especially when an application grows, you may encounter a lot of regression bugs during development, which is good.

Suppose that a form shows a funny easter egg when the filled-in birth date is 06-06-2006, and the line of code responsible for this behaviour is hidden in a complex method. A fellow developer may make changes to this line. Not because he is not funny, but because he just does not know. A failing test notifies him immediately that he is removing your easter egg, while without a test you would only find out about the removal 2 years later.

Still, every application contains bugs you are unaware of. When an end user tells you about a broken page, you may find out that the link he clicked on was generated with some missing information, i.e. users//edit instead of users/24/edit.

When you find a bug, first write a (failing) test that reproduces the bug, then fix the bug. It will never happen again. You win.

2. Improve the implementation via new insights

“Premature optimization is the root of all evil” is something you hear a lot. This does not mean that you have to implement your solution pragmatically, without code reuse.

Good software craftsmanship is not only about solving a problem effectively; it is also about maintainability, durability, performance and architecture. Tests can help you with this. They force you to slow down and think.

If you start writing your tests and have trouble with it, this may be an indication that your implementation can be improved. Furthermore, your tests make you think about input and output, corner cases and dependencies. So do you think that you understand all aspects of that super method you wrote that can handle everything? Write tests for it and better code is guaranteed.

Test Driven Development even helps you optimizing your code before you even write it, but that is another discussion.

3. It saves time, really

Number one excuse not to write tests is that you do not have time for it or your client does not want to pay for it. Writing tests can indeed cost you some time, even if you are using boilerplate code elimination frameworks like Mox.

However, if I ask you whether you would make other design choices if you had the chance (and time) to start over, you would probably say yes. A total codebase refactoring is a ‘no go’ because you cannot oversee which parts of your application will fail. If you still accept the refactoring challenge, it will at least give you a lot of headaches and cost you a lot of time, which could have been used for writing the tests. But you had no time for writing tests, right? So your crappy implementation stays.

Dilbert bugfix

A bug can always be introduced, even in well-refactored code. How many times did you say to yourself, after a day of hard work, that you spent 90% of your time finding and fixing a nasty bug? You want to write cool applications, not fix bugs.
When you have tested your code very well, 90% of the bugs introduced are caught by your tests. Phew, that saved the day. You can focus on writing cool stuff. And tests.

In the beginning, writing tests can take up more than half of your time, but when you get the hang of it, writing tests becomes second nature. It is important that you are writing code for the long term. As an application grows, it really pays off to have tests. It saves you time, and developing becomes more fun as you are not being blocked by hard-to-find bugs.

4. Self-updating documentation

Writing clean, self-documenting code is one of the main things we adhere to. Not only for yourself, especially when you have not seen the code for a while, but also for your fellow developers. We only write comments if a piece of code is particularly hard to understand. Whatever style you prefer, it has to be clear in some way what the code does.

  // Beware! Dragons beyond this point!

Some people like to read the comments, some read the implementation itself, but some read the tests. What I like about tests, for example when you are using a framework like Jasmine, is that they give a structured overview of all of a method's features. When you have a separate documentation file, it is as structured as you want, but the main issue with documentation is that it is never up to date. Developers do not like to write documentation, forget to update it when a method signature changes, and eventually stop writing docs.

Developers also do not like to write tests, but they at least serve more purposes than docs. If you are using the test suite as documentation, your documentation is always up to date with no extra effort!

5. It is fun

Nowadays there are no separate testers and developers. The developers are the testers. People who write good tests are also the best programmers. Actually, your test is also a program, so if you like programming, you should like writing tests.
The reason why writing tests may feel unproductive is that it gives you the idea that you are not producing something new.


Is the build red? Fix it immediately!

However, with the modern software development approach, your tests should be an integrated part of your application. The tests can be executed automatically using build tools like Grunt and Gulp. They may run in a continuous integration pipeline via Jenkins, for example. If you are really cool, a new deploy to production is automatically done when the tests pass and everything else is ok. With tests you have more confidence that your code is production ready.

A lot of measurements can be generated as well, like coverage and mutation testing, giving the OCD-oriented developers a big smile when everything is green and the score is 100%.

If the test suite fails, it is first priority to fix it, to keep the codebase in good shape. It takes some discipline, but when you get used to it, you have more fun developing new features and make cool stuff.

Stuff The Internet Says On Scalability For July 24th, 2015

Hey, it's HighScalability time:


Walt Disney doesn't mouse around. Here's how he makes a goofy business plan.

 

  • 81%: AWS YOY growth; 400: hours of video uploaded to YouTube EVERY MINUTE; 9,000: # of mineable asteroids near earth; 1,400: light years to Earth's high latency backup node; 10K: in the future hard disks will be this many times faster 
  • Quotable Quotes:
    • @BenedictEvans: Chinese govt: At the end of 2014 China had 112.7 billion static webpages and 77.2 billion dynamic webpages. They used 9,310,312,446,467 KB
    • Michael Franklin (AMPLab): This is always a pendulum where you swing from highly distributed to more centralized and back in. My guess is there’s going to be another swing of the pendulum, where we really need to start thinking about how do you distribute processing throughout a wide area network.
    • Sherlock Holmes: Singularity is almost invariably a clue. 
    • @jpetazzo: OH: "In any team you need a tank, a healer, a damage dealer, someone with crowd control abilities, and another who knows iptables"
    • Jeff Sussna: Ultimately, the impact of containers will reach even beyond IT, and play a part in transforming the entire nature of the enterprise. 
    • @CarlosAlimurung: Impressive.  The number of #youtube channels making six figures grew by 50%. 
    • harlowja: Overall, no the community isn't perfect, yes there are issues, yes it burns some people out, but software isn't rainbows and butterflies after all.
    • werner: BTW nobody wants eventual consistency, it is a fact of live among many trade-offs. I would rather not expose it but it comes with other advantages ...
    • @VideoInkNews: We’re focused on our top three priorities – mobile, mobile and mobile, said @YouTube CEO @SusanWojcicki #VidCon2015 #keynote
    • Ivan Pepelnjak: Use a combination of MPLS/VPN and Internet VPN, or Internet VPN with 3G backup. Use multiple access methods, so the cable-seeking backhoe doesn’t bring down all uplinks.
    • @randybias: Repeat after me: containers do little to enable application portability.  If you want portability use a PaaS.  PaaS != Containers.
    • To see even more quotes please click through to see the rest of the post.

  • Can't we all just get along? And by "we" I mean humans and robots. Maybe. Inside Amazon shows by example how one new utopian community is bridging the categorical divide. Forget all your skepticism and technopanic, humans and robots can really work together in a highly efficient system.

  • A Brief History of Scaling LinkedIn. Not so brief actually. Lots of really good details. They of course started off with a monolith and ended up with a service oriented architecture. One of the most interesting ideas is the super block: "groupings of backend services with a single access API. This allows us to have a specific team optimize the block, while keeping our call graph in check for each client."

  • If you want to move at the speed of software doesn't your datacenter infrastructure have to move at the same speed? Network Break 45 from Packet Pushers talks about an open source virtual software router, CloudRouter, running the latest release of OpenDaylight's SDN controller and ONOS. The idea is to make a dead simple router you can just instantiate as needed. Greg Ferro makes the point that if you don't have to care whether you are starting 100 or 1000 virtual routers, it changes how you go about building infrastructure. Running a Cisco router, an F5 load balancer, and a virtual firewall, how much will it cost to spin up virtual datacenters for 100s of developers? How long will it take? How does it even work? 

Categories: Architecture

Android: Custom ViewMatchers in Espresso

Xebia Blog - Fri, 07/24/2015 - 16:03

Somehow it seems that testing is still treated like an afterthought in mobile development. The introduction of the Espresso test framework in the Android Testing Support Library improved the situation a little bit, but the documentation is limited and it can be hard to debug problems. And you will run into problems, because testing is hard to learn when there are so few examples to learn from.

Anyway, I recently created my first custom ViewMatcher for Espresso and I figured I would share it here. I was building a simple form with some EditText views as input fields, and these fields should display an error message when the user enters invalid input.

Android TextView with error message

In order to test this, my Espresso test enters an invalid value in one of the fields, presses "submit" and checks that the field is actually displaying an error message.

@Test
public void check() {
  Espresso
      .onView(ViewMatchers.withId(R.id.email))
      .perform(ViewActions.typeText("foo"));
  Espresso
      .onView(ViewMatchers.withId(R.id.submit))
      .perform(ViewActions.click());
  Espresso
      .onView(ViewMatchers.withId(R.id.email))
      .check(ViewAssertions.matches(
          ErrorTextMatchers.withErrorText(Matchers.containsString("email address is invalid"))));
}

The real magic happens inside the ErrorTextMatchers helper class:

import android.support.annotation.NonNull;
import android.support.test.espresso.matcher.BoundedMatcher;
import android.view.View;
import android.widget.TextView;

import org.hamcrest.Description;
import org.hamcrest.Matcher;

public final class ErrorTextMatchers {

  /**
   * Returns a matcher that matches {@link TextView}s based on text property value.
   *
   * @param stringMatcher {@link Matcher} of {@link String} with text to match
   */
  @NonNull
  public static Matcher<View> withErrorText(final Matcher<String> stringMatcher) {

    return new BoundedMatcher<View, TextView>(TextView.class) {

      @Override
      public void describeTo(final Description description) {
        description.appendText("with error text: ");
        stringMatcher.describeTo(description);
      }

      @Override
      public boolean matchesSafely(final TextView textView) {
        // getError() returns null when no error is set; guard against that.
        final CharSequence error = textView.getError();
        return error != null && stringMatcher.matches(error.toString());
      }
    };
  }
} 

The main details of the implementation are as follows. We make sure that the matcher will only match instances of TextView (and its subclasses) by returning a BoundedMatcher from withErrorText(). This makes it very easy to implement the matching logic itself in BoundedMatcher.matchesSafely(): simply take the error text returned by getError() on the TextView and feed it to the wrapped Matcher. Finally, we have a simple implementation of the describeTo() method, which is only used to generate debug output to the console.

In conclusion, it turns out to be pretty straightforward to create your own custom ViewMatcher. Who knew? Perhaps there is still hope for testing mobile apps...

You can find an example project with the ErrorTextMatchers on GitHub: github.com/smuldr/espresso-errortext-matcher.

The Best Productivity Book for Free

image"At our core, Microsoft is the productivity and platform company for the mobile-first and cloud-first world." -- Satya Nadella

We take productivity seriously at Microsoft. Ask any Softie. I never have a lack of things to do or too much time in my day, and I can't ever make "too much" impact.

To be super productive, I've had to learn hard-core prioritization techniques, extreme energy management, stakeholder management, time management, and a wealth of productivity hacks to produce better, faster results.

We don’t learn these skills in school.  But if we’re lucky, we learn from the right mentors and the people all around us how to bring out our best when we need it most.

Download the 30 Days of Getting Results Free eBook

You can save years of pain for free:

30 Days of Getting Results Free eBook

There’s always a gap between books you read and what you do in the real world. I wanted to bridge this gap. I wanted 30 Days of Getting Results to be raw and real to help you learn what it really takes to master productivity and time management so you can survive and thrive with the best in the world.

It’s not pretty.  It’s super effective.

30 Days of Getting Results is a 30 Day Personal Productivity Improvement Sprint

I wrote 30 Days of Getting Results using a 30 Day Sprint. Each day for that 30 Day Sprint, I wrote down the best information I learned from the school of hard knocks about productivity, time management, work-life balance, and more.

For each day, I share a lesson, a story, and an exercise.

I wanted to make it easy to practice productivity habits.

Agile Results is a Fire Starter for Personal Productivity

The thing that’s really different about Agile Results as a time management system is that it’s focused on meaningful results.  Time is treated as a first-class citizen so that you hit your meaningful windows of opportunity, and get fresh starts each day, each week, each month, each year.  As a metaphor, you get to be the author of your life and write your story forward.

For years, I’ve received emails from people around the world about how 30 Days of Getting Results was a breath of fresh air for them.

It helped them find their focus, get more productive, enjoy what they do, renew their energy, and spend more time in their strengths and their passions, while pursuing their purpose.

It’s helped doctors, teachers, students, lawyers, developers, grandmothers, and more.

Learn a New Language, Change Careers, or Start a Business

You can use Agile Results to learn better, faster, and deeper because it helps you think better, feel better, and take better action.

You can use Agile Results to help you learn a new language, build new skills, learn an instrument, or whatever your heart desires.

I used the system to accidentally write a book in a month.

I didn’t set out to write a book. I set out to share the world’s best insight and action for productivity and time management. I wrote for 20 minutes each day, during that month, to share the best lessons and the best insights I could with one purpose:

Help everyone thrive in work and life.

Over the coming months, I had more and more people ask for a book version. As much as they liked the easy to flip through Web pages, they wanted to consume it as an eBook. So I turned 30 Days of Getting Results into a free eBook and made that available.

Here's the funny part:

I forgot I had done that.

The Accidental Free Productivity Book that Might Just Change Your Life

One day, I was having a conversation with one of my readers, and he said that I should sell 30 Days of Getting Results as a $30 workbook. He liked it much more than the book, Getting Results the Agile Way. He found it more actionable and easier to get started with, and he liked that I used the system as a way to teach the system.

He said I should make the effort to put it together as a PDF and sell it as a workbook. He said people would want to pay for it because it’s high-value, real-world training, and that it was better than any live training he had ever taken (and he had taken a lot).

I got excited by the idea, and it made perfect sense. After all, wouldn’t people want to learn something that could impact every single day of their lives, and help them achieve more in work and life and help them adapt and compete more effectively in our ever-changing world?

I went to put it together, and found I had already done it.

Set Your Productivity on Fire

When you’re super productive, it’s easy to forget some of the things you create because they so naturally flow from spending the right time, on the right things, with the right energy. You’ll naturally leave a trail of results from experimenting and learning.

Whether you want to be super productive, or do less, but accomplish more, check out the ultimate free productivity guide:

30 Days of Getting Results Free eBook

Share it with friends, family, colleagues, and whoever else you want to have an unfair advantage in our hyper-competitive world.

Lifting others up, lifts you up in the process.

If you have a personal story of how 30 Days of Getting Results has helped you in some way, feel free to share it with me.  It’s always fun to hear how people are using Agile Results to take on new challenges, re-invent their productivity, and operate at a higher level.

Or simply get started again … like a fresh start, for the first time, full of new zest to be your best.

Categories: Architecture, Programming

Architecting Backend for a Social Product

This post is aimed at taking you through the key architectural decisions that will make a social application a true next-generation social product. The proposed changes address the following attributes: a) availability, b) reliability, c) scalability, d) performance, and e) flexibility towards extensions (not modifications).

Goals

a) Ensuring that the user’s content is easily discoverable and always available.

b) Ensuring that the content pushed is relevant not only semantically but also from the user’s device perspective.

c) Ensuring that real-time updates are generated, pushed and analyzed.

d) Keeping an eye towards saving the user’s resources as much as possible.

e) Ensuring that the user’s experience remains intact irrespective of server load.

f) Ensuring overall application security.

In summary, we face an amazing challenge: a mega sea of ever-expanding user-generated content, an increasing number of users, and a constant stream of new items, all while ensuring excellent performance. Considering this challenge, it is imperative that we study certain key architectural elements that will influence the overall system design. Here are a few of the key decisions and analyses.

Data Storage
Categories: Architecture

Step away from the code!

Coding the Architecture - Simon Brown - Tue, 07/21/2015 - 19:57

While at the Devoxx UK conference recently, I was interviewed by Lucy Carey from Voxxed about software architecture, diagrams, monoliths, microservices, design thinking and modularity. You can watch this short interview (~5 minutes) at Step Away from the Code! ... enjoy!

Categories: Architecture

Sponsored Post: Redis Labs, Jut.io, VoltDB, Datadog, Tumblr, Power Admin, MongoDB, SignalFx, InMemory.Net, Couchbase, VividCortex, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Make Tumblr fast, reliable and available for hundreds of millions of visitors and tens of millions of users.  As a Site Reliability Engineer you are a software developer with a love of highly performant, fault-tolerant, massively distributed systems. Apply here now! 

  • At Scalyr, we're analyzing multi-gigabyte server logs in a fraction of a second. That requires serious innovation in every part of the technology stack, from frontend to backend. Help us push the envelope on low-latency browser applications, high-speed data processing, and reliable distributed systems. Help extract meaningful data from live servers and present it to users in meaningful ways. At Scalyr, you’ll learn new things, and invent a few of your own. Learn more and apply.

  • UI Engineer - AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data - AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (All-Levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • Surge 2015. Want to mingle with some of the leading practitioners in the scalability, performance, and web operations space? Looking for a conference that isn't just about pitching you highly polished success stories, but that actually puts an emphasis on learning from real world experiences, including failures? Surge is the conference for you.

  • Your event could be here. How cool is that?
Cool Products and Services
  • MongoDB Management Made Easy. Gain confidence in your backup strategy. MongoDB Cloud Manager makes protecting your mission critical data easy, without the need for custom backup scripts and storage. Start your 30 day free trial today.

  • In a recent benchmark of NoSQL databases on the AWS cloud, Redis Labs Enterprise Cluster's performance obliterated Couchbase, Cassandra and Aerospike in this real-life, write-intensive use case. A full backstage pass and all the juicy details are available in this downloadable report.

  • Real-time correlation across your logs, metrics and events.  Jut.io just released its operations data hub into beta and we are already streaming in billions of log, metric and event data points each day. Using our streaming analytics platform, you can get real-time monitoring of your application performance, deep troubleshooting, and even product analytics. We allow you to easily aggregate logs and metrics by micro-service, calculate percentiles and moving window averages, forecast anomalies, and create interactive views for your whole organization. Try it for free, at any scale.

  • VoltDB is a full-featured fast data platform that has all of the data processing capabilities of Apache Storm and Spark Streaming, but adds a tightly coupled, blazing fast ACID relational database and scalable ingestion with backpressure, all with the flexibility and interactivity of SQL queries. Learn more.

  • In a recent benchmark conducted on Google Compute Engine, Couchbase Server 3.0 outperformed Cassandra by 6x in resource efficiency and price/performance. The benchmark sustained over 1 million writes per second using only one-sixth as many nodes and one-third as many cores as Cassandra, resulting in 83% lower cost than Cassandra. Download Now.

  • Datadog is a monitoring service for scaling cloud infrastructures that brings together data from servers, databases, apps and other tools. Datadog provides Dev and Ops teams with insights from their cloud environments that keep applications running smoothly. Datadog is available for a 14 day free trial at datadoghq.com.

  • Here's a little quiz for you: What do these companies all have in common? Symantec, RiteAid, CarMax, NASA, Comcast, Chevron, HSBC, Sauder Woodworking, Syracuse University, USDA, and many, many more? Maybe you guessed it? Yep! They are all customers who use and trust our software, PA Server Monitor, as their monitoring solution. Try it out for yourself and see why we’re trusted by so many. Click here for your free, 30-Day instant trial download!

  • Turn chaotic logs and metrics into actionable data. Scalyr replaces all your tools for monitoring and analyzing logs and system metrics. Imagine being able to pinpoint and resolve operations issues without juggling multiple tools and tabs. Get visibility into your production systems: log aggregation, server metrics, monitoring, intelligent alerting, dashboards, and more. Trusted by companies like Codecademy and InsideSales. Learn more and get started with an easy 2-minute setup. Or see how Scalyr is different if you're looking for a Splunk alternative or Loggly alternative.

  • SignalFx just launched an advanced monitoring platform for modern applications that's already processing tens of billions of data points per day. SignalFx lets you create custom analytics pipelines on metrics data collected from thousands or more sources to create meaningful aggregations -- such as percentiles, moving averages and growth rates -- within seconds of receiving data. Start a free 30-day trial!

  • InMemory.Net provides a .Net native in-memory database for analysing large amounts of data. It runs natively on .Net and provides native .Net, COM & ODBC APIs for integration. It also has an easy-to-use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex goes beyond monitoring and measures the system's work on your MySQL and PostgreSQL servers, providing unparalleled insight and query-level analysis. This unique approach ultimately enables your team to work more effectively, ship more often, and delight more customers.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager: Monitor physical, virtual and cloud applications.

  • www.site24x7.com: Monitor end user experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

Parallax image scrolling using Storyboards

Xebia Blog - Tue, 07/21/2015 - 07:37

Parallax image scrolling is a popular concept that is being adopted by many apps these days. It's small details like this that can really make an app great. Parallax scrolling gives you the illusion of depth by letting objects in the background scroll slower than objects in the foreground. It has been used in the past by many 2D games to make them feel more 3D. True parallax scrolling can become quite complex, but it's not very hard to create a simple parallax image scrolling effect on iOS yourself. This post will show you how to add it to a table view using Storyboards.

NOTE: You can find all source code used by this post on https://github.com/lammertw/ParallaxImageScrolling.

The idea here is to create a UITableView with an image header that has a parallax scrolling effect. When we scroll down the table view (i.e. swipe up), the image should scroll at half the speed of the table. And when we scroll up (i.e. swipe down), the image should become bigger so that it feels like it's stretching while we scroll. The latter is not really a parallax scrolling effect, but it is commonly used in combination with one. The following animation shows these effects:

(Animation: parallax scrolling with the stretch effect.)

But what if we want a "Pull down to Refresh" effect and need to add a UIRefreshControl? Well, then we just drop the stretch effect when scrolling up:  

(Animation: parallax scrolling without the stretch effect.)

And as you might expect, the variation with Pull to Refresh is actually a lot easier to accomplish than the one without.

Parallax Scrolling Libraries

While you can find several Objective-C or Swift libraries that provide parallax scrolling similar to the effects shown here, you'll find that it's not that hard to create these yourself. Doing it yourself has the benefit of customizing it exactly the way you want it, and of course it will add to your experience. Plus it might be less code than integrating with such a library. However, if you need exactly what such a library provides, then using one might work better for you.

The basics

NOTE: You can find all the code of this section at the no-parallax-scrolling branch.

Let's start with a simple example that doesn't have any parallax scrolling effects yet.

(Screenshot: the table view with an image cell at the top and a text cell below it.)

Here we have a standard UITableViewController with a cell containing our image at the top and another cell below it with some text. Here is the code:

class ImageTableViewController: UITableViewController {

  override func numberOfSectionsInTableView(tableView: UITableView) -> Int {
    return 2
  }

  override func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return 1
  }

  override func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
    var cellIdentifier = ""
    switch indexPath.section {
    case 0:
      cellIdentifier = "ImageCell"
    case 1:
      cellIdentifier = "TextCell"
    default: ()
    }

    let cell = tableView.dequeueReusableCellWithIdentifier(cellIdentifier, forIndexPath: indexPath) as! UITableViewCell

    return cell
  }

  override func tableView(tableView: UITableView, heightForRowAtIndexPath indexPath: NSIndexPath) -> CGFloat {
    switch indexPath.section {
    case 0:
      // Let Auto Layout (the 2:1 aspect ratio constraint) determine the height.
      return UITableViewAutomaticDimension
    default:
      return 50
    }
  }

  override func tableView(tableView: UITableView, estimatedHeightForRowAtIndexPath indexPath: NSIndexPath) -> CGFloat {
    switch indexPath.section {
    case 0:
      return 200
    default:
      return 50
    }
  }

}

The only thing of note here is that we're using UITableViewAutomaticDimension for automatic cell heights determined by constraints in the cell: we have a UIImageView with constraints to use the full width and height of the cell and a fixed aspect ratio of 2:1. Because of this aspect ratio, the height of the image (and therefore of the cell) is always half of the width. In landscape it looks like this:

(Screenshot: the image cell in landscape, still at a 2:1 ratio.)

We'll see later why this matters.
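
For reference, here is roughly how you could create the same 2:1 aspect ratio constraint in code instead of the Storyboard. This is a sketch only, assuming an imageView property that holds the cell's UIImageView:

// Sketch: programmatic equivalent of the Storyboard's 2:1 aspect ratio constraint.
// width = 2 * height, so the image's height is always half its width.
let aspectRatio = NSLayoutConstraint(
    item: imageView, attribute: .Width,
    relatedBy: .Equal,
    toItem: imageView, attribute: .Height,
    multiplier: 2.0, constant: 0)
imageView.addConstraint(aspectRatio)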

Parallax scrolling with Pull to Refresh

NOTE: You can find all the code of this section at the pull-to-refresh branch.

As mentioned before, creating the parallax scrolling effect is easiest when it doesn't need to stretch. Commonly you'll only want that if you have a Pull to Refresh. Adding the UIRefreshControl is done in the standard way so I won't go into that.

Container view
The rest is also quite simple. With the basics from before as our starting point, what we need to do first is add a UIView around our UIImageView that acts as a container. Since our image will change its position while we scroll, we cannot use it anymore to calculate the height of the cell. The container view will have exactly the constraints that our image view had: use the full width and height of the cell and have an aspect ratio of 2:1. Also enable Clip Subviews on the container view so that the image view is clipped by it.

Align Center Y constraint
The image view, which is now inside the container view, will keep its aspect ratio constraint and use the full width of the container view. For the y position we'll add an Align Center Y constraint to vertically center the image within the container. (Screenshot: the resulting constraints in Interface Builder.)

Parallax scrolling using constraint
When we run this code now, it will still behave exactly as before. What we need to do is make the image view scroll at half the speed of the table view when scrolling down. We can do that by changing the constant of the Align Center Y constraint that we just created. First we need to connect it to an outlet of a custom UITableViewCell subclass:

class ImageCell: UITableViewCell {
  @IBOutlet weak var imageCenterYConstraint: NSLayoutConstraint!
}

When the table view scrolls down, we need to lower the Y position of the image by half the amount that we scrolled. To do that we can use scrollViewDidScroll and the content offset of the table view. Since UITableViewController already conforms to UIScrollViewDelegate, overriding that method is enough:

override func scrollViewDidScroll(scrollView: UIScrollView) {
  imageCenterYConstraint?.constant = min(0, -scrollView.contentOffset.y / 2.0) // only when scrolling down so we never let it be higher than 0
}

We're left with one small problem. The imageCenterYConstraint is connected to the ImageCell that we created, while the scrollViewDidScroll method is in the view controller. So what's left is to create an imageCenterYConstraint property in the view controller and assign it when the cell is created:

weak var imageCenterYConstraint: NSLayoutConstraint?

override func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
  var cellIdentifier = ""
  switch indexPath.section {
  case 0:
    cellIdentifier = "ImageCell"
  case 1:
    cellIdentifier = "TextCell"
  default: ()
  }

  // the new part of code:
  let cell = tableView.dequeueReusableCellWithIdentifier(cellIdentifier, forIndexPath: indexPath) as! UITableViewCell
  if let imageCell = cell as? ImageCell {
    imageCenterYConstraint = imageCell.imageCenterYConstraint
  }

  return cell
}

That's all we need to do for our first variation of the parallax image scrolling. Let's go on with something a little more complicated.

Parallax scrolling without Pull to Refresh

NOTE: You can find all the code of this section at the no-pull-to-refresh branch.

When starting from the basics, we need to add a container view again like we did in the Container view paragraph from the previous section. The image view needs some different constraints though. Add the following constraints to the image view:

  • As before, keep the 2:1 aspect ratio
  • Add a Leading Space and Trailing Space of 0 to the Superview (our container view) and set the priority to 900. We will break these constraints when stretching the image because the image will become wider than the container view. However we still need them to determine the preferred width.
  • Align Center X to the Superview. We need this one to keep the image in the center when we break the Leading and Trailing Space constraints.
  • Add a Bottom Space and Top Space of 0 to the Superview. Create two outlets in the cell class ImageCell, like we did in the previous section for the center Y constraint; we'll call these bottomSpaceConstraint and topSpaceConstraint (see the sketch below). Also assign these from the cell to the view controller like we did before so we can access them in our scrollViewDidScroll method.

The result: (Screenshot: the complete set of constraints in Interface Builder.) We now have all the constraints we need to do the effects for scrolling up and down.
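
For clarity, this is how ImageCell might look with those two outlets added; the outlet names come straight from the list above, and the connections are made in the Storyboard:

class ImageCell: UITableViewCell {
  // Connected in the Storyboard to the Top Space and Bottom Space constraints.
  @IBOutlet weak var topSpaceConstraint: NSLayoutConstraint!
  @IBOutlet weak var bottomSpaceConstraint: NSLayoutConstraint!
}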

Scrolling down
When we scroll down (swipe up) we want the same effect as in our previous section. Instead of having an 'Align Center Y' constraint that we can change, we now need to do the following:

  • Set the bottom space to minus half of the content offset so it will fall below the container view.
  • Set the top space to plus half of the content offset so it will be below the top of the container view.

With these two calculations we effectively make the image view scroll at half the speed of the table view.

bottomSpaceConstraint?.constant = -scrollView.contentOffset.y / 2
topSpaceConstraint?.constant = scrollView.contentOffset.y / 2

Scrolling up
When the table view scrolls up (swipe down) the container view moves down. What we want here is for the image view to stick to the top of the screen instead of moving down as well. All we need for that is to set the constant of the topSpaceConstraint to the content offset. That means the height of the image will increase. Because of our 2:1 aspect ratio, the width of the image will grow as well. This is why we had to lower the priority of the Leading and Trailing constraints: the image no longer fits inside the container and breaks those constraints.

topSpaceConstraint?.constant = scrollView.contentOffset.y

We're left with one problem now. When the image sticks to the top while the container view goes down, it means that the image falls outside the container view. And since we had to enable Clip Subviews for scrolling down, we now get something like this: (Screenshot: the top of the image is clipped off by the container view.)

We can't see the top of the image since it's outside the container view. So what we need is to clip when scrolling down and not clip when scrolling up. We can only do that in code so we need to connect the container view to an outlet, just as we've done with the constraints. Then the final code in scrollViewDidScroll becomes:

override func scrollViewDidScroll(scrollView: UIScrollView) {
  if scrollView.contentOffset.y >= 0 {
    // scrolling down: clip the image and let it lag at half the scroll speed
    containerView.clipsToBounds = true
    bottomSpaceConstraint?.constant = -scrollView.contentOffset.y / 2
    topSpaceConstraint?.constant = scrollView.contentOffset.y / 2
  } else {
    // scrolling up: stick the image to the top and let it stretch, unclipped
    topSpaceConstraint?.constant = scrollView.contentOffset.y
    containerView.clipsToBounds = false
  }
}
Conclusion

So there you have it. Two variations of parallax scrolling without too much effort. As mentioned before, use a dedicated library if you have to, but don't be afraid that it's too complicated to do it yourself.

Additional notes

If you've seen the source code on GitHub you might have noticed a few additional things. I didn't want to mention them in the main body of this post, to prevent distractions, but it's important to mention them anyway.

  • The aspect ratio constraints need to have a priority lower than 1000. Set them to 999 or 950 (make sure they're higher than the Leading and Trailing Space constraints that we set to 900 in the last section). This is because of an issue related to cells with dynamic height (using UITableViewAutomaticDimension) and rotation. When the user rotates the device, the cell gets its new width while still having the previous height; the new height calculation has not yet been done at the beginning of the rotation animation. At that moment the 2:1 aspect ratio cannot be satisfied, which is why we cannot set the priority to 1000 (required). Right after the new height is calculated, the aspect ratio constraint kicks back in. The state in which the aspect ratio constraint cannot be satisfied doesn't even seem to be visible, so don't worry about your cell looking strange. Leaving the priority at 1000 only seems to generate an error message about the constraint, after which everything continues as expected.
  • Instead of assigning the outlets from the ImageCell to new variables in the view controller, you may also create a scrollViewDidScroll method in the cell, which is then called from the scrollViewDidScroll of your view controller. You can get the cell using cellForRowAtIndexPath; a minimal sketch of this approach follows below, and the code on GitHub shows it done in full.
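
A rough sketch of that delegation approach, assuming the image cell is the first row of the first section (the version in the GitHub repository may differ in detail):

import UIKit

class ImageCell: UITableViewCell {
  @IBOutlet weak var containerView: UIView!
  @IBOutlet weak var topSpaceConstraint: NSLayoutConstraint!
  @IBOutlet weak var bottomSpaceConstraint: NSLayoutConstraint!

  // Same logic as before, but now the cell adjusts its own constraints.
  func scrollViewDidScroll(scrollView: UIScrollView) {
    if scrollView.contentOffset.y >= 0 {
      // scrolling down: clip and lag at half speed
      containerView.clipsToBounds = true
      bottomSpaceConstraint.constant = -scrollView.contentOffset.y / 2
      topSpaceConstraint.constant = scrollView.contentOffset.y / 2
    } else {
      // scrolling up: stick to the top and stretch, unclipped
      topSpaceConstraint.constant = scrollView.contentOffset.y
      containerView.clipsToBounds = false
    }
  }
}

// In the view controller, forward the scroll event to the cell:
override func scrollViewDidScroll(scrollView: UIScrollView) {
  let indexPath = NSIndexPath(forRow: 0, inSection: 0)
  if let imageCell = tableView.cellForRowAtIndexPath(indexPath) as? ImageCell {
    imageCell.scrollViewDidScroll(scrollView)
  }
}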

The Agile Revolution - Episode 91

Coding the Architecture - Simon Brown - Tue, 07/21/2015 - 06:47

While at the YOW! conference in Australia during December 2014, I was interviewed by Craig Smith and Tony Ponton for The Agile Revolution podcast. It's a short episode (28 minutes) but we certainly packed a lot in, with the discussion covering software architecture, my C4 model, technical leadership, agility, lightweight documentation and even a little bit about enterprise architecture.

Speaking at YOW! in Australia

Thanks to Craig and Tony for taking the time to do this and I hope you enjoy The Agile Revolution - Episode 91: Coding The Architecture with Simon Brown.

Categories: Architecture