
Software Development Blogs: Programming, Software Testing, Agile Project Management



Does Manual Testing Have a Future?

Making the Complex Simple - John Sonmez - Thu, 01/15/2015 - 16:00

In this video, I tackle whether manual testing has a future, or whether someone who is a manual tester should look to move on to a different role.

The post Does Manual Testing Have a Future? appeared first on Simple Programmer.

Categories: Programming

Monitoring Akka with Kamon

Xebia Blog - Thu, 01/15/2015 - 13:49

Kamon is a framework for monitoring the health and performance of applications based on Akka, the popular actor-system framework often used with Scala. It provides good quick indicators, but also allows in-depth analysis.


Beyond just collecting local metrics per actor (e.g. message processing times and mailbox size), Kamon is unique in that it also monitors message flow between actors.

Essentially, Kamon introduces a TraceContext that is maintained across asynchronous calls: it uses AOP to pass the context along with messages. None of your own code needs to change.

Because of convenient integration modules for Spray/Play, a TraceContext can be automatically started when an HTTP request comes in.

If nothing else, this can be easily combined with the Logback converter shipped with Kamon: simply logging the token is of great use right out of the gate.


Kamon does not come with a dashboard by itself (though some work in this direction is underway).

Instead, it provides 3 'backends' to post the data to (4 if you count the 'LogReporter' backend that just dumps some statistics into Slf4j): 2 on-line services (NewRelic and DataDog), and statsd (from Etsy).

statsd might seem like a hassle to set up, as it needs additional components such as grafana/graphite to actually browse the statistics. Kamon fortunately provides a correctly set-up docker container to get you up and running quickly. We unfortunately ran into some issues with the image uploaded to the Docker Hub Registry, but building it ourselves from the definition on github resolved most of these.


We found the source code of Kamon to be clear and to-the-point. While we're generally no great fans of AspectJ, for this purpose the technique seems to be quite well-suited.

'Monkey-patching' a core part of your stack like this can of course be dangerous, especially with respect to performance considerations. Unless you enable the heavier analyses (which are off by default and clearly marked), it seems this could be fairly light - but of course only real tests will tell.

Getting Started

Most Kamon modules are enabled by adding their respective akka extension. We found the quickest way to get started is to:

  • Add the Kamon dependencies to your project as described in the official getting started guide
  • Enable the Metrics and LogReporter extensions in your akka configuration
  • Start your application with AspectJ run-time weaving enabled. How to do this depends on how you start your application. We used the sbt-aspectj plugin.

Enabling AspectJ weaving can require a little bit of twiddling, but adding the LogReporter should give you quick feedback on whether you were successful: it should start periodically logging metrics information.

Next steps are:

  • Enabling Spray or Play plugins
  • Adding the trace token to your logging
  • Enabling other backends (e.g. statsd)
  • Adding custom application-specific metrics and trace points

Kamon looks like a healthy, useful tool that not only has great potential, but also provides some great quick wins.

The documentation that is available is of great quality, but there are some parts of the system that are not so well covered. Luckily, the source code is very approachable.

It is clear the Kamon project is not very popular yet, judging by some of the rough edges we encountered. These, however, seem to be mostly superficial: the core ideas and implementation seem solid. We highly recommend taking a look.


Remco Beckers

Arnout Engelen

Exploring Akka Stream's TCP Back Pressure

Xebia Blog - Wed, 01/14/2015 - 15:48

Some years ago, when Reactive Streams lived in utopia, we got the assignment to build a high-volume message broker. A considerable amount of the code of the solution we delivered back then was dedicated to preventing this broker from being flooded with messages in case an endpoint became slow.

How would we have solved this problem today with the shiny new Akka Reactive Stream (experimental) implementation just within reach?

In this blog we explore Akka Streams in general and TCP streams in particular. Moreover, we show how much more easily we can solve the challenge we faced back then using Streams.

A use-case for TCP Back Pressure

The high-volume message broker mentioned in the introduction basically did the following:

  • Read messages (from syslog) from a TCP socket
  • Parse the message
  • Forward the message to another system via a TCP connection

For optimal throughput multiple TCP connections were available, which allowed delivering messages to the endpoint system in parallel. The broker was supposed to handle about 4000-6000 messages per second. Below is a schema of the noteworthy components and message flow:


Naturally we chose Akka as framework to implement this application. Our approach was to have an Actor for every TCP connection to the endpoint system. An incoming message was then forwarded to one of these connection Actors.

The biggest challenge was related to back pressure: how could we prevent our connection Actors from being flooded with messages in case the endpoint system slowed down or was not available? With 6000 messages per second an Actor's mailbox is flooded very quickly.

Another requirement was that message buffering had to be done by the client application, which was syslog. Syslog has excellent facilities for that. Durable mailboxes or anything of the like were out of the question. Therefore, we had to find a way to pull only as many messages into our broker as it could deliver to the endpoint. In other words: provide our own back pressure implementation.
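The essence of that requirement, pulling only as fast as the endpoint can consume, can be sketched outside Akka too. The following Python sketch (purely illustrative, not the broker's actual code) shows the simplest form of back pressure: a bounded queue that blocks a fast producer whenever the slow consumer falls behind, so the broker itself never buffers more than a fixed number of messages.

```python
import queue
import threading
import time

# Bounded buffer: at most 10 messages live inside the "broker" at once.
buffer = queue.Queue(maxsize=10)
consumed = []

def producer():
    # Fast producer: put() blocks as soon as the buffer is full,
    # which is exactly the back-pressure behaviour we need.
    for i in range(100):
        buffer.put(i)

def consumer():
    # Slow endpoint: drains roughly one message per millisecond.
    for _ in range(100):
        item = buffer.get()
        time.sleep(0.001)
        consumed.append(item)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(consumed))  # 100, delivered in order without unbounded buffering
```

The point of the sketch is only the blocking `put()`: the producer is throttled by the consumer, not by any explicit coordination code.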

A considerable amount of the code of the solution we delivered back then was dedicated to back pressure. During one of our recurring innovation days we tried to figure out how much easier the back pressure challenge would have been if Akka Streams had been available.

Akka Streams in a nutshell

In case you are new to Akka Streams, here is some basic information to help you understand the rest of the blog.

The core ingredients of a Reactive Stream consist of three building blocks:

  • A Source that produces some values
  • A Flow that performs some transformation of the elements produced by a Source
  • A Sink that consumes the transformed values of a Flow

Akka Streams provide a rich DSL through which transformation pipelines can be composed using the mentioned three building blocks.

A transformation pipeline executes asynchronously. For that to work it requires a so-called FlowMaterializer, which will execute every step of the pipeline. A FlowMaterializer uses Actors for the pipeline's execution, even though from a usage perspective you are unaware of that.

A basic transformation pipeline looks as follows:


  implicit val actorSystem = ActorSystem()
  implicit val materializer = FlowMaterializer()

  val numberReverserFlow: Flow[Int, String] = Flow[Int].map(_.toString.reverse)

  numberReverserFlow.runWith(Source(100 to 200), ForeachSink(println))

We first create a Flow that consumes Ints and transforms them into reversed Strings. For the Flow to run we call the runWith method with a Source and a Sink. After runWith is called, the pipeline starts executing asynchronously.

The exact same pipeline can be expressed in various ways, such as:

    //Use the via method on the Source to pass in the Flow
    Source(100 to 200).via(numberReverserFlow).to(ForeachSink(println)).run()

    //Directly call map on the Source.
    //The disadvantage of this approach is that the transformation logic cannot be re-used.
    Source(100 to 200).map(_.toString.reverse).to(ForeachSink(println)).run()
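For readers without a Scala setup at hand, the shape of this pipeline (a Source producing 100 to 200, a Flow reversing the string form, a Sink consuming) can be mimicked with plain Python generators. This hypothetical sketch has none of Akka's asynchrony or back pressure, only the same three-stage structure:

```python
# Source: produce the numbers 100..200
source = range(100, 201)

# Flow: transform each Int into its reversed string form
flow = (str(n)[::-1] for n in source)

# Sink: consume every element
results = list(flow)
print(results[0], results[-1])  # 001 002
```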

For more information about Akka Streams you might want to have a look at this Typesafe presentation.

A simple reverse proxy with Akka Streams

Let's move back to our initial quest. The first task we tried to accomplish was to create a stream that accepts data from an incoming TCP connection, which is forwarded to a single outgoing TCP connection. In that sense this stream was supposed to act as a typical reverse proxy that simply forwards traffic to another connection. The only remarkable quality compared to a traditional blocking/synchronous solution is that our stream operates asynchronously while preserving back pressure.


implicit val system = ActorSystem("one-to-one-proxy")
implicit val materializer = FlowMaterializer()

val serverBinding = StreamTcp().bind(new InetSocketAddress("localhost", 6000))

val sink = ForeachSink[StreamTcp.IncomingConnection] { connection =>
  println(s"Client connected from: ${connection.remoteAddress}")
  connection.handleWith(StreamTcp().outgoingConnection(new InetSocketAddress("localhost", 7000)).flow)
}

// connect the source of incoming connections to the sink and start the server
val materializedServer = serverBinding.connections.to(sink).run()

First we create the mandatory instances every Akka Reactive Stream requires: an ActorSystem and a FlowMaterializer. Then we create a server binding using the StreamTcp extension that listens to incoming traffic on localhost:6000. With the ForeachSink[StreamTcp.IncomingConnection] we define how to handle the incoming data for every StreamTcp.IncomingConnection by passing a flow of type Flow[ByteString, ByteString]. This flow consumes ByteStrings of the IncomingConnection and produces a ByteString, which is the data that is sent back to the client.

In our case the flow of type Flow[ByteString, ByteString] is created by means of the StreamTcp().outgoingConnection(endpointAddress).flow. It forwards a ByteString to the given endpointAddress (here localhost:7000) and returns its response as a ByteString as well. This flow could also be used to perform some data transformations, like parsing a message.

Parallel reverse proxy with a Flow Graph

Forwarding a message from one connection to another will not meet our self-defined requirements. We need to be able to forward messages from a single incoming connection to a configurable number of outgoing connections.

Covering this use-case is slightly more complex. For it to work we make use of the flow graph DSL.

  import akka.util.ByteString

  private def parallelFlow(numberOfConnections: Int): Flow[ByteString, ByteString] = {
    PartialFlowGraph { implicit builder =>
      val balance = Balance[ByteString]
      val merge = Merge[ByteString]
      UndefinedSource("in") ~> balance

      1 to numberOfConnections map { _ =>
        balance ~> StreamTcp().outgoingConnection(new InetSocketAddress("localhost", 7000)).flow ~> merge
      }

      merge ~> UndefinedSink("out")
    } toFlow (UndefinedSource("in"), UndefinedSink("out"))
  }

We construct a flow graph that makes use of the junction vertices Balance and Merge, which allow us to fan out the stream to several other streams. For the number of parallel connections we want to support, we create a fan-out flow starting with a Balance vertex, followed by an outgoingConnection flow, which is then merged back with a Merge vertex.

From an API perspective we faced the challenge of how to connect this flow to our IncomingConnection. Almost all flow graph examples take a concrete Source and Sink implementation as a starting point, whereas the IncomingConnection exposes neither a Source nor a Sink; it only accepts a complete flow as input. Consequently, we needed a way to abstract over the Source and Sink that our fan-out flow requires.

The flow graph API offers the PartialFlowGraph class for that, which allows you to work with abstract Sources and Sinks (UndefinedSource and UndefinedSink). It took us quite some time to figure out how they work: simply declaring an UndefinedSource/Sink without a name won't work. It is essential that you give the UndefinedSource/Sink a name, and that it is identical to the one used in the UndefinedSource/Sink passed to the toFlow method. A bit more documentation on this topic would help.

Once the fan-out flow is created, it can be passed to the handleWith method of the IncomingConnection:

val sink = ForeachSink[StreamTcp.IncomingConnection] { connection =>
  println(s"Client connected from: ${connection.remoteAddress}")
  val parallelConnections = 20
  connection.handleWith(parallelFlow(parallelConnections))
}

As a result, this implementation delivers all incoming messages to the endpoint system in parallel while still preserving back-pressure. Mission completed!
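The fan-out-and-merge shape itself is not Akka-specific. As a hypothetical Python analogue (with a thread pool standing in for the pool of outgoing TCP connections, and a toy transformation standing in for the endpoint), balancing messages over N parallel "connections" and merging the responses back looks like this:

```python
from concurrent.futures import ThreadPoolExecutor

def forward(message):
    # Stand-in for StreamTcp().outgoingConnection(...): pretend the
    # endpoint echoes the message back upper-cased.
    return message.upper()

number_of_connections = 20
messages = ["msg%d" % i for i in range(100)]

# Balance: the executor hands each message to a free worker;
# Merge: map collects the responses back into a single stream,
# preserving the input order.
with ThreadPoolExecutor(max_workers=number_of_connections) as pool:
    responses = list(pool.map(forward, messages))

print(responses[0])  # MSG0
```

Unlike the Akka version, a plain thread pool gives you parallelism but no back pressure; that is exactly the part the streams library adds for free.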

Testing the Application

To test our solution we wrote two helper applications:

  • A blocking client that pumps as many messages as possible into a socket connection to the parallel reverse proxy
  • A server that delays responses with a configurable latency in order to mimic a slow endpoint. The parallel reverse proxy forwards messages via one of its connections to this endpoint.

The following chart depicts the increase in throughput as the number of connections increases. Due to the nondeterministic concurrent behaviour there are some spikes in the results, but the trend shows a clear correlation between throughput and the number of connections:


End-to-end solution

The end-to-end solution can be found here.
By changing the numberOfConnections variable you can see the impact on performance yourself.

Check it out! ...and go with the flow ;-)

Information about TCP back pressure with Akka Streams

At the time of this writing there was not much information available about Akka Streams, as it is one of the newest toys of the Typesafe factory. Here are some valuable resources that helped us get started:

Efficient Game Textures with Hardware Compression

Android Developers Blog - Tue, 01/13/2015 - 20:43

Posted by Shanee Nishry, Developer Advocate

As you may know, high resolution textures contribute to better graphics and a more impressive game experience. Adaptive Scalable Texture Compression (ASTC) helps solve many of the challenges involved, including reducing memory footprint and loading time, and can even increase performance and battery life.

If you have a lot of textures, you are probably already compressing them. Unfortunately, not all compression algorithms are made equal. PNG, JPG and other common formats are not GPU friendly. Some of the highest-quality algorithms today are proprietary and limited to certain GPUs. Until recently, the only broadly supported GPU accelerated formats were relatively primitive and produced poor results.

With the introduction of ASTC, a new compression technique invented by ARM and standardized by the Khronos group, we expect to see dramatic changes for the better. ASTC promises to be both high quality and broadly supported by future Android devices. But until devices with ASTC support become widely available, it’s important to understand the variety of legacy formats that exist today.

We will examine preferable compression formats which are supported on the GPU to help you reduce .apk size and loading times of your game.

Texture Compression

Popular compressed formats include PNG and JPG, which can’t be decoded directly by the GPU. As a consequence, they need to be decompressed before copying them to the GPU memory. Decompressing the textures takes time and leads to increased loading times.

A better option is to use hardware accelerated formats. These formats are lossy but have the advantage of being designed for the GPU.

This means they do not need to be decompressed before being copied, which results in decreased loading times for the player and may even lead to increased performance due to hardware optimizations.

Hardware Accelerated Formats

Hardware accelerated formats have many benefits. As mentioned before, they help improve loading times and the runtime memory footprint.

Additionally, these formats help improve performance, battery life and reduce heating of the device, requiring less bandwidth while also consuming less energy.

There are two categories of hardware accelerated formats, standard and proprietary. This table shows the standard formats:

  • ETC1: Supported on all Android devices with OpenGL ES 2.0 and above. Does not support alpha channel.
  • ETC2: Requires OpenGL ES 3.0 and above.
  • ASTC: Higher quality than ETC1 and ETC2. Supported with the Android Extension Pack.

As you can see, with higher OpenGL support you gain access to better formats. There are proprietary formats to replace ETC1, delivering higher quality and alpha channel support. These are shown in the following table:

  • ATC: Available with Adreno GPUs.
  • PVRTC: Available with PowerVR GPUs.
  • DXT1: S3 DXT1 texture compression. Supported on devices running the Nvidia Tegra platform.
  • S3TC: S3 texture compression, nonspecific to DXT variant. Supported on devices running the Nvidia Tegra platform.

That’s a lot of formats, revealing a different problem. How do you choose which format to use?

To best support all devices you need to create multiple apks using different texture formats. The Google Play developer console allows you to add multiple apks and will deliver the right one to the user based on their device. For more information check this page.

When a device only supports OpenGL ES 2.0, it is recommended to use a proprietary format to get the best results possible; this means making an apk for each hardware variant.

On devices with access to OpenGL ES 3.0 you can use ETC2. The GL_COMPRESSED_RGBA8_ETC2_EAC format is an improved version of ETC1 with added alpha support.

The best case is when the device supports the Android Extension Pack. Then you should use the ASTC format which has better quality and is more efficient than the other formats.

Adaptive Scalable Texture Compression (ASTC)

The Android Extension Pack has ASTC as a standard format, removing the need to have different formats for different devices.

In addition to being supported on modern hardware, ASTC also offers improved quality over other GPU formats by having full alpha support and better quality preservation.

ASTC is a block based texture compression algorithm developed by ARM. It offers multiple block footprints and bitrate options to lower the size of the final texture. The higher the block footprint, the smaller the final file but possibly more quality loss.
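Since every ASTC block is encoded in a fixed 128 bits regardless of its footprint, the bitrate follows directly from the block size. A quick back-of-the-envelope calculation (Python, for illustration):

```python
def astc_bits_per_pixel(block_w, block_h):
    # Every ASTC block occupies exactly 128 bits, whatever its footprint,
    # so a larger footprint spreads those bits over more pixels.
    return 128 / (block_w * block_h)

for w, h in [(4, 4), (6, 6), (8, 8)]:
    print("%dx%d -> %.2f bpp" % (w, h, astc_bits_per_pixel(w, h)))
# 4x4 -> 8.00 bpp, 6x6 -> 3.56 bpp, 8x8 -> 2.00 bpp
```

Compared to uncompressed RGBA8 at 32 bits per pixel, even the highest-quality 4x4 footprint is a 4:1 saving.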

Note that some images compress better than others. Images with similar neighboring pixels tend to have better quality compared to images with vastly different neighboring pixels.

Let’s examine a texture to better understand ASTC:

This bitmap is 1.1MB uncompressed and 299KB when compressed as PNG.

Compressing the Android jellybean jar texture into ASTC through the Mali GPU Texture Compression Tool yields the following results.

  Block Footprint    4x4      6x6      8x8
  Memory             262KB    119KB    70KB

  (For each footprint the original post also showed the image output, a difference map and a 5x enhanced difference map.)

As you can see, the highest quality (4x4) bitrate for ASTC already gains over PNG in memory size. Unlike PNG, this gain stays even after copying the image to the GPU.

The tradeoff comes in the detail, so it is important to carefully examine textures when compressing them to see how much compression is acceptable.


Using hardware accelerated textures in your games will help you reduce the size of your .apk, runtime memory use as well as loading times.

Improve performance on a wider range of devices by uploading multiple apks with different GPU texture formats and declaring the texture type in the AndroidManifest.xml.

If you are aiming for high end devices, make sure to use ASTC which is included in the Android Extension Pack.

Join the discussion on +Android Developers
Categories: Programming

Python: Counter – ValueError: too many values to unpack

Mark Needham - Tue, 01/13/2015 - 00:16

I recently came across Python’s Counter tool which makes it really easy to count the number of occurrences of items in a list.

In my case I was trying to work out how many times words occurred in a corpus so I had something like the following:

>>> from collections import Counter
>>> counter = Counter(["word1", "word2", "word3", "word1"])
>>> print counter
Counter({'word1': 2, 'word3': 1, 'word2': 1})

I wanted to write a for loop to iterate over the counter and print the (key, value) pairs and started with the following:

>>> for key, value in counter:
...   print key, value
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: too many values to unpack

I’m not sure why I expected this to work, but in fact since Counter is a subclass of dict we need to call iteritems to get an iterator of (key, value) pairs rather than just the keys.

The following does the job:

>>> for key, value in counter.iteritems():
...   print key, value
word1 2
word3 1
word2 1
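As an aside, iteritems is Python 2 only; in Python 3 the equivalent is items(), and Counter's own most_common() is often what you want anyway, since it also sorts by count. A quick sketch:

```python
from collections import Counter

counter = Counter(["word1", "word2", "word3", "word1"])

# Python 3 spelling of the same loop:
for key, value in counter.items():
    print(key, value)

# most_common() returns (key, value) pairs sorted by descending count:
print(counter.most_common(1))  # [('word1', 2)]
```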

Hopefully future Mark will remember this!

Categories: Programming

How Taiseer Joudeh Built an Incredibly Successful Technical Blog in 14 Months

Making the Complex Simple - John Sonmez - Mon, 01/12/2015 - 16:30

I recently had the opportunity to interview the very prolific and friendly Taiseer Joudeh. He first caught my attention when I saw he had mentioned me on Twitter, thanking me for the help in getting his Microsoft MVP and mentioning me in his blog post. John @jsonmez Thanks for all the inspiration and motivation you blog about! I’m MVP now. Special ... Read More

The post How Taiseer Joudeh Built an Incredibly Successful Technical Blog in 14 Months appeared first on Simple Programmer.

Categories: Programming

MVVM and Threading

Actively Lazy - Sun, 01/11/2015 - 21:01

The Model-View-ViewModel pattern is a very powerful design pattern when building WPF applications, even if I’m not sure everyone interprets it the same way. However, it’s never been clear to me how to easily manage multi-threaded WPF applications: writing multi-threaded code is hard and there seems to be no real support baked into WPF or the idea of MVVM to make multi-threaded code easier to get right.

The Problem

All WPF applications are effectively bound by the same three constraints:

  1. All interaction with UI elements must be done on the UI thread
  2. All long-running tasks (web service calls etc) should be done on a background thread to keep the UI responsive
  3. Repeatedly switching between threads is expensive

Bound by these constraints it means that all WPF code has two types of thread running through it: UI threads and background threads. It is clearly important to know which type of thread will be executing any given line of code: a background thread cannot interact with UI elements and UI threads should not make expensive calls.


A very brief, contrived example might help. The source code for this is available on github.

Here is a viewmodel for a very simple view:

class ViewModel
{
  private readonly Model m_model = new Model();

  public ObservableCollection<string> Items { get; private set; }
  public ICommand FetchCommand { get; private set; }

  public async void Fetch()
  {
    var items = await m_model.Fetch();
    foreach (var item in items)
      Items.Add(item);
  }
}

The ViewModel exposes a command, which calls ViewModel.Fetch() to retrieve some data from the model; once retrieved this data is added to the list of displayed items. Since Fetch is called by an ICommand and interacts with an ObservableCollection (bound to the view) it clearly runs on the UI thread.

Our model is then responsible for fetching the data requested by the viewmodel:

class Model
{
  public async Task<IList<string>> Fetch()
  {
    return await Task.Run(() => DoFetch());
  }

  private IList<string> DoFetch()
  {
    return new[] { "Hello" };
  }
}

In a real application DoFetch() would obviously call a database or web service or whatever is required; it is also probable that the Fetch() method might be more complex, e.g. coordinating multiple service calls or managing caching or other application logic.

There is a difference in threading in the model compared to the viewmodel: the Fetch method will, in our example, be called on the UI thread whereas DoFetch will be called on a background thread. Here we have a class through which may run two separate types of thread.

In this very simple example it is obvious which thread type calls each method. But scale this up to a real application with dozens of classes and many methods and it becomes far less obvious. It suddenly becomes very easy to add a long-running service call to a method which runs on the UI thread; or a switch to a background thread from a method that already runs on a background thread. These errors can be difficult to spot: neither causes an obvious failure. The first will only show if the service call is obviously slow; the observed behaviour may simply be a UI that intermittently pauses for no apparent reason. The second simply slows tasks down; the application will seem slower than it ought to be with no obvious indication of why.

It seems as though WPF and MVVM have carefully guided us into a multi-threaded minefield.

First Approach

The first idea is to simply apply a naming convention, each method is suffixed with _UI or _Worker to indicate which type of thread invoked it. E.g. our model class becomes:

class Model
{
  public async Task<IList<string>> Fetch_UI()
  {
    return await Task.Run(() => Fetch_Worker());
  }

  private IList<string> Fetch_Worker()
  {
    return new[] { "Hello" };
  }
}

This at least makes it obvious to my future self which type of thread I think executes each method. Simple code inspection shows that a _UI method calling someWebService.Invoke(…) is a Bad Thing. Similarly, Task.Run(…) in a _Worker method is obviously suspect. However, it looks ugly and isn’t guaranteed to be correct – it is just a naming convention, nothing stops me calling a _UI method from a background thread, or vice versa.

Introducing PostSharp

If you haven’t tried PostSharp, the C# AOP library, it is definitely worth a look. Even the free version is quite powerful and allows us to evolve the first idea into something more powerful.

PostSharp allows you to create an attribute which will introduce “advice” (i.e. code to run) around any method annotated with the attribute. For example, we can annotate the ViewModel constructor with a new UIThreadPolicy attribute:

  [UIThreadPolicy]
  public ViewModel()
  {
    Items = new ObservableCollection<string>();
    FetchCommand = new FetchCommand(this);
  }

This attribute is logically similar to using a suffix on the method name, in that it documents our intention. However, by using AOP it also allows us to introduce code to run before the method executes:

class UIThreadPolicy : OnMethodBoundaryAspect
{
  public override void OnEntry(MethodExecutionArgs args)
  {
    if (Thread.CurrentThread.IsBackground)
      Console.WriteLine("*** Thread policy warning: \n" + Environment.StackTrace);
  }
}

The OnEntry method will be triggered before the annotated method is invoked. If the method is ever invoked on a background thread, we’ll output a warning to the console. In this rudimentary way, we not only document our intention that the ViewModel should only be created on a UI thread, we also enforce it at runtime to ensure that it remains correct.

We can define another attribute, WorkerThreadPolicy, to enforce the reverse: that a method is only invoked on a background thread. With discipline one of these two attributes can be applied to every method. This makes it trivial when making changes in months to come to know whether a given method runs on the UI thread or a background thread.
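The same idea translates to other stacks too. As a hypothetical illustration in Python (not PostSharp, just a plain decorator), the entry check behind UIThreadPolicy looks roughly like this, with the main thread standing in for the UI thread:

```python
import functools
import threading

# Treat the main thread as the "UI thread" (an assumption for this sketch;
# real UI frameworks expose their own dispatcher thread instead).
_ui_thread = threading.main_thread()

def ui_thread_policy(fn):
    """Warn when the decorated function is entered from a non-UI thread."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        if threading.current_thread() is not _ui_thread:
            print("*** Thread policy warning: %s called off the UI thread" % fn.__name__)
        return fn(*args, **kwargs)
    return wrapper

@ui_thread_policy
def fetch():
    return ["Hello"]

print(fetch())  # no warning when called from the main thread
```

As with the PostSharp aspect, the decorator only documents and reports the intended thread affinity at runtime; it does not marshal the call onto the right thread.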


Understanding situations where multiple threads access code is hard, so wouldn’t it be great if we could easily identify situations where it’s safe to ignore it?

By using the thread attributes, we can identify situations where code is only accessed by the UI thread. In this case, we have no concurrency to deal with. There is exactly one UI thread so we can ignore any concurrency concerns. For example, our simple Fetch() method above can add a property to keep track of whether we’re already busy:

  public async void Fetch()
  {
    IsFetching = true;
    var items = await m_model.Fetch();
    foreach (var item in items)
      Items.Add(item);
    IsFetching = false;
  }

So long as all access to the IsFetching property is on the UI thread, there is no need to worry about locking. We can enforce this by adding attributes to the property itself, too:

  private bool m_isFetching;

  internal bool IsFetching
  {
    [UIThreadPolicy]
    get { return m_isFetching; }
    [UIThreadPolicy]
    set
    {
      m_isFetching = value;
      IsFetchingChanged(this, new EventArgs());
    }
  }

Here we use a simple, unlocked bool – knowing that the only access is from a single thread. Without these attributes, it is possible that someone in the future writes to IsFetching from a background thread. It will generally work – but access from the UI thread could continue to see a stale value for a short period.


In general we have aimed for a pattern where ViewModels are only accessed on the UI thread. Model classes, however, tend to have a mixture. And since the “model” in any non-trivial application actually includes dozens of classes this mixture of threads permeates the code. By using these attributes it is possible to understand which threads are used where without exhaustively searching up call stacks.

Writing multi-threaded code is hard: but, with a bit of PostSharp magic, knowing which type of thread will execute any given line of code at least makes it a little bit easier.

Categories: Programming, Testing & QA

Python: scikit-learn: ImportError: cannot import name __check_build

Mark Needham - Sat, 01/10/2015 - 09:48

In part 3 of Kaggle’s series on text analytics I needed to install scikit-learn and having done so ran into the following error when trying to use one of its classes:

>>> from sklearn.feature_extraction.text import CountVectorizer
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/markneedham/projects/neo4j-himym/himym/lib/python2.7/site-packages/sklearn/", line 37, in <module>
    from . import __check_build
ImportError: cannot import name __check_build

This error doesn’t reveal very much but I found that when I exited the REPL and tried the same command again I got a different error which was a bit more useful:

>>> from sklearn.feature_extraction.text import CountVectorizer
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/markneedham/projects/neo4j-himym/himym/lib/python2.7/site-packages/sklearn/", line 38, in <module>
    from .base import clone
  File "/Users/markneedham/projects/neo4j-himym/himym/lib/python2.7/site-packages/sklearn/", line 10, in <module>
    from scipy import sparse
ImportError: No module named scipy

The fix for this is now obvious:

$ pip install scipy

And I can now load CountVectorizer without any problem:

$ python
Python 2.7.5 (default, Aug 25 2013, 00:04:04)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from sklearn.feature_extraction.text import CountVectorizer
Categories: Programming

Python: gensim – clang: error: unknown argument: ‘-mno-fused-madd’ [-Wunused-command-line-argument-hard-error-in-future]

Mark Needham - Sat, 01/10/2015 - 09:39

While working through part 2 of Kaggle’s bag of words tutorial I needed to install the gensim library and initially ran into the following error:

$ pip install gensim
cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe -I/Users/markneedham/projects/neo4j-himym/himym/build/gensim/gensim/models -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -I/Users/markneedham/projects/neo4j-himym/himym/lib/python2.7/site-packages/numpy/core/include -c ./gensim/models/word2vec_inner.c -o build/temp.macosx-10.9-intel-2.7/./gensim/models/word2vec_inner.o
clang: error: unknown argument: '-mno-fused-madd' [-Wunused-command-line-argument-hard-error-in-future]
clang: note: this will be a hard error (cannot be downgraded to a warning) in the future
command 'cc' failed with exit status 1
an integer is required
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/Users/markneedham/projects/neo4j-himym/himym/build/gensim/", line 166, in <module>
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/", line 152, in setup
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/", line 953, in run_commands
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/", line 972, in run_command
  File "/Users/markneedham/projects/neo4j-himym/himym/lib/python2.7/site-packages/setuptools/command/", line 59, in run
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/command/", line 573, in run
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/", line 326, in run_command
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/", line 972, in run_command
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/command/", line 127, in run
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/", line 326, in run_command
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/", line 972, in run_command
  File "/Users/markneedham/projects/neo4j-himym/himym/build/gensim/", line 71, in run
    "There was an issue with your platform configuration - see above.")
TypeError: an integer is required
Cleaning up...
Command /Users/markneedham/projects/neo4j-himym/himym/bin/python -c "import setuptools, tokenize;__file__='/Users/markneedham/projects/neo4j-himym/himym/build/gensim/';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/sb/6zb6j_7n6bz1jhhplc7c41n00000gn/T/pip-i8aeKR-record/install-record.txt --single-version-externally-managed --compile --install-headers /Users/markneedham/projects/neo4j-himym/himym/include/site/python2.7 failed with error code 1 in /Users/markneedham/projects/neo4j-himym/himym/build/gensim
Storing debug log for failure in /Users/markneedham/.pip/pip.log

The exception didn’t make much sense to me but I came across a blog post which explained it:

The Apple LLVM compiler in Xcode 5.1 treats unrecognized command-line options as errors. This issue has been seen when building both Python native extensions and Ruby Gems, where some invalid compiler options are currently specified.

The author suggests this only became a problem with XCode 5.1 so I’m surprised I hadn’t come across it sooner since I haven’t upgraded XCode in a long time.

We can work around the problem by telling the compiler to treat extra command line arguments as a warning rather than an error:

export ARCHFLAGS=-Wno-error=unused-command-line-argument-hard-error-in-future

Now it installs with no problems.

Categories: Programming

Python NLTK/Neo4j: Analysing the transcripts of How I Met Your Mother

Mark Needham - Sat, 01/10/2015 - 02:22

After reading Emil’s blog post about dark data a few weeks ago I became intrigued about trying to find some structure in free text data and I thought How I met your mother’s transcripts would be a good place to start.

I found a website which has the transcripts for all the episodes and then having manually downloaded the two pages which listed all the episodes, wrote a script to grab each of the transcripts so I could use them on my machine.

I wanted to learn a bit of Python and my colleague Nigel pointed me towards the requests and BeautifulSoup libraries to help me with my task. The script to grab the transcripts looks like this:

import requests
from bs4 import BeautifulSoup
from soupselect import select
episodes = {}
for i in range(1,3):
    page = open("data/transcripts/page-" + str(i) + ".html", 'r')
    soup = BeautifulSoup(page.read())
    for row in select(soup, "td.topic-titles a"):
        parts = row.text.split(" - ")
        episodes[parts[0]] = {"title": parts[1], "link": row.get("href")}
for key, value in episodes.iteritems():
    parts = key.split("x")
    season = int(parts[0])
    episode = int(parts[1])
    filename = "data/transcripts/S%d-Ep%d" %(season, episode)
    print filename
    with open(filename, 'wb') as handle:
        headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}
        response = requests.get("" + value["link"], headers = headers)
        if response.ok:
            for block in response.iter_content(1024):
                if not block:
                    break
                handle.write(block)

The files containing the lists of episodes are named ‘page-1’ and ‘page-2’.

The code is reasonably simple – we find all the links inside the table, put them in a dictionary and then iterate through the dictionary and download the files to disk. The code to save the file is a bit of a monstrosity but there didn’t seem to be a ‘save’ method that I could use.
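For what it's worth, when the transcript fits comfortably in memory there is a shorter way to write the save, since requests exposes the full body as `response.content`. A minimal sketch (the function name is my own):

```python
# Simpler alternative to the block-by-block loop, assuming the transcript
# fits in memory: `response.content` holds the full response body as bytes.
def save_response(response, filename):
    with open(filename, 'wb') as handle:
        handle.write(response.content)
```

The streaming `iter_content` approach in the script is still the right choice for large downloads; this version just trades that for brevity.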

Having downloaded the files, I thought through all sorts of clever things I could do, including generating a bag of words model for each episode or performing sentiment analysis on each sentence which I’d learnt about from a Kaggle tutorial.

In the end I decided to start simple and extract all the words from the transcripts and count how many times a word occurred in a given episode.

I ended up with the following script which created a dictionary of (episode -> words + occurrences):

import csv
import nltk
import re
from bs4 import BeautifulSoup
from soupselect import select
from nltk.corpus import stopwords
from collections import Counter
from nltk.tokenize import word_tokenize
def count_words(words):
    tally = Counter()
    for elem in words:
        tally[elem] += 1
    return tally
episodes_dict = {}
with open('data/import/episodes.csv', 'r') as episodes:
    reader = csv.reader(episodes, delimiter=',')
    for row in reader:
        print row
        transcript = open("data/transcripts/S%s-Ep%s" %(row[3], row[1])).read()
        soup = BeautifulSoup(transcript)
        rows = select(soup, "table.tablebg tr div.postbody")
        raw_text = rows[0]
        [ad.extract() for ad in select(raw_text, "")]
        [ad.extract() for ad in select(raw_text, "div.t-foot-links")]
        text = re.sub("[^a-zA-Z]", " ", raw_text.text.strip())
        words = [w for w in nltk.word_tokenize(text) if not w.lower() in stopwords.words("english")]
        episodes_dict[row[0]] = count_words(words)
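As an aside, the tallying in count_words is exactly what collections.Counter (already imported above) does by itself:

```python
from collections import Counter

# Counter builds the word -> occurrences mapping directly from a token list.
tokens = ["ted", "robin", "ted", "umbrella"]
tally = Counter(tokens)
print(tally["ted"])  # 2
```

So `count_words(words)` could be replaced with `Counter(words)` with no change in behaviour.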

Next I wanted to explore the data a bit to see which words occurred across episodes or which word occurred most frequently and realised that this would be a much easier task if I stored the data somewhere.

s/somewhere/in Neo4j

Neo4j’s query language, Cypher, has a really nice ETL-esque tool called ‘LOAD CSV’ for loading in CSV files (as the name suggests!) so I added some code to save my words to disk:

with open("data/import/words.csv", "w") as words:
    writer = csv.writer(words, delimiter=",")
    writer.writerow(["EpisodeId", "Word", "Occurrences"])
    for episode_id, words in episodes_dict.iteritems():
        for word in words:
            writer.writerow([episode_id, word, words[word]])
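A self-contained version of that write loop (Python 3 here, with a tiny made-up dictionary standing in for episodes_dict) shows the row layout:

```python
import csv
import io

# Hypothetical data standing in for the real episodes_dict.
episodes_dict = {1: {"Ted": 3, "umbrella": 1}}

out = io.StringIO()
writer = csv.writer(out, delimiter=",")
writer.writerow(["EpisodeId", "Word", "Occurrences"])
for episode_id, words in episodes_dict.items():
    for word, count in words.items():
        writer.writerow([episode_id, word, count])

print(out.getvalue())
```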

This is what the CSV file contents look like:

$ head -n 10 data/import/words.csv

Now we need to write some Cypher to get the data into Neo4j:

// words
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-himym/data/import/words.csv" AS row
MERGE (word:Word {value: row.Word})
// episodes
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-himym/data/import/words.csv" AS row
MERGE (episode:Episode {id: TOINT(row.EpisodeId)})
// words to episodes
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-himym/data/import/words.csv" AS row
MATCH (word:Word {value: row.Word})
MATCH (episode:Episode {id: TOINT(row.EpisodeId)})
MERGE (word)-[:USED_IN_EPISODE {times: TOINT(row.Occurrences) }]->(episode);

Having done that we can write some simple queries to explore the words used in How I met your mother:

MATCH (word:Word)-[r:USED_IN_EPISODE]->(episode) 
RETURN word.value, COUNT(episode) AS episodes, SUM(r.times) AS occurrences
ORDER BY occurrences DESC
LIMIT 10
==> +-------------------------------------+
==> | word.value | episodes | occurrences |
==> +-------------------------------------+
==> | "Ted"      | 207      | 11437       |
==> | "Barney"   | 208      | 8052        |
==> | "Marshall" | 208      | 7236        |
==> | "Robin"    | 205      | 6626        |
==> | "Lily"     | 207      | 6330        |
==> | "m"        | 208      | 4777        |
==> | "re"       | 208      | 4097        |
==> | "know"     | 208      | 3489        |
==> | "Oh"       | 197      | 3448        |
==> | "like"     | 208      | 2498        |
==> +-------------------------------------+
==> 10 rows

The main 5 characters occupy the top 5 positions, which is probably what you’d expect. I’m not sure why ‘m’ and ‘re’ are in the next two positions – I expect that might be scraping gone wrong!

Our next query might focus around checking which character is referred to the most in each episode:

WITH ["Ted", "Barney", "Robin", "Lily", "Marshall"] as mainCharacters
MATCH (word:Word) WHERE word.value IN mainCharacters
MATCH (episode:Episode)<-[r:USED_IN_EPISODE]-(word)
WITH episode, word, r
ORDER BY episode.id, r.times DESC
WITH episode, COLLECT({word: word.value, times: r.times})[0] AS topWord
RETURN episode.id, topWord.word AS word, topWord.times AS occurrences
LIMIT 10
==> +---------------------------------------+
==> | episode.id | word       | occurrences |
==> +---------------------------------------+
==> | 72         | "Barney"   | 75          |
==> | 143        | "Ted"      | 16          |
==> | 43         | "Lily"     | 74          |
==> | 156        | "Ted"      | 12          |
==> | 206        | "Barney"   | 23          |
==> | 50         | "Marshall" | 51          |
==> | 113        | "Ted"      | 76          |
==> | 178        | "Barney"   | 21          |
==> | 182        | "Barney"   | 22          |
==> | 67         | "Ted"      | 84          |
==> +---------------------------------------+
==> 10 rows

If we dig into it further there’s actually quite a bit of variety in the number of times the top character in each episode is mentioned which again probably says something about the data:

WITH ["Ted", "Barney", "Robin", "Lily", "Marshall"] as mainCharacters
MATCH (word:Word) WHERE word.value IN mainCharacters
MATCH (episode:Episode)<-[r:USED_IN_EPISODE]-(word)
WITH episode, word, r
ORDER BY episode.id, r.times DESC
WITH episode, COLLECT({word: word.value, times: r.times})[0] AS topWord
RETURN MIN(topWord.times), MAX(topWord.times), AVG(topWord.times), STDEV(topWord.times)
==> +-------------------------------------------------------------------------------------+
==> | MIN(topWord.times) | MAX(topWord.times) | AVG(topWord.times) | STDEV(topWord.times) |
==> +-------------------------------------------------------------------------------------+
==> | 3                  | 259                | 63.90865384615385  | 42.36255207691068    |
==> +-------------------------------------------------------------------------------------+
==> 1 row

Obviously this is a very simple way of deriving structure from text; here are some of the things I want to try out next:

  • Detecting common phrases/memes used in the show (e.g. the yellow umbrella) – this should be possible by creating different length n-grams and then searching for those phrases across the corpus.
  • Pull out scenes – some of the transcripts use the keyword ‘scene’ to denote this although some of them don’t. Depending on how many transcripts contain scene demarcations perhaps we could train a classifier to detect where scenes should be in the transcripts which don’t have scenes.
  • Analyse who talks to each other or who talks about each other most frequently
  • Create a graph of conversations as my colleagues Max and Michael have previously blogged about.
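The n-gram idea in the first bullet only takes a few lines to sketch (nltk also ships a helper for this, but plain slicing shows the mechanics):

```python
def ngrams(tokens, n):
    # Slide a window of width n across the token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

print(ngrams(["the", "yellow", "umbrella"], 2))
# [('the', 'yellow'), ('yellow', 'umbrella')]
```

Counting these tuples across episodes, the same way the single words were counted, would surface recurring phrases.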
Categories: Programming

The God Login

Coding Horror - Jeff Atwood - Fri, 01/09/2015 - 12:32

I graduated with a Computer Science minor from the University of Virginia in 1992. The reason it's a minor and not a major is because to major in CS at UVa you had to go through the Engineering School, and I was absolutely not cut out for that kind of hardcore math and physics, to put it mildly. The beauty of a minor was that I could cherry pick all the cool CS classes and skip everything else.

One of my favorite classes, the one I remember the most, was Algorithms. I always told people my Algorithms class was the one part of my college education that influenced me most as a programmer. I wasn't sure exactly why, but a few years ago I had a hunch so I looked up a certain CV and realized that Randy Pausch – yes, the Last Lecture Randy Pausch – taught that class. The timing is perfect: University of Virginia, Fall 1991, CS461 Analysis of Algorithms, 50 students.

I was one of them.

No wonder I was so impressed. Pausch was an incredible, charismatic teacher, a testament to the old adage that you should choose your teacher first and the class material second, if you bother to at all. It's so true.

In this case, the combination of great teacher and great topic was extra potent, as algorithms are central to what programmers do. Not that we invent new algorithms, but we need to understand the code that's out there, grok why it tends to be fast or slow due to the tradeoffs chosen, and choose the correct algorithms for what we're doing. That's essential.

And one of the coolest things Mr. Pausch ever taught me was to ask this question:

What's the God algorithm for this?

Well, when sorting a list, obviously God wouldn't bother with a stupid Bubble Sort or Quick Sort or Shell Sort like us mere mortals, God would just immediately place the items in the correct order. Bam. One step. The ultimate lower bound on computation, O(1). Not just fixed time, either, but literally one instantaneous step, because you're freakin' God.

This kind of blew my mind at the time.

I always suspected that programmers became programmers because they got to play God with the little universe boxes on their desks. Randy Pausch took that conceit and turned it into a really useful way of setting boundaries and asking yourself hard questions about what you're doing and why.

So when we set out to build a login dialog for Discourse, I went back to what I learned in my Algorithms class and asked myself:

How would God build this login dialog?

And the answer is, of course, God wouldn't bother to build a login dialog at all. Every user would already be logged into GodApp the second they loaded the page because God knows who they are. Authoritatively, even.

This is obviously impossible for us, because God isn't one of our investors.

But.. how close can we get to the perfect godlike login experience in Discourse? That's a noble and worthy goal.

Wasn't it Bill Gates who once asked why the hell every programmer was writing the same File Open dialogs over and over? It sure feels that way for login dialogs. I've been saying for a long time that the best login is no login at all and I'm a staunch supporter of logging in with your Internet Driver's license whenever possible. So we absolutely support that, if you've configured it.

But today I want to focus on the core, basic login experience: user and password. That's the default until you configure up the other methods of login.

A login form with two fields, two buttons, and a link on it seems simple, right? Bog standard. It is, until you consider all the ways the simple act of logging in with those two fields can go wrong for the user. Let's think.

Let the user enter an email to log in

The critical fault of OpenID, as much as I liked it as an early login solution, was its assumption that users could accept a URL as their "identity". This is flat out crazy, and in the long run this central flawed assumption in OpenID broke it as a future standard.

User identity is always email, plain and simple. What happens when you forget your password? You get an email, right? Thus, email is your identity. Some people even propose using email as the only login method.

It's fine to have a username, of course, but always let users log in with either their username or their email address. Because I can tell you with 100% certainty that when those users forget their password, and they will, all the time, they'll need that email anyway to get a password reset. Email and password are strongly related concepts and they belong together. Always!

(And a fie upon services that don't allow me to use my email as a username or login. I'm looking at you, Comixology.)

Tell the user when their email doesn't exist

OK, so we know that email is de-facto identity for most people, and this is a logical and necessary state of affairs. But which of my 10 email addresses did I use to log into your site?

This was the source of a long discussion at Discourse about whether it made sense to reveal to the user, when they enter an email address in the "forgot password" form, whether we have that email address on file. On many websites, here's the sort of message you'll see after entering an email address in the forgot password form:

If an account matches, you should receive an email with instructions on how to reset your password shortly.

Note the coy "if" there, which is a hedge against all the security implications of revealing whether a given email address exists on the site just by typing it into the forgot password form.

We're deadly serious about picking safe defaults for Discourse, so out of the box you won't get exploited or abused or overrun with spammers. But after experiencing the real world "which email did we use here again?" login state on dozens of Discourse instances ourselves, we realized that, in this specific case, being user friendly is way more important than being secure.

The new default is to let people know when they've entered an email we don't recognize in the forgot password form. This will save their sanity, and yours. You can turn on the extra security of being coy about this, if you need it, via a site setting.

Let the user switch between Log In and Sign Up any time

Many websites have started to show login and signup buttons side by side. This perplexed me; aren't the acts of logging in and signing up very different things?

Well, from the user's perspective, they don't appear to be. This Verge login dialog illustrates just how close the sign up and log in forms really are. Check out this animated GIF of it in action.

We've acknowledged that similarity by having either form accessible at any time from the two buttons at the bottom of the form, as a toggle:

And both can be kicked off directly from any page via the Sign Up and Log In buttons at the top right:

Pick common words

That's the problem with language, we have so many words for these concepts:

  • Sign In
  • Log In
  • Sign Up
  • Register
  • Join <site>
  • Create Account
  • Get Started
  • Subscribe

Which are the "right" ones? User research data isn't conclusive.

I tend to favor the shorter versions when possible, mostly because I'm a fan of the whole brevity thing, but there are valid cases to be made for each depending on the circumstances and user preferences.

Sign In may be slightly more common, though Log In has some nautical and historical computing basis that makes it worthy:

A couple of years ago I did a survey of top websites in the US and UK and whether they used “sign in”, “log in”, “login”, “log on”, or some other variant. The answer at the time seemed to be that if you combined “log in” and “login”, it exceeded “sign in”, but not by much. I’ve also noticed that the trend toward “sign in” is increasing, especially with the most popular services. Facebook seems to be a “log in” hold-out.

Work with browser password managers

Every login dialog you create should be tested to work with the default password managers in …

At an absolute minimum. Upon subsequent logins in that browser, you should see the username and password autofilled.

Users rely on these default password managers built into the browsers they use, and any proper modern login form should respect that, and be designed sensibly, e.g. the password field should have type="password" in the HTML and a name that's readily identifiable as a password entry field.

There's also LastPass and so forth, but I generally assume if the login dialog works with the built in browser password managers, it will work with third party utilities, too.

Handle common user mistakes

Oops, the user is typing their password with caps lock on? You should let them know about that.

Oops, the user entered their email as instead of Or instead of You should either fix typos in common email domains for them, or let them know about that.
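A minimal sketch of the domain-typo check (the misspellings listed are hypothetical examples; production libraries such as mailcheck.js use edit distance against a list of popular domains):

```python
# Hypothetical mapping of frequent misspellings to their intended domain.
COMMON_DOMAIN_FIXES = {
    "gmal.com": "gmail.com",
    "hotmial.com": "hotmail.com",
}

def suggest_email(address):
    # Return a corrected address to suggest to the user, or None.
    local, _, domain = address.partition("@")
    fixed = COMMON_DOMAIN_FIXES.get(domain.lower())
    return local + "@" + fixed if fixed else None

print(suggest_email("ted@gmal.com"))   # ted@gmail.com
print(suggest_email("ted@gmail.com"))  # None
```

Whether you silently fix or merely suggest is a UX call; suggesting is safer, since some of these "typo" domains really exist.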

(I'm also a big fan of native browser "reveal password" support for the password field, so the user can verify that she typed in or autofilled the password she expects. Only Internet Explorer and I think Safari offer this, but all browsers should.)

Help users choose better passwords

There are many schools of thought on forcing (er, helping) users to choose passwords that aren't unspeakably awful, e.g. password123 and iloveyou and so on.

There's the common password strength meter, which updates in real time as you type in the password field.

It's a clever idea, but it gets awful preachy for my tastes on some sites. The implementation also leaves a lot to be desired, as it's left up to the whims of the site owner to decide what password strength means. One site's "good" is another site's "get outta here with that Fisher-Price toy password". It's frustrating.

So, with Discourse, rather than all that, I decided we'd default on a solid absolute minimum password length of 8 characters, and then verify the password to make sure it is not one of the 10,000 most common known passwords by checking its hash.
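That policy is easy to express. Here's a sketch; the blocklist is a three-entry stand-in for the real 10,000, and it compares plain strings where Discourse's actual check works on hashes:

```python
# Tiny stand-in for the real list of the 10,000 most common passwords.
COMMON_PASSWORDS = {"password123", "iloveyou", "12345678"}

def password_acceptable(candidate):
    # Enforce the absolute minimum length of 8 characters first,
    # then reject anything on the known-common list.
    if len(candidate) < 8:
        return False
    return candidate.lower() not in COMMON_PASSWORDS

print(password_acceptable("iloveyou"))     # False (common)
print(password_acceptable("short"))        # False (too short)
print(password_acceptable("tr0ub4dor&3"))  # True
```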

Don't forget the keyboard

I feel like keyboard users are a dying breed at this point, but for those of us that, when presented with a login dialog, like to rapidly type, tab, p4$$w0rd, enter

please verify that this works as it should. Tab order, enter to submit, etcetera.

Rate limit all the things

You should be rate limiting everything users can do, everywhere, and that's especially true of the login dialog.

If someone forgets their password and makes 3 attempts to log in, or issues 3 forgot password requests, that's probably OK. But if someone makes a thousand attempts to log in, or issues a thousand forgot password requests, that's a little weird. Why, I might even venture to guess they're possibly … not human.

You can do fancy stuff like temporarily disable accounts or start showing a CAPTCHA if there are too many failed login attempts, but this can easily become a griefing vector, so be careful.

I think a nice middle ground is to insert standard pauses of moderately increasing size after repeated sequential failures or repeated sequential forgot password requests from the same IP address. So that's what we do.
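Those standard pauses can be sketched as a per-IP failure counter feeding a capped backoff (the threshold, growth rate, and cap below are all assumptions):

```python
from collections import defaultdict

failures = defaultdict(int)  # sequential failure count per IP

def record_failure(ip):
    failures[ip] += 1

def record_success(ip):
    # A successful login resets the counter for that IP.
    failures.pop(ip, None)

def pause_seconds(ip):
    # No pause for the first few attempts, then 1s, 2s, 4s, ... capped at 30s.
    n = failures[ip]
    if n < 3:
        return 0
    return min(2 ** (n - 3), 30)

for attempt in range(5):
    record_failure("203.0.113.7")
print(pause_seconds("203.0.113.7"))  # 4
```

In a real deployment the counters would live in something shared like Redis rather than process memory, but the shape of the idea is the same.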

Stuff I forgot

I tried to remember everything we went through when we were building our ideal login dialog for Discourse, but I'm sure I forgot something, or could have been more thorough. Remember, Discourse is 100% open source and by definition a work in progress – so as my friend Miguel de Icaza likes to say, when it breaks, you get to keep both halves. Feel free to test out our implementation and give us your feedback in the comments, or point to other examples of great login experiences, or cite other helpful advice.

Logging in involves a simple form with two fields, a link, and two buttons. And yet, after reading all this, I'm sure you'll agree that it's deceptively complex. Your best course of action is not to build a login dialog at all, but instead rely on authentication from an outside source whenever you can.

Like, say, God.

[advertisement] How are you showing off your awesome? Create a Stack Overflow Careers profile and show off all of your hard work from Stack Overflow, Github, and virtually every other coding site. Who knows, you might even get recruited for a great new position!
Categories: Programming

How Much Depth When Answering Interview Questions?

Making the Complex Simple - John Sonmez - Thu, 01/08/2015 - 16:00

How much depth should you go into when answering interview questions? In this video, I talk about how to respond to software development interview questions in a way that will make anyone want to hire you right away.

The post How Much Depth When Answering Interview Questions? appeared first on Simple Programmer.

Categories: Programming

Habits, Dreams, and Goals

I’ve been talking to people in the halls about what they learned about goals from last year, and what they are going to do differently this year.   We’ve had chats about New Years Resolutions, habits, goals, and big dreams. (My theme is Dream Big for 2015.)

Here are a few of the insights that I’ve been sharing with people that really seem to create a lot of clarity:

  1. Dream big first, then create your goals.  Too many people start with goals, but miss the dream that holds everything together.   The dream is the backdrop and it needs to inspire you and pull you forward.  Your dream needs to be actionable and believable, and it needs to reflect your passion and your purpose.
  2. There are three types of actions:  habits, goals, and inspired actions.   Habits can help support our goals and reach our dreams.   Goals are really the above and beyond that we set our sights on and help us funnel and focus our energy to reach meaningful milestones.   They take deliberate focus and intent.  You don’t randomly learn to play the violin with skill.  It takes goals.  Inspired actions are the flashes of insight and moments of brilliance.
  3. People mess up by focusing on goals, but not having any habits that support them.  For example, if I have an extreme fitness goal, but I have the ice-cream habit, I might not reach my goals.  Or, if I want to be an early bird, but I party all night long, or I’m a late-night reader, that might not work out so well.
  4. People mess up on their habits when they have no goals.  They might inch their way forward, but they can easily spend an entire year, and not actually have anything significant or meaningful for themselves, because they never took the chance to dream big, or set a goal they cared about.   So while they’ve made progress, they didn’t make any real pop.   Their life was slow and steady.  In some cases, this is great, if all they wanted.  But I also know people that feel like they wasted the year, because they didn’t do what they knew they were capable of, or wanted to achieve.
  5. People can build habits that help them reach new goals.   Some people I knew have built fantastic habits.  They put a strong foundation in place that helps them reach for more.  They grow better, faster, stronger, and more powerful.   In my own experience, I had some extreme fitness goals, but I started with a few healthy habits.  My best one is wake up, work out.  I just do it.  I do a 30 minute workout.   I don’t have to think about it, it’s just part of my day like brushing my teeth.  Since it’s a habit, I keep doing it, so I get better over time.  When I first started the workout, I sucked.  I repeated the same workout three times, but by the third time, I was on fire.   And, since it’s a habit, it’s there for me, as a staple in my day, and, in reality, the most empowering part of my day.  It boosts me and gives me energy that makes everything else in my day, way easier, much easier to deal with, and I can do things in half the time, or in some cases 10X.

Maybe the most important insight is that while you don’t need goals to make your habits effective, it’s really easy to spend a year, and then wonder where the year went, without the meaningful milestones to look back on.   That said, I’ve had a few years, where I simply focused on habits without specific goals, but I always had a vision for a better me, or a better future in mind (more like a direction than a destination.)

As I’ve taken friends and colleagues through some of my learnings over the holidays, regarding habits, dreams, and goals, I’ve had a few people say that I should put it all together and share it, since it might help more people add some clarity to setting and achieving their goals.

Here it is:

How Dreams, Goals, and Habits Fit Together

Enjoy, and Dream Big for 2015.

Categories: Architecture, Programming

Episode 217: James Turnbull on Docker

James Turnbull joins Charles Anderson to discuss Docker, an open source platform for distributed applications for developers and system administrators. Topics include Linux containers and the functions they provide, container images and how they are built, use cases for containers, and the future of containers versus virtual machines. Venue: Internet Related Links James’s home page: […]
Categories: Programming

Decoupling JUnit and Hamcrest

Mistaeks I Hav Made - Nat Pryce - Tue, 01/06/2015 - 23:34
Evolution of Hamcrest and JUnit has been complicated by the dependencies between the two projects. If they can be decoupled, both projects can move forward with less coordination required between the development teams.

To that end, we've come up with a plan to allow JUnit to break its dependency on Hamcrest. We've created a new library, hamcrest-junit, that contains a copy of the JUnit code that uses Hamcrest. The classes have been repackaged into org.hamcrest.junit so that the library can be added to existing projects without breaking any code, and projects can migrate to the new library at their own pace.

The Assert.assertThat function has been copied into the class org.hamcrest.junit.MatcherAssert, and Assume.assumeThat to org.hamcrest.junit.MatcherAssume. The ExpectedException and ErrorCollector classes have been copied from package org.junit.rules to package org.hamcrest.junit.

The classes in hamcrest-junit only pass strings to JUnit. For example, the assumeThat function does not use the constructor of the AssumptionFailedException that takes a Matcher, but generates the description of the matcher as a String and uses that to create the exception instead. This will allow JUnit to deprecate and eventually remove any code that depends on types defined by Hamcrest, finally decoupling the two projects.

The hamcrest-junit library has been published to Maven Central (groupId: org.hamcrest, artifactId: hamcrest-junit). The source is on GitHub.

Feedback about the library, about how easy it is to transition code from using JUnit's classes to using hamcrest-junit, and suggestions for improvement are most welcome. Discussion can happen on the hamcrest mailing list or the project's issue tracker on GitHub.
Categories: Programming, Testing & QA

Why your F# evangelism isn't working

Eric.Weblog() - Eric Sink - Mon, 01/05/2015 - 19:00
Ouch. Eric, you're one of those anti-F# people, aren't you?


If you skim this blog entry too quickly or just read the title, you might think I am someone who does not like F#. Nothing could be further from the truth. Over the last several months, I have become a big F# fan. It has become my preferred language for personal projects.

My current side project is a key-value store in F#. I have learned a lot by writing it, and I am even starting to think it might end up becoming useful. :-)

Mostly, I find coding in F# to be extremely satisfying.

I am writing this article not as an opponent of F#, but rather, as someone who hopes that F# will become a mainstream .NET language.

Eric, you're wrong. F# is mainstream already.

Of course it is. For some definition of "mainstream".

F# is gaining traction really fast. People are using F# for real stuff. The language is improving. Xamarin is supporting it. By nearly any measure, F# is showing a lot of momentum over the last few years. If you are an F# fan, there just isn't any bad news running around.

But for this article, I am using a definition of "mainstream", (which I'll explain below) which I believe F# has not yet reached.

If, when you arrive at the end of this article, you do not like my definition of mainstream, that's okay. Just take a copy of this article, and do a search and replace all instances of the word "mainstream" with "purple". I have no desire to argue with you about what "mainstream" means, but if you want to argue about the meaning of "purple", I'll be happy to. :-)

You're wrong again. My F# evangelism IS working

Of course it is. To a certain extent.

But in marketing terminology, as far as I can tell, most F# users today are "early adopters". Very few are "pragmatists". And F# has not yet "crossed the chasm".

What is "the chasm"?

The term originates from a 1991 book by Geoffrey Moore.

The main point of Moore's book is that the classical marketing bell curve has a problem. Typically (and, prior to Moore's book, always), that bell curve is drawn like this:

The basic idea of this curve is that when a market adopts a new technology, it follows a pattern. The technology moves from left to right on the bell curve, becoming adopted by four groups in the following order:

  • the "early adopters" (people who like trying new technologies)

  • the "pragmatists" (people who only care about technology to get something done)

  • the "conservatives" (pragmatists, but even more risk-averse)

  • the "laggards" (people who actively avoid new things)

Together, the pragmatists and conservatives are the definition of "mainstream" for the purpose of this article.

Moore's key observation is that moving from the early adopters to the pragmatists is very hard. Some technologies never make it. To illustrate this problem, Moore suggests drawing the bell curve differently, with a "chasm" between the early adopters and the pragmatists:

His book explains why the chasm exists, why some technologies "die at the bottom of the chasm", and how some technologies successfully "cross the chasm". It is a marketing classic and highly recommended.

For the purpose of this blog entry, the main thing you need to know is this: The chasm exists because pragmatists generally adopt new technologies as a herd. They don't adopt a new technology until they see other pragmatists using it. This creates a chicken-and-egg problem.

How does this herd thing work?

Pragmatists have an annual conference where they all agree to stay with their existing technologies. The actual vote is not formal, but consensus gets reached.

A lot of this process happens in hallways and the dining hall: "Are you considering Windows 8 or should we all just stay with Windows 7 and see what happens next?"

Some of the process happens in the conference itself, where you'll see session titles like, "Why it's safe for you to ignore mobile for another year."

At PragmatiCon 2014, the ratified platform looked something like this:

  • SQL is still the only safe choice.

  • Keep an eye on your developers to make sure they're not using Ruby.

  • Exchange is still the best email solution.

  • The cloud is okay for some things, but important data needs to stay in your own server room.

  • Let's ignore BYOD and hope it goes away.

  • Building a mobile app is still too expensive and too risky.

So the pragmatists don't care about the ways that F# is better?

No, not really.

This point is where the title of this blog entry comes from. If you are trying to explain the benefits of F# to pragmatists, you are probably frustrated. It probably seems like they're not listening to you. That's because they're not.

Pragmatists don't make technology decisions on the basis of what is better. They prefer the safety of the herd. A pragmatist wants to be using the same technology as all the other pragmatists, even if everybody is somewhat unhappy with it. They will choose "predictably disappointing" over "excellent and unproven" every time.

Maybe we just need to do a better job of explaining the benefits of F#?

Wouldn't it be great if it were that simple?

But no. As an early adopter, there is nothing you can say to a pragmatist that will make a difference. They know that your opinion and experience are not to be trusted, because they do not share your values.

So these pragmatists are just stupid then?

Not at all. Their decision-making process is quite rational. It is a natural consequence of being someone who uses technology to get something done rather than using technology for its own sake.

Near the top of this blog entry, I said that I find coding in F# to be extremely satisfying. That remark identifies me as an early adopter. It is something a pragmatist would never say. If any pragmatists accidentally stumbled across this blog entry, they stopped reading right there.

Pragmatists don't care about the craft of software. They don't care about how cool something is. They care about cars and investments and law and soap and oil rigs and health care and construction and transportation and insurance. Technology is just a tool.

BTW, if you find the word "pragmatists" to be too tedious to pronounce, you can just refer to these folks by their more commonly-used label: "normal people".

Fine. We don't need those pragmatists anyway, right?

Maybe not. Some things stay in the land of early adopters forever.

But the area under the bell curve matters. It is roughly equivalent to revenue. Together, the pragmatists and conservatives represent almost all of the money in the market.

If your situation allows you to be successful with a given technology even though it only gets used by early adopters, great. But many people are (directly or indirectly) placing bets (financial or otherwise) which will only pay off when their technology gets used by the majority of the market.

So this chicken-and-egg situation is hopeless then?

Not quite.

Sometimes a pragmatist can be convinced to break with the herd. The key is to find what Moore calls a "pragmatist in pain".

A pragmatist in pain is someone whose needs are not being well met by whatever technology is currently popular among pragmatists. The current technology is not merely annoying them. It is failing them.

A pragmatist in pain actually does care about how F# is better, even though this goes against their nature. They really hate the idea of listening to some F# nerd prattle on about immutability and type inference. But they have reached their last resort. Their pain has forced them to look for hope from somebody outside the herd.

This is how a new product gets across the chasm. Find a pragmatist in pain. Do whatever-it-takes to make them happy with your product. Then go back and do it again. Repeat until you have enough customers that they can go to PragmatiCon without being shunned and ridiculed.

Why will it be especially hard for F# to cross the chasm?

Because C# is really, really good.

I love C#, but I hold the opinion that F# is better.

Kirk: "Better at what?"
Khan: "Everything."

I also understand that F#'s awesomeness is basically irrelevant to the question of whether it will go mainstream or not. If the pragmatists are not in pain, they are not interested. C# doesn't cause very much pain.

Will the hybrid functional-first languages cross the chasm together?

Mostly, no.

Certainly it is true that F# is part of a trend. The Java world has Scala. The Apple/iOS world has Swift. It is not merely true that F# is gaining momentum. It is also true that functional programming is gaining momentum.

But in terms of going mainstream, these three languages will be related-but-separate. If Swift crosses the chasm first (and it will), that will add a bit more momentum to F#, simply because the two languages will be seen as comparables in different ecosystems. But F# will have to cross the chasm on its own.

Why will Swift go mainstream before F#?

Yes, F# has a seven-year head start, but Swift will cross the chasm first. This has nothing to do with the relative merits of these two languages.

As of January 2015, F# is quite stable and trustworthy for most use cases, while Swift is mostly an unstable mess that isn't ready for prime time. This too is irrelevant.

The simple fact is that C# is kinda great and Objective-C is kinda dreadful. Swift will go mainstream first because you can't swing a 9-iron in the Apple/iOS ecosystem without hitting a pragmatist in pain.

Eric, you're wrong. I know some pragmatists who are using F#.

Really? Great! Please spread the word.


Use 30 Days of Getting Results to Help You Reach Your Goals for 2015

Several summers back, I used a 30 Day Improvement Sprint to share my best insights and best lessons learned on getting results.  I called it 30 Days of Getting Results:

30 Days of Getting Results

It’s timeless advice to help you be YOUR best.

The overall goal of the site was to help you master productivity, master time management, and achieve work-life balance.  The idea was that by spending a little time each day, you would get back lots of time and energy and produce better results.  And we all need an edge in work and life.

Rise Above Productivity, Time Management, and Work-Life Balance Challenges

Here are the key things that I tried to help you with:

  • How to set yourself up for success on a daily basis
  • How to create a simple system you can use for getting great results in work and life
  • How to use proven practices to master time management, motivation, and personal productivity
  • How to embrace change and get better results in any situation
  • How to triple your personal productivity
  • How to focus and direct your attention with skill
  • How to use your strengths to create a powerful edge for getting results
  • How to change a habit and make it stick
  • How to achieve better work-life balance and spend more time doing the things you love

So if you’re struggling with any of the above, you might find just the piece of advice or the one or two ideas that help you find your breakthrough.

The Making of 30 Days of Getting Results

Behind the scenes, when I wrote each of the 30 days, I gave myself a 20-minute time limit (a 20-minute timebox, for those of you in the know).  I would then write as if I were writing to somebody I had only a small window of time to help, as best I could, to achieve better results in any situation.

It might seem like the first few days start slow, but things pick up from there pretty fast.  Also, it’s self-paced so you can hop around to any particular day that you think you need the most.

I’ve had many people tell me that it was the course they needed: it helped them set and achieve better goals, make new habits and break bad ones, find more energy, enjoy more of the things they do, and spend more time in their strengths doing what makes them come alive.

I will say that the user experience isn’t that great.   The site was a test and I didn’t want to spend a lot of time on the site design.    That said, it’s pretty straightforward.   When you go to the home page at 30 Days of Getting Results, you’ll see a brief intro and overview, and then you can dive in from there, by either starting with Day 1: Take a Tour of Agile Results, or  by clicking through the 30 Days on the left-hand side of the menu.

30 Days of Getting Results at a Glance

Here are all the days at a glance for your convenience:

  • Overview
  • 30 Days at a Glance
  • Day 1 – Take a Tour
  • Day 2 – Monday Vision
  • Day 3 – Daily Outcomes
  • Day 4 – Let Things Slough Off
  • Day 5 – Hot Spots
  • Day 6 – Friday Reflection
  • Day 7 – Setup Boundaries
  • Day 8 – Dump Your Brain
  • Day 9 – Prioritize Your Day
  • Day 10 – Feel Strong
  • Day 11 – Reduce Friction
  • Day 12 – Productivity Personas
  • Day 13 – Triage
  • Day 14 – Carve Out Time
  • Day 15 – Achieve a Peaceful Calm
  • Day 16 – Use Metaphors
  • Day 17 – Add Power Hours
  • Day 18 – Add Creative Hours
  • Day 19 – Who are You Doing it For?
  • Day 20 – Ask Better Questions
  • Day 21 — Carry the Good Forward
  • Day 22 – Design Your Day
  • Day 23 – Design Your Week
  • Day 24 – Bounce Back with Skill
  • Day 25 – Fix Time, Flex Scope
  • Day 26 – Solve Problems with Skill
  • Day 27 – Do Something Great
  • Day 28 – Find Your One Thing
  • Day 29 – Find Your Arena
  • Day 30 – Take It to the Next Level
The course is free.  Hopefully that doesn’t de-value it.  It has a lot of the lessons you would learn in some of the most advanced productivity and time management training.

The Structure of the Daily Lessons

The structure of each day is the same.  It includes an outcome, a lesson, and an assignment.  And right up front, I include a relevant quote and picture.  Here is an example of Day 24 – Bounce Back with Skill:

Quote: “Life is not about how fast you run, or how high you climb, but how well you bounce.” – Anonymous

Your Outcome: Bounce back with skill and roll with the punches.  Learn to draw from multiple sources of strength and energy, including your mind, body, emotions, and spirit.

Lesson: Welcome to day 24 of 30 Days of Getting Results, based on my book, Getting Results the Agile Way.  In day 23, you learned how to design your week with skill to get a fresh start, establish routines that support and renew you, and spend more time on the things that count for you.  Today, we learn how to bounce back with skill.  Bouncing back with skill helps us roll with the punches, refuel our bodies, and keep our spark alive.  It’s how we keep our engine going when the rest of us says “we can’t” and it’s how we “shut down” or “turn it off” so we can bounce back stronger.

(For the rest of the lesson, see Day 24 – Bounce Back with Skill … )

Today’s Assignment

1. Find one of your past victories in life and add that to your mental flip book of scenes to draw from when you need it most.
2. Find one metaphor to help you represent how you bounce back in life.
3. Find one song or one saying to have in your mind that you can use as a one-liner reminder to take the right actions when it counts.  For example, one that some people like is “Stand strong when tested.”

Get Started with 30 Days of Getting Results

If you want to start off well for 2015, and you have big dreams and big goals in mind, then give 30 Days of Getting Results a try:

30 Days of Getting Results

If you want to take it slow and steady, then just try one lesson each day.  If you’re feeling gung-ho, then see how quickly you can make it through all 30 at your own pace.

To help you stay on track, if you take the slow and steady route, build the habit by adding a simple reminder to your calendar in the morning to go and take the next lesson.  Do it Monday through Friday and take the weekends off.

Enjoy and best wishes for your best year ever.

Dream Big for 2015 (my personal theme for 2015)

Categories: Architecture, Programming

24 Quick Tips to Boost Your Career as a Software Engineer, This Year

Making the Complex Simple - John Sonmez - Mon, 01/05/2015 - 16:00

This is your year! Well–it can be. I want it to be, you want it to be… Everyone wants it to be, except for Bobby. Who’s Bobby?  He’s the guy that bullied you in high school and shoved you into lockers. Bobby doesn’t want to see you succeed. Bobby wants you to end up flipping burgers, like he is. ... Read More

The post 24 Quick Tips to Boost Your Career as a Software Engineer, This Year appeared first on Simple Programmer.

Categories: Programming

Peace of mind in a state of overload

Xebia Blog - Sun, 01/04/2015 - 21:23

This article is meant for knowledge workers who want to be more on top of things and feel secure that they haven’t forgotten about something, freeing their mind for the actual tasks at hand. It especially applies to those who are using, or want to use, SCRUM, a popular and formalized Agile methodology, in their day-to-day work.

I got hooked on Agile back in 2005, while working for Db4o, and never looked back since. Improving the process per iteration and having a manageable amount of work per sprint gave me peace of mind and focus, enabling me to deliver the right software solutions to the stakeholders. When I was asked to join a tech startup in 2011 as its CTO, I suddenly had a lot more to deal with: hiring and managing staff, managing costs, reporting to the board, applying for subsidies and making sure the books were kept in order. On top of this I still operated as SCRUM Master and technical lead within a SCRUM team.

During this period one of my co-founders introduced me to Getting Things Done by David Allen. It took him only about 15 minutes to explain the basics and I got started with it straight away.

You can optionally watch this presentation to go along with the article:

Diverse responsibilities

As knowledge workers, and more specifically consultants, we have a diversity of responsibilities. You have personal responsibilities, like planning a doctor’s appointment or picking up your kids from daycare, and you have responsibilities from your employer, like ordering a backup disk, co-creating a course or preparing a presentation. Lastly, you also have responsibilities from your client, like sprint tasks, meetings and organizing innovation days. Truly staying on top of all these responsibilities is tough, real tough!


For those of you who are not familiar with Agile methodologies: Agile is an iterative process. In each iteration a team commits to a finite amount of work. A typical iteration has a length of two weeks, which is its deadline. All stories, the units of work in an iteration, can be made actionable by the Agile team.

Getting Things Done takes a slightly different approach. There is a single inbox, comparable to the part of a backlog in SCRUM that hasn’t been groomed yet. By regularly reviewing the inbox, it can be emptied by turning items into actionable tasks, high-level projects, calendar items, reference material or just trash. Actionable tasks can be extracted from a project to move it forward. Actionable tasks should be kept together so they can be prioritized upon review.

GTD Chart

Please refer to the book Getting Things Done by David Allen for the full explanation, but a quick overview follows below.

Quick overview of GTD

Inbox

    The purpose of the GTD inbox is to collect anything new that might or might not require your attention. Whenever something pops up into your mind that you think requires your attention, either now or in the future, you collect it here. It doesn’t need to be organised. It doesn’t even need to be an attainable goal. As long as you get it off your mind and file it away for review, the inbox has done its job. Reviewing the inbox will empty it into one of the following categories.


Trash

You may find that a lot of the things you collect in your GTD inbox don’t really require you to take any action at all, so you can throw them away with impunity! Good riddance!


Reference

Many a time you get a piece of information that you don’t need immediately but would like to be able to reference in the future. These items go from your inbox into a file. This file can be physical, a folder structure on your computer system or something completely different.


Calendar

Though people have a tendency to put too many things in their calendar that really don’t need to be done at that exact time, inbox items that do have a specific time window move from your inbox to your calendar.

    Waiting for

    If you delegate something you expect it to be done after a certain period of time. Though these dependencies are kept out of SCRUM for a good reason they are a part of everyday life. Move these from your inbox to your waiting for list so you can check in with whoever you delegated it to in order to be aware of their status.

    Someday maybe

    Colleague sent you an adorable youtube frolicking puppy compilation? Maybe you didn’t have time to watch it when it was brought to your attention but why not put it away to watch later? Items that you don’t need to do but would like to get around to eventually should be moved over here.


Projects

There are many things that people have on their to-do list that they keep staring at and get intimidated by for some reason. Often this pertains to items that are at too high a level to pick up straight away. This can range from “Bring about world peace” to “Organise daughter’s birthday party”. Both of these really consist of multiple clearly actionable tasks, even though one gets the impression the latter is easier to attain in one’s lifetime. Things that should be more clearly defined belong here. Review them and extract actionable tasks from them that move the projects closer to their goal.

    Next actions

This is the workhorse of the GTD system and is where you actually pick up tasks to do. You place well-defined tasks here. If you need to call someone about a case, be sure to save this task along with the name of the person, their phone number and the case reference, to remove any impediments, like having to look up their number or the case reference. This way you make the barrier to getting something done as low as possible.
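To make the review flow concrete, here is a hypothetical sketch of the triage decision as code. The names and the exact ordering of the checks are my own illustration, not anything prescribed by Allen's book:

```java
// Hypothetical sketch of a GTD inbox review: every item is routed to
// exactly one of the destinations described above. Illustrative only.
public class GtdTriage {

    enum Destination {
        TRASH, REFERENCE, SOMEDAY_MAYBE,             // non-actionable outcomes
        CALENDAR, WAITING_FOR, PROJECT, NEXT_ACTION  // actionable outcomes
    }

    static Destination triage(boolean actionable, boolean delegated,
                              boolean hasFixedTime, boolean multiStep,
                              boolean worthKeeping, boolean maybeLater) {
        if (!actionable) {
            if (worthKeeping) return Destination.REFERENCE;      // file it
            if (maybeLater)   return Destination.SOMEDAY_MAYBE;  // puppy videos
            return Destination.TRASH;                            // good riddance
        }
        if (delegated)    return Destination.WAITING_FOR;  // check in on it later
        if (hasFixedTime) return Destination.CALENDAR;     // specific time window
        if (multiStep)    return Destination.PROJECT;      // extract next actions from it
        return Destination.NEXT_ACTION;                    // well defined, ready to do
    }

    public static void main(String[] args) {
        // "Call Alice about case #42": actionable, one step, no fixed time.
        System.out.println(triage(true, false, false, false, false, false));
        // A frolicking-puppy compilation: not actionable, fun for later.
        System.out.println(triage(false, false, false, false, false, true));
    }
}
```

Note that, as in the article, each item lands in exactly one list; the point of the review pass is an empty inbox, not a perfectly categorized one.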


Of course there are a few obstacles to overcome when using these two methods side by side as a consultant.

Since GTD encourages you to work on the most important thing at that time, you could be required to make private phone calls during consultancy hours or respond to a mail from a co-worker while getting ready for work. This causes work time and private time to overlap. The obvious solution would be to track work intervals for each task. However, this takes a little bit of time and discipline, so it should ideally be automated.

GTD requires you to review at least daily what should be done with the stuff in your inbox and what the next most important actions are. This takes time. Is this personal time? Should it be billable? In my opinion working with GTD reduces stress and increases productivity and effectiveness. Thus working with GTD will make you a more valuable employee, which is something your employer should invest in.

While both GTD and Agile work with definitions of work, the priority of stories in SCRUM is determined by the Product Owner. How can this be incorporated into GTD? Well, even though the Product Owner is in charge of the priorities inside the sprint, he does not have a full overview of everything that needs doing in your life. Therefore you are the one who determines the order in which your tasks need to be performed. Within these tasks, only the ones coming from your sprint have a relative order pre-determined by the PO.


In my day-to-day usage of GTD I have identified a few possible improvements.

Due to work requirements I need to maintain multiple calendars. Since some GTD inbox items end up in your calendar, this sometimes means having to create multiple items in your calendar, which causes needless overhead. It would be beneficial if this were supported by software, so that a GTD inbox item could automatically be delegated to one or more calendars.

When tasks are coming from external applications like Jira, they have to be kept in sync. It would save time if this could be managed automatically.

Lastly, there is the question of ownership. Who owns a task? The assignee, the author or the company on whose behalf it needs to be performed? I strongly believe that tasks belong to the author or the organisation that the author wrote them for. If tasks have been delegated or synced from external systems, they should be revocable by their owner. At the same time, an organisation should not have or control access to tasks a person authored without the author’s permission.

Unfortunately there is currently no software tool that serves all my needs as outlined here. However, the most essential properties of such a tool would be: multi-platform to ensure availability, tagging support to be able to categorise without having to split up the list, and ownership by you.

Lessons Learned from John Maxwell Revisited

I did a major cleanup of my post on lessons learned from John Maxwell:

Lessons Learned from John Maxwell

It should be much easier to read now.

It was worth cleaning up because John Maxwell is one of the deepest thinkers in the leadership space.  He’s published more than 50 books on leadership and he lives and breathes leadership in business and in life.

When I first started studying leadership long ago, John Maxwell’s definition of leadership was the most precise I found:

“Leadership is influence.”

As I began to dig through his work, I was amazed at the nuggets and gems and words of wisdom that he shared in so many books.  I started with Your Road Map for Success.  I think my next book was The 21 Irrefutable Laws of Leadership.  Ironically, I didn’t realize it was the same author until I started to notice on my shelf that I had a growing collection of leadership books, all by John Maxwell.

It was like finding the leadership Sherpa.

Sure enough, over the years, he continued to fill the shelves at Barnes & Noble with book after book on all the various nooks and crannies of leadership.

This was about the same time that I noticed how Edward de Bono had filled the shelves with books on thinking.  I realized that some people really share their life’s work as a rich library that is a timeless gift for the world.  I also realized that contributing so much guidance to the art and science of a specific focus really helps people stand out in their field or discipline.

What I like about John Maxwell’s work is that it’s plain English and down to Earth.  He writes in a very conversational way, and you can actually see his own progress throughout his books.  Your Road Map for Success is a great example of how he doesn’t treat leadership as something that comes naturally.  He works hard at it, to build his own knowledge base of patterns, practices, ideas, concepts, and inspirational stories.

While he’s created a wealth of wisdom to help advance the practice of leadership, I think perhaps his greatest contribution is The 21 Irrefutable Laws of Leadership.  It’s truly a work of art, and he does an amazing job of distilling the principles that serve as the backbone of effective leadership.

Categories: Architecture, Programming