
Stuff The Internet Says On Scalability For December 19th, 2014

Hey, it's HighScalability time:


Brilliant & hilarious keynote to finish the day at #yow14 (Matt)

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Swift self reference in inner closure

Xebia Blog - Fri, 12/19/2014 - 13:06

We all pretty much know how to safely use self within a Swift closure. But have you ever tried to use self inside a closure that's inside another closure? Chances are the Swift compiler (Xcode 6.1.1) crashed without showing an error in the editor and without any useful error message. So how can you solve this problem?

The basic working closure

Before we dive into the problem and solution, let's first have a look at a working code sample that only uses a single closure. We can create a simple Swift Playground to run it and validate that it works.

class Runner {
    var closures: [() -> ()] = []

    func doSomethingAsync(completion: () -> ()) {
        closures = [completion]
        completion()
    }
}

class Playground {

    let runner = Runner()

    func works() {
        runner.doSomethingAsync { [weak self] in
            self?.printMessage("This works") ?? ()
        }
    }

    func printMessage(message: String) {
        println(message)
    }

    deinit {
        println("Deinit")
    }

}

struct Tester {
    var playground: Playground? = Playground()
}

var tester: Tester? = Tester()
tester?.playground?.works()
tester?.playground = nil

The doSomethingAsync method takes a closure with no arguments and a Void return type. This method doesn't really do anything, but imagine it would load data from a server and then call the completion closure once it's done loading. It does, however, create a strong reference to the closure by adding it to the closures array. That means we are only allowed to use a weak reference to self within our closure. Otherwise the Runner would keep a strong reference to the Playground instance and neither would ever be deallocated.

Luckily all is fine and the "This works" message is printed in our playground output, followed by a "Deinit" message. The Tester construct is there to make sure that the Playground instance is actually deallocated.
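
As an aside (an illustrative sketch, not part of the original sample), a slightly more realistic doSomethingAsync would defer its work, for example via GCD, and call the completion closure later. The retain-cycle reasoning stays exactly the same:

import Foundation

class AsyncRunner {
    var closures: [() -> ()] = []

    func doSomethingAsync(completion: () -> ()) {
        // Keep a strong reference to the closure, just like the original sample.
        closures = [completion]
        // Pretend we load data from a server: hop to the main queue and
        // call the completion closure asynchronously.
        dispatch_async(dispatch_get_main_queue()) {
            completion()
        }
    }
}

If you try this variant in a playground, you may need to enable indefinite execution (via the XCPlayground framework) for the asynchronous callback to actually fire.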

The failing situation

Let's make things slightly more complex. When our first async call is finished and calls our completion closure, we want to load something more and therefore need to create another closure within the outer closure. We add the method below to our Playground class. Keep in mind that the first closure doesn't have [weak self] since we only reference self in the inner closure.

func doesntWork() {
    weak var weakRunner = runner
    runner.doSomethingAsync {

        // do some stuff for which we don't need self

        weakRunner?.doSomethingAsync { [weak self] in
            self?.printMessage("This doesn't work") ?? ()
        } ?? ()
    }
}

Just adding it already makes the compiler crash, without giving us an error in the editor. We don't even need to run it.


It gives us the following message:

Communication with the playground service was interrupted unexpectedly.
The playground service "com.apple.dt.Xcode.Playground" may have generated a crash log.

And when you have such code in your normal project, the editor also doesn't give an error, but the build will fail with a Swift Compiler Error without a clear message of what's wrong:
Command /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/swiftc failed with exit code 1

The solution

So how can we work around this problem? Quite simply, actually: we just need to move the [weak self] to the outermost closure.

func doesWork() {
    weak var weakRunner = runner
    runner.doSomethingAsync { [weak self] in

        weakRunner?.doSomethingAsync {
            self?.printMessage("This now works again") ?? ()
        } ?? ()
    }
}

This does mean that self can still be non-nil inside the outer closure while already being nil by the time the inner closure runs. So don't write code like this:

    runner.doSomethingAsync { [weak self] in

        if self != nil {
            self!.printMessage("This is fine, self is not nil")

            weakRunner?.doSomethingAsync {
                self!.printMessage("This is not good, self could be nil now")
            } ?? ()
        }
    }

There is one more scenario you should be aware of. If you use an if let construct to safely unwrap self, you could end up creating a strong reference to self again. The following sample illustrates this and will create a reference cycle, since the inner closure makes our Runner hold a strong reference to the Playground instance.

    runner.doSomethingAsync { [weak self] in

        if let this = self {

            weakRunner?.doSomethingAsync {
                this.printMessage("Captures a strong reference to self")
            } ?? ()
        }
    }

This, too, is easily solved by capturing a weak reference to the instance again, now called this.

runner.doSomethingAsync { [weak self] in

    if let this = self {

        weakRunner?.doSomethingAsync { [weak this] in
            this?.printMessage("We're good again") ?? ()
        } ?? ()
    }
}
Conclusion

Most people working with Swift know that there are still quite a few bugs in it. In this case, Xcode should give us an error in the editor. If your editor doesn't complain but your Swift compiler fails, look for closures like this and correct them. To be safe, always use [weak self] references within closures.

The Sweet Spot of Customer Demand Meets Microsoft Supply

Here’s a simple visual that I whiteboard when I lead workshops for business transformation.


The Sweet Spot is where customer “demand” meets Microsoft “supply.”

I’m not a fan of product pushers or product pushing.  I’m a fan of creating “pull.”

In order for customers to pull-through any product, platform, or service, you need to know the customer’s pains, needs, and desired outcomes.  Without customer empathy, you’re not relevant.

This is a simple visual, but a powerful one.  

When you have good representation of the voice of the customer, you can really identify problems worth solving.  It always comes down to pains, needs, opportunities, and desired outcomes.  In short, I always just say pains, needs, and desired outcomes so that people can remember it easily.

To make it real, we use scenarios to tell a simple story of a customer’s pain, needs, and desired outcomes.   We use our friends in the field working with customers to give us real stories of real pain.  

Here is an example Scenario Narrative where a company is struggling in creating products that its customers care about …


As you can see, the Current State is a pretty good story of pain that a lot of business leaders and product owners can identify with.  For some, it's all too real, because it is their story and they can see themselves in it.

The Desired Future State, in turn, is a pretty good story of what success would look like.  It paints a simple picture of an ideal scenario ... a future possibility.

Here is an example of a Solution Storyboard, where we paint a simple picture of that Desired Future State, or more precisely, a Future Capability Vision.     It’s this Future Capability Vision that shows how, with the right capabilities, the customer can address their pains, needs, and desired outcomes.


The beauty of this approach is that it’s product and technology agnostic.   It’s all about building capabilities.

From there, with a  good understanding of the pains, needs, and desired outcomes, it’s super easy to overlay relevant products, technologies, consulting services, etc.

And then, rather than trying to do a product “push”, it becomes a product “pull” because it connects with customers in a very deep, very real, very relevant way.

Think “pull” not “push.”

You Might Also Like

Drive Business Transformation by Reenvisioning Operations

Drive Business Transformation by Reenvisioning Your Customer Experience

Dual-Speed IT Drives Business Transformation and Improves IT-Business Relationships

How Business Leaders are Building Digital Skills

How To Build a Roadmap for Your Digital Transformation

Categories: Architecture, Programming

Agility and the essence of software architecture

Coding the Architecture - Simon Brown - Thu, 12/18/2014 - 16:44

I'm just back from the YOW! conference tour in Australia (which was amazing!) and I presented this as the closing slide for my Agility and the essence of software architecture talk, which was about how to create agile software systems in an agile way.

Agility and the essence of software architecture

You will probably have noticed that software architecture sketches/diagrams form a central part of my lightweight approach to software architecture, and I thought this slide was a nice way to summarise the various things that diagrams and the C4 model enable, plus how this helps you do just enough up front design. The slides are available to view online/download and hopefully one of the videos will be available to watch after the holiday season.

Categories: Architecture

Dynamic DNS updates with nsupdate (new and improved!)

Agile Testing - Grig Gheorghiu - Wed, 12/17/2014 - 23:14
I blogged about this topic before. This post shows a slightly different way of using nsupdate remotely against a DNS server running BIND 9 in order to programmatically update DNS records. The scenario I am describing here involves an Ubuntu 12.04 DNS server running BIND 9 and an Ubuntu 12.04 client running nsupdate against the DNS server.

1) Run ddns-confgen, specifying /dev/urandom as the source of randomness and the name of the zone you want to dynamically update via nsupdate:

$ ddns-confgen -r /dev/urandom -z myzone.com

# To activate this key, place the following in named.conf, and
# in a separate keyfile on the system or systems from which nsupdate
# will be run:
key "ddns-key.myzone.com" {
algorithm hmac-sha256;
secret "1D1niZqRvT8pNDgyrJcuCiykOQCHUL33k8ZYzmQYe/0=";
};

# Then, in the "zone" definition statement for "myzone.com",
# place an "update-policy" statement like this one, adjusted as
# needed for your preferred permissions:
update-policy {
 grant ddns-key.myzone.com zonesub ANY;
};

# After the keyfile has been placed, the following command will
# execute nsupdate using this key:
nsupdate -k <keyfile>

2) Follow the instructions in the output of ddns-confgen (above). I actually named the key just ddns-key, since I was going to use it for all the zones on my DNS server. So I added this stanza to /etc/bind/named.conf on the DNS server:

key "ddns-key" {
algorithm hmac-sha256;
secret "1D1niZqRvT8pNDgyrJcuCiykOQCHUL33k8ZYzmQYe/0=";
};

3) Allow updates when the key ddns-key is used. In my case, I added the allow-update line below to all zones that I wanted to dynamically update, not only to myzone.com:

zone "myzone.com" {
        type master;
        file "/etc/bind/zones/myzone.com.db";
allow-update { key "ddns-key"; };
};

At this point I also restarted the bind9 service on my DNS server.

4) On the client box, create a text file containing nsupdate commands to be sent to the DNS server. In the example below, I want to dynamically add both an A record and a reverse DNS PTR record:

$ cat update_dns1.txt
server dns1.mycompany.com
debug yes
zone myzone.com
update add testnsupdate1.myzone.com 3600 A 10.10.2.221
show
send
zone 2.10.10.in-addr.arpa
update add 221.2.10.10.in-addr.arpa 3600 PTR testnsupdate1.myzone.com
show
send

Still on the client box, create a file containing the stanza with the DDNS key generated in step 1:

$ cat ddns-key.txt
key "ddns-key" {
algorithm hmac-sha256;
secret "Wxp1uJv3SHT+R9rx96o6342KKNnjW8hjJTyxK2HYufg=";
};

5) Run nsupdate and feed it both the update_dns1.txt file containing the commands, and the ddns-key.txt file:

$ nsupdate -k ddns-key.txt -v update_dns1.txt

You should see some fairly verbose output, since the command file specifies 'debug yes'. At the same time, tail /var/log/syslog on the DNS server and make sure there are no errors.
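
To double-check from the client side as well, you can query the DNS server directly with dig. This is just a quick sanity-check sketch, assuming the example names and addresses used above:

$ dig @dns1.mycompany.com testnsupdate1.myzone.com A +short
$ dig @dns1.mycompany.com -x 10.10.2.221 +short

If the updates were applied, the first command should return 10.10.2.221 and the second should return testnsupdate1.myzone.com.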

In my case, there were some hurdles I had to overcome on the DNS server. The first one was that apparmor was installed and it wasn't allowing the creation of the journal files used to keep track of DDNS records. I saw lines like these in /var/log/syslog:

Dec 16 11:22:59 dns1 kernel: [49671335.189689] type=1400 audit(1418757779.712:12): apparmor="DENIED" operation="mknod" parent=1 profile="/usr/sbin/named" name="/etc/bind/zones/myzone.com.db.jnl" pid=31154 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107
Dec 16 11:22:59 dns1 kernel: [49671335.306304] type=1400 audit(1418757779.828:13): apparmor="DENIED" operation="mknod" parent=1 profile="/usr/sbin/named" name="/etc/bind/zones/rev.2.10.10.in-addr.arpa.jnl" pid=31153 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107

To get past this issue, I disabled apparmor for named:

# ln -s /etc/apparmor.d/usr.sbin.named /etc/apparmor.d/disable/
# service apparmor restart

The next issue was an OS permission denied (nothing to do with apparmor) when trying to create the journal files in /etc/bind/zones:

Dec 16 11:30:54 dns1 named[32640]: /etc/bind/zones/myzone.com.db.jnl: create: permission denied
Dec 16 11:30:54 dns named[32640]: /etc/bind/zones/rev.2.0.10.in-addr.arpa.jnl: create: permission denied

I got past this issue by running

# chown -R bind:bind /etc/bind/zones

At this point everything worked as expected.
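
As a side note (a sketch reusing the example records above, not something from the original setup): nsupdate uses the same command file format for removing records, so rolling back the test entries is just a matter of feeding it delete commands:

$ cat delete_dns1.txt
server dns1.mycompany.com
zone myzone.com
update delete testnsupdate1.myzone.com A
send
zone 2.10.10.in-addr.arpa
update delete 221.2.10.10.in-addr.arpa PTR
send

$ nsupdate -k ddns-key.txt -v delete_dns1.txt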


The Big Problem is Medium Data

This is a guest post by Matt Hunt, who leads open source projects for Bloomberg LP R&D. 

“Big Data” systems continue to attract substantial funding, attention, and excitement. As with many new technologies, they are neither a panacea, nor even a good fit for many common uses. Yet they also hold great promise. The question is, can systems originally designed to serve hundreds of millions of requests for something like web pages also work for requests that are computationally expensive and have tight tolerances?

Modern era big data technologies are a solution to an economics problem faced by Google and other Internet giants a decade ago. Storing, indexing, and responding to searches against all web pages required tremendous amounts of disk space and computer power. Very powerful machines, fast SAN storage, and data center space were prohibitively expensive. The solution was to pack cheap commodity machines as tightly together as possible with local disks.

This addressed the space and hardware cost problem, but introduced a software challenge. Writing distributed code is hard, and with many machines comes many failures. So a framework was also required to take care of such problems automatically for the system to be viable.

Hadoop

Right now, we’re in a transition phase in the computing industry, one that began with the arrival of Hadoop and its community in 2004. Understanding why and how these systems were created also offers insight into some of their weaknesses.

At Bloomberg, we don’t have a big data problem. What we have is a “medium data” problem -- and so does everyone else. Systems such as Hadoop and Spark are, in general, less efficient and less mature for these typical low-latency enterprise uses. High core counts, SSDs, and large RAM footprints are common today, but many of the commodity platforms have yet to take full advantage of them, and challenges remain. A number of distributed components are further hampered by Java, which creates its own complications for low-latency performance.

A practical use case
Categories: Architecture

Multithreaded Programming has Really Gone to the Dogs

Taken from Multithreaded programming - theory and practice on reddit, which also has some very funny comments. If anything this is way too organized. 

 What's not shown? All the little messes that have to be cleaned up after...

Categories: Architecture

The Machine: HP's New Memristor Based Datacenter Scale Computer - Still Changing Everything

The end of Moore’s law is the best thing that’s happened to computing in the last 50 years. Moore’s law has been a tyranny of comfort. You were assured your chips would see a constant improvement. Everyone knew what was coming and when it was coming. The entire semiconductor industry was held captive to delivering on Moore’s law. There was no new invention allowed in the entire process. Just plod along on the treadmill and do what was expected. We are finally breaking free of these shackles and entering what is the most exciting age of computing that we’ve seen since the late 1940s. Finally we are in a stage where people can invent and those new things will be tried out and worked on and find their way into the market. We’re finally going to do things differently and smarter.

-- Stanley Williams (paraphrased)

HP has been working on a radically new type of computer, enigmatically called The Machine (not this machine). The Machine is perhaps the largest R&D project in the history of HP. It’s a complete rebuild of both hardware and software from the ground up. A massive effort. HP hopes to have a small version of their datacenter scale product up and running in two years.

The story began when we first met HP’s Stanley Williams about four years ago in How Will Memristors Change Everything? In the latest chapter of the memristor story, Mr. Williams gives another incredible talk: The Machine: The HP Memristor Solution for Computing Big Data, revealing more about how The Machine works.

The goal of The Machine is to collapse the memory/storage hierarchy. Computation today is energy inefficient. Eighty percent of the energy and vast amounts of time are spent moving bits between hard disks, memory, processors, and multiple layers of cache. Customers end up spending more money on power bills than on the machines themselves. So the machine has no hard disks, DRAM, or flash. Data is held in power efficient memristors, an ion based nonvolatile memory, and data is moved over a photonic network, another very power efficient technology. When a bit of information leaves a core it leaves as a pulse of light.

On graph processing benchmarks The Machine reportedly performs 2-3 orders of magnitude better based on energy efficiency and one order of magnitude better based on time. There are no details on these benchmarks, but that’s the gist of it.

The Machine puts data first. The concept is to build a system around nonvolatile memory with processors sprinkled liberally throughout the memory. When you want to run a program you send the program to a processor near the memory, do the computation locally, and send the results back. Computation uses a wide range of heterogeneous multicore processors. By transmitting only the program and the results, the savings are enormous compared to moving terabytes or petabytes of data around.

The Machine is not targeted at standard HPC workloads. It’s not a LINPACK buster. The problem HP is trying to solve for their customers is where a customer wants to perform a query and figure out the answer by searching through a gigantic pile of data -- problems that need to store lots of data and analyze it in real time as new data comes in.

Why is a very different architecture needed for building a computer? Computer systems can’t keep up with the flood of data that’s coming in. HP is hearing from their customers that they need the ability to handle ever greater amounts of data. The amount of bits being collected is growing exponentially faster than the rate at which transistors are being manufactured. It’s also the case that information collection is growing faster than the rate at which hard disks are being manufactured. HP estimates there are 250 trillion DVDs worth of data that people really want to do something with. Vast amounts of data being collected in the world are never even looked at.

So something new is needed. That’s at least the bet HP is making. While it’s easy to get excited about the technology HP is developing, it won’t be for you and me, at least until the end of the decade. These will not be commercial products for quite a while. HP intends to use them for their own enterprise products, internally consuming everything that’s made. The idea is we are still very early in the tech cycle, so high cost systems are built first, then as volumes grow and processes improve, the technology will be ready for commercial deployment. Eventually costs will come down enough that smaller form factors can be sold.

What is interesting is HP is essentially building its own cloud infrastructure, but instead of leveraging commodity hardware and software, they are building their own best of breed custom hardware and software. A cloud typically makes available vast pools of memory, disk, and CPU, organized around instance types which are connected by fast networks. Recently there’s a move to treat these resource pools as independent of the underlying instances. So we are seeing high level scheduling software like Kubernetes and Mesos becoming bigger forces in the industry. HP has to build all this software themselves, solving many of the same problems, along with the opportunities provided by specialized chips. You can imagine programmers programming very specialized applications to eke out every ounce of performance from The Machine, but what is more likely is HP will have to create a very sophisticated scheduling system to optimize how programs run on top of The Machine. What's next in software is the evolution of a kind of Holographic Application Architecture, where function is fluid in both time and space, and identity arises at run-time from a two-dimensional structure. Schedule optimization is the next frontier being explored on the cloud.

The talk is organized in two broad sections: hardware and software. Two-thirds of the project is software, but Mr. Williams is a hardware guy, so hardware makes up the majority of the talk.  The hardware section is based around the idea of optimizing the various functions around the physics that is available: electrons compute; ions store; photons communicate.

Here is my gloss on Mr. Williams’ talk. As usual with such a complex subject much can be missed. Also, Mr. Williams tosses huge interesting ideas around like pancakes, so viewing the talk is highly recommended. But until then, let’s see The Machine HP thinks will be the future of computing….

Categories: Architecture


New blog posts about bower, grunt and elasticsearch

Gridshore - Mon, 12/15/2014 - 08:45

Two new blog posts I want to point out to you all. I wrote these blog posts on my employers blog:

The first post is about creating backups of your elasticsearch cluster. Some time ago they introduced the snapshot/restore functionality. Of course you can use the REST endpoint directly, but how much easier is it if you can use a plugin to handle the snapshots, or, maybe even better, integrate the functionality into your own Java application? That is what this blog post is about: integrating snapshot/restore functionality into your Java application. As a bonus there are screenshots of my elasticsearch gui project showing the snapshot/restore functionality.

Creating elasticsearch backups with snapshot/restore

The second blog post I want to bring to your attention is front-end oriented. I already mentioned my elasticsearch gui project, which is an AngularJS application. I have been working on the plugin for a long time and the amount of JavaScript code keeps increasing, so I wanted to introduce grunt and bower to the project. That is what this blog post is about.

Improve my AngularJS project with grunt

The post New blog posts about bower, grunt and elasticsearch appeared first on Gridshore.

Categories: Architecture, Programming

Agile: how hard can it be?!

Xebia Blog - Sun, 12/14/2014 - 13:48

Yesterday my colleagues and I ran an awesome workshop at the MIT conference in which we built a Rube Goldberg machine using Scrum and Extreme Engineering techniques. As agile coaches, one would think that being an agile team would come naturally to us, but I'd like to share our pitfalls and insights with you, since we learned a lot about being an agile team and about what an incredibly powerful model a Rube Goldberg machine is for scaled agile product development.

If you're not the reading type, check out the video.

Rube ... what?

Goldberg. According to Wikipedia, a Rube Goldberg machine is a contraption, invention, device or apparatus that is deliberately over-engineered or overdone to perform a very simple task in a very complicated fashion, usually including a chain reaction. The expression is named after American cartoonist and inventor Rube Goldberg (1883–1970).

In our case we set out on a 6 by 4 meter stage divided into 5 sections. Each section had a theme like rolling, propulsion, swinging, lifting, etc. In a way it resembled a large software product that has to respond to some event in a (for outsiders) incredibly complex manner, by triggering a chain of sub-systems that ends in some kind of end result.

The workspace, scrum boards and build stuff

Extreme Scrum

During the day 5 teams worked in a total of 10 sprints to create the most incredible machine, experiencing everything one can find during "normal" product development. We had inexperienced team members, little to no documentation, legacy systems whose engineering principles were shrouded in mystery, teams that forgot to hold retrospectives, and interfaces that were ignored because the problem "lies with the other team". The huge time pressure of the relatively short sprints and the complexity of what we were trying to achieve created a pressure cooker that brought these problems to the surface faster than anything else, and with Scrum we were forced to face and fix them.

Team scrumboard

Build, fail, improve, build

“Most people do not listen with the intent to understand; they listen with the intent to reply.” - Stephen R. Covey

Having 2 minutes to do your planning makes it very difficult to listen, especially when your head is buzzing with ideas, yet sometimes you have to slow down to speed up. Effective building requires you to really understand what your team mate is going to do; pairing proved a very effective way to slow down your own brain and benefit from both rubber ducking and the insight of your team mate. Once our teams reached 4 members we could pair and drastically improve the outcome.

Deadweight with pneumatic fuse

Once the machine had reached a critical size, integration tests started to fail. The teams responded by testing multiple times during the sprint and fixing the broken build rather than adding new features. Especially in mechanical engineering that is not as simple as it sounds: sometimes a part of the machine would be "refactored", and we had not designed for a simple end-to-end test that could be applied continuously. It took a couple of sprints to get that right.

An MVP that made it to the final product

"Keep your code clean" we teach teams every day. "Don't accept technical or functional debt, you know it will slow you down in the end." Yet it is so tempting. Despite a Scrum Master and an "Über Scrum Master" we still had a hard time keeping our workspace clean, refactoring broken stuff out, optimising and simplifying...

Have an awesome goal

"A true big hairy audacious goal is clear and compelling, serves as unifying focal point of effort, and acts as a clear catalyst for team spirit. It has a clear finish line, so the organization can know when it has achieved the goal; people like to shoot for finish lines." - Collins and Porras, Built to Last: Successful Habits of Visionary Companies

Truth is: we got lucky with the venue. Building a machine like this is awesome and inspiring in itself, and learning how Extreme Scrum can help teams build machines better, faster, more innovatively and with a whole lot more fun is a fantastic goal in its own right, but parallel to our build space was a true magnet, something that really focused the teams and made them go that extra mile.

The ultimate goal of the machine

Biggest take away

Building things is hard; building under pressure is even harder. Even teams that are aware of the theory will be tempted to throw everything overboard and just start somewhere. Applying Extreme Engineering techniques can truly help you: it's a simple set of rules, but it requires an unparalleled level of discipline. Having a Scrum coach can make all the difference between a successful and a failed project.

Stuff The Internet Says On Scalability For December 12th, 2014

Hey, it's HighScalability time:


We've had a wee bit of a storm in the bay area.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Extreme Engineering - Building a Rube Goldberg machine with scrum

Xebia Blog - Fri, 12/12/2014 - 15:16

Is agile usable for things other than software development? Well, we knew that already: yes!
But creating a machine in 1 day with 5 teams and continuously changing members using Scrum is certainly exciting!

See our report below (it's in Dutch for now)

 

Extreme engineering video

 

 

Azure: Premium Storage, RemoteApp, SQL Database Update, Live Media Streaming, Search and More

ScottGu's Blog - Scott Guthrie - Thu, 12/11/2014 - 20:14

Today we released a number of great enhancements to Microsoft Azure. These include:

  • Premium Storage: New Premium high-performance Storage for Azure Virtual Machine workloads
  • RemoteApp: General Availability of Azure RemoteApp service
  • SQL Database: Enhancements to Azure SQL Databases
  • Media Services: General Availability of Live Channels for Media Streaming
  • Azure Search: Enhanced management experience, multi-language support and more
  • DocumentDB: Support for Bulk Add Documents and Query Syntax Highlighting
  • Site Recovery: General Availability of disaster recovery to Azure for branch offices and SMB customers
  • Azure Active Directory: General Availability of Azure Active Directory application proxy and password write back support

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them.

Premium Storage: High-performance Storage for Virtual Machines

I’m excited to announce the public preview of our new Azure Premium Storage offering. With the introduction of the new Premium Storage option, Azure now offers two types of durable storage: Premium Storage and Standard Storage. Premium Storage stores data durably on Solid State Drives (SSDs) and provides high performance, low latency, disk storage with consistent performance delivery guarantees.


Premium Storage is ideal for I/O-sensitive workloads - and is great for database workloads hosted within Virtual Machines.  You can optionally attach several premium storage disks to a single VM, and support up to 32 TB of disk storage per Virtual Machine and drive more than 50,000 IOPS per VM at less than 1 millisecond latency for read operations. This provides a wickedly fast storage option that enables you to run even more workloads in the cloud.

Using Premium Storage, Azure now offers the ability to "lift-and-shift" more demanding enterprise applications to the cloud - including SQL Server, Dynamics AX, Dynamics CRM, Exchange Server, MySQL, Oracle Database, IBM DB2, and SAP Business Suite solutions.

Premium Storage is now available in public preview starting today. To sign up to use the Azure Premium Storage preview, visit the Azure Preview page.

Disk Sizes and Performance

Premium Storage disks provide up to 5,000 IOPS and 200 MB/sec throughput depending on the disk size. When you create a new premium storage disk you get the option to select the disk size and performance characteristics you want based on your application performance and storage capacity needs.  For the public preview, we are offering three Premium Storage disk configurations:

Disk Types            P10          P20          P30
Disk Size             128 GB       512 GB       1 TB
IOPS per Disk         500          2300         5000
Throughput per Disk   100 MB/sec   150 MB/sec   200 MB/sec

You can maximize the performance of your VMs by attaching multiple Premium Storage disks to them (up to the network bandwidth limit available to the VM for disk traffic). To learn the disk bandwidth available for each VM size, see the Virtual Machine and Cloud Service Sizes for Azure page.

Durability

Durability of data is of utmost importance for any persistent storage option. Azure customers have critical applications that depend on the persistence of their data and high tolerance against failures. Premium Storage keeps three replicas of data within the same region. In addition, you can also optionally create snapshots of your disks and copy those snapshots to a Standard GRS storage account - which enables you to maintain a geo-redundant snapshot of your data that is stored at least 400 miles away from your primary Azure region.

Learn More

You can learn more about Premium Storage disks here.  To sign up to use Premium Storage, go to the Azure Preview page, and sign up for Microsoft Azure Premium Storage service using your subscription.

RemoteApp: General Availability of Azure RemoteApp

I’m excited to announce the general availability of Azure RemoteApp. Using Azure RemoteApp, you can deploy Windows desktop applications in the cloud, and provide your users with an intuitive, high-fidelity, WAN-ready remote application experience.  Users can use the cloud-hosted Windows applications you enable on their phones, tablets, or PCs - including Windows, Mac, iOS and Android based devices.  We are delivering RemoteApp with a super competitive price - you can host your user's applications in the cloud for just $10/user/month.  With today’s release, Azure RemoteApp is backed by an SLA and supported by Microsoft Support, offering the full scalability and security of the Azure cloud.

Getting Started

Setting up RemoteApp is easy. In the Azure Management Portal, select NEW -> App Services -> RemoteApp -> Quick Create. Pick a name, region, select the scale configuration plan you want to use, pick one of the standard template images, and click OK. When you do this for the first time, your 30-day free trial will also start. This is a fully featured trial, available to all Azure customers.


A RemoteApp instance is an elastic, automatically scaled, collection of Windows Server VMs that are running the Remote Desktop Session Host role and host the applications. The VMs are all created based on the template image you provide. You can provide your own template image containing your custom apps, or use one of the standard template images we provide. One of these is for Office 365 ProPlus, which you can use if you have a subscription that contains the Office 365 ProPlus service:


Once enabled, your users can quickly and easily connect to the applications you host in Azure.  They can use Windows, Mac, iOS and Android devices to connect to the RemoteApp service - enabling you to use Azure to run your Windows desktop applications anywhere in the world, on any device.

Enabling Hybrid Applications

Many business-critical Windows applications rely on on-premises infrastructure such as identity and machine management, and require access to on-premises databases and resources. Azure RemoteApp provides a hybrid deployment model that supports all of these scenarios.

  • Hybrid Management: In a hybrid RemoteApp collection, the VMs which host your applications are joined to your AD domain. Therefore, you can use on-premises management tools like Group Policy, System Center, and many other enterprise management tools that rely on AD membership.

  • Federated Identity: You can use Azure Active Directory integrated with your on-premises AD, and your users can log on with their familiar corporate identities. When a Windows application starts, it runs in a fully domain-joined session, with the usual integrated authentication capabilities of a Windows domain.
  • Hybrid Networking: Windows applications in a hybrid RemoteApp collection can seamlessly access on-premises data and resources. This capability is built on Azure Virtual Networking with site-to-site VPN, providing cloud-premise virtual network connectivity. In the future, RemoteApp collections will support the full range of Azure Networking capabilities, including ExpressRoute.

Performance and Scale Configurations

With today's general availability release, we are offering two scale configurations: BASIC and STANDARD.

  • BASIC is intended for lighter, task-worker use cases, such as a single productivity application or a data-entry frontend to a line of business system.
  • STANDARD is intended for typical productivity use cases such as using Outlook, Word, Excel and other knowledge worker line of business and productivity applications.

You can select the scale configuration for your RemoteApp collection while creating it. If you want to change it later, you can do so using the SCALE tab. Your applications and settings and your user data remain intact through this change.

Pricing

We are making the RemoteApp service available at a very attractive, affordable price.  You can host a line of business Windows application for as little as $10/user per month using the BASIC configuration.

At the STANDARD level, you can host your users’ entire productivity workspace for just $15/user per month.

Learn More

A variety of resources are available on the RemoteApp overview page. You can also download the client for your device and take a test drive. Finally, the RDV Team blog discusses today’s new features in more detail.

SQL Databases: Now with SQL 2014 Features and Compatibility

Today we are making available a preview of the next-generation release of our Azure SQL Database service.  We announced some of the preview's new features earlier in November.  Today's release delivers near-complete SQL Server 2014 engine compatibility and even better performance.

Our internal benchmark tests (using over 600 million rows of data) show query performance improvements of around 5x with today's preview relative to our existing Premium Tier SQL Database offering and up to 100x performance improvements when using the new In-memory columnstore technology now supported with today's preview release.

Lots of great new features and improvements

Key highlights of today's preview include:

  • Better management of large databases. We now support heavier database workload management with parallel queries, table partitioning, online indexing, worry-free large index rebuilds with the previous 2GB size limit removed, and more alter database commands.

  • Support for more programmability capabilities: You can now build even more robust applications with CLR, T-SQL Windows functions, XML index, and change tracking support.

  • Up to 100x performance improvements with support for In-memory columnstore queries for data mart and analytic workloads.

  • Improved monitoring and troubleshooting: Extended Events (XEvents) and visibility into over 100 new table views via an expanded set of Database Management Views (DMVs).

  • New S3 performance level: Today's preview introduces a new pricing option for SQL Databases. The new "S3" performance tier delivers 100 DTU of performance (twice the DTU level of the existing S2 tier) and all of the features available in the Standard tier. It enables an even more cost effective way to run applications with higher performance needs.

Learn More and Get Started

You can read more about the enhancements in today's preview on the preview getting started page.  To use today's preview, you can navigate to the SETTINGS part on the SQL Database blade in the Azure Preview Portal and upgrade to use the preview.


Try out the new features and give us your feedback!

Media Services: General Availability of Live Media Streaming

I’m very excited to announce the General Availability of Live Channels Media Streaming support.  Live Channels with Azure Media Services is the live services backbone that broadcasters such as NBC Sports have used to deliver live online media streamed events such as English Premier League, NHL hockey, Sunday Night Football.  A dozen international broadcasters also used it to seamlessly deliver live media streaming coverage of the 2014 Sochi Winter Olympics and 2014 FIFA World Cup.

You can now use Live Channels to stream events of your own - and scale to literally millions of users watching them.  Today's general availability release is backed by an enterprise-grade Service-Level Agreement (SLA) for all customers. 

Learn More

For more information on functionality and pricing, visit the Getting Started with Live Streaming blog post, the Media Services webpage or Media Services Pricing webpage, or the Live Streaming MSDN section.

Search: Portal Management, Multi-language support

I am happy to announce a number of highly requested features available today in Azure Search.  Azure Search provides developers with all of the features needed to build out a search experience for web and mobile applications without having to deal with the typical complexities that come with managing, tuning and scaling a real-world search service.

Azure Portal Enhancements

Helping developers setup and manage their services quickly and easily is a key goal of the Azure Management Portal. Today's release adds several new capabilities to the Azure Search support in the portal that makes it even easier to get started with Azure Search and reduce the need to write code.

For example, you can now easily create a new index. In the portal, you can name the search index, set all of the fields, and assign the properties for each of them - all without writing any code:


Once you create the index, you can also now drill into usage details like document counts and storage size. You can see all of the fields associated with this index as shown below:


Index tuning is another enhancement now supported in the portal user interface. Boosting relevant items not only provides a better search experience, it also helps you achieve business objectives. For example, if you are searching a product index, you might want to boost documents where the result was found in the product name, as opposed to another document where the result was found in the product description. Or you may wish to use a scoring function that allows you to boost items that have high star ratings or that provide higher margins.

The task of tuning an index was previously only available through the API. Starting today, using the Azure Preview portal you can create or alter scoring profiles, instantly tuning the results of your search queries without having to write a line of code:

Multilanguage Support across 27 Languages

With today’s release, Azure Search now has support for 27 languages. This allows Azure Search to accommodate the unique characteristics of a given language, enabling word-breaking and text normalization to work as expected for each language. Part of this enhancement includes support for stemming in the relevant languages, reducing words to their word stems. For example, you can search for the word “runs” and find documents that say “run” or “running”.

When creating an index you can choose to include content from multiple languages, allowing you to search and return results based on the chosen language of your user. For more information, you can visit the Language Support page. Over time, we will continue to enhance multi-language support to include additional languages.

API features

Last but not least, we’ve introduced a new Azure Search Management REST API that allows you to perform common administrative tasks, such as creating new services, and scaling services to allow for additional storage or higher query-per-second rates. You can see a sample of how to use this Management API at CodePlex.

DocumentDB: Bulk Add Documents and Syntax Highlighting

DocumentDB is a NoSQL document database service designed for scalable and high performance modern applications.  DocumentDB is delivered as a fully managed service (meaning you don’t have to manage any infrastructure or VMs yourself) with an enterprise grade SLA.

We now support some nice new capabilities for Document DB in the Azure Preview Portal:

  • Add Documents: Upload existing JSON documents via Document Explorer
  • Query syntax highlighting: Document DB SQL query

These features make it even easier to get started and explore DocumentDB.

Add Documents Support within the Azure Portal

The DocumentDB Document Explorer within the Azure Preview Portal now supports the uploading of existing JSON documents - which makes it easy to import and start using existing data stored in existing JSON files. Simply open Document Explorer and click the Add Document command:


In the new blade that opens, click the browse button to open a file explorer and select 1 or more JSON documents to upload. Note that Document Explorer currently supports up to 100 JSON document files in a single upload operation.


Once you’re satisfied with your selection, click the upload button. The documents will automatically be added to the Document Explorer grid and aggregate results will be displayed as the upload operation is in progress.


Once the operation has completed, you can select up to another 100 documents to upload without having to close the Add Document blade.  This makes it easy to import data into your DocumentDB databases.

Query Explorer – Syntax Highlighting

We’ve also enabled basic keyword and value highlighting within Query Explorer.


This makes it even easier to experiment and test queries against your NoSQL databases.

Please send us your feedback and suggestions on the Microsoft Azure DocumentDB feedback forum. If you haven’t tried DocumentDB yet, you can learn more about how to get started here.

Disaster Recovery: GA of DR for Branch Offices & SMB Customers

I’m excited to announce the General Availability of the Disaster Recovery (DR) to Azure for Branch offices and SMB feature in our Azure Site Recovery (ASR) service.  Today's new support enables consistent replication, protection, and recovery of Virtual Machines directly in Microsoft Azure.  With this new support we have extended the Azure Site Recovery service to become a simple, reliable & cost effective DR Solution for enabling Virtual Machine replication and recovery between Windows Server 2012 R2 and Microsoft Azure without having to deploy a System Center Virtual Machine Manager on your primary site.

These features build on top of the Hyper-V Replica technology available in Windows Server 2012 R2 and Microsoft Azure to provide remote health monitoring, no-impact recovery plan testing and single-click orchestrated recovery – all of this backed by an enterprise-grade SLA.

Verify DR Plans with Confidence

The Test Failover feature within Azure Site Recovery allows you to test your disaster recovery plans without impacting your production workload which ensures that you can perform periodic DR drills to meet your compliance objectives. You can connect to the virtual machine running in Azure via RDP after enabling the appropriate endpoints for the virtual machine running in Azure.

A Planned Failover will do a shutdown of your on-premises machine, transfer all the last changes inside the virtual machine to Azure & then bring up an instance of the VM in Azure without any data loss. An Unplanned Failover is usually triggered when your on-premises site has been hit by an unexpected disaster.

If you need to fail over a multi-virtual machine application, you can do so using the One-Click Orchestration using Recovery Plans feature available in Azure Site Recovery. Recovery plans make failover and failback from Azure easy and ensure that you meet your organization's Recovery Time Objective (RTO) goals.

Check out additional pricing or product information about Azure Site Recovery, and sign up for a free Azure trial and start using it today.

Active Directory: GA of Application Proxy and Password Writeback support

Today's Azure release includes some great updates to Azure Active Directory.

Azure Active Directory Application Proxy

The Azure Active Directory Application Proxy allows you to make your on-premises web applications securely accessible to users who want to use them from the cloud - and enables you to authenticate access to them using Azure AD.

You can do this without changing your applications and without having to change your DMZ configuration. Just install a lightweight connector anywhere on your network and configure access to the application in your Azure Active Directory, and you can make your SharePoint, Outlook Web Access (or any other Web application that relies on Kerberos) available to users outside your corporate network.


With today's release we added support for Kerberos Constrained Delegation. Now, once a user has authenticated to Azure Active Directory, the Azure Active Directory Application Proxy can automatically authenticate users to your on-premises application.

Password Writeback for Azure Active Directory Premium Customers

With the new Password Writeback support in Azure AD Sync, you can now configure your Active Directory system so that any time a user or administrator changes a password in Azure AD, the new password is also written back to your on-premises Active Directory. So, for example, when a user forgets their password to your on-premises AD, they can reset their password using the Azure AD password reset service we provide in the cloud, and then use their new password to sign on to your on-premises AD.  This makes it easier for organizations to offer a variety of self-service IT and password reset features to their employees and partners.

Preview of security questions for password reset

With today's release we’re also introducing preview support that enables you to configure security questions for password reset scenarios. Users can register their answers to security questions, and then use those answers to prove who they are when they go to reset a forgotten password.

Add your own password SSO for SaaS applications

With today's release we are also introducing a preview of functionality that lets you configure password-based single sign-on for any web application that has an HTML sign-in page, even for applications that are not in the Azure AD Application Gallery. You can also add links of your own to your users’ Azure AD Access Panel, such as deep links to specific SharePoint pages, or to web apps that use Active Directory Federation Services.

More Ways to Get AD Premium

We now support the ability to purchase Azure Active Directory Premium online at the Office 365 commerce catalogue, where you can purchase Azure AD Premium licenses for as many users as you want.  You can then easily manage your Azure Active Directory by navigating to http://aka.ms/accessAAD or through the Office administration portal.

To learn more about these new capabilities and how you can start using them, read Alex Simons’ post on the Active Directory Team Blog.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Reactive prefetch speeds Google's mobile search by 100-150 milliseconds.

Increasing responsiveness by parallelizing and prefetching content using hints and dependency graphs is an old concept, but seldom do we see such a nice, tight example of the benefit as the one given by the great Ilya Grigorik in this G+ post:

The insight here is that we're initiating the fetch for the HTML and its critical resources in parallel... which requires that the page initiating the navigation knows which critical resources are being used on the target page.

This is a powerful pattern and one that you can use to accelerate your site as well. The key insight is that we are not speculatively prefetching resources and do not incur unnecessary downloads. Instead, we wait for the user to click the link and tell us exactly where they are headed, and once we know that, we tell the browser which other resources it should fetch in parallel - aka, reactive prefetch!

As you can infer, implementing the above strategy requires a lot of smarts both in the browser and within the search engine... First, we need to know the list of critical resources that may delay rendering of the destination page for every page on the web! No small feat, but the Search team has us covered - they're good like that. Next, we need a browser API that allows us to invoke the prefetch logic when the click occurs: the search page listens for the click event, and once invoked, dynamically inserts prefetch hints into the search results page. Finally, this is where Chrome comes in: as the search results page is unloaded, the browser begins fetching the hinted resources in parallel with the request for the destination page. The net result is that the critical resources are fetched much sooner, allowing the browser to render the destination page 100-150 milliseconds earlier.
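
The pattern is easy to reproduce on your own site. Below is a minimal sketch in TypeScript of what a reactive-prefetch click handler could look like; it is not Google's implementation, and the criticalResources map, the URLs, and the choice of rel="prefetch" are illustrative assumptions (the real system builds its resource lists inside the search engine and relies on Chrome's handling of the hints).

// Hypothetical map from destination URL to that page's critical resources.
// In Google's case this data comes from the search infrastructure.
const criticalResources: Record<string, string[]> = {
  "https://example.com/article": [
    "https://example.com/css/article.css",
    "https://example.com/js/article.js",
  ],
};

document.addEventListener("click", (event) => {
  // React only once the user has actually committed to a navigation.
  const link = (event.target as HTMLElement | null)?.closest("a");
  if (!link || !(link.href in criticalResources)) return;

  for (const url of criticalResources[link.href]) {
    // Insert a prefetch hint so the browser fetches this resource
    // in parallel with the request for the destination page.
    const hint = document.createElement("link");
    hint.rel = "prefetch";
    hint.href = url;
    document.head.appendChild(hint);
  }
});

Nothing is fetched speculatively: the hints are injected only after the click, which is what keeps the extra downloads from being wasted.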

Categories: Architecture

In Memory: Grace Hopper to Programmers: Mind Your Nanoseconds!

This article was published a few years ago, but as today is Grace Hopper's birthday I thought it would be a good time to once again share an amazing talk from an amazing woman.

Computing pioneer Grace Hopper, inventor of the compiler, searched for a concrete way to create an intuitive understanding of just how fast a nanosecond, a billionth of a second, really is, since that was the timescale of their new computer circuits. As an illustration she settled on a length of wire equal to the distance light travels in one nanosecond: a very portable 11.8 inches. A microsecond's worth of wire is still portable, but a much bulkier 984 feet. In one millisecond light travels 186 miles, which only Hercules could carry. In today's terms, at a 3.06 GHz clock speed there are about 0.33 nanoseconds between ticks, or roughly 3.9 inches of light travel.
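
If you want to check those figures yourself, here is a small TypeScript snippet (my own back-of-the-envelope sketch, not part of the original article) that reproduces them from the speed of light:

// Distances light travels in various time slices, plus the period of a 3.06 GHz clock.
const SPEED_OF_LIGHT = 299_792_458; // metres per second, in vacuum

const metresIn = (seconds: number): number => SPEED_OF_LIGHT * seconds;

console.log((metresIn(1e-9) * 39.37).toFixed(1));    // ~11.8 inches per nanosecond
console.log((metresIn(1e-6) * 3.2808).toFixed(0));   // ~984 feet per microsecond
console.log((metresIn(1e-3) / 1609.344).toFixed(0)); // ~186 miles per millisecond
console.log(((1 / 3.06e9) * 1e9).toFixed(2));        // ~0.33 nanoseconds between ticks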

Understanding the profligate ways of programmers, she suggested that every programmer wear a necklace with a microsecond's worth of wire, so they know what they are wasting when they throw away microseconds. And if a general is busting your chops about satellite messages taking too long to send, you can bust out your piece of wire and explain that there are a lot of nanoseconds between here and there.

Here's a short, witty, and wise video of her famous nanosecond demonstration. An amazing lady, great innovator, an engaging speaker, and an inspiring teacher.

Categories: Architecture

Sponsored Post: Campanja, Hypertable, Sprout Social, Scalyr, FoundationDB, AiScaler, Aerospike, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Campanja is an Internet advertising optimization company born in the cloud, and today we are one of the Nordics' bigger AWS consumers. The time has come for us to embrace the next generation of cloud infrastructure. We believe in immutable infrastructure, container technology, and microservices; we hope to use PaaS when we can get away with it, but will consume at the IaaS layer when we have to. Please apply here.

  • Performance and Scale Engineer at Sprout Social. You will be like a physical trainer for the Sprout social media management platform: you will evaluate and make improvements to keep our large, diverse tech stack happy, healthy, and, most importantly, fast. You'll work up and down our back-end stack - from our RESTful API through to our myriad data systems and into the Java services and Hadoop clusters that feed them - searching for SPOFs, performance issues, and places where we can shore things up. Apply here.

  • UI Engineer: AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data: AppDynamics, a leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of the software that manages application architectures. Apply here.
Fun and Informative Events
  • Sign Up for New Aerospike Training Courses.  Aerospike now offers two certified training courses, Aerospike for Developers and Aerospike for Administrators & Operators, to help you get the most out of your deployment.  Find a training course near you. http://www.aerospike.com/aerospike-training/
Cool Products and Services
  • Aerospike Hits 1M writes per second with 6x Fewer Servers than Cassandra. A new Google Compute Engine benchmark demonstrates how the Aerospike database hit 1 million writes per second with just 50 nodes - compared to Cassandra's 300 nodes. Read the benchmark: http://www.aerospike.com/blog/1m-wps-6x-fewer-servers-than-cassandra/

  • Hypertable Inc. Announces New UpTime Support Subscription Packages. The developer of Hypertable, an open-source, high-performance, massively scalable database, announces three new UpTime support subscription packages – Premium 24/7, Enterprise 24/7 and Basic. 24/7/365 support packages start at just $1995 per month for a ten-node cluster, and $49.95 per machine per month thereafter. For more information visit us on the Web at http://www.hypertable.com/. Connect with Hypertable: @hypertable | Blog.

  • FoundationDB launches SQL Layer. SQL Layer is an ANSI SQL engine that stores its data in the FoundationDB Key-Value Store, inheriting exceptional properties like automatic fault tolerance and scalability. It is best suited for operational (OLTP) applications with high concurrency. Users of the Key-Value Store have free access to SQL Layer. SQL Layer is also open source; you can get started with it on GitHub.

  • Diagnose server issues from a single tab. The Scalyr log management tool replaces all your monitoring and analysis services with one, so you can pinpoint and resolve issues without juggling multiple tools and tabs. It's a universal tool for visibility into your production systems. Log aggregation, server metrics, monitoring, alerting, dashboards, and more. Not just “hosted grep” or “hosted graphs,” but enterprise-grade functionality with sane pricing and insane performance. Trusted by in-the-know companies like Codecademy – try it free!

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.  http://aiscaler.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

Former Softie Patricia Walsh Sets a World Record for Blind Triathletes

I’m always a fan of hearing about how Softies change the world, inside and outside of Microsoft.

I was reading Blind Ambition: How to Envision Your Limitless Potential and Achieve the Success You Want by Patricia Walsh.  It’s an inspirational story, as well as an insightful read if you are looking for ways to up your game or get the edge in work and life.

I wrote a post, 10 Big Ideas from Blind Ambition, to share some of the highlights from the book.

Walsh is a former Softie.  More than that, she has raced in marathons, ultra-marathons and IRONMAN triathlons.  In 2011, Walsh set a new world record for blind triathletes, shattering the previous male and female records by over 50 minutes.

Pretty impressive.

She left Microsoft to start her own business, pursue her speaking career, and train as a world-class athlete.

She set a high bar.

But she also set a great example.  Walsh wanted to help light the way for others to show them that they can be limitless if they set goals, put in the work, and don’t let fear or failures hold them back. 

And most importantly, don’t put limits on yourself, and don’t fall into the trap of the limits that others put on you.

Categories: Architecture, Programming

The End of Common-off-the-Shelf Software

Xebia Blog - Mon, 12/08/2014 - 08:12

Large Common-off-the-Shelf Software (COTS for short) packages are difficult to implement and integrate; buying a large software package is not a good idea. Below I will explain how Agile methods and services on lightweight containers help you implement minimal, focused solutions instead.

Given the standard [hardware | OS | app server | business logic | user interface] software stack, COTS packages include some of the app server, all of the business logic and the full user interface. Examples are packages for sales support, financial management or marketing: large and unwieldy beasts that thrash around on your IT infrastructure, needing herds of specialists to keep them going and insisting that you install Java 1.6 and Oracle 10 on Red Hat 4.2, IE 8.0, and the biggest, meanest server money can buy.

It probably all started with honorable intentions: buy over reuse over build appears to make perfect sense if you don’t look too closely. I even agree, though we might disagree on one important aspect, and that would be scale.
In the old waterfall days we were used to writing an architecture and making an inventory of business needs. Because people quickly learned that they rarely got more than one opportunity to ask for what they needed, they tended to ask for a lot, cramming in as many features as they could think of. At some point in the decision process everyone realized they might as well buy something really versatile: a large software package that matches all requirements, now and in the future.

All is well.

Until the next business need pops up and the same reasoning (fix all specs up front, one shot to get it right, might as well ask for a little extra, it won't hurt) leads to another package that has some overlap with the first, but not too much, so that's OK. Then the need arises to synchronize data (because of the slight overlap between the packages) and an ESB is implemented (because you might as well buy a software package, right?).

Now there are two stovepipes in your landscape glued together with a SPOF, and things are not well any more. Changing stuff means coordinating the effort of multiple teams. Testing and integrating becomes the task of a large team, and no team has 'works in production' in their definition of done. 'Works on my machine' is the best you may hope for, and somebody else will have to fix all the integration problems. Oh, and the people who use this software switch between badly designed screens running in a bunch of yesteryear's browsers.

How can modern software development wisdom and architecture help?

Two trends allow us to avoid stovepipes connected by super glue: micro-services hosted on lightweight containers, and Agile methods.

Micro-services on lightweight containers like Docker, or on frameworks like Dropwizard or Spring Boot, are the end of the application server that served us so well last decade. If you can scale your application by starting a new process on a fresh VM, you don't need complex software to share resources. That means you don't really need a lot of infrastructure, and you can deploy small components with negligible overhead. Key-value data stores allow you to relax constraints on data that were imposed by relational databases. A service might support two versions of an interface at the same time. Combined with REST, DNS and a load balancer, this is the end of ESBs.
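
To make that concrete, here is a minimal sketch of such a small, focused service. It uses TypeScript on Node's built-in http module rather than Dropwizard or Spring Boot, and the credit-risk endpoint and its payloads are made up for illustration; the point is the shape: one small process, plain HTTP and JSON, and two interface versions served side by side so consumers can migrate at their own pace.

import { createServer } from "http";

const server = createServer((req, res) => {
  if (req.url === "/v1/credit-risk") {
    // Old contract: just a score.
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ score: 0.42 }));
  } else if (req.url === "/v2/credit-risk") {
    // New contract adds an explanation; /v1 keeps working while consumers migrate.
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ score: 0.42, reasons: ["payment history", "utilisation"] }));
  } else {
    res.writeHead(404);
    res.end();
  }
});

// Scale out by starting more instances (for example, one per Docker container)
// behind DNS and a load balancer; no shared application server required.
server.listen(Number(process.env.PORT ?? 8080));

Retiring /v1 later is then a deployment decision for one small team, not a change request routed through an ESB.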

Agile promotes stable teams and budgets that are allocated to a team instead of a project. This means we don't really have to do budget calculations anymore. Because we can change direction every sprint, there is no need to ask for the world like we did in the waterfall days. That implies that we should create the smallest thing that could possibly solve the problem, instead of buying the biggest beast that will solve all our problems and some others we don't even have.

This doesn’t mean we shouldn’t buy software anymore. What I would love to see happen is vendors focusing on small, specialized components: a highly specialized service using state-of-the-art algorithms to assess credit risk, or a component that knows all about registering and monitoring customer service calls. That would be awesome. But no user interface, thanks; we’ll be happy to create that ourselves, grabbing the data with an HTTP call and presenting it exactly the way it's needed.
