
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Multithreaded Programming has Really Gone to the Dogs

Taken from Multithreaded programming - theory and practice on reddit, which also has some very funny comments. If anything, this is way too organized.

 What's not shown? All the little messes that have to be cleaned up after...

Categories: Architecture

The Machine: HP's New Memristor Based Datacenter Scale Computer - Still Changing Everything

The end of Moore’s law is the best thing that’s happened to computing in the last 50 years. Moore’s law has been a tyranny of comfort. You were assured your chips would see a constant improvement. Everyone knew what was coming and when it was coming. The entire semiconductor industry was held captive to delivering on Moore’s law. There was no new invention allowed in the entire process. Just plod along on the treadmill and do what was expected. We are finally breaking free of these shackles and entering what is the most exciting age of computing that we’ve seen since the late 1940s. Finally we are in a stage where people can invent and those new things will be tried out and worked on and find their way into the market. We’re finally going to do things differently and smarter.

-- Stanley Williams (paraphrased)

HP has been working on a radically new type of computer, enigmatically called The Machine (not this machine). The Machine is perhaps the largest R&D project in the history of HP. It’s a complete rebuild of both hardware and software from the ground up. A massive effort. HP hopes to have a small version of their datacenter scale product up and running in two years.

The story began when we first met HP’s Stanley Williams about four years ago in How Will Memristors Change Everything? In the latest chapter of the memristor story, Mr. Williams gives another incredible talk: The Machine: The HP Memristor Solution for Computing Big Data, revealing more about how The Machine works.

The goal of The Machine is to collapse the memory/storage hierarchy. Computation today is energy inefficient. Eighty percent of the energy and vast amounts of time are spent moving bits between hard disks, memory, processors, and multiple layers of cache. Customers end up spending more money on power bills than on the machines themselves. So the machine has no hard disks, DRAM, or flash. Data is held in power efficient memristors, an ion based nonvolatile memory, and data is moved over a photonic network, another very power efficient technology. When a bit of information leaves a core it leaves as a pulse of light.

On graph processing benchmarks The Machine reportedly performs 2-3 orders of magnitude better based on energy efficiency and one order of magnitude better based on time. There are no details on these benchmarks, but that’s the gist of it.

The Machine puts data first. The concept is to build a system around nonvolatile memory with processors sprinkled liberally throughout the memory. When you want to run a program you send the program to a processor near the memory, do the computation locally, and send the results back. Computation uses a wide range of heterogeneous multicore processors. By transmitting only the bits required for the program and the results, the savings are enormous compared to moving terabytes or petabytes of data around.
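
To make "enormous" concrete, here is a quick back-of-envelope sketch in R. The sizes are entirely hypothetical illustration values, not figures from HP or the talk; the point is only the ratio between shipping the data to the compute and shipping a small program plus results to the data.

# Hypothetical sizes, for illustration only -- not figures from HP
data_size    = 1e15  # 1 PB of data held in nonvolatile memory
program_size = 10e6  # 10 MB program shipped to a processor near the data
result_size  = 1e6   # 1 MB of results shipped back

# Bytes crossing the network in each approach
move_data_to_compute = data_size
move_compute_to_data = program_size + result_size

move_data_to_compute / move_compute_to_data
# roughly 9e7, i.e. about eight orders of magnitude less traffic on the wire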

The Machine is not targeted at standard HPC workloads. It’s not a LINPACK buster. The problem HP is trying to solve for their customers is where a customer wants to perform a query and figure out the answer by searching through a gigantic pile of data. These are problems that need to store lots of data and analyze it in real time as new data comes in.

Why is a very different architecture needed for building a computer? Computer systems can’t keep up with the flood of data that’s coming in. HP is hearing from their customers that they need the ability to handle ever greater amounts of data. The number of bits being collected is growing exponentially faster than the rate at which transistors are being manufactured. It’s also the case that information collection is growing faster than the rate at which hard disks are being manufactured. HP estimates there are 250 trillion DVDs worth of data that people really want to do something with. Vast amounts of data being collected in the world are never even looked at.

So something new is needed. That’s at least the bet HP is making. While it’s easy to get excited about the technology HP is developing, it won’t be for you and me, at least until the end of the decade. These will not be commercial products for quite a while. HP intends to use them for their own enterprise products, internally consuming everything that’s made. The idea is we are still very early in the tech cycle, so high cost systems are built first, then as volumes grow and processes improve, the technology will be ready for commercial deployment. Eventually costs will come down enough that smaller form factors can be sold.

What is interesting is HP is essentially building its own cloud infrastructure, but instead of leveraging commodity hardware and software, they are building their own best of breed custom hardware and software. A cloud typically makes available vast pools of memory, disk, and CPU, organized around instance types which are connected by fast networks. Recently there’s a move to treat these resource pools as independent of the underlying instances. So we are seeing high level scheduling software like Kubernetes and Mesos becoming bigger forces in the industry. HP has to build all this software themselves, solving many of the same problems, along with the opportunities provided by specialized chips. You can imagine programmers programming very specialized applications to eke out every ounce of performance from The Machine, but what is more likely is HP will have to create a very sophisticated scheduling system to optimize how programs run on top of The Machine. What's next in software is the evolution of a kind of Holographic Application Architecture, where function is fluid in both time and space, and identity arises at run-time from a two-dimensional structure. Schedule optimization is the next frontier being explored on the cloud.

The talk is organized in two broad sections: hardware and software. Two-thirds of the project is software, but Mr. Williams is a hardware guy, so hardware makes up the majority of the talk.  The hardware section is based around the idea of optimizing the various functions around the physics that is available: electrons compute; ions store; photons communicate.

Here’s my gloss on Mr. Williams’ talk. As usual with such a complex subject much can be missed. Also, Mr. Williams tosses huge interesting ideas around like pancakes, so viewing the talk is highly recommended. But until then, let’s see The Machine HP thinks will be the future of computing….

Categories: Architecture

On Communicating with Resistant Stakeholders: Process Flow Storyboards

Software Requirements Blog - Seilevel.com - Tue, 12/16/2014 - 17:00
Determining how best to communicate requirements to stakeholders on your project can be difficult if you have a challenging audience who has resistance, for whatever reason, towards the project you’re working on. I was working on a project several months ago in which one of the major stakeholder groups felt that the work we were […]
Categories: Requirements

5 Reasons Product Owners Should Let Teams Work Out of Order

Mike Cohn's Blog - Tue, 12/16/2014 - 15:00

The following was originally published in Mike Cohn's monthly newsletter. If you like what you're reading, sign up to have this content delivered to your inbox weeks before it's posted on the blog, here.

A product owner hands 10 story cards to the team. The team reads them and hands the fifth and sixth cards back to the product owner. By the end of the sprint, the team delivers the functionality described on cards 1, 2, 3, 4, and 7. But the team has not touched the work of cards 5 and 6.

And I say this is OK.

Standard agile advice is that a team should work on product backlog items in the order prescribed by the product owner. And although this is somewhat reasonable advice, I want to argue that good agile teams violate that guideline all the time.

There are many good reasons for a team to work out of order. Let’s consider just a few; they should be sufficient evidence that teams should be allowed to work out of order.

1. Synergies

There are often synergies between items near the top of a product backlog—while a team is working on item No. 3, they should be allowed to work on No. 6. If two items are in the same part of the system and can be done faster together than separately, this is usually a good tradeoff for a product owner to allow.

2. Dependencies

A team may agree that No. 4 is more important than No. 5 and No. 6 on the product backlog. Unfortunately, No. 4 can’t be done until No. 7 has been implemented. Finding a dependency like this is usually enough to justify a team working a bit out of the product owner’s prioritized sequence.

3. Skillset Availability

A team might love to work on the product owner’s fourth top priority, but the right person for the job is unavailable. Sure, this can be a sign that additional cross-training is needed on that team to address this problem – but, that is often more of a long-term solution. And, in the short term, the right thing to do might simply be to work a bit out of order on something that doesn’t require the in-demand skillset.

4. It’s More Exciting

OK, this one may stir up some controversy. I’m not saying the team can do 1, 2, 3, 4, and then number 600. But, in my example of 1, 2, 3, 4 and 7, choosing to work on something because it’s more exciting to the team is OK.

On some projects, teams occasionally hit a streak of product backlog items that are, shall we say, less than exciting. Letting the team slide slightly ahead sometimes just to have some variety in what they’re doing can be good for the morale of the team. And that will be good for product owners.

Bonus Reason 4a: It’s More Exciting to Stakeholders

While on the subject of things being more exciting, I’m going to say it is also acceptable for a team to work out of order if the item will be more exciting to stakeholders.

It can sometimes be a challenge to get the right people to attend sprint reviews. It gets especially tough after a team runs a few boring ones in a row. Sometimes this happens because of the nature of the high-priority work—it’s stuff that isn’t really visible or is esoteric perhaps to stakeholders.

In those cases, it can be wise to add a bit of sex appeal to the next sprint review by making sure the team works on something stakeholders will find interesting and worth attending the meeting for.

5. Size

In case the first four (plus) items haven’t convinced you, I’ve saved for last the item that proves every team occasionally works out of product owner order: A team may skip item 5 on the product backlog because it’s too big. So they grab the next one that fits.

If a team were not to do this, they would grab items 1, 2, 3, and 4 and stop, perhaps leaving a significant portion of the sprint unfilled. Of course, the team could look for a way to perhaps do a portion of item 5 before jumping down to grab number 7. But sometimes that won’t be practical, which means the team will at least occasionally work out of order.

Product Owners Aren’t Perfect

A perfect product owner would know everything above. The perfect product owner would know that the team’s DBA will be fully occupied with tasks from the first four product backlog items, and so wouldn’t put a database-intensive task fifth. The perfect product owner would know that items 2 and 5 both affect the same Java class, and would naturally prioritize them together.

But most product owners have a hard time being perfect. A better solution is for them to put the product backlog in a pretty good prioritized order, and then leave room for fine tuning from the team.

What do you do when inertia wins?

Step 3 is to take smaller bites!

Changing how any organization works is not easy.  Many different moving parts have to come together for a change to take root and build up enough inertia to pass the tipping point. Unfortunately, because of misalignment, misunderstanding or poor execution, change programs don’t always win the day.  This is not news to most of us in the business.  What should happen after a process improvement program fails?  What happens when the wrong kind of inertia wins?

Step One:  All failures must be understood.

First, perform a critical review of the failed program that focuses on why and how it failed.  The word critical is important.  Nothing should be sugar coated or “spun” to protect people’s feelings.  A critical review must also have a good dose of independence from those directly involved in the implementation.  Independence is required so that the biases and decisions that led to the original program can be scrutinized.  The goal is not to pillory those involved, but rather to make sure the same mistakes are not repeated.  These reviews are known by many names: postmortems, retrospectives or troubled project reviews, to name a few.

Step Two:  Determine which way the organization is moving.

Inertia describes why an object in motion tends to stay in motion and an object at rest tends to stay at rest.  Energy is required to change the state of any object or organization; understanding the direction of the organization is critical to planning any change. In process improvement programs we call the application of energy change management.  A change management program might include awareness building, training, mentoring or a myriad of other events, all designed to inject energy into the system. The goal of that energy is either to amplify or to change the performance of some group within an organization.  When not enough or too much energy is applied, the process change will fail.

Just because a change has failed does not mean all is lost.  There are two possible outcomes to a failure. The first is that the original position is reinforced, making change even more difficult.  The second is that the target group has been pushed into moving, maybe not all the way to where they should be or even in the right direction, but the original inertia has been broken.

Frankly, both outcomes happen.  If the failure is such that no good comes of it, then your organization will be mired in the muck of living off past performance.  This is similar to what happens when a car gets stuck in snow or sand and digs itself in.  The second scenario is more positive: while the goal was not attained, the organization has begun to move, making further change easier.  To return to the car stuck in the snow, a technique taught to many of us who live in snowy climates is “rocking.” Rocking gets a car that is stuck in the snow moving back and forth.  Movement increases the odds that you will be able to break free and get going in the right direction.

Step Three:  Take smaller bites!

The lean startup movement provides a number of useful concepts that can be used when changing any organization.  In Software Process and Measurement Cast 196, Jeff Anderson talked in detail about leveraging the concepts of lean start-ups within change programs (Link to SPaMCAST 196).  A lean start up will deliver a minimum amount of functionality needed to generate feedback and to further populate a backlog of manageable changes. The backlog should be groomed and prioritized by a product owner (or owners) from the area being impacted by the change.  This will increase ownership and involvement and generate buy-in.  Once you have a prioritized backlog, make the changes in a short time-boxed manner while involving those being impacted in measuring the value delivered.  Stop doing things if they are not delivering value and go to the next change.

Being a change agent is not easy, and no one succeeds all the time unless they are not taking any risks.  Learn from your mistakes and successes.  Understand the direction the organization is moving and use that movement as an asset to magnify the energy you apply. Involve those you are asking to change in building a backlog of prioritized minimum viable changes (mix the concept of a backlog with concepts from the lean startup movement).  Make changes based on how those who are impacted prioritize the backlog, then stand back to observe and measure.  Finally, pivot if necessary.  Always remember that the goal is not really the change itself, but rather demonstrable business value. Keep pushing until the organization is going in the right direction.  What do you do when inertia wins?  My mother would have said just get back up, dust yourself off and get back in the game.


Categories: Process Management

Reminder to migrate to updated Google Data APIs

Google Code Blog - Mon, 12/15/2014 - 18:00
Over the past few years, we’ve been updating our APIs with new versions across Drive and Calendar, as well as those used for managing Google Apps for Work domains. These new APIs offered developers several improvements over older versions of the API. With each of these introductions, we also announced the deprecation of a set of corresponding APIs.

The deprecation period for these APIs is coming to an end. As of April 20, 2015, we will discontinue these deprecated APIs. Calls to these APIs and any features in your application that depend on them will not work after April 20th.

Discontinued API → Replacement API
Documents List API → Drive API
Admin Audit → Admin SDK Reports API
Google Apps Profiles → Admin SDK Directory API
Provisioning → Admin SDK Directory API
Reporting → Admin SDK Reports API
Email Migration API v1 → Email Migration API v2
Reporting Visualization → No replacement available
When updating, we also recommend that you use the opportunity to switch to OAuth2 for authorization. Older protocols, such as ClientLogin, AuthSub, and OpenID 2.0, have also been deprecated and are scheduled to shut down.

For help on migration, consult the documentation for the APIs or ask questions about the Drive API or Admin SDK on StackOverflow.

Posted by Steven Bazyl, Developer Advocate
Categories: Programming

Team Competition is Not Friendly

I once worked in an organization where the senior managers thought they should motivate us, the team members. They decided to have a team competition, complete with prizes.

I was working on a difficult software problem with a colleague on another team. We both needed to jointly design our pieces of the product to make the entire product work.

After management announced the competition, he didn’t want to work with me. Why? There was prize money, worth hundreds of dollars to each person. He had a mortgage and three kids. That money made a big difference to him. I was still single. I would have stuck that money into either my savings or retirement fund, after buying something nice for myself.

Management motivated us, alright. But not to collaborate. They motivated us to stop working together. They motivated us to compete.

Our progress stopped.

My boss wanted to know what happened. I explained. I couldn’t fault my colleague. He wanted the money. It made a big difference for him. I would have appreciated the money, but not nearly as much as he would have. (Later, when I was paying for childcare, I understood how much of a difference that money made.)

I then had this conversation with my boss, ranting and raving the entire time:

“Look, do you want the best product or the best competition?”

“What?”

“You can’t have both. You can have a great product or you can have a great competition. Choose. Because once you put money on the table, where only one team gets the money, we won’t collaborate anymore.”

My boss got that “aha” look on his face. “Hold that thought,” he said.

The next day, management changed the competition. Now, it was about the teams who worked together to create the best product, not the one team who had the best idea. Still not so good, because all the teams on the program needed to collaborate. But better.

When I had my one-on-one with my boss, I explained that all the teams needed to collaborate. Were they really going to pay everyone a bonus?

My boss paled. They had not thought this through. “I’d better make sure we have the funds, right?”

People don’t work just for money. You need to pay people a reasonable salary. Remember what Dan Pink says in Drive: The Surprising Truth About What Motivates Us. People work for autonomy, mastery, and purpose. If you exchange the social contract of working for autonomy, mastery, and purpose for money, you’d better pay enough money. You also better repeat that money the next time. And, the next time. And, the next time.

That’s the topic of this month’s management myth: Management Myth 35: Friendly Competition Is Constructive.

Software product development is a team activity, full of learning. As soon as you make it a competition, you lose on the teamwork. You lose the learning. Nobody wins. There is no such thing as “friendly” competition.

Instead, if you go for collaboration, you can win.

Read Management Myth 35: Friendly Competition Is Constructive.

Categories: Project Management

How I got Robert (Uncle Bob) Martin to write a foreword for my book

Making the Complex Simple - John Sonmez - Mon, 12/15/2014 - 16:00

Last week my publisher, Manning, gave me a little surprise. They told me that my new book, Soft Skills: The Software Developer’s Life Manual was going to publish early; that anyone who ordered before December 14th would be able to get the print version in their hands by Christmas (barring any unforeseen circumstances.) This was very exciting, until I realized ... Read More

The post How I got Robert (Uncle Bob) Martin to write a foreword for my book appeared first on Simple Programmer.

Categories: Programming

The Evolution of eInk

Coding Horror - Jeff Atwood - Mon, 12/15/2014 - 09:40

Sure, smartphones and tablets get all the press, and deservedly so. But if you place the original mainstream eInk device from 2007, the Amazon Kindle, side by side with today's model, the evolution of eInk devices is just as striking.

Each of these devices has a 6 inch eInk screen. Beyond that they're worlds apart.

8" × 5.3" × 0.8"
10.2 oz 6.4" × 4.5" × 0.3"
6.3 oz 6" eInk display
167 PPI
4 level greyscale 6" eInk display
300 PPI
16 level greyscale
backlight 256 MB 4 GB 400 Mhz CPU 1 GHz CPU $399 $199 7 days battery life
USB 6 weeks battery life
WiFi / Cellular

They may seem awfully primitive compared to smartphones, but that's part of their charm – they are the scooter to the motorcycle of the smartphone. Nowhere near as versatile, but as a form of basic transportation, radically simpler, radically cheaper, and more durable. There's an object lesson here in stripping things away to get to the core.

eInk devices are also pleasant in a paradoxical way because they basically suck at everything that isn't reading. That doesn't sound like something you'd want, except when you notice you spend every fifth page switching back to Twitter or Facebook or Tinder or Snapchat or whatever. eInk devices let you tune out the world and truly immerse yourself in reading.

I believe in the broadest sense, bits > atoms. Sure, we'll always read on whatever device we happen to hold in our hands that can display words and paragraphs. And the advent of retina class devices sure made reading a heck of a lot more pleasant on tablets and smartphones.

But this idea of ultra-cheap, pervasive eInk reading devices eventually replacing those ultra-cheap, pervasive paperbacks I used to devour as a kid has great appeal to me. I can't let it go. Reading is Fundamental, man!

That's why I'm in this weird place where I will buy, sight unseen, every new Kindle eInk device. I wasn't quite crazy enough to buy the original Kindle (I mean, look at that thing) but I've owned every model since the third generation Kindle was introduced in 2010.

I've also been tracking the Kindle prices to see when they can get them down to $49 or lower. We're not quite there yet – the basic Kindle eInk reader, which by the way is still pretty darn amazing compared to that original 2007 model pictured above – is currently on sale for $59.

But this is mostly about their new flagship eInk device, the Kindle Voyage. Instead of being cheap, it's trying to be upscale. The absolute first thing you need to know is this is the first 300 PPI (aka "retina") eInk reader from Amazon. If you're familiar with the smartphone world before and after the iPhone 4, then you should already be lining up to own one of these.

When you experience 300 PPI in eInk, you really feel like you're looking at a high quality printed page rather than an array of RGB pixels. Yeah, it's still grayscale, but it is glorious. Here are some uncompressed screenshots I made from mine at native resolution.

Note that the real device is eInk, so there's a natural paper-like fuzziness that makes it seem even more high resolution than these raw bitmaps would indicate.

I finally have enough resolution to pick a thinner font than fat, sassy old Caecilia.

The backlight was new to the original Paperwhite, and it definitely had some teething pains. The third time's the charm; they've nailed the backlight aspect for improved overall contrast and night reading. The Voyage also adds an ambient light sensor so it automatically scales the backlight to anything from bright outdoors to a pitch-dark bedroom. It's like automatic night time headlights on a car – one less manual setting I have to deal with before I sit down and get to my reading. It's nice.

The Voyage also adds page turn buttons back into the mix, via pressure sensing zones on the left and right bezel. I'll admit I had some difficulty adjusting to these buttons, to the point that I wasn't sure I would, but I eventually did – and now I'm a convert. Not having to move your finger into the visible text on the page to advance, and being able to advance without moving your finger at all, just pushing it down slightly (which provides a little haptic buzz as a reward), does make for a more pleasant and efficient reading experience. But it is kind of subtle and it took me a fair number of page turns to get it down.

In my experience eInk devices are a bit more fragile than tablets and smartphones. So you'll want a case for automatic on/off and basic "throw it in my bag however" paperback book level protection. Unfortunately, the official Kindle Voyage case is a disaster. Don't buy it.

Previous Kindle cases were expensive, but they were actually very well designed. The Voyage case is expensive and just plain bad. Whoever came up with the idea of a weirdly foldable, floppy origami top opening case on a thing you expect to work like a typical side-opening book should be fired. I recommend something like this basic $14.99 case which works fine to trigger on/off and opens in the expected way.

It's not all sweetness and light, though. The typography issues that have plagued the Kindle are still present in full force. It doesn't personally bother me that much, but it is reasonable to expect more by now from a big company that ostensibly cares about reading. And has a giant budget with lots of smart people on its payroll.

This is what text looks like on a kindle.

— Justin Van Slembrou… (@jvanslem) February 6, 2014

If you've dabbled in the world of eInk, or you were just waiting for a best of breed device to jump in, the Kindle Voyage is easy to recommend. It's probably peak mainstream eInk. Would recommend, would buy again, will probably buy all future eInk models because I have an addiction. A reading addiction. Reading is fundamental. Oh, hey, $2.99 Kindle editions of The Rise and Fall of the Third Reich? Yes, please.

(At the risk of coming across as a total Amazon shill, I'll also mention that the new Amazon Family Sharing program is amazing and lets me and my wife finally share books made of bits in a sane way, the way we used to share regular books: by throwing them at each other in anger.)

Categories: Programming

New blog posts about bower, grunt and elasticsearch

Gridshore - Mon, 12/15/2014 - 08:45

Two new blog posts I want to point out to you all. I wrote these blog posts on my employer's blog:

The first post is about creating backups of your elasticsearch cluster. Some time ago they introduced the snapshot/restore functionality. Of course you can use the REST endpoint to use the functionality, but how much easier is it if you can use a plugin to handle the snapshots? Or maybe even better, integrate the functionality into your own Java application. That is what this blog post is about: integrating snapshot/restore functionality into your Java application. As a bonus there are screens of my elasticsearch gui project showing the snapshot/restore functionality.

Creating elasticsearch backups with snapshot/restore

The second blog post I want to bring to your attention is front-end oriented. I already mentioned my elasticsearch gui project. This is an AngularJS application. I have been working on the plugin for a long time and the amount of JavaScript code is increasing. Therefore I wanted to introduce grunt and bower to my project. That is what this blog post is about.

Improve my AngularJS project with grunt

The post New blog posts about bower, grunt and elasticsearch appeared first on Gridshore.

Categories: Architecture, Programming

SPaMCAST 320 – Alfonso Bucero – Today is a Good Day

Listen to the Software Process and Measurement Cast 320

SPaMCAST 320 features our interview with Alfonso Bucero. We discussed his book, Today Is A Good Day. Attitude is an important tool for a project manager, team member or executive.  In his book Alfonso provides a plan for honing your attitude.

Alfonso Bucero, MSc, PMP, PMI-RMP, PMI Fellow, is the founder and Managing Partner of BUCERO PM Consulting.  He managed IIL Spain for almost two years, and he was a Senior Project Manager at Hewlett-Packard Spain (Madrid Office) for thirteen years.

Since 1994, he has been a frequent speaker at International Project Management (PM) Congresses and Symposiums. Alfonso has delivered PM training and consulting services in Spain, Mexico, UK, Belgium, Germany, France, Denmark, Costa Rica, Brazil, USA, and Singapore. As a believer in Project Management, he teaches Passion, Persistence and Patience as keys to project success.

Alfonso co-authored the book Project Sponsorship with Randall L. Englund, published by Jossey-Bass in 2006. He authored the book Today is a Good Day – Attitudes for achieving project success, published by Multimedia Publishing in Canada in 2010. He has also contributed to professional magazines in Russia (SOVNET), India (ICFAI), Argentina and Spain. Alfonso co-authored The Complete Project Manager and The Complete Project Manager Toolkit with Randall L. Englund, published by Management Concepts in March 2012. Alfonso published The Influential Project Manager in 2014 with CRC Press in the US.

Alfonso has also published several articles in national and international Project Management magazines. He is a Contributing editor of PM Network (Crossing Borders), published by the “Project Management Institute”.

Contact Alfonso: alfonso.bucero@abucero.com
Twitter:
@abucero
Website: http://www.abucero.com/

Call to action!

We are in the middle of a re-read of John Kotter’s classic Leading Change on the Software Process and Measurement Blog.  Are you participating in the re-read? Please feel free to jump in and add your thoughts and comments!

After we finish the current re-read, we will need to decide which book will be next.  We are building a list of the books that have had the most influence on readers of the blog and listeners to the podcast.  Can you answer the question?

What are the two books that have most influenced your career (business, technical or philosophical)?  Send the titles to spamcastinfo@gmail.com.

First, we will compile a list and publish it on the blog.  Second, we will use the list to drive future “Re-read” Saturdays. Re-read Saturday is an exciting new feature that began on the Software Process and Measurement blog on November 8th.  Feel free to choose your platform; send an email, leave a message on the blog or Facebook, or just tweet the list (use hashtag #SPaMCAST)!

Next

In the next Software Process and Measurement Cast we will feature our essay on the requirements for success with Agile.  Senior management, engagement, culture and coaches are components, but not the whole story.

Upcoming Events

DCG Webinars:

Agile Risk Management – It Is Still Important
Date: December 18th, 2014
Time: 11:30am EST
Register Now

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you nor your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management

Agile: how hard can it be?!

Xebia Blog - Sun, 12/14/2014 - 13:48

Yesterday my colleagues and I ran an awesome workshop at the MIT conference in which we built a Rube Goldberg machine using Scrum and Extreme Engineering techniques. As agile coaches one would think that being an Agile team should come naturally to us, but I'd like to share our pitfalls and insights with you, since "we learned a lot" about being an agile team and about what an incredibly powerful model a Rube Goldberg machine is for scaled agile product development.

If you're not the reading type, check out the video.

Rube ... what?

Goldberg. According to Wikipedia, a Rube Goldberg machine is a contraption, invention, device or apparatus that is deliberately over-engineered or overdone to perform a very simple task in a very complicated fashion, usually including a chain reaction. The expression is named after American cartoonist and inventor Rube Goldberg (1883–1970).

In our case we set out on a 6 by 4 meter stage divided into 5 sections. Each section had a theme like rolling, propulsion, swinging, lifting etc. In a fashion it resembled a large software product that has to respond to some event in a (for outsiders) incredibly complex manner, by triggering a chain of sub-systems that ends in some kind of end result.

The workspace, scrum boards and build stuff

Extreme Scrum

During the day 5 teams worked in a total of 10 sprints to create the most incredible machine, experiencing everything one can find during "normal" product development. We had inexperienced team members, little to no documentation, legacy systems whose engineering principles were shrouded in mystery, teams that forgot to hold retrospectives, and interfaces that were ignored because the problem "lies with the other team". The huge time pressure of the relatively short sprints and the complexity of what we were trying to achieve created a pressure cooker that brought these problems to the surface faster than anything else, and with Scrum we were forced to face and fix these problems.

Team scrumboard

Build, fail, improve, build

“Most people do not listen with the intent to understand; they listen with the intent to reply.” - Stephen R. Covey

Having 2 minutes to do your planning makes it very difficult to listen, especially when your head is buzzing with ideas, yet sometimes you have to slow down to speed up. Effective building requires you to really understand what your team mate is going to do; pairing proved a very effective way to slow down your own brain and benefit from both rubber ducking and the insight of your team mate. Once our teams reached 4 members we could pair and drastically improve the outcome.

Deadweight with pneumatic fuse

Once the machine had reached a critical size, integration tests started to fail. The teams responded by testing multiple times during the sprint and fixing the broken build rather than adding new features. Especially in mechanical engineering that is not as simple as it sounds. Sometimes a part of the machine would be "refactored", and we had not designed for a simple end-to-end test that could be applied continuously. It took a couple of sprints to get that right.

A MVP that made it to the final product

"Keep your code clean" we teach teams every day. "Don't accept technical or functional debt, you know it will slow you down in the end". Yet it is so tempting. Despite a Scrum Master and an "Über Scrum Master" we still had a hard time keeping our workspace clean, refactor broken stuff out, optimise and simplify...

Have an awesome goal

"A true big hairy audacious goal is clear and compelling, serves as unifying focal point of effort, and acts as a clear catalyst for team spirit. It has a clear finish line, so the organization can know when it has achieved the goal; people like to shoot for finish lines." - Collins and Porras, Built to Last: Successful Habits of Visionary Companies

Truth is: we got lucky with the venue. Building a machine like this is awesome and inspiring in itself, and learning how Extreme Scrum can help teams to build machines better, faster, more innovatively and with a whole lot more fun is a fantastic goal in itself, but parallel to our build space was a true magnet, something that really focussed the teams and made them go that extra mile.

The ultimate goal of the machine

Biggest take away

Building things is hard, and building under pressure is even harder. Even teams that are aware of the theory will be tempted to throw everything overboard and just start somewhere. Applying Extreme Engineering techniques can truly help you; it's a simple set of rules, but it requires an unparalleled level of discipline. Having a Scrum coach can make all the difference between a successful and a failed project.

Re-read Saturday: Developing a Vision and Strategy, Leading Change, John P. Kotter Chapter Five

A vision provides a goal and direction to travel.

John P. Kotter’s book, Leading Change, established why change in organizations can fail and the forces that shape the changes when they are successful. The two sets of opposing forces he identifies in the first two chapters are used to define and illuminate his famous eight-stage model for change. The first stage of the model is establishing a sense of urgency. A sense of urgency provides the energy and rationale for any large, long-term change program. Once a sense of urgency has been established, the second stage in the eight-stage model for change is the establishment of a guiding coalition. If a sense of urgency provides energy to drive change, a guiding coalition provides the power for making change happen. Once we identify or establish a sense of urgency and the power to make change happen, we then have to wrestle with establishing a vision and strategy. A vision represents a picture of a state of being at some point in the future. A vision acts as an anchor that establishes the goal of the transformation. A strategy defines the high-level path to that future.

Kotter begins the chapter by reviewing changes driven by different leadership styles, including authoritarian, micromanagement and visionary. Change driven by authoritarian decree (do it because I said so) and micromanagement (I will tell you step-by-step how to get from point A to point B and validate compliance with my instructions) often fails to break through the status quo. In fact, demanding change tends to generate resistance and passive-aggressive behavior due to the lack of buy-in from those involved in the change. Couple the lack of buy-in with the incredible level of effort needed to force people to change and then to monitor that change, and scalability problems will surface. Neither authoritarian- nor micromanagement-driven techniques are efficient for responding to dynamic, large-scale changes. Change driven by vision overcomes these issues by providing the direction and the rationale for why the organization should strive together toward the future defined by the vision.

Effective visions are not easy to craft. Visions are important for three reasons. First, an effective vision provides clarity of direction. A clear direction gives everyone making or guiding the change a clearer set of parameters for making decisions. When lean and Agile teams crisply define the goals of a sprint or Agile release train (SAFe), they are using the same technique to break through the clutter and focus the decision-making process on achieving their goal. Secondly, visions are important because they provide hope by describing a feasible outcome. A vision of what is perceived as a feasible outcome provides a belief that the pain of change can be overcome. Finally, a vision provides alignment. Alignment keeps people moving in a common direction.

Kotter defines six characteristics of an effective vision.

  1. Imaginable – The people who consume the vision must be able to paint a rational picture in their mind of what the world will be like if the vision is attained.
  2. Desirable – The vision must appeal to the long-term interests of those being asked to change.
  3. Feasible – The vision has to be attainable.
  4. Focused – The vision must provide enough clarity and alignment to guide organizational decisions.
  5. Flexible – The vision must provide enough direction to guide but not enough to restrict individual initiative.
  6. Communicable – The vision must be consumable and understandable to everyone involved in the change process. Kotter further suggests that if a vision can’t be explained in five minutes it has failed the test of communicable.

In the third stage of the eight-stage model for change, Kotter drills deeply into the rationale for and definition of an effective vision.  Kotter defines strategy as the logic for how the vision will be attained.  An effectively developed vision makes the process of defining the path (strategy) for attaining the vision far less contentious. The attributes of an effective vision, including being imaginable, feasible and focused, provide enough constraints to begin the process of defining how the vision can be achieved.


Categories: Process Management

R: Time to/from the weekend

Mark Needham - Sat, 12/13/2014 - 21:38

In my last post I showed some examples using R’s lubridate package and another problem it made really easy to solve was working out how close a particular date time was to the weekend.

I wanted to write a function which would return the previous Sunday or upcoming Saturday depending on which was closer.

lubridate’s floor_date and ceiling_date functions make this quite simple.

e.g. if we want to round the 18th December down to the beginning of the week and up to the beginning of the next week we could do the following:

> library(lubridate)
> floor_date(ymd("2014-12-18"), "week")
[1] "2014-12-14 UTC"
 
> ceiling_date(ymd("2014-12-18"), "week")
[1] "2014-12-21 UTC"

For the date in the future we actually want to grab the Saturday rather than the Sunday so we’ll subtract one day from that:

> ceiling_date(ymd("2014-12-18"), "week") - days(1)
[1] "2014-12-20 UTC"

Now let’s put that together into a function which finds the closest weekend for a given date:

findClosestWeekendDay = function(dateToLookup) {
  before = floor_date(dateToLookup, "week") + hours(23) + minutes(59) + seconds(59)
  after  = ceiling_date(dateToLookup, "week") - days(1)
  if((dateToLookup - before) < (after - dateToLookup)) {
    before  
  } else {
    after  
  }
}
 
> findClosestWeekendDay(ymd_hms("2014-12-13 13:33:29"))
[1] "2014-12-13 UTC"
 
> findClosestWeekendDay(ymd_hms("2014-12-14 18:33:29"))
[1] "2014-12-14 23:59:59 UTC"
 
> findClosestWeekendDay(ymd_hms("2014-12-15 18:33:29"))
[1] "2014-12-14 23:59:59 UTC"
 
> findClosestWeekendDay(ymd_hms("2014-12-17 11:33:29"))
[1] "2014-12-14 23:59:59 UTC"
 
> findClosestWeekendDay(ymd_hms("2014-12-17 13:33:29"))
[1] "2014-12-20 UTC"
 
> findClosestWeekendDay(ymd_hms("2014-12-19 13:33:29"))
[1] "2014-12-20 UTC"

I’ve set the Sunday date at 23:59:59 so that I can use this date in the next step where we want to calculate how many hours it is from the current date to the nearest weekend.

I ended up with this function:

distanceFromWeekend = function(dateToLookup) {
  # the previous Sunday at 23:59:59 and the upcoming Saturday at 00:00:00
  before = floor_date(dateToLookup, "week") + hours(23) + minutes(59) + seconds(59)
  after  = ceiling_date(dateToLookup, "week") - days(1)
  timeToBefore = dateToLookup - before
  timeToAfter = after - dateToLookup
 
  # a negative difference means the date is already inside the weekend
  if(timeToBefore < 0 || timeToAfter < 0) {
    0
  } else {
    # otherwise return the smaller of the two distances, converted to hours
    if(timeToBefore < timeToAfter) {
      timeToBefore / dhours(1)
    } else {
      timeToAfter / dhours(1)
    }
  }
}
 
> distanceFromWeekend(ymd_hms("2014-12-13 13:33:29"))
[1] 0
 
> distanceFromWeekend(ymd_hms("2014-12-14 18:33:29"))
[1] 0
 
> distanceFromWeekend(ymd_hms("2014-12-15 18:33:29"))
[1] 18.55833
 
> distanceFromWeekend(ymd_hms("2014-12-17 11:33:29"))
[1] 59.55833
 
> distanceFromWeekend(ymd_hms("2014-12-17 13:33:29"))
[1] 58.44194
 
> distanceFromWeekend(ymd_hms("2014-12-19 13:33:29"))
[1] 10.44194

While this works, it’s quite slow when you run it over a data frame which contains a lot of rows.

There must be a clever R way of doing the same thing (perhaps using matrices) which I haven’t figured out yet, so if you know how to speed it up do let me know.
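
One approach that might help, sketched below rather than tested on a big data frame, is to vectorise the calculation so it can be applied to a whole column in a single call instead of row by row; the function name distanceFromWeekendVectorised and the use of ifelse/pmin are my own choices for illustration:

distanceFromWeekendVectorised = function(dates) {
  # same weekend boundaries as before, computed for every element of the vector
  before = floor_date(dates, "week") + hours(23) + minutes(59) + seconds(59)
  after  = ceiling_date(dates, "week") - days(1)
  # convert both distances to hours up front so they compare element-wise
  timeToBefore = as.numeric(dates - before, units = "hours")
  timeToAfter  = as.numeric(after - dates, units = "hours")
  # inside the weekend -> 0, otherwise the smaller of the two distances
  ifelse(timeToBefore < 0 | timeToAfter < 0, 0, pmin(timeToBefore, timeToAfter))
}

Applied to the same date times as above it should give the same values, but because everything operates on vectors it can be handed an entire data frame column at once.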

Categories: Programming

R: Numeric representation of date time

Mark Needham - Sat, 12/13/2014 - 20:58

I’ve been playing around with date times in R recently and I wanted to derive a numeric representation for a given value to make it easier to see the correlation between time and another variable.

e.g. December 13th 2014 17:30 should return 17.5 since it’s 17.5 hours since midnight.

Using the standard R libraries we would write the following code:

> december13 = as.POSIXlt("2014-12-13 17:30:00")
> as.numeric(december13 - trunc(december13, "day"), units="hours")
[1] 17.5

That works pretty well, but Antonios recently introduced me to the lubridate package so I thought I’d give that a try as well.

The first nice thing about lubridate is that we can use the date we created earlier and call the floor_date function rather than truncate:

> (december13 - floor_date(december13, "day"))
Time difference of 17.5 hours

That gives us back a difftime…

> class((december13 - floor_date(december13, "day")))
[1] "difftime"

…which we can divide by different units to get the granularity we want:

> diff = (december13 - floor_date(december13, "day"))
> diff / dhours(1)
[1] 17.5
 
> diff / ddays(1)
[1] 0.7291667
 
> diff / dminutes(1)
[1] 1050

Pretty neat!
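
If you want to do this for a whole column of date times, the same steps can be wrapped in a small helper. This is just one way to do it; the name timeOfDayInHours is mine, not something from lubridate:

timeOfDayInHours = function(dateTimes) {
  # hours elapsed since midnight for each date time
  (dateTimes - floor_date(dateTimes, "day")) / dhours(1)
}
 
> timeOfDayInHours(ymd_hms("2014-12-13 17:30:00"))
[1] 17.5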

lubridate also has some nice functions for creating dates/date times. e.g.

> ymd_hms("2014-12-13 17:00:00")
[1] "2014-12-13 17:00:00 UTC"
 
> ymd_hm("2014-12-13 17:00")
[1] "2014-12-13 17:00:00 UTC"
 
> ymd_h("2014-12-13 17")
[1] "2014-12-13 17:00:00 UTC"
 
> ymd("2014-12-13")
[1] "2014-12-13 UTC"

And if you want a different time zone that’s pretty easy too:

> with_tz(ymd("2014-12-13"), "GMT")
[1] "2014-12-13 GMT"
Categories: Programming

Agile Metrics: The Relationship Between Measurement Framework Quadrants

“I never knew anybody . . . who found life simple. I think a life or a time looks simple when you leave out the details.” – Ursula K. Le Guin, The Birthday of the World and Other Stories

The act of measurement reflects how work was done, how it is being done, and what is possible in the future. A measurement framework that supports all of these goals is going to have to reflect some of the details and complexity found in the development (broad sense) environment. The simple Agile measurement framework uses the relationships between the areas of productivity, quality, predictability and value to account for and reflect real-world complexity and to help generate some balance. Each quadrant of the model interacts with the others to a greater or lesser extent. The following matrix maps the nuances between the quadrants.

Impact Matrix

The labor productivity quadrant most directly influences the value quadrant. Lower productivity (output per unit of effort) equates to higher costs and less value that can be delivered. Pressure to increase productivity and lower cost can cause higher levels of technical debt, and therefore lower levels of quality. Erratic levels of productivity translate into time-to-market variability.

Predictability, typically expressed as velocity or time-to-market, most directly interacts with quality at two levels. The first is in terms of customer satisfaction. Delivering functionality at a rate or date that is at odds with what is anticipated will typically have a negative impact on customer satisfaction (quality). Crashing the schedule to meet a date (and be perceived as predictable) will generally cause the team to cut corners, which yields technical debt and higher levels of defects. Lower quality is generally thought to reduce the perceived value of the functionality delivered.

Quality, measured as technical debt or delivered defects, has direct links to predictability (noted earlier) and value. The linkage from quality to value is direct. Software (or any other deliverable) that has lower quality than anticipated will be held in lower regard and be perceived as being less useful. We have noted a moderate relationship between labor productivity and quality through technical debt. This relationship can also be seen through the mechanism of fixing defects. Every hour spent fixing defects is an hour that would normally be spent developing or enhancing functionality.

Value, measured as business value or return on investment, is very strongly related to productivity and quality (as noted earlier).

Based on these relationships we can see that a focus on a single area of the model could cause a negative impact on performance in a different quadrant. For example, a single-minded focus on efficiency can lead to reduced quality and, even more strongly, less value delivered to stakeholders. The model would suggest the need to measure and set performance-level agreements for value if labor productivity is going to be stressed.

The simple Agile measurement framework provides a means to understand the relationships between the four macro categories of measurement that have been organized into quadrants. Knowledge of those relationships can help an organization or team structure how they measure, to ensure the approach taken is balanced.


Categories: Process Management

Stuff The Internet Says On Scalability For December 12th, 2014

Hey, it's HighScalability time:


We've had a wee bit of a storm in the bay area.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Extreme Engineering - Building a Rube Goldberg machine with scrum

Xebia Blog - Fri, 12/12/2014 - 15:16

Is agile usable for things other than software development? Well, we already knew that: yes!
But creating a machine in one day, with five teams and continuously changing members, using scrum might be exciting!

See our report below (it's in Dutch for now)

 

Extreme engineering video