Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Use Cases 101: Let’s Take an Uber

Software Requirements Blog - Seilevel.com - Thu, 12/18/2014 - 17:30
I was recently asked to prepare a handout giving the basics of use cases for an upcoming training session. It struck me as odd that I needed to start from square one for a model that seemed standard. Use cases, once ubiquitous, have largely been replaced by process flows and other less text-heavy models. As […]
Categories: Requirements

Agility and the essence of software architecture

Coding the Architecture - Simon Brown - Thu, 12/18/2014 - 16:44

I'm just back from the YOW! conference tour in Australia (which was amazing!) and I presented this as the closing slide for my Agility and the essence of software architecture talk, which was about how to create agile software systems in an agile way.

Agility and the essence of software architecture

You will have probably noticed that software architecture sketches/diagrams form a central part of my lightweight approach to software architecture, and I thought this slide was a nice way to summarise the various things that diagrams and the C4 model enable, plus how this helps to do just enough up front design. The slides are available to view online/download and hopefully one of the videos will be available to watch after the holiday season.

Categories: Architecture

Manage Your Project Portfolio is Featured in Colombia’s Largest Business Newspaper

Andy Hunt, the Pragmatic Bookshelf publisher, just sent me an email telling me that Manage Your Project Portfolio is featured in La República, Colombia's "first and most important business newspaper." That's because getAbstract liked it!

Okay, my book is below the fold. It's in smaller print.

And, I have to say, I’m still pretty excited.

If your organization can’t decide which projects come first, second, third, whatever, or which features come first, second, third, whatever, you should get Manage Your Project Portfolio.

Categories: Project Management

Learning Hacks

Making the Complex Simple - John Sonmez - Thu, 12/18/2014 - 16:00

In this video, I talk about some of the hacks I use to get in extra learning time in my busy day. One of the main hacks I use is to listen to audio books whenever I am in the car or working out. I also read a book while walking on the treadmill every night.

The post Learning Hacks appeared first on Simple Programmer.

Categories: Programming

The Myth and Half-Truths of "Myths and Half-Truths"

Herding Cats - Glen Alleman - Thu, 12/18/2014 - 05:51

I love it when a post about Myths and Half-Truths is itself full of myths and half-truths. Original text is in italics, with the myth busted or half-truth turned into actual truth below each.

Myth: If your estimates are a range of dates, you are doing a good job managing expectations.

  • Only the earlier, lower, more magical numbers will be remembered. And those will be accepted as firm commitments.

If this is the case, the communication process is broken. It's not the fault of the estimate or the estimator that Dilbert style management is in place. If this condition persists, better look for work somewhere else, because there are much bigger systemic problems there.

  • The lower bound is usually set at "the earliest date the project can possibly be completed". In other words, there is absolutely no way the work can be completed any earlier, even by a day. What are the chances of hitting that exact date? Practice shows - close to nil. 

The lower bound, the most likely value, and the upper bound must each have a confidence level. An early date with 10% confidence says to the reader: there is little real hope of making it. But the early date alone gets stuck in the mind through confirmation bias.

Never ever communicate a date, a cost, a capability in the absence of the confidence level for that number.
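
To make this concrete, here is a minimal sketch - my addition, with invented three-point estimates - of one way to produce those confidence levels: a Monte Carlo pass over task durations, reported as P10/P50/P90 rather than a single date (Python):

import random

# (best, most likely, worst) durations in days for three hypothetical tasks
tasks = [(5, 8, 15), (3, 4, 9), (10, 14, 25)]

def simulate_once():
    # the triangular distribution is a common stand-in for task duration uncertainty
    return sum(random.triangular(low, high, mode) for low, mode, high in tasks)

runs = sorted(simulate_once() for _ in range(10000))
for pct in (10, 50, 90):
    print("P%d: %.1f days" % (pct, runs[len(runs) * pct // 100]))

Reporting all three percentiles tells the reader exactly how much confidence the early date deserves; reporting the P10 date alone is the confirmation bait described above.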

Making the estimate and managing expectations are two different activities. Poor estimates make it difficult to manage expectations, because those receiving the estimates lose confidence in the estimator when the estimates have no connection with the actual outcomes of the project.

Half-truth: You can control the quality of your estimate by putting more work into producing this estimate.

It's not the work that is needed, it's the proper work. Never confuse effort with results. Reference class forecasting, parametric models, and similar estimating tools - along with knowledge of the underlying uncertainties of the processes, technology, and people connected with delivering the outcomes - are what improve the quality of an estimate.

  • By spending some time learning about the project, researching resources available, considering major and minor risks, one can produce a somewhat better estimate.
  • The above activities are only going to take the quality of the estimate so far.  Past a certain point, no matter how much effort goes into estimating a project, the quality of the estimate is not going to improve. Then the best bet is to simply start working on the project.

Of course this ignores the very notion of subject matter expertise. Why are you asking someone who only knows one thing - a one-trick pony - to work on your new project? That is naive hiring and management.

This would be like hiring a commercial builder with little or no understanding of modern energy-efficient building, solar power and heating, thermal windows, and high-efficiency HVAC systems and asking him to build a LEED-compliant office building.

Why would you do that? Why would you hire software developers who had little understanding of where technology is going? Don't they read, attend conferences, and look at the literature to see what's coming up?

Myth: People can learn to estimate better, as they gain experience.

People can learn to estimate as they gain skill, knowledge, and experience. All three are needed. Experience alone is necessary but far from sufficient. Experience doing it wrong doesn't lead to improvement; it only confirms that bad estimates are the outcome. There is a nearly endless supply of books, papers, and articles on how to properly estimate. Read, take a course, talk to experts, listen, and you'll be able to determine where you are going wrong. Then your experience will be of value, beyond confirming you know how to do it wrong.

  • It is possible to get better at estimating – if one keeps estimating the same task, which becomes known and familiar with experience. This is hardly ever the case in software development. Every project is different, most teams are brand new, and technology is moving along fast enough.

That's not the way to improve anything in the software development world. If you wrote code for a single function over and over again, you'd be a one-trick pony.

The notion that projects are always new is better stated as projects are new to me. Fix that by finding someone who knows what the new problem is about and hiring them to help you. Like the builder above, don't embark on a project where you don't have some knowledge of what to do, how to do it, what problems will be encountered, and what their solutions are. That's a great way to waste your customer's money and join a Death March project.

    • Do not expect to produce a better estimate for your next project than you did for your last one.

Did you not keep a record of what you did last time? Were you paying attention to what happened? No wonder your outcomes are the same.

    • By the same token, do not expect a worse estimate. The quality of the estimate is going to be low, and it is going to be random.

As Inigo Montoya says: You keep using that word. I do not think it means what you think it means. All estimates are random variables. These random variables come from an underlying statistical process - a reference class - of uncertainty about our work. Some of that uncertainty is reducible, some is irreducible. Making decisions in the presence of this uncertainty is called risk management. And as Tim Lister says, risk management is how adults manage projects.

Half-truth: it is possible to control the schedule by giving up quality.

There are trade-offs between cost, schedule, and performance - quality is a performance measure in our software-intensive systems domain, along with many other ...ilities. So certainly the schedule can be controlled by trading off quality.

  • Only for short-term, throw-away temporary projects.

This is not a myth, it's all too common. I'm currently the Director of Program Governance at a place where this decision was made long ago, and we're still paying the price for it.

  • For most projects, aiming for lower quality has a negative effect on the schedule.

 

Related articles:
  • How to Think Like a Rocket Scientist - Irreducible Complexity
  • Complex Project Management
  • Risk Management is Project Management for Adults
Categories: Project Management

Training Strategies for Transformations and Change: Active Learning

Group discussion is an active learning technique.

Change is often premised on people learning new methods, frameworks or techniques. For any change to be implemented effectively, change agents need to understand the most effective way of helping learners learn. Active learning is a theory of teaching based on the belief that learners should take responsibility for their own learning. Techniques that support this type of teaching exist on a continuum that begins with merely fostering active listening, moves through interactive lectures, and ends with investigative inquiry techniques. Learning using games is a form of active learning (see www.tastycupcake.org for examples of Agile games). Using active learning requires understanding the four basic elements of active learning, participants' responsibilities and keys to success.

There are four basic elements of active learning that need to be worked into content delivery.

  1. Talking and listening – The act of talking about a topic helps learners organize, synthesize and reinforce what they have learned.
  2. Writing – Writing provides a mechanism for students to process information (similar to talking and listening). Writing can be used when groups are too large for group- or team-level interaction or are geographically distributed.
  3. Reading – Reading provides the entry point for new ideas and concepts. Coupling reading with other techniques such as writing (e.g. generating notes and summaries) improves learners' ability to synthesize and incorporate new concepts.
  4. Reflecting – Reflection provides learners with time to synthesize what they have learned. For example, providing learners with time to reflect on how they would teach or answer questions on the knowledge gained in a game or exercise helps increase retention.

Both learners and teachers have responsibilities when using active learning methods. Learners have the responsibility to:

  1. Be motivated – The learner needs to have a goal for learning and be willing to expend the effort to reach that goal.
  2. Participate in the community – The learner needs to engage with other learners in games, exercises and discussions.
  3. Be able to accept, shape and manage change – Learning is change; the learner must be able to incorporate what they have learned into how they work.

While, by definition, active learning shifts the responsibility for learning to the learner, not all of the responsibility rests on the learner. Teachers and organizations have the responsibility to:

  1. Set goals – The teacher or organization needs to define or identify the desired result of the training.
  2. Design curriculum – The trainer (or curriculum designer) needs to ensure they have designed the courseware needed to guide the learner’s transformations.
  3. Provide facilitation – The trainer needs to provide encouragement and help make the learning process easier.

As a trainer in an organization pursuing a transformation, there are several keys to successfully using active learning.

  1. Use creative events (games or other exercises) that generate engagement.
  2. Incorporate active learning in a planned manner.
  3. Make sure the class understands the process being used and how it will benefit them.
  4. In longer classes, vary pair, team or group membership to help expose learners to as diverse a set of points-of-view as possible.
  5. All exercises should conclude with a readout/presentation of results to the class. Varying the approach taken during the readout (have different people present, ask different questions) helps focus learner attention.
  6. Negotiate a signal for students to stop talking. (Best method: The hand raise, where when the teacher raises his or her hand everyone else raises their hand and stops talking.)

While randomly adding a discussion exercise at the end of a lecture module technically uses an active learning technique, it does not reflect an effective approach to active learning. When building a class or curriculum that intends to use active learning, the games and exercises selected need to be carefully chosen to elicit the desired learning impact.


Categories: Process Management

Dynamic DNS updates with nsupdate (new and improved!)

Agile Testing - Grig Gheorghiu - Wed, 12/17/2014 - 23:14
I blogged about this topic before. This post shows a slightly different way of using nsupdate remotely against a DNS server running BIND 9 in order to programmatically update DNS records. The scenario I am describing here involves an Ubuntu 12.04 DNS server running BIND 9 and an Ubuntu 12.04 client running nsupdate against the DNS server.

1) Run ddns-confgen and specify /dev/urandom as the source of randomness and the name of the zone file you want to dynamically update via nsupdate:

$ ddns-confgen -r /dev/urandom -z myzone.com

# To activate this key, place the following in named.conf, and
# in a separate keyfile on the system or systems from which nsupdate
# will be run:
key "ddns-key.myzone.com" {
algorithm hmac-sha256;
secret "1D1niZqRvT8pNDgyrJcuCiykOQCHUL33k8ZYzmQYe/0=";
};

# Then, in the "zone" definition statement for "myzone.com",
# place an "update-policy" statement like this one, adjusted as
# needed for your preferred permissions:
update-policy {
 grant ddns-key.myzone.com zonesub ANY;
};

# After the keyfile has been placed, the following command will
# execute nsupdate using this key:
nsupdate -k <keyfile>

2) Follow the instructions in the output of ddns-confgen (above). I actually named the key just ddns-key, since I was going to use it for all the zones on my DNS server. So I added this stanza to /etc/bind/named.conf on the DNS server:

key "ddns-key" {
algorithm hmac-sha256;
secret "1D1niZqRvT8pNDgyrJcuCiykOQCHUL33k8ZYzmQYe/0=";
};

3) Allow updates when the key ddns-key is used. In my case, I added the allow-update line below to all zones that I wanted to dynamically update, not only to myzone.com:

zone "myzone.com" {
        type master;
        file "/etc/bind/zones/myzone.com.db";
allow-update { key "ddns-key"; };
};

At this point I also restarted the bind9 service on my DNS server.

4) On the client box, create a text file containing nsupdate commands to be sent to the DNS server. In the example below, I want to dynamically add both an A record and a reverse DNS PTR record:

$ cat update_dns1.txt
server dns1.mycompany.com
debug yes
zone myzone.com
update add testnsupdate1.myzone.com 3600 A 10.10.2.221
show
send
zone 2.10.10.in-addr.arpa
update add 221.2.10.10.in-addr.arpa 3600 PTR testnsupdate1.myzone.com
show
send

Still on the client box, create a file containing the stanza with the DDNS key generated in step 1:

$ cat ddns-key.txt
key "ddns-key" {
algorithm hmac-sha256;
secret "Wxp1uJv3SHT+R9rx96o6342KKNnjW8hjJTyxK2HYufg=";
};

5) Run nsupdate and feed it both the update_dns1.txt file containing the commands, and the ddns-key.txt file:

$ nsupdate -k ddns-key.txt -v update_dns1.txt

You should see some fairly verbose output, since the command file specifies 'debug yes'. At the same time, tail /var/log/syslog on the DNS server and make sure there are no errors.
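
As an additional client-side check, you can query the new records directly against the DNS server. Here is a minimal sketch using the dnspython package (my addition; the server IP is a placeholder):

import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["10.10.2.1"]  # placeholder IP for dns1.mycompany.com

# forward lookup: should return the A record added above
answer = resolver.query("testnsupdate1.myzone.com", "A")
print([rdata.address for rdata in answer])    # expect ['10.10.2.221']

# reverse lookup: should return the PTR record added above
ptr = resolver.query("221.2.10.10.in-addr.arpa", "PTR")
print([str(rdata.target) for rdata in ptr])   # expect ['testnsupdate1.myzone.com.']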

In my case, there were some hurdles I had to overcome on the DNS server. The first one was that apparmor was installed and it wasn't allowing the creation of the journal files used to keep track of DDNS records. I saw lines like these in /var/log/syslog:

Dec 16 11:22:59 dns1 kernel: [49671335.189689] type=1400 audit(1418757779.712:12): apparmor="DENIED" operation="mknod" parent=1 profile="/usr/sbin/named" name="/etc/bind/zones/myzone.com.db.jnl" pid=31154 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107
Dec 16 11:22:59 dns1 kernel: [49671335.306304] type=1400 audit(1418757779.828:13): apparmor="DENIED" operation="mknod" parent=1 profile="/usr/sbin/named" name="/etc/bind/zones/rev.2.10.10.in-addr.arpa.jnl" pid=31153 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107

To get past this issue, I disabled apparmor for named:

# ln -s /etc/apparmor.d/usr.sbin.named /etc/apparmor.d/disable/
# service apparmor restart

The next issue was an OS permission denied (nothing to do with apparmor) when trying to create the journal files in /etc/bind/zones:

Dec 16 11:30:54 dns1 named[32640]: /etc/bind/zones/myzone.com.db.jnl: create: permission denied
Dec 16 11:30:54 dns named[32640]: /etc/bind/zones/rev.2.0.10.in-addr.arpa.jnl: create: permission denied

I got past this issue by running

# chown -R bind:bind /etc/bind/zones

At this point everything worked as expected.


The Big Problem is Medium Data

This is a guest post by Matt Hunt, who leads open source projects for Bloomberg LP R&D. 

“Big Data” systems continue to attract substantial funding, attention, and excitement. As with many new technologies, they are neither a panacea, nor even a good fit for many common uses. Yet they also hold great promise. The question is, can systems originally designed to serve hundreds of millions of requests for something like web pages also work for requests that are computationally expensive and have tight tolerances?

Modern era big data technologies are a solution to an economics problem faced by Google and other Internet giants a decade ago. Storing, indexing, and responding to searches against all web pages required tremendous amounts of disk space and computer power. Very powerful machines, fast SAN storage, and data center space were prohibitively expensive. The solution was to pack cheap commodity machines as tightly together as possible with local disks.

This addressed the space and hardware cost problem, but introduced a software challenge. Writing distributed code is hard, and with many machines comes many failures. So a framework was also required to take care of such problems automatically for the system to be viable.

Hadoop

Right now, we’re in a transition phase in the industry in computing built from the entrance of Hadoop and its community starting in 2004. Understanding why and how these systems were created also offers insight into some of their weaknesses.  

At Bloomberg, we don't have a big data problem. What we have is a "medium data" problem -- and so does everyone else. Systems such as Hadoop and Spark are generally less efficient and less mature for these typical low-latency enterprise uses. High core counts, SSDs, and large RAM footprints are common today - but many of the commodity platforms have yet to take full advantage of them, and challenges remain. A number of distributed components are further hampered by Java, which creates its own complications for low-latency performance.

A practical use case
Categories: Architecture

Training Strategies for Transformations and Change

Presentations are just one learning strategy.

How many times have you sat in a room, crowded around tables, perhaps taking notes on your laptop or maybe doing email while someone drones away about the newest process change? All significant organizational transformations require the learning and adoption of new techniques, concepts and methods. Agile transformations are no different. For example, a transformation from waterfall to Scrum will require developing an understanding of Agile concepts and Scrum techniques. Four types of high-level training strategies are often used to support process improvement, such as Agile transformations. They are:

  1. Classic Lecture/Presentation – A presenter stands in front of the class and presents information to the learners. In most organizations the classic classroom format is used in conjunction with a PowerPoint deck, which provides counterpoint and support for the presenter. The learner's role is to take notes from the lecture, from interactions between the class and presenter, and from the presentation material, and then to synthesize what they have heard. Nearly everyone in an IT department is familiar with this type of training from attending college or university. An example in my past was Psychology 101 at the University of Iowa with 500+ of my closest friends. I remember the class because it was at 8 AM and because of the people sleeping in the back row. While I do not remember anything about the material many years later, this technique is often very useful in broadly introducing concepts. This method is often hybridized with other strategies to more deeply implement techniques and methods.
  2. Active Learning – A strategy based on the belief that learners should take responsibility for their own learning. Learners are provided with a set of activities that keep them busy doing what is to be learned while analyzing, synthesizing and evaluating. Learners must do more than just listen and take notes. The teacher acts as a facilitator, helping ensure the learning activities include techniques such as Agile games, working in pairs/groups, role-playing with discussion of the material and other co-operative learning techniques. Active learning and lecture techniques are often combined.
  3. Experiential Learning – Experiential learning is learning from experience. The learner will perform tasks, reflect on performance and possibly suffer the positive or negative consequences of making mistakes or being successful. The theory behind experiential learning is that for learning to be internalized, the experience needs to be engaged concretely and actively rather than theoretically or purely reflectively. Process improvement based on real-time experimentation in the Toyota Production System is a type of experiential learning.
  4. Mentoring – Mentoring is a process that uses one-on-one coaching and leadership to help a learner do concrete tasks with some form of support. Mentoring is a form of experience-based learning, most attributable to the experiential learning camp. Because mentoring is generally a one-on-one technique, it is not scalable for large-scale change programs.

The majority of change agents have not been educated in adult learning techniques and lean on classic presentation/lecture techniques spiced with exercises. However, each of these high-level strategies has value and can be leveraged to help build the capacity and capabilities for change.


Categories: Process Management

Multithreaded Programming has Really Gone to the Dogs

Taken from Multithreaded programming - theory and practice on reddit, which also has some very funny comments. If anything this is way too organized. 

 What's not shown? All the little messes that have to be cleaned up after...

Categories: Architecture

The Machine: HP's New Memristor Based Datacenter Scale Computer - Still Changing Everything

The end of Moore’s law is the best thing that’s happened to computing in the last 50 years. Moore’s law has been a tyranny of comfort. You were assured your chips would see a constant improvement. Everyone knew what was coming and when it was coming. The entire semiconductor industry was held captive to delivering on Moore’s law. There was no new invention allowed in the entire process. Just plod along on the treadmill and do what was expected. We are finally breaking free of these shackles and entering what is the most exciting age of computing that we’ve seen since the late 1940s. Finally we are in a stage where people can invent and those new things will be tried out and worked on and find their way into the market. We’re finally going to do things differently and smarter.

-- Stanley Williams (paraphrased)

HP has been working on a radically new type of computer, enigmatically called The Machine (not this machine). The Machine is perhaps the largest R&D project in the history of HP. It’s a complete rebuild of both hardware and software from the ground up. A massive effort. HP hopes to have a small version of their datacenter scale product up and running in two years.

The story began when we first met HP’s Stanley Williams about four years ago in How Will Memristors Change Everything? In the latest chapter of the memristor story, Mr. Williams gives another incredible talk: The Machine: The HP Memristor Solution for Computing Big Data, revealing more about how The Machine works.

The goal of The Machine is to collapse the memory/storage hierarchy. Computation today is energy inefficient. Eighty percent of the energy and vast amounts of time are spent moving bits between hard disks, memory, processors, and multiple layers of cache. Customers end up spending more money on power bills than on the machines themselves. So the machine has no hard disks, DRAM, or flash. Data is held in power efficient memristors, an ion based nonvolatile memory, and data is moved over a photonic network, another very power efficient technology. When a bit of information leaves a core it leaves as a pulse of light.

On graph processing benchmarks The Machine reportedly performs 2-3 orders of magnitude better based on energy efficiency and one order of magnitude better based on time. There are no details on these benchmarks, but that’s the gist of it.

The Machine puts data first. The concept is to build a system around nonvolatile memory with processors sprinkled liberally throughout the memory. When you want to run a program you send the program to a processor near the memory, do the computation locally, and send the results back. Computation uses a wide range of heterogeneous multicore processors. By only transmitting the bits required for the program and the results the savings is enormous when compared to moving terabytes or petabytes of data around.

The Machine is not targeted at standard HPC workloads. It's not a LINPACK buster. The problem HP is trying to solve for their customers is where a customer wants to perform a query and figure out the answer by searching through a gigantic pile of data: problems that need to store lots of data and analyze it in real time as new data comes in.

Why is a very different architecture needed for building a computer? Computer systems can't keep up with the flood of data that's coming in. HP is hearing from their customers that they need the ability to handle ever greater amounts of data. The amount of bits being collected is growing exponentially faster than the rate at which transistors are being manufactured. It's also the case that information collection is growing faster than the rate at which hard disks are being manufactured. HP estimates there are 250 trillion DVDs worth of data that people really want to do something with. Vast amounts of data being collected in the world are never even looked at.

So something new is needed. That’s at least the bet HP is making. While it’s easy to get excited about the technology HP is developing, it won’t be for you and me, at least until the end of the decade. These will not be commercial products for quite a while. HP intends to use them for their own enterprise products, internally consuming everything that’s made. The idea is we are still very early in the tech cycle, so high cost systems are built first, then as volumes grow and processes improve, the technology will be ready for commercial deployment. Eventually costs will come down enough that smaller form factors can be sold.

What is interesting is HP is essentially building its own cloud infrastructure, but instead of leveraging commodity hardware and software, they are building their own best of breed custom hardware and software. A cloud typically makes available vast pools of memory, disk, and CPU, organized around instance types which are connected by fast networks. Recently there’s a move to treat these resource pools as independent of the underlying instances. So we are seeing high level scheduling software like Kubernetes and Mesos becoming bigger forces in the industry. HP has to build all this software themselves, solving many of the same problems, along with the opportunities provided by specialized chips. You can imagine programmers programming very specialized applications to eke out every ounce of performance from The Machine, but what is more likely is HP will have to create a very sophisticated scheduling system to optimize how programs run on top of The Machine. What's next in software is the evolution of a kind of Holographic Application Architecture, where function is fluid in both time and space, and identity arises at run-time from a two-dimensional structure. Schedule optimization is the next frontier being explored on the cloud.

The talk is organized in two broad sections: hardware and software. Two-thirds of the project is software, but Mr. Williams is a hardware guy, so hardware makes up the majority of the talk.  The hardware section is based around the idea of optimizing the various functions around the physics that is available: electrons compute; ions store; photons communicate.

Here is my gloss on Mr. Williams' talk. As usual with such a complex subject, much can be missed. Also, Mr. Williams tosses huge interesting ideas around like pancakes, so viewing the talk is highly recommended. But until then, let's see The Machine HP thinks will be the future of computing….

Categories: Architecture

On Communicating with Resistant Stakeholders: Process Flow Storyboards

Software Requirements Blog - Seilevel.com - Tue, 12/16/2014 - 17:00
Determining how best to communicate requirements to stakeholders on your project can be difficult if you have a challenging audience who has resistance, for whatever reason, towards the project you’re working on. I was working on a project several months ago in which one of the major stakeholder groups felt that the work we were […]
Categories: Requirements

5 Reasons Product Owners Should Let Teams Work Out of Order

Mike Cohn's Blog - Tue, 12/16/2014 - 15:00

The following was originally published in Mike Cohn's monthly newsletter. If you like what you're reading, sign up to have this content delivered to your inbox weeks before it's posted on the blog, here.

A product owner hands 10 story cards to the team. The team reads them and hands the fifth and sixth cards back to the product owner. By the end of the sprint, the team delivers the functionality described on cards 1, 2, 3, 4, and 7. But the team has not touched the work of cards 5 and 6.

And I say this is OK.

Standard agile advice is that a team should work on product backlog items in the order prescribed by the product owner. And although this is somewhat reasonable advice, I want to argue that good agile teams violate that guideline all the time.

There are many good reasons for a team to work out of order. Let’s consider just a few, and consider it sufficient evidence that teams should be allowed to work out of order.

1. Synergies

There are often synergies between items near the top of a product backlog—while a team is working on item No. 3, they should be allowed to work on No. 6. If two items are in the same part of the system and can be done faster together than separately, this is usually a good tradeoff for a product owner to allow.

2. Dependencies

A team may agree that No. 4 is more important than No. 5 and No. 6 on the product backlog. Unfortunately, No. 4 can’t be done until No. 7 has been implemented. Finding a dependency like this is usually enough to justify a team working a bit out of the product owner’s prioritized sequence.

3. Skillset Availability

A team might love to work on the product owner’s fourth top priority, but the right person for the job is unavailable. Sure, this can be a sign that additional cross-training is needed on that team to address this problem – but, that is often more of a long-term solution. And, in the short term, the right thing to do might simply be to work a bit out of order on something that doesn’t require the in-demand skillset.

4. It’s More Exciting

OK, this one may stir up some controversy. I’m not saying the team can do 1, 2, 3, 4, and then number 600. But, in my example of 1, 2, 3, 4 and 7, choosing to work on something because it’s more exciting to the team is OK.

On some projects, teams occasionally hit a streak of product backlog items that are, shall we say, less than exciting. Letting the team slide slightly ahead sometimes just to have some variety in what they’re doing can be good for the morale of the team. And that will be good for product owners.

Bonus Reason 4a: It’s More Exciting to Stakeholders

While on the subject of things being more exciting, I’m going to say it is also acceptable for a team to work out of order if the item will be more exciting to stakeholders.

It can sometimes be a challenge to get the right people to attend sprint reviews. It gets especially tough after a team runs a few boring ones in a row. Sometimes this happens because of the nature of the high-priority work—it’s stuff that isn’t really visible or is esoteric perhaps to stakeholders.

In those cases, it can be wise to add a bit of sex appeal to the next sprint review by making sure the team works on something stakeholders will find interesting and worth attending the meeting for.

5. Size

In case the first four (plus) items haven’t convinced you, I’ve saved for last the item that proves every team occasionally works out of product owner order: A team may skip item 5 on the product backlog because it’s too big. So they grab the next one that fits.

If a team were not to do this, they would grab items 1, 2, 3, and 4 and stop, perhaps leaving a significant portion of the sprint unfilled. Of course, the team could look for a way to perhaps do a portion of item 5 before jumping down to grab number 7. But sometimes that won’t be practical, which means the team will at least occasionally work out of order.
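
In code, this last reason is just a greedy pass down the backlog. A minimal sketch - mine, not Mike Cohn's, with invented story points - that reproduces the opening example (Python):

def select_for_sprint(backlog, capacity):
    # backlog: list of (name, points) in product-owner priority order
    selected, remaining = [], capacity
    for name, points in backlog:
        if points <= remaining:   # take the item only if it still fits
            selected.append(name)
            remaining -= points
    return selected

backlog = [("card1", 3), ("card2", 5), ("card3", 2), ("card4", 3),
           ("card5", 8), ("card6", 8), ("card7", 4)]
print(select_for_sprint(backlog, capacity=17))
# ['card1', 'card2', 'card3', 'card4', 'card7'] - cards 5 and 6 skipped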

Product Owners Aren’t Perfect

A perfect product owner would know everything above. The perfect product owner would know that the team's DBA will be fully occupied with tasks from the first four product backlog items, and so wouldn't put a database-intensive task fifth. The perfect product owner would know that items 2 and 5 both affect the same Java class, and would naturally prioritize them together.

But most product owners have a hard time being perfect. A better solution is for them to put the product backlog in a pretty good prioritized order, and then leave room for fine tuning from the team.

What do you do when inertia wins?

Step 3 is to take smaller bites!

Changing how any organization works is not easy. Many different moving parts have to come together for a change to take root and build up enough inertia to pass the tipping point. Unfortunately, because of misalignment, misunderstanding or poor execution, change programs don't always win the day. This is not news to most of us in the business. What should happen after a process improvement program fails? What happens when the wrong kind of inertia wins?

Step One:  All failures must be understood.

First, perform a critical review of the failed program that focuses on why and how it failed.  The word critical is important.  Nothing should be sugar coated or “spun” to protect people’s feelings.  A critical review must also have a good dose of independence from those directly involved in the implementation.  Independence is required so that the biases and decisions that led to the original program can be scrutinized.  The goal is not to pillory those involved, but rather to make sure the same mistakes are not repeated.  These reviews are known by many names: postmortems, retrospectives or troubled project reviews, to name a few.

Step two:  Determine which way the organization is moving.

Inertia describes why an object in motion tends to stay in motion and an object at rest tends to stay at rest. Energy is required to change the state of any object or organization; understanding the direction of the organization is critical to planning any change. In process improvement programs we call the application of energy change management. A change management program might include awareness building, training, mentoring or a myriad of other events, all designed to inject energy into the system. The goal of that energy is either to amplify or change the performance of some group within an organization. When not enough or too much energy is applied, the process change will fail.

Just because a change has failed does not mean all is lost.  There are two possible outcomes to a failure. The first is that the original position is reinforced, making change even more difficult.  The second is that the target group has been pushed into moving, maybe not all the way to where they should be or even in the right direction, but the original inertia has been broken.

Frankly, both outcomes happen. If the failure is such that no good comes of it, then your organization will be mired in the muck of living off past performance. This is similar to what happens when a car gets stuck in snow or sand and digs itself in. The second scenario is more positive: while the goal was not attained, the organization has begun to move, making further change easier. I return to the car stuck in the snow example. A technique taught to many of us who live in snowy climates is "rocking": moving the stuck car back and forth. Movement increases the odds that you will be able to break free and get going in the right direction.

Step Three:  Take smaller bites!

The lean startup movement provides a number of useful concepts that can be used when changing any organization.  In Software Process and Measurement Cast 196, Jeff Anderson talked in detail about leveraging the concepts of lean start-ups within change programs (Link to SPaMCAST 196).  A lean start up will deliver a minimum amount of functionality needed to generate feedback and to further populate a backlog of manageable changes. The backlog should be groomed and prioritized by a product owner (or owners) from the area being impacted by the change.  This will increase ownership and involvement and generate buy-in.  Once you have a prioritized backlog, make the changes in a short time-boxed manner while involving those being impacted in measuring the value delivered.  Stop doing things if they are not delivering value and go to the next change.

Being a change agent is not easy, and no one succeeds all the time unless they are not taking any risks. Learn from your mistakes and successes. Understand the direction the organization is moving and use that movement as an asset to magnify the energy you apply. Involve those you are asking to change in building a backlog of prioritized minimum viable changes (mixing the concept of a backlog with concepts from the lean start-up movement). Make changes based on how those who are impacted prioritize the backlog, then stand back to observe and measure. Finally, pivot if necessary. Always remember that the goal is not really the change itself, but rather demonstrable business value. Keep pushing until the organization is going in the right direction. What do you do when inertia wins? My mother would have said: just get back up, dust yourself off and get back in the game.


Categories: Process Management

Reminder to migrate to updated Google Data APIs

Google Code Blog - Mon, 12/15/2014 - 18:00
Over the past few years, we’ve been updating our APIs with new versions across Drive and Calendar, as well as those used for managing Google Apps for Work domains. These new APIs offered developers several improvements over older versions of the API. With each of these introductions, we also announced the deprecation of a set of corresponding APIs.

The deprecation period for these APIs is coming to an end. As of April 20, 2015, we will discontinue these deprecated APIs. Calls to these APIs and any features in your application that depend on them will not work after April 20th.

Discontinued API           Replacement API
Documents List API         Drive API
Admin Audit                Admin SDK Reports API
Google Apps Profiles       Admin SDK Directory API
Provisioning               Admin SDK Directory API
Reporting                  Admin SDK Reports API
Email Migration API v1     Email Migration API v2
Reporting Visualization    No replacement available
When updating, we also recommend that you use the opportunity to switch to OAuth2 for authorization. Older protocols, such as ClientLogin, AuthSub, and OpenID 2.0, have also been deprecated and are scheduled to shut down.
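
For illustration, here is a minimal sketch of that recommendation - my addition, assuming the google-api-python-client and oauth2client packages, with placeholder file names - showing an OAuth2 flow that lists Drive files instead of relying on ClientLogin:

import httplib2
from apiclient.discovery import build
from oauth2client import tools
from oauth2client.client import flow_from_clientsecrets
from oauth2client.file import Storage

storage = Storage("credentials.dat")   # on-disk token cache
credentials = storage.get()
if credentials is None or credentials.invalid:
    flow = flow_from_clientsecrets(
        "client_secrets.json",         # downloaded from the Developers Console
        scope="https://www.googleapis.com/auth/drive.readonly")
    credentials = tools.run_flow(flow, storage, tools.argparser.parse_args([]))

drive = build("drive", "v2", http=credentials.authorize(httplib2.Http()))
for item in drive.files().list(maxResults=10).execute().get("items", []):
    print(item["title"])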

For help on migration, consult the documentation for the APIs or ask questions about the Drive API or Admin SDK on StackOverflow.

Posted by Steven Bazyl, Developer Advocate
Categories: Programming

Team Competition is Not Friendly

I once worked in an organization where the senior managers thought they should motivate us, the team members. They decided to have a team competition, complete with prizes.

I was working on a difficult software problem with a colleague on another team. We both needed to jointly design our pieces of the product to make the entire product work.

After management announced the competition, he didn’t want to work with me. Why? There was prize money, worth hundreds of dollars to each person. He had a mortgage and three kids. That money made a big difference to him. I was still single. I would have stuck that money into either my savings or retirement fund, after buying something nice for myself.

Management motivated us, alright. But not to collaborate. They motivated us to stop working together. They motivated us to compete.

Our progress stopped.

My boss wanted to know what happened. I explained. I couldn’t fault my colleague. He wanted the money. It made a big difference for him. I would have appreciated the money, but not nearly as much as he would have. (Later, when I was paying for childcare, I understood how much of a difference that money made.)

I then had this conversation with my boss, ranting and raving the entire time:

“Look, do you want the best product or the best competition?”

“What?”

“You can’t have both. You can have a great product or you can have a great competition. Choose. Because once you put money on the table, where only one team gets the money, we won’t collaborate anymore.”

My boss got that “aha” look on his face. “Hold that thought,” he said.

The next day, management changed the competition. Now, it was about the teams who worked together to create the best product, not the one team who had the best idea. Still not so good, because all the teams on the program needed to collaborate. But better.

When I had my one-on-one with my boss, I explained that all the teams needed to collaborate. Were they really going to pay everyone a bonus?

My boss paled. They had not thought this through. “I’d better make sure we have the funds, right?”

People don’t work just for money. You need to pay people a reasonable salary. Remember what Dan Pink says in Drive: The Surprising Truth About What Motivates Us. People work for autonomy, mastery, and purpose. If you exchange the social contract of working for autonomy, mastery, and purpose for money, you’d better pay enough money. You also better repeat that money the next time. And, the next time. And, the next time.

That’s the topic of this month’s management myth: Management Myth 35: Friendly Competition Is Constructive.

Software product development is a team activity, full of learning. As soon as you make it a competition, you lose on the teamwork. You lose the learning. Nobody wins. There is no such thing as “friendly” competition.

Instead, if you go for collaboration, you can win.

Read Management Myth 35: Friendly Competition Is Constructive.

Categories: Project Management

How I got Robert (Uncle Bob) Martin to Write a Foreword For My Book

Making the Complex Simple - John Sonmez - Mon, 12/15/2014 - 16:00

Last week my publisher, Manning, gave me a little surprise. They told me that my new book, Soft Skills: The Software Developer's Life Manual, was going to publish early, and that anyone who ordered before December 14th would be able to get the print version in their hands by Christmas (barring any unforeseen circumstances). This was very exciting, until I realized ... Read More

The post How I got Robert (Uncle Bob) Martin to Write a Foreword For My Book appeared first on Simple Programmer.

Categories: Programming

The Evolution of eInk

Coding Horror - Jeff Atwood - Mon, 12/15/2014 - 09:40

Sure, smartphones and tablets get all the press, and deservedly so. But if you place the original mainstream eInk device from 2007, the Amazon Kindle, side by side with today's model, the evolution of eInk devices is just as striking.

Each of these devices has a 6 inch eInk screen. Beyond that they're worlds apart.

8" × 5.3" × 0.8"
10.2 oz 6.4" × 4.5" × 0.3"
6.3 oz 6" eInk display
167 PPI
4 level greyscale 6" eInk display
300 PPI
16 level greyscale
backlight 256 MB 4 GB 400 Mhz CPU 1 GHz CPU $399 $199 7 days battery life
USB 6 weeks battery life
WiFi / Cellular

They may seem awfully primitive compared to smartphones, but that's part of their charm – they are the scooter to the motorcycle of the smartphone. Nowhere near as versatile, but as a form of basic transportation, radically simpler, radically cheaper, and more durable. There's an object lesson here in stripping things away to get to the core.

eInk devices are also pleasant in a paradoxical way because they basically suck at everything that isn't reading. That doesn't sound like something you'd want, except when you notice you spend every fifth page switching back to Twitter or Facebook or Tinder or Snapchat or whatever. eInk devices let you tune out the world and truly immerse yourself in reading.

I believe in the broadest sense, bits > atoms. Sure, we'll always read on whatever device we happen to hold in our hands that can display words and paragraphs. And the advent of retina class devices sure made reading a heck of a lot more pleasant on tablets and smartphones.

But this idea of ultra-cheap, pervasive eInk reading devices eventually replacing those ultra-cheap, pervasive paperbacks I used to devour as a kid has great appeal to me. I can't let it go. Reading is Fundamental, man!

That's why I'm in this weird place where I will buy, sight unseen, every new Kindle eInk device. I wasn't quite crazy enough to buy the original Kindle (I mean, look at that thing) but I've owned every model since the third generation Kindle was introduced in 2010.

I've also been tracking the Kindle prices to see when they can get them down to $49 or lower. We're not quite there yet – the basic Kindle eInk reader, which by the way is still pretty darn amazing compared to that original 2007 model pictured above – is currently on sale for $59.

But this is mostly about their new flagship eInk device, the Kindle Voyage. Instead of being cheap, it's trying to be upscale. The absolute first thing you need to know is this is the first 300 PPI (aka "retina") eInk reader from Amazon. If you're familiar with the smartphone world before and after the iPhone 4, then you should already be lining up to own one of these.

When you experience 300 PPI in eInk, you really feel like you're looking at a high quality printed page rather than an array of RGB pixels. Yeah, it's still grayscale, but it is glorious. Here are some uncompressed screenshots I made from mine at native resolution.

Note that the real device is eInk, so there's a natural paper-like fuzziness that makes it seem even more high resolution than these raw bitmaps would indicate.

I finally have enough resolution to pick a thinner font than fat, sassy old Caecilia.

The backlight was new to the original Paperwhite, and it definitely had some teething pains. The third time's the charm; they've nailed the backlight aspect for improved overall contrast and night reading. The Voyage also adds an ambient light sensor so it automatically scales the backlight to anything from bright outdoors to a pitch-dark bedroom. It's like automatic night time headlights on a car – one less manual setting I have to deal with before I sit down and get to my reading. It's nice.

The Voyage also adds page turn buttons back into the mix, via pressure sensing zones on the left and right bezel. I'll admit I had some difficulty adjusting to these buttons, to the point that I wasn't sure I would, but I eventually did – and now I'm a convert. Not having to move your finger into the visible text on the page to advance, and being able to advance without moving your finger at all, just pushing it down slightly (which provides a little haptic buzz as a reward), does make for a more pleasant and efficient reading experience. But it is kind of subtle and it took me a fair number of page turns to get it down.

In my experience eInk devices are a bit more fragile than tablets and smartphones. So you'll want a case for automatic on/off and basic "throw it in my bag however" paperback book level protection. Unfortunately, the official Kindle Voyage case is a disaster. Don't buy it.

Previous Kindle cases were expensive, but they were actually very well designed. The Voyage case is expensive and just plain bad. Whoever came up with the idea of a weirdly foldable, floppy origami top opening case on a thing you expect to work like a typical side-opening book should be fired. I recommend something like this basic $14.99 case which works fine to trigger on/off and opens in the expected way.

It's not all sweetness and light, though. The typography issues that have plagued the Kindle are still present in full force. It doesn't personally bother me that much, but it is reasonable to expect more by now from a big company that ostensibly cares about reading. And has a giant budget with lots of smart people on its payroll.

This is what text looks like on a kindle.

— Justin Van Slembrou… (@jvanslem) February 6, 2014

If you've dabbled in the world of eInk, or you were just waiting for a best of breed device to jump in, the Kindle Voyage is easy to recommend. It's probably peak mainstream eInk. Would recommend, would buy again, will probably buy all future eInk models because I have an addiction. A reading addiction. Reading is fundamental. Oh, hey, $2.99 Kindle editions of The Rise and Fall of the Third Reich? Yes, please.

(At the risk of coming across as a total Amazon shill, I'll also mention that the new Amazon Family Sharing program is amazing and lets me and my wife finally share books made of bits in a sane way, the way we used to share regular books: by throwing them at each other in anger.)

[advertisement] What's your next career move? Stack Overflow Careers has the best job listings from great companies, whether you're looking for opportunities at a startup or Fortune 500. You can search our job listings or create a profile and let employers find you.
Categories: Programming

New blog posts about bower, grunt and elasticsearch

Gridshore - Mon, 12/15/2014 - 08:45

Two new blog posts I want to point out to you all. I wrote these blog posts on my employer's blog:

The first post is about creating backups of your elasticsearch cluster. Some time ago they introduced the snapshot/restore functionality. Of course you can use the REST endpoint to access the functionality, but how much easier is it if you can use a plugin to handle the snapshots - or, maybe even better, integrate the functionality into your own Java application? That is what this blog post is about: integrating snapshot/restore functionality into your Java application. As a bonus, there are screenshots of my elasticsearch gui project showing the snapshot/restore functionality.

Creating elasticsearch backups with snapshot/restore
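
For readers who just want the underlying REST calls the post builds on, here is a minimal sketch - mine, not from the post, with placeholder host and repository path - using Python's requests library:

import requests

ES = "http://localhost:9200"

# register a shared-filesystem snapshot repository
# (the location must be permitted by the node's path.repo setting)
requests.put(ES + "/_snapshot/my_backup", json={
    "type": "fs",
    "settings": {"location": "/mnt/es-backups/my_backup"},
}).raise_for_status()

# take a snapshot and block until it completes
requests.put(ES + "/_snapshot/my_backup/snapshot_1",
             params={"wait_for_completion": "true"}).raise_for_status()

# inspect it; restore later via POST /_snapshot/my_backup/snapshot_1/_restore
print(requests.get(ES + "/_snapshot/my_backup/snapshot_1").json())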

The second blog post I want to bring to your attention is front-end oriented. I already mentioned my elasticsearch gui project. This is an AngularJS application. I have been working on the plugin for a long time and the amount of JavaScript code is increasing. Therefore I wanted to introduce grunt and bower to my project. That is what this blog post is about.

Improve my AngularJS project with grunt

The post New blog posts about bower, grunt and elasticsearch appeared first on Gridshore.

Categories: Architecture, Programming