
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Software Development Linkopedia August 2014

From the Editor of Methods & Tools - Thu, 08/21/2014 - 14:48
Here is our monthly selection of interesting knowledge material on programming, software testing and project management. This month you will find some interesting information and opinions about Agile retrospectives, software architecture, software developer psychology, software testing in Agile teams, quality code and the (funny) history of programming. Web site: Fun Retrospectives Blog: How to make software architecture decisions? Blog: Cognitive Biases in Software Engineering Blog: How the Other Half Works: an Adventure in the Low Status of Software Engineers Blog: Confessions of an ex-developer Article: Tearing Down the Walls - Embedding QA in a TDD/Pairing and ...

Is pairing for everybody?

Actively Lazy - Thu, 08/21/2014 - 08:11

Pair programming is a great way to share knowledge. But every developer is different: does pairing work for everyone?

Pairing helps a team normalise its knowledge – what one person knows, everyone else learns through pairing: keyboard shortcuts, techniques, practices, third party libraries as well as the details of the source code you’re working in. This pushes up the average level of the team and stops knowledge becoming siloed.

Pairing also helps with discipline: it’s a lot harder to argue that you don’t need a unit test when there’s someone sitting next to you, literally acting as your conscience. It’s also a lot harder to just do the quick and dirty hack to get on to the next task, when the person sitting next to you has taken control of the keyboard to stop you committing war crimes against the source code.

The biggest problem most teams face is basically one of communication: coordinating, in detail, the activities of a team of developers is difficult. Ideally, every developer would know everything that is going on across the team – but this clearly isn’t practical. Instead, we have to draw boundaries to make it easier to reason about the system as a whole, without knowing the whole system to the same level of detail. I’ll create an API, some boundary layer, and we each work to our own side of it. I’ll create the service, you sort out the user interface. I’ll sort out the network protocol, you sort out the application layer. You have to introduce an architectural boundary to simplify the communication and coordination. Your architecture immediately reflects the relationships of the developers building it.

Whereas on teams that pair, these boundaries can be softer. They still happen, but the boundary becomes softer because as pairs rotate you see both sides of any boundary, so it doesn’t become a black box you don’t know about and can’t change. One day I’m writing the user interface code, the next I’m writing the service layer that feeds it. This is how you spot inconsistencies and opportunities to fix the architecture and take advantage of implementation details on both sides. Otherwise this communication is hard. Continuous pair rotation means you can get close to the ideal that each developer knows, broadly, what is happening everywhere.

However, let’s be honest: pairing isn’t for everyone. I’ve worked with some people who were great at pairing, who were a pleasure to work with. People who had no problem explaining their thought process and no ego to get bruised when you point out the fatal flaw in their idea. People who spot when you’ve lost the train of thought and pick up where you drifted off from.

A good pairing session becomes very social. A team that is pairing can sound very noisy. It can be one of the hardest things to get used to when you start pairing: I seem to spend my entire day arguing and talking. When are we gonna get on and write some damned code? But that just highlights how little of the job is actually typing in source code. Most of the day is figuring out which change to make and where. A single line of code can take hours of arguing to get right and in the right place.

But programming tends to attract people who are less sociable than others – and let’s face it, we’re a pretty anti-social bunch: I spend my entire day negotiating with a machine that works in 1s and 0s. Not for me the subtle nuances of human communication, it either compiles or it doesn’t. I don’t have to negotiate or try and out politick the compiler. I don’t have to deal with the compiler having “one of those days” (well, I say that, sometimes I swear…). I don’t have to take the compiler to one side and offer comforting words because its cat died. I don’t have to worry about hurting the compiler’s feelings because I made the same mistake for the hundredth time: “yes of course I’m listening to you, no I’m not just ignoring you. Of course I value your opinions, dear. But seriously, this is definitely an IList of TFoo!”

So it’s no surprise that among the great variety of programmers you meet, some are extrovert characters who relish the social, human side of working in a team of people, building software. As well as the introvert characters who relish the quiet, private, intellectual challenge of crafting an elegant solution to a fiendish problem.

And so to pairing: any team will end up with a mixture of characters. The extroverts will tend to enjoy pairing, while the introverts will tend to find it harder and seek to avoid it. This isn’t necessarily a question of education or persuasion: the benefits are relatively intangible, and more introverted developers may find the whole process less enjoyable than working solo. It sounds trite, but happy developers are productive developers. There’s no point doing anything that makes some of your peers unhappy. All teams need to agree rules. For example, some people like eating really smelly food in an open plan office. Good teams tend to agree rules about this kind of behaviour; everyone agrees that small sacrifices for an individual make a big difference for team harmony.

However, how do you resolve a difference of opinion with pairing? As a team decision, pairing is a bit all or nothing. Either we agree to pair on everything, so there’s no code ownership, regular rotation and we learn from each other. Or we don’t, and we each become responsible for our own dominion. We can’t agree that those that want to pair will go into the pairing room so as not to upset everyone else.

One option is to simply require that everyone on your team has to love pairing. I don’t know about you: hiring good people is hard. The last thing I want to do is start excluding people who could otherwise be productive. Isn’t it better to at least have somebody doing something, even if they’re not pairing?

Another option is to force developers to pair, even if they find it difficult or uncomfortable. But is that really going to be productive? Building resentment and unhappiness is not going to create a high performance team. Of course, the other extreme is just as likely to cause upset: if you stop all pairing, then those that want to will feel resentful and unhappy.

And what about the middle ground? Can you have a team where some people pair while others work on their own? It seems inevitable that Conway’s law will come into play: the structure of the software will reflect the structure of the team. It’s very difficult for there to be overlap between developers working on their own and developers that are pairing. For exactly the same reason it’s difficult for a group of individual developers to overlap on the same area of code at the same time: you’ll necessarily introduce some architectural boundary to ease coordination.

This means you still end up with a collection of silos, some owned by individual developers, some owned by a group of developers. Does this give you the best compromise? Or the worst of both worlds?

What’s your experience? What have you tried? What worked, what didn’t?


Categories: Programming, Testing & QA

Two lessons on haproxy checks and swap space

Agile Testing - Grig Gheorghiu - Thu, 08/21/2014 - 02:03
Let's assume you want to host a Wordpress site which is not going to get a lot of traffic. You want to use EC2 for this. You still want as much fault tolerance as you can get at a decent price, so you create an Elastic Load Balancer endpoint which points to 2 (smallish) EC2 instances running haproxy, with each haproxy instance pointing in turn to 2 (not-so-smallish) EC2 instances running Wordpress (Apache + MySQL). 
You choose to run haproxy behind the ELB because it gives you more flexibility in terms of load balancing algorithms, health checks, redirections etc. Within haproxy, one of the Wordpress servers is marked as a backup for the other, so it only gets hit by haproxy when the primary one goes down. On this secondary Wordpress instance you set up MySQL to be a slave of the primary instance's MySQL.
Here are two things (at least) that you need to make sure you have in this scenario:
1) Make sure you specify the httpchk option in haproxy.cfg, otherwise the primary server will not be marked as down even if Apache goes down. So you should have something like:
backend servers-http
  server s1 10.0.1.1:80 weight 1 maxconn 5000 check port 80
  server s2 10.0.1.2:80 backup weight 1 maxconn 5000 check port 80
  option httpchk GET /
2) Make sure you have swap space in case the memory on the Wordpress instances gets exhausted, in which case random processes will be killed by the OOM killer (and one of those processes can be mysqld). By default, there is no swap space when you spin up an Ubuntu EC2 instance. Here's how to set up a 2 GB swapfile:
dd if=/dev/zero of=/swapfile1 bs=1024 count=2097152
mkswap /swapfile1
chmod 0600 /swapfile1
swapon /swapfile1
echo "/swapfile1 swap swap defaults 0 0" >> /etc/fstab
I hope these two things will help you if you're not already doing them ;-)

Which Software Size Metric Should I Use? Part 3

The fourth step in our checklist for selecting a size metric is an evaluation of the temporal component. This step focuses your evaluation on answering the question, “Is the metric available when you need it?” When you need to know how big a project is depends on what you intend to do with the data (that goal thing again). The majority of goals can be viewed as either estimation related (forward view) or measurement related (historical view). Different sizing metrics can be initially applied at different times during a project’s life. For example, Use Case Points can’t be developed until Use Cases are developed, and lines of code can’t be counted until you are deep into construction, or at the very earliest, in technical design.

Figure 4

The major dichotomy is between estimation needs and measurement needs. As Figure 4 suggests, determining size from requirements (or earlier) will require focusing on functional metrics. Functional metrics can be applied earlier in the process (regardless of methodology) because they are based on a higher level of abstraction that is more closely aligned with the business description of the project. Developing estimates or sizing later in the development process opens the possibility of using more physical metrics, which are more closely aligned with how developers view their work.


Categories: Process Management

No News is Bad News - Feedback Must Create Steering Signal from Plan †

Herding Cats - Glen Alleman - Wed, 08/20/2014 - 22:41

Project Manager: How's the project progressing to plan? 

Developers: We're spending money, consuming resources, producing outputs that the customer likes. 

Project Manager: I was more interested in what's our performance against our planned spend, planned resource consumption, and planned outputs of value to the customer? 

Developers: What do you mean? We didn't estimate any of that; we're managing this project with #NoEstimates. You know, that new alternative to estimates for making decisions in software development: ways to make decisions with "No Estimates" of the impact of our work on future cost, schedule, or technical performance. You know, where we can use decision making frameworks for projects that do not require estimates, apply investment models for software projects that do not require estimates, and have our project management methods for risk management, scope management and progress reporting not require any of those annoying estimates. 'Cause we kinda suck at them anyway, so instead of learning how to estimate, we just decided we'll not estimate and get back to coding.

Project Manager: Oh, you mean that approach to managing other people's money that violates the principles of software microeconomics with Open Loop Control - where our organization makes business decisions on the allocation of our limited resources without examining how those decisions affect the supply and demand of those resources. You do know about those resources? Like money, people, and time?

Developers: Yea, we don't need any of that mumbo jumbo microeconomics that we all learned in school, since we didn't pay attention in that boring statistics and probability class that tried to teach us that all variables on a project are actually random variables, and that we should know something about their behaviour in the future if we're going to have a hope in hell of ever managing this project in the presence of uncertainty about those values.

Project Manager: What's that smell? Maybe we'd better start rearranging the deck chairs on our ship here real soon, 'cause I smell an iceberg getting closer.

No project can be managed to successful closure in the absence of steering targets defined at periodic intervals for the expenditure of cost, schedule, and technical performance. Knowing what those steering targets should be requires estimating their values, then measuring the actual values to develop the needed steering signal - the variance between plan and actual.

The only way out of the need to estimate those intermediate steering targets is to straight line the budget, schedule, and needed technical performance - from start to end, then measure the actual performance. 

Like the intended route of the Titanic, our project does not proceed in a straight line, so that idea is a non-starter. And like the Titanic, our project cannot confuse the intended speed with the actual speed, just like we can't confuse the budget - the total planned crossing time - with the actual cost - the actual total crossing time.


Without those pesky intermediate targets to steer toward - those targets created by estimating the needed cost, the needed scheduled arrival date, and the needed capabilities on the needed date for the needed cost - we're managing the project Open Loop, driving in a straight line, never knowing what will pop up in front of our path.

Say goodbye to Kate, Leonardo; you're gonna get wet.

† Full attribution: the inspiration for this post comes from the very useful blog by Gene Hughson

Related articles:
• Staying on Plan Means Closed Loop Control
• How Not To Make Decisions Using Bad Estimates
• Control Systems - Their Misuse and Abuse
• More Than Target Needed To Steer Toward Project Success
• Why Project Management is a Control System
Categories: Project Management

Help! Too Many Incidents! - Capacity Assignment Policy In Agile Teams

Xebia Blog - Wed, 08/20/2014 - 22:26

As an Agile coach, scrum master, product owner, or team member you have probably been in the situation in which more work is thrown at the team than it has the capacity to resolve.

In the case of work that is already known, this is basically a scheduling problem: determine the optimal order in which the team completes the work so as to maximise the business value and outcome. This typically applies to the case of a team working to build or extend a new product.

The other interesting case is e.g. operational teams that work on items that arrive in an ad hoc way. Examples include production incidents. Work arrives ad hoc and the product owner needs to allocate a certain capacity of the team to certain types of incidents. E.g. should the team work on database related issues, or on front-end related issues?

If the team has more than enough capacity the answer is easy: solve them all! This blog will show how to determine what capacity of the team is best allocated to what type of incident.

What are we trying to solve?

Before going into details, let's define what problem we want to solve.

Assume that the team recognises various types of incidents, e.g. database related, GUI related, and perhaps some more. Each type of incident will have an associated average resolution time. Also, each type will arrive at the team at a certain rate, the input rate. E.g. database related incidents arrive 3 times per month, whereas GUI related incidents occur 4 times per week. Finally, each incident type will have different operational costs assigned to it. The effect of a database related incident might be that 30 users are unable to work, whereas a GUI related incident might affect only part of the application, impacting a few users.

At any time, the team has a backlog of incidents to resolve. An operational cost is associated with this backlog. This operational cost is what we want to minimise.

What makes this problem interesting is that we want to minimise this cost under the constraint of a limited number of resources, or capacity. The product owner may wish to deliberately ignore GUI type incidents and let the team work on database related incidents. Or assign 20% of the capacity to GUI related incidents and 80% of the available capacity to database related incidents?

Types of Work

For each type of work we define the input rate, production rate, cost rate, waiting time, time in the system, and average resolution time:

 \lambda_i = \text{average input rate for type '$i$'}

 C_i = \text{operational cost rate for type '$i$'}

 x_i = \text{average resolution time for type '$i$'}

 w_i = \text{average waiting time for type '$i$'}

 s_i = \text{average time spent in the system for type '$i$'}

 \mu_i = \text{average production rate for type '$i$'}

Some items get resolved and spend the time  s_i = x_i + w_i  in the system. Other items never get resolved and spend time  s_i = w_i  in the system.

In the previous blog Little's Law in 3D the average total operational cost is expressed as:

 \text{Average operational cost for type '$i$'} = \frac{1}{2} \lambda_i C_i \overline{S_i(S_i+T)}

To get the total cost we need to sum this over all work types 'i'.

System

The process for work items is that they enter the system (team) as soon as they are found or detected. From the moment they are found, these items contribute to the total operational cost. This stops as soon as they are resolved. For some of them the product owner decides that the team will start working on them. At the point that the team starts working on an item, the waiting time w_i is known, and on average the team spends a time x_i on it before it is resolved.

As the team has limited resources, they cannot work on all the items, so over time the average time spent in the system will increase. As shown in the previous blog Why Little's Law Works...Always, Little's Law still applies when we consider a finite time interval.

This process is depicted below:

(Diagram: incidents enter the 'orange' area when they are detected, and move to the 'green' area once the team starts working on them)

 \overline{M} = \text{fixed team capacity}

 \overline{M_i} = \text{team capacity allocated to working on problems of type '$i$'}

 \overline{N} = \text{total number of items in the system}

The total number of items allowed in the 'green' area is restricted by the team's capacity. The team may set a WiP limit to enforce this. In contrast the number of items in the 'orange' area is not constrained: incidents flow into the system as they are found and leave the system only after they have been resolved.

Without going into the details, the total operational cost can be rewritten in terms of x_i and w_i:

(1)  \text{Average operational cost for type '$i$'} = \frac{1}{2} \lambda_i C_i \overline{w_i(w_i+T)} + \mu_i C_i \overline{x_i} \,\, \overline{w_i} + \frac{1}{2} \mu_i C_i \overline{x_i(x_i+T)}

What are we trying to solve? Again.

Now that I have shown the system and defined exactly what I mean by the variables, I will refine what exactly we will be solving.

Find M_i such that it minimises (1) under the constraint that the team has a fixed and limited capacity.

Important note

The system we are considering is not stable. Therefore we need to be careful when applying and using Little's Law. To circumvent the conditions necessary for Little's Law to hold, I will consider the average total operational cost over a finite time interval. This means that we will minimise the average of the cost over the time interval from the start to a certain time. As the accumulated cost increases over time, the average is not the same as the cost at the end of the time interval.

Note: For our optimisation problem to make sense the system needs to be unstable. For a stable system it follows from Little's Law that the average input rate for type 'i' is equal to the average production rate for type 'i'. In that case there is nothing to optimise, since we cannot choose them to be different. The ability to choose them differently is the essence of our optimisation problem.

Little's Law

At this point Little's Law provides a few relations between the variables  M, M_i, N, w_i, x_i, \mu_i, \lambda_i . These relations we can use to find which values of M_i will minimise the average total operational cost.

As described in the previous blog Little's Law in 3D, Little's Law gives relations for the system as a whole, per work item type, and for each subsystem. These relations are:

 \overline{N_i} = \lambda_i \,\, \overline{s_i}

 \overline{N_i} - \overline{M_i} = \lambda_i \,\, \overline{w_i}

 \overline{M_i} = \mu_i \,\, \overline{x_i}

 M_1 + M_2 + ... = M

The last relation is not derived from Little's Law but merely states that the total capacity of the team is fixed.

Note that Little's Law also has given us relation (1) above.

Result

Again, without going into the very interesting details of the calculation I will just state the result and show how to use it to calculate the capacities to allocate to certain work item types.

First, for each work item type determine the product of the average input rate (\lambda_i) and the average resolution time (x_i). The interpretation of this product is the average number of new incidents arriving while the team works on resolving an item. Put the results in a row vector and name it 'V':

(2)  V = (\lambda_1 x_1, \lambda_2 x_2, ...)

Next, add all the components of this vector and denote the sum by ||V||.

Second, multiply the result of the previous step for each item by the quotient of the average resolution time (x_i) and the cost rate (C_i). Put the result in a row vector and name it 'W':

(3)  W = (\lambda_1 x_1 \frac{x_1}{C_1}, \lambda_2 x_2 \frac{x_2}{C_2}, ...)

Again, add all the components of this row vector and call this ||W||.

Then, the capacity to allocate to items of type 'k' is proportional to:

(4)  \frac{M_k}{M} \sim W_k - \frac{1}{M} (W_k ||V|| - V_k ||W||)

Here, V_k denotes the k-th component of the row vector 'V'. So, V_1 is equal to \lambda_1 x_1. Likewise for W_k.

Finally, because these should add up to 1, each of (4) is divided by the sum of all of them.
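Written out, with the proportionality constant fixed by this normalisation, the allocated fraction of the team's capacity becomes:

 \frac{M_k}{M} = \frac{W_k - \frac{1}{M} (W_k ||V|| - V_k ||W||)}{\sum_j \left( W_j - \frac{1}{M} (W_j ||V|| - V_j ||W||) \right)}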

Example

If this seems complicated, let's do a real calculation and see how the formulas of the previous section are applied.

Two types of incidents

As a first example consider a team that collects data on all incidents and types of work. The data collected over time includes the resolution time, the date the incident occurred and the date the issue was resolved. The product owner assigns a business value to each incident, which corresponds to the cost rate of the incident and in this case is measured in the number of (business) users affected. Any other means of assigning a cost rate will do as well.

The team consists of 6 team members, and each member is allowed to work on a maximum of 2 incidents, so the team's capacity M is equal to 12.

From their data they discover that they have 2 main types of incidents. See the so-called Cycle Time Histogram below.

(Cycle time histogram showing two clusters of resolution times: one around 2 days and one around 2 weeks)

The picture above shows two types of incidents, having typical average resolution times of around 2 days and 2 weeks. Analysis shows that these are related to the GUI and database components respectively. From their data the team determines that they have an average input rate of 6 per week and 2 per month respectively. The average cost rate for each type is 10 per day and 200 per day respectively.

That is, the database related issues have:  \lambda = 2 \text{ per month} = 2/20 = 1/10 \text{ per day} ,  C = 200 \text{ per day} , and resolution time  x = 2 \text{ weeks} = 10 \text{ days} . The GUI related issues have:  \lambda = 6 \text{ per week} = 6/5 \text{ per day} ,  C = 10 \text{ per day} , and resolution time  x = 2 \text{ days} .

The row vector 'V' becomes (the product of \lambda and x):

 V = (1/10 * 10, 6/5 * 2) = (1, 12/5) ,   ||V|| = 1 + 12/5 = 17/5

The row vector 'W' becomes:

 W = (1/10 * 10 * 10 / 200, 6/5 * 2 * 2 / 10) = (1/20, 12/25) ,  ||W|| = 1/20 + 12/25 = 53/100

Putting this together, the share of the team's capacity that should be allocated to resolving database related issues is proportional to:

 M_\text{database}/M \sim 1/20 - 1/12 * (1/20 * 17/5 - 1 * 53/100) = 1/20 + 1/12 * 36/100 = 1/20 + 3/100 = 8/100 = 40/500

and the share that should be allocated to GUI related items is proportional to:

 M_\text{GUI}/M \sim 12/25 - 1/12 * (12/25 * 17/5 - 12/5 * 53/100) = 12/25 - 1/12 * 45/125 = 12/25 - 3/100 = 45/100 = 225/500

Summing these two we get 265/500. This means that we allocate 40/265 ≈ 15% and 225/265 ≈ 85% of the team's capacity to database and GUI work items respectively.

Kanban teams may define a class of service to each of these incident types and put a WiP limit on the database related incident lane of 2 cards and a WiP limit of 10 to the number of cards in the GUI related lane.

Scrum teams may allocate part of the team's velocity to user stories related to database and GUI related items based on the percentages calculated above.
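As a cross-check, here is a minimal C# sketch of steps (2), (3) and (4); the class and variable names are mine and purely illustrative, not part of the original derivation. Run with the example numbers above, it reproduces the 15%/85% split:

using System;
using System.Linq;

class CapacityAllocation
{
    // Implements formulas (2)-(4): V_k = lambda_k * x_k, W_k = V_k * x_k / C_k,
    // raw_k = W_k - (1/M)(W_k*||V|| - V_k*||W||), then normalised so the fractions sum to 1.
    static double[] Allocate(double[] lambda, double[] x, double[] c, double m)
    {
        int n = lambda.Length;
        var v = new double[n];
        var w = new double[n];
        for (int i = 0; i < n; i++)
        {
            v[i] = lambda[i] * x[i];    // (2): new arrivals while one item is resolved
            w[i] = v[i] * x[i] / c[i];  // (3)
        }
        double normV = v.Sum(), normW = w.Sum();
        var raw = new double[n];
        for (int k = 0; k < n; k++)
            raw[k] = w[k] - (w[k] * normV - v[k] * normW) / m;  // (4)
        double total = raw.Sum();
        return raw.Select(r => r / total).ToArray();            // normalise to 1
    }

    static void Main()
    {
        // Example data, per working day: index 0 = database, index 1 = GUI.
        double[] lambda = { 1.0 / 10, 6.0 / 5 };  // input rates
        double[] x = { 10, 2 };                   // average resolution times (days)
        double[] c = { 200, 10 };                 // cost rates (per day)
        double[] share = Allocate(lambda, x, c, 12);
        Console.WriteLine("database: {0:P0}, GUI: {1:P0}", share[0], share[1]);
        // prints roughly: database: 15%, GUI: 85%
    }
}

The same routine works for any number of incident types, as long as the team capacity M stays fixed.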

Conclusion

Starting with the expression for the average total operational cost I have shown that this leads to an interesting optimisation problem, in which we want to determine the optimal allocation of a team's capacity to different work item types in such a way that it will minimise the average total operational cost present in the system.

The division of the team's capacity over the various work item types is determined by the work item types' average input rate, resolution time, and cost rate and is proportional to

(4)  \frac{M_k}{M} \sim W_k - \frac{1}{M} (W_k ||V|| - V_k ||W||)

The data needed to perform this calculation is easily gathered by teams. Teams may use a cycle time histogram to find appropriate work item types. See this article on control charts for more information.

 

BE Agile before you Become Agile

Xebia Blog - Wed, 08/20/2014 - 20:49

People dislike change. It disrupts our routines and we need to invest to adapt. We only go along if we understand why change is needed and how we benefit from it.
The key to intrinsic motivation is to experience the benefits of the change yourself, rather than having someone else explain it to you.

Agility is almost a synonym for change. It is critical to let people experience the benefits of Agility before asking them to buy into this new way of working. This post explains how to create a great Agile experience in a fun, simple, cost efficient and highly effective way. BEing agile, before BEcoming agile!

The concept of a “Company Innovation Day”

Have you seen this clip about Dan Pink's Drive? According to him, the key factors for more motivation and better performance are: autonomy, mastery and purpose.
If you have some scrum experience this might sound familiar, right? That is because these 3 things really tie in nicely with agile and scrum, for example:

Autonomy = being able to self-direct;
• Let the team plan their own work
• Let the team decide how to best solve problems

Mastery = learning, applying and mastering new skills and abilities, a.k.a. "get better at stuff";
• Retrospect and improve
• Learn, apply and master new skills to achieve goals as a team.

Purpose = understanding necessity and being as effective as possible;
• Write user stories that add value
• Define sprint goals that tie in to product- and business goals.

In the clip, the company "Atlassian" is mentioned. This is the company that makes "JIRA", one of the most popular Agile support tools. Atlassian tries to facilitate autonomy, mastery and purpose by organizing one day per quarter of “management free” innovation. They call it a “ship it day”.

Now this is cool! According to Dan, their people had fun (most important), fixed a whole array of bugs and delivered new product ideas as well. They have to ship all this in one day, again showing similarities with the time boxed scrum approach. When I first saw this, I realized that this kind of fast delivery of value is pretty much something you would like to achieve with Agile Scrum too! Doing Scrum right would feel like a continuous series of ship it days.

My own experience with innovation days

Recently I organized an innovation day with a client (for tips on how to organize yours, click here). We invited the whole department to volunteer. If you didn’t feel like it, you could just skip it and focus on sprint work. Next we promoted the day, and this resulted in a growing list of ideas coming in.
Except for the framing of the day, the formation of ideas and teams was totally self-organized, and also result driven, as we asked for the expected result of each idea. Ultimately we had 20 initiatives to be completed in one day.
On the day itself, almost everyone joined in and people worked hard to achieve results by the end of the day.
The day ended with presenting the results and having pizzas. Some ideas just missed the deadline, but most were finished, including usable and fresh new stuff with direct business value. When looking at the photos of that day it struck me that nine out of ten showed smiling faces. Sweet!

The first innovation day was concluded with an evaluation. In my opinion evaluation is essential, because this is the perfect moment to discuss deeper lessons and insights. Questions like “how can we create the innovation day energy levels during sprints” and “how can we utilize self-organizing abilities more” are invaluable, as they could lead to new insights, inspiration and experiments for day-to-day operations.

The value of an innovation day as a starting point for Agile

All in all, I think an innovation day is the perfect way to get people experiencing the power of Agile.
Doing the innovation day on “day one” offers huge benefits when added to standard stuff like training and games. This is because the context is real. You have a real goal, a real timebox and you need to self-organize to achieve the desired result.
People doing the work get to experience their potential and the power of doing stuff within a simplified context. Managers get to experience unleashing the human potential when they focus only on the context and environment for that day.
I can only imagine the amazement and renewed joy when people coming from a strong waterfall setting experience these possibilities. All that good stuff from just a one-day investment!

Conclusion

It would be great if you could start an Agile change initiative with an innovation day. Get people enthusiastic and inspired (i.e. motivated for change) first, and then tell them why it works and how you are going to apply the same principles in day-to-day operations. This will result in less friction and resistance and give people a better sense of where they are heading.

If you want to start doing innovation days, or if you want to share your experience, feel free to leave a comment below.

The Agile Household: How Scrum Made Us a Better Family

Mike Cohn's Blog - Wed, 08/20/2014 - 20:13

I’m always fascinated by stories about Scrum (or any agile process) being used outside of software development. When Martin Lapointe told me how he and his family used Scrum -- and especially a task board -- to manage their recent relocation from Paris to Montreal, I immediately asked him to share that story. I’m sure you’ll find it as interesting, amusing, and informative as I did. - Mike Cohn

Ever since discovering the “Agile Manifesto,” I have been trying to integrate its core set of values into my day-to-day routines in hopes of improving processes outside of the office environment. With this in mind, my family and I embarked on an agile adventure that produced amazing results we never expected!

Since my childhood, I have longed to live and experience life in a different country. I have always been especially interested in exploring Europe in hopes of better understanding the many different cultures that make it such an amazing place. This dream had always been on the back burner, and then in 2011, with the help of my wife, Pascale, and my two girls, Elisabeth and Sarah (8 and 5 years old), we decided to make it a reality.

Carpe Diem (Seize the Day)!

With our family mantra in mind, we picked up, left Montreal and migrated to Paris, France.

As expected, when displacing a family of four from a three-story house to a small Parisian apartment, the move was exhausting and quite chaotic. Almost right off the plane, I started my new job as a ScrumMaster at a telecom company, and meanwhile, Pascale embarked on her new role as “Product Owner” of our household. Our two girls were rapidly immersed into the Parisian lifestyle and school system.

"With our family in mantra in mind, we picked up, left Montreal and migrated to Paris, France."

The next two years flew by at light speed! Before we knew it, it was time to start planning our return to Canada.

Looking back on our initial move to Paris, we wish we had better prepared ourselves and been more organized as a family while facing this major life change. In the face of another move, we began thinking of ways to approach yet another life changing experience.

Our hearts filled with emotions as we started compiling our to-do lists of Post-its that had to be completed before D-day. As the tasks piled up, we started to wonder how we could possibly get all of this done while trying to maintain a balance of work and living a happy life until the end of our European adventure.

What if Scrum Was the Answer to Our Challenge?

Then, one night, we said to ourselves let’s try something different. What if Scrum was the answer to our challenge? In the agile world, we try to leverage experience and failures to improve, so why not use the same approach to our big move?

Having some positive experiences experimenting with Scrum on a non-IT related team, I said to myself, why not take the same approach with my family? So, we gathered in our bedroom with our Post-its and sharpies in hand, and said, let’s create a backlog!

Elisabeth, being the curious one, immediately asked me, “Daddy, what’s a backlog?” I responded by explaining that a backlog is everything we need to do before leaving Paris. Elisabeth quickly asked, “Can I add the Eiffel tower carrousel to the backlog?” Of course, we said yes!

Mom then asked, “Can we also add taking out the trash?” I said yes, of course! Within seconds, everybody was writing valid stories down on Post-its. Sarah drew her story, because she hadn’t learned to write yet: a picture of her favourite Parisian sweets, Pierre Hermé macarons!

In one evening we compiled more than 50 family stories. These stories weren’t perfect by any means. There were no acceptance criteria, no estimations, but the girls were on board, and dare I say it, excited, and that was more than enough!

Next, we asked ourselves: what can we realistically accomplish in a week? Mom wanted to add the entire house cleaning tasks to the backlog. The girls wanted to add all of the fun stuff, and I wanted to add the must-see tourist attractions we hadn’t yet visited.

So we took a small board and made three columns: “To do,” “Ongoing” and “Done.” We negotiated ruthlessly, but ultimately Elisabeth and Sarah got the most stories in (don’t worry! The parents got their revenge in the following iterations ☺). The first iteration was about to begin.

"We had never seen this much work get done so fast, with so much happiness, ease, understanding and visibility!"

The Scrum momentum was on. We agreed upon one-week iterations (sprints) and then took time on the weekends to plan the next iteration. Each morning, we would have a quick gathering (daily stand-up) and the girls were very anxious to move the Post-its around the board. We had never seen so much work get done so fast, with so much happiness, ease, understanding and visibility!

With the Scrum approach, we were able to get the entire apartment-cleaning task list done, administrative tasks complete and some great museum visits in without any conflicts. Scrum was turning out to be a great tool for our family to use when trying to improve clarity and set priorities for big challenges!

We had the joy of experiencing our last days in Paris in a way that we will never forget.

Scrum Was Turning Out to be a Great Tool for Our Family

Back in Montreal, it seemed as though something was missing … It was Scrum! Realising this, we started a new backlog in the basement, and added a cork Scrum board in the kitchen. Our activities are now visible and updated each morning at the breakfast table.

If there’s one overlying factor that we’ve taken away from this experience it’s that we are so proud to be an agile family!




 

Part 2: The Cloud Does Equal High Performance

This is a guest post by Anshu Prateek, Tech Lead, DevOps at Aerospike and Rajkumar Iyer, Member of the Technical Staff at Aerospike.

In our first post we busted the myth that cloud != high performance and outlined the steps to 1 Million TPS (100% reads in RAM) on 1 Amazon EC2 instance for just $1.68/hr. In this post we evaluate the performance of 4 Amazon instances when running a 4 node Aerospike cluster in RAM with 5 different read/write workloads and show that the r3.2xlarge instance delivers the best price/performance.

Several reports have already documented the performance of distributed NoSQL databases on virtual and bare metal cloud infrastructures:

Categories: Architecture

Xamarin.Forms with Zumero

Eric.Weblog() - Eric Sink - Wed, 08/20/2014 - 16:00

I am a Xamarin fanboy, so my excitement about Xamarin.Forms is perhaps unsurprising. But I see an awful lot of potential for this technology. I want to show you some of the stuff we've been doing with Xamarin.Forms here at Zumero.

Andrew Jackson in Two Minutes

First I am going to race through this demo very quickly. Then I'll circle back around and explain things.

STEP ONE: Download ZAG and run it

Visit http://zumero.com/dev-center/zss/#zag and download the ZAG application. For this demo, I'm using ZAG on Mac OS X (but you could choose Windows or Linux) and I am targeting iOS (but you could choose Android or Windows Phone).

When you run the app, you should see something like this:

STEP TWO: New Database

Under the "File" menu, choose "New Database...". You should see this dialog:

Fill in the Server URL and DBFile exactly as shown in the screen shot (Server URL: "http://demo.zumero.com:8080". DBFile: "demo"). Click the OK button. You should see a dialog asking you where to save the local copy of the database:

Click OK. You should see "Syncing with the ZSS Server":

And when the sync is complete, at the bottom of the ZAG window, you should see "Sync result: 0 (success)".

STEP THREE: Generate

Under the "Generate" menu, find and choose the item labeled "Xamarin.Forms C#":

You will be asked three questions, and you should be able to just click OK on all three.

STEP FOUR: Open the sln file

You can quit the ZAG application now. It should have generated a folder somewhere like /Users/eric/Documents/demo.zssdemo/. Under that folder you should find a Xamarin solution tree that looks something like this:

Double-click the demo.sln file to open it in Xamarin Studio. You should see four C# projects: a Portable Class Library called "demo.Shared", plus one app target each for iOS, Android, and Windows Phone 8:

STEP FIVE: Build and run the app

If you build and run the demo.iOS app in the iPhone simulator, you should see something like this:

STEP SIX: Sync

Click the "Sync" button in the upper right. You should see:

The defaults should be fine. Just tap the "Sync Now" button. When the sync is completed, you should see a list of tables:

STEP SEVEN: Andrew Jackson

Tap the "presidents" table. In an iPad instance of the simulator, you would see this:

And tap the seventh item. You should see Andrew Jackson, the only U.S. president ever to kill someone in a duel:

What is ZAG?

ZAG is short for "ZSS App Generator". It's a desktop app which generates ready-to-build source code and build scripts for mobile apps that sync using ZSS.

We think of ZAG as a way of getting people started faster. Many people come to our product without much experience in mobile app development. ZAG can be used to give them a starting point, sort of like sample code that is customized for their data.

What is Zumero for SQL Server?

Zumero for SQL Server (ZSS) is a solution for data sync between SQL Server and mobile devices.

More info from Dan Bricklin about offline in mobile apps: http://bricklin.com/offline.htm

More info about Zumero on our website: zumero.com

More info about Zumero from my previous blog entry: here

What is demo.zumero.com:8080?

This is a publicly accessible ZSS server provided so that folks can play with Zumero clients more easily. It contains some basic sample data such as U.S. presidents and the periodic table of the elements.

In real-world scenarios, a customer would use ZAG to generate their starter app after they have completed setup of their ZSS server.

What is Xamarin?

Xamarin is (IMHO) a great solution for building mobile apps.

One of the main benefits of the Xamarin platform is the ability to write the non-UI parts of your iOS/Android/WP8 apps in cross-platform code while implementing a native user interface for each mobile environment.

But I would use Xamarin even for a single-platform app, simply to get the benefits of working in .NET/C#.

More info on the Xamarin website: http://xamarin.com/

What is Xamarin.Forms?

Xamarin.Forms is Xamarin's solution for making [most of] your UI code cross-platform as well, while retaining fully native performance and feel.

In a nutshell, the coolness of Xamarin.Forms lies in the fact that in the solution generated by ZAG above, the demo.Shared project is a Portable Class Library even though it contains the entire user interface for the app.

More info on the Xamarin website: http://xamarin.com/forms

What is a Portable Class Library (PCL)?

A PCL is a .NET class library that is annotated with information about which platforms it should support. This metadata allows the tooling to enforce portability rules in both the development and the consumption of the library.

More info on Scott Hanselman's blog: http://www.hanselman.com/blog/CrossPlatformPortableClassLibrariesWithNETAreHappening.aspx

More info on the Xamarin website: http://developer.xamarin.com/guides/cross-platform/application_fundamentals/pcl/introduction_to_portable_class_libraries/

Why does ZAG generate separate projects for iOS, Android, and WinPhone?

Xamarin.Forms can make most of your UI code portable, but not all of it. The actual building of the mobile app is specific to each platform. But if you look in the code for each of those platform-specific projects, you'll see that there isn't much there.

What dependencies does the ZAG-generated app have?

The following NuGet packages will need to be retrieved:

  • Xamarin.Forms

  • SQLite-net PCL

  • SQLitePCL.raw_basic

  • Zumero

What is the "SQLite-net PCL" NuGet package?

This is the Portable Class Library (PCL) version of SQLite-net, the popular lightweight SQLite ORM by Frank Krueger (@praeclarum).

More info on GitHub: https://github.com/praeclarum/sqlite-net

More info on the NuGet website: https://www.nuget.org/packages/sqlite-net-pcl/
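If you haven't used SQLite-net before, here is a minimal sketch of its style of data access. This is my own illustrative example, not ZAG output: the President class, the dbPath parameter and the query are hypothetical, and assume the sqlite-net-pcl package referenced above.

using System.Linq;
using SQLite;

public class President
{
    // Hypothetical table for illustration; ZAG generates its own classes from your synced tables.
    [PrimaryKey, AutoIncrement]
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class SqliteNetSketch
{
    public static void Run(string dbPath)  // dbPath: any writable local file path
    {
        var db = new SQLiteConnection(dbPath);
        db.CreateTable<President>();       // creates the table if it doesn't exist
        db.Insert(new President { Name = "Andrew Jackson" });

        // LINQ-style query, translated by SQLite-net into SQL
        var jacksons = db.Table<President>()
                         .Where(p => p.Name.EndsWith("Jackson"))
                         .ToList();
    }
}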

What is the "SQLitePCL.raw_basic" NuGet package?

SQLitePCL.raw is my Portable Class Library for accessing SQLite from C#.

More info on Github: https://github.com/ericsink/SQLitePCL.raw

More info on the NuGet website: https://www.nuget.org/packages/SQLitePCL.raw_basic/

What is SQLite?

SQLite is the most popular database for mobile devices.

More info on the SQLite website: sqlite.org

What is the "Zumero" NuGet package?

This is the Zumero Client SDK in the form of a Portable Class Library in a NuGet package.

More info on the Zumero website: http://zumero.com/dev-center/zss/

More info on the NuGet website: https://www.nuget.org/packages/Zumero/

What do Zumero's client-side SQLite files look like?

As much as possible, they look exactly like they looked in SQL Server.

  • Table and column names are the same.

  • All data values are the same (whenever possible).

  • Foreign keys in SQL Server are reconstructed as foreign keys in SQLite.

  • Since SQLite does not perform type checking, Zumero adds constraints to do so.

And so on...

What's happening in step two?

ZAG is acting as a Zumero client and synchronizing the data on the server into a local SQLite file. This file is used to obtain information about the tables and columns necessary to generate the mobile app.

The same sync is happening in step six, except then it is the mobile app performing the sync instead of ZAG.

What were those three questions in step three?

The first one is the project name:

Then ZAG wants to know the settings for your sync server. These should already be filled in with the ones you entered earlier:

Finally, ZAG is asking you where to save the source code and project files for the app to be generated:

Does a ZAG-generated app allow modifications to the data?

For the Xamarin.Forms C# target, yes. On the item detail page, you should be able to enter new values in text fields and 'Save' the changes.

But you should get a permission denied error if you try to sync those changes to our public demo server. :-)

Is ZAG generating UI stuff as XAML or as C# code?

Currently, it's XAML. You'll find the files in the 'xaml' folder in the demo.Shared project.

Does ZAG generate polished ready-to-use apps?

Oh definitely not. :-)

The output of ZAG should build and run with no errors (if it doesn't, it's a bug, and please let us know), but it's just a starting point for further development.

 

More Than 800 Videos on TVAgile.com

From the Editor of Methods & Tools - Wed, 08/20/2014 - 09:33
TVAgile.com has just passed the mark of 800 available resources with an Oredev conference presentation that discusses how to foster creative collaboration based on the tenets of improv and open spaces. TVAgile.com is a directory of videos, interviews and tutorials focused on agile software development approaches and practices: Scrum, Extreme Programming (XP), Test Driven Development (TDD), Lean Software Development, Kanban, Behavior Driven Development (BDD), Agile Requirements, Continuous Integration, Pair Programming, Refactoring, … Explore all these resources on http://www.tvagile.com/

Which Software Size Metric Should I Use? Part 2


Selecting a software size metric sets you down a specific track.

Deciding on which software size metric you should use is a fairly momentous decision. Much like deciding on a development platform, the decision on a size measure will commit an organization to different types of tools, processes and techniques. For example, the processes and tools needed to count lines of code are different from those needed to support story points as a sizing technique. The goals of the measurement program will be instrumental in determining which type of size metric will be the most useful. Measurement goals will help you choose between four macro attributes: organization specific versus industry defined metrics, and physical versus logical metrics. For example, if benchmarking against other firms or industry data is required to attain your measurement goal, using organizationally defined metrics would be less viable. Similarly, if you have a heterogeneous software environment then selecting a functional metric makes more sense than using a physical metric (logical metrics normalize varied technology).

Figure 1: Balancing Organizational Perspective Versus Organizational Environment


The second checkbox is whether the measure has an externally defined and documented methodology. Why is definition important? Definition is the precursor to repeatability and consistency, which allow comparability. Consistency and repeatability are prerequisites for generating the data needed to apply the scientific method, approaches such as Six Sigma and the tools used to support Kaizen. Finally, an external definition reduces the amount of effort required to construct and implement measurement programs.

Even where a definition exists, a wide range of nuances is possible. The range of definitions begins with the most defined, the functional precision of ISO functional metrics, and extends to the less defined methodology of Use Case Points, which began with a single academic definition and has evolved into many functional variants. The variants seen in UCP are a reflection of having no central control point to govern the method's evolution, which we will explore later in this model. The range of formality of definition is captured in Figure 2.

Figure 2

Figure 3 consolidates the view of formality of definition with the delineation between logical and physical metrics. Each measure has strengths and weaknesses. The first two items in our checklist are macro filters.

Figure 3

Each measure of size fits a specific combination of organizational goals, environmental constraints and needs; however, the field of potential software sizing metrics is wide and varied. Once the macro filter is applied, each subsequent step in the checklist will narrow the field of potential size measures.


Categories: Process Management

Creating a Company Where Everyone Gives Their Best

“Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do.” - Steve Jobs

What does it take to create a company where everybody gives their best, where they have their best to give?

It takes empathy.

It also takes encouraging people to be zestful, zany, and zealous.

It takes bridging the gap between the traits that make people come alive, and the traits that traditional management practices value.

In the book The Future of Management, Gary Hamel walks through what it takes to create a company where everyone gives their best so that employees thrive and companies create sustainable competitive advantage.

Resilience and Creativity: The Traits that Differentiate Human Beings from Other Species

Resilience and creativity are what separate us from the pack.

Via The Future of Management:

“Ask your colleagues to describe the distinguishing characteristics of your company, and few are likely to mention adaptability and inventiveness. Yet if you ask them to make a list of the traits that differentiate human beings from other species, resilience and creativity will be near the top of the list. We see evidence of these qualities every day -- in ourselves and in those around us.”

We Work for Organizations that Aren't Very Human

People are adaptive and creative, but they often work for organizations that are not.

Via The Future of Management:

“All of us know folks who've switched careers in search of new challenges or a more balanced life. We know people who've changed their consumption habits for the sake of the planet. We have friends and relatives who've undergone a spiritual transformation, or risen to the demands of parenthood, or overcome tragedy. Every day we meet people who write blogs, experiment with new recipes, mix up dance tunes, or customize their cars. As human beings, we are amazingly adaptable and creative, yet most of us work for companies that are not. In other words, we work for organizations that aren't very human.”

Modern Organizations Deplete Natural Resilience and Creativity

Why do so many organizations underperform?  They ignore or devalue the capabilities that make us human.

Via The Future of Management:

“There seems to be something in modern organizations that depletes the natural resilience and creativity of human beings, something that literally leaches these qualities out of employees during daylight hours. The culprit? Management principles and processes that foster discipline, punctuality, economy, rationality, and order, yet place little value on artistry, nonconformity, originality, audacity, and élan. To put it simply, most companies are only fractionally human because they make room for only a fraction of the qualities and capabilities that make us human. Billions of people show up for work every day, but way too many of them are sleepwalking. The result: organizations that systematically underperform their potential.”

Adaptability and Innovation Have Become the Keys to Competitive Success

There’s a great big gap between what makes people great and the management systems that get in the way.

Via The Future of Management:

“Weirdly, many of those who labor in the corporate world--from lowly admins to high powered CEOs--seem resigned to this state of affairs. They seem unperturbed by the confounding contrast between the essential nature of human beings and the essential nature of the organization in which they work. In years past, it might have been possible to ignore this incongruity, but no longer--not in a world where adaptability and innovation have become the sine qua non of competitive success. The challenge: to reinvent our management systems so they inspire human beings to bring all of their capabilities to work every day.”

The Human Capabilities that Contribute to Competitive Success

Hamel offers his take on the relative contribution that each human capability makes to value creation, recognizing that we now live in a world where efficiency and discipline are table stakes:

 

Passion      35%
Creativity   25%
Initiative   20%
Intellect    15%
Diligence     5%
Obedience     0%
Total       100%

 

Via The Future of Management:

“The human capabilities that contribute to competitive success can be arrayed in a hierarchy.  At the bottom is obedience--an ability to take direction and follow rules.  This is the baseline.  Next up the ladder is diligence.  Diligent employees are accountable.  They don't take shortcuts.  They are conscientious and well-organized.  Knowledge and intellect are on the next step.  Most companies work hard to hire intellectually gifted employees.  They value smart people who are eager to improve their skills and willing to borrow best practices from others.  Beyond intellect lies initiative.  People with initiative don't wait to be asked and don't wait to be told.  They seek out new challenges and are always searching for new ways to add value.  Higher still lies the gift of creativity.  Creative people are inquisitive and irrepressible.  They're not afraid of saying stupid things.  They start a lot of conversations with, 'Wouldn't it be cool if ...'  And finally, at the top lies passion.”

 

The Power of Passion

Passion makes us do dumb things.  But it's also the key to doing great things.

Via The Future of Management:

“Passion can make people do stupid things, but it's the secret sauce that turns intent into accomplishment.  People with passion climb over obstacles and refuse to give up.  Passion is contagious and turns one-person crusades into mass movements.  As the English novelist E.M. Forster put it, 'One person with passion is better than forty people merely interested.'”

Obedience is Worth Zip in Terms of Competitive Advantage

Rule-following employees won’t help you change the world.

Via The Future of Management:

“I'm not suggesting that obedience is literally worth nothing.  A company where no one followed any rules would soon descend into anarchy.  Instead, I'm arguing that rule-following employees are worth zip in terms of the competitive advantage they generate.  In a world with 4 billion nearly destitute souls, all eager to climb the ladder of economic progress, it's not hard to find biddable, hardworking employees.  And what about intelligence?  For years we've been told we're living in the knowledge economy; but as knowledge itself becomes commoditized, it will lose much of its power to create competitive advantage.”

Obedience, Diligence, and Expertise Can Be Bought for Next to Nothing

You can easily buy obedience, diligence, and expertise from around the world.

But that’s not what will make you the next great company or the next great thing or a great place to work.

Via The Future of Management:

“Today, obedience, diligence, and expertise can be bought for next to nothing.  From Bangalore to Guangzhou, they have become global commodities.  A simple example: turn over your iPod, and you'll find six words engraved on the back that foretell the future of competition: 'Designed in California. Made in China.'  Despite the equal billing, the remarkable success of Apple's music business owes relatively little to the company's network of Asian subcontractors.  It is a credit instead to the imagination of Apple's designers, marketers, and lawyers.  Obviously not every iconic product is going to be designed in California, nor manufactured in China.”

You Need Employees that are Zestful, Zany, and Zealous

If you want to bring out the best in people and what they are capable of, aim for zestful, zany, and zealous.

Via The Future of Management:

“The point, though, is this: if you want to capture the economic high ground in the creative economy, you need employees who are more than acquiescent, attentive, and astute--they must also be zestful, zany, and zealous.”

If you want to bring out your best, then break out your zest and get your zane on.

You Might Also Like

The Principles of Modern Management

How Employees Lost Empathy for their Work, for the Customer, and for the Final Product

No Slack = No Innovation

The Drag of Old Mental Models on Innovation and Change

The New Competitive Landscape

The New Realities that Call for New Organizational and Management Capabilities

Who’s Managing Your Company

Categories: Architecture, Programming

Agile 2014 – speaking and attending; a summary

Xebia Blog - Tue, 08/19/2014 - 17:14

So Agile 2014 is over again… and what an interesting conference it was.

What did I find most rewarding? Meeting so many agile people! My first conclusion was that attendees ranged from experts, like us agile consultants, to starting agile coaches, ScrumMasters and other people just getting acquainted with our cool agile world. Another trend I noticed was the scaled agile movement: everybody seems to be involved in it somehow, some more successfully than others, and some more true to agile than others.

What I missed this year was the movement of Scrum and agile outside IT, although my talk about Scrum for marketing drew a lot of positive responses. Everybody I talked to was interested in hearing more about it.

There was a talk, maybe even two, about agile for hardware, but I did not find a lot of buzz around it. Maybe next year? I do feel that there is potential here. I believe full-stack product development should be the future. Separate marketing and IT teams? Separate hardware and software teams? Splitting these still sounds like splitting design and developer teams to me.

But what a great conference it was. I met a lot of awesome people: some just entering the agile world, some authors of books that got me further into the agile movement. I talked to the guys from Spotify, a company that is unique in its agile adoption and maturity. And they don't even think that they are there yet. But then again, will anybody ever truly BE agile?

I met the guys from scrum.inc, who developed a great new scaled framework. Awesome ideas on that subject, and awesome potential to treat it as a community-created open framework; keep your eyes open for that!

I attended some nice talks too, and also some horrible ones. Or actually one, which should never have been presented in a 90-minute slot at a conference like this. But let's get back to the nice stories. Lyssa Adkins had a 'talk' about conflicts. The fun thing was that she actually facilitated the debate about scaled agile on stage. The session could have been better, but the idea and the potential of the subject are great.

Best session? Well, probably the Spotify guys. Theirs is still the greatest story out there of an agile company. The key takeaway of that session for me: agile is not an end state, but a journey. And if you take it as seriously as Spotify does, you might be able to make the working world a lot better. Compared to them, Xebia might not even be considered to be trying agile, and that is meant in a humble way while looking up to these guys - I know we are one of the front-runners of agile in the Netherlands. The greatest question in this session: 'Where is the PMO in your model….'

Well you clearly understand this …

Another inspiring session was the keynote by the CFO of Statoil about Beyond Budgeting. This was a good story that should become bigger in the near future, as it addresses one of the main questions I get when implementing agile in a company: "how do we plan, estimate and budget projects when we go and do agile?" Beyond Budgeting at least gets us a little closer.

Long story short: I had a blast in Orlando. I learnt new things and met a lot of cool people. My main takeaway: our community is growing, which teaches us that we are not there yet by a long shot. An awesome future is ahead! See you next year!

Tips for Newbie Business Analysts – Part II

Software Requirements Blog - Seilevel.com - Tue, 08/19/2014 - 17:00
You have become a model-making machine, an expert in elicitation sessions, and a proficient PowerPointer. You communicate early and often with your project team members, and you’ve read Betsy Stockdale’s blog post on Professionalism 101. It might seem as though you’re ready to take the leap and start owning areas of a project by yourself. […]
Categories: Requirements

Sponsored Post: Apple, Tumblr, Gawker, FoundationDB, Monitis, BattleCry, Surge, Cloudant, CopperEgg, Logentries, Couchbase, MongoDB, BlueStripe, AiScaler, Aerospike, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Apple has multiple openings. Changing the world is all in a day's work at Apple. Imagine what you could do here. 
    • Senior Software Engineer - iOS Systems. Do you love building highly scalable, distributed web applications? Does the idea of a fast-paced environment make your heart leap? Do you want your technical abilities to be challenged every day, and for your work to make a difference in the lives of millions of people? If so, the iOS Systems Carrier Services team is looking for a talented software engineer who is not afraid to share knowledge, think outside the box, and question assumptions. Please apply here.
    • Software Engineering Manager, IS&T WWDR Dev Systems. The WWDR development team is seeking a hands-on engineering manager with a passion for building large-scale, high-performance applications. The successful candidate will be collaborating with Worldwide Developer Relations (WWDR) and various engineering teams throughout Apple. You will lead a team of talented engineers to define and build large-scale web services and applications. Please apply here.
    • C++ Senior Developer and Architect - Maps. The Maps Team is looking for a senior developer and architect to support and grow some of the core backend services that support Apple Maps' front-end services. The ideal candidate would have experience with system architecture, as well as the design, implementation, and testing of individual components, and would also be comfortable with multiple scripting languages. Please apply here.
    • Site Reliability Engineer. The iOS Systems team is building out a Site Reliability organization. In this role you will be expected to work hand-in-hand with the teams across all phases of the project lifecycle to support systems and to take ownership as they move from QA through integrated testing, certification and production.  Please apply here.

  • Make Tumblr fast, reliable and available for hundreds of millions of visitors and tens of millions of users. As a Site Reliability Engineer you are a software developer with a love of highly performant, fault-tolerant, massively distributed systems. Apply here.

  • Systems & Networking Lead at Gawker. We are looking for someone to take the initiative on the lowest layers of the Kinja platform. All the way down to power and up through hardware, networking, load-balancing, provisioning and base-configuration. The goal for this quarter is a roughly 30% capacity expansion, and the goal for next quarter will be a rolling CentOS7 upgrade as well as planning/quoting/pitching our 2015 footprint and budget. For the full job spec and to apply, click here: http://grnh.se/t8rfbw

  • BattleCry, the newest ZeniMax studio in Austin, is seeking a qualified Front End Web Engineer to help create and maintain our web presence for AAA online games. This includes the game accounts web site, enhancing the studio website, our web and mobile-based storefront, and the front end for support tools. http://jobs.zenimax.com/requisitions/view/540

  • FoundationDB is seeking outstanding developers to join our growing team and help us build the next generation of transactional database technology. You will work with a team of exceptional engineers with backgrounds from top CS programs and successful startups. We don’t just write software. We build our own simulations, test tools, and even languages to write better software. We are well-funded, offer competitive salaries and option grants. Interested? You can learn more here.

  • UI Engineer - AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data - AppDynamics, a leader in next-generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of the software that manages application architectures. Apply here.
Fun and Informative Events
  • OmniTI has a reputation for scalable web applications and architectures, but we still lean on our friends and peers to see how things can be done better. Surge started as the brainchild of our employees wanting to bring the best and brightest in Web Operations to our own backyard. Now in its fifth year, Surge has become the conference on scalability and performance. Early Bird rate in effect until 7/24!
Cool Products and Services
  • Get your Black Belt in NoSQL. Couchbase Learning Services presents hands-on training for mission-critical NoSQL. This is the fastest, most reliable way to gain skills in advanced NoSQL application development and systems operation, through expert instruction with hands-on practical labs, in a structured learning environment. Trusted training. From Couchbase. View the schedule, read attendee testimonials & enroll now.

  • Now track your log activities with Log Monitor and be on the safe side! Monitor any type of log file and proactively detect potential issues that could hurt your business' performance. Detect log changes for: error messages, server connection failures, DNS errors, potential malicious activity, and much more. Improve your systems and behaviour with Log Monitor.

  • The NoSQL "Family Tree" from Cloudant explains the NoSQL product landscape using an infographic. The highlights: NoSQL arose from "Big Data" (before it was called "Big Data"); NoSQL is not "One Size Fits All"; Vendor-driven versus Community-driven NoSQL. Create a free Cloudant account and start the NoSQL goodness.

  • Finally, log management and analytics can be easy, accessible across your team, and provide deep insights into data that matters across the business - from development, to operations, to business analytics. Create your free Logentries account here.

  • CopperEgg. Simple, Affordable Cloud Monitoring. CopperEgg gives you instant visibility into all of your cloud-hosted servers and applications. Cloud monitoring has never been so easy: lightweight, elastic monitoring; root cause analysis; data visualization; smart alerts. Get Started Now.

  • Whitepaper Clarifies ACID Support in Aerospike. In our latest whitepaper, author and Aerospike VP of Engineering & Operations, Srini Srinivasan, defines ACID support in Aerospike, and explains how Aerospike maintains high consistency by using techniques to reduce the possibility of partitions.  Read the whitepaper: http://www.aerospike.com/docs/architecture/assets/AerospikeACIDSupport.pdf.

  • BlueStripe FactFinder Express is the ultimate tool for server monitoring and solving performance problems. Monitor URL response times and see if the problem is the application, a back-end call, a disk, or OS resources.

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.  http://aiscaler.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you, there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

Now vs. Not-Now Prioritization Along with Medium-Term Goals

Mike Cohn's Blog - Tue, 08/19/2014 - 15:00

The following was originally published in Mike Cohn's monthly newsletter. If you like what you're reading, sign up to have this content delivered to your inbox weeks before it's posted on the blog, here.

In last month’s newsletter I wrote about how we make personal financial decisions in a now vs. not-now manner. We don’t map out must-haves, should-haves, could-haves, and won’t-haves. And I promised that in this month’s newsletter I would cover a simple approach to now vs. not-now planning that still accommodates working toward a bigger vision for a product.

I always recommend having a medium-term vision for where a product is headed. I find a three-month horizon works well. At the start of each quarter, a product owner should put a stake in the ground saying “Here’s where we want to be in three months.” This is done in conjunction with the team and other stakeholders, but the ultimate vision for a product is up to the product owner.

A product owner doesn’t need to be overly committed to the vision—it can be changed. But, without a stake in the ground a few months out, prioritization decisions are likely to be driven by whatever emergencies erupt right before sprint planning meetings. Without a vision, the urgent always wins over the strategic.

For choosing between competing ideas for a medium-term vision, I like using a formal approach—that is, something I can explain to someone else. I want to be able to say, “I chose to focus on such-and-such rather than this-and-that” and then show some analysis indicating how I made that decision. I’ve written elsewhere about relative weighting, theme screening and theme scoring—and we have tools on this website for performing those analyses.

But at the start of each sprint, a product owner needs to make the smaller now vs. not-now decisions of prioritizing user stories to be worked on in the next sprint. Rather than using a formal, explainable approach for that, I advise product owners to consider four things about the product backlog items they are evaluating:

  1. how valuable the feature will be
  2. the learning that will occur by developing the feature
  3. the riskiness of the feature
  4. any change in the relative cost of the feature

I do this by first sorting the candidate product backlog items on attribute one, the value of each feature. There’s nothing special about this; it’s how product people have prioritized features for years.

But I don’t stop there. I use items two through four to adjust the now-sorted backlog. For the most part, I will move product backlog items up rather than down based on the influence of these additional factors. Let me explain what each factor is about.

The second factor refers to how much learning will occur by developing the feature. Learning can take many forms—for example, a team might learn about the technology being used (“this vendor’s library isn’t as easy as we were told it would be”) or a product owner might learn how well users respond to a new user interface. If the learning that will result from developing a particular product backlog item is significant, I will move that item up the product backlog so it is developed in the coming sprint.

As for riskiness, if a given risk cannot be avoided, I prefer doing that product backlog item sooner rather than later so that I can learn the impact of the risk. And so, I will move the product backlog item up into the current sprint. If, however, there is a chance of avoiding a risk entirely (perhaps by not doing the feature at all), I will move that product backlog item out of the current sprint. I’ll then hopefully continue to do that in each subsequent sprint, thereby avoiding that risk entirely.

Finally, some features can be cheap if they are done now or expensive if they are put off. When I see such an item on a product backlog, I will move it into the current sprint to avoid the higher cost of delaying the feature.

By combining the use of these four factors when selecting items for a sprint with a formal approach to establishing a medium-term, three-month vision, you’ll be able to successfully prioritize in a now vs. not-now manner within the context of achieving a bigger goal.
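For readers who think in code, here is a minimal sketch of the four-factor adjustment described above. It is purely illustrative: the BacklogItem fields and the orderForSprint function are hypothetical names, not anything from Mike Cohn's tooling, and the real value judgments are of course made by a product owner, not by a sort function.

interface BacklogItem {
  name: string;
  value: number;            // factor 1: how valuable the feature will be
  learning: boolean;        // factor 2: significant learning from building it now
  unavoidableRisk: boolean; // factor 3: a risk we cannot dodge, so face it early
  avoidableRisk: boolean;   // factor 3: a risk we might dodge by never building it
  costOfDelay: boolean;     // factor 4: much cheaper to build now than later
}

function orderForSprint(backlog: BacklogItem[]): BacklogItem[] {
  // Step 1: sort on value alone, as product people have done for years.
  const byValue = [...backlog].sort((a, b) => b.value - a.value);
  // Step 2: mostly move items up (learning, unavoidable risk, cost of delay)...
  const up = (i: BacklogItem) => i.learning || i.unavoidableRisk || i.costOfDelay;
  // ...and move items with an avoidable risk out of the current sprint.
  const out = (i: BacklogItem) => i.avoidableRisk && !up(i);
  return [
    ...byValue.filter(up),                     // pulled into the coming sprint
    ...byValue.filter(i => !up(i) && !out(i)), // ordered by value alone
    ...byValue.filter(out),                    // deferred, hoping to avoid the risk
  ];
}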

AngularJS directives for c3.js

Gridshore - Tue, 08/19/2014 - 10:19

For one of my projects we wanted to create some nice charts. It feels like something you often want but do not do because it takes too much time. This time we really needed it. We had a look at the D3.js library: a very nice library, but with so many options and a lot to do yourself. Then we found c3.js; check the blog post by Roberto: Creating charts with C3.js. Since I do a lot with AngularJS, I wanted to integrate these c3.js charts with AngularJS. I already wrote a piece about the integration. Now I went one step further by creating a set of AngularJS directives.

You can read the blog on the trifork blog:

http://blog.trifork.com/2014/08/19/angularjs-directives-for-c3-js-chart-library/
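
For a flavor of what such a directive can look like, here is a minimal sketch in TypeScript. It is an assumption-laden illustration, not the code from the linked post: the module and directive names are made up, and it relies only on the documented c3.generate and chart.destroy calls (c3's bindto accepts a DOM element, which lets the chart render inside the directive's own element).

// Assumes AngularJS 1.x and c3.js are loaded globally via <script> tags.
declare const angular: any;
declare const c3: any;

angular.module('charts', []).directive('c3Chart', () => ({
  restrict: 'E',
  scope: { config: '=' }, // chart configuration supplied by the controller
  link: (scope: any, element: any) => {
    // Render the chart inside this directive's element rather than a CSS selector.
    const chart = c3.generate(angular.extend({ bindto: element[0] }, scope.config));
    // Clean up the chart when the scope is torn down.
    scope.$on('$destroy', () => chart.destroy());
  },
}));

A controller would then expose something like $scope.config = { data: { columns: [['visits', 30, 200, 100]] } } and the markup becomes <c3-chart config="config"></c3-chart>.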

The post AngularJS directives for c3.js appeared first on Gridshore.

Categories: Architecture, Programming

Systems Thinking and Capabilities Based Planning

Herding Cats - Glen Alleman - Tue, 08/19/2014 - 04:23

The term Systems Thinking is often used in vague and unspecified ways to mean "think about the system and all will emerge in a way needed to produce value."

Systems Thinking is a term tossed around many times with little regard for what it actually means and its relationship to Systems Engineering and Engineered Systems.

But systems thinking is much more than that. It provides a rigorous way of integrating people, purpose, process and performance by: †

  • relating systems to their environment;
  • understanding complex problem situations;
  • maximising the outcomes achieved;
  • avoiding or minimising the impact of unintended consequences;
  • aligning teams, disciplines, specialisms and interest groups;
  • managing uncertainty, risk and opportunity.

†  "How Systems Thinking Contributes to Systems Engineering," INCOSE UK, Z7, Issue 1.0, March 2010.

Related articles: A Breathtaking Paper; Positive Deviance
Categories: Project Management