
Counting IFPUG Function Points: Getting to a Number

Putting Things Together!

The process for counting IFPUG Function Points culminates with the counter translating the sized data and transaction functions into a number.

Using our examples from ‘Counting IFPUG Function Points: Small, Medium and Large Logical Files?’ and ‘Counting IFPUG Function Points: Sizing Transactions,’ our function point count would be:

  • Employee ILF: 2 RETs and 15 DETs – Low
  • Zip Code EIF: 1 RET and 1 DET – Low
  • Add Employee EI: 2 FTRs and 10 DETs – Average
  • Inquire on Employee EQ: 1 FTR and 10 DETs – Low

The count could be translated into a simple matrix as follows:

Component               | Low | Average | High | Total
Internal Logical File   | 1   |         |      |
External Interface File | 1   |         |      |
External Input          |     | 1       |      |
External Output         |     |         |      |
External Inquiry        | 1   |         |      |

IFPUG Function Points provide a weight for each component/size combination. The weight translates the low, average and high representations of size into a number that can be used for estimation and other metrics.  We can create an unadjusted function point count by adding the weights to the count matrix and then multiplying each component count by the weight.  The sum of all of the extended weights yields the unadjusted count:

Component               | Low    | Average | High    | Total
Internal Logical File   | 1 x 7  | __ x 10 | __ x 15 | 7 fp
External Interface File | 1 x 5  | __ x 7  | __ x 10 | 5 fp
External Input          | __ x 3 | 1 x 4   | __ x 6  | 4 fp
External Output         | __ x 4 | __ x 5  | __ x 7  | 0 fp
External Inquiry        | 1 x 3  | __ x 4  | __ x 6  | 3 fp

Total: 19 fp
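
To make the arithmetic concrete, here is a minimal sketch of the same calculation in code. It is only an illustration of the matrix above; the class and method names are invented for this post, not part of any IFPUG tooling.

public class UnadjustedCount
{
    // weight matrix from the IFPUG tables above: rows are the five
    // component types, columns are Low, Average, High
    private static final int[][] WEIGHTS = {
            { 7, 10, 15 },  // Internal Logical File
            { 5,  7, 10 },  // External Interface File
            { 3,  4,  6 },  // External Input
            { 4,  5,  7 },  // External Output
            { 3,  4,  6 },  // External Inquiry
    };

    // counts[component][size] holds how many functions of each
    // component type landed in each size category
    public static int unadjusted( int[][] counts )
    {
        int total = 0;
        for ( int component = 0; component < WEIGHTS.length; component++ )
            for ( int size = 0; size < 3; size++ )
                total += counts[component][size] * WEIGHTS[component][size];
        return total;
    }

    public static void main( String[] args )
    {
        // the example: one low ILF, one low EIF, one average EI,
        // no EOs and one low EQ
        int[][] counts = {
                { 1, 0, 0 },
                { 1, 0, 0 },
                { 0, 1, 0 },
                { 0, 0, 0 },
                { 1, 0, 0 },
        };
        System.out.println( unadjusted( counts ) + " fp" ); // prints "19 fp"
    }
}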

If we are doing a project comprising only changes to the four components in the example, the total unadjusted function point count would be 19 function points.  The International Organization for Standardization compliant version (ISO/IEC 14143-1:2007) of IFPUG Function Points uses only this unadjusted count.  The classic version of IFPUG function point counting includes two further steps.

Converting an unadjusted count into an adjusted function point count requires an assessment of fourteen General System Characteristics (GSCs).  The GSCs reflect a set of typical features that applications exhibit but that are not generally counted as function points. The features the GSCs evaluate were originally identified to correct for the differences seen between batch and on-line applications.  For example, GSC #1 (Data Communication) is rated on a scale ranging from 0 (pure batch application) to 5 (the application is more than a front-end and supports more than one type of TP communication protocol).  Each of the 14 GSCs is evaluated using guides and a similar zero-to-five scale.  Summing all 14 GSC ratings for an application yields a value between 0 and 70; that sum is referred to as the Total Degree of Influence (TDI).  The TDI is then used to create the Value Adjustment Factor (VAF) using the following expression: VAF = (TDI * 0.01) + 0.65.  The product of the VAF and our original unadjusted count is the adjusted function point count. The adjustment factor can therefore adjust a function point count by plus or minus 35%.  So, in the example above, the adjusted count could range from roughly 12 to 26 function points.
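
As a sketch of that adjustment step (again with hypothetical names, and assuming the 14 GSC ratings have already been assessed on the 0 to 5 scale):

public static double adjustedCount( int unadjustedFp, int[] gscRatings )
{
    int tdi = 0;                           // Total Degree of Influence, 0..70
    for ( int rating : gscRatings )        // 14 GSC ratings, each 0..5
        tdi += rating;
    double vaf = ( tdi * 0.01 ) + 0.65;    // Value Adjustment Factor, 0.65..1.35
    return unadjustedFp * vaf;             // 19 fp adjusts to between 12.35 and 25.65
}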

Every application will have its own unique VAF, and VAFs generally do not change much after an application is initially developed.  However, the ISO version of the counting process does not use the VAF, and IFPUG now judges it optional. Furthermore, the unadjusted count is used in most commercial estimation tools, which apply their own criteria for adjusting the count for other factors that impact development and support effort.  In the long run, the unadjusted count will likely become the norm because the process is simpler and quicker to use.

 

 


Categories: Process Management

What is Chaos? And how we found out about it...

Software Development Today - Vasco Duarte - Fri, 12/27/2013 - 07:00

As he looked at the numbers, he noticed that the oscillations of his model did not repeat.

In fact, he entered the numbers again, and again, and again, but his model refused to behave the same way twice.

During our long education we were taught that given the initial state of a system (the present parameters that define the state of the system) plus the equations that fully describe the system, you will be able to plot the future behavior of the system forever.

And we have plenty of evidence that this approach works. For one, we are able to predict when Comet Halley will next visit our corner of the galaxy. And many astronomers were able to predict that last visit thousands of years ago!

So, why was his model stubbornly behaving differently every time he punched the numbers in? After all he knew the system perfectly - he had designed it!

The best way he had to describe this "unpredictable" behavior was with the word "chaotic": a never-ending sequence of never-repeating patterns. Nothing was the same, even though the initial state was the same and he was the one defining the equations for this toy weather model. I mean, he had defined ALL the equations...

It took a few days, but he figured it out. He had entered the parameters with a precision of 1/1000, but during processing the computer executing the model used numbers with a precision of 1/1000000. On first consideration this did not seem a relevant difference; after all, expecting a difference of 1/1000 to matter was like expecting the flap of a butterfly’s wings in China to create a storm in North America.

Systems that would never repeat in behavior even if they ran forever

Later, this and other experiments would be repeated all over the world, in many different domains, and the results would be similar. Scientists everywhere were discovering systems that exhibited "sensitive dependence on initial conditions" (aka the Butterfly Effect), the scientific definition of "chaos", which later became the popular term for systems that never repeat their behavior even if they run forever. These systems exhibit an infinite variety of behavior when certain conditions are met. What are those conditions? That we will explore in a later post.
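
Sensitive dependence is easy to reproduce for yourself. The sketch below is not the toy weather model from the story; it uses the logistic map, one of the simplest systems known to behave chaotically, and shows two runs that start one millionth apart losing all resemblance within a few dozen steps:

public class ButterflyEffect
{
    public static void main( String[] args )
    {
        // logistic map: x -> r * x * (1 - x); r = 4.0 is in the chaotic regime
        double r = 4.0;
        double x1 = 0.400000;   // initial state as entered, 3 decimal places
        double x2 = 0.400001;   // the "same" state, one millionth away
        for ( int step = 1; step <= 50; step++ )
        {
            x1 = r * x1 * ( 1 - x1 );
            x2 = r * x2 * ( 1 - x2 );
            if ( step % 10 == 0 )
                System.out.printf( "step %d: %.6f vs %.6f%n", step, x1, x2 );
        }
        // by around step 30 the two trajectories share no meaningful digits
    }
}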

Photo credit: John Hammink, follow him on twitter

Integrity 2: On Being Under the Radar

James Bach’s Blog - Fri, 12/27/2013 - 04:37

Here’s a comment from a Context-Driven tester with more patience than me. (His message is italicized and indented. My reply is interjected in normal text…)

Sami Söderblom writes:

Hi James,

I skimmed your book “Secrets of a Buccaneer-Scholar” and found the same spirit from this post. It was highly inspiring, but it also bothered me a bit. I started to write about this to Twitter (under @pr0mille), but my point was lost quite quickly. Therefore I CHOSE to take the mental effort and explain myself here… ;)

I work at Sogeti Finland. For many it’s a company that doesn’t actually promote integrity. We sell such atrocities as TMap and TPI, and many of our consultants enforce them blindly. The vast majority are unable to tell that they are promoting bad things, because they don’t see them as such. They don’t feel that their integrity is jeopardized. Some have awakened from this dream, but still choose to promote bad things, because their bonuses, career development and ultimately reputation within the company depend on it. They have chosen to lose their integrity.

Yes, it is not necessarily poor integrity to embrace something that some people (like me) think is bad. My model of integrity is based on wholeness of self, and congruence between self as presented to others and self as experienced personally. Therefore: IF I fancy myself an excellent tester, AND I advocate something that I believe is inconsistent with excellent testing, THEN that is an integrity problem. If someone honestly thinks that a “bad” thing is good, it’s a knowledge or wisdom problem, but not a challenge to integrity.

I suspect there are people in Sogeti who believe in the value of TMap and TPI. I question their judgment, of course. I wonder if they have ever seriously studied testing. And I know that any of them who decided that TMap should be improved would face stiff resistance from most other people nearby, to the degree that improvement of the model is quite unlikely to occur. But none of that– in itself– is an ethics problem. Of course being intentionally, determinedly ignorant over a period of time is a different matter. I don’t understand how anyone who pays attention to developments in testing in the last 20 years could continue to hold on to such outmoded thinking.

At first I was blissfully ignorant. After gaining experience and trying to apply TMap and TPI to actual work, study them and really understand I started to see the flaws and the fact that they just don’t work. Worse yet, they can seriously harm the organizations they are implemented in. This weighed more on my scale than any bonuses or internal reputation. So I started to rebel.

Bad move.

By exposing that my integrity was more important than questionable company goals, I became open to attacks. I had to spend all of my time making my case, explaining myself to people that could not be convinced. It was a head on collision. Force against force. I didn’t make any progress in changing things for the better.

I don’t generally think of my integrity-related actions as moves in a game but rather as expressions of what I am. I’m not thinking “what do I need to say or do in order to get Sami not to fight me” but rather a three-step process FIRST: “What is true, relevant, helpful, and reasonably non-abusive?” THEN: “What is an effective way to live by that and communicate it?” If I proceed that way, sometimes opposition melts away. If it doesn’t melt away, then perhaps it’s right to fight. Ah, and here is the third step: “IF there is opposition, try to understand it and learn from that.”

Three steps, but none of them is about the ends justifying whatever means that get me what I want.

Sometimes, when integrity seems not to be at stake, I use a strategy of thinking “what does this person want and how can I support him?” Management skills training and years of experience as a husband and father help me have the patience to do that. Then, in a roundabout way without me being in charge, we might end up doing great work. It’s sort of a Taoist strategy, but it takes a lot of patience and time (more than I usually have, in my old age).

Then I remembered my jujutsu teachings. Jujutsu is all about manipulating the opponent via indirect attacks. In jujutsu you never go with force against force. So I changed my approach. I started to play ball. I accepted assignments that were less than optimal integrity-wise, but in them I used grassroots tactics to change things. For example, I started to remove all the constraints that prevented people from thinking. I removed expected results from test cases. I simplified test planning. I incorporated roles and made people talk with each other rather than with systems. I never talked about exploratory or context-driven testing, I never mentioned heuristics, I never bothered people with anything but the actual work.

I flew under the radar.

This sounds like the Taoist thing. Of course, that’s a viable strategy. But are you doing it honestly? “…flew under the radar” and “manipulating” sound like you could be attempting to deceive people.

This is what I meant by the fast food metaphor. A little (polite?) deception won’t kill your integrity outright, but it could make you sick. There are degrees of disguising your intent and methods (“blacker lies” and “whiter lies” shall we say) and I hope you are struggling with these ethical questions as you go.

I find it helpful to ask myself: what is in the best interests of the company? Am I taking my paycheck and helping the company succeed on its own terms, or am I a parasite living off its life juices merely for my own gain?

This took some time, some hard days and some dents to my integrity too, but now it has started to pay off. Because I’ve introduced success rather than conflict I now have more and more good or even awesome assignments, and I know how to act in the bad ones. I have even applied these tactics internally and good things are happening as we speak. I cannot talk about all of them, but it’s safe to say that TMap won’t be the same… ;)

It’s too easy to be hasty and send the wrong message over Twitter. I hope you see that I’m far from considering myself a victim or prisoner. Yes, I work in a challenging environment, but I’m choosing to do so. Sometimes I’ve thought of it as doing volunteer work in a conflict area. I believe I can matter, and that gives purpose to my work. I want to believe that it builds my integrity too.

You’re using active, responsible language, here. You seem to be taking responsibility. The most impressive part is that you have named yourself and your employer. That seems dangerous, to me, but dangerous is a good way that I admire.

Perhaps predictably for a guy in the midst of a Finnish winter, you seem eager for sunlight!

Ethics Under the Radar

I have heard it called “stealth testing.” Doing the right thing secretly, while publicly following a broken process. I am not completely sure that is what Sami is talking about, but the phrase “under the radar” suggests that he might be.

I have certainly used “stealth mode,” but it troubles me. I ask myself: How would I feel if someone who worked for me was “under the radar” while negating or ignoring my corporate strategy? As it happens I have been in this position several times that I know of. My answer is that I would feel great or terrible depending on what I thought the motive of the person was. I call it “constructive insubordination” when I am disobeyed in a way that honors my intent and my prerogatives as a manager. I celebrate that. It’s like having magic elves working for you. At the same time, it calls into question whether I am being over-controlling as a manager (or father or husband… the same principle works in any situation where you establish a contract with someone to do something for you).

 

 

 

 

Categories: Testing & QA

Quote of the Day

Herding Cats - Glen Alleman - Fri, 12/27/2013 - 01:04

Either write something worth reading or do something worth writing about - Benjamin Franklin

When simple, and many times simple-minded, platitudes are offered for solving the complex, and many times wicked or intractable, problems of our project management world, think of Ben.

Tell me something actionable, with corrective outcomes in units of measure meaningful to the decision makers. This almost always involves doing something that is obvious, measurable, and that has been done before, but we just forgot, didn't pay attention, or have chosen to stop doing.

My favorite to date is the link to a NYT article about how the federal government needs to improve its acquisition process. The article lists programs, all analyzed by the GAO, all well reviewed, with corrective actions being taken. The OP has no connection to the problems and the possible corrections other than having read a poorly informed newspaper article.

So Ben's advice can be applied here. Write something worth reading: stop restating the obvious. Or do something outside your personal anecdotal experience worth writing about, for others to test in their domain. Either way, we've got very qualified people identifying gaps, root causes, and proposed corrective actions for almost every problem on the planet. Start by researching what they're saying first, then see if what is claimed to be the smell of dysfunction has a solution that can be directly applied to your domain. If not, there is a real opportunity to contribute to the art and practice of spending other people's money in exchange for creating value.

Categories: Project Management

Counting IFPUG Function Points: Sizing Transactions

 

Waves come in multiple sizes just like transactions.

Each of the three types of transactions identified in the IFPUG Function Point methodology is classified into one of three categories: low, average and high (small, medium and large).  The process for sizing transactions is similar to the process we used to size the data functions discussed before. The size of a transaction is based on the interplay between file types referenced (FTRs) and data element types (DETs).  An FTR is an internal logical file read or updated, or an external interface file read.  The function point counter reviews each transaction and counts the number of ILFs read or updated and the EIFs read.  The total number of FTRs is used to determine the size (remember that IFPUG uses the word complexity).  In our example of an HR system, we described the human resource clerk sitting in front of a computer entering the data needed to add a new employee (Employee Number, Name, Address, City, State, Zip, Work Location, and Job Code); after entering the data, the clerk hits the enter key and the data is committed to the employee ILF. Upon review it was pointed out that the zip code entered is checked against the zip code/postal code file provided by the Post Office.  The number of FTRs for this external input transaction would therefore be two (employee and zip/postal code).  The counting rules for FTRs are the same whether the transaction is an EI, EO or EQ, with the exception that an EQ can never update a logical file; therefore the FTRs for an EQ should only reflect files that are read.

DETs are defined as unique, user-recognizable, non-repeated attributes.  This is the same definition of a DET that we used when discussing the sizing of data functions.  Counting DETs for transactions is similar to counting DETs for data functions, with a few more transaction-related rules:

  • Count one DET for each attribute that enters or exits the boundary during the processing of the transaction.
  • Count one DET per transaction for the ability to send a response message (only one per transaction).
  • Count one DET per transaction for the ability to make the transaction happen (only one per transaction).

Using our example of entering an employee: the clerk types in 8 fields, so the counter counts 8 DETs entering the boundary of the application.  When the clerk is finished typing, he or she clicks the post icon (or presses enter), at which point the zip code is validated.  A message is returned if the zip code is wrong; if it is correct, and the employee does not already exist, a message is displayed saying that the employee has been added.  In this case we would count one DET for the message and one DET for the ability to make the transaction happen. In our example the total number of DETs would be 10.

Just like the data functions, IFPUG provides a simple matrix to derive the size of external inputs.

FTRs  | 1 – 4 DETs | 5 – 15 DETs | 16+ DETs
0 – 1 | Low        | Low         | Average
2     | Low        | Average     | High
3+    | Average    | High        | High

Using the matrix is a matter of counting the number of FTRs a transaction uses, finding the corresponding row, and then finding the column for the number of DETs counted for the transaction.  In the example, two FTRs and 10 DETs equate to an average external input.

The size/complexity matrix for external outputs and external inquiries is a little different.

FTRs  | 1 – 5 DETs | 6 – 19 DETs | 20+ DETs
0 – 1 | Low        | Low         | Average
2 – 3 | Low        | Average     | High
4+    | Average    | High        | High

A quick example of an external inquiry, using our HR example: suppose our mythical HR clerk needs to look up an employee (with the same 8 fields noted before).  To accomplish this feat, the clerk types in an employee number and presses enter. If the employee number is bad (or the employee does not exist), a message is returned.  If found, all eight fields are displayed.  We would count 10 DETs: one DET for the employee number entering, one DET for pressing enter, one DET for the ability to send a message, and seven DETs for the employee data returned (exits); the employee number both enters and exits, so it is only counted once.  The zip code is not validated on an inquiry, so the transaction has one FTR and 10 DETs and is therefore a low external inquiry.

The process is repeated for each transaction.
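
For readers who prefer code to tables, both matrices can be expressed as a small lookup. This is only an illustration of the tables above; the class and method names are made up for this post, not IFPUG artifacts.

public class TransactionComplexity
{
    public enum Size { LOW, AVERAGE, HIGH }

    // both matrices share the same Low/Average/High layout;
    // only the FTR and DET band boundaries differ
    private static final Size[][] MATRIX = {
            { Size.LOW,     Size.LOW,     Size.AVERAGE },
            { Size.LOW,     Size.AVERAGE, Size.HIGH },
            { Size.AVERAGE, Size.HIGH,    Size.HIGH },
    };

    // external input bands: FTRs 0-1, 2, 3+ and DETs 1-4, 5-15, 16+
    public static Size externalInput( int ftrs, int dets )
    {
        int row = ftrs <= 1 ? 0 : ftrs == 2 ? 1 : 2;
        int col = dets <= 4 ? 0 : dets <= 15 ? 1 : 2;
        return MATRIX[row][col];
    }

    // external output / inquiry bands: FTRs 0-1, 2-3, 4+ and DETs 1-5, 6-19, 20+
    public static Size outputOrInquiry( int ftrs, int dets )
    {
        int row = ftrs <= 1 ? 0 : ftrs <= 3 ? 1 : 2;
        int col = dets <= 5 ? 0 : dets <= 19 ? 1 : 2;
        return MATRIX[row][col];
    }

    public static void main( String[] args )
    {
        System.out.println( externalInput( 2, 10 ) );   // AVERAGE: the Add Employee EI
        System.out.println( outputOrInquiry( 1, 10 ) ); // LOW: the Inquire on Employee EQ
    }
}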


Categories: Process Management

Countdown Deal for Getting Results the Agile Way on Kindle

“In life you need either inspiration or desperation.” – Tony Robbins

Is 2014 going to be YOUR year?

Let’s make it so.

My best-selling book on time management and productivity is on sale for a limited time offer, through a Countdown Deal:

Getting Results the Agile Way on Kindle

What is Getting Results the Agile Way all about?  It’s a simple system for meaningful results in work and life.   It’s the best synthesis of what I know for mastering time management, motivation, and personal productivity. (And, it’s designed to be “better together” – use it with your favorite existing tools of choice, whether that’s Franklin-Covey, the Pomodoro Technique, Getting Things Done, etc.)

The way this Countdown Deal works is that the price goes from lower to higher during the course of 7 days.

As I currently understand it, here’s the price breakdown:

  • Dec 26, 27th – $0.99
  • Dec 28, 29th – $2.99
  • Dec 30, 31st, and Jan 1st – $4.99

In other words, the sooner you get it, the cheaper it is.

Here are the key benefits of the book:

  • Triple your productivity.  (When you combine a strengths-based approach + Power Hours, you’ll ignite your productivity.)
  • Set yourself up for success on a daily basis
  • Embrace change and get better results in any situation
  • Focus and direct your attention with skill
  • Use your strengths to create a powerful edge for getting results
  • Change a habit and make it stick
  • Achieve better work-life balance with a system that works
  • Spend more time doing the things you love

Here’s what others are saying about the book:

"Agile Results delivers know-what, know-why and know-how for anyone who understands the value of momentum in making your moments count."– Dr. Rick Kirschner, bestselling author

"JD’s ability to understand and cut to the real issues and then apply techniques that have proven to be successful in other situations is legendary at Microsoft. Over the years I have learnt that he will not recommend something or someone unless he believe it the entire value chain, making the advice you get even more potent. It’s a little like a whirlwind and you have to be prepared for a ride but if you want results and you want them fast, you talk to JD."– Mark Curphey, CEO & Founder, SourceClear

“JD is the go-to-guy for getting results, and Agile Results demonstrates his distinct purpose – he shows how anyone can do anything, better. This book has simple, effective, powerful tools and ideas that are easy enough for everyone to apply in their work and lives, so that they get the results they’d like, even the impossible and the unexpected.”– Janine de Nysschen, Changemaker and Purpose Strategist, Whytelligence

Getting results and being YOUR best is a personal thing, which is why I designed it as a personal results system.

If you already are using Agile Results, tell me a story.   Tell me the good, the bad, and the ugly.   I always want to know what’s working or not working for you.  Each week, I receive emails from people around the world with their stories of personal victories.  For some, it’s fast, as if it was the missing link they needed to help them connect the dots.  For others, it’s more like a slow crescendo.  And, for others, it’s more like a game of slow and steady that wins the race.

But what everybody seems to have in common is that they feel like they got back on path and are instantly getting better at spending the right time, on the right things, the right way, with the right energy.  

This is where breakthroughs happen.

Categories: Architecture, Programming

The Beautiful Design Winter 2013 Collection on Google Play

Android Developers Blog - Thu, 12/26/2013 - 20:29
Posted by Marco Paglia, Android Design Team

While beauty’s in the eye of the beholder, designing apps for a platform also requires attention to platform norms to ensure a great user experience. The Android Design site is an excellent resource for designers and developers alike to get familiar with Android’s visual, interaction, and written language. Many developers have taken advantage of the tools and techniques provided on the site, and every now and then we like to showcase a few notable apps that go above and beyond the guidelines.

This summer, we published the first Beautiful Design collection on Google Play. Today, we're refreshing the collection with a new set of apps just in time for the holiday season.

As a reminder, the goal of this collection is to highlight beautiful apps with masterfully crafted design details such as beautiful presentation of photos, crisp and meaningful layout and typography, and delightful yet intuitive gestures and transitions.

The newly updated Beautiful Design Winter 2013 collection includes:

Timely (by Bitspin), a clock app that takes animation to a whole new level. Screen transitions are liquid smooth and using the app feels more like playing with real objects than fussing around with knobs and buttons. If you’ve ever wondered if setting an alarm could be fun, Timely unequivocally answers “yes”.

Circa, a news reader that’s fast, elegant and full of beautiful design details throughout. Sophisticated typography and banner image effects, coupled with an innovative and "snappy" interaction, make reading an article feel fast and very, very fun.

Etsy, an app that helps you explore a world of wonderful hand-crafted goods with thoughtfully designed screen transitions, beautifully arranged layouts, and subtle flourishes like a blur effect that lets you focus on the task at hand. This wonderfully detailed app is an absolute joy to use.

Airbnb, The Whole Pantry, Runtastic Heart Rate Pro, Tumblr, Umano, Yahoo! Weather… each with delightful design details.

Grand St. and Pinterest, veterans of the collection from this summer.

If you’re an Android developer, make sure to play with some of these apps to get a sense for the types of design details that can separate good apps from great ones. And remember to review the Android Design guidelines and the Android Design in Action video series for more ideas on how to design your next beautiful Android app.


Categories: Programming

How You Are Wasting Your Time

Making the Complex Simple - John Sonmez - Thu, 12/26/2013 - 17:30

I write quite often on this blog about starting a side business or becoming self-employed, but one of the biggest struggles in getting something new started is finding time. The same goes for getting fit and getting in shape. So, if you want to be successful in your career, or in life in general, it […]

The post How You Are Wasting Your Time appeared first on Simple Programmer.

Categories: Programming

Counting IFPUG Function Points: The Transactions

Whether an External Output or an External Inquiry, the goal is to present data to the user!

As noted in Counting IFPUG Function Points, The Process and More, after classifying and counting the data functions our attention turns to the transaction functions.  There are three types of transactions: external inputs (EI), external inquiries (EQ) and external outputs (EO). The person who taught me IFPUG Function Points more than a few years ago pointed out that the transaction functions move data.

The precise definition of an external input is “an elementary process that processes data or control information sent from outside the boundary[1].”  The definition goes on to say that an EI must either update one or more ILFs and/or alter the behavior of the system.  The former is more typical and the latter more esoteric. An EI transaction can bring data into an application from a screen, a file, a feed from another application or a sensor.  An EI can be batch or online.  Here are a few examples of external inputs:

Common:  A human resource clerk sits in front of a laptop and enters the data needed to add a new employee (Employee Number, Name, Address, City, State, Zip, Work Location, and Job Code) and after entering the data the clerk hits the enter key and the data is committed to the employee ILF.

Less Common:  A temperature sensor reads the temperature from a pressure reactor in a chemical process.  The data is sent to a control application that raises or lowers the temperature in the reactor by regulating the heating coils.  The input is used by the software to adjust the behavior of the application.

The precise definition of an external inquiry is “an elementary process that sends data or control information outside the boundary[2].”   The EQ must retrieve the data from a logical file, and can’t contain derived data, perform math, change the behavior of the system or update an internal logical file. The easiest way to imagine an EQ is as a simple, direct retrieval of data.  Using our human resource system example, a simple EQ would be for the HR clerk to type an employee name into a search field, press the enter key and then see the information retrieved.

The third transaction is the external output, defined as “an elementary process that sends data or control information outside the boundary and includes additional processing beyond that of an external inquiry[3].”  The processing is one or more of the things an EQ can’t do, i.e. contain derived data, perform math, change the behavior of the system or update an internal logical file.  Examples of EOs abound.  Every morning I run and review a report of the downloads of the Software Process and Measurement Cast.  The report for each podcast includes the monthly downloads for the past three months, an overall total and then a calculated grand total since the beginning of the podcast.  A report with a calculated total would be an external output.

All three definitions use the term ‘elementary’, which just means that the transaction must represent the smallest whole unit of work that is meaningful to the user (any person or thing that interacts with the application). IFPUG function points include three basic transactions that move data to and from internal logical files and external interface files.

Like the data functions, the transaction functions come in three distinct sizes, which we will explore next.

[1] Function Point Counting Practice Manual 4.0, Part 2 7-3

[2] Function Point Counting Practice Manual 4.0, Part 2 7-3

[3] Function Point Counting Practice Manual 4.0, Part 2 7-3


Categories: Process Management

Season's greetings

Coding the Architecture - Simon Brown - Tue, 12/24/2013 - 11:28

2013 has been a fantastic year for me and I've had the pleasure of meeting so many people in more than a dozen countries. I just wanted to take this opportunity to say thank you and to wish all of my friends and clients a Merry Christmas. I hope that you have a happy and healthy 2014.

Instead of sending greeting cards, I'll be making a donation to the Channel Islands Air Search and RNLI Jersey. Both are voluntary organisations made up of people who regularly risk their own lives to save others. Jersey has experienced some very stormy weather during recent weeks where the RNLI and the CI AirSearch have both been called out. The CI AirSearch aeroplane unfortunately had to make an emergency landing too. They're incredibly brave to head out in the scary conditions we've had over the past few weeks and are an essential part of island life. "Inspirational" is an understatement. Let's hope they have a quiet holiday season.

Categories: Architecture

Finding Your Own Integrity

James Bach’s Blog - Tue, 12/24/2013 - 10:33

I have a belief that I’m not going to justify– I’m simply going to say it and challenge you to look into your own experience and your own heart and see the truth of it for yourself: Your sense of identity, as a human among humans, is the most powerful force that animates and directs your choices. It is more important than sex or food or religion. It lurks behind every neurosis (including those involving sex or food or religion). As I read history and experience life, answers to the questions “Who am I? Am I a good example of what I should be?” are the prime movers of human choice throughout all of history, and the proximal cause of every war.

There are certainly exceptions to this rule: drug addiction, mental illness, or panic over a sudden, surprising, physical threat. Maybe those things have little to do with identity. Granted. I’m talking about normal daily life (and every Shakespeare play).

“I am an American. I am a human. I am a father. I am a husband. I am lovable. I am helpful. I am a tester. I am a skeptic. I am an outsider. I am dangerous. I am safe. I am honorable. I am fallible. I am truthful. I am intellectual…”  Each of these statements, for me, is a reflective shard that tumbles in the kaleidoscope of my identity. The models of personhood they represent comprise my moral compass. Although the pattern formed in that kaleidoscope may seem to shift with the situation, the underlying logic of my adult identity changes little with time.

That is the context for integrity.

Integrity means wholeness; the harmony and completeness of one’s identity. Practically speaking, a person with integrity is a person who lives consistently according to their avowed moral code, as opposed to someone who has no moral code, or who changes it as a matter of convenience. A person of integrity therefore creates continuity across the events of his life, and other people feel they know who they are dealing with.

The Challenge of Finding Your Integrity

Recently, in a discussion about what is reasonable for an employer to ask of a tester, a colleague felt I was trying to impose my own values onto potential employers of my students and wrote that as teachers of new testers “employment [for the testers] should be our first priority.” I disagreed sharply, writing that “our first priority is integrity.” My correspondent seemed to take offense to that.

Now, the employment-first position might be construed to imply that we should advocate robbing banks, because it is the quickest way to get money, or perhaps we should train prostitutes, because prostitution is an old and reliable industry with lots of job security for the right people. That would be absurd, but it’s also a straw man argument. I am certain no one intends to argue that any job is better than no job. Safety, legality and morality do enter into the equation.

Conversely, the integrity-first position might be cast as requiring a tester to immediately protest or resign in the face of any ethical dilemma or systemic ethical lapse, no matter how seemingly minor. This would turn most testers into insufferable, dour lawyers on their projects. We would get very little done. Who would hire such people?

These extreme positions are not very interesting, except as tools for meditating on what might be reasonable behavior. Therefore, I’d like to describe a less extreme position that I think is more defensible and workable. It goes like this:

1. Integrity is a vital and important matter. We suffer as people and society suffers when we treat it too lightly.

2. As testers and technical people, our integrity is routinely threatened by well-meaning clients and colleagues who want us to portray ourselves and the world to be a certain way, even if that isn’t strictly the truth.

3. If we never think directly about integrity, and simply trust in the innate goodness of ourselves and others, we are definitely taking this matter too lightly.

4. Integrity is not like a vase that shatters easily, and that once shattered is irretrievable. Integrity is more like an ongoing public artwork, exposed to and buffeted by the elements, sometimes damaged but always ultimately repairable (although our reputation may be another matter). Integrity is a work in progress for all of us.

5. Integrity, like education, is both personal and social. Your society judges you. It is reasonable that it does. But it is also reasonable to negotiate limits on that judgment. We spend our lives negotiating those lines, one way or another.

6. Forgiveness, although perhaps difficult and expensive to obtain, should always be available to us. (I test this by occasionally imagining my most “depraved” enemies in testing, and then imagining what they could do that would allow me to forgive them and even collaborate with them.)

7. Although integrity is our highest priority, in general, it is not the only thing that matters. We must apply wisdom and judgment so that the maintenance of integrity does not unreasonably affect our ability to survive. There is no set formula for how to do that.

8. Therefore, our practical priority must be: to learn how to think through and solve problems of survival while maintaining reasonable integrity. This itself is an ongoing project, requiring temperance and self-forgiveness.

9. New testers need to realize that they are not necessarily responsible for the quality of their work. Sometimes you will be asked to do things you don’t understand the value of, even though there may be value. In those situations, it’s okay to be compliant, as long as you are under supervision and someone competent is taking responsibility for what you do. It’s okay to watch and learn and not necessarily to make trouble. (Although, I usually did, even as a newbie.)

10. Experienced testers? Well, much is expected of you. Your clients (your non-tester colleagues and bosses) don’t know how to test, but you are supposed to. You can’t just do what you are told. That would be like letting a drunk friend drive home. Remember, someday your clients may sober up and wonder why you agreed to their stupid plan when you were supposed to be the expert.

Having laid this hopefully reasonable and workable strategy before you… I actually think the dispute between me and my correspondent, above, was not about the importance of integrity or employment at all, but rather about the specifics of the case we were debating. I should tell you what that was: whether it is reasonable for an employer to expect an entry-level tester to “write test cases.”

From a context-driven testing perspective, no practice can be declared unreasonable outside all contexts. But I do know a lot about the typical contexts of testing. I have seen profound waste, all around the industry, due to reckless and thoughtless documenting and counting of things called “test cases.” So, I don’t think that it is reasonable, generally speaking, to require young testers to write test cases. First, because “writing test cases” is what people who don’t know how to test think testers do– so, it’s usually an indicator of incompetent management. Second, because entry-level testers do not have the skills to write test cases in such a way that won’t result in a near complete waste of their client’s time and money. And third, because there are such obviously better things to do, in most cases, including learning about the product and actually testing the product.

Many people disagree with me. But I believe their attitude on this is the single most direct and vital cause of the perpetual infancy and impotency that we see in the testing industry. In other words, it’s not just a disagreement about style; it’s something that I think threatens our integrity as sincere and thoughtful testers. Casual shrugging about test case writing must be stamped out the way trans fats are being outlawed in fast food. Yes, that also took years to accomplish.

Speaking of fast food…

Here’s a metaphor that might help: eating at McDonalds.

Eating at McDonalds will not kill you (well, not outright). But what if you were forced to eat at McDonalds for your work? Every day, breakfast, lunch and dinner. Nothing but McDonalds. What if it were obvious to you that eating at McDonalds was not helping you actually succeed in your work? What if instead it was clear to you that such a diet was harming your ability to get your work done? For instance, perhaps you are a restaurant reviewer, except you are almost always full of McDonalds food so you can’t ever enjoy a proper meal at a restaurant you are supposed to review? And yet your manager, who knows nothing about restaurant reviewing, insists that you maintain a McDonalds-dominated dietary regimen.

Couldn’t someone say, hey, it’s a job and you should do what you are told? Yes, they could say that. And it might be true enough at first. But over time, that diet would hurt you; over time, you would have to cope with how poorly you were doing what you believed to be your real job. You might even be criticized for missing bugs– I mean– failing to review restaurants fully, even though it’s largely due to your employer’s own unreasonable process prescriptions.

At some point you might say “enough!!” You might refuse to eat another Big Mac. From the point of view of your management and colleagues, it might look like you were risking your job just because you didn’t want to eat a hamburger. It might look crazy to them. But from your point of view, the issue isn’t the one burger, but rather the unhealthy system, slowly killing you. This breakdown comes more quickly if you happen to have a gluten allergy.

Ethics and integrity in testing is not just about following prissy little rules that many other people flout– it’s about not making yourself sick even if other people are willing to live a sickly life. This requires that you be able to judge what health and sickness means to you. Integrity is about identity health.

A Story of Quitting Even Though I Needed the Work

In 1998, I was hired by a consulting company outside of Washington D.C. I negotiated for a $30,000 sign-on bonus, and bought a house in Virginia. I was the sole breadwinner in my family, with a wife and son to support. I bought a new car, too. In short, I was “all in.”

Six months later, I quit. I had no other job to go to. I had bills due. It took me seven years to pay back my sign-on bonus, with interest (I forfeited it because I did not stay for two years). But with the help of colleagues and family over the following weeks, I made the transition to running my own business. I am most thankful for my wife’s response when I came home that night and told her I walked out on our only source of income. She shrugged and said it was surely for the best, and that something good would come of it. (I can only recommend, gentlemen, that you marry an optimist if you can.) I am also thankful to Cem Kaner, who bought me a laptop (my only computer was then owned by my employer) and said “times like these are when you discover who your true friends are.” This was fitting because it was partly because of Cem that I had previously decided never to sacrifice my professional integrity.

This illustrates one lesson about ethics: community support helps us find and maintain our integrity.

I quit because my company was insisting that I bill hours on a project that, in my opinion, was absolutely certain not to benefit from my work. The client wanted me to create fake test cases. They didn’t call them fake test cases, of course. They claimed to want real test cases; and good ones. But no product had been designed at that time! All I had access to was a general description of requirements, which in this case were literally statements of problems the product was intended to solve, with no information on how they would be solved. It was a safety-critical factory process control system, and no one could show me what it would look like or provide any examples of what specifically it might do. The only test cases I could possibly design would therefore be vague and imaginary, based on layers of soft, fluffy assumptions. The customer told me they would be happy if I delivered a document that consisted of the text of each requirement preceded by the phrase “verify that…” I told them they didn’t need a tester for that. They needed a macro.

The integrity picture was clouded, in that case, because the client believed they had to follow the “V-Model” process, which they had interpreted as demanding that I submit a test case specification document. It was a clash between the integrity of a heuristic (the V-Model) vs. the integrity of solving the problem for which the heuristic was designed. My client might have said that I was the one violating the integrity of the process. Whereas I would have said that my client was not competent to make that judgment.

I’m not saying I won’t do bad work… I’m just saying I won’t do bad work for money. If I do bad work, I want it to be for fun or for learning, but not to anyone’s expense or detriment. Hence a line I use once in a while “I could do that for you, except that you pay me too much.” This is one reason I like being independent. I control what I bill for, and if I think a portion of my work is not acceptable, I don’t charge for it– like a chef who refuses to serve an overcooked steak.

It wasn’t as sudden as it looked…

I didn’t just lose my temper at the first sign of trouble. Things had been coming to a boil for a while. On my very first day I reviewed the RFP for that project and concluded it was doomed, but management bid on it anyway, telling me I needed to “be practical” and that surely “we could be helpful to them if they hired us.” I needed the job, so I relented against my better judgment.

During my first staff meeting, my first week on the job, I challenged the consulting staff about what they did to study testing on their own time. My challenge was met with an awkward silence, after which one of the consultants, sounding soul-wounded, told me he was offended that I would suggest that they weren’t already good enough as testers. “These are the best people I’ve ever worked with,” said the twenty-something tester with little experience and no public reputation. “But how do you know they are good?” I asked, knowing that our company had just issued a press release about having hired me (a “distinguished industry pioneer” to quote it exactly). There were other murmurs of annoyance around the table, and the manager stepped in to change the subject. I could have pushed the issue, but I didn’t. I needed the job, so I relented against my better judgment.

I was later told that despite my company’s public position, the other consultants felt that I was a mere armchair expert, whereas they were practical men. I don’t know what evidence they had for that. They never showed me what they could do that I supposedly could not. Management tolerated this attitude. That means they were lying directly to their customers about me– claiming I was an expert when clearly they did not believe I was one. I could have insisted they behave in accordance with their public statements about me. But… I needed the job, so I relented against my better judgment.

I knew the day had come when I must quit because I found myself fantasizing about throwing chairs through windows. That triggered a sort of circuit-breaker of judgment: change your life now, now, now.

So what happened after that?

I suffered for this decision. First came the panic attack. I felt dizzy and it was hard to breathe for a few hours. This was followed by a few years of patching together a project here and a project there, never more than 8 weeks from completely running out of money and credit. We were twice within a week of filing for bankruptcy in the early days. During that time I walked away from a few more projects. I resigned from a dysfunctional government project, hopefully saving valuable taxpayer dollars by not writing a completely unnecessary Software Configuration Management plan that no one on the team wanted. I got myself fired from a project at Texas Instruments after about 90 minutes, because I told them a few things they didn’t want to hear (but that I felt were both true and important).

It’s not all suffering, of course. I once was fired from a project (along with the rest of the test team) and then was the only one hired back– partly because the client realized that my high standards meant that I billed far fewer hours than other consultants. In other words, saying no and being a troublemaker earned me another 500 hours of work, while the yes-sayers lost their situations. I also got some great gigs, including my very first one as an independent, specifically because I am a rabble-rousing kind of thinker.

These days, I cultivate as many clients as I can, so that I don’t rely too much on any one of them. And I have established a reputation for being honest and blunt that generally prevents the wrong sort of people from trying to hire me. It’s not easy, but it can be done: I have made integrity my top priority.

What about before I was well known?

Well, I’ve always had this attitude. It’s not some luxury to me. It’s fundamental. That’s why I had to leave high school. I’ve never been able to “play the game” at the expense of feeling like a good honest man. Like I said, I suffered for it. I wanted to go try myself at MIT, where my much more pliable best friend from high school eventually graduated. I am born to be an academic, but since I can’t stand the compliance-to-ceremony culture of the academic world, I must be an independent scholar, without access to expensive journals and fantastic libraries.

Before anybody heard of me, I treated getting work like dating: be a slightly exaggerated version of myself so that I will be rejected quickly if the relationship is not a fit (a stress testing strategy, you might say). My big break came at Apple, where I worked for a man of vision and good humor who seemed to relish being the mentor I needed. The environment was open and supportive. There was an element of luck in that my first ten years in testing I worked for people who didn’t ask me to tell lies or do bad work on purpose.

So I know it’s possible to find such people. They are out there. You don’t have to work for bozos, and if you currently do, there is yet hope.

A person who does not live true to himself feels sick and weak inside. My identity as “excellent software tester” demands that I take my craft seriously. I hope you will take this craft seriously, too.

P.S. What if my sense of identity doesn’t require me to be good at my job?

Then, technically, none of this applies to you. Your ethical code can include doing bad work. But… why are you reading my blog? How did you get in? Guards! Seize him!

 

Categories: Testing & QA

Software Architecture, Agile UX and Toyota Kata in Methods & Tools Winter 2013 issue

From the Editor of Methods & Tools - Mon, 12/23/2013 - 16:14
Methods & Tools – the free e-magazine for software developers, testers and project managers – has just published its Winter 2013 issue, which discusses Software Architecture Diagrams, Agile UX integration in Scrum, Toyota Kata, Groovy enVironment Manager (GVM), and the Selenide Java testing tool. The Methods & Tools Winter 2013 issue contains the following articles:

  • Simple Sketches for Diagramming your Software Architecture
  • The UX Runway – Integrating UX, Lean, and Scrum Cohesively
  • Toyota Kata – Habits for Continuous Improvement
  • GVM Groovy enVironment Manager
  • Selenide: Concise UI tests in Java

50 pages of software development knowledge ...

Neo4j: Cypher – Using MERGE with schema indexes/constraints

Mark Needham - Mon, 12/23/2013 - 14:30

A couple of weeks ago I wrote about cypher’s MERGE function, and over the last few days I’ve been exploring how it works when used with schema indexes and unique constraints.

A common use case with Neo4j is to model users and events where an event could be a tweet, Facebook post or Pinterest pin. The model might look like this:

[Model diagram: (User)-[:CREATED_EVENT]->(Event)]

We’d have a stream of (user, event) pairs and a cypher statement like the following to get the data into Neo4j:

MERGE (u:User {id: {userId}})
MERGE (e:Event {id: {eventId}})
MERGE (u)-[:CREATED_EVENT]->(e)
RETURN u, e

We’d like to ensure that we don’t get duplicate users or events and MERGE provides the semantics to do this:

MERGE ensures that a pattern exists in the graph. Either the pattern already exists, or it needs to be created.

I wanted to see what would happen if I wrote a script that tried to create the same (user, event) pairs concurrently and ended up with the following:

import org.neo4j.cypher.javacompat.ExecutionEngine;
import org.neo4j.cypher.javacompat.ExecutionResult;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
import org.neo4j.helpers.collection.MapUtil;
import org.neo4j.kernel.impl.util.FileUtils;
 
...
 
public class MergeTime
{
    public static void main(String[] args) throws Exception
    {
        String pathToDb = "/tmp/foo";
        FileUtils.deleteRecursively(new File(pathToDb));
 
        GraphDatabaseService db = new GraphDatabaseFactory().newEmbeddedDatabase( pathToDb );
        final ExecutionEngine engine = new ExecutionEngine( db );
 
        ExecutorService executor = Executors.newFixedThreadPool( 50 );
        final Random random = new Random();
 
        final int numberOfUsers = 10;
        final int numberOfEvents = 50;
        int iterations = 100;
        final List<Integer> userIds = generateIds( numberOfUsers );
        final List<Integer> eventIds = generateIds( numberOfEvents );
        List<Future> merges = new ArrayList<>(  );
        for ( int i = 0; i < iterations; i++ )
        {
            Integer userId = userIds.get(random.nextInt(numberOfUsers));
            Integer eventId = eventIds.get(random.nextInt(numberOfEvents));
            merges.add(executor.submit(mergeAway( engine, userId, eventId) ));
        }
 
        for ( Future merge : merges )
        {
            merge.get();
        }
 
        executor.shutdown();
 
        ExecutionResult userResult = engine.execute("MATCH (u:User) RETURN u.id as userId, COUNT(u) AS count ORDER BY userId");
 
        System.out.println(userResult.dumpToString());
 
    }
 
    private static Runnable mergeAway(final ExecutionEngine engine,
                                      final Integer userId, final Integer eventId)
    {
        return new Runnable()
        {
            @Override
            public void run()
            {
                try
                {
                    ExecutionResult result = engine.execute(
                            "MERGE (u:User {id: {userId}})\n" +
                            "MERGE (e:Event {id: {eventId}})\n" +
                            "MERGE (u)-[:CREATED_EVENT]->(m)\n" +
                            "RETURN u, e",
                            MapUtil.map( "userId", userId, "eventId", eventId) );
 
                    // throw away
                    for ( Map<String, Object> row : result ) { }
                }
                catch ( Exception e )
                {
                    e.printStackTrace();
                }
            }
        };
    }
 
    private static List<Integer> generateIds( int amount )
    {
        List<Integer> ids = new ArrayList<>();
        for ( int i = 1; i <= amount; i++ )
        {
            ids.add( i );
        }
        return ids;
    }
}

We create a maximum of 10 users and 50 events and then do 100 iterations of random (user, event) pairs with 50 concurrent threads. Afterwards we execute a query which checks how many users of each id have been created and get the following output:

+----------------+
| userId | count |
+----------------+
| 1      | 6     |
| 2      | 3     |
| 3      | 4     |
| 4      | 8     |
| 5      | 9     |
| 6      | 7     |
| 7      | 5     |
| 8      | 3     |
| 9      | 3     |
| 10     | 2     |
+----------------+
10 rows

Next I added in a schema index on users and events to see if that would make any difference, something Javad Karabi recently asked on the user group.

CREATE INDEX ON :User(id)
CREATE INDEX ON :Event(id)

We wouldn’t expect this to make a difference, as schema indexes don’t ensure uniqueness, but I ran it anyway and got the following output:

+----------------+
| userId | count |
+----------------+
| 1      | 2     |
| 2      | 9     |
| 3      | 7     |
| 4      | 2     |
| 5      | 3     |
| 6      | 7     |
| 7      | 7     |
| 8      | 6     |
| 9      | 5     |
| 10     | 3     |
+----------------+
10 rows

If we want to ensure uniqueness of users and events we need to add a unique constraint on the id of both of these labels:

CREATE CONSTRAINT ON (user:User) ASSERT user.id IS UNIQUE
CREATE CONSTRAINT ON (event:Event) ASSERT event.id IS UNIQUE

Now if we run the test we’ll only end up with one of each user:

+----------------+
| userId | count |
+----------------+
| 1      | 1     |
| 2      | 1     |
| 3      | 1     |
| 4      | 1     |
| 5      | 1     |
| 6      | 1     |
| 7      | 1     |
| 8      | 1     |
| 9      | 1     |
| 10     | 1     |
+----------------+
10 rows

We’d see the same type of result if we ran a similar query checking for the uniqueness of events.

As far as I can tell, this duplication of the nodes we merge on only happens if you try to create the same node twice concurrently. Once the node has been created we can use MERGE with a non-unique index and a duplicate node won’t get created.
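
To illustrate that second point, here is a minimal sketch (the id value is arbitrary): once a :User node with id 1 exists, running

MERGE (u:User {id: 1})
RETURN u

from any number of concurrent threads will match the existing node rather than create another one.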

All the code from this post is available as a gist if you want to play around with it.

Categories: Programming

Give Credit Generously

I had a boss who was great at saying, “Terri did this. Jen did that. JR did this other thing.” We all knew who had learned about different areas of the system, who had succeeded at which parts of testing or development or project management. It was great.

She didn’t just tell us. Nope, our boss told her bosses.

That’s one of the reasons I had many opportunities to grow in that particular job. Not just because I worked hard and did a good job. But because my boss told her management team.

Contrast that with some other places I’ve worked, especially where command-and-control still had a foothold.

I once led a small team where we were implementing a process control application. It was a difficult project. My manager knew what we were doing, but we were on the hairy edge of success/failure the entire time. I took a one-week vacation, and my team continued while I was away.

Another VP across the organization—not my manager—inserted himself in my project while I was away. For that entire week, he “managed” our customer. I had been the sole customer contact up until then. All hell broke loose.

I returned from vacation to a gazillion voice mails on my personal answering machine. (This was before the days of cell phones :-) It took me a month of plane rides to fix this customer problem.

When we released that project, it was successful. At the next Ops meeting, he told everyone that he personally had overseen the project. My manager did not participate in Ops meetings.

Afterwards, my manager asked me about the project. “I thought you and the team were working on this project?”

“We’re still cleaning up. You should see what Andy, Bill, and Mack have done. They have really performed. I’m just about ready to write my post-project review for you,” I replied.

My manager frowned. “Well, according to the VP, you had nothing to do with this project. It was all him, and only him.”

“Are you serious?” I was flabbergasted. “You know how hard I’ve been working on this. You’ve been signing my expense reports. Wait a minute. What about the guys? Did he say anything about them?”

“No, he didn’t mention them either. He pulled this project out of the fire, all by himself.”

“Do you believe him?” I couldn’t believe what I was hearing.

“Well, I know you’ve been busy, but I don’t know if you’ve been distracted by other things.”

And that’s when I learned the value of weekly one-on-ones (I’d been flying, so I hadn’t had one in a few weeks), of email status reports or some other way of letting my boss know what I was doing, and of giving credit up. If I’d been giving credit to my project team all along, no one would have been distracted by the bozo VP.

I fixed that, pretty darn quick, and apologized to my team. They laughed it off. I’d built a lot of trust with them already. But my manager had a gazillion fires to fight. He had no idea who was managing what.

That is the topic of this month’s management myth: People Don’t Need External Credit.

The more you give credit, the more you look good.

Categories: Project Management

Quote of the Month December 2013

From the Editor of Methods & Tools - Wed, 12/18/2013 - 16:32
Tools and machines are great, but in the end, it is the people who matter. Putting together an effective automated test team requires lots of planning and skills. Source: Experiences of Test Automation – Case Studies of Software Test Automation, Dorothy Graham and Mark Fewster editors, Addison-Wesley

Composite User Stories

Xebia Blog - Wed, 12/18/2013 - 11:21

User stories

User stories must represent business value. That's why we use the well-known one-line description 'as an <actor> I want <action>, so I can reach <goal>'. It is both simple and powerful because it provides the team with a concrete, customer-related context for identifying the tasks relevant to reaching the required goal.

The stories the team pulls into the sprint are constrained in size: at the very least, each should be small enough to fit into a single sprint. This constraint can require a story to be broken down into smaller stories. There are useful patterns for doing this, such as splitting by workflow steps, business rules, or data variations.

Complex systems

When dealing with large, complex systems consisting of many interacting components, breaking stories down can pose problems even when following the standard guidelines. This is especially true when the breakdown leads to stories related to components deep within the system, with no direct connection to the end user or the business goal. Those stories are usually inherently technical and far removed from the business perspective.

Let’s say the team encounters a story like ‘As a user I want something really complex that doesn’t fit in a story, so I can do A’. The story requires the interaction of multiple components, so the team breaks it down into smaller stories like ‘As a user I want component X to expose operation Y, so I can do A’. There is a user and a business goal, but the action has no direct relation to either of them. The template provides no meaningful context for this particular story, and it just doesn't feel right.

Constrained by time, and with no apparent solution provided by the known patterns, the team is likely to define a story like ‘Implement operation Y in component X’, which is basically a compound task description and provides no context at all.

Components as actors

Breaking the rules a bit, it is possible to keep the principle of the user story definition and still provide meaningful context in these cases. The trick is to zoom into the system and define the sub-stories at another level, using a composite relation and making the components actors themselves, with their own goals: ‘As component Z I want to call operation Y on component X, so I can do B’ and ‘As component X I want to implement operation Y, so I can do C’.

There is no direct customer or business value in such a sub-story, but because it is linked by composition it is easy to trace back to the business value. Each of the sub-goals contributes to the goal stated in the composite story: goal A will be reached by reaching both goal B and goal C (A = B + C).

Linking the stories

There are several ways to link stories to their composite story. You can number them (1, 1a, 1b, ...) or indent the sticky notes of the sub-stories on the scrum board to visualize the relationship. To make the relation more explicit, you can also extend the story description: As an <Actor> I want <Action>, so I can reach <Goal> and contribute to <Composite Goal>.

(Image: Composite Stories)

Summary

The emphasis of this approach is to try to maintain meaningful context while splitting (technical) user stories for complex systems with many interacting components. By viewing components as actors with their own goals you can create meaningful user stories with relevant contexts. The use of a composite structure creates logical relations between the stories in the composition and connects them to the business value. This way the team can maintain a consistent way of expressing functionality using user stories.

Disclaimer

This method should only be applied when splitting user stories using the standard patterns is not possible. For instance, it does not satisfy the rule that each story should deliver value to the end user, and it is likely that more than one sprint will be needed to deliver a composite story. You should also ask yourself why there is complexity in the system and whether it could be avoided. But for teams facing this complexity, and the challenge of splitting such stories today, this method can be the lesser of two evils.

Affordance Open-Space at XP Day

Mistaeks I Hav Made - Nat Pryce - Tue, 12/03/2013 - 00:20
Giovanni Asproni and I facilitated an open-space session at XP Day on the topic of affordance in software design. Affordance is "a quality of an object, or an environment, which allows an individual to perform an action". Don Norman distinguishes between actual affordance and perceived affordance, the latter being "those action possibilities that are readily perceivable by an actor". I think this distinction applies as well to software programming interfaces. Unless running in a sandboxed execution environment, software has great actual affordance, as long as we're willing to do the wrong thing when necessary: break encapsulation or rely on implementation-dependent data structures. What makes programming difficult is determining how and why to work with the intentions of those who designed the software that we are building upon: the perceived affordance of those lower software layers. Here are the notes I took during the session, cleaned up a little and categorised into things that help affordance, hinder affordance, or can do both.

Helping

  • Repeated exposure to language you understand results in fluency.
  • Short functions.
  • Common, consistent language; domain types.
  • Good error messages.
  • Things that behave differently should look different, e.g. the convention in Scheme and Ruby of a ! suffix for functions that mutate data.
  • Being able to explore software: being able to put "probes" into and around the software to experiment with its behaviour, and knowing where to put those probes.
  • TDD, done properly, is likely to make the most important affordances visible: simpler APIs, and the ability to experiment with and probe the software running in isolation.

Hindering

  • There are two ways software can have bad affordance: you can't work out what it does (poor perceived affordance), or it behaves unexpectedly (poor actual affordance). (Are these two viewpoints on the same thing?)
  • Null pointer exceptions detected far from their origin.
  • Classes named after design patterns incorrectly, e.g. because the programmer did not understand the pattern: factories that do not create objects, visitors that are really internal iterators.
  • Inconsistencies, which can be caused by unfinished refactoring. Which code style to use? Agree in the team, and spread the agreement by frequent pair rotation.
  • Names that have subtly different meanings in natural language and in the code, e.g. OO design patterns. Compare with FP design patterns, which have strict definitions and meaningless names (in natural language): you have to learn what the names really mean instead of relying on (wrong) intuition.

Both Helping and Hindering

  • Conventions, e.g. Ruby on Rails: an alternative to metaphor? But don't be magical! Magic makes it hard to test code in isolation. If you break a convention, draw attention to where it has been broken.
  • Naming types after patterns. Is it better to use domain terms than pattern names? E.g. a CustomerService that mostly does stuff with addresses is better named AddressBook; then you can see it also has unrelated behaviour that belongs elsewhere. But now you need to communicate the system metaphors and domain terminology (is that ever a bad thing?).
  • Bad code style you are familiar with does offer perceived affordance.
  • Bad design might provide good actual affordance: e.g. lots of global variables make it easy to access data, but provide poor perceived affordance, because you cannot understand how state is being mutated by other parts of the system.
  • Method aliasing makes code expressive; it is common in Ruby, but Python takes the opposite stance: only one way to do anything.
  • Frameworks: new jargon to learn, and new conventions that provide specific affordances; but if your context changes you need different affordances, and the framework may get in the way.
  • Documentation vs tests: the actual affordance of physical components is described in spec documents and is not obvious from appearance, e.g. different grades of building brick; is this possible for software? Docs are more readable; tests are more trustworthy. The best of both worlds may be doctests, used in Go and Python.
  • How do we know what has changed between versions of a software component? Open source libraries usually have very good release notes; why is it so hard in enterprise development?
Categories: Programming, Testing & QA

When C4 becomes C5

Coding the Architecture - Simon Brown - Mon, 12/02/2013 - 10:59

I've been working with a number of teams recently, helping them to diagram their software systems using the C4 approach that is described in my Software Architecture for Developers book. To summarise, it's a way to visualise the static structure of a software system using a small number of simple diagrams as follows:

  1. Context: a high-level diagram that sets the scene; including key system dependencies and actors.
  2. Containers: a containers diagram shows the high-level technology choices (web servers, application servers, databases, mobile devices, etc), how responsibilities are distributed across them and how they communicate.
  3. Components: for each container, a components diagram lets you see the key logical components/services and their relationships.
  4. Classes: if there’s important information that I want to portray, I’ll include some class diagrams. This is an optional level of detail and its inclusion depends on a number of factors.

In the real-world, software systems never live in isolation and it's often useful to understand how all of the various software systems fit together within the bounds of an enterprise. To do this, I'll simply add another diagram that sits on top of the C4 diagrams, to show the enterprise context from an IT perspective. This usually includes:

  • The organisational boundary.
  • Internal and external users.
  • Internal and external systems (including a high-level summary of their responsibilities and data owned).

Essentially this becomes a high-level map of the software systems at the enterprise level, with a C4 drill-down for each software system of interest. Caveat time. I do appreciate that enterprise architecture isn't simply about technology but, in my experience, many organisations don't have an enterprise architecture view of their IT landscape. In fact, it shocks me how often I come across organisations of all sizes that lack such a holistic view, especially considering IT is usually a key part of the way they implement business processes and service customers. Sketching out the enterprise context from a technology perspective at least provides a way to think outside of the typical silos that form around IT systems.

Categories: Architecture

Oh no, more logs, start with logstash

Gridshore - Sun, 11/10/2013 - 20:26

How many posts have you seen about logging? And how many have you read about logging? Recently logging became cool again. Nowadays everybody talks about logstash, elasticsearch and kibana; it feels like everybody is playing with these tools. If you are not yet among the people playing around with them, then this is the blog post for you. I am going to help you get started with logstash: getting familiar with the configuration and configuring the input as well as the output. Then, when you are familiar with the concepts and know how to play around with logstash, I'll move on to storing things in elasticsearch; there are some interesting steps to take there as well. Once you have a way to put data into elasticsearch, we'll move on to looking at the data. Before you can understand the power of Kibana, you have to create some queries of your own, and I'll help you there as well. In the end we will also have a look at Kibana.


Logstash

Logstash comes with a number of different components. You can run them all using the executable jar, but logstash is very pluggable, so you can also replace the internal logstash components with other ones. Logstash contains the following components:

  • Shipper – sends events to logstash
  • Broker/indexer – sends events to an output, elasticsearch for instance
  • Search/storage – provides search capabilities using an internal elasticsearch
  • Web interface – provides a GUI using a version of kibana

Logstash is created using JRuby, so you need a JVM to run it. Once you have the executable jar, all you need to do is create a basic config file and you can start experimenting. The config file consists of three main parts:

  • Input – the way we receive messages or events
  • Filters – how we leave out or convert messages
  • Output – the way to send out messages

The next code block gives the most basic config: it uses the standard input of the terminal where you run logstash and outputs the messages to that same console.

input {
  stdin { }
}
output {
  stdout { }
}

Time to run logstash using this config:

java -jar logstash-1.2.2-flatjar.jar agent -v -f basic.conf

Then, when we type Hello World!!, we get the following (I removed some debug info):

2013-11-08T21:58:13.178+0000 jettro.gridshore.nl Hello World!!

With the 1.2.2 release it is still annoying that you cannot just stop the agent using ctrl+c; I have to really kill it with kill -9.

It is important to understand that the input and output sections are made up of plugins. We can add other plugins to handle other input sources, as well as plugins for outputting data; one of them, the elasticsearch output, we will see later on.
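
For example, here is a sketch of adding a second output alongside stdout. The file output plugin and its path option do exist in logstash, but treat the exact option names as an assumption to verify against your logstash version:

input {
  stdin { }
}
output {
  stdout { }

  file {
    path => "/tmp/logstash-events.log"
  }
}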

Elasticsearch

I am not going to explain how to install elasticsearch; there are plenty of resources available online, especially on the elasticsearch website. So I take it you have a running elasticsearch installation. Now we are going to update the logstash configuration to send all events to elasticsearch. The following code block shows the config for sending the events entered on standard input to elasticsearch.

input {
  stdin { }
}
output {
  stdout { }

  elasticsearch {
    cluster => "logstash"
  }
}

Now, if we have elasticsearch running with auto discovery enabled and a cluster name equal to logstash, we can start logstash again. The output should resemble the following.

[~/javalibs/logstash-indexer]$ java -jar logstash-1.2.2-flatjar.jar agent -v -f basic.conf 
Pipeline started {:level=>:info}
New ElasticSearch output {:cluster=>"logstash", :host=>nil, :port=>"9300-9305", :embedded=>false, :level=>:info}
log4j, [2013-11-09T17:51:30.005]  INFO: org.elasticsearch.node: [Nuklo] version[0.90.3], pid[32519], build[5c38d60/2013-08-06T13:18:31Z]
log4j, [2013-11-09T17:51:30.006]  INFO: org.elasticsearch.node: [Nuklo] initializing ...
log4j, [2013-11-09T17:51:30.011]  INFO: org.elasticsearch.plugins: [Nuklo] loaded [], sites []
log4j, [2013-11-09T17:51:31.897]  INFO: org.elasticsearch.node: [Nuklo] initialized
log4j, [2013-11-09T17:51:31.897]  INFO: org.elasticsearch.node: [Nuklo] starting ...
log4j, [2013-11-09T17:51:31.987]  INFO: org.elasticsearch.transport: [Nuklo] bound_address {inet[/0:0:0:0:0:0:0:0:9301]}, publish_address {inet[/192.168.178.38:9301]}
log4j, [2013-11-09T17:51:35.052]  INFO: org.elasticsearch.cluster.service: [Nuklo] detected_master [jc-server][wB-3FWWEReyhWYRzWyZggw][inet[/192.168.178.38:9300]], added {[jc-server][wB-3FWWEReyhWYRzWyZggw][inet[/192.168.178.38:9300]],}, reason: zen-disco-receive(from master [[jc-server][wB-3FWWEReyhWYRzWyZggw][inet[/192.168.178.38:9300]]])
log4j, [2013-11-09T17:51:35.056]  INFO: org.elasticsearch.discovery: [Nuklo] logstash/5pzXIeDpQNuFqQasY6jFyw
log4j, [2013-11-09T17:51:35.056]  INFO: org.elasticsearch.node: [Nuklo] started

Now when typing Hello World !! in the terminal the following is logged to the standard out.

output received {:event=>#"Hello World !!", "@timestamp"=>"2013-11-09T16:55:19.180Z", "@version"=>"1", "host"=>"jettrocadiesmbp.fritz.box"}>, :level=>:info}
2013-11-09T16:55:19.180+0000 jettrocadiesmbp.fritz.box Hello World !!

This time, however, the same message is also sent to elasticsearch. We can check that by doing the following query to determine the index that was created.

curl -XGET "http://localhost:9200/_mapping?pretty"

The response then is the mapping. The index name contains today's date, there is a type called logs, and it has the fields that are also written out on the console.

{
  "logstash-2013.11.09" : {
    "logs" : {
      "properties" : {
        "@timestamp" : {
          "type" : "date",
          "format" : "dateOptionalTime"
        },
        "@version" : {
          "type" : "string"
        },
        "host" : {
          "type" : "string"
        },
        "message" : {
          "type" : "string"
        }
      }
    }
  }
}

Now that we know the name of the index, we can create a query to see if our message got through.

curl -XGET "http://localhost:9200/logstash-2013.11.09/logs/_search?q=message:hello&pretty"

The response for this query now is:

{
  "took" : 2,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 0.19178301,
    "hits" : [ {
      "_index" : "logstash-2013.11.09",
      "_type" : "logs",
      "_id" : "9l8N2tIZSuuLaQhGrLhg7A",
      "_score" : 0.19178301, "_source" : {"message":"Hello World !!","@timestamp":"2013-11-09T16:55:19.180Z","@version":"1","host":"jettrocadiesmbp.fritz.box"}
    } ]
  }
}

There you go: we can enter messages in the console where logstash is running and query elasticsearch to see that the messages are actually in the system. Not sure if this in itself is useful, but at least you have seen the steps. The next step is to have a look at our data using a tool called kibana.

Kibana

There are a multitude of ways to install kibana; depending on your environment, one is easier than another. On a development machine I like to install kibana as a plugin in elasticsearch: in the plugins folder, create the folder kibana/_site and store all the content of the downloaded kibana tar in there. Now browse to http://localhost:9200/_plugin/kibana. In the first screen, look for the logstash dashboard. When you open the dashboard it will look a bit different from mine; I made some changes to make it easier to present on screen. Later on I will show how to create your own dashboard and panels. The following screen shows Kibana.

(Screenshot: the logstash dashboard in Kibana)

Logstash also comes with an option to run kibana from the logstash executable. I personally prefer to have it as a separate install; that way you can always use the latest and greatest version.
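
For reference, the plugin-style install described above boils down to something like this sketch (the elasticsearch location and the kibana archive name are assumptions; adjust them to your own setup):

cd /path/to/elasticsearch/plugins
mkdir -p kibana/_site
tar xzf /path/to/kibana-latest.tar.gz -C kibana/_site --strip-components=1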

Using tomcat access logs

This is all nice, but we are not implementing a system like this just to enter a few messages, so we want to attach another input source to logstash. I am going to give an example using tomcat access logs. If you want tomcat to produce access logs, you need to add a valve to the configured host in server.xml.

<Valve className="org.apache.catalina.valves.AccessLogValve" 
       directory="/Users/jcoenradie/temp/logs/" 
       prefix="localhost_access_log" 
       suffix=".txt" 
       renameOnRotate="true"
       pattern="%h %t %S &quot;%r&quot; %s %b" />

Example output from these logs is shown below; the list after it explains what each element of the pattern means.

0:0:0:0:0:0:0:1 [2013-11-10T16:28:00.580+0100] C054CED0D87023911CC07DB00B2F8F75 "GET /admin/partials/dashboard.html HTTP/1.1" 200 988
0:0:0:0:0:0:0:1 [2013-11-10T16:28:00.580+0100] C054CED0D87023911CC07DB00B2F8F75 "GET /admin/api/settings HTTP/1.1" 200 90
0:0:0:0:0:0:0:1 [2013-11-10T16:28:02.753+0100] C054CED0D87023911CC07DB00B2F8F75 "GET /admin/partials/users.html HTTP/1.1" 200 7160
0:0:0:0:0:0:0:1 [2013-11-10T16:28:02.753+0100] C054CED0D87023911CC07DB00B2F8F75 "GET /admin/api/users HTTP/1.1" 200 1332
  • %h – remote host
  • %t – timestamp
  • %S – session id
  • %r – first line of the request
  • %s – http status code of the response
  • %b – bytes sent

If you want more information about the logging options, check the tomcat configuration documentation.

The first step is to get the contents of this file into logstash, so we have to change the config to add an input coming from a file.

input {
  stdin { }
  file {
    type => "tomcat-access"
    path => ["/Users/jcoenradie/temp/dpclogs/localhost_access_log.txt"]
  }
}
output {
  stdout { }

  elasticsearch {
    cluster => "logstash"
  }
}

The debug output now becomes:

output received {:event=>#"0:0:0:0:0:0:0:1 [2013-11-10T17:15:11.028+0100] 9394CB826328D32FEB5FE1F510FD8F22 \"GET /static/js/mediaOverview.js HTTP/1.1\" 304 -", "@timestamp"=>"2013-11-10T16:15:20.554Z", "@version"=>"1", "type"=>"tomcat-access", "host"=>"jettro-coenradies-macbook-pro.fritz.box", "path"=>"/Users/jcoenradie/temp/dpclogs/localhost_access_log.txt"}>, :level=>:info}

Now we have stuff in elasticsearch, but each event is just one string: the message. We know there is more interesting data inside that message, so let us move on to the next logstash component: filtering.

Logstash filtering

You can use filters to enhance the received events. The following configuration shows how to extract client, timestamp, session id, method, uri path, uri param, protocol, status code and bytes. As you can see we use grok to match these fields from the input.

input {
  stdin { }
  file {
    type => "tomcat-access"
    path => ["/Users/jcoenradie/temp/dpclogs/localhost_access_log.txt"]
  }
}
filter {
  if [type] == "tomcat-access" {
    grok {
      match => ["message","%{IP:client} \[%{TIMESTAMP_ISO8601:timestamp}\] (%{WORD:session_id}|-) \"%{WORD:method} %{URIPATH:uri_path}(?:%{URIPARAM:uri_param})? %{DATA:protocol}\" %{NUMBER:code} (%{NUMBER:bytes}|-)"]
    }
  }
}
output {
  stdout { }

  elasticsearch {
    cluster => "logstash"
  }
}

Now compare the new output:

output received {:event=>#"0:0:0:0:0:0:0:1 [2013-11-10T17:46:19.000+0100] 9394CB826328D32FEB5FE1F510FD8F22 \"GET /static/img/delete.png HTTP/1.1\" 304 -", "@timestamp"=>"2013-11-10T16:46:22.112Z", "@version"=>"1", "type"=>"tomcat-access", "host"=>"jettro-coenradies-macbook-pro.fritz.box", "path"=>"/Users/jcoenradie/temp/dpclogs/localhost_access_log.txt", "client"=>"0:0:0:0:0:0:0:1", "timestamp"=>"2013-11-10T17:46:19.000+0100", "session_id"=>"9394CB826328D32FEB5FE1F510FD8F22", "method"=>"GET", "uri_path"=>"/static/img/delete.png", "protocol"=>"HTTP/1.1", "code"=>"304"}>, :level=>:info}

Now if we go back to kibana, we can see that we have more fields. The message is now replaced with the mentioned fields, so we can easily filter on, for instance, session_id. The following image shows that we can select the new fields.

(Screenshot: selecting the new fields in Kibana)
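
You can also query the new fields directly in elasticsearch. A quick sketch, reusing the index and type naming we saw earlier (adjust the index date and the session id to your own data):

curl -XGET "http://localhost:9200/logstash-2013.11.10/logs/_search?q=session_id:9394CB826328D32FEB5FE1F510FD8F22&pretty"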

That is it for now; later on I’ll blog about more logstash options and about creating dashboards with kibana.

The post Oh no, more logs, start with logstash appeared first on Gridshore.

Categories: Architecture, Programming

Multimethods

Mistaeks I Hav Made - Nat Pryce - Thu, 10/10/2013 - 00:30
I have observed that... Unless the language already provides them, any sufficiently complicated program written in a dynamically typed language contains multiple, ad hoc, incompatible, informally-specified, bug-ridden, slow implementations of multimethods. (To steal a turn of phrase from Philip Greenspun).
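
For readers who have not met the term: a multimethod dispatches on the runtime types of more than one argument. Below is a minimal sketch of the ad hoc version the quip describes, with hypothetical types and rules, written in Java for illustration even though the observation is about dynamically typed languages:

public class CollisionDemo
{
    static class Body {}
    static class Asteroid extends Body {}
    static class Spaceship extends Body {}

    // Hand-rolled dispatch on the runtime types of BOTH arguments,
    // because the language only dispatches on the receiver.
    static String collide( Body a, Body b )
    {
        if ( a instanceof Asteroid && b instanceof Asteroid ) return "asteroids merge";
        if ( a instanceof Spaceship && b instanceof Spaceship ) return "ships bounce";
        if ( ( a instanceof Asteroid && b instanceof Spaceship )
          || ( a instanceof Spaceship && b instanceof Asteroid ) ) return "ship destroyed";
        throw new IllegalArgumentException( "no applicable method" );
    }

    public static void main( String[] args )
    {
        System.out.println( collide( new Asteroid(), new Spaceship() ) ); // ship destroyed
    }
}

A language with built-in multimethods would let each of those cases be declared as a separate method and would pick the applicable one automatically.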
Categories: Programming, Testing & QA