
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

SPaMCAST 345 – Cognitive Biases, QA Corner, TameFlow

Software Process and Measurement Cast - Sun, 06/07/2015 - 22:00

The Software Process and Measurement Cast 345 features our essay on Cognitive Biases and two new columns. The essay on cognitive bias provides important tools for anyone who works on a team or interfaces with other people! A sample from the podcast:

"The discussion of cognitive biases is not a theoretical exercise. Even a relatively simple process such as sprint planning in Scrum is influenced by the cognitive biases of the participants. Even the planning process itself is built to use cognitive biases like the anchor bias to help the team come to consensus efficiently. How all the members of the team perceive their environment and the work they commit to delivering will influence the probability of success; therefore, cognitive biases need to be understood and managed."

The first of the new columns is Jeremy Berriault's QA Corner. Jeremy's first QA Corner discusses root cause analysis and some errors that people make when doing root cause analysis. Jeremy is a leader in the world of quality assurance and testing and was originally interviewed on the Software Process and Measurement Cast 274.

The second new column features Steve Tendon discussing his great new book, Hyper-Productive Knowledge Work Performance. Our intent is to discuss the book chapter by chapter. This is very much like the weekly re-read we do on the blog, but with the author. Steve has offered SPaMCAST listeners a great discount if you use the link shown above.

As part of the chapter-by-chapter discussion of Steve's book we are embedding homework questions. The first question we pose is "Is the concept of hyper-productivity transferable from one group or company to another?" Send your comments to spamcastinfo@gmail.com.

Call to action!

Reviews of the podcast help to attract new listeners. Can you write a review of the Software Process and Measurement Cast and post it on the podcatcher of your choice? Whether you listen on iTunes or any other podcatcher, a review will help to grow the podcast! Thank you in advance!

Re-Read Saturday News

The Re-Read Saturday focus on Eliyahu M. Goldratt and Jeff Cox's The Goal: A Process of Ongoing Improvement began on February 21st. The Goal has been hugely influential because it introduced the Theory of Constraints, which is central to lean thinking. The book is written as a business novel. Visit the Software Process and Measurement Blog and catch up on the re-read.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast.

Dead Tree Version or Kindle Version 

Next . . . The Mythical Man-Month. Get a copy now and start reading! We will start in four weeks!

Upcoming Events

2015 ICEAA PROFESSIONAL DEVELOPMENT & TRAINING WORKSHOP
June 9 – 12
San Diego, California
http://www.iceaaonline.com/2519-2/
I will be speaking on June 10. My presentation is titled "Agile Estimation Using Functional Metrics."

Let me know if you are attending!

Other upcoming conferences I will be involved in include SQTM in September. More on these great conferences next week.

Next SPaMCast

The next Software Process and Measurement Cast will feature our interview with Jon Quigley. We discussed configuration management and his new book Configuration Management: Theory, Practice, and Application. Jon co-authored the book with Kim Robertson. Configuration management is one of the most critical practices anyone building a product, writing a piece of code or working on a project with others must learn, or face the consequences!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: "This book will prove that software projects should not be a tedious process, neither for you or your team." Support SPaMCAST by buying the book here.

Available in English and Chinese.

Categories: Process Management


Re-Read Saturday: The Goal: A Process of Ongoing Improvement. Part 16


As part of my day job I am often asked to help a team, project or department find a way to improve the value they deliver. When dealing with knowledge work, having a single, prescriptive path is rarely effective because even the most mundane product support work includes discovery and innovation. Once we have discovered a path, it is important to step back and generalize the approach so that teams can use the process in a variety of scenarios. I have found that developing a generalized approach is rarely as straightforward as changing the personal pronouns in the process to refer to another group. Regardless of this hard-won realization, I still read posts and hear about people who are considering adopting best practices or procedures from other groups without tailoring. Adopting a process, procedure or even a tool using an untailored, out-of-the-box approach is rarely a good idea in knowledge work. Alex and his team continue to search for a generalized approach that can be used to transform the entire division.

Previous Installments:

Part 1       Part 2       Part 3      Part 4      Part 5 
Part 6       Part 7      Part 8     Part 9      Part 10
Part 11     Part 12      Part 13    Part 14    Part 15

 

Chapter 37. Alex and his team continue their daily meetings to discover the answer to the question "What are the techniques needed for management?" In Chapter 36 the team had settled on a generalized five-step process:

  1. Find the bottleneck,
  2. Exploit the bottleneck,
  3. Subordinate every other step to the bottleneck,
  4. Elevate the bottleneck, then
  5. Repeat if the bottleneck has been broken.

 

Ralph (computer guy) voices a concern that they really had not done step three. After some discussion the team finds that by constraining how work and material enter the process they really had subordinated all of the steps in the process to the bottlenecks. Remember that the work and material entering the process had been constrained so the bottlenecks were 100% utilized (no more, no less). During the discussion, Stacey (materials) recognized that the earlier red/yellow card approach the team had used to control the flow of work into the bottlenecks was still in place and was the cause of the problems she had been observing (Chapter 36). In order to deal with the problems caused by the earlier red/yellow card approach, and to keep everyone busy, Stacey admitted she had been releasing extra work into the process, therefore building excess inventory of finished goods. Back-of-the-envelope calculations showed that the plant now had 20% extra capacity, and it therefore needed more orders to keep the plant at maximum capacity. Alex decides to go see Johnny Jons (sales manager) to see if they can prime the sales pump.

These observations led the team to the understanding that every time they recycled through the process they should have re-questioned and revalidated EVERY change they had previously made. The inertia of thinking something will work because it has in the past or because it has for someone else is often not your friend in process improvement!

Chapter 38. Jons, Alex, Lou (plant controller), Ralph and one of Jons' more innovative salesmen meet at headquarters to discuss where they can come up with 10 million dollars of additional orders. During the discussion it comes to light that Jons has a potential deal that he is about to reject because the prices are well below standard margins. Alex points out that since the plant has excess capacity, the real cost to produce the product is far lower than Jons is assuming (labor and overhead are already sunk costs). The plant could take the order and make a huge profit. Alex and his team convince Jons to take the order if the potential client will commit to a one-year deal. They further sweeten the deal by committing to quick deliveries (something other companies can't emulate) in order to avoid starting a price war. Jons agrees to accept the order as the potential client is well outside of the company's standard area of distribution and therefore will not impact the margins they are getting on other orders. On the way back to the plant, Alex, Lou and Ralph reflect that they had just seen the same type of inertia that the team discovered the previous day in their process improvement approach, and that Alex's new role in changing the whole division will need to address even more organizational inertia.

Later, Alex and Julie (wife) reflect that the key to the management practices Alex is searching for lies in the application of the scientific method. Instead of collecting a lot of data and making inferences, the approach Johan had taken begins with a single observation, develops a hypothesis, leverages if-then relationships and then tests those relationships. Alex searches popular scientific books for inspiration for his management questions. When they discuss the topic again, Julie, who has continued to read the Socratic Dialogues, points out that they follow the same if-then pattern that Alex has described as Johan's approach.

Re-Read Saturday Notes:

  1. I anticipate that the re-read of The Goal will conclude in two weeks with part 18. Our next book will be The Mythical Man-Month (I am buying a new copy today, so if you do not have a copy . . . get one today, and please use this link for The Mythical Man-Month).
  2. Remember that the summary of previous entries in the re-read of The Goal have been shifted to a new page (here).
  3. Also, if you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast. Dead Tree Version or Kindle Version

 


Categories: Process Management

Carl Sagan's BS Detector

Herding Cats - Glen Alleman - Sat, 06/06/2015 - 00:24

There are several BS detector paradigms. One of my favorites is Carl Sagan's. This one has been adapted from Reality Charting, the book that goes along with the Apollo Root Cause Analysis method we use in our governance process in our domain. Any outage, any hard break, any disruption to the weekly release process is cause for Root Cause Analysis.

Here's Carl's checklist applied to the #NoEstimates conjecture that decisions can be made in the absence of estimates:

  1. Seek independent facts. Remember, a fact is a cause supported by sensed evidence and should be independently verified by you before it can be deemed legitimate. If you cannot find sensed evidence of causal relationships you should be skeptical. 
    • Writing software for money is based on making decisions in the presence of uncertainty.
    • A primary process for making those decisions is microeconomics and opportunity cost. What's the lost opportunity for one decision over another?
    • To do this we need to estimate both the cost that results from choosing one decision over another and the benefits (the opportunity) from that decision.
  2. Welcome open debate on all points of view. Suspend judgment about the event or claim until all cause paths have been pursued to your satisfaction using RealityCharting®.
    • The notion that #NoEstimates is exploring and seeking conversation is not evidenced.
    • Any challenge questions are labeled as trolling, harassing, and unwanted.
    • You'd think that if #NoEstimates is the next big thing, responding to any and all challenges would be a terrific marketing opportunity. It's basic sales strategy: find all objections to your product's value proposition, overcome them, and the deal is closed.
  3. Always challenge authority. Ask to be educated. Ask the expert how they came to know what they know. If they cannot explain it to your satisfaction using evidence-based causal relationships then be very skeptical. 
    • This is not the same as challenging everything.
    • It means that when you hear something that is not backed by tangible evidence, challenge it.
    • Ask the person making the claim how they know it will work outside their personal anecdotal experience.
  4. Consider more than one hypothesis. The difference between a genius and a normal person is that when asked to solve a problem the genius doesn’t look for the right answer, he or she looks for how many possible solutions he or she can find. A genius fundamentally understands that there is always another possibility, limited by our fundamental ignorance of what is really happening. 
    • The self-proclaimed thought leader's syllogism - agile thought leaders are challenged, I've been challenged, therefore I must be an agile thought leader - needs to be tested with actual evidence.
  5. Don’t defend a position because it is yours. All ideas are prototypical because there is no way we can really know all the causes. Seek to understand before seeking to be understood. 
    • Use this on both sides of the conversation.
    • Where can #NoEstimates be applied?
    • Where is it not applicable?
    • Provide evidence on both sides.
  6. Try to quantify what you think you know. Can you put numbers to it? 
    • Show the numbers
    • Do the math
  7. If there is a chain of causes presented, every link must work. Use RealityCharting® to verify that the chain of causes meets the advanced logic checks defined above and that the causes are sufficient in and of themselves.
    1. If estimating is the smell of dysfunction, show the causal chain of the dysfunction.
    2. Confirm that estimating is actually the root cause of management dysfunction.
    3. Is misuse and abuse of estimates caused by the estimating process?
  8. Use Occam's razor to decide between two hypotheses; if two explanations appear to be equally viable, choose the simpler one if you must. Nature loves simplicity.
    • When conjecturing that stopping estimates fixes the dysfunction, is this the simplest solution?
    • How about stopping Bad Management practices?
    • In the upcoming #NoEstimates book, Chapter 1 opens with a blatant Bad Management decision: assigning a project to an inexperienced PM that is hundreds of times bigger than any she has ever seen.
    • Then blaming the estimating process for her failure.
    • This notion continues with references to other failed projects, without ever seeking the actual root cause of the failure.
    • No evidence is ever presented to show that stopping estimates would have made the project successful.
  9. Try to prove your hypothesis wrong. Every truth is prototypical and the purpose of science is to disprove that which we think we know.
    • This notion is lost on those conjecturing that #NoEstimates is applicable beyond their personal anecdotal experience.
    • What's needed is testing the idea in an external domain, not finding a CEO who supports the notion.
    • The null hypothesis test, H0, is basic high school statistics.
    • It is missing entirely here.
  10. Use carefully designed experiments to test all hypotheses.
    • No such thing exists in the #NoEstimates paradigm.

So it's becoming clear #NoEstimates does not pass the smell test of the basic BS meter.

The Big Questions

  1. What's the answer to how we can make a decision in the presence of uncertainty, not estimate, and NOT violate the core principles of Microeconomics?
  2. It's not about the developers' like or dislike of estimates. When I was a developer - radar, realtime controls, flight avionics, enterprise IT - I never liked estimates. It's about business. It's not our money. This notion appears to be completely lost. It's the millennials' view of the world. We have two millennials (25 and 26); it's all about ME. Even if those suggesting this are millennials, the message appears to be it's all about me. Go talk to the CFO.

The End

The rhetoric on #NoEstimates has now reached a fever pitch: paid conferences, books, blatant misrepresentations. Time to call BS and move on. This is the last post. I've met many interesting people, in both good and bad ways, and will stay in touch. So long, and thanks for all the fish, as Douglas Adams says. Those with the money will have the final say on this idea.

Categories: Project Management

Netty: Testing encoders/decoders

Mark Needham - Fri, 06/05/2015 - 22:25

I've been working with Netty a bit recently and, having built a pipeline of encoders/decoders as described in this excellent tutorial, I wanted to test that the encoders and decoders were working without having to send real messages around.

Luckily there is an EmbeddedChannel which makes our life very easy indeed.

Let’s say we’ve got a message ‘Foo’ that we want to send across the wire. It only contains a single integer value so we’ll just send that and reconstruct ‘Foo’ on the other side.

We might write the following encoder to do this:

// Examples use Netty 4.0.28.Final; imports needed for the snippets below:
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.MessageToMessageEncoder;

import java.util.List;

public static class MessageEncoder extends MessageToMessageEncoder<Foo>
{
    @Override
    protected void encode( ChannelHandlerContext ctx, Foo msg, List<Object> out ) throws Exception
    {
        ByteBuf buf = ctx.alloc().buffer();
        buf.writeInt( msg.value() );
        out.add( buf );
    }
}
 
public static class Foo
{
    private Integer value;
 
    public Foo(Integer value)
    {
        this.value = value;
    }
 
    public int value()
    {
        return value;
    }
}

So all we’re doing is taking the ‘value’ field out of ‘Foo’ and putting it into the list which gets passed downstream.

Let’s write a test which simulates sending a ‘Foo’ message and has an empty decoder attempt to process the message:

// Additional imports for the test (a JUnit 4 runner with the JUnit 3 Assert class,
// which matches the junit.framework.AssertionFailedError shown below):
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.MessageToMessageDecoder;

import org.junit.Test;

import static junit.framework.Assert.assertEquals;
import static junit.framework.Assert.assertNotNull;

@Test
public void shouldEncodeAndDecodeVoteRequest()
{
    // given
    EmbeddedChannel channel = new EmbeddedChannel( new MessageEncoder(), new MessageDecoder() );
 
    // when
    Foo foo = new Foo( 42 );
    channel.writeOutbound( foo );
    channel.writeInbound( channel.readOutbound() );
 
    // then
    Foo returnedFoo = (Foo) channel.readInbound();
    assertNotNull(returnedFoo);
    assertEquals( foo.value(), returnedFoo.value() );
}
 
public static class MessageDecoder extends MessageToMessageDecoder<ByteBuf>
{
    @Override
    protected void decode( ChannelHandlerContext ctx, ByteBuf msg, List<Object> out ) throws Exception { }
}

So in the test we write ‘Foo’ to the outbound channel and then read it back into the inbound channel and then check what we’ve got. If we run that test now this is what we’ll see:

junit.framework.AssertionFailedError
	at NettyTest.shouldEncodeAndDecodeVoteRequest(NettyTest.java:28)

The message we get back is null which makes sense given that we didn’t bother writing the decoder. Let’s implement the decoder then:

public static class MessageDecoder extends MessageToMessageDecoder<ByteBuf>
{
    @Override
    protected void decode( ChannelHandlerContext ctx, ByteBuf msg, List<Object> out ) throws Exception
    {
        int value = msg.readInt();
        out.add( new Foo(value) );
    }
}

Now if we run our test again it’s all green and happy. We can now go and encode/decode some more complex structures and update our test accordingly.
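
As a next step, here is a minimal sketch of a two-field message - my own example rather than one from the original post - using a hypothetical 'Bar' that carries an int and a string. The string is length-prefixed so the decoder knows how many bytes to read:

import java.nio.charset.StandardCharsets;

public static class Bar
{
    private final int id;
    private final String name;

    public Bar( int id, String name )
    {
        this.id = id;
        this.name = name;
    }

    public int id() { return id; }
    public String name() { return name; }
}

public static class BarEncoder extends MessageToMessageEncoder<Bar>
{
    @Override
    protected void encode( ChannelHandlerContext ctx, Bar msg, List<Object> out ) throws Exception
    {
        ByteBuf buf = ctx.alloc().buffer();
        buf.writeInt( msg.id() );
        byte[] nameBytes = msg.name().getBytes( StandardCharsets.UTF_8 );
        buf.writeInt( nameBytes.length ); // length prefix tells the decoder how much to read
        buf.writeBytes( nameBytes );
        out.add( buf );
    }
}

public static class BarDecoder extends MessageToMessageDecoder<ByteBuf>
{
    @Override
    protected void decode( ChannelHandlerContext ctx, ByteBuf msg, List<Object> out ) throws Exception
    {
        int id = msg.readInt();
        byte[] nameBytes = new byte[msg.readInt()];
        msg.readBytes( nameBytes );
        out.add( new Bar( id, new String( nameBytes, StandardCharsets.UTF_8 ) ) );
    }
}

The same EmbeddedChannel test pattern applies unchanged: write a Bar outbound, feed readOutbound() back in with writeInbound(), and assert on both fields.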

Categories: Programming

Stuff The Internet Says On Scalability For June 5th, 2015

Hey, it's HighScalability time:


Stunning Multi-Wavelength Image Of The Solar Atmosphere.
  • 4x: amount spent by Facebook users
  • Quotable Quotes:
    • Facebook: Facebook's average data set for CF has 100 billion ratings, more than a billion users, and millions of items. In comparison, the well-known Netflix Prize recommender competition featured a large-scale industrial data set with 100 million ratings, 480,000 users, and 17,770 movies (items).
    • @BenedictEvans: The number of photos shared on social networks this year will probably be closer to 2 trillion than to 1 trillion.
    • @jeremysliew: For every 10 photos shared on @Snapchat, 5 are shared on @Facebook and 1 on @Instagtam. 8,696 photos/sec on Snapchat.
    • @RubenVerborgh: “Don’t ask for an API, ask for data access. Tim Berners-Lee called for open data, not open services.” —@pietercolpaert #SemDev2015 #ESWC2015
    • Craig Timberg: When they thought about security, they foresaw the need to protect the network against potential intruders or military threats, but they didn’t anticipate that the Internet’s own users would someday use the network to attack one another. 
    • Janet Abbate: They [ARPANET inventors] thought they were building a classroom, and it turned into a bank.
    • A.C. Hadfield: The power of accurate observation is often called cynicism by those who don’t possess it.
    • @plightbo: Every business is becoming a software business
    • @potsdamnhacker: Replaced Go service with an Erlang one. Already used hot-code reloading, fault tolerance, runtime inspectability to great effect. #hihaters
    • @PHP_CEO: WE SPENT 18 MONTHS MIGRATING FROM A MONOLITH TO MICROSERVICES RESULT:- GITHUB GETS PAID FOR MORE PRIVATE REPOS - FIND/REPLACE IS HARDER
    • @alsargent: Given continuous deployment, average lifetime of a #Docker image @newrelic is 12hrs. Different ops pattern than VMs. #velocityconf
    • @PHP_CEO: ALSO THE NODE GUY WHO WAS A RUBY GUY THAT REWROTE IT ALL BECAME A RUST GUY AND MOVED TO THAILAND TO BECOME A NOMAD STARTUP GUY
    • @abt_programming: "If you think good architecture is expensive, try bad architecture" - Brian Foote - and Joseph Yoder
    • @KlangianProverb: "I thought studying distributed systems would make me understand software better—it made me understand reality better."—Old Klangian Proverb
    • @rachelmetz: google's error rate for image recognition was 28 percent in 2008, now it's like 5 percent, quoc le says.

  • Fear or strength? Apple’s Tim Cook Delivers Blistering Speech On Encryption, Privacy. With Google Now on Tap Google is saying we've joyously crossed the freaky line and we unapologetically plan to leverage our huge lead in machine learning to the max. Apple knows they can't match this feature. Google knows this is a clear and distinct exploitable marketing idea, like a super thin MacBook Air slowly slipping out of a manila envelope.

  • How does Kubernetes compare to Mesos? cmcluck, who works at Google and was one of the founders of the project explains: we looked really closely at Apache Mesos and liked a lot of what we saw, but there were a couple of things that stopped us just jumping on it. (1) it was written in C++ and the containers world was moving to Go -- we knew we planned to make a sustained and considerable investment in this and knew first hand that Go was more productive (2) we wanted something incredibly simple to showcase the critical constructs (pods, labels, label selectors, replication controllers, etc) and to build it directly with the communities support and mesos was pretty large and somewhat monolithic (3) we needed what Joe Beda dubbed 'over-modularity' because we wanted a whole ecosystem to emerge, (4) we wanted 'cluster environment' to be lightweight and something you could easily turn up or turn down, kinda like a VM; the systems integrators i knew who worked with mesos felt that it was powerful but heavy and hard to setup (though i will note our friends at Mesosphere are helping to change this). so we figured doing something simple to create a first class cluster environment for native app management, 'but this time done right' as Tim Hockin likes to say everyday. < Also, CoreOS (YC S13) Raises $12M to Bring Kubernetes to the Enterprise.

  • If structure arises in the universe because electrons can't occupy the same space, why does structure arise in software?

  • The cost of tyranny is far lower than one might hope. How much would it cost for China to intercept connections and replace content flowing at 1.9-terabits/second? About $200K says Robert Graham in Scalability of the Great Cannon. Low? Probably. But for the price of a closet in the new San Francisco you can edit an entire people's perception of the Internet in real-time.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Traffic Light Measurement Indicators – Not All Lights Are Green

Red Light, Green Light


One of the most common indicators used in measurement and status reports is the traffic light indicator. Traffic light indicators have been adopted because they are easy to recognize, represent complex information in a palatable manner, and are easy to explain. The traffic light is elegant in its simplicity; however, that simplicity can also be its undoing. There are three critical issues that traffic lights often exhibit which reduce their usefulness.

  1. Traffic light indicators obscure nuances and trends. Traffic light indicators generally use the simple green, yellow and red scale (good to bad). The indicator can only be set to one of those states, and there is no in-between (no orange or greenish yellow). However, project status is rarely that cut and dried. For example, how would the indicator be set if a project exhibits serious threatening risks but the stakeholders are currently satisfied with progress? Regardless of whether the indicator was set to red or yellow, much of the nuance of the situation would be lost. In addition, the traffic light indicator could not show whether the risks were being mitigated or threatening to become issues.
  2. Traffic light indicators can generate poor personal and team behaviors. One of the most common problems observed with the usage of traffic light indicators is sticky statuses. A status is green or yellow, then seems to suddenly turn yellow or red overnight. The change from one color to another typically surprises management and stakeholders. The change of color/status is often resisted because, with no mechanism to provide an early warning, a change is viewed as a failure. A second common problem is that making the indicator change becomes the project leader's or team's most important goal. When the metric becomes the goal, individuals and teams can be incented to try to game the metric, which removes the focus from the customer.
  3. Traffic light indicators can lead to users of the indicator losing track of how it was calculated. Any high-level indicator, like a traffic light indicator, is a synthesis of many individual measures and metrics. Meg Gillikin, Deloitte Consulting, suggests "that you should have definitions of what each state means with specifics" (see the sketch after this list). The users of the indicator need to understand how it is set and the factors that go into setting the metric. Lack of understanding of any indicator can lead managers into making poor decisions.
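
To make that last point concrete, here is a minimal sketch of an indicator whose states carry explicit definitions. This is my own illustration, not from the article, and the measures and thresholds are invented for the example:

public enum Light { GREEN, YELLOW, RED }

// Each state has a written, checkable definition instead of a judgment call.
// The thresholds below are invented for the example.
public static Light projectStatus(double scheduleSlipPercent, int unmitigatedCriticalRisks) {
    if (scheduleSlipPercent > 10.0 || unmitigatedCriticalRisks > 0) {
        return Light.RED;    // RED: slip over 10% or any unmitigated critical risk
    }
    if (scheduleSlipPercent > 5.0) {
        return Light.YELLOW; // YELLOW: slip between 5% and 10%
    }
    return Light.GREEN;      // GREEN: slip of 5% or less and no critical risks
}

With the definitions written down, the indicator can be audited, trends can be reported alongside it, and a state change is far less likely to arrive as a surprise.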

Dácil Castelo, Leda MC, sums up the use of traffic light indicators: "The use of red, green and yellow provides a quick, visual summary of the status in a simple and easy way (everyone knows what a traffic light is). On the other hand, easy to understand doesn't mean easy to calculate, nor necessarily useful for the user." Remember that with any indicator there is a basic issue: if an indicator doesn't actually help teams and leaders deliver value, it will be viewed as overhead.


Categories: Process Management

Android Design Support Library

Android Developers Blog - Thu, 06/04/2015 - 23:47

Posted by Ian Lake, Developer Advocate

Android 5.0 Lollipop was one of the most significant Android releases ever, in no small part due to the introduction of material design, a new design language that refreshed the entire Android experience. Our detailed spec is a great place to start to adopt material design, but we understand that it can be a challenge for developers, particularly ones concerned with backward compatibility. With a little help from the new Android Design Support Library, we’re bringing a number of important material design components to all developers and to all Android 2.1 or higher devices. You’ll find a navigation drawer view, floating labels for editing text, a floating action button, snackbar, tabs, and a motion and scroll framework to tie them together.

Navigation View

The navigation drawer can be an important focal point for identity and navigation within your app and consistency in the design here can make a considerable difference in how easy your app is to navigate, particularly for first time users. NavigationView makes this easier by providing the framework you need for the navigation drawer as well as the ability to inflate your navigation items through a menu resource.

You use NavigationView as DrawerLayout’s drawer content view with a layout such as:

<android.support.v4.widget.DrawerLayout
        xmlns:android="http://schemas.android.com/apk/res/android"
        xmlns:app="http://schemas.android.com/apk/res-auto"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:fitsSystemWindows="true">

    <!-- your content layout -->

    <android.support.design.widget.NavigationView
            android:layout_width="wrap_content"
            android:layout_height="match_parent"
            android:layout_gravity="start"
            app:headerLayout="@layout/drawer_header"
            app:menu="@menu/drawer"/>
</android.support.v4.widget.DrawerLayout>

You’ll note two attributes for NavigationView: app:headerLayout controls the (optional) layout used for the header. app:menu is the menu resource inflated for the navigation items (which can also be updated at runtime). NavigationView takes care of the scrim protection of the status bar for you, ensuring that your NavigationView interacts with the status bar appropriately on API21+ devices.

The simplest drawer menus will be a collection of checkable menu items:

<group android:checkableBehavior="single">
    <item
        android:id="@+id/navigation_item_1"
        android:checked="true"
        android:icon="@drawable/ic_android"
        android:title="@string/navigation_item_1"/>
    <item
        android:id="@+id/navigation_item_2"
        android:icon="@drawable/ic_android"
        android:title="@string/navigation_item_2"/>
</group>

The checked item will appear highlighted in the navigation drawer, ensuring the user knows which navigation item is currently selected.

You can also use subheaders in your menu to separate groups of items:

<item
    android:id="@+id/navigation_subheader"
    android:title="@string/navigation_subheader">
    <menu>
        <item
            android:id="@+id/navigation_sub_item_1"
            android:icon="@drawable/ic_android"
            android:title="@string/navigation_sub_item_1"/>
        <item
            android:id="@+id/navigation_sub_item_2"
            android:icon="@drawable/ic_android"
            android:title="@string/navigation_sub_item_2"/>
    </menu>
</item>

You'll get callbacks on selected items by setting an OnNavigationItemSelectedListener using setNavigationItemSelectedListener(). This provides you with the MenuItem that was clicked, allowing you to handle selection events, change the checked status, load new content, programmatically close the drawer, or take any other action you may want.
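
A minimal sketch of that wiring - the view ids here are assumptions for the example:

NavigationView navigationView = (NavigationView) findViewById(R.id.nav_view);
final DrawerLayout drawerLayout = (DrawerLayout) findViewById(R.id.drawer_layout);
navigationView.setNavigationItemSelectedListener(
        new NavigationView.OnNavigationItemSelectedListener() {
            @Override
            public boolean onNavigationItemSelected(MenuItem menuItem) {
                menuItem.setChecked(true);   // highlight the selected navigation item
                drawerLayout.closeDrawers(); // close the drawer after a selection is made
                // swap in the content for the selected item here
                return true;
            }
        });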

Floating labels for editing text

Even the humble EditText has room to improve in material design. While an EditText alone will hide the hint text after the first character is typed, you can now wrap it in a TextInputLayout, causing the hint text to become a floating label above the EditText, ensuring that users never lose context in what they are entering.

In addition to showing hints, you can also display an error message below the EditText by calling setError().
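
For example, a minimal validation sketch, assuming a TextInputLayout with id input_layout wrapping an EditText in your layout:

TextInputLayout inputLayout = (TextInputLayout) findViewById(R.id.input_layout);
EditText editText = inputLayout.getEditText(); // the EditText wrapped by the TextInputLayout
if (TextUtils.isEmpty(editText.getText())) {
    inputLayout.setError("This field is required"); // shown below the EditText
} else {
    inputLayout.setError(null); // clears a previously shown error
}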

Floating Action Button

A floating action button is a round button denoting a primary action on your interface. The Design library’s FloatingActionButton gives you a single consistent implementation, by default colored using the colorAccent from your theme.

In addition to the normal size floating action button, it also supports the mini size (fabSize="mini") when visual continuity with other elements is critical. As FloatingActionButton extends ImageView, you’ll use android:src or any of the methods such as setImageDrawable() to control the icon shown within the FloatingActionButton.

Snackbar

Providing lightweight, quick feedback about an operation is a perfect opportunity to use a snackbar. Snackbars are shown on the bottom of the screen and contain text with an optional single action. They automatically time out after the given time length by animating off the screen. In addition, users can swipe them away before the timeout.

By including the ability to interact with the Snackbar through swiping it away or actions, these are considerably more powerful than toasts, another lightweight feedback mechanism. However, you’ll find the API very familiar:

Snackbar
  .make(parentLayout, R.string.snackbar_text, Snackbar.LENGTH_LONG)
  .setAction(R.string.snackbar_action, myOnClickListener)
  .show(); // Don’t forget to show!

You’ll note the use of a View as the first parameter to make() - Snackbar will attempt to find an appropriate parent of the Snackbar’s view to ensure that it is anchored to the bottom.

Tabs

Switching between different views in your app via tabs is not a new concept to material design and they are equally at home as a top level navigation pattern or for organizing different groupings of content within your app (say, different genres of music).

The Design library’s TabLayout implements both fixed tabs, where the view’s width is divided equally between all of the tabs, as well as scrollable tabs, where the tabs are not a uniform size and can scroll horizontally. Tabs can be added programmatically:

TabLayout tabLayout = ...;
tabLayout.addTab(tabLayout.newTab().setText("Tab 1"));

However, if you are using a ViewPager for horizontal paging between tabs, you can create tabs directly from your PagerAdapter’s getPageTitle() and then connect the two together using setupWithViewPager(). This ensures that tab selection events update the ViewPager and page changes update the selected tab.
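
A minimal sketch of that connection, assuming a ViewPager with id viewpager, a TabLayout with id tabs, and an adapter whose getPageTitle() supplies the tab text:

ViewPager viewPager = (ViewPager) findViewById(R.id.viewpager);
viewPager.setAdapter(pagerAdapter); // pagerAdapter overrides getPageTitle() for each page
TabLayout tabLayout = (TabLayout) findViewById(R.id.tabs);
tabLayout.setupWithViewPager(viewPager); // tab selection and page changes now stay in sync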

CoordinatorLayout, motion, and scrolling

Distinctive visuals are only one part of material design: motion is also an important part of making a great material designed app. While there are a lot of parts of motion in material design including touch ripples and meaningful transitions, the Design library introduces CoordinatorLayout, a layout which provides an additional level of control over touch events between child views, something which many of the components in the Design library take advantage of.

CoordinatorLayout and floating action buttons

A great example of this is when you add a FloatingActionButton as a child of your CoordinatorLayout and then pass that CoordinatorLayout to your Snackbar.make() call - instead of the snackbar displaying over the floating action button, the FloatingActionButton takes advantage of additional callbacks provided by CoordinatorLayout to automatically move upward as the snackbar animates in, and return to its position when the snackbar animates out, on Android 3.0 and higher devices - no extra code required.

CoordinatorLayout also provides a layout_anchor attribute which, along with layout_anchorGravity, can be used to place floating views, such as the FloatingActionButton, relative to other views.

CoordinatorLayout and the app bar

The other main use case for the CoordinatorLayout concerns the app bar (formerly action bar) and scrolling techniques. You may already be using a Toolbar in your layout, allowing you to more easily customize the look and integration of that iconic part of an app with the rest of your layout. The Design library takes this to the next level: using an AppBarLayout allows your Toolbar and other views (such as tabs provided by TabLayout) to react to scroll events in a sibling view marked with a ScrollingViewBehavior. Therefore you can create a layout such as:

 <android.support.design.widget.CoordinatorLayout
        xmlns:android="http://schemas.android.com/apk/res/android"
        xmlns:app="http://schemas.android.com/apk/res-auto"
        android:layout_width="match_parent"
        android:layout_height="match_parent">
     
     <!-- Your Scrollable View -->
    <android.support.v7.widget.RecyclerView
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            app:layout_behavior="@string/appbar_scrolling_view_behavior" />

    <android.support.design.widget.AppBarLayout
            android:layout_width="match_parent"
            android:layout_height="wrap_content">
        <android.support.v7.widget.Toolbar
                  ...
                  app:layout_scrollFlags="scroll|enterAlways"/>

        <android.support.design.widget.TabLayout
                  ...
                  app:layout_scrollFlags="scroll|enterAlways"/>
     </android.support.design.widget.AppBarLayout>
</android.support.design.widget.CoordinatorLayout>

Now, as the user scrolls the RecyclerView, the AppBarLayout can respond to those events by using the children’s scroll flags to control how they enter (scroll on screen) and exit (scroll off screen). Flags include:

  • scroll: this flag should be set for all views that want to scroll off the screen - views that do not use this flag remain pinned to the top of the screen
  • enterAlways: this flag ensures that any downward scroll will cause this view to become visible, enabling the 'quick return' pattern
  • enterAlwaysCollapsed: when your view has declared a minHeight and you use this flag, your view will only enter at its minimum height (i.e., 'collapsed'), only re-expanding to its full height when the scrolling view has reached its top.
  • exitUntilCollapsed: this flag causes the view to scroll off until it is 'collapsed' (its minHeight) before exiting

One note: all views using the scroll flag must be declared before views that do not use the flag. This ensures that all views exit from the top, leaving the fixed elements behind.

Collapsing Toolbars

Adding a Toolbar directly to an AppBarLayout gives you access to the enterAlwaysCollapsed and exitUntilCollapsed scroll flags, but not the detailed control on how different elements react to collapsing. For that, you can use CollapsingToolbarLayout:

<android.support.design.widget.AppBarLayout
        android:layout_height="192dp"
        android:layout_width="match_parent">
    <android.support.design.widget.CollapsingToolbarLayout
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            app:layout_scrollFlags="scroll|exitUntilCollapsed">
        <android.support.v7.widget.Toolbar
                android:layout_height="?attr/actionBarSize"
                android:layout_width="match_parent"
                app:layout_collapseMode="pin"/>
        </android.support.design.widget.CollapsingToolbarLayout>
</android.support.design.widget.AppBarLayout>

This setup uses CollapsingToolbarLayout’s app:layout_collapseMode="pin" to ensure that the Toolbar itself remains pinned to the top of the screen while the view collapses. Even better, when you use CollapsingToolbarLayout and Toolbar together, the title will automatically appear larger when the layout is fully visible, then transition to its default size as it is collapsed. Note that in those cases, you should call setTitle() on the CollapsingToolbarLayout, rather than on the Toolbar itself.

In addition to pinning a view, you can use app:layout_collapseMode="parallax" (and optionally app:layout_collapseParallaxMultiplier="0.7" to set the parallax multiplier) to implement parallax scrolling (say of a sibling ImageView within the CollapsingToolbarLayout). This use case pairs nicely with the app:contentScrim="?attr/colorPrimary" attribute for CollapsingToolbarLayout, adding a full bleed scrim when the view is collapsed.

CoordinatorLayout and custom views

One thing that is important to note is that CoordinatorLayout doesn't have any innate understanding of how a FloatingActionButton or AppBarLayout works - it just provides an additional API in the form of a CoordinatorLayout.Behavior, which allows child views to better control touch events and gestures as well as declare dependencies between each other and receive callbacks via onDependentViewChanged().

Views can declare a default Behavior by using the CoordinatorLayout.DefaultBehavior(YourView.Behavior.class) annotation, or set it in your layout files with the app:layout_behavior="com.example.app.YourView$Behavior" attribute. This framework makes it possible for any view to integrate with CoordinatorLayout.
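
As a minimal sketch of what a custom Behavior might look like - my own example, which simply slides a child view out of the way of a snackbar, much as FloatingActionButton's built-in Behavior does:

public class SlideUpBehavior extends CoordinatorLayout.Behavior<View> {

    public SlideUpBehavior() {
    }

    // Required for inflation via the app:layout_behavior attribute
    public SlideUpBehavior(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    // Declare a dependency on any Snackbar layout in the same CoordinatorLayout
    @Override
    public boolean layoutDependsOn(CoordinatorLayout parent, View child, View dependency) {
        return dependency instanceof Snackbar.SnackbarLayout;
    }

    // Called whenever the snackbar moves; shift the child up by the overlap
    @Override
    public boolean onDependentViewChanged(CoordinatorLayout parent, View child, View dependency) {
        float translationY = Math.min(0, dependency.getTranslationY() - dependency.getHeight());
        child.setTranslationY(translationY);
        return true;
    }
}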

Available now!

The Design library is available now, so make sure to update the Android Support Repository in the SDK Manager. You can then start using the Design library with a single new dependency:

 compile 'com.android.support:design:22.2.0'

Note that as the Design library depends on the Support v4 and AppCompat Support Libraries, those will be included automatically when you add the Design library dependency. We also took care that these new widgets are usable in the Android Studio Layout Editor’s Design view (find them under CustomView), giving you an easier way to preview some of these new components.

The Design library, AppCompat, and all of the Android Support Library are important tools in providing the building blocks needed to build a modern, great looking Android app without building everything from scratch.

Join the discussion on +Android Developers
Categories: Programming

Neo4j: Cypher – Step by step to creating a linked list of adjacent nodes using UNWIND

Mark Needham - Thu, 06/04/2015 - 23:17

In late 2013 I wrote a post showing how to create a linked list connecting different football seasons together using Neo4j’s Cypher query language, a post I’ve frequently copy & pasted from!

Now, 18 months later and using Neo4j 2.2 rather than 2.0, we can actually solve this problem in what I believe is a more intuitive way using the UNWIND function. Credit for the idea goes to Michael; I'm just the messenger.

To recap, we had a collection of football seasons and we wanted to connect adjacent seasons to each other to allow easy querying between seasons. The following is the code we used:

CREATE (:Season {name: "2013/2014", timestamp: 1375315200})
CREATE (:Season {name: "2012/2013", timestamp: 1343779200})
CREATE (:Season {name: "2011/2012", timestamp: 1312156800})
CREATE (:Season {name: "2010/2011", timestamp: 1280620800})
CREATE (:Season {name: "2009/2010", timestamp: 1249084800})
MATCH (s:Season)
WITH s
ORDER BY s.timestamp
WITH COLLECT(s) AS seasons
 
FOREACH(i in RANGE(0, length(seasons)-2) | 
    FOREACH(si in [seasons[i]] | 
        FOREACH(si2 in [seasons[i+1]] | 
            MERGE (si)-[:NEXT]->(si2))))

Our goal is to replace those 3 FOREACH loops with something a bit easier to understand. To start with, let’s run the first part of the query to get some intuition of what we’re trying to do:

MATCH (s:Season)
WITH s
ORDER BY s.timestamp
RETURN COLLECT(s) AS seasons
 
==> +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> | seasons                                                                                                                                                                                                                                                     |
==> +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> | [Node[1973]{timestamp:1249084800,name:"2009/2010"},Node[1972]{timestamp:1280620800,name:"2010/2011"},Node[1971]{timestamp:1312156800,name:"2011/2012"},Node[1970]{timestamp:1343779200,name:"2012/2013"},Node[1969]{timestamp:1375315200,name:"2013/2014"}] |
==> +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

So at this point we’ve got all the seasons in an array going from 2009/2010 up to 2013/2014. We want to create a ‘NEXT’ relationship between 2009/2010 -> 2010/2011, 2010/2011 -> 2011/2012 and so on.

To achieve this we need to get the adjacent seasons split into two columns, like so:

2009/2010	2010/2011
2010/2011	2011/2012
2011/2012	2012/2013
2012/2013	2013/2014

If we can get the data into that format then we can apply a MERGE between the two fields to create the ‘NEXT’ relationship. So how do we do that?

If we were in Python we’d be calling for the zip function which we could apply like this:

>>> seasons = ["2009/2010", "2010/2011", "2011/2012", "2012/2013", "2013/2014"]
 
>>> zip(seasons, seasons[1:])
[('2009/2010', '2010/2011'), ('2010/2011', '2011/2012'), ('2011/2012', '2012/2013'), ('2012/2013', '2013/2014')]

Unfortunately we don’t have an equivalent function in Cypher but we can achieve the same outcome by creating 2 columns with adjacent integer values. The RANGE and UNWIND functions are our friends here:

return RANGE(0,4)
 
==> +-------------+
==> | RANGE(0,4)  |
==> +-------------+
==> | [0,1,2,3,4] |
==> +-------------+
UNWIND RANGE(0,4) as idx 
RETURN idx, idx +1;
 
==> +--------------+
==> | idx | idx +1 |
==> +--------------+
==> | 0   | 1      |
==> | 1   | 2      |
==> | 2   | 3      |
==> | 3   | 4      |
==> | 4   | 5      |
==> +--------------+
==> 5 rows

Now all we need to do is plug this code into our original query, where 'idx' and 'idx + 1' represent indexes into the array of seasons. We use a range which stops 1 element early since there isn't anywhere to connect our last season to:

MATCH (s:Season)
WITH s
ORDER BY s.timestamp
WITH COLLECT(s) AS seasons
UNWIND RANGE(0,LENGTH(seasons) - 2) as idx 
RETURN seasons[idx], seasons[idx+1]
 
==> +-------------------------------------------------------------------------------------------------------+
==> | seasons[idx]                                      | seasons[idx+1]                                    |
==> +-------------------------------------------------------------------------------------------------------+
==> | Node[1973]{timestamp:1249084800,name:"2009/2010"} | Node[1972]{timestamp:1280620800,name:"2010/2011"} |
==> | Node[1972]{timestamp:1280620800,name:"2010/2011"} | Node[1971]{timestamp:1312156800,name:"2011/2012"} |
==> | Node[1971]{timestamp:1312156800,name:"2011/2012"} | Node[1970]{timestamp:1343779200,name:"2012/2013"} |
==> | Node[1970]{timestamp:1343779200,name:"2012/2013"} | Node[1969]{timestamp:1375315200,name:"2013/2014"} |
==> +-------------------------------------------------------------------------------------------------------+
==> 4 rows

Now we’ve got all the adjacent seasons lined up we complete the query with a call to MERGE:

MATCH (s:Season)
WITH s
ORDER BY s.timestamp
WITH COLLECT(s) AS seasons
UNWIND RANGE(0,LENGTH(seasons) - 2) as idx 
WITH seasons[idx] AS s1, seasons[idx+1] AS s2
MERGE (s1)-[:NEXT]->(s2)
 
==> +-------------------+
==> | No data returned. |
==> +-------------------+
==> Relationships created: 4

And we’re done. Hopefully I can remember this approach more than I did the initial one!

Categories: Programming

Myths Abound

Herding Cats - Glen Alleman - Thu, 06/04/2015 - 18:35

Myths, misinformation, and magic bullets abound in the software development business. No more so than in the management and estimating of complex development projects.

The Standish report is a classic example of applying How to Lie with Statistics. This book is a must read for anyone responsible for spending other people's money.

The current misrepresentation approach is to quote people like Bent Flyvbjerg - who, by the way, does superior work in the domain of mass transportation and public works. Bent, along with many others (one of whom is a client), has studied the problems with megaprojects. The classic misuse of these studies starts with reading the introduction of a report and going no further. Here's a typical summary.

9/10 Costs (are) underestimated. 9/10 Benefits (are) overestimated. 9/10 Schedules (are) underestimated.

OK, we all know that's what the report says, now what?

  • Do you have a root cause for each of the project's overages?
  • Did those root causes have sufficient assessment to show:
    • Primary Effect – is any effect we want to prevent.
    • Action – momentary causes that bring conditions together to cause an effect.
    • Conditions – the fundamental causal element of all that happens. It is made up of an effect and its immediate causes that represent a single causal relationship.
      • As a minimum, the causes in this set consist of an action and one or more conditions.
      • Causal sets, like causes, cannot exist alone.
      • They are part of a continuum of causes with no beginning or end, which leads to the next principle, that Causes and Effects are Part of an Infinite Continuum of Causes.

No? Then the use of reports, and broad unqualified clips from reports, is just Lying With Statistics.

The classic example from the same source states that Steve McConnell PROVES estimates can't be done in his book - which is, of course, the antithesis of both the title and the content of the book.

This approach is pervasive in places where doing your homework appears to be a step that was skipped.

From our own research in DOD ACAT1 programs (>$5B, which qualifies for megaprojects), here are the root causes of program problems in our domain.

[Chart: root causes of program problems in DOD ACAT1 programs]

When we hear some extracted statement from a source in another domain - Bent's domain, for example, is large construction infrastructure projects: roads, rail, ports - moved to our domain without the underlying details of the data, the root causes, and all the possible corrective actions to avoid the problem in the first place, that idea is basically bogus. Don't listen. Do your own investigation and learn how not to succumb to those who Lie With Statistics.

So let's look at some simple questions to ask when we hear there are problems with our projects, or when problems with projects in other domains are used to try to convince us they apply to our domain.

  • Was there a credible set of requirements that were understood by those initiating the project?
    • No? You're going to be over budget, late, and the product is not likely to meet the needs of those paying.
  • Was there a credible basis of estimate for the defined work?
    • No? Then whatever estimate was produced is not going to match what the project actually costs, its duration, or the planned value.
  • Was the project defined in a risk-adjusted manner for both reducible and irreducible uncertainties?
    • No? Then those uncertainties are still there and will cause the project to be over budget and late.
    • If the reducible risks are unaddressed, they will come true with their probabilistic outcomes and will drive the project over budget and late, and it will not provide the needed capabilities.
    • If the irreducible risks don't have margin, you're late and over budget before you even start.
  • Was there a means of measuring physical percent complete in meeting the needed Effectiveness and Performance measures?
    • No? Then money will be spent and time will pass, and there is no way to know if the project is actually producing value and meeting its goals.

These questions and their lack of answers are at the heart of most project performance problems. Pointing out all the problems is very easy. Providing corrective actions once the root cause is discovered is harder - and mandatory, by the way - because Risk Management is How Adults Manage Projects.

First, let's look at what Bent says.

He states that the political economy of megaprojects - that is, massive investments of a billion dollars or more in infrastructure or technology - is that they consistently end up costing more, with smaller benefits than projected, and almost always end up with costs that exceed the benefits.

So the first question is: are we working in the megaproject domain? No? Then how can we assert Bent's assessments are applicable? If we assert them anyway, we're Lying with Statistics. (Read Huff's book to find out why.)

Flyvbjerg then explores the reasons for the poor predictions and poor performance of giant investment projects and what might be done to improve their effectiveness. Have we explored the reasons why our projects overrun? No? Then we haven't done our homework and are speculating on false data - another How to Lie With Statistics.

Stating that projects overrun 9 times out of 10, without also finding the reasons why, is the perfect How to Lie with Statistics: make a statement, provide no supporting data, be the provocateur.

The End

When we read a statement presented without domain or context, without a corrective action, taken out of context to convey a message different from the one its original author intended, and without evidence that it applies in our domain, that is Lying with Statistics - don't listen; go find out for yourself.

Related articles

  • The Dysfunctional Approach to Using "5 Whys"
  • There is No Such Thing as Free
  • Mr. Franklin's Advice
  • Essential Reading List for Managing Other People's Money
  • Eyes Wide Shut - A View of No Estimates
Categories: Project Management

Paper: Heracles: Improving Resource Efficiency at Scale

Underutilization and segregation are the classic strategies for ensuring resources are available when work absolutely must get done. Keep a database on its own server so that when load spikes, another VM or high-priority thread can't compete for RAM, power, disk, or CPU. And when you really need fast, reliable networking, you don't rely on QoS - you keep a dedicated line.

Google flips the script in Heracles: Improving Resource Efficiency at Scale, shooting for high resource utilization while combining different load profiles.

I'm assuming the project name Heracles was chosen not simply for his legendary strength, but because when strength failed, Heracles could always depend on his wits. Who can ever forget when Heracles tricked Atlas into taking the sky back onto his shoulders? Good times.

The problem: better utilization of compute resources while complying with service level objectives (SLOs) for latency-critical (LC) and best-effort batch (BE) tasks:

Categories: Architecture

How Do I Get A Programming Job Without Experience?

Making the Complex Simple - John Sonmez - Thu, 06/04/2015 - 16:00

In this episode, I give my advice on getting a programming job without any experience. Full transcript: John: Hey, this is John Sonmez from simpleprogrammer.com. So I've got another question for you today. This one is about - it's kind of for you newbies out there that are wondering how to get started in the field. This […]

The post How Do I Get A Programming Job Without Experience? appeared first on Simple Programmer.

Categories: Programming

The Dysfunctional Approach to Using "5 Whys" - Redux

Herding Cats - Glen Alleman - Wed, 06/03/2015 - 21:54

It's been popular recently in some agile circles to mention using the 5 Whys when asking about dysfunction. This common and misguided approach assumes - wrongly - that causal relationships are linear and problems come from a single source. For example:

Estimates are the smell of dysfunction. Let's ask the 5 Whys to reveal these dysfunctions

The natural tendency is to assume that in asking the 5 Whys there is a single thread connecting cause and effect from beginning to end. This single source of the problem - the symptom - is labeled the Root Cause. The question is: is that root cause the actual root cause? The core problem is that the 5 Whys is not really seeking a solution, just eliciting more symptoms masked as causes.

A simple example from Apollo Root Cause Analysis illustrates the problem.

Say we're in the fire prevention business. If preventing fires is our goal, let's look for the causes of fire and determine the corrective actions needed to actually prevent fire from occurring. In this example, let's say we've identified 3 potential causes of fire. There is ...

  1. An ignition source
  2. Combustible material
  3. Oxygen

So what is the root cause of the fire? To prevent the fire - and, in the follow-on example, to prevent a dysfunction - we must find at least one cause of the fire that can be acted on to meet the goals and objectives of preventing the fire AND that is within our control.
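
A rough sketch of that prevention logic (my own illustration, with an invented in_control flag): the effect occurs only when all causes are present at the same time, so any cause we control is a candidate corrective action:

# Fire requires ALL causes to be present in the same space and time.
causes <- data.frame(
  cause      = c("ignition source", "combustible material", "oxygen"),
  present    = c(TRUE, TRUE, TRUE),
  in_control = c(TRUE, TRUE, FALSE)  # assume we cannot remove the oxygen
)

fire_occurs <- all(causes$present)   # TRUE: every cause is present

# Candidate corrective actions: remove any single cause we can act on.
causes$cause[causes$in_control]      # "ignition source" "combustible material"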

Here's a briefing used now for our development and deployment processes in the health insurance domain:

Root cause analysis master plan from Glen Alleman

The notion that Estimates are the smell of dysfunction in a software development organization and asking the 5 Whys in search for the Root Cause is equally flawed. 

The need to estimate or not estimate has not been established. It is presumed that the estimating process creates the dysfunction, and then the search - through the 5 Whys - is a false attempt to categorize the root causes of this dysfunction. The supposed dysfunction is then reverse engineered to be connected to the estimating process. This is not only a naïve approach to solving the dysfunction, it inverts the logic by ignoring the need to estimate. Without confirmation that estimates are needed or not needed, the search for the cause of the dysfunction has no purposeful outcome.

The decision that estimates are needed or not needed does not belong to those being asked to produce the estimates. That decision belongs to those consuming the estimate information in the decision-making processes of the business - those whose money is being spent.

And of course those consuming the estimates need to confirm they are operating their decision-making processes in some framework that requires estimates. It could very well be that those providing the money don't actually need an estimate from those providing the value. The value at risk may be low enough - say, 100 hours of development for a DB upgrade. But when the value at risk is sufficiently large - a determination made, again, by those providing the money - then the business has a legitimate need to know how much, when, and what. In this case, decisions are based on the Microeconomics of opportunity cost for uncertain outcomes in the future.
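
As a sketch of that microeconomic framing (all numbers are invented for illustration), choosing between two uncertain options means comparing expected values, and every term in that comparison is an estimate:

# Invented numbers: two options with uncertain payoffs (in k$).
p_success <- c(optionA = 0.7, optionB = 0.5)  # estimated probability of success
payoff    <- c(optionA = 400, optionB = 900)  # estimated value if successful
cost      <- c(optionA = 150, optionB = 300)  # estimated cost to execute

expected_value <- p_success * payoff - cost
expected_value  # optionA = 130, optionB = 150: forgoing B is the opportunity cost of choosing A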

This is the basis of estimating and the determination of the real root causes of the problems with estimates. Saying we're bad at estimating is NOT the root cause. And it is never the reason not to estimate. If we are bad at estimating, and if we do have confirmation and optimism biases, then fix them. Remove the impediments to producing credible estimates. Because those estimates are needed to make decisions in any non-trivial value-at-risk work.

 

Related articles

  • Let's Get The Dirt On Root Cause Analysis
  • The Fallacy of the Planning Fallacy
  • Mr. Franklin's Advice
  • The Dysfunctional Approach to Using "5 Whys"
  • Essential Reading List for Managing Other People's Money
Categories: Project Management

What Does it Mean to Poke a Complex System?

A little bit of follow up...

In How Can We Build Better Complex Systems? Containers, Microservices, And Continuous Delivery I had a question about what Mary Poppendieck meant when she talked about poking complex systems.

InfoQ interviewed Mary & Tom Poppendieck and found out the answer:

Categories: Architecture

R: ggplot geom_density ‚Äď Error in exists(name, envir = env, mode = mode) : argument ‚Äúenv‚ÄĚ is missing, with no default

Mark Needham - Wed, 06/03/2015 - 06:52

Continuing on from yesterday’s blog post where I worked out how to clean up the Think Bayes Price is Right data set, the next task was to plot a distribution of the prices of show case items.

To recap, this is what the data frame we’re working with looks like:

library(dplyr)
 
df2011 = read.csv("~/projects/rLearning/showcases.2011.csv", na.strings = c("", "NA"))
df2011 = df2011 %>% na.omit()
 
> df2011 %>% head()
              X Sep..19 Sep..20 Sep..21 Sep..22 Sep..23 Sep..26 Sep..27 Sep..28 Sep..29 Sep..30 Oct..3
3    Showcase 1   50969   21901   32815   44432   24273   30554   20963   28941   25851   28800  37703
4    Showcase 2   45429   34061   53186   31428   22320   24337   41373   45437   41125   36319  38752
6         Bid 1   42000   14000   32000   27000   18750   27222   25000   35000   22500   21300  21567
7         Bid 2   34000   59900   45000   38000   23000   18525   32000   45000   32000   27500  23800
9  Difference 1    8969    7901     815   17432    5523    3332   -4037   -6059    3351    7500  16136
10 Difference 2   11429  -25839    8186   -6572    -680    5812    9373     437    9125    8819  14952
...

So our goal is to plot the density of the 'Showcase 1' items. Unfortunately those aren't currently stored in a way that makes this easy for us. We need to flip the data frame so that we have a row for each date/price type/price:

PriceType  Date     Price
Showcase 1 Sep..19  50969
Showcase 2 Sep..19  21901
...
Showcase 1 Sep..20  45429
Showcase 2 Sep..20  34061

The reshape library’s melt function is our friend here:

library(reshape)
meltedDf = melt(df2011, id=c("X"))
 
> meltedDf %>% sample_n(10)
                X variable value
643    Showcase 1  Feb..24 27883
224    Showcase 2  Nov..10 34089
1062 Difference 2   Jun..4  9962
770    Showcase 2  Mar..28 39620
150  Difference 2  Oct..24  9137
431  Difference 1   Jan..4  7516
345         Bid 1  Dec..12 21569
918  Difference 2    May.1 -2093
536    Showcase 2  Jan..31 30918
502         Bid 2  Jan..23 27000

Now we need to plug this into ggplot. We’ll start by just plotting all the prices for showcase 1:

> library(ggplot2)
> ggplot(aes(x = value), data = meltedDf %>% filter(X == "Showcase 1")) +
    geom_density()
 
Error in exists(name, envir = env, mode = mode) : 
  argument "env" is missing, with no default


This error usually means that you’ve passed an empty data set to ggplot which isn’t the case here, but if we extract the values column we can see the problem:

> meltedDf$value[1:10]
 [1] "50969" "45429" "42000" "34000" "8969"  "11429" "21901" "34061" "14000" "59900"

They are all strings! That makes it very difficult to plot a density curve, which relies on the data being continuous. Let's fix that and try again:

meltedDf$value = as.numeric(meltedDf$value)

ggplot(aes(x = value), data = meltedDf %>% filter(X == "Showcase 1")) +
  geom_density()

2015 06 03 06 46 48

If we want to show the curves for both showcases we can tweak our code slightly:

ggplot(meltedDf %>% filter(grepl("Showcase", X)), aes(x = value, colour = X)) + 
  geom_density() + 
  theme(legend.position="top")
2015 06 03 06 50 35

Et voila!

Categories: Programming

Humpty Dumpty and #NoEstimates

Herding Cats - Glen Alleman - Wed, 06/03/2015 - 06:06

"When I use a word," Humpty Dumpty said in a rather scornful tone, "it means just what I choose it to mean - neither more nor less."

"The question is," said Alice, "whether you can make words mean so many different things."

"The question is," said Humpty Dumpty, "which is to be master."

Through the Looking Glass, Chapter 6

The mantra of #NoEstimates is that No Estimates is not about Not Estimating. Along with that oxymoron comes

Forecasting is Not Estimating

  • Forecasting the future based on past performance is not the same as estimating the future from past performance.
  • The Humpty Dumpty logic is Forecasting ≠ Estimating.

This of course redefines the standard definitions of both terms. Estimating is a rough calculation or judgment of a value, number, quantity, or extent of some outcome.

An estimate is an approximation, prediction, or projection of a quantity based on experience and/or information available at the time, with the recognition that other pertinent facts are unclear or unknown.

  • Let's estimate how many Great Horned Owls are in the county by sampling.
  • Let's estimate the total cost of this project using reference classes assigned to work element durations and running a Monte Carlo simulation (see the sketch below).

Forecasting is a prediction of a future event

  • Let's produce a weather forecast for the next five days.

Both Estimating and Forecasting result in a probabilistic output in the presence of uncertainty.
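
Here is a minimal sketch of the Monte Carlo approach mentioned above, with invented three-point ranges standing in for reference class data:

# Monte Carlo estimate of total duration from three work elements.
# The min / most-likely / max values are invented for illustration.
set.seed(42)
n <- 10000

sample_task <- function(min, mode, max) {
  # PERT-style sample using a scaled Beta distribution
  alpha <- 1 + 4 * (mode - min) / (max - min)
  beta  <- 1 + 4 * (max - mode) / (max - min)
  min + (max - min) * rbeta(n, alpha, beta)
}

total <- sample_task(5, 8, 15) + sample_task(3, 4, 9) + sample_task(10, 14, 30)

quantile(total, c(0.5, 0.8))  # median and 80% confidence duration, in days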

Slicing is Not Estimating??

Slicing work into smaller pieces so that a "standard" size can be used to project work effort and completion time is a standard basis of estimate in many domains. So slicing is Not Estimating in the #NoEstimates paradigm? In fact, slicing is Estimating - another inversion of the terms, where No means Yes.

Past Performance is #NoEstimates

Using past performance to estimate future performance is core to all estimating processes. Time series analysis to estimate possible future outcomes is easily done with ARIMA, four lines of R, and some raw data, as shown in The Flaw of Averages. But as described there, care is needed to confirm the future is like the past.
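
For example, a throughput forecast really is about four lines of R with the forecast package (the data here is invented):

library(forecast)  # provides auto.arima() and forecast()

throughput <- ts(c(21, 19, 24, 22, 25, 23, 27, 26, 28, 30, 29, 31))  # invented weekly counts
fit <- auto.arima(throughput)  # fit an ARIMA model to past performance
fc  <- forecast(fit, h = 5)    # probabilistic forecast, 5 periods ahead
plot(fc)                       # point forecast with 80% and 95% prediction intervals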

When We Redefine Words to Suit Our Needs, We're Humpty Dumpty

Lewis Carroll's Alice in Wonderland is a political allegory of 19th century England. When #NoEstimates redefines established mathematical terms like Forecasting and Estimating, and ignores the underlying mathematics of time series forecasting (ARIMA, for example), it willfully ignores established practices and replaces them with untested conjectures.

No Estimates

The key claim here is that there are ways to make decisions with NO ESTIMATES. OK - show how any of those ways is not actually an estimating technique, no matter how simple or flawed an estimating technique it may be.

Related articles

  • Mr. Franklin's Advice
  • There is No Such Thing as Free
  • The Fallacy of the Planning Fallacy
  • Do The Math
  • Monte Carlo Simulation of Project Performance
  • Essential Reading List for Managing Other People's Money
Categories: Project Management

Traffic Light Indicators for Metrics and KPIs

Stop on Red!

One of the most common indicators used in measurement and status reporting is the traffic light indicator.  Traffic light indicators are most commonly shown as a set of red, yellow and green lights.  The metaphor draws from the nearly ubiquitous traffic light seen at almost every intersection.  Traffic light indicators are part of a family of indicators that combine indices and scales. Indices are typically used when a single measure or metric does not tell the story; an index reflects a composite of measures. Measures and/or metrics are averaged together or combined using more complex mathematics.  The index is then transposed onto a scale so that it can be interpreted and used.  For example, wind chill is an index that combines temperature and wind speed into the temperature perceived by the skin; once calculated, wind chill is shown on a temperature scale. As a project status indicator, a traffic light typically reflects a synthesis of many attributes.  The traffic light uses a simple scale in which red means trouble, yellow means caution, and green means clear sailing. Traffic lights are adopted for three highly related reasons.

  1. Traffic lights are easy to recognize. The traffic light is a common symbol that every driver has been taught to recognize. Attaching a traffic light instantly signals that a summary of status is being communicated.
  2. Traffic lights provide a consolidated view of complex attributes. The traffic light scale is a simple metaphor with three possible indications of overall performance.  Even in a simple project, attributes such as budget, client satisfaction, and risk must be synthesized into a single perception of status that can be communicated. Traffic light indicators force a synthesized view.
  3. Traffic lights are easy to explain. Once an organization reaches consensus on the business rules that set a traffic light indicator to red, yellow, or green, it is easy to explain: red is bad and requires immediate action; yellow means caution, and performance issues require mitigation; green means business as usual.  Paul Byrnes, CMMI Lead Appraiser, when asked why people are drawn to traffic lights, noted that "colors are easy… except for people that can't see them… ."

 

Karl Jentzsch, a colleague at David Consulting Group, summarized the case for traffic light indicators: "the appeal is that it provides an easily manageable number of 'buckets' to drop things into where the categorical distinctions are still fairly clear and inherently understood - good/go (green), bad/no (red), and in between (yellow)."

I often hear traffic lights defended with statements like "we have always used traffic lights" or "they are required by the PMO." These are excuses that reflect an abrogation of thought and responsibility. It is too easy to succumb to the simplicity of the indicator without reflecting on all the hard work and analysis needed to set it. Typically there should be a lot of math and analysis behind setting the traffic light to red, yellow, or green.  The math and the analysis are where the real magic happens, and they require thought and understanding. As an indicator, the traffic light is elegant in its simplicity; however, that simplicity can also be its undoing.
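
As a sketch of those business rules (the weights and thresholds here are hypothetical - each organization must derive its own), the math behind a traffic light might look like this:

# Composite index from normalized measures, mapped to a traffic light.
# Weights and thresholds are hypothetical.
measures <- c(budget_health = 0.9, client_satisfaction = 0.6, risk_exposure = 0.4)
weights  <- c(budget_health = 0.4, client_satisfaction = 0.3, risk_exposure = 0.3)

index <- sum(measures * weights)  # weighted composite on a 0-1 scale

status <- if (index >= 0.75) "green" else if (index >= 0.5) "yellow" else "red"
status  # "yellow" for these example values (index = 0.66)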


Categories: Process Management

Microservices architecture principle #6: One team is responsible for full life cycle of a Micro service

Xebia Blog - Tue, 06/02/2015 - 20:08

Microservices are a hot topic. Because of that a lot of people are saying a lot of things. To help organizations make the best of this new architectural style Xebia has defined a set of principles that we feel should be applied when implementing a Microservice Architecture. Today's blog is the last in a series about our Microservices principles. This blog explains why a Microservice should be the responsibility of exactly one team (but one team may be responsible for more services).

Being responsible for the full life cycle of a service means that a single team can deploy and manage a service as well as create new versions and retire obsolete ones. This means that users of the service have a single point of contact for all questions regarding the use of the service. This property makes it easier to track changes in a service. Developers can focus on a specific area of the business they are supporting so they will become specialists in that area. This in turn will lead to better quality. The need to also fix errors and problems in production systems is a strong motivator to make sure code works correctly and problems are easy to find.
Having different teams working on different services introduces a challenge that may lead to a more robust software landscape. If TeamA needs a change in TeamB's service in order to complete its task, some form of planning has to take place. Both teams have to cater for slipping schedules and unforeseen events that cause the delivery date of a feature to change. However, depending on a commitment made by another team is tricky; there are lots of valid reasons why a change may be late (e.g. production issues, illness temporarily reducing a team's capacity, or high-priority changes taking precedence). So TeamA may never depend on TeamB finishing before the deadline. TeamA will learn to protect its weekends and evenings by changing its architecture. Not making assumptions about another team's schedule, in a Microservice environment, will therefore lead to more robust software.

Using Lines of Code as a Software Size Measure - New Lecture Posted

10x Software Development - Steve McConnell - Tue, 06/02/2015 - 18:08

I've posted this week's lecture in my Understanding Software Projects series at https://cxlearn.com. Most of the lectures that have been posted are still free. Lectures posted so far include:  

0.0 Understanding Software Projects - Intro
     0.1 Introduction - My Background
     0.2 Reading the News

1.0 The Software Lifecycle Model - Intro
     1.1 Variations in Iteration 
     1.2 Lifecycle Model - Defect Removal

2.0 Software Size
     2.05 Size - Comments on Lines of Code (New)
     2.1 Size - Staff Sizes 
     2.2 Size - Schedule Basics 

Check out the lectures at http://cxlearn.com!

Understanding Software Projects - Steve McConnell