
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Sponsored Post: Apple, Tumblr, Gawker, FoundationDB, Monitis, BattleCry, Surge, Cloudant, CopperEgg, Logentries, Couchbase, MongoDB, BlueStripe, AiScaler, Aerospike, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Apple has multiple openings. Changing the world is all in a day's work at Apple. Imagine what you could do here. 
    • Senior Software Engineer - iOS Systems. Do you love building highly scalable, distributed web applications? Does the idea of a fast-paced environment make your heart leap? Do you want your technical abilities to be challenged every day, and for your work to make a difference in the lives of millions of people? If so, the iOS Systems Carrier Services team is looking for a talented software engineer who is not afraid to share knowledge, think outside the box, and question assumptions. Please apply here.
    • Software Engineering Manager, IS&T WWDR Dev Systems. The WWDR development team is seeking a hands-on engineering manager with a passion for building large-scale, high-performance applications. The successful candidate will be collaborating with Worldwide Developer Relations (WWDR) and various engineering teams throughout Apple. You will lead a team of talented engineers to define and build large-scale web services and applications. Please apply here.
    • C++ Senior Developer and Architect - Maps. The Maps Team is looking for a senior developer and architect to support and grow some of the core backend services that support Apple Maps' Front End Services. The ideal candidate would have experience with system architecture, as well as the design, implementation, and testing of individual components, but would also be comfortable with multiple scripting languages. Please apply here.
    • Site Reliability Engineer. The iOS Systems team is building out a Site Reliability organization. In this role you will be expected to work hand-in-hand with the teams across all phases of the project lifecycle to support systems and to take ownership as they move from QA through integrated testing, certification and production.  Please apply here.

  • Make Tumblr fast, reliable and available for hundreds of millions of visitors and tens of millions of users. As a Site Reliability Engineer you are a software developer with a love of highly performant, fault-tolerant, massively distributed systems. Apply here.

  • Systems & Networking Lead at Gawker. We are looking for someone to take the initiative on the lowest layers of the Kinja platform: all the way down to power, and up through hardware, networking, load-balancing, provisioning and base-configuration. The goal for this quarter is a roughly 30% capacity expansion; the goal for next quarter is a rolling CentOS 7 upgrade as well as planning/quoting/pitching our 2015 footprint and budget. For the full job spec and to apply, click here: http://grnh.se/t8rfbw

  • BattleCry, the newest ZeniMax studio in Austin, is seeking a qualified Front End Web Engineer to help create and maintain our web presence for AAA online games. This includes the game accounts web site, enhancing the studio website, our web and mobile-based storefront, and the front end for support tools. http://jobs.zenimax.com/requisitions/view/540

  • FoundationDB is seeking outstanding developers to join our growing team and help us build the next generation of transactional database technology. You will work with a team of exceptional engineers with backgrounds from top CS programs and successful startups. We don’t just write software. We build our own simulations, test tools, and even languages to write better software. We are well-funded, offer competitive salaries and option grants. Interested? You can learn more here.

  • UI Engineer. AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data. AppDynamics, the leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • OmniTI has a reputation for scalable web applications and architectures, but we still lean on our friends and peers to see how things can be done better. Surge started as the brainchild of our employees wanting to bring the best and brightest in Web Operations to our own backyard. Now in its fifth year, Surge has become the conference on scalability and performance. Early Bird rate in effect until 7/24!
Cool Products and Services
  • Get your Black Belt in NoSQL. Couchbase Learning Services presents hands-on training for mission-critical NoSQL. This is the fastest, most reliable way to gain skills in advanced NoSQL application development and systems operation, through expert instruction with hands-on practical labs, in a structured learning environment. Trusted training. From Couchbase. View the schedule, read attendee testimonials & enroll now.

  • Now track your log activities with Log Monitor and be on the safe side! Monitor any type of log file and proactively define potential issues that could hurt your business' performance. Detect your log changes for: Error messages, Server connection failures, DNS errors, Potential malicious activity, and much more. Improve your systems and behaviour with Log Monitor.

  • The NoSQL "Family Tree" from Cloudant explains the NoSQL product landscape using an infographic. The highlights: NoSQL arose from "Big Data" (before it was called "Big Data"); NoSQL is not "One Size Fits All"; Vendor-driven versus Community-driven NoSQL.  Create a free Cloudant account and start the NoSQL goodness

  • Finally, log management and analytics can be easy, accessible across your team, and provide deep insights into data that matters across the business - from development, to operations, to business analytics. Create your free Logentries account here.

  • CopperEgg. Simple, Affordable Cloud Monitoring. CopperEgg gives you instant visibility into all of your cloud-hosted servers and applications. Cloud monitoring has never been so easy: lightweight, elastic monitoring; root cause analysis; data visualization; smart alerts. Get Started Now.

  • Whitepaper Clarifies ACID Support in Aerospike. In our latest whitepaper, author and Aerospike VP of Engineering & Operations, Srini Srinivasan, defines ACID support in Aerospike, and explains how Aerospike maintains high consistency by using techniques to reduce the possibility of partitions.  Read the whitepaper: http://www.aerospike.com/docs/architecture/assets/AerospikeACIDSupport.pdf.

  • BlueStripe FactFinder Express is the ultimate tool for server monitoring and solving performance problems. Monitor URL response times and see if the problem is the application, a back-end call, a disk, or OS resources.

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.  http://aiscaler.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

Now vs. Not-Now Prioritization Along with Medium-Term Goals

Mike Cohn's Blog - Tue, 08/19/2014 - 15:00

The following was originally published in Mike Cohn's monthly newsletter. If you like what you're reading, sign up to have this content delivered to your inbox weeks before it's posted on the blog, here.

In last month’s newsletter I wrote about how we make personal financial decisions in a now vs. not-now manner. We don’t map out must-haves, should-haves, could-haves, and won’t-haves. And I promised that in this month’s newsletter I would cover a simple approach to now vs. not-now planning while still accommodating working toward a bigger vision for a product.

I always recommend having a medium-term vision for where a product is headed. I find a three-month horizon works well. At the start of each quarter, a product owner should put a stake in the ground saying “Here’s where we want to be in three months.” This is done in conjunction with the team and other stakeholders, but the ultimate vision for a product is up to the product owner.

A product owner doesn’t need to be overly committed to the vision—it can be changed. But, without a stake in the ground a few months out, prioritization decisions are likely to be driven by whatever emergencies erupt right before sprint planning meetings. Without a vision, the urgent always wins over the strategic.

For choosing between competing ideas for a medium-term vision, I like using a formal approach—that is, something I can explain to someone else. I want to be able to say, “I chose to focus on such-and-such rather than this-and-that” and then show some analysis indicating how I made that decision. I’ve written elsewhere about relative weighting, theme screening and theme scoring—and we have tools on this website for performing those analyses.

But at the start of each sprint, a product owner needs to make the smaller now vs. not-now decisions of prioritizing user stories to be worked on in the next sprint. Rather than using a formal, explainable approach for that, I advise product owners to consider four things about the product backlog items they are evaluating:

  1. how valuable the feature will be
  2. the learning that will occur by developing the feature
  3. the riskiness of the feature
  4. any change in the relative cost of the feature

I do this by first sorting the candidate product backlog items on attribute one, the value of each feature. There’s nothing special about this; it’s how product people have prioritized features for years.

But I don’t stop there. I use items two through four to adjust the now-sorted backlog. For the most part, I will move product backlog items up rather than down based on the influence of these additional factors. Let me explain what each factor is about.

The second factor refers to how much learning will occur by developing the feature. Learning can take many forms—for example, a team might learn about the technology being used (“this vendor’s library isn’t as easy as we were told it would be”) or a product owner might learn how well users respond to a new user interface. If the learning that will result from developing a particular product backlog item is significant, I will move that item up the product backlog so it is developed in the coming sprint.

As for riskiness, if a given risk cannot be avoided, I prefer doing that product backlog item sooner rather than later so that I can learn the impact of the risk. And so, I will move the product backlog item up into the current sprint. If, however, there is a chance of avoiding a risk entirely (perhaps by not doing the feature at all), I will move that product backlog item out of the current sprint. I’ll then hopefully continue to do that in each subsequent sprint, thereby avoiding that risk entirely.

Finally, some features can be cheap if they are done now or expensive if they are put off. When I see such an item on a product backlog, I will move it into the current sprint to avoid the higher cost of delaying the feature.
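
To make the mechanics concrete, here is a minimal Ruby sketch of that two-step pass (an illustration only — the items, the scores, and the 0.3 weighting are invented, and in practice this adjustment is a judgment call rather than arithmetic):

# Factor one: sort candidate items by value, highest first. Factors two to
# four then move items up when learning, unavoidable risk, or cost of delay
# outweigh the value gap.
Item = Struct.new(:name, :value, :learning, :unavoidable_risk, :cost_of_delay)

items = [
  Item.new("checkout flow",  8, 1, 0, 0),
  Item.new("vendor library", 5, 5, 3, 0),  # big learning opportunity, unavoidable risk
  Item.new("tax rules",      6, 0, 0, 4)   # cheap now, expensive if delayed
]

ranked = items.sort_by { |i| -i.value }

adjusted = ranked.each_with_index.sort_by do |item, position|
  position - 0.3 * (item.learning + item.unavoidable_risk + item.cost_of_delay)
end.map(&:first)

adjusted.map(&:name)  # => ["vendor library", "checkout flow", "tax rules"]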

By combining the use of these four factors when selecting items for a sprint with a formal approach to establishing a medium-term, three-month vision, you’ll be able to successfully prioritize in a now vs. not-now manner within the context of achieving a bigger goal.


AngularJS directives for c3.js

Gridshore - Tue, 08/19/2014 - 10:19

For one of my projects we wanted to create some nice charts. This feels like something you often want but do not do because it takes too much time. This time we really needed it. We had a look at the D3.js library: a very nice library, but with so many options and a lot to do yourself. Then we found c3.js; check out the blog post by Roberto: Creating charts with C3.js. Since I do a lot with AngularJS, I wanted to integrate these c3.js charts with AngularJS. I already wrote a piece about the integration. Now I have gone one step further by creating a set of AngularJS directives.

You can read the blog on the trifork blog:

http://blog.trifork.com/2014/08/19/angularjs-directives-for-c3-js-chart-library/

The post AngularJS directives for c3.js appeared first on Gridshore.

Categories: Architecture, Programming

Systems Thinking and Capabilities Based Planning

Herding Cats - Glen Alleman - Tue, 08/19/2014 - 04:23

The term Systems Thinking is often used in vague and unspecified ways to mean think about the system and all will emerge in a way needed to produce value.

Systems Thinking is a term tossed around many times with little regard for what it actually means and its relationship to Systems Engineering and Engineered Systems.

But systems thinking is much more than that. Systems thinking provides a rigorous way of integrating people, purpose, process and performance and: †

  • relating systems to their environment,
  • understanding complex problem situations,
  • maximising the outcomes achieved,
  • avoiding or minimising the impact of unintended consequences,
  • aligning teams, disciplines, specialisms and interest groups,
  • managing uncertainty, risk and opportunity.

† "How Systems Thinking Contributes to Systems Engineering," INCOSE UK, Z7, Issue 1.0, March 2010.

Categories: Project Management

How to choose the right project? Decision making frameworks for software organizations

Software Development Today - Vasco Duarte - Tue, 08/19/2014 - 04:00

Frameworks to choose the best projects in organizations are a dime a dozen.

We have our NPV (net present value), we have our customized Criteria Matrix, we have Strategic alignment, we have Risk/Value scoring, and the list goes on and on.
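
As a reminder of what the first of those actually computes, here is a minimal NPV sketch in Ruby (illustrative only; the cash flows and discount rate are invented):

# Net present value: discount each period's cash flow back to today.
# cash_flows[0] is the upfront investment (negative); rate is per period.
def npv(rate, cash_flows)
  cash_flows.each_with_index.inject(0.0) do |total, (cash_flow, period)|
    total + cash_flow / (1.0 + rate)**period
  end
end

npv(0.10, [-100_000, 40_000, 50_000, 60_000])  # => ~22,765 (worth doing, on paper)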

In every organization there will be a preference for one of these or similar methods to choose where to invest people’s precious time and money.

Are all these frameworks good? No, but they aren’t bad either. They all have some potential positive impact, at least when it comes to reflection. They help executive teams reflect on where they want to take their organizations, and how each potential project will help (or hinder) those objectives.

So far, so good.

“Everybody’s got a plan, until they get punched in the face” ~Tyson

Surviving wrong decisions made with perfect data

However, reality is seldom as structured and predictable as the plans make it out to be. Despite the obvious value that the frameworks above have for decision making, they can’t be perfect because they lack one crucial aspect of reality: feedback.

Models lack one critical property of reality: feedback.

As soon as we start executing a particular project, we have chosen a path and have made allocation of people’s time and money. That, in turn, sets in motion a series of other decisions: we may hire some people, we may subcontract part of the project, etc.

All of these subsequent decisions will have even further impacts as the projects go on, and they may lead to even more decisions being made. Each of these decisions will also have an impact on the outcome of the chosen projects, as well as on other sub-decisions for each project. Perhaps the simplest example being the conflicts that arise from certain tasks for different projects having to be executed by the same people (shared skills or knowledge).

And at this point we have to ask: even assuming that we had perfect data when we chose the project based on one of the frameworks above, how do we make sure that we are still working on the most important and valuable projects for our organization?

Independently from the decisions made in the past, how do we ensure we are working on the most important work today?

The feedback bytes back

This illustrates one of the most common problems with decision making frameworks: their static nature. They are about making decisions "now", not "continuously". Decision making frameworks are great at the time when you need to make a decision, but once the wheels are in motion, you will need to adapt. You will need to understand and harness the feedback of your decisions and change what is needed to make sure you are still focusing on the most valuable work for your organization.

All decision frameworks have one critical shortcoming: they are static by design.

How do we improve decision making after the fact?

First, we must understand that any work that is “in flight” (aka in progress) in IT projects has a value of zero, i.e., in IT projects no work has value until it is in use by someone, somewhere. And at that point it has both value (the benefit) and cost (how much we spend maintaining that functionality).

This dynamic means that even if you have chosen the right project to start with, you have to make sure that you can stop any project, at any time. Otherwise you will have committed to invest more time and more money (by making irreversible “big bang” decisions) into projects that may prove to be much less valuable than you expected when you started them. This phenomenon of continuing to invest beyond the project benefit/cost trade-off point is known as the Sunk Cost Fallacy, and it is a very common problem in software organizations, because reversing a decision made using a trusted process is very difficult, both practically (stopping a project = losing all value) and bureaucratically (how do we prove that the decision to stop is better than the decision to start the project?).

Can we treat the Sunk Cost Fallacy syndrome?

While using the decision frameworks listed above (or others), don’t forget that the most important decision you can make is to keep your options open in a way that allows you to stop work on projects that prove less valuable than expected, and to invest more in projects that prove more valuable than expected.

In my own practice this is one of the reasons why I focus on one of the #NoEstimates rules: Always know what is the most valuable thing to work on, and work only on that.

So my suggestion is: even when you score projects and make decisions on those scores, always keep in mind that you may be wrong. So, invest in small increments into the projects you believe are valuable, but be ready to reassess and stop investing if those projects prove less valuable than other projects that will become relevant later on.

The #NoEstimates approach I use allows me to do this at three levels:

  • a) Portfolio level: by reviewing constant progress in each project and assessing the value delivered, as well as constantly preparing to stop each project by releasing regularly to a production-like environment. Portfolio flexibility.
  • b) Project level: by separating each piece of value (User Story or Feature) into an independent work package that can be delivered independently from all other project work. Scope flexibility.
  • c) User Story / Feature level: by keeping User Stories and Features as small as possible (1 day for User Stories, 1-2 weeks for Features), and releasing them independently at fixed time intervals. Work item flexibility.

Do you want to know more about adaptive decision frameworks? Woody Zuill and I will be hosting a workshop in Helsinki to present our #NoEstimates ideas and to discuss decision making frameworks for software projects that build on our #NoEstimates work.

You can sign up here. But before you do, email me and get a special discount code.

If you manage software organizations and projects, there will be other interesting workshops for you in the same days. For example, the #MobProgramming workshop where Woody Zuill shows you how he has been able to help his teams significantly improve their well-being and performance. #MobProgramming may well be a breakthrough in Agile management.

Picture credit: John Hammink, follow him on twitter

Which Size Metric Should I Use? Part 1

Size matters.

All jokes aside, size matters. Size matters because, at least intellectually, we all recognize that there is a relationship between the size of a product and the effort required to build it. We might argue over the degree of the relationship or whether there are other attributes required to define the relationship, but the point is that size and effort are related. Size is important for estimating project effort, cost and duration. Size also provides us with a platform for topics as varied as scope management (defining scope creep and churn) to benchmarking. In a nutshell, size matters both as an input into the planning and controlling of development processes and as a denominator to enable comparison between projects.
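
One published example of such a relationship is the basic COCOMO model, in which effort grows slightly faster than linearly with size. A rough Ruby sketch (the coefficients are Boehm's basic organic-mode values; treat the output as a ballpark, not a prediction):

# Basic COCOMO, organic mode: effort in person-months from size in KLOC.
def estimated_effort(size_kloc, a = 2.4, b = 1.05)
  a * size_kloc**b
end

estimated_effort(50)  # => ~146 person-months for a 50 KLOC product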

Finding the specific measure of software size for your organization is part art and part science. The size measure you select must deliver the data needed to meet the measurement goal and fit within the corporate culture (culture includes both the people and the methodologies the organization uses). A framework for evaluation would include the following categories:

  • Supports measurement goal
  • Industry recognized
  • Published methodology
  • Useable when needed
  • Accurate
  • Easy enough

Categories: Process Management

The 5 worst mistakes people make with requirements

Software Requirements Blog - Seilevel.com - Mon, 08/18/2014 - 17:00
By: Ulf Eriksson Working with requirements is a serious responsibility. The people behind them are those that provide the rest of the team with something to develop and something to test. After the initial briefing and discussion about a new project, they effectively kick-start the process that will eventually result in a finished product. In […]
Categories: Requirements

1 Aerospike server X 1 Amazon EC2 instance = 1 Million TPS for just $1.68/hour

This is a guest post by Anshu Prateek, Tech Lead, DevOps at Aerospike and Rajkumar Iyer, Member of the Technical Staff at Aerospike.

Cloud infrastructure services like Amazon EC2 have proven their worth with wild success. The ease of scaling up resources, spinning them up as and when needed and paying by unit of time has unleashed developer creativity, but virtualized environments are not widely considered as the place to run high performance applications and databases.

Cloud providers, however, have come a long way in their offerings and deserve a second review of their performance capabilities. After showing 1 Million TPS with Aerospike on bare metal servers, we decided to investigate cloud performance and, in the process, bust the myth that cloud != high performance.

We examined a variety of Amazon instances and discovered the recipe for processing 1 Million TPS in RAM on 1 Aerospike server on a single C3.8xlarge instance - for just $1.68/hr!

According to internetlivestats.com, there are 7.5k new tweets, 45k Google searches, and 2.3 million emails sent every second. What would you build if you could process 1 million database transactions per second for just $1.68/hr?
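
To put that price in perspective (my arithmetic, not from the post): 1 million transactions per second sustained for an hour is 1,000,000 x 3,600 = 3.6 billion transactions, so $1.68 per hour works out to roughly $0.47 per billion transactions.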

Categories: Architecture

Office Hours for Google’s Udacity MOOCs

Google Code Blog - Mon, 08/18/2014 - 16:39
By Peter Lubbers, a Program Manager in charge of Google’s Scalable Developer Programs, which include MOOC developer training. Peter is the author of "Pro HTML5 Programming" (Apress) and, yes, his car's license plate is HTML5!
At Google I/O, we launched four new Udacity MOOCs, helping developers learn how to work with Android, Web, UX Design, and Cloud technology. We’re humbled that almost 100,000 students have signed up for these new courses since then. Over the next two weeks, we’ll be hosting on-air office hours to help out students who are working through some of these classes. Ask your questions via the Moderator links below, and the Google Experts will answer them live. Please join us if you are taking the class, or just interested in the answers.

Class: Web Performance Optimization — Critical Rendering Path
Class: Android Fundamentals
Class: Building Scalable Apps with Google App Engine
You can find all of the Google Udacity MOOCs at www.udacity.com/google.


Posted by Mano Marks, Scalable Developer Advocacy Team


Categories: Programming

Taking Action

Making the Complex Simple - John Sonmez - Mon, 08/18/2014 - 15:00

On any given day my inbox is full of emails from software developers asking me for advice on all kinds of topics. Even though many of these questions are unique, I’ve found that many of the emails have one root, all-encompassing solution: taking action. Most people never actually do anything with their lives. Most people […]

The post Taking Action appeared first on Simple Programmer.

Categories: Programming

Three Types of Activities

NOOP.NL - Jurgen Appelo - Mon, 08/18/2014 - 13:47
Three Types of Activities

Three types of activities are keeping you busy:

Type 1: Bad

These are the things you should not be doing. They are wasting your time but still you do them. Stop it!

The post Three Types of Activities appeared first on NOOP.NL.

Categories: Project Management

Why Is It Hard To Manage Projects?

Herding Cats - Glen Alleman - Mon, 08/18/2014 - 05:36

In the absence of a control system that provides feedback on progress against the plan, the project is just wandering around looking for what Done looks like. The notion of emergent requirements is fine. The notion of emergent capabilities is not.

If we don't know what capabilities are needed to fulfill the business plan or fulfill the mission need, then we're on a Death March project. We don't know what value is being produced, when we will be done, or how much this will cost when we're done.

This is the role of the project control system. Without a control system there is no way to use feedback to steer the project toward success.

Control systems from Glen Alleman

So when we hear that we can make decisions without estimating the impact of those decisions on the future outcomes of the project, we'll know to call BS. First, not knowing this impact in the decision making process violates the principles of Microeconomics, and second, there is no way to close the loop to generate the variance signal needed to steer the project to success.
Categories: Project Management

SPaMCAST 303 – Topics in Estimation, Software Sensei, Education


Listen to the Software Process and Measurement Cast 303

Software Process and Measurement Cast number 303 features our essay titled “Topics in Estimation.” This essay is a collection of smaller essays that cover a wide range of issues affecting estimation. Topics include estimation and customer satisfaction, risk and project estimates, estimation frameworks, and size and estimation. Something to help and irritate everyone: we are talking about estimation – what would you expect?

We also have a new installment of Kim Pries’s Software Sensei column.  In this installment Kim discusses education as defect prevention.  Do we really believe that education improves productivity, quality and time to market?

Listen to the Software Process and Measurement Cast 303

Next

Software Process and Measurement Cast number 304 will feature our long-awaited interview with Jamie Lynn Cooke, author of The Power of the Agile Business Analyst. We discussed the definition of an Agile business analyst and what they actually do in Agile projects. Jamie provides a clear and succinct explanation of the role and value of Agile business analysts.

Upcoming Events

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1.  I have a great discount code!!!! Contact me if you are interested!

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes toward bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management

Ruby: Create and share Google Drive Spreadsheet

Mark Needham - Sun, 08/17/2014 - 22:42

Over the weekend I’ve been trying to write some code to help me create and share a Google Drive spreadsheet and for the first bit I started out with the Google Drive gem.

This worked reasonably well but that gem doesn’t have an API for changing the permissions on a document so I ended up using the google-api-client gem for that bit.

This tutorial provides a good quick start for getting up and running but it still has a manual step to copy/paste the ‘OAuth token’ which I wanted to get rid of.

The first step is to create a project via the Google Developers Console. Once the project is created, click through to it and then click on ‘credentials’ on the left menu. Click on the “Create new Client ID” button to create the project credentials.

You should see the newly created client ID and client secret on the right hand side of the screen.

These are the credentials that we’ll use in our code.

Since I now have two libraries I need to satisfy the OAuth credentials for both, preferably without getting the user to go through the process twice.

After a bit of trial and error I realised that it was easier to get the google-api-client to handle authentication and just pass in the token to the google-drive code.

I wrote the following code using Sinatra to handle the OAuth authorisation with Google:

require 'sinatra'
require 'json'
require "google_drive"
require 'google/api_client'
 
CLIENT_ID = 'my client id'
CLIENT_SECRET = 'my client secret'
OAUTH_SCOPE = 'https://www.googleapis.com/auth/drive https://docs.google.com/feeds/ https://docs.googleusercontent.com/ https://spreadsheets.google.com/feeds/'
REDIRECT_URI = 'http://localhost:9393/oauth2callback'
 
helpers do
  def partial (template, locals = {})
    haml(template, :layout => false, :locals => locals)
  end
end
 
enable :sessions
 
get '/' do
  haml :index
end
 
configure do
  google_client = Google::APIClient.new
  google_client.authorization.client_id = CLIENT_ID
  google_client.authorization.client_secret = CLIENT_SECRET
  google_client.authorization.scope = OAUTH_SCOPE
  google_client.authorization.redirect_uri = REDIRECT_URI
 
  set :google_client, google_client
  set :google_client_driver, google_client.discovered_api('drive', 'v2')
end
 
 
post '/login/' do
  client = settings.google_client
  redirect client.authorization.authorization_uri
end
 
get '/oauth2callback' do
  authorization_code = params['code']
 
  client = settings.google_client
  client.authorization.code = authorization_code
  client.authorization.fetch_access_token!
 
  oauth_token = client.authorization.access_token
 
  session[:oauth_token] = oauth_token
 
  redirect '/'
end

And this is the code for the index page:

%html
  %head
    %title Google Docs Spreadsheet
  %body
    .container
      %h2
        Create Google Docs Spreadsheet
 
      %div
        - unless session['oauth_token']
          %form{:name => "spreadsheet", :id => "spreadsheet", :action => "/login/", :method => "post", :enctype => "text/plain"}
            %input{:type => "submit", :value => "Authorise Google Account", :class => "button"}
        - else
          %form{:name => "spreadsheet", :id => "spreadsheet", :action => "/spreadsheet/", :method => "post", :enctype => "text/plain"}
            %input{:type => "submit", :value => "Create Spreadsheet", :class => "button"}

We initialise the Google API client inside the ‘configure’ block before each request gets handled and then from ‘/’ the user can click a button which does a POST request to ‘/login/’.

‘/login/’ redirects us to the OAuth authorisation URI where we select the Google account we want to use and login if necessary. We’ll then get redirected back to ‘/oauth2callback’ where we extract the authorisation code and then get an authorisation token.

We’ll store that token in the session so that we can use it later on.

Now we need to create the spreadsheet and share that document with someone else:

post '/spreadsheet/' do
  client = settings.google_client
  if session[:oauth_token]
    client.authorization.access_token = session[:oauth_token]
  end
 
  google_drive_session = GoogleDrive.login_with_oauth(session[:oauth_token])
 
  spreadsheet = google_drive_session.create_spreadsheet(title = "foobar")
  ws = spreadsheet.worksheets[0]
 
  ws[2, 1] = "foo"
  ws[2, 2] = "bar"
  ws.save()
 
  file_id = ws.worksheet_feed_url.split("/")[-4]
 
  drive = settings.google_client_driver
 
  new_permission = drive.permissions.insert.request_schema.new({
      'value' => "some_other_email@gmail.com",
      'type' => "user",
      'role' => "reader"
  })
 
  result = client.execute(
    :api_method => drive.permissions.insert,
    :body_object => new_permission,
    :parameters => { 'fileId' => file_id })
 
  if result.status == 200
    p result.data
  else
    puts "An error occurred: #{result.data['error']['message']}"
  end
 
  "spreadsheet created and shared"
end

Here we create a spreadsheet with some arbitrary values using the google-drive gem before granting permission to a different email address than the one which owns it. I’ve given that other user read permission on the document.

One other thing to keep in mind is which ‘scopes’ the OAuth authentication is for. If you authenticate for one URI and then try to do something against another one you’ll get a ‘Token invalid – AuthSub token has wrong scope‘ error.

Categories: Programming


Little's Law in 3D

Xebia Blog - Sun, 08/17/2014 - 16:21

The much-used relation between average cycle time, average total work and input rate (or throughput) is known as Little's Law. It is often used to argue that it is a good thing to work on fewer items at the same time (as a team or as an individual), thus lowering the average cycle time. In this blog I will discuss the less known generalisation of Little's Law, giving an almost unlimited number of additional relations. The only limit is your imagination.

I will show relations for the average 'Total Operational Cost in the system' and for the average 'Just-in-Timeness'.

First I will describe some rather straightforward generalisations and in the third part some more complex variations on Little's Law.

Little's Law Variations

As I showed in the previous blogs (Applying Little's Law in Agile Games and Why Little's Law Works...Always), Little's Law in fact states that measuring the total area from left-to-right equals summing it from top-to-bottom.
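
In its familiar form, with \overline{N} the average number of items in the system, \lambda the average input rate, and \overline{W} the average waiting (cycle) time, this reads:

 \overline{N} = \lambda \overline{W} 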

Once we realise this, it is easy to see some straightforward generalisations which are well-known. I'll mention them here briefly without going into too much detail.

Subsystem

Suppose a system that consists of 1 or more subsystems, e.g. in a kanban system consisting of 3 columns we can identify the subsystems corresponding to:

  1. first column (e.g. 'New') in 'red',
  2. second column (e.g. 'Doing') in 'yellow',
  3. third column (e.g. 'Done') in 'green'

See the figure on the right.

By colouring the subsystems different from each other we see immediately that Little's Law applies to the system as a whole as well as to every subsystem ('red' and 'yellow' area).

Note: for the average input rate consider only the rows that have the corresponding color, i.e. for the input rate of the column 'Doing' consider only the rows that have a yellow color; in this case the average input rate equals 8/3 items per round (entering the 'Doing' column). Likewise for the 'New' column.

Work Item Type

Until now I assumed only 1 type of work item. In practice teams deal with more than one work item type. Examples include class of service lanes, user stories, and production incidents. Again, by colouring the various work item types differently we see that Little's Law applies to each individual work item type.

In the example on the right, we have coloured user stories ('yellow') and production incidents ('red'). Again, Little's Law applies to both the red and yellow areas separately.

Doing the math we see that for 'user stories' (yellow area):

  • Average number in the system (N) = (6+5+4)/3 = 5 user stories,
  • Average input rate (\lambda) = 6/3 = 2 user stories per round,
  • Average waiting time (W) = (3+3+3+3+2+1)/6 = 15/6 = 5/2 rounds.

As expected, the average number in the system equals the average input rate times the average waiting time.
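
This bookkeeping is easy to check mechanically. A minimal Ruby sketch using the numbers above (my illustration, not part of the original post):

# Little's Law check for the user stories (yellow area): the average number
# in the system should equal the input rate times the average cycle time.
rounds      = 3
cycle_times = [3, 3, 3, 3, 2, 1]  # rounds each of the 6 stories spent in the system

input_rate     = cycle_times.size / rounds.to_f                  # => 2.0 stories per round
avg_cycle_time = cycle_times.inject(:+) / cycle_times.size.to_f  # => 2.5 rounds
avg_in_system  = input_rate * avg_cycle_time                     # => 5.0 stories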

The same calculation can be made for the production incidents which I leave as an exercise to the reader.

Expedite Items

Finally, consider items that enter and spend time in an 'expedite' lane. In Kanban an expedite lane is used for items that need special priority. Usually the policy for handling such items is that (a) there can be at most 1 such item in the system at any time, (b) the team stops working on anything else but this item so that it is completed as fast as possible, (c) it has priority over anything else, and (d) it may violate any WiP limits.

Colouring blue any work items that spend time in the expedite lane, we can apply Little's Law to the expedite lane as well.

An example of the colouring is shown in the figure on the right. I leave the calculation to the reader.

3D


We can even further extend Little's Law. Until now I have considered only 'flat' areas.

The extension is that we can give each cell a certain height. See the figure to the right. A variation on Little's Law follows once we realise that measuring the volume from left-to-right is the same as calculating it from top-to-bottom. Instead of measuring areas we measure volumes.

The only catch here is that in order to write down Little's Law we need to give a sensible interpretation to the 'horizontal' sum of the numbers and a sensible interpretation to the 'vertical' sum of the numbers. In the case of a height of '1' these are just 'Waiting Time' (W) and 'Number of items in the system' (N) respectively.

A more detailed, precise, and mathematical formulation can be found in the paper by Little himself: see section 3.2 in [Lit11].
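
In symbols the generalisation reads (my paraphrase of that section, with \overline{H} the time-average of the summed heights of the items in the system and \overline{G} the average per-item volume, i.e. the 'horizontal' sum):

 \overline{H} = \lambda \overline{G} 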

Some Applications of 3D-Little's Law

Value

As a warming-up exercise consider as the height the (business) value of an item. Call this value 'V'. Every work item will have its own specific value.

 

 

 \overline{\mathrm{Value}} = \lambda \overline{V W} 

The interpretation of this relation is that the 'average (business) value of unfinished work in the system at any time' is equal to the average input rate multiplied by the 'average of the product of cycle time and value'.

Teams may want to minimise this while at the same time maximising the value output rate.

Total Operational Cost

As the next example let's take as the height for the cells a sequence of numbers 1, 2, 3, .... An example is shown in the figures below. What are the interpretations in this case?

Suppose we have a work item that has an operational cost of 1 per day. Then the sequence 1, 2, 3, ... gives the total cost to date. At day 3, the total cost is 3 times 1 which is the third number in the sequence.

The 'vertical' sum is just the 'Total Cost of unfinished work in the system'.

For the interpretation of the 'horizontal' sum we need to add the numbers. For a work item that is in the system for 'n' days, the total is 1+2+3+...+n, which equals 1/2 n (n+1). For 3 days this gives 1+2+3 = 1/2 * 3 * 4 = 6. Thus, the interpretation of the 'horizontal' sum is 1/2 W (W+1), in which 'W' represents the waiting time of the item.

Putting this together gives an additional Little's Law of the form:

 \overline{\mathrm{Cost}} = \frac{1}{2} \lambda C \overline{W(W + 1)} 

where 'C' is the operational cost rate of a work item and \lambda is the (average) input rate. If, instead of rounds in a game, the 'Total Cost in the system' is measured at a time interval 'T', the formula slightly changes into

 \overline{\mathrm{Cost}} = \frac{1}{2} \lambda C \overline{W\left(W + T\right)} 

Teams may want to minimise this, which gives an interesting optimisation problem if different work item types have different associated operational cost rates. How should the capacity of the team be divided over the work items? This is a topic for another blog.

Just-in-Time

For a slightly more odd relation, consider items that have a deadline associated with them. Denote the date and time of the deadline by 'D'. As the height, choose the number of time units before or after the deadline the item is completed. Further, call 'T' the time at which the team takes up the item. Then the team finishes work on this item at time T + W, where 'W' represents the cycle time of the work item.


In the picture on the left a work item is shown that is finished 2 days before the deadline. Notice that the height decreases as the deadline is approached. Since it is finished 2 time units before the deadline, the just-in-timeness is 2 at the completion time.


The picture on the left shows a work item finished one time unit after the deadline, with an associated just-in-timeness of 1.

 

 \overline{\mathrm{Just-in-Time}} = \frac{1}{2} \lambda \overline{|T+W-D|(|T+W-D| + 1)} 

This example sounds like a very exotic one and not very useful. Still, a team might want to look at the best time to start working on an item so as to minimise the above variable.

Conclusion

From our 'playing around' with the sizes of areas and volumes, and realising that counting them in different ways (left-to-right and top-to-bottom) should give the same result, I have been able to derive a new set of relations.

In this blog I have rederived well-known variations on Little's Law regarding subsystems and work item types. In addition I have derived new relations for the 'Average Total Operational Cost', 'Average Value', and 'Average Just-in-Timeness'.

Together with the familiar Little's Law these give rise to interesting optimisation problems and may lead to practical guidelines for teams to create even more value.

I'm curious to hear about the variations that you can come up with! Let me know by posting them here.

References

[Lit11] John D.C. Little, "Little’s Law as Viewed on Its 50th Anniversary", 2011, Operations Research, Vol. 59 , No 3, pp. 536-549, https://www.informs.org/content/download/255808/2414681/file/little_paper.pdf

 

 

Ruby: Receive JSON in request body

Mark Needham - Sun, 08/17/2014 - 13:21

I’ve been building a little Sinatra app to play around with the Google Drive API and one thing I struggled with was processing JSON posted in the request body.

I came across a few posts which suggested that the request body would be available as params['data'] or request['data'] but after trying several ways of sending a POST request that doesn’t seem to be the case.

I eventually came across this StackOverflow post which shows how to do it:

require 'sinatra'
require 'json'
 
post '/somewhere/' do
  request.body.rewind
  request_payload = JSON.parse request.body.read
 
  p request_payload
 
  "win"
end

I can then POST to that endpoint and see the JSON printed back on the console:

dummy.json

{"i": "am json"}
$ curl -H "Content-Type: application/json" -XPOST http://localhost:9393/somewhere/ -d @dummy.json
{"i"=>"am json"}

Of course if I’d just RTFM I could have found this out much more quickly!

Categories: Programming

Ruby: Google Drive – Error=BadAuthentication (GoogleDrive::AuthenticationError) Info=InvalidSecondFactor

Mark Needham - Sun, 08/17/2014 - 02:49

I’ve been using the Google Drive gem to try and interact with my Google Drive account and almost immediately ran into problems trying to login.

I started out with the following code:

require "rubygems"
require "google_drive"
 
session = GoogleDrive.login("me@mydomain.com", "mypassword")

I’ll move it to use OAuth when I put it into my application but for spiking this approach works. Unfortunately I got the following error when running the script:

/Users/markneedham/.rbenv/versions/1.9.3-p327/lib/ruby/gems/1.9.1/gems/google_drive-0.3.10/lib/google_drive/session.rb:93:in `rescue in login': Authentication failed for me@mydomain.com: Response code 403 for post https://www.google.com/accounts/ClientLogin: Error=BadAuthentication (GoogleDrive::AuthenticationError)
Info=InvalidSecondFactor
	from /Users/markneedham/.rbenv/versions/1.9.3-p327/lib/ruby/gems/1.9.1/gems/google_drive-0.3.10/lib/google_drive/session.rb:86:in `login'
	from /Users/markneedham/.rbenv/versions/1.9.3-p327/lib/ruby/gems/1.9.1/gems/google_drive-0.3.10/lib/google_drive/session.rb:38:in `login'
	from /Users/markneedham/.rbenv/versions/1.9.3-p327/lib/ruby/gems/1.9.1/gems/google_drive-0.3.10/lib/google_drive.rb:18:in `login'
	from src/gdoc.rb:15:in `<main>'

Since I have two-factor authentication enabled on my account, it turns out that I need to create an app password to login.

It will then pop up a password that we can use to login (I have revoked this one!).

We can then use this password instead and everything works fine:

 
require "rubygems"
require "google_drive"
 
session = GoogleDrive.login("me@mydomain.com", "tuceuttkvxbvrblf")
Categories: Programming

One Metric To Rule Them All: Shortcomings

Careful, you might come up short.

Using a single metric to represent the performance of an entire team or organization is like picking a single point in space. The team can move in an infinite number of directions from that point in their next sprint or project. If the measure is important to the team, we would assume that human nature would tend to push the team to maximize their performance; the opposite would be true if it was not important to the team. Gaming, positive or negative, often occurs at the expense of other critical measures. An example I observed (more than once) was a contract that specified payment based on productivity (output per unit of input) without mechanisms to temper human nature. In most cases time-to-market and quality were measured, but were not involved in payment. In each case, productivity was maximized at the expense of quality or time-to-market. These were unintended consequences of poorly constructed contracts; in my opinion, neither side of the contractual equation actually wanted to compromise quality or time-to-market.

While developing a single metric is an admirable goal, constructing this type of metric requires substantial thought and effort. Metrics programs that are still in their development period typically cannot afford the time or effort required to develop a single metric (or the loss of organizational capital if they fail). Regardless of where the metrics program is in its development process, I would suggest that an approach that develops an index of individual metrics, or that uses a balanced scorecard (a group of metrics that shows a balanced view of organizational performance, developed by Kaplan and Norton in the 1990s – we will tackle this in detail in the future), is more expeditious. Using a palette of well-known measures and metrics will leverage the basic knowledge and understanding that measurement programs and their stakeholders have developed during the development and implementation of individual measures and metrics.
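
As a sketch of what such an index might look like, here is a small Ruby illustration (the metric names, targets, and weights are hypothetical, chosen only to show the mechanics):

# A composite index: normalize each metric against its target, then
# combine with weights that reflect the organization's priorities.
METRICS = {
  :productivity   => { :actual => 11.0, :target => 10.0, :weight => 0.4 },
  :defect_density => { :actual => 0.8,  :target => 1.0,  :weight => 0.3, :lower_is_better => true },
  :time_to_market => { :actual => 95.0, :target => 90.0, :weight => 0.3, :lower_is_better => true }
}

index = METRICS.values.inject(0.0) do |sum, m|
  ratio = m[:lower_is_better] ? m[:target] / m[:actual] : m[:actual] / m[:target]
  sum + m[:weight] * ratio
end
# index == 1.0 means "on target" across the board; here => ~1.10

The balancing act lives in the weights: improving one metric by sacrificing another shows up immediately in the combined score, which is exactly the behavior a single unbalanced metric cannot give you.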


Categories: Process Management