Software Development Blogs: Programming, Software Testing, Agile Project Management

### Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

### Three Types of Activities

NOOP.NL - Jurgen Appelo - Mon, 08/18/2014 - 13:47

Three types of activities are keeping you busy:

Type 1: Bad

These are the things you should not be doing. They are wasting your time but still you do them. Stop it!

The post Three Types of Activities appeared first on NOOP.NL.

Categories: Project Management

### Why Is It Hard To Manage Projects?

Herding Cats - Glen Alleman - Mon, 08/18/2014 - 05:36

In the absence of a control system that provides feedback on progress to plan, the project is just wandering around looking for what Done looks like. The notion of emergent requirements is fine. The notion of emergent capabilities is not.

If we don't know what capabilities are needed to fulfill the business plan or fulfill the mission need, then we're on a Death March project. We don't know what value is being produced, when we will be done, or how much this will cost when we're done.

This is the role of the project control system. Without a control system there is no way to use feedback to steer the project toward success.
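As an illustration (with made-up numbers, not any specific project-control method), the variance signal of a closed-loop control system is simply the difference between measured progress and the plan at each sampling point; it is what steering decisions are based on:

```ruby
# Compare cumulative planned work to cumulative measured progress and emit
# a variance signal; negative values mean we are behind plan and need
# corrective action. All numbers are illustrative.
def variance_signal(planned, actual)
  planned.zip(actual).map { |plan, act| act - plan }
end

planned = [10, 20, 30, 40] # planned units of work, cumulative per period
actual  = [8, 18, 31, 38]  # measured progress, cumulative per period

p variance_signal(planned, actual) # => [-2, -2, 1, -2]
```

Without the `actual` series, there is nothing to subtract, and hence no signal to steer with.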

So when we hear that we can make decisions without estimating the impact of those decisions on the future outcomes of the project, we'll know to call BS. First, not knowing this impact in the decision-making process violates the principles of microeconomics; second, there is no way to close the loop and generate the variance signal needed to steer the project to success.

Related articles:

- Staying on Plan Means Closed Loop Control
- Concept of Operations First, then Capabilities, then Requirements
- Delivering Needed Capabilities
- What Do We Mean When We Say "Agile Community?"
- More Than Target Needed To Steer Toward Project Success
- Getting to done!
Categories: Project Management

### SPaMCAST 303 – Topics in Estimation, Software Sensei, Education

http://www.spamcast.net

Listen to the Software Process and Measurement Cast 303

Software Process and Measurement Cast number 303 features our essay titled “Topics in Estimation.” This essay is a collection of smaller essays that cover a wide range of issues affecting estimation. Topics include estimation and customer satisfaction, risk and project estimates, estimation frameworks, and size and estimation. Something to help and irritate everyone – we are talking about estimation, what would you expect?

We also have a new installment of Kim Pries’s Software Sensei column.  In this installment Kim discusses education as defect prevention.  Do we really believe that education improves productivity, quality and time to market?

Listen to the Software Process and Measurement Cast 303

Next

Software Process and Measurement Cast number 304 will feature our long-awaited interview with Jamie Lynn Cooke, author of The Power of the Agile Business Analyst. We discussed the definition of an Agile business analyst and what they actually do on Agile projects. Jamie provides a clear and succinct explanation of the role and value of Agile business analysts.

Upcoming Events

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1.  I have a great discount code!!!! Contact me if you are interested!

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes toward bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chematuri and myself and published by J. Ross Publishing, has received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Categories: Process Management

### Ruby: Create and share Google Drive Spreadsheet

Mark Needham - Sun, 08/17/2014 - 22:42

Over the weekend I’ve been trying to write some code to help me create and share a Google Drive spreadsheet and for the first bit I started out with the Google Drive gem.

This worked reasonably well but that gem doesn’t have an API for changing the permissions on a document so I ended up using the google-api-client gem for that bit.

This tutorial provides a good quick start for getting up and running but it still has a manual step to copy/paste the ‘OAuth token’ which I wanted to get rid of.

The first step is to create a project via the Google Developers Console. Once the project is created, click through to it and then click on ‘credentials’ on the left menu. Click on the “Create new Client ID” button to create the project credentials.

You should see something like this on the right hand side of the screen:

These are the credentials that we’ll use in our code.

Since I now have two libraries I need to satisfy the OAuth credentials for both, preferably without getting the user to go through the process twice.

After a bit of trial and error I realised that it was easier to get the google-api-client to handle authentication and just pass in the token to the google-drive code.

I wrote the following code using Sinatra to handle the OAuth authorisation with Google:

require 'sinatra'
require 'json'
require "google_drive"
require 'google/api_client'

CLIENT_ID = 'my client id'
CLIENT_SECRET = 'my client secret'
OAUTH_SCOPE = 'https://www.googleapis.com/auth/drive https://docs.google.com/feeds/ https://docs.googleusercontent.com/ https://spreadsheets.google.com/feeds/'
REDIRECT_URI = 'http://localhost:9393/oauth2callback'

helpers do
  def partial(template, locals = {})
    haml(template, :layout => false, :locals => locals)
  end
end

enable :sessions

get '/' do
  haml :index
end

configure do
  google_client = Google::APIClient.new
  google_client.authorization.client_id = CLIENT_ID
  google_client.authorization.client_secret = CLIENT_SECRET
  google_client.authorization.scope = OAUTH_SCOPE
  google_client.authorization.redirect_uri = REDIRECT_URI

  set :google_client, google_client
  set :google_client_driver, google_client.discovered_api('drive', 'v2')
end

post '/login/' do
  client = settings.google_client
  redirect client.authorization.authorization_uri
end

get '/oauth2callback' do
  authorization_code = params['code']

  client = settings.google_client
  client.authorization.code = authorization_code
  client.authorization.fetch_access_token!

  oauth_token = client.authorization.access_token

  session[:oauth_token] = oauth_token

  redirect '/'
end

And this is the code for the index page:

%html
  %head
    %title Google Docs Spreadsheet
  %body
    .container
      %h2
        Create Google Docs Spreadsheet

      %div
        - unless session['oauth_token']
          %form{:name => "spreadsheet", :id => "spreadsheet", :action => "/login/", :method => "post", :enctype => "text/plain"}
            %input{:type => "submit", :value => "Authorise Google Account", :class => "button"}
        - else
          %form{:name => "spreadsheet", :id => "spreadsheet", :action => "/spreadsheet/", :method => "post", :enctype => "text/plain"}
            %input{:type => "submit", :value => "Create Spreadsheet", :class => "button"}

We initialise the Google API client inside the ‘configure’ block, which runs once when the application starts, and then from ‘/’ the user can click a button which does a POST request to ‘/login/’.

‘/login/’ redirects us to the OAuth authorisation URI where we select the Google account we want to use and login if necessary. We’ll then get redirected back to ‘/oauth2callback’ where we extract the authorisation code and then get an authorisation token.

We’ll store that token in the session so that we can use it later on.

Now we need to create the spreadsheet and share that document with someone else:

post '/spreadsheet/' do
  client = settings.google_client
  if session[:oauth_token]
    client.authorization.access_token = session[:oauth_token]
  end

  google_drive_session = GoogleDrive.login_with_oauth(session[:oauth_token])

  spreadsheet = google_drive_session.create_spreadsheet(title = "foobar")
  ws = spreadsheet.worksheets[0]

  ws[2, 1] = "foo"
  ws[2, 2] = "bar"
  ws.save()

  file_id = ws.worksheet_feed_url.split("/")[-4]

  drive = settings.google_client_driver

  new_permission = drive.permissions.insert.request_schema.new({
    'value' => "some_other_email@gmail.com",
    'type' => "user",
    'role' => "reader"
  })

  result = client.execute(
    :api_method => drive.permissions.insert,
    :body_object => new_permission,
    :parameters => { 'fileId' => file_id })

  if result.status == 200
    p result.data
  else
    puts "An error occurred: #{result.data['error']['message']}"
  end

  "spreadsheet created and shared"
end

Here we create a spreadsheet with some arbitrary values using the google-drive gem before granting permission to a different email address than the one which owns it. I’ve given that other user read permission on the document.

One other thing to keep in mind is which ‘scopes’ the OAuth authentication is for. If you authenticate for one URI and then try to do something against another one you’ll get a ‘Token invalid – AuthSub token has wrong scope‘ error.

Categories: Programming


### Little's Law in 3D

Xebia Blog - Sun, 08/17/2014 - 16:21

The much-used relation between average cycle time, average total work and input rate (or throughput) is known as Little's Law. It is often used to argue that it is a good thing to work on fewer items at the same time (as a team or as an individual), thus lowering the average cycle time. In this blog I will discuss the less well-known generalisation of Little's Law, which gives an almost unlimited number of additional relations. The only limit is your imagination.

I will show relations for the average 'Total Operational Cost in the system' and for the average 'Just-in-Timeness'.

First I will describe some rather straightforward generalisations and in the third part some more complex variations on Little's Law.

Little's Law Variations

As I showed in the previous blogs (Applying Little's Law in Agile Games and Why Little's Law Works...Always) Little's Law in fact states that measuring the total area from left-to-right equals summing it from top-to-bottom.
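This "count the area two ways" reading of Little's Law can be checked with a small Ruby sketch (the board data below is made up):

```ruby
# Each row is one work item, each column one round; a 1 means the item was
# in the system during that round. Summing a row left-to-right gives that
# item's waiting time; summing a column top-to-bottom gives the number of
# items in the system that round. Both sums cover the same area.
board = [
  [1, 1, 1, 0],
  [0, 1, 1, 1],
  [0, 0, 1, 1],
]

waiting_times   = board.map(&:sum)            # per item: W
items_per_round = board.transpose.map(&:sum)  # per round: N

avg_n = Rational(items_per_round.sum, board.first.size) # average N
lam   = Rational(board.size, board.first.size)          # 3 items in 4 rounds
avg_w = Rational(waiting_times.sum, board.size)         # average W

raise unless waiting_times.sum == items_per_round.sum   # same area both ways
raise unless avg_n == lam * avg_w                       # Little's Law
```

Using `Rational` keeps the arithmetic exact, so the equality is not blurred by floating-point rounding.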

Once we realise this, it is easy to see some straightforward generalisations which are well known. I'll mention them here briefly without going into too much detail.

Subsystem

Suppose a system consists of 1 or more subsystems, e.g. in a kanban system consisting of 3 columns we can identify the subsystems corresponding to:

1. first column (e.g. 'New') in 'red',
2. second column (e.g. 'Doing') in 'yellow',
3. third column (e.g. 'Done') in 'green'

See the figure on the right.

By colouring the subsystems differently from each other we see immediately that Little's Law applies to the system as a whole as well as to every subsystem (the 'red' and 'yellow' areas).

Note: for the average input rate, consider only the rows that have the corresponding colour, i.e. for the input rate of the 'Doing' column consider only the yellow rows; in this case the average input rate equals 8/3 items per round (entering the 'Doing' column). Likewise for the 'New' column.

Work Item Type

Until now I assumed only 1 type of work item. In practice teams deal with more than one work item type. Examples include class-of-service lanes, user stories, and production incidents. Again, by colouring the various work item types differently we see that Little's Law applies to each individual work item type.

In the example on the right, we have coloured user stories ('yellow') and production incidents ('red'). Again, Little's Law applies to both the red and yellow areas separately.

Doing the math we see that for 'user stories' (yellow area):

• Average number in the system (N) = (6+5+4)/3 = 5 user stories,
• Average input rate ($\lambda$) = 6/3 = 2 user stories per round,
• Average waiting time (W) = (3+3+3+3+2+1)/6 = 15/6 = 5/2 rounds.

As expected, the average number in the system equals the average input rate times the average waiting time.

The same calculation can be made for the production incidents which I leave as an exercise to the reader.
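The yellow-area numbers above can be verified with exact arithmetic in a few lines of Ruby:

```ruby
# The user-story figures from the text, kept as exact rationals.
n   = Rational(6 + 5 + 4, 3)              # average number in the system
lam = Rational(6, 3)                      # average input rate per round
w   = Rational(3 + 3 + 3 + 3 + 2 + 1, 6) # average waiting time in rounds

raise unless n == lam * w  # N = λ · W holds exactly: 5 = 2 · 5/2
```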

Expedite Items

Finally, consider items that enter and spend time in an 'expedite' lane. In Kanban an expedite lane is used for items that need special priority. Usually the policy for handling such items is that (a) there can be at most 1 such item in the system at any time, (b) the team stops working on anything but this item so that it is completed as fast as possible, (c) it has priority over anything else, and (d) it may violate any WiP limits.

Colouring any work items blue that spend time in the expedite lane we can apply Little's Law to the expedite lane as well.

An example of the colouring is shown in the figure on the right. I leave the calculation to the reader.

3D

We can extend Little's Law even further. Until now I have considered only 'flat' areas.

The extension is that we can give each cell a certain height. See the figure to the right. A variation on Little's Law follows once we realise that measuring the volume from left-to-right is the same as calculating it from top-to-bottom: instead of measuring areas, we measure volumes.

The only catch here is that in order to write down Little's Law we need to give a sensible interpretation to the 'horizontal' sum of the numbers and a sensible interpretation to the 'vertical' sum of the numbers. In case of a height of '1' these are just 'Waiting Time' (W) and 'Number of items in the system' (N) respectively.
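A quick numerical check of the 3D idea, with an arbitrary (made-up) height for each cell:

```ruby
# Heights per cell: rows are work items, columns are rounds. Summing the
# volume row-wise (per item) must equal summing it column-wise (per round),
# since both cover the same cells.
heights = [
  [1, 2, 3, 0],
  [0, 1, 2, 3],
  [0, 0, 1, 2],
]

per_item  = heights.map(&:sum)            # 'horizontal' sums
per_round = heights.transpose.map(&:sum)  # 'vertical' sums

raise unless per_item.sum == per_round.sum # same total volume
```

With all heights equal to 1 this reduces to the flat case, where the sums are just W and N.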

A more detailed, precise, and mathematical formulation can be found in the paper by Little himself: see section 3.2 in [Lit11].

Some Applications of 3D-Little's Law

Value

As a warming-up exercise consider as the height the (business) value of an item. Call this value 'V'. Every work item will have its own specific value.

$\overline{\mathrm{Value}} = \lambda \overline{V W}$

The interpretation of this relation is that the 'average (business) value of unfinished work in the system at any time' is equal to the average input rate multiplied by the 'average of the product of cycle time and value'.

Teams may want to minimise this while at the same time maximising the value output rate.

Total Operational Cost

As the next example let's take as the height for the cells a sequence of numbers 1, 2, 3, .... An example is shown in the figures below. What are the interpretations in this case?

Suppose we have a work item that has an operational cost of 1 per day. Then the sequence 1, 2, 3, ... gives the total cost to date. At day 3, the total cost is 3 times 1 which is the third number in the sequence.

The 'vertical' sum is just the 'Total Cost of unfinished work in the system'.

For the interpretation of the 'horizontal' sum we need to add the numbers. For a work item that is in the system for 'n' days, the total is $1+2+3+\dots+n$, which equals $\frac{1}{2} n (n+1)$. For 3 days this gives $1+2+3 = \frac{1}{2} \cdot 3 \cdot 4 = 6$. Thus, the interpretation of the 'horizontal' sum is $\frac{1}{2} W (W+1)$, in which 'W' represents the waiting time of the item.
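The triangular-number arithmetic is easy to sanity-check in Ruby:

```ruby
# Total cost to date for an item with cost rate 1 per day that has been in
# the system for `days` days: 1 + 2 + ... + days = days(days+1)/2.
def total_cost(days)
  (1..days).sum
end

raise unless total_cost(3) == (3 * (3 + 1)) / 2 # 6, matching the text
```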

Putting this together gives an additional Little's Law of the form:

$\overline{\mathrm{Cost}} = \frac{1}{2} \lambda C \overline{W(W + 1)}$

where 'C' is the operational cost rate of a work item and $\lambda$ is the (average) input rate. If, instead of rounds in a game, the 'Total Cost in the system' is measured at a time interval 'T', the formula changes slightly into

$\overline{\mathrm{Cost}} = \frac{1}{2} \lambda C \overline{W\left(W + T\right)}$

Teams may want to minimise this, which gives an interesting optimisation problem if different work item types have different associated operational cost rates. How should the capacity of the team be divided over the work items? This is a topic for another blog.

Just-in-Time

For a slightly more odd relation, consider items that have a deadline associated with them. Denote the date and time of the deadline by 'D'. As the height, choose the number of time units before or after the deadline the item is completed. Further, call 'T' the time at which the team takes up the item to work on. Then the team finishes work on this item at time $T + W$, where 'W' represents the cycle time of the work item.

In the picture on the left a work item is shown that is finished 2 days before the deadline. Notice that the height decreases as the deadline is approached. Since it is finished 2 time units before the deadline, the just-in-timeness is 2 at the completion time.

The picture on the left shows a work item finished one time unit after the deadline; it has an associated just-in-timeness of 1.

$\overline{\mathrm{Just\text{-}in\text{-}Time}} = \frac{1}{2} \lambda \overline{|T+W-D|\left(|T+W-D| + 1\right)}$

This example sounds like a very exotic one and not very useful, but a team might want to look at the best time to start working on an item so as to minimise the above variable.
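A tiny sketch of the quantity being averaged (the helper name is made up for illustration):

```ruby
# An item taken up at time t, with cycle time w and deadline d, finishes at
# t + w; its just-in-timeness is the absolute distance from the deadline.
def just_in_timeness(t, w, d)
  (t + w - d).abs
end

p just_in_timeness(0, 3, 5) # finished 2 units before the deadline => 2
p just_in_timeness(0, 6, 5) # finished 1 unit after the deadline  => 1
```

Minimising this over the start time t is exactly the scheduling question raised above.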

Conclusion

From our 'playing around' with the sizes of areas and volumes, and realising that counting them in different ways (left-to-right and top-to-bottom) should give the same result, I have been able to derive a new set of relations.

In this blog I have rederived well-known variations on Little's Law regarding subsystems and work item types. In addition I have derived new relations for the 'Average Total Operational Cost', 'Average Value', and 'Average Just-in-Timeness'.

Together with the familiar Little's Law these give rise to interesting optimisation problems and may lead to practical guidelines for teams to create even more value.

I'm curious to hear about the variations that you can come up with! Let me know by posting them here.

References

[Lit11] John D.C. Little, "Little’s Law as Viewed on Its 50th Anniversary", 2011, Operations Research, Vol. 59 , No 3, pp. 536-549, https://www.informs.org/content/download/255808/2414681/file/little_paper.pdf

### Ruby: Receive JSON in request body

Mark Needham - Sun, 08/17/2014 - 13:21

I’ve been building a little Sinatra app to play around with the Google Drive API and one thing I struggled with was processing JSON posted in the request body.

I came across a few posts which suggested that the request body would be available as params['data'] or request['data'] but after trying several ways of sending a POST request that doesn’t seem to be the case.

I eventually came across this StackOverflow post which shows how to do it:

require 'sinatra'
require 'json'

post '/somewhere/' do
  request.body.rewind
  request_payload = JSON.parse request.body.read

  p request_payload

  "win"
end

I can then POST to that endpoint and see the JSON printed back on the console:

dummy.json

{"i": "am json"}
$ curl -H "Content-Type: application/json" -XPOST http://localhost:9393/somewhere/ -d @dummy.json
{"i"=>"am json"}

Of course if I’d just RTFM I could have found this out much more quickly!

Categories: Programming

### Ruby: Google Drive – Error=BadAuthentication (GoogleDrive::AuthenticationError) Info=InvalidSecondFactor

Mark Needham - Sun, 08/17/2014 - 02:49

I’ve been using the Google Drive gem to try and interact with my Google Drive account and almost immediately ran into problems trying to login. I started out with the following code:

require "rubygems"
require "google_drive"

session = GoogleDrive.login("me@mydomain.com", "mypassword")

I’ll move it to use OAuth when I put it into my application, but for spiking this approach works. Unfortunately I got the following error when running the script:

/Users/markneedham/.rbenv/versions/1.9.3-p327/lib/ruby/gems/1.9.1/gems/google_drive-0.3.10/lib/google_drive/session.rb:93:in `rescue in login': Authentication failed for me@mydomain.com: Response code 403 for post https://www.google.com/accounts/ClientLogin: Error=BadAuthentication (GoogleDrive::AuthenticationError) Info=InvalidSecondFactor
	from /Users/markneedham/.rbenv/versions/1.9.3-p327/lib/ruby/gems/1.9.1/gems/google_drive-0.3.10/lib/google_drive/session.rb:86:in `login'
	from /Users/markneedham/.rbenv/versions/1.9.3-p327/lib/ruby/gems/1.9.1/gems/google_drive-0.3.10/lib/google_drive/session.rb:38:in `login'
	from /Users/markneedham/.rbenv/versions/1.9.3-p327/lib/ruby/gems/1.9.1/gems/google_drive-0.3.10/lib/google_drive.rb:18:in `login'
	from src/gdoc.rb:15:in `<main>'

Since I have two factor authentication enabled on my account, it turns out I need to create an app password to login. It will then pop up with a password that we can use to login (I have revoked this one!). We can then use this password instead and everything works fine:

require "rubygems"
require "google_drive"

session = GoogleDrive.login("me@mydomain.com", "tuceuttkvxbvrblf")

Categories: Programming

### One Metric To Rule Them All: Shortcomings

Careful, you might come up short.

Using a single metric to represent the performance of an entire team or organization is like picking a single point in space: the team can move in an infinite number of directions from that point in their next sprint or project. If the measure is important to the team, we would assume that human nature would tend to push the team to maximize their performance; the opposite would be true if it was not important to the team. Gaming, positive or negative, often occurs at the expense of other critical measures.

An example I observed (more than once) was a contract that specified payment based on productivity (output per unit of input) without mechanisms to temper human nature. In most cases time-to-market and quality were measured, but were not involved in payment. In each case, productivity was maximized at the expense of quality or time-to-market. These were unintended consequences of poorly constructed contracts; in my opinion neither side of the contractual equation actually wanted to compromise quality or time-to-market.

While developing a single metric is an admirable goal, constructing this type of metric requires substantial thought and effort. Metrics programs that are still in their development period typically cannot afford the time or effort required to develop a single metric (or the loss of organizational capital if they fail). Regardless of where the metrics program is in its development process, I would suggest that an approach that develops an index of individual metrics, or that uses a balanced scorecard (a group of metrics that shows a balanced view of organizational performance, developed by Kaplan and Norton in the 1990s – we will tackle this in detail in the future), is more expeditious.
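As a sketch of the index idea, a hypothetical weighted index over a handful of normalised metrics might look like this (the metric names and weights are purely illustrative, not a standard index):

```ruby
# A hypothetical index combining normalised metrics into a single number.
# Weights must sum to 1; each score is normalised to the range 0..1.
WEIGHTS = { productivity: 0.4, quality: 0.3, time_to_market: 0.3 }

def metric_index(scores)
  WEIGHTS.sum { |name, weight| weight * scores.fetch(name) }
end

puts metric_index(productivity: 0.8, quality: 0.9, time_to_market: 0.5)
```

The hard part is not the arithmetic but choosing the weights and normalisations, which is exactly where the thought and effort mentioned above goes.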
Using a palette of well-known measures and metrics will leverage the basic knowledge and understanding that measurement programs and their stakeholders have developed during the development and implementation of individual measures and metrics.

Categories: Process Management

### One Metric To Rule Them All: Communication

Will a single metric make communication easier?

Measuring software development (inclusive of development, enhancement and support activities) generally requires a palette of specific measures. Measures typically include productivity, time-to-market, quality, customer satisfaction and budget (the list can go on and on). Making sense of measures that might be predictive (forecast the future) or reflective (tell us about the past), and that may send seemingly conflicting or contradictory messages, is difficult. Proponents of a single metric suggest simplifying the process by developing or adopting one metric that they believe embodies performance toward the organization’s goals and predicts whether that performance will continue. Can adopting a single metric as a core principle in a metrics program enhance communication and therefore the value of a measurement program?

The primary goal of any metrics program in IT, whether stated or not, is to generate and communicate information. A metrics program acts as a platform to connect metrics users and data providers. This connection is made by collecting, analyzing and communicating information to all of the stakeholders. The IT environment in general, and the software development environment specifically, is complex. That complexity is often translated into a wide variety of measures and metrics that are difficult to understand and consume unless you spend your career analyzing the data.

Unless you are working for a think tank, that level of analysis is generally out of reach, which is why managers and measurement professionals have sought, and continue to seek, a single number to communicate progress and predict the future of their departments. Development of a single metric that can be easily explained holds great promise as a means of simplifying communication. A single metric will simplify communication if (and it is a big if) a metric can be developed that is easily explainable and is as useful in predicting performance as most metrics are in reflecting performance. While there are many elements of good communication, a simple message, few moving parts and relevance to the receiver are critical. A simple metric by definition has few moving parts. The starting point for developing a single metric is the design requirements of simplicity and relevance, which can be controlled and tuned (hopefully) by the measurement group as business needs change.

Developing a single metric is a tall order for a metrics program, which is why most approaches to this problem use indexes (such as Larry Putnam’s PI). Indices are generally more difficult for wider audiences to understand (albeit with exceptions, such as the Dow Jones Industrial Average) or fall into the overly academic trap of requiring a trained cadre to generate and interpret them. Regardless of what has been pursued, a single metric done correctly would foster communication, and communication is instrumental in generating value and success from a measurement program.

Categories: Process Management

### Managing OpenStack security groups from the command line

Agile Testing - Grig Gheorghiu - Fri, 08/15/2014 - 20:47

I had an issue today where I couldn't connect to a particular OpenStack instance on port 443.
I decided to inspect the security group it belongs to (let's call it myapp) from the command line:

# nova secgroup-list-rules myapp
+-------------+-----------+---------+------------+--------------+
| IP Protocol | From Port | To Port | IP Range   | Source Group |
+-------------+-----------+---------+------------+--------------+
| tcp         | 80        | 80      | 0.0.0.0/0  |              |
| tcp         | 443       | 443     | 0.0.0.0/24 |              |
+-------------+-----------+---------+------------+--------------+

Note that the IP range for port 443 is wrong. It should be all IPs and not a /24 network. I proceeded to delete the wrong rule:

# nova secgroup-delete-rule myapp tcp 443 443 0.0.0.0/24
+-------------+-----------+---------+------------+--------------+
| IP Protocol | From Port | To Port | IP Range   | Source Group |
+-------------+-----------+---------+------------+--------------+
| tcp         | 443       | 443     | 0.0.0.0/24 |              |
+-------------+-----------+---------+------------+--------------+

Then I added back the correct rule:

# nova secgroup-add-rule myapp tcp 443 443 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 443       | 443     | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Finally, I verified that the rules are now correct:

# nova secgroup-list-rules myapp
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 443       | 443     | 0.0.0.0/0 |              |
| tcp         | 80        | 80      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Of course, the real test was to see if I could now hit port 443 on my instance, and indeed I was able to.

### Creativity, INC. – by Ed Catmull

Gridshore - Fri, 08/15/2014 - 20:39

A colleague of mine, Ronald Vonk, recommended this book to me.
It is a book by one of the founders of Pixar, known for all those fantastic computer-animated movies. At Pixar they created a continuously changing environment where creativity can excel. It is a very interesting read if you like management books that are not too heavy on theory. Ed explains very well, and entertainingly, how they went from a small company with a vision to a huge company with a vision. Without giving too many spoilers (you really need to read the book yourself), I want to mention a few things that stuck with me after reading it.

The team is more important than the ideas or the talent of the separate people. Take care of the team, make sure they function well, and give them responsibility. Make them feel proud when they finish what they wanted to create. Always put people first.

This is something I ran into in my normal working life as well. I do think you have to enable teams to adapt and to stay a good team. The challenge is to bring others in to learn and later on replace team members or start their own team.

We would never make a film that way again. It is management's job to take the long view, to intervene and protect our people from their willingness to pursue excellence at all costs. Not to do so would be irresponsible.

This was a remark after delivering a movie under great time stress. They pulled through, but at a cost.

Braintrust – a group of people giving feedback and ideas for improvement on a certain idea. Important is that the feedback is meant to improve the idea, not to bully the person(s) the idea originated from. It is very important that everybody is open to the feedback and not defensive. In the end it is not the braintrust that makes a decision; it is the person in charge of the product. Still, this group of people is kind of the first user, and therefore the feedback should not be taken too lightly.

This was something I thought about for a long time; my conclusion was that I am not really good at this.
I often do feel that my ideas are my babies that need to be defended. First persuade me that I am wrong, or sorry, that an idea someone had was not the best.

I did not want to become a manager, I just wanted to be one of the boys and do research. When we became bigger I realised I had become more important and new people did not see me as a peer or one of the guys. I realised things were starting to be hidden from me. It is no problem as long as you trust that people will tell someone else, who will pass the most important things on to me again.

Couldn't agree more.

You can have this very nice, polished, finely tuned locomotive. People think that being the driver of the train gives them power. They feel that driving the train in the end is shaping the company. The truth is, it's not. Driving the train does not set its course. The real job is laying the track.

This was an eye opener as well, something you know but that is hard to put into words.

At Pixar they do not have contracts. They feel that employment contracts hurt both the employer and the employee. If someone had a problem with the company, there wasn't much point in complaining because they were under contract. If someone didn't perform well, on the other hand, there was no point in confronting them about it; their contract simply wouldn't be renewed, which might be the first time they heard about their need to improve. The whole system discouraged and devalued day-to-day communication and was culturally dysfunctional. But since everybody was used to it, they were blind to the problem.

This is a long one; I have thought about it for a while. I think for now I would be too scared to do this in a company, but still, I like the idea.

What is the point of hiring smart people if you don't empower them to fix what's broken? Often too much time is lost in making sure no mistakes will be made. Often, however, it just takes a few days to find solutions for mistakes.
This keeps coming back to the same point: a manager is a facilitator, nothing more, nothing less. It is a very important role, just like all the others. Think about it: it is the team, the complete team.

The post Creativity, INC. – by Ed Catmull appeared first on Gridshore.

Categories: Architecture, Programming

### Stuff The Internet Says On Scalability For August 15th, 2014

Hey, it's HighScalability time:

Somehow this seems quite appropriate. (via John Bredehoft)

• 75 acres: Pizza eaten in US daily; 270TB: Backblaze storage pod; 14nm: Intel extends Moore's Law

• Quotable Quotes:

• discreteevent: The dream of reuse has made a mess of many systems.

• David Crawley: Don't think of Moore's Law in terms of technology; think of it in terms of economics and you get much greater understanding. The limits of Moore's Law are not driven by current technology. The limits of Moore's Law are really a matter of cost.

• Simon Brown: If you can't build a monolith, what makes you think microservices are the answer?

• smileysteve: The net result is that you should be able to transmit QPSK at 32GBd in 2 polarizations in maybe 80 waves in each direction. 2 bits x 2 polarizations x 32G ~ 128Gb/s per wave, or nearly 11Tb/s for 1 fiber. If this cable has 6 strands, then it could easily meet the target transmission capacity [60TB].

• Eric Brumer: Highly efficient code is actually memory efficient code.

• How to be a cloud optimist. Tell yourself: an instance is half full, not half empty; downtime is temporary; failures aren't your fault.

• Mother Earth, Motherboard by Neal Stephenson. Goes without saying it's gorgeously written. The topic: the hacker tourist ventures forth across the wide and wondrous meatspace of three continents, chronicling the laying of the longest wire on Earth. Related to Google Invests In $300M Submarine Cable To Improve Connection Between Japan And The US.

• IBM compares virtual machines against Linux containers: Our results show that containers result in equal or better performance than VMs in almost all cases. Both VMs and containers require tuning to support I/O-intensive applications.

• Does Psychohistory begin with BigData? Of a crude kind, perhaps. Google uses BigQuery to uncover patterns of world history: What’s even more amazing is that this analysis is not the result of a massive custom-built parallel application built by a team of specialized HPC programmers and requiring a dedicated cluster to run on: in stark contrast, it is the result of a single line of SQL code (plus a second line to create the initial “view”). All of the complex parallelism, data management, and IO optimization is handled transparently by Google BigQuery. Imagine that – a single line of SQL performing 2.5 million correlations in just 2.5 minutes to uncover the underlying patterns of global society.

• Fabian Giesen with a deep perspective on how communication has evolved to use a similar pattern. Networks all the way down (part 2): anything we would call a computer these days is in fact, for all practical purposes, a heterogeneous cluster made up of various specialized smaller computers, all connected using various networks that go by different names and are specified in different standards, yet are all suspiciously similar at the architecture level; a fractal of switched, packet-based networks of heterogeneous nodes that make up what we call a single “computer”. It means that all the network security problems that plague inter-computer networking also exist within computers themselves. Implementations may change substantially over time, but the interfaces – protocols, to stay within our networking terminology – stay mostly constant over large time scales, warts and all.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

### One Metric To Rule Them All

A good number for a birthday but not for a metric!

In the Lord of the Rings, J.R.R. Tolkien wrote that nine rings of power were created; however, a single ring was then fashioned to bind them all. The goal of many metrics programs is to find the “one ring”: to create a single metric that will accurately reflect the past, predict the future and track changes. The creation of a single, easily understood metric that can satisfy all of these needs is the holy grail of metrics programs. To date the quest for the one metric has been fruitless. While the quest should continue, until more research and testing can be done, adopting a single metric can be dangerous.

A single, understandable metric would have substantial benefits, ranging from providing an improved communications platform to supporting process improvement activities in areas of the organization where change can make a difference in the metric. An example of a single metric is the Dow Jones Industrial Average (DJIA), which summarizes a large number of individual measures (individual stock prices) into a single, easily explainable index. Whether you like or dislike the DJIA, most everyone can interpret changes in the index and trends over time. Every daily business program (e.g., Marketplace from American Public Media, heard on National Public Radio) reports the performance of the DJIA. The problem begins when the DJIA becomes the only number, bereft of context. Often the simplicity becomes a narcotic.
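As a sketch of how an index like the DJIA compresses many measures into one number, consider a price-weighted average: the sum of component prices divided by a divisor. The component prices and the divisor below are invented for illustration; the real DJIA adjusts its divisor over time so that events like stock splits do not jump the index.

```python
def price_weighted_index(prices, divisor):
    """Return a price-weighted index value for a list of component prices."""
    if divisor == 0:
        raise ValueError("divisor must be non-zero")
    return sum(prices) / divisor

# Hypothetical component prices for four stocks.
prices = [120.0, 80.0, 45.0, 155.0]
print(price_weighted_index(prices, divisor=4.0))  # 100.0

# The next day only one component moves, but the headline number cannot
# tell you which one: the context is compressed away.
prices_next_day = [120.0, 80.0, 45.0, 175.0]
print(price_weighted_index(prices_next_day, divisor=4.0))  # 105.0
```

The same compression is what makes a single software metric attractive as a communication tool, and what strips away the context this article warns about.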

Anyone attempting to find a one-metric solution (or to use the one-metric solutions currently marketed) has a tough hill to climb. There are issues with a one-metric solution that must be addressed when designing and developing it. The first of these issues is context. What is important to one organization is different from what is important to another, and what is important today may not be important tomorrow. How would a single metric morph to reflect these complexities? The Lord of the Rings had fewer changes in goals than a typical IT department. A second category of issues is environmental complexity, which ranges from the interactions between the metric and its human users to the basic mathematical complexity of creating a metric with both the historical and predictive power required. In my opinion, the most intricate issues swirl around the metric/human interaction. In general, people will use any measure for wildly divergent purposes, ranging from reporting status to identifying process improvements. Each different use triggers a different behavior.

When seeking a single metric we need to answer the bottom-line question: is the effort worth the cost? Stated in a less black-and-white manner: will any single metric be more valuable as a communication tool than the information and transparency that would be lost in creating it?

Categories: Process Management

### Volunteer Power

Software Requirements Blog - Seilevel.com - Thu, 08/14/2014 - 17:00
Generally, when someone asks for project involvement or even shows a high amount of interest, I’ll find a way to include that person to the extent possible. It can sometimes make meetings more complicated, especially when you have to provide background information in order to ‘loop in’ the new person, but often a new perspective, […]
Categories: Requirements

### The Purpose Of Guiding Principles in Project Management

Herding Cats - Glen Alleman - Thu, 08/14/2014 - 16:02

A Guiding Principle defines the key criteria for making decisions about the application of a project's Practices and Processes. The Principles provide the project with the foundation to test the practices and processes in pursuit of the project's goals in the most timely and cost-effective way while still meeting essential requirements of business outcomes or mission accomplishment.

In the absence of Principles, the Practices and Processes – while possibly the right ones – have no way of being tested to assure they are producing actionable information for the management of the project.

The common lament of we could be spending time and effort doing something more useful ignores the fact that those useful things need to be guided by a higher structure – a holistic structure – of exchanging cost for value. The question should be: are the practices and processes we are applying to this project the proper ones to maximize the efficacy of our funding?

These five principles are the foundation of project success. Success means – in the simplest terms – On Time, On Cost, On Value. Time and Cost are easily defined; Value needs another layer for it to be connected with Time and Budget.

What is Strategy

One approach to Value is to connect the outcomes of the project with the Strategy of the business or the mission. Strategy is creating fit among a company’s activities. The success of a strategy depends on doing many things well – not just a few. The things that are done well must operate within a close-knit system. If there is no fit among the activities – in this post, the project management activities – there is no distinctive strategy and little to sustain the project management practices and processes. Project Management then reverts to the simpler task of overseeing independent functions. When this occurs, operational effectiveness determines the relative performance of the project management activities and the results of the project itself.

Improving operational effectiveness is a necessary part of management, but it is not strategy. In confusing the two, managers will be unintentionally backed into a way of thinking about competition that drives the business support processes (IT) away from the strategic support and toward the tactical improvement of operational effectiveness.

The concept of fit among functions is one of the oldest ideas in strategy. Gradually, it has been supplanted by newer concepts: core competencies, critical resources, and key success factors. Yet fit is far more critical to the success of the project management system. Strategic fit among the project management Practices and Processes and the business processes in which the project is deployed is fundamental not only to competitive advantage but also to the sustainability of that advantage.

This is the foundation of success for Project Based Organizations.

The mechanism for creating this Fit is the Programmatic Architecture of the project. The term architecture is used here in the same way as in technical architecture: it is the form of the project, just as a technical architecture is the form of the product or service.

The Five Principles That Establish Programmatic Architecture

These Five Principles and their Practices are...

Principles and Practices of Performance-Based Project Management® from Glen Alleman

Related articles: What Is Strategy? · Elements of Project Success · Why Project Management is a Control System · Creating Effective Mission Statements That Have Meaning and Function · Performance Based Management · Principles First, Then Practice · Moving EVM from Reporting and Compliance to Managing for Success · Project Maxims · How To Make Decisions
Categories: Project Management

### Rely on Specialists, but Sparingly

Mike Cohn's Blog - Thu, 08/14/2014 - 15:49

Last week, I talked about the concept of equality on an agile team. I mentioned that one meaning of equality could be all team members do the same work, so that everyone in agile becomes a generalist.

A common misconception is that everyone on a Scrum team must be a generalist—equally good at all technologies and disciplines, rather than a specialist in one. This is simply not true.

What I find surprising about this myth is that every sandwich shop in the world has figured out how to handle specialists, yet we, in the software industry, still struggle with the question.

My favorite sandwich shop is the Beach Hut Deli in Folsom, California. I’ve spent enough lunches there to notice that they have three types of employees: order takers, sandwich makers, and floaters.

The order takers work the counter, writing each sandwich order on a slip of paper that is passed back to the sandwich makers. Sandwich makers work behind the order takers and prepare each sandwich as it’s ordered.

Order takers and sandwich makers are the specialists of the deli world. Floaters are generalists—able to do both jobs, although perhaps not as well as the specialists. It’s not that their sandwiches taste worse, but maybe a floater is a little slower making them.

When I did my obligatory teenage stint at a fast food restaurant, I was a floater. I wasn’t as quick at wrapping burritos and making tacos as Mark, one of the cooks. And whenever the cash register needed a new roll of paper, I had to yell for my manager, Nikki, because I could never remember how to do it. But, unlike Mark and Nikki, I could do both jobs.

I suspect that just about every sandwich shop in the world has some specialists—people who only cook or who only work the counter. But these businesses have also learned the value of having generalists.

Having some generalists working during the lunch rush helps the sandwich shop balance the need to have some people writing orders and some people making the sandwiches.

What this means for Scrum teams is that yes, we should always attempt to have some generalists around. It is the generalists who enable specialists to specialize.

There will always be teams who need the hard-core device driver programmer, the C++ programmer well-versed in Windows internals, the artificial intelligence programmer, the performance test engineer, the bioinformaticist, the artist, and so on.

But, every time a specialist is added to a team, think of it as equivalent to adding a pure sandwich maker to your deli. Put too many specialists on your team, and you increase the likelihood that someone will spend perhaps too much time waiting for work to be handed off.
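The sandwich-shop trade-off can be sketched with a toy model (all numbers are invented): with the same headcount, a staff that includes some floaters leaves less work waiting when the demand mix shifts, because the generalists can absorb whichever gap appears.

```python
def waiting_work(counter_demand, sandwich_demand,
                 order_takers, sandwich_makers, floaters):
    """Hours of work left waiting in one lunch hour (toy model).

    Specialists can only do their own kind of work; floaters can
    absorb a shortfall on either side.
    """
    counter_gap = max(0, counter_demand - order_takers)
    sandwich_gap = max(0, sandwich_demand - sandwich_makers)
    return max(0, counter_gap + sandwich_gap - floaters)

# Two lunch rushes with different (counter, sandwich) demand mixes.
rush_scenarios = [(3, 2), (1, 4)]

all_specialists = dict(order_takers=3, sandwich_makers=2, floaters=0)
with_floaters = dict(order_takers=2, sandwich_makers=1, floaters=2)

# Both staffs have five people; only the mix differs.
for name, staff in [("all specialists", all_specialists),
                    ("with floaters", with_floaters)]:
    total = sum(waiting_work(c, s, **staff) for c, s in rush_scenarios)
    print(name, total)
# all specialists 2
# with floaters 1
```

The specialist-only staff is perfect when demand matches its mix and stuck when it does not; the mixed staff does a little worse at any one thing but better across the two scenarios, which is the point about generalists enabling specialists to specialize.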

Note: A portion of this post is an excerpt from Mike Cohn’s book, Succeeding with Agile.

### How Often Should I Blog

Making the Complex Simple - John Sonmez - Thu, 08/14/2014 - 15:00

In this video I talk about how often you should blog and why blogging more often is better as long as you can maintain a consistent level of quality.

The post How Often Should I Blog appeared first on Simple Programmer.

Categories: Programming

### Where does r studio install packages/libraries?

Mark Needham - Thu, 08/14/2014 - 11:24

As a newbie to R I wanted to look at the source code of some of the libraries/packages that I’d installed via R Studio which I initially struggled to do as I wasn’t sure where the packages had been installed.

I eventually came across a StackOverflow post which described the .libPaths function which tells us where that is:

> .libPaths()
[1] "/Library/Frameworks/R.framework/Versions/3.1/Resources/library"

If we want to see which libraries are installed we can use the list.files function:

> list.files("/Library/Frameworks/R.framework/Versions/3.1/Resources/library")
[1] "alr3"         "assertthat"   "base"         "bitops"       "boot"         "brew"
[7] "car"          "class"        "cluster"      "codetools"    "colorspace"   "compiler"
[13] "data.table"   "datasets"     "devtools"     "dichromat"    "digest"       "dplyr"
[19] "evaluate"     "foreign"      "formatR"      "Formula"      "gclus"        "ggplot2"
[25] "graphics"     "grDevices"    "grid"         "gridExtra"    "gtable"       "hflights"
[31] "highr"        "Hmisc"        "httr"         "KernSmooth"   "knitr"        "labeling"
[37] "Lahman"       "lattice"      "latticeExtra" "magrittr"     "manipulate"   "markdown"
[43] "MASS"         "Matrix"       "memoise"      "methods"      "mgcv"         "mime"
[49] "munsell"      "nlme"         "nnet"         "openintro"    "parallel"     "plotrix"
[55] "plyr"         "proto"        "RColorBrewer" "Rcpp"         "RCurl"        "reshape2"
[61] "RJSONIO"      "RNeo4j"       "Rook"         "rpart"        "rstudio"      "scales"
[67] "seriation"    "spatial"      "splines"      "stats"        "stats4"       "stringr"
[73] "survival"     "swirl"        "tcltk"        "testthat"     "tools"        "translations"
[79] "TSP"          "utils"        "whisker"      "xts"          "yaml"         "zoo"

We can then drill into those directories to find the appropriate file – in this case I wanted to look at one of the Rook examples:

$ cat /Library/Frameworks/R.framework/Versions/3.1/Resources/library/Rook/exampleApps/helloworld.R
app <- function(env){
    req <- Rook::Request$new(env)
    res <- Rook::Response$new()
    friend <- 'World'
    if (!is.null(req$GET()[['friend']]))
        friend <- req$GET()[['friend']]
    res$write(paste('<h1>Hello',friend,'</h1>\n'))
    res$write('What is your name?\n')
    res$write('<form method="GET">\n')
    res$write('<input type="text" name="friend">\n')
    res$write('<input type="submit" name="Submit">\n</form>\n<br>')
    res$finish()
}
Categories: Programming

### Running TAP

Phil Trelford's Array - Thu, 08/14/2014 - 08:23

The Test Anything Protocol (TAP) is a text-based protocol for test results:

 1..4
ok 1 - Input file opened
not ok 2 - First line of the input valid
ok 3 - Read the rest of the file
not ok 4 - Summarized correctly # TODO Not written yet

I think the idea is a good one: a simple, cross-platform, human-readable standard for formatting test results. There are TAP producers and consumers for Perl, Java, JavaScript etc., allowing you to join up tests for cross-platform projects.
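A producer only has to write lines in that format. As an illustrative sketch (not one of the established TAP libraries), a minimal emitter in Python that produces output like the example above:

```python
def run_tap(tests):
    """Run (description, callable) pairs and return TAP output lines.

    A test passes unless its callable raises AssertionError. The first
    line is the "1..N" plan, followed by one ok/not ok line per test.
    """
    lines = ["1..%d" % len(tests)]
    for number, (description, test) in enumerate(tests, start=1):
        try:
            test()
            lines.append("ok %d - %s" % (number, description))
        except AssertionError:
            lines.append("not ok %d - %s" % (number, description))
    return lines

def passing():
    assert 1 + 1 == 2

def failing():
    assert "first line".startswith("#")

tap = run_tap([
    ("Input file opened", passing),
    ("First line of the input valid", failing),
])
print("\n".join(tap))
# 1..2
# ok 1 - Input file opened
# not ok 2 - First line of the input valid
```

Because the output is plain text, any TAP consumer can aggregate these results alongside those from test suites written in other languages.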

NUnit runner

Over the last week or so I’ve created a TAP runner F# script for NUnit test methods. It supports the majority of NUnit’s attributes, including ExpectedException, TimeOut, and test generation with TestCase, TestCaseSource, Values, etc.

The runner can be used in a console app to produce TAP output to the console or directly in F# interactive for running tests embedded in a script.

TAP run

Tests can be organized in classes:

type TAPExample () =
    [<Test>] member __.``input file opened`` () = Assert.Pass()
    [<Test>] member __.``First line of the input valid`` () = Assert.Fail()
    [<Test>] member __.``Read the rest of the file`` () = Assert.Pass()
    [<Test>] member __.``Summarized correctly`` () = Assert.Fail()

Tap.Run typeof<TAPExample>

or in modules:

let [<Test>] ``input file opened`` () = Assert.Pass()
let [<Test>] ``First line of the input valid`` () = Assert.Fail()
let [<Test>] ``Read the rest of the file`` () = Assert.Pass()
let [<Test>] ``Summarized correctly`` () = Assert.Fail()

type Marker = interface end
Tap.Run typeof<Marker>.DeclaringType

Note: the marker interface is used to reflect the module’s type.

Console output

In the console test cases are marked in red or green:

Debugging

If you create an F# Tutorial project you get an F# script file that runs as a console application, allowing you to set breakpoints in your script with the Visual Studio debugger.

Prototyping

When I’m prototyping a new feature I typically use the F# interactive environment for quick feedback and exploratory testing. The TAP runner lets you create and run NUnit-formatted tests directly in the script file, before later promoting them to a full-fat project for use in a continuous build environment.

F# Scripting

Interested in learning more about F# scripting? Pop along to Phil Nash’s talk at the F#unctional Londoners tonight.

Categories: Programming