
Software Development Blogs: Programming, Software Testing, Agile Project Management


Feed aggregator

Stuff The Internet Says On Scalability For August 15th, 2014

Hey, it's HighScalability time:


Somehow this seems quite appropriate. (via John Bredehoft)
  • 75 acres: Pizza eaten in US daily; 270TB: Backblaze storage pod; 14nm: Intel extends Moore's Law
  • Quotable Quotes
    • discreteevent: The dream of reuse has made a mess of many systems.
    • David Crawley: Don't think of Moore's Law in terms of technology; think of it in terms of economics and you get much greater understanding. The limits of Moore's Law is not driven by current technology. The limits of Moore's Law are really a matter of cost.
    • Simon Brown: If you can't build a monolith, what makes you think microservices are the answer?
    • smileysteve: The net result is that you should be able to transmit QPSK at 32GBd in 2 polarizations in maybe 80 waves in each direction. 2bits x 2 polarizations x 32G ~128Gb/s per wave or nearly 11Tb/s for 1 fiber. If this cable has 6 strands, then it could easily meet the target transmission capacity [60TB].
    • Eric Brumer: Highly efficient code is actually memory efficient code.

  • How to be a cloud optimist. Tell yourself: an instance is half full, not half empty; downtime is temporary; failures aren't your fault.

  • Mother Earth, Motherboard by Neal Stephenson. Goes without saying it's gorgeously written. The topic: The hacker tourist ventures forth across the wide and wondrous meatspace of three continents, chronicling the laying of the longest wire on Earth. < Related to Google Invests In $300M Submarine Cable To Improve Connection Between Japan And The US.

  • IBM compares virtual machines against Linux containers: Our results show that containers result in equal or better performance than VMs in almost all cases. Both VMs and containers require tuning to support I/O-intensive applications.

  • Does Psychohistory begin with BigData? Of a crude kind, perhaps. Google uses BigQuery to uncover patterns of world history: What’s even more amazing is that this analysis is not the result of a massive custom-built parallel application built by a team of specialized HPC programmers and requiring a dedicated cluster to run on: in stark contrast, it is the result of a single line of SQL code (plus a second line to create the initial “view”). All of the complex parallelism, data management, and IO optimization is handled transparently by Google BigQuery. Imagine that – a single line of SQL performing 2.5 million correlations in just 2.5 minutes to uncover the underlying patterns of global society.

  • Fabian Giesen with a deep perspective on how communication has evolved to use a similar pattern. Networks all the way down (part 2): anything we would call a computer these days is in fact, for all practical purposes, a heterogeneous cluster made up of various specialized smaller computers, all connected using various networks that go by different names and are specified in different standards, yet are all suspiciously similar at the architecture level; a fractal of switched, packet-based networks of heterogeneous nodes that make up what we call a single “computer”. It means that all the network security problems that plague inter-computer networking also exist within computers themselves. Implementations may change substantially over time, but the interfaces – protocols, to stay within our networking terminology – stay mostly constant over large time scales, warts and all.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

One Metric To Rule Them All

A good number for a birthday but not for a metric!


In The Lord of the Rings, J.R.R. Tolkien wrote that nine rings of power were created; a single ring was then fashioned to bind them all. The goal of many metrics programs is to find the “one ring”: a single metric that will accurately reflect the past, predict the future and track changes. The creation of a single, easily understood metric that can satisfy all of these needs is the holy grail of all metrics programs. To date the quest for the one metric has been fruitless, and while the quest should continue as research and testing allow, adopting a single metric today can be dangerous.

A single, understandable metric would have substantial benefits, ranging from an improved communications platform to a tool for focusing process improvement on the areas of the organization where change can make a difference in the metric. An example of a single metric is the Dow Jones Industrial Average (DJIA), which summarizes a large number of individual measures (individual stock prices) into a single, easily explainable index. Whether you like or dislike the DJIA, most everyone can interpret changes in the index and trends over time. Every daily business program, such as Marketplace (American Public Media, heard on National Public Radio), reports the performance of the DJIA. The problem begins when the DJIA becomes the only number, bereft of context. Often the simplicity becomes a narcotic.

Anyone attempting to find a one-metric solution (or to use the one-metric solutions currently marketed) has a tough hill to climb. There are issues with a one-metric solution that must be addressed when designing and developing it. The first of these issues is context. What is important to one organization is different from what is important to another, and what is important today may not be important tomorrow. How would a single metric morph to reflect these complexities? The Lord of the Rings had fewer changes in goals than a typical IT department. A second category of issues is environmental complexity. Complexity ranges from the interactions between the metric and its human users to the basic mathematical complexity of creating a metric with both the historical and predictive power required. In my opinion, the most intricate issues swirl around the metric/human interaction. In general, people will use any measure for wildly divergent purposes, ranging from reporting status to identifying process improvements. Each different use triggers a different behavior.

When seeking a single metric we need to answer the bottom-line question: is the effort worth the cost? Stated in a less black-and-white manner, will any single metric be more valuable as a communication tool than the information and transparency that would be lost by adopting it?

 


Categories: Process Management

Volunteer Power

Software Requirements Blog - Seilevel.com - Thu, 08/14/2014 - 17:00
Generally, when someone asks for project involvement or even shows a high amount of interest, I’ll find a way to include that person to the extent possible. It can sometimes make meetings more complicated, especially when you have to provide background information in order to ‘loop in’ the new person, but often a new perspective, […]
Categories: Requirements

The Purpose Of Guiding Principles in Project Management

Herding Cats - Glen Alleman - Thu, 08/14/2014 - 16:02

Five principles

A Guiding Principle defines the key criteria for making decisions about the application of a project's Practices and Processes. The Principles provide the project with the foundation to test the practices and processes in pursuit of the project's goals in the most timely and cost-effective way while still meeting essential requirements of business outcomes or mission accomplishment.

In the absence of Principles, the Practices and Processes - while possibly the right ones - have no way of being tested to assure they are producing actionable information for the management of the project.

The common lament of "we could be spending time and effort doing something more useful" ignores the fact that those useful things need to be guided by a higher structure - a holistic structure - of exchanging cost for value. The question should be: are the practices and processes we are applying to this project the proper ones to maximize the efficacy of our funding?

These five principles are the foundation of project success. Success means - in the simplest terms - On Time, On Cost, On Value. Time and Cost are easily defined; Value needs another layer for it to be connected with Time and Budget.

What is Strategy

One approach for Value is to connect the outcomes of the project with the Strategy for the business or the mission. Strategy is creating fit among a company’s activities. The success of a strategy depends on doing many things well – not just a few. The things that are done well must operate within a close-knit system. If there is no fit among the activities - in this post, the project management activities - there is no distinctive strategy and little to sustain the project management practices and processes. Project Management then reverts to the simpler task of overseeing independent functions. When this occurs, operational effectiveness determines the relative performance of the project management activities and the results of the project itself.

Improving operational effectiveness is a necessary part of management, but it is not strategy. In confusing the two, managers will be unintentionally backed into a way of thinking about competition that drives the business support processes (IT) away from the strategic support and toward the tactical improvement of operational effectiveness.

The concept of fit among functions is one of the oldest ideas in strategy. Gradually, it has been supplanted by new concepts of core competencies, critical resources and key success factors. Yet fit is far more critical to the success of the project management system. Strategic fit among the project management Practices and Processes and the business processes in which the project is deployed is fundamental not only to competitive advantage but also to the sustainability of that advantage.

This is the foundation of success for Project Based Organizations.

The mechanism for creating this Fit is the Programmatic Architecture of the project. This is the same term used for technical architecture: it is the form of the project, in the same way an architecture is the form of the product or service.

The Five Principles That Establish Programmatic Architecture

These Five Principles and their Practices are...

Principles and Practices of Performance-Based Project Management® from Glen Alleman
Related articles: What Is Strategy? · Elements of Project Success · Why Project Management is a Control System · Creating Effective Mission Statements That Have Meaning and Function · Performance Based Management · Principles First, Then Practice · Moving EVM from Reporting and Compliance to Managing for Success · Project Maxims · How To Make Decisions
Categories: Project Management

Rely on Specialists, but Sparingly

Mike Cohn's Blog - Thu, 08/14/2014 - 15:49

Last week, I talked about the concept of equality on an agile team. I mentioned that one meaning of equality could be all team members do the same work, so that everyone in agile becomes a generalist.

A common misconception is that everyone on a Scrum team must be a generalist—equally good at all technologies and disciplines, rather than a specialist in one. This is simply not true.

What I find surprising about this myth is that every sandwich shop in the world has figured out how to handle specialists, yet we, in the software industry, still struggle with the question.

My favorite sandwich shop is the Beach Hut Deli in Folsom, California. I’ve spent enough lunches there to notice that they have three types of employees: order takers, sandwich makers, and floaters.

The order takers work the counter, writing each sandwich order on a slip of paper that is passed back to the sandwich makers. Sandwich makers work behind the order takers and prepare each sandwich as it’s ordered.

Order takers and sandwich makers are the specialists of the deli world. Floaters are generalists—able to do both jobs, although perhaps not as well as the specialists. It’s not that their sandwiches taste worse, but maybe a floater is a little slower making them.

When I did my obligatory teenage stint at a fast food restaurant, I was a floater. I wasn’t as quick at wrapping burritos and making tacos as Mark, one of the cooks. And whenever the cash register needed a new roll of paper, I had to yell for my manager, Nikki, because I could never remember how to do it. But, unlike Mark and Nikki, I could do both jobs.

I suspect that just about every sandwich shop in the world has some specialists—people who only cook or who only work the counter. But these businesses have also learned the value of having generalists.

Having some generalists working during the lunch rush helps the sandwich shop balance the need to have some people writing orders and some people making the sandwiches.

What this means for Scrum teams is that yes, we should always attempt to have some generalists around. It is the generalists who enable specialists to specialize.

There will always be teams who need the hard-core device driver programmer, the C++ programmer well-versed in Windows internals, the artificial intelligence programmer, the performance test engineer, the bioinformaticist, the artist, and so on.

But, every time a specialist is added to a team, think of it as equivalent to adding a pure sandwich maker to your deli. Put too many specialists on your team, and you increase the likelihood that someone will spend perhaps too much time waiting for work to be handed off.
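The handoff risk described above can be made concrete with a toy capacity model. This sketch is purely illustrative (all numbers invented; each person handles one unit of work per time step): every order passes through two stages, and floaters can cover whichever stage is the bottleneck.

```python
def peak_throughput(order_takers, sandwich_makers, floaters):
    """Toy model: every order passes through two stages (take, make).

    Throughput is capped by the slower stage. Floaters can help at
    either stage, so try every split and keep the best-balanced one.
    """
    best = 0
    for helping_orders in range(floaters + 1):
        helping_sandwiches = floaters - helping_orders
        rate = min(order_takers + helping_orders,
                   sandwich_makers + helping_sandwiches)
        best = max(best, rate)
    return best

# Four specialists split 3/1 complete only 1 sandwich per step...
print(peak_throughput(3, 1, 0))  # 1
# ...while three specialists plus one floater complete 2.
print(peak_throughput(2, 1, 1))  # 2
```

The floater adds value not by being faster, but by letting capacity flow to the current bottleneck - the same reason generalists enable specialists to specialize.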

Note: A portion of this post is an excerpt from Mike Cohn’s book, Succeeding with Agile.

How Often Should I Blog

Making the Complex Simple - John Sonmez - Thu, 08/14/2014 - 15:00

In this video I talk about how often you should blog and why blogging more often is better as long as you can maintain a consistent level of quality.

The post How Often Should I Blog appeared first on Simple Programmer.

Categories: Programming

Where does r studio install packages/libraries?

Mark Needham - Thu, 08/14/2014 - 11:24

As a newbie to R I wanted to look at the source code of some of the libraries/packages that I’d installed via R Studio, which I initially struggled to do as I wasn’t sure where the packages had been installed.

I eventually came across a StackOverflow post describing the .libPaths function, which tells us where that is:

> .libPaths()
[1] "/Library/Frameworks/R.framework/Versions/3.1/Resources/library"

If we want to see which libraries are installed we can use the list.files function:

> list.files("/Library/Frameworks/R.framework/Versions/3.1/Resources/library")
 [1] "alr3"         "assertthat"   "base"         "bitops"       "boot"         "brew"        
 [7] "car"          "class"        "cluster"      "codetools"    "colorspace"   "compiler"    
[13] "data.table"   "datasets"     "devtools"     "dichromat"    "digest"       "dplyr"       
[19] "evaluate"     "foreign"      "formatR"      "Formula"      "gclus"        "ggplot2"     
[25] "graphics"     "grDevices"    "grid"         "gridExtra"    "gtable"       "hflights"    
[31] "highr"        "Hmisc"        "httr"         "KernSmooth"   "knitr"        "labeling"    
[37] "Lahman"       "lattice"      "latticeExtra" "magrittr"     "manipulate"   "markdown"    
[43] "MASS"         "Matrix"       "memoise"      "methods"      "mgcv"         "mime"        
[49] "munsell"      "nlme"         "nnet"         "openintro"    "parallel"     "plotrix"     
[55] "plyr"         "proto"        "RColorBrewer" "Rcpp"         "RCurl"        "reshape2"    
[61] "RJSONIO"      "RNeo4j"       "Rook"         "rpart"        "rstudio"      "scales"      
[67] "seriation"    "spatial"      "splines"      "stats"        "stats4"       "stringr"     
[73] "survival"     "swirl"        "tcltk"        "testthat"     "tools"        "translations"
[79] "TSP"          "utils"        "whisker"      "xts"          "yaml"         "zoo"

We can then drill into those directories to find the appropriate file – in this case I wanted to look at one of the Rook examples:

$ cat /Library/Frameworks/R.framework/Versions/3.1/Resources/library/Rook/exampleApps/helloworld.R
app <- function(env){
    req <- Rook::Request$new(env)
    res <- Rook::Response$new()
    friend <- 'World'
    if (!is.null(req$GET()[['friend']]))
	friend <- req$GET()[['friend']]
    res$write(paste('<h1>Hello',friend,'</h1>\n'))
    res$write('What is your name?\n')
    res$write('<form method="GET">\n')
    res$write('<input type="text" name="friend">\n')
    res$write('<input type="submit" name="Submit">\n</form>\n<br>')
    res$finish()
}
Categories: Programming

Running TAP

Phil Trelford's Array - Thu, 08/14/2014 - 08:23

The Test Anything Protocol (TAP) is a text-based protocol for test results:

 1..4
 ok 1 - Input file opened
 not ok 2 - First line of the input valid
 ok 3 - Read the rest of the file
 not ok 4 - Summarized correctly # TODO Not written yet

 

I think the idea is a good one: a simple, cross-platform, human-readable standard for formatting test results. There are TAP producers and consumers for Perl, Java, JavaScript etc., allowing you to join up tests for cross-platform projects.
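Because the format is just lines of text, a TAP consumer is easy to write in any language. Here is a minimal, illustrative parser sketch in Python (not one of the official TAP libraries) that handles the example output above:

```python
import re

def parse_tap(text):
    """Parse minimal TAP output into (description, passed, directive) tuples."""
    results = []
    for line in text.splitlines():
        m = re.match(r"(not )?ok\s+\d+\s*-\s*(.*)", line.strip())
        if m:
            # Anything after '#' is a directive such as TODO or SKIP.
            desc, _, directive = m.group(2).partition("#")
            results.append((desc.strip(), m.group(1) is None, directive.strip()))
    return results

tap = """1..4
ok 1 - Input file opened
not ok 2 - First line of the input valid
ok 3 - Read the rest of the file
not ok 4 - Summarized correctly # TODO Not written yet"""

for desc, passed, directive in parse_tap(tap):
    print(("PASS" if passed else "FAIL"), desc, directive)
```

A real consumer would also check the plan line (`1..4`) against the number of results, but the sketch shows how little machinery the protocol needs.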

NUnit runner

Over the last week or so I’ve created a TAP runner F# script for NUnit test methods. It supports the majority of NUnit’s attributes, including ExpectedException, TimeOut and test generation with TestCase, TestCaseSource, Values, etc.

The runner can be used in a console app to produce TAP output to the console or directly in F# interactive for running tests embedded in a script.

TAP run

Tests can be organized in classes:

type TAPExample () =
    [<Test>] member __.``input file opened`` () = Assert.Pass()
    [<Test>] member __.``First line of the input valid`` () = Assert.Fail()
    [<Test>] member __.``Read the rest of the file`` () = Assert.Pass()
    [<Test>] member __.``Summarized correctly`` () = Assert.Fail()

Tap.Run typeof<TAPExample>

or in modules:

let [<Test>] ``input file opened`` () = Assert.Pass()
let [<Test>] ``First line of the input valid`` () = Assert.Fail()
let [<Test>] ``Read the rest of the file`` () = Assert.Pass()
let [<Test>] ``Summarized correctly`` () = Assert.Fail()
type Marker = interface end
Tap.Run typeof<Marker>.DeclaringType

Note: the marker interface is used to reflect the module’s type.

Console output

In the console test cases are marked in red or green:



Debugging

If you create an F# Tutorial project, you get an F# script file that runs as a console application, allowing you to set breakpoints in your script with the Visual Studio debugger.



Prototyping

When I’m prototyping a new feature I typically use the F# interactive environment for quick feedback and to do exploratory testing. The TAP runner lets you create and run NUnit-formatted tests directly in the script file before later promoting them to a full-fat project for use in a continuous build environment.

F# Scripting

Interested in learning more about F# scripting? Pop along to Phil Nash’s talk at the F#unctional Londoners tonight.

Categories: Programming

Measurement Proliferation: Guarding the Peace or Mutually Assured Destruction?

Small, Medium and Large or Low, Average  and High?

Lots of ways to measure

Measurement proliferation occurs when organizations decide that everything can and should be measured, leading to a rapid increase in measures and metrics. There are at least two measurement proliferation scenarios, and both have as great a chance of destroying your measurement program as of helping it. The two scenarios can be summarized as proliferation of breadth (measuring everything), followed by proliferation of depth (measuring the same thing many ways).

There are many items that are very important to measure, and it’s difficult to restrain yourself once you’ve started. Because it seems important to measure many activities within an IT organization, many measurement teams think measuring everything is important. Unfortunately, measuring what is really important is rarely easy or straightforward. When organizations slip into “measure everything” mode, what gets measured is often unrelated to the organization’s target behavior (the real needs). When measures are not related to the target behavior, it is easy to breed unexpected behaviors (not indeterminate or unpredictable, just not what was expected). For example, one organization determined that personal capability was a key metric: more capability would translate into higher productivity and quality. During research into the topic, it was determined that capability was too difficult or “touchy-feely” to measure directly. The organization decided that counting requirements was a rough proxy for systems capability, and that if systems capability went up, it must reflect personal capability. So, of course, they measured requirements. One unanticipated behavior was that the requirements became more granular (actually more consistent), which created an appearance of increased capability that could not be sustained (or easily approved) after the initial baseline of the measure.

The explosion of pre-defined measures drives the second proliferation scenario: having too many measures for the same concept. Capers Jones mentioned a number of examples in my interview with him for SPaMCAST. Capers caught my imagination with the statement that many functional metrics are currently in use, ranging from IFPUG function points to COSMIC, with use case points, NESMA function points and others in between. This is in addition to counting lines of code and objects. The fracturing of the world of functional metrics has occurred for many reasons, ranging from the natural maturation of the measurement category to the explosion of information sharing on the web. Regardless of the reason for the proliferation, using multiple measures for the same concept just because you can, can have unintended consequences. First, having multiple measures for the same concept can create a focus that makes the concept seem more important than it is. Secondly, multiple measures may send a message that no one is quite sure how to measure the concept, which can confuse the casual observer. Generally there is no reason to use multiple methods to measure the same concept within any organization. Even if each measure were understood, proliferation of multiple measures for the same concept wastes time and money. An organization I recently observed had implemented IFPUG Function Points, COSMIC Function Points, Use Case Points and Story Points to measure software size. This organization had spent the time and effort to find a conversion mechanism so that each measure could be combined for reporting. In this case the proliferation of metrics for the same concept had become an ‘effort eater.’ Unfortunately it is not uncommon to see organizations trying to compare the productivity of projects based on very different yardsticks rather than adopting a single measure for size. The value of measurement tends to get lost when there is no common basis for discussion. A single measure will provide that common basis.
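The "conversion mechanism" such an organization builds might look like the sketch below. Note that the conversion factors here are invented for illustration, not published calibration data - which is exactly why these mechanisms eat effort and invite arguments:

```python
# Illustrative only: the factors below are made up for this sketch,
# not real IFPUG/COSMIC/use-case-point calibration data.
TO_IFPUG = {"ifpug_fp": 1.0, "cosmic_cfp": 0.9, "use_case_points": 2.5}

def to_common_size(measures):
    """Collapse mixed size measures into one common unit (IFPUG FP here)."""
    return sum(value * TO_IFPUG[unit] for unit, value in measures)

# Three projects sized with three different yardsticks:
projects = [("ifpug_fp", 100), ("cosmic_cfp", 200), ("use_case_points", 40)]
print(to_common_size(projects))  # 100 + 180 + 100 = 380.0
```

Every factor in the table is a standing maintenance cost and a point of dispute; adopting a single size measure removes the table entirely.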

Both the proliferation of breadth and of depth have an upside (everybody gets to collect, report and use their favorite measure) and a downside that sounds very similar (everybody gets to collect, report and use their favorite measures). Extra choices come at a cost in effort, communication and compatibility. The selection of measures and metrics must be approached with the end in mind: your organization’s business goals. Allowing the proliferation of measures and metrics, whether in depth or breadth, must be approached with great thought, or it will cost you dearly in information and credibility.


Categories: Process Management

Seven Principles of Agile Software Development in the US DOD

Herding Cats - Glen Alleman - Wed, 08/13/2014 - 23:36

The Software Engineering Institute (SEI) is an FFRDC (Federally Funded Research and Development Center). Another FFRDC, IDA, is a client.

SEI is focused on helping the DOD improve the development of software.

Here are podcasts on the principles of agile development of software in the DOD:

  • First Principle - Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  • Second Principle - Welcome changing requirements, even late in development.
  • Third Principle - Deliver working software frequently.
  • Fourth Principle - Business people and developers must work together daily throughout the project.
  • Fifth Principle - Build projects around motivated individuals. Give them the environment and support they need and trust them to get the job done.
  • Sixth Principle - The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  • Seventh Principle - Working software is the primary measure of progress.
Related articles: Is Your Organization Ready for Agile? · Three Kinds of Uncertainty About the Estimated Cost and Schedule · Can Enterprise Agile Be Bottom Up? · What Do We Mean When We Say "Agile Community?" · Studies, Science, Cybersecurity & Saddam: $888.8M to IDA for its 3 US FFRDCs from 2014-2018.
Categories: Project Management

Testing on the Toilet: Web Testing Made Easier: Debug IDs

Google Testing Blog - Wed, 08/13/2014 - 18:58
by Ruslan Khamitov 

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

Adding ID attributes to elements can make it much easier to write tests that interact with the DOM (e.g., WebDriver tests). Consider the following DOM with two buttons that differ only by inner text:
<div class="button">Save</div>
<div class="button">Edit</div>

How would you tell WebDriver to interact with the “Save” button in this case? You have several options. One option is to interact with the button using a CSS selector:
div.button

However, this approach is not sufficient to identify a particular button, and there is no mechanism to filter by text in CSS. Another option would be to write an XPath, which is generally fragile and discouraged:
//div[@class='button' and text()='Save']

Your best option is to add unique hierarchical IDs where each widget is passed a base ID that it prepends to the ID of each of its children. The IDs for each button will be:
contact-form.save-button
contact-form.edit-button

In GWT you can accomplish this by overriding onEnsureDebugId() on your widgets. Doing so allows you to create custom logic for applying debug IDs to the sub-elements that make up a custom widget:
@Override
protected void onEnsureDebugId(String baseId) {
  super.onEnsureDebugId(baseId);
  saveButton.ensureDebugId(baseId + ".save-button");
  editButton.ensureDebugId(baseId + ".edit-button");
}

Consider another example. Let’s set IDs for repeated UI elements in Angular using ng-repeat. Setting an index can help differentiate between repeated instances of each element:
<tr id="feedback-{{$index}}" class="feedback" ng-repeat="feedback in ctrl.feedbacks" >

In GWT you can do this with ensureDebugId(). Let’s set an ID for each of the table cells:
@UiField FlexTable table;
UIObject.ensureDebugId(table.getCellFormatter().getElement(rowIndex, columnIndex),
baseID + colIndex + "-" + rowIndex);

Take-away: Debug IDs are easy to set and make a huge difference for testing. Please add them early.
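The prepend-the-base-ID pattern is framework-agnostic. A minimal sketch in Python (using a hypothetical dict-based widget tree, not the GWT API) shows the recursion that gives every element a unique hierarchical ID:

```python
def ensure_debug_id(widget, base_id):
    # Give the widget itself the base ID, then prepend it to each child's
    # name, mirroring the onEnsureDebugId pattern described above.
    widget["id"] = base_id
    for name, child in widget.get("children", {}).items():
        ensure_debug_id(child, f"{base_id}.{name}")

form = {"children": {"save-button": {}, "edit-button": {}}}
ensure_debug_id(form, "contact-form")
print(form["children"]["save-button"]["id"])  # contact-form.save-button
```

A test can then locate any element by one stable, unambiguous ID instead of a fragile CSS selector or XPath.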

Categories: Testing & QA

The Ruckus About DDSTOP Post

Herding Cats - Glen Alleman - Wed, 08/13/2014 - 18:03

Awhile back I posted about Don't Do Stupid Things On Purpose (DDSTOP). This drew some comments that it is a very Theory-X management approach. The notion of Don't Do Stupid Things On Purpose comes from observing people DSTOP with other people's money.

Many might say that people have to experiment, try things out, test the waters. Yes, this is true, but who pays for that testing, trying, experimenting? Is there budget for that in the project plan? It may be that this is actually an experimental project and the whole point of the project is to do stupid things on purpose to see what happens.

I wonder, if the experimental work being paid for by the customer were worded like that, whether it would sound so clever.

Here's some clarity and a few examples of DSTOP in recent years.

  • Install CA's Clarity (a very expensive project management and project accounting system).
    • Apply Earned Value to a large ($250M) Enterprise IT project. The project gets in trouble, we come on board to perform triage, and discover that the Program Management Office has been copying ACWP - the Actual Cost of Work Performed - to BCWP - the Budgeted Cost of Work Performed, the Earned Value - then reporting to senior management that everything is going fine.
    • Instead of actually measuring physical percent complete to compute BCWP, they simply copied the actual cost of the work, hiding the actual progress.
    • That's DSTOP
  • Install TTPro as a defect tracking and help desk ticketing system,
    • Communicate most of the defect repairs done in development verbally.
    • Lose the traceability between the defect and the fix, so the QA staff can't trace the defects back to their test coverage suite.
    • That's DSTOP
  • Plan the development of several hundred million dollars of flight avionics upgrades for an aircraft that has been upgraded in the past.
    • Build the Integrated Master Schedule around the past performance activities - good idea
    • Resource load the IMS from past performance - good idea
    • Discover that the cost and schedule don't fit inside the stated needs of the customer, so dial down the work durations and assigned labor loads to fit the price-to-win for the RFP.
    • Get awarded the contract, go over budget, and show up late after 12 months of work; get the contract canceled.
    • That's DSTOP
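The first example is worth making concrete. With toy earned-value numbers (invented for illustration), honest BCWP comes from physical percent complete, while copying ACWP into BCWP forces the cost variance to zero no matter how much money is burned:

```python
# Toy earned-value figures to show why copying ACWP into BCWP hides trouble.
BAC = 250.0                # $M, Budget At Completion
ACWP = 120.0               # $M actually spent so far
physical_complete = 0.30   # 30% of the work really done

# Honest earned value: budgeted cost of the work actually performed.
BCWP = physical_complete * BAC        # 75.0
cost_variance = BCWP - ACWP           # 75 - 120 = -45 -> $45M overrun

# The DSTOP version: copy actuals into earned value.
BCWP_faked = ACWP
cost_variance_faked = BCWP_faked - ACWP   # always 0: "everything is fine"

print(cost_variance, cost_variance_faked)  # -45.0 0.0
```

The faked report stays green right up until the money runs out, which is exactly what the triage team found.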

So is it Theory X to ask those working on and managing the project to stop and think about what they are doing and what decisions are being made - and to assess the impact of those decisions? Or should those entrusted with the customer's money just go exploring, trying out ideas in the hope that something innovative will come out of it?

When we're assumed to be the stewards of other people's money, they should expect us to behave as stewards. This means we make decisions about the use of that time and treasure in ways that are informed by our experience, skills, and governance model. Doing otherwise means we're not the right people to be spending our customer's money - or that our customer has a lot of money to spend on us while we become the right people they should have hired in the first place.

Related articles: In Earned Value, Money Spent is Not a Measure of Progress! · What Do We Mean When We Say "Agile Community?"
Categories: Project Management

Hamsterdb: An Analytical Embedded Key-value Store

 

In this post, I’d like to introduce you to hamsterdb, an Apache 2-licensed, embedded analytical key-value database library similar to Google's leveldb and Oracle's BerkeleyDB.

hamsterdb is not a new contender in this niche. In fact, hamsterdb has been around for over 9 years. In this time it has grown dramatically, and the focus has shifted from a pure key-value store to an analytical database offering functionality similar to a column-store database.

hamsterdb is single-threaded and non-distributed, and users usually link it directly into their applications. hamsterdb offers a unique (at least, as far as I know) implementation of Transactions, as well as other unique features similar to column store databases, making it a natural fit for analytical workloads. It can be used natively from C/C++ and has bindings for Erlang, Python, Java, .NET, and even Ada. It is used in embedded devices and on-premises applications with millions of deployments, as well as in cloud instances for caching and indexing.

hamsterdb has a unique feature in the key-value niche: it understands schema information. While most databases do not know or care what kind of keys are inserted, hamsterdb supports key types for binary keys...
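To see why schema awareness matters, here is a minimal sketch in plain Python - not hamsterdb's actual API, just an illustration of the idea. When a store knows its keys are, say, fixed-size unsigned integers, it can pack them contiguously with no per-key length overhead and compare them in native numeric order, rather than storing length-prefixed byte strings and comparing byte-wise:

```python
import struct

# Variable-length binary keys: each key needs a length prefix (or a separate
# index of offsets), and ordering is byte-wise.
binary_keys = [b"customer:42", b"customer:7", b"customer:100"]
encoded = b"".join(struct.pack("<I", len(k)) + k for k in binary_keys)

# Typed fixed-size keys (uint32): no per-key overhead, packed contiguously,
# which enables tight B-tree nodes and fast sequential scans.
typed_keys = [42, 7, 100]
packed = struct.pack("<3I", *typed_keys)

# Decoding the typed layout is a single unpack; sorting uses native integer order.
decoded = sorted(struct.unpack("<3I", packed))
print(decoded)              # [7, 42, 100]

# Byte-wise ordering of the binary keys differs from the numeric order a user
# probably intended:
print(sorted(binary_keys))  # [b'customer:100', b'customer:42', b'customer:7']
```

The typed layout is both smaller (no length prefixes) and sorted the way an analytical query would expect, which is the kind of benefit a schema-aware key-value store can exploit internally.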

Categories: Architecture

People Are Not Resources

My manager reviewed the org chart along with the budget. “I need to cut the budget. Which resources can we cut?”

“Well, I don’t think we can cut software licenses.” I was reviewing my copy of the budget. “I don’t understand this overhead item here.” I pointed to a particular line item.

“No,” he said. “I’m talking about people. Which people can we lay off? We need to cut expenses.”

“People aren’t resources! People finish work. If you don’t want us to finish projects, let’s decide which projects not to do. Then we can re-allocate people, if we want. But we don’t start with people. That’s crazy.” I was vehement.

My manager looked at me as if I’d grown three heads. “I’ll start wherever I want,” he said. He looked unhappy.

“What is the target you need to accomplish? Maybe we can ship something earlier, and bring in revenue, instead of laying people off? You know, bring up the top line, not decrease the bottom line?”

Now he looked at me as if I had four heads.

“Just tell me who to cut. We have too many resources.”

When managers think of people as resources, they stop thinking. I’m convinced of this. My manager was under pressure from his management to reduce his budget. In the same way that technical people under pressure to meet a date stop thinking, managers under pressure stop thinking. Anyone under pressure stops thinking. We react. We can’t consider options. That’s because we are so very human.

People are resourceful. But we, the people, are not resources. We are not the same as desks, licenses, infrastructure, and other goods that people need to finish their work.

We need to change the language in our organizations. We need to talk about people as people, not resources. And that is the topic of this month’s management myth: Management Myth 32: I Can Treat People as Interchangeable Resources.

Let’s change the language in our organizations. Let’s stop talking about people as “resources” and start talking about people as people. We might still need layoffs. But, maybe we can handle them with humanity. Maybe we can think of the work strategically.

And, maybe, just maybe, we can think of the real resources in the organization. You know, the ones we buy with the capital equipment budget or expense budget, not operating budget. The desks, the cables, the computers. Those resources. The ones we have to depreciate. Those are resources. Not people.

People become more valuable over time. Show me a desk that does that. Ha!

Go read Management Myth 32: I Can Treat People as Interchangeable Resources.

Categories: Project Management

Success Articles for Work and Life

"Success consists of going from failure to failure without loss of enthusiasm." -- Winston Churchill

I now have more than 300 articles on the topic of Success to help you get your game on in work and life:

Success Articles

That’s a whole lot of success strategies and insights right at your fingertips. (And it includes the genius from a wide variety of sources including  Scott Adams, Tony Robbins, Bruce Lee, Zig Ziglar, and more.)

Success is a hot topic. 

Success has always been a hot topic, but it seems to be growing in popularity.  I suspect it’s because so many people are being tested in so many new ways and competition is fierce.

But What is Success? (I tried to answer that using Zig Ziglar’s frame for success.)

For another perspective, see Success Defined (It includes definitions of success from Stephen Covey and John Maxwell.)

At the end of the day, the most important definition of success is the one that you apply to yourself and your life.

People can make or break themselves based on how they define success for their life.

Some people define success as another day above ground; others set a very high, very strict bar that only a few mere mortals can ever achieve.

That said, everybody is looking for an edge.   And, I think our best edge is always our inner edge.

As one of my mentors put it, “the fastest thing you can change in any situation is yourself.”  And as we all know, nature favors the flexible.  Our ability to adapt and respond to our changing environment is the backbone of success.  Otherwise, success is fleeting, and it has a funny way of eluding or evading us.

I picked a few of my favorite articles on success.  These ones are a little different by design.  Here they are:

Scott Adams’ (Dilbert) Success Formula

It’s the Pebble in Your Shoe

The Wolves Within

Personal Leadership Helps Renew You

The Power of Personal Leadership

Tony Robbins on the 7 Traits of Success

The Way of Success

The future is definitely uncertain.  I’m certain of that.   But I’m also certain that life’s better with skill and that the right success strategies under your belt can make or break you in work and life.

And the good news for us is that success leaves clues.

So make like a student and study.

Categories: Architecture, Programming

The Measurement Value Equation

Producing now, so we can consume later.

A food chain is the sequence between production and consumption. In software development, the development team generates the functionality that is delivered to a user. In the measurement food chain, the measures that the team generates and collects mark the beginning of a process that is consumed to create analyses and reports.  As the team develops, enhances, or maintains functionality, it consumes raw materials, such as effort, ideas, and time, to produce an output to be measured.  Managers and administrators monitor the consumption of inputs, the process of transformation, and the outputs.  Each of these components can be analyzed and measured; transformed into a number that equates to value or cost.  The comparison of value to cost can be evaluated against the trials and tribulations of production, adding a significant component to the overall value equation.

The question that begs to be asked is ‘who needs this data?’  Who can and does leverage the output of measurement?  Does the audience for measurement include the project and support personnel that create and maintain the functionality?  Or is measurement merely a tool to control the work and workers?  In order to maximize the value of your metrics program, all constituencies must derive value: development teams, administrators, project managers, and organizational managers.  Design measures with this end in mind.


Categories: Process Management

Welcome to The Situation Room

Software Requirements Blog - Seilevel.com - Tue, 08/12/2014 - 17:00
The Seilevel World Headquarters in Austin has workstations scattered throughout about 10 different offices (rooms, not separate locations) and about 7 conference rooms with marker boards, large tables, conference phones, and projectors. We are a consulting firm; so we aren’t all always in our office. Those of us who travel to client sites don’t have […]
Categories: Requirements

Agile Bootcamp Talk Posted on Slideshare

I posted my slides for my Agile 2014 talk, Agile Projects, Program & Portfolio Management: No Air Quotes Required on Slideshare. It’s a bootcamp talk, so the majority of the talk is making sure that people understand the basics about projects. Walk before you run. That part.

However, you can take projects and “scale” them to programs. I wish people wouldn’t use that terminology. Program management isn’t exactly scaling. Program management is when the strategic endeavor of the program encompasses each of the projects underneath.

If you have questions about the presentation, let me know. Happy to answer questions.

Categories: Project Management

Hierarchies remove scaling properties in Agile Software projects

Software Development Today - Vasco Duarte - Tue, 08/12/2014 - 05:00

There is a lot of interest in scaling Agile Software Development. And that is a good thing. Software projects of all sizes benefit from what we have learned over the years about Agile Software Development.

Many frameworks have been developed to help us implement Agile at scale. We have: SAFe, DAD, Large-scale Scrum, etc. I am also aware of other models for scaled Agile development in specific industries, and those efforts go beyond what the frameworks above discuss or tackle.

However, scaling as a problem is neither a software nor an Agile topic. Humanity has been scaling its activities for millennia, and very successfully at that. The Pyramids in Egypt, the Panama Canal in Central America, the immense railways all over the world, the Airbus A380, etc.

All of these scaling efforts share some commonalities with software and among each other, but they are also very different. I'd like to focus on one particular aspect of scaling that has a huge impact on software development: communication.

The key to scaling software development

We've all heard countless accounts of projects gone wrong because of missing, inadequate, or just plain bad communication. And typically, these problems grow with the size of the team. Communication is a major challenge in scaling any human endeavor, and especially one - like software - that so heavily depends on successful communication patterns.

In my own work in scaling software development I've focused on communication networks. In fact, I believe that scaling software development is first an exercise in understanding communication networks. Without understanding the existing and necessary communication networks in large projects, we will not be able to help those projects adapt. In many projects, a different approach is used: hierarchical management with strict (and non-adaptable) communication paths. This approach effectively reduces the adaptability and resilience of software projects.

Scaling software development is first and foremost an exercise in understanding communication networks.

Even if hierarchies can successfully scale projects where communication needs are known in advance (like building a railway network for example), hierarchies are very ineffective at handling adaptive communication needs. Hierarchies slow communication down to a manageable speed (manageable for those at the top), and reduce the amount of information transferred upwards (managers filter what is important - according to their own view).

In a software project, those properties of hierarchy-bound communication networks keep valuable information from reaching stakeholders. As a consequence, one can say that hierarchies remove scaling properties from software development. Hierarchical communication networks restrict the reach of information without concern for those who would benefit from it, because the goal is to "streamline" communication so that it adheres to the hierarchy.
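The cost of hierarchy-bound communication can be made concrete with a small, hypothetical sketch (the team names and graph are invented for illustration): in a strict tree, a message between two developers on different teams must climb up through the managers and back down, while a single adaptive cross-team link collapses that path to one hop.

```python
from collections import deque

def shortest_path(edges, start, goal):
    """Breadth-first search over an undirected graph given as an edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

# Strict hierarchy: two developers can only communicate via their managers.
hierarchy = [("CEO", "MgrA"), ("CEO", "MgrB"),
             ("MgrA", "DevA1"), ("MgrA", "DevA2"),
             ("MgrB", "DevB1"), ("MgrB", "DevB2")]
print(shortest_path(hierarchy, "DevA1", "DevB1"))  # 4 hops, each a filter point

# Adaptive network: add one direct, cross-team communication link.
network = hierarchy + [("DevA1", "DevB1")]
print(shortest_path(network, "DevA1", "DevB1"))    # 1 hop
```

Every extra hop in the hierarchical path is a point where a manager can slow, filter, or drop the information; the direct link removes all of them, which is exactly what curating communication networks is meant to enable.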

In software development, one must constantly map, develop and re-invent the communication networks to allow for the right information to reach the relevant stakeholders at all times. Hence, the role of project management in scaled agile projects is to curate communication networks: map, intervene, document, and experiment with communication networks by involving the stakeholders.

Scaling agile software development is - in its essential form - a work of developing and evolving communication networks.

A special thank you note to Esko Kilpi and Clay Shirky for the inspiration for this post through their writings on organizational patterns and value networks in organizations.

Picture credit: John Hammink, follow him on twitter

Seven Deadly Sins of Metrics Programs: a Conclusion

Dr. Deming

The Seven Deadly Sins of metrics programs are:

  1. Pride – Believing that a single number/metric is more important than any other factor.
  2. Envy – Instituting measures that facilitate the insatiable desire for another team’s people, tools or applications.
  3. Wrath – Using measures to create friction between groups or teams.
  4. Sloth – Unwillingness to act on or care about the measures you create.
  5. Greed – Allowing metrics to be used as a tool to game the system.
  6. Gluttony – Application of an excess of metrics.
  7. Lust – Pursuit of the number rather than the business goal.

In the end, these sins are a reflection of the organization’s culture. Bad metrics can generate bad behavior and reinforce organizational culture issues. Adopting good measures is a step in the right direction; however, culture can’t be changed by good metrics alone. Shifting the focus to the organization’s business goals, fostering transparency to reduce gaming, and then using measures as tools rather than weapons can support changing the culture. Measurement can generate behavior that leads toward a healthier environment.  As leaders, measurement and process improvement professionals, we should push to shape the environment so that everyone can work effectively for the company.

The Shewhart PDCA Cycle (or Deming Wheel) sets out a model where measurement becomes a means to an end rather than an end in its own right. The Deming wheel popularized the Plan, Do, Check, Act (PDCA) cycle, which is focused on delivering business value. Using the PDCA cycle, organizational changes are first planned, then executed, checked by measurement, and refined based on that feedback. In his book The New Economics, Deming wrote, “Reward for good performance may be the same as reward to the weather man for a pleasant day.” Organizations that fall prey to the Seven Deadly Sins of metrics programs are apt to incent the wrong behavior.
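The PDCA cycle can be sketched as a feedback loop in code. This is a hypothetical illustration (the goal, measure, and "deployments per week" scenario are invented, not from Deming): plan a change, execute it, check the measured outcome against the business goal, and act by refining the change until the goal is met.

```python
def pdca(goal, plan_change, execute, measure, refine, max_cycles=5):
    """Run Plan-Do-Check-Act cycles until the measured value meets the goal."""
    change = plan_change()                # Plan: propose an organizational change
    for _ in range(max_cycles):
        execute(change)                   # Do: carry out the change
        result = measure()                # Check: measurement serves the goal...
        if result >= goal:
            return change                 # ...the end is the business goal itself
        change = refine(change, result)   # Act: refine based on the feedback
    return change

# Toy usage: "deployments per week" as the business measure.
state = {"deploys": 2}
best = pdca(
    goal=5,
    plan_change=lambda: {"automation": 1},
    execute=lambda c: state.update(deploys=state["deploys"] + c["automation"]),
    measure=lambda: state["deploys"],
    refine=lambda c, r: {"automation": c["automation"] + 1},
)
print(state["deploys"])  # 5
```

Note that the loop exits on the business goal, not on the measure alone; the measurement is only the check step, which is the distinction the Seven Deadly Sins keep violating.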

(Thank you Dr. Deming).


Categories: Process Management