
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Stuff The Internet Says On Scalability For May 1st, 2015

Hey, it's HighScalability time:


Got containers? Gorgeous shot of the CSCL Globe (by Walter Scriptunas II), world's largest container ship: 1,313ft long; 19,000 standard containers.
  • $3000: Tesla's new 7kWh daily cycle battery.
  • Quotable Quotes:
    • @mamund: "Turns out there is nothing about HTTP that I like" --  Douglas Crockford 
    • @PeterChch: Your little unimportant site might be hacked not for your data but for your aws resources. E.g. bitcoin mining.
    • @Joseph_DeSimone: I find it stunning that Google's annual R&D budget totaled $9.8 billion and the Budget for the National Science Foundation was $7.3 billion
    • @jedberg: The new EC2 container service adds the missing granularity to #ec2
    • Randy Shoup: “Every service at Google is either deprecated or not ready yet.”  -- Google engineering proverb
    • @mtnygard: Today the ratio of admins to servers in a well-behaved scalable web companies is about 1 to 10,000. @botchagalupe #craftconf
    • @joshk: Data: There Are Over 9x More Private IPOs Than Actual Tech IPOs 
    • @nwjsmith: “Systems are not algorithms. Systems are much more complex.” #CraftConf @skamille
    • kk: “Because the center of the universe is wherever there is the least resistance to new ideas.”
    • John Allspaw: Stop thinking that you’re trying to solve a troubleshooting problem; you’re not. Instead of telling me about how your software will solve problems, show me that you’re trying to build a product that is going to join my team as an awesome team member, because I’m going to think about using/buying your service in the same way that I think about hiring.
    • @mpaluchowski: "Netflix is a #logging system that happens to play movies." #CraftConf
    • John Wilke:  Resiliency is more important than performance.
    • @peakscale: The server/cattle metaphor rubs me the wrong way... all the farmers I knew and worked for named and cared about their herd.
    • @aphyr: "We've managed to run 40 services in prod for three years without needing to introduce a consensus system" @skamille, #CraftConf
    • @ryantomlinson: “Spotify have been using DNS for service discovery for a long time” #CraftConf
    • @csanchez: Google "we start over 2 billion containers per week" containers, containers, containers! #qconlondon 
    • @tyler_treat: If you're using RabbitMQ, consider replacing it with Kafka. Higher throughput, better replication, replayability. Same goes for other MQs.
    • @tastapod: @botchagalupe telling #CraftConf how it is! “Yelp is spinning up 8 containers a second. This is the real sh*t, man!”
    • @mpaluchowski: "A static #alert threshold won't be any good next week. It must be calculated." #CraftConf
    • @mtnygard: #craftconf @randyshoup “Microservices are an answer to a scaling problem, not a business problem.”  So right.
    • @adrianco: @mtnygard @randyshoup speed of development is the business problem that leads to Microservices.
    • @b6n: the aws financials should be a wake-up call to anyone still thinking cloud isn't a game of raw scale
    • @mtnygard: The “edge” used to be top-of-rack. Then the hypervisor. Now it’s the container. That’s 100x the number of IPs. — @botchagalupe #craftconf
    • @idajantis: 'An escalator can never break; it can only become stairs' - nice one by @viktorklang at #CraftConf on Distributed Systems failing gracefully
    • @jessitron: "You should store your data in a real database and replicate it to Elasticsearch." @aphyr #CraftConf

  • A telling difference between Google and Apple: Google Now becomes a more robust platform with 70 new partner apps. Apple takes an app-centric view of the world and Google, not surprisingly, takes a data-centric view. With Google, developers feed Google data for Google to display. With Apple, developers feed Apple apps for users to consume. On Apple, developers push their own brand and control functionality through bundled extensions, but Google will have the perspective to really let their deep learning prowess sing. So there's a real choice.

  • How appropriate that game theory is applied to cyberwarfare. Mutually Assured Destruction isn't just for nukes. Pentagon Announces New Strategy for Cyberwarfare: “Deterrence is partially a function of perception,” the new strategy says. “It works by convincing a potential adversary that it will suffer unacceptable costs if it conducts an attack on the United States, and by decreasing the likelihood that a potential adversary’s attack will succeed.”

  • Reducing big data using ideas from quantum theory makes it easier to interpret. So maybe QM is nature's way of making sense of the BigData that is the Universe?

  • Synergy is not always BS. Cheaper bandwidth or bust: How Google saved YouTube: YouTube was burning through $2 million a month in bandwidth costs before the acquisition. What few knew at the time was that Google was a pioneer in data center technology, which allowed it to dramatically lower the costs of running YouTube.

  • In a winner-take-all market, is the cost of customer acquisition Pyrrhic? Uber Burning $750 Million in a Year.

  • The cloud behind the cloud. Apple details how it rebuilt Siri on Mesos: Apple’s custom Mesos scheduler is called J.A.R.V.I.S.; Apple uses J.A.R.V.I.S. as its internal platform-as-a-service; Apple’s Mesos cluster spans thousands of nodes and runs about a hundred services; Siri’s Mesos backend represents its third generation, and a move away from “traditional” infrastructure.

Don't miss all that the Internet has to say on Scalability. Click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read, so please keep on reading)...

Categories: Architecture

Fit for Purpose and Fit for Use

Herding Cats - Glen Alleman - Fri, 05/01/2015 - 07:15

In the ITIL governance paradigm, two terms are used to test ideas, processes, and their outcomes.

Fit for Purpose and Fit for Use

Both terms are used to describe the value of an IT outcome. A product or service's value is defined by fitness for purpose (utility) and fitness for use (warranty).

Fit for purpose, or utility, means the service fulfills the customer's needs.

Fit for use, or warranty, means the product or service is available when the user needs it.

From Philippe Kruchten's The Frog and the Octopus we get 8 factors defining a context from which to test any idea that we encounter.

  1. Size
  2. Criticality
  3. Business model
  4. Stable architecture
  5. Team distribution
  6. Governance
  7. Rate of change
  8. Age of system

So when we hear something that doesn't seem to pass the smell test, think of Carl's advice

Nice Hypothesis

And then when we hear you're just not willing to try out this idea, think of some more of his advice.


Then ask: how is this idea of yours Fit for Purpose and Fit for Use against those 8 context factors? Don't have an answer? Then maybe the personal anecdotes aren't ready for prime time in the Enterprise domain.

Related articles: Root Cause Analysis; The Reason We Plan, Schedule, Measure, and Correct
Categories: Project Management

The Top Five Issues In Project Estimation

Sometimes estimation leaves you in a fog!

I recently asked a group of people the question, “What are the two largest issues in project estimation?” I received a wide range of answers, probably a reflection of the range of individuals answering. Five macro categories emerged from the answers. They are:

  1. Requirements. The impact of unclear and changing requirements on budgeting and estimation was discussed in detail in the entry, Requirements: The Chronic Problem with Project Estimation.  Bottom line, change is required to embrace dynamic development methods and that change will require changes in how the organization evaluates projects.
  2. Estimate Reliability. The perceived lack of reliability of an estimate can be generated by many factors, including differences between development and estimation processes. One of the respondents noted, “most of the time the project does not believe the estimate and thus comes up with their own, which is primarily based on what they feel the customer wants to hear.”
  3. Project History. Both analogous and parametric estimation processes use the past as an input in determining the future.  Collection of consistent historical data is critical to learning and not repeating the same mistakes over and over.  According to Joe Schofield, “few groups retain enough relevant data from their experiences to avoid relearning the same lesson.”
  4. Labor Hours Are Not The Same As Size. Many estimators jump immediately to estimating the effort needed to perform the project or its individual tasks. By jumping straight to effort, estimators miss all of the nuances that affect the level of effort required to deliver value (see the sketch after this list). According to Ian Brown, “then the discussion basically boils down to opinions of the number of hours, rather than assessing other attributes that drive the number of hours that something will take.”
  5. No One Dedicated to Estimation.  Estimating is a skill built on a wide range of techniques that need to be learned and practiced.  When no one is dedicated to developing and maintaining estimates it is rare that anyone can learn to estimate consistently, which affects reliability.  To quote one of the respondents, “consistency of estimation from team to team, and within a team over time, is non-existent.”
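
A minimal sketch of what points 3 and 4 imply in practice: let consistent historical data and a size measure drive the effort estimate, rather than jumping straight to hours. The historical numbers below are invented for illustration only.

# Hedged sketch (R): a simple parametric estimate from made-up history.
# Consistent history (point 3) lets size (point 4) drive the effort estimate.
history <- data.frame(size   = c(120, 300, 80, 450, 200),     # e.g. function points
                      effort = c(900, 2400, 560, 3800, 1500)) # person-hours
fit <- lm(effort ~ size, data = history)  # effort as a function of size
predict(fit, data.frame(size = 250))      # estimated effort for a 250-FP project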

Each of the top five issues is solvable without throwing out the concept of estimation, which is critical for planning at the organization, portfolio and product levels. Every organization will have to wrestle with its own solution to the estimation conundrum. However, the first step is to recognize the issues you face and your goals for the estimation process.


Categories: Process Management

Microsoft is becoming cool again

Eric.Weblog() - Eric Sink - Thu, 04/30/2015 - 19:00
Isn't "New Microsoft" awesome?

.NET is going open source? And cross-platform? On Github?!?

The news out of Redmond often seems like a mistimed April Fools' joke.

But it's real. This is happening. Microsoft is apparently willing to do "whatever it takes" to get developers to love them again.

How did this company change so much, so quickly?

A lot of folks are giving credit to CEO Satya Nadella. And there could be some truth to that. Maaaaaaybe.

Another popular view: Two of the most visible people in this story are:

  • Scott Hanselman (whose last name I cannot type without double-checking the spelling.)

  • and Scott Gu (whose last name is Guenther. Or something like that. I can never remember anything after the first two letters.)

I understand why people think maybe these two guys caused this revolution. They both seem to do a decent job I suppose.

But the truth is that New Microsoft started when Microsoft hired Martin Woodward.

What?!? Who the heck is Martin Woodward?

Martin's current position is Executive Director of the .NET Foundation. Prior to that, he worked as a Program Manager on various developer tools.

Nobody knows who Martin is. Either of the Scotts has two orders of magnitude more Twitter followers.

But I think if you look closely at Martin's five+ year career at Microsoft, you will see a pattern. Every time a piece of Old Microsoft got destroyed in favor of New Microsoft, Martin was nearby.

Don't believe me? Ask anybody in DevDiv how TFS ended up getting support for Git.

It's all about Martin.

So all the credit should go to Martin Woodward then?

Certainly not.

You see, Martin joined Microsoft in late 2009 as part of their acquisition of Teamprise.

And Teamprise was a division of SourceGear.

I hired Martin Woodward (single-handedly, with no help or input from anybody else) in 2005. Four years later, when Microsoft acquired our Teamprise division (which I made happen, all by myself), Martin became a Microsoft employee.

Bottom line: Are you excited about all of the fantastic news coming out of Build 2015 this week? That stuff would never have happened if it were not for ME.

Eric, how can we ever thank you?

So, now that you know that I am the one behind all the terrific things Microsoft is doing, I'm sure you want to express your appreciation. But that won't be necessary. While I understand the sentiment, in lieu of expensive gifts and extravagant favors, I am asking my adoring fans to do two things:

First, try not to forget all the little people at Microsoft who are directly involved in the implementation of change. People like Martin, or the Scotts, or Satya. Even though these folks are making a relatively minor contribution compared to my own, I would like them to be supported in their efforts. Be positive. Don't second-guess their motives. Lay aside any prejudices you might have from Old Microsoft. Believe.

Second, get involved. Interact with New Microsoft as part of the community. Go to Github and grab the code. Report an issue. Send a pull request.

Embellishments and revisionist history aside...

Enjoy this! It's a GREAT time to be a developer.

 

The #NoBureaucracy Discount

NOOP.NL - Jurgen Appelo - Thu, 04/30/2015 - 17:23
No Bureaucracy

On just a single day, I had to deal with various activities that I can only describe as bureaucracy.

The post The #NoBureaucracy Discount appeared first on NOOP.NL.

Categories: Project Management

Domain is King, No Domain Defined, No Way To Test Your Idea

Herding Cats - Glen Alleman - Thu, 04/30/2015 - 16:09

From Mark Kennaley's book, Value Stream.

When we hear of a new and exciting approach to old and never-ending problems, we need first to ask: in what domain is your new and clever solution applicable?

No domain? Then it's not likely to have been tested outside your personal anecdotal experience.

Here are Mark's scaling factors. He uses Agile as the starting point, but I'm expanding them to any suggestion that has yet to be tested outside of personal anecdotes:

  • Team size - 1 maybe 2 ↔︎ 1000's.
    • If you've tested your idea on a small team, will it work in a larger setting?
    • There are projects where thousands of engineers, developers, testers, and support people work on the same outcome.
    • There are projects where 2 people work on the same project.
    • The business process usually defines the team size, not the method. Writing code for a family owned sprinkler company is not the same as writing code for a world-wide ERP system.
  • Geographic location - co located ↔︎ across the planet.
    • Having a co-located team is wonderful in some instances.
    • In other instances this is physically not possible.
    • In other cases, the customer simply can't be with the development team, no matter how desirable that is.
  • Organizational Structure - single monolithic team ↔︎ Multiple Divisions
    • Small firms with single structures 
    • Larger firms with "business units"
    • Even larger firms with separate companies under the same logo
    • Your method doesn't get to say how the firm is organized, it needs to adapt to that organization.
  • Compliance - None ↔︎ Life, financial, safety critical
    • Governance is a broad term, so is compliance
    • The notion of customer collaboration over contract negotiation is usually spoken by those with low value at risk.
  • Domain Complexity - Straightforward ↔︎ Very Complex
    • Complex and Complexity are relative terms.
    • For a developer who has built web pages for the warehouse, the autonomous rendezvous and dock flight software may appear complex. 
    • To the autonomous rendezvous and dock developers, the GPS ground system of 37 earth stations may appear complex.
    • Pick the domain, then assess the complexity.
  • Technical Complexity - same as above

What's the Point?

When a new approach is being proposed, without a domain, it's a solution looking for a problem to solve.

Categories: Project Management

What Are Your Thoughts On “Job Hopping”

Making the Complex Simple - John Sonmez - Thu, 04/30/2015 - 16:00

In this episode, I talk about job hopping and how my thoughts about it have changed over the years. Full transcript: John: Hey, this is John Sonmez from simpleprogrammer.com. I’ve got a question for you today about job hopping. This is something that I get asked quite a bit so I thought I would answer […]

The post What Are Your Thoughts On “Job Hopping” appeared first on Simple Programmer.

Categories: Programming

Software Development Conferences Forecast April 2015

From the Editor of Methods & Tools - Thu, 04/30/2015 - 13:57
Here is a list of software development related conferences and events on Agile (Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & Tools software development magazine.

  • STAREAST – Software Testing Conference, May 3-8 2015, Orlando, FL, USA
  • GOTO Chicago, May 11-15 2015, Chicago, USA
  • Baltic DevOps, May 12 2015, Tallinn, Estonia
  • GeeCon, May 13-15 2015, Krakow, Poland
  • Romanian Testing Conference, May 14-15 2015, Cluj ...

What is Trolling?

Coding Horror - Jeff Atwood - Thu, 04/30/2015 - 10:11

If you engage in discussion on the Internet long enough, you're bound to encounter it: someone calling someone else a troll.

The common interpretation of Troll is the Grimms' Fairy Tales, Lord of the Rings, "hangs out under a bridge" type of troll.

Thus, a troll is someone who exists to hurt people, cause harm, and break a bunch of stuff because that's something brutish trolls just … do, isn't it?

In that sense, calling someone a Troll is not so different from the pre-Internet tactic of calling someone a monster – implying that they lack all the self-control and self-awareness a normal human being would have.

Pretty harsh.

That might be what the term is evolving to mean, but it's not the original intent.

The original definition of troll was not a beast, but a fisherman:

Troll

verb \ˈtrōl\

  1. to fish with a hook and line that you pull through the water

  2. to search for or try to get (something)

  3. to search through (something)

If you're curious why the fishing metaphor is so apt, check out this interview:

There's so much fishing going on here someone should have probably applied for a permit first.

  • He engages in the interview just enough to get the other person to argue. From there, he fishes for anything that can nudge the argument into some kind of car wreck that everyone can gawk at, generating lots of views and publicity.

  • He isn't interested in learning anything about the movie, or getting any insight, however fleeting, into this celebrity and how they approached acting or directing. Those are perfunctory concerns, quickly discarded on the way to their true goal: generating controversy, the more the better.

I almost feel sorry for Quentin Tarantino, who is so obviously passionate about what he does, because this guy is a classic troll.

  1. He came to generate argument.
  2. He doesn't truly care about the topic.

Some trolls can seem to care about a topic, because they hold extreme views on it, and will hold forth at great length on said topic, in excruciating detail, to anyone who will listen. For days. Weeks. Months. But this is an illusion.

The most striking characteristic of the worst trolls is that their position on a given topic is absolutely written in stone, immutable, and they will defend said position to the death in the face of any criticism, evidence, or reason.

Look. I'm not new to the Internet. I know nobody has ever convinced anybody to change their mind about anything through mere online discussion before. It's unpossible.

But I love discussion. And in any discussion that has a purpose other than gladiatorial opinion bloodsport, the most telling question you can ask of anyone is this:

Why are you here?

Did you join this discussion to learn? To listen? To understand other perspectives? Or are you here to berate us and recite your talking points over and over? Are you more interested in fighting over who is right than actually communicating?

If you really care about a topic, you should want to learn as much as you can about it, to understand its boundaries, and the endless perspectives and details that make up any interesting topic. Heck, I don't even want anyone to change your mind. But you do have to demonstrate to us that you are at least somewhat willing to entertain other people's perspectives, and potentially evolve your position on the topic to a more nuanced, complex one over time.

In other words, are you here in good faith?

People whose actions demonstrate that they are participating in bad faith – whether they are on the "right" side of the debate or not – need to be shown the door.

So now you know how to identify a troll, at least by the classic definition. But how do you handle a troll?

You walk away.

I'm afraid I don't have anything uniquely insightful to offer over that old chestnut, "Don't feed the trolls." Responding to a troll just gives them evidence of their success for others to enjoy, and powerful incentive to try it again to get a rise out of the next sucker and satiate their perverse desire for opinion bloodsport. Someone has to break the chain.

I'm all for giving people the benefit of the doubt. Just because someone has a controversial opinion, or seems kind of argumentative (guilty, by the way), doesn't automatically make them a troll. But their actions over time might.

(I also recognize that in matters of social justice, there is sometimes value in speaking out and speaking up, versus walking away.)

So the next time you encounter someone who can't stop arguing, who seems unable to generate anything other than heat and friction, whose actions amply demonstrate that they are no longer participating in the conversation in good faith … just walk away. Don't take the bait.

Even if sometimes, that troll is you.

Categories: Programming

Deliberate Practice: Building confidence vs practicing

Mark Needham - Thu, 04/30/2015 - 08:48

A few weeks ago I wrote about the learning to cycle dependency graph which described some of the skills required to become proficient at riding a bike.



While we’ve been practicing various skills/sub skills I’ve often found myself saying the following:

if it’s not hard you’re not practicing

me, April 2015


i.e. you should find the skill you’re currently practicing difficult otherwise you’re not stretching yourself and therefore aren’t getting better.

For example, in cycling you could be very comfortable riding with both hands on the handle bars and find using one hand a struggle. However, if you don’t practice that you won’t be able to indicate and turn corners.

This ties in with all my reading about deliberate practice which suggests that the type of exercises you do while deliberately practicing aren’t intended to be fun and are meant to expose your lack of knowledge.

In an ideal world we would spend all our time practicing these challenging skills but in reality there’s some part of us that wants to feel that we’re actually improving by spending some of the time doing things that we’re good at. Doing things you’re not good at is a bit of a slog as well so we might find that we have less motivation for this type of thing.

We therefore need to find a balance between doing challenging exercises and having fun building something or writing code that we already know how to do. I’ve found the best way to do this is to combine the two types of work into mini projects which contain some tasks that we’re already good at and some that require us to struggle.

For me this might involve cleaning up and importing a data set into Neo4j, which I’m comfortable with, and combining that with something else that I want to learn.

For example in the middle of last year I did some meetup analysis which involved creating a Neo4j graph of London’s NoSQL meetups and learning a bit about R, dplyr and linear regression along the way.

In January I built a How I met your mother graph and then spent a few weeks learning various algorithms for extracting topics from free text to give even more ways to explore the dataset.

Most recently I’ve been practicing exercises from Think Bayes and while it’s good practice I think I’d probably spend more time doing it if I linked it into a mini project with something I’m already comfortable with.

I’ll go off and have a think what that should be!

Categories: Programming

Paper: DNACloud: A Tool for Storing Big Data on DNA

"From the dawn of civilization until 2003, humankind generated five exabytes (1 exabytes = 1 billion gigabytes) of data. Now we produce five exabytes every two days and the pace is accelerating."

-- Eric Schmidt, Executive Chairman, Google, August 4, 2010. 

 

Where are we going to store the deluge of data everyone is warning us about? How about in a DNACloud that can store 1 petabyte of information per gram of DNA?

Writing is a little slow. You have to convert your data file to a DNA description that is sent to a biotech company that will send you back a vial of synthetic DNA. Where do you store it? Your refrigerator.

Reading is a little slow too. The data can apparently be read with great accuracy, but to read it you have to sequence the DNA first, and that might take a while.
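
As a toy illustration of the write-side conversion (data file to DNA description), here is a sketch in R of the naive two-bits-per-base mapping from bytes to an A/C/G/T string. This is not the actual encoding used by DNACloud or Goldman et al., which adds redundancy and avoids runs of repeated bases; it only shows the flavor of the transformation.

# Toy sketch: pack each byte into four nucleotides (2 bits per base).
bases <- c("A", "C", "G", "T")
encode_bytes <- function(bytes) {
  paste(vapply(bytes, function(b) {
    idx <- c(bitwAnd(bitwShiftR(b, 6), 3),  # most significant 2 bits first
             bitwAnd(bitwShiftR(b, 4), 3),
             bitwAnd(bitwShiftR(b, 2), 3),
             bitwAnd(b, 3)) + 1
    paste(bases[idx], collapse = "")
  }, character(1)), collapse = "")
}
encode_bytes(as.integer(charToRaw("hi")))  # returns "CGGACGGC"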

The how of it is explained in DNACloud: A Tool for Storing Big Data on DNA (poster). Abstract:

The term Big Data is usually used to describe the huge amount of data that is generated by humans from digital media such as cameras, internet, phones, sensors etc. By building advanced analytics on top of big data, one can predict many things about the user such as behavior, interest etc. However before one can use the data, one has to address many issues for big data storage. Two main issues are the need for large storage devices and the cost associated with them. Synthetic DNA storage seems to be an appropriate solution to address these issues of big data. Recently in 2013, Goldman and his colleagues from the European Bioinformatics Institute demonstrated the use of DNA as a storage medium with a capacity of storing 1 petabyte of information in one gram of DNA and retrieved the data successfully with a low error rate [1]. This significant step shows promise for synthetic DNA storage as a useful technology for future data storage. Motivated by this, we have developed a software called DNACloud which makes it easy to store the data on DNA. In this work, we present a detailed description of the software.

Categories: Architecture

Two Books in the Spectrum of Software Development

Herding Cats - Glen Alleman - Wed, 04/29/2015 - 04:59

I had the pleasure of meeting Mark Kennaley a week ago in Boulder when he attended the Denver PMI conference. I have talked with Mark on several occasions on social media about the SDLC 3.0 book and its concepts. And now he has written Value Stream: Generally Accepted Practice in Enterprise Software Development, a continuation of the first book, focused not just on the development life cycle but on the entire enterprise process.

I say all this for a simple reason. Mark's book is unique in that from the first page it resonated with the ideas I hold dear. It is not only well written, it contains powerful ideas that need to be read by anyone in the enterprise IT business. Mark signed the title page with a phrase that reflects my feelings on many modern topics in project management and software development.

This pretty much sums up my world view for those suggesting solutions to complex problems can be had with simple, and many times simple-minded, ideas. Mostly ideas borrowed from quotes about completely different domains, or recycled words like Systems and Systems Engineering turned into psycho-babble terms that cannot be tested in practice.

I'm going to write a review of Mark's book chapter-by-chapter. I'm on chapter 3. So far it's a breathtaking read. Mandatory for anyone claiming to work in the Enterprise IT world and be accountable for the spend of other people's money.

I also bought Ron Jeffries' book to read on my two-week assignment to a client site.

I met Ron at an Agile Development Conference long ago, where I was speaking about agile in government contracting. As well, Ron spoke at a local ACM meeting that included many software engineering staff from US West, now CenturyLink.

It was interesting to observe the interactions between eXtreme Programming and RBOC software engineers.

I've just started this book, but thought it would be a good read just to see how far the eXtreme Programming paradigm has come. I expect I'll get something out of both books. I'll do the same chapter-by-chapter review here as I'll do for Mark's book. 

Related articles: Root Cause Analysis; The Reason We Plan, Schedule, Measure, and Correct; The Flaw of Empirical Data Used to Make Decisions About the Future; Who Builds a House without Drawings?
Categories: Project Management

Budgeting, Estimation, Planning, #NoEstimates and the Agile Planning Onion


There are many levels of estimation, including budgeting, high-level estimation and task planning (detailed estimation). We can link a more classic view of estimation to the Agile planning onion popularized by Mike Cohn. In the Agile planning onion, strategic planning is on the outside of the onion and the planning that occurs in the daily sprint meetings is at the core. Each layer closer to the core relates more to the day-to-day activity of a team. The #NoEstimates movement eschews developing story- or task-level estimates, and sometimes estimates at higher levels as well. As you get closer to the core of the planning onion, the case for #NoEstimates becomes more compelling.


Planning Onion

 

Budgeting is a strategic form of estimation that most corporate and governmental entities perform. Budgeting relates to the strategy and portfolio layers of the planning onion. #NoEstimates techniques don't answer the central questions most organizations need to answer at this level, which include:

  1. How much money should I allocate for software development, enhancements and maintenance?
  2. Which projects or products should we fund?
  3. Which projects will return the greatest amount of value?

Budgets are often educated guesses that provide some approximation of the size and cost of the work on the overall backlog. Budgeting provides the basis to allocate resources in environments where demand outstrips capacity. Other than in the most extreme form of #NoEstimates, which eschews all estimates, budgeting is almost always performed.

High-level estimation, performed in the product and release layers of the planning onion, is generally used to forecast when functionality will be available. Release plans and product road maps are types of forecasts that are used to convey when products and functions will be available. These types of estimates can easily be built if teams have a track record of delivering value on a regular basis. #NoEstimates can be applied at this level of planning and estimation by substituting the predictable completion of work items for developing effort estimates.  #NoEstimates at this level of planning can be used only IF  conditions that facilitate predictable delivery flow are met. Conditions include:

  1. Stable teams
  2. Adoption of an Agile mindset (at both the team and organizational levels)
  3. A backlog of well-groomed stories

Organizations that meet these criteria can answer the classic project/release questions of when, what and how much based on the predictable delivery rates of #NoEstimates teams (assuming some level of maturity – newly formed teams are never predictable), as sketched below. High-level estimation is closer to the day-to-day operations of the team and connects budgeting to the lowest level of planning in the planning onion.
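
As a sketch of what forecasting from predictable flow can look like, a team with a stable throughput history can simulate the release question “how many sprints until these stories are done?” without task-level effort estimates. The throughput history and backlog size below are invented for illustration.

# Hedged sketch (R): Monte Carlo forecast from historical throughput.
throughput <- c(6, 4, 7, 5, 6, 3, 8)  # stories completed in past sprints (invented)
backlog <- 60                         # well-groomed stories remaining (invented)
sprints_needed <- replicate(10000, {
  done <- 0
  n <- 0
  while (done < backlog) {
    done <- done + sample(throughput, 1)  # draw a sprint from history
    n <- n + 1
  }
  n
})
quantile(sprints_needed, c(0.50, 0.85))  # median and more conservative forecast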

In the standard corporate environment, task-level estimation (typically performed at the iteration and daily planning layers of the onion) is an artifact of project management controls or partial adoptions of Agile concepts. Estimating tasks is often mandated in organizations that allocate individual teams to multiple projects at the same time. The effort estimates are used to enable the organization to allocate slices of time to projects. Stable Agile teams that are allowed to focus on one project at a time and use #NoEstimates techniques have no reason to estimate effort at a task level, due to their ability to consistently say what they will do and then deliver on their commitments. Ceasing task-level estimation and planning is the core change all proponents of #NoEstimates are suggesting.

A special estimation case that needs to be considered is that of commercial or contractual work. These arrangements often represent lower-trust relationships or projects that are perceived to be high risk. The legal contracts agreed upon by both parties often stipulate the answers to the what, when and how much questions before the project starts. Due to the risk the contract creates, both parties must do their best to predict/estimate the future before signing the agreement. Raja Bavani, Senior Director at Cognizant Technology Solutions, suggested in a recent conversation that he thought that “#NoEstimates was a non-starter in a contractual environment due to the financial risk both parties accept when signing a contract.”

Estimation is a form of planning, and planning is considered an important competency in most business environments. Planning activities abound, whether planning the corporate picnic or planning the acquisition and implementation of a new customer relationship management system. Most planning activities center on answering a few very basic questions. When will “it” be done? How much will “it” cost? What is “it” that I will actually get? As an organization or team progresses through the planning onion, the need for effort and cost estimation lessens in most cases. #NoEstimates does not remove the need for all types of estimates. Most organizations will always need to estimate in order to budget. Organizations that have stable teams, have adopted the Agile mindset and have a well-groomed backlog will be able to use predictable flow to forecast, rather than effort and cost estimation. At a sprint or day-to-day level, Agile teams that predictably deliver value can embrace the idea of #NoEstimates while answering the basic questions of what, when and how much based on performance.


Categories: Process Management

Sponsored Post: OpenDNS, MongoDB, Internap, Aerospike, SignalFx, InMemory.Net, Couchbase, VividCortex, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • The Cloud Platform team at OpenDNS is building a PaaS for our engineering teams to build and deliver their applications. This is a well rounded team covering software, systems, and network engineering and expect your code to cut across all layers, from the network to the application. Learn More

  • At Scalyr, we're analyzing multi-gigabyte server logs in a fraction of a second. That requires serious innovation in every part of the technology stack, from frontend to backend. Help us push the envelope on low-latency browser applications, high-speed data processing, and reliable distributed systems. Help extract meaningful data from live servers and present it to users in meaningful ways. At Scalyr, you’ll learn new things, and invent a few of your own. Learn more and apply.

  • UI Engineer - AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data - AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • How to Get a Game-Changing Performance Advantage with Intel SSDs and Aerospike. Presenter: Frank Ober, Data Center Solution Architect at Intel Corporation. Wednesday, May 13, 2015 @ 10:00AM PST, 1:00PM PST. Learn how to maximize the price/performance of your Intel Solid-State Drives (SSDs) with Aerospike. Frank Ober of Intel’s Solutions Group will review how he achieved 1+ million transactions per second on a single dual socket Xeon Server with SSDs using the open source tools of Aerospike for benchmarking. Register Now.

  • MongoDB World brings together over 2,000 developers, sysadmins, and DBAs in New York City on June 1-2 to get inspired, share ideas and get the latest insights on using MongoDB. Organizations like Salesforce, Bosch, the Knot, Chico’s, and more are taking advantage of MongoDB for a variety of ground-breaking use cases. Find out more at http://mongodbworld.com/ but hurry! Super Early Bird pricing ends on April 3.
Cool Products and Services
  • SQL for Big Data: Price-performance Advantages of Bare Metal. When building your big data infrastructure, price-performance is a critical factor to evaluate. Data-intensive workloads with the capacity to rapidly scale to hundreds of servers can escalate costs beyond your expectations. The inevitable growth of the Internet of Things (IoT) and fast big data will only lead to larger datasets, and a high-performance infrastructure and database platform will be essential to extracting business value while keeping costs under control. Read more.

  • SignalFx: just launched an advanced monitoring platform for modern applications that's already processing 10s of billions of data points per day. SignalFx lets you create custom analytics pipelines on metrics data collected from thousands or more sources to create meaningful aggregations--such as percentiles, moving averages and growth rates--within seconds of receiving data. Start a free 30-day trial!

  • InMemory.Net provides a Dot Net native in memory database for analysing large amounts of data. It runs natively on .Net, and provides a native .Net, COM & ODBC apis for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • Benchmark: MongoDB 3.0 (w/ WiredTiger) vs. Couchbase 3.0.2. Even after the competition's latest update, are they more tired than wired? Get the Report.

  • VividCortex goes beyond monitoring and measures the system's work on your MySQL and PostgreSQL servers, providing unparalleled insight and query-level analysis. This unique approach ultimately enables your team to work more effectively, ship more often, and delight more customers.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

The Customer Bought a Capability

Herding Cats - Glen Alleman - Tue, 04/28/2015 - 15:33

Customers buy capabilities to accomplish a business goal or successfully complete a mission. Deliverables are the foundation of the ability to provide this capability. Here's how to manage a project focused on Deliverables. 

Deliverables Based Planning from Glen Alleman

These capabilities and the deliverables that enable them do need to show up at the time they are needed, for the cost necessary to support the business case. They're not the Minimal Viable Capabilities; they're the Needed Capabilities. Anything less will not meet the needs of the business.
Categories: Project Management

Should Team Members Sign Up for Tasks During Sprint Planning?

Mike Cohn's Blog - Tue, 04/28/2015 - 15:00

During sprint planning, a team selects a set of product backlog items they will work on during the coming sprint. As part of doing this, most teams will also identify a list of the tasks to be performed to complete those product backlog items.

Many teams will also provide rough estimates of the effort involved in each task. Collectively, these artifacts are the sprint backlog and could be presented along the following lines:
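
For illustration only, with hypothetical product backlog items, tasks and estimates, such a sprint backlog might look like this:

  Product Backlog Item           Task                            Estimate (hours)
  Charge customer's credit card  Write unit tests                4
  Charge customer's credit card  Code the payment gateway call   6
  Charge customer's credit card  Handle declined cards           5
  Support coupon codes           Update the order form           3
  Support coupon codes           Validate codes on the server    6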

One issue for teams to address is whether individuals should sign up for tasks during sprint planning.

If a team walks out of sprint planning with a name next to every task, individual accountability will definitely be increased. I will feel more responsibility to finish the tasks with my name or initials next to them. And you will feel the same for those with yours. But, this will come at the expense of team accountability.

My recommendation is that a team should leave sprint planning without having put names on tasks. Following a real-time sign-up strategy will allow more flexibility during the sprint.

Episode 225: Brendan Gregg on Systems Performance

Senior performance architect and author of *Systems Performance* Brendan Gregg talks with Robert Blumen about systems performance: how the hardware and OS layers affect application behavior. The discussion covers the scope of systems performance, systems performance in the software life cycle, the role of performance analysis in architecture, methodologies for solving performance problems, dynamic tracing […]
Categories: Programming

R: dplyr – Error in (list: invalid subscript type ‘double’

Mark Needham - Mon, 04/27/2015 - 23:34

In my continued playing around with R I wanted to find the minimum value for a specified percentile given a data frame representing a cumulative distribution function (CDF).

e.g. imagine we have the following CDF represented in a data frame:

library(dplyr)
df = data.frame(score = c(5,7,8,10,12,20), percentile = c(0.05,0.1,0.15,0.20,0.25,0.5))

and we want to find the minimum value for the 0.05 percentile. We can use the filter function to do so:

> (df %>% filter(percentile > 0.05) %>% slice(1))$score
[1] 7

Things become more tricky if we want to return multiple percentiles in one go.

My first thought was to create a data frame with one row for each target percentile and then pull in the appropriate row from our original data frame:

targetPercentiles = c(0.05, 0.2)
percentilesDf = data.frame(targetPercentile = targetPercentiles)
> percentilesDf %>% 
    group_by(targetPercentile) %>%
    mutate(x = (df %>% filter(percentile > targetPercentile) %>% slice(1))$score)
 
Error in (list(score = c(5, 7, 8, 10, 12, 20), percentile = c(0.05, 0.1,  : 
  invalid subscript type 'double'

Unfortunately this didn’t quite work as I expected – Antonios pointed out that this is probably because we’re mixing up two pipelines and dplyr can’t figure out what we want to do.

Instead he suggested the following variant, which uses the do function:

df = data.frame(score = c(5,7,8,10,12,20), percentile = c(0.05,0.1,0.15,0.20,0.25,0.5))
targetPercentiles = c(0.05, 0.2)
 
> data.frame(targetPercentile = targetPercentiles) %>%
    group_by(targetPercentile) %>%
    do(df) %>% 
    filter(percentile > targetPercentile) %>% 
    slice(1) %>%
    select(targetPercentile, score)
Source: local data frame [2 x 2]
Groups: targetPercentile
 
  targetPercentile score
1             0.05     7
2             0.20    12

We can then wrap this up in a function:

percentiles = function(df, targetPercentiles) {
  # make sure the percentiles are in order
  df = df %>% arrange(percentile)
 
  data.frame(targetPercentile = targetPercentiles) %>%
    group_by(targetPercentile) %>%
    do(df) %>% 
    filter(percentile > targetPercentile) %>% 
    slice(1) %>%
    select(targetPercentile, score)
}

which we call like this:

df = data.frame(score = c(5,7,8,10,12,20), percentile = c(0.05,0.1,0.15,0.20,0.25,0.5))
> percentiles(df, c(0.08, 0.10, 0.50, 0.80))
Source: local data frame [2 x 2]
Groups: targetPercentile
 
  targetPercentile score
1             0.08     7
2             0.10     8

Note that we don’t actually get any rows back for 0.50 or 0.80 since we don’t have any entries greater than those percentiles. With a proper CDF we would though so the function does its job.

Categories: Programming

What Mongo-ish API would mobile developers want?

Eric.Weblog() - Eric Sink - Mon, 04/27/2015 - 19:00

A couple weeks ago I blogged about mobile sync for MongoDB.

Updated Status of Elmo

Embeddable Lite Mongo continues to move forward nicely:

  • Progress on indexes:

    • Compound and multikey indexes are supported.
    • Sparse indexes are not done yet.
    • Index key encoding is different from the KeyString stuff that Mongo itself does. For encoding numerics, I did an ugly-but-workable F# port of the encoding used by SQLite4.
    • Hint is supported, but is poorly tested so far.
    • Explain is supported, partially, and only for version 3 of the wire protocol. More work to do there.
    • The query planner (which has delusions of grandeur for even referring to itself by that term) isn't very smart.
    • Indexes cannot yet be used for sorting.
    • Indexes are currently never used to cover a query.
    • When grabbing index bounds from the query, $elemMatch is ignored. Because of this, and because of the way Mongo multikey indexes work, most index scans are bounded at only one end.
    • The $min and $max query modifiers are supported.
    • The query planner doesn't know how to deal with $or at all.
  • Progress on full-text search:

    • This feature is working for some very basic cases.
    • Phrase search is not implemented yet.
    • Language is currently ignored.
    • The matcher step for $text is not implemented yet at all. Everything within the index bounds will get returned.
    • The tokenizer is nothing more than string.split. No stemming. No stop words.
    • Negations are not implemented yet.
    • Weights are stored in the index entries, but textScore is not calculated yet.

I also refactored to get better separation between the CRUD logic and the storage of bson blobs and indexes (making it easier to plug in different storage layers).

Questions about client-side APIs

So, let's assume you are building a mobile app which communicates with your Mongo server in the cloud using a "replicate and sync" approach. In other words, your app is not doing its CRUD operations by making networking/REST calls back to the server. Instead, your app is working directly with a partial clone of the Mongo database that is right there on the mobile device. (And periodically, that partial clone is magically synchronized with the main database on the server.)

What should the API for that "embedded lite mongo" look like?

Obviously, for each development environment, the form of the API should be designed to feel natural or native in that environment. This is the approach taken by Mongo's own client drivers. In fact, as far as I can tell, these drivers don't even share much (or any?) code. For example, the drivers for C# and Java and Ruby are all different, and (unless I'm mistaken) none of them are mere wrappers around something lower level like the C driver. Each one is built and maintained to provide the most pleasant experience to developers in a specific ecosystem.

My knee-jerk reaction here is to say that mobile developers might want the exact same API as presented by their nearest driver. For example, if I am building a mobile app in C# (using the Xamarin tools), there is a good chance my previous Mongo experience is also in C#, so I am familiar with the C# driver, so that's the API I want.

Intuitive as this sounds, it may not be true. Continuing with the C# example, that driver is quite large. Is its size appropriate for use on a mobile device? Is it even compatible with iOS, which requires AOT compilation? (FWIW, I tried compiling this driver as a PCL (Portable Class Library), and it didn't Just Work.)

For Android, the same kinds of questions would need to be asked about the Mongo Java driver.

And then there are Objective-C and Swift (the primary developer platform for iOS), for which there is no official Mongo driver. But there are a couple of them listed on the Community Supported Drivers page: http://docs.mongodb.org/ecosystem/drivers/community-supported-drivers/.

And we need to consider Phonegap/Cordova as well. Is the Node.js driver a starting point?

And in all of these cases, if we assume that the mobile API should be the same as the driver's API, how should that be achieved? Fork the driver code and rip out all the networking and replace it with calls to the embedded library?

Or should each mobile platform get a newly-designed API which is specifically for mobile use cases?

Believe it or not, some days I wonder: Suppose I got Elmo running as a server on an Android device, listening on localhost port 27017. Could an Android app talk to it with the Mongo Java driver unchanged? Even if this would work, it would be more like a proof-of-concept than a production solution. Still, when looking for solutions to a problem, the mind goes places...

So anyway, I've got more questions than answers here, and I would welcome thoughts or opinions.

  • Feel free to post an issue on GitHub: https://github.com/zumero/Elmo/issues

  • Or email me: eric@zumero.com

  • Or Tweet: @eric_sink

  • Or find me at MongoDB World in NYC at the beginning of June.

 

How can we Build Better Complex Systems? Containers, Microservices, and Continuous Delivery.

We must be able to create better complex software systems. That's the message from Mary Poppendieck in a wonderful, far-ranging talk she gave at the Craft Conference: New New Software Development Game: Containers, Micro Services.

The driving insight is that complexity grows nonlinearly with size. The type of system doesn't really matter, but we know software size will continue to grow, so software complexity will continue to grow even faster.

What can we do about it? The running themes are lowering friction and limiting risk:

  • Lower friction. This allows change to happen faster. Methods: dump the centralizing database; adopt microservices; use containers; better organize teams.

  • Limit risk. Risk is inherent in complex systems. Methods: PACT testing; continuous delivery.

Some key points:

  • When does software really grow? When smart people can do their own thing without worrying about their impact on others. This argues for building federated systems that ensure isolation, which argues for using microservices and containers.

  • Microservices usually grow successfully from monoliths. In creating a monolith developers learn how to properly partition a system.

  • Continuous delivery both lowers friction and lowers risk. In a complex system if you want stability, if you want security, if you want reliability, if you want safety then you must have lots of little deployments. 

  • Every member of a team is aware of everything. That's what makes a winning team. Good situational awareness.

The highlight of the talk for me was the section on the amazing design of the Swedish Gripen Fighter Jet. Talks on microservices tend to be highly abstract. The fun of software is in the building. Talk about parts can be so nebulous. With the Gripen the federated design of the jet as a System of Systems becomes glaringly concrete and real. If you can replace your guns, radar system, and virtually any other component without impacting the rest of the system, that’s something! Mary really brings this part of the talk home. Don’t miss it.

It’s a very rich and nuanced talk; there’s a lot of history and context given, so I can’t capture all the details. Watching the video is well worth the effort. Having said that, here’s my gloss on the talk...

Hardware Scales by Abstraction and Miniaturization
Categories: Architecture