
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

The Calculus of Writing Software for Money, Part II

Herding Cats - Glen Alleman - Sun, 04/06/2014 - 16:43

The dictionary definition of economics is a social science concerned with the description and analysis of production, distribution, and consumption of goods and services. [1]

Another definition of economics, closer to software development, is the study of how people make decisions in resource-limited situations. This definition matches the classical use of the term and is the basis of software economics, since software product development always takes place in the presence of limited resources: time, money, capabilities, even knowledge. And since software development is always an exchange of those resources for the production of value, looking at development from an economic point of view is a good start for any discussion about improving the process.

Two other definitions are needed before continuing. Macroeconomics is the study of how people make decisions in a resource-limited situation on a national or global scale. Microeconomics is the study of how people make decisions in a resource-limited situation on an individual or organization scale. 

For software development, microeconomics deals more with the type of decision making needed for successful projects. And since much of the discussion these days is about making decisions on projects, let's see how the microeconomics paradigm may improve the communication.

There have been suggestions that the book above is old school and no longer applicable to the modern world of software development - e.g. Agile. Since the book is actually about engineering economics, not about software development methods, let's see what the book actually says for those who have not read it, heard Dr. Boehm speak, or, in my case, worked for the same firm where Dr. Boehm led our process improvement management effort.

This book was the working text for an Engineering Economics course when I attended USC as a Master's student while working at TRW (Boehm's home). The book is still in print and available used for a low price. So those wishing to comment on the book without having first read all 765 pages can now do so at low cost.

The preface of the book starts with the usual qualifiers, but contains three important objectives:

  1. To make a book easy for students to learn from
  2. To make a book easy for professors to teach from
  3. To provide help for working professionals in the field

The major objective of the book is to provide a basis for a software engineering course, intended to be taken at the college senior or first-year graduate level.

So let's look at the chapters to get a feel for the concepts of software engineering economics. My comments on the chapters are in italics.

  1. Chapter 1 - Case Study of the Scientific American subscription processing system.
  2. Chapter 2 - Case Study of an Urban School Attendance system
  3. Chapter 3 - Goals of Software Engineering
  4. Chapter 4 - The Software Lifecycle: Phases and Activities. This is where the book does not represent current practice. In 1980, Agile was not known. But TRW applied an iterative and incremental development process to Cobra Dane and TRDSS, two programs I participated in as a software engineer writing signal processing code in FORTRAN 77.
  5. Chapter 5 - The Basic COCOMO Model. This model is in use today, along with SEER, QSM, Price-S, and many other software cost and schedule modeling tools. COCOMO was the first. But all are based on some sort of reference class estimating process.
  6. Chapters 6, 7, 8, and 9 - The COCOMO Model: Development Modes, Activity Distribution, Product, and Component Level
  7. Chapter 10 - Performance Models and Cost-Effectiveness Models
  8. Chapter 11 - Production Functions: Economies of Scale
  9. Chapter 12 - Choosing Among Alternatives: Decision Criteria. This is where microeconomics starts to enter the current discussion. If we intend to make decisions about how we are going to spend our customer's money, we need to consider (from the chapter):
    • Minimum available budget
    • Minimum performance requirements. Performance in this context is anything technical. We have cost, schedule, and performance: the three variables of all projects.
    • Maximum effectiveness / Cost Ratio
    • Maximum Effectiveness-Cost Difference
    • Composite options
    • The basis of all decision making in the software development paradigm starts and ends with cost. This is true, simply because writing software for money, costs money.
    • Value to the customer and delivery dates are of course critical decision making attributes as well.
    • Knowing how much money, how that money is being utilized (efficacy of funds), how that money is time phased (absorption of funds), and the "return on investment" of those funds starts and stops with estimating how much it will cost to deliver the needed capabilities to produce the value for the customer - because we can't know for sure in the beginning and can't wait till the end. [2]
  10. Chapter 13 - Net Value and Marginal Analysis. Another chapter on the efficacy of money.
  11. Chapter 14 - Present versus Future Expenditure and Income. Another microeconomics chapter on forecasting and estimating budget, performance, and value
  12. Chapter 15 - Figures of Merit - how to decide what to do, now that you have some notion of cost, schedule, performance, and risk
  13. Chapter 16 - Goals as Constraints. There is some good discussion in the #NoEstimates world about constraints. I wonder if that author is familiar with this book?
  14. Chapter 17 - Systems Analysis and Constrained Optimization. The Master's degree program TRW sent us to was about Systems Engineering and Systems Management. This is one of the chapters we paid close attention to.
  15. Chapter 18 - Coping with Unreconcilable and Unquantifiable Goals. Lots of decisions deal with these two conditions. "Deciding" means performing an "Analysis of Alternatives" from a microeconomic point of view. This of course means knowing something about the three variables of all projects - one of which is cost.
  16. Chapter 19 - Coping with Uncertainties: Risk Analysis
  17. Chapter 20 - Statistical Decision Theory: The Value of Information. Much of the discussion around current "estimating" processes fails to deal with the statistical processes involved in those decisions. Small samples (5 cycles), self-selected samples, uncalibrated baselines, and use of terms (Bayesian) without working examples of their use - all seem to have entered the conversation. This chapter can provide insight into managing software projects using statistical processes.
  18. Chapter 21 - Seven Basic Steps in Software Cost Estimation. When it is mentioned that "software can't be estimated," this chapter had better have been read and found flawed, with working, reviewed examples of it not working. Conjecture is no substitute for facts.
  19. Chapter 22 - Alternative Software Cost Estimation Methods. When we hear "we're exploring new ways to estimate," one wonders if this chapter has been read:
    • Algorithmic models
    • Expert judgement
    • Estimation by analogy
    • Parkinsonian estimating
    • Price-to-Win estimating
    • Top-down estimating
    • Bottom-up estimating
    • Summary comparison of methods
    • So next time we hear "we're exploring alternatives," ask straight out: "What are those alternatives? Where have you had success with them compared to other methods? In what domain have you had this success? And can you share the details of that success so others can ascertain whether they might be applicable in their domain?"
    • When no information is forthcoming to these questions, think hard about the veracity of the speaker. Maybe they don't actually have a solution to this critically important problem that is still with us in 2014.
  20. Chapters 23 - 29 are about the details of the COCOMO model.
  21. Chapters 30 - 33 are about software cost estimation and life-cycle management using the COCOMO model.
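
The Basic COCOMO model of chapters 5 through 9 is compact enough to sketch in a few lines. Here is a minimal Python sketch using the published Basic COCOMO coefficients for the three development modes; the 32 KLOC input is an illustrative assumption, and the basic model only claims order-of-magnitude accuracy:

```python
# Basic COCOMO (Boehm 1981): effort and schedule as functions of
# delivered source size in KLOC. Coefficients are the published
# Basic COCOMO values for the three development modes.
COEFFICIENTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    """Return (effort in person-months, schedule in calendar months)."""
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b       # person-months; b > 1 models diseconomy of scale
    schedule = c * effort ** d   # calendar months
    return effort, schedule

effort, months = basic_cocomo(32, "organic")
print(f"32 KLOC organic: {effort:.1f} person-months over {months:.1f} months")
```

Note the exponent b greater than 1 in every mode: doubling the size more than doubles the effort, which is the production-function point of chapter 11.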

So What's the Point of All This?

When we hear that estimating can't be done for software, we actually know better. It is being done in every software domain. Tools, processes, books, papers, conferences, vendors, and professional organizations will show you how.

When we hear this, we now know better.

[1] "Software Engineering Economics," Barry W. Boehm, IEEE Transactions on Software Engineering, Volume SE-10, Number 1, January 1984, pages 4-21.

[2] This is the crux of the post, the book, and the discussion around the conjecture that we can make decisions about how to spend other people's money without estimating that spend.

Related articles: Why Johnny Can't Estimate or the Dystopia of Poor Estimating Practices
Categories: Project Management


Team members should swarm to a problem like tourists to a photo opp


Swarming is common behavior in an Agile team. Swarming occurs when a group of team members works together on a single story or impediment to break a logjam. It helps to overcome problems and bottlenecks so the team can deliver functionality quickly and often. Swarming provides a team with a mechanism to focus on impediments and remove them immediately. Studies support that the act of swarming, as an act of kindness, generates further acts of kindness (within limits).

In an article titled The Science of ‘Paying It Forward’ in the New York Times on Sunday, March 16, 2014, Melena Tsvetkova and Michael Macy describe an article they published in the journal PLoS One. Their research found that receiving and observing generosity can increase the likelihood of being generous; however, in some cases, if the level of generosity is too high, it can have the opposite effect. When the level of observed generosity is perceived to be too high, the observer is less likely to pay it forward and becomes a bystander rather than helping someone else. The findings suggest that the act of helping a fellow team member out makes it more likely that the recipient of the behavior will also help his/her fellow team members out when they have a problem.

The findings carry a cautionary note for teams, Scrum Masters, and coaches. If one person is always there to save the day, it is possible to turn others on the team into bystanders. It is also possible for the other team members to come to expect help before they have exhausted all options. The ability to ask for help and expect the team to swarm to the problem too easily is a form of moral hazard that can cause other team members to feel that they are being taken advantage of. If this perception takes root, it will reduce the possibility that the team will swarm to problems.

Scrum Masters or coaches need to be aware that swarming, like any act of kindness, is apt to ripple through the team. Team members helping each other out in order to overcome impediments is an important activity in effective Agile teams. Coaches need to ensure that swarming is not limited to an individual (or small group), but is balanced across the team to reduce the risk that team members won’t respond when all hands are needed on deck.

Categories: Process Management

Android Wear Developer Preview Now Available

Android Developers Blog - Sat, 04/05/2014 - 22:55

By Austin Robison, Android Wear team

Android Wear extends the Android platform to wearables. These small, powerful devices give users useful information just when they need it. Watches powered by Android Wear respond to spoken questions and commands to provide info and get stuff done. These new devices can help users reach their fitness goals and be their key to a multiscreen world.

We designed Android Wear to bring a common user experience and a consistent developer platform to this new generation of devices. We can’t wait to see what you will build.

Getting started

Your app’s notifications will already appear on Android wearables, and starting today you can sign up for the Android Wear Developer Preview. You can use the emulator provided to preview how your notifications will appear on both square and round Android wearables. The Developer Preview also includes new Android Wear APIs which will let you customize and extend your notifications to accept voice replies, feature additional pages, and stack with similar notifications. Head on over to sign up and learn more.

For a brief introduction to the developer features of Android Wear, check out these DevBytes videos. They include demos and a discussion about the code snippets driving them.

What’s next?

We’re just getting started with the Android Wear Developer Preview. In the coming months we’ll be launching new APIs and features for Android Wear devices to create even more unique experiences for the wrist.

Join the Android Wear Developers community on Google+ to discuss the Preview and ask questions.

We’re excited to see what you build!

Join the discussion on +Android Developers
Categories: Programming

The Growth Mindset: A Key to Resilience, Motivation, and Achievement

Your mindset holds the key to realizing your potential.

Your mindset is your way of thinking, and your way of thinking can limit or empower you, in any number of ways.

In fact, according to Carol S. Dweck, author of Mindset: The New Psychology of Success, mindset is the one big idea that helps explain the following:

  • Why brains and talent don’t bring success
  • How they can stand in the way of it
  • Why praising brains and talent doesn’t foster self-esteem and accomplishment, but jeopardizes them
  • How teaching a simple idea about the brain raises grades and productivity
  • What all great CEOs, parents, teachers, athletes know

When Dweck was a young researcher, she was obsessed with understanding how people cope with failures, and she decided to study it by watching how students grapple with hard problems.

You’re Learning, Not Failing

One of Dweck’s key insights was that a certain kind of mindset could turn a failure into a gift.

Via Mindset: The New Psychology of Success:

“What did they know?  They knew that human qualities, such as intellectual skills could be cultivated through effort.  And that’s what they were doing – getting smarter.  Not only weren’t they discouraged by failure, they didn’t even think they were failing.  They thought they were learning.”

You Can Change Your IQ

Believe it or not, a big believer in the idea that you can use education and practice to fundamentally change your intelligence is Alfred Binet, the inventor of the IQ test.

Via Mindset: The New Psychology of Success:

“Binet, a Frenchman working in Paris in the early twentieth century, designed this test to identify children who were not profiting from the Paris public schools, so that new educational programs could be designed to get them back on track. Without denying individual differences in children’s intellects, he believed that education and practice could bring about fundamental changes in intelligence.”

Methods Make the Difference

Here is a quote from one of Binet’s major books, Modern Ideas About Children:

"A few modern philosophers ... assert that an individual's intelligence is a fixed quantity, a quantity which cannot be increased. We must protest and react against this brutal pessimism ... With practice, training, and above all, method, we manage to increase our attention, our memory, our judgment and literally to become more intelligent than we were before."

Growth Mindset vs. Fixed Mindset

The difference that makes the difference in success and achievement is your mindset.  Specifically, a Growth Mindset is the key to unleashing and realizing your potential.

To fully appreciate what a Growth Mindset is, let’s contrast it by first understanding what a Fixed Mindset is.

According to Carol Dweck, a Fixed Mindset means that you fundamentally believe that intelligence and talent are fixed traits:

“In a fixed mindset, people believe their basic qualities, like their intelligence or talent, are simply fixed traits. They spend their time documenting their intelligence or talent instead of developing them. They also believe that talent alone creates success—without effort. They’re wrong.”

In contrast, according to Dweck, a Growth Mindset means that you fundamentally believe that you can develop your brains and talent:

“In a growth mindset, people believe that their most basic abilities can be developed through dedication and hard work—brains and talent are just the starting point. This view creates a love of learning and a resilience that is essential for great accomplishment. Virtually all great people have had these qualities.”

If you want to improve your motivation, set yourself up for success, and achieve more in life, then adopt and build a growth mindset.

Here are a few articles to help you get started:

3 Mindsets that Support You

5 Sources of Beliefs for Personal Excellence

6 Sources of Beliefs and Values

Growth Mindset Over Fixed Mindset

Training Mindset and Trusting Mindset

Categories: Architecture, Programming

The Calculus of Writing Software for Money

Herding Cats - Glen Alleman - Sat, 04/05/2014 - 15:43

There is a nearly unlimited collection of views about how to write software for money.

From the anarchy of gaming coders sitting in the basement of the incubator on 28th and Pearl Street here in Boulder to the full verification and validation of ballistic missile defense system software, 7 miles up the road.

When I hear about how software should be written, how teams should be organized, how budgeting, planning, testing, deployment, maintenance, transition to business, transition to production, sustainment, and the myriad of other activities around software development should be done - the first question is always: what's the domain you're speaking about?

Then: have you tested these ideas outside your personal experience? And finally, have you tested these ideas in another domain to see if they carry over? If you're just exploring ideas, no problem. But that limits the credibility of the idea to being just an idea with no actionable outcome, other than a conversation. Those paying for the software you are writing for money usually don't like paying for you to explore using their cash - unless that effort is actually in the plan.

There are of course fundamental - immutable actually - principles for any project based endeavor. These are the Five Immutable Principles of all project success, shown over and again in the root cause analysis of failed projects.

  1. Do we know what DONE looks like in some units of measure meaningful to the decision makers?
  2. Do we have any idea how to arrive at DONE at an expected time, so the user can put our efforts to work when they are needed?
  3. Do we know what resources we need to perform Number 2? These include talent, facilities, and of course money.
  4. Do we have any idea what's going to go wrong along the way and how we're going to handle it so we can show up on time with the needed value the customer is expecting?
  5. And finally, how are we going to convince ourselves we are making progress at the rate expected by those paying for our effort?

All five of these principles need answers if we're going to have any hope of success. No matter how often it is repeated, insisted upon, or how clever the message is trying to avoid these principles, they're not going away. They are immutable. They need to be answered on day one and on every day until the project is over.

So if we are writing software for money - internal money, external money, maybe even our own money - ask these questions and see if our answers are credible.

  1. We don't know what done looks like, so let's get started and find out. This is a good way to spend more money. Write a few sentences about what capabilities will be provided by the software. Use these to test the business value. From those capabilities, requirements can emerge. Don't listen to anyone who suggests software requirements age if not put to use immediately. Think of the requirement of properly interfacing with the IEEE-1553 bus next time you're sitting in seat 23B on a 737 watching it fly through the air. That requirement was set down in 1973.
  2. Without a plan, any path will get us lost. Make a plan. It can be a set of sticky notes on the wall, or it can be 30,000 lines of an Integrated Master Schedule to fly to Mars and return. Never ever listen to someone who says planning isn't needed; they've probably never been accountable for delivering anything on time and on budget.
  3. How much will this cost? Don't know? It's going to cost more than you think. If those providing you the money aren't interested in how much you're going to spend and when you're going to spend it, cash their check up front, because they're going to run out of money before you're done.
  4. Risk Management is How Adults Manage Projects - Tim Lister. Behave appropriately.
  5. Tangible evidence of progress is always measured from the user's point of view. In the end, measures of effectiveness are the best assessment, followed by measures of performance. Compliance with requirements is weak - we have lots of examples of compliant but unusable products. When someone says "working software is our measure," ask: what are our units of measure of "working"? Don't confuse effort with results. Show me it does what we planned it to do, on or near the day we planned it to do it.
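
As a sketch of question 3 - "How much will this cost?" - here is a minimal three-point (PERT-style) estimate in Python. The task names and dollar figures are hypothetical, not from the post; the point is that even a crude statistical model gives a cost number with a stated spread, rather than "don't know."

```python
def pert(optimistic, most_likely, pessimistic):
    """Beta-distribution mean and standard deviation for one task."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return mean, std_dev

# Illustrative tasks with costs in $K: best case, most likely, worst case.
tasks = {
    "design":  (10, 15, 30),
    "coding":  (20, 30, 60),
    "testing": (15, 20, 45),
}

total_mean = sum(pert(*t)[0] for t in tasks.values())
# Variances of independent tasks add; the total spread is the square
# root of the summed variances.
total_std = sum(pert(*t)[1] ** 2 for t in tasks.values()) ** 0.5
print(f"Expected cost: ${total_mean:.1f}K +/- ${total_std:.1f}K")
```

A number with a spread like this is exactly what "estimating" means here: not a guarantee, but a basis for deciding how to spend the customer's money.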

More in next post about the economics of writing software for money.

Categories: Project Management

Get Up And Code 048: Brian Lagunas: XAML and Oats

Making the Complex Simple - John Sonmez - Sat, 04/05/2014 - 15:00

In this episode, I got to talk to my hometown buddy, Brian Lagunas. Brian is a Program Manager for Infragistics and a former bodybuilder. I talk to Brian about his new fitness goals and how he is gaining muscle by eating clean. Full transcript below: John:               Hey everyone, welcome back to […]

The post Get Up And Code 048: Brian Lagunas: XAML and Oats appeared first on Simple Programmer.

Categories: Programming

Case for VB.Net vNext

Phil Trelford's Array - Sat, 04/05/2014 - 11:50

Following up on the last Roslyn preview way back in 2012, this week saw the availability of a new preview with a more complete compiler along with a few new language features for C# and VB. A lot of inspiration for these features seems to have come from the F# language.

The C# interactive shell from 2012 appears to be missing, perhaps ScriptCS is expected to fill this space, or you could just use F# interactive which already exists in Visual Studio.

On the language features side, C# 6 gets primary constructors for classes, heavily inspired by F#, and using static, which brings parity with Java and VB.Net.

For me, VB.Net gets the most interesting new feature in the form of Select Case TypeOf, which provides the first steps towards pattern matching.


Taking a hierarchy of shapes as an example:

Public MustInherit Class Shape
End Class

Public Class Rectangle
    Inherits Shape
    Public Property Width As Integer
    Public Property Height As Integer
End Class

Public Class Circle
    Inherits Shape
    Public Property Radius As Integer
End Class

Sub Main()
    Dim shape As Shape = New Rectangle With {.Width = 10, .Height = 10}
    Select Case shape
        Case r As Rectangle When r.Width = r.Height
            Console.WriteLine("Square of {0}", r.Width)
        Case r As Rectangle
            Console.WriteLine("Rectangle of {0},{1}", r.Width, r.Height)
        Case c As Circle
            Console.WriteLine("Circle of {0}", c.Radius)
    End Select
End Sub

The functionality is still quite limited and quite verbose in comparison to say F# or Scala, but I feel it’s definitely an interesting development for VB.Net.

For comparison here’s an equivalent F# version using discriminated unions:

type Shape =
    | Rectangle of width:int * height:int
    | Circle of radius:int

let shape = Rectangle(10,10)
match shape with
| Rectangle(w,h) when w=h -> printfn "Square %d" w
| Rectangle(w,h) -> printfn "Rectangle %d, %d" w h
| Circle(r) -> printfn "Circle %d" r


Pattern matching can be really useful when writing compilers, here’s a simple expression tree evaluator in F#:

type Expression =
   | Factor of value:int
   | Add of lhs:Expression * rhs:Expression

let rec eval e =
   match e with
   | Factor(x) -> x
   | Add(l,r) -> eval l + eval r

let onePlusOne = Add(Factor(1),Factor(1))

VB.Net vNext can approximate this, albeit in a rather more verbose way:

Public MustInherit Class Expression
End Class

Public Class Factor
    Inherits Expression
    Public Property Value As Integer
    Sub New(x As Integer)
        Value = x
    End Sub
End Class

Public Class Op
    Inherits Expression
    Public Property Lhs As Expression
    Public Property Rhs As Expression
End Class

Public Class Add
    Inherits Op
End Class

Function Eval(e As Expression) As Integer
    Select Case e
        Case x As Factor
            Return x.Value
        Case op As Add
            Return Eval(op.Lhs) + Eval(op.Rhs)
        Case Else
            Throw New InvalidOperationException
    End Select
End Function

Sub Main()
    Dim onePlusOne As Expression =
        New Add With {.Lhs = New Factor(1), .Rhs = New Factor(1)}
End Sub


It will be interesting to see how VB.Net vNext develops. I think first-class support for tuples could be an interesting next step for the language.

Categories: Programming

Testing First Really Is Different Than Testing Last

There really is a difference . . .


Classic software development techniques follow a pattern of analysis, design, coding and testing. Regardless of whether the process is driven by a big upfront design or incremental design decisions, code is written and then tests are executed for the first time. Test plans, scenarios and cases may well be developed in parallel with code (generally with only a minimum of interaction); however, execution only occurs as code is completed. In test-last models, the roles of developer and tester are typically filled by professionals with very different types of training, and many times from separate management structures. The differences in training and management structure create structural communication problems. At best the developers and testers are viewed as separate but equal, although rarely do you see testers and developers sharing tables at lunch.

Testing last supports a sort of two-tiered development environment apartheid in which testers and developers are kept apart. I have heard it argued that anyone who learned to write code in school has been taught how to test the work they have created, at least in a rudimentary manner, and therefore should be able to write the test cases needed to implement test-driven development (TDD). When the code is thrown over the wall to the testers, the test cases may or may not be reused to build regression suites. Testers have to interpret both the requirements and the implementation approaches based on point-in-time reviews and documentation.

Test-first development techniques take a different approach, and therefore require a different culture. The common flow of a test-first process is:

  • The developer accepts a unit of work and immediately writes a set of tests that will prove that the unit of work actually functions correctly.
  • Run the tests. The tests should fail.
  • Write the code needed to solve the problem. Write just enough code to really solve the problem.
  • Run the test suite again. The tests should pass.
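
The cycle above can be sketched with Python's built-in unittest; the `order_total` function and its tests are hypothetical, not from the article. Written first, the tests fail because there is nothing to test yet; once just enough implementation is added, the same suite passes.

```python
import unittest

# Step 1: the tests for the unit of work are written first.
class TestOrderTotal(unittest.TestCase):
    def test_no_discount(self):
        self.assertEqual(order_total([10.0, 5.0]), 15.0)

    def test_with_discount(self):
        self.assertEqual(order_total([100.0], discount=0.1), 90.0)

# Steps 2-3: running the suite before this function exists fails (red);
# we then write just enough code to solve the problem.
def order_total(prices, discount=0.0):
    """Sum the prices and apply a fractional discount."""
    return round(sum(prices) * (1.0 - discount), 2)

if __name__ == "__main__":
    unittest.main(exit=False)  # Step 4: run the suite again; it passes (green).
```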

Most adherents of Agile development methods have heard of TDD, which is the most common instantiation of the broader school of test-first development (TFD). TDD extends TFD by adding a final refactoring step to the flow, in which developers write the test cases needed to prove that the unit of work solves the business problem and is technically correct. Other variants, such as behavior-driven and acceptance test-driven development, are common. In all cases the process follows the same cycle of writing tests, running the tests, writing the code, re-running the tests and then refactoring, the only difference being the focus of the tests.

TFD techniques intermingle the coding and testing disciplines, creating a codependent chimera in which neither technique can exist without the other. For TFD to work most effectively, coders and testers must understand each other and be able to work together. Pairing testers and developers together to write the test cases, whether for component/unit testing (TDD), behavior-driven tests (BDD) or acceptance testing (ATDD), creates an environment of collaboration and communication. Walls between the two groups will be weak, and over time, torn down.

Test-first and test-last development techniques differ both in how work is performed and in how teams are organized. TFD takes a collaborative approach to delivering value, while test-last approaches are far more adversarial in nature.

Categories: Process Management

Stuff The Internet Says On Scalability For April 4th, 2014

Hey, it's HighScalability time:

The world ends not with a bang, but with 1 exaFLOP of bitcoin whimpers.
  • Quotable Quotes:
    • @EtienneRoy: Algorithm:  you must encode and leverage your ignorance, not only your knowledge #hadoopsummit - enthralling
    • Chris Brenny: A material is nothing without a process. While the constituent formulation imbues the final product with fundamental properties, the bridge between material and function has a dramatic effect on its perception and use.
    • @gallifreyan: Using AWS c1, m1, m2? @adrianco says don't. c3, m3, r3 are now better and cheaper. #cloudconnect #ccevent
    • @christianhern: Mobile banking in the UK: 1,800 transactions per MINUTE. A "seismic shift" that banks were unprepared for

  • While we are waiting for that epic article deeply comparing Google's Cloud with AWS, we have Adrian Cockcroft's highly hopped slide comparing the two. Google: no enterprise customers, no reservation options, need more regions and zones, need lower inter-zone latency, no SSD options. AWS: no per-minute billing, need simpler discount options, need more regions and zones, no real PaaS strategy, no instance migration.
  • Technology has changed us from a demo-or-die culture to a deploy-or-die culture. We see this in software, but it's happening in hardware too, says Joi Ito (MIT Media Lab) in this interview.
  • The Curmudgeon of Truth declares I reckon your message broker might be a bad idea: "Engineering in practice is not a series of easy choices, but a series of tradeoffs. Brokers aren’t bad, but the tradeoffs you’re making might not be in your favour." < Good discussion on Hacker News.
  • High phive this. Your application may have already achieved a degree of consciousness. An information integration theory of consciousness: consciousness corresponds to the capacity of a system to integrate information.
  • People really, really like to talk. Line, a messaging app that's big in Japan, surpasses 400 million registered users, with 10 billion chat messages per day, 1.8 billion stickers per day, and over 12 million calls per day.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge...

Categories: Architecture

Enterprise WinRT Apps Build Roundup

DevHawk - Harry Pierson - Fri, 04/04/2014 - 16:40

Wow, it’s been a whirlwind couple of days down here in San Francisco @ Build 2014. It has certainly been a huge thrill for me, getting a chance to be a part of the day one keynote and getting 15 minutes of fame. However, as the conference winds down, I wanted to pull together a summary of the stuff Microsoft announced that relates to enterprise app development and Windows 8.1 Update. I mean, it wasn’t all about my wardrobe choices.

The Windows for Business blog has a good summary post that hits the highlights. The stuff I wanted to specifically call out is:

  • We’ve changed the policy to allow side loaded apps to communicate with desktop apps. Literally every single enterprise customer, Microsoft dev consultant and enterprise technical sales rep I’ve spoken to in the last year has asked for this.
  • We’ve added a feature in Windows 8.1 Update to enable side loaded apps to run code outside of the App Container. This opens up side loaded apps to access the full power of Windows as well as all the existing code the enterprise may have in its portfolio.
  • We’ve made it significantly easier to get side load rights. I’d go thru the specifics here, but Rocky Lhotka (who has been *very* vocal about the issues in this space) had a great summary: “For a maximum of around $100 virtually every organization (small to huge) can get a side loading unlock key for all their devices.”

If you want more information on how to take advantage of these new features for side loaded apps, here are some resources for you:

  • In addition to my 5 minutes in the keynote, I did a whole session where I drilled into more details on that demo. I also showed demos that used network loopback for interprocess communication.
  • John Vintzel and Cliff Strom had a session on deploying enterprise apps. As of this writing, the video isn’t online yet but it will be within a day or two at that URL.
  • We have published whitepapers on both Brokered WinRT Components and using network loopback in WinRT apps that go into more details on how to build solutions with this technology
  • Last but not least, we have a set of samples of sideloaded WinRT apps. This includes the keynote demo, another brokered component demo and the WCF & ASP.NET network loopback demos I did in my session. Note, the keynote demo sample is packaged oddly because of the way MSDN’s sample repo handles (or in this case doesn’t handle) VS solutions with multiple projects. When I get back to Redmond, I’m  going to see if there’s a better way to get this sample hosted.

I heard many times over the past two days from folks in person at the conference and via email, twitter, facebook, carrier pigeon, etc just how excited they are about these changes & features. As an engineer who spends most of his days in his office or in meetings building this stuff, it is amazingly gratifying to hear directly from our users how much our work can help them.

Categories: Architecture, Programming

Do It Right or Do It Twice

Herding Cats - Glen Alleman - Fri, 04/04/2014 - 15:23

I heard this phrase in a conference call yesterday with a DOD client and thought, how clever, I'll write a blog post about this. Only to find out there is a Forbes article with the same name, and several other articles as well. 

The Forbes article had a case study about doing it right around a business process. It was the perfect framework (repeated here) for applying Performance-Based Project Management® 

In the Forbes article there are five steps:

  1. The Vision Meeting - develop a set of needed capabilities for the outcome of the project. These provide the ability for the business to do something of value. There is all this discussion around creating value, but rarely are the units of measure for value mentioned.
    • Value cannot exist if we don't know both the units of measure of the value itself and the cost to deliver those units of measure. This is where the naive and ill informed notion of #NoEstimates and the phrase we are exploring for ways to make decisions with "No Estimates" goes right in the ditch.
    • The analysis of alternatives (AoA) is a starting point. Balanced Scorecard is a broader approach, but AoA will work for this post. If we have some idea about what capabilities we need to possess, then we can make decisions about them. What do they cost? How do they actually provide value to our business or mission?
    • What are the units of measure of that value? One good unit of measure is effectiveness. How effective will this new capability be in solving our problem?
  2. Build a strategy - what is the Plan to deliver the needed capabilities. The notion that we don't need to plan - we'll let the resulting capabilities and their technical and operational requirements emerge - is of course going to allow us to Do It Twice.
    • We need to know what DONE looks like in units of measure meaningful to the decision makers. The Plan is not the schedule. The Plan describes what will be delivered.
    • The schedule shows when it is needed to deliver the value. We need both. 
  3. Adapt if necessary - the needed capabilities should pretty much be fixed. If not we're wasting time and money exploring for what DONE looks like.
    • That's called Research. No problem, if we acknowledge we are on a research project. If the customer thinks - and has paid us - we're on a development and delivery project, there's going to be disappointment when we discover we've spent a bunch of money exploring when we should have been delivering.
    • When I hear about projects where the customer doesn't know what they want yet, so the plan is to get started and discover along the way, my advice is: go back to the office and get a bigger check book.
  4. Execute in time boxes - time boxing, rolling waves, incremental, iterative execution and delivery are common sense. No one knows enough about anything at the detail level to know how to build it far into the future.
    • The Capabilities shouldn't be changing, but the mechanism for delivering these capabilities must be flexible and adaptable. The key outcome from executing in time boxes, is the answer to the question - how long are you willing to wait before you find out you are late? The answer must be, short enough to take corrective action to NOT be late. This time interval is domain and project dependent. But answering this question will define your business rhythm.
  5. What are we working on now? What are we working on next? - make this visible. Have a plan of the month, a plan of the week, a plan for this quarter.
    • Have everyone on the project acknowledge they know what the outcomes of this plan are in units of performance, technical performance measures, and the quantifiable backup data showing physical percent complete.
    • We must measure progress in this manner. This is the notion in agile software of working software. But the agile community doesn't have a formal way of stating the units of measure of working. They leave that up to the customer, who may not know. The Systems Engineering paradigm does, through Measures of Effectiveness, Measures of Performance, and Technical Performance Measures.
    • Create a framework based on these, and only then insert your favorite agile development processes.

In the end, project success is about knowing what done looks like, knowing how to get there, and knowing how to measure progress along the way. And of course knowing the impediments to progress and handling them. These concepts are instantiated in two papers from a colleague, Pat Barker: What is Your Estimate at Complete and Program Master Schedule Can Improve Results, on page 20.


Categories: Project Management

Business Analyst Tip: CIOs are People, Too

Software Requirements Blog - - Fri, 04/04/2014 - 12:38
Like most of us, I’m working in an environment where there is too much to do and not enough people to do it. We recently received a request for a new project which didn’t make sense to me—it would only be used short-term and the need met by the project was already satisfied. So, I […]

Business Analyst Tip: CIOs are People, Too is a post from:

Categories: Requirements

Doing The Right Thing Right


Being on time is one aspect of ethical behavior

Ethics are the moral principles that govern behavior. The principles that underlie ethics help us judge what is right or wrong. Team members on a project team are presented with a nearly continuous stream of choices that test their principles. Over the years I have seen choices that were clearly unethical, some that might be in a gray area, and a majority that have been ethical or at least neutral. An example of a clearly unethical decision was reporting a project as having a green status when it was clear that it was in deep trouble. I was recently asked why not unit testing code, when the organization's standard process called for it to be unit tested, was an ethical issue rather than just a failure to follow best practices. Unit testing is an expected behavior for most coders. There are at least two reasons this behavior is an ethical issue. The first is that when developers do not unit test the code they have written, they make someone else responsible for finding their mistakes. The second is that unit testing is the expected behavior in their methodology (and in 99.9% of the coding methods I am aware of).

Here are four general attributes of ethical behavior:

  1. Reliability: A person’s actions should match the behavior they have committed to follow. Many organizations spend substantial time and effort defining techniques and methods with policies to ensure they are followed.  Deciding not to follow the standard process is generally not ethical behavior. In our unit testing example, most methodologies specifically call for developers to unit test their code. When a coder doesn’t unit test they are damaging their reliability. They can no longer be trusted to behave as expected by others following the process.
  2. Responsibility:  Responsibility is about the duty to deal with your actions. In our unit testing example, not unit testing makes someone else responsible for finding and removing your mistakes.
  3. Respectfulness: Team members must be aware of and have regard for the feelings of those around them. Respectful does not mean avoiding tough decisions or conversations, but rather being aware of how deeds, actions and words affect those around you, and doing your best to help deal with those effects. Making someone else clean up your code is not respectful to the next developer or tester.
  4. Fairness: Actions and decisions need to be objective, evenhanded and consistent. Not following a mandated or agreed upon process does not represent consistent behavior.

Why do these four attributes matter when discussing the ethics of project team members? It boils down to reputation. In most Agile teams, being someone who can be counted on to do the right thing is essential for long-run influence. Self-organizing and self-managing teams, which are a core feature of Agile, need team members to perform within ethical boundaries and to be able to influence each other to do the right thing, and to do the right thing right.

Categories: Process Management


Phil Trelford's Array - Thu, 04/03/2014 - 18:28

Microsoft’s Build 2014 conference is currently in full flow, one of the new products announced is Orleans, an Actor framework with a focus on Azure.

There’s an MSDN blog article with some details, apparently it was used on Halo 4.

Demis Bellot of ServiceStack fame, tweeted his reaction:

.NET's actor model uses static factories, RPC Interfaces and code-gen client proxies for comms, WCF all over again:

— Demis Bellot (@demisbellot) April 3, 2014

I retweeted, as it wasn’t far off my initial impression and the next thing I know my phone is going crazy with replies and opinions from the .Net community and Microsoft employees. From what I can make out the .Net peeps weren’t overly impressed, and the Microsoft peeps weren’t overly impressed that they weren’t overly impressed.

So what’s the deal?


Erlang has distributed actors via OTP, this is the technology behind WhatsApp, recently acquired for $19BN!

The JVM has the ever popular Akka which is based heavily on Erlang and OTP.

An industrial strength distributed actor model for .Net should be a good thing. In fact Microsoft are currently also working on another actor framework, called ActorFX.

The .Net open source community have projects in the pipeline too including:

There’s also in-memory .Net actor implementations with F#’s MailboxProcessor and TPL Dataflow. Not to mention the departed Axum and Retlang projects.


From what I can tell, Orleans appears to be focused on Azure, making use of its proprietary APIs, so there's probably still a big space for the community's open source projects to fill.

Like Demis I’m not a huge fan of WCF XML configuration and code generation. From the Orleans MSDN post, XML and code-gen seem to be front and centre.

You write an interface, derive from an interface, add attributes and then implement methods, which must return Task<T>. Then you do some XML configuration and Orleans does some code generation magic for hydration/dehydration of your objects (called grains).

Smells like teen spirit WCF, that is, methods are king, although clearly I could be reading it wrong.

From my limited experience with actors in F# and Erlang, messages and message passing are king, with pattern matching baked into the languages to make things seamless.

Initial impressions are that Orleans is a long way from Erlang Kansas.

The Good Parts

Building a fault-tolerant enterprise distributed actor model for .Net is significant, and could keep people on the platform who might otherwise have followed Erik Meijer to the JVM, Scala and Akka.

Putting async front and centre is also significant as it simplifies the programming model.

C# 5’s async is based on F#’s asynchronous workflows, which was originally developed to support actors via the MailBoxProcessor.


Underneath, Erlang’s processes, F#’s async workflows and C#’s async/await are simply implementations of coroutines.

Coroutines are subroutines that allow multiple entry points for suspending and resuming execution at certain locations. They’ve been used in video games for as long as I can remember (which only goes back as far as the 80s).

Coroutines help make light work of implementing state machines and workflows.
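To make that concrete, here is a minimal sketch using a Python generator (one common coroutine implementation); the traffic-light state machine is an invented example:

```python
# A coroutine suspends and resumes at its yield points, so a state machine
# reads as straight-line code instead of an explicit state variable plus a
# switch: each yield IS a state, and resuming advances to the next one.
def traffic_light():
    while True:
        yield "green"
        yield "yellow"
        yield "red"

light = traffic_light()
states = [next(light) for _ in range(4)]
print(states)  # -> ['green', 'yellow', 'red', 'green']
```

Erlang processes, F# async workflows and C# async/await all give you the same "resume where you left off" property, just with scheduling and message delivery layered on top.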


In Erlang messages are typically described as named tuples (an atom is used as the name), and in F# discriminated unions are typically employed.

Orleans appears to use methods as the message type, where the method name is analogous to an Erlang atom name, or an F# union case name and the parameters are analogous to a tuple. So far so good.

Where they differ is that return values are first-class for methods, and return values feel more like an RPC approach. In fact this is the example given in the article:

public class HelloGrain : Orleans.GrainBase, IHello
{
    Task<string> IHello.SayHelloAsync(string greeting)
    {
        return Task.FromResult("You said: '" + greeting + "', I say: Hello!");
    }
}

Also current wisdom for C# async is to avoid async void... which is why I guess they’ve plumped for Task as the convention for methods with no return value.


.Net’s built-in binary serialization is bottom of the league for size and performance, hopefully alternative serialization libraries like Google Protocol Buffers will be supported.

Judge for yourself

But these are just my initial impressions, try out the samples and judge for yourself.

Categories: Programming

Leslie Lamport to Programmers: You're Doing it Wrong

Famous computer scientist Leslie Lamport is definitely not a worse is better kind of guy. In Computation and State Machines he wants to make the case that to get better programs we need to teach programmers to think better. And programmers will think better when they learn to think in terms of concepts firmly grounded in the language of mathematics.

I was disappointed that there was so much English in the paper. Surely it would have been more convincing if it was written as a mathematical proof. Or would it?

This whole topic has been argued extensively throughout thousands of years of philosophy. Mathematics has always been a strange attractor for those trying to escape a flawed human rationality. In the end as alluring as the utopia of mathematics is, it lacks a coherent theory of meaning and programming is not about rearranging ungrounded symbols, it's about manipulating and shaping meaning.

For programmers I think Ludwig Wittgenstein has the right sense of things. Meaning is derived by use within a community. Programming, with programs built and maintained by programmers, is at bottom a community effort.


For quite a while, I’ve been disturbed by the emphasis on language in computer science. One result of that emphasis is programmers who are C++ experts but can’t write programs that do what they’re supposed to. The typical computer science response is that programmers need to use the right programming/specification/development language instead of/in addition to C++. The typical industrial response is to provide the programmer with better debugging tools, on the theory that we can obtain good programs by putting a monkey at a keyboard and automatically finding the errors in its code.

I believe that the best way to get better programs is to teach programmers how to think better. Thinking is not the ability to manipulate language; it’s the ability to manipulate concepts. Computer science should be about concepts, not languages. But how does one teach concepts without getting distracted by the language in which those concepts are expressed? My answer is to use the same language as every other branch of science and engineering—namely, mathematics. But how should that be done in practice? This note represents a small step towards an answer. It doesn’t discuss how to teach computer science; it simply addresses the preliminary question of what is computation. 

Related Articles
Categories: Architecture

Who Solves Which Problems?

Many years ago, I was part of a task force to “standardize” project management at an organization. I suggested we gather some data to see what kinds of projects the client had.

They had short projects, where it was clear what they had to do: 1-3 week projects where 2-4 people could run with the requirements and finish them. They had some of what they called “medium-risk, medium-return” projects, where a team or two of people needed anywhere from 3-9 months to work on features that were pretty well defined. But they still needed product managers to keep working with the teams. And, they had the “oh-my-goodness, bet the company” projects and programs. Sometimes, they started with a small team of 2-5 people to do a proof-of-concept for these projects/programs. Then, they staffed those projects or programs with almost everyone. (BTW, this is one of the reasons I wrote Manage It! Your Guide to Modern, Pragmatic Project Management. Because one approach does not fit all projects!)

The management team wanted us, the task force, to standardize on one project management approach.

In the face of the data, they relented and agreed it didn’t make sense to standardize.

It made a little sense to have some guidelines for some project governance, although I didn’t buy that. I’ve always preferred deliverable-based milestones and iterative planning. When you do that, when you see project progress in the form of demos and deliverables, you don’t need as much governance.

There are some things that might make sense for a team to standardize on—those are often called team norms. I’m all in favor of team norms. They include what “done” means. I bet you’re in favor of them, too!

But, when someone else tells you what a standard for your work has to be? How does that feel to you?

I don’t mind constraints. Many of us live with schedule constraints. We live with budget constraints. We live with release criteria. In regulated industries, we have a whole set of regulatory constraints. No problem. But how to do the work? I’m in favor of the teams deciding how to do their own work.

That’s the topic of this month’s management myth, Management Myth 28: I Can Standardize How Other People Work.

If you think you should tell other people how to do their work, ask yourself why. What problem are you trying to solve? Is there another way you could solve that problem? What outcome do you desire?

In general, it’s a really good idea for the people who have the problem to solve the problem. As long as they know it’s a problem.

How about you tell the team the outcome you desire, and you let them decide how to do their work?

Categories: Project Management

How To Get Started Programming

Making the Complex Simple - John Sonmez - Thu, 04/03/2014 - 16:00

Breaking into the software development industry can be rather difficult. It is difficult to get a job without experience and it is difficult to get experience without a job. In this video, I talk about how you can get started learning to program and then how you can actually land that first job. Full transcript: […]

The post How To Get Started Programming appeared first on Simple Programmer.

Categories: Programming

Agile as a Systems Engineering Paradigm

Herding Cats - Glen Alleman - Thu, 04/03/2014 - 15:41

In yesterday's post, the notion of Systems Engineering was suggested as one solution to project failure. Here's the next step. The Agile notion started with a manifesto that turned into many interpretations and practices. In the standard project management paradigm, there is a set of principles, practices, and processes described in a variety of ways through several organizations: ITIL, PMI, APM, DOD, DOE, and other owners of project management activities.

When we take the Systems Engineering approach, we can put a wrapper around ALL project management, technical development, and deployment processes, that can be used to assess each practice and process to assure it is providing value. Here's a short overview of this paradigm. 

Agile project management is systems management, from Glen Alleman.

The frameworks for Systems Engineering start with several guides:
  • ISO 15288
  • INCOSE Systems Engineering Handbook
  • FAA Systems Engineering 
Categories: Project Management

Welcoming Changing Requirements

Software Requirements Blog - - Thu, 04/03/2014 - 12:35
The Agile Manifesto is based on twelve principles (I’ve highlighted a couple – 2 and 12 – that are of the most interest for this post): Customer satisfaction by rapid delivery of useful software Welcome changing requirements, even late in development Working software is delivered frequently (weeks rather than months) Working software is the principal […]

Welcoming Changing Requirements is a post from:

Categories: Requirements

Another Learning Style Model


Every team member has a different learning style that has to be synced.


Another learning style model is built on four dimensions of learning styles. The dimensions of the Index of Learning Styles, developed by Dr Richard Felder and Barbara Soloman, are each described as a continuum. Each continuum is bounded by opposite attributes of a learning style. An individual can map themselves on each continuum to generate a rich understanding of their learning style. They are summarized below:


Mature teams are generally comprised of a mix of learning styles. A mixture of styles can be complementary. For example, many IT groups I have worked with have at least one big-picture person and several more linear learners. What I generally do not see are individuals who sit at the extremes of any of the dichotomies. Individuals who sit at an extreme tend to be more difficult to draw into the group, which impacts the ability to communicate and the ability of team members to trust each other.

One use of this type of model is to map teams.  For example, if we use the example used in Learning Styles and Communication Problems in a mapping exercise, I would judge the three personalities Lawyer (L), Talker (T) and Diagrammer (D) to fall as below:


The mapping exercise can be used to flag extremes that might cause trouble for the team. As noted in the example, the overall team was having issues staying focused when the Lawyer was presenting due to the sequential style being used. Using a mapping approach early in the formation of the team can provide the coach with the impetus for training exercises to sensitize the team to the disparate learning styles.

I suggest doing this exercise as a team when generating the team charter. The process I follow is:

  1. Place each of the descriptors on separate sticky notes and then place them on the wall so that all four continuums are visible.
  2. Review and discuss the meaning of each attribute.
  3. Have each team member mark where they believe they fall on each of the attribute continuums.
  4. Discuss how the team can use the information to more effectively communicate.

Opposites might attract in poetry and sitcoms; however, opposite learning styles rarely work together well in teams without empathy and a dash of coaching. Therefore coaches and teams need to have an inventory of the learning styles on the team. Models and active evaluation against a model are tools to generate knowledge about teams so they can tune how they work to maximize effectiveness.

Categories: Process Management