
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

5 Ways to Destroy Your Productivity

Making the Complex Simple - John Sonmez - Mon, 04/06/2015 - 16:00

Hey you. Yeah, you. Want to know how to absolutely and utterly destroy your productivity? Good. You’ve come to the right place. Being productive is overrated. I mean really. What good does it get you? The more work you get done, the more work you get asked to do. So, here are a few quick […]

The post 5 Ways to Destroy Your Productivity appeared first on Simple Programmer.

Categories: Programming

Capability Maturity Levels and Implications on Software Estimating

Herding Cats - Glen Alleman - Mon, 04/06/2015 - 14:52

An estimate is the most knowledgeable statement you can make at a particular point in time regarding cost/effort, schedule, staffing, risk, and the ...ilities of the product or service. [1]

CMMI for Estimates

 

Immature versus Mature Software Organizations [3]

Setting sensible goals for improving the software development processes requires understanding the difference between immature and mature organizations. In an immature organization, processes are generally improvised by practitioners and their management during the course of the project. Even if a process has been specified, it is not rigorously followed or enforced.

Immature organizations are reactionary, with managers focused on solving immediate crises. Schedules and budgets are routinely exceeded because they are not based on realistic estimates. When hard deadlines are imposed, product functionality and quality are often compromised to meet the schedule.

In immature organizations, there is no objective basis for judging product quality or for solving product or process problems. The result is that product quality is difficult to predict. Activities intended to enhance quality, such as reviews and testing, are often curtailed or eliminated when projects fall behind schedule.

Mature organizations possess an organization-wide ability to manage development and maintenance processes. The process is accurately communicated to existing staff and new employees, and work activities are carried out according to the planned process. The processes mandated are usable and consistent with the way the work actually gets done. These defined processes are updated when necessary, and improvements are developed through controlled pilot tests and/or cost-benefit analyses. Roles and responsibilities within the defined process are clear throughout the project and across the organization.

Let's look at the landscape of estimating maturity from the point of view of those providing the funding for the work.

1. Initial

Projects are small, short, and while important to the customer, not likely critical to the success of the business in terms of cost and schedule. 

  • Informal or no estimating
  • When there are estimates, they are manual, without any processes, and likely considered guesses

The result of this level of maturity is poor forecasting of the cost and schedule of the planned work, and surprises for those paying for the work.

2. Managed

Projects may be small, short, and possibly important. Some form of estimating, either from past experience or from decomposition of the planned work, is used to make linear projections of future cost, schedule, and technical performance.

This past performance is usually not adjusted for the variances of the past; it's just an average. As well, the linear average usually doesn't consider changes in the demand for work, technical differences in the work, and other uncertainties in the future of that work.

This is the Flaw of Averages approach to estimating. As well, decomposing the work into same-sized chunks is the basis of all good estimating processes. In the Space and Defense business, the 44-day rule is used to bound the duration of work. This answers the question: how long are you willing to wait before you find out you're late? For us, the answer is no more than one accounting period. In practice, project status - physical percent complete - is taken every Thursday afternoon.
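A minimal simulation (my sketch, with invented numbers) shows why the Flaw of Averages bites: when task durations are right-skewed, a plan built on the typical duration of each task is exceeded most of the time.

# Flaw of Averages sketch: ten tasks, each "typically" (median) 10 days,
# with right-skewed lognormal variability around that value.
set.seed(7)
totals <- replicate(10000, sum(rlnorm(10, meanlog = log(10), sdlog = 0.4)))
mean(totals > 100)  # fraction of simulated projects that exceed the
                    # naive 10 x 10 = 100-day plan: roughly 70%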

3. Defined

There is an estimating process, using recorded past performance and statistical adjustments of that past performance. Reference classes are used to model future performance from the past. Parametric estimates can be used with those reference classes or other estimating processes. Function Points are common in enterprise IT projects, where interfaces to legacy systems, database topology, user interfaces, and transactions are the basis of the business processes.
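To illustrate the parametric idea (my sketch, not from the original post), the classic COCOMO-81 organic-mode model turns a size measure into an effort estimate using coefficients calibrated from past performance:

# Basic COCOMO-81, organic mode: effort in person-months from size in KSLOC.
# The coefficients a and b come from calibration against historical projects.
cocomo_effort <- function(ksloc, a = 2.4, b = 1.05) {
  a * ksloc ^ b
}
cocomo_effort(32)  # roughly 91 person-months for a 32 KSLOC system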

The notion that "we've never done this before, so how can we estimate?" begs the question: do you have the right development team? This is a past performance issue. Why hire a team that has no understanding of the problem and then ask them to estimate the cost of the solution? You wouldn't hire a plumber to install a water system if she hadn't done something like it before.

4. Quantitatively Managed

Measures, metrics, and architecture assessments - the Design Structure Matrix is one we use - are used to construct a model of the future. External databases are referenced to compare internal estimates with external experience.

5. Optimized 

There is an estimating organization that supports development, starting with the proposal and continuing through project close-out. As well, there is a risk management organization helping inform the estimates about possible undesirable outcomes in the future.

Resources

[1] Improving Estimate Maturity for More Successful Projects, SEER/Tracer Presentation, March 2010.

[2] Software Engineering Information Repository, Search Results on Estimates

[3] The Capability Maturity Model for Software

Categories: Project Management

SPaMCAST 336 – Yves Hanoulle, Communities and Coaching Retreats

Software Process and Measurement Cast - Sun, 04/05/2015 - 22:00

http://www.spamcast.net

Listen Now

Subscribe on iTunes

In this episode of the Software Process and Measurement Cast we feature our interview with Yves Hanoulle, builder of community builders. We discussed collaboration, coaching retreats and the future of Agile. Yves is an Agile thought leader among thought leaders, and he shared his wisdom with the Software Process and Measurement Cast listeners.

Yves’ Bio:

Yves Hanoulle has taken on almost every role in the software development field, from software support, developer, trainer, and scrum master to agile coach. Over the last 10 years, Yves has focused on agile coaching. Yves grows community builders. His personal goal is to make his customers independent from him as soon as possible. Yves is the inventor of the Who is Agile series of books and the co-inventor of the leadership game. Although he co-invented Pair Coaching and Coach Retreats, Yves is not interested in being a rock star coach inventing new methodologies; he would rather mix existing ideas like a thought disc jockey, adjusting to the needs of the audience.

Connect with Yves at:

Twitter: @yveshanoulle
LinkedIn: https://www.linkedin.com/in/yveshanoulle

Call to action!

Reviews of the Podcast help to attract new listeners. Can you write a review of the Software Process and Measurement Cast and post it on the podcatcher of your choice? Whether you listen on iTunes or any other podcatcher, a review will help to grow the podcast! Thank you in advance!

Re-Read Saturday News

Re-Read Saturday’s focus on Eliyahu M. Goldratt and Jeff Cox’s The Goal: A Process of Ongoing Improvement began on February 21st. The Goal has been hugely influential because it introduced the Theory of Constraints, which is central to lean thinking. The book is written as a business novel. Visit the Software Process and Measurement Blog and catch up on the re-read.

Note: If you don’t have a copy of the book, buy one. If you use the link below, it will support the Software Process and Measurement blog and podcast at the same time!

Dead Tree Version or Kindle Version 

@stevena510 (Steven Adams) has recommended that the next re-read be Fred Brooks’ masterpiece The Mythical Man-Month. I think it is a great idea.

Next

In the next SPaMCAST we feature our essay on Agile release planning. Unless your project consists of one or two sprints and a cloud of dust (see three yards and a cloud of dust), you will need to tackle release planning. It does not have to be as hard as many people want you to believe. We will have new entries from the Software Sensei (Kim Pries) and Jo Ann Sweeney with her Explaining Change column.

Upcoming Events

QAI Quest 2015
April 20-21, Atlanta, GA, USA
Scale Agile Testing Using the TMMi
http://www.qaiquest.org/2015/

DCG will also have a booth!

Next SPaMCast

The next Software Process and Measurement Cast will feature our interview with the builder of community builders, Yves Hanoulle.  Yves and I talked Agile communities, coaching retreats, why the factory metaphor for IT is harmful and the future of Agile. A wonderful interview, full of information and ideas that can improve your development environment!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management


Increasing the tap area of UIButtons made with PaintCode

Xebia Blog - Sun, 04/05/2015 - 21:04

Whether you're a developer or not, every iPhone user has experienced the case where he or she tries to tap a button (with an image) and nothing happens. Most likely because the user missed the button and pressed next to it. And that's usually not the fault of the user, but the fault of the developer/designer, because the button is too small. The best solution is to have bigger icons (we're only talking about image buttons, not text-only buttons) so it's easier for the user to tap. But sometimes you (or your designer) just want to use a small icon, because it simply looks better. What do you do then?

For buttons with normal images this is very easy. Just make the button bigger. However, for buttons that draw themselves using PaintCode, this is slightly harder. In this blogpost I'll explain why and show two different ways to tackle this problem.

I will not go into the basics of PaintCode. If you're new to PaintCode, have a look at their website or go to Freek Wielstra's blogpost Working With PaintCode And Interface Builder In XCode. I will be using the examples from his post as the basis for the examples here, so it's good to read his post first (though not strictly necessary).

To get a better understanding of how a UIButton with an image differs from a UIButton drawn with PaintCode, I will first show what a non-PaintCode button would look like. Below we have a PNG image of 25 by 19 pixels (for non-Retina). Apple recommends buttons to be 44 by 44 points, so we increase the size of the button. The button has a gray background, and we can see that the image stays the original size and is nicely in the center, while the touch target becomes big enough for the user to easily tap it.

[Image: the small icon stays centered while the button's tap area grows]

The reason it behaves like this is the Control settings in the Attribute Inspector. We could let the content align differently or even make it stretch. But stretching this would be a bad idea, since it's not a vector image and it would look pixelated.

That's why we love PaintCode graphics. They can scale to any size, so we can use the same graphic in different places and at different screen scales (non-Retina, Retina @2x, and the iPhone 6 Plus, which uses @3x). But what if we don't want the graphic to scale to the entire size of the button like above?

A bad solution would be not to use a frame in PaintCode. Yes, the image will stay the size you gave it when you put it in a button, but it will be left-top aligned. Also you won't be able to use it anywhere else.

If you are really sure that you will never use the image anywhere else than on your button, you can group your vector graphic and tell it to have flexible margins within the frame and a fixed width and height. Using the email icon from Freek Wielstra, this will look something like this:

[Screenshot: the grouped icon in PaintCode with flexible margins and a fixed width and height]

You can verify that this works by changing the size of the frame. The email icon should always stay in the center both horizontally and vertically, and it should stay the same size. The same will happen if you use this in a custom UIButton.

Now let's have a look at a better solution that will allow us to reuse the graphic at different sizes and have some padding around it in a button. It's easier than you might expect, and it follows the actual rules of a UIButton. The UIButton class has a property named contentEdgeInsets with the following description: "The inset or outset margins for the rectangle surrounding all of the button's content." That sounds exactly like what we need. Our PaintCode graphics are the content of the buttons, right? Unfortunately they're not treated as such, since we are doing the drawing ourselves, which does not adhere to that property. However, we can very easily make it adhere to it:

@IBDesignable
class EmailButton: UIButton {

    override func drawRect(rect: CGRect) {
        // Inset the drawing rect by contentEdgeInsets so the graphic gets
        // padding while the button's tap area keeps its full size.
        StyleKit.drawEmail(frame: UIEdgeInsetsInsetRect(rect, contentEdgeInsets), color: tintColor!)
    }

}

And that's really all there is to it. Just make sure to put a frame around your graphics in PaintCode and make them stretch to all sides of the frame. Now you can specify the content insets of your buttons in Interface Builder and your icon will have padding around it.

[Screenshot: the button in Interface Builder with content edge insets applied]

Hope is not a Strategy

Herding Cats - Glen Alleman - Sun, 04/05/2015 - 16:17

We don't need to Hope the sun will come up tomorrow. That probability is 100%. The probability that the outpatient surgery you've scheduled next week will be successful is high. The surgeon has done this procedure thousands of times with a 98% success rate.

The probability that you'll be able to arrive on time with the needed features for the first release of the accounts payable and accounts receivable upgrade to the ERP system is not like the sun coming up or the outpatient surgery. It's a probability that you'll actually have to think about. Actually have to calculate. Actually make an estimate of before those paying for your work can make a decision about when to switch from the old system to the new system.

Hoping we can show up when needed is not a strategy for showing up on time, on budget, with the needed capabilities. To do that, we need a Closed Loop Control System, in which we have a target performance and measures of actual performance, compared against estimates of the planned performance to create an error signal used to take corrective actions.
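As a toy illustration (my sketch, with invented numbers), the loop reduces to comparing planned performance against measured performance each period and acting on the resulting error signal:

# Closed-loop control sketch: planned vs. actual cumulative percent complete.
# The error signal, not hope, triggers the corrective action.
planned <- c(10, 25, 45, 70, 90, 100)
actual  <- c(8, 20, 38, 55)            # measured through period 4
error <- actual - planned[seq_along(actual)]
for (p in seq_along(error)) {
  if (error[p] < -5) {
    cat(sprintf("Period %d: %g%% behind plan - corrective action needed\n",
                p, -error[p]))
  }
}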

Control systems (slides) from Glen Alleman

Related articles:
  • How to Avoid the "Yesterday's Weather" Estimating Problem
  • Incremental Delivery of Features May Not Be Desirable
  • Release Early and Release Often
  • Some More Background on Probability, Needed for Estimating
  • Managing in the Presence of Uncertainty
  • Making Decisions in the Presence of Uncertainty
Categories: Project Management

R: Markov Chain Wikipedia Example

Mark Needham - Sun, 04/05/2015 - 11:07

Over the weekend I’ve been reading about Markov Chains and I thought it’d be an interesting exercise for me to translate Wikipedia’s example into R code.

But first a definition:

A Markov chain is a random process that undergoes transitions from one state to another on a state space.

It is required to possess a property that is usually characterized as “memoryless”: the probability distribution of the next state depends only on the current state and not on the sequence of events that preceded it.

That ‘random process’ could be moves in a Monopoly game, the next word in a sentence or, as in Wikipedia’s example, the next state of the Stock Market.

The diagram below shows the probabilities of transitioning between the various states:

[Image: state-transition diagram for the Bull, Bear, and Stagnant market states]

e.g. if we’re in a Bull Market the probability of the state of the market next week being a Bull Market is 0.9, a Bear Market is 0.075 and a Stagnant Market is 0.025.

We can model the various transition probabilities as a matrix:

M = matrix(c(0.9, 0.075, 0.025, 0.15, 0.8, 0.05, 0.25, 0.25, 0.5),
          nrow = 3,
          ncol = 3,
          byrow = TRUE)
 
> M
     [,1]  [,2]  [,3]
[1,] 0.90 0.075 0.025
[2,] 0.15 0.800 0.050
[3,] 0.25 0.250 0.500

Rows/Cols 1-3 are Bull, Bear, Stagnant respectively.

Now let’s say we start with a Bear market and want to find the probability of each state in 3 weeks time.

We can do this by multiplying our probability/transition matrix by itself 3 times and then multiplying the result by a vector representing the initial Bear market state.

threeIterations = (M %*% M %*% M)
 
> threeIterations
       [,1]    [,2]    [,3]
[1,] 0.7745 0.17875 0.04675
[2,] 0.3575 0.56825 0.07425
[3,] 0.4675 0.37125 0.16125
 
> c(0,1,0) %*% threeIterations
       [,1]    [,2]    [,3]
[1,] 0.3575 0.56825 0.07425

So we have a 56.825% chance of still being in a Bear Market, 35.75% chance that we’re now in a Bull Market and only a 7.425% chance of being in a stagnant market.

I found it a bit annoying having to type ‘%*% M’ multiple times but luckily the expm library allows us to apply a Matrix power operation:

install.packages("expm")
library(expm)
 
> M %^% 3
       [,1]    [,2]    [,3]
[1,] 0.7745 0.17875 0.04675
[2,] 0.3575 0.56825 0.07425
[3,] 0.4675 0.37125 0.16125

The nice thing about this function is that we can now easily see where the stock market will trend towards over a large number of weeks:

> M %^% 100
      [,1]   [,2]   [,3]
[1,] 0.625 0.3125 0.0625
[2,] 0.625 0.3125 0.0625
[3,] 0.625 0.3125 0.0625

i.e. 62.5% of weeks we will be in a bull market, 31.25% of weeks in a bear market, and only 6.25% of weeks stagnant.
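As a cross-check (my addition, not in the original post), that long-run limit is the chain's stationary distribution: the left eigenvector of M with eigenvalue 1, normalised to sum to 1.

# Stationary distribution: solve pi %*% M = pi via the eigenvector of t(M)
# whose eigenvalue is 1, then normalise it to sum to 1.
e <- eigen(t(M))
v <- Re(e$vectors[, which.min(abs(e$values - 1))])
v / sum(v)
# [1] 0.6250 0.3125 0.0625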

Categories: Programming

Re-Read Saturday: The Goal: A Process of Ongoing Improvement. Part 7


I first read The Goal: A Process of Ongoing Improvement when I became actively involved in process improvement. I was a bit late to the party; however, since my first read of this business novel, a copy has always graced my bookshelf. The Goal uses the story of Alex Rogo, plant manager, to illustrate the theory of constraints and how the wrong measurement focus can harm an organization. The focus of the re-read is less on the story and more on the ideas that have shaped lean thinking. Even though it is set in a manufacturing plant, the ideas are useful in understanding how all projects and products can be delivered more effectively. Earlier entries in this re-read are:

Part 1                Part 2                  Part 3                      Part 4                Part 5           Part 6

As we noted during our re-read of John P. Kotter’s Leading Change, significant organizational change typically requires changes to many different groups and processes to be effective. As we observed in Chapter 18, change is difficult even when everyone has a sense of urgency and understands the goal of the change.

Chapter 19

Alex begins to despair of being able to deal with two constraints in the same process. After dinner with his mom and kids, he picks up Johan at the airport. On the way to the plant, Johan points out that there are only two reasons why what Alex and his team are learning won’t help them save the plant: first, if there is no demand for the product they are making, and second, if they are not willing to change. Once at the plant, Johan gets a briefing on the problems. The multiple bottleneck problem has Alex and his team on edge. They do not see a solution and press Johan on how other plants handle bottlenecks. A bottleneck is any resource whose capacity is equal to or less than the demand placed upon it. Johan points out that most plants do not have bottlenecks; rather, they have excess capacity and therefore are not efficient. Efficient plants have and manage bottlenecks rather than over-investing in capacity. Johan points out that in order to increase the output of the overall process, only the capacity of the bottlenecks needs to be addressed. Increasing the capacity of the bottlenecked resource increases the throughput of the process.

Johan, Alex and his management staff adjourn to the plant to see the problems in action.  When they visit the robot, which is the first bottleneck in the process, it is idle. The staff went on a break before they completed the set up needed to manufacture parts for an order. Johan points out that any downtime on a bottlenecked resource can’t be made up later in the process, and any downtime limits the plant’s ability to produce completed product, which directly impacts the bottom line. As a group, they explore ideas to increase the capacity of the robot (bottleneck) ranging from changing how breaks are taken to re-commissioning the older machines that the robots replaced.

When the team reaches the second bottleneck, the heat-treating step, the problem of increasing capacity continues to plague the group. You just can't add extra heat-treating capacity quickly, due to the footprint of the department and the equipment needed. Johan quizzes the group on different ideas to increase the capacity of the step. He begins by asking whether all the parts that go through heat treating really require the step. They also discuss why they were padding out the batches to build inventory and decrease cost per unit of work. Doing work that is not needed (including building inventory of parts that will be used later) steals capacity that could be better used to relieve some of the pressure on the bottleneck. During the discussion Johan observes a pile of rejected parts that had been heat treated. He observes that using a bottlenecked resource to work on broken parts does not make sense. One solution is to move the parts inspection step before the heat-treating step, so that only good parts are treated; an effective increase in the capacity of the process. This is just like building code and then using an independent testing resource to find the problems just before it is scheduled for implementation.

The focus in Chapter 19 is to drive home the point that every time the capacity of a bottleneck is increased, more product can be shipped. The impact of a bottleneck is not the cost of an individual part, but the cost of the whole product that cannot be shipped.

Johan leaves the team with a reminder that wasted time on a bottleneck includes idle time, time working on defects and making parts that are not currently needed.

Chapter 20

Early the next morning Alex meets with his team to start planning how to implement the ideas they discussed the previous evening. Some of the changes, like moving the quality inspections before the bottlenecks, are relatively simple, while others, such as changing the union work rules, are more difficult. However, starting to implement the changes is more important than waiting until they all can be implemented. This is similar to the Agile approach of making small changes and gathering feedback, rather than big-bang approaches. Alex tells his team to shift priorities to make the latest job (most behind schedule) the top priority. While we might argue that prioritizing using the weighted shortest job first approach would better maximize the value delivered, the important message in the chapter is the shift of focus from step efficiency to maximizing product delivery.

The chapter ends with Alex reminding his team that the changes in process and the new work they are doing is of maximum importance.  Anything that takes their focus off moving forward, including reports for the home office, imperils their future.

Summary of The Goal so far:

Chapters 1 through 3 actively present the reader with a burning platform. The plant and division are failing. Alex Rogo has actively pursued increased efficiency and automation to generate cost reductions, however performance is falling even further behind and fear has become a central feature of the corporate culture.

Chapters 4 through 6 shift the focus from steps in the process to the process as a whole, moving us down the path of identifying the ultimate goal of the organization (in this book): making money, while embracing the big picture of systems thinking. In this section, the authors point out that we are often caught up with pursuing interim goals, such as quality, efficiency or even employment, to the exclusion of the ultimate goal. We are reminded by the burning platform identified in the first few pages of the book, the impending closure of the plant and perhaps the division, that in the long run an organization must make progress towards its ultimate goal, or it won’t exist.

Chapters 7 through 9 show Alex’s commitment to change as he seeks more precise advice from Johan, brings his closest reports into the discussion and begins a dialog with his wife (remember this is a novel). In this section of the book, the concept that “you get what you measure” is addressed. We see measures of efficiency being used at the level of part production, but not at the level of whole orders or even sales. We discover the corollary to the adage ‘you get what you measure’: if you measure the wrong thing, you get the wrong thing. We begin to see Alex’s urgency and commitment to make a change.

Chapters 10 through 12 mark a turning point in the book. Alex has embraced a more systems view of the plant and realized that the measures used to date have been focused on optimizing parts of the process to the detriment of the overall goal of the plant. What has not fallen into place is how to take that new knowledge and change how the plant works. The introduction of the concepts of dependent events and statistical variation begins to shift the conceptual understanding of what to measure towards how the management team can actually use that information.

Chapters 13 through 16 drive home the point that dependent events and statistical variation impact the performance of the overall system. In order for the overall process to be more effective, you have to understand the capability and capacity of each step and then take a systems view. These chapters establish the concepts of bottlenecks and constraints without directly naming them, and show that focusing on local optimums causes more trouble than benefit.

Chapters 17 through 18 introduce the concept of bottlenecked resources. The effect of the combination of dependent events and statistical variability flowing through bottlenecked resources makes delivery unpredictable and substantially more costly. The variability in flow through the process exposes bottlenecks that limit our ability to catch up, making projects and products late or, worse, generating technical debt when corners are cut in order to make the date or budget.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast. Dead Tree Version or Kindle Version


Categories: Process Management

Making Decisions in the Presence of Uncertainty

Herding Cats - Glen Alleman - Sat, 04/04/2015 - 15:57

If you're on a project that has certainty, then you're wasting your time estimating. If you are certain things are going to turn out the way you think they are when you have choices to make about the future, then estimating the impact of that choice is a waste of time.

If your future is clear as day far enough ahead that you can see what's going to happen long before you get there, estimating is a waste. If you live in Fantasyland you really don't need to estimate the impact of decisions made today for outcomes in the future.

Peter Pan and Tinker Bell will look after you and make sure nothing comes as a surprise.

If, however, you live in the real world of projects, then uncertainty is the dominant force driving your project.

Let's say some things about uncertainty. First, a tautology:

Uncertainties are things we can not be certain about

Uncertainty is created by our incomplete knowledge of the future, present, or past. Uncertainty is not about our ignorance of the future, past, or present. When we say uncertain, we speak about some state of knowledge of the system of interest that is not fixed or determined. If we are in fact ignorant of the future, then the only approach is to spend money to find things out before proceeding. Estimating is not needed in this case either. We can only estimate after we have acquired the needed knowledge. This knowledge will be probabilistic of course. But starting a project in the presence of ignorance of the future is itself a waste, unless those paying for the project are also willing to pay to discover things they should have known before starting. At that point it's not really a project - which has bounded time and scope.

So when you hear "we're exploring," ask first who's paying for that exploration. Is the exploration part of a plan for the project? Can that cost be counted in the cost of the project and therefore be burdened on the ROI of the project? Maybe finding someone who actually knows about the project domain and can define the uncertainties would be faster, better, and cheaper than hiring someone who knows little about what they're doing and is going to spend your money finding out.

This is one reason for a Past Performance section in every proposal we submit - tell me (the buyer) you guys actually know WTF you're doing and that you're not learning on my dime.

Back to Uncertainty

Uncertainty is related to three aspects of the project management domain:

  • The external world - the activities of the project.
  • Our knowledge of this world - the planned and actual behaviours, past, present, and future of the project.
  • Our perception of this world - the data and information we receive about these behaviours.

There are many sources of uncertainty, here's a few:

  • Lack of precision about the underlying uncertainty.
  • Lack of accuracy about the possible values in the uncertainty probability distributions.
  • Undiscovered biases used in defining the range of possible outcomes in the project's processes, technologies, staff, and other participants.
  • Unknowability of the range of probability distributions.
  • Absence of information about the probability distributions.

This project uncertainty creates Risk to the success of the project. Cost, Schedule, and Performance risk. Performance being the ability to deliver the needed capabilities in exchange for cost and schedule. There is a formal relationship between uncertainty and risk. 

  • Uncertainty is present when probabilities cannot be quantified in a rigorous or valid manner, but can be described as intervals within a probability distribution function (PDF).
  • Risk is present when the uncertainty of the outcome can be quantified in terms of probabilities or a range of possible values.
  • This distinction is important for modeling the future performance of cost, schedule, and technical outcomes of a project.

There are two types of uncertainty on all projects:

  • Aleatory - Pertaining to stochastic (non-deterministic) events, the outcome of which is described using probability.
    • From the Latin alea.
    • For example in a game of chance stochastic variabilities are the natural randomness of the process and are characterized by a probability density function (PDF) for their range and frequency.
    • Since these variabilities are natural they are therefore irreducible.
  • Epistemic (subjective or probabilistic) uncertainties are event based probabilities, are knowledge-based, and are reducible by further gathering of knowledge.
    • Pertaining to the degree of knowledge about models and their parameters.
    • From the Greek episteme (knowledge).

Separating these classes helps in design of assessment calculations and in presentation of results for the integrated program risk assessment.

Three Conditions for Aleatory Uncertainty

  • An aleatory model contains a single unknown parameter.
    • Duration
    • Cost
  • The prior information for this parameter is homogeneous and is known with certainty.
    • Reference Classes
    • Past Performance
    • Parametric models
  • The observed data are homogeneous and are known with certainty.
    • A set of information that is made up of similar constituents.
    • A homogeneous population is one in which each item is of the same type.

Aleatory Uncertainty can not be reduced - it is Irreducible

Epistemic Uncertainty

Epistemic Uncertainty is any lack of knowledge or information in any phase or activity of the project. This uncertainty and the resulting risks can be reduced through testing, modeling, past performance assessments, research, comparable systems, and other processes that increase the knowledge needed to reduce the uncertainty in the knowledge of the project outcomes.

Epistemic uncertainty can be further classified into model, phenomenological, and behavioural uncertainty. (in "Risk-informed Decision-making In The Presence Of Epistemic Uncertainty," Didier Dubois, Dominique Guyonnet, International Journal of General Systems 40, 2 (2011) 145-167)

Dealing with Aleatory and Epistemic Uncertainty

  • Epistemic uncertainty results from gaps in knowledge. For example, we can be uncertain of an outcome because we have never used a particular technology before. Such uncertainty is essentially a state of mind and hence subjective.
  • Aleatory uncertainty results from variability that is intrinsic to the behavior of some systems. For example, we can be confident regarding the long term frequency of throwing sixes but remain uncertain of the outcome of any given throw of the dice. This uncertainty can be objectively determined.

So Now The Punch Line

To manage in the presence of these two uncertainties, reducible and irreducible, we need to know something about what will happen when we make decisions to mitigate the risks that result from the uncertainties - the actions we need to take to reduce the reducible risks or provide margin for the irreducible ones.

We need to know the probability that our actions will produce desirable outcomes in the presence of these uncertainties, and the probability that the residual uncertainties will be sufficiently low that we still have sufficient confidence of success - defined in any units of measure you want. Ours are on time, on budget, on specification. You can pick your own, but please write them down in some units of measure meaningful to the decision makers.
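As a minimal sketch of what that looks like in practice (all numbers invented), we can sample the PDFs of the aleatory duration drivers and compute the confidence of an on-time outcome:

# Monte Carlo sketch: three sequential work packages whose durations have
# irreducible (aleatory) variability, modeled here with lognormal PDFs.
set.seed(42)
n <- 10000
total <- rlnorm(n, log(20), 0.25) +  # package durations in days
         rlnorm(n, log(15), 0.35) +
         rlnorm(n, log(30), 0.30)
mean(total <= 75)      # probability of finishing within a 75-day target
quantile(total, 0.80)  # the 80% confidence completion duration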

So Here it is, Wait for It

You can't make decisions in the presence of uncertainty unless you estimate the outcomes of these decisions, influenced by the probabilistic nature of the drivers of the uncertainties. 

Let's make it clear - You can't Make Decisions For Uncertain Outcomes Without Estimating

If anyone says you can, ask to see exactly how. They can't show you. Move on; they don't know what they're talking about. ∴

Related articles:
  • Managing in the Presence of Uncertainty
  • The Cost Estimating Problem
  • Calculating Value from Software Projects - Estimating is a Risk Reduction Process
  • Decision Analysis and Software Project Management
  • Distribution of random numbers
Categories: Project Management

Working with PaintCode and Interface Builder in XCode

Xebia Blog - Sat, 04/04/2015 - 08:00

Every self-respecting iOS developer should know about PaintCode by now, an OSX app for drawing graphics that don't save as images, but as lengths of code that draw graphics. The benefits of this are vastly reduced app installation size - no need to include three resolutions of the same image for every image - and seamlessly scalable graphics.

One thing that I personally struggled with for a while was how to use them effectively in combination with Interface Builder, the UI development tool for iOS and OSX apps. In this blog I will explain an effective and simple method to draw PaintCode graphics in a way where you can see what you're doing in Interface Builder, using the relatively new @IBDesignable annotation. I will also go into setting colors, and finally into how to deal with views that depend on dynamic runtime data to draw themselves.

First, let's create a simple graphic in PaintCode. Let's draw a nice envelope or email icon. Add a frame around it, and set the constraints to variable so it will adjust to the size of the frame (drag the frame corners around within PaintCode to confirm). This gives us a nice, scalable icon. Save the file, then export it to your Xcode project in the language of your choice.

Email icon in PaintCode

Squiggly lines are the best

 

Now, the easiest and most straightforward way to get a PaintCode image into your Interface Builder is to simply create a UIView subclass and call the relevant StyleKit method in its drawRect method. You can do this in either Swift or Objective-C; this example will be in Swift, but if you're stuck with the slow Swift compiler in Xcode 6.x, you might prefer Objective-C for this kind of simple and straightforward code.

To use it in Interface Builder, simply create a storyboard or xib, drag a UIView onto it, and change its Custom Class to your UIView subclass. Make sure to have your constraints set properly, and run - you should be able to see your custom icon in your working app.

class EmailIcon: UIView {
  override func drawRect(rect: CGRect) {
    // Draw the PaintCode-generated email graphic, scaled to this view's bounds.
    StyleKit.drawEmail(frame: rect)
  }
}
[Screenshot: the custom view shows up empty in Interface Builder]

Ghost views, the horror of Interface Builder

 

As you probably noticed though, it's far from practical as it is in Interface Builder right now - all you see (or don't see, really) is an empty UIView. I guess you could give it a background color in IB and unset that again in your code, but that's hardly practical either. Instead, we'll just slap an @IBDesignable annotation onto the UIView subclass. Going back to IB, it should start to compile code, and a moment later your PaintCode image shows up, resizable and all.

@IBDesignable
class EmailIcon: UIView {
  override func drawRect(rect: CGRect) {
    StyleKit.drawEmail(frame: rect)
  }
}
@IBDesignable view in Interface Builder

like magic

 

Adding color

We can configure our graphic in PaintCode to have a custom color. Just go back to PaintCode and change the Color to 'Parameter' - see image.

Set color to Parameter

Export the file again, and change the call to the StyleKit method to include the color. It's easiest in this case to just pass the UIView's own tintColor property. Going back to Interface Builder, we can now set the default tintColor property, and it should update in the IB preview instantly.

As an alternative, you can add an @IBInspectable color property to your UIView, but I would only recommend this for more complicated UIViews - if they include multiple StyleKit graphics, for example.

@IBDesignable
class EmailIcon: UIView {
  override func drawRect(rect: CGRect) {
    StyleKit.drawEmail(frame: rect, color: tintColor)
  }
}

let's make it Xebia purple, to please the boss

 

Dealing with dynamic properties

One case I had to deal with is an UIView that can draw a selection of StyleKit images, depending on the value of a dynamic model value. I considered a few options to deal with that:

  1. Set a default value, to be overridden at runtime. This is a bit dangerous though; forget to reset or unset this property at runtime and your user will be stuck with some default placeholder, or worse, flat-out wrong information.
  2. Make the property @IBInspectable. This only works with relatively simple values though (strings, numbers, colors), and it has the same problem as the above.

What I needed was a way to set a property, but only from within Interface Builder. Luckily, that exists, as I found out later. In UIView, there is a method called prepareForInterfaceBuilder, which does exactly what it says on the tin - override the method to set properties only relevant in Interface Builder. So in our case:

@IBDesignable
class EmailIcon: UIView {
  // none set by default
  var iconType: String? = nil

  override func drawRect(rect: CGRect) {
    if iconType == "email" {
      StyleKit.drawEmail(frame: rect, color: tintColor)
    }

    if iconType == "email-read" {
      StyleKit.drawEmailRead(frame: rect, color: tintColor)
    }

    // if none of the above, draw nothing
  }

  // draw the 'email' icon by default in Interface Builder
  override func prepareForInterfaceBuilder() {
    iconType = "email"
  }
}

And that's all there is to it. You can use this method to create all of your graphics, keep them dynamically sized and colored, and use the full power of Interface Builder and PaintCode combined.

Given Enough Money, All Bugs Are Shallow

Coding Horror - Jeff Atwood - Sat, 04/04/2015 - 00:58

Eric Raymond, in The Cathedral and the Bazaar, famously wrote:

Given enough eyeballs, all bugs are shallow.

The idea is that open source software, by virtue of allowing anyone and everyone to view the source code, is inherently less buggy than closed source software. He dubbed this "Linus's Law".

Insofar as it goes, I believe this is true. When only the 10 programmers who happen to work at your company today can look at your codebase, it's unlikely to be as well reviewed as a codebase that's public to the world's scrutiny on GitHub.

However, the Heartbleed SSL vulnerability was a turning point for Linus's Law, a catastrophic exploit based on a severe bug in open source software. How catastrophic? It affected about 18% of all the HTTPS websites in the world, and allowed attackers to view all traffic to these websites, unencrypted... for two years.

All those websites you thought were secure? Nope. This bug went unnoticed for two full years.

Two years!

OpenSSL, the library with this bug, is one of the most critical bits of Internet infrastructure the world has – relied on by major companies to encrypt the private information of their customers as it travels across the Internet. OpenSSL was used on millions of servers and devices to protect the kind of important stuff you want encrypted, and hidden away from prying eyes, like passwords, bank accounts, and credit card information.

This should be some of the most well-reviewed code in the world. What happened to our eyeballs, man?

In reality, it's generally very, very difficult to fix real bugs in anything but the most trivial Open Source software. I know that I have rarely done it, and I am an experienced developer. Most of the time, what really happens is that you tell the actual programmer about the problem and wait and see if he/she fixes it – Neil Gunton

Even if a brave hacker decides to read the code, they're not terribly likely to spot one of the hard-to-spot problems. Why? Few open source hackers are security experts. – Jeremy Zawodny

The fact that many eyeballs are looking at a piece of software is not likely to make it more secure. It is likely, however, to make people believe that it is secure. The result is an open source community that is probably far too trusting when it comes to security. – John Viega

I think there are a couple problems with Linus's Law:

  1. There's a big difference between usage eyeballs and development eyeballs. Just because you pull down some binaries in an RPM, or compile something in Linux, or even report bugs back to the developers via their bug tracker, doesn't mean you're doing anything at all to contribute to the review of the underlying code. Most eyeballs are looking at the outside of the code, not the inside. And while you can discover bugs, even important security bugs, through usage, the hairiest security bugs require inside knowledge of how the code works.

  2. The act of writing (or cut-and-pasting) your own code is easier than understanding and peer reviewing someone else's code. There is a fundamental, unavoidable asymmetry of work here. The amount of code being churned out today – even if you assume only a small fraction of it is "important" enough to require serious review – far outstrips the number of eyeballs available to look at the code. (Yes, this is another argument in favor of writing less code.)

  3. There are not enough qualified eyeballs to look at the code. Sure, the overall number of programmers is slowly growing, but what percent of those programmers are skilled enough, and have the right security background, to be able to audit someone else's code effectively? A tiny fraction.

Even if the code is 100% open source, utterly mission critical, and used by major companies in virtually every public facing webserver for customer security purposes, we end up with critical bugs that compromise everyone. For two years!

That's the lesson. If we can't naturally get enough eyeballs on OpenSSL, how does any other code stand a chance? What do we do? How do we get more eyeballs?

The short term answer was:

These are both very good things and necessary outcomes. We should be doing this for all the critical parts of the open source ecosystem people rely on.

But what's the long term answer to the general problem of not enough eyeballs on open source code? It's something that will sound very familiar to you, though I suspect Eric Raymond won't be too happy about it.

Money. Lots and lots of money.

Increasingly, companies are turning to commercial bug bounty programs. Either ones they create themselves, or run through third party services like Bugcrowd, Synack, HackerOne, and Crowdcurity. This means you pay per bug, with a larger payout the bigger and badder the bug is.

Or you can attend an event like Pwn2Own, where there's a yearly contest and massive prizes, as large as hundreds of thousands of dollars, for exploiting common software. Staging a big annual event means a lot of publicity and interest, attracting the biggest guns.

That's the message. If you want to find bugs in your code, in your website, in your app, you do it the old fashioned way: by paying for them. You buy the eyeballs.

While I applaud any effort to make things more secure, and I completely agree that security is a battle we should be fighting on multiple fronts, both commercial and non-commercial, I am uneasy about some aspects of paying for bugs becoming the new normal. What are we incentivizing, exactly?

Money makes security bugs go underground

There's now a price associated with exploits, and the deeper the exploit and the lesser known it is, the more incentive there is to not tell anyone about it until you can collect a major payout. So you might wait up to a year to report anything, and meanwhile this security bug is out there in the wild – who knows who else might have discovered it by then?

If your focus is the payout, who is paying more? The good guys, or the bad guys? Should you hold out longer for a bigger payday, or build the exploit up into something even larger? I hope for our sake the good guys have the deeper pockets, otherwise we are all screwed.

I like that Google addressed a few of these concerns by making Pwnium, their Chrome specific variant of Pwn2Own, a) no longer a yearly event but all day, every day and b) increasing the prize money to "infinite". I don't know if that's enough, but it's certainly going in the right direction.

Money turns security into a "me" goal instead of an "us" goal

I first noticed this trend when one or two people reported minor security bugs in Discourse, and then seemed to hold out their hand, expectantly. (At least, as much as you can do something like that in email.) It felt really odd, and it made me uncomfortable.

Am I now obligated, on top of providing a completely free open source project to the world, to pay people for contributing information about security bugs that make this open source project better? Believe me, I was very appreciative of the security bug reporting, and I sent them whatever I could, stickers, t-shirts, effusive thank you emails, callouts in the code and checkins. But open source isn't supposed to be about the money… is it?

Perhaps the landscape is different for closed-source, commercial products, where there's no expectation of quid pro quo, and everybody already pays for the service directly or indirectly anyway.

No Money? No Security.

If all the best security researchers are working on ever larger bug bounties, and every major company adopts these sorts of bug bounty programs, what does that do to the software industry?

It implies that unless you have a big budget, you can't expect to have great security, because nobody will want to report security bugs to you. Why would they? They won't get a payday. They'll be looking elsewhere.

A ransomware culture of "pay me or I won't tell you about your terrible security bug" does not feel very far off, either. We've had mails like that already.

Easy money attracts all skill levels

One unfortunate side effect of this bug bounty trend is that it attracts not just bona fide programmers interested in security, but anyone interested in easy money.

We've gotten too many "serious" security bug reports that were extremely low value. And we have to follow up on these, because they are "serious", right? Unfortunately, many of them are a waste of time, because …

  • The submitter is more interested in scaring you about the massive, critical security implications of this bug than actually providing a decent explanation of the bug, so you'll end up doing all the work.

  • The submitter doesn't understand what is and isn't an exploit, but knows there is value in anything resembling an exploit, so submits everything they can find.

  • The submitter can't share notes with other security researchers to verify that the bug is indeed an exploit, because they might "steal" their exploit and get paid for it before they do.

  • The submitter needs to convince you that this is an exploit in order to get paid, so they will argue with you about this. At length.

The incentives feel really wrong to me. As much as I know security is incredibly important, I view these interactions with an increasing sense of dread because they generate work for me and the returns are low.

What can we do?

Fortunately, we all have the same goal: make software more secure.

So we should view bug bounty programs as an additional angle of attack, another aspect of "defense in depth", perhaps optimized a bit more for commercial projects where there is ample money. And that's OK.

But I have some advice for bug bounty programs, too:

  • You should have someone vetting these bug reports, and making sure they are credible, have clear reproduction steps, and are repeatable, before we ever see them.

  • You should build additional incentives in your community for some kind of collaborative work towards bigger, better exploits. These researchers need to be working together in public, not in secret against each other.

  • You should have a reputation system that builds up so that only the better, proven contributors are making it through and submitting reports.

  • Encourage larger orgs to fund bug bounties for common open source projects, not just their own closed source apps and websites. At Stack Exchange, we donated to open source projects we used every year. Donating a bug bounty could be a big bump in eyeballs on that code.

I am concerned that we may be slowly moving toward a world where given enough money, all bugs are shallow. Money does introduce some perverse incentives for software security, and those incentives should be watched closely.

But I still believe that the people who will freely report security bugs in open source software because

  • It is the right thing to do™

and

  • They want to contribute back to open source projects that have helped them, and the world

… will hopefully not be going away any time soon.

Categories: Programming

How I met your mother: Story arcs

Mark Needham - Sat, 04/04/2015 - 00:31

After weeks of playing around with various algorithms to extract story arcs in How I met your mother I’ve come to the conclusion that I don’t yet have the skills to completely automate this process so I’m going to change my approach.

The new plan is to treat the outputs of the algorithms as suggestions for possible themes but then have a manual step where I extract what I think are interesting themes in the series.

A theme can consist of a single word or a phrase, and the idea is that once a story arc has been identified we'll search the corpus for the episodes where that phrase occurs.

We can then generate a CSV file of (story arc) -> (episodeId), store that into our HIMYM graph and use the story arc as another factor for episode similarity.

I ended up with the following script to work out which episodes contained a story arc:

#!/bin/bash
 
# Print one "arc,episodeId" line for every episode whose transcript
# sentences match the given search term (case-insensitive regex).
find_term() {
  arc=${1}
  searchTerm=${2}
  # the episode id is the second comma-separated field of sentences.csv
  episodes=$(grep --color -iE "${searchTerm}" data/import/sentences.csv | awk -F"," '{ print $2  }' | sort | uniq)
  for episode in ${episodes}; do
    echo ${arc},${episode}
  done
}
 
find_term "Bro Code" "bro code"
find_term "Legendary" "legen(.*)ary"
find_term "Slutty Pumpkin" "slutty pumpkin"
find_term "Magician's Code" "magician's code"
find_term "Thanksgiving" "thanksgiving"
find_term "The Playbook" "playbook"
find_term "Slap Bet" "slap bet"
find_term "Wrestlers and Robots" "wrestlers"
find_term "Robin Sparkles" "sparkles"
find_term "Blue French Horn" "blue french horn"
find_term "Olive Theory" "olive"
find_term "Thank You Linus" "thank you, linus"
find_term "Have you met...?" "have you met"
find_term "Laser Tag" "laser tag"
find_term "Goliath National Bank" "goliath national bank"
find_term "Challenge Accepted" "challenge accepted"
find_term "Best Man" "best man"

If we run this script we’ll see something like the following:

$ ./scripts/arcs.sh
Bro Code,14
Bro Code,155
Bro Code,170
Bro Code,188
Bro Code,201
Bro Code,61
Bro Code,64
Legendary,117
Legendary,120
Legendary,122
Legendary,136
Legendary,137
Legendary,15
Legendary,152
Legendary,157
Legendary,162
Legendary,171
...
Best Man,208
Best Man,30
Best Man,32
Best Man,41
Best Man,42

I pulled out these themes by eyeballing the output of the following scripts:

  • TF/IDF – calculates TF/IDF scores for ngrams. This helps find important themes in the context of a single episode. I then did some manual searching to see how many of those themes existed in other episodes
  • Weighted Term Frequency – this returns a weighted term frequency for ngrams of different lengths. The weights are determined by the skewed random discrete distribution I wrote about earlier in the week. I ran it with different skews and ngram lengths.
  • Named entity extraction – this pulls out any phrases that are named entities. It mostly pulled out names of people (which I used as a stop word list in some other algorithms) but also revealed a couple of themes.
  • Topic modelling – I used mallet to extract topics across the corpus. Most of them didn’t make much sense to me but there were a few which identified themes that I recognised.

I can't remember off the top of my head whether any obvious themes have been missed, so if you know HIMYM better than I do, let me know and I'll try to work out why those didn't surface.

Next I want to see how these scripts fare against some other TV shows and see how quickly I can extract themes for those. It’d also be cool if I can make the whole process a bit more automated.

Categories: Programming

Do You Have Questions About Estimation?

I am doing a google hangout with Marcus Blankenship on April 10. We’ll be talking about estimation and my new book, Predicting the Unpredictable: Pragmatic Approaches to Estimating Cost or Schedule.

The book is about ways you can estimate and explain your estimates to the people who want to know. It also has a number of suggestions for when your estimates are inaccurate. (What a surprise!)

Marcus and I are doing a google hangout, April 10, 2015. There's only room for 15 people on the hangout live. If you want me to answer your question, go ahead and send your question in advance by email. Send your questions to marcus at marcusblankenship dot com. Yes, you can send your questions to me, and I will forward to Marcus.

The details you’ll need to attend:

Hangout link: https://plus.google.com/events/c3n6qpesrlq5b8192tkici26lcc

Youtube live streaming link: http://www.youtube.com/watch?v=82IXhj4oI0U

Time & Date: April 10th, 2015 @ 11:30am Pacific, 2:30 Eastern.

Hope you join us!

Categories: Project Management

Enable your messaging app for Android Auto

Android Developers Blog - Fri, 04/03/2015 - 18:53

Posted by Joshua Gordon, Developer Advocate

What if there was a way for drivers to stay connected using your messaging app, while keeping their hands on the wheel and eyes on the road?

Android Auto helps drivers stay connected, but in a more convenient way that's integrated with the car. It eliminates the need to type and read messages by replacing these activities with a voice controlled interface.

Enabling your messaging app to work with Android Auto is easy. Developers like Skype and textPlus have already done so. Check out this DevByte for an overview of the messaging APIs, and see the developer training guide for a deep dive. Read on for a look at the key steps involved.


Message notifications on the car’s display

When an Android 5.0+ phone is connected to a compatible car, users receive incoming message notifications from Auto-enabled apps on the car’s head unit display. Your app runs on the phone, but is controlled by the car. To learn more about how this works, watch the Introduction to Android Auto DevByte.

A new message notification from Skype

If your app already uses notifications to alert the user to incoming messages, it’ll be easy to extend these for Auto. It takes just a few lines of code, and you won’t have to change how your app works on the phone.

There are a couple of small differences between message notifications on Auto and on a phone. First, on Auto a preview of the message content isn't shown, because messaging is driven entirely by voice. Second, message notifications are backed by a conversation object. This is simply a collection of unread messages from a particular sender.

Decorate your notification with the CarExtender to add support for the car. Next, use the UnreadConversation.Builder to create a conversation, and populate it by iterating over your app's unread messages (from a certain sender) and adding them to the conversation. Pass your conversation object to the CarExtender, and you’re done!
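
As a sketch of those steps (the helper name, sender, icon, and readIntent are placeholders, not part of any API), building the conversation and decorating the notification might look like this:

import android.app.PendingIntent;
import android.content.Context;
import android.support.v4.app.NotificationCompat;
import android.support.v4.app.NotificationCompat.CarExtender;
import android.support.v4.app.NotificationCompat.CarExtender.UnreadConversation;

import java.util.List;

public class CarNotifications {
    // Build a car-ready notification from one sender's unread messages.
    // All names (sender, icon, readIntent) are placeholders for illustration.
    static NotificationCompat.Builder notifyForCar(Context context,
                                                   PendingIntent readIntent,
                                                   String sender,
                                                   List<String> unreadMessages) {
        UnreadConversation.Builder conversation =
                new UnreadConversation.Builder(sender)
                        .setReadPendingIntent(readIntent) // fired when the user hears the messages
                        .setLatestTimestamp(System.currentTimeMillis());

        // Populate the conversation by iterating over this sender's unread messages.
        for (String message : unreadMessages) {
            conversation.addMessage(message);
        }

        // Decorate an otherwise normal notification with the CarExtender.
        return new NotificationCompat.Builder(context)
                .setSmallIcon(android.R.drawable.sym_action_email) // placeholder icon
                .setContentTitle(sender)
                .extend(new CarExtender().setUnreadConversation(conversation.build()));
    }
}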

Tap to hear messages

Tapping on a message notification plays it back on the car's sound system, via text to speech. This is handled automatically by the framework; no additional code is required. Pretty cool, right?

In order to know when the user hears a message, you provide a PendingIntent that’s triggered by the system. That’s one of just two intents you’ll need to handle to enable your app for Auto.

Reply by voice

Voice control is the real magic of Android Auto. Users reply to messages by speaking, via voice recognition. This is far faster and more natural than typing.

Enabling this functionality is as simple as adding a RemoteInput instance to your conversation objects, before you issue the notification. Speech recognition is handled entirely by the framework. The recognition result is delivered to your app as a plain text string via a second PendingIntent.
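
A sketch of both halves (the extra key, label, and replyIntent are placeholders): attaching the RemoteInput before issuing the notification, then reading the recognized text back out of the Intent the framework delivers:

import android.app.PendingIntent;
import android.content.Intent;
import android.os.Bundle;
import android.support.v4.app.NotificationCompat.CarExtender.UnreadConversation;
import android.support.v4.app.RemoteInput;

public class VoiceReplies {
    static final String EXTRA_VOICE_REPLY = "extra_voice_reply"; // placeholder key

    // Attach a voice reply action to a conversation being built.
    static void addVoiceReply(UnreadConversation.Builder conversation,
                              PendingIntent replyIntent) {
        RemoteInput voiceInput = new RemoteInput.Builder(EXTRA_VOICE_REPLY)
                .setLabel("Reply by voice") // speech recognition is handled by the framework
                .build();
        conversation.setReplyAction(replyIntent, voiceInput);
    }

    // In the component that receives replyIntent: pull out the recognized text.
    static CharSequence getReplyText(Intent intent) {
        Bundle results = RemoteInput.getResultsFromIntent(intent);
        return results != null ? results.getCharSequence(EXTRA_VOICE_REPLY) : null;
    }
}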

Replying to a message from textPlus by voice.

Next Steps

Make your messaging app more natural to use in the car by enabling it for Android Auto. Now drivers can stay connected, without typing or reading messages. It just takes a few lines of code. To learn more, visit developer.android.com/auto and join the discussion on +Android Developers.
Categories: Programming

Stuff The Internet Says On Scalability For April 3rd, 2015

Hey, it's HighScalability time:


Luscious SpaceX photos have been launched under Creative Commons.
  • 1,000: age of superbug treatment; 18 million: number of laws in the US
  • Quotable Quotes:
    • @greenberg: Only in the Bay Area would you find a greeting card for closing a funding round.
    • @RichardWarburto: "Do Not Learn Frameworks. Learn the Architecture"
    • Alex Dzyoba: Know your data and develop a simple algorithm for it.
    • @BenedictEvans: Akamai: 17% of US mobile connections are >4 Mbps. Most of the rest of the developed world is over 50%
    • Linus: Linux is just a hobby, won’t be big and professional like GNU
    • jhugg: This just lines up with what we've seen in the KV space over the last 5 years. Mutating data and key-lookup are all well and good, but without a powerful query language and real index support, it's much less interesting.
    • Facebook: Whatever the scale of your engineering organization, developer efficiency is the key thing that your infrastructure teams should be striving for. This is why at Facebook we have some of our top engineers working on developer infrastructure.
    • mysticreddit: Micro-optimization is a complete waste of time when you haven't spent time focusing on the meta & macro optimization
    • @adriancolyer: If you think cross-partition transactions can't scale, it's well worth taking a look at the RAMP model: 
    • @jasongorman: Microservices are a great solution to a problem you probably don't have
    • @dbrady: If 1 service dies and your whole system breaks, you don't have SOA. You have a monolith whose brain has been chopped up and stuck in jars.

  • Fascinating realization. We live in a world in which every tech interaction is subject to a man-in-the-middle attack. Future Crimes: All of this is possible because the screens on our phones show us not reality but a technological approximation of it. Because of this, not only can the caller ID and operating system on a mobile device be hacked, but so too can its other features, including its GPS modules. That’s right, even your location can be spoofed.

  • That's every interaction. Pin-pointing China's attack against GitHub: The way the attack worked is that some man-in-the-middle device intercepted web requests coming into China from elsewhere in the world, and then replaced the content with JavaScript code that would attack GitHub. 

  • Messaging and mobile platforms: If you take all of this together, it looks like Facebook is trying not to compete with other messaging apps but to relocate itself within the landscape of both messaging and the broader smartphone interaction model. 

  • Martin Thompson: Love the point that the compiler can only solve problems in the 1-10% problem space. The 90% problem space is our data access which is all about data structures and algorithms. The summary is he shows how instruction processing can be dwarfed by cache misses. This resonates for me with what I've seen in the field with customers in the high-performance space. Obvious caveat is applications where time is dominated by IO.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

String and StringBuilder revisited

Phil Trelford's Array - Fri, 04/03/2015 - 00:03

I came across a topical .Net article by Dave M Bush published towards the tail end of 2014 entitled String and StringBuilder where he correctly asserts that .Net's built-in string type is a reference type and immutable. All good so far.

The next assertion is that StringBuilder will be faster than simple string concatenation when adding more than 3 strings together, which is probably a pretty good guess, but let's put it to the test with 4 strings.

The test can be performed easily using F# interactive (built-in to Visual Studio) with the #time directive:

open System.Text

#time

let a = "abc"
let b = "efg"
let c = "hij"
let d = "klm"

for i = 1 to 1000000 do
   let e = StringBuilder(a)
   let f = e.Append(b).Append(c).Append(d).ToString() 
   ()
// Real: 00:00:00.317, CPU: 00:00:00.343, GC gen0: 101, gen1: 0, gen2: 0
   
for i = 1 to 1000000 do
   let e = System.String.Concat(a,b,c,d)
   ()
// Real: 00:00:00.148, CPU: 00:00:00.156, GC gen0: 36, gen1: 0, gen2: 0

What we actually see is that for concatenating 4 strings StringBuilder takes twice as long as String.Concat (on this run 0.317s vs 0.148s) and generates approximately 3 times as much garbage (gen0: 101 vs gen0: 36)!

Underneath the hood the StringBuilder creates an array to append the strings into. When appending, if the current buffer capacity is exceeded (the default is 16 characters), a new, larger array must be allocated. When ToString is called it may, based on a heuristic, decide to return the builder's array or allocate a new array and copy the value into that. Therefore the performance of StringBuilder depends on the initial capacity of the builder and on the number and lengths of the strings to append.

In contrast, String.Concat (which the compiler resolves the '+' operator to) calculates the length of the concatenated string from the lengths of the passed-in strings, then allocates a string of the required size and copies the values in; ergo, in many scenarios it will require less copying and less allocation.

When concatenating 2, 3 or 4 strings we can take advantage of String.Concat’s optimized overloads, after this the picture changes as an array argument must be passed which requires an additional allocation. However String.Concat may still be faster than StringBuilder in some scenarios where the builder requires multiple reallocations.
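
The same capacity trade-off applies on other managed runtimes. As a minimal illustration (shown here in Java rather than .Net, purely as an analogue), pre-sizing the builder to the known final length avoids the intermediate reallocations described above:

public class PreSizedBuilder {
    public static void main(String[] args) {
        String a = "abc", b = "efg", c = "hij", d = "klm";
        // Pre-size the buffer to the exact final length so appending never
        // triggers an intermediate array reallocation and copy.
        StringBuilder sb = new StringBuilder(
                a.length() + b.length() + c.length() + d.length());
        String result = sb.append(a).append(b).append(c).append(d).toString();
        System.out.println(result); // abcefghijklm
    }
}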

But wait, there's more: going back to the '+' operator, if we write the integer literal expression 1 + 2 + 3 the compiler can reduce the value to 6; equally, if we define the strings as const string then the compiler can apply the string concatenation at compile time leading to, in this contrived example, no cost whatsoever.

The moral of the story is when it comes to performance optimization - measure, measure, measure.

Categories: Programming

Other Uses for Personas

An archetypal user of dresses.

A persona represents an archetypal user who interacts with a system or product. Typically the first introduction a developer or product owner has to the concept is when writing user stories in the form <persona><goal><benefit>. If personas were only used in user stories they would still be interesting; however, they have several other uses:

  1. Identifying requirements. Personas provide teams with a platform for developing scenarios that incorporate both who will use the system or product and how it will be used. Scenarios delineate purpose-driven interactions between a persona (or personas) and the product or system. The expectations, motivations and needs defined as part of a persona give team members the information needed to playact solving problems as if they were an archetypal user. This allows the team to mirror real content and usage without tying the scenario to a specific system solution.
  2. Guiding decisions. As the development process moves from user story to delivered functionality, teams will make innumerable decisions. Personas help teams make better decisions by providing them with a tool to ground or justify a decision. I recently watched a team use a set of personas to make a decision about how they would structure a user survey. The team asked themselves how each persona would react to the design choice being considered. Several tweaks were adopted based on how the team felt the personas would react. The team in question keeps a copy of each persona on the wall in their team room and references the personas as a matter of course. A secondary impact of using a common set of personas to guide decisions is for the team to align around the common goal of being user focused.
  3. Identifying releases. While there are a wide variety of release strategies, including continuous deployment and minimum viable products, teams and product owners often struggle with grouping functions and features for release. Personas can be used to identify which features and functions are related so that a release can be used to solve a persona's (or group of personas') specific needs. I often use personas and the scenarios used to generate requirements to identify a minimum viable product. Once a release candidate is identified I ask the team whether the group of functions being considered could be used by a specific persona and then whether that use would generate useful feedback. If the answer is yes for one part but no for another, the release can be trimmed down to just the yes part. Alternately, if the answer is no because we need something else, I generally start the campaign to increase the prioritization of that "something else."
  4. Facilitating communication. All projects require well-thought-out communications. Personas provide the team with the information needed to plan communications. Jo Ann Sweeney, author of the Explaining Change column on the Software Process and Measurement Cast, strongly suggests that knowing your audience is a critical step in getting communications right (albeit she says it with a British accent). Personas are a tool to define your audience. The details of the persona's needs, motivation and backstory provide a wealth of information needed to shape and develop a communication plan.

Personas are not just for user stories. Personas represent a lot of valuable information about who will use the system or product and the intrinsic benefit they expect. The information documented is useful across the entire development life cycle. A simple way of reminding a team that it really isn't just about the system or product is to tape the personas to the project team room wall, creating information radiators that remind the team that the users have an identity.


Categories: Process Management

FlatBuffers 1.1: a memory-efficient serialization library

Google Code Blog - Thu, 04/02/2015 - 17:51

Posted by Wouter van Oortmerssen, Fun Propulsion Labs at Google*

Originally posted to the Google Open Source blog

After months in development, the FlatBuffers 1.1 update is here. Originally released in June 2014, it’s a highly efficient open source cross-platform serialization library that allows you to read data without parsing/unpacking or allocating additional memory. It supports schema evolution (forwards/backwards compatibility) and optional JSON conversion. We primarily created it for games written in C++ where performance is critical, but it’s also useful more broadly. This update brings:

  • an extensive overhaul to the Java API
  • out-of-the-box support for C# and Go
  • an optional verifier to make FlatBuffers practical in untrusted scenarios
  • .proto parsing for easier migration from Protocol Buffers
  • optional manual assignment of field IDs
  • dictionary functionality through binary search on a key field
  • bug fixes and other improvements thanks to 200+ commits from 28 contributors -- thank you!

Download the latest release from our github page and join our discussion list for more details.

*Fun Propulsion Labs is a team within Google that's dedicated to advancing gaming on Android and other platforms.

Categories: Programming

Decision Analysis and Software Project Management

Herding Cats - Glen Alleman - Thu, 04/02/2015 - 16:54

An announcement came across in email today from AACEI (Association for the Advancement of Cost Engineering International) about a Denver meeting on Decision Analysis.

Decision Analysis provides ways to incorporate judgments about uncertainty into the evaluation of alternatives. Cost professionals using these methods can provide more credible analyses. The foundation calculation is expected value, usually solved with a decision tree or by Monte Carlo simulation. A formal definition is:

Decision analysis is the discipline comprising the philosophy, theory, methodology, and professional practice necessary to address important decisions in a formal manner.

In project management, and especially in the software development project management domain, decisions are always made in the presence of uncertainty. Uncertainty is always in place for the three core elements of any project, shown below:

[Figure: the three core elements of a project]

In order to make decisions about future outcomes of a project subject to these uncertainties, we need to not only know how these three variables randomly interact, but also how they behave as standalone processes. This behaviour - in the presence of uncertainty - has two types:

  • Event based behaviour - the probability that something will or will not happen in the future.
  • Naturally occurring random behaviour - the Probability Distribution Function that describes the possible outcomes.

Both these behaviours are present in all project work. If they are not present, the project is likely simple enough that you can just start working, confident of completing on time and on budget with a working outcome.
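
To make these two behaviours concrete, here is a minimal Monte Carlo sketch (all the numbers are invented for illustration): task cost is drawn from a naturally occurring triangular distribution, an event-based rework risk fires with some probability, and the expected value emerges from the trials.

import java.util.Random;

public class CostSimulation {
    public static void main(String[] args) {
        Random rng = new Random(42);
        int trials = 100_000;
        double total = 0;
        for (int i = 0; i < trials; i++) {
            // Naturally occurring random behaviour: task cost modelled by a
            // triangular distribution (min 80, most likely 100, max 150).
            double cost = triangular(rng, 80, 100, 150);
            // Event-based behaviour: a rework risk that occurs with
            // probability 0.2 and adds 40 units of cost when it fires.
            if (rng.nextDouble() < 0.2) {
                cost += 40;
            }
            total += cost;
        }
        // The expected value is the average over all simulation runs.
        System.out.printf("Expected cost over %d trials: %.1f%n", trials, total / trials);
    }

    // Sample a triangular distribution via inverse-CDF sampling.
    static double triangular(Random rng, double min, double mode, double max) {
        double u = rng.nextDouble();
        double cut = (mode - min) / (max - min);
        if (u < cut) {
            return min + Math.sqrt(u * (max - min) * (mode - min));
        }
        return max - Math.sqrt((1 - u) * (max - min) * (max - mode));
    }
}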

When neither the behaviours nor their interactions are deterministic, then to make decisions we can do only one thing.

We need to estimate these behaviours

When it is suggested that we can make decisions in the presence of uncertainty, yet no method is provided for making those decisions, the suggestion can't be true.

Recording past performance and then taking the average of that performance as an estimate of future performance is seriously flawed, as discussed in How to Avoid Yesterday's Weather Estimating Problem.

So do the math: random variables abound in our project domain, be it software, hardware, pouring concrete or welding steel - it's all random. Don't fall prey to the simple-minded statement we're exploring ways to make decisions without estimates without asking the direct question: show me how that nonexistent description does not violate the principles of Decision Analysis and the microeconomics of software development decision making.

Related articles

  • Calculating Value from Software Projects - Estimating is a Risk Reduction Process
  • Five Estimating Pathologies and Their Corrective Actions
  • The Cost Estimating Problem
  • Why We Need Governance
Categories: Project Management

Android UI Automated Testing

Google Testing Blog - Thu, 04/02/2015 - 16:51
by Mona El Mahdy

Overview

This post reviews four strategies for Android UI testing with the goal of creating UI tests that are fast, reliable, and easy to debug.

Before we begin, let's not forget an important rule: whatever can be unit tested should be unit tested. Robolectric and Gradle unit test support are great examples of unit test frameworks for Android. UI tests, on the other hand, are used to verify that your application returns the correct UI output in response to a sequence of user actions on a device. Espresso is a great framework for running UI actions and verifications in the same process. For more details on the Espresso and UI Automator tools, please see: test support libraries.
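
For flavor, a minimal Espresso interaction might look like the following sketch (the view id R.id.send_button and the text "Sent" are hypothetical): one user action followed by one UI verification, with Espresso synchronizing against the UI thread for us.

import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.isDisplayed;
import static android.support.test.espresso.matcher.ViewMatchers.withId;
import static android.support.test.espresso.matcher.ViewMatchers.withText;

public class SendMessageTest {
    // Click the (hypothetical) send button, then verify the confirmation
    // text appears; both run in the same process as the app under test.
    public void testSendShowsConfirmation() {
        onView(withId(R.id.send_button)).perform(click());
        onView(withText("Sent")).check(matches(isDisplayed()));
    }
}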

The Google+ team has performed many iterations of UI testing. Below we discuss the lessons learned during each strategy of UI testing. Stay tuned for more posts with more details and code samples.

Strategy 1: Using an End-To-End Test as a UI Test

Let’s start with some definitions. A UI test ensures that your application returns the correct UI output in response to a sequence of user actions on a device. An end-to-end (E2E) test brings up the full system of your app, including all backend servers and the client app. E2E tests will guarantee that data is sent to the client app and that the entire system functions correctly.

Usually, in order to make the application UI functional, you need data from backend servers, so UI tests need to simulate the data but not necessarily the backend servers. In many cases UI tests are confused with E2E tests because E2E is very similar to manual test scenarios. However, debugging and stabilizing E2E tests is very difficult due to many variables like network flakiness, authentication against real servers, size of your system, etc.


When you use UI tests as E2E tests, you face the following problems:
  • Very large and slow tests. 
  • High flakiness rate due to timeouts and memory issues. 
  • Hard to debug/investigate failures. 
  • Authentication issues (e.g., authenticating from automated tests is very tricky).

Let’s see how these problems can be fixed using the following strategies.

Strategy 2: Hermetic UI Testing using Fake Servers

In this strategy, you avoid network calls and external dependencies, but you need to provide your application with data that drives the UI. Update your application to communicate to a local server rather than external one, and create a fake local server that provides data to your application. You then need a mechanism to generate the data needed by your application. This can be done using various approaches depending on your system design. One approach is to record server responses and replay them in your fake server.
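
As an illustration of the record-and-replay idea (all names here are hypothetical), the heart of a fake server can be as simple as a map from request path to a previously recorded response body:

import java.util.HashMap;
import java.util.Map;

// A toy record-and-replay store for a fake local server; a real fake would
// also model status codes, headers, and request parameters.
public class RecordedResponses {
    private final Map<String, String> responsesByPath = new HashMap<>();

    // Called while recording against the real server.
    public void record(String path, String responseBody) {
        responsesByPath.put(path, responseBody);
    }

    // Called by the fake server when the app under test makes a request.
    public String replay(String path) {
        String body = responsesByPath.get(path);
        if (body == null) {
            throw new IllegalStateException("No recorded response for " + path);
        }
        return body;
    }
}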

Once you have hermetic UI tests talking to a local fake server, you should also have server hermetic tests. This way you split your E2E test into a server side test, a client side test, and an integration test to verify that the server and client are in sync (for more details on integration tests, see the backend testing section of this blog).

Now, the client test flow looks like:


While this approach drastically reduces the test size and flakiness rate, you still need to maintain a separate fake server as well as your test. Debugging is still not easy as you have two moving parts: the test and the local server. While test stability will be largely improved by this approach, the local server will cause some flakes.

Let’s see how this could be improved...

Strategy 3: Dependency Injection Design for Apps

To remove the additional dependency of a fake server running on Android, you should use dependency injection in your application for swapping real module implementations with fake ones. One example is Dagger, or you can create your own dependency injection mechanism if needed.

This will improve the testability of your app for both unit testing and UI testing, providing your tests with the ability to mock dependencies. In instrumentation testing, the test apk and the app under test are loaded in the same process, so the test code has runtime access to the app code. Not only that, but you can also use classpath override (the fact that the test classpath takes priority over the app under test) to override a certain class and inject test fakes there. For example, to make your test hermetic, your app should support injection of the networking implementation. During testing, the test injects a fake networking implementation into your app, and this fake implementation provides seeded data instead of communicating with backend servers.
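
Here is a minimal sketch of such an injection seam, with hypothetical names throughout: the app depends only on an interface, and the test supplies a fake implementation that returns seeded data without touching the network.

// The seam the app codes against.
interface NetworkClient {
    String get(String path);
}

// Injected in tests: returns seeded data and never touches the network.
class FakeNetworkClient implements NetworkClient {
    @Override
    public String get(String path) {
        return "{\"items\": []}"; // canned payload for a hermetic test
    }
}

// Feature code receives the client via constructor injection, so a test
// (or a Dagger module) can swap in the fake implementation.
class FeedLoader {
    private final NetworkClient client;

    FeedLoader(NetworkClient client) {
        this.client = client;
    }

    String loadFeed() {
        return client.get("/feed");
    }
}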


Strategy 4: Building Apps into Smaller Libraries

If you want to scale your app into many modules and views, and plan to add more features while maintaining stable and fast builds/tests, then you should build your app into small components/libraries. Each library should have its own UI resources and use dependency management. This strategy not only enables mocking dependencies of your libraries for hermetic testing, but also serves as an experimentation platform for various components of your application.

Once you have small components with dependency injection support, you can build a test app for each component.

The test apps bring up the actual UI of your libraries, fake data needed, and mock dependencies. Espresso tests will run against these test apps. This enables testing of smaller libraries in isolation.

For example, let’s consider building smaller libraries for login and settings of your app.


The settings component test now looks like:


Conclusion

UI testing can be very challenging for rich apps on Android. Here are some UI testing lessons learned on the Google+ team:
  1. Don‚Äôt write E2E tests instead of UI tests. Instead write unit tests and integration tests beside the UI tests. 
  2. Hermetic tests are the way to go. 
  3. Use dependency injection while designing your app. 
  4. Build your application into small libraries/modules, and test each one in isolation. You can then have a few integration tests to verify that integration between components is correct. 
  5. Componentized UI tests have proven to be much faster than E2E and 99%+ stable. Fast and stable tests have proven to drastically improve developer productivity.

Categories: Testing & QA