
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

SPaMCAST 438 - Size for Testers, Organizations as Systems, Problem Solving

Software Process and Measurement Cast - Mon, 04/17/2017 - 01:23

The Software Process and Measurement Cast 438 features our essay on leveraging sizing in testing. Size can be a useful tool for budgeting and planning both at the portfolio level and the team level.

Gene Hughson brings his Form Follows Function Blog to the cast this week to discuss his recent blog entry titled, Organizations as Systems and Innovation. One of the highlights of the conversation is whether emergence is a primary factor driving change in a complex system.

Our third column is from the Software Sensei, Kim Pries. Kim discusses why blindly accepting canned solutions does not negate the need for active troubleshooting of problems in software development.

Re-Read Saturday News

This week, we tackle chapter 1 of Holacracy: The New Management System for a Rapidly Changing World by Brian J. Robertson, published by Henry Holt and Company in 2015. Chapter 1 is titled "Evolving Organization." Holacracy is an approach to address shortcomings that have appeared as organizations evolve. Holacracy is not a silver bullet, but rather provides a stable platform for identifying and addressing problems efficiently.

Visit the Software Process and Measurement Cast blog to participate in this and previous re-reads.

Next SPaMCAST

The next Software Process and Measurement Cast will feature our interview with Alex Yakyma.  Our discussion focused on the industry's broken mindset that prevents it from being Lean and Agile.  A powerful and possibly controversial interview.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: "This book will prove that software projects should not be a tedious process, for you or your team." Support SPaMCAST by buying the book here. Available in English and Chinese.

Categories: Process Management


Holacracy: Re-read Week 2, Chapter 1 Evolving Organization


Holacracy

This week, we tackle chapter 1 of Holacracy: The New Management System for a Rapidly Changing World by Brian J. Robertson published by Henry Holt and Company in 2015. Holacracy is an approach to address the shortcomings that have appeared as organizations evolve. Holacracy is not a silver bullet, but rather provides a stable platform for identifying and addressing problems efficiently.

Part One: Evolution at work: Introducing Holacracy

Chapter 1: Evolving Organization

One of the most powerful points made in this chapter is that humans look for what could be; we can imagine a future and see the need to change. The ability to see past the "now" generates our ability to respond and evolve our organizations and institutions. When imagining the future of an organization, it is rarely effective to rely on the perception of a single person. Because we sense, then filter and sometimes dismiss what we perceive, multiple perspectives are critical.

Most mature organizations use a classic organization model from early last century. The model centers on the ability to predict and control. Centralized control and the prevention of deviation are core attributes of this model and reflect the Industrial Age in which the model evolved. Today's business conditions are less oriented toward manufacturing and more dynamic. The tension between the need for change and a structure built to prevent deviation causes friction.

Overlaying leading-edge (or even closely following-edge) ideas and techniques requires rewriting the basic infrastructure. If the basic premise of the predict-and-control organization is not changed, an enormous amount of time and energy will be wasted as the old and new paradigms struggle for supremacy. This struggle does not deliver value to any of the organization's stakeholders. Robertson uses the metaphor of a PC operating system (OS). In today's computer environment, the operating system should enable the functionality of systems and apps and should be invisible to those using it. It is only when the OS is out of date or broken that people's awareness of the OS is raised. Rarely is changing a part of the OS and leaving the rest intact advisable or even possible.

Holacracy includes:

  1. A constitution to provide a framework for structuring an organization;
  2. A new way to structure an organization and to define people's roles;
  3. A decision-making process for updating those roles; and
  4. A meeting process for keeping teams in sync.

Robertson concludes the chapter with what I feel is the second major point in the chapter. Holacracy is a guide rather than a cookbook with a fixed set of ideas and principles. This suggests that every organization will need to use its unique set of filters to interpret holacracy.

Transformation Thoughts: Changing a small part of an organization's overall management model will cause large amounts of friction. Embracing holacracy is a scenario in which a Big Bang change makes sense. Re-writing the whole management model / operating system will help to overwhelm the organization's change antibodies.

Team Coaching Thought: Recognize the human capacity to look forward and imagine the future. Techniques like team planning can harness and guide this innate capability. Plans that are not transparent to the team will often lead team members to envision a future that is at odds with the overall intent of the team, generating friction and reducing the team's ability to deliver value.

Remember to buy a copy of Holacracy (use the link in the show notes to help support and defray the costs of the Software Process and Measurement Cast blog and podcast).

Previous Entries:

Week 1:  Logistics and Introduction


Categories: Process Management

Java 8 Language Features Support Update

Android Developers Blog - Fri, 04/14/2017 - 21:00
Posted by James Lau, Product Manager

Yesterday, we released Android Studio 2.4 Preview 6. Java 8 language features are now supported by the Android build system in the javac/dx compilation path. Android Studio's Gradle plugin now desugars Java 8 class files to Java 7-compatible class files, so you can use lambdas, method references and other features of Java 8.

For those of you who tried the Jack compiler, we now support the same set of Java 8 language features but with faster build speed. You can use Java 8 language features together with tools that rely on bytecode, including Instant Run. Using libraries written with Java 8 is also supported.
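For instance, a minimal snippet of my own (not from the announcement) that now builds through the javac/dx path; it sticks to language features only, since Java 8 library APIs such as streams still require newer Android API levels:

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class Java8Demo {
    // A functional interface that a lambda or method reference can target.
    interface Transform {
        String apply(String s);
    }

    static String shout(String s) {
        return s.toUpperCase();
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("grace", "ada", "alan");

        // Lambda expression in place of an anonymous inner class;
        // desugared to Java 7-compatible class files by the Gradle plugin.
        Collections.sort(names, (a, b) -> a.compareTo(b));

        // Method reference to a static method, also desugared.
        Transform t = Java8Demo::shout;
        System.out.println(t.apply(names.get(0)));
    }
}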

We first added Java 8 desugaring in Android Studio 2.4 Preview 4. Preview 6 includes important bug fixes related to Java 8 language features support. Many of these fixes were made in response to bug reports you filed. We really appreciate your help in improving Android development tools for the community!

It's easy to try using Java 8 language features in your Android project. Just download Android Studio 2.4 Preview 6, and update your project's target and source compatibility to Java version 1.8. You can find more information in our preview documentation.

Happy lambda'ing!
Categories: Programming

Business Analysis Manifesto: the changing role of Business Analysis in an Agile organization

Xebia Blog - Fri, 04/14/2017 - 20:00

  The other day a discussion moved towards the changing role of Business Analysts in an Agile environment. I referred to the Business Analysis Manifesto, created by and for Business Analysts but never published. I realized I could share it with 'the world' and wrap it in blog-paper. So, this Business Analysis Manifesto is not […]

The post Business Analysis Manifesto: the changing role of Business Analysis in an Agile organization appeared first on Xebia Blog.

Future of Java 8 Language Feature Support on Android

Android Developers Blog - Fri, 04/14/2017 - 17:48
Posted by James Lau, Product Manager 

At Google, we always try to do the right thing. Sometimes this means adjusting our plans. We know how much our Android developer community cares about good support for Java 8 language features, and we're changing the way we support them.

We've decided to add support for Java 8 language features directly into the current javac and dx set of tools, and deprecate the Jack toolchain. With this new direction, existing tools and plugins dependent on the Java class file format should continue to work. Moving forward, Java 8 language features will be natively supported by the Android build system. We're aiming to launch this as part of Android Studio in the coming weeks, and we wanted to share this decision early with you.

We initially tested adding Java 8 support via the Jack toolchain. Over time, we realized the cost of switching to Jack was too high for our community when we considered the annotation processors, bytecode analyzers and rewriters impacted. Thank you for trying the Jack toolchain and giving us great feedback. You can continue using Jack to build your Java 8 code until we release the new support. Migrating from Jack should require little or no work.

We hope the new plan will pave a smooth path for everybody to take advantage of Java 8 language features on Android. We'll share more details when we release the new support in Android Studio.
Categories: Programming

Stuff The Internet Says On Scalability For April 14th, 2017

Hey, it's HighScalability time:

 

After 20 years, Cassini will not go gently into that good night, it will burn and rave at close of day. (nasa)
If you like this sort of Stuff then please support me on Patreon.
  • 10^15: synapses activated per second in human brain (2/3rds fail); $4.5B: Amazon spend on video (Netflix $6 billion); 22,000: AWS database migrations served; ~15%: Dropbox reduced CPU usage using Brotli; $3.5 trillion: IT spending in 2017; 10%: reduction in QoQ hard drive shipments; 33.3%: Nginx share of webserver market; 37.2 trillion: human cells in a Cell Atlas; 6.2 miles: journey to the center of the earth; 200: lines of code for blockchain; 95%: Wikipedia pages end up at philosophy; 1.2 billion: Messenger monthly users; 

  • Quotable Quotes:
    • Jeff Bezos: Day 2 is stasis. Followed by irrelevance. Followed by excruciating, painful decline. Followed by death. And that is why it is always Day 1.
    • Bob Schmidt: If debugging is the process of removing errors from a design, then designing must be the process of putting errors into a design!
    • @swardley: the gap between where the cutting edge is and where the majority are just seems to increase year on year.
    • Riot Games: We need to provide resources when it's time to grow, we need to react when it gets sick, and we need to do it all as fast as possible at a global scale.
    • masklinn: High-performance native code already does these specialisation, generally on a per-project basis (some projects include multiple allocators for different bits of data), and possibly using a non-OS allocator in the first place
    • @erikbryn: MT: @DKThomp : there are 950k warehouse workers —6X the number of steel workers and miners combined
    • Joeri: The challenge of a rewrite is not in mapping the core architecture and core use case, it's mapping all the edge cases and covering all the end user needs. You need people intimately familiar with the old system to make sure the new system does all the weird stuff which nobody understood but had good reasons that was in the corners of the old system's code. 
    • @redblobgames: 2016 GDC Diablo talk: let's switch from turn-based to real-time 2017 GDC Civilization talk: let's switch from real-time to turn-based
    • @random_walker: Encrypted traffic has a fingerprint—enough to distinguish among 200 Netflix vids with 99.5% accuracy in < 2.5 mins.
    • Sophie Wilson: You’re going to buy a 10-way, 18-way multi-core processor that’s the latest, all because we told you you could buy it and made it available, and we’re going to turn some of those processors off most of the time. So you’re going to pay for logic and we’re going to turn it off so you can’t use it.
    • qq66: But is there anything more personal than a computer programmer writing a bot to send messages for him?
    • Anu Hariharan: Unlike other social products, WeChat does not only measure growth by number of users or messages sent. Instead they also focus on measuring how deeply is the product engaged in every aspect of daily life (e.g., the number of tasks WeChat can help with in a day).
    • @fredwilson: "The real issue here is Facebook’s market power. And we face similar market power issues in search (Google) and commerce (Amazon)"
    • There are so many quotable quotes I couldn't include them all here. Click through to read the full article.

  • Luna Duclos on Game Development and Rebuilding Microservices. Switching from PHP/Python to Go. Go is much faster and uses less CPU. As big as the switch to Go is the switch from Google App Engine to VMs. GAE servers are small and CPU constrained despite the relatively high cost. Their Go cluster runs in the Google Cloud on Google Container Engine.

  • Werner Against the Machine. Wait, aren't you the machine now?

  • Kwabena Boahen on Stanford Seminar: Neuromorphic Chips: Addressing the Nanotransistor Challenge. A dollar bought more and more transistors until 2014, when for the first time the price of transistors went up. Fundamental constraints at the physical level are the cause. The challenge is to continually shrink the footprint of the transistor so it occupies less space. A traffic metaphor explains the difficulty of continually shrinking transistors: shrinking gives you fewer lanes, and electrons can block a lane by being trapped in a pothole. When you get down to one lane and an electron is trapped, the current flows slowly. Our brains work with ultimately scaled devices...

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

#NoEstimates Book - Chapter 1 Summation

Herding Cats - Glen Alleman - Fri, 04/14/2017 - 06:11

I posted some comments on the #NoEstimates book awhile back.  I have a break this week and would like to sum up Chapter 1.

Chapter 1 Introduction

  • Why estimates don't work - Carmen is assigned a project that will make or break the company. Carmen has never managed a project this size.
    • This is a classic example used in the book of making seriously bad management decisions.
    • Why would the manager assign Carmen that project?

Then comes one of three quotes that are the basis of the argument for NOT estimating. The most egregious is Hofstadter's Law, which says, "It always takes longer than you expect, even when you take into account Hofstadter's Law."

This quote is misused to suggest that estimating can't be done. On page 152 of Gödel, Escher, Bach: an Eternal Golden Braid, Hofstadter explains the context and meaning of Hofstadter's Law.

Hofstadter is speaking about the development of a chess-playing program - and doing so from the perspective of 1978-style software development. Game-playing programs use a look-ahead tree with branches of the moves and countermoves. The art of the program is to avoid exploring every branch of the look-ahead tree down to the terminal nodes. In actual chess, people - not the computer - have the skill to know which branches to look down and which to ignore.

In the early days (before 1978), people used to estimate that it would be ten years until the computer was a world champion. But after ten years (1988), it was still estimated that that day was ten years away.

This notion is part of the recursive Hofstadter's Law, which is what the whole book is about. The principle of Recursion and Unpredictability is described at the bottom of page 152.

For a set to be recursively enumerable (the condition to traverse the look-ahead tree for all position moves) means it can be generated from a set of starting points (axioms), by the repeated application of rules of inference. Thus, the set grows and grows, each new element being compounded somehow out of previous elements, in a sort of mathematical snowball. But this is the essence of recursion - something being defined in terms of simpler versions of itself, instead of explicitly.

Recursive enumeration is a process in which new things emerge from old things by fixed rules. There seem to be many surprises in such processes ...
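A toy illustration of that definition (my example, not Hofstadter's): the positive integers are recursively enumerable from the single axiom 1 and two fixed rules of inference, n -> 2n and n -> 2n+1.

import java.util.ArrayDeque;
import java.util.Queue;

public class RecursiveEnumeration {
    public static void main(String[] args) {
        Queue<Long> frontier = new ArrayDeque<>();
        frontier.add(1L); // the axiom: the single starting point

        // Repeatedly apply the fixed rules n -> 2n and n -> 2n + 1.
        // Breadth-first application generates every positive integer,
        // each new element compounded out of a previous one.
        for (int i = 0; i < 15; i++) {
            long n = frontier.remove();
            System.out.print(n + " ");
            frontier.add(2 * n);     // rule of inference 1
            frontier.add(2 * n + 1); // rule of inference 2
        }
        // Prints: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
    }
}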

So if you work on the development of recursive-enumeration-based software designs, then yes - estimating when you'll have your program working is likely going to be hard. Or if you work on the development of software that has no stated Capabilities, no Product Roadmap, no Release Plan, no Product Owner or Customer with even the slightest notion of what Done looks like in units of measure meaningful to the decision makers, then you can probably apply Hofstadter's Law. Yourdon calls this type of project a Death March project - good luck with that.

If not, then DO NOT fall prey to the misuse of Hofstadter's Law by those who likely have not actually read Hofstadter's book, nor have the skills and experience to understand the processes needed to produce credible estimates.

So why is it important to call out something like the misuse of a quote? Simple: if you can't grasp what someone said, what they meant, and the context in which they said it, how can you have a coherent message of your own to convey? Several other "agile" books do the same thing. This is why Agile! The Good, the Hype, and the Ugly is a mandatory read.

Some More nonsense quotes Out of Context or Without ANY Context and Bad Management Being Done on Purpose

This chapter contains many bogus quotes and examples of Doing Stupid Things on Purpose. Here are a few:

  • A Late Change in Requirements is a Competitive Advantage - this is only true if that change enhances the value of the product and the cost of that change does not impact the ROI of the product.
  • 80,000 lines of code won't fit on a 20-page contract - I've never seen a contract that stated how many lines of code will be delivered. A simple letter contract for the development of an IT ticketing system can be stated in 5 pages or less. Yet another example of stating nonsense with no context.
  • Figure 4 shows the inability of the team and management to actually follow the principles of Scrum. This is a picture of a bad Scrum team. They need an intervention from a Scrum coach to get them back to actually providing value to their customer.

Chapter 1 Ends With the #NoEstimates Argument But Ignores the Uncertainties of all Project Work

If it is so hard to estimate software work, can you predict when a particular project will end? Yes, you can. And that is the right question. Instead of asking how long the project will take, ask instead: "Given the rate of progress so far, and the amount of work still left, when will the project end?" Or, a similar question: "Given the rate of progress, how much of the work can be finalized by date X?"

This is possibly the case IFF (If and Only If):

  • All the work is of equal effort
  • All the work is of equal duration
  • All the workers have consistent productivity over the life of the work
  • There are no aleatory uncertainties in any of the processes
  • There are no epistemic uncertainties in any process, people, or tools

In other words, the work is non-variable, of equal size and effort, the developers work at an unchanging rate, and the arrival of the work is constant. This is the definition of a machine-based process. Not likely ever to be true.
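A minimal sketch (mine, with invented numbers - not from the book or this review) makes the projection and its fragility concrete: the point forecast is just remaining work divided by observed rate, and relaxing even one of the conditions above turns the answer into a distribution rather than a date.

import java.util.Random;

public class RateForecast {
    public static void main(String[] args) {
        double itemsRemaining = 80;
        double meanItemsPerWeek = 10; // observed "rate of progress so far"

        // The projection the chapter proposes, valid only under the
        // IFF conditions listed above.
        System.out.printf("Point forecast: %.1f weeks%n",
                itemsRemaining / meanItemsPerWeek);

        // Relax one condition - let weekly throughput vary +/- 50%
        // (aleatory uncertainty) - and the answer becomes a distribution.
        Random rng = new Random(7);
        int trials = 10_000, lateRuns = 0;
        for (int t = 0; t < trials; t++) {
            double done = 0;
            int weeks = 0;
            while (done < itemsRemaining) {
                done += meanItemsPerWeek * (0.5 + rng.nextDouble());
                weeks++;
            }
            if (weeks > 9) lateRuns++; // more than a week past the forecast
        }
        System.out.printf("Chance of finishing later than 9 weeks: ~%.0f%%%n",
                100.0 * lateRuns / trials);
    }
}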

So at the end of Chapter 1, #NoEstimates is based on a set of conditions that are NEVER present on actual projects.

Intro to Chapter 2

Chapter 2 starts with a scene of sending poor Carmen on a Death March. 

  • This was not Carmen's first project, but until now she had managed to give fair estimates on her projects because they were small and somehow similar.
  • But this was a different kind of beast. New technologies, new client, new domain, new product.

So does Carmen have any reference classes for making estimates of the new project? How about some models with scaling parameters? How about an understanding of the uncertainties involved in the project - the reducible uncertainties (Epistemic) and the irreducible uncertainties (Aleatory), both of which create Risk for the success of the project? Does Carmen have a Product Roadmap where these uncertainties can be assessed? Does Carmen have a Product Owner, an Agile estimating process? A Release Plan?

Does Carmen have anything she needs for success? Sounds like she's being tossed to the wolves in pursuit of a predefined conjecture that estimates cannot work. This is the style of writing here. If this is appealing to you - make up a bad situation where the protagonist is sacrificed to the Gods of Bad Management, then offer a solution just as incredible - then keep reading. I'll take a break before Chapter 2 from all these contrived plot twists based on contrived examples of Bad Management.

Related articles Systems Thinking, System Engineering, and Systems Management Humpty Dumpty and #NoEstimates Why Guessing is not Estimating and Estimating is not Guessing Why We Need Governance The Microeconomics of Decision Making in the Presence of Uncertainty Agile Software Development in the DOD Economics of Software Development Two Books in the Spectrum of Software Development
Categories: Project Management

Deciding Which Size Metric Works For Testing


Which size metric makes sense for a testing organization is influenced by how testing is organized, where testing is incorporated into the value delivery chain, and whether the work is being done for a fee.

Organization

How the people are organized to identify needs, develop software, and deliver value will influence which sizing technique will be appropriate for testing. There are two basic competing models for organizing testers: independent test groups and testers embedded into the team. Variants of the latter include testers as part of the team and testers as matrixed members of the team. In an independent test model, developers complete their work (usually after unit testing) and then "throw" the work over the wall to the independent testers, whereas in the embedded version the work does not have to pass over a boundary.

Independent testing teams generally focus most of their efforts on planning and executing tests (these types of tests are often termed dynamic testing and include system testing through user acceptance testing). In this organizational scenario, testing teams size their work independently of the development team. Size is used to predict when work will be completed or how much work will be accepted by the team. Metrics that specifically leverage or count testing deliverables, such as test cases or test case points, are typically used.

Testers embedded in the team generally align to the same size metric as the development personnel (which size metric is often decided by the development personnel). Testers plan their work as part of the overall team or, at worst, concurrently with development tasks. In this scenario, all three categories of size metrics are used. Examples of physical, functional, and relative size metrics include the number of requirements, IFPUG function points, and story points.

Value chain

How testing activities, both dynamic and static (static testing includes techniques such as reviews and pair programming), are incorporated in the value delivery chain influences which size metric will be used to plan testing activities.

The more integrated into the development process testers are, the more likely the team will plan together using either a functional metric (function points) or a relative measure (tee shirt sizing or story points). Teams that are using any of the test-first techniques (e.g. TDD, BDD, and ATDD) are examples of scenarios in which testing is highly integrated into the value delivery chain. The higher the level of integration of testing into the development process, the more apt testers will be to leverage sizing metrics that reflect the ultimate functional deliverable versus counting individual physical items.

Similar to the scenarios in which testing is an independent group, when test activities are segregated to a separate phase, test teams often leverage physical size metrics (number of requirements or components) or hybrid sizing methods such as test case points.

Work for fee

Outsourced testing is an extreme version of the independent test group. Outsourced test groups that price by project face all of the same issues as teams doing any outsourced piece of work. With the exception of open, time-and-materials contracts, the testing team needs some basis for the estimate that allows them to complete the work and make a healthy profit margin. Just like many development groups that have begun to leverage cost per function point, testing providers are experimenting with cost per test case point.

The size metric a test group leverages is rarely a random choice. Many of the influences are outside of the individual tester's span of control. Organizational culture influences how testers are involved in the development process and whether they are segregated into separate teams. The best option for a size metric is the one that helps a team know how much work they can commit to completing and when that work can be completed.

 


Categories: Process Management


FORTIFY in Android

Android Developers Blog - Thu, 04/13/2017 - 21:50
Posted by George Burgess, Software Engineer

FORTIFY is an important security feature that's been available in Android since mid-2012. After migrating from GCC to clang as the default C/C++ compiler early last year, we invested a lot of time and effort to ensure that FORTIFY on clang is of comparable quality. To accomplish this, we redesigned how some key FORTIFY features worked, which we'll discuss below.

Before we get into some of the details of our new FORTIFY, let's go through a brief overview of what FORTIFY does, and how it's used.

What is FORTIFY?
FORTIFY is a set of extensions to the C standard library that tries to catch the incorrect use of standard functions, such as memset, sprintf, open, and others. It has three primary features:

  • If FORTIFY detects a bad call to a standard library function at compile-time, it won't allow your code to compile until the bug is fixed.
  • If FORTIFY doesn't have enough information, or if the code is definitely safe, FORTIFY compiles away into nothing. This means that FORTIFY has 0 runtime overhead when used in a context where it can't find a bug.
  • Otherwise, FORTIFY adds checks to dynamically determine if the questionable code is buggy. If it detects bugs, FORTIFY will print out some debugging information and abort the program.

Consider the following example, which is a bug that FORTIFY caught in real-world code:

#include <string.h> // for memset

struct Foo {
    int val;
    struct Foo *next;
};
void initFoo(struct Foo *f) {
    memset(&f, 0, sizeof(struct Foo));
}
FORTIFY caught that we erroneously passed &f as the first argument to memset, instead of f. Ordinarily, this kind of bug can be difficult to track down: it manifests as potentially writing 8 bytes extra of 0s into a random part of your stack, and not actually doing anything to *f. So, depending on your compiler optimization settings, how initFoo is used, and your project's testing standards, this could slip by unnoticed for quite a while. With FORTIFY, you get a compile-time error that looks like:

/path/to/file.c: call to unavailable function 'memset': memset called with size bigger than buffer
    memset(&f, 0, sizeof(struct Foo));
    ^~~~~~
For an example of how run-time checks work, consider the following function:

#include <stdio.h> // for sprintf

// 2147483648 == pow(2, 31). Use sizeof so we get the nul terminator,
// as well.
#define MAX_INT_STR_SIZE sizeof("2147483648")
struct IntAsStr {
    char asStr[MAX_INT_STR_SIZE];
    int num;
};
void initAsStr(struct IntAsStr *ias) {
    sprintf(ias->asStr, "%d", ias->num);
}
This code works fine for all positive numbers. However, when you pass in an IntAsStr with num <= -1000000000, the sprintf will write MAX_INT_STR_SIZE+1 bytes to ias->asStr. Without FORTIFY, this off-by-one error (that ends up clearing one of the bytes in num) may go silently unnoticed. With it, the program prints out a stack trace and a memory map, and aborts with a core dump.

FORTIFY also performs a handful of other checks, such as ensuring calls to open have the proper arguments, but it's primarily used for catching memory-related errors like the ones mentioned above.
However, FORTIFY can't catch every memory-related bug that exists. For example, consider the following code:

#include <stdio.h>  // for sprintf
#include <string.h> // for strdup

__attribute__((noinline)) // Tell the compiler to never inline this function.
inline void intToStr(int i, char *asStr) { sprintf(asStr, "%d", i); }


char *intToDupedStr(int i) {
    const int MAX_INT_STR_SIZE = sizeof("2147483648");
    char buf[MAX_INT_STR_SIZE];
    intToStr(i, buf);
    return strdup(buf);
}
Because FORTIFY determines the size of a buffer based on the buffer's type and, if visible, its allocation site, it can't catch this bug. In this case, FORTIFY gives up because:

  • the pointer is not of a type with a pointee size we can determine with confidence, because a char * can point to a variable number of bytes
  • FORTIFY can't see where the pointer was allocated, because asStr could point to anything.

If you're wondering why we have a noinline attribute there, it's because FORTIFY may be able to catch this bug if intToStr gets inlined into intToDupedStr. This is because it would let the compiler see that asStr points to the same memory as buf, which is a region of sizeof(buf) bytes of memory.

How FORTIFY works
FORTIFY works by intercepting all direct calls to standard library functions at compile-time and redirecting those calls to special FORTIFY'ed versions of said library functions. Each library function is composed of parts that emit run-time diagnostics and, if applicable, parts that emit compile-time diagnostics. Here is a simplified example of the run-time parts of a FORTIFY'ed memset (taken from string.h). An actual FORTIFY implementation may include a few extra optimizations or checks.

_FORTIFY_FUNCTION
inline void *memset(void *dest, int ch, size_t count) {
    size_t dest_size = __builtin_object_size(dest, 0);
    if (dest_size == (size_t)-1)
        return __memset_real(dest, ch, count);
    return __memset_chk(dest, ch, count, dest_size);
}
In this example:

  • _FORTIFY_FUNCTION expands to a handful of compiler-specific attributes to make all direct calls to memset call this special wrapper.
  • __memset_real is used to bypass FORTIFY to call the "regular" memset function.
  • __memset_chk is the special FORTIFY'ed memset. If count > dest_size, __memset_chk aborts the program. Otherwise, it simply calls through to __memset_real.
  • __builtin_object_size is where the magic happens: it's a lot like sizeof, but instead of telling you the size of a type, it tries to figure out how many bytes exist at the given pointer during compilation. If it fails, it hands back (size_t)-1.

The __builtin_object_size might seem sketchy. After all, how can the compiler figure out how many bytes exist at an unknown pointer? Well... It can't. :) This is why _FORTIFY_FUNCTION requires inlining for all of these functions: inlining the memset call might make an allocation that the pointer points to (e.g. a local variable, result of calling malloc, …) visible. If it does, we can often determine an accurate result for __builtin_object_size.

The compile-time diagnostic bits are heavily centered around __builtin_object_size, as well. Essentially, if your compiler has a way to emit diagnostics if an expression can be proven to be true, then you can add that to the wrapper. This is possible on both GCC and clang with compiler-specific attributes, so adding diagnostics is as simple as tacking on the correct attributes.

Why not Sanitize?
If you're familiar with C/C++ memory checking tools, you may be wondering why FORTIFY is useful when things like clang's AddressSanitizer exist. The sanitizers are excellent for catching and tracking down memory-related errors, and can catch many issues that FORTIFY can't, but we recommend FORTIFY for two reasons:

  • In addition to checking your code for bugs while it's running, FORTIFY can emit compile-time errors for code that's obviously incorrect, whereas the sanitizers only abort your program when a problem occurs. Since it's generally accepted that catching issues as early as possible is good, we'd like to give compile-time errors when we can.
  • FORTIFY is lightweight enough to enable in production. Enabling it on parts of our own code showed a maximum CPU performance degradation of ~1.5% (average 0.1%), virtually no memory overhead, and a very small increase in binary size. On the other hand, sanitizers can slow code down by well over 2x, and often eat up a lot of memory and storage space.

Because of this, we enable FORTIFY in production builds of Android to mitigate the amount of damage that some bugs can cause. In particular, FORTIFY can turn potential remote code execution bugs into bugs that simply abort the broken application. Again, sanitizers are capable of detecting more bugs than FORTIFY, so we absolutely encourage their use in development/debugging builds. But the cost of running them for binaries shipped to users is simply way too high to leave them enabled for production builds.

FORTIFY redesign
FORTIFY's initial implementation used a handful of tricks from the world of C89, with a few GCC-specific attributes and language extensions sprinkled in. Because Clang cannot emulate how GCC works to fully support the original FORTIFY implementation, we redesigned large parts of it to make it as effective as possible on clang. In particular, our clang-style FORTIFY implementation makes use of clang-specific attributes and language extensions, as well as some function overloading (clang will happily apply C++ overloading rules to your C functions if you use its overloadable attribute).

We tested hundreds of millions of lines of code with this new FORTIFY, including all of Android, all of Chrome OS (which needed its own reimplementation of FORTIFY), our internal codebase, and many popular open source projects.

This testing revealed that our approach broke existing code in a variety of exciting ways, like:
template <typename OpenFunc>
bool writeOutputFile(OpenFunc &&openFile, const char *data, size_t len) {}

bool writeOutputFile(const char *data, int len) {
    // Error: Can't deduce type for the newly-overloaded `open` function.
    return writeOutputFile(&::open, data, len);
}
and
struct Foo { void *(*fn)(void *, const void *, size_t); };
void runFoo(struct Foo f) {
    // Error: Which overload of memcpy do we want to take the address of?
    if (f.fn == memcpy) {
        return;
    }
    // [snip]
}


There was also an open-source project that tried to parse system headers like stdio.h in order to determine what functions it has. Adding the clang FORTIFY bits greatly confused the parser, which caused its build to fail.

Despite these large changes, we saw a fairly low amount of breakage. For example, when compiling Chrome OS, fewer than 2% of our packages saw compile-time errors, all of which were trivial fixes in a couple of files. And while that may be "good enough," it is not ideal, so we refined our approach to further reduce incompatibilities. Some of these iterations even required changing how clang worked, but the clang+LLVM community was very helpful and receptive to our proposed adjustments and additions.


We recently pushed it to AOSP, and starting in Android O, the Android platform will be protected by clang FORTIFY. We're still putting some finishing touches on the NDK, so developers should expect to see our upgraded FORTIFY implementation there in the near future. In addition, as we alluded to above, Chrome OS also has a similar FORTIFY implementation now, and we hope to work with the open-source community in the coming months to get a similar implementation* into glibc, the GNU C library.

* For those who are interested, this will look very different than the Chrome OS patch. Clang recently gained an attribute called diagnose_if, which ends up allowing for a much cleaner FORTIFY implementation than our original approach for glibc, and produces far prettier errors/warnings than we currently can. We expect to have a similar diagnose_if-powered implementation in a later version of Android.
Categories: Programming

The Bad Apple Syndrome in Process Improvement

Herding Cats - Glen Alleman - Thu, 04/13/2017 - 19:54

When process improvement starts with the solution, it's common to anchor this improvement on the Bad Apple syndrome. The Dilbert Manager, the bad apple on the team, and another example of starting with the symptom and skipping to the solution, bypassing the root cause.

When this happens, those selling the solution need to defend the solution in the presence of hard questions:

  • Will your solution actually correct my problem?
  • Will your solution actually prevent the problem from coming back?
  • What is the reason for that? Have you made a cause-and-effect assessment from the problem to the solution, to the sustainment of that solution?

This, of course, is the basis of Root Cause Analysis.


ISO/IEC 17025:2005 (4.11.2) - The procedure for corrective action shall start with an investigation to determine the root cause(s) of the problem.

Here's a simple perspective for problem-solving:

  • Every problem in our lives - personal, business, civic, technical - has three basic elements connected through causality
  • Each effect has two causes:
    • An Action
    • A Condition


Before looking further, here's a principle that has served us well. 

Ignorance is a most wonderful thing.
It facilitates magic.
It allows the masses to be led.
It provides answers when there are none.
It allows happiness in the presence of danger.
All this, while the pursuit of knowledge can only destroy the illusion. Is it any wonder mankind chooses ignorance?

- from the Apollo Method

This Action and Condition are critical to finding the solution. The classic misuse of the Five Whys is stopping at a conjectured root cause rather than a confirmed one.

So when you hear a story about some undesired outcome - labeled as a dysfunction - and then hear a quick and dirty solution to that dysfunction, ask if there has been a Root Cause Analysis of the situation.

No? Then it's likely the person making the suggestion is trying to sell you something. A conference, a book, a consulting gig. Especially if that person's suggestion violates several of the principles of business and technical management.

And we're all selling all the time, but in our Federal Acquisition Regulation environment and other contract domains, there is a Past Performance Volume with a formal evaluation. So when a suggestion is made to a client (of ours), that Past Performance section must state what we did, what the outcome was, and what the tangible benefits of that outcome are to the decision maker. Always ask for those pieces of information before spending any money on anything.

Long ago, a very senior piping engineer on a multi-billion dollar refinery piping design system asked us software weenies:

That's a nice idea boys (there were no females working there), but what have you done for me lately?

One of the engineers for the product we were using (the Evans and Sutherland Picture System) went on to found Adobe. He asked if we wanted to join the startup. Nah, I didn't want to move from Southern California (Irvine) to Northern California. They had written a driver for a CalComp flatbed plotter to plot our 3D piping diagrams (ISOs) and were getting ready to approach laser printer manufacturers. That protocol was called PostScript.

Related articles Estimating and Making Decisions in Presence of Uncertainty Are Estimates Really The Smell of Dysfunction? Myth's Abound Herding Cats: Where is the Adult Supervision on this Program?
Categories: Project Management

Team Size Matters, Reprise

Several years ago, I wrote a post for a different blog called "Why Team Size Matters." That post is long gone. I explained that the number of communication paths in a team does not increase linearly as the team size increases; it grows with the square of team size. Here is the calculation, where N is the number of people on the team: Communication Paths = (N*N - N)/2. The short program after the list below reproduces these values.

  • 4 people, (16-4)/2=6
  • 5 people, (25-5)/2=10
  • 6 people, (36-6)/2=15
  • 7 people, (49-7)/2=21
  • 8 people, (64-8)/2=28
  • 9 people, (81-9)/2=36
  • 10 people (100-10)/2=45
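A quick sketch of my own (not from the original post) reproduces the table:

public class CommunicationPaths {
    // Paths = (N*N - N)/2, i.e. N(N-1)/2: each of N people can pair
    // with N-1 others, and each pair is counted once.
    static int paths(int n) {
        return (n * n - n) / 2;
    }

    public static void main(String[] args) {
        for (int n = 4; n <= 10; n++) {
            System.out.println(n + " people: " + paths(n) + " paths");
        }
    }
}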

Here's why the number of communication paths matters: we need to be able to depend on our team members to deliver. Often, that means we need to understand how they work. The more communication paths there are, the more trouble the team might have understanding who is doing what and when.

When team members pair, swarm, or mob, they have frequent interconnection points. By working together, they reduce the number of necessary communication paths. Maybe you can have a larger team if the team mobs. (I bet you don't need a larger team then.)

Categories: Project Management

Planning is King

Herding Cats - Glen Alleman - Thu, 04/13/2017 - 16:58

There has been lots of buzz in the project management space lately about the benefits of planning. Planning is one of the Five Immutable Principles of project success. Planning tells us, in units of measure meaningful to the decision makers, what Done looks like.

No Plan? You can't know what Done looks like until time and money run out.


Plans are strategies for success. Strategies are hypotheses. Hypotheses need tests to verify their correctness. Tests are working products, models, feedback - some tangible evidence that the work produces the desired outcomes.

And as always, planning and the resulting plans are estimates of the desired outcomes and the path to get there. With these estimates, the steps needed to reach the desired outcomes can be assessed to make corrective actions.

Related articles Mr. Franklin's Advice Capabilities Based Planning Carl Sagan's BS Detector Architecture -Center ERP Systems in the Manufacturing Domain IT Risk Management Debunking
Categories: Project Management

Android Developer Story: LinkedIn uses Android Studio to build a performant app

Android Developers Blog - Thu, 04/13/2017 - 16:39

Posted by Christopher Katsaros, Developer Marketing, Android


LinkedIn is the world's largest social network for professionals. LinkedIn has 10 apps on Google Play, including the flagship LinkedIn app, which provides all of the same features users find on the web, so users can do things like browse and send messages to their professional network with an improved user experience.

For LinkedIn, and other teams with a large number of developers adding code to a project, making sure that everyone pays attention to areas that affect performance is vital for the quality of their app. That's why the LinkedIn mobile team uses Android Studio to build high quality Android apps.

Watch Pradeepta Dash, Engineering Manager for Infrastructure at LinkedIn, as well as Drew Hannay, Tech Lead for the Android Infrastructure team, talk about how Android Studio helps everyone on their team stay focused on these topics while getting new engineers quickly up and running:


The top Android developers use Android Studio to build powerful, successful apps for Google Play; you can learn more about the official IDE for Android app development, and get started for yourself.

Get more tips and watch more success stories in the Playbook for Developers app.


Categories: Programming

Some Dark Sides to Agile

Herding Cats - Glen Alleman - Wed, 04/12/2017 - 23:16

I work agile programs that are subject to FAR 34.2 and DFARS 234.2. These are Software Intensive Systems of Systems, starting at $20M in some domains and $5M in others and going much larger. Many are household names in the space and defense business. These programs integrate Earned Value Management with Agile software development to great advantage.

But there are some Dark Sides to agile in this domain and context.


Like all processes and tools that support them, there is a Dark Side.

Here are some Dark Sides to the integration of Agile and EVM.

  • One of the principles of Agile is the encouragement of late changes. This allows the project to adapt to emerging needs. But someone has to pay for the non-recoverable sunk cost that results from the changes that don't appear in production.
  • These changes may also impact the PMB (Performance Measurement Baseline) and drive changes to the baseline. The accounting processes will be burdened as well, with BCRs (Baseline Change Requests).
  • In the EVM world, CPI can forecast cost overruns. In Agile the cost spreads are fixed by the flat staffing profile during the sprints. Unfavorable CV is not possible. Undelivered functionality is.
  • SV is problematic in normal EVM. What does it mean to be $250K unfavorable to schedule (SV) unless you know the burn rate? (See the sketch after this list.)
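To make the CPI and SV bullets concrete, here is a minimal sketch of the standard EVM formulas with invented numbers; converting a dollar-denominated SV into time requires the planned burn rate:

public class EvmSketch {
    public static void main(String[] args) {
        // Standard EVM quantities, in dollars (values are invented):
        double bcws = 1_000_000; // Budgeted Cost of Work Scheduled (PV)
        double bcwp =   750_000; // Budgeted Cost of Work Performed (EV)
        double acwp =   750_000; // Actual Cost of Work Performed (AC)

        double cpi = bcwp / acwp; // cost efficiency
        double sv  = bcwp - bcws; // schedule variance, in dollars

        // A flat agile staffing profile pins actual cost to plan, so
        // CPI stays ~1.0 even when functionality goes undelivered.
        System.out.printf("CPI = %.2f, SV = $%,.0f%n", cpi, sv);

        // SV in dollars means little without the burn rate: at a planned
        // $125K/month, -$250K is two months behind schedule.
        double burnRatePerMonth = 125_000;
        System.out.printf("SV in time = %.1f months%n",
                sv / burnRatePerMonth);
    }
}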

Screen Shot 2017-04-12 at 11.35.19 AM

There is no field in the IPMR for Story Points or Stories. The WBS at the Control Account and Work Package level is assigned dollars. Performance is measured by Physical Percent Complete against BCWS - in dollars.

Related articles Deadlines Always Matter Thinking, Talking, Doing on the Road to Improvement The Art of Systems Architecting The Fallacy of the Planning Fallacy Just Because You Say Words, It Doesn't Make Then True Herding Cats: Velocity versus Speed There is No Such Thing as Free Who's Budget is it Anyway?
Categories: Project Management

Welcome New Host Kim Carter

We're pleased to welcome Kim Carter to the SE Radio team. Kim is a technologist / engineer, information security professional, entrepreneur, and the founder of BinaryMist. He has 15 years' commercial experience in architecture, development, engineering, and testing of both small and large-scale software and networks. He also has considerable experience in security assessments and penetration testing. Carter is […]
Categories: Programming

Size As A Factor In Test Estimation: Other Sizing Techniques


Test case points are only one approach to determining the size of work that needs to be tested. The other measures fall into three broad categories. The categories are:

Physical Measures. This category represents a count of tangible "things" like requirements or test cases. The assumption is that there is a relationship between the count of the physical item and the effort or the duration of testing. For example, there might be a relationship between the number of test cases needed for a project and the amount of effort needed to execute and review those test cases. Measurement approaches that count physical items are generally an easy way to generate a measure or metric. The most immediate problem with counting physical things is that individual items are generally not the same size, so simple counting does not reflect the range of sizes.

Functional Measures. This category determines software size by assessing the software against a set of rules focused on delivered functionality. IFPUG Function Points (and other function point methods) are examples of size measures in this category. The assumption made when using this category of metrics is that the functional size of the software components is related to the duration and effort required for testing. Function point counts are a good reflection of the functionality; however, projects are often a mix of functional and nonfunctional requirements. In scenarios where the ratio of functional to non-functional work is out of the ordinary, functional measures may not be useful.

Relative Measures. Measures in this category use the measurer's perspective as a framework to assess size. Story points and tee shirt sizing are examples of relative size measures. This form of measurement makes the same basic assumption that every other size category makes: that size is related to effort. The way the metric is used is typically different. Instead of using size to estimate how much effort a piece of work will require, relative measures are typically used to gauge how much work can be done in a fixed time by a fixed group. The development of relative measures engages the whole team through techniques such as planning poker, a process which helps the team determine size while learning about the user story being measured. Issues with this type of measure center on the need for the team to be stable and on reporting size for use in other measures.

Test case points are a hybrid approach. The method begins with test cases (a tangible thing) and then counts the steps, verification points, and interfaces, and factors in whether baseline data is needed. These counts within a count are used to adjust the size of the test case in order to more closely relate size to effort. This approach uses concepts from both the physical and functional approaches.
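The post doesn't give the weighting rules, so the sketch below invents weights purely to illustrate the "counts within a count" idea; real test case point methods define their own.

public class TestCasePoints {
    // Hypothetical weights, invented for illustration only.
    static final double STEP_WEIGHT = 0.5;
    static final double VERIFICATION_WEIGHT = 1.0;
    static final double INTERFACE_WEIGHT = 2.0;
    static final double BASELINE_DATA_WEIGHT = 3.0;

    // Counts within a count: adjust the raw test case by its parts so
    // that size tracks effort more closely.
    static double size(int steps, int verificationPoints, int interfaces,
                       boolean needsBaselineData) {
        return steps * STEP_WEIGHT
                + verificationPoints * VERIFICATION_WEIGHT
                + interfaces * INTERFACE_WEIGHT
                + (needsBaselineData ? BASELINE_DATA_WEIGHT : 0.0);
    }

    public static void main(String[] args) {
        // A 12-step test case with 4 verification points, 2 interfaces,
        // and a baseline data requirement (numbers invented):
        System.out.println(size(12, 4, 2, true) + " test case points");
    }
}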

Each of these sizing approaches has pluses and minuses. When and where the value of these metrics outweighs the effort to generate them is a subject of much debate, as is which size measure or metric to choose. Where you fall in this debate is heavily influenced by how you are organized, where you fall in the value delivery food chain, and whether the work is being done for a fee.

We will address this topic next!

 


Categories: Process Management

SE-Radio Episode 287: Success Skills for Architects with Neal Ford

Neal Ford of ThoughtWorks chats with SE Radio's Kim Carter about the skills required to be a successful software architect, how to create and maintain them, and how to transition from other roles, such as software engineering. Neal discusses that the required skills can be learned; you do not have to be born with special […]
Categories: Programming

Android O to drop insecure TLS version fallback in HttpsURLConnection

Android Developers Blog - Tue, 04/11/2017 - 20:00
Posted by Tobias Thierer, Software Engineer
To improve security, insecure TLS version fallback has been removed from HttpsURLConnection in Android O.

What is changing and why?
TLS version fallback is a compatibility workaround in the HTTPS stack to connect to servers that do not implement TLS protocol version negotiation correctly. In previous versions of Android, if the initial TLS handshake fails in a particular way, HttpsURLConnection retries the handshake with newer TLS protocol versions disabled. In Android O, it will no longer attempt those retries. Connections to servers that correctly implement TLS protocol version negotiation are not affected.

We are removing this workaround because it weakens TLS by disabling TLS protocol version downgrade protections. The workaround is no longer needed, because fewer than 0.01% of web servers relied on it as of late 2015.

Will my app be affected?
Most apps will not be affected by this change. The easiest way to be sure is to build and test your app with the Android O Developer Preview. Your app's HTTPS connections in Android O will not be affected if they:

  • Target web servers that work with recent versions of Chrome or Firefox, because those servers have correctly implemented TLS protocol version negotiation. Support for TLS version fallback was removed in Firefox 37 (Mar 2015) and Chrome 50 (Apr 2016).
  • Use a third-party HTTP library not built on top of HttpsURLConnection. We suggest you disable protocol fallback if you're using a third-party library. For example, in OkHttp versions up to 3.6, you may want to configure your OkHttpClient to only use ConnectionSpec.MODERN_TLS (a sketch follows this list).
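A minimal sketch of that OkHttp configuration (my example against the OkHttp 3.x API, not from the announcement):

import java.util.Collections;

import okhttp3.ConnectionSpec;
import okhttp3.OkHttpClient;

public class ModernTlsClient {
    public static OkHttpClient build() {
        // Restrict connections to modern TLS configurations so the
        // client never retries with weakened protocol settings.
        return new OkHttpClient.Builder()
                .connectionSpecs(
                        Collections.singletonList(ConnectionSpec.MODERN_TLS))
                .build();
    }
}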

My app is affected. What now?
If your app relies on TLS version fallback, its HTTPS connections are vulnerable to downgrade attacks. To fix this, you should contact whoever operates the server. If this is not possible right away, then as a workaround you could use a third-party HTTP library that offers TLS version fallback. Be aware that using this method weakens your app's TLS security. To discover any compatibility issues, please test your app against the Android O Developer Preview.
Categories: Programming