Software Development Blogs: Programming, Software Testing, Agile, Project Management

Feed aggregator

Project Bloks: Making code physical for kids

Google Code Blog - Mon, 06/27/2016 - 16:46

Originally posted on Google Research Blog


Posted by Steve Vranakis and Jayme Goldstein, Executive Creative Director and Project Lead, Google Creative Lab
At Google, we’re passionate about empowering children to create and explore with technology. We believe that when children learn to code, they’re not just learning how to program a computer—they’re learning a new language for creative expression and are developing computational thinking: a skillset for solving problems of all kinds.
In fact, it’s a skillset whose importance is being recognised around the world—from President Obama’s CS4All program to the inclusion of Computer Science in the UK National Curriculum. We’ve long supported and advocated the furthering of CS education through programs and platforms such as Blockly, Scratch Blocks, CS First and Made w/ Code.
Today, we’re happy to announce Project Bloks, a research collaboration between Google, Paulo Blikstein (Stanford University) and IDEO with the goal of creating an open hardware platform that researchers, developers and designers can use to build physical coding experiences. As a first step, we’ve created a system for tangible programming and built a working prototype with it. We’re sharing our progress before conducting more research over the summer to inform what comes next.
Physical coding

Kids are inherently playful and social. They naturally play and learn by using their hands, building stuff and doing things together. Making code physical - known as tangible programming - offers a unique way to combine the way children innately play and learn with computational thinking.
Project Bloks is preceded and shaped by a long history of educational theory and research in the area of hands-on learning: from Friedrich Froebel, Maria Montessori and Jean Piaget’s pioneering work in the area of learning by experience, exploration and manipulation, to the research started in the 1970s by Seymour Papert and Radia Perlman with LOGO and TORTIS. This exploration has continued to grow and includes a wide range of research and platforms.
However, designing kits for tangible programming is challenging—requiring the resources and time to develop both the software and the hardware. Our goal is to remove those barriers. By creating an open platform, Project Bloks will allow designers, developers and researchers to focus on innovating, experimenting and creating new ways to help kids develop computational thinking. Our vision is that, one day, the Project Bloks platform becomes for tangible programming what Blockly is for on-screen programming.

The Project Bloks system

We’ve designed a system that developers can customise, reconfigure and rearrange to create all kinds of different tangible programming experiences.

[Image: A bird’s-eye view of the customisable and reconfigurable Project Bloks system]

The Project Bloks system is made up of three core components: the “Brain Board”, “Base Boards” and “Pucks”. When connected together, they create a set of instructions which can be sent to connected devices, such as toys or tablets, over WiFi or Bluetooth.

[Image: The three core components of the Project Bloks system]

Pucks: abundant, inexpensive, customisable physical instructions

Pucks are what make the Project Bloks system so versatile. They help bring the infinite flexibility of software programming commands to tangible programming experiences. Pucks can be programmed with different instructions, such as ‘turn on or off’, ‘move left’ or ‘jump’. They can also take the shape of many different interactive forms—like switches, dials or buttons. With no active electronic components, they’re also incredibly cheap and easy to make. At a minimum, all you'd need to make a puck is a piece of paper and some conductive ink. Pucks allow for the cheap and easy creation and customisation of an endless variety of domain-specific physical instructions.

Base Boards: a modular design for diverse tangible programming experiences

Base Boards read a Puck’s instruction through a capacitive sensor. They act as a conduit for a Puck’s command to the Brain Board. Base Boards are modular and can be connected in sequence and in different orientations to create different programming flows and experiences.

[Image: The modularity of the Base Boards means they can be arranged in different configurations and flows]

Each Base Board is fitted with a haptic motor and LEDs that can be used to give end-users real-time feedback on their programming experience. The Base Boards can also trigger audio feedback from the Brain Board’s built-in speaker.

Brain Board: control any device that has an API over WiFi or Bluetooth

The Brain Board is the processing unit of the system, built on a Raspberry Pi Zero. It also provides the other boards with power, and contains an API to receive and send data to the Base Boards. It sends the Base Boards’ instructions to any device with WiFi or Bluetooth connectivity and an API. As a whole, the Project Bloks system can take on different form factors and be made out of different materials. This means developers have the flexibility to create diverse experiences that can help kids develop computational thinking: from composing music using functions to playing around with sensors or anything else they care to invent.

[Image: The Project Bloks system can be used to create all sorts of different physical programming experiences for kids]

The Coding Kit

To show how designers, developers, and researchers might make use of the system, the Project Bloks team worked with IDEO to create a reference device, called the Coding Kit.
It lets kids learn basic concepts of programming by allowing them to put code bricks together to create a set of instructions that can be sent to control connected toys and devices—anything from a tablet, to a drawing robot or educational tools for exploring science like LEGO® Education WeDo 2.0.

What’s next?

We are looking for participants (educators, developers, parents and researchers) from around the world who would like to help shape the future of Computer Science education by remotely taking part in our research studies later in the year. If you would like to be part of our research study or simply receive updates on the project, please sign up. If you want more context and detail on Project Bloks, you can read our position paper. Finally, a big thank you to the team beyond Google who’ve helped us get this far—including the pioneers of tangible learning and programming who’ve inspired us and informed so much of our thinking.
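The post doesn't publish a wire format for those instruction chains, but purely as a hypothetical sketch (every name below is invented, not Google's API), the Puck / Base Board / Brain Board flow described above could be modeled like this:

from dataclasses import dataclass
from typing import List

@dataclass
class Puck:
    # A Puck carries a single instruction, e.g. 'move_left' or 'jump'.
    instruction: str

@dataclass
class BaseBoard:
    # Each Base Board reads one Puck's instruction via its capacitive sensor.
    puck: Puck

class BrainBoard:
    """Collects instructions from Base Boards connected in sequence."""
    def __init__(self, boards: List[BaseBoard]):
        self.boards = boards

    def program(self) -> List[str]:
        # The ordered instructions form the program that would be sent
        # to a connected device over WiFi or Bluetooth.
        return [board.puck.instruction for board in self.boards]

brain = BrainBoard([BaseBoard(Puck("move_left")), BaseBoard(Puck("jump"))])
print(brain.program())  # ['move_left', 'jump']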
Categories: Programming

The Bestselling Software Intended for People Who Couldn’t Use It.

James Bach’s Blog - Mon, 06/27/2016 - 16:26

In 1983, my boss, Dale Disharoon, designed a little game called Alphabet Zoo. My job was to write the Commodore 64 and Apple II versions of that game. Alphabet Zoo is a game for kids who are learning to read. The child uses a joystick to move a little character through a maze, collecting letters to spell words.

We did no user testing on this game until the very day we sent the final software to the publisher. On that day, we discovered that our target users (five-year-olds) did not have the ability to use a joystick well enough to play the game. They just got frustrated and gave up.

We shipped anyway.

This game became a bestseller.

[Image: alphabet-zoo]

It was placed on a list of educational games recommended by the National Education Association.

[Image: alphabet-zoo-list]

Source: Information Please Almanac, 1986

So, how to explain this?

Some years later, when I became a father, I understood. My son was able to play a lot of games that were too hard for him because I operated the controls. I spoke to at least one dad who did exactly that with Alphabet Zoo.

I guess the moral of the story is: we don’t necessarily know the value of our own creations.

Categories: Testing & QA


SPaMCAST 400 – Jim Benson, Personal Kanban and More

Software Process and Measurement Cast - Sun, 06/26/2016 - 22:00

Software Process and Measurement Cast 400 features our interview with Jim Benson. Jim and I talked about Personal Kanban, micromanagement, work-in-process limits, pattern matching, the Pomodoro Technique, and more. A great interview to cap our first 400 episodes!

Jim’s career path has taken him through government agencies, Fortune 10 corporations, and start-ups. Through them all his passion has remained consistent: applying new technologies to work groups, and asking in each case how they can be leveraged to collaborate and cooperate more effectively. Jim loves ideas, creation, and building opportunities. He loves working with teams who are passionate about the future, pushing boundaries, and inclusion. His goal with all technologies is to increase beneficial contact between people and reduce the bureaucratic noise that so often tends to increase costs and destroy creativity.

Jim is the author of the Shingo Research Award-winning book Personal Kanban (use the link to buy a copy and support the podcast). He is a noted expert in business process, personal work management, and the application of Lean to personal work and life. Jim believes that the best process is the least process necessary to achieve goals. He has zero tolerance for process waste.

All said, Jim enjoys helping people and teams work out sticky problems. He is an advocate of people actually seeing their work, and of inventing new ways to work at the intersection of Lean thinking, brain science, and leadership.

Contact Jim

Twitter: https://twitter.com/ourfounder

LinkedIn: https://www.linkedin.com/in/jimbenson

Personal Kanban: http://www.personalkanban.com/pk/#sthash.MtOA96sV.dpbs

Modus Cooperandi: http://moduscooperandi.com/

Re-Read Saturday News

This week we continue the Re-read Saturday of Kent Beck’s XP Explained, Second Edition, with a discussion of Chapters 2 and 3. The first two chapters in Section One provide us with an overview of the conceptual framework that underpins XP. When you buy your copy to read along, use the link to XP Explained in the show notes to support both the blog and podcast. Visit the Software Process and Measurement Blog (www.tcagley.wordpress.com) to catch up on past installments of Re-Read Saturday.

Next SPaMCAST

The next Software Process and Measurement Cast will feature our essay on listening. Effective listening is CRITICAL to every aspect of software development and maintenance. Listening is a complex set of tasks that is more than simply receiving audio data; you also need to interpret that data.

We will also have columns from Jeremy Berriault, who brings us the QA Corner, and Jon M. Quigley’s second column, which covers the gamut of product development.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chematuri and myself and published by J. Ross Publishing, has received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.

Categories: Process Management

Decision Making in the Presence of Uncertainty

Herding Cats - Glen Alleman - Sun, 06/26/2016 - 19:08

There's another rash of Twitter posters supporting the conjecture that decisions can be made about how to spend other people's money without estimating the impact and outcome of that decision. This is a core premise of #NoEstimates from the Original Poster

[Image: No Estimates]

Here's a starting point for learning why that is simply not true...

There are hundreds more books, papers, and conference proceedings on the topic of decision making in the presence of uncertainty.

It comes down to a simple concept:

All project work is uncertain. Reducible uncertainties (epistemic) and irreducible uncertainties (aleatory) are present on all projects. When money is being provided to develop software, those providing the money have a reasonable expectation to know something about when you will be providing the value in exchange for that money, how much money will be required to provide that value, and what capabilities will be provided in exchange for that money. The field of study where these questions and answers live is microeconomics of decision making. Software development decision making is a well studied subset of that field.
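To make that concrete, here is a minimal Monte Carlo sketch of producing an estimate in the presence of aleatory uncertainty; the three-point task durations are invented for illustration only:

import random

# Invented (optimistic, most likely, pessimistic) durations in days.
tasks = [(3, 5, 10), (2, 4, 8), (5, 8, 20)]

def one_trial() -> float:
    # Model each task's aleatory variability with a triangular distribution.
    return sum(random.triangular(low, high, mode) for low, mode, high in tasks)

trials = sorted(one_trial() for _ in range(10_000))
p50 = trials[len(trials) // 2]
p80 = trials[int(len(trials) * 0.8)]
print(f"50% confidence: {p50:.1f} days, 80% confidence: {p80:.1f} days")

The point is not the specific numbers; it is that a credible answer to "when?" is a probability distribution with a confidence level, not a single date.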

When it is conjectured that decisions can be made in the presence of uncertainty without estimates - as the Original Poster did - and there is no supporting theory, principle, practice, or evidence of how that can be done, run away: it's a bogus claim.

Related articles:
  • Making Decisions In The Presence of Uncertainty
  • Essential Reading List for Managing Other People's Money
  • Herding Cats: Decision Making in the Presence of Uncertainty
  • Herding Cats: Thoughts on Suggestions That Violate Principles of Finance, Probability, and Statistics
  • Making Conjectures Without Testable Outcomes
Categories: Project Management

Extreme Programming Explained: Embrace Change, Second Edition, Week 2 (Chapters 2-3)

[Image: XP Explained]

This week we begin Section One of Kent Beck’s Extreme Programming Explained, Second Edition (2005), titled Exploring XP, by tackling Chapters 2 and 3. The first two chapters in Section One provide us with an overview of the conceptual framework that underpins XP.

Chapter 2: Learning to Drive.  In this chapter, Beck challenges us to consider that developing a product or writing code is like driving.  The metaphor of driving is used because driving is not a one-time event where you start the car and point it in the right direction then take a nap.  Rather, driving is a process of constantly paying attention, gathering feedback and making a series of small changes so that you arrive at your goal healthy and whole.  As noted in the preface, the core XP paradigm is “stay aware, adapt, change.” Tying the driving metaphor and XP together is the common problem of mushy requirements. In XP customers drive the content of the system and steer by getting feedback and adapting based on short iterations. 

As a veteran of the quality movement of the 1990’s, XP with its “stay aware, adapt, change” empirical model is easily recognizable to me as a continuous improvement model that improves effectiveness, communication, competence, and productivity.

Chapter 3: Values, Principles, and Practices. XP, as a methodology, represents a very different approach to software development. Effectively using the practices that make up XP requires adopting a set of values that are implemented based on a set of principles. Practices are the things we do, and therefore they are clear and objective; they are done or they are not done. For example, a stand-up meeting is a practice; anyone can easily confirm whether a stand-up meeting is being held. In the book, Beck used the difference between a person who putters around the garden and a master gardener to illustrate why learning techniques and practices alone does not yield mastery. Values allow the practitioner to understand the environment, and to apply and evolve practices when the context demands. No framework stands as merely a set of rote practices, even if it might seem like just doing the practices leads to a useful outcome. This is why there are arguments about the ideas of “being Agile” rather than just “doing Agile.” The lure of just learning and applying practices without a framework of values and principles is not limited to software development. On my bookshelf there is a book titled You Can’t Teach A Kid To Ride A Bike At A Seminar. The book is about sales (I believe everyone should have sales training). Like most how-to books it is full of practices and techniques, with one big difference: the book makes the point that without a set of values and principles we are just robots that don’t have the level of knowledge and understanding needed to apply those practices when the context changes. Values, principles, and practices are all necessary to be a gardener, salesperson, or developer.

Values are an overarching set of rules that define the boundaries of behavior. Values define what we believe in and are used to judge what we see, think, and do. Most cognitive biases are driven by the values a person or team holds. A team that values process more than people will act much differently than a team with the opposite set of values. Defining explicit values provides people with purpose and direction.

Principles act as the connecting tissue between practices and values.  Principles provide the context-specific guidelines for implementing practices based on the values the team holds.  Beck uses the metaphor of a bridge between values and practices. 

I often talk to people who begin a transformation by adopting practices before thinking about values and principles. Perhaps a small group of true believers have drunk the Kool-Aid and have institutionalized the values and principles that are at the core of the framework; however, that belief structure does not extend very far beyond that core group. This approach yields highly variable results and rarely delivers long-term change, whether we are discussing Agile, the CMMI, or a new sales approach. The rationale often seems to be that adopting practices makes it clear that “change” has occurred, because it is easy to see actions. However, because practices are situational, without the guidance of internalized values and principles it is difficult to mold practices to situations. When a practice doesn’t seem to fit, domain-specific guidance is needed, or the practice will be abandoned.

Previous installment of Extreme Programming Explained, Second Edition (2005) on Re-read Saturday:

Extreme Programming Explained: Embrace Change Second Edition Week 1, Preface and Chapter 1


Categories: Process Management

Improving Stability with Private C/C++ Symbol Restrictions in Android N

Android Developers Blog - Sat, 06/25/2016 - 02:04

Posted by Dimitry Ivanov & Elliott Hughes, Software Engineers

As documented in the Android N behavioral changes, to protect Android users and apps from unforeseen crashes, Android N will restrict which libraries your C/C++ code can link against at runtime. As a result, if your app uses any private symbols from platform libraries, you will need to update it to either use the public NDK APIs or to include its own copy of those libraries. Some libraries are public: the NDK exposes libandroid, libc, libcamera2ndk, libdl, libGLES, libjnigraphics, liblog, libm, libmediandk, libOpenMAXAL, libOpenSLES, libstdc++, libvulkan, and libz as part of the NDK API. Other libraries are private, and Android N only allows access to them for platform HALs, system daemons, and the like. If you aren’t sure whether your app uses private libraries, you can immediately check it for warnings on the N Developer Preview.
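If you'd rather scan your binaries offline first, a rough sketch like the following can help (this is not an official tool; it assumes the binutils readelf utility is on your PATH, and that the GLES entries correspond to the versioned libGLESv* sonames). It lists each bundled library's NEEDED entries and flags anything outside the public list above:

import re
import subprocess
import sys
from pathlib import Path

# Public NDK libraries named in this post (GLES expanded to its sonames).
PUBLIC = {"libandroid", "libc", "libcamera2ndk", "libdl", "libGLESv1_CM",
          "libGLESv2", "libGLESv3", "libjnigraphics", "liblog", "libm",
          "libmediandk", "libOpenMAXAL", "libOpenSLES", "libstdc++",
          "libvulkan", "libz"}

def needed(so_path: Path) -> list:
    # Parse the (NEEDED) entries out of readelf's dynamic-section dump.
    out = subprocess.run(["readelf", "-d", str(so_path)],
                         capture_output=True, text=True, check=True).stdout
    return re.findall(r"\(NEEDED\)\s+Shared library: \[(.+?)\]", out)

# Usage: python check_libs.py path/to/unpacked/apk/lib
for so in Path(sys.argv[1]).rglob("*.so"):
    for dep in needed(so):
        stem = dep[:-3] if dep.endswith(".so") else dep
        if stem not in PUBLIC:
            print(f"{so.name}: depends on non-public library {dep}")

Testing on the N Developer Preview itself remains the authoritative check; this only catches direct link-time dependencies.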

We’re making this change because it’s painful for users when their apps stop working after a platform update. Whether they blame the app developer or the platform, everybody loses. Users should have a consistent app experience across updates, and developers shouldn’t have to make emergency app updates to handle platform changes. For that reason, we recommend against using private C/C++ symbols. Private symbols aren’t tested as part of the Compatibility Test Suite (CTS) that all Android devices must pass. They may not exist, or they may behave differently. This makes apps that use them more likely to fail on specific devices, or on future releases — as many developers found when Android 6.0 Marshmallow switched from OpenSSL to BoringSSL.

You may be surprised that there’s no STL in the list of NDK libraries. The three STL implementations included in the NDK — the LLVM libc++, the GNU STL, and libstlport — are intended to be bundled with your app, either by statically linking into your library, or by inclusion as a separate shared library. In the past, some developers have assumed that they didn’t need to package the library because the OS itself had a copy. This assumption is incorrect: a particular STL implementation may disappear (as was the case with stlport, which was removed in Marshmallow), may never have been available (as is the case with the GNU STL), or it may change in ABI incompatible ways (as is the case with the LLVM libc++).

In order to reduce the user impact of this transition, we’ve identified a set of libraries that see significant use from Google Play’s most-installed apps, and that are feasible for us to support in the short term (including libandroid_runtime.so, libcutils.so, libcrypto.so, and libssl.so). For legacy code in N, we will temporarily support these libraries in order to give you more time to transition. Note that we don't intend to continue this support in any future Android platform release, so if you see a warning that means your code will not work in a future release — please fix it now!

Table 1. What to expect if your app is linking against private native libraries.

| Libraries | App's targetSdkVersion | Runtime access via dynamic linker | Impact, N Developer Preview | Impact, Final N Release | Impact, future platform version |
| NDK (public) | Any | Accessible | | | |
| Private (graylist) | <=23 | Temporarily accessible | Warning / Toast | Warning | Error |
| Private (graylist) | >=24 | Restricted | Error | Error | Error |
| Private (all other) | Any | Restricted | Error | Error | Error |

What behavior will I see?

Please test your app during the N Previews.

N Preview behavior
  • All public NDK libraries (libandroid, libc, libcamera2ndk, libdl, libGLES, libjnigraphics, liblog, libm, libmediandk, libOpenMAXAL, libOpenSLES, libstdc++, libvulkan, and libz), plus libraries that are part of your app are accessible.
  • For all other libraries you’ll see a warning in logcat and a toast on the display. This will happen only if your app’s targetSdkVersion is less than N. If you change your manifest to target N, loading will fail: Java’s System.loadLibrary will throw, and C/C++’s dlopen(3) will return NULL.

Test your apps on the Developer Preview — if you see a toast like this one, your app is accessing private native APIs. Please fix your code soon!

N Final Release behavior
  • All NDK libraries (libandroid, libc, libcamera2ndk, libdl, libGLES, libjnigraphics, liblog, libm, libmediandk, libOpenMAXAL, libOpenSLES, libstdc++, libvulkan, and libz), plus libraries that are part of your app are accessible.
  • For the temporarily accessible libraries (such as libandroid_runtime.so, libcutils.so, libcrypto.so, and libssl.so), you’ll see a warning in logcat for all API levels before N, but loading will fail if you update your app so that its targetSdkVersion is N or later.
  • Attempts to load any other libraries will fail in the final release of Android N, even if your app is targeting a pre-N platform version.
Future platform behavior
  • In O, all access to the temporarily accessible libraries will be removed. As a result, you should plan to update your app regardless of your targetSdkVersion prior to O. If you believe there is missing functionality from the NDK API that will make it impossible for you to transition off a temporarily accessible library, please file a bug here.
What do the errors look like?

Here’s some example logcat output from an app that hasn’t bumped its target SDK version (and so the restriction isn’t fully enforced because this is only the developer preview):

03-21 17:07:51.502 31234 31234 W linker  : library "libandroid_runtime.so"
("/system/lib/libandroid_runtime.so") needed or dlopened by
"/data/app/com.popular-app.android-2/lib/arm/libapplib.so" is not accessible
for the namespace "classloader-namespace" - the access is temporarily granted
as a workaround for http://b/26394120

This is telling you that your library “libapplib.so” refers to the library “libandroid_runtime.so”, which is a private library.

When Android N ships, or if you set your target SDK version to N now, you’ll see something like this if you try to use System.loadLibrary from Java:

java.lang.UnsatisfiedLinkError: dlopen failed: library "libcutils.so"
("/system/lib/libcutils.so") needed or dlopened by "/system/lib/libnativeloader.so"
is not accessible for the namespace "classloader-namespace"
  at java.lang.Runtime.loadLibrary0(Runtime.java:977)
  at java.lang.System.loadLibrary(System.java:1602)

If you’re using dlopen(3) from C/C++ you’ll get a NULL return and dlerror(3) will return the same “dlopen failed...” string as shown above.

For more information about how to check if your app is using private symbols, see the FAQ on developer.android.com.

Categories: Programming

#NoEstimates Book Review - Part 1

Herding Cats - Glen Alleman - Sat, 06/25/2016 - 01:17

I've started reading Vasco's book #NoEstimates and will write a detailed deconstruction. I got the Kindle version, so I have a $10 investment at risk. Let's start with some graphs that have been around for a while and the misinformation drawn from them that forms the basis of the book.

The Chaos Report graph is the first one. This graph is from old 2004 numbers - 12-year-old numbers. Many times the book uses 12-, 16-, even 25-year-old reports as the basis of the suggestion that not estimating fixes the problems in those reports. The Chaos reports have been thoroughly debunked as self-selected samples using uncalibrated surveys as the units of measure for project failure. Here are a few comments on the Standish reporting process. But first remember, Standish does not say what the units of measure are for Success, Challenged, or Failure. Without the units of measure, the actual statistics of the projects, and the statistical ranges of those projects for each of the three categories, the numbers are essentially bogus. Good fodder for selling consulting services or for use by those with an idea to sell, but worthless for decision making about the root cause of Failure, Challenged, or even Success. Any undergraduate design-of-experiments class would require all that information to be public.

So the first thing to read when you encounter data like this is Project Success: A Multidimensional Strategic Concept, by Aaron J. Shenhar, Dov Dvir, Ofer Levy and Alan C. Maltz. Only then start to assess the numbers. Most likely, like the numbers in the book, they're not credible enough to support the hypothesis - which, by the way, is never stated as a testable hypothesis: that you can make decisions in the presence of uncertainty without estimating.

So let's look further at the difficulties with Standish and why NOT to use it as the basis of a conjecture.

A simple Google search would have found all this research and much more. I get the sense V didn't do his homework. The bibliography has very few references to actual estimating - no actual estimating books, papers, or research sites. Just personal anecdotes from a set of experiences as a developer.

The Standish Report failure mode is described in Darrell Huff's How to Lie With Statistics: self-selecting the samples from the survey. Standish does not provide any population statistics for their survey.

  • How many surveys were set out?
  • How many responded?
  • Of those responding, how many were statistically significant?
  • What does it mean, in terms of actual measures of performance, for a project to be troubled?
  • If the project was over budget, was the needed budget estimate correct?
  • If the project was late, was the original schedule credible?

None of these questions are answered in Standish reports. No Estimates picks up these serious statistical sampling errors and uses them as the basis of the pure conjecture that not estimating is going to fix the problems. This would garner a D in a high school stats class.
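To see why the sample questions above matter, here is a small sketch of the margin of error that should accompany any survey-based failure rate; the 53%-of-400 numbers are invented for illustration:

import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% normal-approximation margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

p, n = 0.53, 400  # invented: a reported 53% "challenged" rate from 400 responses
print(f"{p:.0%} +/- {margin_of_error(p, n):.1%}")  # 53% +/- 4.9%
# Without the sample size and the selection method, no such interval
# can be attached to the reported percentages at all.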

[Image: Screen Shot 2016-06-24 at 2.08.13 PM]

Next comes a chart that makes a similar error. The reference is to Steve McConnell's book, but the chart is actually from another source. The No Estimates book does a poor job of keeping its references straight; it is common for it to misattribute a report, a graph, even a phrase. The book needs a professional editor.

The graph below is used to show that estimates are usually wrong. But there is a critical misunderstanding about the data.

  • It starts with a straight line called perfect accuracy. First, there are two attributes of any estimate: accuracy and precision. There is no such thing as perfect accuracy in the estimating business; an estimate is an approximation of reality.
  • The sample projects show they did not meet the perfect accuracy - whatever that might have been. This knowledge can only be obtained after the work has been done, either at the end or during the project (cumulative to date).
  • But there are two sources of the variances between estimates and actuals.

[Image: Screen Shot 2016-06-24 at 2.08.06 PM]

I'm in the early parts of the book and already have a half dozen pages of notes on fallacies, incorrect principles, 30-year-old references, and other serious misunderstandings of how decisions are made in the presence of uncertainty. My short assessment is:

It's a concept built on sand.

Related articles:
  • Myth's Abound
  • All Project Numbers are Random Numbers - Act Accordingly
  • The Failure of Open Loop Thinking
  • How to "Lie" with Statistics
  • Statistical Significance
  • How to Estimate Software Development
  • Capabilities Based Planning
  • IT Risk Management
  • Why Guessing is not Estimating and Estimating is not Guessing
  • Making Conjectures Without Testable Outcomes
Categories: Project Management

Grow your business on Google Play with help from the new Playbook for Developers app

Android Developers Blog - Fri, 06/24/2016 - 18:08

Posted by Dom Elliott, the Google Play team

The Playbook for Developers mobile app is now generally available for Android devices. The app helps you stay up-to-date with the features and best practices to grow your business on Google Play. Thanks to all our beta testers over the last six weeks, whose feedback helped us tweak and refine the app in preparation for launch.

Here’s how you read and watch content in the Playbook for Developers app:

  • Choose topics relating to your business interests to personalize My Playbook with curated articles and videos from Google and experts across the web.
  • Explore the in-depth guide to Google’s developer products, with articles grouped by what you’re trying to do: develop, launch, engage, grow, and earn.
  • Take actions on items – complete, share, save, or dismiss them – and read your Saved articles later, including offline if they’re written in the app. A data connection will be needed to read articles and videos from across the web.

The app supports Android 5.0 and above. If you're on an older device, check out our ebook, The Secrets to App Success on Google Play. We will be adding and updating content in the app to help you stay up-to-date and grow your business. Get the Playbook for Developers app today and then give us your feedback. The app is also available in the following languages: Bahasa Indonesia, Deutsch, español (Latinoamérica), le français, português do Brasil, tiếng Việt, русский язы́к, 한국어, 中文 (简体), 中文 (繁體), and 日本語.

This is the second app we’ve released for Google Play developers. Get the Google Play Developer Console app to review your app's performance statistics and financial data, get notified about your app's status and publishing changes, and read and reply to user reviews on the go.

Categories: Programming

Unix: Find files greater than date

Mark Needham - Fri, 06/24/2016 - 17:56

For the latter part of the week I’ve been running some tests against Neo4j which generate a bunch of log files and I wanted to filter those files based on the time they were created to do some further analysis.

This is an example of what the directory listing looks like:

$ ls -alh foo/database-agent-*
-rw-r--r--  1 markneedham  wheel   2.5K 23 Jun 14:00 foo/database-agent-mac17f73-1-logs-archive-201606231300176.tar.gz
-rw-r--r--  1 markneedham  wheel   8.6K 23 Jun 11:49 foo/database-agent-mac19b6b-1-logs-archive-201606231049507.tar.gz
-rw-r--r--  1 markneedham  wheel   8.6K 23 Jun 11:49 foo/database-agent-mac1f427-1-logs-archive-201606231049507.tar.gz
-rw-r--r--  1 markneedham  wheel   2.5K 23 Jun 14:00 foo/database-agent-mac29389-1-logs-archive-201606231300176.tar.gz
-rw-r--r--  1 markneedham  wheel    11K 23 Jun 13:44 foo/database-agent-mac3533f-1-logs-archive-201606231244152.tar.gz
-rw-r--r--  1 markneedham  wheel   4.8K 23 Jun 14:00 foo/database-agent-mac35563-1-logs-archive-201606231300176.tar.gz
-rw-r--r--  1 markneedham  wheel   3.8K 23 Jun 13:44 foo/database-agent-mac35f7e-1-logs-archive-201606231244165.tar.gz
-rw-r--r--  1 markneedham  wheel   4.8K 23 Jun 14:00 foo/database-agent-mac40798-1-logs-archive-201606231300176.tar.gz
-rw-r--r--  1 markneedham  wheel    12K 23 Jun 13:44 foo/database-agent-mac490bf-1-logs-archive-201606231244151.tar.gz
-rw-r--r--  1 markneedham  wheel   2.5K 23 Jun 14:00 foo/database-agent-mac5f094-1-logs-archive-201606231300189.tar.gz
-rw-r--r--  1 markneedham  wheel   5.8K 23 Jun 14:00 foo/database-agent-mac636b8-1-logs-archive-201606231300176.tar.gz
-rw-r--r--  1 markneedham  wheel   9.5K 23 Jun 11:49 foo/database-agent-mac7e165-1-logs-archive-201606231049507.tar.gz
-rw-r--r--  1 markneedham  wheel   2.7K 23 Jun 11:49 foo/database-agent-macab7f1-1-logs-archive-201606231049507.tar.gz
-rw-r--r--  1 markneedham  wheel   2.8K 23 Jun 13:44 foo/database-agent-macbb8e1-1-logs-archive-201606231244151.tar.gz
-rw-r--r--  1 markneedham  wheel   3.1K 23 Jun 11:49 foo/database-agent-macbcbe8-1-logs-archive-201606231049520.tar.gz
-rw-r--r--  1 markneedham  wheel    13K 23 Jun 13:44 foo/database-agent-macc8177-1-logs-archive-201606231244152.tar.gz
-rw-r--r--  1 markneedham  wheel   3.8K 23 Jun 13:44 foo/database-agent-maccd92c-1-logs-archive-201606231244151.tar.gz
-rw-r--r--  1 markneedham  wheel   3.9K 23 Jun 13:44 foo/database-agent-macdf24f-1-logs-archive-201606231244165.tar.gz
-rw-r--r--  1 markneedham  wheel   3.1K 23 Jun 11:49 foo/database-agent-mace075e-1-logs-archive-201606231049520.tar.gz
-rw-r--r--  1 markneedham  wheel   3.1K 23 Jun 11:49 foo/database-agent-mace8859-1-logs-archive-201606231049507.tar.gz

I wanted to split the files in half so that I could have the ones created before and after 12pm on the 23rd June.

I discovered that this type of filtering is actually quite easy to do with the ‘find’ command. So if I want to get the files after 12pm I could write the following:

$ find foo -name database-agent* -newermt "Jun 23, 2016 12:00" -ls
121939705        8 -rw-r--r--    1 markneedham      wheel                2524 23 Jun 14:00 foo/database-agent-mac17f73-1-logs-archive-201606231300176.tar.gz
121939704        8 -rw-r--r--    1 markneedham      wheel                2511 23 Jun 14:00 foo/database-agent-mac29389-1-logs-archive-201606231300176.tar.gz
121934591       24 -rw-r--r--    1 markneedham      wheel               11294 23 Jun 13:44 foo/database-agent-mac3533f-1-logs-archive-201606231244152.tar.gz
121939707       16 -rw-r--r--    1 markneedham      wheel                4878 23 Jun 14:00 foo/database-agent-mac35563-1-logs-archive-201606231300176.tar.gz
121934612        8 -rw-r--r--    1 markneedham      wheel                3896 23 Jun 13:44 foo/database-agent-mac35f7e-1-logs-archive-201606231244165.tar.gz
121939708       16 -rw-r--r--    1 markneedham      wheel                4887 23 Jun 14:00 foo/database-agent-mac40798-1-logs-archive-201606231300176.tar.gz
121934589       24 -rw-r--r--    1 markneedham      wheel               12204 23 Jun 13:44 foo/database-agent-mac490bf-1-logs-archive-201606231244151.tar.gz
121939720        8 -rw-r--r--    1 markneedham      wheel                2510 23 Jun 14:00 foo/database-agent-mac5f094-1-logs-archive-201606231300189.tar.gz
121939706       16 -rw-r--r--    1 markneedham      wheel                5912 23 Jun 14:00 foo/database-agent-mac636b8-1-logs-archive-201606231300176.tar.gz
121934588        8 -rw-r--r--    1 markneedham      wheel                2895 23 Jun 13:44 foo/database-agent-macbb8e1-1-logs-archive-201606231244151.tar.gz
121934590       32 -rw-r--r--    1 markneedham      wheel               13427 23 Jun 13:44 foo/database-agent-macc8177-1-logs-archive-201606231244152.tar.gz
121934587        8 -rw-r--r--    1 markneedham      wheel                3882 23 Jun 13:44 foo/database-agent-maccd92c-1-logs-archive-201606231244151.tar.gz
121934611        8 -rw-r--r--    1 markneedham      wheel                3970 23 Jun 13:44 foo/database-agent-macdf24f-1-logs-archive-201606231244165.tar.gz

And to get the ones before 12pm:

$ find foo -name database-agent* -not -newermt "Jun 23, 2016 12:00" -ls
121879391       24 -rw-r--r--    1 markneedham      wheel                8856 23 Jun 11:49 foo/database-agent-mac19b6b-1-logs-archive-201606231049507.tar.gz
121879394       24 -rw-r--r--    1 markneedham      wheel                8772 23 Jun 11:49 foo/database-agent-mac1f427-1-logs-archive-201606231049507.tar.gz
121879390       24 -rw-r--r--    1 markneedham      wheel                9702 23 Jun 11:49 foo/database-agent-mac7e165-1-logs-archive-201606231049507.tar.gz
121879393        8 -rw-r--r--    1 markneedham      wheel                2812 23 Jun 11:49 foo/database-agent-macab7f1-1-logs-archive-201606231049507.tar.gz
121879413        8 -rw-r--r--    1 markneedham      wheel                3144 23 Jun 11:49 foo/database-agent-macbcbe8-1-logs-archive-201606231049520.tar.gz
121879414        8 -rw-r--r--    1 markneedham      wheel                3131 23 Jun 11:49 foo/database-agent-mace075e-1-logs-archive-201606231049520.tar.gz
121879392        8 -rw-r--r--    1 markneedham      wheel                3130 23 Jun 11:49 foo/database-agent-mace8859-1-logs-archive-201606231049507.tar.gz

Or we could even find the ones last modified between 12pm and 2pm:

$ find foo -name database-agent* -not -newermt "Jun 23, 2016 14:00" -newermt "Jun 23, 2016 12:00" -ls
121934591       24 -rw-r--r--    1 markneedham      wheel               11294 23 Jun 13:44 foo/database-agent-mac3533f-1-logs-archive-201606231244152.tar.gz
121934612        8 -rw-r--r--    1 markneedham      wheel                3896 23 Jun 13:44 foo/database-agent-mac35f7e-1-logs-archive-201606231244165.tar.gz
121934589       24 -rw-r--r--    1 markneedham      wheel               12204 23 Jun 13:44 foo/database-agent-mac490bf-1-logs-archive-201606231244151.tar.gz
121934588        8 -rw-r--r--    1 markneedham      wheel                2895 23 Jun 13:44 foo/database-agent-macbb8e1-1-logs-archive-201606231244151.tar.gz
121934590       32 -rw-r--r--    1 markneedham      wheel               13427 23 Jun 13:44 foo/database-agent-macc8177-1-logs-archive-201606231244152.tar.gz
121934587        8 -rw-r--r--    1 markneedham      wheel                3882 23 Jun 13:44 foo/database-agent-maccd92c-1-logs-archive-201606231244151.tar.gz
121934611        8 -rw-r--r--    1 markneedham      wheel                3970 23 Jun 13:44 foo/database-agent-macdf24f-1-logs-archive-201606231244165.tar.gz

Or we can filter by relative time e.g. to find the files last modified in the last 1 day, 5 hours:

$ find foo -name database-agent* -mtime -1d5h -ls
121939705        8 -rw-r--r--    1 markneedham      wheel                2524 23 Jun 14:00 foo/database-agent-mac17f73-1-logs-archive-201606231300176.tar.gz
121939704        8 -rw-r--r--    1 markneedham      wheel                2511 23 Jun 14:00 foo/database-agent-mac29389-1-logs-archive-201606231300176.tar.gz
121934591       24 -rw-r--r--    1 markneedham      wheel               11294 23 Jun 13:44 foo/database-agent-mac3533f-1-logs-archive-201606231244152.tar.gz
121939707       16 -rw-r--r--    1 markneedham      wheel                4878 23 Jun 14:00 foo/database-agent-mac35563-1-logs-archive-201606231300176.tar.gz
121934612        8 -rw-r--r--    1 markneedham      wheel                3896 23 Jun 13:44 foo/database-agent-mac35f7e-1-logs-archive-201606231244165.tar.gz
121939708       16 -rw-r--r--    1 markneedham      wheel                4887 23 Jun 14:00 foo/database-agent-mac40798-1-logs-archive-201606231300176.tar.gz
121934589       24 -rw-r--r--    1 markneedham      wheel               12204 23 Jun 13:44 foo/database-agent-mac490bf-1-logs-archive-201606231244151.tar.gz
121939720        8 -rw-r--r--    1 markneedham      wheel                2510 23 Jun 14:00 foo/database-agent-mac5f094-1-logs-archive-201606231300189.tar.gz
121939706       16 -rw-r--r--    1 markneedham      wheel                5912 23 Jun 14:00 foo/database-agent-mac636b8-1-logs-archive-201606231300176.tar.gz
121934588        8 -rw-r--r--    1 markneedham      wheel                2895 23 Jun 13:44 foo/database-agent-macbb8e1-1-logs-archive-201606231244151.tar.gz
121934590       32 -rw-r--r--    1 markneedham      wheel               13427 23 Jun 13:44 foo/database-agent-macc8177-1-logs-archive-201606231244152.tar.gz
121934587        8 -rw-r--r--    1 markneedham      wheel                3882 23 Jun 13:44 foo/database-agent-maccd92c-1-logs-archive-201606231244151.tar.gz
121934611        8 -rw-r--r--    1 markneedham      wheel                3970 23 Jun 13:44 foo/database-agent-macdf24f-1-logs-archive-201606231244165.tar.gz

Or the ones modified more than 1 day, 5 hours ago:

$ find foo -name database-agent* -mtime +1d5h -ls
121879391       24 -rw-r--r--    1 markneedham      wheel                8856 23 Jun 11:49 foo/database-agent-mac19b6b-1-logs-archive-201606231049507.tar.gz
121879394       24 -rw-r--r--    1 markneedham      wheel                8772 23 Jun 11:49 foo/database-agent-mac1f427-1-logs-archive-201606231049507.tar.gz
121879390       24 -rw-r--r--    1 markneedham      wheel                9702 23 Jun 11:49 foo/database-agent-mac7e165-1-logs-archive-201606231049507.tar.gz
121879393        8 -rw-r--r--    1 markneedham      wheel                2812 23 Jun 11:49 foo/database-agent-macab7f1-1-logs-archive-201606231049507.tar.gz
121879413        8 -rw-r--r--    1 markneedham      wheel                3144 23 Jun 11:49 foo/database-agent-macbcbe8-1-logs-archive-201606231049520.tar.gz
121879414        8 -rw-r--r--    1 markneedham      wheel                3131 23 Jun 11:49 foo/database-agent-mace075e-1-logs-archive-201606231049520.tar.gz
121879392        8 -rw-r--r--    1 markneedham      wheel                3130 23 Jun 11:49 foo/database-agent-mace8859-1-logs-archive-201606231049507.tar.gz

There are lots of other flags you can pass to find but these ones did exactly what I wanted!
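As an aside, if you ever need the same filtering inside a script rather than at the shell, a minimal Python sketch (not from the original post) reproduces the first find command:

from datetime import datetime
from pathlib import Path

# Keep files modified at or after noon (local time) on 23 June 2016.
cutoff = datetime(2016, 6, 23, 12, 0)

for path in sorted(Path("foo").glob("database-agent-*")):
    mtime = datetime.fromtimestamp(path.stat().st_mtime)
    if mtime >= cutoff:
        print(f"{mtime:%d %b %H:%M}  {path}")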

Categories: Programming

Stuff The Internet Says On Scalability For June 24th, 2016

Hey, it's HighScalability time:


[Image: A complete and accurate demonstration of the internals of a software system.]

If you like this sort of Stuff then please support me on Patreon.
  • 79: podcasts for developers; 100 million: daily voice calls made on WhatsApp; 2,000: cars Tesla builds each week; 2078 lbs: weight it takes to burst an exercise ball; 500 million: Instagram users; > 100M: hours watched per day on Netflix; 400 PPM: Antarctica’s CO2 level; 2.5 PB: New Relic SSD storage.

  • Quotable Quotes:
    • Alan Kay: The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web, in comparison, is a joke. The Web was done by amateurs.
    • @jaykreps: Actually, yes: distributed systems are hard, but getting 100+ engineers to work productively on one app is harder.
    • @adrianco: All in 2016: Serverless Architecture: AWS Lambda, Codeless Architecture: Mendix, Architectureless Architecture: SaaS
    • @AhmetAlpBalkan: "That's MS SQL Server running on Ubuntu on Docker Swarm on Docker Datacenter on @Azure" @markrussinovich #dockercon
    • Erik Darling: EVERYTHING’S EASY WHEN IT’S WORKING
    • @blueben: Bold claim by @brianl: Most of the best tech of the last 10 years has come out of work at Google. #VelocityConf
    • Joe: there is no such thing as a silver bullet … no magic pixie dust, or magical card, or superfantastic software you can add to a system to make it incredibly faster. Faster, better performing systems require better architecture (physical, algorithmic, etc.). You really cannot hope to throw a metric-ton of machines at a problem and hope that scaling is simple and linear. Because it really never works like that.
    • Eran Hammer: The web is the present and it’s a f*cking mess. Deal with it.
    • @etherealmind: If you believe in DevOps/NetOps you have to believe that leaving Europe is a difficult but better course of action. Small, fast & fluid
    • Sanfilippo: Redis is currently not good for data problems where write safety is very important. One of the main design sacrifices Redis makes in order to provide many good things is data retention. It has best effort consistency and it has a configurable level of write safety, but it’s optimized for use cases where most of the time, you have your data, but in cases of large incidents you can lose a little bit of it. 
    • David Smith: The best time you are ever going to have to make a new app is when there's a new iOS update. Go through the diffs. Go through the What's New? Find something that couldn't be possible before and make an app around that. 
    • @DanielEricLee: There was a timezone bug so I wrote a test and then the test failed because the CI runs in a different timezone and then I became a farmer
    • @jasongorman: Reminds me of someone I know who led a dev team who built something that won an award. None of team invited to awards bash. Only snr mgmt.
    • David Robinson: My advice to graduate students: create public artifacts
    • @cdixon: Because distributed clusters of commodity machines are more powerful.
    • @wmf: 128-port TORs have outrun the compute capacity of most racks, so putting two mini-TORs in 1U is a great idea.
    • msravi: I moved from AWS EC2 to Google Cloud a few days ago. Google really seems to have beaten AWS, at least in pricing and flexibility. On AWS (Singapore region) a 2-vCPU, 7.5G RAM instance costs $143/month (not including IOPS and bandwidth costs), while a similar one on GC works out to about $56/month. That's a massive difference. In addition, GC allows me to customize cores and RAM flexibly to a point, which is important for me.
    • Mobile Breakfast: What is clear that we will get rid of the classic circuit-switched technology and move to all IP networks fairly soon in the US.
    • Douglas Rushkoff: I think they all come down to how you can optimize your business or the economy for flow over growth; for the circulation of money rather than the extraction of money.
    • Alan Hastings: [Species] that go into synchrony may be more subject to extinction because a single driver can trigger a collapse
    • @TechCrunch: More cars than phones were connected to cell service in Q1 http://tcrn.ch/28MLtmt  by @kristenhg
    • @docker: "Nobody cares about #containers, it's the application that matters!" - @solomonstre, CTO @Docker #DockerCon
    • @cmeik: The most commercially successful NoSQL database: Lotus Notes.
    • Brittany Fleit: behavior-based push results in open rates 800 percent higher than immediate blasts. Personalizing a message based on individual actions garners much more engagement.

  • Dilbert, of course, nails it on AI.

  • How will hackers be stopped from using Return-oriented programming (ROP) to execute privilege escalation attacks? ROP was created when "clever hackers realized that they could successively invoke snippets of code at the end of existing subroutines as 'proxies' to accomplish the work they needed done." Randomizing memory locations didn't stop them, so Intel created new hardware magic called Control-flow Enforcement Technology: a new "ENDBRANCH" instruction and a new "shadow stack". It turns out the immense power and beauty of the stack in von Neumann architectures is also a great weakness. The story of life. Steve Gibson gives an inspiring deep dive on CET in Security Now 565.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Beginning An Agile Effort: Breaking Down The Big Picture


How the big picture gets made…

A high-level narrative or story is a good first step in the process of turning an idea into something that delivers real business value.  However, it is only a step along the path through the backlog onion.  Software development (including new development, enhancements, and maintenance work) requires translating the big picture into a product backlog that includes features and epics.  As work progresses the backlog is further decomposed and groomed into user stories and tasks.  The process of moving from any big picture to a backlog in software development follows a predictable pattern.   

  1. Decompose the big picture by identifying the functional and architectural features directly identified or inferred in the story.  This initial step yields a combination of business requirements and the beginning of both the technical and business architecture that will be needed to support the features.  Features of all types are a description of need and outcome and do not represent a decision on how that outcome will be implemented (real options).
  2. Organize the features using a mapping technique such as Agile Story Maps to look for gaps. Ask yourself and the team whether there are features that are needed to achieve the goal identified in the big picture story. This is also a chance for the product owner to reject stories that are not needed to attain the overall goal.
  3. Identify the minimum viable product (MVP). The identification of a minimum viable product at this point identifies the set of features that are needed to deploy a product.  An MVP begins the discussion of priority and a release strategy.  Simply put, features in the MVP will need to be decomposed before features that will be needed later.

An Agile Story Map and an MVP allow a team to cut a wedge into the planning onion that slices from the outer layers directly to the core. In the Scaled Agile Framework (SAFe) the wedge might be called a Program Increment (PI); in efforts using other frameworks and methods it might be called a release.

  1. Features and epics, while easily recognizable, rarely can be tackled by a single team in a single sprint; therefore, they need to be decomposed or split.  The process of decomposing epics and features breaks these large pieces of work into smaller pieces that can be understood and developed within the context of an individual sprint. The backlog of user stories is decomposed as needed so that, as a story gets closer to being accepted into a sprint and developed, the story becomes more and more granular.
  2. Backlog grooming is the final step before a story is accepted into a sprint or drawn onto the Kanban board. Grooming ensures that the story is ready for development (properly formed, sized correctly, understood, and given acceptance criteria) so that a team can get started quickly.

The big picture defines the outside of the backlog onion. However, after we peel the outside of the onion we can cut wedges directly to the core. Slicing wedges, like a minimum viable product, allows a team or teams to decompose and groom only the features and epics immediately needed, thereby getting the effort started quickly and effectively. A tiny sketch of the slicing idea follows.
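Purely as an illustration (every feature name below is hypothetical), an Agile Story Map with an MVP wedge can be modeled in a few lines:

# Features grouped by user activity, as on an Agile Story Map.
story_map = {
    "browse catalog": ["search by keyword", "filter by category"],
    "check out": ["pay by card", "pay by voucher"],
    "get support": ["contact form", "live chat"],
}

# The MVP wedge: the minimum set of features needed to deploy a product.
mvp = {"search by keyword", "pay by card", "contact form"}

# Decompose and groom only the MVP features first; defer the rest.
for activity, features in story_map.items():
    now = [f for f in features if f in mvp]
    later = [f for f in features if f not in mvp]
    print(f"{activity}: decompose now -> {now}; defer -> {later}")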
Next, a simple example!


Categories: Process Management

Introducing Firebase Authentication

Google Code Blog - Thu, 06/23/2016 - 21:35

Originally posted on Firebase blog

Posted by Laurence Moroney, Developer Advocate and Alfonso Gómez Jordana, Associate Product Manager

For most developers, building an authentication system for your app can feel a lot like paying taxes. Both are tasks that are relatively hard to understand, that you have no choice but to do, and that can have big consequences if you get them wrong. No one ever started a company to pay taxes, and no one ever built an app just so they could create a great login system. They just seem to be inescapable costs.

But now, you can at least free yourself from the auth tax. With Firebase Authentication, you can outsource your entire authentication system to Firebase so that you can concentrate on building great features for your app. Firebase Authentication makes it easier to get your users signed-in without having to understand the complexities behind implementing your own authentication system. It offers a straightforward getting started experience, optional UX components designed to minimize user friction, and is built on open standards and backed by Google infrastructure.

Implementing Firebase Authentication is relatively fast and easy. From the Firebase console, just choose from the popular login methods that you want to offer (like Facebook, Google, Twitter and email/password) and then add the Firebase SDK to your app. Your app will then be able to connect securely with the Firebase Realtime Database, Firebase Storage or your own custom back end. If you have an auth system already, you can use Firebase Authentication as a bridge to other Firebase features.
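To give a sense of the scale of effort, here is a minimal sketch of email/password sign-in inside an Activity, mirroring the pattern in Firebase's getting-started guides (details may vary by SDK version, and updateUI() is a hypothetical helper standing in for your own code):

    import android.support.annotation.NonNull;
    import android.util.Log;
    import com.google.android.gms.tasks.OnCompleteListener;
    import com.google.android.gms.tasks.Task;
    import com.google.firebase.auth.AuthResult;
    import com.google.firebase.auth.FirebaseAuth;

    private void signIn(String email, String password) {
        final FirebaseAuth auth = FirebaseAuth.getInstance();
        auth.signInWithEmailAndPassword(email, password)
                .addOnCompleteListener(this, new OnCompleteListener<AuthResult>() {
                    @Override
                    public void onComplete(@NonNull Task<AuthResult> task) {
                        if (task.isSuccessful()) {
                            // Signed in: tokens and session state are handled for you.
                            updateUI(auth.getCurrentUser()); // hypothetical UI helper
                        } else {
                            Log.w("SignIn", "signInWithEmailAndPassword failed",
                                    task.getException());
                        }
                    }
                });
    }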

Firebase Authentication also includes an open source UI library that streamlines building the many auth flows required to give your users a good experience. Password resets, account linking, and login hints that reduce the cognitive load around multiple login choices - they are all pre-built with Firebase Authentication UI. These flows are based on years of UX research optimizing the sign-in and sign-up journeys on Google, YouTube and Android. It includes Smart Lock for Passwords on Android, which has led to significant improvements in sign-in conversion for many apps. And because Firebase UI is open source, the interface is fully customizable so it feels like a completely natural part of your app. If you prefer, you are also free to create your own UI from scratch using our client APIs.

And Firebase Authentication is built around openness and security. It leverages OAuth 2.0 and OpenID Connect, industry standards designed for security, interoperability, and portability. Members of the Firebase Authentication team helped design these protocols and used their expertise to weave in the latest security practices - like ID tokens, revocable sessions, and native app anti-spoofing measures - to make your app easier to use and to avoid many common security problems. The code is independently reviewed by the Google Security team, and the service is protected by Google's infrastructure.

Fabulous uses Firebase Auth to quickly implement sign-in

Fabulous uses Firebase Authentication to power their login system. Fabulous is a research-based app incubated in Duke University’s Center for Advanced Hindsight. Its goal is to help users to embark on a journey to reset poor habits, replacing them with healthy rituals, with the ultimate goal of improving health and well-being.

The developers of Fabulous wanted to implement an onboarding flow that was easy to use, required minimal updates, and reduced friction for the end user. They wanted an anonymous option so that users could experiment with the app before signing up. They also wanted to support multiple login types and to keep the sign-in flow consistent with the look and feel of the app.

“I was able to implement auth in a single afternoon. I remember that I spent weeks before creating my own solution that I had to update each time the providers changed their API”
- Amine Laadhari, Fabulous CTO.

Malang Studio cut time-to-market by months using Firebase Auth

Chu-Day is an application (available on Android and iOS) that helps couples never forget the dates that matter most to them. It was created by the Korean firm Malang Studio, which develops character-centric, gamified lifestyle applications.

Generally, countdown and anniversary apps do not require users to sign in, but Malang Studio wanted to make Chu-Day special, and differentiate it from others, by offering the ability to connect couples so they could jointly count down to a special anniversary date. This required a sign-in feature, and in order to prevent users from dropping out, Chu-Day needed to make the sign-in process seamless.

Malang Studio was able to integrate an onboarding flow for their apps, using Facebook and Google Sign-In, in one day, without having to worry about server deployment or databases. In addition, Malang Studio has also been taking advantage of the Firebase User Management Console, which helped them develop and test their sign-in implementation as well as manage their users:

“Firebase Authentication required minimum configuration so implementing social account signup was easy and fast. User management feature provided in the console was excellent and we could easily implement our user auth system.”
- Marc Yeongho Kim, CEO / Founder from Malang Studio

For more about Firebase Authentication, visit the developers site and watch our I/O 2016 session, “Best practices for a great sign-in experience.”

Categories: Programming

Designing for Multi-Window

Android Developers Blog - Thu, 06/23/2016 - 18:10

Posted by Ian Lake, Developer Advocate

As a developer, there’s a wide range of features added in Android N to take advantage of, but when it comes to designing and building your UI, having strong multi-window support should be at the forefront.

The primary way users will interact with multi-window is split-screen mode, which is available on both handheld devices and larger tablets. In this mode, two apps split the available screen space, and the user can drag the divider between the two to resize them. As you might imagine, this mode offers some unique design challenges beyond what was needed previously.



An even more responsive UI

The lessons learned from previous versions of Android, the mobile web, and desktop environments still apply to Android N. Designing a responsive UI is still an important first step towards an amazing multi-window experience.

A responsive UI is one that adapts to the size provided, picking the best representation of the content and the appropriate navigation patterns to make a great user experience on any device. Check out the Building a Responsive UI blog post for details on how to design and build an effective responsive UI.

Adapting your layout

As you’re designing layouts for the largest and smallest screens and everything in between, it is important to make resizing a smooth and seamless transition as mentioned in the split screen layout guidelines. If you already have a similar layout between mobile and tablet, you’ll find much of your work taken care of for you.

However, if your mobile and tablet layouts are vastly different and there’s no way to smoothly transition between the two, you should not transition between them when resizing. Instead, focus on making your tablet UI scale down using the same responsive UI patterns. This ensures that users do not have to relearn their UI when resizing your app.

Note that the minimalHeight and minimalWidth layout attributes allow you to set a minimum size you want reported to your Activity, but they do not prevent the user from resizing your activity smaller - instead, your activity will be cropped to the size the user requests, potentially forcing elements of your UI off the screen. Strive to support sizes down to the minimum of 220x220dp.
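As a sketch of what reacting to resizing can look like in code, the two framework methods below are real Android N additions (API 24), while applyCompactLayout() is a hypothetical stand-in for whatever layout adjustments your app needs:

    import android.app.Activity;
    import android.os.Bundle;

    public class ResizableActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // Pick the right layout up front, e.g. when launched straight
            // into split-screen mode.
            applyCompactLayout(isInMultiWindowMode());
        }

        @Override
        public void onMultiWindowModeChanged(boolean isInMultiWindowMode) {
            super.onMultiWindowModeChanged(isInMultiWindowMode);
            // Called as the user enters or leaves split-screen mode.
            applyCompactLayout(isInMultiWindowMode);
        }

        private void applyCompactLayout(boolean compact) {
            // Hypothetical: collapse secondary panes, hide fixed chrome, etc.
        }
    }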

Design configurations to consider

While many of the sizes and aspect ratios possible in multi-window are similar to existing devices (1/3rd of a landscape tablet is similar to existing mobile devices in screen size), there are a few configurations that are much more common when considering multi-window.

The first is a 16:9 layout on mobile devices in portrait. In this case, the vertical space is extremely limited. If you have a number of fixed elements stacked on top of one another (a toolbar, scrolling content, and a bottom navigation bar), you might find there's not actually any room for the scrolling content - the most important part!

The second case to consider is the 34.15% layout on tablets. This split produces a very wide aspect ratio in device portrait and a very tall aspect ratio in device landscape - more extreme than anything found on existing devices. Consider using your mobile layouts as a starting point for this configuration.

Patterns to avoid

When it comes to multi-window, there are a few patterns you want to avoid entirely.

The first is UI interactions that rely on swiping from the edge of the screen. This has already been somewhat of an issue with the on-screen navigation bar prominent on many devices (such as Nexus devices), but it is even more so in split-screen mode. Since there is (purposefully) no way to determine if your activity is on the top or bottom or the left or the right, don't make edge swipes the only way to access functionality in your app. That doesn't mean you have to avoid them entirely - just make sure there is an alternative. A good example of this is the temporary navigation drawer - an edge swipe opens the drawer, but it is also accessible by pressing the hamburger icon in the toolbar.

The second is disabling multi-window entirely. While there are certainly cases where this makes sense (e.g., your app is fundamentally an immersive experience, such as a game), there are also cases where your activity, and any activities launched from it, are forced to support multi-window. As mentioned in the Preparing for Multi-Window blog post, when you support external apps launching your activity, your activity inherits the multi-window properties of the calling activity.

Designing for Multi-Window is designing for every device

Building a responsive UI that reacts to the space available is critical to a great multi-window experience, but it is an exercise that can benefit all of your users across the wide variety of Android devices.

So use this as an opportunity to #BuildBetterApps

Follow the Android Development Patterns Collection for more!

Categories: Programming

Product Owners and Learning, Part 3

Part 1 was about how the PO needs to see the big picture and develop the ranked backlog. Part 2 was about the learning that arises from small stories. This part is about ranking.

If you specify deliverables in your big picture and small picture roadmaps, you have already done a gross form of ranking. You have already made the big decisions: which feature/parts of features do you want when? You made those decisions based on value to someone.

I see many POs try to use estimation as their only input into ranking stories. How long will something take to complete? If you have a team who can estimate well, that might be helpful. It’s also helpful to see some quick wins if you can. See my most recent series of posts on Estimation for more discussion on ranking by estimation.

Estimation talks about cost. What about value? In agile, we want to work (and deliver) the most valuable work first.

Once you start to think about value, you might even think about value to all your different somebodies. (Jerry Weinberg said, “Quality is value to someone.”)  Now, you can start considering defects, technical debt, and features.

The PO must rank all three possibilities for a team: features, defects, and technical debt. If you are a PO who has feature-itis, you don’t serve the team, the customer, or the product. Difficult as it is, you have to think about all three to be an effective PO.

The features move the product forward on its roadmap. The defects prevent customers from being happy and block movement forward on the roadmap. Technical debt prevents easy releasing and might affect how easily the team can deliver. Your customers might not see technical debt, but they will feel its effects in the form of longer release times.

Long ago, I suggested that a specific client consider three backlogs to store the work and then use pair-wise comparison with each item at the top of each queue. (They stored their product backlog, defects, and technical debt in an electronic tool. It was difficult to see all of the possible work.) That way, they could see the work they needed to do (and not forget), and they could look at the value of doing each chunk of work. I’m not suggesting keeping three backlogs is a good idea in all cases. They needed to see—to make visible—all the possible work. Then, they could assess the value of each chunk of work.
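A minimal sketch of that top-of-queue, pair-wise comparison - the numeric value field is purely illustrative, standing in for the PO's judgment of value:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;

    public class BacklogPicker {
        static final class Item {
            final String name;
            final int value; // stand-in for the PO's judgment
            Item(String name, int value) { this.name = name; this.value = value; }
        }

        // Look only at the head of each queue (features, defects, technical
        // debt) and pull whichever head is most valuable right now.
        static Item pickNext(List<Deque<Item>> backlogs) {
            Deque<Item> best = null;
            for (Deque<Item> queue : backlogs) {
                if (queue.isEmpty()) continue;
                if (best == null || queue.peek().value > best.peek().value) {
                    best = queue;
                }
            }
            return best == null ? null : best.poll();
        }

        public static void main(String[] args) {
            Deque<Item> features = new ArrayDeque<>(List.of(new Item("Secure login", 8)));
            Deque<Item> defects = new ArrayDeque<>(List.of(new Item("Crash on export", 9)));
            Deque<Item> debt = new ArrayDeque<>(List.of(new Item("Automate smoke tests", 7)));
            // Prints "Crash on export": the most valuable of the three heads.
            System.out.println(pickNext(List.of(features, defects, debt)).name);
        }
    }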

You have many ways to see value. You might look at what causes delays in your organization:

  • Technical debt in the form of test automation debt. (Insufficient test automation makes frictionless releasing impossible. Insufficient unit test automation makes experiments and spikes impossible or quite long.)
  • Experts who are here, there, and everywhere, providing expertise to all teams. You often have to wait for those experts to arrive to your team.
  • Who is waiting for this? Do you have a Very Important Customer waiting for a fix or a feature?

You might see value in features for immediate revenue. I have worked in organizations where, if we released some specific feature, we could gain revenue right away. You might look at waste (one way to consider defects and technical debt).

Especially in programs, I see the need for the PO to say, “I need these three stories from this feature set and two stories from that other feature set.” The more the PO can decompose feature sets into small stories, the more flexibility they have for ranking each story on its own.

Here are questions to ask:

  • What is most valuable for our customers, for us to do now?
  • What is most valuable for our team, for us to do now?
  • What is most valuable for the organization, for us to do now?
  • What is most valuable for my learning, as a PO, to decide what to do next?

You might need to rearrange those questions for your context. The more your PO works by value, the more progress the team will make.

The next post will be about when the PO realizes he/she needs to change stories.

If you want to learn how to deliver what your customers want using agile and lean, join me in the next Product Owner workshop.

Categories: Project Management

Product Owners and Learning, Part 4

Part 1 was about how the PO needs to see the big picture and develop the ranked backlog. Part 2 was about the learning that arises from small stories. Part 3 was about ranking. In this part, I’ll discuss the product owner value team and how to make time to do “everything,” and especially how to change stories.

Let’s imagine you started developing your product before you started using agile. Your product owners (who might have been a combination of  product managers and business analysts) gave you a list of features, problems, and who knows what else for a release. They almost never discussed your technical debt with you. In my experience, they rarely discussed defects unless a Very Important Customer needed something fixed. Now, they’re supposed to provide you a ranked backlog of everything. It’s quite a challenge.

Let’s discuss the difference between a product manager and a product owner.


A product manager faces outward, seeing customers, asking them what they want, discussing dates and possibly even revenue. The product manager’s job is to shepherd the customer wishes into the product to increase the value of the product. In my world, the product manager has the responsibility for the product roadmap.

A product owner faces inward, working with the team. The PO’s job is to increase the value of the product. In my world, the PO works with the product manager (and the BAs if you have them) to create and update the product roadmap.

A business analyst might interview people (internal and external) to see what they want in the product. The BA might write stories with the PO or even the product manager.

The product manager and the product owners and any BAs are part of the Product Owner value team. The product owner value team works together to create and update the product roadmap. In a large organization, I’ve seen one product manager, several product owners and some number of BAs who work on one product throughout its lifetime. (I’ve also seen the BAs move around from product to product to help wherever they can be of use.)

What about you folks who work in IT and don’t release outside the company? You also need a product manager, except, with any luck, the product manager can walk down the hall to discuss what the customers need.

If you work in a small organization, yes, you may well have one person who does all of this work. Note: a product manager who is also a product owner is an overloaded operator. Overloaded people have trouble doing “all” the work. Why? Because product management is more strategic. Product ownership is more tactical.  You can’t work at different levels on an ongoing basis. Something wins—either the tactical work or the strategic work. (See Hiring Geeks That Fit for a larger discussion of this problem.)

When one person tries to do all the work, it’s possible that many other things suffer: feedback to the team, story breakdown, and ranking.

The Product Owner Value team takes the outside-learned information from customers/sponsors and the inside-learned information from the product development team (the people who write and test the product), and develops the roadmap to define the product direction.

In agile, you have many choices for release: continuous delivery, delivery at certain points (such as at the end of the iteration or every month or whenever “enough” features are done), or monthly/quarterly/some other time period.

Here’s the key for POs and change: the smaller the stories are or the more often the team can release stories, the more learning everyone gains. That learning informs the PO’s options for change.

In this example roadmap, you can see parts of feature sets in the first and second iterations. (I'm using iterations because they are easy to show in a picture and because people often want a cadence for releasing unless you do continuous delivery.)

If the Product Development team completes parts of feature sets, such as Admin, Part 1, the PO can decide if Admin, Part 2 or Diagnostics, Part 1 is next up for the team. In fact, if the PO has created quite small stories, it’s really easy to say, “Please do this story from Admin and that story from Diagnostics.” The question for the PO is what is most valuable right now: breadth or depth?

The PO can make that decision, if the PO has external information from the Product Manager and internal information from the BA and the team. The PO might not know about breadth or depth or some combination unless there is a Product Owner Value team.

Here are some questions when your PO wants everything:

  • What is more valuable to our customers: breadth across which parts of the product, or depth?
  • What is more valuable for our learning: breadth or depth?
  • Does anyone need to learn something from any breadth or depth?
  • What cadence of delivery do we need for us, our customers, anyone else?
  • What is the first small step that helps us learn and make progress?

These questions help the conversation. The roadmaps help everyone see where the Product Owner Value team wants the product to go.

Someone needs to learn about what the customers want. That person is outward-facing, and I call that person a Product Manager. Someone needs to create small stories and learn from what the team delivers. I call that person a Product Owner. Those people, along with the BAs, compose the Product Owner Value team and guide the business value of the product over time. The business value is not just features - it is also when to fix defects for a better customer experience, and when to address technical debt so the product development team has a better experience delivering value.

I’ll do a summary post next. (If you have questions I haven’t answered, let me know.)

If you want to learn how to deliver what your customers want using agile and lean, join me in the next Product Owner workshop.

Categories: Project Management

Introducing the Android Basics Nanodegree

Google Code Blog - Wed, 06/22/2016 - 22:50

Posted by Shanea King-Roberson, Lead Program Manager Twitter: @shaneakr Instagram: @theshanea


Do you have an idea for an app but you don’t know where to start? There are over 1 billion Android devices worldwide, providing a way for you to deliver your ideas to the right people at the right time. Google, in partnership with Udacity, is making Android development accessible and understandable to everyone, so that regardless of your background, you can learn to build apps that improve the lives of people around you.

Enroll in the new Android Basics Nanodegree. This series of courses and services teaches you how to build simple Android apps--even if you have little or no programming experience. Take a look at some of the apps built by our students:

The app "ROP Tutorial" built by student Arpy Vanyan raises awareness of a potentially blinding eye disorder called Retinopathy of Prematurity that can affect newborn babies.

And user Charles Tommo created an app called “Dr Malaria” that teaches people ways to prevent malaria.

With courses designed by Google, you can learn skills that are applicable to building apps that solve real world problems. You can learn at your own pace to use Android Studio (Google's official tool for Android app development) to design app user interfaces and implement user interactions using the Java programming language.

The courses walk you step-by-step through building an order form for a coffee shop, an app to track pets in a shelter, an app that teaches vocabulary words from the Native American Miwok tribe, and an app that reports recent earthquakes around the world. At the end of the course, you will have an entire portfolio of apps to share with your friends and family.

Upon completing the Android Basics Nanodegree, you also have the opportunity to continue your learning with the Career-track Android Nanodegree (for intermediate developers). The first 50 participants to finish the Android Basics Nanodegree have a chance to win a scholarship for the Career-track Android Nanodegree. Please visit udacity.com/legal/scholarship for additional details and eligibility requirements. You now have a complete learning path to help you become a technology entrepreneur or, most importantly, build very cool Android apps - for yourself, your communities, and even the world.

All of the individual courses that make up this Nanodegree are available online for no charge at udacity.com/google. In addition, Udacity provides paid services, including access to coaches, guidance on your project, help staying on track, career counseling, and a certificate upon completion for a fee.

You will be exposed to introductory computer science concepts in the Java programming language, as you learn the following skills.

  • Build app user interfaces
  • Implement user interactions
  • Store information in a database
  • Pull data from the internet into your app
  • Identify and fix unexpected behavior in the app
  • Localize your app to support other languages

To enroll in the Android Basics Nanodegree program, click here.

See you in class!

Categories: Programming


My Improved Travel Checklist

NOOP.NL - Jurgen Appelo - Wed, 06/22/2016 - 17:20
My Improved Travel Checklist

Last week in London, I wanted to shoot a video, but I discovered that I had left my camera's memory cards at home. On my previous trip, it was my Android tablet that I had forgotten to bring with me. The trip before that, it was my stack of local currency, my power adapters, my sunglasses, or whatever else I hadn't properly packed.

Sure, I have a travel checklist. But clearly, it had turned into a checklost. With at least 50 confirmed upcoming events around the world, it was time for me to redesign it.

Four Preferred Places

My original checklist was one large, unorganized list of reminders. This regularly led to problems because, by scanning the list too quickly, I easily overlooked an item that was buried among all the other things that I knew only too well. So I divided the checklist into four parts:

  • Personal items (that I carry on my body)
  • Shoulder bag
  • Handbag (typically carry on luggage)
  • Extra bag (typically check in luggage)

Each travel item on my improved checklist now has a preferred place. This not only keeps the individual lists smaller and easier to check carefully, but it also helps me keep things in the right bag. I don't want to have that situation again where I thought I had my universal adapters or chargers packed in the other bag (but found out later that I hadn't).

(And preferred means that items can change places depending on context. For example, my keys will have moved to my shoulder bag before I arrive at the airport, while my passport may temporarily move to my pocket while suffering the security and customs rituals.)

Standard versus Extra

Another thing I noticed messing up my packing efforts was that some standard items are by default in my bags (such as passports and adapters), other extra things are by default out of my bags (such as clothes and toiletries), and some items had a Schrödinger-kind of existence, not clearly being in or out, until I opened my bags to have a look (such as device chargers and headphones).

After the reorganization, packing my bags with my improved checklist now consists of two distinct activities:

  1. Checking that all standard items are where they should be;
  2. Adding all extra items to the place where I want them.

Hopefully, this will save me some stress and headaches in the future.

For example, I once had to interrupt my trip to the airport because my passport was not in my bag: it was still under the scanner next to my computer. I always know that my passports are in my shoulder bag, except for the one or two times when, apparently, I was wrong. And thus, I made it a separate activity to verify that the standard items really are where they should be, before adding all the extra items, so I can no longer deceive myself. A Schrödinger-kind of existence is not part of the improved design.

Optional Stuff

And then, of course, there are the optional items that mainly depend on the weather forecasts. I remember once nearly freezing to death in Helsinki because I had no warm coat or gloves. I was once drying my clothes in my hotel room in London because, stupidly, I had brought no umbrella. And more than once, I have been sweating in the sun because I was silly enough to bring only dark blue jeans and long-sleeve shirts.

Short versus Long trips

Finally, things can always change a bit depending on context.

On short trips (one or two nights), I don’t need to take the larger bag with me. But this means I must jam any books, running gear, and clean clothes into my hand luggage, which is not always possible, or else just leave some of it at home. And on long trips (ten or more nights), the large bag magically changes into an extra large suitcase, and this also changes what I can bring with me (usually a lot more clothes).

Well, there you have it: the philosophy behind my new-and-improved travel checklist. I include the full list below (as it is now).

Is there anything missing that you have on your travel checklist, and that may be useful for me as well?

Personal
Phone
Wallet
Jacket
House keys
Car keys

Shoulder bag – standard items
Passport(s)
Credit cards and bank cards
Travel cards and loyalty cards
Pens and markers
Presentation clicker
Spare batteries
Memory sticks
Display adapter

Shoulder bag – extra items
Tablet + charger
Headphones
Foreign currency

Handbag – standard items
Spare underwear, socks, and shirt
Spare medicine, contact lenses
GoPro camera
GoPro stick
GoPro batteries
GoPro charger
GoPro memory
GoPro USB
GoPro microphone
Glasses
Sunglasses
Universal adapters
USB chargers
Business cards

Handbag – extra items
Notebook + charger
Underwear
Toiletries
Gloves, scarf, ear muffs [IF forecast = cold]
Umbrella [IF forecast = wet]
Travel clothes [IF distance = long]

Large bag – standard items
Laundry bag

Large bag – extra items
Clothes
Extra shoes
Belt
Running shoes and clothes
Novel
Book / giveaway
Fleece jacket [IF forecast = cold]
Warm socks and sweater [IF forecast = cold]
Raincoat [IF forecast = wet]
Short pants [IF forecast = hot]

photo (c) 2013 Chris Lott, Creative Commons 2.0

My new book Managing for Happiness is available from June 2016. PRE-ORDER NOW!


The post My Improved Travel Checklist appeared first on NOOP.NL.

Categories: Project Management

Product Owners and Learning, Part 1

When I work with clients, they often have a “problem” with product ownership. The product owners want tons of features, don’t want to address technical debt, and can’t quite believe how long features will take.  Oh, and the POs want to change things as soon as they see them.

I don't see these as problems. To me, this is all about learning. The team learns about a feature as they develop it. The PO learns about the feature once the PO sees it. The team and the PO can learn about the implications of this feature as they proceed. To me, this is a significant value of what agile brings to the organization. (I'll talk about technical debt a little later.)

One of the problems I see is that the PO sees the big picture. Often, the Very Big Picture. The roadmap here is a 6-quarter roadmap. I see roadmaps this big more often in programs, but if you have frequent customer releases, you might have it for a project, also.

I like knowing where the product is headed. I like knowing when we think we might want releases. (Unless you can do continuous delivery. Most of my clients are not there. They might not ever get there, either. Different post.)

Here’s the problem with the big picture. No team can deliver according to the big picture. It’s too big. Teams need the roadmap (which I liken to a wish list) and they need a ranked backlog of small stories they can work on now.

In Agile and Lean Program Management, I have this picture of what an example roadmap might look like.

This particular roadmap works in iteration-based agile. It works in flow-based agile, too. I don’t care what a team uses to deliver value. I care that a team delivers value often. This image uses the idea that a team will release internally at least once a month. I like more often if you can manage it.

Releasing often (internally or externally) is a function of small stories and the ability to move finished work through your release system. For now, let’s imagine you have a frictionless release system. (Let me know if you want a blog post about how to create a frictionless release system. I keep thinking people know what they need to do, but maybe it’s as clear as mud to  you.)

The smaller the story, the easier it is for the team to deliver. Smaller stories also make it easier for the PO to adapt. Small stories allow discovery along with delivery (yes, that’s a link to Ellen Gottesdiener’s book). And, many POs have trouble writing small stories.

That’s because the PO is thinking in terms of feature sets, not features. I gave an example for secure login in How to Use Continuous Planning. It’s not wrong to think in feature sets. Feature sets help us create the big picture roadmap. And, the feature set is insufficient for the frequent planning and delivery we want in agile.

I see these problems in creating feature sets:

  • Recognizing the different stories in the feature set (making the stories small enough)
  • Ranking the stories to know which one to do first, second, third, etc.
  • What to do when the PO realizes the story or ranking needs to change.

I’ll address these issues in the next posts.

If you want to learn how to deliver what your customers want using agile and lean, join me in the next Product Owner workshop.

Categories: Project Management