Software Development Blogs: Programming, Software Testing, Agile, Project Management


Feed aggregator

Better guesswork for Product Owners

Xebia Blog - Thu, 11/17/2016 - 09:22
Estimation: if there is one concept that is hard to grasp in product development, it is when things are done. By done I don't mean the releasable increment from the iteration, but rather what will be in it, or, in Product Management speak: "what problem does it solve for our customer?" I am increasingly practicing randori

Understanding APK packaging in Android Studio 2.2

Android Developers Blog - Thu, 11/17/2016 - 07:27
Posted by Wojtek Kaliciński, Android Developer Advocate

Android Studio 2.2 launched recently with many new and improved features. Some of the changes are easy to miss because they happened under the hood in the Android Gradle plugin, such as the newly rewritten integrated APK packaging and signing step.

APK Signature Scheme v2

With the introduction of the new APK Signature Scheme v2 in Android 7.0 Nougat, we decided to rewrite how assembling APKs works in the Android Gradle plugin. You can read all about the low-level technical details of v2 signatures in the documentation, but here's a quick tl;dr summary of the info you need as an Android app developer:

  • The cryptographic signature of the APK that is used to verify its integrity is now located immediately before the ZIP Central Directory.
  • The signature is computed and verified over the binary contents of the whole APK file, as opposed to decompressed file contents of each file in the archive in v1.
  • An APK can be signed by both v1 and v2 signatures at the same time, so it remains backwards compatible with previous Android releases.
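The difference between the two schemes can be illustrated with a toy sketch. This is not the real v2 algorithm, just an analogy using SHA-256 over a throwaway in-memory ZIP:

```python
import hashlib
import io
import zipfile

# Build a throwaway in-memory "APK" (really just a ZIP) with two entries.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
    z.writestr("classes.dex", b"dex bytes")
    z.writestr("res/layout.xml", b"layout bytes")
apk_bytes = buf.getvalue()

# v1-style: one digest per entry, computed over each entry's decompressed contents.
with zipfile.ZipFile(io.BytesIO(apk_bytes)) as z:
    v1_digests = {name: hashlib.sha256(z.read(name)).hexdigest()
                  for name in z.namelist()}

# v2-style: one digest over the bytes of the whole archive, so any change to
# the file as a whole (even rearranging entries) invalidates it.
v2_digest = hashlib.sha256(apk_bytes).hexdigest()
```

Because the v2-style digest covers every byte of the archive, any tool that rewrites the file after signing breaks the signature, which is exactly the constraint on zipalign discussed next.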

Why introduce this change to how Android verifies APKs? Firstly, for enhanced security and extensibility of this new signing format, and secondly for performance - the new signatures take significantly less time to verify on the device (no need for costly decompression), resulting in faster app installation times.

The consequence of this new signing scheme, however, is that there are new constraints on the APK creation process. Since only uncompressed file contents were verified in v1, that allowed for quite a lot of modifications to be made after APK signing - files could be moved around or even recompressed. In fact, the zipalign tool which was part of the build process did exactly that - it was used to align ZIP entries on correct byte boundaries for improved runtime performance.

Because v2 signatures verify all bytes in the archive and not individual ZIP entries, running zipalign is no longer possible after signing. That's why compression, aligning and signing now happens in a single, integrated step of the build process.

If you have any custom tasks in your build process that involve tampering with or post-processing the APK file in any way, please make sure you disable them or you risk invalidating the v2 signature and thus making your APKs incompatible with Android 7.0 and above.

Should you choose to do signing and aligning manually (such as from the command line), we offer a new tool in the Android SDK, called apksigner, that provides both v1 and v2 APK signing and verification. Note that you need to run zipalign before running apksigner if you are using v2 signatures. Also remember the jarsigner tool from the JDK is not compatible with Android v2 signatures, so you can't use it to re-sign your APKs if you want to retain the v2 signature.

In case you want to disable adding v1 or v2 signatures when building with the Android Gradle plugin, you can add these lines to your signingConfig section in build.gradle:

v1SigningEnabled false
v2SigningEnabled false

Note: both signing schemes are enabled by default in Android Gradle plugin 2.2.

Release builds for smaller APKs

We took this opportunity when rewriting the packager to make some optimizations to the size of release APKs, resulting in faster downloads, smaller delta updates on the Play Store, and less wasted space on the device. Here are some of the changes we made:

  • Files in the archive are now sorted to minimize differences between APK builds.
  • All file timestamps and metadata are zeroed out.
  • Level 6 and level 9 compression is checked for all files in parallel and the optimal one is used, i.e. if level 9 provides little benefit in terms of size, then level 6 may be chosen for better performance.
  • Native libraries are stored uncompressed and page-aligned in the APK. This brings support for the android:extractNativeLibs="false" option from Android 6.0 Marshmallow and lets applications use less space on the device as well as generate smaller updates on the Play Store.
  • Zopfli compression is not used to better support Play Store update algorithms. It is not recommended to recompress your APKs with Zopfli. Pre-optimizing individual resources such as PNG files in your projects is still fine and recommended.
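The level-6 versus level-9 trade-off above can be sketched with zlib. The min_gain threshold here is an illustrative assumption, not the Android Gradle plugin's actual heuristic:

```python
import zlib

def pick_compression(data: bytes, min_gain: float = 0.01):
    """Compress with deflate level 6 and level 9; keep level 6 unless
    level 9 shrinks the result by at least min_gain of the level-6 size."""
    l6 = zlib.compress(data, 6)
    l9 = zlib.compress(data, 9)
    if len(l6) - len(l9) < min_gain * len(l6):
        return 6, l6  # little benefit from level 9, prefer the faster level 6
    return 9, l9

level, blob = pick_compression(b"example payload " * 1000)
# Whichever level is chosen, the payload round-trips intact.
assert zlib.decompress(blob) == b"example payload " * 1000
```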

These changes help make your releases as small as possible so that users can download and update your app even on a slower connection or on less capable devices. But what about debug builds?

Debug builds for installation speed

When developing apps you want to keep the iteration cycle fast - change code, build, and deploy on a connected device or emulator. Since Android Studio 2.0 we've been working to make all the steps as fast as possible. With Instant Run we're now able to update only the changed code and resources during runtime, while the new Emulator brings multi-processor support and faster ADB speeds for quicker APK transfer and installation. Build improvements can cut that time even further and in Android Studio 2.2 we're introducing incremental packaging and parallel compression for debug builds. Together with other features like selectively packaging resources for the target device density and ABI this will make your development even faster.

A word of caution: the APK files created for Instant Run or by invoking a debug build are not meant for distribution on the Play Store! They contain additional instrumentation code for Instant Run and are missing resources for device configurations other than the one that was connected when you started the build. Make sure you only distribute release versions of the APK which you can create using the Android Studio Generate Signed APK command or the assembleRelease Gradle task.

Categories: Programming

Formatting cells with the Google Sheets API

Google Code Blog - Wed, 11/16/2016 - 21:51
Originally posted on G Suite Developers Blog
Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite
At Google I/O earlier this year, we launched a new Google Sheets API (click here to watch the entire announcement). The updated API includes many new features that weren't available in previous versions, including access to more functionality found in the Sheets desktop and mobile user interfaces. Formatting cells in Sheets is one example of something that wasn't possible with previous versions of the API and is the subject of today's DevByte video. In our previous Sheets API video, we demonstrated how to get data into and out of a Google Sheet programmatically, walking through a simple script that reads rows out of a relational database and transfers the data to a new Google Sheet. The Sheet created using the code from that video is where we pick up today.

Formatting spreadsheets is accomplished by creating a set of request commands in the form of JSON payloads, and sending them to the API. Here is a sample JavaScript Object made up of an array of requests (only one this time) to bold the first row of the default Sheet automatically created for you (whose ID is 0):

{"requests": [
  {"repeatCell": {
    "range": {
      "sheetId": 0,
      "startRowIndex": 0,
      "endRowIndex": 1
    },
    "cell": {
      "userEnteredFormat": {
        "textFormat": {
          "bold": true
        }
      }
    },
    "fields": "userEnteredFormat.textFormat.bold"
  }}
]}
With at least one request, say in a variable named requests, and the ID of the sheet as SHEET_ID, you send them to the API via an HTTP POST to https://sheets.googleapis.com/v4/spreadsheets/{SHEET_ID}:batchUpdate, which in Python would be a single call that looks like this:
SHEETS.spreadsheets().batchUpdate(spreadsheetId=SHEET_ID,
body=requests).execute()
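As a minimal sketch, the same payload can be assembled as a plain Python dict before handing it to batchUpdate. SHEETS and SHEET_ID are assumed to be set up as in the snippet above; only the dict construction is shown here:

```python
import json

def bold_header_request(sheet_id=0):
    # Mirrors the JSON above: bold row 1 of the default sheet (ID 0) and
    # restrict the update to the single field being changed.
    return {"repeatCell": {
        "range": {"sheetId": sheet_id, "startRowIndex": 0, "endRowIndex": 1},
        "cell": {"userEnteredFormat": {"textFormat": {"bold": True}}},
        "fields": "userEnteredFormat.textFormat.bold",
    }}

body = {"requests": [bold_header_request()]}
# body is what gets POSTed to .../v4/spreadsheets/{SHEET_ID}:batchUpdate, e.g.
# SHEETS.spreadsheets().batchUpdate(spreadsheetId=SHEET_ID, body=body).execute()
print(json.dumps(body, indent=2))
```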

For more details on the code in the video, check out the deep dive blog post. As you can probably guess, the key challenge is in constructing the JSON payload to send to API calls - the common operations samples can really help you with this. You can also check out our JavaScript codelab where we guide you through writing a Node.js app that manages customer orders for a toy company, featuring the toy orders data we looked at today but in a relational database. While the resulting equivalent Sheet is featured prominently in today's video, we will revisit it again in an upcoming episode showing you how to generate slides with spreadsheet data using the new Google Slides API, so stay tuned for that!

We hope all these resources help developers enhance their next app using G Suite APIs! Please subscribe to our channel and tell us what topics you would like to see in other episodes of the G Suite Dev Show!

Categories: Programming

Chrome Dev Summit 2016: The Mobile Web Moves Forward

Google Code Blog - Wed, 11/16/2016 - 20:04
Originally posted on Chromium Blog
Posted by Darin Fisher, VP Engineering, Chrome
Last week at the 4th annual Chrome Dev Summit, we were excited to share a glimpse of what’s possible with over 1,000 developers in person, and thousands more on the livestream. Each year this is a time to hear what developers have been building, share our vision for the future of the web platform, and celebrate what we love about the web...

Reach of the web
As we've talked about before, one of the superpowers of the web is its incredible reach. There are now more than two billion active Chrome browsers worldwide, with many more web users across other browsers. The majority of these users are now on mobile devices, bringing new opportunities for us to explore as an industry.
Mobile browsers also lead the way for the internet’s newest users. Exclusively accessing the internet from mobile devices, users in emerging markets struggle with limited computing power, unreliable networks, and expensive data. For these users, native apps can be a poor match due to their large data and storage requirements. And, it’s these constraints that have resulted in the developing markets leading the charge when it comes to innovating on the web.
Progressive Web Apps
Instead, the web can fill these needs for all users through an experience we've been calling Progressive Web Apps (PWAs). These web apps provide the performance users have come to expect from their device, while also offering critical capabilities such as offlining, add-to-homescreen, and push notifications. We've been encouraged by the strong adoption of these capabilities, with push notifications recently exceeding 18 billion notifications per day across 50,000 domains.
Last year when we spoke about PWAs, things were just getting started. Now we're seeing the movement in full swing, with many large sites across the globe launching great new apps and feeling the success that PWAs can bring.
Alibaba.com built a PWA and saw a 76% increase in conversion rates across browsers. The investment in the mobile web increased monthly active user rates on iOS by 14 percent. On Android devices, where re-engagement capabilities like push notifications and Add to Homescreen were enabled, active user rates increased by 30 percent.

Another great example is The Weather Channel. Since launching a PWA they achieved an 80% reduction in load time and within three months, saw almost 1 million users opt in to receive web push notifications.

During the Summit, we also heard from Lyft, who shared their experience of building a PWA in less than a month, using less than a quarter of the engineering support needed to build their native app. Learn more about how our partners are using PWA technologies to enhance their mobile web experience.

What can you do?
We also have a variety of tools, libraries, and APIs available to help you bring the benefits of PWAs to your site. For example, Chrome's DevTools provides assistance along every step of the development flow. DevTools has a ton of new features to help you build great mobile apps, such as network simulation, CPU throttling, and a PWA audit tool powered by Lighthouse.

For developers just beginning their web app or looking to rework an existing one, the Polymer App Toolbox provides a set of components and tools for easily building a Progressive Web App using web components. And Polymer 2.0 is right around the corner, making it easy to take advantage of the new Web Components v1 APIs shipping cross-browser and build mobile web apps with minimal overhead.

Finally, checkout can be a complicated process to complete and in the retail sector alone there are 66% fewer conversions on mobile than on desktop. With PaymentRequest, you can now bring a seamless checkout experience to your website with support for both credit cards and Android Pay, increasing odds for conversion.

Catch up
Finally, if you didn't catch our live stream in real time, you can always check back on our YouTube channel for all the recordings or see the highlights from the event in 57 seconds.

Thanks for coming, thanks for watching, and most of all, thank you for developing for the web!
Categories: Programming

The Story of Batching to Streaming Analytics at Optimizely

Our mission at Optimizely is to help decision makers turn data into action. This requires us to move data with speed and reliability. We track billions of user events, such as page views, clicks and custom events, on a daily basis. Providing our customers with immediate access to key business insights about their users has always been our top priority. Because of this, we are constantly innovating on our data ingestion pipeline.

In this article we will introduce how we transformed our data ingestion pipeline from batching to streaming to provide our customers with real-time session metrics.

Motivations 

Unification. Previously, we maintained two data stores for different use cases: HBase for computing Experimentation metrics, and Druid for calculating Personalization results. These two systems were developed with distinctive requirements in mind:

Experimentation          | Personalization
-------------------------|----------------------------
Instant event ingestion  | Delayed event ingestion OK
Query latency in seconds | Query latency in subseconds
Visitor-level metrics    | Session-level metrics

As our business requirements evolved, however, things quickly became difficult to scale. Maintaining a Druid + HBase Lambda architecture (see below) to satisfy these business needs became a technical burden for the engineering team. We needed a solution that reduces backend complexity and increases development productivity. More importantly, a unified counting infrastructure creates a generic platform for many of our future product needs.

Consistency. As mentioned above, the two counting infrastructures provide different metrics and computational guarantees. For example, Experimentation results show you the number of visitors who visited your landing page, whereas Personalization shows you the number of sessions instead. We want to bring consistent metrics to our customers and support both types of statistics across our products.

Real-time results. Our session-based results are computed using MR jobs, which can be delayed by up to hours after the events are received. A real-time solution will provide our customers with a more up-to-date view of their data.

Druid + HBase

In our earlier posts, we introduced our backend ingestion pipeline and how we use Druid and MR to store transactional stats based on user sessions. One of the biggest benefits we get from Druid is low-latency results at query time. However, it does come with its own set of drawbacks. For example, since segment files are immutable, it is impossible to incrementally update the indexes. As a result, we are forced to reprocess user events within a given time window if we need to fix certain data issues such as out-of-order events. In addition, we had difficulty scaling the number of dimensions and dimension cardinality, and queries spanning long periods of time became expensive.

On the other hand, we also use HBase for our visitor-based computation. We write each event into an HBase cell, which gives us maximum flexibility in terms of the kinds of queries we can support. When a customer needs to find out "how many unique visitors have triggered an add-to-cart conversion", for example, we do a scan over the range of the dataset for that experiment. Since events are pushed into HBase (through Kafka) in near real time, the data generally reflects the current state of the world. However, our current table schema does not aggregate any metadata associated with each event. This metadata includes a generic set of information such as browser types and geolocation details, as well as customer-specific tags used for customized data segmentation. The redundancy of this data prevents us from supporting a large number of custom segmentations, as it increases our storage cost and query scan time.
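As a hypothetical sketch of the visitor-level query described above (the event records and field names here are invented for illustration, not Optimizely's actual schema):

```python
# Toy event rows standing in for the HBase cells written per event.
events = [
    {"experiment": "exp1", "visitor": "v1", "event": "add_to_cart"},
    {"experiment": "exp1", "visitor": "v1", "event": "add_to_cart"},
    {"experiment": "exp1", "visitor": "v2", "event": "page_view"},
    {"experiment": "exp1", "visitor": "v3", "event": "add_to_cart"},
]

def unique_converters(rows, experiment, goal):
    # Analogous to a range scan over the experiment's rows: count each
    # visitor once, no matter how many times they triggered the conversion.
    return {r["visitor"] for r in rows
            if r["experiment"] == experiment and r["event"] == goal}

print(len(unique_converters(events, "exp1", "add_to_cart")))  # prints 2
```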

SessionDB 
Categories: Architecture

Øredev 2016 - A Quick Review

Øredev 2016
November 9 – 11
Malmö, Sweden

Øredev was AWESOME! The short review reads: good sessions, good food, good people and great conversations. Here's the longer version . . .

I attend and speak at several conferences every year. Those conferences include process, test, measurement, quality and development conferences. This year I was asked to speak at Øredev. The conference portion of Øredev, described as a developer conference, spanned three days, run by a master of ceremonies who managed six excellent keynote speakers that challenged the audience to harness the power of computing in the broadest sense. Two of the keynotes stood out above the rest. The most memorable was Christian Heilmann's talk on the fourth industrial revolution. Mr. Heilmann got me excited about the computing power and interfaces between humans and machines just over the horizon. There are exciting times to come. The second was Jurgen Appelo's keynote titled "Managing for Happiness," which discussed Management 3.0 and the fact that happy workers are more productive workers.

In addition to the keynotes, there were over 180 track sessions. Each day was made up of seven tracks, with sessions on coding, testing, project management, risk, tools, and even mindfulness. As a somewhat jaded conference goer, I found at least one session in each time slot that was useful and challenging. For example, Jose Lorenzo Rodriguez's session titled "Fixing mind anti-patterns with mindfulness" has stayed with me since I attended it (expect to hear Jose on the podcast).

One of the more interesting features of this conference was the food (breakfast, lunch, snacks, and dinner) and the evening events at the conference location. Every effort was made by the conference organizers to facilitate conversations between the attendees, which, in the end, is what makes a conference more than a series of lectures. The cacophony of voices engaged in earnest and passionate conversations was invigorating. I was originally a bit anxious about attending a conference where, even though the sessions were in English, the attendees were more comfortable in languages that I have never studied. I think I will remedy that in the future.

One of the exciting parts of attending Øredev was meeting several people that I have interviewed on the Software Process and Measurement Cast (some multiple times). Interviewees who were also Øredev speakers included:

Jim Benson - SPaMCAST 400 - Jim Benson, Personal Kanban and More

Allan Kelly - SPaMCAST 353 - Allan Kelly, #NoProjects

Mike Burrows - SPaMCAST 396 - Mike Burrows, Agendashift; SPaMCAST 310 - Mike Burrows, Kanban from the Inside

Marcus Hammarberg - SPaMCAST 414 - Marcus Hammarberg, Agile In the Real World

Listen to their interviews and then watch the videos of their talks on the Øredev site.

I did talks on Agile Risks and on using Storytelling to Define the Big Picture in Agile (the links are to the videos). A funny story: near the end of the talk on storytelling, the projection system froze for some reason. It was still a fun presentation, and I was happy it was not my computer that was the problem.

The long and short of Øredev? This is a developer conference; there is lots of live coding going on. But Øredev is more than a conference about development. There are topics for everyone involved in the delivery of functional software.


Categories: Process Management

GTAC 2016 Registration Deadline Extended

Google Testing Blog - Tue, 11/15/2016 - 21:09
by Sonal Shah on behalf of the GTAC Committee

Our goal in organizing GTAC each year is to make it a first-class conference, dedicated to presenting leading edge industry practices. The quality of submissions we've received for GTAC 2016 so far has been overwhelming. In order to include the best talks possible, we are extending the deadline for speaker and attendee submissions by 15 days. The new timelines are as follows:

June 15, 2016 (extended from June 1, 2016) - Last day for speaker, attendee and diversity scholarship submissions.
July 15, 2016 (extended from June 15, 2016) - Attendees and scholarship awardees will be notified of selection/rejection/waitlist status. Those on the waitlist will be notified as space becomes available.
August 29, 2016 (extended from August 15, 2016) - Selected speakers will be notified.

To register, please fill out this form.
To apply for diversity scholarship, please fill out this form.

The GTAC website has a list of frequently asked questions. Please do not hesitate to contact gtac2016@google.com if you still have any questions.

Categories: Testing & QA


Making it easier for anyone to start exploring A.I.

Google Code Blog - Tue, 11/15/2016 - 21:04
Alexander Chen, Creative Lab

With all the exciting A.I. stuff happening, there are lots of people eager to start tinkering with machine learning technology. We want to help make it easier for anyone to do that - whether you're an engineer, hobbyist, student, or someone who's just curious. But sometimes, it can feel pretty intimidating when you're just getting started.

That's why we've created a site called A.I. Experiments. The site showcases simple experiments that let anyone play with this technology hands-on, and resources for creating your own experiments.

The experiments show how machine learning can make sense of all kinds of things - images, drawings, language, sound, and more. They were made by people with all different interests - web developers, musicians, game designers, bird sound enthusiasts, data visualizers - with everyone bringing their own ideas for how to use machine learning.

We also want to make it easier for coders to make their own experiments. Many of the projects we're featuring are built with tools anyone can use, like the Cloud Vision API, TensorFlow, and other libraries from the machine learning community. The site has videos by the creators explaining how they work, and links to open-source code to help you get started. To submit something you've made, or just play with things other people are making, visit A.I. Experiments.

And if you're looking for even more inspiration for what's possible using machine learning, check out these new experiments from our friends in Google Arts & Culture.

Categories: Programming

Risk Management is How Adults Manage Projects

Herding Cats - Glen Alleman - Tue, 11/15/2016 - 17:33

Risk Management is How Adults Manage Projects - Tim Lister

Here's how we manage risk on our software intensive system of systems using Agile. 

Risk management requires making estimates of the reducible and irreducible uncertainties that create risk. If you're not estimating these uncertainties and their impacts, you're not managing as an adult.

Related articles:
  • IT Risk Management
  • Risk Management is How Adults Manage Projects
  • Estimating is Risk Management
  • Mr. Franklin's Advice
  • Late Start = Late Finish
Categories: Project Management

The Myth of the Scrum Master and Actual Life Experiences

Herding Cats - Glen Alleman - Tue, 11/15/2016 - 00:11

Just listened to Tony Richards' "The Capital Role of Scrum" on the Scrum Master Toolbox Podcast (yes, I know, this is Vasco's podcast, and it does have value when he sticks to Scrum topics). Vasco describes the Scrum Master as the Scrum Mom.

This brings to mind a concept we (my wife and I) came to a bit late in our child-raising experience. We encountered Parenting with Love and Logic a bit late - middle school. In this paradigm, there are 3 types of parents.

  • The drill sergeant parent
  • The helicopter parent
  • The consultant parent

In the Scrum paradigm

  • Drill Sergeant - enforces compliance with rules using the policies and procedures of the firm's software development lifecycle (SDLC). Remember Jack Welch's quote: bureaucracy protects the organization from the incompetent. When everyone is competent, less bureaucracy is needed.
  • Helicopter - rescues the team when they get in trouble. This transfers the accountability for getting things done to the Scrum Master, rather than the Scrum Team.

It's the Consultant that serves as the best parenting style. For an Agile (Scrum) team, the parenting actions in Love and Logic have direct applicability. I'm not suggesting the Scrum Master or Scrum Coach is the parent of the team. Rather, the paradigm of parenting is applicable. Vasco may not have realized that, but parenting is very close to managing the actions of others for a beneficial outcome for both those performing the work and those paying for the work.

  • Provides messages of personal worth and strength - a team is defined as a small group of qualified individuals who hold each other accountable for a shared outcome. The SM needs to message that idea at all times: determine whether the team is behaving as a team, and when they are not, consult with them to determine why not and what can be changed to get back to the team processes. Here's one of the best talks about what a Team does when it is working properly.
  • Very seldom mentions responsibilities - if the team is acting like a team, then they have a shared accountability for the outcomes. This is self-defining the responsibilities. Agreement on this accountability for a shared outcome means having a process to reveal the outcome. Product Roadmaps, Release Plans, backlogs, and Big Visible Charts, again, are ways to broadcast the results of the shared outcome. Alistair Cockburn calls these Information Radiators.
  • Demonstrates how to take care of self and be responsible - the SM behaves as a consultant advisor.
  • Shares personal feelings about own performance and responsibilities - communication is all ways (meaning not top down, not bottom up, not dominated by the vocal few).
  • Provides and helps the team explore alternatives and then allows the team to make their own decision - making those decisions is the basis of Scrum, along with the accountability of the team for those decisions.
  • Provides "time frames" in which the team may complete responsibilities - all software development is time-based. Those paying for the work hopefully understand the time value of money. Time is the only thing a team can't get more of. Self-managing in the presence of uncertainty means the team must manage the time aspects of their work.
  • Models doing a good job, finishing, cleaning up, feeling good about it - the SM walks the walk of being a consultative guide.
  • Often asks, "Who owns the problem?" and helps the team explore solutions to the problem - guides the team to the solution through Socratic interaction. This means the SM needs to have some sense of what the solutions might be. Having little understanding of the product domain means not being able to ask the right questions. Without that skill and experience, the Team can easily get in trouble.
  • Uses lots of actions, but very few words - big visible charts, directed questions, artifacts of the team's work, and self-created outcomes that demonstrate success as a team for that shared outcome speak much louder than words.
  • Allows the team to experience life's natural consequences and allows them to serve as their own teacher - the notion of fail fast and fail often is misunderstood in the business world by the teams. It is many times taken as: we don't need to know what done looks like, and we can have it emerge as we go. In Love and Logic, the paradigm means failures as a young child have much lower consequences than failures as a young adult. Learn to see what failures will occur and avoid them. Falling out of a chair at age 3 is much less critical than falling off the side of a mountain at age 16 with no protection - helmet or belaying ropes. Make mistakes early on; the cost of mistakes later can be life-threatening.

Summary

Scrum teams must act as teams in the Jon Katzenbach notion. The Scrum Master must act as the consultant parent for that team. The term Scrum Coach has two aspects: the parenting coach (consultant) and the coach found on sports teams. Agilists often forget this. The sports coach is not a player. The sports coach may have played and knows the game. But the agile coach, like the sports coach, has insight into how to improve the performance of the team that the team members themselves do not have. This is evidenced by the Super Bowl win of the Broncos and the World Series win of the Chicago Cubs.

Categories: Project Management

How Urban Airship Scaled to 2.5 Billion Notifications During the U.S. Election

This is a guest post by Urban Airship. Contributors: Adam Lowry, Sean Moran, Mike Herrick, Lisa Orr, Todd Johnson, Christine Ciandrini, Ashish Warty, Nick Adlard, Mele Sax-Barnett, Niall Kelly, Graham Forest, and Gavin McQuillan

Urban Airship is trusted by thousands of businesses looking to grow with mobile. Urban Airship is a seven-year-old SaaS company with a freemium business model, so you can try it for free. For more information, visit www.urbanairship.com. Urban Airship now averages more than one billion push notifications delivered daily. This post highlights Urban Airship notification usage for the 2016 U.S. election, exploring the architecture of the system--the Core Delivery Pipeline--that delivers billions of real-time notifications for news publishers.

2016 U.S. Election

In the 24 hours surrounding Election Day, Urban Airship delivered 2.5 billion notifications - its highest daily volume ever. This is equivalent to 8 notifications per person in the United States, or 1 notification for every active smartphone in the world. While Urban Airship powers more than 45,000 apps across every industry vertical, analysis of the election usage data shows that more than 400 media apps were responsible for 60% of this record volume, sending 1.5 billion notifications in a single day as election results were tracked and reported.


Notification volume was steady and peaked when the presidential election concluded:

Categories: Architecture

Little Book of Bad Excuses

Herding Cats - Glen Alleman - Mon, 11/14/2016 - 16:13

Long ago there were a set of small books from the Software Program Managers Network and Norm Brown's work on Industrial Strength Management Strategies, which was absorbed by an organization which is no longer around. I have all the Little Books and they contain gems of wisdom that need restating in the presence of the current approaches to software development and hype around processes conjectured to fix problems.

The Software Program Managers Network produced books. I have 7 of them.

  • Little Book of Configuration Management
  • Project Breathalyzer
  • Little Book of Bad Excuses
  • Little Book of Testing Volume I and II
  • Condensed Guide to Software Acquisition Practices
  • Little Yellow Book of Software Management Questions

These books are built on the Nine Best Practices for developing software-intensive systems and a short briefing that goes with the paper.

Let's start with formal risk management. There was a Twitter post yesterday asking about the connection between Agile development and Risk Management. Agile is a participant in risk management, but it is not risk management in and of itself.

From the Bad Excuses book, here's a list for Risk Management, and the Project Breathalyzer where these items live in a larger context:

  1. What are the top ten risks as determined by the customer, technical, and program management?
  2. How are these risks identified?
  3. How are these risks resolved?
  4. How much money and time has been set aside for risk mitigation?
  5. What risks would be classified as showstoppers, and how were these derived?
  6. How many risks are in the Risk Register? How recently has the Risk Register been updated?
  7. How many risks have been added in the last six months?
  8. Can a risk be named that was mitigated in the last six months?
  9. What risks are expected to be mitigated or resolved in the next six months?
  10. Are risks assessed and prioritized in terms of their likelihood of occurrence and the potential impact to the program?
  11. Are as many viewpoints as possible involved in the risk assessment process?
  12. What percentage of the risks impact the final delivery of the system?
  13. To date how many risks have been closed out?
  14. How are identified risks made visible to all project participants?
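
Question 10 above - prioritizing risks by likelihood of occurrence and potential impact - can be sketched as a simple scoring pass over a risk register. This is a hypothetical illustration only; the entries and the probability-times-impact scoring scheme are my assumptions, not content from the Little Books:

```python
# Hypothetical risk-register entries: (name, probability 0-1, impact in dollars).
risks = [
    ("Key vendor slips delivery", 0.30, 500_000),
    ("Requirements churn exceeds 20%", 0.50, 200_000),
    ("Integration test environment unavailable", 0.10, 800_000),
]

# Prioritize by expected monetary exposure: probability x impact,
# highest exposure first.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for name, p, impact in ranked:
    print(f"{name}: exposure = ${p * impact:,.0f}")
```

A real register would also carry mitigation plans, owners, and dates (questions 1-9 and 13-14), but the ranking step is the core of the "likelihood and impact" assessment.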

No matter the development method - agile or traditional - risk management is how adults manage projects (Tim Lister). No risk management, no adults at the table.

Categories: Project Management


SPaMCAST 418 - Larry Cooper, The Agility Series

Software Process and Measurement Cast - Sun, 11/13/2016 - 23:00

SPaMCAST Logo

http://www.spamcast.net

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 418 features our interview with Larry Cooper.  Larry and I talked about his project, The Agility Series.  The series is providing the community an understanding of how Agile is applied and how practitioners are interpreting practices and principles.

Reminder: Schedule Change for Vacation, Travel and Holiday

Last week I was in Sweden for the Øredev conference with a day of sightseeing thrown in. New listeners joining from the conference: WELCOME. The trip was great, and the conference was awesome and mind-expanding. I will publish a review soon. Brazil and "Métricas 2016" is next, followed immediately by the Thanksgiving holiday in the United States. This is the long way of saying that I will be publishing on an every-other-week basis through November 27th. We will be back to weekly posting in December.

Larry Cooper’s Bio
Larry Cooper is a Project Executive in the public and private sectors in Canada and the USA and holds over 20 industry certifications in Agile, Project Management, and ITIL. His books include "Agile Value Delivery: Beyond the Numbers" (which was endorsed by a co-author of the Agile Manifesto) as well as "The Agility Series" to be published over the next year or two. He was also the Mentor for "PRINCE2 Agile" published by AXELOS.

Larry has been an invited speaker at numerous conferences and symposia for the PMI, BAWorld, and the itSMF. He has presented global webinars with BrightTalk and ProjectManagement.com and authored more than 30 courses, including an Agile-oriented curriculum that is sold directly to training companies in Canada and the USA.

The first two books in the Agility Series, on Organizational Agility and Leadership Agility, are available for free download at www.mplaza.ca, as is The Adaptive Strategy Framework Guide.

You can join the adventure with the rest of the Wisdom Council for the Agility Series through their LinkedIn group https://www.linkedin.com/groups/8539263

Re-Read Saturday News

The read/re-read of The Five Dysfunctions of a Team by Patrick Lencioni (published by Jossey-Bass) continues on the blog. Lencioni's model of team dysfunctions (we get through most of it this week) is illustrated through a set of crises that show the common problems that make teams into dysfunctional collections of individuals. The current entry features the sections titled Film Noir and Application.

Visit the Software Process and Measurement Cast blog to participate in this and previous re-reads.

Next SPaMCAST

The Software Process and Measurement Cast 419 will feature four essays: from Kim Pries, Jon Quigley, Gene Hughson, and one from the SPaMCAST.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: "This book will prove that software projects should not be a tedious process, for you or your team." Support SPaMCAST by buying the book here. Available in English and Chinese.


Categories: Process Management


How to Lie With Statistics

Herding Cats - Glen Alleman - Sun, 11/13/2016 - 15:49

In the book How To Lie With Statistics, Darrell Huff tells us how to make the numbers look the way we want them to look, without actually changing the numbers. One common way is to adjust the coordinate scales - for example, by not showing the full y-axis scale. This approach makes it look like voter turnout is dramatically different between 2008 and 2016. Which it is, as a percentage.

Screen Shot 2016-11-11 at 7.56.04 AM

The Republican vote was lower than the Democratic vote, but the scale makes it look much different than it is.

Screen Shot 2016-11-11 at 7.58.02 AM

Of course, there's my favorite, where the y-axis has no scale - supposedly normalized, but normalized in heterogeneous units - Cats versus Kumquats. In this case, the Ideal also has no basis in fact, since both the estimate and the actual are random variables subject to the uncertainties of project management - aleatory and epistemic uncertainties. The missing error bands hide the Root Cause of the non-ideal actuals. Each dot (diamond and triangle) needs a confidence band from the original estimating process. Was that estimate an 80% confidence estimate, a 60% confidence estimate, or a wild guess? Without this knowledge, the single-point results are worthless in determining what the numbers could have been, why they are the way they are, and what we could possibly do to make them better.
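
The distortion from a truncated y-axis can be quantified. As a rough illustration (the turnout percentages below are made up for the sketch, not the actual 2008/2016 figures), compare the apparent bar-height ratio with a full axis versus a truncated one:

```python
def apparent_ratio(a, b, baseline=0.0):
    """Ratio of drawn bar heights when the y-axis starts at `baseline`."""
    return (a - baseline) / (b - baseline)

turnout_a, turnout_b = 58.0, 55.0  # illustrative percentages only

full = apparent_ratio(turnout_a, turnout_b)            # axis starts at 0
truncated = apparent_ratio(turnout_a, turnout_b, 54.0) # axis starts at 54

print(f"full axis: bar is {full:.2f}x taller")       # barely different
print(f"truncated: bar is {truncated:.2f}x taller")  # looks dramatic
```

The underlying values never change; only the baseline does, yet a 5% difference is drawn as a 4-to-1 difference.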

Without knowing why those projects did not follow the ideal - meaning the actuals matched the estimate - the chart is just a bunch of data with no information for taking corrective actions to improve project performance.

Screen Shot 2016-11-11 at 8.01.02 AM

So first go buy How To Lie with Statistics (you can find a downloadable version at Archive.org), then download How to Lie with Statistical Graphs.

Along with

  • Statistics, A Very Short Introduction, David J. Hand
  • Principles of Statistics, M. G. Bulmer
  • Flaws and Fallacies in Statistical Thinking, Stephen K. Campbell
  • Standard Deviations: Flawed Assumptions, Tortured Data, and Other Ways to Lie with Statistics, Gary Smith

Then you can start to push back on graphs, charts, assumptions, and conclusions derived from bad statistics.

After that, go buy Apollo Root Cause Analysis: Effective Solutions to Everyday Problems Every Time, Dean Gano, and learn that without knowing the CAUSE of the numbers you see in a graph, you have no way of taking any action to make them better. You're just a bystander to possibly bad statistics.

Categories: Project Management

Neo4j 3.1 beta3 + docker: Creating a Causal Cluster

Mark Needham - Sun, 11/13/2016 - 13:30

Over the weekend I’ve been playing around with docker and learning how to spin up a Neo4j Causal Cluster.

Causal Clustering is Neo4j’s new clustering architecture which makes use of Diego Ongaro’s Raft consensus algorithm to ensure writes are committed on a majority of servers. It’ll be available in the 3.1 series of Neo4j which is currently in beta. I’ll be using BETA3 in this post.

2016 11 13 09 14 41

I don’t know much about docker but luckily my colleague Kevin Van Gundy wrote a blog post a couple of weeks ago explaining how to spin up Neo4j inside a docker container which was very helpful for getting me started.

Kevin spins up a single Neo4j server using the latest released version, which at the time of writing is 3.0.7. Since we want to use a beta version we’ll need to use a docker image from the neo4j-experimental repository.

We’re going to create 3 docker instances, each running Neo4j, and have them form a cluster. We’ll name them instance0, instance1, and instance2. We’ll create config files for each instance on the host machine and refer to those from our docker instance. This is the config file for instance0:

/tmp/ce/instance0/conf/neo4j.conf

unsupported.dbms.edition=enterprise
dbms.mode=CORE
 
dbms.security.auth_enabled=false
dbms.memory.heap.initial_size=512m
dbms.memory.heap.max_size=512m
dbms.memory.pagecache.size=100M
dbms.tx_log.rotation.retention_policy=false
 
dbms.connector.bolt.type=BOLT
dbms.connector.bolt.enabled=true
dbms.connector.bolt.listen_address=0.0.0.0:7687
dbms.connector.http.type=HTTP
dbms.connector.http.enabled=true
dbms.connector.http.listen_address=0.0.0.0:7474
 
dbms.connectors.default_listen_address=0.0.0.0
dbms.connectors.default_advertised_address=instance0
 
causal_clustering.initial_discovery_members=instance0:5000,instance1:5000,instance2:5000
causal_clustering.leader_election_timeout=2s

The only config that changes between instances is dbms.connectors.default_advertised_address which would have a value of instance1 or instance2 for the other members of our cluster.

We can create a docker instance using this config:

docker run --name=instance0 --detach \
           --publish=7474:7474 \
           --publish=7687:7687 \
           --net=cluster \
           --hostname=instance0 \
           --volume /tmp/ce/instance0/conf:/conf \
           --volume /tmp/ce/instance0/data:/data \
           neo4j/neo4j-experimental:3.1.0-M13-beta3-enterprise

We create the network ‘cluster’ referenced on the 4th line like this:

docker network create --driver=bridge cluster

It’s a bit of a pain having to create these config files and calls to docker by hand but luckily Michael has scripted the whole thing for us.

docker.sh

function config {
mkdir -p /tmp/ce/$1/conf
cat > /tmp/ce/$1/conf/neo4j.conf << EOF
unsupported.dbms.edition=enterprise
dbms.mode=CORE
 
dbms.security.auth_enabled=false
dbms.memory.heap.initial_size=512m
dbms.memory.heap.max_size=512m
dbms.memory.pagecache.size=100M
dbms.tx_log.rotation.retention_policy=false
 
dbms.connector.bolt.type=BOLT
dbms.connector.bolt.enabled=true
dbms.connector.bolt.listen_address=0.0.0.0:7687
dbms.connector.http.type=HTTP
dbms.connector.http.enabled=true
dbms.connector.http.listen_address=0.0.0.0:7474
 
dbms.connectors.default_listen_address=0.0.0.0
dbms.connectors.default_advertised_address=${1}
 
causal_clustering.initial_discovery_members=instance0:5000,instance1:5000,instance2:5000
causal_clustering.leader_election_timeout=2s
EOF
}
 
function run {
HOST=$1
INSTANCE=instance$HOST
config $INSTANCE
docker run --name=$INSTANCE --detach \
           --publish=$[7474+$HOST]:7474 \
           --publish=$[7687+$HOST]:7687 \
           --net=cluster \
           --hostname=$INSTANCE \
           --volume /tmp/ce/$INSTANCE/conf:/conf \
           --volume /tmp/ce/$INSTANCE/data:/data \
           neo4j/neo4j-experimental:3.1.0-M13-beta3-enterprise
}
 
docker network create --driver=bridge cluster
 
run 0
run 1
run 2

Once we run the script we can run the following command to check that the cluster has come up:

$ docker logs instance0
Starting Neo4j.
2016-11-13 11:46:55.863+0000 INFO  Starting...
2016-11-13 11:46:57.241+0000 INFO  Bolt enabled on 0.0.0.0:7687.
2016-11-13 11:46:57.255+0000 INFO  Initiating metrics...
2016-11-13 11:46:57.439+0000 INFO  Waiting for other members to join cluster before continuing...
2016-11-13 11:47:17.816+0000 INFO  Started.
2016-11-13 11:47:18.054+0000 INFO  Mounted REST API at: /db/manage
2016-11-13 11:47:19.068+0000 INFO  Remote interface available at http://instance0:7474/

Each instance is available at port 7474 but we’ve mapped these to different ports on the host OS by using this line in the parameters we passed to docker run:

--publish=$[7474+$HOST]:7474

We can therefore access each of these Neo4j instances from the host OS at the following ports:

instance0 -> http://localhost:7474
instance1 -> http://localhost:7475
instance2 -> http://localhost:7476

If we open one of those we’ll be confronted with the following dialog:

2016 11 13 12 10 06

This is a bit strange as we explicitly disabled security in our config.

The actual problem is that the Neo4j browser is unable to communicate with the underlying database. There are two ways to work around this:

Connect using HTTP instead of BOLT

We can tell the browser to connect to the database using the HTTP protocol rather than BOLT by unticking the checkbox:

2016 11 13 12 12 24

Update the BOLT host

Or we can update the Bolt host value to refer to a host:port value that’s accessible from the host OS. Each server is accessible from port 7687 but we mapped those ports to different ports on the host OS with this flag that we passed to docker run:

--publish=$[7687+$HOST]:7687 \

We can access BOLT from the following ports:

instance0 -> localhost:7687
instance1 -> localhost:7688
instance2 -> localhost:7689

Let’s try changing it for instance2:

2016 11 13 12 20 29

You might have to refresh your web browser after you change the value, but it usually updates automatically. We can run the :sysinfo command in the browser to see the state of our cluster:

2016 11 13 12 22 55

And we’re good to go. The full script is available as a gist if you want to give it a try.

Let me know how you get on!

Categories: Programming

Avoid Impossible and Unvoiced Expectations

Unless you ask, you’re not going to get what you want.

In the long run, goals must be based on our expectations. The Merriam-Webster Online Dictionary defines expectation as "a belief that something will happen or is likely to happen." Expectations provide the motivation to begin a new project or to plan for the future. The belief that something good will happen can provide a significant amount of energy to propel us toward our goals. When we discover that our expectations are impossible, they stop being motivators. I realized many years ago that the possibility of winning the lottery is not a motivator to me; I understand statistics, and therefore I don't play. For me, saying that I expect to win the lottery has no motivational power because I have no expectation of winning. If I were to set a goal of winning the lottery with no real expectation of achieving that goal, I would be setting myself up for disappointment. Another example of the impact of a mismatch between goals and expectations can be seen in poorly set project estimates. Occasionally (I am being kind), I see PMOs or managers set an estimate for a project team without input or participation. Usually the estimate is wrong, and wrong low. There are many psychological reasons for setting a low estimate, ranging from creating an anchor bias to providing the team a stretch goal. In most cases, no one on the team is fooled (at least more than once), therefore no one is motivated.

The second criterion for maximizing the potential for meeting expectations is to voice them. My wife occasionally berates me about letting unvoiced expectations get in the way of a good time. These expectations usually have to do with dining while out and about. Expectations that are unvoiced, and therefore potentially unmet, can cause anger and resentment. We can't simply assume that the picture we have in our head about the future will just happen. A number of times over my career as a manager, employees have come to me to let me know that they had wanted to be assigned to a specific project after someone else had volunteered. In most cases these employees had formed an expectation about their role on the project but had never voiced that expectation. Because the expectation was unvoiced, it had far less chance of being met.

We need to make sure our expectations of the future are possible. Expectations that are goals are important motivators, but only if they are truly our expectations - those we believe will happen. Voicing our expectations is also an important step toward realizing them. Take the raise you want but have not taken the opportunity to ask for. An unvoiced expectation is less apt to evoke feedback; for example, if you ask for a raise that does not match your performance, you may well be told no, just as you were as a child. But unless you ask and make your case, you may never get that raise. Set your goals and expectations, share them, and listen to the feedback. Avoiding impossible and unvoiced expectations will reduce the potential for disappointment and resentment.


Categories: Process Management

The Agile Canon

Herding Cats - Glen Alleman - Sat, 11/12/2016 - 18:30

The paper Agile Base Patterns in the Agile Canon, Daniel R Greening, 2016 49th Hawaii International Conference on System Sciences is an important contribution to the discussion of agile at scale in organizations beyond 5 developers at the table with their customer.

The Agile Canon is composed of five elements:

  1. Measure Economic Progress
    • Plans don't guarantee creative success - creative efforts operate in an economy - a system where people manage limited resources to maximize return and growth
    • Forces on economic progress
      • Economics - participants without well-defined economic guidance wander aimlessly. They don't know what they value. They don't know their costs
      • Measurement - lagging measures applied to current decisions can fail perversely
      • When measurement drives rewards, perceived value is gamed. Creativity is improved with rewards of mastery, autonomy, and purpose [1]
    • Measure economic progress with well-chosen, evolving metrics
      • Identify desired outcomes
      • Identify relevant metrics
      • Create a forecasting discipline
      • Embrace objectivity
      • Evolve
  2. Proactively Experiment to Improve
    • Not improving fast enough
    • Forces on proactive improvement
      • Complacency - passive observation
      • Loss of control creates risk of failure
      • Quest for control (in the manufacturing sense) makes innovation harder [2]
      • Non-creative work is easier
      • Uncertainty creates confusion
    • Proactively experiment to improve
      • Run adaptive improvement experiments
      • Before changing anything, assess different options and explore possible results
      • Experiments can be evolutionary or revolutionary
      • Establish a hypothesis
      • Innovation causes variability
      • Kaizen emphasizes small improvements
      • Variation accompanies chaos and complex adaptive systems
      • Two solutions to all this
        • Compensate for metric variations by including learning metrics
        • Compensate for cost variation by including risk reduction metrics
    • Teams that apply experimental techniques can become hyper-productive [3]
  3. Limit Work-In-Progress
    • When going too slow, more detailed plans make it worse
    • Forces on WIP
      • Inventory - fungible assets help increase productivity, but increase costs
      • Congestion - as randomly timed requests increase system utilization, the delay before a request is started increases exponentially
      • Cognition - the most limited resource for creative people is time and attention
    • Limit WIP to improve value flow
      • Cognition and backlogs - a clear mind helps prioritize work
        • Focus on the most profitable work
        • Establish a Zero backlog approach to planning - Scrum creates a Sprint Backlog; highly effective Scrum teams have seven items in the Sprint Backlog
        • Create a fractally structured Product Backlog - seven small backlog items, followed by seven bigger ones. A fractally structured backlog limits the amount of planning effort invested early in a large project, reducing planning, decreasing sunk-cost bias, and encouraging rapid adaptation to new information
      • Collaborative Focus
        • Swarm on the topmost items
        • Goal is completion - shippable product
        • Communication delays are a form of WIP
      • Value Stream Optimization
        • Visibly track active work by category
        • Limit WIP in each category
        • Organize using VSM [4]
  4. Embrace Collective Responsibility
    • Forces on Collective Responsibility
      • People readily claim responsibility for success, but refrain from claiming responsibility for failure
        • Deny the problem
        • Blame others
        • Blame circumstances
        • Feel an obligation to keep doing our job
    • Help People embrace collective responsibility
      • Autonomy
      • Understanding
      • Agency
    • Organizational culture largely determines if teams and individuals embrace and sustain collective responsibility
  5. Solve Systemic Problems
    • Forces on systemic problems
      • When operating with many actors, the dysfunctions of others limit agility
      • Competing for attention from dependencies creates queues that increase latency
    • Collaboratively analyze and mitigate systemic dysfunction
      • Root Cause Analysis [5]
      • Static analysis - dependency mapping
      • Dynamic analysis - analyze flow
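
The congestion force under "Limit Work-In-Progress" follows directly from queueing theory. As a minimal sketch (illustrative numbers, not from Greening's paper), the M/M/1 mean queue wait Wq = S * rho / (1 - rho), where S is the mean service time and rho the utilization, shows why delay blows up as utilization approaches 1:

```python
def mm1_wait(utilization, service_time=1.0):
    """Mean time a request spends waiting in queue for an M/M/1 server."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time * utilization / (1.0 - utilization)

# Mean wait grows non-linearly as the system fills up.
for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"utilization {rho:.2f}: mean wait = {mm1_wait(rho):6.1f}x service time")
```

This is the argument for limiting WIP: keeping utilization below saturation keeps queue delay, and therefore latency to completed, shippable work, bounded.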

[1] D. Pink, Drive: The Surprising Truth About What Motivates Us (2011).

[2] R. Ashkenas, "It's Time to Rethink Continuous Improvement," HBR blog http://j.mp/hbrci (2012).

[3] C.R. Jakobsen et al., "Scrum and CMMI - Going from Good to Great: Are you ready-ready to be done-done?" Agile Conference 2009, IEEE.

[4] G. Alleman, "Product & Process Development Kaizen for Software Development, Project, and Program Management," LPPDE, Denver, Colorado, April 2008.

[5] D.R. Greening, "Agile Pattern: Social Cause Mapping," http://senexrex.com/cause-mapping/ (2015).

Categories: Project Management