Software Development Blogs: Programming, Software Testing, Agile, Project Management

Feed aggregator

Your Digital Pinball Machine

Coding Horror - Jeff Atwood - Wed, 11/02/2016 - 21:01

I've had something of an obsession with digital pinball for years now. That recently culminated in me buying a Virtuapin Mini.

OK, yes, it's an extravagance. There's no question. But in my defense, it is a minor extravagance relative to a real pinball machine.

The mini is much smaller than a normal pinball machine, so it's easier to move around, takes up less space, and is less expensive. Plus you can emulate every pinball machine, ever! The Virtuapin Mini is a custom $3k build centered around three screens:

  • 27" main playfield (HDMI)
  • 23" backglass (DVI)
  • 8" digital matrix (USB LCD)

Most of the magic is in those screens, and in whether the pinball sim in question allows you to arrange the three screens in its advanced settings, usually by enabling a "cabinet" mode.

Let me give you an internal tour. Open the front coin door and detach the two internal nuts for the front bolts, which are finger tight. Then remove the metal lockdown bar and slide the tempered glass out.

The most uniquely pinball item in the case is right at the front. This Digital Plunger Kit connects the 8 buttons (2 on each side, 3 on the front, 1 on the bottom) and includes an analog tilt sensor and analog plunger sensor. All of which shows up as a standard game controller in Windows.

On the left front side, the audio amplifier and left buttons.

On the right front side, the digital plunger and right buttons.

The 27" playfield monitor is mounted using a clever rod assembly to the standard VESA mount on the back, so we can easily rotate it up to work on the inside as needed.

To remove the playfield, disconnect the power cord and the HDMI connector. Then lift it up and out, and you now have complete access to the interior.

Notice the large down-firing subwoofer mounted in the middle of the body, as well as the ventilation holes. The PC "case" is just a back panel, and the power strip is the Smart Strip kind where it auto-powers everything based on the PC being powered on or off. The actual power switch is on the bottom front right of the case.

Powering it up and getting all three screens configured in the pinball sim of your choice results in … magic.

It is a thoroughly professional build, as you'd expect from a company that has been building these pinball rigs for the last decade. It uses real wood (not MDF), tempered glass, and authentic metal pinball parts throughout.

I was truly impressed by the build quality of this machine. Paul of Virtuapin said they're on roughly version four of the machine and it shows. It's over 100 pounds fully assembled and arrives on a shipping pallet. I can only imagine how heavy the full size version would be!

That said, I do have some tweaks I recommend:

  • Make absolutely sure you get an IPS panel as your 27" playfield monitor. As arrived, mine had a TN panel and while it was playable if you stood directly in front of the machine, playfield visibility was pretty dire outside that narrow range. I dropped in the BenQ GW2765HT to replace the GL2760H that was in there, and I was golden. If you plan to order, I would definitely talk to Paul at VirtuaPin and specify that you want this IPS display even if it costs a little more. The 23" backglass monitor is also TN but the viewing angles are reasonable-ish in that orientation and the backglass is mostly for decoration anyway.

  • The improved display has a 1440p resolution compared to the 1080p originally shipped, so you might want to upgrade from the GeForce 750 Ti video card to the just-released 1050 Ti. This is not strictly required, as I found the 750 Ti an excellent performer even at the higher resolution, but I plan to play only fully 3D pinball sims and the 1050 Ti gets excellent reviews for $140, so I went for it.

  • Internally everything is exceptionally well laid out; the only very minor improvement I'd recommend is connecting the rear exhaust fan to the motherboard header so its fan speed can be dynamically controlled by the computer rather than running at full power all the time.

  • On the Virtuapin website order form the PC they provide sounds quite outdated, but don't sweat it: I picked the lowest options thinking I would have to replace it all, and they shipped me a Haswell based quad-core PC with 8GB RAM and a 256GB SSD, even though those options weren't even on the order form.

I realize $3k (plus palletized shipping) is a lot of money, but I estimate it would cost you at least $1500 in parts to build this machine, plus a month of personal labor. Provided you get the IPS playfield monitor, this is a solidly constructed "real" pinball machine, and if you're into digital pinball like I am, it's an absolute joy to play and a good deal for what you actually get. As Ferris Bueller once said: "It is so choice. If you have the means, I highly recommend picking one up."

If you'd like to experiment with this and don't have three grand burning a hole in your pocket, 90% of digital pinball simulation is a widescreen display in portrait mode. Rotate one of your monitors, add another monitor if you're feeling extra fancy, and give it a go.

As for software, most people talk about Visual Pinball for these machines, and it works. But the combination of janky hacked-together 2D bitmap technology used in the gameplay, and the fact that all those designs are ripoffs that pay nothing in licensing back to the original pinball manufacturers really bothers me.

I prefer Pinball Arcade in DirectX 11 mode, which is downright beautiful, easily (and legally!) obtainable via Steam and offers a stable of 60+ incredible officially licensed classic pinball tables to choose from, all meticulously recreated in high resolution 3D with excellent physics.

As for getting pinball simulations running on your three monitor setup, if you're lucky the game will have a cabinet mode you can turn on. Unfortunately, this can be weird due to … licensing issues. Apparently building a pinball sim on the computer requires entirely different licensing than placing it inside a full-blown pinball cabinet.

Pinball Arcade has a nifty camera hack someone built that lets you position three cameras as needed to get the three displays. You will also need the excellent x360ce program to dynamically map joystick events and buttons to a simulated Xbox 360 controller.

Pinball FX2 added a cabinet mode about a year ago, but turning it on requires a special code and you have to send them a picture of your cabinet (!) to get that code. I did, and the cabinet mode works great; just enter your code, specify the coordinates of each screen in the settings and you are good to go. While these tables definitely have arcadey physics, I find them great fun and there are a ton to choose from.

Pro Pinball Timeshock Ultra is unique because it's originally from 1997 and was one of the first "simulation" level pinball games. The current rebooted version is still pre-rendered graphics rather than 3D, but the client downloads the necessary gigabytes of pre-rendered content at your exact screen resolution and it looks amazing.

Timeshock has explicit cabinet support in the settings and via command line tweaks. Also, in cabinet mode, when choosing table view, you want the bottom left one. Trust me on this! It supports maximum height for portrait cabinet mode.

Position each window as necessary, then enable fullscreen for each one and it'll snap to the monitor you placed it on. It's "only" one table, but arguably the most classic of all pinball sims. I sincerely hope they continue to reboot the rest of the Pro Pinball series, including Big Race USA which is my favorite.

I've always loved pinball machines, even though they struggled to keep up with digital arcade games. In some ways I view my current project, Discourse, as a similarly analog experience attempting to bridge the gap to the modern digital world:

The fantastic 60 minute documentary Tilt: The Battle to Save Pinball has so many parallels with what we're trying to do for forum software.

Pinball is threatened by Video Games, in the same way that Forums are threatened by Facebook and Twitter and Tumblr and Snapchat. They're considered old and archaic technology. They've stopped being sexy and interesting relative to what else is available.

Pinball was forced to reinvent itself several times throughout the years, from mechanical, to solid state, to computerized. And the defining characteristic of each "era" of pinball is that the new tables, once you played them, made all the previous pinball games seem immediately obsolete because of all the new technology.

The Pinball 2000 project was an attempt to invent the next generation of pinball machines:

It wasn't a new feature, a new hardware set, it was everything new. We have to get everything right. We thought that we had reinvented the wheel. And in many respects, we had.

This is exactly what we want to do with Discourse – build a forum experience so advanced that playing will make all previous forum software seem immediately obsolete.

Discourse aims to save forums and make them relevant and useful to a whole new generation.

So if I seem a little more nostalgic than most about pinball, perhaps a little too nostalgic at times, maybe that's why.

Categories: Programming

Support Ended for Eclipse Android Developer Tools

Android Developers Blog - Wed, 11/02/2016 - 20:33

By Jamal Eason, Product Manager, Android

With the release of Android Studio 2.2, the time has now come to say goodbye to the Eclipse Android Developer Tools. We have formally ended their support and development. There's never been a better time to switch to Android Studio and experience the improvements we've made to the Android development workflow.

Android Studio

Android Studio, the official IDE for Android, features powerful code editing with advanced code-completion and refactoring. It includes robust static analysis, bringing the intelligence of the Android engineering team to you to help you easily apply Android coding best practices, and includes simultaneous debugging in both Java and C++ to help fix any bugs that slip through. When you combine this with performance tooling, a fast, flexible build system, code templates, GitHub integration, and its high-performance, feature-rich emulator, you get a deeply Android-tailored development environment for the many form factors of the OS. It's the development environment used by 92% of the top 125 Google Play apps and games, and we're constantly innovating it to handle every Android development need.

What's New in Android Studio 2.2

Android Studio 2.2 builds on the great features from Android Studio 2.0. There are over twenty new features that improve development whether you are designing, iterating, or testing. Notable changes include:

  • Instant Run - The super-fast iteration engine now is both more reliable and available for more types of changes
  • Layout Editor - The new user interface designer that makes it easier than ever to create beautiful app experiences
  • Constraint Layout - A new flexible layout engine for building dynamic user interfaces - designed to work with the new layout editor
  • C++ Support - CMake and ndk-build are now supported alongside improved editing and debug experiences
  • APK Analyzer - Inspects APKs to help you streamline your APK and debug multi-dex issues
  • GPU Debugger (beta) - Captures a stream of OpenGL ES commands and replays them with GPU state inspection
  • Espresso Test Recorder (beta) - Records interactions with your app and outputs UI test code

For our ADT Fans

All of your favorite ADT tools are now part of Android Studio, including DDMS, Trace Viewer, Network Monitor, and CPU Monitor. We've also improved Android Studio's accessibility, including keyboard navigation enhancements and screen reader support.

We announced that we were ending development and official support for the Android Developer Tools (ADT) in Eclipse at the end of 2015, including the Eclipse ADT plugin and Android Ant build system. With the latest updates to Studio, we've completed the transition.

Migrating to Android Studio

To get started, download and install Android Studio. For most developers, including those with C/C++ projects, migration is as simple as importing your existing Eclipse ADT projects in Android Studio with the File > New > Import Project menu option. For more details on the migration process, check out the migration guide.

Feedback and Open Source Contributions

We're dedicated to making Android Studio the best possible integrated development environment for building Android apps, so if there are missing features or other challenges preventing you from switching to Android Studio, we want to hear about it in our survey! You can also file bugs or feature requests directly with the team, and let us know via our Twitter or Google+ accounts.

Android Studio is an open source project, available to all at no cost. Check out our Open Source project page if you're interested in contributing or learning more.

Categories: Programming

Blisters: Thoughts on Change


Change is the currency of every organization involved in developing, enhancing or maintaining software. Change includes the business process being automated, the techniques used to do work or even the technology upon which the software is built. Very little significant change is frictionless. For change to occur, the need and the benefit to be gained from solving the problem must overcome inertia – the work needed to find a solution and implement it.

I recently purchased a new pair of shoes; an act that I had put off for a bit longer than I should have. The pair of shoes I had been wearing nearly every day for the last year was a pair of Italian loafers. The leather was exquisite; over the three years I had owned the shoes they had become very old, but very comfortable friends. Unfortunately the soles had worn out, and because of how the shoes were constructed they were not repairable. As a rule, consultants wearing worn out shoes, however treasured, generally do not confer confidence. The hole in the sole and a need to earn a living established the need for me to change. The final impetus to overcome inertia was delivered when I found an important meeting on my schedule for the next week. Why was there any inertia? Unlike my wife, I can’t order 10 pairs of shoes online and then return seven after trying them on. First, my need was immediate; the worn out soles were obvious to anyone sitting near me. Secondly, I am fairly hard to fit when it comes to dress shoes. Shopping (something I find as enjoyable as a prostate exam) is an event to be avoided. Deciding to change/shop requires an investment in effort and time. Almost every significant organizational change requires the same upfront investment in time, effort and rationalization to break the day-to-day inertia needed to begin to pursue change.

Breaking through the barrier of inertia by establishing a need and weighing the benefit to be gained by fulfilling that need is only the first step along the path of change. All leaders know that change requires spending blood, sweat and tears to find the correct solution. A team that has decided to change how they are working might have to sort through and evaluate Agile, lean or even classic software development techniques before finding the solution that fits their needs and culture. The process is not terribly different from my shopping for shoes. The shoe story continues with a trip to the local mall with two “major” department stores. Once at the mall I began the process of evaluating options. The process included an hour that I will never get back in one store being told that there were no size 10.5 shoes in black that would be suitable for an office, then being offered a pair of 11’s that I could lace up myself to try on. The latter caused me to immediately go to another store where I bought a pair (my normal brand in stylish black). Just like the team finding and deciding on a new development framework, I had to evaluate alternatives, try them on (a sort of prototype) and then negotiate the sale. Change is not frictionless.

Once an organization decides to change and settles on how it will meet its need, implementation remains. Regardless of all the groundwork done up to this point, sometimes pain is required to implement the change. Teams embracing Agile, kanban or even waterfall will need to learn new concepts, practice those techniques and understand that mistakes will be made. Looping back to the shoe story, I am now suffering through a blister. Organizational process change might not generate physical pain, like new shoes; however, the stress of the change has to be accounted for when determining if the cost of change is less than the gains foreseen for addressing the need.

In the end, change is unavoidable whether we are discussing new shoes or process improvements. The question is rarely will we change, but rather when we will change and how big a need we have to generate to offset the effort and aggravation that any change requires.


Categories: Process Management

Neo4j: Find the intermediate point between two lat/longs

Mark Needham - Tue, 11/01/2016 - 23:10

Yesterday I wrote a blog post showing how to find the midpoint between two lat/longs using Cypher which worked well as a first attempt at filling in missing locations, but I realised I could do better.

As I mentioned in the last post, when I find a stop that’s missing lat/long coordinates I can usually find two nearby stops that allow me to triangulate this stop’s location.

I also have train routes which indicate the number of seconds it takes to go from one stop to another, which allows me to indicate whether the location-less stop is closer to one stop than the other.

For example, consider stops a, b, and c where b doesn’t have a location. If we have these distances between the stops:

(a)-[:NEXT {time: 60}]->(b)-[:NEXT {time: 240}]->(c)

it tells us that point ‘b’ is actually 0.2 of the way from ‘a’ to ‘c’ (60 / (60 + 240) = 0.2) rather than being the midpoint.
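
If the stops are already in the graph, that fraction can be derived rather than hard-coded. Here's a minimal sketch; the Stop label, name property, and time property are assumptions for illustration:

MATCH (a:Stop)-[r1:NEXT]->(b:Stop)-[r2:NEXT]->(c:Stop)
// b is the stop with no location yet
WHERE NOT exists(b.latitude)
// fraction of the a->c journey covered by a->b
RETURN a.name, b.name, c.name,
       toFloat(r1.time) / (r1.time + r2.time) AS f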

There’s a formula we can use to work out that point:

a = sin((1−f)⋅δ) / sin δ
b = sin(f⋅δ) / sin δ
x = a ⋅ cos φ1 ⋅ cos λ1 + b ⋅ cos φ2 ⋅ cos λ2
y = a ⋅ cos φ1 ⋅ sin λ1 + b ⋅ cos φ2 ⋅ sin λ2
z = a ⋅ sin φ1 + b ⋅ sin φ2
φi = atan2(z, √(x² + y²))
λi = atan2(y, x)

where δ is the angular distance d/R between the two points, f is the fraction along the path, φ is latitude, and λ is longitude.

Translated to Cypher (with mandatory Greek symbols), it reads like this to find the point 0.2 of the way from one point to another:

with {latitude: 51.4931963543, longitude: -0.0475185810} AS p1, 
     {latitude: 51.47908, longitude: -0.05393950 } AS p2
 
WITH p1, p2, distance(point(p1), point(p2)) / 6371000 AS δ, 0.2 AS f
WITH p1, p2, δ, 
     sin((1-f) * δ) / sin(δ) AS a,
     sin(f * δ) / sin(δ) AS b
WITH radians(p1.latitude) AS φ1, radians(p1.longitude) AS λ1,
     radians(p2.latitude) AS φ2, radians(p2.longitude) AS λ2,
     a, b
WITH a * cos(φ1) * cos(λ1) + b * cos(φ2) * cos(λ2) AS x,
     a * cos(φ1) * sin(λ1) + b * cos(φ2) * sin(λ2) AS y,
     a * sin(φ1) + b * sin(φ2) AS z
RETURN degrees(atan2(z, sqrt(x^2 + y^2))) AS φi,
       degrees(atan2(y,x)) AS λi
╒═════════════════╤════════════════════╕
│φi               │λi                  │
╞═════════════════╪════════════════════╡
│51.49037311149128│-0.04880308288561931│
└─────────────────┴────────────────────┘

A quick sanity check plugging in 0.5 instead of 0.2 finds the midpoint which I was able to sanity check against yesterday’s post:

╒═════════════════╤═════════════════════╕
│φi               │λi                   │
╞═════════════════╪═════════════════════╡
│51.48613822097523│-0.050729537454086385│
└─────────────────┴─────────────────────┘

That’s all for now!

Categories: Programming

SE-Radio Episode 273: Steve McConnell on Software Estimation

Sven Johann talks with Steve McConnell about Software Estimation. Topics include when and why businesses need estimates and when they don’t need them; turning estimates into a plan and validating progress on the plan; why software estimates are always full of uncertainties, what these uncertainties are and how to deal with them. They continue with: […]
Categories: Programming

OMG They made me Product Owner!!

Xebia Blog - Tue, 11/01/2016 - 13:24
The face of the guy in the hallway expressed a mixture of euphoria and terror when I passed him. We had met at the coffee machine before and we had discussed how the company was moving to a more Scrum-based way of developing their products. “You sort of know how this PO thing

Coaches, Managers, Collaboration and Agile, Part 3

I started this series writing about the need for coaches in Coaches, Managers, Collaboration and Agile, Part 1. I continued in Coaches, Managers, Collaboration and Agile, Part 2, talking about the changed role of managers in agile. In this part, let me address the role of senior managers in agile and how coaches might help.

For years, we have organized our people into silos. That meant we had middle managers who (with any luck) understood the function (testing or development) and/or the problem domain (think about the major chunks of your product such as Search, Admin, Diagnostics, the feature sets). I often saw technical organizations organized into product areas with directors at the top, and some functional directors, such as those for test/quality and/or performance.

In addition to the idea of functional and domain silos, some people think of testing or technical writing as services. I don’t think that way. To me, it’s not a product unless you can release it. You can’t release a product without having an idea of what the testers have discovered and, if you need it, user documentation for the users.

I don’t think about systems development. I think about product development. That means there are no “service” functions, such as test. We need cross-functional teams to deliver a releasable product. But, that’s not how we have historically organized the people.

When an organization wants to use agile, coaches, trainers, and consultants all say, “Please create cross-functional teams.” What are the middle managers supposed to do? Their identity is about their function or their domain. In addition, they probably have MBOs (Management By Objective) for their function or domain. Aside from not working and further reducing flow efficiency, now we have affected their compensation. Now we have the container problem I mentioned in Part 2.

Middle and senior managers need to see that functional silos don’t work. Even silos by part of product don’t work. Their compensation has to change. And, they don’t get to tell people what to do anymore.

Coaches can help middle managers see what the possibilities are, for the work they need to do and how to muddle through a cultural transition.

Instead of having managers tell people directly what to do, we need senior management to update the strategy and manage the project portfolio so we optimize the throughput of a team, not a person. (See Resource Management is the Wrong Idea; Manage Your Project Portfolio Instead and Resource Efficiency vs. Flow Efficiency.)

The middle managers need coaching and a way to see what their jobs are in an agile organization. The middle managers and the senior managers need to understand how to organize themselves and how their compensation will change as a result of an agile transformation.

In an agile organization, the middle managers will need to collaborate more. Their collaboration includes: helping the teams hire, creating communities of practice, providing feedback and meta-feedback, coaching and meta-coaching, helping the teams manage the team health, and most importantly, removing team impediments.

Teams can remove their local impediments. However, managers often control or manage the environment in which the teams work. Here’s an example. Back when I was a manager, I had to provide a written review to each person once a year. Since I met with every person each week or two, it was easy for me to do this. And, when I met with people less often, I discovered they took initiative to solve problems I didn’t know existed. (I was thrilled.)

I had to have HR “approve” these reviews before I could discuss them with the team member. One not-so-experienced HR person read one of my reviews and returned it to me. “This person did not accomplish their goals. You can’t give them that high a ranking.”

I explained that the person had finished more valuable work. And, HR didn’t have a way to update goals in the middle of a year. “Do you really want me to rank this person lower because they did more valuable work than we had planned for?”

That’s the kind of obstacle managers need to remove. Ranking people is an obstacle, as well as having yearly goals. If we want to be able to change, the goals can’t be about projects.

We don’t need to remove HR, although their jobs must change. No, I mean the HR systems are an impediment. This is not a one-conversation-and-done impediment. HR has systems for a reason. How can the managers help HR to become more agile? That’s a big job and requires a management team who can collaborate to help HR understand. That’s just one example. Coaches can help the managers have the conversations.

As for senior management, they need to spend time developing and updating the strategy. Yes, I’m fond of continuous strategy update, as well as continuous planning and continuous project portfolio management.

I coach senior managers on this all the time.

Let me circle back around to the question in Part 1: Do we have evidence we need coaches? No.

On the other hand, here are some questions you might ask yourself to see if you need coaches for management:

  • Do the managers see the need for flow efficiency instead of resource efficiency?
  • Do the managers understand and know how to manage the project portfolio? Can they collaborate to create a project portfolio that delivers value?
  • Do the managers have an understanding of how to do strategic direction and how often they might need to update direction?
  • Do the managers understand how to move to more agile HR?
  • Do the managers understand how to move to incremental funding?

If the answers are all yes, you probably don’t need management coaching for your agile transformation. If the answers are no, consider coaching.

When I want to change the way I work and the kind of work I do, I take classes and often use some form of coaching. I’m not talking about full-time in person coaching. Often, that’s not necessary. But, guided learning? Helping to see more options? Yes, that kind of helping works. That might be part of coaching.

Categories: Project Management

Neo4j: Find the midpoint between two lat/longs

Mark Needham - Mon, 10/31/2016 - 20:31

Over the last couple of weekends I’ve been playing around with some transport data and I wanted to run the A* algorithm to find the quickest route between two stations.

The A* algorithm takes an estimateEvaluator as one of its parameters, and the evaluator looks at the lat/longs of nodes to work out whether a path is worth following or not. I therefore needed to add lat/longs for each station, and I found it surprisingly hard to find this location data for all the points in the dataset.

Luckily I tend to have the lat/longs for two points on either side of a station, so I can work out the midpoint as an approximation for the missing one.

I found an article which defines a formula we can use to do this and there’s a StackOverflow post which has some Java code that implements the formula.

I wanted to find the midpoint between Surrey Quays station (51.4931963543,-0.0475185810) and a point further south on the train line (51.47908,-0.05393950). I wrote the following Cypher query to calculate this point:

WITH 51.4931963543 AS lat1, -0.0475185810 AS lon1, 
     51.47908 AS lat2 , -0.05393950 AS lon2
 
WITH radians(lat1) AS rlat1, radians(lon1) AS rlon1, 
     radians(lat2) AS rlat2, radians(lon2) AS rlon2, 
     radians(lon2 - lon1) AS dLon
 
WITH rlat1, rlon1, rlat2, rlon2, 
     cos(rlat2) * cos(dLon) AS Bx, 
     cos(rlat2) * sin(dLon) AS By
 
WITH atan2(sin(rlat1) + sin(rlat2), 
           sqrt( (cos(rlat1) + Bx) * (cos(rlat1) + Bx) + By * By )) AS lat3,
     rlon1 + atan2(By, cos(rlat1) + Bx) AS lon3
 
RETURN degrees(lat3) AS midLat, degrees(lon3) AS midLon
╒═════════════════╤═════════════════════╕
│midLat           │midLon               │
╞═════════════════╪═════════════════════╡
│51.48613822097523│-0.050729537454086385│
└─────────────────┴─────────────────────┘

The Google Maps screenshot on the right hand side shows the initial points at the top and bottom and the midpoint in between. It’s not perfect; ideally I’d like the midpoint to be on the track, but I think it’s good enough for the purposes of the algorithm.

Now I need to go and fill in the lat/longs for my location-less stations!
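
Once a missing location has been computed, writing it back is trivial. A minimal sketch; the Station label, key, property names, and station name here are all hypothetical:

// hypothetical label, key and property names
MATCH (s:Station {name: "Missing Station"})
SET s.latitude = 51.48613822097523,
    s.longitude = -0.050729537454086385
RETURN s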

Categories: Programming

Keeping the Play Store trusted: fighting fraud and spam installs

Android Developers Blog - Mon, 10/31/2016 - 18:33

Posted by Kazushi Nagayama, Search Quality Analyst, and Andrew Ahn, Product Manager

We strive to continuously make Google Play the best platform for enjoying and discovering the most innovative and trusted apps. Today we are announcing additional enhancements to protect the integrity of the store.

Our teams work every day to improve the quality of our discovery systems. These content discovery systems ensure that users can find and download apps they will love. From time to time, we observe instances of developers attempting to manipulate the placement of their apps through illegitimate means like fraudulent installs, fake reviews, and incentivized ratings. These attempts not only violate the Google Play Developer Policy, but also harm our community of developers by hindering their chances of being discovered or recommended through our systems. Ultimately, they put the end users at risk of making wrong decisions based on inaccurate, unauthentic information.

Today we are rolling out improved detection and filtering systems to combat such manipulation attempts. If an install is conducted with the intention to manipulate an app's placement on Google Play, our systems will detect and filter it. Furthermore, developers who continue to exhibit such behaviors could have their apps taken down from Google Play.

In the vast majority of cases, no action will be needed. If you are asking someone else to promote your app (e.g., third-party marketing agency), we advise you to make sure that the promotion is based on legitimate practices. In case of questions, please check out the Developer Support Resources.

These important changes will help protect the integrity of Google Play, our developer community, and ultimately our end user. Thank you for your support in building the world's most trusted store for apps and games!

Categories: Programming

SPaMCAST 417- Six Elements of Business Stories, QA Corner, Herbie and Tame Flow

Software Process and Measurement Cast - Mon, 10/31/2016 - 03:37

The Software Process and Measurement Cast 417 discusses the six elements of business stories. These six elements are required for effective business stories. We also tackle whether each of those elements is equally important in telling the different types of stories spun in a business environment.

Steve Tendon joins the SPaMCAST this week to discuss Chapter 12 in Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban, published by J Ross (buy a copy here). We discussed Herbie and Kanban. The story of Herbie provides a great metaphor for the flow of work through an organization and how it can be improved. Visit Steve at www.tendon.net.

We cap this edition of the Software Process and Measurement Cast with a visit to the QA Corner with Jeremy Berriault. Jeremy and I discussed the Samsung Note 7 and testing. While we may not have to test lithium-ion batteries professionally, we can extract lessons from this scenario on risk and testing! Connect with Jeremy on LinkedIn.

Re-Read Saturday News

We continue the read/re-read of The Five Dysfunctions of a Team by Patrick Lencioni (published by Jossey-Bass).   As we move through the first part of the book we are being exposed to Lencioni’s model of team dysfunctions (we get through most of it this week) and a set of crises to illustrate the common problems that make teams into dysfunctional collections of individuals. Today we re-read the three sections titled Deep Tissue, Attack and Exhibition.  

Visit the Software Process and Measurement Cast blog to participate in this and previous re-reads.

Next SPaMCAST

The Software Process and Measurement Cast 418 will feature our interview with Larry Cooper.  Larry and I talked about his project The Agility Series.  The series is providing the community an understanding of how Agile is applied and how practitioners are interpreting practices and principles.  

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.

Categories: Process Management

Neo4j: Create dynamic relationship type

Mark Needham - Sun, 10/30/2016 - 23:12

One of the things I’ve often found frustrating when importing data using Cypher, Neo4j’s query language, is that it’s quite difficult to create dynamic relationship types.

Say we have a CSV file structured like this:

load csv with headers from "file:///people.csv" AS row
RETURN row
╒═══════════════════════════════════════════════════════╕
│row                                                    │
╞═══════════════════════════════════════════════════════╡
│{node1: Mark, node2: Reshmee, relationship: MARRIED_TO}│
├───────────────────────────────────────────────────────┤
│{node1: Mark, node2: Alistair, relationship: FRIENDS}  │
└───────────────────────────────────────────────────────┘
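
For reference, the people.csv behind those rows would look something like this (a sketch inferred from the output above):

node1,node2,relationship
Mark,Reshmee,MARRIED_TO
Mark,Alistair,FRIENDS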

We want to connect the two nodes with the relationship type specified in the file. Unfortunately, Cypher doesn’t let us pass in a relationship type dynamically, so we have to resort to the FOREACH hack to create our relationships:

load csv with headers from "file:///people.csv" AS row
MERGE (p1:Person {name: row.node1})
MERGE (p2:Person {name: row.node2})
 
FOREACH(ignoreMe IN CASE WHEN row.relationship = "MARRIED_TO" THEN [1] ELSE [] END |
 MERGE (p1)-[:MARRIED_TO]->(p2))
 
FOREACH(ignoreMe IN CASE WHEN row.relationship = "FRIENDS" THEN [1] ELSE [] END |
 MERGE (p1)-[:FRIENDS]->(p2))

This works, but:

  1. Looks horrendous
  2. Doesn’t scale particularly well when we have multiple relationship types to deal with

As in my last post the APOC library comes to the rescue again, this time in the form of the apoc.create.relationship procedure.

This procedure allows us to change our initial query to read like this:

load csv with headers from "file:///people.csv" AS row
MERGE (p1:Person {name: row.node1})
MERGE (p2:Person {name: row.node2})
 
WITH p1, p2, row
CALL apoc.create.relationship(p1, row.relationship, {}, p2) YIELD rel
RETURN rel

Much better!
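
As a quick sanity check (a minimal sketch, assuming only the nodes and relationships created above), we can list whatever relationship types actually ended up in the graph:

MATCH (p1:Person)-[r]->(p2:Person)
RETURN p1.name, type(r), p2.name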

Categories: Programming

Five Dysfunctions of a Team, Patrick Lencioni: Re-Read Week 6

(Cover image: The Five Dysfunctions of a Team. “The ‘Book’ during unboxing!”)

Five Dysfunctions of a Team, Patrick Lencioni:  Re-Read Week 6

Today we continue our re-read of the business novel The Five Dysfunctions of a Team  by Patrick Lencioni (Jossey-Bass, Copyright 2002, 33rd printing). If you do not have a copy of the book, please buy a copy from the link above and read along.

This week, we are exposed to Lencioni’s whole model of team dysfunctions. Lencioni continues to illustrate the model through a series of problems and crises that make the DecisionTech team into a dysfunctional collection of individuals.

Deep Tissue

Reacting to the confusion generated when the group began discussing goals, Kathryn opens the chapter by announcing that she understands the underlying problem. Goals must reflect organizational results instead of individual recognition. In order to be a team, everyone needs to adopt a set of common goals and measurements, and just as importantly to use those goals and measures to make collective decisions on a daily basis. The observation that organizations have goals but do not use them is not that rare. I have observed (and participated in) countless meetings that begin and end without seriously referencing the goals of the meeting. I use the term “seriously” because I don’t think a random question about whether the team is meeting their goals counts. This behavior is not constrained to meeting rooms; I recently was asked to observe a Scrum stand-up meeting, and after watching the team go through the ubiquitous three questions, I asked whether the team was going to look at their burndown chart. One brave team member ventured that they only looked at “that” on the last day of the sprint and that everyone just does the best they can do to get their stories completed. The goals instantiated in the burndown chart were not really the team’s goals.

Each person on DecisionTech’s executive team (I am using the term loosely) expected everyone else to do their own job independently. Because the focus was on doing their own thing and not sharing progress toward their goals, there was no impetus to shift resources to facilitate achieving organizational goals over individual goals. This type of behavior can cause individuals within a group to feel cut off and isolated from the larger group. Everyone in the Scrum team mentioned earlier, in essence, had their own backlog and goals and therefore was incented to act as if they were a team of one within a larger team, leading them to ignore the higher-level goals of the team.

Team-level politics is another natural outcome as individuals compete for resources and recognition.  Kathryn identifies this negative type of politics as a symptom she has observed within the executive team. In The Five Dysfunctions, politics is defined as “when people choose their words and actions based on how they want others to react rather than based on what they really think.”  Martin punctuates the section with an admission that “we’re definitely political.”

Attack

The discussion of refocusing the team on organizational rather than personal goals led the team to a fork in the road where they either had to begin accepting Kathryn’s ideas or to attack them. When an organization is facing a change, lack of resistance does not always indicate acceptance, but rather may indicate passivity in which change is only made on the surface. As a change agent, I had to learn that lesson the hard way. Early in my career, I was leading a process improvement effort for a testing group. We had laid out the plans and process to get the testers and developers to work more closely together (an early version of ATDD). I should have known better: at the presentation we got lots of nodding heads and zero resistance. Lots of nodding heads didn’t indicate buy-in; the process improvement was only tacitly accepted, and when the first project timeline got tight the changes went out the window.

In the book, JR leads the challenge to Kathryn’s plan by pushing her to share the other dysfunctions rather than waiting for the next off-site in three weeks.

Exhibition

The base of the model, as we have noted, is the absence of trust.  Teams without trust fear the conflict that comes from challenging and holding each other accountable. Fear of conflict is the next part of the model.  Teams that fear conflict seek to preserve a false sense of harmony (Lencioni calls this “artificial harmony”). Lack of trust and the need to preserve harmony stop team members from engaging in open, constructive ideological conflict. The conflict that leads to goals and directions the team can execute is worth the time, and it avoids the time needed later when things go wrong and need to be sorted out.

The next step in the model is a lack of commitment.  Teams that lack commitment generally fail to buy into decisions, which leads to ambiguity in the team’s direction and goals. One of the causes of a lack of commitment is that team members do not get their issues and problems heard when the direction is being set or decisions are being made. Teams shouldn’t listen to issues and problems merely to generate consensus (defined as an attempt to please everyone, which usually turns into displeasing everyone equally), but rather to generate engagement. Consensus can lead to a type of paralysis as teams try to attain complete agreement.  Hearing everyone’s issues and problems during decision making, but then agreeing to follow a single direction, allows an unambiguous direction to be set and decisions to be made so that everyone can commit (or get out of the way).

The next dysfunction is the avoidance of accountability, which shows up as low standards. This dysfunction builds on the previous dysfunctions: lack of trust, fear of conflict, and lack of commitment. Without trust it is very difficult to hold each other accountable for commitments to the organization and to each other. Mikey ended the day by saying, “that makes sense, this actually makes sense.”

Three quick takeaways:

  • Goals and measurement only have power if people use them to provide direction and make decisions.
  • Resistance is not futile, but can be an indication of engagement.
  • Dysfunctions build on each other.

Previous Installments in the re-read of  The Five Dysfunctions of a Team by Patrick Lencioni:

Week 1 – Introduction through Observations

Week 2 – The Staff through the End Run

Week 3 – Drawing the Line through Pushing Back

Week 4 – Entering Danger through Rebound

Week 5 – Awareness through Goals


Categories: Process Management


Without a Root Cause Analysis, No Suggested Fix Can Be Effective

Herding Cats - Glen Alleman - Fri, 10/28/2016 - 16:12

In a recent blog post titled Precision, it is suggested that precision (or the implication thereof) is perhaps the root problem of most, if not all, dysfunction related to estimation.

Yes, projects have uncertainty; everything does. Estimates have precision and accuracy, but the reason for these unfavorable outcomes is not stated in the post.

But as the second paragraph of the post says, “The number of people walking around with the natural assumption that someone is to blame whenever an estimate turns out to be wrong is just sad. When you’re pushed to deliver a low estimate to secure a bid, and then yelled at for not being able to build the product in time afterwards — you just can’t win.” This is another example of Bad Management and Doing Stupid Things on Purpose.

Using the chart from the post, showing the number of projects that went over their estimated effort, let's look closer at a process to sort out the conjectures made in the post about estimating.

[Chart from the post: number of projects exceeding their estimated effort]

First, without finding the root cause of the estimating gaps, the quest for a solution is an open-loop problem. Yes, there is data showing large variances of actuals versus estimated values.

The question “Why is this the case?” has no answer.

And by the way, this is not the same naive and simple-minded “5 Whys” used by #NoEstimates advocates; the actual approach to asking why is shown later in this post. Blaming the estimates themselves is like observing a number of people who are overweight and then claiming “being overweight” is their problem. You need to find the Root Cause.

At times I work for the Institute for Defense Analyses, which produces Root Cause Analyses for software-intensive systems of systems; here’s an example: Expeditionary Combat Support System: Root Cause Analysis.

When you find the Root Cause for those projects reported to be “overweight,” you may also find the corrective action for the inaccuracies and imprecision of the estimates. Or it may be that there were technical issues on the program that caused the overages.

Research has shown there are four “major” causes of cost and schedule growth in our domain; estimating is only one of those causes.

[Chart: the four major causes of cost and schedule growth]

Without determining which root cause is the source of the unfavorable performance of the projects shown in the first chart, no suggestion for corrections can be made. 

Start with Root Cause Analysis and only then suggest the reason for the problem. Here’s the process used in our domain: http://www.apollorootcause.com/about/apollo-root-cause-analysis-method  Buy the Reality Charting tool, buy Apollo Root Cause Analysis: Effective Solutions to Everyday Problems Every Time, and download Seven Steps to Effective Problem-Solving and Strategies for Personal Success. Then this approach may be useful in your domain as well.

Then, when you hear that estimates are the smell of dysfunction, you’ll know that claim cannot be correct without knowing the Root Cause of that dysfunction. More importantly, you’ll know that anyone suggesting a fix for a problem without a Root Cause for that problem is just treating the symptoms, and the problem will recur over and over, just like the recurring problems with project cost and schedule overruns.

Related articles:

  • Humpty Dumpty and #NoEstimates
  • Information Technology Estimating Quality
  • Herding Cats: Estimating Resources
  • Herding Cats: Project Management, Performance Measures, and Statistical Decision Making
  • Are Estimates Really The Smell of Dysfunction?
  • Why Guessing is not Estimating and Estimating is not Guessing
  • IT Risk Management
  • The Dysfunctional Approach to Using “5 Whys”
  • Five Estimating Pathologies and Their Corrective Actions
Categories: Project Management

Robots bring business and IT together

Xebia Blog - Fri, 10/28/2016 - 13:46
Maybe you’ve already read the diary of one of our mBots; if not, I encourage you to do so first! So, what was this day all about? How did we come to organise this, and what did the participants learn?

Changing teams

As companies decide to adopt a more agile way of working, they also start

A Whirlwind Tour of the Kotlin Type Hierarchy

Mistaeks I Hav Made - Nat Pryce - Fri, 10/28/2016 - 09:08
Kotlin has plenty of good language documentation and tutorials. But I’ve not found an article that describes in one place how Kotlin’s type hierarchy fits together. That’s a shame, because I find it to be really neat [1]. Kotlin’s type hierarchy has very few rules to learn. Those rules combine together consistently and predictably. Thanks to those rules, Kotlin can provide useful, user-extensible language features – null safety, polymorphism, and unreachable code analysis – without resorting to special cases and ad-hoc checks in the compiler and IDE.

Starting from the Top

All types of Kotlin object are organised into a hierarchy of subtype/supertype relationships. At the “top” of that hierarchy is the abstract class Any. For example, the types String and Int are both subtypes of Any. Any is the equivalent of Java’s Object class. Unlike Java, Kotlin does not draw a distinction between “primitive” types that are intrinsic to the language and user-defined types. They are all part of the same type hierarchy.

If you define a class that is not explicitly derived from another class, the class will be an immediate subtype of Any.

    class Fruit(val ripeness: Double)

If you do specify a base class for a user-defined class, the base class will be the immediate supertype of the new class, but the ultimate ancestor of the class will be the type Any.

    abstract class Fruit(val ripeness: Double)
    class Banana(ripeness: Double, val bendiness: Double) : Fruit(ripeness)
    class Peach(ripeness: Double, val fuzziness: Double) : Fruit(ripeness)

If your class implements one or more interfaces, it will have multiple immediate supertypes, with Any as the ultimate ancestor.

    interface ICanGoInASalad
    interface ICanBeSunDried
    class Tomato(ripeness: Double) : Fruit(ripeness), ICanGoInASalad, ICanBeSunDried

The Kotlin type checker enforces subtype/supertype relationships. For example, you can store a subtype value in a supertype variable:

    var f: Fruit = Banana(ripeness = 0.9, bendiness = 0.5)
    f = Peach(ripeness = 0.8, fuzziness = 0.8)

But you cannot store a supertype value in a subtype variable:

    val b = Banana(ripeness = 0.9, bendiness = 0.5)
    val f: Fruit = b
    val b2: Banana = f // Error: Type mismatch: inferred type is Fruit but Banana was expected

Nullable Types

Unlike Java, Kotlin distinguishes between “non-null” and “nullable” types. The types we’ve seen so far are all “non-null”. Kotlin does not allow null to be used as a value of these types. You’re guaranteed that dereferencing a reference to a value of a “non-null” type will never throw a NullPointerException. The type checker rejects code that tries to use null or a nullable type where a non-null type is expected. For example:

    var s: String = null // Error: Null can not be a value of a non-null type String

If you want a value to maybe be null, you need to use the nullable equivalent of the value type, denoted by the suffix ‘?’. For example, the type String? is the nullable equivalent of String, and so allows all String values plus null.

    var s: String? = null
    s = "foo"
    s = null
    s = "bar"

The type checker ensures that you never use a nullable value without having first tested that it is not null. Kotlin provides operators to make working with nullable types more convenient. See the Null Safety section of the Kotlin language reference for examples.

When non-null types are related by subtyping, their nullable equivalents are also related in the same way. For example, because String is a subtype of Any, String? is a subtype of Any?, and because Banana is a subtype of Fruit, Banana? is a subtype of Fruit?.
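
To make those convenience operators concrete, here is a minimal sketch of my own (describe is a hypothetical helper, not from the article) that reuses the Fruit and Banana classes defined above; ?. is the safe-call operator and ?: is the elvis operator:

    // Sketch only: combines nullable subtyping with the safe-call and elvis operators.
    fun describe(fruit: Fruit?): String {
        // fruit?.ripeness evaluates to null when fruit is null;
        // the elvis operator ?: then substitutes a default value.
        val ripeness: Double = fruit?.ripeness ?: 0.0
        return "ripeness: $ripeness"
    }

    fun main() {
        val maybeBanana: Banana? = Banana(ripeness = 0.9, bendiness = 0.5)
        println(describe(maybeBanana)) // Banana? is a subtype of Fruit?, so this call compiles
        println(describe(null))        // the safe-call short-circuits: prints "ripeness: 0.0"
    }

Note that describe never has to test for null explicitly; the operators fold the null case into a single expression.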
Just as Any is the root of the non-null type hierarchy, Any? is the root of the nullable type hierarchy. Because Any? is the supertype of Any, Any? is the very top of Kotlin’s type hierarchy.

A non-null type is a subtype of its nullable equivalent. For example, String, as well as being a subtype of Any, is also a subtype of String?. This is why you can store a non-null String value into a nullable String? variable, but you cannot store a nullable String? value into a non-null String variable. Kotlin’s null safety is not enforced by special rules, but is an outcome of the same subtype/supertype rules that apply between non-null types. This applies to user-defined type hierarchies as well.

Unit

Kotlin is an expression-oriented language. All control flow statements (apart from variable assignment, unusually) are expressions. Kotlin does not have void functions, like Java and C. Functions always return a value. Functions that don’t actually calculate anything – being called for their side effect, for example – return Unit, a type that has a single value, also called Unit.

Most of the time you don’t need to explicitly specify Unit as a return type or return Unit from functions. If you write a function with a block body and do not specify the result type, the compiler will treat it as a Unit function. Otherwise the compiler will infer it.

    fun example() {
        println("block body and no explicit return type, so returns Unit")
    }

    val u: Unit = example()

There’s nothing special about Unit. Like any other type, it’s a subtype of Any. It can be made nullable, so it is a subtype of Unit?, which is a subtype of Any?.

The type Unit? is a strange little edge case, a result of the consistency of Kotlin’s type system. It has only two members: the Unit value and null. I’ve never found a need to use it explicitly, but the fact that there is no special case for “void” in the type system makes it much easier to treat all kinds of functions generically.

Nothing

At the very bottom of the Kotlin type hierarchy is the type Nothing. As its name suggests, Nothing is a type that has no instances. An expression of type Nothing does not result in a value.

Note the distinction between Unit and Nothing. Evaluation of an expression of type Unit results in the singleton value Unit. Evaluation of an expression of type Nothing never returns at all. This means that any code following an expression of type Nothing is unreachable, and the compiler and IDE will warn you about such unreachable code.

What kinds of expression evaluate to Nothing? Those that perform control flow. For example, the throw keyword interrupts the calculation of an expression and throws an exception out of the enclosing function. A throw is therefore an expression of type Nothing. By having Nothing as a subtype of every other type, the type system allows any expression in the program to actually fail to calculate a value. This models real-world eventualities, such as the JVM running out of memory while calculating an expression, or someone pulling out the computer’s power plug. It also means that we can throw exceptions from within any expression.

    fun formatCell(value: Double): String =
        if (value.isNaN())
            throw IllegalArgumentException("$value is not a number")
        else
            value.toString()

It may come as a surprise to learn that the return statement has the type Nothing. Return is a control flow statement that immediately returns a value from the enclosing function, interrupting the evaluation of any expression of which it is a part.
    fun formatCellRounded(value: Double): String {
        val rounded: Long = if (value.isNaN()) return "#ERROR" else Math.round(value)
        return rounded.toString()
    }

A function that enters an infinite loop or kills the current process has a result type of Nothing. For example, the Kotlin standard library declares the exitProcess function as:

    fun exitProcess(status: Int): Nothing

If you write your own function that returns Nothing, the compiler will check for unreachable code after a call to your function just as it does with built-in control flow statements.

    inline fun forever(action: () -> Unit): Nothing {
        while (true) action()
    }

    fun example() {
        forever { println("doing...") }
        println("done") // Warning: Unreachable code
    }

Like null safety, unreachable code analysis is not implemented by ad-hoc, special-case checks in the IDE and compiler, as it has to be in Java. It’s a function of the type system.

Nullable Nothing?

Nothing, like any other type, can be made nullable, giving the type Nothing?. Nothing? can only contain one value: null. In fact, Nothing? is the type of null. Nothing? is the ultimate subtype of all nullable types, which lets the value null be used as a value of any nullable type.

Conclusion

When you consider it all at once, Kotlin’s entire type hierarchy can feel quite complicated. But never fear! I hope this article has demonstrated that Kotlin has a simple and consistent type system. There are few rules to learn: a hierarchy of supertype/subtype relationships with Any? at the top and Nothing at the bottom, and subtype relationships between non-null and nullable types. That’s it. There are no special cases. Useful language features like null safety, object-oriented polymorphism, and unreachable code analysis all result from these simple, predictable rules. Thanks to this consistency, Kotlin’s type checker is a powerful tool that helps you write concise, correct programs.

[1] “Neat” meaning “done with or demonstrating skill or efficiency”, rather than the Kevin Costner backstage at a Madonna show sense of the word.
Categories: Programming, Testing & QA

Systems Thinking: The Pros to Offset the Cons

The big (panoramic) picture.


In Systems Thinking: Difficulties we focused on the dark side of systems thinking. But systems thinking is also a powerful framework for change agents. There are two primary reasons systems thinking has a tremendous impact:

  • Understanding Context
  • Value Focus

Making changes to processes and activities that you are directly involved with, and then demonstrating the impact of that change, is (relatively) easy. Making a change that has a demonstrable impact on the products and services delivered by the larger organization, and then proving it, is hard; it would be impossible without an understanding of the big picture. We’ve defined a system as a group of interacting, interrelated, and interdependent components that form a complex and unified whole. Missing from the definition is an outcome; for most organizations that output is a product or service (or several).  Taking a systems thinking perspective allows change agents to generate an understanding of how raw materials are taken into the system, transformed, and delivered, which provides the context for any potential change.  Silos and departments can be overlaid on the big picture to provide a map of the boundaries involved.  Mapping boundaries helps to develop an understanding of complexity and, more importantly, helps when planning any change: boundaries identify who needs to be involved and who needs to buy into a change.  An understanding of the flow that work or product must take to reach the customer also provides information for identifying improvement opportunities. Changes to subsystems, steps, or activities that are not on the path to delivery can’t impact products or services.  Value chain mapping would call the processes on the path to delivery “core” processes and those not on the path “support” processes. Changes to support processes are important, but generally for cost containment rather than for affecting the product.  Having a systems thinking perspective ensures we know whether any opportunity has a chance of reaching our customers.

The second reason systems thinking is important is its focus on delivering value; said differently, there is a focus on making a difference to customers.  One of the more serious criticisms of many process improvement programs is that individual steps get optimized in ways that cause more problems (such as additional work-in-process waiting for the next step).  The relentless focus of systems thinking on what is delivered by the system helps to ensure that change opportunities are not just optimizations of individual steps that have no impact on the output of the system. A second benefit of the focus on value is that the discussion of value and impact cuts through much of the resistance that boundaries can cause in organizations. Would you like to be the person who gets in the way of a change that delivers customer value? I think not.  More important than beating down resistance, organizational value provides a goal that helps build bridges and that everyone in an organization can believe in.

Having a goal that everyone can rally around is critical when making any process improvement. Systems thinking provides a structure to see and consider the big picture while focusing on the one goal that most everyone in an organization can agree upon: value. I often use the metaphor of planting a flag on the top of a hill in a game of capture the flag to drive home the point of needing a goal. Systems thinking ensures that we plant the flag on the right hill.


Categories: Process Management