
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Process Management

Next Book For Re-read Saturday

Peabody Library – So many books, so little time.

We live and work in a dynamic era. In the software development field we are experiencing changes across the board in computing power, management styles, frameworks and techniques. Movements such as Agile and lean are just the tip of the iceberg. In order to build a base of knowledge and grow, IT practitioners need to read, listen, collaborate and experiment. While blogs, podcasts and conferences are great tools to explore the cutting edge, books are an important tool for building or expanding a base of personal knowledge.

I introduced the Re-Read Saturday feature on the Software Process and Measurement blog to help expose both my readers and myself to at least a few of the most important books. We have now re-read Covey’s Seven Habits of Highly Effective People, and we finished Kotter’s Leading Change last week. I chose the first two books; now it is your turn to choose the next one. Over the last twelve weeks I asked you to send me the two books that you felt were most influential to your career. A few observations:

  1. The list has 30 entries.
  2. There was NO runaway leader on the list.
  3. The first five on the list each got two mentions.

Since there was no clear winner, I have created a poll. The poll will allow each person to vote for three books. Pick the top three books that have had major impact on your career, OR perhaps the books you always wanted to read. The book that is on the top of the list on February 14 will be the next to be featured on Re-Read Saturday.

Take Our Poll
Categories: Process Management

Run your iOS app without overwriting the App Store version

Xebia Blog - Fri, 01/30/2015 - 13:59

Sometimes when you're developing a new version of your iOS app, you'd like to run it on your iPhone or iPad and still be able to run the current version that is released on the App Store. Normally when you run your app from Xcode on your device, it will overwrite any existing version. If you then want to switch back to the version from the App Store, you'll have to delete the development version and download it again from the App Store.

In this blog post I'll describe how you can run a test version of your app next to your production version. This method also works when you have embedded extensions (like the Today Widget or WatchKit app) in your app or when you want to beta test your app with Apple's TestFlight.

There are two different ways to accomplish this. The first method is to create a new target within your project that runs the same app but with a different name and identifier. With iOS 8 it is now possible to embed extensions in your app. Since these extensions are embedded in the app target, this approach doesn't work when you have extensions and therefore I'll only describe the second method which is based on User-Defined Settings in the build configuration.

Before going into detail, here is a quick explanation of how this works. Xcode already creates two build configurations for us: Release and Debug. By adding some User-Defined Settings to the build configurations we can run the Debug configuration with a different Bundle identifier than the Release configuration. This essentially creates a separate app on your device, keeping the one from the App Store (which used the Release configuration) intact.

To make beta distribution of the app built with the Debug configuration easier, we'll create multiple schemes.

The basics

Follow these steps exactly to run a test version on your device next to your App Store version. These steps are based on Xcode 6.1.1 with an Apple Developer Account with admin privileges.

Click on your project in the Project Navigator and make sure the main target is selected. Under both the General and Info tabs you will find the Bundle identifier. If you change this name and run your app, it will create a new app next to the old one. Add -test to your current Bundle identifier so you get something like com.example.myapp-test. Xcode will probably tell you that you don't have a provisioning profile. Let Xcode fix this issue for you and it will create a new Development profile for you named something like iOSTeam Provisioning Profile: com.example.myapp-test.

So we're already able to run a test version of our app next to the App Store version. However, manually changing the Bundle identifier all the time isn't very practical. The Bundle identifier is part of the Info.plist and it's not derived from a Build configuration property like for example the Bundle name (which uses ${PRODUCT_NAME} from the build configuration by default). Therefore we can't simply specify a different Bundle identifier for a different Build configuration using an existing property. But, we can use a User-Defined Setting for this.

Go to the Build Settings tab of your project and click the + button to add a User-Defined Setting. Name the new setting BUNDLE_ID and give it the value of the test Bundle identifier you created earlier (com.example.myapp-test). Then click the small triangle in front of the setting to expand the setting. Remove the -test part from the Release configuration. You should end up with something like this:

[Screenshot: the BUNDLE_ID User-Defined Setting expanded, with the -test suffix removed from the Release value]

Now go to the Info tab and change the value of the Bundle identifier to ${BUNDLE_ID}. In the General tab you will still see the correct Bundle Identifier, but if you click on it you see the text is slightly grayed out, which means it's taken from a Build configuration setting.

[Screenshot: the Bundle Identifier in the General tab, slightly grayed out because it now comes from the BUNDLE_ID build setting]
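
For reference, a minimal sketch of the relevant Info.plist entry after this change (assuming the standard CFBundleIdentifier key; the ${BUNDLE_ID} value resolves differently per build configuration):

<key>CFBundleIdentifier</key>
<string>${BUNDLE_ID}</string>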

To test if this works, you can edit the current Scheme and change the Build Configuration of the Run action from Debug to Release. Close the Scheme window and go to the General tab again to see that the Bundle Identifier has changed to the Release BUNDLE_ID. (If you still had the General tab open and don't see the change, switch to another tab and back; the panel will reload the identifier.) Make sure to change the Build configuration back to Debug in your scheme afterwards.

When you now Archive an app before you release it to the App Store, it will use the correct identifier from the Release configuration and when you run the app from Xcode on your device, it will use the identifier for testing. That way it no longer overwrites your App Store version on your device.

App name

Both our App Store app and test app still have the same name. This makes it hard to know which one is which. To solve this, find the Product Name in the Build Settings and change the name for the Debug configuration to something else, like MyApp Test. You can even use another app icon for your test build. Just change the Asset Catalog App Icon Set Name for the Debug configuration.
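
If you prefer configuration files over clicking through the Build Settings editor, the same values could also be expressed in per-configuration xcconfig files. This is only an illustrative sketch of the settings described above (the file names, and MyApp as the original product name, are assumptions), not something these steps require:

// Debug.xcconfig (hypothetical)
BUNDLE_ID = com.example.myapp-test
PRODUCT_NAME = MyApp Test

// Release.xcconfig (hypothetical)
BUNDLE_ID = com.example.myapp
PRODUCT_NAME = MyApp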

Beta distribution

What if you want to distribute a version of the app for Beta testing (through TestFlight or something similar) to other people, one that also shouldn't overwrite their App Store version? Our Archive action is using the Release build configuration. We could change that manually to Debug to get the test Bundle identifier, but then we would be getting all of the Debug settings in our app, which is not something we want. We need to create another Build configuration.

Go to the project settings of your project (so not the target settings). Click the + button under Configurations and duplicate the Release configuration. Call the new configuration AdHoc. (You might already have such a Build configuration for other reasons, in that case you can skip this step and use that one for the next steps.)

Now go to the Build Settings of your main target and change the AdHoc value of the User-Defined Setting BUNDLE_ID to the same as the Debug value. Do the same for the Product Name if you changed that in the previous step.

We could already make a Beta test Archive now by manually changing the configuration of the Archive action to Debug. But it's easier if we create a new scheme for this. Go to Manage Schemes and click the + button at the bottom left to create a new scheme. Make sure that your main target is selected as Target and add " Test" to the name so you end up with something like "MyApp Test". Also check the Shared checkbox if you are sharing your schemes on your version control system.

Double click the new scheme to edit it and change the build configuration of the Archive action to AdHoc. Now you can Archive with the MyApp Test scheme selected to create a test version of your app that you can distribute to your testers without it overwriting their App Store version.

To avoid confusion about which build configuration is used by which scheme action, you should also change the configuration of the Profile action to AdHoc. And in your normal non-test scheme, you can change the build configuration of all actions to Release. This allows you to run the app in both modes on your device, which sometimes might be necessary, for example when you need to test push notifications that only work for the normal Bundle identifier.

Extensions

As mentioned in the intro, the main reason to use multiple schemes with different build configurations and User-Defined settings as opposed to creating multiple targets with different Bundle identifiers is because of Extensions, like the Today extension or a WatchKit extension. An extension can only be part of a single target.

Extensions make things even more complex since their Bundle identifier needs to be prefixed with the parent app's bundle identifier. And since we just made that one dynamic, we need to make the Bundle identifier of our extensions dynamic as well.

If you don't already have an existing extension, create a new one now. I've tested the approach described below with Today extensions and WatchKit extensions but it should work with any other extension as well.

The steps for getting a dynamic Bundle identifier for the extensions are very similar to the ones for the main target, so I won't go into too much detail here.

First open the Info tab of the new target that was created for the extension, e.g. MyAppToday. You'll see here that the Bundle display name is not derived from the PRODUCT_NAME. This is probably because the product name (which is derived from the target name) is something not very user friendly like MyAppToday and it is assumed that you will change it. In case of the Today extension, running a test version of the app next to the App Store version will also create 2 Today extensions in the Today view of the phone. To be able to differentiate between the two we'll also make the Bundle display name dynamic.

Change the value of it to ${BUNDLE_DISPLAY_NAME} and then add a User-Defined Setting for BUNDLE_DISPLAY_NAME with different names for Debug/AdHoc and Release.

You might have noticed that the Bundle identifier of the extension is already partly dynamic, something like com.example.myapp.$(PRODUCT_NAME:rfc1034identifier). Change this to ${PARENT_BUNDLE_ID}.$(PRODUCT_NAME:rfc1034identifier) and add a User-Defined Setting for PARENT_BUNDLE_ID to your Build Settings. The values of PARENT_BUNDLE_ID should be exactly the same as the ones you used in your main target, e.g. com.example.myapp for Release and com.example.myapp-test for Debug and AdHoc.
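
As a rough sketch, the extension's Info.plist entries then look something like this (CFBundleIdentifier and CFBundleDisplayName are the plist keys behind the Bundle identifier and Bundle display name fields mentioned above):

<key>CFBundleIdentifier</key>
<string>${PARENT_BUNDLE_ID}.$(PRODUCT_NAME:rfc1034identifier)</string>
<key>CFBundleDisplayName</key>
<string>${BUNDLE_DISPLAY_NAME}</string>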

That's it: you can now Run and Archive your app with extensions whose Bundle identifiers are prefixed with the parent's Bundle identifier.

App Groups entitlements

You might have an extension that shares UserDefaults data or Core Data stores with the main app. In that case you need to have matching App Groups entitlements in both your main app and extensions. Since we have dynamic Bundle identifiers that use different provisioning profiles, we also have to make our App Groups dynamic.

If you don't have App Groups entitlements (or other entitlements) yet, go to the Capabilities tab of your main target and switch on App Groups. Add an app group in the form off group.[bundle identifier], e.g. group.com.example.myapp. This will generate an entitlements file for your project (MyApp.entitlements) and set the Code Signing Entitlements of your Build Settings to something like MyApp/MyApp.entitlements. Locate the entitlements file in Finder and duplicate it. Change the name of the copy by replacing " Copy" with "Test" (MyAppTest.entitlements). Drag the copy into your project. You should now have two entitlement files in your project. Open the Test entitlements file in Xcode's Property List editor and add "-test" to the value of Item 0 under com.apple.security.application-groups to match it with the Bundle identifier we used for testing, e.g. com.example.myapp-test. Now go back to the Build Settings and change the Debug and AdHoc values of Code Signing Entitlements to match with the file name of the Test entitlements.

Repeat all these steps for the Extension target. Xcode will also generate the entitlements file in the extension folder. You should end up with two entitlements files for your main target and two entitlements files for every extension.

The Application Groups for testing need to be added to the provisioning profiles which Xcode will handle automatically for you. It might warn/ask you for this while building.

Conclusion

It might be quite some work to follow all these steps and to get everything to work, but if you use your normal iPhone for development and still want to use or show a stable version of your app at the same time, it's definitely worth doing. And your Beta testers will thank you for it as well.

3 P’s of Agile Centers of Excellence


An Agile center of excellence (ACoE) provides support and energy to an Agile transformation within an organization. It provides leadership, evangelization, best practices, research, support and/or training for agile and lean ideas. The ACoE’s support can be categorized in three inter-related areas. These areas, the three “P’s,” are people, process and project.

People are the heart and soul of any development process. As we have noted, Agile has an enormous focus on people (remember the Agile value of valuing people over process). The ACoE provides support to people by bringing new ideas into the organization, providing coaching, developing coaches and acting as change agents.

Agile is a set of processes, or sets of steps taken to achieve a specific end. A recipe is a process, as is a daily stand-up meeting or checking code in and out of a configuration management tool. The ACoE supports Agile processes by capturing processes, identifying and fostering the use of relevant metrics (collection and reporting are typically PMO functions – to be discussed in the near future), facilitating communities of practice and providing tools.

Projects are the currency of most IT organizations. At its simplest a project is an enterprise with a start and end that is organized to deliver a result. ACoEs support the performance of Agile teams at a project-level as coaches. Coaches are folks who deliver help to teams, stakeholders and other leaders within an organization so they learn how to be Agile. At the project-level, coaches help teams use and tweak processes to meet the team’s needs, provide training and support for tools and processes and help the team learn how to ask the hard questions about how the team is using Agile.

The primary goal of the ACoE is to provide practitioners with the tools, techniques and capabilities they need to be Agile. By helping teams perform, the ACoE also helps sell and maintain the Agile transformation. Both of these goals begin as an organization starts a transformation to Agile and continue to be important as teams evolve and continuously improve. The ACoE delivers value by addressing the three Ps. For example, through the role of coaching and by facilitating communities of practice, the ACoE helps to promote an environment where there is consistency of practice and where innovation can happen. While the combination of innovation and consistency might sound contradictory, coaches often act as an Agile Johnny Appleseed. ACoE coaches see how teams work, the changes that have been made to the processes and why those changes were made. The ACoE can then help to spread ideas that prove to be valuable through coaching, referrals or discussion in communities of practice.


Categories: Process Management

Two Factors Make Agile Centers of Excellence Different

Process or People Focused?

 

An Agile center of excellence (ACoE) typically refers to a team that provides thought leadership to support or sustain the transformation to an agile organization. That can include providing leadership, evangelization, best practices, research, support and/or training for a focus area. An ACoE is similar to (but not the same as) engineering process groups (EPGs or SEPGs) that have been used to support and sustain organizational transformations such as the CMMI. The two most significant differences between SEPGs/EPGs and ACoEs are the concept of controlling process and the ACoE’s focus on people. Groups like SEPGs and EPGs are primarily focused on implementing and controlling the process, even though most process improvement models understand the relationship between people, process and tools. Many SEPGs and EPGs view process as the most significant short-term variable. Processes could be changed, people trained to support the process and perhaps even new tools purchased to support the process, but the process was the driver.

The four values and twelve principles of the Agile Manifesto provide teams with a basis for self-organization and self-management. Agile techniques, such as retrospectives, provide a feedback loop that helps teams regulate their performance by changing how they work. Both the manifesto and the techniques create an expectation that teams will have some degree of control over how they work. This type of process self-determination is at odds with a group that defines, manages and controls a standard process, even if that group listens to its customers, which is exactly what most SEPGs and EPGs do. This type of behavior tends to depress innovation while fostering command and control management styles that are at odds with agile. An ACoE supports process innovation through coaching, collection and communication of best practices, and facilitating communities of practice.

ACoEs typically have a people-first approach to fostering an agile transformation and then sustaining that transformation. As with process control, the Agile Manifesto and Agile techniques (including coaching) generate a natural focus on people. The general thought process is that if you influence people, behavior will follow. The alternative, process-focused perspective is that influencing process will influence behavior. One of the four values in the Agile Manifesto states “we have come to value individuals and interactions over processes and tools.” While that value does not say that we do not value processes, it does mean that to be truly agile we need to put people first.

All organizational transformation models recognize that people are an important component when generating change. Agile centers of excellence take a people-first approach that eschews the rigid process control of other transformation frameworks. ACoEs provide thought leadership and coaching to support teams. Those teams take the knowledge from the ACoE and use techniques like retrospectives to tune how they work. Teams drive the improvements in order to improve their performance. Earlier in my career I fell prey to the conceit that a methodologist could tell people how to work (too many Industrial Engineering classes), but I learned later that a methodologist/coach needs to work with teams to unlock their potential by giving them the tools to decide how to work.


Categories: Process Management

Don’t Blindly Follow

Mike Cohn's Blog - Tue, 01/27/2015 - 15:00

Don't blindly adopt anything.

Scrum consists of a self-organizing team that is given a challenge and, to meet that challenge, works in short, time-boxed iterations during which the team meets daily to quickly synchronize its efforts.

At the start of each iteration, they meet to plan what they will accomplish. At the end, they demonstrate what has been accomplished and reflect on how well they worked together to achieve it.

That's it. Anything else (release planning, burndowns, and so on) is optional. Stick to the above and find the local optimizations that fit your environment. No expert knows more about your company than you do.

Software Development Conferences Forecast January 2015

From the Editor of Methods & Tools - Tue, 01/27/2015 - 14:37
Here is a list of software development related conferences and events on Agile ( Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & Tools software development magazine. SPTechCon, February 8-11 2015, Austin, USA Use code SHAREPOINT for a $200 conference discount off the 3 and 4-day pass NorDevCon, February 27 2015, Norwich, UK Early birds tickets until February 13. QCon London, March 2-6 2015, London, ...

Quote of the Month January 2015

From the Editor of Methods & Tools - Mon, 01/26/2015 - 14:50
Principles Trump Diagrams. Most of the problems in using the 1988 spiral model stemmed from users looking at the diagram and constructing processes that had nothing to do with the underlying concepts. This is true of many process models: people just adopt the diagram and neglect the principles that need to be respected. Source: The Incremental Commitment Spiral Model, Barry Boehm, Jo Ann Lane, Supannika Koolmanojwong & Richard Turner, Addison-Wesley

SPaMCAST 326 – Steve Tendon, Tame The Flow

http://www.spamcast.net

 

Listen to the Software Process and Measurement Cast

Subscribe to the Software Process and Measurement Cast on iTunes

The Software Process and Measurement Cast features our Interview with Steve Tendon. We discussed his new book Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban, published by J. Ross Publishing. Steve discussed how to lead knowledge workers and build a hyper-performing knowledge work organization. We talked about the four flows (psychology, information, work and finance) that affect performance. Steve’s ideas can be used to help teams raise their game to deliver results that not only raise the bar but jump over it.

Steve has a great offer for SPaMCAST listeners. Check out https://tameflow.com/spamcast for a way to get Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban at 40% off the list price.

Steve’s Bio

Steve Tendon, creator of the TameFlow management approach, is a senior, multilingual, executive management consultant, experienced at leading and directing multinational and distributed knowledge-work organizations. He is an expert in organizational performance transformation programs. Mr. Tendon is a sought-after adviser, coach, mentor and consultant, as well as author and speaker, specializing in organizational productivity, organizational design, process excellence and process innovation. Steve helps businesses create high-performance organizations and teams and holds an MSc in Software Project Management from the University of Aberdeen.

Mr. Tendon has published numerous articles and is a contributing author to Agility Across Time and Space: Implementing Agile Methods in Global Software Projects. Steve is currently a Director at TameFlow Consulting Ltd, where he helps clients achieve outstanding organizational performance by applying the theories and practices described in this book. Mr. Tendon has held senior Software Engineering Management roles at various firms over the course of his career, including the role of Technical Director for the Italian branch of Borland International, the birthplace of hyper-productivity in software development. Borland’s development of Quattro Pro for Windows remains the most productive software project ever documented. This case was Mr. Tendon’s source of inspiration that led to his development of the TameFlow perspective and management approach.

Contact Information:

Web: https://tameflow.com/

Web: http://tendon.net/

Twitter: @tendon

 

Next

The next Software Process and Measurement Cast will feature our essay on the ubiquitous stand-up meeting. The stand-up meeting has become a feature of agile and non-agile projects alike. The technique can be a powerful force to improve team effectiveness and cohesion, or it can really make a mess of things! We explore how to get more of the former and less of the latter!

 

Call to action!

We just completed a re-read of John Kotter’s classic Leading Change on the Software Process and Measurement Blog (www.tcagley.wordpress.com). Please feel free to jump in and add your thoughts and comments!

Next week we will start the process to choose the next book based on the list you have suggested.  You can still influence the possible choices for the next re-read by answering the following question:

What are the two books that have most influenced your career (business, technical or philosophical)? Send the titles to spamcastinfo@gmail.com.

We will publish the list next week on the blog and ask you to vote on the next book for “Re-read” Saturday. Feel free to choose your platform: send an email, leave a message on the blog or Facebook, or just tweet the list (use hashtag #SPaMCAST)!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management

SPaMCAST 326 - Steve Tendon, Tame The Flow

Software Process and Measurement Cast - Sun, 01/25/2015 - 23:00

The Software Process and Measurement Cast features our Interview with Steve Tendon. We discussed his new book Hyper-Productive Knowledge Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban, published by J. Ross Publishing. Steve discussed how to lead knowledge workers and build a hyper-performing knowledge work organization. We talked about the four flows (psychology, information, work and finance) that affect performance. Steve’s ideas can be used to help teams raise their game to deliver results that not only raise the bar but jump over it.

Steve has a great offer for SPaMCAST listeners. Check out https://tameflow.com/spamcast for a way to get Hyper-Productive Knowledge Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban at 40% off the list price.

Steve’s Bio

Steve Tendon, creator of the TameFlow management approach, is a senior, multilingual, executive management consultant, experienced at leading and directing multinational and distributed knowledge-work organizations. He is an expert in organizational performance transformation programs. Mr. Tendon is a sought-after adviser, coach, mentor and consultant, as well as author and speaker, specializing in organizational productivity, organizational design, process excellence and process innovation. Steve helps businesses create high-performance organizations and teams and holds an MSc in Software Project Management from the University of Aberdeen.

 

Mr. Tendon has published numerous articles and is a contributing author to Agility Across Time and Space: Implementing Agile Methods in Global Software Projects. Steve is currently a Director at TameFlow Consulting Ltd, where he helps clients achieve outstanding organizational performance by applying the theories and practices described in this book. Mr. Tendon has held senior Software Engineering Management roles at various firms over the course of his career, including the role of Technical Director for the Italian branch of Borland International, the birthplace of hyper-productivity in software development. Borland's development of Quattro Pro for Windows remains the most productive software project ever documented. This case was Mr. Tendon’s source of inspiration that led to his development of the TameFlow perspective and management approach.

Contact Information:

Web: https://tameflow.com/
Web: http://tendon.net/
Twitter: @tendon

Next

The next Software Process and Measurement Cast will feature our essay on the ubiquitous stand-up meeting. The stand-up meeting has become a feature of agile and non-agile projects alike. The technique can be a powerful force to improve team effectiveness and cohesion, or it can really make a mess of things! We explore how to get more of the former and less of the latter!

Call to action!

We just completed a re-read of John Kotter’s classic Leading Change on the Software Process and Measurement Blog (www.tcagley.wordpress.com). Please feel free to jump in and add your thoughts and comments!

Next week we will start the process to choose the next book based on the list you have suggested.  You can still influence the possible choices for the next re-read by answering the following question:

What are the two books that have most influenced your career (business, technical or philosophical)? Send the titles to spamcastinfo@gmail.com.

We will publish the list next week on the blog and ask you to vote on the next book for “Re-read” Saturday. Feel free to choose your platform: send an email, leave a message on the blog or Facebook, or just tweet the list (use hashtag #SPaMCAST)!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Categories: Process Management

Re-read Saturday: Part Three: Implications for the Twenty-First Century, John P. Kotter Chapters 11 and 12


We complete the re-read of John P. Kotter’s book Leading Change by reviewing the implications from the last two chapters of the book. Part Three paints the picture of a world in which the urgency for change will not abate and perhaps might even increase. In Chapter 11, titled The Organization of the Future, Kotter suggests that while in the past a single key leader could drive change, collaboration at the top of organizations is now required due to both the rate and complexity of change. He argues that one person simply can’t have the time and expertise to manage, lead, communicate, provide vision . . . you get the point. The message of the chapter is that for organizations of any type to prosper in the 21st century, the ability to create and communicate vision is critical. That skill needs to be fostered and developed over the long term just as any other significant organizational asset. Long-term and continuous development of leadership is not accomplished simply by providing a two-week course in leadership. While leadership is critical, it only goes so far in creating and fostering change and must be supplemented by a culture of empowerment. Broad-based empowerment allows organizations to tap a wide range of knowledge and energy at all levels of the organization.

Boiling the message of Chapter 11 down, Kotter suggests that an organization that will be at home with the dynamic nature of the 21st century will require a lean, non-bureaucratic structure that leverages a wide range of performance data. For example, in an empowered organization performance data must be gathered and analyzed from many sources. Performance data (e.g. customer satisfaction, productivity, returns, quality and others) gains maximum power when everyone has access to the data in order to drive continuous improvements. The culture of the new organization needs to shift from internally focused, command-and-control to externally focused and non-bureaucratic. While Kotter does not use the terms lean and Agile, the organization he describes as tuned to the 21st century reflects the tenets of lean and agile.

Chapter 12, titled Leadership and Lifelong Learning, circles back to the concept of leadership. It is a constant thread across all facets of the eight-stage model of change detailed in Leading Change. Kotter describes the need for leaders to continually develop competitive capacity (the capability to deal with an increasingly competitive and dynamic environment). The model Kotter uses to describe the development of competitive capacity begins with personal history and flows through competitive drive, lifelong learning, and skills and abilities to competitive capacity. Lifelong learning is an input and a tool for developing and honing skills and abilities. Skills and abilities feed competitive capacity. In our re-read of The Seven Habits of Highly Effective People, Stephen Covey culminated the Seven Habits with the habit called Sharpening the Saw. Sharpening the Saw is a prescription for balanced self-renewal. Lifelong learning is an important component of balanced self-renewal. Whether you read Kotter or Covey, the need to continuously learn is an inescapable necessity for any leader.

As a rule, I am never overwhelmed by the chapters that follow the meat of most self-help books (I consider Leading Change a management self-help book, part of a continuum that Covey’s Seven Habits of Highly Effective People would also be found on). Part Three of Leading Change ties the book together by reinforcing the need for the eight-stage model for change and the need to continuously sharpen the saw. Kotter’s model is a tool that leaders must apply; therefore organizations and leaders must foster the capacity to address needed changes.

Re-read Summary

Change is a fact of life. John P. Kotter’s book, Leading Change, defines his famous eight-stage model for change. The first stage of the model is establishing a sense of urgency. A sense of urgency provides the energy and rationale for any large, long-term change program. Once a sense of urgency has been established, the second stage in the eight-stage model for change is the establishment of a guiding coalition. If a sense of urgency provides energy to drive change, a guiding coalition provides the power for making change happen. A vision, built on the foundation of urgency and a guiding coalition, represents a picture of a state of being at some point in the future. Developing a vision and strategy is only a start; the vision and strategy must be clearly and consistently communicated to build the critical mass needed to make change actually happen. Once an organization is wound up and primed, the people within the organization must be empowered and let loose to create change. Short-term wins provide the feedback and credibility needed to deliver on the change vision. The benefits and feedback from the short-term wins and other environmental feedback are critical for consolidating gains and producing more change. Once a change has been made it needs to be anchored so that the organization does not revert to older, comfortable behaviors, throwing away the gains they have invested blood, sweat and tears to create.

The need for change is not abating. The eight-stage model for change requires leadership and vision.  Organizations need to foster leadership while both organizations and the people in those organizations must continually learn and hone their skills.

Next week we will review the list of books that readers of the blog and listeners to the podcast have identified as having a major impact on their career to vote on the next book we will tackle on Re-read Saturday.  Right now The Mythical Man Month by Fred Brooks is at the top of the list.  Care to influence the list?  Let me know the two books that most influenced your career.


Categories: Process Management

Agile Roles: What do product owners do other than make decisions?

 

The product owner role is anything but boring.

The role of the product owner is incredibly important. The decision-making role of a product owner helps grease the skids for the team so that they deliver value efficiently and effectively. That said, there is more to the role than making decisions. In the survey of practitioners (Agile Roles: What does a product owner do?) the next four items were:

      1. Attends Scrum meetings
      2. Prioritizes the user stories (and backlog)
      3. Grooms backlog
      4. Defines product vision and features

The product owner is a core member of the team. Participating in the Scrum meetings ensures that the voice of the customer is woven into all levels of planning and is not just a hurdle to be surmounted in a demo. When I was taught Scrum, the participation of the product owner was optional at the daily stand-up, in the retrospective and in more technical parts of sprint planning. Experience has taught me that optional typically translates to not present, and not present translates into defects and rework. Note, on the original list #15 was buy the pizza. I think the Scrum meetings are a good place to occasionally spring for pizza or DONUTS.

The backlog is “owned” by the product owner. The product owner prioritizes the backlog based on interaction with the whole team and other stakeholders. There are many techniques for prioritizing the backlog, ranging from business value and technical complexity to the squeaky wheel (usually not a good method). Regardless of the method, the final prioritization is delivered by the product owner.

As projects progress the backlog evolves. That evolution reflects new stories, new knowledge about the business problem, changes in the implementation approach and the need to break stories into smaller components. The process for making sure stories are well-formed, granular enough to complete and have acceptance criteria is story grooming. Grooming is often a small team affair; however, the product owner is typically part of the grooming team. Techniques like the Three Amigos are useful for structuring the grooming approach.

The product owner interprets the sponsor’s (the person with the checkbook and political capital to authorize the project) vision by providing the team with the product vision. The product vision represents the purpose or motivation for the project. Until the project is delivered, the vision is the picture that anyone involved with the project should be able to describe. Delivering the vision for the product and its features is a leadership role that helps teams decide how to deliver a function. Knowing where the project needs to end up provides the team with knowledge that supports making technical decisions.

The product owner is a leader, a doer, a visionary and a team member. As the voice of the customer, the product owner describes the value proposition for the project from the business’ point of view. As part of the team, the product owner interprets and synthesizes information from other team members and outside stakeholders. This is reflected in the decisions and priorities that shape the project and the value it delivers.

 


Categories: Process Management

Continuous Delivery across multiple providers

Xebia Blog - Wed, 01/21/2015 - 13:04

Over the last year three of the four customers I worked with had a similar challenge with their environments. In different variations they all had their environments set up across separate domains, ranging from physically separated on-premise networks to environments running across different hosting providers managed by different parties.

Regardless of the reasoning behind having these kinds of setups, it’s a situation where the continuous delivery concepts really add value. The stereotypical problems that exist with manual deployment and testing practices tend to get amplified when they occur in separated domains. Things get even worse when you add more parties to the mix (like external application developers). Sticking to doing things manually is a recipe for disaster unless you enjoy going through expansive procedures every time you want to do anything in any of ‘your’ environments. And if you’ve outsourced your environments to an external party you probably don’t want to have to (re)hire a lot of people just so you can communicate with your supplier.

So how can continuous delivery help in this situation? By automating your provisioning and deployments you make deploying your applications, if nothing else, repeatable and predictable. Regardless of where they need to run.

Just automating your deployments isn’t enough however; a big question that remains is who does what. A question that is most likely backed by a lengthy contract. Agreements between all the parties are meant to provide an answer to that very question. A development partner develops, an outsourcing partner handles the hardware, etc. But nobody handles bringing everything together...

The process of automating your steps already provides some help with this problem. In order to automate you need some form of agreement on how to provide input for the tooling. This at least clarifies what the various parties need to produce. It also clarifies what the result of a step will be. This removes some of the fuzziness from the process. Questions like whether the JVM is part of the OS or part of the middleware should become clear. But not everything is that clearcut. It’s in the parts of the puzzle where pieces actually come together that things turn gray. A single tool may need input from various parties. Here you need to resist the common knee-jerk reaction to shield said tool from other people with procedures and red tape. Instead provide access to those tools to all relevant parties and handle your separation of concerns through a reliable access mechanism. Even then there might be some parts that can’t be used by just a single party and in that case, *gasp*, people will need to work together.

What this results in is an automated pipeline that will keep your environments configured properly and allow applications to be deployed onto them when needed, within minutes, wherever they may run.

[Diagram: MultiProviderCD, the continuous delivery pipeline spanning multiple domains]

The diagram above shows how we set this up for one of our clients. Using XL Deploy, XL Release and Puppet as the automation tooling of choice.

In the first domain we have a git repository to which developers commit their code. A Jenkins build is used to extract this code, build it and package it in such a way that the deployment automation tool (XL Deploy) understands. It’s also kind enough to make that package directly available in XL Deploy. From there, XL Deploy is used to deploy the application not only to the target machines but also to another instance of XL Deploy running in the next domain, thus enabling that same package to be deployed there. This same mechanism can then be applied to the next domain. In this instance we ensure that the machines we are deploying to are consistent by using Puppet to manage them.

To round things off we use a single instance of XL Release to orchestrate the entire pipeline. A single release process is able to trigger the build in Jenkins and then deploy the application to all environments spread across the various domains.

A setup like this lowers deployment errors that come with doing manual deployments and cuts out all the hassle that comes with following the required procedures. As an added bonus your deployment pipeline also speeds up significantly. And we haven’t even talked about adding automated testing to the mix…

Agile Roles: What does a product owner do?

 

One of the product owner’s roles is to buy the pizza (or the sushi!)

The product owner role, one of the three identified in Scrum, is deceptively simple. The product owner is the voice of the customer, a conduit to bring business knowledge into the team. The perceived (the word perceived is important) simplicity of the role leads to a wide range of interpretations. Deconstructing the voice of the customer a bit further yields tasks and activities that include defining what needs to be delivered, dynamically providing answers and feedback to the team and prioritizing the backlog. I recently asked a number of product owners, Scrum masters and process improvement personnel for a list of the four activities the product owner is responsible for. The list (ranked by the number of responses, but without censorship) is shown below:

      1. Makes decisions
      2. Attends Scrum meetings
      3. Prioritizes the user stories (and backlog)
      4. Grooms backlog
      5. Defines product vision and features
      6. Accepts or rejects work
      7. Plans for releases
      8. Involves stakeholders (included customers, users, executives, SMEs)
      9. Sells the project
      10. Trains the business
      11. Buys pizza
      12. Provides the project budget
      13. Tests features
      14. Shares the feature list with business
      15. Generates team consensus

The #1 activity of the product owner is to make decisions. Decisions are a critical input for all project teams. Projects are a reflection of a nearly continuous stream of decisions, decisions that if not made by the right person could take a project off course. While not all decisions made by a team rise to the level of needing input from the product owner, or even more importantly of needing immediate input from the product owner, the product owner needs to be available and ready to make the tough calls when they are needed.

In the first major technology project I was involved with, my company decided to shift from one computer platform to another. It was a big deal. In our first attempt the IT department attempted to manage the process without interacting with the business (I was the business). That first attempt at a conversion was . . . exciting. I learned a number of new poignant phrases in several Eastern European languages. The second time around, a business lead was appointed to act as the voice of the business and to coordinate business involvement. The business lead spent at least half the day with the project team and half in the business. Leads from all departments and project teams involved in the project met daily to review progress and issues (sort of a Scrum of Scrums, back in 1979). The ability to meet, talk and make decisions was critical for delivering the functionality needed by the business.

Making decisions isn’t the only task that product owners are called on to perform, but it is one of a very few that almost everyone can agree upon. Although buying pizza would have been higher up my list!

What would you add to the list? Which do you disagree with?


Categories: Process Management

Focus on Benefits Rather than Features

Mike Cohn's Blog - Tue, 01/20/2015 - 15:00

The following was originally published in Mike Cohn's monthly newsletter. If you like what you're reading, sign up to have this content delivered to your inbox weeks before it's posted on the blog, here.

Suppose your boss has not bought into trying an agile approach in your organization. You schedule a meeting with the boss, and stress how your organization should use Scrum because Scrum:

  • Has short time boxes
  • Relies on self-organization
  • Includes three distinct roles

And based on this discussion, your boss isn’t interested.

Why? Because you focused on the features of Scrum rather than its benefits.

Product owners and Scrum teams make the same mistake when working with the product backlog. A feature is something that is in a product—a spell-checker is in a word processor. A benefit is something a user gets from a product. By using a word processor, the user gets documents free from spelling errors. The spell-checker is the feature; mistake-free documents are the benefit.

It is generally considered a good practice for the items at the top of a product backlog to be small. Each must be small enough to fit into a sprint, and most teams will do at least a handful each sprint.

The risk here is that small items are much more likely to be features than benefits. When a Scrum team (and specifically its product owner) becomes overly focused on features, it is possible to lose sight of the benefits.

Scrum teams commonly mitigate this risk in two ways. First, they leave stories as epics until they move toward the top of the product backlog. Second, they include a so-that clause in their user stories. These help, but do not fully eliminate the risk.
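
For example, sticking with the spell-checker from above, a story with a so-that clause might read: “As a writer, I want misspelled words flagged so that my documents are free of spelling errors.” The so-that clause keeps the benefit in view even though the story itself describes a feature.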

Let’s return to your attempt to convince your boss to let your team use Scrum. Imagine you had focused on the benefits of Scrum rather than its features. You told your boss how using Scrum would lead to more successful products, more productive teams, higher quality software, more satisfied stakeholders, happier teams, and so on.

Can you see how that conversation would have gone differently than one focused on short time boxes, self-organization and roles?

Software Development Linkopedia January 2015

From the Editor of Methods & Tools - Tue, 01/20/2015 - 14:58
Here is our monthly selection of interesting knowledge material on programming, software testing and project management.¬†This month you will find some interesting information and opinions about managing software developers, software architecture, Agile testing, product owner patterns, mobile testing, continuous improvement, project planning and technical debt. Blog: Killing the Crunch Mode Antipattern Blog: Barry’s Rules of Engineering and Architecture Blog: Lessons Learned Moving From Engineering Into Management Blog: A ScrumMaster experience report on using Feature Teams Article: Using Models to Help Plan Tests in Agile Projects Article: A Mitigation Plan for a Product Owner’s Anti-Pattern Article: Guerrilla Project ...

Try is free in the Future

Xebia Blog - Mon, 01/19/2015 - 09:40

Lately I have seen a few developers consistently use a Try inside of a Future in order to make error handling easier. Here I will investigate whether this has any merits or whether a Future on its own offers enough error handling.

If you look at the following code, there is nothing a Try can supply that a Future can't:

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.{Await, Future, Awaitable}
import scala.concurrent.duration._
import scala.util.{Try, Success, Failure}

object Main extends App {

  // Happy Future
  val happyFuture = Future {
    42
  }

  // Bleak future
  val bleakFuture = Future {
    throw new Exception("Mass extinction!")
  }

  // We would want to wrap the result into a hypothetical http response
  case class Response(code: Int, body: String)

  // This is the handler we will use
  def handle[T](future: Future[T]): Future[Response] = {
    future.map {
      case answer: Int => Response(200, answer.toString)
    } recover {
      case t: Throwable => Response(500, "Uh oh!")
    }
  }

  {
    val result = Await.result(handle(happyFuture), 1 second)
    println(result)
  }

  {
    val result = Await.result(handle(bleakFuture), 1 second)
    println(result)
  }
}

After giving it some thought, the only situation where I could imagine Try being useful in conjunction with Future is when awaiting a Future but not wanting to deal with error situations yet. The times I would be awaiting a future are very few in practice though. But when needed, something like this might do:

object TryAwait {
  def result[T](awaitable: Awaitable[T], atMost: Duration): Try[T] = {
    Try {
      Await.result(awaitable, atMost)
    }
  }
}
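
To make the difference concrete, here is a small usage sketch of the helper above (illustrative only; it reuses happyFuture and the imports from the first snippet):

  // Await the future, but defer error handling to whoever inspects the Try
  val wrapped: Try[Int] = TryAwait.result(happyFuture, 1.second)

  wrapped match {
    case Success(answer) => println(s"Got $answer")
    case Failure(t)      => println(s"Still no luck: ${t.getMessage}")
  }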

If you do feel that using Trys inside of Futures adds value to your codebase please let me know.

SPaMCAST 325 – Product Owners, Kim Pries, Jo Ann Sweeney

http://www.spamcast.net

Listen to the Software Process and Measurement Cast

Subscribe to the Software Process and Measurement Cast on iTunes

The Software Process and Measurement Cast features our essay on product owners. The role of the product owner is one of the hardest to implement when embracing Agile. However, how the role of the product owner is implemented is often a clear determinant of success with Agile. The ideas in our essay can help you get it right.

We will also have a new column from the Software Sensei, Kim Pries. In this installment Kim discusses the fact that there are numerous ways to get something done when writing code. Some are the right way and some are the wrong way. For example, are you willing to sacrifice clarity for cool or fast?

We also continue with Jo Ann Sweeney’s column Explaining Communication. In this installment Jo Ann addresses why knowing who your audiences and stakeholders are will help make your communication more efficient and effective! Visit Jo Ann’s website at http://www.sweeneycomms.com and let her know what you think of her new column.

Next

In the next Software Process and Measurement Cast we will feature our Interview with Steve Tendon. Steve has been a regular on the podcast in the past but took a break to hone his ideas on hyper-productive knowledge work. We discussed his new book Tame The Flow: Hyper-Productive Knowledge-Work Management, published by J. Ross Publishing, and how teams can raise their game to deliver results that not only raise the bar but jump over it.

 

Call to action!

We are in the middle of a re-read of John Kotter’s classic Leading Change on the Software Process and Measurement Blog.  Are you participating in the re-read? Please feel free to jump in and add your thoughts and comments!

After we finish the current re-read we will need to decide which book will be next. We are building a list of the books that have had the most influence on readers of the blog and listeners to the podcast. Can you answer the question?

What are the two books that have most influenced your career (business, technical or philosophical)? Send the titles to spamcastinfo@gmail.com.

First, we will compile a list and publish it on the blog. Second, we will use the list to drive future “Re-read” Saturdays. Re-read Saturday is an exciting new feature that began on the Software Process and Measurement blog on November 8th. Feel free to choose your platform: send an email, leave a message on the blog or Facebook, or just tweet the list (use hashtag #SPaMCAST)!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management

SPaMCAST 325 - Product Owners, Kim Pries, Jo Ann Sweeney

Software Process and Measurement Cast - Sun, 01/18/2015 - 23:00

Subscribe to the Software Process and Measurement Cast on iTunes

The Software Process and Measurement Cast features our essay on product owners. The role of the product owner is one of the hardest to implement when embracing Agile. However, how the role of the product owner is implemented is often a clear determinant of success with Agile. The ideas in our essay can help you get it right.

We will also have a new column from the Software Sensei, Kim Pries. In this installment Kim discusses the fact that there are numerous ways to get something done when writing code. Some are the right way and some are the wrong way. For example, are you willing to sacrifice clarity for cool or fast?

We also continue with Jo Ann Sweeney’s column Explaining Communication. In this installment Jo Ann addresses why knowing who your audiences and stakeholders are will help make your communication more efficient and effective! Visit Jo Ann’s website at http://www.sweeneycomms.com and let her know what you think of her new column.

Next

In the next Software Process and Measurement Cast we will feature our Interview with Steve Tendon.  Steve has been a regular on the podcast in the past but took a break to   hone his ideas on hyper-productive knowledge work.  We discussed his new book Tame The Flow: Hyper-Productive Knowledge-Work Management published J Ross and how teams can raise their game to deliver results that not only raise the bar but jump over it

 

Call to action!

We are in the middle of a re-read of John Kotter’s classic Leading Change on the Software Process and Measurement Blog.  Are you participating in the re-read? Please feel free to jump in and add your thoughts and comments!

After we finish the current re-read will need to decide which book will be next.  We are building a list of the books that have had the most influence on readers of the blog and listeners to the podcast.  Can you answer the question?

What are the two books that have most influenced you career (business, technical or philosophical)?  Send the titles to spamcastinfo@gmail.com

First, we will compile a list and publish it on the blog.  Second, we will use the list to drive future  “Re-read” Saturdays. Re-read Saturday is an exciting new feature that began on the Software Process and Measurement blog on November 8th.  Feel free to choose you platform; send an email, leave a message on the blog, Facebook or just tweet the list (use hashtag #SPaMCAST)!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Categories: Process Management

Meteor

Xebia Blog - Sun, 01/18/2015 - 12:11

Did you ever use AngularJS as a frontend framework? Then you should definitely give Meteor a try! Where AngularJS is powerful purely as a client framework, Meteor is great as a full-stack framework. That means you write your code in one language as if there were no separation between backend and frontend at all. On top of that, you get an Android and iOS client for free. Meteor is so incredibly simple that you are productive from the beginning.

Where Meteor kicks Angular

One of the killer features of Meteor is that you have a shared code base for frontend and backend. The next code snippet shows a file shared by backend and frontend:

// Collection shared and synchronized across client, server and database
Todos = new Mongo.Collection('todos');

// Shared validation logic, executed on both client and server
validateTodo = function (todo) {
  var errors = {};
  if (!todo.title)
    errors.title = "Please fill in a title";
  return errors;
};

Can you imagine how neat the code above is?


With one codebase, you get the full stack!

  1. Both the backend file and the frontend file can access and query the Todos collection. Meteor is responsible for syncing the todos. Even when another user adds an item, it is visible in your client directly. Meteor accomplishes this with a client-side Mongo implementation (MiniMongo).
  2. You write the validation rules once, and they are executed both on the frontend and on the backend. So you can give your users quick feedback about invalid input, while also guaranteeing that no invalid data is processed by the backend (when someone bypasses the client). And all of this without duplicated code. A small sketch of how both sides could use this shared code follows after this list.
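
To make that concrete, here is a minimal sketch (my own illustration, not taken from the original post) of how the shared Todos collection and validateTodo function could be used on each side; Todos.allow is standard Meteor API, while the sample data and the rule itself are assumptions:

// client: run the shared validation, then insert through MiniMongo
if (Meteor.isClient) {
  var todo = { title: 'Write blog post' };
  if (Object.keys(validateTodo(todo)).length === 0)
    Todos.insert(todo);  // synced to the server automatically
}

// server: the same validation guards writes from clients that bypass the UI
if (Meteor.isServer) {
  Todos.allow({
    insert: function (userId, doc) {
      return Object.keys(validateTodo(doc)).length === 0;
    }
  });
}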

Another killer feature of Meteor is that it works out of the box, and it's easy to understand. Angular can be a bit overwhelming; you have to learn concepts like directives, services, factories, filters, isolated scopes and transclusion. For some initial scaffolding you need to know Grunt, Yeoman, et cetera. With Meteor every developer can create, run and deploy a full-stack application within minutes. After installing Meteor you can run your app within seconds.

# install Meteor
$ curl https://install.meteor.com | /bin/sh
# scaffold a new app and run it locally
$ meteor create dummyapp
$ cd dummyapp
$ meteor
# deploy it to Meteor's hosting
$ meteor deploy dummyapp.meteor.com

Meteor dummy application

Another nice aspect of Meteor is that it uses DDP, the Distributed Data Protocol. The team invented the protocol and is heavily promoting it as "REST for websockets". It is a simple, pragmatic protocol that delivers live updates as data changes in the backend, and this works out of the box. This talk walks you through its concepts. The result is that if you change data on one client, it is updated immediately on the other clients, as in the publish/subscribe sketch below.
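
Here is a minimal publish/subscribe sketch of the pattern DDP carries between server and clients (my own illustration; the publication name 'todos' is an assumption, not from the post):

// server: publish a cursor; changes are pushed to subscribers over DDP
if (Meteor.isServer) {
  Meteor.publish('todos', function () {
    return Todos.find();
  });
}

// client: subscribe, then react to changes in the local MiniMongo copy
if (Meteor.isClient) {
  Meteor.subscribe('todos');
  Tracker.autorun(function () {
    console.log('todo count:', Todos.find().count());  // re-runs on every change
  });
}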

And there is so much more, like...

  1. Latency Compensation. On the client, Meteor prefetches data and simulates models to make it look like server method calls return instantly (see the method sketch after this list).
  2. Meteor is open source and integrates with existing open source tools and frameworks.
  3. Services (like an official package server and a build farm).
  4. Command line tools
  5. Hot deploys
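
To illustrate latency compensation, here is a minimal sketch (mine, not from the post) of a Meteor method placed in code shared by client and server; the method name addTodo is an assumption:

// Defined in a file loaded on both client and server: the client runs the
// method immediately as a simulation against MiniMongo, the server runs it
// for real; the UI updates without waiting for the round trip.
Meteor.methods({
  addTodo: function (title) {
    Todos.insert({ title: title, createdAt: new Date() });
  }
});

// Called from client code:
// Meteor.call('addTodo', 'Buy milk');
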
Where Meteor falls short

Yes, Meteor is not the answer to all your problems. The reason I would still choose Angular over Meteor for my professional work is that Angular's view framework rocks. It makes it easy to structure your client code into testable units and connect them via dependency injection. With Angular you can separate your HTML from your JavaScript. With Meteor your JavaScript contains HTML elements (because its UI library is based on Handlebars), which makes testing harder, and large projects can become unstructured very quickly.

Another flaw emerges if your project already has a backend. When you choose Meteor, you choose its full stack. That means: Mongo as the database and Node.js as the backend. Although you can create powerful applications with it, Meteor doesn't (easily) allow you to change this stack.

Under the hood

Meteor consists of several subprojects. It is a library of libraries; in fact, it is a stack: a standard set of core packages that are designed to work well together:

Components used by Meteor

  1. To make Meteor reactive, they've included the Blaze and Tracker components. The Blaze component is heavily based on Handlebars.
  2. The DDP component is a new protocol, specified by the Meteor team, for modern client-server communication.
  3. Livequery and the full stack database support take all the pain of data synchronization between the database, backend and frontend away! You don't have to think about it anymore.
  4. The Isobuild package is a unified build system for browser, server and mobile.

Conclusion

If you want to create a website or a mobile app with a backend in no time, while getting lots of functionality out of the box, Meteor is a very interesting tool. If you want more control or need to connect to an existing backend, then Meteor is probably less suitable.

You can watch this presentation I recently gave, to go along with the article.

Re-read Saturday: Anchoring New Approaches in The Culture, John P. Kotter Chapter 10


Consider an elastic band that has been stretched between two points. If the elastic hasn’t lost its stretch, as soon as it is released at one end it will snap back. Organizational culture is like that elastic band. We pull and stretch to make changes and then we want them to settle in. However, we need to anchor the change so that when we change focus the changes don’t disappear. The eighth step in Kotter’s eight-stage model of change discusses this need to anchor the change to avoid reversion.

Culture describes the typical behaviors of a group and the meaning ascribed to those behaviors. Kotter describes culture as the reflection of shared values and group norms. All groups have a specific culture that allows them to operate in a predictable manner. Within a group or organization, culture allows members to interpret behavior and communication, and therefore build bonds of trust. When culture is disrupted, those bonds are scrambled and behavior becomes difficult to predict until the culture is reset. If a change program declares victory before the culture is reset, the group or organization tends to revert back to the original cultural norm.

Culture is powerful because:

  1. The individuals within any group are selected to be part of the group and then indoctrinated into the culture. Cognitive biases are a powerful force that pushes people to hire and interact with people that are like them, homogenizing and reinforcing culture. Culture is further reinforced by training, standards and processes that are used to reduce the level of behavioral variance in the organization. Standardization and indoctrination help lock in culture.
  2. Culture exerts itself through the actions of each individual. In a small firm, the combination of the number of people and their proximity to the leaders of the change makes culture change easier (not easy, just easier). However, when we consider mid-sized or large firms in which hundreds or thousands of people need to make a consistent and permanent change to how they act, change gets really complicated. Since culture reflects and is reinforced by how people work, real change requires changing how each affected person behaves, which is significantly more difficult than changing words in the personnel manual.
  3. Many actions taken in an organization are not driven by conscious decisions, which makes them hard to challenge or discuss. A significant amount of our work behavior is governed by shared values and muscle memory. I often hear the statement "that's just the way it is done here" when I ask why a team has taken a specific action. Many of these actions are unconscious and therefore tend to go unrecognized until challenged from the outside. Pushing people away from comfortable patterns of behavior generates cognitive dissonance.

Less power is needed to overcome an entrenched culture if the change can build on the organization's base culture rather than having to confront it. Building on the current culture will often generate some early momentum towards change because those being asked to change see less risk. Alternatively, change that is at odds with the current culture will require significantly more effort and a greater sense of urgency to generate and sustain.

Kotter argues that culture changes trail behavior. Put another way, culture change happens last. Each of the stages in the model for change is designed to build urgency, momentum and support for organizational changes. Vision provides the direction for the change. Results provide proof that the change works and is better than what it replaced. Continuous communication of vision, direction and results breaks through the barriers of resistance. Breaking down the layers of resistance challenges old values and pushes people to admit that the change is better. When barriers can't or won't change, sometimes change means changing key people. Nihilistic behavior in the face of results can't be allowed to persist. Kotter finally points out that in order to anchor long-term change, the organization will need to ensure that both succession planning and promotions reinforce the change rather than allow reversion.

Peter Drucker said, "Culture eats strategy for breakfast." Innumerable people have suggested a corollary: "Culture eats change for breakfast." The Eight-Stage Model for Significant Change provides a strategy for overcoming the power of an entrenched culture to generate lasting change.

Re-read Summary to-date

Change is a fact of life. John P. Kotter's book, Leading Change, defines his famous eight-stage model for change. The first stage of the model is establishing a sense of urgency. A sense of urgency provides the energy and rationale for any large, long-term change program. Once a sense of urgency has been established, the second stage in the eight-stage model for change is the establishment of a guiding coalition. If a sense of urgency provides the energy to drive change, a guiding coalition provides the power for making change happen. A vision, built on the foundation of urgency and a guiding coalition, represents a picture of a state of being at some point in the future. Developing a vision and strategy is only a start; the vision and strategy must be clearly and consistently communicated to build the critical mass needed to make change actually happen. Once the organization is wound up and primed, the people within it must be empowered and let loose to create change. Short-term wins provide the feedback and credibility needed to deliver on the change vision. The benefits and feedback from the short-term wins and other environmental feedback are critical for consolidating gains and producing more change. Once a change has been made, it needs to be anchored so that the organization does not revert to older, comfortable behaviors, throwing away the gains it invested blood, sweat and tears to create.


Categories: Process Management