In the discussions around #NoEstimates, it's finally dawned on me - after walking the book shelf in the office - that the conversation is split across a chasm: governance based organizations and non-governance based organizations.
The same is the case for product development organizations - those producing a software product for sale or providing a service in exchange for money. There are governance based product organizations and non-governance based product organizations.
I can't say how those are differentiated, but there is broad research on the topic of governance and business success using IT. The book on the left is a start. In this book there is a study of 300 enterprises around the world, with the following...
Companies with effective IT governance have profits that are 20% higher than other companies pursuing similar strategies. One viable explanation for this differential is that IT governance specifies accountabilities for IT-related business outcomes and helps companies align their IT investments with their business priorities. But IT governance is a mystery to key decision makers at most companies. Our research indicates that, on average, only 38% of senior managers in a company know how IT is governed. Ignorance is not bliss. In our research senior management awareness of IT governance processes proved to be the single best indicator of governance effectiveness, with top performing firms having 60, 70 or 80% of senior executives aware of how IT is governed. Taking the time at senior management levels to design, implement, and communicate IT governance processes is worth the trouble: it pays off.
IT governance is a decision rights and accountability framework for encouraging desirable behaviours in the use of IT. And I'd add the creation of IT, IT products, and IT services. Since IT is a broad domain, let's exclude development effort for things like games, phone apps, plugins, and in general items that have low value at risk. This doesn't mean they don't have high revenue, but the investment value is low. So when they don't produce their desired beneficial outcome, the degree of loss is low as well.
Assessment of IT governance focuses on four objectives:
In all four, Weill and Ross provide guidance for assessing the capabilities of IT, and in each, cost is considered a critical success factor.
Without knowing the cost of a decision, the choices presented by that decision cannot be assessed. So when we hear that #NoEstimates is about making decisions, ask if those decisions are being made in a governance based organization.
Then ask the question: who has the decision rights to make those decisions? Who needs to know the cost of the value produced by the firm in exchange for that cost? The developers, the management of the development team, the business management of the firm, the customers of the firm?
The three dependent variables of all projects are schedule, cost, and technical performance of produced capabilities (a wrapper term for everything NOT time and cost). The value at risk is a good starting point for deciding whether or not to apply governance processes. If you fix one of these variables - say budget (which is a placeholder for cost until the actuals arrive) - then the other two (time and technical) are now free to vary. Estimating their behaviour will be needed to assure the ROI meets the business goals. In the governance paradigm, these three dependent variables are part of the decision making process. Not knowing one or more puts at risk the value produced by the project or work effort. It's this value at risk that is the key to determining why, how, and when to estimate.
What are you willing to lose (risk)? If you don't need to know when you'll be done, or what you'll be able to produce on a planned day, or what that will cost, or whether the ROI (return on investment), ROA (return on assets), or ROE (return on equity) can be determined to some level of confidence to support your decision making - then estimating is a waste of time.
If, on the other hand, the firm or customers you work for writing software in exchange for money have an interest in knowing any or all of those answers to support their decision making, you'll likely have to estimate time, cost, and produced capabilities to answer their questions.
It's not about you (the consumer of money). To find out who it is about, follow the money; they'll tell you if they need an estimate or not.
I could not sleep... 3am and this idea...
Event sourcing is about fold, but there is no monoid around!
What's a monoid?
We need a set.
Letâs take a simple one, positive integers.
And an operation, let's say +, which takes two positive integers.
It returns... a positive integer. The operation is said to be closed on the set.
Something interesting is that 3 + 8 + 2 = 13 = (3 + 8) + 2 = 3 + (8 + 2).
This is associativity: (x + y) + z = x + (y + z)
The last interesting thing is 0, the neutral element:
x + 0 = 0 + x = x
(N,+,0) is a monoid.
Let's say it again:
(S, *, ø) is a monoid if * is closed on S, * is associative, and ø is a neutral element.
Warning: it doesn't need to be commutative, so x * y can be different from y * x!
Some famous monoids:
(int, +, 0)
(int, *, 1)
(lists, concat, empty list)
(strings, concat, empty string)
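As a quick illustration, the monoid laws can be spot-checked mechanically. Here is a minimal sketch in Python (the `is_monoid` helper is mine, not a standard function):

```python
# A monoid is (set, op, neutral): op is closed, associative, and has a neutral element.
def is_monoid(op, neutral, samples):
    """Spot-check the monoid laws on a few sample values."""
    for x in samples:
        # identity law: x * neutral == neutral * x == x
        if op(x, neutral) != x or op(neutral, x) != x:
            return False
    for x in samples:
        for y in samples:
            for z in samples:
                # associativity law: (x * y) * z == x * (y * z)
                if op(op(x, y), z) != op(x, op(y, z)):
                    return False
    return True

print(is_monoid(lambda a, b: a + b, 0, [1, 2, 3]))         # (int, +, 0) -> True
print(is_monoid(lambda a, b: a * b, 1, [2, 3, 4]))         # (int, *, 1) -> True
print(is_monoid(lambda a, b: a + b, "", ["ab", "c"]))      # (strings, concat, "") -> True
print(is_monoid(lambda a, b: a - b, 0, [1, 2, 3]))         # subtraction fails (0 - x != x) -> False
```

The last line shows why the laws matter: integer subtraction is closed on integers but is neither associative nor does 0 act as a two-sided neutral element, so (int, -, 0) is not a monoid.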
The link with Event Sourcing
Event sourcing is based on an application function apply : State -> Event -> State, which returns the new state based on the previous state and an event.
Current state is then:
fold apply emptyState events
(for those using C# Linq, fold is the same as .Aggregate)
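To make this concrete, here is a minimal sketch in Python, where `reduce` plays the role of fold (like .Aggregate in Linq). The counter-style state and event shapes are illustrative assumptions, not from any particular framework:

```python
from functools import reduce

# Hypothetical counter aggregate: state is an int, events add or remove amounts.
def apply(state, event):
    """apply : State -> Event -> State"""
    kind, amount = event
    if kind == "added":
        return state + amount
    if kind == "removed":
        return state - amount
    return state  # unknown events leave the state unchanged

events = [("added", 3), ("added", 8), ("removed", 2)]
empty_state = 0

# fold apply emptyState events
current_state = reduce(apply, events, empty_state)
print(current_state)  # 9
```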
Which is great because higher order functions and all...
But fold is even more powerful with monoids! For integers under +, fold is called sum, and the cool thing is that it's associative!
With a monoid you can fold subsets, then combine them together afterwards (still in the right order). This is what gives map/reduce its full power: move the code to the data, combine in place, then combine the results. As long as you have monoids, it works.
But apply will not be part of a monoid. It's not closed on a set.
To make it closed on a set it should have the following signature:
apply: State -> State -> State, so we should maybe rename the function combine.
Let's imagine we have a combine operation closed on State.
Now, event sourcing goes from:
decide: Command -> State -> Event list
apply: State -> Event -> State
to:
decide: Command -> State -> Event list
convert: Event -> State
combine: State -> State -> State
the previous apply function is then just:
apply state event = combine state (convert event)
and fold distribution gives:
fold apply emptyState events = fold combine emptyState (map convert events)
(where map applies the convert function to each item of the events list, as does .Select in Linq)
The great advantage here is that we map then fold, and fold is another name for reduce: this is map/reduce!
Application of events can then be done in parallel, in chunks, and the partial results combined!
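A minimal sketch of this decomposition in Python, reusing the hypothetical counter aggregate (`reduce` plays the role of fold, `map` the role of .Select). Because combine is associative, folding the chunks separately gives the same result as the sequential fold:

```python
from functools import reduce

# Same hypothetical counter aggregate, refactored into convert + combine.
def convert(event):
    """convert : Event -> State (an event becomes a state delta)"""
    kind, amount = event
    return amount if kind == "added" else -amount

def combine(s1, s2):
    """combine : State -> State -> State (closed, associative, neutral = 0)"""
    return s1 + s2

events = [("added", 3), ("added", 8), ("removed", 2)]

# fold combine emptyState (map convert events)
sequential = reduce(combine, map(convert, events), 0)

# Associativity lets us fold chunks independently (e.g. in parallel),
# then combine the partial results afterwards, still in order.
chunk1, chunk2 = events[:2], events[2:]
partial1 = reduce(combine, map(convert, chunk1), 0)
partial2 = reduce(combine, map(convert, chunk2), 0)
parallel = combine(partial1, partial2)

print(sequential, parallel)  # 9 9
```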
Is it just a dream?
Most of the tricky decisions have been taken in the decide function, which didn't change. The apply function usually just sets state members to values present in the event, or increments/decrements values, or adds items to a list... No business decision is taken in the apply function, and most of the primitive types in state members are already monoids under their usual operations...
And a group (tuple, list...) of monoids is also a monoid under a simple composition:
if (N1, *, n1) and (N2, ¤, n2) are monoids, then N1 × N2 is a monoid with an operator <*> ((x1, x2) <*> (y1, y2) = (x1*y1, x2¤y2)) and a neutral element (n1, n2)...
To view it more easily, the convert function converts an event to a Delta, a difference of the State.
Those deltas can then be aggregated/folded to make a bigger delta.
It can then be applied to an initial value to get the result!
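A minimal sketch of such a composed Delta in Python (the pair shape is an illustrative assumption): integers under + and lists under concatenation are both monoids, so their pairwise composition is one too:

```python
from functools import reduce

# Hypothetical Delta = (count_change, items_added).
# Product of monoids: (int, +, 0) and (list, concat, []) compose pairwise.
NEUTRAL = (0, [])

def combine(d1, d2):
    """(x1, x2) <*> (y1, y2) = (x1 + y1, x2 ++ y2)"""
    return (d1[0] + d2[0], d1[1] + d2[1])

deltas = [(1, ["a"]), (2, ["b", "c"]), (-1, [])]

# Fold the small deltas into one bigger delta, then apply it once.
big_delta = reduce(combine, deltas, NEUTRAL)
print(big_delta)  # (2, ['a', 'b', 'c'])
```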
The idea seems quite interesting and I have never read anything about it. If anyone knows of prior study of the subject, I'm interested.
Next time we'll see how to make monoids for some common patterns we can find in the apply function, so we can use them in the convert function.
I get around. Once upon a time in my life that might have been an epithet, but now it reflects a wide exposure to what works, what doesn't work, and what is clearly a cop-out. I suggest that there are five requirements for a successful process improvement program - five attributes that give a program a chance of success. They are:
We are not yet at the End of History for database theory as Peter Bailis and the Database Group at UC Berkeley continue to prove with a great companion blog post to their new paper: Scalable Atomic Visibility with RAMP Transactions. I like the approach of pairing a blog post with a paper. A paper almost by definition is formal, but a blog post can help put a paper in context and give it some heart.
From the abstract:
Databases can provide scalability by partitioning data across several servers. However, multi-partition, multi-operation transactional access is often expensive, employing coordination-intensive locking, validation, or scheduling mechanisms. Accordingly, many real-world systems avoid mechanisms that provide useful semantics for multi-partition operations. This leads to incorrect behavior for a large class of applications including secondary indexing, foreign key enforcement, and materialized view maintenance. In this work, we identify a new isolation model—Read Atomic (RA) isolation—that matches the requirements of these use cases by ensuring atomic visibility: either all or none of each transaction’s updates are observed by other transactions. We present algorithms for Read Atomic Multi-Partition (RAMP) transactions that enforce atomic visibility while offering excellent scalability, guaranteed commit despite partial failures (via synchronization independence), and minimized communication between servers (via partition independence). These RAMP transactions correctly mediate atomic visibility of updates and provide readers with snapshot access to database state by using limited multi-versioning and by allowing clients to independently resolve non-atomic reads. We demonstrate that, in contrast with existing algorithms, RAMP transactions incur limited overhead—even under high contention—and scale linearly to 100 servers.
What is RAMP?
These 5 questions need credible answers in units of measure meaningful to the decision makers.
What Does All This Mean?
With these top level questions, many approaches are available, no matter what the domain or technology. But in the end, if we don't have answers, the probability of success will be reduced.
I used to hate writing. It always felt like such a strain to put my ideas on paper or in a document. Why not just say what I thought? In this video, I’ll tell you why I changed my mind about writing and why I think writing is one of the best things you can […]
I don't decide which countries I go to. You do.
My new one-day workshop follows an important principle:
I go where people send me.
Selecting countries and cities
Every week I get questions such as "Will your book tour come to Argentina?", "When will you visit China?" and "Why are you not planning for Norway?"
And every time my answer is the same: I go where people send me. The backlog of countries is based on the readers of my mailing list.
So far we have discussed three of the top factors for successful Agile implementations:
Tied for fourth place in the list of success factors are trust, adaptable culture and coaching.
Trust was one of the surprises on the list. Trust, in this situation, means that all of the players needed to deliver value - including the team, stakeholders and management - should exhibit predictable behavior. From the team's perspective, there needs to be trust that they won't be held to other process standards to judge how they deliver if they adopt Agile processes. From a stakeholder and management perspective, there needs to be trust that a team will live up to the commitments they make.
An adaptable culture reflects an organization's ability to make and accept change. I had expected this to be higher on the list. Implementing Agile generally requires that an organization makes a substantial change to how people are managed and how work is delivered. Those changes typically impact not only the project team, but also the business departments served by the project. Organizations that do not adapt to change well rarely make the jump into Agile painlessly. Organizations that have problems adapting will need to spend significantly more effort on organizational change management.
Coaches help teams, stakeholders and other leaders within an organization learn how to be Agile. Being Agile requires some combination of knowing specific techniques and embracing a set of organizational principles. Even in more mature Agile organizations, coaches bring new ideas to the table, different perspectives and a shot of energy. That shot of energy is important to implementing Agile and then for holding on to those new behaviors until they become muscle memory.
Change in organizations is rarely easy. Those being asked to change very rarely perceive change being for the better, which makes trust very difficult. Adopting Agile requires building trust between teams, the business and IT management and vice versa. Coaching is a powerful tool to help even adaptable organizations build trust and embrace Agile as a mechanism to deliver value and as a set of principles for managing work.
I am happy to announce that Manage Your Job Search is available on all platforms: electronic and print. And, you get the presents!
For one week, I am running a series of special conference calls to promote the book. Take a look at my celebration page.
I also have special pricing on Hiring Geeks That Fit as a bundle at the Pragmatic Bookshelf, leanpub, and on Amazon. Why? Because you might want to know how great managers hire.
Please do join me on the conference calls. They’ll be short, sweet, and a ton of fun.
In Senior Management and the Success of Agile Implementation, I described the results of a survey about what experienced process improvement personnel, testers and developers felt contributes to a successful Agile implementation. Tied for second place in the survey were team engagement and generating early feedback. These two concepts are curiously inter-related.
Team engagement is a reflection of motivated and capable individuals working together. Agile provides teams with the tools to instill unity of purpose. Working with the business on a continuous basis provides the team a clear understanding of the project's purpose. Short iterations provide the team with a sense of progress. Self-management and retrospectives provide teams with a degree of control over how they tackle impediments. Finally, the end-of-sprint demonstrations provide early feedback. Feedback helps reinforce the team's sense of purpose, which reinforces motivation.
Early feedback was noted in the survey as often as team engagement. In classic software development projects, the project would progress from requirements through analysis, design, coding and testing before customers would see functional code. Progress in these methods is conveyed through process documents (e.g. requirements documents) and status reports. On the other hand, one of the most important principles of Agile states:
Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
Delivering functional software provides all of the project's stakeholders with explicit proof of progress and gives them a chance to provide feedback based on code they can execute. Early feedback increases stakeholder engagement and satisfaction, which also helps to motivate the team. As importantly, since stakeholders see incremental progress, any required course corrections are also incremental. Incremental course corrections help to ensure that when the project is complete, the most value possible has been delivered.
Team engagement and early feedback are both important to successful Agile implementations. Interestingly, both concepts are intertwined. Feedback helps to generate engagement and motivation. As one of the respondents to the survey stated, "Agile succeeds when it instills 'unity of purpose' and builds a 'community of trust' within an organization." Team engagement and early feedback provide a platform for Agile success.
(This entry is part of a series. The audience: SQL Server developers. The topic: SQLite on mobile devices.)

Different types of, er, types
At the SQL language level, the biggest difference with SQLite is the way it deals with data types. There are three main differences to be aware of:
There are only a few types
And types are dynamic
(But not entirely, because they have affinity)
And type declarations are weird
Okay, so actually that's FOUR things, not three. But the third one doesn't really count, so I'm not feeling terribly obligated to cursor all the way back up to the top just to fix the word "three". Let's keep moving.

Only a few types
SQLite values can be one of the following types: NULL, INTEGER, REAL, TEXT, or BLOB.
The following table shows roughly how these compare to SQL Server types:

- tinyint, smallint, int, bigint, bit -> INTEGER. In SQLite, all integers are up to 64 bits wide (like bigint), but smaller values are stored more efficiently.
- real, float -> REAL. In SQLite, all floating point numbers are 64 bits wide.
- char, varchar, nchar, nvarchar, text, ntext -> TEXT. In SQLite, all strings are Unicode, and it doesn't care about widths on TEXT columns.
- binary, varbinary, image -> BLOB. Width doesn't matter here either.
- decimal, numeric, money, smallmoney -> INTEGER? These are problematic, as SQLite 3 does not have a fixed point type. (In Zumero, we handle synchronization of these by mapping them to INTEGER and handling the scaling.)
- date, datetime, datetime2, datetimeoffset, smalldatetime, time -> (your choice). SQLite has no data types for dates or times. However, it does have a rich set of built-in functions for manipulating date/time values represented as text (ISO-8601 format), integer (unix time) or real (Julian day).

Types are dynamic
In SQL Server, the columns in a table are strictly typed. If you define a column to be of type smallint, then every value in that column must be a 16 bit signed integer.
In contrast, SQLite's approach might be called "dynamic typing". Quoting from its own documentation: "In SQLite, the datatype of a value is associated with the value itself, not with its container."
For example, the following code will fail on SQL Server:
CREATE TABLE [foo] (a smallint); INSERT INTO [foo] (a) VALUES (3); INSERT INTO [foo] (a) VALUES (3.14); INSERT INTO [foo] (a) VALUES ('pi');
But on SQLite, it will succeed. The value in the first row is an INTEGER. The value in the second row is a REAL. The value in the third row is a TEXT string.
sqlite> SELECT a, typeof(a) FROM foo; 3|integer 3.14|real pi|text
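You can reproduce the same experiment from Python's built-in sqlite3 module; this is just a sketch of the behavior described above:

```python
import sqlite3

# In-memory database. The smallint declaration gives the column INTEGER
# affinity, but SQLite still stores whatever type each value actually has.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (a smallint)")
conn.execute("INSERT INTO foo (a) VALUES (3)")
conn.execute("INSERT INTO foo (a) VALUES (3.14)")
conn.execute("INSERT INTO foo (a) VALUES ('pi')")

for value, type_name in conn.execute("SELECT a, typeof(a) FROM foo"):
    print(value, type_name)
# 3 integer
# 3.14 real
# pi text
```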
The column [a] is a container that simply doesn't care what you place in it.

Type affinity
Well, actually, it does care. A little.
A SQLite column does not have a type requirement, but it does have a type preference, called an affinity. I'm not going to reiterate the type affinity rules from the SQLite website here. Suffice it to say that sometimes SQLite will change the type of a value to match the affinity of the column, but you probably don't need to know this, because:
If you declare a column of type TEXT and always insert TEXT into it, nothing weird will happen.
If you declare a column of type INTEGER and always insert INTEGER into it, nothing weird will happen.
If you declare a column of type REAL and always insert REAL into it, nothing weird will happen.
In other words, just store values of the type that matches the column. This is the way you usually do things anyway.

Type declarations are weird
In a column declaration, SQLite has a rather funky set of rules for how it parses the type. It uses these rules to try its very best to Do The Right Thing when somebody ports SQL code from another database.
For example, all of the columns in the following table end up with TEXT affinity, which is probably what was intended:
CREATE TABLE [foo] ( [a] varchar(50), [b] char(5), [c] nchar, [d] nvarchar(5), [e] nvarchar(max), [f] text );
But in some cases, the rules are funky. Here are more declarations which all end up with TEXT affinity, even though none of them look right:
CREATE TABLE [foo] ( [a] characters, [b] textish, [c] charbroiled, [d] context );
And if you want to be absurd, SQLite will let you. Here's an example of a declaration of a column with INTEGER affinity:
CREATE TABLE [foo] ( [d] My wife and I went to Copenhagen a couple weeks ago to celebrate our wedding anniversary and I also attended SQL Saturday while I there and by the way we saw Captain America The Winter Soldier there as well which means I got to see it before all my friends back here in Illinois and the main reason this blog entry is late is because I spent most of the following week gloating );
SQLite will accept nearly anything as a type name. Column [d] ends up being an INTEGER because its ridiculously long type name contains the characters "INT" (in "Winter Soldier").
Perhaps we can agree that this "feature" could be easily abused.
There are only four types anyway. Pick a name for each type and stick to it. Once again, the official names are INTEGER, REAL, TEXT, and BLOB.
(If you want a little more latitude, you can use INT for INTEGER. Or VARCHAR for TEXT. But don't stray very far, mkay?)
Pretend that these are the only four things that SQLite will allow, and then it will never surprise you.

Summary
SQLite handles types very differently from SQL Server, but its approach is mostly a superset of your existing habits. The differences explained above might look like a big deal, but in practice, they probably won't affect you all that much.
Microservices are a style of software architecture that involves delivering systems as a set of very small, granular, independent collaborating services.
Though they aren't a particularly new idea, Microservices seem to have exploded in popularity this year, with articles, conference tracks, and Twitter streams waxing lyrical about the benefits of building software systems in this style.
This popularity is partly off the back of trends such as Cloud, DevOps and Continuous Delivery coming together as enablers for this kind of approach, and partly off the back of great work at companies such as Netflix who have very visibly applied the pattern to great effect.
Let me say up front that I am a fan of the approach. Microservices architectures have lots of very real and significant benefits:
Lots of myths float around about requirements elicitation and management. Starting with requirements is not how large, complex, mission critical DOD and NASA programs work. Start with capabilities. This can be directly transferred to enterprise IT.
Here's a map of the planned capabilities for an ERP system. This figure is from Performance-Based Project Management®, where all the details of this and other principles, practices, and processes needed for project success can be found.
Here each business system capability is outlined in the order needed to maximize the business value. In agile parlance, the customer has prioritized the deliverables. But in fact the prioritization is part of the strategic planning needed to assure that the cost investment returns the maximum value and that the business maintains a positive ROI throughout the life of the project.
The first step is to identify the needed capabilities. Here are the steps:
Only when we have the capabilities defined for each stage of the project can we start on the requirements. All the capabilities need to be identified and sequenced; otherwise we can't be assured the business goals can be met for the planned investment. With the planned capabilities, we can start on the requirements.
With requirements in place for each capability, we can then start development. This is done incrementally and iteratively using our favorite agile method. The method doesn't matter, as long as the approach delivers value incrementally.
Posted by Ellie Powers, Google Play team
Today we are happy to announce that the App Translation Service, previewed in May at Google I/O, is now available to all developers. Every day, more than 1.5 million new Android phones and tablets around the world are turned on for the first time. Each newly activated Android device is an opportunity for you as a developer to gain a new user, but frequently, that user speaks a different language from you.
To help developers reach users in other languages, we launched the App Translation Service, which allows developers to purchase professional app translations through the Google Play Developer Console. This is part of a toolbox of localization features you can (and should!) take advantage of as you distribute your app around the world through Google Play.
We were happy to see that many developers expressed interest in the App Translation Service pilot program, and it has been well received by those who have participated so far, with many repeat customers.
Here are several examples from developers who participated in the App Translation Service pilot program: the developers of Zombie Ragdoll used this tool to launch their new game simultaneously in 20 languages in August 2013. When they combined app translation with local marketing campaigns, they found that 80% of their installs came from non-English-language users. Dating app SayHi Chat expanded into 13 additional languages using the App Translation Service. They saw 120% install growth in localized markets and improved user reviews of the professionally translated UI. The developer of card game G4A Indian Rummy found that the App Translation Service was easier to use than their previous translation methods, and saw a 300% increase in user engagement with localized apps. You can read more about these developers' experiences with the App Translation Service in Developer Stories: Localization in Google Play.
To use the App Translation Service, you'll want to first read the localization checklist. You'll need to get your APK ready for translation, and select the languages to target for translation. If you're unsure about which languages to select, Google Play can help you identify opportunities. First, review the Statistics section in the Developer Console to see where your app has users already. Does your app have a lot of installs in a certain country where you haven't localized to their language? Are apps like yours popular in a country where your app isn't available yet? Next, go to the Optimization Tips section in the Developer Console to make sure your APK, store listing, and graphics are consistently translated.
You'll find the App Translation Service in the Developer Console at the bottom of the APK section, where you can start a new translation or manage an existing translation. You'll be able to upload your app's file of string resources, select the languages you want to translate into, select a professional translation vendor, and place your order. Pro tip: you can put your store listing text into the file you upload to the App Translation Service. You'll be able to communicate with your translator to be sure you get a great result, and download your translated string files. After you do some localization testing, you'll be ready to publish your newly translated app update on Google Play, with localized store listing text and graphics. Be sure to check back to see the results on your user base, and track the results of marketing campaigns in your new languages using Google Analytics integration.
Good luck! Bonne chance ! ¡Buena suerte! Удачи! Boa sorte!
Posted by Ellie Powers, Google Play team
A key part of growing your app's installed base is knowing more about your users: how they discover your app, what devices they use, what they do when they use your app, and how often they return to it. Understanding your users is now made easier through a new integration between Google Analytics and the Google Play Developer Console.
Starting today, you can link your Google Analytics account with your Google Play Developer Console to get powerful new insights into your app's user acquisition and engagement. In Google Analytics, you'll get a new report highlighting which campaigns are driving the most views, installs, and new users in Google Play. In the Developer Console, you'll get new app stats that let you easily see your app's engagement based on Analytics data.
This combined data can help you take your app business to the next level, especially if you're using multiple campaigns or monetizing through advertisements and in-app products that depend on high engagement. Linking Google Analytics to your Developer Console is straightforward; the sections below explain the new types of data you can get and how to get started.

In Google Analytics, see your app's Google Play referral flow

Once you've linked your Analytics account to your Developer Console, you'll see a new report in Google Analytics called Google Play Referral Flow. This report details each of your campaigns and the user traffic that they drive. For each campaign, you can see how many users viewed your listing page in Google Play and how many went on to install your app and ultimately launch it on their mobile devices.
With this data you can track the effectiveness of a wide range of campaigns, such as blogs, news articles, and ad campaigns, and get insight into which marketing activities are most effective for your business. You can find the Google Play report by going to Google Analytics and clicking on Acquisitions > Google Play > Referral Flow.

In the Developer Console, see engagement data from Google Analytics

If you're already using Google Analytics, you know how important it is to see how users are interacting with your app. How often do they launch it? How much do they do with it? What are they doing inside the app?
Once you link your Analytics account, you'll be able to see your app's engagement data from Google Analytics right in the Statistics page in your Developer Console. You'll be able to select two new metrics from the drop-down menu at the top of the page:
These engagement metrics are integrated with your other app statistics, so you can analyze them further across other dimensions, such as by country, language, device, Android version, app version, and carrier.

How to get started
To get started, you first need to integrate Google Analytics into your app. If you haven't done this already, download the Google Analytics SDK for Android and then take a look at the developer documentation to learn how to add Analytics to your app. Once you've integrated Analytics into your app, upload the app to the Developer Console.
Next, you'll need to link your Developer Console to Google Analytics. To do this, go to the Developer Console and select the app. At the bottom of the Statistics page, you'll see directions about how to complete the linking. The process takes just a few moments.
That's it! You can now see both the Google Play Referral Flow report in Google Analytics and the new engagement metrics in the Developer Console.
Over the past few weeks I have been asking friends and colleagues to formally answer the following question:
What are the top reasons you think an organization succeeds in implementing Agile?
The group that participated in this survey is a highly experienced cohort of process improvement personnel, testers and developers. Not all of the respondents were sure Agile and success belonged in the same sentence (more on that later in the week). There was a rich range of answers; however, after the first dozen responses a consensus formed. Today I would like to explore the most important success factor as reported in this survey: senior leadership support.
Senior management support was the most often mentioned factor influencing Agile success. By far the most significant nuance in senior management support was exhibiting a true understanding of Agile. In particular, senior managers must understand what Agile really is rather than falling prey to buzzword bingo. One of the respondents suggested, "I feel most senior leaders that I have dealt with don't have a full understanding of what is needed and it trickles down to the rest of the organization." Senior leaders need to walk the talk when it comes to Agile if they expect to implement Agile successfully. They need to prove to both team members and middle managers that they understand how Agile impacts the flow of work through a sprint and that Agile teams are expected to self-organize. Senior leaders will help pull the transition to Agile forward by asking questions that elicit proof that teams are acting Agile. For example, asking to see a team's burn-down chart rather than report-based status reporting sends a strong message that guides behavior.
In many organizations, following the process is as important as the outcome of any specific project. This is based on the presumption that precisely following the process foreshadows success. In the role of process champion, senior leaders own one of the more significant barriers to change. Senior leadership needs to give teams incentives to try new processes such as Scrum. Senior managers need to understand that Agile frameworks are scaffolds that need to be tailored to fit project needs and requirements. Providing incentives for teams to experiment will create an environment of flexibility, so that teams can decide how to address impediments as soon as they are encountered.
Teams need support from senior leadership to allow innovation, or Agile implementations will fail. Support for Agile innovation derives from the expectation of senior management that teams will use Agile techniques. These expectations need to be part of the annual goals and objectives, and be in evidence in the questions that leaders ask of middle managers and project teams. Asking questions that require teams to prove they are using Agile is very powerful evidence of a senior leader's expectations.