
Software Development Blogs: Programming, Software Testing, Agile Project Management


Feed aggregator

How To Think Like a Microsoft Executive

One of the things I do, as a patterns and practices kind of guy, is research and share success patterns. 

One of my more interesting bodies of work is my set of patterns and practices for successful executive thinking.

A while back, I interviewed several Microsoft executives to get their take on how to think like an effective executive.

While the styles vary, what I enjoyed most was seeing the different mindset each executive uses as they approach the challenge of changing the world in a meaningful way.

5 Key Questions to Share Proven Practices for Executive Thinking

My approach was straightforward: I tried to find a simple way to capture and distill the essence. I originally went down the path of identifying key thinking scenarios (changing perspective, creating ideas, evaluating ideas, making decisions, making meaning, prioritizing ideas, and solving problems) and then the path of identifying key thinking techniques (blue ocean/strategic profile, PMI, Six Thinking Hats, PQ/PA, BusinessThink, Five Whys, etc.) -- but in the end a simple set of 5 key questions was more effective.

These are the five questions I ended up using:

  1. What frame do you mostly use to evaluate ideas? (for example, one frame is: who's the customer? what's the problem? what's the competition doing? what does success look like?)
  2. How do you think differently than other people might in a way that helps you get a better perspective on the problem?
  3. How do you think differently than other people might in a way that helps you make a better decision?
  4. What are the top 3 questions you ask yourself the most each day that make the most difference?
  5. How do you get in your best state of mind or frame of mind for your best thinking?

The insights and lessons learned could fill books, but I thought I would share three of the responses that I tend to use and draw from on a regular basis …

Microsoft Executive #1

1) The dominant framework I like to use for decisions is: how can we best help the customer? Prioritizing the customer is nearly always the right way to make good decisions for the long term. While one has to be aware of the competition and the like, excessively “following taillights” usually fails. The best lens through which to view the competition is, “how are they helping their customers, and is there anything we can learn from them about how to help our own customers?”

2) I don’t think that there is anything magical about executive thinking. The one thing we hopefully have is a greater breadth and depth of experience on key decisions. We use this experience to discern patterns, and those patterns often help us make good decisions on relatively little data.

3) Same answer as #2.

4) How can we help our customers more? Are we being realistic in our assessments of ourselves, our offerings and the needs of our customers? How can we best execute on delivering customer value?

5) It is key to keep some discretionary time for connecting with customers, studying the competition and the marketplace, and for “white space thinking.” It is too easy to get caught up in reacting to lots of short-term details and thereby lose the time to think about the long term.

Microsoft Executive #2

There are three things that I think about as it relates to leading organizations: Vision, People and Results. Some of the principles in each of these components will apply to any organization, whether the organization's goal is to make profit, achieve strategic objectives, or make non-profit social impact.

Vision

In setting the vision and top-level objectives, it is very important to pick the right priorities. I like to focus on the big rocks instead of small rocks at the vision-setting stage. In today's world of information overload, it is really easy to get bombarded with too many things needing attention. This can dilute your focus across too many objectives. The negative effect of not having a clear, concentrated focus multiplies rapidly across many people when you are running a large organization. So you need to first ask yourself what few ultimate results are the true objectives of your organization, and then stay disciplined in focusing on those objectives. The ultimate goal might be a single objective or a few, but it should not be a laundry list. It is all right to have multiple metrics that are aligned to drive each objective, but the overall objectives themselves should be crisp and focused.

People

The next step in running an organization is to make sure you have the right people in the right jobs. This starts with first identifying the needs of the business to achieve the vision set out above. Then, I try to figure out what types of roles are needed to meet those needs. What will the organization structure look like? What kind of competencies, that is, attributes, skills, and behaviors, are needed in those roles to meet expected results? If there is a mismatch between the role and the person, it can set up both the employee and the business for failure. So this is a crucial step in making sure you have a well-running organization.

Once you have the right people in the right jobs, I try to make sure that the work environment encourages people to do their best. Selfless leadership, where leaders have a sense of humility and are committed to the success of the business over their own interests, is essential. An inclusive environment where everyone is encouraged to contribute is also a must. People's experience with the organization is for the most part shaped by their interaction with their immediate manager. Therefore, it is very important that a lot of care goes into selecting, encouraging and rewarding people managers who can create a positive environment for their employees.

Results

Finally, the organization needs to produce results towards achieving the vision and the objectives you set out. Do not confuse results with actions. You need to make sure you reward people based on performance towards producing results instead of actions. When setting commitments for people, you need to be thoughtful about what metrics you choose so that you incent the right behavior. This again helps build an environment that encourages people to do their best. Producing results also requires that you have a compelling strategy for the organization. Thus, you need to stay on top of where the market and customers are. This will help you focus your organization's efforts on anticipating customer needs, and proactively taking steps to delight customers. This is necessary to ensure that the organization's resources are prioritized towards those efforts that will produce the highest return on investment.

Microsoft Executive #3
  1. Different situations call for different pivots.  That said, I most often start with the customer, as technology is just a tool; ultimately, people are trying to solve problems.  I should note, however, that “customer” does not always mean the person who licenses or uses our products and/or services.  While they may be the focus, my true “customer” is sometimes the business itself (and its management), a business group, or a government (addressing a policy issue).  Often, the problem presented has to be solved in a multi-disciplinary way (e.g., a mixture of policy changes, education, technological innovation, and business process refinements).  Think, for example, about protecting children on-line.  While technology may help, any comprehensive solution may also involve government laws, parental and child education, a change in website business practices, etc.
  2. As noted above, the key is thinking in a multi-disciplinary way. People gravitate to what they know; thus the old adage that “if you have a hammer, everything you see is a nail.” Think more broadly about an issue, and a more interesting solution to the customer’s problem may present itself. (Scenario focused engineering works this way too.)
  3. It is partially about thinking differently (as discussed above), but also about seeking the right counsel.  There is an interesting truth about hard Presidential decisions: the more sensitive an issue, the fewer the number of people consulted (because of the sensitivity) and the less informed the decision.  Obtaining good counsel – while avoiding the pitfall of paralysis (either because you have yet to speak to everyone on the planet or because there was not universal consensus on what to do next) – is the key.
  4. (1) What is the right thing to do? (This may be harder than it looks because the different customers described above may have different interests.  For example, a costly solution may be good for customers but bad for shareholders.  A regulatory solution might be convenient for governments but stifle technological innovation.)  (2) What unintended consequences might occur? (The best laid plans….).  (3) Will the solution be achievable?
  5. I need quiet time; time to think deeply.

The big things that really stand out for me are using the customer as the North Star, balancing with multi-disciplinary perspectives, evaluating multiple, cascading ramifications, and leading with vision.

You Might Also Like

100 Articles to Sharpen Your Mind

Rituals for Results

Thinking About Career Paths

Categories: Architecture, Programming

How to form a team to develop your mobile app

Software Requirements Blog - Seilevel.com - Tue, 09/16/2014 - 17:00
If you’re a Product Manager, chances are you have lots of ideas, and your problem may be deciding on which one to execute against. If you’re interested in getting your name more out into the field and keeping your skillset relevant for today’s world, you may have considered making your own mobile app. Once you […]
Categories: Requirements

Sponsored Post: Apple, Flipboard, All Your Base, Scalyr, FoundationDB, AiScaler, Aerospike, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Apple has multiple openings. Changing the world is all in a day's work at Apple. Imagine what you could do here. 
    • Siri Operations Developer. Apple is looking for talented developers to help build the next generation internal cloud platform for Siri. This person should be excited about solving difficult distributed systems problems as well as constantly improving user-experience. This person will be working with a highly technical and motivated team solving the hard problems. Please apply here.
    • Site Reliability Engineer. The Apple Pay Site Reliability Team is hiring for multiple roles focused on the front line customer experience and the back end integration of Apple systems with our Network and Banking partners. Please apply here.
    • Senior Software Engineer, iTunes Infrastructure. Hands-on senior software engineering for the iTunes digital media supply chain engineering team. We are looking for a self-starting, energetic individual who is not afraid to question assumptions and who has excellent written and oral communication skills. Please apply here.
    • iTunes - Content Management Tools Engineer. The candidate should have several years of experience developing large-scale web-based applications using object-oriented languages. An excellent understanding of relational databases and data-modeling techniques is also a must. Please apply here.

  • Flipboard's Site Reliability Engineering Team is hiring! This team offers great challenges solving unique problems unlike any you have seen! They work exclusively in the cloud, ensuring a highly available and performant product for millions of users daily. If you have a passion for large-scale systems and next generation provisioning and orchestration tools, apply here.

  • UI Engineer. AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data. AppDynamics, a leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of the software that manages application architectures. Apply here.
Fun and Informative Events
  • All Your Base is the only curated database conference of its kind in the UK. Listen to talks from database creators, industry leaders and developers working at the coal face on where to store and how to handle your data. Book tickets.
Cool Products and Services
  • FoundationDB launches SQL Layer. SQL Layer is an ANSI SQL engine that stores its data in the FoundationDB Key-Value Store, inheriting its exceptional properties like automatic fault tolerance and scalability. It is best suited for operational (OLTP) applications with high concurrency. Users of the Key-Value Store will have free access to SQL Layer. SQL Layer is also open source; you can get started with it on GitHub.

  • Better, Faster, Cheaper: Pick Three. Scalyr is your universal tool for visibility into your production systems. Log aggregation, server metrics, monitoring, alerting, dashboards, and more. Not just “hosted grep” or “hosted graphs”; our columnar data store enables enterprise-grade functionality with sane pricing and insane performance. Trusted by in-the-know companies like Codecademy – get on board!

  • Whitepaper Clarifies ACID Support in Aerospike. In our latest whitepaper, author and Aerospike VP of Engineering & Operations, Srini Srinivasan, defines ACID support in Aerospike, and explains how Aerospike maintains high consistency by using techniques to reduce the possibility of partitions.  Read the whitepaper: http://www.aerospike.com/docs/architecture/assets/AerospikeACIDSupport.pdf.

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.  http://aiscaler.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.


Categories: Architecture

Docker images for Dart now available

Google Code Blog - Tue, 09/16/2014 - 16:25
By Søren Gjesse, Software Engineer on Dart

Developers increasingly want to use the same language and business logic on the client and the server to reduce risk and complexity. To help developers easily build and deploy end-to-end Dart apps, we are happy to announce ready-to-use Docker images for Dart. This expands our Docker usage further beyond the recently announced Docker support in Google App Engine. There are now three Dart-related images on hub.docker.com for you to use: dart, dart-runtime and dart-hello, which use the same naming scheme as the corresponding Node, Python and Go images already offered.

The image google/dart adds the Dart SDK to the google/debian Debian wheezy image. Running Dart in a container is now as simple as this:

  $ docker run -i -t google/dart /usr/bin/dart --version

The image google/dart-runtime inherits from google/dart and provides a convenient way to run a Dart server application using a one-line Dockerfile. To inherit from google/dart-runtime, your server application requires the following layout:

  • contains the pubspec.yaml and pubspec.lock files listing its dependencies
  • has a file bin/server.dart as the entrypoint script (a minimal sketch follows this list)
  • listens on port 8080
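
As a minimal sketch, bin/server.dart might look something like this (illustrative only – the response text is made up; the script just needs to serve on port 8080):

  import 'dart:io';

  void main() {
    // Bind to all interfaces on the port the runtime image expects
    HttpServer.bind(InternetAddress.ANY_IP_V4, 8080).then((HttpServer server) {
      server.listen((HttpRequest request) {
        request.response
          ..write('Hello from Dart in Docker')
          ..close();
      });
    });
  }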

With this layout and a Dockerfile with the following content:

FROM google/dart-runtime

you can run your app in a container as simply as this:

  $ docker build -t my-app .
  $ docker run -d -p 8080:8080 my-app

The last image, google/dart-hello, is a sample Dart server application that inherits from google/dart-runtime. Here is an example of how to run the sample:

  $ docker run -d -p 8080:8080 google/dart-hello

Depending on your local Docker installation, the address of the server differs. If you are using boot2docker with the default configuration, you can talk to the Dart server in the Docker container at http://192.168.59.103:8080:

  $ curl http://192.168.59.103:8080/version

You can choose a specific version tag, such as 1.6.0 (recommended), or the ‘latest’ tag for the latest stable version. Here is an example of running Dart 1.6 with Docker:

  $ docker run -i -t google/dart:1.6.0 /usr/bin/dart --version

If you haven't already, go and install boot2docker and start building your Dart server application using Docker images. Pushing these images to your server will simplify deployment and ensure you are running the same code on your server as you have been testing locally.

Posted by Mano Marks, Google Developer Platform Team
Categories: Programming

Cuttable Scope

Early on in my Program Management career, I ran into challenges around cutting scope.

The schedule said the project would be done next week, but the scope said the project would be done a few months from now.

On the Microsoft patterns & practices team, we optimized around “fix time, flex scope.” This ensured we were on time and on budget, and it helped constrain risk. Plus, as soon as you start chasing scope, you become a victim of scope creep and create a runaway train. It’s better to get smart people shipping on a cadence, and to focus on creating incremental value. If the trains leave the station on time, then when you miss a train, you know you can count on the next one. Plus, this builds a reputation for shipping and execution excellence.

And so I would have to cut scope, and feel the pains of impact ripple across multiple dependencies.

Without a simple chunking mechanism, it was a game of trying to cut features and trying to figure out which requirements could be completed and still be useful within a given time frame.

This is where User Stories and System Stories helped.  

Stories created a simple way to chunk up value.   Stories help us put requirements into a context and a testable outcome, share what good looks like, and estimate our work.  So paring stories down is fine, and a good thing, as long as we can still achieve those basic goals.

Stories help us create Cuttable Scope.  

They make it easier to deliver value in incremental chunks.

A healthy project start includes a baseline set of stories that help define a Minimum Credible Release, plus further stories that add incremental value.

It creates a lot of confidence in your project when there is a clear vision for what your solution will do, along with a healthy path of execution: a baseline release, followed by a pipeline of additional value, chunked up in the form of user stories that your stakeholders and user community can relate to.

You Might Also Like

Continuous Value Delivery the Agile Way

Experience-Driven Development

Kanban: The Secret of High-Performing Teams at Microsoft

Minimum Credible Release (MCR) and Minimum Viable Product (MVP)

Portfolios, Programs, and Projects

Categories: Architecture, Programming

Don’t Equate Story Points to Hours

Mike Cohn's Blog - Tue, 09/16/2014 - 15:00

I’ve been quite adamant lately that story points are about time, specifically effort. But that does not mean you should say something like, “One story point = eight hours.”

Doing this obviates the main reason to use story points in the first place. Story points are helpful because they allow team members who perform at different speeds to communicate and estimate collaboratively.

Two developers can start by estimating a given user story as one point even if their individual estimates of the actual time on task differ. Starting with that estimate, they can then agree to estimate something as two points if each agrees it will take twice as long as the first story.

When story points are equated to hours, team members can no longer do this. If someone instructs team members that one point equals eight (or any number of) hours, the benefits of estimating in an abstract but relatively meaningful unit like story points are lost.

When told to estimate this way, the team member will mentally estimate first in number of hours and then convert that estimate to points. Something the developer estimates to be 16 hours will be converted to 2 points.

Contrast this with a team member’s thought process when estimating in story points as they are truly intended. In this case, team members will consider how long each new story will take in comparison to other stories. You and I might agree this new story will take twice as long as a one-point story, and so we agree it’s a two.

However, you might be thinking that’s five hours of work, and I might be thinking it’s 10. In this way, story points are still about time (effort), but the amount of time per point is not pegged to the same amount for all team members.

If someone in your company wants to peg one point to some number of hours, just stop calling them points and use hours or days instead. Calling them points when they’re really just hours introduces needless complexity (and loses one of the main benefits of points).


Quote of the Month September 2014

From the Editor of Methods & Tools - Tue, 09/16/2014 - 13:36
The important thing is not your process, the important thing is the process for improving your process. Source: Henrik Kniberg, http://blog.crisp.se/wp-content/uploads/2013/08/20130820-What-is-Agile.pdf

Projects Where You Can’t Predict an End Date

Do you have projects where you can’t predict an end date? These tend to be things like a job search, a change project, and, with a tip of the hat to Cesar Abeid, your life. I like to call these “emergent” projects.

You might prefer to call them “adaptable” projects, but to me, every project has to be adaptable. These projects are emergent. You need to plan, but not too much. You need to replan. You need to take advantage of serendipity.

My column this quarter for projectmanagement.com is Applying Agile to Emergent Projects. (Free registration required.)

Enjoy!

Categories: Project Management

Quote of the Day - Plan for the Future

Herding Cats - Glen Alleman - Tue, 09/16/2014 - 01:05

Before you begin a thing remind yourself that difficulties and delays quite impossible to foresee are ahead. You can only see one thing clearly, and that is your goal. Form a mental vision of that and cling to it through thick and thin.
Kathleen Norris

And when they are impossible to see, we need margin and reserve to protect our project from them. This margin is for the irreducible risks, and the reserve is for buying down the reducible risks. Both irreducible (aleatory) and reducible (epistemic) uncertainties can be modeled for all projects. This is the role of risk management on a project. To NOT model these uncertainties is to ignore them. To ignore them is to say to those providing the money that you're not following Tim Lister's advice...

Risk Management is how Adults manage projects

As well, when you begin a thing, remember another important quote...

“Measurement is the first step that leads to control and eventually to improvement. If you can’t measure something, you can’t understand it. If you can’t understand it, you can’t control it. If you can’t control it, you can’t improve it.”
H. James Harrington

So if we're going to start a job and don't have an assessment - to some level of confidence - of how long it will take, how much it will cost, and what we will be capable of delivering when we get to the end of the money and the time - it is very likely that those paying for our work will be very disappointed in our efforts.

When we hear that we can make decisions in the absence of knowing the probabilistic cost, schedule, and likelihood of producing the needed capabilities, think back to the two quotes above. And consider the conjectures that require us to ignore those quotes and instead follow such statements in the absence of any evidence they are applicable outside of cases where the Value at Risk is low enough that those providing the money don't really care if it's all a loss.


† Original post from Mark Anderson's email from ExecuNet, 9/14/2014

Related articles:
  • Uncertainty is the Source of Risk
  • The World of Probability and Statistics
  • How to Deal With Complexity In Software Projects?
  • Both Aleatory and Epistemic Uncertainty Create Risk
  • Time to Revisit The Risk Discussion
Categories: Project Management

Project Success: An Overview

I recently asked a sample of the Software Process Measurement blog readers and listeners of the Software Process and Measurement Cast how they defined project success. I constrained the answers to the classic project management three-legged stool of on-schedule, on-budget and on-scope, to avoid the predominant answer: “it depends.” Even though many of the respondents found the question difficult, everyone had a succinct opinion. The ranked responses were:

  1. On-schedule
  2. On-scope
  3. On-budget

When I asked which of the three “ons” defined project success, the top response hands down was on-schedule, with nearly twice as many responses as on-scope. On-budget was a distant third, but interestingly, it was mentioned as the second most important criterion in more than a few responses.

I am not surprised by how the responses stacked up. Schedule is the most easily measured and monitored of the success criteria. Everyone knows how to read a calendar, and given that most projects are created with a due date, whether or not a project is on-time is obvious.

On-scope, while representing the voice of the customer, is more difficult for most projects to interpret and measure. Projects (whether Agile or plan-based) generally evolve over their life. That evolution means that the picture any stakeholder, developer or product owner had in their head at the beginning of the project may not represent what has been delivered when the project is completed. Since scope is less tangible, it is perceived to be less important (or at least more easily debated). Being on-scope may be more of a dis-satisfier than a satisfier: if what is delivered is not close to what was wanted, people will be dissatisfied; however, just being on-target or close does not move the needle.

The final leg of our stool was on-budget. In most cases the true budget of a project is an outcome impacted by schedule and scope, and is therefore a metric that all project managers monitor but have few levers to control. Given that budget performance at the project level is largely deterministic, the respondents perceived it as less important. Had I asked a room of IT finance personnel, I suspect the answer might have been different.

One of the most interesting observations is that even without context everyone had an opinion about the definition of success. While context, if given, may have shifted the respondents’ perspective, their initial response represents their natural cognitive bias. When asked to identify whether being on-schedule, on-budget or on-scope was the most important attribute of project success, everyone I asked had an opinion. And that opinion focused on the very tangible and measurable calendar metric: on-schedule. The basic definition of success that a project manager or leader carries with them is important because that opinion will guide how they try to influence project behavior.

Note: over the next few days we will explore the rationale behind why each leg of the stool was important to those who responded.


Categories: Process Management

Getting Things Right: A Look at Centralized vs Decentralized Systems Through the Eyes of Instant Replay

Three baseball umpires were sitting around a bar, talking about how they make calls on each pitch:

  • First umpire: “Some are balls and some are strikes, and I call them as they are.”
  • Second umpire: “Some are balls and some are strikes, and I call them as I see 'em.”
  • Third umpire: “Some are balls and some are strikes, but they ain’t nothin' until I call 'em.”


AT&T's Global Network Operations Center

 


MLB's Instant Replay Bunker
NHL's Situation Room

It’s fun to look at how concepts we think of as belonging primarily to the domain of computer science play out in other fields. One intriguing example is how Instant Replay reflects and even helps shape the culture of a sport by how replay is implemented: decentralized or centralized.

Lucrative TV deals have pumped huge sums of money into professional sports. With so much money in play, sports have shifted from being pure entertainment to wanting to get things right. The price of making a bad call is just too high to let the human element decide the fate of titans.

Getting things right is also a much talked about subject in computer science. In CS the language of getting things right uses terms like transaction, rollback, quorum, optimistic replication, linearizability, synchronization, lock, eventually consistent, compensating transaction, and so on.

In sports to get things right referees use terms like flag, penalty, by rule, ruling stands, reset the clock, down and distance, line to gain, the whistle blew, ruling confirmed, and ruling overturned.

Though the vocabulary is different, the intent is much the same. Correctness.

Intent is not all tech and sports have in common. As technology evolves we are seeing sports change to take advantage of the new capabilities technology offers. And those changes should be familiar to anyone in software. Sports have gone from a completely decentralized system of officiating to where we now see the NBA, NFL, MLB, and NHL, all converging on some form of a centralized system.

The NHL was the innovator, starting its centralized instant replay system in 2011. It works something like this: officials sit in a war room located in Toronto that looks a lot like every network operations center ever built. Video feeds from all games flow into the room. When there is a controversy or an obvious review-worthy play, Toronto is contacted for a quick review and judgement on the correct call. Every sport will implement its own centralized replay system in its own way, but that's the gist of it.

We’ve seen the exact same transformation as federated services like email have been replaced with centralized services like Twitter and Facebook. It turns out sports and computer science have some deeper similarities. What might those be?

Categories: Architecture

Don’t Focus on Your Strengths

Making the Complex Simple - John Sonmez - Mon, 09/15/2014 - 15:00

Guys, guys, guys… (and gals) I think we are starting to become a little “wussy”–all of us–no offense. I’ve found the most success in my life by working on my weaknesses and making them strengths. Why? Because, when you make a weakness into a strength you appreciate it more and you are more committed and […]

The post Don’t Focus on Your Strengths appeared first on Simple Programmer.

Categories: Programming

Why I Changed the Title of My New Book

NOOP.NL - Jurgen Appelo - Mon, 09/15/2014 - 14:40
Register for a FREE book!

The original title of my new book was Management 3.0 Workout. It was the result of a long and challenging process in which I learned that readers of my newsletter liked the Management 3.0 brand and also the workout metaphor that I used in many of my articles and chapters.

So, why did I change it to #Workout for the Amazon Kindle edition?

The post Why I Changed the Title of My New Book appeared first on NOOP.NL.

Categories: Project Management

The Release Paradox: releasing less often makes your teams slower and decreases quality

Software Development Today - Vasco Duarte - Mon, 09/15/2014 - 04:00

Herman is a typical agile coach. He works with teams to help them learn how to deliver high-quality software quickly.
Many teams want to focus on design, architecture, or (sometimes) even on business value. But they are usually not in a hurry to release quickly.
Recently Herman told me a story that illustrates how releasing quickly can help teams deliver high-quality software much faster than focusing on quality alone would. This is the case of a team that was working on a long-overdue project. They had used a traditional, linear process in the past and had been able to release software only very recently, after more than 12 months of work on the latest release.
Not surprisingly, they were having trouble releasing regularly. The software was not stable; once it was live it had many problems that needed to be fixed quickly and, worst of all, this was having a direct impact on the company’s business.
The teams were extremely busy fixing the problems they had added to the product in the last year and could not focus on solving the root causes of those problems.
They were in full-fledged firefighting mode. They worked hard every day to fix yet another problem and release yet another hot fix.
This lasted for a few weeks, but once the fire-fighting mode was over, Herman worked with the teams to improve their release frequency. During their work with Herman, those teams went from one year without any release to a regular release every two weeks.
At first the releases were not always possible, but with time they improved their processes, removed the obstacles preventing them from releasing every two weeks and started releasing regularly.
What happened next was surprising for the teams. The list of problems after each release did not grow - as they expected - but instead shrank.
When problems did come in from customers after a 2-week release, the teams were also much faster to fix them and quicker to release a fix if one was required. When a fix was not critical, they waited for the following release which was, after all, only 2 weeks away.
By focusing on releasing every two weeks, Herman’s teams were able to focus on small, incremental changes to their product. That, in turn, enabled them to fine-tune their development and release processes.
Here are some of the key changes the teams implemented:
  1. They started with a 4 week release cycle, and fine-tuned their daily builds and release testing process to enable a release every 2 weeks.
  2. They invested time and energy to improve their test automation strategy and automated the critical tests to enable them to run “enough” tests to be confident that the quality was at release level.
  3. They had some teams on maintenance duty in the first few iterations to make sure that any problem found after release could quickly be fixed, and released to customers if necessary.
  4. They changed their source code management strategy to enable some teams to work on longer term changes while others worked on the next release.
  5. They involved all teams necessary to complete a release in their iterations. This especially affected the production/operations team, localization team, documentation team, marketing team, and other teams when needed.
This list of changes was the result of the drive to complete each release and of learning from the failures of the previous release. Some changes were harder to implement; in particular, the testing strategy had to be changed and adjusted several times to allow for 2-week release cycles.
One of the key problems the teams had to solve was the lack of coordination with departments that directly contributed to the release but were not previously involved in their day-to-day work.
This process lasted several months, and would not have been possible without a clear Vision set forth by the teams in cooperation with Herman, who helped them discover the right way to reach that Vision within their context.
Herman’s work as a coach was that of a catalyst for management and the teams in that organization. He was able to create in their minds a clear picture of what was possible. Once that was clear, the teams and the management took ownership of the process and achieved a step-change in their ability to fulfill market demands and customer needs.
Today, this organization releases a new version of their product every two weeks. Unaware of it, their customers receive regular improvements to the product they use, and have no reason to change provider as they have an ever-improving experience when using this company’s services.

Picture credit: John Hammink, follow him on twitter

SPaMCAST 307 – Integration Testing and Agile, Software Sensei

http://www.spamcast.net

Listen to the Software Process and Measurement Cast 307 (podcast)

Software Process and Measurement Cast number 307 features our essay on integration testing and Agile. Integration testing is defined as testing in which components (software and hardware) are combined to confirm that they interact according to expectations and requirements.  Good integration testing is critical to effective development whether you are using Agile techniques or not.

Links and pictures noted in the essay:

Beer glass logo screen

Application Diagram

We also have a new installment from the Software Sensei. Kim Pries, the Software Sensei, discusses layered process audits and software inspections. Together, these techniques are a powerful approach to delivering high-quality software.

Next

SPaMCAST 308 features our interview with Michael West, author of Return On Process (ROP): Getting Real Performance Results from Process Improvement, and more! We had a great discussion about why some process improvements impact the organization’s bottom line and some don’t. Impacting the bottom line is not an accident.


Upcoming Events

DCG Webinars:

Raise Your Game: Agile Retrospectives – September 18, 2014, 11:30 EDT. Retrospectives are a tool that the team uses to identify what they can do better. The basic process – making people feel safe and then generating ideas and solutions so that the team can decide on what they think will make the most significant improvement – puts the team in charge of how they work. When teams are responsible for their own work, they will be more committed to delivering what they promise.

Agile Risk Management – It Is Still Important! – October 24, 2014, 11:30 EDT. Has the adoption of Agile techniques magically erased risk from software projects? Or have we just changed how we recognize and manage risk? Or, more frighteningly, by changing the project environment through adopting Agile techniques, have we tricked ourselves into thinking that risk has been abolished?


Upcoming: ITMPI Webinar!

We Are All Biased!  September 16, 2014 11:00 AM – 12:30 PM EST

Register HERE

How we think and form opinions affects our work whether we are project managers, sponsors or stakeholders. In this webinar, we will examine some of the most prevalent workplace biases such as anchor bias, agreement bias and outcome bias. Strategies and tools for avoiding these pitfalls will be provided.

Upcoming Conferences:

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1.  I have a great discount code!!!! Contact me if you are interested.

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes to bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management


Apache Spark, ETL and Parquet


One of the projects we’re currently running in my group (Amdocs’ Technology Research) is an evaluation of the current state of different options for reporting on top of, and near, Hadoop (I hope I’ll be able to publish the results when we have them). Anyway, part of the preparation for the benchmark includes ingesting a lot of events (CDRs) into the system and creating different aggregations on top of them. For instance, for voice call billing events we create yearly, monthly, weekly, daily and hourly aggregations at the subscriber level, which include measures like: count of calls, average duration, sum of pricing, median balance, hourly distribution of calls, popular destinations, etc.

We are using Spark to do the ingestion, and I thought there are two interesting aspects I can share which I haven’t seen too many examples of on the internet, namely:

  • doing multiple aggregations, i.e. aggregations that go beyond the word-count level
  • writing to a Hadoop output format (Parquet in the example)

I created a minimal example, which uses a simple, synthesized input and demonstrates these two issues – you can get the complete code for it on GitHub. The rest of this post will highlight some of the points from the example.

Let’s start with the core Spark code, which is simple enough:

View the code on Gist.

  • line 1 reads a CSV as a text file
  • line 3 does a simple parsing of the file, replacing each line with a class. The parsed RDDs are cached since we’d iterate them multiple times (once for each aggregation)
  • lines 5, 6 group by multiple keys. The nice way to do that is using SparkSQL, but the version I used (1.0.2) had very limited and undocumented SQL support (I had to go to the code). This should be better in future versions. If you still use “regular” Spark, I haven’t found a better way to group on multiple fields except creating the composite key by hand, which is what the getHourly and getWeekly methods do (create a string with year, month, etc. dimensions)
  • lines 11, 12 do the aggregations themselves (which we’d look into next). The output is again pair RDDs. This seems useless; however, it turns out you need pair RDDs if you want to save to Hadoop formats (more on that later)
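
For orientation, here is a rough sketch of the shape described above (record fields, helper names and paths are illustrative – this is not the author’s actual gist):

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.SparkContext._ // implicit pair-RDD operations (mapValues, values, save functions)

  // Illustrative CDR record – the real events carry many more fields
  case class Call(subscriber: String, year: Int, month: Int, week: Int,
                  day: Int, hour: Int, durationSec: Long, price: Double)

  def parse(line: String): Call = {
    val f = line.split(',')
    Call(f(0), f(1).toInt, f(2).toInt, f(3).toInt, f(4).toInt, f(5).toInt,
         f(6).toLong, f(7).toDouble)
  }

  // Composite string keys built by hand, as described above
  def getHourly(c: Call) = Seq(c.subscriber, c.year, c.month, c.day, c.hour).mkString("|")
  def getWeekly(c: Call) = Seq(c.subscriber, c.year, c.week).mkString("|")

  val sc = new SparkContext(new SparkConf().setAppName("cdr-aggregations"))
  val calls = sc.textFile("cdrs.csv").map(parse).cache() // parsed once, cached for reuse
  val hourlyGroups = calls.groupBy(getHourly _)           // pair RDD: key -> Iterable[Call]
  val weeklyGroups = calls.groupBy(getWeekly _)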

Aggregations

Once we do the group by key (lines 5, 6 above) we have a collection of Call records (Iterable[Call]). Now, Scala has a lot of nifty functions to transform Iterables (map, flatMap, foldLeft, etc.); however, we have a long list of aggregations to compute, the Calls collection can get rather big (e.g. for yearly aggregates), and we have lots and lots of collections to iterate (terabytes and terabytes). So instead of using all these features I opted for the more traditional Java-like aggregation, which, while being pretty ugly, minimizes the number of iterations I am making over the data.

You can see the result below (if you have a nicer looking, yet efficient, idea, I’d be happy to hear about it)

View the code on Gist.
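
As a hedged sketch of that style (a couple of representative measures only; the real code computes many more, reusing the Call class from the sketch above):

  // A few representative measures – the real aggregate is much wider
  case class CallAgg(count: Long, totalDurationSec: Long,
                     avgDurationSec: Double, totalPrice: Double)

  // One manual pass over the group instead of one pass per measure,
  // so each (possibly huge) collection is iterated exactly once
  def aggregate(calls: Iterable[Call]): CallAgg = {
    var count = 0L
    var dur = 0L
    var price = 0.0
    val it = calls.iterator
    while (it.hasNext) {
      val c = it.next()
      count += 1
      dur += c.durationSec
      price += c.price
    }
    CallAgg(count, dur, if (count == 0) 0.0 else dur.toDouble / count, price)
  }

  val hourlyAggregated = hourlyGroups.mapValues(aggregate) // still a pair RDD
  val weeklyAggregated = weeklyGroups.mapValues(aggregate)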

Save to Parquet

The end result of the aggregations is a hierarchical structure – a list of simple measures (averages, sums, counts, etc.) but also a phone book, which has an array of pricings, and an hours breakdown, which is also an array. I decided to store that in the Parquet/ORC formats, which are efficient for queries in Hadoop (by Hive/Impala, depending on the Hadoop distribution you are using). For the purpose of the example I included the code to persist to Parquet.

If you can use SparkSQL, then support for Parquet is built in and you can do something as simple as:

View the code on Gist.
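
For Spark 1.0.x that would look roughly like this (saveAsParquetFile lived on SchemaRDD in those releases; later versions moved this to the DataFrame API – treat the path as a placeholder):

  import org.apache.spark.sql.SQLContext

  val sqlContext = new SQLContext(sc)
  import sqlContext.createSchemaRDD // implicitly converts an RDD of case classes to a SchemaRDD

  // Note: a plain RDD of records, not a pair RDD – no key is needed on this path
  val weeklyAggregated = weeklyGroups.values.map(aggregate)
  weeklyAggregated.saveAsParquetFile("hdfs:///aggregations/weekly")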

You might have noticed that weeklyAggregated is a simple RDD and not a pair RDD, since a pair isn’t needed here. Unfortunately for me, I had to use Spark 0.9 (our lab is currently on Cloudera 5.0.1) – and in the more general case of writing to other Hadoop file formats you can’t use this trick anyway.

One point specific to Parquet is that you can’t write to it directly – you have to use a “writer” class, and Parquet has Avro, Thrift and ProtoBuf writers available. I thought that was going to be nice and easy, so I looked for a Scala library to serialize to one of these formats and chose Scalavro (which I had used in the past). It turns out that, while the serialization is Avro-compatible, it does not produce the standard Avro classes the Avro writer for Parquet expects. No problem, I thought, I’ll just take another library that works with Thrift – Twitter has one, called Scrooge, which works nicely – but again it is not completely standard (it doesn’t inherit from TBase). Oh well, maybe protobuf will do the job, so I tried ScalaBuff; alas, this didn’t work well either (it inherits from LightMessage and not Message, as expected by the writer) – I ended up using the plain Java generator for protobuf. This further uglified the aggregation logic, but at least it got the job done. The output from the aggregation is a pair RDD of protobuf-serializable aggregations.

So why the pair RDD? It turns out all the interesting save functions that can use a Hadoop file format only exist on pair RDDs and not on regular ones. If you know this little trick, the code is not very complicated:

View the code on Gist.

The first two lines in the snippet above configure the writer and are specific to Parquet. The last line does the actual save to file – it specifies the output directory, the key class (Void, since we don’t need a key with the Parquet format), the value class for the records, the Hadoop output format class (Parquet in our case) and lastly a job configuration.
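
A sketch of that shape, assuming a protobuf-generated record class CallAggProto, a hypothetical toProto converter, and the parquet-mr protobuf module (class and method names from memory – check them against your Parquet version):

  import org.apache.hadoop.mapreduce.Job
  import parquet.proto.ProtoParquetOutputFormat

  val job = new Job()                                                    // carries the output configuration
  ProtoParquetOutputFormat.setProtobufClass(job, classOf[CallAggProto])  // tell the writer which message to write

  hourlyAggregated
    .map { case (_, agg) => (null.asInstanceOf[Void], toProto(agg)) }    // toProto: hypothetical CallAgg -> CallAggProto
    .saveAsNewAPIHadoopFile(
      "hdfs:///aggregations/hourly",                    // output directory
      classOf[Void],                                    // key class (ignored by Parquet)
      classOf[CallAggProto],                            // value class for the records
      classOf[ProtoParquetOutputFormat[CallAggProto]],  // Hadoop output format class
      job.getConfiguration)                             // job configuration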

To summarize: when you want to do more than a simple toy example, the code may end up more complicated than the concise examples you see online. When working with lots and lots of data, the number of iterations you make over the data also begins to matter, and you need to pay attention to that.

Spark is a very powerful library for working on big data, with a lot of components and capabilities. However, its biggest weakness (in my opinion, anyway) is its documentation. It makes it easy to start working with the platform, but when you want to do something a little more interesting you are left to dig around without proper directions. I hope this post will help some others when they do their digging around :)

Lastly, again, a reminder that you can get the complete code for this on GitHub.

Categories: Architecture

Community of Practice: A Meeting Checklist

Hand Drawn Chart Saturday

The simplest definition of a community of practice (COP) is people connecting, encouraging each other and sharing ideas and experiences. A few basic logistics will affect the efficiency of a community of practice. On the surface, logistics impact the ease and comfort of meeting, but in a deeper sense they impact the ability of members to connect and share information. A basic logistics checklist would include meeting announcements, facilities and agenda.

Community of practice meeting agenda:

Basics

__ Meeting date

__ Meeting location

__ Meeting time

__ Agenda

Meeting time and place can include lunch and learns (getting together over lunch to connect and learn). For teams or groups that are not co-located, videoconferences/teleconferences might be the only option.

Communities of practice are typically more effective when members can physically meet. Teleconferences and videoconferences tend to foster partial attention during presentations and do not support random networking. Where possible, each COP should be local. That means that in larger organizations each location will have its own COP for an area of interest, and the local COP will connect periodically with like COPs outside its specific location. For example, I recently observed an Agile COP in a large multinational organization (one that was supported and funded by the corporate headquarters). A team from headquarters facilitated the development of local COPs in each location where there was interest. Facilitation included ensuring rooms were available for meetings and that funding was provided for lunch, activities and, at least once, for beer and wine. Most importantly, the corporate sponsorship ensured time to meet was made available. On a quarterly basis an organizational videoconference was held (led by one of the local groups), and annually an in-person conference was held. This has been going on for seven years.

The meeting program is crucial for gaining and holding interest. There are a number of programming items for a community of practice that can be considered:

Category 1 – Networking:

__ Networking time with food or other sorts of social lubricant (sharing meals/food has been shown to be an effective team building tool)

__ Open sharing rounds (go around the room and share a success or failure)

Category 2 – Content:

__ Process demonstrations (demonstrate process/technique used within the organization or review a project that was of interest)

__ Agile or process related games (interactive games generate involvement and interest)

__ YouTube videos (introduce new ideas from outside experts without having to arrange for a speaker)

__ Outside presentations (new ideas from outside the team boundary to challenge biases)

The goal of the programming for the sessions is to keep the community interested, learning and interacting based on a common interest. For a typical one-hour session I generally select one option from each category.

Note: Each programming element will have different logistics. For example, if the session includes lunch, someone will need to order lunch. A short and probably incomplete list of items to consider is shown below:

__ Food / drink ordered

__ Audio visual items

__ Projector

__ Internet

__ Telephone / conference number

__ Webinar session

__ Speaker (with any needed security approval)

__ Props for Agile or process games

Wrap-up

__ Solicit ideas for the next session

__ Set the time and date for next session

__ Short retrospective (for example identify one thing you learned and one thing that can be done better next time)

Getting a community of practice together so that it can meet its goal – generating support among the community members, providing a platform to share trials and successes, and injecting new ideas and energy into the community – is not an ad-hoc process. Solid, long-running COPs work diligently to ensure that each interaction holds members’ attention and delivers value!


Categories: Process Management

DDD East Anglia 2014

Phil Trelford's Array - Sun, 09/14/2014 - 00:50

This Saturday saw the Developer Developer Developer! (DDD) East Anglia conference in Cambridge. DDD events are organized by the community for the community with the agenda for the day set through voting.

T-Shirts

The event marked a bit of a personal milestone for me, finally completing a set of DDD regional speaker T-Shirts, with a nice distinctive green for my local region. Way back in 2010 I chanced a first appearance at a DDD event with a short grok talk on BDD in the lunch break at DDD Reading. Since then I’ve had the pleasure of visiting and speaking in Glasgow, Belfast, Sunderland, Dundee and Bristol.

Talks

There were five F# related talks on the day, enough to fill an entire track:

Tomas kicked off the day, knocking up a simple e-mail validation library with tests using FsUnit and FsCheck. With the help of Project Scaffold, by the end of the presentation he’d generated a NuGet package, a continuous build with Travis and FAKE, and HTML documentation using FSharp.Formatting.

Anthony’s SkyNet slides are already available on SlideShare:

Building Skynet: Machine Learning for Software Developers from bruinbrown

ASP.Net was also a popular topic with a variety of talks including:

All your types are belong to us!

The title for this talk was borrowed from a slide in a talk given by Ross McKinlay which references the internet meme All your base are belong to us.

You can see a video of an earlier incarnation of the talk, which I presented at NorDevCon, over on InfoQ, where they managed to capture me teapotting.


The talk demonstrates accessing a wide variety of data sources using F#’s powerful Type Provider mechanism.

The World at your fingertips

The FSharp.Data library, run by Tomas Petricek and Gustavo Guerra, provides a wide range of type providers giving typed access to standards like CSV, JSON and XML, through to large data sources like Freebase and the World Bank.

With a little help from FSharp.Charting and a simple custom operator-based DSL, it’s possible to view interesting statistics from the World Bank data with just a few keystrokes:

[uk;fr;de] => fun country -> country.``CO2 emissions (metric tons per capita)`` // FSharp.Data + FSharp.Charting pic.twitter.com/D65m2KmZGp

— dirty coder (@ptrelford) September 11, 2014

The JSON and XML providers give easy typed access to most internet data, and there’s even a branch of FSharp.Data with an HTML type provider providing access to embedded tables. A minimal JSON example is sketched below.
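
As a small illustration (not from the talks – the inline JSON samples are made up), the JSON provider infers a type from a sample document at compile time and then gives typed access to anything parsed against it:

  open FSharp.Data

  // The provider generates the Person type from the sample string
  type Person = JsonProvider<""" {"name":"Anna","age":34} """>

  let tomas = Person.Parse(""" {"name":"Tomas","age":4} """)
  printfn "%s is %d" tomas.Name tomas.Age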

Enterprise

The SQLProvider project provides typed access, with LINQ support, to a wide variety of databases including MS SQL Server, PostgreSQL, Oracle, MySQL, ODBC and MS Access.

FSharp.Management gives typed access to the file system, registry, WMI and Powershell.

Orchestration

The R Type Provider lets you access and orchestrate R packages inside F#.

With FCell you can easily access F# functions from Excel and Excel ranges from F#, either from Visual Studio or embedded in Excel itself.

The Hadoop provider allows typed access to data available on Hive instances.

There’s also type providers for MATLAB, Java and TypeScript.

Fun

Type Providers can also be fun: I’ve particularly enjoyed Ross’s Choose Your Own Adventure provider and, more recently, the 2048 provider.

Write your own Type Provider

With Project Scaffold it’s easier than ever to write and publish your own F# type provider. I’d recommend starting with Michael Newton’s Type Providers from the Ground Up article and the video of his session at Skills Matter.

You can learn more from Michael and others at the Progressive F# Tutorials in London this November:

DDD North

The next DDD event is in Leeds on Saturday October 18th, where I’ll be talking about how to Write your own Compiler, hope to see you there :)

Categories: Programming