
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Scaling Agile: Release Train Engineer

The Release Train Engineer is part of the crew that guides the train.


As groups of coordinated work get larger, the tools, techniques and roles needed to keep the work coordinated expand. Complexity requires more controls to keep the train on the tracks. The Release Train Engineer (RTE) is one of the techniques the Scaled Agile Framework Enterprise (SAFe) uses to keep the 50 to 125 cars (the number of people in an Agile Release Train, or ART) on the track. Formally, the RTE is a scaled Scrum master; in practice, the role tends to take on a broader footprint. It can be viewed as a mixture of Scrum master and program leader. The roles and responsibilities of an RTE include:

  • Provides guidance to the release train. While the ART and the teams that are part of the train are predominantly self-managing and self-organizing, the RTE provides guidance to help teams adapt to the environment.
  • Organizes, runs and facilitates the release planning meeting. The release planning meeting is a two-day meeting that includes all of the people involved with the ART. It is a crucial meeting that kicks off and drives each program increment.
  • Assists in tracking the ART. In many organizations tracking and reporting fall on the shoulders of the RTE. The function is better placed in a Program Management Office with the RTE providing assistance. Let the RTE focus on facilitating and leading rather than on owning standard administration tasks.
  • Chairs the scrum of scrums (SoS). As a leader and the scrum master of scrum masters, the RTE is perfectly positioned to ensure that teams interact and share cross-team issues and risks by facilitating the SoS.
  • Resolves and/or escalates impediments. Scrum masters facilitate the resolution of impediments within their span of influence. Once an impediment is outside a scrum master’s span of influence, the RTE steps in to remove it or escalate it to someone who can.
  • Facilitates intergroup communication and dependency resolution. Much of the role of the RTE is facilitating communication between groups. In a perfect world, every team would have a proper understanding of what was going on in the other teams it could impact or be impacted by; however, many times the environment precludes that possibility. The RTE helps keep information moving so that the teams can focus on delivery.
  • Facilitates process improvement at the ART level. While teams pursue improvements based on retrospectives, the RTE ensures the overall processes are continually being refined. An example of a process improvement event that the RTE leads is the retrospective that caps the release planning meeting.
  • Cajoles, leads and, in general, helps make stuff happen. All leaders use their social influence to attain a goal. RTEs are no different. SAFe (and most other frameworks) uses goals that can be traced from business value to the work a team delivers. Work in ALL organizations is goal oriented. Leaders help everyone involved focus on those goals.

The release train engineer has a hand on the throttle of the agile release train, but just like the engineer on a freight train, they are only one part of the process that decides speed and direction. Locomotive engineers inspect their trains, monitor performance, and communicate and interact with team members and passengers. They support getting the train to its appointed destination not only on time but safely.


Categories: Process Management

Subject: Changes to the Google APIs Terms of Service

Google Code Blog - Wed, 11/05/2014 - 22:59
We'd like to inform you about some changes we're making to our Google APIs Terms of Service.

You’ll find the new terms at the same location as always, and we’ve also posted a summary of the main changes.

Some of the bigger changes include:
  • On the subject of Data Portability: we are making it clear that your obligation to abide by the Data Portability requirements continues for as long as you use or store user data obtained through the APIs (whether or not you are still using our APIs) and that you agree you won’t make that data available to third parties who don’t also abide by this obligation. In other words: we value end users’ control over their data; if you’d like to use our APIs, you should, too.
  • We are requiring developers to not violate any other Google terms of service.
  • We need to make our APIs better, and we may sometimes do that by using content submitted through the APIs. We reserve the right to do this, but we will only do this to provide, secure and improve the APIs (and related service(s)) and only in accordance with our privacy policies.

And some of the smaller changes include:
  • We are asking developers who use our services to keep us up-to-date on how to contact them.
  • Making it clear that the APIs should not be used for high risk activities, with ITAR data, or with HIPAA protected health information (unless Google agrees in writing).
  • Asking developers to make reasonable efforts to keep their private keys private and not embed them in open source projects.
  • We are reminding you that we set limits on your usage of our APIs; if you need more, you need to obtain our consent.
  • Most changes to the Terms of Service may go into effect 30 days after posting (rather than the prior 7 days).
The updated terms will go into effect on December 5, 2014.

Posted by Dan Ciruli, Product Manager
Categories: Programming

A Few Really Good Tech Podcasts for Your Listening Pleasure

Podcasts are back. Since I listen to a bunch of podcasts every week, I was quite surprised to learn about their resurrection. But facial hair is back too, so I will revel in my short-lived trendiness.

Why are podcasts back? One suggested reason is a little sad. People commute more than 25 hours a month, and since cars are now faithful little bluetooth slaves, broadcasting podcasts over luxury car speakers is as easy as smart technology and a fast cell network can make it. Podcasts are now a viable P2P radio replacement.

That’s the demand, means, motive, and opportunity side of things. What about supply?

Historically podcasts start in fire, the passion quickly burning to ash as podcasters learn making podcasts is a lot of hard work...and poorly remunerated work at that. So podcasts have a high mortality rate.

What’s changed? Money. Strangely, people will exchange effort for money, so if podcasts can make money they will have the fuel they need to burn bright through the night.

Money from Subscriptions is New
Categories: Architecture

The Four Whys

NOOP.NL - Jurgen Appelo - Wed, 11/05/2014 - 15:11
Why

I’m sure you recognize the problem. You’re nose-deep in some activity that takes ten times longer than expected, and suddenly you think, “Why on earth am I wasting my time with this nonsense?” Yes, I have that too sometimes! I had it yesterday when I was explaining to one of my customers how sales tax works in Europe. It took so long, I needed two coffee breaks.

The post The Four Whys appeared first on NOOP.NL.

Categories: Project Management

Scaling Agile: Agile Release Trains and Value Streams

A value stream is a set of activities that are intended to create and deliver a consistent set of deliverables that are of value to customers. Using the train metaphor, value is the cargo the train delivers.


The Agile Release Train (ART) in SAFe is the primary high-level tool used to organize activity to deliver value. Other frameworks (Agile and classic) use projects and programs to organize around the delivery of value and value’s alter ego, funding. ARTs are used to deliver, enhance and support the functionality needed for business facing value streams to exist within an organization.

A value stream is a set of activities that are intended to create and deliver a consistent set of deliverables that are of value to customers. Using the train metaphor, value is the cargo the train delivers. The term consistent in the definition is important to delineate an ART from a typical project or program. The Project Management Institute defines a project as a temporary group activity designed to produce a unique product, service or result. A project or program might initiate or support a value stream in a non-scaled Agile organization; however, the focus is on a temporary endeavor rather than on supporting an ongoing or consistent one. The long-term focus makes it significantly easier to embrace Agile principles.

Value streams reflect a business orientation that integrates strategic themes and needs into how an organization delivers value. The products and services of most organizations tend to be long lived; therefore value streams have the same long-term orientation. SAFe ARTs organize activity around value streams. In practice, a value stream in an organization can be large enough to support multiple ARTs. A value stream is a requirement for an ART, whether the ART develops a new value stream or enhances or supports an existing one; without access to business value, it is hard to generate strategic alignment within the organization. When ARTs (or any significant part of an organization) are out of strategic alignment, support in terms of budget, people and resources will be difficult to acquire. This will lead to disillusionment with the ART. Using our train metaphor, the engines, train cars and crew will be dispersed to other trains. Without strategic alignment an organization will not be able to support the structure of an ART and the stable teams that are needed.

Value streams represent the whole path an organization takes to deliver value. The phrase “concept to cash” is often used to describe the breadth of vision that a value stream must have. Taking this big picture means a value stream takes on a whole business context rather than that of a purely software development project or program. To adjust to this breadth of vision, ARTs need to incorporate the needs of business processes, architecture and systems as well as application software. Agile Release Trains make sense as long as they serve a real value stream.


Categories: Process Management

The Perils of Being a BA

Software Requirements Blog - Seilevel.com - Tue, 11/04/2014 - 17:00
I’ve been in the software business for a while now – as a programmer, a project manager, and a business analyst. I think it’s affecting the way my synapses fire. Everywhere I experience poor customer service or encounter a web site that doesn’t work well, I get all tied up in knots. What’s worse, I’ll […]
Categories: Requirements

Why I Prefer Commitment-Driven Sprint Planning

Mike Cohn's Blog - Tue, 11/04/2014 - 15:00

Over the past two weeks, I’ve described two alternative approaches to sprint planning:

This week I want to address why I prefer commitment-driven sprint planning. First, though, here is a very brief refresher on each of the approaches.

Brief Summary of Sprint-Planning Approaches

In velocity-driven sprint planning, a team selects a set of product backlog items whose high-level estimates (usually in story points or ideal days) sum to their average velocity.

In commitment-driven sprint planning, a team commits to one product backlog item at a time by roughly identifying and estimating the tasks that will be involved, and stopping when they feel the sprint is full.

Because “commitment” is often mistaken as “guarantee,” this approach is becoming more commonly referred to as capacity-based sprint planning.
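The one-item-at-a-time stopping rule can be sketched in a few lines of Python. The backlog items, task-hour estimates, and 40-hour capacity below are all invented for the example:

```python
def plan_sprint(backlog, capacity_hours):
    """Commitment-driven planning: commit to items one at a time,
    summing task-hour estimates, until the next item would overflow
    the team's capacity."""
    committed, used = [], 0
    for item, task_hours in backlog:
        cost = sum(task_hours)  # rough task-level estimate for the story
        if used + cost > capacity_hours:
            break
        committed.append(item)
        used += cost
    return committed, used

# Hypothetical backlog: (story, estimated hours for its tasks)
backlog = [
    ("login page", [6, 4, 3]),
    ("audit log",  [8, 5]),
    ("csv export", [4, 4, 2]),
    ("dark mode",  [12, 6]),
]
committed, used = plan_sprint(backlog, capacity_hours=40)
print(committed, used)  # the sprint is "full" at 36 of 40 hours
```

In practice the "sprint is full" judgment is a team conversation rather than a hard arithmetic cutoff; the code just makes the shape of the loop concrete.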

Velocity is Variable

To begin to see why I prefer a commitment-driven approach to sprint planning, consider the graph below which shows the velocities of a team over the course of nine sprints. The first thing you should notice is that velocity is not stable. It bounces around from sprint to sprint.

The first big drawback to velocity-driven planning is that velocity is too variable to be reliable for short-term (i.e., sprint) planning. My experience is that velocity bounces around in about a plus or minus 20% range. This means that a team whose true velocity is 20 could easily experience a velocity of 16 this sprint and 24 next sprint, and have that just be random variation.

When a team that averages 20 sometimes completes 24 units of work, but sometimes completes only 16, we might be tempted to say they are accomplishing more or less work in those sprints. And that is undoubtedly part of it for many teams. However, some part of the variation is attributable to the imprecision of the units used to estimate the product backlog items.

For example, most teams who estimate in story points use a subset of possible estimates such as the Fibonacci sequence of 1, 2, 3, 5, 8, 13 or a simple doubling (1, 2, 4, 8, 16). When a team using the Fibonacci sequence takes credit for finishing a 5-point story, they may have really only finished a 4-point story. This tends to lead to a slight overstating of velocity.

In the long term this won’t be a problem, as the law of large numbers can kick in and things will average out. In the short term, though, it can create problems.
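The rounding-up effect is easy to simulate. Here is a minimal sketch, assuming (purely for illustration) that true story sizes are spread uniformly between 1 and 13 points and always get credited at the next Fibonacci estimate:

```python
import random

FIB = [1, 2, 3, 5, 8, 13]

def fib_round_up(true_size):
    """Credit the story at the nearest Fibonacci estimate at or above it."""
    return min(f for f in FIB if f >= true_size)

random.seed(42)
true_total = credited_total = 0.0
for _ in range(10_000):
    true_size = random.uniform(1, 13)  # hypothetical "real" effort
    true_total += true_size
    credited_total += fib_round_up(true_size)

overstatement = credited_total / true_total - 1
print(f"velocity overstated by roughly {overstatement:.0%}")
```

Under these toy assumptions the credited points run well above the true effort; real backlogs are not uniform, so the size of the bias will differ, but the direction is the same.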

Anchoring

A second problem with velocity-driven sprint planning is due to the effect of anchoring. Anchoring is the tendency for an estimate to be unduly influenced by earlier information.

A good example of anchoring is coming up as we approach the Christmas season. Suppose you go into a store and see a jacket you like. There’s a sign saying $400 but that price is crossed out and a new price of $200 is shown. Your brain instantly thinks, “Awesome! A $400 jacket for only $200!” And, of course, this is why the store shows you the original price.

Who cares what that jacket used to sell for? It’s $200 today. That should be all that matters in your buy-or-not decision. However, once that $400 price is in your brain, it’s hard to ignore it. It anchors you into thinking you’re getting a good deal when the jacket is $200.

I won't go into the details here, but the best paper I've read on anchoring is from Magne Jørgensen and Stein Grimstad and is called “The Impact of Irrelevant and Misleading Information on Software Development Effort Estimates.”

Anchoring and Velocity-Driven Planning

But what does anchoring have to do with sprint planning?

Consider a team in a velocity-driven sprint planning meeting. They’ve selected a set of stories equal to their average velocity. They then ask themselves if that set of stories feels like the right amount of work. (As described in the post on velocity-driven sprint planning, they may also identify tasks and estimate those tasks as an aid in answering that.)

A team doing velocity-driven sprint planning is pre-disposed to say, “Yes, we can do this,” even if they can’t. They are anchored by knowing that the selected amount of work is the same as their average velocity.

It’s like me showing you that jacket with the $400 sign next to it and asking, “How much do you think this jacket sells for?” Even if you don’t say exactly $400, that $400 is in your head and will anchor you.

So, because of anchoring, a team with an average velocity of 20 is inclined to say the sprint is appropriately filled with 20 points – even if that work is really more like 16 or 24 units of work.

This will lead to teams sometimes doing less than they could have. It could also lead to teams sometimes doing more. But in those cases, my experience is that teams will be more likely to drop a product backlog item or two. Experience definitely tells me that teams are more likely to drop than to add.

Velocity Is Great for Longer-Term Planning

By this point, you may be thinking I’m pretty opposed to velocity-driven planning. That’s only half true. Although I’m not a fan of using velocity to plan a single sprint, I am quite likely the world’s biggest fan of using velocity to plan for the longer term. I’ve certainly written more about it than anyone I know.

The problem with velocity-driven sprint planning is that velocity is simply too variable to be useful in the short term. To illustrate why, suppose you get a job working at a car wash. On your first morning, your boss announces that you have a quota: You need to wash four cars per hour.

At the end of the first hour, though, you’ve only washed three cars. Should you be fired? Of course not. It was only one hour and perhaps you washed three large cars. Or perhaps it was overcast and only three drivers brought cars in to be washed.

What about at the end of your first day, an 8-hour shift? By then you should have washed 32 cars. But you’ve only washed 30. Should you be fired now? Again, almost certainly not.

What about the end of your first month? Figuring 20 days and 32 cars per day means you should have washed 640 cars. Suppose, though, you only washed 600. (That’s the same percentage as washing 30 instead of 32 in a day.) Should your boss fire you now? Perhaps still not, but it’s starting to be clear that you are not making quota.

What if you’re off by the same percentage at the end of the year? If that quota was well chosen—and all other employees are meeting it—your boss at some point should consider letting you go (or dealing with your below expected productivity in some way, but this post isn’t about good ways of dealing with productivity issues).

That quota is useful in the same way velocity is useful: over the long term. Think of how poorly run we’d consider that car wash if an employee was fired in the first hour of missing quota. The longest tenured employee would have been on the job for three days. So while that quota may be useful when measured over a month, it is not useful when measured hourly.

Velocity is useful in the long term, not the short term. A team with 30 product backlog items to deliver can sum the estimates (likely in story points) on those items and forecast when they’ll be done. A team with three product backlog items to deliver would be better off doing good ol’ task decomposition on those three items rather than relying on velocity.
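The long-term arithmetic is just a sum and a division. Here is a sketch with invented numbers (30 backlog items, an average velocity of 20, and roughly ±20% sprint-to-sprint variation):

```python
import math

# Hypothetical story-point estimates for 30 remaining backlog items
backlog_points = [3, 5, 8, 2, 5, 13, 3, 5, 8, 5,
                  2, 3, 5, 8, 3, 5, 2, 13, 5, 3,
                  8, 5, 3, 2, 5, 8, 5, 3, 5, 2]
total = sum(backlog_points)

avg_velocity = 20   # points per sprint, observed over many sprints
spread = 0.20       # sprint-to-sprint variation of about +/-20%

best_case  = math.ceil(total / (avg_velocity * (1 + spread)))
worst_case = math.ceil(total / (avg_velocity * (1 - spread)))
print(f"{total} points remaining: done in {best_case} to {worst_case} sprints")
```

Reporting the forecast as a range keeps the variability honest instead of promising a single finish date.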

So, Now What?

If you are already doing velocity-driven sprint planning and it’s working for you, don’t switch. However, if your team is new to Scrum or if your team is experiencing some of the issues I’ve described here, then I recommend using commitment-driven sprint planning.

Please let me know what you think in the comments below.

Assessing Value Produced By Investments

Herding Cats - Glen Alleman - Tue, 11/04/2014 - 14:28

Speaking at the Integrated Program Management Conference in Bethesda MD this week. The keynote speaker Monday was Katrina McFarland, Assistant Secretary of Defense (Acquisition)(ASD(A)), the principal adviser to the Secretary of Defense and Under Secretary of Defense for Acquisition. 

During her talk she spoke of the role of Earned Value Management. Here's a mashup of her remarks...

EV is a thoughtful tool as the basis of a conversation for determining the value (BCWP) produced by the investment (BCWS). This conversation is an assessment of the efficacy of our budget. 

We can determine the efficacy of our budget through:

  • Measures of Effectiveness of the deliverables in accomplishing the mission or fulfilling the technical and operational needs of the business.
  • Measures of Performance of these deliverables to perform the needed functions to produce the needed effectiveness.
  • Technical Performance Measures of these functions to perform at the needed technical level.

These measures answer the question of what is the efficacy of our budget in delivering the outcomes of our project.
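In standard earned value terms, that assessment boils down to a couple of ratios. A minimal sketch follows; the month-end figures are invented, and ACWP (actual cost of work performed) comes from standard EVM vocabulary rather than the remarks quoted above:

```python
def schedule_performance_index(bcwp, bcws):
    """Earned value / planned value: below 1.0 means behind plan."""
    return bcwp / bcws

def cost_performance_index(bcwp, acwp):
    """Earned value / actual cost: below 1.0 means over budget."""
    return bcwp / acwp

# Hypothetical month-end figures, in $K
bcws, bcwp, acwp = 500, 450, 520
print(f"SPI = {schedule_performance_index(bcwp, bcws):.2f}")   # 0.90
print(f"CPI = {cost_performance_index(bcwp, acwp):.2f}")       # 0.87
```

The indices are only the starting point of the conversation; the Measures of Effectiveness and Performance above are what give the ratios their meaning.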

The value of the project outcomes must be traceable to a strategy for the business or mission. Once this strategy has been identified, the Measures of Effectiveness, Measures of Performance, and Technical Performance Measures can be assigned to the elements of the project. These are shown in the figure below.


This approach is scalable up and down the complexity of projects based on five immutable principles of project success.

5 Immutable Principles

Without credible answers to each of these questions, the project is on the path to failure on day one.

Categories: Project Management

Scaling Agile: Agile Release Trains


The Scaled Agile Framework Enterprise (SAFe) is one of the frameworks that has emerged for scaling Agile development. The Agile Release Train (ART) is one of the core concepts introduced as part of the SAFe framework. An ART is a group of logically related effort (train cars) traveling in the same direction (destination) at a predictable cadence (speed). At its heart, most everyone on a train is focused on achieving a common goal, which is delivering all types of cargo to a destination. The ART in SAFe is a long-lived team of teams organized around a significant value stream to work towards delivering a common goal. The attributes of an ART interlock to keep it on track to delivering value by ensuring it adheres to Agile and lean principles. An ART has the following attributes:

  1. Teams of teams within an ART typically encompass 50 to 125 people. The size limits reflect that large groups have issues with communication and cohesion, which is problematic when pursuing a common goal, while smaller groups have issues absorbing the process overhead, which makes development expensive.
  2. ART teams identify and use additional roles not typically found in some Agile frameworks, such as the Release Train Engineer, release managers, product management, DevOps and shared resources, to name a few.
  3. Teams work with the ART on a long-term basis, 18+ months.
  4. The majority of team members are 100% dedicated to the teams they are involved with.
  5. ARTs are driven by business goals based on the organization’s strategic vision and themes, portfolio management constraints and enterprise architectural vision.
  6. ARTs are operationalized and synchronized by cadence at two levels. At a macro level, ARTs leverage a longer-cycle cadence called a program increment, which is generally 8-12 weeks, and a shorter Scrum team-level cadence (two weeks is typical).

The Agile Release Train is a tool to help scale Agile to the organization/product level. Conceptually the ART is similar to the operating system (OS) release trains I was first exposed to in the 1990s. The OS manufacturer, and every product manufacturer I have observed from automobiles to ATMs, plans the introduction of features over some period of time. An ART provides a mechanism to translate that plan into Agile teams so that organizations can communicate a forecast for the delivery of features to stakeholders and customers. Some degree of predictability is critical if other businesses or parts of a business need to use your output so they can plan their business. Real profits and jobs usually ride on those release trains.

Other frameworks can be used to scale Agile, including DAD, DSDM and arguably Scrum itself. The need for frameworks with additional overhead to scale Agile is controversial. Whether you are an adherent of scaled frameworks or not is less important than understanding the solutions these frameworks deliver. An Agile Release Train is a tool to organize activity to deliver a common business goal while being both predictable and responsive to the dynamic business environment. At its most basic level, isn’t that just a definition of Agile?


Categories: Process Management

Material Design Gets More Material

Google Code Blog - Mon, 11/03/2014 - 20:32

A few weeks ago, we published our first significant update to the material design guidelines. Today, we’re addressing even more of the comments and suggestions you’ve provided with another major update to the spec. Check out some of the highlights below.

  • Links to Android developer docs. One of the biggest requests we’ve heard from developers and designers is that the guidelines should offer quicker access to related developer documentation. We’ve started to add key links for Android developers, and we’re committed to more tightly integrating the spec with both the Polymer and Android docs.
  • A new What is Material? section. While the introduction offers a succinct bird’s-eye view of material design, it left open some questions that were previously only answered in video content (the Google I/O videos and DesignBytes). This new section dives deeper into the environment established by material design, including material properties and how we work with objects in 3D space.
  • A What’s New section. We view the material design spec as a living document, meaning we’ll be continually adding and refining content. The What’s New section, which was a highly requested feature, should help designers track the spec’s evolution.

In addition to these major new features and sections, there’s even more in today’s update, including:

Stay tuned for even more updates as we continue to integrate the relevant developer docs, refine existing spec sections, and expand the spec to cover more ground. And as always, remember to leave your suggestions on Google+!

Posted by Roman Nurik, Design Advocate
Categories: Programming

Improve small job completion times by 47% by running full clones.

The idea is most jobs are small. Researchers found 82% of jobs on Facebook's cluster were less than 10 tasks. Clusters have a median utilization of under 20%. And since small jobs are particularly sensitive to stragglers the audacious solution is to proactively launch clones of a job as they are submitted and pick the result from the earliest clone. The result is an average completion time of all the small jobs improved by 47% using cloning, at the cost of just 3% extra resources.

For more details take a look at the very interesting Why Let Resources Idle? Aggressive Cloning of Jobs with Dolly.
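A toy simulation illustrates why full cloning helps so much when stragglers are rare but expensive. The task-time distribution below is invented for the sketch, not taken from the paper:

```python
import random

random.seed(1)

def task_run():
    # Most runs take about 1 time unit; ~5% straggle to 5-10 units (assumed)
    return random.uniform(5, 10) if random.random() < 0.05 else random.uniform(0.9, 1.1)

def job_time(n_tasks=10):
    # A job finishes only when its slowest task finishes
    return max(task_run() for _ in range(n_tasks))

def cloned_job_time(n_tasks=10, clones=2):
    # Launch full clones of the job; take the earliest finisher
    return min(job_time(n_tasks) for _ in range(clones))

trials = 2000
plain  = sum(job_time() for _ in range(trials)) / trials
cloned = sum(cloned_job_time() for _ in range(trials)) / trials
print(f"mean completion: {plain:.2f} plain vs {cloned:.2f} with one extra clone")
```

With ten tasks, roughly 40% of plain jobs hit at least one straggler under this model, but a cloned job is slow only when both copies do; that quadratic dampening is where the large latency win comes from.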

Related Articles
Categories: Architecture

Drive Business Transformation by Reenvisioning Your Operations

When you create your digital vision, you have a few places to start.

One place to start is by reenvisioning your customer experience. Another place to start is by reenvisioning your operations. And, a third place to start is by reenvisioning your business model.

In this post, let’s take a look at reenvisioning your operations.

In the book, Leading Digital: Turning Technology into Business Transformation, George Westerman, Didier Bonnet, and Andrew McAfee, share some of their lessons learned from companies that are digital masters that created their digital visions and are driving business change.

Start with Reenvisioning Operations When Financial Performance is Tied to Your Supply Chain

If your financial performance is closely connected to the performance of your core operations and supply chain, then reenvisioning your operations can be a great place to start.

Via Leading Digital:

“Organizations whose fortunes are closely tied to the performance of their core operations and supply chains often start with reenvisioning their operations.”

Increase Process Visibility, Decision Making Speed, and Collaboration

There are many great business reasons to focus on improving your operations.   A few of the best include increasing process visibility, increasing speed of decision making, and improving collaboration across the board.

Via Leading Digital:

“The business drivers of operational visions include efficiency and the need to integrate disparate operations.  Executives may want to increase process visibility and decision making speed or to collaborate across silos.”

Procter & Gamble Reenvisions Operational Excellence

Procter & Gamble changed their game by focusing on operational excellence.  The key was to be able to manage the business in real time so they could keep up with their ever-changing world.

Via Leading Digital:

“For instance, in 2011, Procter & Gamble put operational excellence at the center of its digital vision: 'Digitizing P&G will enable us to manage the business in real time and on a very demand-driven basis.  We'll be able to collaborate more effectively and efficiently, inside and outside the company.'  Other companies, in industries from banking to manufacturing, have transformed themselves through similar operationally focused visions.”

Operational Visions are Key to Businesses that Sell to Other Businesses

If your business is a provider of products or services to other businesses, then your operational vision is especially important as it can have a ripple effect on what your customers do.

Via Leading Digital:

“Operational visions are especially useful for businesses that sell largely to other businesses.  When Codelco first launched its Codelco Digital initiative, the aim was to improve mining operations radically through automation and data integration.  As we described in chapter 3, Codelco continued to extend this vision to include new mining automation and integration operations-control capability.  Now, executives are envisioning radical new ways to redefine the mining process and possibly the industry itself.”

Operational Visions Can Change the Industry

When you change your operations, you can change the industry.

Via Leading Digital:

“The operational visions of some companies go beyond an internal perspective to consider how the company might change operations in its industry or even with its customers.“

Changes to Operations Can Enable Customers to Change Their Own Operations

When you improve your operations,  you can help others move up the solution stack.

Via Leading Digital:

“For example, aircraft manufacturer Boeing envisions how changes to its products may enable customers to change their own operations.  'Boeing believes the future of the aviation industry lies in 'the digital airline,' the company explained on its website. 'To succeed in the marketplace, airlines and their engineering and IT teams must take advantage of the increasing amount of data coming off of airplanes, using advanced analytics and airplane technology to take operational efficiency to the next level.' “

Get Information to the People Who Need it Most, When They Need It Most

One of the best things you can do when you improve operations is to put the information in the hands of the people that need it most, when they need it most, where they need it most.

Via Leading Digital:

“The manufacturer goes on to paint a clear picture of what a digital airline means in practice: 'The key to the digital airline is delivering secure, detailed operational and maintenance information to the people who need it most, when they need it most.  That means that engineering will share data with IT, but also with the finance, accounting, operational and executive functions.' “

Better Operations Enables New Product Designs and Services

When you improve operations, you enable and empower business breakthroughs in all parts of the business.

Via Leading Digital:

“The vision will improve operations at Boeing's customers, but will also help Boeing's operations as the information from airplanes should help the company identify new ways to improve its product designs and services.  The day may also lead to new business models as Boeing uses the information to provide new services to customers.”

When you create your digital vision, while there are lots of places you could start, the key is to take an end-to-end view.

If your financial performance is tied to your core operations and your supply chain, and/or you are a provider of products and services to others, then consider starting your business transformation by reenvisioning your operations.

You Might Also Like

10 High-Value Activities in the Enterprise

Cloud Changes the Game from Deployment to Adoption

The Future of Jobs

Management Innovation is at the Top of the Innovation Stack

Reenvision Your Customer Experience

Categories: Architecture, Programming

How to Create a Simple Backup Solution That You Can Trust

Making the Complex Simple - John Sonmez - Mon, 11/03/2014 - 16:00

Backing up your data is really important. We’ve all heard too many stories of hard drives crashing or computers getting lost or stolen without a backup, and their owners suffering a horrible loss of irreplaceable data. So, if we all know that backing up data is so important, why don’t we do it? Well, some of us do, but ... Read More

The post How to Create a Simple Backup Solution That You Can Trust appeared first on Simple Programmer.

Categories: Programming

Quote of the Day

Herding Cats - Glen Alleman - Mon, 11/03/2014 - 05:27

Never attribute to malice that which is adequately explained by stupidity - Hanlon's Razor

 

Categories: Project Management

SPaMCAST 314 – Crispin, Gregory, More Agile Testing

www.spamcast.net


Listen to the interview here!

SPaMCAST 314 features our interview with Janet Gregory and Lisa Crispin. We discussed their new book More Agile Testing. Testing is core to success in all forms of development, and Agile development and testing are no different. More Agile Testing builds on Gregory and Crispin’s first collaborative effort, the extremely successful Agile Testing, to ensure everyone who uses an Agile framework delivers the most value possible.

The Bios!

Janet Gregory is an agile testing coach and process consultant with DragonFire Inc. Janet is the co-author with Lisa Crispin of Agile Testing: A Practical Guide for Testers and Agile Teams (Addison-Wesley, 2009) and More Agile Testing: Learning Journeys for the Whole Team (Addison-Wesley, 2014). She is also a contributor to 97 Things Every Programmer Should Know. Janet specializes in showing Agile teams how testers can add value in areas beyond critiquing the product; for example, guiding development with business-facing tests. Janet works with teams to transition to Agile development, and teaches Agile testing courses and tutorials worldwide. She contributes articles to publications such as Better Software, Software Test & Performance Magazine and Agile Journal, and enjoys sharing her experiences at conferences and user group meetings around the world. For more about Janet’s work and her blog, visit www.janetgregory.ca. You can also follow her on Twitter @janetgregoryca.

Lisa Crispin is the co-author, with Janet Gregory, of More Agile Testing: Learning Journeys for the Whole Team (Addison-Wesley 2014), Agile Testing: A Practical Guide for Testers and Agile Teams (Addison-Wesley, 2009), co-author with Tip House of Extreme Testing (Addison-Wesley, 2002), and a contributor to Experiences of Test Automation by Dorothy Graham and Mark Fewster (Addison-Wesley, 2011) and Beautiful Testing (O’Reilly, 2009). Lisa was honored by her peers by being voted the Most Influential Agile Testing Professional Person at Agile Testing Days 2012. Lisa enjoys working as a tester with an awesome Agile team. She shares her experiences via writing, presenting, teaching and participating in agile testing communities around the world. For more about Lisa’s work, visit www.lisacrispin.com, and follow @lisacrispin on Twitter.

Call to action!

What are the two books that have most influenced your career (business, technical or philosophical)? Send the titles to spamcastinfo@gmail.com. What will we do with this list? We have two ideas. First, we will compile a list and publish it on the blog. Second, we will use the list to drive “Re-read” Saturday. Re-read Saturday is an exciting new feature we will begin on the Software Process and Measurement blog on November 8th with a re-read of Leading Change. So feel free to choose your platform: send an email, leave a message on the blog or Facebook, or just tweet the list (use hashtag #SPaMCAST)!

Next

SPaMCAST 315 features our essay on Scrum Masters. Scrum Masters are the voice of the process at the team level and critical members of every Agile team. The team’s need for a Scrum Master is not transitory, because the Scrum Master and the team evolve together.

Upcoming Events

DCG Webinars:

How to Split User Stories
Date: November 20th, 2014
Time: 12:30pm EST
Register Now

Agile Risk Management – It Is Still Important
Date: December 18th, 2014
Time: 11:30am EST
Register Now

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes toward bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques was co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management

Loewensberg re-animated

Phil Trelford's Array - Sun, 11/02/2014 - 14:51

Verena Loewensberg was a Swiss painter and graphic designer associated with the concrete art movement. I came across some of her work while searching for pieces by Richard Paul Lohse.

Again I’ve selected some pieces and attempted to draw them procedurally.

Spiral of circles and semi-circles

Based on Verena Loewensberg’s Untitled, 1953

Loewensberg step-by-step

The piece was constructed from circles and semi-circles arranged around 5 concentric rectangles drawn from the inside out. The lines of the rectangles are drawn in a specific order. The size of each circle seems only to be related to the size of its rectangle. If a placed circle would overlap an existing circle, then it is drawn as a semi-circle.
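The overlap rule in that description can be sketched in a few lines. Here it is in Python rather than F# (the function names and the tuple representation of a circle are my own illustration, not from the original scripts):

```python
import math

def overlaps(c1, c2):
    """Two circles overlap when the distance between their centres
    is less than the sum of their radii."""
    (x1, y1, r1), (x2, y2, r2) = c1, c2
    return math.hypot(x2 - x1, y2 - y1) < r1 + r2

def place(circle, placed):
    """Apply the rule from the piece: draw a full circle unless the
    candidate would overlap one already placed, in which case draw a
    semi-circle. Returns which shape was drawn."""
    shape = "semi-circle" if any(overlaps(circle, p) for p in placed) else "circle"
    placed.append(circle)
    return shape
```

So placing a radius-10 circle at (5, 0) next to an existing radius-10 circle at the origin would come out as a semi-circle.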

Shaded spiral

Again based on Verena Loewensberg’s Untitled, 1953

Loewensberg colors

Here I took the palette of another one of Verena’s works.

Four-colour Snake

Based on Verena Loewensberg’s Untitled, 1971

Loewensberg snake

The original piece reminded me of the snake video game.

Rotating Red Square

Based on Verena Loewensberg’s Untitled, 1967

Loewensberg square rotating

This abstract piece was a rotated red square between blue and green squares.

Multi-coloured Concentric Circles

Based on Verena Loewensberg’s Untitled, 1972

Loewensberg circles shakeup

This piece is made up of overlapping concentric circles, with the bottom set clipped by a rectangular region. The shape and colour remind me a little of the Mozilla Firefox logo.

Method

Each image was procedurally generated using the Windows Forms graphics API inside an F# script. Typically a parameterized function is used to draw a specific frame to a bitmap, which can be saved out to an animated GIF.
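The overall shape of that method, a parameterized per-frame function plus a loop that collects the frames, can be sketched in Python rather than F# (the "drawing" below is a trivial stand-in, and none of these names come from the original scripts):

```python
def render_frame(t, n_frames, width=8, height=8):
    """Stand-in for a parameterized drawing function: returns one
    'bitmap' (a grid of 0/1 pixels) for animation step t. Here the
    frame just sweeps a vertical bar across the image."""
    col = (t * width) // n_frames
    return [[1 if x == col else 0 for x in range(width)]
            for y in range(height)]

def render_animation(n_frames=10):
    """Render every frame in order; a real script would hand each
    bitmap to an encoder to write out an animated GIF."""
    return [render_frame(t, n_frames) for t in range(n_frames)]
```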

I guess you could think of each piece as a coding kata.

Scripts

Have fun!

Categories: Programming

Announcing Re-read Saturday!

There are a number of books that have had a huge impact on my life and career. Many readers of the Software Process and Measurement blog and listeners to the podcast have expressed similar sentiments. Re-read Saturday is a new feature that will begin next Saturday. We will begin Re-read Saturday with a re-read of Leading Change. The re-read will extend over six to eight Saturdays as I share my current interpretation of a book that has had a major impact on how I think about the world around me. When the re-read of Leading Change is complete, we will dive into the list of books I am compiling from you, the readers and listeners.

Currently the list includes:

  • The Seven Habits of Highly Effective People – Stephen Covey (re-read in early 2014)
  • Out of the Crisis – W. Edwards Deming
  • To Be or Not to Be Intimidated? — Robert Ringer
  • Pulling Your Own Strings — Wayne Dyer
  • The Goal: A Process of Ongoing Improvement – Eliyahu M. Goldratt

So far each book has gotten one “vote” apiece.

How can you get involved in Re-read Saturday? You can begin by answering, “What are the two books that have most influenced your career (business, technical or philosophical)?” Send the titles to spamcastinfo@gmail.com, or post the titles on the blog, Facebook or Twitter with the hashtag #SPaMCAST. We will continue to add to the list and republish it on the blog. When we are close to the end of the re-read of Leading Change, we will publish a poll based on the current list (unless there is a clear leader) to select the next book to re-read.

There are two other ways to get involved. First, add your perspective to each of the re-read entries by commenting. Second, when the re-read is complete I will invite all of the commenters to participate in a discussion (just like a book club) that will be recorded and published on the Software Process and Measurement Cast.

We begin Re-read Saturday next week, but you can get involved today!

 


Categories: Process Management

Splitting Users Stories: More Anti-patterns

This hood is making an ineffective split of his face.


An anti-pattern is a typical response to a problem that is usually ineffective. There are numerous patterns for splitting user stories that are generally effective and there are an equal number that are generally ineffective. Splitting User Stories: When Things Go Wrong described some of the common anti-patterns such as waterfall splits, architectural splits, splitting with knowledge and keeping non-value remainders. Here are other equally problematic patterns for splitting user stories that I have observed since writing the original article:

  1. Vendor splits. Vendor splits are splits made based on work assignment or reporting structure. Organizations might use personnel from one company to design a solution, another company to code and another to test functionality. Teams and stories are constructed based on the organization the team’s members report to rather than on a holistic view of the functionality. I recently observed a project that had split stories between project management and control activities and technical activities. The rationale was that since the technical component of the project had been outsourced, the work should be put in separate stories so that it would be easy to track work that was the vendor’s responsibility to complete. Scrumban or Kanban is often a better choice in these scenarios than other lean/Agile techniques.
  2. Generic persona splits. Splitting stories based on a generic persona, or into stories where only a generic persona can be identified, typically suggests that team members are unsure who really needs the functionality in the user story. Splitting stories without knowing who the story is trying to serve will make it difficult to hold the conversations needed to develop and deliver the value identified in the user story. Conversations are critical for fleshing out requirements and generating feedback in Agile projects.
  3. Too-thin splits. While rare, occasionally teams get obsessed with splitting user stories into thinner and thinner slices. While I have often said that smaller stories are generally better, there comes a time when splitting further is not worth the time it takes to make the split. Teams that get overly obsessed with splitting user stories into thinner and thinner slices generally spend more time planning and less time delivering. Each team should apply the INVEST criteria to each user story to ensure good splits. In addition to using INVEST, each team should adopt a sizing guideline that maximizes the team’s productivity/velocity. Guidelines of this type are a reflection of the capacity of the team to a greater extent and the capacity of the organization to a lesser extent.

Splitting stories well can deliver huge benefits to the team and to the organization. Benefits include increased productivity and velocity, improved quality and higher morale. Splitting user stories badly delivers none of the value we would expect from the process and may even cause teams, stakeholders and whole organizations to develop a negative impression of Agile.


Categories: Process Management

R: Converting a named vector to a data frame

Mark Needham - Sat, 11/01/2014 - 00:47

I’ve been playing around with igraph’s page rank function to see who the most central people in the London NoSQL scene are, and I wanted to put the result in a data frame to make the data easier to work with.

I started off with a data frame containing pairs of people and the number of events that they’d both RSVP’d ‘yes’ to:

> library(dplyr)
> data %>% arrange(desc(times)) %>% head(10)
       p.name     other.name times
1  Amit Nandi Anish Mohammed    51
2  Amit Nandi Enzo Martoglio    49
3       louis          zheng    46
4       louis     Raja Kolli    45
5  Raja Kolli Enzo Martoglio    43
6  Amit Nandi     Raja Kolli    42
7       zheng Anish Mohammed    42
8  Raja Kolli          Rohit    41
9  Amit Nandi          zheng    40
10      louis          Rohit    40

I actually had ~ 900,000 such rows in the data frame:

> length(data[,1])
[1] 985664

I ran page rank over the data set like so:

g = graph.data.frame(data, directed = F)
pr = page.rank(g)$vector

If we evaluate pr we can see the person’s name and their page rank:

> head(pr)
Ioanna Eirini          Mjay       Baktash      madhuban    Karl Prior   Keith Bolam 
    0.0002190     0.0001206     0.0001524     0.0008819     0.0001240     0.0005702

I initially tried to convert this to a data frame with the following code…

> head(data.frame(pr))
                     pr
Ioanna Eirini 0.0002190
Mjay          0.0001206
Baktash       0.0001524
madhuban      0.0008819
Karl Prior    0.0001240
Keith Bolam   0.0005702

…which unfortunately didn’t create a column for the person’s name.

> colnames(data.frame(pr))
[1] "pr"

Nicole pointed out that I actually had a named vector and would need to explicitly extract the names from that vector into the data frame. I ended up with this:

> prDf = data.frame(name = names(pr), rank = pr)
> head(prDf)
                       name      rank
Ioanna Eirini Ioanna Eirini 0.0002190
Mjay                   Mjay 0.0001206
Baktash             Baktash 0.0001524
madhuban           madhuban 0.0008819
Karl Prior       Karl Prior 0.0001240
Keith Bolam     Keith Bolam 0.0005702

We can now sort the data frame to find the most central people on the NoSQL London scene based on meetup attendance:

> data.frame(prDf) %>%
+   arrange(desc(pr)) %>%
+   head(10)
             name     rank
1           louis 0.001708
2       Kannappan 0.001657
3           zheng 0.001514
4    Peter Morgan 0.001492
5      Ricki Long 0.001437
6      Raja Kolli 0.001416
7      Amit Nandi 0.001411
8  Enzo Martoglio 0.001396
9           Chris 0.001327
10          Rohit 0.001305
Categories: Programming