
Software Development Blogs: Programming, Software Testing, Agile Project Management


SPaMCAST 326 – Steve Tendon, Tame The Flow

http://www.spamcast.net

 

Listen to the Software Process and Measurement Cast

Subscribe to the Software Process and Measurement Cast on iTunes

The Software Process and Measurement Cast features our interview with Steve Tendon. We discussed his new book Tame The Flow: Hyper-Productive Knowledge-Work Management, published by J. Ross Publishing. Steve discussed how to lead knowledge workers and build a hyper-performing knowledge-work organization. We talked about the four flows that affect performance: psychology, information, work and finance. Steve's ideas can help teams raise their game to deliver results that not only raise the bar but jump over it.

Steve has a great offer for SPaMCAST listeners. Check out https://tameflow.com/spamcast for a way to get Tame The Flow: Hyper-Productive Knowledge-Work Management at 40% off the list price.

Steve’s Bio

Steve Tendon, creator of the TameFlow management approach, is a senior, multilingual executive management consultant, experienced at leading and directing multinational and distributed knowledge-work organizations. He is an expert in organizational performance transformation programs. Mr. Tendon is a sought-after adviser, coach, mentor and consultant, as well as an author and speaker, specializing in organizational productivity, organizational design, process excellence and process innovation. Steve helps businesses create high-performance organizations and teams and holds an MSc in Software Project Management from the University of Aberdeen.

Mr. Tendon has published numerous articles and is a contributing author to Agility Across Time and Space: Implementing Agile Methods in Global Software Projects. Steve is currently a Director at TameFlow Consulting Ltd, where he helps clients achieve outstanding organizational performance by applying the theories and practices described in his book. Mr. Tendon has held senior software engineering management roles at various firms over the course of his career, including the role of Technical Director for the Italian branch of Borland International, the birthplace of hyper-productivity in software development. Borland's development of Quattro Pro for Windows remains the most productive software project ever documented. This case was Mr. Tendon's source of inspiration and led to his development of the TameFlow perspective and management approach.

Contact Information:

Web: https://tameflow.com/

Web: http://tendon.net/

Twitter: @tendon

 

Next

The next Software Process and Measurement Cast will feature our essay on the ubiquitous stand-up meeting. The stand-up meeting has become a feature of agile and non-agile projects alike. The technique can be a powerful force for improving team effectiveness and cohesion, or it can really make a mess of things! We explore how to get more of the former and less of the latter!

 

Call to action!

We have just completed a re-read of John Kotter's classic Leading Change on the Software Process and Measurement Blog (www.tcagley.wordpress.com). Please feel free to jump in and add your thoughts and comments!

Next week we will start the process to choose the next book based on the list you have suggested.  You can still influence the possible choices for the next re-read by answering the following question:

What are the two books that have most influenced your career (business, technical or philosophical)? Send the titles to spamcastinfo@gmail.com.

We will publish the list next week on the blog and ask you to vote on the next book for "Re-read" Saturday. Feel free to choose your platform; send an email, leave a message on the blog or Facebook, or just tweet the list (use hashtag #SPaMCAST)!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: "This book will prove that software projects should not be a tedious process, either for you or your team." Support SPaMCAST by buying the book here.

Available in English and Chinese.



Re-read Saturday: Part Three: Implications for the Twenty-First Century, John P. Kotter Chapters 11 and 12


We complete the re-read of John P. Kotter's book Leading Change by reviewing the implications from the last two chapters of the book. Part Three paints the picture of a world in which the urgency for change will not abate and perhaps might even increase. In Chapter 11, titled The Organization of the Future, Kotter suggests that while in the past a single key leader could drive change, collaboration at the top of organizations is now required due to both the rate and complexity of change. He argues that one person simply can't have the time and expertise to manage, lead, communicate, provide vision . . . you get the point. The message of the chapter is that for organizations of any type to prosper in the 21st century, the ability to create and communicate vision is critical. That skill needs to be fostered and developed over the long term, just like any other significant organizational asset. Long-term and continuous development of leadership is not accomplished simply by providing a two-week course in leadership. While leadership is critical, it only goes so far in creating and fostering change and must be supplemented by a culture of empowerment. Broad-based empowerment allows organizations to tap a wide range of knowledge and energy at all levels of the organization.

Boiling the message of Chapter 11 down, Kotter suggests that an organization at home with the dynamic nature of the 21st century will require a lean, non-bureaucratic structure that leverages a wide range of performance data. For example, in an empowered organization performance data must be gathered and analyzed from many sources. Performance data (e.g. customer satisfaction, productivity, returns, quality and others) gains maximum power when everyone has access to it in order to drive continuous improvement. The culture of the new organization needs to shift from internally focused, command-and-control to externally focused and non-bureaucratic. While Kotter does not use the terms lean and Agile, the organization he describes as tuned to the 21st century reflects the tenets of lean and Agile.

Chapter 12, titled Leadership and Lifelong Learning, circles back to the concept of leadership, a constant thread across all facets of the eight-stage model of change detailed in Leading Change. Kotter describes the need for leaders to continually develop competitive capacity (the capability to deal with an increasingly competitive and dynamic environment). The model Kotter uses to describe the development of competitive capacity begins with personal history and flows through competitive drive, lifelong learning, and skills and abilities to competitive capacity. Lifelong learning is both an input and a tool for developing and honing skills and abilities. Skills and abilities feed competitive capacity. In our re-read of The Seven Habits of Highly Effective People, Stephen Covey culminated the seven habits with the habit called Sharpening the Saw, a prescription for balanced self-renewal. Lifelong learning is an important component of balanced self-renewal. Whether you read Kotter or Covey, the need to continuously learn is an inescapable necessity for any leader.

As a rule, I am never overwhelmed by the chapters that follow the meat of most self-help books (I consider Leading Change a management self-help book, part of the same continuum on which Covey's Seven Habits of Highly Effective People would be found). Part Three of Leading Change ties the book together by reinforcing the need for the eight-stage model for change and by addressing the need to continuously sharpen the saw. Kotter's model is a tool that leaders must actively apply; therefore organizations and leaders must foster the capacity to address needed changes.

Re-read Summary

Change is a fact of life. John P. Kotter's book, Leading Change, defines his famous eight-stage model for change. The first stage of the model is establishing a sense of urgency. A sense of urgency provides the energy and rationale for any large, long-term change program. Once a sense of urgency has been established, the second stage in the eight-stage model for change is the establishment of a guiding coalition. If a sense of urgency provides energy to drive change, a guiding coalition provides the power for making change happen. A vision, built on the foundation of urgency and a guiding coalition, represents a picture of a state of being at some point in the future. Developing a vision and strategy is only a start; the vision and strategy must be clearly and consistently communicated to build the critical mass needed to make change actually happen. Once an organization is wound up and primed, the people within the organization must be empowered and let loose to create change. Short-term wins provide the feedback and credibility needed to deliver on the change vision. The benefits and feedback from the short-term wins and other environmental feedback are critical for consolidating gains and producing more change. Once a change has been made it needs to be anchored so that the organization does not revert to older, comfortable behaviors, throwing away the gains it has invested blood, sweat and tears to create.

The need for change is not abating. The eight-stage model for change requires leadership and vision.  Organizations need to foster leadership while both organizations and the people in those organizations must continually learn and hone their skills.

Next week we will review the list of books that readers of the blog and listeners to the podcast have identified as having a major impact on their career to vote on the next book we will tackle on Re-read Saturday.  Right now The Mythical Man Month by Fred Brooks is at the top of the list.  Care to influence the list?  Let me know the two books that most influenced your career.



Agile Roles: What do product owners do other than make decisions?

 

The product owner role is anything but boring.

The role of the product owner is incredibly important. The decision-making role of a product owner helps grease the skids for the team so that it can deliver value efficiently and effectively. That said, there is more to the role than making decisions. In the survey of practitioners (Agile Roles: What does a product owner do?), the next four items were:

      1. Attends Scrum meetings
      2. Prioritizes the user stories (and backlog)
      3. Grooms backlog
      4. Defines product vision and features

The product owner is a core member of the team. Participating in the Scrum meetings ensures that the voice of the customer is woven into all levels of planning and is not just a hurdle to be surmounted in a demo. When I was taught Scrum, the participation of the product owner was optional at the daily stand-up, in the retrospective and in the more technical parts of sprint planning. Experience has taught me that optional typically translates to not present, and not present translates into defects and rework. Note: on the original list, #11 was "buys pizza." I think the Scrum meetings are a good place to occasionally spring for pizza or DONUTS.

The backlog is "owned" by the product owner. The product owner prioritizes the backlog based on interaction with the whole team and other stakeholders. There are many techniques for prioritizing the backlog, ranging from business value and technical complexity to the squeaky wheel (usually not a good method). Regardless of the method, the final prioritization is delivered by the product owner.

As projects progress, the backlog evolves. That evolution reflects new stories, new knowledge about the business problem, changes in the implementation approach and the need to break stories into smaller components. The process of making sure stories are well-formed, granular enough to complete and have acceptance criteria is story grooming. Grooming is often a small-team affair; however, the product owner is typically part of the grooming team. Techniques like the Three Amigos are useful for structuring the grooming approach.

The product owner interprets the sponsor's (the person with the checkbook and political capital to authorize the project) vision by providing the team with the product vision. The product vision represents the purpose or motivation for the project. Until the project is delivered, the vision is the picture that anyone involved with the project should be able to describe. Delivering the vision for the product and its features is a leadership role that helps teams decide how to deliver a function. Knowing where the project needs to end up gives the team the knowledge that supports making technical decisions.

The product owner is a leader, a doer, a visionary and a team member. As the voice of the customer, the product owner describes the value proposition for the project from the business' point of view. As part of the team, the product owner interprets and synthesizes information from other team members and outside stakeholders. This is reflected in the decisions and priorities that shape the project and the value it delivers.

 



Continuous Delivery across multiple providers

Xebia Blog - Wed, 01/21/2015 - 13:04

Over the last year, three of the four customers I worked with had a similar challenge with their environments. In different variations, they all had their environments set up across separate domains, ranging from physically separated on-premise networks to environments running across different hosting providers managed by different parties.

Regardless of the reasoning behind these kinds of setups, it's a situation where the continuous delivery concepts really add value. The stereotypical problems that exist with manual deployment and testing practices tend to get amplified when they occur in separated domains. Things get even worse when you add more parties to the mix (like external application developers). Sticking to doing things manually is a recipe for disaster, unless you enjoy going through extensive procedures every time you want to do anything in any of 'your' environments. And if you've outsourced your environments to an external party, you probably don't want to have to (re)hire a lot of people just so you can communicate with your supplier.

So how can continuous delivery help in this situation? By automating your provisioning and deployments you make deploying your applications, if nothing else, repeatable and predictable, regardless of where they need to run.

Just automating your deployments isn't enough, however; a big question that remains is who does what, a question that is most likely backed by a lengthy contract. Agreements between all the parties are meant to provide an answer to that very question. A development partner develops, an outsourcing partner handles the hardware, etc. But nobody handles bringing everything together...

The process of automating your steps already provides some help with this problem. In order to automate, you need some form of agreement on how to provide input for the tooling. This at least clarifies what the various parties need to produce. It also clarifies what the result of a step will be, which removes some of the fuzziness from the process. Questions like whether the JVM is part of the OS or part of the middleware should become clear. But not everything is that clear-cut. It's in the parts of the puzzle where pieces actually come together that things turn gray. A single tool may need input from various parties. Here you need to resist the common knee-jerk reaction to shield said tool from other people with procedures and red tape. Instead, provide access to those tools to all relevant parties and handle your separation of concerns through a reliable access mechanism. Even then there might be some parts that can't be used by just a single party and in that case, *gasp*, people will need to work together.

What this results in is an automated pipeline that will keep your environments configured properly and allow applications to be deployed onto them when needed, within minutes, wherever they may run.

[Diagram: MultiProviderCD, the multi-domain deployment pipeline]

The diagram above shows how we set this up for one of our clients, using XL Deploy, XL Release and Puppet as the automation tooling of choice.

In the first domain we have a git repository to which developers commit their code. A Jenkins build is used to extract this code, build it and package it in such a way that the deployment automation tool (XL Deploy) understands it. It's also kind enough to make that package directly available in XL Deploy. From there, XL Deploy is used to deploy the application not only to the target machines but also to another instance of XL Deploy running in the next domain, thus enabling that same package to be deployed there. The same mechanism can then be applied to the next domain. In this instance we ensure that the machines we are deploying to are consistent by using Puppet to manage them.

To round things off we use a single instance of XL Release to orchestrate the entire pipeline. A single release process is able to trigger the build in Jenkins and then deploy the application to all environments spread across the various domains.
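To make the orchestration concrete, the sketch below models the pipeline as an ordered list of steps, each of which must succeed before the next one runs. This is only a toy illustration of the flow; the real setup uses XL Release's own process definitions, and every name here is hypothetical:

// Toy model of the cross-domain pipeline: build once, then move the
// same package from domain to domain, stopping at the first failure.
case class Step(name: String, run: () => Boolean)

val pipeline = Seq(
  Step("jenkins-build-and-package", () => { println("build and package"); true }),
  Step("deploy-to-domain-1",        () => { println("XL Deploy: domain 1"); true }),
  Step("publish-to-next-xl-deploy", () => { println("hand package to domain 2"); true }),
  Step("deploy-to-domain-2",        () => { println("XL Deploy: domain 2"); true })
)

val succeeded = pipeline.forall { step =>
  val ok = step.run()
  if (!ok) println(s"Pipeline stopped at: ${step.name}")
  ok
}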

A setup like this lowers deployment errors that come with doing manual deployments and cuts out all the hassle that comes with following the required procedures. As an added bonus your deployment pipeline also speeds up significantly. And we haven’t even talked about adding automated testing to the mix…

Agile Roles: What does a product owner do?

 

One of the product owner's roles is to buy the pizza (or the sushi!)

The product owner role, one of the three identified in Scrum, is deceptively simple. The product owner is the voice of the customer, a conduit to bring business knowledge into the team. The perceived (the word perceived is important) simplicity of the role leads to a wide range of interpretations. Deconstructing the voice of the customer a bit further yields tasks and activities that include defining what needs to be delivered, dynamically providing answers and feedback to the team and prioritizing the backlog. I recently asked a number of product owners, Scrum masters and process improvement personnel for a list of the four activities a product owner is responsible for. The list (ranked by the number of responses, but without censorship) is shown below:

      1. Makes decisions
      2. Attends Scrum meetings
      3. Prioritizes the user stories (and backlog)
      4. Grooms backlog
      5. Defines product vision and features
      6. Accepts or rejects work
      7. Plans for releases
      8. Involves stakeholders (included customers, users, executives, SMEs)
      9. Sells the project
      10. Trains the business
      11. Buys pizza
      12. Provides the project budget
      13. Tests features
      14. Shares the feature list with business
      15. Generates team consensus

The #1 activity of the product owner is to make decisions. Decisions are a critical input for all project teams. Projects are a reflection of a nearly continuous stream of decisions, decisions that if not made by the right person could take a project off course. Not every decision made by a team rises to the level of needing input from the product owner, and fewer still need immediate input from the product owner. But when input is needed, the product owner must be available and ready to make the tough calls.

In the first major technology project I was involved with, my company decided to shift from one computer platform to another. It was a big deal. In our first attempt the IT department tried to manage the process without interacting with the business (I was the business). That first attempt at a conversion was . . . exciting. I learned a number of new poignant phrases in several Eastern European languages. The second time around, a business lead was appointed to act as the voice of the business and to coordinate business involvement. The business lead spent at least half the day with the project team and half in the business. Leads from all departments and project teams involved in the project met daily to review progress and issues (a sort of Scrum of Scrums, back in 1979). The ability to meet, talk and make decisions was critical for delivering the functionality needed by the business.

Making decisions isn’t the only task that product owners are called on to perform, but it is one of a very few that almost everyone can agree upon. Although buying pizza would have been higher up my list!

What would you add to the list? Which do you disagree with?



Focus on Benefits Rather than Features

Mike Cohn's Blog - Tue, 01/20/2015 - 15:00

The following was originally published in Mike Cohn's monthly newsletter. If you like what you're reading, sign up to have this content delivered to your inbox weeks before it's posted on the blog, here.

Suppose your boss has not bought into trying an agile approach in your organization. You schedule a meeting with the boss, and stress how your organization should use Scrum because Scrum:

  • Has short time boxes
  • Relies on self-organization
  • Includes three distinct roles

And based on this discussion, your boss isn’t interested.

Why? Because you focused on the features of Scrum rather than its benefits.

Product owners and Scrum teams make the same mistake when working with the product backlog. A feature is something that is in a product: a spell-checker is in a word processor. A benefit is something a user gets from a product. By using a word processor, the user gets documents free from spelling errors. The spell-checker is the feature; mistake-free documents are the benefit.

It is generally considered a good practice for the items at the top of a product backlog to be small. Each must be small enough to fit into a sprint, and most teams will do at least a handful each sprint.

The risk here is that small items are much more likely to be features than benefits. When a Scrum team (and specifically its product owner) becomes overly focused on features, it is possible to lose sight of the benefits.

Scrum teams commonly mitigate this risk in two ways. First, they leave stories as epics until they move toward the top of the product backlog. Second, they include a so-that clause in their user stories (for example: As a writer, I want a spell-checker so that my documents are free of spelling errors). These help, but do not fully eliminate the risk.

Let’s return to your attempt to convince your boss to let your team use Scrum. Imagine you had focused on the benefits of Scrum rather than its features. You told your boss how using Scrum would lead to more successful products, more productive teams, higher quality software, more satisfied stakeholders, happier teams, and so on.

Can you see how that conversation would have gone differently than one focused on short time boxes, self-organization and roles?

Software Development Linkopedia January 2015

From the Editor of Methods & Tools - Tue, 01/20/2015 - 14:58
Here is our monthly selection of interesting knowledge material on programming, software testing and project management. This month you will find some interesting information and opinions about managing software developers, software architecture, Agile testing, product owner patterns, mobile testing, continuous improvement, project planning and technical debt. Blog: Killing the Crunch Mode Antipattern Blog: Barry's Rules of Engineering and Architecture Blog: Lessons Learned Moving From Engineering Into Management Blog: A ScrumMaster experience report on using Feature Teams Article: Using Models to Help Plan Tests in Agile Projects Article: A Mitigation Plan for a Product Owner's Anti-Pattern Article: Guerrilla Project ...

Try is free in the Future

Xebia Blog - Mon, 01/19/2015 - 09:40

Lately I have seen a few developers consistently use a Try inside of a Future in order to make error handling easier. Here I will investigate whether this has any merits, or whether a Future on its own offers enough error handling.

If you look at the following code, there is nothing a Try can offer that a Future can't supply by itself:

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.{Await, Future, Awaitable}
import scala.concurrent.duration._
import scala.util.{Try, Success, Failure}

object Main extends App {

  // Happy Future
  val happyFuture = Future {
    42
  }

  // Bleak future
  val bleakFuture = Future {
    throw new Exception("Mass extinction!")
  }

  // We would want to wrap the result into a hypothetical http response
  case class Response(code: Int, body: String)

  // This is the handler we will use
  def handle[T](future: Future[T]): Future[Response] = {
    future.map {
      case answer: Int => Response(200, answer.toString)
    } recover {
      case t: Throwable => Response(500, "Uh oh!")
    }
  }

  {
    val result = Await.result(handle(happyFuture), 1 second)
    println(result)
  }

  {
    val result = Await.result(handle(bleakFuture), 1 second)
    println(result)
  }
}

After giving it some thought, the only situation where I could imagine Try being useful in conjunction with Future is when awaiting a Future but not wanting to deal with error situations yet. The times I would be awaiting a future are very few in practice, though. But when needed, something like this might do:

import scala.concurrent.{Await, Awaitable}
import scala.concurrent.duration.Duration
import scala.util.Try

// Wraps Await.result so that a failure or timeout surfaces as a
// Failure value instead of a thrown exception.
object TryAwait {
  def result[T](awaitable: Awaitable[T], atMost: Duration): Try[T] = {
    Try {
      Await.result(awaitable, atMost)
    }
  }
}
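For illustration, this is how a caller might defer error handling with the helper, pattern matching on the Try only when ready (a minimal sketch that reuses bleakFuture and the imports from the first snippet):

TryAwait.result(bleakFuture, 1.second) match {
  case Success(answer) => println(s"Got: $answer")
  case Failure(cause)  => println(s"Handle later: ${cause.getMessage}")
}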

If you do feel that using Trys inside of Futures adds value to your codebase, please let me know.

SPaMCAST 325 – Product Owners, Kim Pries, Jo Ann Sweeney

Software Process and Measurement Cast - Sun, 01/18/2015 - 23:00

http://www.spamcast.net

Listen to the Software Process and Measurement Cast

Subscribe to the Software Process and Measurement Cast on iTunes

The Software Process and Measurement Cast features our essay on product owners. The role of the product owner is one of the hardest to implement when embracing Agile. However, how the role of the product owner is implemented is often a clear determinant of success with Agile. The ideas in our essay can help you get it right.

We will also have a new column from the Software Sensei, Kim Pries. In this installment Kim discusses the fact that there are numerous ways to get something done when writing code. Some are the right way and some are the wrong way. For example, are you willing to sacrifice clarity for cool or fast?

We also continue with Jo Ann Sweeney’s column Explaining Communication. In this installment Jo Ann addresses why knowing who your audiences and stakeholders are will help make your communication more efficient and effective! Visit Jo Ann’s website at http://www.sweeneycomms.com and let her know what you think of her new column.

Next

The next Software Process and Measurement Cast will feature our interview with Steve Tendon. Steve has been a regular on the podcast in the past but took a break to hone his ideas on hyper-productive knowledge work. We discussed his new book Tame The Flow: Hyper-Productive Knowledge-Work Management, published by J. Ross Publishing, and how teams can raise their game to deliver results that not only raise the bar but jump over it.

 

Call to action!

We are in the middle of a re-read of John Kotter’s classic Leading Change on the Software Process and Measurement Blog.  Are you participating in the re-read? Please feel free to jump in and add your thoughts and comments!

After we finish the current re-read, we will need to decide which book will be next. We are building a list of the books that have had the most influence on readers of the blog and listeners to the podcast. Can you answer the question?

What are the two books that have most influenced your career (business, technical or philosophical)? Send the titles to spamcastinfo@gmail.com.

First, we will compile a list and publish it on the blog. Second, we will use the list to drive future "Re-read" Saturdays. Re-read Saturday is an exciting new feature that began on the Software Process and Measurement blog on November 8th. Feel free to choose your platform; send an email, leave a message on the blog or Facebook, or just tweet the list (use hashtag #SPaMCAST)!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: "This book will prove that software projects should not be a tedious process, either for you or your team." Support SPaMCAST by buying the book here.

Available in English and Chinese.



Meteor

Xebia Blog - Sun, 01/18/2015 - 12:11

Did you ever use AngularJS as a frontend framework? Then you should definitely give Meteor a try! Where AngularJS is powerful just as a client framework, Meteor is great as a full-stack framework. That means you just write your code in one language as if there were no back- and frontend at all. In fact, you get an Android and iOS client for free. Meteor is so incredibly simple that you are productive from the beginning.

Where Meteor kicks Angular

One of the killer features of Meteor is that you'll have a shared code base for frontend and backend. In the next code snippet, you'll see a file shared by backend and frontend:

// Collection shared and synchronized accross client, server and database
Todos = new Mongo.Collection('todos');

// Shared validation logic
validateTodo = function (todo) {
  var errors = {};
  if (!todo.title)
    errors.title = "Please fill in a title";
  return errors;
}

Can you imagine how neat the code above is?


With one codebase, you get the full stack!

  1. Both the backend file and the frontend file can access and query the Todos collection. Meteor is responsible for syncing the todos. Even when another user adds an item, it will be visible to your client directly. Meteor accomplishes this with a client-side Mongo implementation (MiniMongo).
  2. One can write validation rules once, and they are executed both on the frontend and on the backend. So you can give your user quick feedback about invalid input, but you can also guarantee that no invalid data is processed by the backend (when someone bypasses the client). And this is all without duplicated code.

Another killer feature of Meteor is that it works out of the box and is easy to understand. Angular can be a bit overwhelming; you have to learn concepts like directives, services, factories, filters, isolated scopes and transclusion. For some initial scaffolding, you need to know grunt, yeoman, etcetera. With Meteor, every developer can create, run and deploy a full-stack application within minutes. After installing Meteor you can run your app within seconds.

$ curl https://install.meteor.com | /bin/sh
$ meteor create dummyapp
$ cd dummyapp
$ meteor
$ meteor deploy dummyapp.meteor.com

Meteor dummy application

Another nice aspect of Meteor is that it uses DDP, the Distributed Data Protocol. The team invented the protocol and is heavily promoting it as "REST for websockets". It is a simple, pragmatic approach that allows live updates to be delivered as data changes in the backend. Remember that this all works out of the box. This talk walks you through the concepts. The result is that if you change data on one client, it will be updated immediately on the other clients.

And there is so much more, like...

  1. Latency Compensation. On the client, Meteor prefetches data and simulates models to make it look like server method calls return instantly.
  2. Meteor is open source and integrates with existing open source tools and frameworks.
  3. Services (like an official package server and a build farm).
  4. Command line tools
  5. Hot deploys
Where Meteor falls short

Yes, Meteor is not the answer to all your problems. The reason I'll still choose Angular over Meteor for my professional work is that the view framework of Angular rocks. It makes it easy to structure your client code into testable units and connect them via dependency injection. With Angular you can separate your HTML from your JavaScript. With Meteor, your JavaScript contains HTML elements (because their UI library is based on handlebars). That makes testing harder, and large projects will become unstructured very quickly.

Another flaw emerges if your project already has a backend. When you choose Meteor, you choose their full stack: Mongo as the database and Node.js as the backend. Although you can create powerful applications, Meteor doesn't (easily) allow you to change this stack.

Under the hood

Meteor consists of several subprojects. In fact, it is a stack: a standard set of core packages that are designed to work well together:

Components used by Meteor

  1. To make Meteor reactive, they've included the components Blaze and Tracker. The Blaze component is heavily based on handlebars.
  2. The DDP component is a new protocol, specified by Meteor, for modern client-server communication.
  3. Livequery and the full stack database take all the pain of data synchronization between the database, backend and frontend away! You don't have to think about it anymore.
  4. The Isobuild package is a unified build system for browser, server and mobile.
Conclusion

If you want to create a website or a mobile app with a backend in no time, getting lots of functionality out of the box, Meteor is a very interesting tool. If you want to have more control or connect to an existing backend, then Meteor is probably less suitable.

You can watch this presentation I recently gave, to go along with the article.

Re-read Saturday: Anchoring New Approaches in The Culture, John P. Kotter Chapter 10


Consider an elastic band that has been stretched between two points. If the elastic hasn’t lost its stretch, as soon as it is released at one end it will snap back. Organizational culture is like that elastic band. We pull and stretch to make changes and then we want them to settle in. However, we need to anchor the change so that when we change focus the changes don’t disappear. The eighth step in Kotter’s eight-stage model of change discusses this need to anchor the change to avoid reversion.

Culture describes the typical behaviors of a group and the meaning ascribed to those behaviors. Kotter describes culture as the reflection of shared values and group norms. All groups have a specific culture that allows them to operate in a predictable manner. Within a group or organization, culture allows members to interpret behavior and communication, and therefore build bonds of trust. When culture is disrupted, bonds are scrambled and behavior becomes difficult to predict until the culture is reset. If a change program declares victory before the culture is reset, the group or organization tends to revert back to the original cultural norm.

Culture is powerful because:

  1. The individuals within any group are selected to be part of the group and then indoctrinated into the culture. Cognitive biases are a powerful force that pushes people to hire and interact with people that are like them, homogenizing and reinforcing culture. Culture is further reinforced by training, standards and processes that are used to reduce the level of behavioral variance in the organization. Standardization and indoctrination help lock in culture.
  2. Culture exerts itself through the actions of each individual. In a small firm, the combination of the number of people in the firm and their proximity to the leaders of the change makes culture change easier (not easy, just easier). However, when we consider mid-sized or large firms in which hundreds or thousands of people need to make a consistent and permanent change to how they act, change gets really complicated. Since culture reflects and is reinforced by how people work, real change requires changing how each affected person behaves, which is significantly more difficult than changing words in the personnel manual.
  3. Many of the actions taken in an organization are not driven by conscious decision, which makes them hard to challenge or discuss. A significant amount of our work behavior is governed by shared values and muscle memory. I often hear the statement "that's just the way it is done here" when I ask why a team has taken a specific action. Many of these actions are unconscious and therefore tend to go unrecognized until challenged from the outside. Pushing people away from comfortable patterns of behavior generates cognitive dissonance.

Less power is needed to overcome entrenched culture if the change can build on the organization's base culture rather than having to confront it. Building onto the current culture will often generate some early momentum toward change because those being asked to change see less risk. Alternately, change that is at odds with the current culture will require significantly more effort and a greater sense of urgency to generate and sustain.

Kotter argues that culture change trails behavior. Put another way, culture change happens last. Each of the stages in the model for change is designed to build urgency, momentum and support for organizational changes. Vision provides the direction for the change. Results provide proof that the change works and is better than what it replaced. Continuous communication of vision, direction and results breaks through the barriers of resistance. Breaking down the layers of resistance challenges old values and pushes people to admit that the change is better. When barriers can't or won't change, sometimes change means changing key people. Nihilistic behavior in the face of results can't be allowed to persist. Kotter finally points out that in order to anchor long-term change, the organization will need to ensure that both succession planning and promotions reinforce the change rather than allow reversion.

Peter Drucker said, "Culture eats strategy for breakfast." Innumerable people have suggested a corollary: "Culture eats change for breakfast." The eight-stage model for significant change provides a strategy for overcoming the power of an entrenched culture in order to generate lasting change.

Re-read Summary to-date

Change is a fact of life. John P. Kotter's book, Leading Change, defines his famous eight-stage model for change. The first stage of the model is establishing a sense of urgency. A sense of urgency provides the energy and rationale for any large, long-term change program. Once a sense of urgency has been established, the second stage in the eight-stage model for change is the establishment of a guiding coalition. If a sense of urgency provides energy to drive change, a guiding coalition provides the power for making change happen. A vision, built on the foundation of urgency and a guiding coalition, represents a picture of a state of being at some point in the future. Developing a vision and strategy is only a start; the vision and strategy must be clearly and consistently communicated to build the critical mass needed to make change actually happen. Once an organization is wound up and primed, the people within the organization must be empowered and let loose to create change. Short-term wins provide the feedback and credibility needed to deliver on the change vision. The benefits and feedback from the short-term wins and other environmental feedback are critical for consolidating gains and producing more change. Once a change has been made it needs to be anchored so that the organization does not revert to older, comfortable behaviors, throwing away the gains it has invested blood, sweat and tears to create.



DevOps Primer: A Tool to Scale Agile

DevOps requires participation and cooperation.

There is a general consensus that Agile frameworks are effective for delivering business value. Approaches and frameworks such as DevOps are often leveraged to ensure that Agile is both effective AND efficient. Using a DevOps approach becomes even more critical for efficiency and effectiveness as projects, programs and organizations get larger and require more collaboration and coordination. DevOps is a crucial tool for helping teams ensure deployment readiness and for keeping the proliferation of technical environments and tools effective when scaling Agile.

Implementing DevOps requires the involvement of a number of roles to deliver business value collaboratively. Involvement requires participation to be effective. Participation between developers, QA and TechOps personnel as part of the same value stream begins at planning. The Scaled Agile Framework Enterprise (SAFe) makes a strong statement about involvement and participation by integrating DevOps into the Agile Release Train (one of SAFe's core structures). Integrating the concept of DevOps into the flow of a project or program helps ensure that the team takes steps to maintain environments and deployment readiness, extending from the code base to the technical environments needed to build, test and share.

Deployment readiness includes a significant number of activities, all of which require broad involvement. Examples of these activities include:

  1. Version control. Version control is needed for the code (and all code-like objects) so that the product can be properly built and so that what is in the build is understood (and supposed to be there). Version control generally requires software tools and mutually agreed-upon processes, conventions and standards.
  2. Build automation. Build automation pulls together files, objects and other assets into an executable (or consumable for non-code artifacts) form in a repeatable manner without (or with minimal) human interaction. All of the required processes and steps, such as compilation, packaging or generation of installers, are scripted so that they run the same way every time (see the sketch after this list). Build automation generally deploys and validates code to development or testing environments. Similar to version control, build automation requires tools, processes, conventions, standards and coding the automation.
  3. Deployment automation. Deployment automation is often a specialized version of build automation whose target is production environments. Deployment automation pushes and installs the final build to the production environment. Automation reduces the overhead and the chance for error (and therefore saves effort).
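As a small illustration of the build-automation idea, a build definition for sbt (Scala's build tool) captures compilation, testing and packaging as a single repeatable, scriptable command. This is a generic sketch, not the tooling from the article; the project name and version numbers are hypothetical:

// build.sbt: `sbt clean test package` then runs the same repeatable
// build locally and on the CI server, with no manual steps.
lazy val root = (project in file("."))
  .settings(
    name := "example-service",  // hypothetical project name
    organization := "com.example",
    // stamp builds with the CI build number when one is available
    version := sys.env.getOrElse("BUILD_NUMBER", "0.1-SNAPSHOT"),
    scalaVersion := "2.11.4",
    libraryDependencies += "org.scalatest" %% "scalatest" % "2.2.1" % "test"
  )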

Professional teams that build software solutions typically require multiple technical environments during the product lifecycle. A typical progression of environments might be development, test (various) and staging. Generally, the staging environment (just prior to production) should be the most production-like, while development and test environments will have tools and attributes that make development and testing easier. Each of these environments needs to be provisioned and managed. DevOps brings that provisioning and management closer to the teams generating the code, reducing wait times. Automating the provisioning can give development and testing teams more control over the technical environments (under the watchful eye of the TechOps groups).

As projects and programs become larger, the classic separation of development from TechOps will slow a project down, make it more difficult to deliver often and potentially generate problems in delivery. Implementing DevOps shortens the communication channels so that development, QA and TechOps personnel can collaborate on the environments and tools needed to deliver value faster, better and cheaper. Automating substantial portions of the processes needed to build and deploy code and to manage the technical environments further improves the ability of the team or teams to deliver value. The savings in time, effort and defects can be put to better use delivering more value for the organization.



Software Architecture Articles of 2014

From the Editor of Methods & Tools - Thu, 01/15/2015 - 15:07
When software features are distributed on multiple infrastructures (server, mobile, cloud) that need to communicate and synchronize, having a sound and reactive software architecture is a key to the success and evolution of business functions. Here are seven software architecture articles published in 2014 that can help you understand the basic topics and the current trends in software architecture: Agile, Cloud, SOA, Security… and even a little bit of data modeling. * Designing Software in a Distributed World This is an overview of what is involved in designing services that use distributed computing ...

Monitoring Akka with Kamon

Xebia Blog - Thu, 01/15/2015 - 13:49

Kamon is a framework for monitoring the health and performance of applications based on Akka, the popular actor system framework often used with Scala. It provides good quick indicators, but also allows in-depth analysis.

Tracing

Beyond just collecting local metrics per actor (e.g. message processing times and mailbox size), Kamon is unique in that it also monitors message flow between actors.

Essentially, Kamon introduces a TraceContext that is maintained across asynchronous calls: it uses AOP to pass the context along with messages. None of your own code needs to change.

Because of convenient integration modules for Spray/Play, a TraceContext can be automatically started when an HTTP request comes in.

If nothing else, this can be easily combined with the Logback converter shipped with Kamon: simply logging the token is of great use right out of the gate.

Dashboarding

Kamon does not come with a dashboard by itself (though some work in this direction is underway).

Instead, it provides 3 'backends' to post the data to (4 if you count the 'LogReporter' backend that just dumps some statistics into Slf4j): 2 on-line services (NewRelic and DataDog), and statsd (from Etsy).

statsd might seem like a hassle to set up, as it needs additional components such as grafana/graphite to actually browse the statistics. Kamon fortunately provides a correctly set-up docker container to get you up and running quickly. We unfortunately ran into some issues with the image uploaded to the Docker Hub Registry, but building it ourselves from the definition on github resolved most of these.

Implementation

We found the source code to Kamon to be clear and to-the-point. While we're generally no great fan of AspectJ, for this purpose the technique seems to be quite well-suited.

'Monkey-patching' a core part of your stack like this can of course be dangerous, especially with respect to performance considerations. Unless you enable the heavier analyses (which are off by default and clearly marked), it seems this could be fairly light - but of course only real tests will tell.

Getting Started

Most Kamon modules are enabled by adding their respective Akka extension. We found the quickest way to get started is to:

  • Add the Kamon dependencies to your project as described in the official getting started guide
  • Enable the Metrics and LogReporter extensions in your Akka configuration
  • Start your application with AspectJ run-time weaving enabled. How to do this depends on how you start your application. We used the sbt-aspectj plugin.

Enabling AspectJ weaving can require a little bit of twiddling, but adding the LogReporter should give you quick feedback on whether you were successful: it should start periodically logging metrics information.
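For a concrete starting point, the dependency side of the first step might look like the snippet below. The artifact names follow the Kamon 0.x convention, but treat the exact names and the version as assumptions to verify against the official getting started guide:

// build.sbt: assumed Kamon 0.x artifacts for the Metrics and
// LogReporter extensions (check names and versions in the guide)
libraryDependencies ++= Seq(
  "io.kamon" %% "kamon-core"         % "0.3.5",
  "io.kamon" %% "kamon-log-reporter" % "0.3.5"
)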

Next steps are:

  • Enabling Spray or Play plugins
  • Adding the trace token to your logging
  • Enabling other backends (e.g. statsd)
  • Adding custom application-specific metrics and trace points
Conclusion

Kamon looks like a healthy, useful tool that not only has great potential, but also provides some great quick wins.

The documentation that is available is of great quality, but there are some parts of the system that are not so well covered. Luckily, the source code is very approachable.

It is clear the Kamon project is not very popular yet, judging by some of the rough edges we encountered. These, however, seem to be mostly superficial: the core ideas and implementation seem solid. We highly recommend taking a look.

 

Remco Beckers

Arnout Engelen

Exploring Akka Stream's TCP Back Pressure

Xebia Blog - Wed, 01/14/2015 - 15:48

Some years ago, when Reactive Streams lived in utopia, we got the assignment to build a high-volume message broker. A considerable amount of the code in the solution we delivered back then was dedicated to preventing this broker from being flooded with messages in case an endpoint became slow.

How would we have solved this problem today with the shiny new Akka Reactive Stream (experimental) implementation just within reach?

In this blog we explore Akka Streams in general and TCP streams in particular. Moreover, we show how much easier we can solve the challenge we faced back then using Streams.

A use-case for TCP Back Pressure

The high-volume message broker mentioned in the introduction basically did the following:

  • Read messages (from syslog) from a TCP socket
  • Parse the message
  • Forward the message to another system via a TCP connection

For optimal throughput multiple TCP connections were available, which allowed delivering messages to the endpoint system in parallel. The broker was supposed to handle about 4000 - 6000 messages per second. Below is a schema of the noteworthy components and message flow:

[Figure: Waterhose2 - schema of the broker components and message flow]

Naturally we chose Akka as the framework to implement this application. Our approach was to have an Actor for every TCP connection to the endpoint system. An incoming message was then forwarded to one of these connection Actors.

The biggest challenge was related to back pressure: how could we prevent our connection Actors from being flooded with messages in case the endpoint system slowed down or was not available? With 6000 messages per second an Actor's mailbox is flooded very quickly.

Another requirement was that message buffering had to be done by the client application, which was syslog. Syslog has excellent facilities for that; durable mailboxes or anything of the like were out of the question. Therefore, we had to find a way to pull only as many messages into our broker as it could deliver to the endpoint. In other words: provide our own back pressure implementation.

A considerable amount of the code in the solution we delivered back then was dedicated to back pressure. During one of our recurring innovation days we tried to figure out how much easier the back pressure challenge would have been had Akka Streams been available.

Akka Streams in a nutshell

In case you are new to Akka Streams, here is some basic information to help you understand the rest of this blog.

The core ingredients of a Reactive Stream consist of three building blocks:

  • A Source that produces some values
  • A Flow that performs some transformation of the elements produced by a Source
  • A Sink that consumes the transformed values of a Flow

Akka Streams provides a rich DSL through which transformation pipelines can be composed using the three building blocks mentioned above.

A transformation pipeline executes asynchronously. For that to work it requires a so-called FlowMaterializer, which will execute every step of the pipeline. A FlowMaterializer uses Actors for the pipeline's execution, even though from a usage perspective you are unaware of that.

A basic transformation pipeline looks as follows:


  import akka.stream.scaladsl._
  import akka.stream.FlowMaterializer
  import akka.actor.ActorSystem

  implicit val actorSystem = ActorSystem()
  implicit val materializer = FlowMaterializer()

  val numberReverserFlow: Flow[Int, String] = Flow[Int].map(_.toString.reverse)

  numberReverserFlow.runWith(Source(100 to 200), ForeachSink(println))

We first create a Flow that consumes Ints and transforms them into reversed Strings. For the Flow to run we call the runWith method with a Source and a Sink. After runWith is called, the pipeline starts executing asynchronously.

The exact same pipeline can be expressed in various ways, such as:


    //Use the via method on the Source to pass in the Flow
    Source(100 to 200).via(numberReverserFlow).to(ForeachSink(println)).run()

    //Directly call map on the Source.
    //The disadvantage of this approach is that the transformation logic cannot be re-used.
    Source(100 to 200).map(_.toString.reverse).to(ForeachSink(println)).run()

For more information about Akka Streams you might want to have a look at this Typesafe presentation.

A simple reverse proxy with Akka Streams

Let's move back to our initial quest. The first task we tried to accomplish was to create a stream that accepts data from an incoming TCP connection and forwards it to a single outgoing TCP connection. In that sense this stream was supposed to act as a typical reverse proxy that simply forwards traffic to another connection. The only remarkable quality compared to a traditional blocking/synchronous solution is that our stream operates asynchronously while preserving back pressure.

import java.net.InetSocketAddress
import akka.actor.ActorSystem
import akka.stream.FlowMaterializer
import akka.stream.io.StreamTcp
import akka.stream.scaladsl.ForeachSink

implicit val system = ActorSystem("one-to-one-proxy")
implicit val materializer = FlowMaterializer()

// Listen for incoming TCP connections on localhost:6000
val serverBinding = StreamTcp().bind(new InetSocketAddress("localhost", 6000))

// Handle every incoming connection by piping its data through an outgoing connection
val sink = ForeachSink[StreamTcp.IncomingConnection] { connection =>
  println(s"Client connected from: ${connection.remoteAddress}")
  connection.handleWith(StreamTcp().outgoingConnection(new InetSocketAddress("localhost", 7000)).flow)
}
val materializedServer = serverBinding.connections.to(sink).run()

// Retrieve the address we are actually bound to from the materialized map
serverBinding.localAddress(materializedServer)

First we create the mandatory instances every Akka Reactive Stream requires: an ActorSystem and a FlowMaterializer. Then we create a server binding using the StreamTcp extension that listens for incoming traffic on localhost:6000. With the ForeachSink[StreamTcp.IncomingConnection] we define how to handle each incoming StreamTcp.IncomingConnection: we pass a flow of type Flow[ByteString, ByteString], which consumes the ByteStrings of the IncomingConnection and produces the ByteStrings that are sent back to the client.

In our case the flow of type Flow[ByteString, ByteString] is created by means of StreamTcp().outgoingConnection(endpointAddress).flow. It forwards a ByteString to the given endpointAddress (here localhost:7000) and returns the response as a ByteString as well. Such a flow could also perform some data transformation, like parsing a message; a sketch of that follows below.
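
As a sketch of that idea, the following (hypothetical) flow uppercases each chunk before forwarding it - the transformation itself is purely illustrative and stands in for real parsing logic:

  import akka.util.ByteString
  import akka.stream.scaladsl.Flow

  // Hypothetical example: transform each incoming chunk, then forward it
  // to the endpoint over the outgoing TCP connection.
  val transformAndForward: Flow[ByteString, ByteString] =
    Flow[ByteString]
      .map(bytes => ByteString(bytes.utf8String.toUpperCase))
      .via(StreamTcp().outgoingConnection(new InetSocketAddress("localhost", 7000)).flow)

  // ...which can then be handed to a connection just like before:
  // connection.handleWith(transformAndForward)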

Parallel reverse proxy with a Flow Graph

Forwarding messages from one connection to another will not meet our self-defined requirements. We need to be able to forward messages from a single incoming connection to a configurable number of outgoing connections.

Covering this use-case is slightly more complex. For it to work we make use of the flow graph DSL.


  import akka.util.ByteString
  import akka.stream.scaladsl._
  import akka.stream.scaladsl.FlowGraphImplicits._

  private def parallelFlow(numberOfConnections: Int): Flow[ByteString, ByteString] = {
    PartialFlowGraph { implicit builder =>
      val balance = Balance[ByteString] // fans the stream out over the parallel connections
      val merge = Merge[ByteString]     // fans the responses back into a single stream
      UndefinedSource("in") ~> balance

      // one outgoing TCP connection per requested degree of parallelism
      1 to numberOfConnections map { _ =>
        balance ~> StreamTcp().outgoingConnection(new InetSocketAddress("localhost", 7000)).flow ~> merge
      }

      merge ~> UndefinedSink("out")
    } toFlow (UndefinedSource("in"), UndefinedSink("out"))
  }

We construct a flow graph that makes use of the junction vertices Balance and Merge, which allow us to fan the stream out to several other streams. For the number of parallel connections we want to support, we create a fan-out flow starting with a Balance vertex, followed by an outgoingConnection flow, which is then merged back together with a Merge vertex.

From an API perspective we faced the challenge of how to connect this flow to our IncomingConnection. Almost all flow graph examples take a concrete Source and Sink implementation as their starting point, whereas the IncomingConnection exposes neither a Source nor a Sink: it only accepts a complete flow as input. Consequently, we needed a way to abstract over the Source and Sink that our fan-out flow requires.

The flow graph API offers the PartialFlowGraph class for that, which allows you to work with abstract Sources and Sinks (UndefinedSource and UndefinedSink). It took us quite some time to figure out how they work: simply declaring an UndefinedSource or UndefinedSink without a name won't do. It is essential to give the UndefinedSource/Sink a name that is identical to the one used in the UndefinedSource/Sink passed to the toFlow method. A bit more documentation on this topic would help.

Once the fan-out flow is created, it can be passed to the handleWith method of the IncomingConnection:

...
val sink = ForeachSink[StreamTcp.IncomingConnection] { connection =>
  println(s"Client connected from: ${connection.remoteAddress}")
  val parallelConnections = 20
  connection.handleWith(parallelFlow(parallelConnections))
}
...

As a result, this implementation delivers all incoming messages to the endpoint system in parallel while still preserving back-pressure. Mission completed!

Testing the Application

To test our solution we wrote two helper applications:

  • A blocking client that pumps as many messages as possible into a socket connection to the parallel reverse proxy
  • A server that delays responses with a configurable latency in order to mimic a slow endpoint. The parallel reverse proxy forwards messages via one of its connections to this endpoint.

The following chart depicts how throughput increases with the number of connections. Due to nondeterministic concurrent behavior there are some spikes in the results, but the trend shows a clear correlation between throughput and the number of connections:

[Figure: performance chart - throughput vs. number of connections]

End-to-end solution

The end-to-end solution can be found here.
By changing the numberOfConnections variable you can see the impact on performance yourself.

Check it out! ...and go with the flow ;-)

Information about TCP back pressure with Akka Streams

At the time of this writing there was not much information available about Akka Streams, since it is one of the newest toys from the Typesafe factory. Here are some valuable resources that helped us get started:

DevOps Primer: Who Is Involved


Implementing the concept of DevOps requires that a number of roles work together in a larger team, all focused on three simple goals. Heretofore these teams, even though focused on the greater good of the organization, have operated as silos. Operating in silos forces each team or department to maximize its team-specific goals, and maximizing the efficiency of a specific step or process in the flow of work required to deliver value to the business may not yield the most effective or efficient solution overall. DevOps takes a more holistic approach, using systems thinking to view the software value chain. Instead of seeing three or more silos of activity (Agile team, QA and TechOps), a holistic approach sees the software process as a single value chain. The value chain begins with an idea and ends when support is no longer needed. The software value stream can be considered a flow of products and services that are provisioned to deliver a service. Provisioning is a metaphor that can be used to highlight who is involved in delivering software in a DevOps environment.

Provisioning is a term often used in the delivery of telecommunications products and services (and in other industries) to describe providing a user with a service and everything needed to use it. Providing the service may include the equipment, network, software, training and support necessary to begin and to continue using the service. The service is not complete, and not provisioned, until the user can use it in a manner that meets their needs. Viewing delivery as provisioning enforces a systems view of the processes and environments needed.

Developing, deploying and supporting any software-centric service requires a wide range of roles, products and services that are often consolidated into three categories: development teams, QA/testing and technical operations (TechOps). Examples of TechOps roles include configuration and environment management, security, network, release management and tool support, just to name a few. TechOps is charged with providing the environments in which services are delivered and ensuring that those environments are safe, resilient and stable (one could name any number of additional attributes).

The roles of development teams are fairly straightforward (that is not to say they are not complex or difficult). Teams, whether Agile or waterfall, build services in a development environment, and those services then migrate through other environments until they are resident in some sort of production environment or environments. Development, QA and TechOps must understand and either create or emulate these environments to ensure that the business needs are met (and that the software runs). Additionally, the development, enhancement and maintenance process generally uses a wide range of tools to make writing, building, debugging, testing, promoting and installing code easier. These tools are part of the environment needed to develop and deliver software services.

QA or testing roles help ensure that what is being built works, meets the definition of done and delivers the expected value. The process of testing often requires specialized environments to ensure integration and control. In a similar manner, testing often requires tools for test automation, data preparation and even exploratory testing.

TechOps is typically involved in providing the environment or environments needed to deliver software. Constructing and granting access to environments can often cause bottlenecks and constraints in the flow of value to the user, and an organization embracing DevOps will actively pursue the bottlenecks and constraints that slow the delivery of software. For example, many organizations leverage automation to give development teams more control over nonproduction environments. Automation shortens the time spent waiting for another department to take an action and frees TechOps personnel to be actively involved in projects AND to manage the overall organizational technical environment.

DevOps helps to remove the roadblocks that slow the delivery of value by ensuring that Agile teams, QA and TechOps personnel work together so that environmental issues don’t get in the way. We can conceive of DevOps as the intersection of Agile teams, QA and TechOps; however, what is more important is the interaction. Interaction builds trust and empowerment so that the flow through the development, test and production environments is smooth. The environments used to build software services are critical. Environments will need to be provisioned regardless of which Agile and lean techniques you are using. Even relatively common processes require specific software and storage to function; consider the tools and coordination needed to use continuous builds and automated testing. If the flow of work has to stop and wait until the environment is ready, the delivery of value will be delayed.


Categories: Process Management

If it needs to happen: Schedule it!

Mike Cohn's Blog - Tue, 01/13/2015 - 15:00

The following is a guest post from Lisa Crispin. Lisa is the co-author with Janet Gregory of "Agile Testing: A Practical Guide for Testers and Agile Teams" and the newly published "More Agile Testing: Learning Journeys for the Whole Team". I highly recommend both of these books--in fact, I recommend reading everything Lisa and Janet write. In the following post, Lisa argues the benefits of scheduling anything that's important. I am a fanatic for doing this. Over the holiday I put fancy new batteries in my smoke detectors that are supposed to last 10 years. So I put a note in my calendar to replace them in 9 years. But, don't schedule time to read Lisa's guest post--just do it now. --Mike

During the holidays, some old friends came over to our house for lunch. We hadn’t seen each other in a few months, though we live 10 minutes apart. We had so much fun catching up. As they prepared to leave, my friend suggested, “Let’s pick a date to get together again soon. So often, six months go by without our seeing each other.” We got out our calendars, and penciled in a date a few weeks away. The chances are good that we will achieve our goal of meeting up soon.

Scheduling time for important activities is key in my professional life, too. Here’s a recent example. My current team has only three testers, and we all have other duties as well, such as customer support. We have multiple code bases, products and platforms to test, so we’re always juggling priorities.

Making time

The product owner for our iOS product wanted us to experiment with automating some regression smoke tests through its UI. Another tester on the team and I decided we’d pair on a spike for this. However, we had so many competing priorities that we kept putting it off. As much as we try to avoid multi-tasking, it seems there is always some urgent interruption.

Finally, we created a recurring daily meeting on the calendar, scheduled shortly after lunchtime when few other meetings were going on. As soon as we did that, we started making the time we needed. We might miss a day here or there, but we’re much more disciplined about taking our time for the iOS test automation. As a result, we made steady, if slow, progress, and achieved many of our goals.

Scheduling help

Even though both of us were new to iOS, pairing gave us courage, and two heads were better than one. We’d still get stuck, though, and we needed the expertise of programmers and testers with experience automating iOS tests. Simply adding a meeting to the calendar with the right people has gotten us the help we needed. Even busy people can spare 30 minutes or an hour out of their week. Our iOS team is in another city two time zones away. If we put a meeting on the calendar with a Zoom meeting link, we can easily make contact at the appointed time. Screensharing enables us to make progress quickly, so we can stick to short time boxes.

Another way we ensured time on our schedule for the automation work was to add stories for it to our backlog. For example, we had stories for specific scripts, starting with writing an automated test for logging into the iOS app. Once we had some working test scripts, we added infrastructure-type chores, for example, get the tests running from the command line so we can add them to the team’s continuous integration later. These stories make our efforts more visible. As a result, team members anticipate our meeting invitations and think of ideas to overcome challenges with the automation.

Time for testing

Putting time on the calendar works when I need to pair with a programmer to understand a story better, or when we need help with exploratory testing for a major new feature. I can always ask for help at the morning standup, but if we don’t set a specific time, it’s easy for everyone to get involved with other priorities and forget.

The calendar is your friend. Once you create a meeting, you might still need to negotiate what time works for everyone involved, but you’ve started the collaboration process. Of course, if it’s easy to simply walk over to someone’s desk to ask a question, or pick up the phone if they’re not co-located, do that. But if your team is like ours, where programmers and other roles pair full time, and there’s always a lot going on, a scheduled meeting helps everyone plan the time for it.

If you have a tough project ahead, find a pair and set up a recurring meeting to work together. If you need one-off help, add a time-boxed meeting for today or tomorrow. If you need the whole team to brainstorm about some testing issues, schedule a meeting for the time of day with the fewest distractions. And if you haven’t seen an old friend in too long, schedule a date for that, too!
