Software Development Blogs: Programming, Software Testing, Agile, Project Management

Methods & Tools

Subscribe to Methods & Tools if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Building An Infinitely Scaleable Online Recording Campaign For David Guetta

This is a guest repost of an interview posted by Ryan S. Brown that originally appeared on serverlesscode.com. It continues our exploration of building systems on top of Lambda.

Paging David Guetta fans: this week we have an interview with the team that built the site behind his latest ad campaign. On the site, fans can record themselves singing along to his single, “This One’s For You” and build an album cover to go with it.

Under the hood, the site is built on Lambda, API Gateway, and CloudFront. Social campaigns tend to be pretty spiky – when there’s a lot of press a stampede of users can bring infrastructure to a crawl if you’re not ready for it. The team at parall.ax chose Lambda because there are no long-lived servers, and they could offload all the work of scaling their app up and down with demand to Amazon.

James Hall from parall.ax is going to tell us how, starting from nothing, they built an internationalized app that can handle any level of demand in just six weeks.

The Interview
Categories: Architecture

Proper Black Box Testing Case Design – Equivalence Partitioning

Making the Complex Simple - John Sonmez - Wed, 01/20/2016 - 14:00

In today’s IT world, the lines between developers and QA Engineers are being blurred. With the emergence of Agile, Test Driven Development, Continuous Integration, and many other methodologies, software testing is becoming even more critical. To support daily releases, multiple Operating Systems, and multiple browsers, the Development team (QA and Software Engineers) needs the capability […]

The post Proper Black Box Testing Case Design – Equivalence Partitioning appeared first on Simple Programmer.
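The excerpt above stops before any example, so here is a minimal sketch of equivalence partitioning in Python. The `classify_age` validator and its 18-65 range are hypothetical, not from the original post: the input space is divided into partitions the software treats identically, and one representative value per partition is enough to cover each class.

```python
def classify_age(age: int) -> str:
    """Hypothetical validator: accepts ages 18-65 inclusive."""
    if age < 18:
        return "rejected: too young"
    if age > 65:
        return "rejected: too old"
    return "accepted"

# One representative value per equivalence partition is enough:
# testing 30 tells us as much as testing 31, 42, or 64 would.
partitions = {
    "below range": 10,   # any value < 18
    "in range": 30,      # any value in 18..65
    "above range": 80,   # any value > 65
}

for name, value in partitions.items():
    print(f"{name}: classify_age({value}) -> {classify_age(value)}")
```

Boundary value analysis (testing 17, 18, 65 and 66) usually complements this technique, since defects cluster at partition edges.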

Categories: Programming

Achieve The Unthinkable using Hyper-Sprints

Xebia Blog - Wed, 01/20/2016 - 13:17

2015-06-25 AMSTERDAM - World sprint champion Dafne Schippers poses next to the Nuna 7S of the Nuon Solar Team. In the Olympic Stadium, the athlete takes on the Nuon Solar Team, world champions of solar racing. Projects such as Nuna and Forze are coached by Xebia's hardware Scrum coaches.

In my opinion, the best indicator of how "agile" teams actually are is their sprint length. The theory says 2-4 weeks. To be honest, as an agile coach, this doesn't feel agile all the time.

Like I wrote in one of my previous posts, in my opinion the ultimate form of agility is nature. Nature's sprint length seems to vary from the billions of years over which the universe formed to the fraction of a second in which matter takes shape.

Of course, it's nonsense to claim we could end up with sprints of just a few nanoseconds. But on the other hand, our society is speeding up dramatically. Where bringing a service or product to market could take years not long ago, it can now be a matter of days, even hours. Think about the development of disruptive apps and technology like Uber and 3D printing.

In these disruptive examples a sprint length of 2 weeks can be an eternity. Even in Scrum we can be trapped in our patterns here. Why don't we experiment with shorter sprint lengths? All agile rituals are relative in time; during build parties and hackathons I often use sprints of only 30 or 60 minutes: for an hour-long sprint, 5 minutes for planning, 45 minutes for the sprint itself, 5 minutes for the review/demo, and 5 minutes for the retrospective. Combined with a fun party atmosphere and competition, this creates a hyper-productive environment.

Try some hyper sprinting next to your regular sprints. You’ll be surprised how ultra-productive and fun they are. For example, it enables your team to build a car in just an afternoon. Enjoy!

Agile Acceptance Testing Anti-Patterns


Don’t just throw it over the wall.

User acceptance testing (UAT) is a process that confirms that what is produced meets the business needs and requirements. It is a form of validation done by executing the product. For a software product, UAT is performed by running the code in the user's environment and comparing the outcome to the users' needs. In the classic V-model of testing, acceptance testing is the final proof that the requirements have been transformed into functional code. Agile frameworks have incorporated the steps for acceptance testing in numerous ways. Not all of the patterns that have been adopted for Agile User Acceptance Testing (AUAT) are effective, and some are outright counterproductive.

Post-Development AUAT: Waiting to do any user acceptance testing until all of the code has been developed and integrated has been a widely used bad practice for as long as I can remember. It is one of those practices that almost every practitioner in the software development world would agree ought to be different, but that, when push comes to shove, gets adopted anyway. The Agile version of this practice is incorporating all AUAT activities into a hardening sprint. In this scenario the code is placed in a production or production-like environment and AUAT is performed. Everyone holds their breath as they wait for surprises to be found. Finding problems that affect the business value of the product this late in the process is always costly. Post-development AUAT is often combined with other anti-patterns, which only exacerbates the pain.

Fix: Leverage the natural UAT activities built into most Agile frameworks, such as story level acceptance criteria, embedded product owners, demonstrations and continuous integration with testing.
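To make that fix concrete, a story-level acceptance criterion can be written as an executable check that runs in continuous integration every sprint, rather than waiting for a hardening sprint. This is a minimal sketch in Python; the story, the threshold, and the `shipping_cost` function are invented for illustration.

```python
# Story (hypothetical): "As a shopper, I get free shipping on orders of 50.00 or more."
# Acceptance criteria: orders >= 50.00 ship free; smaller orders pay a 4.95 flat fee.

def shipping_cost(order_total: float) -> float:
    """Hypothetical implementation under test."""
    return 0.0 if order_total >= 50.00 else 4.95

def test_free_shipping_at_threshold():
    assert shipping_cost(50.00) == 0.0

def test_paid_shipping_below_threshold():
    assert shipping_cost(49.99) == 4.95

# Running the product owner's own examples on every build replaces a
# one-shot, end-of-project UAT surprise with continuous validation.
test_free_shipping_at_threshold()
test_paid_shipping_below_threshold()
print("acceptance checks passed")
```

The examples themselves should come from the users or product owner; the team's job is to make them executable.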

Users' Responsibility Only: In many organizations, user acceptance testing is done by users of the software. Business users have often never had any professional training in testing, and therefore perform only cursory testing. Note: this is not a condemnation of the level of effort or interest that UAT participants have in the process. Some Agile-ish organizations have elected to continue this practice (almost always coupled with post-development UAT).

Fix: The whole Agile team, including the product owner, should actively participate and facilitate user acceptance testing. The word facilitate is chosen carefully. The Agile team should not perform the UAT for the users, but rather help them plan and execute the process.

Tossing The Software Product Over the Wall: In this scenario, one group creates the product, declares that it is done, and tosses it over the proverbial wall, where another group takes possession. In most circumstances, communication between the two groups is limited. This scenario is most common where development and the subsequent enhancements and support are done by different companies (common in government and large commercial organizations). The lack of team continuity and communication often leads to delivering poor quality. This scenario is almost always combined with the post-development AUAT anti-pattern and sometimes with the users-only anti-pattern.

Fix: Where this behavior is not legally mandated, just don't do it. A DevOps-type model, in which the development team, user subject matter experts, and maintenance personnel interact across the product lifecycle, is a useful tool for breaking through the wall.

AUAT in Production: This scenario, like the others, happens regardless of whether an organization is using plan-based or Agile methods. In an Agile context the impact of AUAT done in production is often worse, because it reflects a poor understanding of Agile and makes it probable that other practices are askew as well.

Fix: Leverage the natural UAT activities built into most Agile frameworks, such as story level acceptance criteria, embedded product owners, demonstrations and continuous integration with testing. Also, review all Agile practices to ensure they are effective and are delivering.

Agile user acceptance testing is most effective when users and team members work together to validate what is being built over the entire development process. Anything else is an anti-pattern.


Categories: Process Management

Cutting Corners

Actively Lazy - Tue, 01/19/2016 - 21:56

The pressure to deliver yesterday is strong. If it’s not customers nagging you, it’s project managers breathing down your neck or your own self-doubt that this should have been simpler: the desire to get the task done quicker can often be irresistible. How do you strike the right balance between cutting corners and polishing the turd?

While working through a feature I maintain a “navigator pad” of things I want to come back to. These are refactorings I’ve spotted, tests that need cleaning up, design smells to look at or just plain questions I’m curious to know the answer to (can foo ever actually be null? is this method really used?) This list ebbs and flows as I’m working through a feature: some days I seem to do nothing but add new things to it, other days I manage to cross half the list off as some much-needed refactoring becomes critical to complete the next change. But the one constant throughout a feature is the nav pad.

Recently I was nearing the end of a feature and my nav pad didn’t seem to be getting any shorter. I’d spent a good bit of time refactoring things, but new problems kept appearing – it didn’t seem like I’d ever be “Done”. The feature was way behind schedule, my self-doubt was growing: I’m trying to do a good job, I don’t want this to take any longer but I keep spotting things I got wrong before or simply missed. Suddenly one morning, within the space of a couple of hours, I crossed 20 items off the nav pad, sat back and realised: it’s empty! I was Done.

The next thing that struck me was what a strange occurrence this was: I couldn’t remember the last time I’d actually crossed everything off the nav pad. There would always be some last refactorings on the list that on balance could wait until another time; some tidy up that could wait until another day; some question that I no longer cared to know the answer to. But for the first time in a long time, I’d crossed everything off!

Then the doubt sets in: have I over-engineered this? Could I have been done quicker? The pressure to cut corners is really strong: we’re always pushed to be done faster, to do the absolute minimum we can get away with. Yet I know what needs to be done, I know what the problems are with this code: I’ve written them all down in the nav pad. If I don’t fix them now, then when?

A pattern I see all too frequently when I come up against a design smell: I can see the design is wrong, the tests are a mess, the production code is a mess; there's definitely a better way, I just can't see it at the minute. I park the refactoring on the nav pad. I come back to it later after ticking off a few more parts of the feature, but I still can't see a way to resolve the design smell. I spend a couple of hours refactoring back and forth; in the end I declare bankruptcy and raise an issue in the issue tracker. If I'm lucky I'll pick up the issue again in a couple of months, have a half-hearted look at it, but realise I can't remember what I was really thinking at the time and close the issue. More likely, after a few months with nobody picking up the issue, I'll quietly close it. My code guilt has been neatly dealt with. But the crap code still remains.

The pressure to cut corners is incredibly strong, that pressure is strongest when you’re facing a particularly difficult design change. You’ve identified a problem in the design, probably made obvious by other changes you’ve made. You’re struggling to correct it, which means it isn’t easy to resolve; but it’s obviously a problem because you’ve already spent time trying to resolve it. That means the next time you come through here you’re going to spot the same problem and hate the you of today for not fixing it. And yet, this moment right now is the clearest you’ve ever understood the problem. If you give up now, you’ll have to reload into memory all the context you’ve got right now – what makes you think you’ll be in less of a rush in six months time? That you’ll have time to re-learn this code? Time to do what should have been done today?

The pressure to be done yesterday is strong, but today is the best you’ve ever understood this code: so use that understanding to leave it better than you found it. If you’ve removed all the sharp edges you saw on your way through then at least you’re leaving the code better than you found it. Tomorrow when you pass this way, you’ll pass through a little quicker, with fewer sharp edges to distract you. But today? Today you have code gardening to do.


Categories: Programming, Testing & QA

Automated deployment of Docker Universal Control Plane with Terraform and Ansible

Xebia Blog - Tue, 01/19/2016 - 20:06

You got into the Docker Universal Control Plane beta and you are ready to get going, and then you see a list of manual commands to set it up. As you don't want to do anything manually, this guide will help you set up DUCP in a few minutes by using just a couple of variables. If you don't know what DUCP is, you can read the post I made earlier. The setup is based on one controller and a configurable number of replicas, which will automatically join the controller to form a cluster. There are a few requirements we need to address to make this work, like setting the external (public) IP while running the installer and passing the controller's certificate fingerprint to the replicas during setup. We will use Terraform to spin up the instances, and Ansible to provision them and let them connect to each other.

Prerequisites

Updated 2016-01-25: v0.7 has been released, and no Docker Hub account is needed anymore because the images have moved to a public namespace. This is updated in the 'master' branch on the GitHub repository. If you still want to try v0.6, you can check out tag 'v0.6'!

Before you start cloning a repository and executing commands, let's go over the prerequisites. You will need:

  • Access to the DUCP beta (during installation you will need to login with a Docker Hub account which is added to the 'dockerorca' organization, tested with v0.5, v0.6 and v0.7. Read update notice above for more information.)
  • An active Amazon Web Services and/or Google Cloud Platform account to create resources
  • Terraform (tested with v0.6.8 and v0.6.9)
  • Ansible (tested with 1.9.4 and 2.0.0.1)

Step 1: Clone the repository

CiscoCloud's terraform.py is used as Ansible dynamic discovery to find our Terraform provisioned instances, so --recursive is needed to also get the Git submodule.

git clone --recursive https://github.com/nautsio/ducp-terraform-ansible
cd ducp-terraform-ansible

Step 2.1: AWS specific instructions

These are the AWS specific instructions, if you want to use Google Cloud Platform, skip to step 2.2.

For the AWS based setup, you will be creating an aws_security_group with rules in place for HTTPS (443) and SSH (22). With Terraform you can easily specify what you need by adding ingress and egress configurations to your security group. By specifying 'self = true' the rule is applied to all resources in the security group being created. In the single aws_instance for the ducp_controller we use the lookup function to get the right AMI from the list specified in vars.tf. Inside each aws_instance we can reference the created security group by using "${aws_security_group.ducp.name}". This is really easy and it keeps the file generic. To configure the number of instances for ducp-replica we use the count parameter. To identify each instance to our Ansible setup, we specify a name using the tag parameter. Because we use the count parameter, we can generate a name by taking a predefined string (ducp-replica) and appending the index of the count, using the concat function like so: "${concat("ducp-replica",count.index)}". The sshUser parameter in the tags block is used by Ansible to connect to the instances. The AMIs are configured inside vars.tf, and by specifying a region the correct AMI will be selected from the list.

variable "amis" {
    default = {
        ap-northeast-1 = "ami-46c4f128"
        ap-southeast-1 = "ami-e378bb80"
        ap-southeast-2 = "ami-67b8e304"
        eu-central-1   = "ami-46afb32a"
        eu-west-1      = "ami-2213b151"
        sa-east-1      = "ami-e0be398c"
        us-east-1      = "ami-4a085f20"
        us-west-1      = "ami-fdf09b9d"
        us-west-2      = "ami-244d5345"
    }
}

The list of AMIs

    ami = "${lookup(var.amis, var.region)}"

The lookup function

Let's configure the variables so you can use it to create the instances. Inside the repository you will find a terraform.tfvars.example file. You can copy or move this file to terraform.tfvars so that Terraform will pick it up during a plan or apply.

cd aws
cp terraform.tfvars.example terraform.tfvars

Open terraform.tfvars with your favorite text editor so you can set up all variables to get the instances up and running.

  • region can be selected from available regions
  • access_key and secret_key can be obtained through the console
  • key_name is the name of the key pair to use for the instances
  • replica_count defines the amount of replicas you want

The file could look like the following:

region = "eu-central-1"
access_key = "string_obtained_from_console"
secret_key = "string_obtained_from_console"
key_name = "my_secret_key"
replica_count = "2"

By executing terraform apply you can create the instances, let's do that now. Your command should finish with:

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

 Step 2.2: Google Cloud Platform specific instructions

In GCP, it's a bit easier to set everything up. Because there are no images/AMIs per region, we can use a disk image with a static name. And because the google_compute_instance has a name variable, you can use the same count trick as on AWS, but this time without the metadata. By classifying the nodes with the tag https-server, port 443 is automatically opened in the firewall. Because you can specify the user that should be created with your chosen key, setting ssh_user is needed to connect with Ansible later on.

Let's setup our Google Cloud Platform variables.

cd gce
cp terraform.tfvars.example terraform.tfvars

Open terraform.tfvars with your favorite text editor so you can set up all variables to get the instances up and running.

The file could look like the following:

project = "my_gce_project"
credentials = "/path/to/file.json"
region = "europe-west1"
zone = "europe-west1-b"
ssh_user = "myawesomeusername"
replica_count = "2"

By executing terraform apply you can create the instances, let's do that now. Your command should finish with:

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Step 3: Running Ansible

The instances should all be there; now let's install the controller and add the replicas. This setup uses terraform.py to retrieve the created instances (and IP addresses) based on the terraform.tfstate file. To make this work you need to specify the location of the tfstate file by setting TERRAFORM_STATE_ROOT to the current directory. Then you specify the script to look up the inventory (-i) and the site.yml where you assign the roles to the hosts.

There are two roles that will be applied to all hosts, called common and extip. Inside common everything is set up to get Docker running on the hosts: it configures the apt repo, installs the docker-engine package and finally installs the docker-py package, which Ansible needs in order to manage Docker. Inside extip, there are two shell commands to look up external IP addresses. If you want to access DUCP on the external IP, that IP must be present inside the certificate that DUCP generates. Because the external IP addresses are not exposed on GCP instances, and I wanted a generic approach where one command provisions both AWS and GCP, I chose to look them up and register the variable extip with whichever lookup succeeded on the instance. The second reason to use the external IP is that all the replicas need the external IP of the controller to register themselves. By passing the --url parameter to the join command, you specify which controller a replica should register with.

--url https://"{{ hostvars['ducp-controller']['extip'] }}"

The extip variable used by replica

The same goes for the certificate fingerprint: a replica must provide the fingerprint of the controller's certificate to register successfully. We can access that variable the same way: "{{ hostvars['ducp-controller']['ducp_controller_fingerprint'].stdout }}". It specifies .stdout to use only the stdout part of the command output, because the registered variable also contains the exit code and other fields.
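For context, a certificate fingerprint like the one the replicas pass along is just a hash over the certificate's DER bytes. A minimal sketch in Python; the SHA-256 choice and the colon-separated formatting are assumptions, so check what your DUCP version actually prints:

```python
import hashlib

def cert_fingerprint(der_bytes):
    """SHA-256 fingerprint as colon-separated hex pairs (assumed format)."""
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Illustration only; a real fingerprint is computed over actual DER bytes.
print(cert_fingerprint(b"not a real certificate"))
```

Comparing the fingerprint the replica received against one computed independently is a simple way to verify you are joining the controller you think you are.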

To supply external variables, you can inject vars.yml through --extra-vars. Let's setup the vars.yml by copying the example file to vars.yml and configure it.

cd ../ansible
cp vars.yml.example vars.yml

As stated before, the installer will log in to Docker Hub and download images that live under the dockerorca organization. Your account needs to be added to this organization for the installation to succeed. Fill in your Docker Hub account details in vars.yml, and choose an admin user and admin password for your installation. If you use ssh-agent to store your SSH private keys, you can proceed with the ansible-playbook command; otherwise, specify your private key file by adding --private-key <priv_key_location> to the list of arguments.

Let's run Ansible to set up DUCP. You need to change to the directory where the terraform.tfstate file resides, or change TERRAFORM_STATE_ROOT accordingly.

cd ../{gce,aws}
TERRAFORM_STATE_ROOT=. ansible-playbook -i ../terraform.py/terraform.py \
                       ../ansible/site.yml \
                       --extra-vars "@../ansible/vars.yml"

If all went well, you should see something like:

PLAY RECAP *********************************************************************
ducp-controller            : ok=13   changed=9    unreachable=0    failed=0   
ducp-replica0              : ok=12   changed=8    unreachable=0    failed=0   
ducp-replica1              : ok=12   changed=8    unreachable=0    failed=0

To check out our brand new DUCP installation, run the following command to extract the IP addresses from the created instances:

TERRAFORM_STATE_ROOT=. ../terraform.py/terraform.py --hostfile

Copy the IP address listed in front of ducp-controller, prefix it with https://, and open it in a web browser. You can now log in with your chosen username and password.


Let me emphasise that this is not a production-ready setup, but it can definitely help if you want to try out DUCP and maybe build a production-ready version from it. If you want support for other platforms, please file an issue on GitHub or submit a pull request. I'll be more than happy to look into it for you.

Sponsored Post: Netflix, Macmillan, Aerospike, TrueSight Pulse, LaunchDarkly, Robinhood, StatusPage.io, Redis Labs, InMemory.Net, VividCortex, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Manager - Site Reliability Engineering: Lead and grow the front door SRE team in charge of keeping Netflix up and running. You are an expert in operational best practices and can work with stakeholders to positively move the needle on availability. Find details on the position here: https://jobs.netflix.com/jobs/398

  • Macmillan Learning, a premier e-learning institute, is looking for VP of DevOps to manage the DevOps teams based in New York and Austin. This is a very exciting team as the company is committed to fully transitioning to the Cloud, using a DevOps approach, with focus on CI/CD, and using technologies like Chef/Puppet/Docker, etc. Please apply here.

  • DevOps Engineer at Robinhood. We are looking for an Operations Engineer to take responsibility for our development and production environments deployed across multiple AWS regions. Top candidates will have several years' experience as a Systems Administrator, Ops Engineer, or SRE at massive scale. Please apply here.

  • Senior Service Reliability Engineer (SRE): Drive improvements to help reduce both time-to-detect and time-to-resolve while concurrently improving availability through service team engagement.  Ability to analyze and triage production issues on a web-scale system a plus. Find details on the position here: https://jobs.netflix.com/jobs/434

  • Manager - Performance Engineering: Lead the world-class performance team in charge of both optimizing the Netflix cloud stack and developing the performance observability capabilities which 3rd party vendors fail to provide.  Expert on both systems and web-scale application stack performance optimization. Find details on the position here https://jobs.netflix.com/jobs/860482

  • Senior Devops Engineer - StatusPage.io is looking for a senior devops engineer to help us in making the internet more transparent around downtime. Your mission: help us create a fast, scalable infrastructure that can be deployed to quickly and reliably.

  • Software Engineer (DevOps). You are one of those rare engineers who loves to tinker with distributed systems at high scale. You know how to build these from scratch, and how to take a system that has reached a scalability limit and break through that barrier to new heights. You are a hands on doer, a code doctor, who loves to get something done the right way. You love designing clean APIs, data models, code structures and system architectures, but retain the humility to learn from others who see things differently. Apply to AppDynamics

  • Software Engineer (C++). You will be responsible for building everything from proof-of-concepts and usability prototypes to deployment- quality code. You should have at least 1+ years of experience developing C++ libraries and APIs, and be comfortable with daily code submissions, delivering projects in short time frames, multi-tasking, handling interrupts, and collaborating with team members. Apply to AppDynamics
Fun and Informative Events
  • Aerospike, the high-performance NoSQL database, hosts a 1-hour live webinar on January 28 at 1PM PST / 4 PM EST on the topic of "From Development to Deployment" with Docker and Aerospike. This session will cover what Docker is and why it's important to Developers, Admins and DevOps when using Aerospike; it features an interactive demo showcasing the core Docker components and explaining how Aerospike makes developing & deploying multi-container applications simpler. Please click here to register.

  • Your event could be here. How cool is that?
Cool Products and Services
  • Dev teams are using LaunchDarkly’s Feature Flags as a Service to get unprecedented control over feature launches. LaunchDarkly allows you to cleanly separate code deployment from rollout. We make it super easy to enable functionality for whoever you want, whenever you want. See how it works.

  • TrueSight Pulse is SaaS IT performance monitoring with one-second resolution, visualization and alerting. Monitor on-prem, cloud, VMs and containers with custom dashboards and alert on any metric. Start your free trial with no code or credit card.

  • Turn chaotic logs and metrics into actionable data. Scalyr is a tool your entire team will love. Get visibility into your production issues without juggling multiple tools and tabs. Loved and used by teams at Codecademy, ReturnPath, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • InMemory.Net provides a .Net native in-memory database for analysing large amounts of data. It runs natively on .Net, and provides native .Net, COM & ODBC APIs for integration. It also has an easy-to-use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex measures your database servers’ work (queries), not just global counters. If you’re not monitoring query performance at a deep level, you’re missing opportunities to boost availability, turbocharge performance, ship better code faster, and ultimately delight more customers. VividCortex is a next-generation SaaS platform that helps you find and eliminate database performance problems at scale.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below...

Categories: Architecture

Agile Results for 2016

Agile Results is a personal productivity system for high performance.

Agile Results is a “whole person” approach to personal productivity. It combines proven practices for mind, body, and emotions. It helps you realize your potential the agile way.  Best of all, it helps you make the most of what you’ve got to achieve higher levels of performance with less time, less effort, and more impact.

Agile Results helps you achieve rapid results by focusing on outcomes over activities, spending more time in your strengths, focusing on high-value activities, and using your best energy for your best results.

If you want to use Agile Results, it’s simple. I’ll show you how to get started, right, here, right now. If you already know Agile Results, then this will simply be a refresher.

Write Three Things Down

The way to get started with Agile Results is simple: write down three things that you want to achieve today. Just ask yourself, “What are the Three Wins I want to achieve today?”

For me, today, I want to achieve the following:

  1. I want to get agreement on a shared model across a few of our teams.
  2. I want to create a prototype for business model innovation.
  3. I want to create a distilled view of CEO concerns for a Mobile-First, Cloud-First world.

In my mind, I might just remember: shared model, business model innovation, and CEO. I’ll be focused on the outcomes, which are effectively agreement on a model, innovation in business models for a Mobile-First, Cloud-First world, and a clear representation of top CEO pains, needs, and desired outcomes.

Even if I throw away what I write down, or lose it, the value is in the brief moment I spent to prioritize and visualize the results that I want to achieve. 

This little vision will stick with me as a guide throughout my day.

Think in Three Wins

Writing these three items down helps me focus. It helps me prioritize based on value. It also helps me create a simple vision for my day.

Plus, thinking in Three Wins adds the fun factor.

And, better yet, if somebody asks me tomorrow what my Three Wins were for yesterday, I should be able to tell a story that goes like this: I created rapport and a shared view with our partner teams, I created a working information model for business model innovation for a mobile-first cloud-first world, and I created a simplified view of the key priorities for CEOs in a Mobile-First, Cloud-First world.

When you can articulate the value you create, to yourself and others, it helps provide a sense of progress, and a story of impact.  Progress is actually one of the keys to workplace happiness, and even happiness in life.

In a very pragmatic way, by practicing your Three Wins, you are practicing how to identify and create value.  You are learning what is actually valued, by yourself and others, by the system that you are in.

And value is the ultimate short-cut.  Once you know what value is, you can shave off a lot of waste.

The big idea here is that it’s not your laundry list of To-Dos, activities, and reminders -- it’s your Three Wins or Three Outcomes or Three Results.

Use Your Best Energy for Your Best Results

Some people wonder: why only Three Wins?  There is a lot of science behind the Rule of 3, but I find it better to look at how the Rule of 3 has stood the test of time.  The military uses it.  Marketing uses it.  You probably find yourself using it when you chunk things up into threes.

But don’t I have a bazillion things to do?

Yes. But can I do a bazillion things today? No. What I can do is spend my best energy, on the best things, in my best way.

That’s the best I can do.

But that’s actually a lot. When you focus on high-value outcomes and you really focus your time, attention, and energy on those high-value outcomes, you achieve a lot. And you learn a lot.

Will I get distracted? Sure. But I’ll use my Three Wins to get back on track.

Will I get randomized and will new things land on my plate? Of course, it’s the real-world. But I have Three Wins top of mind that I can prioritize against. I can see if I’m trading up for higher-value, higher-priorities, or if I’m simply getting randomized and focusing on lower-value distractions.

Will I still have a laundry list of To-Do items? I will. But, at the top of that list, I’ll have Three Wins that are my “tests for success” for the day, that I can keep going back to, and that will help me prioritize my list of actions, reminders, and To-Dos.

20-Minute Sprints

I’ll use 20-Minute Sprints to achieve most of my results. It will help me make meaningful progress on things, keep a fast pace, stay engaged with what I’m working on, and to use my best energy.

Whether it’s an ultradian rhythm or just a natural breaking point, 20-Minute Sprints help with focus.

We aren’t very good at focusing if we need to focus “until we are done.” But we are a lot better at focusing if we have a finish line in sight. Plus, with what I’m learning about vision, I wonder if spending more than 20 minutes is where we start to fatigue our eye muscles without even knowing it.

Note that 20-Minute Sprints are really just timeboxing, but I think it’s more helpful to use a specific number. I remember that the 40-Hour Work Week was a good practice from Extreme Programming before it became Sustainable Pace. Once it became Sustainable Pace, teams started doing 70- or 80-hour work weeks, which is not only ineffective, it does more harm than good.

Net net – start with 20-Minute Sprints. If you find another timebox works better for you, then by all means use it, but there does seem to be something special about 20-Minute Sprints for pacing your way through work.

If you’re wondering, what if you can’t complete your task in a 20-Minute Sprint? You do another sprint.

All the 20-Minute Sprint does is give you a simple timebox to focus and prioritize your time, attention, and energy, as well as to remind you to take brain breaks. And the 20-minute deadline also helps you sustain a faster pace (more like a “sprint” vs. a “jog” or “walk”).
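The mechanics of a 20-Minute Sprint are simple enough to sketch in a few lines. This is purely illustrative; the `sprint` function and its `work` callback are hypothetical names for the technique, not part of Agile Results itself:

```python
import time

def sprint(minutes: float = 20, work=None):
    """One timeboxed sprint: call `work` repeatedly until the timebox expires.

    `work` is a callable that performs one small unit of progress and
    returns True while there is more to do. Returns the units completed.
    """
    deadline = time.monotonic() + minutes * 60
    units = 0
    while time.monotonic() < deadline:
        if work is None or not work():
            break  # task finished (or nothing to do) before time ran out
        units += 1
    return units  # take a brain break, then run another sprint if needed
```

If the task outlives the timebox, you simply run `sprint` again after a break, which is exactly the “do another sprint” answer above.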

Just Start

I could say so much more, but I’d rather you just start doing Agile Results.

Go ahead and take a moment to think about your Three Wins for today, and go ahead and write them down.

Teach a friend, family member, or colleague Agile Results.  Spread the word.

Help more people bring out their best, even in their toughest situations.

A little clarity creates a lot of courage, and that goes a long way when it comes to making a big impact.

You Might Also Like

10 Big Ideas from Getting Results the Agile Way

10 Personal Productivity Tools from Agile Results

What Life is Like with Agile Results

Categories: Architecture, Programming

Does a Scrum Team Need a Retrospective Every Sprint?

Mike Cohn's Blog - Tue, 01/19/2016 - 16:00

The Scrum sprint cycle calls for a team to do a retrospective at the end of each sprint. Yet in almost every Certified Scrum Master course I teach, I'm asked if teams really need to do a retrospective every sprint.

Usually the logic of the questioner is along the lines of:

  • Our team is so good, we rarely come up with anything to improve at, so we want to stop.
  • We find retrospectives boring, so we want to stop.
  • We're too busy with real work (or retrospectives take too long), so we want to stop.
  • We simply don't like retrospectives, so we want to stop.

So, in this blog post, I want to briefly counter each of those arguments and say why you should still do a retrospective every sprint. Then, I'll end the post by saying maybe--just maybe--you don't really need to do a retrospective every sprint after all. (Did I surprise you with that? Read on ...)

The Team Is Too Good for Retrospectives

Your team is not so good that it cannot get better. I’ve worked with teams that have been doing Scrum for over 10 years, doing retrospectives every two weeks, and they can still identify ways to improve. It is highly unlikely that your team is so good there are no further improvements either to be identified or worth making.

Retrospectives Are Too Boring

No one said a retrospective should be as exciting as the latest Hollywood blockbuster. But there are things you can do to liven them up.

For example, mix things up by asking a ScrumMaster from another team to facilitate your retrospective. The change in style can help. (Be sure to return the favor.) Change the venue, possibly holding a retrospective outdoors, even while walking.

Try a totally different format for the meeting. For example, many teams fall into a habit of always looking for new practices to adopt. Dedicate your next retrospective entirely to discussing what should be dropped from the team’s process. (And, no, “dropping retrospectives” is not allowed.)

Geoff Watts and Paul Goddard offer ten different formats for retrospectives in their video course on retrospectives. Watch it and try some your team hasn’t tried. There are plenty of ways to prevent a retrospective from being boring.

The Team Is Too Busy for Retrospectives

A team that says it is too busy to dedicate time to getting better is taking a very shortsighted view of the future.

In his Seven Habits of Highly Effective People, Stephen Covey used the analogy of a woodcutter cutting a tree for days with a saw. Eventually the saw becomes dull. But with a short-term attitude, the woodcutter will never stop to sharpen the saw.

A Scrum team with a similarly short-term view will never take thirty minutes out of its schedule to look for improvements. Instead they’ll put too much value on the little bit of code that could have been developed during those 30 minutes.

The Team Doesn't Like Retrospectives

This is somewhat a variation on retrospectives being boring. I’ve listed it separately because it does go beyond retrospectives being boring or becoming mundane to some team members. Some team members just flat out don’t like them.

And for those team members, that may just be too bad because everyone on the team is expected to be a professional. And professionals do all parts of their jobs, not just the parts they want to do.

As soon as I finish writing this post, I need to rewrite it to make it better. That isn’t as fun. Then I need to proofread it. That’s not fun at all. Then I have someone else read it. And then I have to accept or reject edits to it. That’s not fun either.

Not every part of our job gets to be fun. If some team members don’t like retrospectives, but if retrospectives are helping the team find ways to improve, the team should be doing them.

When It's OK Not to Do a Retrospective Every Sprint

But I also said I’d let you know when it’s OK not to do a retrospective every sprint. So, when is that?

If your team:

  • Is really good.
  • Has made significant efforts to make sure retrospectives aren't boring.
  • Is not too busy to invest in improvements that don’t pay them back immediately.
  • And understands the value of doing all parts of the job, not just the most pleasant ones.

… and if they work in short sprints, I'll say it's OK for the team to do retrospectives less frequently.

Here's why. The general Scrum rule is to run sprints of four weeks or less. So a team that truly wanted out of retrospectives could just adopt a four-week sprint.

Consider a team that has chosen one-week sprints for a variety of good reasons. But this team so detests retrospectives that they switch to a four-week sprint just to conduct retrospectives less frequently.

This would be a bad change, unless the change is appropriate for reasons other than just a desire to do retrospectives less frequently.

And so, a good ScrumMaster should really encourage the team to do retrospectives every sprint, arguing against the four objections covered above. But the ScrumMaster should be open in some rare cases to a team doing a retrospective perhaps every other sprint when using one- or two-week sprints.

I want to be clear that I always try to talk a team out of this. I always try to convince teams to do retrospectives every sprint. But if a team really has achieved a very, very high level of performance and is doing short sprints (usually one week), I am willing to acquiesce to their arguments.

I'll then have them do a retrospective every other sprint. For most teams on one-week sprints, that is still more frequent than a retrospective at the end of every four-week sprint.

Come Back Next Week for My Favorite Way to Run a Retrospective

Be sure to come back next week. There are many ways to conduct a retrospective, but next week I’ll share my favorite way.

What Do You Think?

Let me know what you think in the comments below. Do you ever skip retrospectives? Under what circumstances do you think it’s OK (if ever)?

Four Tips to Writing Better and Faster

A colleague asked me for some tips about writing. With hundreds of articles, blog posts, and 10 books, I know what works for me. I suspect some of these ideas will work for you, too.

Tip 1:  Write every day.

Write for 15 minutes every day. This practice exercises your writing muscles. For me, it’s a little different than all the email I write :-)

Tip 2: Think about the stories you want to tell in an article.

People love stories. If you include a story, readers will connect with your work, because they can identify with the situation regardless of whether they agree with you.

You might not like my story approach. Think about what you like to read. What pulls you in? Write like that (not the same words, the same approach).

Tip 3: Writing is not editing.

For me, writing has three parts:

  • Gather the ideas. If you want to outline, this is a great time to do it. Just remember that an outline is a guide, not rules.
  • Write down. 
  • Edit. This is where you use the red squiggly lines and the spell/grammar checker. I excise passive voice in my non-fiction. I look for a lower grade level (about 6 is what I aim for) and a high readability score. 

When I write (down), I don’t edit. I spew words onto the page. It’s almost a game: how fast can I write? I write about 750-1000 words an hour. That’s pretty darn close to an entire article (about 1,000 words). After I’m done with the writing-down, I can edit. 
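The grade-level check mentioned in the editing step is easy to sketch in code. This is a simplified Flesch-Kincaid calculation using a crude vowel-group syllable heuristic, not what any commercial checker actually does:

```python
import re

def grade_level(text: str) -> float:
    """Approximate Flesch-Kincaid grade level.

    Syllables are estimated by counting vowel groups, which is a rough
    heuristic; real readability tools use dictionaries and better rules.
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syllables = sum(
        max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words
    )
    return 0.39 * (n_words / sentences) + 11.8 * (n_syllables / n_words) - 15.59
```

Short sentences built from short words score low; long, polysyllabic sentences push the score well past the grade-6 target.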

Tip 4: People will disagree with you

When you write non-fiction, people will disagree with you. (Heck, they probably disagree with fiction, too!) That’s fine. It’s their loss if they disregard your ideas. Everyone has their own experience. If you tell stories, provide tips, and write from your experience, you are authentic. You also build your self-confidence, and the writing gets easier over time.

If you would like to practice your writing, I have an online workshop starting in March. See http://www.jrothman.com/syllabus/2015/12/writing-workshop-1-write-non-fiction-to-enhance-your-business-and-reputation/. You will write at least one article during the workshop. 

Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Tue, 01/19/2016 - 13:25

Great equations change the way we perceive the world. They reorchestrate the world - transforming and reintegrating our perception by redefining what belongs together with what. Light and waves. Energy and mass. Probability and position. And they do so in a way that often seems unexpected and even strange.

- Robert P. Crease, The Greatest Equations Ever, Physics Web 2004, in The Mathematics Devotional: Celebrating the Wisdom and Beauty of Mathematics, Clifford A. Pickover

Categories: Project Management

Capabilities Based Planning

Herding Cats - Glen Alleman - Mon, 01/18/2016 - 19:21

I'm working two programs where Earned Value Management is applicable through FAR 34.2 and DFARS 234.2. These programs are also Software Intensive Systems applying Agile development processes. Capabilities Based Planning is defined as ...

A method involving the functional analysis of operational requirements. Capabilities are identified based on the tasks required… Once the required capability inventory is defined, the most cost effective and efficient options to satisfy the requirements are sought. Capabilities Based Planning is planning, under uncertainty, to provide capabilities suitable for a wide range of modern-day challenges and circumstances while working within an economic framework that necessitates choice.

In traditional Agile (I know agilists will be wincing), the development of requirements is emergent. On Software Intensive Systems of Systems in the domain where FAR / DFARS are applicable, we have deadlines and mandatory Capabilities for the outcomes of the work effort. The system must perform in specific ways on specific dates for specific costs.

This specificity of capabilities, cost, and delivery dates is no different than on Enterprise IT projects.

When applying Agile development, we have to address how to bound the program in a contract vehicle. The Agile Manifesto's preference for Customer Collaboration Over Contract Negotiation is fine, except when dealing with a sovereign. So here's how it's done.

Capabilities Based Planning

The needed Capabilities are on contract. The technical and operational requirements needed to provide those capabilities can be emergent and are suitable to agile development methods.

Here are some posts from the past about Capabilities Based Planning.

And of course ...

To make decisions about the analysis of alternatives using Capabilities Based Planning, estimates must be made of the outcomes for each choice in the list of alternatives. Without estimates, there can be no Analysis of Alternatives to determine which capabilities are best suited to meet the mission need or fulfill the business case. There can be no credible decisions in the presence of uncertainty without estimates.
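As an illustration of estimates feeding an Analysis of Alternatives, here is a minimal Monte Carlo comparison of alternatives from three-point cost estimates. The alternative names and numbers are invented for this sketch; they are not from any actual program:

```python
import random

def expected_cost(low, likely, high, trials=10_000, seed=42):
    """Monte Carlo expected cost from a three-point (low/likely/high) estimate."""
    rng = random.Random(seed)
    # random.triangular takes (low, high, mode)
    return sum(rng.triangular(low, high, likely) for _ in range(trials)) / trials

# Hypothetical alternatives with made-up three-point cost estimates ($k).
alternatives = {
    "build in-house": (400, 650, 1200),
    "buy and integrate": (500, 600, 800),
}

# Rank alternatives by simulated expected cost (cheapest first).
ranked = sorted(alternatives, key=lambda a: expected_cost(*alternatives[a]))
```

A real Analysis of Alternatives would weigh capability fit and risk alongside cost, but even this toy version shows the point: without the estimates, there is nothing to rank.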

Related articles:

  • How Think Like a Rocket Scientist - Irreducible Complexity
  • What Do We Mean When We Say "Agile Community?"
  • Can Enterprise Agile Be Bottom Up?
  • Seven Principles of Agile Software Development in the US DOD
  • Two Books in the Spectrum of Software Development
  • There is No Such Thing as Free
  • Empirical Data Used to Estimate Future Performance
  • Agile Software Development in the DOD
  • Thinking, Talking, Doing on the Road to Improvement
Categories: Project Management

Productivity Power Magazine

"Productivity is being able to do things that you were never able to do before." -- Franz Kafka

One of my experiments over the weekend was to do a fast roundup of my productivity articles.

Here it is -- Productivity Power Magazine:

Productivity Power Magazine

I wanted to create a profound knowledge base of principles, patterns, and practices for productivity.  I also wanted to make it fast, really fast, to be able to go through all of my productivity articles that I’ve created for Sources of Insight and on MSDN. 

I also wanted it to be more visual. I wanted a thumbnail of each article, so that I could flip through very quickly.

After looking at a few options, I tried Flipboard.  It’s a simple way to create personal magazines, and world class publications like The New York Times, PEOPLE Magazine, Fast Company and Vanity Fair use Flipboard.

Productivity Power Magazine (A Flipboard Experiment)

Here is my first Flipboard experiment to create Productivity Power Magazine:

Productivity Power Magazine

I think you’ll find Productivity Power Magazine a very fast way to go through all of my productivity articles.  You get to see everything at a glance, scroll through a visual list, and then dive into the ones you want to read.  If you care about productivity, this might be your productivity paradise.

Note that I take a “whole person” approach to productivity, with a focus on well-being.  I draw from positive psychology, sports psychology, project management practices, and a wide variety of sources to help you achieve high-performance.  Ultimately, it’s a patterns and practices approach to productivity to help you think, feel, and do your best, while enjoying the journey.

Some Challenges with Productivity Power Magazine

Flipboard is a fast way to roundup and share articles for a theme.

I do like Flipboard.  I did run into some issues, though, while creating my Productivity Power Magazine:

  1. I wasn’t able to figure out how to create a simpler URL for the landing page.
  2. I wasn’t able to swap out images if I didn’t like what was in the original article.
  3. I couldn’t add an image if the article was missing one.
  4. I couldn’t easily re-sequence the flow of articles in the magazine.
  5. I can’t get my editorial comments to appear.  It seems like all of my write-ups are in the tool, but they don’t show on the page.

That said, I don’t know a faster, simpler, better way to create a catalog of all of my productivity articles at a glance.  What’s nice is that I can go across multiple sources, so it’s a powerful way to round up articles and package them for a specific theme, such as productivity in this case.

I can also see how I can use Flipboard for doing research on the Web, alone or with a team of people, since you can invite people to contribute to your Flipboard.   You can also make Flipboards private, so you can choose which ones you share.

Take Productivity Power Magazine for a spin and let me know how it goes.

Categories: Architecture, Programming

Use Google For Throughput, Amazon And Azure For Low Latency

Which cloud should you use? It may depend on what you need to do with it. What Zach Bjornson needs to do is process large amounts of scientific data as fast as possible, which means reading data into memory as fast as possible. So, he built a benchmark using Google's new multi-cloud PerfKitBenchmarker to figure out which cloud was best for the job.

The results are in a very detailed article: AWS S3 vs Google Cloud vs Azure: Cloud Storage Performance. Feel free to datamine the results for more insights, but overall his conclusions are:
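The title's distinction between throughput and latency comes down to what you time. As an illustration of the two metrics, not of PerfKitBenchmarker itself, here is a minimal sketch measuring time-to-first-byte (latency) and overall MB/s (throughput) for a single HTTP object read; the URL would be whatever object endpoint you are testing:

```python
import time
import urllib.request

def measure_get(url, chunk=1 << 16):
    """Return (time-to-first-byte in seconds, throughput in MB/s) for one GET."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        first = resp.read(1)          # latency: wait for the first byte
        ttfb = time.monotonic() - start
        total = len(first)
        while True:                   # throughput: drain the rest of the object
            data = resp.read(chunk)
            if not data:
                break
            total += len(data)
    elapsed = time.monotonic() - start
    return ttfb, total / elapsed / 1e6
```

Reading a small object is dominated by the latency term, while a multi-gigabyte object is dominated by the throughput term, which is why the two workloads can rank the same providers differently.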

Categories: Architecture

Software Development Linkopedia January 2016

From the Editor of Methods & Tools - Mon, 01/18/2016 - 15:44
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about organizational debt, product owner types, flat structures, mature developers, keeping your head cool, working with UX, technical requirements management, software architecture, group problem-solving techniques and thinking slow in software development. […]

Rediscover Your Creativity

Making the Complex Simple - John Sonmez - Mon, 01/18/2016 - 14:00

In “The Necessity of Creativity,” we learned that good creative work is often the difference between spectacular success and bankruptcy for any company. Now let’s examine how we can become more creative and understand the relationship it has with happiness. Are You Naturally Creative? Almost everyone likes to think they are somewhat creative. But is […]

The post Rediscover Your Creativity appeared first on Simple Programmer.

Categories: Programming

DevNexus 2016 in Atlanta, GA

Coding the Architecture - Simon Brown - Mon, 01/18/2016 - 13:33

I'm pleased to say I'll be in the United States next month for the DevNexus 2016 conference that is taking place in Atlanta, GA. In addition to a number of talks about software architecture, I'll also be running my popular "The Art of Visualising Software Architecture" workshop. Related to the (free) book with the same name, this hands-on workshop is about improving communication and specifically software architecture diagrams. We'll talk about UML and some anti-patterns of "boxes and lines" diagrams, but the real focus is on my "C4 model" and some lightweight techniques for communicating software architecture. The agenda and slides for this 1-day workshop are available online. I hope you'll be able to join me.

Categories: Architecture

SPaMCAST 377 – Empathy, Getting Things Done, Culture Change

http://www.spamcast.net

Listen Now

Subscribe on iTunes

This week’s Software Process and Measurement Cast features three columns.  The first is our essay on empathy. Coaching is a key tool to help individuals and teams reach peak performance, and one of the key attributes of a good coach is empathy. Critical to understanding the role that empathy plays in coaching is understanding the definition of empathy. As a coach, if you can’t connect with those you are coaching, you will not succeed. Let’s learn how to become more empathic.

Our second column features the return of the Software Sensei, Kim Pries.  Kim looks at how we might apply David Allen’s concepts for Getting Things Done (after the book of the same name). Please note the comments reflect the Software Sensei’s interpretation of how Allen’s work might be applied to software development.

Anchoring the cast this week is Gene Hughson bringing an entry from the Form Follows Function Blog.  Today Gene discussed his essay, Changing Organizations Without Changing People.  Gene proclaims, “Changing culture is impossible if you claim to value one thing but your actions demonstrate that you really don’t.”

Remember to help grow the podcast by reviewing the SPaMCAST on iTunes, Stitcher or your favorite podcatcher/player and then share the review! Help your friends find the Software Process and Measurement Cast. After all, friends help friends find great podcasts!

Re-Read Saturday News

We continue the re-read of How to Measure Anything: Finding the Value of “Intangibles” in Business, Third Edition by Douglas W. Hubbard on the Software Process and Measurement Blog. In Chapter 5, we discussed estimation, calibration, and what we know now!

Upcoming Events

I am facilitating the CMMI Capability Challenge.  This new competition showcases thought leaders who are building organizational capability and improving performance. Listeners will be asked to vote on the winning idea which will be presented at the CMMI Institute’s Capability Counts 2016 conference.  The next CMMI Capability Challenge session will be held on February 17 at 11 AM EST.

http://cmmiinstitute.com/conferences#thecapabilitychallenge

In other events, I will give a webinar titled “Discover The Quality of Your Testing Process” on January 19, 2016, at 11:00 am EST.

Organizations that seek to understand and improve their current testing capabilities can use the Test Maturity Model integration (TMMi) as a guide for best practices. The TMMi is the industry standard model of testing capabilities. Comparing your testing organization’s performance to the model provides a gap analysis and outlines a path towards greater capabilities and efficiency. This webinar will walk attendees through a testing assessment that delivers a baseline of performance and a set of prioritized process improvements.

Next SPaMCAST

The next Software Process and Measurement Cast will feature our Interview with Evan Leybourn. Evan returns to the Software Process and Measurement Cast to discuss the “end to IT projects.” We discussed the idea of #NoProject and continuous delivery and whether this is just an “IT” thing or something that can encompass the entire business.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.


Categories: Process Management


Neo4j: Cypher – avoid duplicate calls to NOT patterns

Mark Needham - Sun, 01/17/2016 - 13:19

I’ve been reacquainting myself with the meetup.com dataset ahead of Wednesday’s meetup in London and wanted to write a collaborative filtering type query to work out which groups people in my groups were in.

This started simple enough:

MATCH (member:Member {name: "Mark Needham"})-[:MEMBER_OF]->(group:Group)<-[:MEMBER_OF]-(other:Member)-[:MEMBER_OF]->(otherGroup:Group)
RETURN otherGroup, COUNT(*) AS commonMembers
ORDER BY commonMembers DESC
LIMIT 5

And doesn’t take too long to run:

Cypher version: CYPHER 2.3, planner: COST. 1084378 total db hits in 1103 ms.

However, it was showing up several groups that I’m already a member of so I added in a “WHERE NOT” clause to sort that out:

MATCH (member:Member {name: "Mark Needham"})-[:MEMBER_OF]->(group:Group)<-[:MEMBER_OF]-(other:Member)-[:MEMBER_OF]->(otherGroup:Group)
WHERE NOT (member)-[:MEMBER_OF]->(otherGroup)
RETURN otherGroup, COUNT(*) AS commonMembers
ORDER BY commonMembers DESC
LIMIT 5

Unfortunately, when I ran this, the number of db hits increased by 14x and it now took 3x as long to run:

Cypher version: CYPHER 2.3, planner: COST. 14061442 total db hits in 3364 ms.


The problem is that we’re making lots of duplicate calls to NOT (member)-[:MEMBER_OF]->(otherGroup) because each group shows up lots of times.

This is the ‘reduce cardinality of work in progress’ tip from Michael Hunger’s blog post:

Bonus Query Tuning Tip: Reduce Cardinality of Work in Progress

When following longer paths, you’ll encounter duplicates. If you’re not interested in all the possible paths – but just distinct information from stages of the path – make sure that you eagerly eliminate duplicates, so that later matches don’t have to be executed many multiple times.

We can reduce the WIP in our query by doing the counting of common members first and then filtering out the groups we’re already a member of:

MATCH (member:Member {name: "Mark Needham"})-[:MEMBER_OF]->(group:Group)<-[:MEMBER_OF]-(other:Member)-[:MEMBER_OF]->(otherGroup:Group)
WITH otherGroup, member, COUNT(*) AS commonMembers
WHERE NOT (member)-[:MEMBER_OF]->(otherGroup)
RETURN otherGroup, commonMembers
ORDER BY commonMembers DESC
LIMIT 5

This gets us back down to something closer to the running time/db hits of our initial query:

Cypher version: CYPHER 2.3, planner: COST. 1097114 total db hits in 1004 ms.
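The same “aggregate first, then filter once per distinct value” idea applies outside Cypher too. Here is a minimal Python sketch of the principle, where `other_groups` stands in for one group per traversed path and `is_member` stands in for the expensive NOT-pattern check (names are illustrative):

```python
from collections import Counter

def recommend(other_groups, is_member, top=5):
    """Aggregate first, then filter: run the expensive `is_member` check
    once per distinct group rather than once per traversed path."""
    counts = Counter(other_groups)                  # WITH otherGroup, COUNT(*)
    kept = [(g, n) for g, n in counts.items()       # WHERE NOT (member)-[:MEMBER_OF]->(g)
            if not is_member(g)]
    return sorted(kept, key=lambda gn: -gn[1])[:top]  # ORDER BY ... LIMIT
```

With six paths over three distinct groups, the membership check runs three times instead of six; in the Cypher query above, the same reordering cut the db hits roughly 14x.
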
Categories: Programming