
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Project Management Is Dead (Refined)

An Eternal Flame.

In Mastering Software Project Management (J. Ross Publishing, 2010), Murali Chemuturi and I define software project management as the activities required to plan and lead software projects. Historically, IT projects and most non-Agile frameworks have identified a single person to play this role. Programs, which are made up of multiple projects, include multiple project managers who report to a program manager. However, many forms of Agile have now eschewed the role of the project manager and instead distributed the activities associated with project management across the core team: the product owner, the development team and the Scrum Master. Project management as a role is dead; long live project management the concept.

The product owner is responsible for managing a number of the activities that the project manager or administrator would have been tasked with in the past. The primary role of the product owner is to own and manage the product backlog. Managing the backlog includes prioritizing backlog items and determining the release plan (including scope and date). In many organizations managing the backlog also means the product owner manages the budget, communicates progress, and interacts with external stakeholders. The product owner acts as a leader, providing the team with direction through the backlog; he or she manages the backlog as a tool to exhibit that leadership.

The development team members also pick up some of the project management tasks.  The development team is responsible for identifying and managing the tasks needed to deliver the work they have committed to complete. The development team roles mix creation and innovation with control and management, using techniques such as peer pressure, stand-up meetings and pair working.

The Scrum Master is responsible for facilitating, coaching and motivating the team. Scrum Masters guide teams so that they learn and use Agile techniques, confront delivery problems as they occur and work together as a well-oiled unit. The Scrum Master also serves as a shepherd, staving off interference from outside the team’s boundaries. The Scrum Master interacts with a team or teams, and then lets the team members synthesize and internalize the advice. The Scrum Master is the team’s tactical coach.

In Agile, project management is dead, at least as a single role that leads, directs, controls and administers a project team, because those responsibilities are distributed to the team. I was once asked, “In an Agile project, who is the single person whose throat I can put my foot on to motivate them?” Without dignifying the question, in an Agile environment the answer is far less obvious than pointing to a project manager. The role simply isn’t filled by a single project manager. These responsibilities and tasks are now distributed to those who actually have both the authority and responsibility for project execution.


Categories: Process Management

Sponsored Post: Datadog, Tumblr, Power Admin, Learninghouse, MongoDB, Internap, Aerospike, SignalFx, InMemory.Net, Couchbase, VividCortex, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Make Tumblr fast, reliable and available for hundreds of millions of visitors and tens of millions of users.  As a Site Reliability Engineer you are a software developer with a love of highly performant, fault-tolerant, massively distributed systems. Apply here now! 

  • At Scalyr, we're analyzing multi-gigabyte server logs in a fraction of a second. That requires serious innovation in every part of the technology stack, from frontend to backend. Help us push the envelope on low-latency browser applications, high-speed data processing, and reliable distributed systems. Help extract meaningful data from live servers and present it to users in meaningful ways. At Scalyr, you’ll learn new things, and invent a few of your own. Learn more and apply.

  • UI Engineer: AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data: AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (All Levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • 90 Days. 1 Bootcamp. A whole new life. Interested in learning how to code? Concordia St. Paul's Coding Bootcamp is an intensive, fast-paced program where you learn to be a software developer. In this full-time, 12-week on-campus course, you will learn either .NET or Java and acquire the skills needed for entry-level developer positions. For more information, read the Guide to Coding Bootcamp or visit bootcamp.csp.edu.

  • June 2nd – 4th, Santa Clara: Register for the largest NoSQL event of the year, Couchbase Connect 2015, and hear how innovative companies like Cisco, TurboTax, Joyent, PayPal, Nielsen and Ryanair are using our NoSQL technology to solve today’s toughest big data challenges. Register Today.

  • The Art of Cyberwar: Security in the Age of Information. Cybercrime is an increasingly serious issue both in the United States and around the world; the estimated annual cost of global cybercrime has reached $100 billion, with over 1.5 million victims per day affected by data breaches, DDoS attacks, and more. Learn about the current state of cybercrime and the cybersecurity professionals charged with combating it in The Art of Cyberwar: Security in the Age of Information, provided by Russell Sage Online, a division of The Sage Colleges.

  • MongoDB World brings together over 2,000 developers, sysadmins, and DBAs in New York City on June 1-2 to get inspired, share ideas and get the latest insights on using MongoDB. Organizations like Salesforce, Bosch, the Knot, Chico’s, and more are taking advantage of MongoDB for a variety of ground-breaking use cases. Find out more at http://mongodbworld.com/ but hurry! Super Early Bird pricing ends on April 3.
Cool Products and Services
  • Datadog is a monitoring service for scaling cloud infrastructures that bridges together data from servers, databases, apps and other tools. Datadog provides Dev and Ops teams with insights from their cloud environments that keep applications running smoothly. Datadog is available for a 14 day free trial at datadoghq.com.

  • Here's a little quiz for you: What do these companies all have in common? Symantec, RiteAid, CarMax, NASA, Comcast, Chevron, HSBC, Sauder Woodworking, Syracuse University, USDA, and many, many more? Maybe you guessed it? Yep! They are all customers who use and trust our software, PA Server Monitor, as their monitoring solution. Try it out for yourself and see why we’re trusted by so many. Click here for your free, 30-Day instant trial download!

  • Turn chaotic logs and metrics into actionable data. Scalyr replaces all your tools for monitoring and analyzing logs and system metrics. Imagine being able to pinpoint and resolve operations issues without juggling multiple tools and tabs. Get visibility into your production systems: log aggregation, server metrics, monitoring, intelligent alerting, dashboards, and more. Trusted by companies like Codecademy and InsideSales. Learn more and get started with an easy 2-minute setup. Or see how Scalyr is different if you're looking for a Splunk alternative or Loggly alternative.

  • Instructions for implementing Redis functionality in Aerospike. Aerospike Director of Applications Engineering, Peter Milne, discusses how to obtain the semantic equivalent of Redis operations, on simple types, using Aerospike to improve scalability, reliability, and ease of use. Read more.

  • SQL for Big Data: Price-performance Advantages of Bare Metal. When building your big data infrastructure, price-performance is a critical factor to evaluate. Data-intensive workloads with the capacity to rapidly scale to hundreds of servers can escalate costs beyond your expectations. The inevitable growth of the Internet of Things (IoT) and fast big data will only lead to larger datasets, and a high-performance infrastructure and database platform will be essential to extracting business value while keeping costs under control. Read more.

  • SignalFx: just launched an advanced monitoring platform for modern applications that's already processing 10s of billions of data points per day. SignalFx lets you create custom analytics pipelines on metrics data collected from thousands or more sources to create meaningful aggregations--such as percentiles, moving averages and growth rates--within seconds of receiving data. Start a free 30-day trial!

  • InMemory.Net provides a .NET native in-memory database for analysing large amounts of data. It runs natively on .NET, and provides native .NET, COM & ODBC APIs for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex goes beyond monitoring and measures the system's work on your MySQL and PostgreSQL servers, providing unparalleled insight and query-level analysis. This unique approach ultimately enables your team to work more effectively, ship more often, and delight more customers.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

It’s Not What You Do. It’s What You Do Next.

Mike Cohn's Blog - Tue, 06/09/2015 - 15:00

I see too many teams and product owners obsessing over their entire product backlogs.

You do not need to have your entire product backlog figured out.

You do not need to know what to do. You only need to know what to do next.

Do that. See how your users and customers respond. Then let their feedback guide what to do next.

Don't blindly adopt anything.

Scrum is a self-organizing team that is given a challenge and, to meet that challenge, works in short, timeboxed iterations during which the team meets daily to quickly synchronize its efforts. At the start of each iteration the team meets to plan what it will accomplish. At the end the team demonstrates what has been accomplished and reflects on how well everyone worked together to achieve it.

That's it. Anything else (release planning, burndowns, and so on) is optional. Stick to the above and find the local optimizations that fit your environment. No expert knows more about your company than you do.

Software Development Linkopedia June 2015

From the Editor of Methods & Tools - Tue, 06/09/2015 - 13:27
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about software test automation, what is programming, model-based testing, project estimation (or not), software architecture, agile software development and a description of how Crisp (Henrik Kniberg’s company) works and why. Web […]

Android Design Support: NavigationView

Xebia Blog - Tue, 06/09/2015 - 09:18

When Google announced that Android apps should follow their Material Design standards, they did not give the developers a lot of tools to actually implement this new look and feel. Of course Google’s own apps were all quickly updated and looked amazing, but the rest of us were left with little more than fancy design guidelines and no real components to use in our apps.

So last week’s release of the Android Design Support Library came as a relief to many. It promises to help us quickly create nice-looking apps that are consistent with the rest of the platform, without having to roll everything ourselves. Think of it as AppCompat’s UI-centric companion.

The NavigationView is the part of this library that I found the most interesting. It helps you create the slick sliding-over-everything navigation drawer that is such a recognizable part of material apps. I will demonstrate how to use this component and how to avoid some common mistakes.


Basic Setup

The basic setup is pretty straightforward: you add a DrawerLayout and NavigationView to your main layout resource:

<android.support.v4.widget.DrawerLayout
  android:id="@+id/drawer_layout"
  xmlns:android="http://schemas.android.com/apk/res/android"
  xmlns:app="http://schemas.android.com/apk/res-auto"
  android:layout_width="match_parent"
  android:layout_height="match_parent"
  android:fitsSystemWindows="true">

  <!-- The main content view -->
  <LinearLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <!-- Toolbar instead of ActionBar so the drawer can slide on top -->
    <android.support.v7.widget.Toolbar
      android:id="@+id/toolbar"
      android:layout_width="match_parent"
      android:layout_height="@dimen/abc_action_bar_default_height_material"
      android:background="?attr/colorPrimary"
      android:minHeight="?attr/actionBarSize"
      android:theme="@style/AppTheme.Toolbar"
      app:titleTextAppearance="@style/AppTheme.Toolbar.Title"/>

    <!-- Real content goes here -->
    <FrameLayout
      android:id="@+id/content"
      android:layout_width="match_parent"
      android:layout_height="0dp"
      android:layout_weight="1"/>
  </LinearLayout>

  <!-- The navigation drawer -->
  <android.support.design.widget.NavigationView
    android:id="@+id/navigation"
    android:layout_width="wrap_content"
    android:layout_height="match_parent"
    android:layout_gravity="start"
    android:background="@color/ternary"
    app:headerLayout="@layout/drawer_header"
    app:itemIconTint="@color/drawer_item_text"
    app:itemTextColor="@color/drawer_item_text"
    app:menu="@menu/drawer"/>

</android.support.v4.widget.DrawerLayout>

And a drawer.xml menu resource for the navigation items:

<menu xmlns:android="http://schemas.android.com/apk/res/android">
  <!-- group with single selected item so only one item is highlighted in the nav menu -->
  <group android:checkableBehavior="single">
    <item
      android:id="@+id/drawer_item_1"
      android:icon="@drawable/ic_info"
      android:title="@string/item_1"/>
    <item
      android:id="@+id/drawer_item_2"
      android:icon="@drawable/ic_help"
      android:title="@string/item_2"/>
  </group>
</menu>

Then wire it up in your Activity. Notice the nice onNavigationItemSelected(MenuItem) callback:

public class MainActivity extends AppCompatActivity implements
    NavigationView.OnNavigationItemSelectedListener {

  private static final long DRAWER_CLOSE_DELAY_MS = 250;
  private static final String NAV_ITEM_ID = "navItemId";

  private final Handler mDrawerActionHandler = new Handler();
  private DrawerLayout mDrawerLayout;
  private ActionBarDrawerToggle mDrawerToggle;
  private int mNavItemId;

  @Override
  protected void onCreate(final Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    mDrawerLayout = (DrawerLayout) findViewById(R.id.drawer_layout);
    Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
    setSupportActionBar(toolbar);

    // load saved navigation state if present
    if (null == savedInstanceState) {
      mNavItemId = R.id.drawer_item_1;
    } else {
      mNavItemId = savedInstanceState.getInt(NAV_ITEM_ID);
    }

    // listen for navigation events
    NavigationView navigationView = (NavigationView) findViewById(R.id.navigation);
    navigationView.setNavigationItemSelectedListener(this);

    // select the correct nav menu item
    navigationView.getMenu().findItem(mNavItemId).setChecked(true);

    // set up the hamburger icon to open and close the drawer
    mDrawerToggle = new ActionBarDrawerToggle(this, mDrawerLayout, toolbar, R.string.open,
        R.string.close);
    mDrawerLayout.setDrawerListener(mDrawerToggle);
    mDrawerToggle.syncState();

    navigate(mNavItemId);
  }
  
  private void navigate(final int itemId) {
    // perform the actual navigation logic, updating the main content fragment etc
  }

  @Override
  public boolean onNavigationItemSelected(final MenuItem menuItem) {
    // update highlighted item in the navigation menu
    menuItem.setChecked(true);
    mNavItemId = menuItem.getItemId();
    
    // allow some time after closing the drawer before performing real navigation
    // so the user can see what is happening
    mDrawerLayout.closeDrawer(GravityCompat.START);
    mDrawerActionHandler.postDelayed(new Runnable() {
      @Override
      public void run() {
        navigate(menuItem.getItemId());
      }
    }, DRAWER_CLOSE_DELAY_MS);
    return true;
  }

  @Override
  public void onConfigurationChanged(final Configuration newConfig) {
    super.onConfigurationChanged(newConfig);
    mDrawerToggle.onConfigurationChanged(newConfig);
  }

  @Override
  public boolean onOptionsItemSelected(final MenuItem item) {
    if (item.getItemId() == android.support.v7.appcompat.R.id.home) {
      return mDrawerToggle.onOptionsItemSelected(item);
    }
    return super.onOptionsItemSelected(item);
  }

  @Override
  public void onBackPressed() {
    if (mDrawerLayout.isDrawerOpen(GravityCompat.START)) {
      mDrawerLayout.closeDrawer(GravityCompat.START);
    } else {
      super.onBackPressed();
    }
  }

  @Override
  protected void onSaveInstanceState(final Bundle outState) {
    super.onSaveInstanceState(outState);
    outState.putInt(NAV_ITEM_ID, mNavItemId);
  }
}
Extra Style

This setup results in a nice-looking menu with some default styling. If you want to go a bit further, you can add a header view to the drawer and add some colors to the navigation menu itself:

<android.support.design.widget.NavigationView
  android:id="@+id/navigation"
  android:layout_width="wrap_content"
  android:layout_height="match_parent"
  android:layout_gravity="start"
  android:background="@color/drawer_bg"
  app:headerLayout="@layout/drawer_header"
  app:itemIconTint="@color/drawer_item"
  app:itemTextColor="@color/drawer_item"
  app:menu="@menu/drawer"/>

Here the drawer_item color is actually a ColorStateList, whose checked state is used for the currently active navigation item:

<selector xmlns:android="http://schemas.android.com/apk/res/android">
  <item android:color="@color/drawer_item_checked" android:state_checked="true" />
  <item android:color="@color/drawer_item_default" />
</selector>
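
If you want to show something above the menu items, the drawer_header layout referenced by app:headerLayout can be an ordinary layout resource. The original post does not show this file, so the following is only a minimal sketch; the sizes, colors and strings here are placeholder assumptions:

<!-- res/layout/drawer_header.xml (hypothetical contents) -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
  android:layout_width="match_parent"
  android:layout_height="150dp"
  android:background="?attr/colorPrimary"
  android:gravity="bottom"
  android:orientation="vertical"
  android:padding="16dp">

  <TextView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="@string/app_name"
    android:textAppearance="@style/TextAppearance.AppCompat.Body1"/>
</LinearLayout>
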
Open Issues

The current version of the library does come with its limitations. My main issue is with the system that highlights the current item in the navigation menu. The itemBackground attribute of the NavigationView does not handle the checked state of the item correctly: somehow either all items are highlighted or none of them are, which makes this attribute basically unusable for most apps. I ran into more trouble when trying to work with submenus in the navigation items. Once again the highlighting refused to work as expected: updating the selected item in a submenu makes the highlight overlay disappear altogether. In the end it seems that managing the selected item is still a chore that has to be solved manually in the app itself, which is not what I expected from what is supposed to be a drag-and-drop component aimed at taking work away from developers.

Conclusion

I think the NavigationView component missed the mark a little. My initial impression was pretty positive: I was able to quickly put together a nice-looking navigation menu with very little code. The issues with the highlighting of the current item make it more difficult to use than I would expect, but let’s hope these quirks are fixed in an upcoming release of the design library.

You can find the complete source code of the demo project on GitHub: github.com/smuldr/design-support-demo.

Unix: Converting a file of values into a comma separated list

Mark Needham - Mon, 06/08/2015 - 23:23

I recently had a bunch of values in a file that I wanted to paste into a Java program which required a comma separated list of strings.

This is what the file looked like:

$ cat foo2.txt | head -n 5
1.0
1.0
1.0
1.0
1.0

And the idea is that we would end up with something like this:

"1.0","1.0","1.0","1.0","1.0"


The first thing we need to do is quote each of the values. I found a nice way to do this using sed:

$ sed 's/.*/"&"/g' foo2.txt | head -n 5
"1.0"
"1.0"
"1.0"
"1.0"
"1.0"

Now that we’ve got all the values quoted, we need to get rid of the newlines and replace them with commas. The way I’d normally do this is using ‘tr’ and then just not copying the final comma…

$ sed 's/.*/"&"/g' foo2.txt | tr '\n' ','
"1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0",

…but I learnt that we can actually do one better than this using ‘paste’, which allows you to replace the newlines while excluding the last one.

The only annoying thing about paste is that it doesn’t read from a pipe unless you explicitly tell it to, so we can use process substitution instead:

$ paste -s -d ',' <(sed 's/.*/"&"/g' foo2.txt)
"1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0","1.0"

If we’re on a Mac we could even automate the copy/paste step too by piping to ‘pbcopy’:

$ paste -s -d ',' <(sed 's/.*/"&"/g' foo2.txt) | pbcopy
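
As a footnote, paste can read standard input if you name it explicitly with ‘-’, so a plain pipe does work after all (this behaves the same with the BSD and GNU versions, but it’s worth checking on your platform):

$ sed 's/.*/"&"/g' foo2.txt | paste -s -d ',' -
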
Categories: Programming

Extracting software architecture from code

Coding the Architecture - Simon Brown - Mon, 06/08/2015 - 22:47

I ran my Extracting software architecture from code session at Skills Matter this evening and it was a lot of fun. The basic premise of the short workshop part was simple: "here's some code, now draw some software architecture diagrams to describe it". Some people did this individually and others worked in groups. It sounds easy, but you can see for yourself what happened.

Diagrams

There are certainly some common themes, but each diagram is different. Also, people's perception of what architectural information can be extracted from the code differed slightly too, but more on that topic another day. If you want to have a go yourself, the codebase I used was a slightly cut-down version* of the Spring PetClinic web application. The presentation parts of the session were videoed, and I'm creating a 1-day version of this workshop that I'll be running at a conference or two in the autumn.

This again raises some basic questions about the software development industry though. Why, in 2015, do we still not have a consistent way to do this? Why don't we have a shared vocabulary? When will we be able to genuinely call ourselves "software engineers"?

* The original version ships with three different database profile implementations, and I removed two of them for simplicity.

Categories: Architecture

My initial experience with Rust

Eric.Weblog() - Eric Sink - Mon, 06/08/2015 - 19:00
First, a digression about superhero movies

I am apparently incapable of hating any movie about a comic book superhero.

I can usually distinguish the extremes. Yes, I can tell that "The Dark Knight" was much better than "Elektra". My problem is that I tend to think that the worst movies in this genre are still pretty good.

And I have the same sort of unreasonable affection toward programming languages. I have always been fascinated by languages, compilers, and interpreters. My opinions about such things skew toward the positive simply because I find them so interesting.

I do still have preferences. For example, I tend to like strongly typed languages more. In fact, I think it is roughly true that the stricter a compiler is, the more I like it. But I can easily find things to like in languages that I mostly dislike.

I've spent more of my career writing C than any other language. But in use cases where I need something like C, I am increasingly eager for something more modern.

I started learning Rust with two questions:

  • How successful might Rust become as a viable replacement for C?

  • If I enjoy functional programming, how much of that enjoyment can I retain while coding in Rust?

The context

My exploration of Rust has taken place in one of my side projects: https://github.com/ericsink/LSM

LSM is a key-value database with a log-structured merge tree design. It is conceptually similar to Google LevelDB. I first wrote it in C#. Then I rewrote/ported it to F#. Now I have ported it to Rust. (The Rust port is not yet mentioned in the README for that repo, but it's in the top-level directory called 'rs'.)

For the purpose of learning F# and Rust, my initial experience was the same. The first thing I did in each of these languages was to port LSM. In other words, the F# and Rust ports of LSM are on equal footing. Both of them were written by someone who was a newbie in the language.

Anyway, although Rust and F# are very different languages, I have used F# as a reference point for my learning of Rust, so this blog entry walks that path as well.

This is not to say that I think Rust and F# would typically be used for the same kinds of things. I can give you directions from Denver to Chicago without asserting they are similar. Nonetheless, given that Rust is mostly intended to be a modern replacement for C, it has a surprising number of things in common with F#.

The big comparison table

  • Machine model: F#: Managed, .NET CLR; Rust: Native, LLVM
  • Runtime: F#: CLR; Rust: None
  • Style: F#: Multi-paradigm, functional-first; Rust: Multi-paradigm, imperative-first
  • Syntax family: F#: ML-ish; Rust: C-ish
  • Blocks: F#: Significant whitespace; Rust: Curly braces
  • Exception handling: F#: Yes; Rust: No
  • Strings: F#: .NET (UTF-16); Rust: UTF-8
  • Free allocated memory: F#: Automatic, garbage collector; Rust: Automatic, static analysis
  • Type inference: F#: Yes, but not from method calls; Rust: Yes, but only within functions
  • Functional immutable collections: F#: Yes; Rust: No
  • Currying: F#: Yes; Rust: No
  • Partial application: F#: Yes; Rust: No
  • Compiler strictness: F#: Extremely strict; Rust: Even stricter
  • Tuples: F#: Yes; Rust: Yes
  • Discriminated unions:

    F#:

    type Blob =
        | Stream of Stream
        | Array of byte[]
        | Tombstone

    Rust:

    enum Blob {
        Stream(Box),
        Array(Box<[u8]>),
        Tombstone,
    }

  • Mutability: F#: To be avoided; Rust: Safe to use
  • Lambda expressions:

    F#:

    let f =
      (fun acc item -> acc + item)

    Rust:

    let f =
      |acc, &item| acc + item;

  • Higher-order functions: F#: List.fold f 0 a; Rust: a.iter().fold(0, f)
  • Integer overflow checking: F#: No (open Checked); Rust: Yes
  • Let bindings:

    F#:

    let x = 1
    let mutable y = 2

    Rust:

    let x = 1;
    let mut y = 2;

  • if statements are expressions: F#: Yes; Rust: Yes
  • Unit type: F#: (); Rust: ()
  • Pattern matching:

    F#:

    match cur with
    | Some csr -> csr.IsValid()
    | None -> false

    Rust:

    match cur {
        Some(csr) => csr.IsValid(),
        None => false
    }

  • Primary collection type: F#: Linked list; Rust: Vector
  • Naming types: F#: CamelCase; Rust: CamelCase
  • Naming functions, etc.: F#: camelCase; Rust: snake_case
  • Warnings about naming conventions: F#: No; Rust: Yes
  • Type for integer literals: F#: Suffix (0uy); Rust: Inference (0) or suffix (0u8)
  • Project file: F#: foo.fsproj (msbuild); Rust: Cargo.toml
  • Testing framework: F#: xUnit, NUnit, etc.; Rust: Built into Cargo
  • Debug prints: F#: printf "%A" foo; Rust: println!("{:?}", foo);

Memory safety

I have written a lot of C code over the years. More than once while in the middle of a project, I have stopped to explore ways of getting the compiler to catch my memory leaks. I tried the Clang static analyzer and Frama-C and Splint and others. It just seemed like there should be a way, even if I had to annotate function signatures with information about who owns a pointer.

So perhaps you can imagine my joy when I first read about Rust.

Even more cool, Rust has taken this set of ideas so much further than the simple feature I tried to envision. Rust doesn't just detect leaks, it also:

  • frees everything for you, like a garbage collector, but it's not.

  • prevents access to something that has been freed.

  • prevents modifying an iterator while it is being used.

  • prevents all memory corruption bugs.

  • automatically disposes other kinds of resources, not just allocated memory.

  • prevents two threads from having simultaneous access to something.

That last bullet is worth repeating: With Rust, you never stare at your code trying to figure out if it's thread safe or not. If it compiles, then it's thread safe.

Safety is Rust's killer feature, and it is very compelling.
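
To make that concrete, here is a minimal sketch of my own (not from LSM) showing the kind of mistake the compiler turns into a compile-time error rather than a runtime data race:

use std::thread;

fn main() {
    let mut data = vec![1, 2, 3];
    // Ownership of `data` moves into the spawned thread.
    let handle = thread::spawn(move || {
        data.push(4);
    });
    // Uncommenting the next line does not compile (E0382: use of
    // moved value), because two threads could otherwise touch `data`:
    // data.push(5);
    handle.join().unwrap();
}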

Mutability

If you come to Rust hoping to find a great functional language, you will be disappointed. Rust does have a bunch of functional elements, but it is not really a functional language. It's not even a functional-first hybrid. Nonetheless, Rust has enough cool functional stuff available that it has been described as "ML in C++ clothing".

I did my Rust port of LSM as a line-by-line translation from the F# version. This was not a particularly good approach.

  • Functional programming is all about avoiding mutable things, typically by using recursion, monads, computation expressions, and immutable collections.

  • In Rust, mutability should not be avoided, because it's safe. If you are trying to use mutability in a way that would not be safe, your code will not compile.

So if you're porting code from a more functional language, you can end up with code that isn't very Rusty.

If you are a functional programming fan, you might be skeptical of Rust and its claims. Try to think of it like this: Rust agrees that mutability is a problem -- it is simply offering a different solution to that problem.

Learning curve

I don't know if Rust is the most difficult-to-learn programming language I have seen, but it is running in that race.

Anybody remember back when Joel Spolsky used to talk about how difficult it is for some programmers to understand pointers? Rust is a whole new level above that. Compared to Rust, regular pointers are simplistic.

With Rust, we don't just have pointers. We also have ownership, borrows, and lifetimes.

As you learn Rust, you will reach a point where you think you are starting to understand things. And then you try to return a reference from a function, or store a reference in a struct. Suddenly you have lifetime<'a> annotations<'a> all<'a> over<'a> the<'a> place<'a>.

And why did you put them there? Because you understood something? Heck no. You started sprinkling explicit lifetimes throughout your code because the compiler error messages told you to.
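
For a small taste (my example, not the article's): a function that returns a reference borrowed from one of its arguments needs an explicit lifetime once elision can no longer figure it out, as with two inputs:

// `'a` ties the returned reference to the inputs: the result cannot
// outlive the strings it was borrowed from.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() > y.len() { x } else { y }
}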

I'm not saying that Rust isn't worth the pain. I personally think Rust is rather brilliant.

But a little expectation setting is appropriate here. Some programming languages are built for the purpose of making programming easier. (It is a valid goal to want to make software development accessible to a wider group of people.) Rust is not one of those languages.

That said, the Rust team has invested significant effort in excellent documentation (see The Book). And those compiler error messages really are good.

Finally, let me observe that while some things are hard to learn because they are poorly designed, Rust is not one of those things. The deeper I get into this, the more impressed I am. And so far, every single time I thought the compiler was wrong, I was mistaken.

I have found it helpful to try to make every battle with the borrow checker into a learning experience. I do not merely want to end up with the compiler accepting my code. I want to understand more than I did when I started.

Error handling

Rust does not have exceptions for error handling. Instead, error handling is done through the return values of functions.

But Rust actually makes this far less tedious than it might sound. By convention (and throughout the Rust standard library), error handling is done by returning a generic enum type called Result<T,E>. This type can encapsulate either the successful result of the function or an error condition.

On top of this, Rust has a clever macro called try!. Because of this macro, if you read some Rust code, you might think it has exception handling.

// This code was ported from F# which assumes that any Stream
// that supports Seek also can give you its Length.  That method
// isn't part of the Seek trait, but this implementation should
// suffice.
fn seek_len<R>(fs: &mut R) -> io::Result<u64> where R : Seek {
    // remember where we started (like Tell)
    let pos = try!(fs.seek(SeekFrom::Current(0)));

    // seek to the end
    let len = try!(fs.seek(SeekFrom::End(0)));

    // restore to where we were
    let _ = try!(fs.seek(SeekFrom::Start(pos)));

    Ok(len)
}

This function returns std::io::Result<u64>. When it calls the seek() method of the trait object it is given, it uses the try! macro, which will cause an early return of the function if it fails.
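
For instance, here is one way to exercise it (my sketch, not code from LSM; std::io::Cursor implements Seek, so it works as an in-memory stand-in for a file):

use std::io::Cursor;

fn main() {
    let mut cur = Cursor::new(vec![0u8; 100]);
    // The length comes back without disturbing the current position.
    assert_eq!(seek_len(&mut cur).unwrap(), 100);
    assert_eq!(cur.position(), 0);
}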

In practice, I like Rust's Result type very much.

  • The From and Error traits make it easy to combine different kinds of Result/Error values.

  • The distinction between errors and panics seems very clean.

  • I like having the compiler help me be certain that I am propagating errors everywhere I should be. (I dislike scanning library documentation to figure out if I called something that throws an exception I need to handle.)

Nonetheless, when doing a line-by-line port of F# to Rust, this was probably the most tedious issue. Lots of functions that returned () in F# changed to return Result in Rust.
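
A typical instance of that change looks something like this hypothetical function (mine, not from LSM): something that was fire-and-forget in F# now carries the possible I/O failure in its signature:

use std::fs::File;
use std::io::{self, Write};

// In the F# version this kind of function returned unit; in Rust the
// possible I/O failure becomes part of the return type.
fn flush_page(f: &mut File) -> io::Result<()> {
    f.flush()
}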

Type inference

Rust does type inference within functions, but it cannot or will not infer the types of function arguments or function return values.

Very often I miss having the more complete form of type inference one gets in F#. But I do remind myself of certain things:

  • The Rust type system is far more complicated than that of F#. Am I holding a Foo? Or do I have a &Foo (a reference to a Foo)? Am I trying to transfer ownership of this value or not? Being a bit more explicit can be helpful.

  • F# type inference has its weaknesses as well. Most notably, inference doesn't work at all with method calls. This gives the object-oriented features of F# a very odd "feel", as if they don't belong in the language, but it would be unthinkable for a CLR language not to have them.

  • Rust has type inference for integer literals but F# does not.

  • The type inference capabilities of Rust may get smarter in the future.

Iterators

Rust iterators are basically like F# seq (which is an alias for .NET IEnumerable). They are really powerful and provide support for functional idioms like List.map. For example:

fn to_hex_string(ba: &[u8]) -> String {
    let strs: Vec<String> = ba.iter()
        .map(|b| format!("{:02X}", b))
        .collect();
    strs.connect("")
}
  • This function takes a slice (a part of an array) of bytes (u8) and returns its representation as a hex string.

  • Vec is a growable array

  • .iter() means something different than it does in F#. Here, it is the function that returns an iterator for a slice

  • .map() is pretty similar to F#. The argument above is Rust's syntax for a closure.

  • .collect() also means something different than it does in F#. Here, it consumes the iterator and puts all the mapped results into the Vec we asked for.

  • .connect("") is basically a join of all the resulting strings.
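
As a quick usage check (my example, assuming the to_hex_string function above is in scope):

fn main() {
    let bytes = [0xDEu8, 0xAD, 0xBE, 0xEF];
    // Each byte becomes two uppercase hex digits.
    assert_eq!(to_hex_string(&bytes), "DEADBEEF");
}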

However, there are a few caveats.

In Rust, you have a lot more flexibility about whether you are dealing with "a Foo" or "a reference to a Foo", and most of the time, it's the latter. Overall, this is just more work than it is in F#, and using iterators feels like it magnifies that effect.

Performance

I haven't done the sort of careful benchmarking that is necessary to say a lot about performance, so I will say only a little.

  • I typically use one specific test for measuring performance changes. It writes 10 LSM segments and then merges them all into one, resulting in a data file.

  • On that test, the Rust version is VERY roughly 5 times faster than the F# version.

  • The Rust and F# versions end up producing exactly the same output file.

  • The test is not all that fair to F#. Writing an LSM database in F# was always kind of a square-peg/round-hole endeavor.

  • With Rust, the difference in compiling with or without the optimizer can be huge. For example, that test runs 15 times faster with compiler optimizations than it does without.

  • With Rust, the LLVM optimizer can't really do its job very well if it can't do function inlining. Which it can't do across crates unless you use explicit inline attributes or turn on LTO.

  • In F#, there often seems to be a negative correlation between "idiomatic-ness" and "performance". In other words, the more functional and idiomatic your code, the slower it will run.

  • F# could get a lot faster if it could take better advantage of the ability of the CLR to do value types. For example, in F#, option and tuple always cause heap allocations.

Integer overflow

Integer overflow checking is one of my favorite features of Rust.

In languages or environments without overflow checking, unsigned types are very difficult to use safely, so people generally use signed integers everywhere, even in cases where a negative value makes no sense. Rust doesn't suffer from this silliness.

For example, the following code will panic:

let x: u8 = 255;
let y = x + 2;
println!("{}", y);

That said, I haven't quite figured out how to get overflow checking to happen on casts. I want the following code (or something very much like it) to panic:

let x: u64 = 257;
let y = x as u8;
println!("{}", y);

Note that, by default, Rust turns off integer overflow checking in release builds, for performance reasons.
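
If you want the check regardless of build profile, the standard library's checked_* arithmetic methods return an Option instead of panicking (a small sketch of mine, not from the article):

let x: u8 = 255;
// checked_add yields None on overflow instead of panicking or wrapping.
match x.checked_add(2) {
    Some(y) => println!("{}", y),
    None => println!("overflow detected"),
}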

Miscellany
  • F# is still probably the most productive and pleasant language I have ever used. But Rust is far better than C in this regard.

  • IMO, the Read, Write, and Seek traits are a much better design than .NET's Stream, which tries to encapsulate all three concepts.

  • 'cargo test' is a nice, easy-to-use testing framework that is built into Cargo. I like it.

  • crates.io is like NuGet for Rust, and it's integrated with Cargo.

  • If 'cargo bench' wants to always report timings in nanoseconds, I wish it would put in a separator every three digits.

  • I actually like the fact that Rust is taking a stance on things like function_names_in_snake_case and TypeNamesInCamelCase, even to the point of issuing compiler warnings for names that do not match the conventions. I don't agree 100% with their style choices, and that's my point. Being opinionated might help avoid a useless discussion about something that never really matters very much anyway.

  • I miss printf-style format strings.

  • I'm not entirely sure I like the automatic dereferencing feature. I kinda wish the compiler wouldn't help me in this manner until I know what I'm doing.

Bottom line

I am seriously impressed with Rust. Then again, I thought that Eric Bana's Hulk movie was pretty good, so you might want to just ignore everything I say.

In terms of maturity and ubiquity, C has no equal. Still, I believe Rust has the potential to become a compelling replacement for C in many situations.

I look forward to using Rust more.

 

Leveraging AWS to Build a Scalable Data Pipeline

While at Netflix and LinkedIn Siddharth "Sid" Anand wrote some great articles for HS. Sid is back, this time as a Data Architect at Agari. Original article is here.

Data-rich companies (e.g. LinkedIn, Facebook, Google, and Twitter) have historically built custom data pipelines over bare metal in custom-designed data centers. In order to meet strict requirements on data security, fault-tolerance, cost control, job scalability, and uptime, they need to closely manage their core technology. Like serving systems (e.g. web application servers and OLTP databases) that need to be up 24x7 to display content to users the world over, data pipelines need to be up and running in order to pick the most engaging and up-to-date content to display. In other words, updated ranking models, new content recommendations, and the like are what make data pipelines an integral part of an end user’s web experience by picking engaging, personalized content. 

Agari, a data-driven email security company, is no different in its demand for a low-latency, reliable, and scalable data pipeline.  It must process a flood of inbound email and email authentication metrics, analyze this data in a timely manner, often enriching it with 3rd party data and model-driven derived data, and publish findings. One twist is that Agari, unlike the companies listed above, operates completely in the cloud, specifically in AWS.  This has turned out to be more a boon than a disadvantage. 

Below is one such data pipeline used at Agari.

[Figure: Agari Data Pipeline]

Data Flow
Categories: Architecture

Gone Fishin' 2015

Well, not exactly Fishin', but I'll be on a month long vacation starting today.

I won't be posting (much) new content, so we'll all have a break. Disappointing, I know.

Please use this time for quiet contemplation and other inappropriate activities.

See you on down the road...

Categories: Architecture

Design Patterns Simplified: The Bridge Pattern

Making the Complex Simple - John Sonmez - Mon, 06/08/2015 - 16:00

Let me ask you a question: Do you really understand design patterns, you know, the ones in that old Gang of Four book? Perhaps you aren’t even really familiar with the term “design patterns.” It’s OK, you are not alone. Design patterns are simply formal names given to common patterns that seem to emerge from solving […]

The post Design Patterns Simplified: The Bridge Pattern appeared first on Simple Programmer.

Categories: Programming

Visionary Leadership: How To Be a Visionary Leader (Or at Least Look Like One)

“Remember this: Anticipation is the ultimate power. Losers react; leaders anticipate.” – Tony Robbins

Have you ever noticed how some leaders have a knack for "the art of the possible" and for making it relevant to the current landscape?

They are Visionary Leaders and they practice Visionary Leadership.

Visionary Leaders inspire us and show us how we can change the world, at least our slice of it, and create the change we want to be.

Visionary Leaders see things early and they connect the dots.

Visionary Leaders don’t luck their way into the future.  They practice looking ahead for what's pertinent and what's probable.

Visionary Leaders also practice telling stories.  They tell stories of the future and how all the dots connect in a meaningful way.

And they put those stories of the future into context.  They don't tell disjointed stories, or focus on flavor-of-the-month fads.  That's what Trend Hoppers do.

Instead, Visionary Leaders focus on meaningful trends and insights that will play a role in shaping the future in a relevant way.

Visionary leaders tell us compelling stories of the future in a way that motivates us to take action and to make the most of what's coming our way.

Historians, on the other hand, tell us compelling stories of the past.

They excite us with stories about how we've "been there, and done that."

By contrast, Visionary Leaders win our hearts and minds with "the art of the possible" and inspire us to co-create the future, and to use future insights to own our destiny.

And Followers, well, they follow.

Not because they don't see some things coming.  But because they don't see things early enough, and they don't turn what they see into well-developed stories with coherence.

If you want to build your capacity for vision and develop your skills as a Visionary Leader, start to pay attention to signs of the future and connect the dots in a meaningful way.

With great practice, comes great progress, and progressing even a little in Visionary Leadership can make a world of difference for you and those around you.

You Might Also Like

7 Metaphors for Leadership Transformation

10 Free Leadership Skills for Work and Life

10 Leadership Ideas to Improve Your Influence and Impact

Emotional Intelligence is a Key Leadership Skill

Integrating Generalist

Categories: Architecture, Programming

Diagramming microservices with the C4 model

Coding the Architecture - Simon Brown - Mon, 06/08/2015 - 08:40

Here's a question I'm being asked more and more ... how do you diagram a microservices architecture with the C4 software architecture model?

It's actually quite straightforward, provided that you have a defined view of what a microservice is. If a typical modular monolithic application is a container with a number of components running inside it, a microservice is simply a container with a much smaller number of components running inside it. The actual number of components will depend on your implementation strategy. It could range from the very simple (i.e. one, where a microservice is a container with a single component running inside) through to something like a mini-layered or hexagonal architecture. And by "container", I mean anything ranging from a traditional web server (e.g. Apache Tomcat, IIS, etc) through to something like a self-contained Spring Boot or Dropwizard application. In concrete terms then...

  • System context diagram: No changes ... you're still building a system with users (people) and other system dependencies.
  • Containers diagram: If each of your microservices can be deployed individually, then that should be reflected on the containers diagram. In other words, each microservice is represented by a separate container.
  • Component diagrams: Depending on the complexity of your microservices, I would question whether drawing a component diagram for every microservice is worth the effort. Of course, if each microservice is a mini-layered or hexagonal architecture then perhaps there's some value. I would certainly consider using something like Structurizr for doing this automatically from the code though.

So there you go, that's how I would approach diagramming a microservices architecture with the C4 model.

Categories: Architecture

SPaMCAST 345 ‚Äď Cognitive Biases, QA Corner, TameFlow

Software Process and Measurement Cast - Sun, 06/07/2015 - 22:00

The Software Process and Measurement Cast 345 features our essay on Cognitive Biases and two new columns. The essay on cognitive bias provides important tools for anyone who works on a team or interfaces with other people! A sample from the podcast:

“The discussion of cognitive biases is not a theoretical exercise. Even a relatively simple process such as sprint planning in Scrum is influenced by the cognitive biases of the participants. Even the planning process itself is built to use cognitive biases, like the anchor bias, to help the team come to consensus efficiently. How all the members of the team perceive their environment and the work they commit to delivering will influence the probability of success; therefore cognitive biases need to be understood and managed.”

The first of the new columns is Jeremy Berriault’s QA Corner.  Jeremy’s first QA Corner discusses root cause analysis and some errors that people make when doing root cause analysis. Jeremy is a leader in the world of quality assurance and testing and was originally interviewed on the Software Process and Measurement Cast 274.

The second new column features Steve Tendon discussing his great new book, Hyper-Productive Knowledge Work Performance.  Our intent is to discuss the book chapter by chapter.  This is very much like the re-read we do on the blog weekly, but with the author.  Steve has offered SPaMCAST listeners a great discount if you use the link shown above.

As part of the chapter-by-chapter discussion of Steve’s book we are embedding homework questions.  The first question we pose is “Is the concept of hyper-productivity transferable from one group or company to another?” Send your comments to spamcastinfo@gmail.com.

Call to action!

Reviews of the podcast help to attract new listeners.  Can you write a review of the Software Process and Measurement Cast and post it on the podcatcher of your choice?  Whether you listen on iTunes or any other podcatcher, a review will help to grow the podcast!  Thank you in advance!

Re-Read Saturday News

The Re-Read Saturday focus on Eliyahu M. Goldratt and Jeff Cox’s The Goal: A Process of Ongoing Improvement began on February 21st. The Goal has been hugely influential because it introduced the Theory of Constraints, which is central to lean thinking. The book is written as a business novel. Visit the Software Process and Measurement Blog and catch up on the re-read.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast.

Dead Tree Version or Kindle Version 

Next . . . The Mythical Man-Month. Get a copy now and start reading! We will start in four weeks!

Upcoming Events

2015 ICEAA PROFESSIONAL DEVELOPMENT & TRAINING WORKSHOP
June 9–12
San Diego, California
http://www.iceaaonline.com/2519-2/
I will be speaking on June 10.  My presentation is titled “Agile Estimation Using Functional Metrics.”

Let me know if you are attending!

Upcoming conferences I will be involved in also include SQTM in September. More on these great conferences next week.

Next SPaMCast

The next Software Process and Measurement Cast will feature our interview with Jon Quigley.  We discussed configuration management and his new book Configuration Management: Theory, Practice, and Application. Jon co-authored the book with Kim Robertson. Configuration management is one of the most critical practices anyone building a product, writing a piece of code or working on a project with others must learn, or face the consequences!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you nor your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Categories: Process Management

SPaMCAST 345 ‚Äď Cognitive Biases, QA Corner, TameFlow

 www.spamcast.net

http://www.spamcast.net

Listen Now

Subscribe on iTunes

The Software Process and Measurement Cast 345 features our essay on Cognitive Biases and two new columns. The essay on cognitive bias provides important tools for anyone that works on a team or interfaces with other people! A sample for the podcast:

‚ÄúThe discussion of cognitive biases is not a theoretical exercise. Even a relatively simple process such as sprint planning in Scrum is influenced by the cognitive biases of the participants. Even the planning process itself is built to use cognitive biases like the anchor bias to help the team come to consensus efficiently. How all the members of the team perceive their environment and the work they commit to delivering will influence the probability of success therefore cognitive biases need to be understood and managed.‚ÄĚ

The first of the new columns is Jeremy Berriault’s QA Corner.  Jeremy’s first QA Corner discusses root cause analysis and some errors that people make when doing root cause analysis. Jeremy, is a leader in the world of quality assurance and testing and was originally interviewed on the Software Process and Measurement Cast 274.

The second new column features Steve Tendon discussing his great new book, Hyper-Productive Knowledge Work Performance.  Our intent is to discuss the book chapter by chapter.  This is very much like the re-read we do on blog weekly but with the author.  Steve has offered the SPaMCAST listeners are great discount if you use the link shown above.

As part of the chapter by chapter discussion of Steve‚Äôs book we are embedding homework questions.¬† The first question we pose is ‚ÄúIs the concept of hyper-productivity transferable from one group or company to another?‚ÄĚ Send your comments to spamcastinfo@gmail.com.

Call to action!

Reviews of the Podcast help to attract new listeners.  Can you write a review of the Software Process and Measurement Cast and post it on the podcatcher of your choice?  Whether you listen on ITunes or any other podcatcher, a review will help to grow the podcast!  Thank you in advance!

Re-Read Saturday News

The Re-Read Saturday focus on Eliyahu M. Goldratt and Jeff Cox’s The Goal: A Process of Ongoing Improvement began on February 21nd. The Goal has been hugely influential because it introduced the Theory of Constraints, which is central to lean thinking. The book is written as a business novel. Visit the Software Process and Measurement Blog and catch up on the re-read.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast.

Dead Tree Version or Kindle Version 

Next . . . The Mythical Man-Month Get a copy now and start reading! We will start in four weeks!

Upcoming Events

2015 ICEAA PROFESSIONAL DEVELOPMENT & TRAINING WORKSHOP
June 9 ‚Äď 12
San Diego, California
http://www.iceaaonline.com/2519-2/
I will be speaking on June 10.¬† My presentation is titled ‚ÄúAgile Estimation Using Functional Metrics.‚ÄĚ

Let me know if you are attending!

Also upcoming conferences I will be involved in include and SQTM in September. More on these great conferences next week.

Next SPaMCast

The next Software Process and Measurement Cast will feature our will interview with Jon Quigley.  We discussed configuration management and his new book Configuration Management: Theory, Practice, and Application. Jon co-authored the book with Kim Robertson. Configuration management is one of the most critical practices anyone building a product, writing a piece of code or working on a project with other must learn or face the consequences!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you nor your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management

Re-Read Saturday: The Goal: A Process of Ongoing Improvement. Part 16


As part of my day job I am often asked to help a team, project or department find a way to improve the value they deliver.  When dealing with knowledge work, having a single, prescriptive path is rarely effective because even the most mundane product support work includes discovery and innovation. Once we have discovered a path it is important to step back and generalize the approach so that teams can use the process in a variety of scenarios.  I have found that developing a generalized approach is rarely as straightforward as changing the personal pronouns in the process to refer to another group. Regardless of this hard-won realization, I still read posts and hear about people who are considering adopting best practices or procedures from other groups without tailoring.  Adopting a process, procedure or even a tool using an untailored, out-of-the-box approach is rarely a good idea in knowledge work.  Alex and his team continue to search for a generalized approach that can be used to transform the entire division.

Previous Installments:

Part 1       Part 2       Part 3      Part 4      Part 5 
Part 6       Part 7      Part 8     Part 9      Part 10
Part 11     Part 12      Part 13    Part 14    Part 15


Chapter 37. Alex and his team continue their daily meetings to discover the answer to the question “What are the techniques needed for management?” In Chapter 36 the team had settled on a generalized five-step process (a small sketch of the arithmetic follows the list), which was:

  1. Find the bottleneck,
  2. Exploit the bottleneck,
  3. Subordinate every other step to the bottleneck,
  4. Elevate the bottleneck, then
  5. Repeat if the bottleneck has been broken.
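
The logic of the five steps is easy to see with a little arithmetic. The sketch below is my own, not the book's, and the capacities are invented: the throughput of a serial process is capped by its slowest step, and elevating that step simply moves the constraint, which is why step five sends you back to step one.

// A minimal sketch (hypothetical capacities) of the five focusing steps:
// throughput of a serial process is capped by its slowest step, and
// elevating that step just moves the bottleneck somewhere else.
public class FiveFocusingSteps {
    // The constraint is simply the step with the least capacity.
    static int findBottleneck(double[] capacity) {
        int min = 0;
        for (int i = 1; i < capacity.length; i++) {
            if (capacity[i] < capacity[min]) {
                min = i;
            }
        }
        return min;
    }

    public static void main(String[] args) {
        // Units per hour for each step of an invented four-step process.
        double[] capacity = {40, 25, 60, 35};

        int bottleneck = findBottleneck(capacity);   // Step 1: find the bottleneck
        System.out.println("Step " + bottleneck + " caps throughput at "
                + capacity[bottleneck] + "/hr");

        capacity[bottleneck] *= 2;                   // Step 4: elevate the bottleneck
        int next = findBottleneck(capacity);         // Step 5: repeat; the constraint moved
        System.out.println("New constraint: step " + next + " at "
                + capacity[next] + "/hr");
    }
}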


Ralph (computer guy) voices a concern that they really had not done step three.  After some discussion the team finds that by constraining how work and material enter the process they really had subordinated all of the steps in the process to the bottlenecks.  Remember that the work and material entering the process had been constrained so the bottlenecks were 100% utilized (no more, no less).  During the discussion, Stacey (materials) recognized that the earlier red/yellow card approach the team had used to control the flow of work into the bottlenecks was still in place and was the cause of the problems she had been observing (Chapter 36). In order to deal with the problems caused by the earlier red/yellow card approach and to keep everyone busy, Stacey admitted to having been releasing extra work into the process, therefore building excess inventory of finished goods.  Back-of-the-envelope calculations showed that the plant now had 20% extra capacity, therefore they needed more orders to keep the plant at maximum capacity.  Alex decides to go see Johnny Jons (sales manager) to see if they can prime the sales pump.

These observations led the team to the understanding that every time they recycled through the process they should have re-questioned and revalidated EVERY change they had previously made. The inertia of thinking something will work because it has in the past or because it has for someone else is often not your friend in process improvement!

Chapter 38. Jons, Alex, Lou (plant controller), Ralph and one of Jons’ more innovative salesmen meet at headquarters to discuss where they can come up with 10 million dollars of additional orders.  During the discussion it comes to light that Jons has a potential deal that he is about to reject because the prices are well below standard margins. Alex points out that since the plant has excess capacity the real cost to produce the product is far lower than Jons is assuming (labor and overhead are already sunk costs). The plant could take the order and make a huge profit.  Alex and his team convince Jons to take the order if the potential client will commit to a one-year deal.  They further sweeten the deal by committing to quick deliveries (something other companies can’t emulate) in order to avoid starting a price war.  Jons agrees to accept the order as the potential client is well outside of the company’s standard area of distribution, and therefore will not impact the margins they are getting on other orders.  On the way back to the plant Alex, Lou and Ralph reflect that they had just seen the same type of inertia that the team discovered the previous day in their process improvement approach, and that Alex’s new role in changing the whole division will need to address even more organizational inertia.

Later Alex and Julie (wife) reflect that the key to the management practices Alex is searching for lies in the application of the scientific method.  Instead of collecting a lot of data and making inferences, the approach Johan had taken begins with a single observation, develops a hypothesis, leverages if-then relationships and then tests those relationships.  Alex searches popular science books for inspiration for his management questions.  When they discuss the topic again, Julie, who has continued to read the Socratic Dialogues, points out that they follow the same if-then pattern that Alex has described as Johan’s approach.

Re-Read Saturday Notes:

  1. I anticipate that the re-read of The Goal will conclude in two weeks with part 18. Our next book will be The Mythical Man-Month (I am buying a new copy today, so if you do not have a copy . . . get a copy today and please use this link: Man-Month).
  2. Remember that the summary of previous entries in the re-read of The Goal has been shifted to a new page (here).
  3. Also, if you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast. Dead Tree Version or Kindle Version



Categories: Process Management

Carl Sagan's BS Detector

Herding Cats - Glen Alleman - Sat, 06/06/2015 - 00:24

There are several BS detector paradigms. One of my favorites is Carl Sagan's. This one has been adapted from Reality Charting, the book that goes along with the Apollo Method Root Cause Analysis process we use in our governance process in our domain. Any outage, any hard break, any disruption to the weekly release process is cause for Root Cause Analysis.

Here's Carl's checklist applied to the #NoEstimates conjecture that decisions can be made in the absence of estimates:

  1. Seek independent facts. Remember, a fact is a cause supported by sensed evidence and should be independently verified by you before it can be deemed legitimate. If you cannot find sensed evidence of causal relationships you should be skeptical. 
    • Writing software for money is based on making decisions in the presence of uncertainty.
    • A primary process for making those decisions is microeconomics and opportunity cost. What's the lost opportunity of one decision over another?
    • To do this we need to estimate both the cost that results from choosing one decision over another and the benefits, the opportunity, from that decision (a small numeric sketch of this follows the list).
  2. Welcome open debate on all points of view. Suspend judgment about the event or claim until all cause paths have been pursued to your satisfaction using RealityCharting®.
    • The notion that #NoEstimates is exploring and seeking conversation is not evidenced.
    • Any challenge questions are labeled as trolling, harassing, and unwanted.
    • You'd think that if #NoEstimates is the next big thing, responding to any and all challenges would be a terrific marketing opportunity. It's basic sales strategy: find all objections to your product's value proposition, overcome them, and the deal is closed.
  3. Always challenge authority. Ask to be educated. Ask the expert how they came to know what they know. If they cannot explain it to your satisfaction using evidence-based causal relationships then be very skeptical. 
    • This is not the same as challenging everything.
    • It means that when you hear something that is not backed by tangible evidence, challenge it.
    • Ask the person making the claim how they know it will work outside their personal anecdotal experience.
  4. Consider more than one hypothesis. The difference between a genius and a normal person is that when asked to solve a problem the genius doesn’t look for the right answer, he or she looks for how many possible solutions he or she can find. A genius fundamentally understands that there is always another possibility, limited by our fundamental ignorance of what is really happening. 
    • The self-proclaimed thought leaders' syllogism (agile leaders get challenged; I've been challenged; therefore I must be an agile thought leader) needs to be tested with actual evidence.
  5. Don’t defend a position because it is yours. All ideas are prototypical because there is no way we can really know all the causes. Seek to understand before seeking to be understood. 
    • Use this on both sides of the conversation.
    • Where can #NoEstimates be applied?
    • Where is it not applicable?
    • Provide evidence on both sides.
  6. Try to quantify what you think you know. Can you put numbers to it? 
    • Show the numbers
    • Do the math
  7. If there is a chain of causes presented, every link must work. Use RealityCharting® to verify that the chain of causes meets the advanced logic checks defined above and that the causes are sufficient in and of themselves.
    1. If estimating is the smell of dysfunction, show the causal chain of the dysfunction.
    2. Confirm that estimating is actually the root cause of management dysfunction.
    3. Is misuse and abuse of estimates caused by the estimating process?
  8. Use Occam’s razor to decide between two hypotheses; if two explanations appear to be equally viable, choose the simpler one if you must. Nature loves simplicity.
    • When conjecturing that stopping estimates fixes the dysfunction, is this the simplest solution?
    • How about stopping Bad Management practices?
    • In the upcoming #NoEstimates book, Chapter 1 opens with a blatant Bad Management process: assigning a project to an inexperienced PM that is hundreds of times bigger than anything she has ever seen.
    • Then blaming the estimating process for her failure.
    • This notion is continued on with references to other failed projects, without ever seeking the actual root cause of the failure.
    • No evidence is ever presented to show that stopping estimates would have made the project successful.
  9. Try to prove your hypothesis wrong. Every truth is prototypical and the purpose of science is to disprove that which we think we know. 

    • This notion is lost on those conjecturing that #NoEstimates is applicable beyond their personal anecdotal experience.
    • Testing the idea in an external domain means more than finding a CEO who supports the notion.
    • The null hypothesis test, H0, is basic high school statistics, and it is missing entirely here.
  10. Use carefully designed experiments to test all hypotheses.
    • No such thing in the #NoEstimates paradigm.

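Before the verdict, the "show the numbers" of point 6 and the opportunity-cost argument of point 1 can be made concrete. The sketch below is mine, with invented numbers, not anything from the #NoEstimates material: choosing between two options requires an estimate of cost, benefit and probability for both, because picking one forgoes the expected value of the other.

// My own minimal sketch, with invented numbers, of the opportunity-cost
// argument: without estimates of cost, benefit, and probability there is
// nothing to compare, so the decision can't be made on microeconomic grounds.
public class OpportunityCost {
    // Expected value = probability-weighted benefit minus estimated cost.
    static double expectedValue(double benefit, double pSuccess, double cost) {
        return benefit * pSuccess - cost;
    }

    public static void main(String[] args) {
        double evA = expectedValue(500_000, 0.6, 200_000); // option A: 100,000
        double evB = expectedValue(300_000, 0.9, 100_000); // option B: 170,000

        // The opportunity cost of choosing A is the value of B foregone.
        System.out.printf("EV(A)=%.0f EV(B)=%.0f opportunity cost of A=%.0f%n",
                evA, evB, evB);
        System.out.println(evA > evB ? "Choose A" : "Choose B");
    }
}
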
So it's becoming clear #NoEstimates does not pass the smell test of the basic BS meter.

The Big Questions

  1. What's the answer to how we can make a decision in the presence of uncertainty, without estimating, and NOT violate the core principles of microeconomics?
  2. It's not about the developers' like or dislike of estimates. When I was a developer - radar, realtime controls, flight avionics, enterprise IT - I never liked estimates. It's about business. It's not our money. This notion appears to be completely lost. It's the millennial's view of the world. We have two millennials (25 and 26); it's all about ME. Even if those suggesting aren't millennials, the message appears to be it's all about me. Go talk to the CFO.

The End

The rhetoric on #NoEstimates has now reached a fever pitch: paid conferences, books, blatant misrepresentations. Time to call BS and move on. This is the last post.  I've met many interesting people in both good and bad ways, and will stay in touch. So long, and thanks for all the fish, as Douglas Adams says. Those with the money will have the final say on this idea.

Categories: Project Management

Netty: Testing encoders/decoders

Mark Needham - Fri, 06/05/2015 - 22:25

I’ve been working with Netty a bit recently and, having built a pipeline of encoders/decoders as described in this excellent tutorial, I wanted to test that the encoders and decoders were working without having to send real messages around.

Luckily there is an EmbeddedChannel which makes our life very easy indeed.

Let’s say we’ve got a message ‘Foo’ that we want to send across the wire. It only contains a single integer value so we’ll just send that and reconstruct ‘Foo’ on the other side.

We might write the following encoder to do this:

// Examples use Netty 4.0.28.Final
import java.util.List;
 
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.MessageToMessageEncoder;
 
public static class MessageEncoder extends MessageToMessageEncoder<Foo>
{
    @Override
    protected void encode( ChannelHandlerContext ctx, Foo msg, List<Object> out ) throws Exception
    {
        ByteBuf buf = ctx.alloc().buffer();
        buf.writeInt( msg.value() );
        out.add( buf );
    }
}
 
public static class Foo
{
    private Integer value;
 
    public Foo(Integer value)
    {
        this.value = value;
    }
 
    public int value()
    {
        return value;
    }
}

So all we’re doing is taking the ‘value’ field out of ‘Foo’ and putting it into the list which gets passed downstream.

Let’s write a test which simulates sending a ‘Foo’ message and has an empty decoder attempt to process the message:

import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.MessageToMessageDecoder;
import org.junit.Test;
 
import static junit.framework.Assert.assertEquals;
import static junit.framework.Assert.assertNotNull;
 
@Test
public void shouldEncodeAndDecodeVoteRequest()
{
    // given
    EmbeddedChannel channel = new EmbeddedChannel( new MessageEncoder(), new MessageDecoder() );
 
    // when
    Foo foo = new Foo( 42 );
    channel.writeOutbound( foo );
    channel.writeInbound( channel.readOutbound() );
 
    // then
    Foo returnedFoo = (Foo) channel.readInbound();
    assertNotNull(returnedFoo);
    assertEquals( foo.value(), returnedFoo.value() );
}
 
public static class MessageDecoder extends MessageToMessageDecoder<ByteBuf>
{
    @Override
    protected void decode( ChannelHandlerContext ctx, ByteBuf msg, List<Object> out ) throws Exception { }
}

So in the test we write ‘Foo’ to the outbound channel and then read it back into the inbound channel and then check what we’ve got. If we run that test now this is what we’ll see:

junit.framework.AssertionFailedError
	at NettyTest.shouldEncodeAndDecodeVoteRequest(NettyTest.java:28)

The message we get back is null which makes sense given that we didn’t bother writing the decoder. Let’s implement the decoder then:

public static class MessageDecoder extends MessageToMessageDecoder<ByteBuf>
{
    @Override
    protected void decode( ChannelHandlerContext ctx, ByteBuf msg, List<Object> out ) throws Exception
    {
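        // Read the integer back off the wire and rebuild the original Foo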
        int value = msg.readInt();
        out.add( new Foo(value) );
    }
}

Now if we run our test again it’s all green and happy. We can now go and encode/decode some more complex structures and update our test accordingly.
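
As a sketch of what a more complex structure might look like, here is a hypothetical ‘Bar’ message carrying an int and a string; the names and the length-prefix framing are mine, not from the tutorial. The string is written as a length followed by its UTF-8 bytes so the decoder knows exactly how much to read (this also needs import io.netty.util.CharsetUtil):

public static class Bar
{
    private final int id;
    private final String name;

    public Bar( int id, String name )
    {
        this.id = id;
        this.name = name;
    }

    public int id() { return id; }
    public String name() { return name; }
}
 
public static class BarEncoder extends MessageToMessageEncoder<Bar>
{
    @Override
    protected void encode( ChannelHandlerContext ctx, Bar msg, List<Object> out ) throws Exception
    {
        byte[] nameBytes = msg.name().getBytes( CharsetUtil.UTF_8 );
        ByteBuf buf = ctx.alloc().buffer();
        buf.writeInt( msg.id() );
        // length prefix tells the decoder where the string ends
        buf.writeInt( nameBytes.length );
        buf.writeBytes( nameBytes );
        out.add( buf );
    }
}
 
public static class BarDecoder extends MessageToMessageDecoder<ByteBuf>
{
    @Override
    protected void decode( ChannelHandlerContext ctx, ByteBuf msg, List<Object> out ) throws Exception
    {
        int id = msg.readInt();
        int nameLength = msg.readInt();
        // readSlice avoids allocating a separate buffer for the string bytes
        String name = msg.readSlice( nameLength ).toString( CharsetUtil.UTF_8 );
        out.add( new Bar( id, name ) );
    }
}

The EmbeddedChannel test pattern stays exactly the same: write a ‘Bar’ outbound, feed readOutbound() back inbound and assert on both fields.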

Categories: Programming

Stuff The Internet Says On Scalability For June 5th, 2015

Hey, it's HighScalability time:


Stunning Multi-Wavelength Image Of The Solar Atmosphere.
  • 4x: amount spent by Facebook users
  • Quotable Quotes:
    • Facebook: Facebook's average data set for CF has 100 billion ratings, more than a billion users, and millions of items. In comparison, the well-known Netflix Prize recommender competition featured a large-scale industrial data set with 100 million ratings, 480,000 users, and 17,770 movies (items).
    • @BenedictEvans: The number of photos shared on social networks this year will probably be closer to 2 trillion than to 1 trillion.
    • @jeremysliew: For every 10 photos shared on @Snapchat, 5 are shared on @Facebook and 1 on @Instagtam. 8,696 photos/sec on Snapchat.
    • @RubenVerborgh: “Don’t ask for an API, ask for data access. Tim Berners-Lee called for open data, not open services.” —@pietercolpaert #SemDev2015 #ESWC2015
    • Craig Timberg: When they thought about security, they foresaw the need to protect the network against potential intruders or military threats, but they didn’t anticipate that the Internet’s own users would someday use the network to attack one another. 
    • Janet Abbate: They [ARPANET inventors] thought they were building a classroom, and it turned into a bank.
    • A.C. Hadfield: The power of accurate observation is often called cynicism by those who don’t possess it.
    • @plightbo: Every business is becoming a software business
    • @potsdamnhacker: Replaced Go service with an Erlang one. Already used hot-code reloading, fault tolerance, runtime inspectability to great effect. #hihaters
    • @PHP_CEO: WE SPENT 18 MONTHS MIGRATING FROM A MONOLITH TO MICROSERVICES RESULT:- GITHUB GETS PAID FOR MORE PRIVATE REPOS - FIND/REPLACE IS HARDER
    • @alsargent: Given continuous deployment, average lifetime of a #Docker image @newrelic is 12hrs. Different ops pattern than VMs. #velocityconf
    • @PHP_CEO: ALSO THE NODE GUY WHO WAS A RUBY GUY THAT REWROTE IT ALL BECAME A RUST GUY AND MOVED TO THAILAND TO BECOME A NOMAD STARTUP GUY
    • @abt_programming: "If you think good architecture is expensive, try bad architecture" - Brian Foote - and Joseph Yoder
    • @KlangianProverb: "I thought studying distributed systems would make me understand software better—it made me understand reality better."—Old Klangian Proverb
    • @rachelmetz: google's error rate for image recognition was 28 percent in 2008, now it's like 5 percent, quoc le says.

  • Fear or strength? Apple’s Tim Cook Delivers Blistering Speech On Encryption, Privacy. With Google Now on Tap Google is saying we've joyously crossed the freaky line and we unapologetically plan to leverage our huge lead in machine learning to the max. Apple knows they can't match this feature. Google knows this is a clear and distinct exploitable marketing idea, like a super thin MacBook Air slowly slipping out of a manila envelope.

  • How does Kubernetes compare to Mesos? cmcluck, who works at Google and was one of the founders of the project, explains: we looked really closely at Apache Mesos and liked a lot of what we saw, but there were a couple of things that stopped us just jumping on it. (1) it was written in C++ and the containers world was moving to Go -- we knew we planned to make a sustained and considerable investment in this and knew first hand that Go was more productive (2) we wanted something incredibly simple to showcase the critical constructs (pods, labels, label selectors, replication controllers, etc) and to build it directly with the communities support and mesos was pretty large and somewhat monolithic (3) we needed what Joe Beda dubbed 'over-modularity' because we wanted a whole ecosystem to emerge, (4) we wanted 'cluster environment' to be lightweight and something you could easily turn up or turn down, kinda like a VM; the systems integrators i knew who worked with mesos felt that it was powerful but heavy and hard to setup (though i will note our friends at Mesosphere are helping to change this). so we figured doing something simple to create a first class cluster environment for native app management, 'but this time done right' as Tim Hockin likes to say every day. < Also, CoreOS (YC S13) Raises $12M to Bring Kubernetes to the Enterprise.

  • If structure arises in the universe because electrons can't occupy the same space, why does structure arise in software?

  • The cost of tyranny is far lower than one might hope. How much would it cost for China to intercept connections and replace content flowing at 1.9-terabits/second? About $200K says Robert Graham in Scalability of the Great Cannon. Low? Probably. But for the price of a closet in the new San Francisco you can edit an entire people's perception of the Internet in real-time.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Traffic Light Measurement Indicators – Not All Lights Are Green

Red Light, Green Light


One of the most common indicators used in measurement and status reporting is the traffic light indicator.  Traffic light indicators have been adopted because they are easy to recognize, represent complex information in a palatable manner and are easy to explain.  The traffic light is elegant in its simplicity; however, that simplicity can also be its undoing. There are three critical issues that traffic lights often exhibit which reduce their usefulness.

  1. Traffic light indicators obscure nuances and trends. Traffic light indicators generally use the simple green, yellow and red scale (good to bad). The indicator can only be set to one of those states, and there is no in-between (no orange or greenish yellow). HOWEVER, project status is rarely that cut and dried. For example, how would the indicator be set if a project exhibits serious, threatening risks but the stakeholders are currently satisfied with progress? Regardless of whether the indicator was set to red or yellow, much of the nuance of the situation would be lost. In addition, the traffic light indicator could not show whether the risks were being mitigated or threatening to become issues.
  2. Traffic light indicators can generate poor personal and team behaviors. One of the most common problems observed with the usage of traffic light indicators is sticky statuses. A status is green or yellow, then seems to suddenly turn yellow or red overnight. The change from one color to another typically surprises management and stakeholders.  Because there is no mechanism to provide an early warning, the change of color/status is viewed as a failure and is therefore often resisted. A second common problem is that making the indicator change becomes the project leader’s or team’s most important goal. When the metric becomes the goal, individuals and teams can be incented into trying to game the metric, which removes the focus from the customer.
  3. Traffic light indicators can lead to users of the indicator losing track of how it was calculated. Any high-level indicator, like a traffic light indicator, is a synthesis of many individual measures and metrics. Meg Gillikin, Deloitte Consulting, suggests “that you should have definitions of what each state means with specifics.”  The users of the indicator need to understand how it is set and the factors that go into setting the metric.  Lack of understanding of any indicator can lead managers into making poor decisions (a small, hypothetical sketch of an explicitly defined indicator follows this list).
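
One way to act on Gillikin's advice is to publish the calculation alongside the indicator. The sketch below is hypothetical (the thresholds and inputs are invented for illustration): each state maps to a written rule, and a trend is reported next to the color so a status cannot silently go sticky.

// A hypothetical traffic light with published definitions; the thresholds
// and inputs are invented. The rule is explicit, and the trend is reported
// alongside the color so the status can't silently go "sticky".
public class TrafficLight {
    enum Status { GREEN, YELLOW, RED }

    // Published definition: GREEN = on schedule with no open high risks,
    // YELLOW = at most 10% behind with at most one open high risk,
    // RED = anything worse.
    static Status status(double scheduleSlipPct, int openHighRisks) {
        if (scheduleSlipPct <= 0 && openHighRisks == 0) return Status.GREEN;
        if (scheduleSlipPct <= 10 && openHighRisks <= 1) return Status.YELLOW;
        return Status.RED;
    }

    public static void main(String[] args) {
        Status lastWeek = status(0, 0);   // on plan, no high risks
        Status thisWeek = status(8, 1);   // 8% behind, one open high risk
        String trend = thisWeek.compareTo(lastWeek) > 0 ? "worsening"
                : thisWeek == lastWeek ? "steady" : "improving";
        System.out.println(thisWeek + " (" + trend + ")"); // YELLOW (worsening)
    }
}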

Dácil Castelo, Leda MC, sums up the use of traffic light indicators: “The use of red, green and yellow provides a quick, visual summary of the status in a simple and easy way (everyone knows what a traffic light is). On the other hand, easy to understand doesn’t mean easy to calculate nor necessarily useful for the user.” Remember that with any indicator there is a basic issue: if an indicator doesn’t actually help teams and leaders deliver value, it will be viewed as overhead.


Categories: Process Management