
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

The Hard Thing About Hard Things – Ben Horowitz: Book Review

Mark Needham - Tue, 10/14/2014 - 00:59

I came across ‘The Hard Thing About Hard Things’ while reading an article about Ben Horowitz’s venture capital firm and it was intriguing enough that I bought it and then read through it over a couple of days.

Although the blurb suggests that it’s a book about building and running a startup, I think a lot of the lessons are applicable to any business.

These were some of the main points that stood out for me:

  • The Positivity Delusion – CEOs should tell it like it is.

    My single biggest improvement as CEO occurred on the day when I stopped being too positive.

    Horowitz suggests that he used to be too positive and would shield his employees from bad news, thinking he’d make a problem worse by transferring the burden onto them.

    He came to the realisation that this was counterproductive since he often wasn’t the best-placed person to fix a problem, e.g. if it was a problem with the product then the engineering team needed to know so they could write the code to fix it.

    He goes on to suggest that…

    A healthy company culture encourages people to share bad news. A company that discusses its problems freely and openly can quickly solve them. A company that covers up its problems frustrates everyone involved.

    I’ve certainly worked on projects in the past where the view projected by the most senior person was overly positive and ignored problems that were obvious to everyone else. This eventually leads to people being unsure whether to take them seriously, which isn’t a great situation to be in.

  • Lead Bullets – fix the problem, don’t run away from it.

    Horowitz describes a couple of situations where his products have been inferior to their competitors and it’s been tempting to take the easy way out by not fixing the product.

    There comes a time in every company’s life where it must fight for its life. If you find yourself running when you should be fighting, you need to ask yourself, “If our company isn’t good enough to win, then do we need to exist at all?”.

    I can’t think of any examples around this from my experience but I really like the advice – I’m sure it’ll come in handy in future.

  • Give ground grudgingly – dealing with the company increasing in size.

    Horowitz suggests that the following things become more difficult as a company grows in size:

    • Communication
    • Common Knowledge
    • Decision Making

    but…

    If the company doesn’t expand it will never be much…so the challenge is to grow but degrade as slowly as possible.

    He uses the metaphor of an offensive lineman in American football who has to stop onrushing defensive linemen by giving ground to them slowly, backing up a little at a time.

    I’ve worked in a few different companies now and noticed things become more structured (and in my eyes worse!) as the company grew over time, but I hadn’t really thought about why that was happening. The chapter on scaling a company does a decent job of explaining it.

  • The Law of Crappy People – people baseline against the worst person at a grade level.

    For any title level in a large organisation, the talent on that level will eventually converge to the crappiest person with that title.

    This is something that he’s also written about on his blog and certainly seems very recognisable.

    His suggestion for mitigating the problem is to have a “properly constructed and highly disciplined promotion process” in place. He describes this like so:

    When a manager wishes to promote an employee, she will submit that employee for review with an explanation of why she believes her employee satisfies the skill criteria required for the level.

    The committee should compare the employee to both the level’s skill description and the skills of the other employees at that level to determine whether or not to approve the promotion.

  • Hire people with the right kind of ambition

    The wrong kind of ambition is ambition for the executive’s personal success regardless of the company’s outcome.

    This suggestion comes from the chapter in which Horowitz discusses how to minimise politics in an organisation.

    I really like this idea but it seems a difficult thing to judge or achieve. In my experience people often have their own goals, which aren’t necessarily completely aligned with the company’s. Perhaps complete alignment only really matters at the very top of the company?

    He also has quite a neat definition of politics:

    What do I mean by politics? I mean people advancing their careers or agendas by means other than merit and contribution.

    He goes on to describe a few stories of how political behaviour can subtly creep into a company without the CEO intending it to happen. This chapter was definitely eye-opening for me.

There are some other interesting chapters on the best types of CEOs for different companies, when to hire Senior external people, product management and much more.

I realise that the things I’ve picked out are mostly a case of confirmation bias so I’m sure everyone will have different things that stand out for them.

Definitely worth a read.

Categories: Programming

Agile and Risk Management

We brush our teeth to avoid cavities; a process and an outcome.


There is no consensus on how risk should be managed in Agile projects or releases, or whether any special steps are needed at all. Risk is defined as “any uncertain event that can have an impact on the success of a project.” Risk management, on the other hand, is about anticipating the future: it reflects the need to manage the flow of work in the face of variability. Variability in software development can be defined as the extent to which any outcome differs from expectations. Understanding and exploiting that variability is the essence of risk management. Variability is exacerbated by a number of factors, such as the following.

  1. Complexity in any problem is a reflection of the number of parts to the problem, how those parts interact and the level of intellectual difficulty. The more complex a problem, the less likely you will be able to accurately predict the delivery of a solution to the problem. Risk is reduced by relentlessly focusing on good design and simplifying the problem. Reflecting the Agile principle of simplicity, reducing the work to only what is absolutely needed is one mechanism for reducing variability.
  2. Size of a project or release influences the overall variability by increasing the number of people involved in delivering an outcome and therefore the level of coordination needed. Size is a form of complexity. Size increases the possibility that there will be interactions and dependencies between components of the solution. In a perfect world, the work required to deliver each requirement and system component would be independent; when this can’t be achieved, projects need mechanisms to identify and deal with dependencies. An example of a mechanism to address complexity created by size is the release planning event in SAFe.
  3. Ad hoc or uncontrolled processes can’t deliver a consistent output. We adopt processes in order to control the outcome. We brush our teeth to avoid cavities; a process and an outcome. When a process is not under control, the outcome is not predictable. For a process to be under control, it needs to be defined and then performed according to that definition. Processes need to be defined in as simple and lean a manner as possible to be predictable.
  4. People are chaotic by nature. Any process that includes people will be more variable than the same process without people. Fortunately we have not reached the stage in software development where machines have replaced people (watch any of the Terminator or Matrix movies). The chaotic behavior of teams, or teams of teams, can be reduced through alignment. Alignment can be achieved at many levels. At the release or program level, developing a common goal and understanding of the business need helps to develop alignment. At the team level, stable teams help team members build the interpersonal bonds that lead to alignment. Training is another tool to develop or enhance alignment.

Software development, whether the output is maintenance, enhancements or new functionality, is a reflection of processes that have a degree of variability. Variability, while generally conceived of as bad and therefore to be avoided, can be good or bad. Finding a new way to deliver work that increases productivity and quality is good. The goal for any project or release is to reduce the possibility of negative variance and then learn to use what remains to best advantage.


Categories: Process Management

Fast and Easy integration testing with Docker and Overcast

Xebia Blog - Mon, 10/13/2014 - 18:40
Challenges with integration testing

Suppose that you are writing a MongoDB driver for Java. To verify that all the implemented functionality works correctly, you ideally want to test it against a REAL MongoDB server. This brings a couple of challenges:

  • MongoDB is not written in Java, so we cannot easily embed it in our Java application.
  • We need to install and configure MongoDB somewhere, and maintain the installation, or write scripts to set it up as part of our test run.
  • Every test we run against the Mongo server will change its state, and tests might influence each other. We want to isolate our tests as much as possible.
  • We want to test our driver against multiple versions of MongoDB.
  • We want to run the tests as fast as possible. If we want to run tests in parallel, we need multiple servers. How do we manage them?

Let's try to address these challenges.

First of all, we do not really want to implement our own MongoDB driver. Many implementations exist and we will be reusing the mongo java driver to focus on how one would write the integration test code.

Overcast and Docker

We are going to use Docker and Overcast. You probably already know Docker: it's a technology to run applications inside software containers. Overcast is the library we will use to manage Docker for us. Overcast is an open source Java library developed by XebiaLabs to help you write tests that connect to cloud hosts. Overcast has support for various cloud platforms, including EC2, VirtualBox, Vagrant, and Libvirt (KVM). Recently, support for Docker has been added by me in Overcast version 2.4.0.

Overcast helps you decouple your test code from the cloud host setup. You can define a cloud host, with all its configuration, separately from your tests. In your test code you only refer to a specific Overcast configuration, and Overcast takes care of creating, starting, and provisioning that host. When the tests are finished it will also tear down the host. In your tests you use Overcast to get the hostname and ports of the cloud host so you can connect to them, since these are usually determined dynamically.

We will use Overcast to create Docker containers running a MongoDB server. Overcast will help us retrieve the port dynamically exposed by the Docker host. In our case Docker runs on an external Linux host, and Overcast uses a TCP connection to communicate with it. We map the internal ports to a port on the Docker host to make them externally available. MongoDB will internally run on port 27017, but Docker will map this port to a local port in the range 49153 to 65535 (defined by Docker).

Setting up our tests

Let's get started. First, we need a Docker image with MongoDB installed. Thanks to the Docker community, this is as easy as reusing one of the existing images from the Docker Hub. All the hard work of creating such an image has already been done for us, and thanks to containers we can run it on any host capable of running Docker containers. How do we configure Overcast to run the MongoDB container? This is the minimal configuration we put in a file called overcast.conf:

mongodb {
    dockerHost="http://localhost:2375"
    dockerImage="mongo:2.7"
    exposeAllPorts=true
    remove=true
    command=["mongod", "--smallfiles"]
}

That's all! The dockerHost is configured to be localhost with the default port. This is the default value, so you can omit it. The Docker image called mongo, version 2.7, will be automatically pulled from the central Docker registry. We set exposeAllPorts to true to tell Docker to dynamically map all ports exposed by the Docker image. We set remove to true to make sure the container is automatically removed when stopped. Notice we override the default container startup command by passing in an extra parameter, "--smallfiles", to improve testing performance. For our setup this is all we need, but Overcast also has support for defining static port mappings, setting environment variables, etc. Have a look at the Overcast documentation for more details.

How do we use this overcast host in our test code? Let's have a look at the test code that sets up the Overcast host and instantiates the mongodb client that is used by every test. The code uses the TestNG @BeforeMethod and @AfterMethod annotations.

// itestHost and mongoClient are fields of the test class;
// a class-level logger is assumed for the log statement below
private CloudHost itestHost;
private Mongo mongoClient;

@BeforeMethod
public void before() throws UnknownHostException {
    itestHost = CloudHostFactory.getCloudHost("mongodb");
    itestHost.setup();

    String host = itestHost.getHostName();
    int port = itestHost.getPort(27017);

    MongoClientOptions options = MongoClientOptions.builder()
        .connectTimeout(300 * 1000)
        .build();

    mongoClient = new MongoClient(new ServerAddress(host, port), options);
    logger.info("Mongo connection: " + mongoClient.toString());
}

@AfterMethod
public void after(){
    mongoClient.close();
    itestHost.teardown();
}

It is important to understand that the mongoClient is the object under test. As mentioned before, we borrowed this library to demonstrate how one would integration test such a library. The itestHost is the Overcast CloudHost. In before(), we instantiate the cloud host by using the CloudHostFactory. The setup() call will pull the required images from the Docker registry, create a Docker container, and start this container. We get the host and port from the itestHost and use them to build our Mongo client. Notice that we put a high connection timeout on the connection options, to make sure the MongoDB server is started in time. Especially on the first run it can take some time to pull images. You can of course always pull the images beforehand. In the @AfterMethod, we simply close the connection with MongoDB and tear down the Docker container.

Writing a test

The before and after methods are executed for every test, so we get a completely clean MongoDB server for every test, running on a different port. This completely isolates our test cases so that no test can influence another. You are free to choose your own testing strategy; sharing a cloud host between multiple tests is also possible. Let's have a look at one of the tests we wrote for the mongo client:

@Test
public void shouldCountDocuments() throws DockerException, InterruptedException, UnknownHostException {

    DB db = mongoClient.getDB("mydb");
    DBCollection coll = db.getCollection("testCollection");

    for (int i=0; i < 100; i++) {
        WriteResult writeResult = coll.insert(new BasicDBObject("i", i));
        logger.info("writing document " + writeResult);
    }

    int count = (int) coll.getCount();
    assertThat(count, equalTo(100));
}

Even without knowledge of MongoDB this test should not be that hard to understand. It creates a database and a new collection, and inserts 100 documents into it. Finally the test asserts that the getCount method returns the correct number of documents in the collection. Many more aspects of the mongodb client can be tested in additional tests in this way. In our example setup, we have implemented two more tests to demonstrate this, for a total of 3 tests. When you run the 3 example tests sequentially (assuming the mongo docker image has been pulled), you will see that it takes only a few seconds to run them all. That is extremely fast.
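
For illustration, here is what another test in the same style could look like. This one is hypothetical (it is not one of the tests in the example project) and simply round-trips a single document:

@Test
public void shouldFindInsertedDocument() {
    DB db = mongoClient.getDB("mydb");
    DBCollection coll = db.getCollection("testCollection");

    // insert one document and read it back by one of its fields
    coll.insert(new BasicDBObject("name", "MongoDB").append("type", "database"));
    DBObject found = coll.findOne(new BasicDBObject("name", "MongoDB"));

    assertThat(found, notNullValue());
    assertThat((String) found.get("type"), equalTo("database"));
}

Because before() gives every test its own fresh container, a test like this can assume an empty collection without any cleanup code.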

Testing against multiple MongoDB versions

We also want to run all our integration tests against different versions of the MongoDB server to ensure there are no regressions. Overcast allows you to define multiple configurations. Let's add configurations for two more versions of MongoDB:

defaultConfig {
    dockerHost="http://localhost:2375"
    exposeAllPorts=true
    remove=true
    command=["mongod", "--smallfiles"]
}

mongodb27=${defaultConfig}
mongodb27.dockerImage="mongo:2.7"

mongodb26=${defaultConfig}
mongodb26.dockerImage="mongo:2.6"

mongodb24=${defaultConfig}
mongodb24.dockerImage="mongo:2.4"

The defaultConfig contains the configuration we have already seen. The other three configurations extend from the defaultConfig and define a specific MongoDB image version. Let's also change our test code a little bit to make the Overcast configuration used in the test setup depend on a parameter:

@Parameters("overcastConfig")
@BeforeMethod
public void before(String overcastConfig) throws UnknownHostException {
    itestHost = CloudHostFactory.getCloudHost(overcastConfig);

Here we use the parameterized test feature from TestNG. We can now define a TestNG suite that defines our test cases and how to pass in the different Overcast configurations. Let's have a look at our TestNG suite definition:

<suite name="MongoSuite" verbose="1">
    <test name="MongoDB27tests">
        <parameter name="overcastConfig" value="mongodb27"/>
        <classes>
            <class name="mongo.MongoTest" />
        </classes>
    </test>
    <test name="MongoDB26tests">
        <parameter name="overcastConfig" value="mongodb26"/>
        <classes>
            <class name="mongo.MongoTest" />
        </classes>
    </test>
    <test name="MongoDB24tests">
        <parameter name="overcastConfig" value="mongodb24"/>
        <classes>
            <class name="mongo.MongoTest" />
        </classes>
    </test>
</suite>

With this test suite definition we define 3 test cases that each pass a different Overcast configuration to the tests. The Overcast configuration plus the TestNG configuration enables us to configure externally which MongoDB versions we want to run our test cases against.
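
As a side note, the same suite definition can also be run programmatically through TestNG's API, which is handy for quick experiments. Here is a minimal sketch (the path to the suite file is an assumption about your project layout):

import org.testng.TestNG;

import java.util.Collections;

public class RunMongoSuite {
    public static void main(String[] args) {
        TestNG testng = new TestNG();
        // point TestNG at the suite XML shown above
        testng.setTestSuites(Collections.singletonList("src/test/resources/testng.xml"));
        testng.run();
    }
}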

Parallel test execution

Up to this point, all tests were executed sequentially. Due to the dynamic nature of cloud hosts and Docker, nothing stops us from running multiple containers at once. Let's change the TestNG configuration a little bit to enable parallel testing:

<suite name="MongoSuite" verbose="1" parallel="tests" thread-count="3">

This configuration will cause all 3 test cases from our test suite definition to run in parallel (in other words, our 3 Overcast configurations with different MongoDB versions). Let's run the tests now from IntelliJ and see if they all pass:

(Screenshot: TestNG results in IntelliJ showing all 9 tests passing)
We see 9 executed tests, because we have 3 tests and 3 configurations. All 9 tests passed, and the total execution time turned out to be under 9 seconds. That's pretty impressive!

During test execution we can see Docker starting up multiple containers (see the next screenshot). As expected, it shows 3 containers with different image versions running simultaneously. It also shows the dynamic port mappings in the "PORTS" column:

(Screenshot: docker ps output showing three MongoDB containers running simultaneously with dynamic port mappings)

That's it!

Summary

To summarise, the advantages of using Docker with Overcast for integration testing are:

  1. Minimal setup. Only a Docker-capable host is required to run the tests.
  2. Saves time. Thanks to the Docker community, a minimal amount of configuration and infrastructure setup is required to run the integration tests.
  3. Isolation. All tests run in their own isolated environment, so the tests cannot affect each other.
  4. Flexibility. Use multiple Overcast configurations and parameterized tests to test against multiple versions.
  5. Speed. The Docker containers start up very quickly, and Overcast and TestNG even allow you to parallelize the testing by running multiple containers at once.

The example code for our integration test project is available here. You can use Boot2Docker to set up a Docker host on Mac or Windows.

Happy testing!

Paul van der Ende 

Note: Due to a bug in the Gradle parallel test runner you might run into this random failure when you run the example test code yourself. The workaround is to disable parallelism or to use a different test runner like IntelliJ or Maven.

 

Watch the open files limit when running Riak

Agile Testing - Grig Gheorghiu - Mon, 10/13/2014 - 17:53
I was close to expressing my unbridled joy at how little hand-holding our Riak cluster needs when we started to see strangely increased latencies when hitting the cluster, on calls that should have been very fast. At the same time, the health of the Riak nodes seemed fine in terms of CPU, memory and disk. As usual, our good old friend the error log file pointed us towards the solution. We saw entries like this in /var/log/riak/error.log:

2014-10-11 03:22:40.565 UTC [error] <0.12830.4607> CRASH REPORT Process <0.12830.4607> with 0 neighbours exited with reason: {error,accept_failed} in mochiweb_acceptor:init/3 line 34
2014-10-11 03:22:40.619 UTC [error] <0.168.0> {mochiweb_socket_server,310,{acceptor_error,{error,accept_failed}}}
2014-10-11 03:22:40.619 UTC [error] <0.12831.4607> application: mochiweb, "Accept failed error", "{error,emfile}"
A Google search revealed that a possible cause of these errors is the dreaded open file descriptor limit, which is 1024 by default on Ubuntu.

To be perfectly honest, we had done almost no tuning on our Riak cluster, because it had been running so smoothly. But recently we started to throw more traffic at it, so issues with open file descriptors made sense. To fix it, we followed the advice in this Riak doc and created /etc/default/riak with the contents:
ulimit -n 65536
We also took the opportunity to apply the networking-related kernel tuning recommendations from this other Riak tuning doc and added these lines to /etc/sysctl.conf:
net.ipv4.tcp_max_syn_backlog = 40000
net.core.somaxconn = 4000
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_tw_reuse = 1
Then we ran sysctl -p to update the above values in the kernel. Finally we restarted our Riak nodes one at a time.
I am happy to report that ever since, we've had absolutely no issues with our Riak cluster.  I should also say we are running Riak 1.3, and I understand that Riak 2.0 has better tests in place for avoiding this issue.

I do want to give kudos to Basho for an amazingly robust piece of technology, whose only fault is that it gets you into the habit of ignoring it because it just works!

How Digital is Changing Physical Experiences

The business economy is going through massive change, as the old world meets the new world.

The convergence of mobility, analytics, social media, cloud computing, and embedded devices is driving the next wave of digital business transformation, where the physical world meets new online possibilities.

And it’s not limited to high-tech and media companies.

Businesses that master the digital landscape are able to gain strategic, competitive advantage.   They are able to create new customer experiences, they are able to gain better insights into customers, and they are able to respond to new opportunities and changing demands in a seamless and agile way.

In the book, Leading Digital: Turning Technology into Business Transformation, George Westerman, Didier Bonnet, and Andrew McAfee share some of the ways that businesses are meshing the physical experience with the digital experience to generate new business value.

Provide Customers with an Integrated Experience

Businesses that win find new ways to blend the physical world with the digital world.  To serve customers better, businesses are integrating the experience across physical, phone, mail, social, and mobile channels for their customers.

Via Leading Digital: Turning Technology into Business Transformation:

“Companies with multiple channels to customers--physical, phone, mail, social, mobile, and so on--are experiencing pressure to provide an integrated experience.  Delivering these omni-channel experiences requires envisioning and implementing change across both front-end and operational processes.  Innovation does not come from opposing the old and the new.  But as Burberry has shown,  innovation comes from creatively meshing the digital and the physical to reinvent new and compelling customer experiences and to foster continuous innovation.”

Bridge In-Store Experiences with New Online Possibilities

Starbucks is a simple example of blending digital experiences with their physical store.   To serve customers better, they deliver premium content to their in-store customers.

Via Leading Digital: Turning Technology into Business Transformation:

“Similarly, the unique Starbucks experience is rooted in connecting with customers in engaging ways.  But Starbucks does not stop with the physical store.  It has digitally enriched the customer experience by bridging its local, in-store experience with attractive new online possibilities.  Delivered via a free Wi-Fi connection, the Starbucks Digital Network offers in-store customers premium digital content, such as the New York Times or The Economist, to enjoy alongside their coffee.  The network also offers access to local content, from free local restaurant reviews from Zagat to check-in via Foursquare.”

An Example of Museums Blending Technology + Art

Museums can create new possibilities by turning walls into digital displays.  With a digital display, the museum can showcase all of their collections and provide rich information, as well as create new backdrops, or tailor information and tours for their customers.

Via Leading Digital: Turning Technology into Business Transformation:

“Combining physical and digital to enhance customer experiences is not limited to just commercial enterprises.  Public services are getting on the act.  The Cleveland Museum of Art is using technology to enhance the experience and the management of visitors.  'EVERY museum is searching for this holy grail, this blending of technology and art,' said David Franklin, the director of the museum.

 

Forty-foot-wide touch screens display greeting-card-sized images of all three thousand objects, and offer information like the location of the actual piece.  By touching an icon on the image, visitors can transfer it from the wall to an iPad (their own, or rented from the museum for $5 a day), creating a personal list of favorites.  From this list, visitors can design a personalized tour, which they can share with others.

 

'There is only so much information you can put on a wall, and no one walks around with catalogs anymore,' Franklin said.  The app can produce a photo of the artwork's original setting--seeing a tapestry in a room filled with tapestries, rather than in a white-walled gallery, is more interesting.  Another feature lets you take the elements of a large tapestry and rearrange them in either comic-book or movie-trailer format.  The experience becomes fun, educational, and engaging.  This reinvention has lured new technology-savvy visitors, but has also made seasoned museum-goers come more often.”

As you figure out the future capability vision for your business, and re-imagine what’s possible, consider how the Nexus of Forces (Cloud, Mobile, Social, and Big Data), along with the mega-trend of the Internet-of-Things, can help you shape your digital business transformation.

You Might Also Like

Cloud Changes the Game from Deployment to Adoption

Management Innovation is at the Top of the Innovation Stack

McKinsey on Unleashing the Value of Big Data Analytics

Categories: Architecture, Programming

How League of Legends Scaled Chat to 70 million Players - It takes Lots of minions.

How would you build a chat service that needed to handle 7.5 million concurrent players, 27 million daily players, 11K messages per second, and 1 billion events per server, per day?

What could generate so much traffic? A game, of course: League of Legends. League of Legends is a team-based game, a multiplayer online battle arena (MOBA), where two teams of five battle against each other to control a map and achieve objectives.

For teams to succeed, communication is crucial. I learned that from Michal Ptaszek, in an interesting talk on Scaling League of Legends Chat to 70 million Players (slides) at the Strange Loop 2014 conference. Michal gave a good example of why multiplayer team games require good communication between players: imagine a basketball game without the ability to call plays. It wouldn’t work. So chat is crucial; chat is not a Wouldn’t It Be Nice feature.

Michal structures the talk in an interesting way, using as a template the expression: Make it work. Make it right. Make it fast.

Making it work meant starting with XMPP as a base for chat. WhatsApp followed the same strategy. Out of the box you get something that works and scales well...until the user count really jumps. To make it right and fast, like WhatsApp, League of Legends found themselves customizing the Erlang VM. Adding lots of monitoring capabilities and performance optimizations to remove the bottlenecks that kill performance at scale.

Perhaps the most interesting part of their chat architecture is the use of Riak’s CRDTs (commutative replicated data types) to achieve their goal of shared-nothing, massively linear horizontal scalability. CRDTs are still esoteric, so you may not have heard of them yet, but they are the next cool thing if you can make them work for you. It’s a different way of thinking about handling writes.
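
To get a feel for why CRDTs sidestep write coordination, here is a minimal sketch in Java (illustrative only; it is neither from the talk nor Riak's implementation) of one of the simplest CRDTs, a grow-only counter:

import java.util.HashMap;
import java.util.Map;

// G-Counter sketch: each replica increments only its own slot,
// and merging takes the per-node maximum. Merge is commutative,
// associative, and idempotent, so replicas can exchange state in
// any order and still converge -- no write coordination needed.
class GCounter {
    private final String nodeId;
    private final Map<String, Long> counts = new HashMap<>();

    GCounter(String nodeId) { this.nodeId = nodeId; }

    void increment() {
        counts.merge(nodeId, 1L, Long::sum);
    }

    long value() {
        // the logical value is the sum over all replicas' slots
        return counts.values().stream().mapToLong(Long::longValue).sum();
    }

    void merge(GCounter other) {
        other.counts.forEach((node, n) -> counts.merge(node, n, Math::max));
    }
}

That convergence-without-coordination property is what enables the shared-nothing, linearly scalable write path mentioned above.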

Let’s learn how League of Legends built their chat system to handle 70 million players...

Categories: Architecture

Managing Your Project With Dilbert Advice — Not!

Herding Cats - Glen Alleman - Mon, 10/13/2014 - 16:11

Scott Adams provides cartoons of what not to do for most things technical, software and hardware alike. I actually saw him once, when he worked for PacBell in Pleasanton, CA. I was on a job at a major oil company deploying document management systems for OSHA 1910.119 - process safety management - and integrating CAD systems for control of safety critical documents.

The most popular use of Dilbert cartoons lately has been by the #NoEstimates community in support of the notion that estimates are somehow evil, used to make commitments that can't be met, and generally should be avoided when spending other people's money.

The cartoon below resonated with me for several reasons. What's happening here is classic misguided behavior: intentionally ignoring the established process of Reference Class Forecasting and, in typical Dilbert fashion, doing stupid things on purpose.

(Dilbert cartoon on estimating)

Reference Class Forecasting is a well-developed estimating process used across a broad range of technical, business, and finance domains. The characters above seem not to know anything about RCF. As a result, they are Doing Stupid Things On Purpose (DSTOP).

Here's how not to DSTOP when producing cost and schedule estimates and managing the associated risks, including the technical risk that the product you're building can't do what it's supposed to do on or before the date it needs to, at or below the cost you need, in order to stay in business.

The approach below may be complete overkill for your domain, so start by asking what the Value at Risk is. How much of our customer's money are we willing to write off if we don't have a sense of what DONE looks like, in units of measure meaningful to the decision makers? Don't know that? Then it's likely you've already put that money at risk: you're probably late and don't really know what capabilities will be produced when you run out of time and money.

Don't end up a cartoon character in a Dilbert strip. Learn how to properly manage your efforts, and the efforts of others, when spending your customer's money.

Managing in the presence of uncertainty from Glen Alleman
Categories: Project Management

Getting Started With Meteor Tutorial (In the Cloud)

Making the Complex Simple - John Sonmez - Mon, 10/13/2014 - 15:00

In this Meteor tutorial, I am going to show you how to quickly get started with the JavaScript Framework, Meteor, to develop a Meteor application completely from the cloud. I’ve been pretty excited about Meteor since I first saw a demo of it, but it can be a little daunting to get started, so I […]

The post Getting Started With Meteor Tutorial (In the Cloud) appeared first on Simple Programmer.

Categories: Programming

Xebia KnowledgeCast Episode 5: Madhur Kathuria and Scrum Day Europe 2014

Xebia Blog - Mon, 10/13/2014 - 10:48

The Xebia KnowledgeCast is a bi-weekly podcast about software architecture, software development, lean/agile, continuous delivery, and big data. Also, we'll have some fun with stickies!

In this 5th episode, we share key insights of Madhur Kathuria, Xebia India’s Director of Agile Consulting and Transformation, as well as some impressions of our Knowledge Exchange and Scrum Day Europe 2014. And of course, Serge Beaumont will have Fun With Stickies!

First, Madhur Kathuria shares his vision on Agile and we interview Guido Schoonheim at Scrum Day Europe 2014.

In this episode's Fun With Stickies Serge Beaumont talks about wide versus deep retrospectives.

Then, we interview Martin Olesen and Patricia Kong at Scrum Day Europe 2014.

Want to subscribe to the Xebia KnowledgeCast? Subscribe via iTunes, or use our direct rss feed.

Your feedback is appreciated. Please leave your comments in the shownotes. Better yet, send in a voice message so we can put you ON the show!


Intentional Disregard for Good Engineering Practices?

Herding Cats - Glen Alleman - Mon, 10/13/2014 - 03:22

It seems lately there is an intentional disregard of the core principles of the business of developing software intensive systems. The #NoEstimates community does this, but other collections of developers do as well.

  • We'd rather be writing code than estimating how much it's going to cost to write code.
  • Estimates are a waste.
  • The more precise the estimate, the more deceptive it is.
  • We can't predict the future and it's a waste trying to.
  • We can make decisions without estimating.

These notions of course are seriously misinformed about how probability and statistics work in the estimating paradigm. I've written about this in the past. But there are a few new books we're putting to work in our Software Intensive Systems (SIS) work that may be of interest to those wanting to learn more.

These are foundation texts for the profession of estimating. The continued disregard - ignoring, possibly - of these principles has become all too clear, not just in the sole-contributor software development domain, but all the way up to multi-billion dollar programs in defense, space, infrastructure, and other high risk domains.

Which brings me back to a core conjecture: there is no longer any engineering discipline in the software development domain, at least outside embedded systems like flight controls, process control, telecommunications equipment, and the like. There was a conjecture a while back that the Computer Science discipline at the university level should be split in two: software engineering and coding.

Here's a sample of the Software Intensive System paradigm, where the engineering of the systems is a critical success factor. And Yes Virginia, the Discipline of Agile is applied in the Software Intensive Systems world - emphasis on the term DISCIPLINE.

Categories: Project Management

SPaMCAST 311 – Backlog Grooming, Software Sensei, Containment-Viruses and Software

Software Process and Measurement Cast - Sun, 10/12/2014 - 22:00

SPaMCAST 311 features our essay on backlog grooming. Backlog grooming is an important technique that can be used in any Agile or Lean methodology. At one point the need for backlog grooming was debated, however most practitioners now find the practice useful. The simplest definition of backlog grooming is the preparation of the user stories or requirements to ensure they are ready to be worked on. The act of grooming and preparation can cover a wide range of specific activities and can be performed at any time.

We also have a new installment of Kim Pries’s Software Sensei column. What is the relationship between the containment of diseases and bad software? Kim makes the case that the processes for dealing with both are related.

The Essay begins

Backlog grooming is an important technique that can be used in any Agile or Lean methodology. At one point the need for backlog grooming was debated, however most practitioners now find the practice useful. The simplest definition of backlog grooming is the preparation of the user stories or requirements to ensure they are ready to be worked on. The act of grooming and preparation can cover a wide range of specific activities and can be performed at any time (although some times are better than others).

Listen to the rest now!

Next

SPaMCAST 312 features our interview with Alex Neginsky.  Alex is a real practitioner in a real company that has really applied Agile.  Almost everyone finds their own path with Agile.  Alex has not only found his path but has gotten it right and is willing to share!

Upcoming Events

DCG Webinars:

Agile Risk Management – It Is Still Important! October 24, 2014 11:30 EDT

Has the adoption of Agile techniques magically erased risk from software projects? Or, have we just changed how we recognize and manage risk?  Or, more frighteningly, by changing the project environment through adopting Agile techniques, have we tricked ourselves into thinking that risk has been abolished?

Upcoming Conferences:

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. You can give the Software Process and Measurement Cast some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Categories: Process Management

Kaizen and Urgency


Hand Drawn Chart Saturday

Kaizen is a Japanese word meaning good change. Change in a dynamic business environment has become an accepted norm; organizations must adapt or lose relevancy. The concept of kaizen has been adopted within the information technology industry as part of core management practices. In business terms, kaizen has been defined as continuous incremental change. You need energy to make change occur, and in many cases a sense of urgency is the mechanism used to generate the energy that drives continuous change.

John Kotter, author of Leading Change and the eight-step model of change, suggests that without a sense of urgency people don’t give the needed push of hard work to make change happen. Urgency begins by providing a focus that pierces through the noise, helping people first become aware of the need for change and then pay attention to the factors causing that need (see the awareness, attention, action model). The energy a sense of urgency injects into the process is needed to make the step from paying attention to taking action, breaking through complacency and disrupting the status quo.

The need for urgency in the equation for change can lead to problems. The first is the potential for confusing importance with urgency. In a perfect world, we would only want to react to what is both important and urgent. The second problem area is that of manufactured or false urgency. Both problematic scenarios sap the organization’s energy in the long term, which makes it more difficult to recognize and react when real change is needed. Further, if there is an overreliance on a manufactured or false sense of urgency, the focus becomes short-term rather than strategic.

My first job out of university was as a statistical analyst/sales forecaster for a women’s garment manufacturer. We had six “seasons” or product line offerings every year. Quotas were set for the sales force (incrementally bigger than last year). The sales management team, from regional sales managers to the national sales manager, provided constant “motivation” to the sales force. Motivation always included the dire consequences of missing the quota. There was always a sense of urgency, which drove action, including account prospecting (good behavior) and order padding (bad behavior). The manufactured urgency generated both good and bad change, and when things were going well it was pretty easy to sort those out. However, the business cycle has never been repealed, and when an economic downturn occurred it was difficult to recognize the real urgency. The organization therefore did not make strategic changes quickly enough. An 80-year-old firm with 750 million dollars in sales failed nearly overnight.

Urgency can become a narcotic that makes the need for real change harder to recognize and harder to generate. Signs of an overreliance on urgency can include:

  • People who are “too busy” to do the right thing,
  • Highly crafted PowerPoint pitches generated for even small changes,
  • Continually chasing a new silver bullet before the last one has been evaluated.

The goal of kaizen is to continually improve the whole organization. That includes empowering everyone, from development and operational personnel to all layers of management, to recognize and make change. Motivation is needed to evoke good change; however, we need to be careful that motivation does not translate into a need to generate a false sense of urgency.


Categories: Process Management

Principles Trump Practices

Herding Cats - Glen Alleman - Sat, 10/11/2014 - 15:23

Principles, Practices, and Processes are the basis of successful project management. It is popular in some circles to think that practices come before principles.

The principles of management, project management, software development and its management, product development management are immutable.

What does done look like, what's our plan to reach done, what resources will we need along the way to done, what impediments will we encounter and how will we overcome them, and how are we going to measure our progress toward done in units meaningful to the decision makers?

These are immutable principles. They can be used to test practices and processes by asking: what is the evidence that the practice or process enables the principle to be applied, and how do we know that the principle is being fulfilled?

 

Categories: Project Management

Lessons from running Neo4j based ‘hackathons’

Mark Needham - Sat, 10/11/2014 - 11:52

Over the last 6 months my colleagues and I have been running hands-on Neo4j sessions every few weeks, and I was recently asked if I could write up the lessons we’ve learned.

So in no particular order here are some of the things that we’ve learnt:

Have a plan but don’t stick to it rigidly

Something we learnt early on is that it’s helpful to have a rough plan of how you’re going to spend the session; otherwise it can feel quite chaotic for attendees.

We show people that plan at the beginning of the session so that they know what to expect and can plan their time accordingly if the second part doesn’t interest them as much.

Having said that, we’ve often gone off on a tangent and since people have been very interested in that we’ve just gone with it.

This sometimes means that you don’t cover everything you had in mind but the main thing is that people enjoy themselves so it’s nothing to worry about.

Prepare for people to be unprepared

We try to set expectations in advance of the sessions with respect to what people should prepare or have installed on their machines, but despite that you’ll have people at varying levels of readiness.

Having noticed this trend over a few months we now allot time in the schedule for getting people up and running and if we’re really struggling then we’ll ask people to pair with each other.

There will also be differences in experience level, so we always factor in some time to go over the basics for those who are new. We also encourage experienced people to help the others out – after all, you only really know if you know something when you try to teach someone else!

Don’t try to do too much

Our first ‘hackathon’-esque event involved an attempt to build a Java application based on a British Library dataset.

I thought we’d be able to model the data set, import it and then wire up some queries to an application in a few hours.

This proved to be ever so slightly ambitious!

It took much longer than anticipated to do those first two steps and we didn’t get to build any of the application – teaching people how to model in a graph is probably a session in its own right.

Show the end state

Feedback we got from attendees to the first few versions was that they’d like to see what the end state should have looked like if they’d completed everything.

In our Clojure Hackathon Rohit got the furthest so we shared his code with everyone afterwards.

An even better approach is to have the final solution ready in advance and have it checked in on a different branch that you can point people at afterwards.

Show the intermediate states

Another thing we noticed was that if people got behind in the first part of the session then they’d never be able to catch up.

Nigel therefore came up with the idea of snapshotting intermediate states so that people could reset themselves after each part of the session. This is something that the Polymer tutorial does as well.

We worked out that we have two solid one hour slots before people start getting distracted by their journey home so we came up with two distinct types of tasks for people to do and then created a branch with the solution at the end of those tasks.

No doubt there will be more lessons to come as we run more sessions, but this is where we are at the moment. If you fancy joining in, our next session is Java-based, in a couple of weeks’ time.

Finally, if you want to see a really slick hands-on meetup then you’ll want to head over to the London Clojure Dojo. Bruce Durling has even written up some tips on how you can run one yourself.

Categories: Programming

On F# and Object Oriented Guilt

Eric.Weblog() - Eric Sink - Sat, 10/11/2014 - 03:00

I'm writing a key-value database in F#. Because clearly the world needs another key-value database, right?

Actually, I'm doing this to learn F#. I wanted a non-trivial piece of code to write. Something I already know a little bit about.

(My design is a log structured merge tree, conceptually similar to LevelDB or the storage layer of SQLite4. The code is on GitHub.)

As a starting point, I wrote the whole thing in C# first. Then I ported it to F#, in a mostly line-by-line port. My intention was (and still is) to use that port as a starting point, evolving the F# version toward a more functional design.

The final result may not become useful, but I should end up knowing a lot more about F# than I knew when I started.

Learning

Preceding all this code was a lot of reading. I spent plenty of time scouring fsharpforfunandprofit.com and fsharp.org and the F# stuff on MSDN and Stack Overflow.

I learned that F# is closely related to OCaml, about which I knew nothing. But then I learned that OCaml is a descendant of ML, which I vaguely remember from when I took CS 325 as an undergrad with Dr. Campbell. This actually increased my interest, as it tied F# back to a positive experience of mine. (OTOH, if I run across something called L# and find that it has roots in Prolog, my interest shall proceed no further.)

I learned that there is an F# community, and that community is related to (or is a subset of) the functional programming community.

And I picked up on a faint whiff of tension within that community, centered on questions about whether something is purely functional or not. There are purists. And there are pragmatists.

I saw nothing ugly. But I definitely got the impression that I had choices to make, and that I might encounter differing opinions about how I should make those choices.

I decided to just ignore all that and write code. This path has usually served me well.

So I started writing my LSM tree in C#.

Digression: What is a log structured merge tree?

The basic idea of an LSM tree is that a database is implemented as a list of lists. Each list is usually called a "sorted run" or a "segment".

The first segment is kept in memory, and is the only one where inserts, updates, and deletes happen.

When the memory segment gets too large, it is flushed out to disk. Once a segment is on disk, it is immutable.

Searching or walking the database requires checking all of the segments, in the proper order -- for a given key, newer segments override older segments.

The more segments there are, the trickier and slower this will be. So we can improve things by merging two disk segments together to form a new one.

The memory segment can be implemented with whatever data structure makes sense. The disk segment is usually something like a B+tree, except it doesn't need to support insert/update/delete, so it's much simpler.
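
To make the read path concrete, here is a small sketch in Java (illustrative only, not the author's C#/F# code): check segments from newest to oldest and return the first hit, so newer segments shadow older ones.

import java.util.List;

class LsmLookup {
    interface Segment {
        byte[] get(byte[] key); // null if the key is absent from this segment
    }

    // segments must be ordered newest first
    static byte[] lookup(List<Segment> segments, byte[] key) {
        for (Segment segment : segments) {
            byte[] value = segment.get(key);
            if (value != null) {
                return value; // first (newest) hit wins
            }
        }
        return null; // key not present in any segment
    }
}

(A real implementation also needs tombstones, so that a delete recorded in a newer segment can shadow a live value in an older one.)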

The C# version: OO Everywhere

It's all about ICursor.

public interface ICursor
{
    void Seek(byte[] k, SeekOp sop);
    void First();
    void Last();
    void Next();
    void Prev();
    bool IsValid();
    byte[] Key();
    Stream Value();
    int ValueLength();
    int KeyCompare(byte[] k);
}

(Credit to D. Richard Hipp and the SQLite developers. ICursor is more or less what you would get if you perused the API for the SQLite4 storage engine and translated it to C#.)

This interface defines the methods which can be used to search or iterate over one segment. It's an interface. It doesn't care about how that segment is stored. It could be a memory segment. It could be a disk segment. It could be something else, as long as it plays nicely and follows the rules.

I have three objects which implement ICursor.

First, there is MemorySegment, which is little more than a wrapper around a System.Collections.Generic.Dictionary<byte[],Stream>. It has a method called OpenCursor() which returns an ICursor. Nothing too interesting here.

The bulk of my code is in dealing with segments on "disk", which are represented as B+trees. To construct a disk segment, call BTreeSegment.Create. Its parameters are a System.IO.Stream (into which the B+tree will be written) and an ICursor (from which the keys and values will be obtained).

To get the ICursor for that BTreeSegment, call BTreeSegment.OpenCursor().

The object that makes it all work is MultiCursor. This is an ICursor that has one or more subcursors. You can search or iterate over a MultiCursor just like any other. Under the hood, it will deal with all the subcursors and present the illusion that they are one segment.

And mostly that's it.

  • To search the database, open a MultiCursor on all the segments and call Seek().

  • To flush a memory segment to disk, pass its cursor to BTreeSegment.Create.

  • To combine any two (or more) segments into one, create a MultiCursor on them and pass it to BTreeSegment.Create. (All three operations are sketched just below.)
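
Here is a rough F# sketch of all three operations, following the API described above. MultiCursor.Create is a hypothetical factory name, since the construction of a MultiCursor isn't shown here, and the return types are my guesses:

let search (segments : ICursor list) (key : byte[]) : Stream option =
    // Searching: one MultiCursor over every segment, then Seek.
    let csr : ICursor = MultiCursor.Create(segments)
    csr.Seek(key, SeekOp.SEEK_EQ)
    if csr.IsValid() then Some (csr.Value()) else None

let flush (strm : Stream) (mem : MemorySegment) =
    // Flushing: write whatever the memory segment's cursor can see.
    BTreeSegment.Create(strm, mem.OpenCursor())

let mergeSegments (strm : Stream) (a : BTreeSegment) (b : BTreeSegment) =
    // Merging: a MultiCursor makes two segments look like one,
    // so a merge is just a flush of that illusion.
    let mc = MultiCursor.Create([ a.OpenCursor(); b.OpenCursor() ])
    BTreeSegment.Create(strm, mc)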

As I said above, this is not a complete implementation. If this were "LevelDB in F#", there would need to be, for example, something that owns all these segments and makes smart decisions about when to flush the memory segment and when to merge disk segments together. That piece of code is currently absent here.

Porting to F#

Before I started the F# port, I did some soul searching about the overall design. ICursor and its implementations are very elegant. Would I have to give them up in favor of something more functional?

I decided to just ignore all that and write code. This path has usually served me well.

So I proceeded to do the line-by-line port to F#:

  • I made a copy of the C# code, which I'll refer to as CsFading.

  • Then I started a new F# library project, which we might call FsGrowing.

  • I changed my xUnit test to reference CsFading instead of my actual C# version.

  • I added a reference from CsFading to [the initially empty] FsGrowing.

Then I executed the following loop:

while (!CsFading.IsEmpty())
{
    var piece = ChooseSomethingFromCsFading();
    FsGrowing.AddImplementation(piece);
    CsFading.Remove(piece);
    GetTheTestsPassingAgain();
}

I started with this:

type ICursor =
    abstract member Seek : k:byte[] * sop:SeekOp -> unit
    abstract member First : unit -> unit
    abstract member Last : unit -> unit
    abstract member Next : unit -> unit
    abstract member Prev : unit -> unit
    abstract member IsValid : unit -> bool
    abstract member Key : unit -> byte[]
    abstract member Value : unit -> Stream
    abstract member ValueLength : unit -> int
    abstract member KeyCompare : k:byte[] -> int

And little by little, CsFading got smaller as FsGrowing got bigger.

It was very gratifying when I reached the point where the F# version was passing my test suite.

But I didn't celebrate long. The F# code was a mess. Lots of mutables. Plenty of if-then-else. Nulls. Mutable collections. While loops.

Basically, I had written C# using F# syntax. Even the pseudocode for my approach was imperative.

But there were plenty of positives as well. Porting the C# code to F# actually made the C# code better. It was like a very intensive code review.

And the xUnit tests got very cool when I configured them to run every test four times:

  • Use the C# version.
  • Use the F# version.
  • Use the C# version to write B+trees and the F# version to read them.
  • Use the F# version to write B+trees and the C# version to read them.
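
For anyone curious how such a matrix might be wired up, here is a hypothetical sketch using an xUnit Theory. The helpers writeSegment, openCursor, and sampleCursor are names I made up for shims that dispatch to one implementation or the other; they are not from the actual test suite:

open Xunit

[<Theory>]
[<InlineData("cs", "cs")>]
[<InlineData("fs", "fs")>]
[<InlineData("cs", "fs")>]
[<InlineData("fs", "cs")>]
let ``B+tree roundtrip`` (writer : string) (reader : string) =
    use strm = new System.IO.MemoryStream()
    // Write a segment with one implementation...
    writeSegment writer strm (sampleCursor ())
    // ...then read it back with the other and sanity-check it.
    let csr = openCursor reader strm
    csr.First()
    Assert.True(csr.IsValid())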

If nothing else, I had convinced myself that full interop between F# and C# was actually quite smooth.

Also, I didn't completely punt on functional stuff during the port. There were a few places where I changed things toward a more F#-ish approach. Here's a function that turned out kinda nicely:

let rec searchLeaf k min max sop le ge =
    if max < min then
        // Nothing left to search: resolve the miss according to the SeekOp,
        // using the best less-than (le) or greater-than (ge) slot seen so far.
        if sop = SeekOp.SEEK_EQ then -1
        elif sop = SeekOp.SEEK_LE then le
        else ge
    else
        let mid = (max + min) / 2
        let cmp = compareKeyInLeaf mid k
        if 0 = cmp then mid
        elif cmp < 0 then searchLeaf k (mid+1) max sop mid ge
        else searchLeaf k min (mid-1) sop le mid

It's a binary search of the sorted key list within a single leaf page of the B+tree. The SeekOp can be used to specify that if the sought key does not exist, the one just before it or after it should be returned.

And yes, searchLeaf would be more idiomatic if it were using pattern matching instead of if-then-else. But at least I got rid of the mutables and the loop and made it recursive! :-)
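
For what it's worth, here is what a pattern-matching version might look like (a sketch of the same logic, not code from the actual project):

let rec searchLeaf k min max sop le ge =
    if max < min then
        // The range is exhausted: resolve the miss according to the SeekOp.
        match sop with
        | SeekOp.SEEK_EQ -> -1
        | SeekOp.SEEK_LE -> le
        | _ -> ge
    else
        let mid = (max + min) / 2
        match compareKeyInLeaf mid k with
        | 0 -> mid
        | cmp when cmp < 0 -> searchLeaf k (mid + 1) max sop mid ge
        | _ -> searchLeaf k min (mid - 1) sop le mid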

Anyway, I expected to reach this point with a clear understanding of what to do next. And I didn't have that.

Gleaning from the experience of others

In terms of timing, all of this happened just before and during the Xamarin Evolve conference (which, as I write this, ended today). The C# version was mostly written the week before the conference. The F# port was done during the conference.

And that moment where the F# version passed the test suite, leaving me clueless about how to proceed? That happened Wednesday.

On Thursday, Rachel Reese gave a fantastic session on F#. I left with the impression that maybe a little OO in my F# wasn't so bad.

On Friday, Larry O'Brien gave another fantastic session on F#. And I left with the even stronger impression that even though learning cool functional stuff is great, I don't have to be a functional purist to benefit from F#.

I also found a fair amount of "Don't feel guilty about OO in F#" in a document called The F# Component Design Guidelines on fsharp.org.

Anyway, for now, ICursor will remain. Maybe there's a more functional way, but right now I don't know what that would look like.

So I'm going to just ignore all that and write code. This path has usually served me well.

 

Clean Up

Why do death march (gruelingly overworked and therefore high-risk) projects still occur? While poor processes, poor estimation, and out-of-control changes are all still factors, I am seeing more of them that reflect poor choices made under business pressure. How often have you heard someone say that the end justifies the means? It is easier to say yes to everything, avoid hard discussions about trade-offs, and just admonish the troops to work harder and smarter.

In Agile, we would expect backlogs and release plans to help force those sorts of discussions. Unless, of course, you stop doing them. I recently talked to a group that had identified, during a retrospective, the need to do a better job of breaking accepted backlog items down into tasks during sprint planning, only to fall prey to an admonition to spend less time planning and more time "working," which led to rework and disappointing sprint results.

As you discover messes, whether in the code you are working on or in the processes you are using to guide your work, you are obligated to clean up after yourself. If you don't, sooner or later no one will want to play in your park . . . and probably neither will you.


Categories: Process Management

Stuff The Internet Says On Scalability For October 10th, 2014

Hey, it's HighScalability time:


Social climber: Instagram explorer scales to new heights in New York.

 

  • 11 billion: world population in 2100; 10 petabytes: size of the Netflix data warehouse on S3; $600 billion: the loss when a trader can't type; 3.2: 0-60 mph time of probably not my next car.
  • Quotable Quotes:
    • @kahrens_atl: Last week #NewRelic Insights wrote 618 billion events and ran 237 trillion queries with 9 millisecond response time #FS14
    • @sustrik: Imagine debugging on a quantum computer: Looking at the value of a variable changes its value. I hope I'll be out of business by then.
    • Arrival of the Fittest: Solving Evolution's Greatest Puzzle: Every cell contains thousands of such nanomachines, each of them dedicated to a different chemical reaction. And all their complex activities take place in a tiny space where the molecular building blocks of life are packed more tightly than a Tokyo subway at rush hour. Amazing.
    • Eric Schmidt: "The simplest outcome is we're going to end up breaking the Internet," said Google's Schmidt. Foreign governments, he said, are "eventually going to say, we want our own Internet in our country because we want it to work our way, and we don't want the NSA and these other people in it."
    • Antirez: Basically it is neither a CP nor an AP system. In other words, Redis Cluster does not achieve the theoretical limits of what is possible with distributed systems, in order to gain certain real world properties.
    • @aliimam: Just so we can fathom the scale of 1B vs 1M: 1,000,000 seconds is 11.5 days. 1,000,000,000 seconds is 31.6 YEARS
    • @kayousterhout: 92% of catastrophic failures in distributed data-intensive systems caused by incorrect error handling https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-yuan.pdf … #osdi14
    • @DrQz: 'The purpose of computing is insight, not numbers.' (Hamming) Sometimes numbers ARE the insight, so make them accessible too. (Me)

  • Robert Scoble on the Gillmor Gang said that because of the crush of signups, ello had to throttle invites. Their single PostgreSQL server couldn't handle it, captain.

  • Containers are getting much larger with new composite materials. Not that kind of container. Shipping containers. High oil costs have driven ships carrying 5000 containers to evolve. Now they can carry 18,000 and soon 19,000 containers!

  • If you've wanted to make a network game then this is a great start. Making Fast-Paced Multiplayer Networked Games is Hard: Fast-paced multiplayer games over the Internet are hard, but possible. First understanding your constraints then building within them is essential. I hope I have shed some light on what those constraints are and some of the techniques you can use to build within them. No doubt there are other ways out there and ways yet to be used. Each game is different and has its own set of priorities. Learning from what has been done before could help a great deal.

  • Arrival of the Fittest: Solving Evolution's Greatest Puzzle: Environmental change requires complexity, which begets robustness, which begets genotype networks, which enable innovations, the very kind that allow life to cope with environmental change, increase its complexity, and so on, in an ascending spiral of ever-increasing innovability...is the hidden architecture of life.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

New daily stand up questions

Xebia Blog - Fri, 10/10/2014 - 15:51

This post provides some alternative stand-up questions to make your stand-up forward-looking, goal-focused, and team-focused.

The questions are:

  1. What have I achieved since our last SUM?
  2. What is my goal for today?
  3. What things keep me from reaching my goal?
  4. What is our team goal for the end of our sprint day?

The daily stand up runs on a few standard questions. The traditional questions are:

  • What did I accomplish yesterday?
  • What will I be doing today?
  • What obstacles are impeding my progress?

A couple of effects I see when using the above list are:

  • A lot of emphasis is placed on past activities rather than getting the most out of the day at hand.
  • Team members say what they will be busy with, but not what they aim to complete.
  • Impediments are not related to daily goals.
  • There is no summary for the team relating to the sprint goal.

If you are experiencing the same issues, you could try the alternative questions. They worked for me, but any feedback is of course appreciated. Are you using other questions? Let me know your experience. You can use the PDF below to print out the questions for your scrum board.

STAND_EN

 

What Informs My Project Management View

Herding Cats - Glen Alleman - Fri, 10/10/2014 - 15:44

In a recent discussion (of sorts) about estimating - Not Estimating, actually - I realized something that should have been obvious. I travel in a world not shared by the staunch advocates of #NoEstimates. They appear to be sole contributors. I came to this after reading Peter Kretzman's 3rd installment, where he re-quoted a statement by Ron Jeffries,

Even with clear requirements — and it seems that they never are — it is still almost impossible to know how long something will take, because we’ve never done it before.

This is a sole contributor or small team paradigm.

So let's pretend we work at Price/Waterhouse/Cooper and we're playing our roles - Peter as CIO advisor, me as Program Performance Management advisor. We've been asked by our new customer to develop a product from scratch, estimate the cost and schedule, and provide some confidence level that the needed business capabilities will be available on or before a date and at or below a cost. Why, you ask? Because that's in the business plan for this new product, and if they're late or overrun the planned cost, that will be a balance sheet problem.

What would we do? Well, we'd start with the PWC resource management database – held by HR – and ask for people with “past performance experience” in the business domain and the problem domain. Our new customer did not “invent” a new business domain, so it's likely we'll find people who know what our new customer does for money. We'd look to see whether, among the 195,433 people in the database who work for PWC worldwide, there is someone, somewhere, who knows what the customer does for money and what kinds of business capabilities this new system needs to provide. If there is no one, then we'd look in our database of 10,000 or so partner relationships to find someone.

If we found no one who knows the business and the needed capabilities, we’d no bid.

This notion of “I've been asked to do something that's never been done before, so how can I possibly estimate it” really means “I'm doing something I've never done before.” Since I'm a sole contributor, the population of experience in doing this new thing for the new customer is ONE – me. So since I don't know how the problem has been solved in the past, I can't possibly know how to estimate the cost, schedule, and needed capabilities. And of course I'm absolutely correct to say that new development with unknown requirements can't be estimated. But those unknown requirements are actually unknown to me; they may be known to another. In the population of 195,000 other people in our firm, I'm no longer alone in my quest to come up with an estimate.

So the question, “What if we encounter new and unknown needs, how can we estimate?” is actually a core problem for the sole contributor or small team. It'd be rare that a sole contributor or small team would have encountered the broad spectrum of domains and technologies needed to establish the necessary Reference Classes to address this open-ended question. This is not the fault of the sole contributor. It is simply the situation of small numbers versus large numbers.

This is the reason the PWCs of the world exist. They get asked to do things that sole contributors never have an opportunity to see.

Categories: Project Management