Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Trust First, Email Second

I’m shopping for furniture for our new house. I need a chair or a sofa for our family room, lights, all kinds of things.

I was on Facebook, and there was an ad that looked interesting. I thought, “Should I click?” I clicked anyway.

The site wants my email address. I can’t see anything without logging in. First, before I see any furniture, I have to give them my email address.

This action violates the basics of anything about building rapport and trust. I don’t even know these people. Do I want to share yet-another-login and password with this site?

This is not an idle question.

I don’t know about you. I have way more than 100 logins and passwords. I use 1Password to manage my logins and passwords. But it’s still not easy.

This site—and any other ecommerce site—has to gain my trust before I share my email with it.

It’s the same with you. I offer you my Pragmatic Manager email newsletter. I show you past issues, both in chronological order and by tag. I don’t pressure you to sign up.

I offer you blog postings in the hope you will sign up for my email newsletter. I don’t pressure you to sign up. Why would I?

How can you gain trust in me and my offerings if I create a barrier?

It’s the same with that site. I have no trust in them. Why would I give them my email address? How can I possibly trust them?

When you build a product, when you create a team, when you do anything that has people who need to come together, think of how you build trust first. Once you build trust, the rest is much easier.

Categories: Project Management

10 Tips for Optimizing NGINX and PHP-fpm for High Traffic Sites

Adrian Singer has boiled down 7 years of experience to a set of 10 very useful tips on how to best optimize NGINX and PHP-fpm for high traffic sites:

  1. Switch from TCP to UNIX domain sockets. When communicating with processes on the same machine, UNIX sockets perform better than TCP because there's less copying and fewer context switches (see the configuration sketch after this list).
  2. Adjust Worker Processes. Set worker_processes in your nginx.conf file to the number of cores your machine has and increase the number of worker_connections.
  3. Set up upstream load balancing. Multiple upstream backends on the same machine produce higher throughput than a single one.
  4. Disable access log files. Log files on high traffic sites involve a lot of I/O that has to be synchronized across all threads, which can have a big impact.
  5. Enable GZip
  6. Cache information about frequently accessed files
  7. Adjust client timeouts.
  8. Adjust output buffers.
  9. /etc/sysctl.conf tuning.
  10. Monitor. Continually monitor the number of open connections, free memory and number of waiting threads and set alerts if thresholds are breached. Install the NGINX stub_status module.
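
As a rough illustration of tips 1 through 4, a minimal nginx.conf fragment might look like the sketch below. The socket paths, pool names and numbers are placeholders of my own, not values from the article – see the original post for complete, tuned configuration examples.

worker_processes  4;                    # tip 2: roughly one worker per CPU core

events {
    worker_connections  1024;           # tip 2: raise the per-worker connection limit
}

http {
    access_log  off;                    # tip 4: avoid access-log I/O on hot paths

    upstream phpbackend {               # tip 3: several PHP-fpm pools on the same box
        server unix:/var/run/php-fpm-pool1.sock;   # tip 1: UNIX domain sockets
        server unix:/var/run/php-fpm-pool2.sock;   #        instead of TCP ports
    }

    server {
        listen 80;

        location ~ \.php$ {
            fastcgi_pass  phpbackend;
            include       fastcgi_params;
        }
    }
}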

Please take a look at the original article as it includes excellent configuration file examples.

Categories: Architecture

Jersey/Jax RS: Streaming JSON

Mark Needham - Wed, 04/30/2014 - 02:24

About a year ago I wrote a blog post showing how to stream an HTTP response using Jersey/Jax RS, and I recently wanted to do the same thing but this time using JSON.

A common pattern is to take our Java object and build a JSON string representation of it, but that isn’t the most efficient use of memory because we then hold both the Java object and its string representation.

This is particularly problematic if we need to return a lot of the data in a response.

By writing a little bit more code we can get our response to stream to the client as soon as some of it is ready rather than building the whole result and sending it all in one go:

@Path("/resource")
public class MadeUpResource
{
    private final ObjectMapper objectMapper;
 
    public MadeUpResource() {
        objectMapper = new ObjectMapper();
    }
 
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response loadHierarchy(@PathParam( "pkPerson" ) String pkPerson) {
        final Map<Integer, String> people  = new HashMap<>();
        people.put(1, "Michael");
        people.put(2, "Mark");
 
        StreamingOutput stream = new StreamingOutput() {
            @Override
            public void write(OutputStream os) throws IOException, WebApplicationException
            {
                JsonGenerator jg = objectMapper.getJsonFactory().createJsonGenerator( os, JsonEncoding.UTF8 );
                jg.writeStartArray();
 
                for ( Map.Entry<Integer, String> person : people.entrySet()  )
                {
                    jg.writeStartObject();
                    jg.writeFieldName( "id" );
                    jg.writeString( person.getKey().toString() );
                    jg.writeFieldName( "name" );
                    jg.writeString( person.getValue() );
                    jg.writeEndObject();
                }
                jg.writeEndArray();
 
                jg.flush();
                jg.close();
            }
        };
 
 
        return Response.ok().entity( stream ).type( MediaType.APPLICATION_JSON ).build();
    }
}

If we run that, this is the output we’d see:

[{"id":"1","name":"Michael"},{"id":"2","name":"Mark"}]

It’s a simple example but hopefully it’s easy to see how we could translate that if we wanted to stream more complex data.
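
To stream richer objects rather than map entries, one option – a sketch of mine, not code from the original post – is to let the same Jackson 1.x ObjectMapper serialise each element straight onto the generator. Person here is a hypothetical POJO and people a collection of them:

StreamingOutput stream = new StreamingOutput() {
    @Override
    public void write(OutputStream os) throws IOException, WebApplicationException
    {
        JsonGenerator jg = objectMapper.getJsonFactory().createJsonGenerator( os, JsonEncoding.UTF8 );
        jg.writeStartArray();
        for ( Person person : people )
        {
            // writeValue serialises each element as soon as it is produced,
            // so only one element needs to be held in memory at a time
            objectMapper.writeValue( jg, person );
        }
        jg.writeEndArray();
        jg.flush();
        jg.close();
    }
};

Because the generator writes straight to the response OutputStream, memory use stays proportional to a single element rather than to the whole payload.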

Categories: Programming

Clojure: Paging meetup data using lazy sequences

Mark Needham - Wed, 04/30/2014 - 01:20

I’ve been playing around with the meetup API to do some analysis on the Neo4j London meetup and one thing I wanted to do was download all the members of the group.

A feature of the meetup API is that each endpoint will only allow you to return a maximum of 200 records, so I needed to make use of offsets and paging to retrieve everybody.

It seemed like a good chance to use some lazy sequences to keep track of the offsets and then stop making calls to the API once I wasn’t retrieving any more results.

I wrote the following functions to take care of that bit:

(defn unchunk [s]
  (when (seq s)
    (lazy-seq
      (cons (first s)
            (unchunk (next s))))))
 
(defn offsets []
  (unchunk (range)))
 
 
(defn get-all [api-fn]
  (flatten
   (take-while seq
               (map #(api-fn {:perpage 200 :offset % :orderby "name"}) (offsets)))))

I previously wrote about the chunking behaviour of lazy collections, which meant that without the unchunk function I ended up with a minimum of 32 calls to each URI – which wasn’t what I had in mind!
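
A quick REPL sketch (mine, not from the original post) shows the difference: mapping a side-effecting function over a chunked sequence such as (range 1000) realises 32 elements at once, whereas the unchunked version realises only what is actually consumed:

;; chunked: (range 1000) is a chunked seq, so realising the first element
;; forces a whole 32-element chunk and the simulated "API call" fires 32 times
(first (map #(do (println "calling API with offset" %) %) (range 1000)))

;; unchunked: only the element that is actually consumed is realised,
;; so the simulated "API call" fires once
(first (map #(do (println "calling API with offset" %) %) (unchunk (range 1000))))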

To get all the members in the group I wrote the following function which is passed to get-all:

(:require [clj-http.client :as client])
 
(defn members
  [{perpage :perpage offset :offset orderby :orderby}]
  (->> (client/get
        (str "https://api.meetup.com/2/members?page=" perpage
             "&offset=" offset
             "&orderby=" orderby
             "&group_urlname=" MEETUP_NAME
             "&key=" MEETUP_KEY)
        {:as :json})
       :body :results))

So to get all the members we’d do this:

(defn all-members []
  (get-all members))

I’m told that using lazy collections when side effects are involved is a bad idea – presumably because the calls to the API might never end – but since I only run it manually I can just kill the process if anything goes wrong.

I’d be interested in how others would go about solving this problem – core.async was suggested, but that seems to result in much more, and more complicated, code than this version.

The code is on github if you want to take a look.

Categories: Programming

Why Measure?

We measure, in part, so we can predict what is around the next corner.


I am often asked the question “why measure?” The question, with its sheer simplicity, always stops me in my tracks. It is easy to respond with a number of high-minded and academic reasons describing why you should measure.  The reasons include:

  • To measure performance,
  • To ensure our processes are efficient,
  • To provide input for managing,
  • To estimate, and
  • To pass a CMMI appraisal (I really did not say that but I might have).

All are true, all important and all common reasons to measure, but the answer isn’t complete. On reflection I would add two further reasons to measure:

  • To control specific behavior, and
  • To predict the future.

When we measure we are sending an explicit message about what is important to the organization and therefore sending an explicit signal on how we expect people to act (remember the old adage, “you get what you measure”). The linkage from measurement to behavior has long been known, therefore measurement can be used to guide behavior. Measuring requires us not only to examine the outcome we want to incent but also the impact it can have on the whole system that generates that output. If you truly get what you measure, then measuring a specific outcome will change the relative importance of that outcome in relation to all other outcomes.

The pursuit of predicting the future is a mainstay of human culture. We practice prediction daily. Examples include being able to predict whether there is a predator behind the next rock, whether planting corn or soybeans will bring a greater profit, or even whether your favorite sports team will win its next match. Measurement provides the data to predict the future in a more disciplined manner than guessing based on instincts.

Changing behavior and predicting the future are related.  It might almost be redundant to answer the question with both answers.  Incenting a behavior is not only a mechanism for predicting the future by influencing it, but also a means to control the outcome. Why do we measure?  The answer must include the common reasons along with the ideas of measuring to control or lead behavior and to predict the future.


Categories: Process Management

Elements of Project Success

Herding Cats - Glen Alleman - Tue, 04/29/2014 - 18:11

There's a continuing discussion on LinkedIn and Twitter about project success, the waste of certain activities on projects, and of course the argument without end on estimating the cost of producing the value from projects. It's really an argument without evidence, since some of the protagonists in the estimating discussion have yet to come up with alternatives.

I came to understand that Project Success is multidimensional a few years back, after reading "Reinventing Project Management," Aaron Shenhar and Dov Dvir, Harvard University Press. The other book that changed my view of the world was IT Governance: How Top Performers Manage IT Decision Rights for Superior Results, Peter Weill and Jeanne W. Ross, Harvard University Press. 

This last book should put a stake in the heart of #NoEstimates, since the decision rights for those needing and asking for the cost and schedule for the business capabilities belongs to those with the money, not those spending the money.

A summary of the book can be found in the paper, "Project Success: A Multidimensional Strategic Concept," Aaron Shenhar, Dov Dvir, Ofer Levy, and Alan Maltz, Long Range Planning 34, (2001) pp 699-725.

In many cases there is not a "product" per se, but a service. These are wrapped in a larger context in today's enterprise paradigm as "capabilities" – the capabilities to accomplish a goal, mission, or business outcome. This is done through products and processes. Both are used by people, other processes, and other products to accomplish other goals, missions, or outcomes. This is the System of Systems view of the "project" paradigm.

Shenhar and Dvir's research, along with Levy and Maltz's in that paper, showed there are four success dimensions.

  1. Project Efficiency - meeting schedule goals, meeting budget goals, meeting the technical project goals.
    • These goals start with estimates of the cost, schedule, and technical performance possibilities.
    • These are estimates and estimates have confidence intervals.
    • With these estimates, the simplest business assessment can be made: Return on Investment = (Value - Cost) / Cost (a worked example follows this list).
    • More complex assessments are actually needed of course. Capabilities Based Planning is one approach; so are Real Options, Balanced Scorecard, and others.
  2. Impact on Customer - meeting functional performance, meeting technical specifications, fulfilling customer needs, solving a customer's problem, the customer is using the product or service, and customer satisfaction. In the Systems Engineering paradigm these are assessed with Measures of Effectiveness (MOE), Measures of Performance (MOP), Technical Performance Measures (TPM), and Key Performance Parameters (KPP).
    • The customer - whoever that is defined as - bought a capability to do something.
    • This something can be a business process, a mission fulfillment, a service, a process.
    • The something is defined by the users of the something.
  3. Business Success - commercial success, creating larger market share. For public projects other measures are needed; for defense and space, mission accomplishment is one of them.
    • Success starts with the assessment of the Capability to fulfill its need. This is a strategy making process. 
    • In Balanced Scorecard this is defined in the Strategy Map for the business or the project.
    • In this paradigm, success is assessed by Measures of Effectiveness and Measures of Performance.
  4. Preparing for the Future - creating new markets or new opportunities for further mission success, creating new product lines or basis for expanding capabilities, and developing new technologies that enable missions or goals.
    • Projects and their outcomes rarely stand alone or have a terminal state - retirement or obsolescence of the outcomes.
    • One measure of success of the project is its ability - the project outcomes, processes used to build them, and the people who did the work - to be the basis of future projects, products, or services.
    • This evolutionary approach is an assessment in itself. 
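
To make the Return on Investment formula in the first dimension concrete, here is a worked example with invented numbers, purely for illustration: a project that costs $400K to deliver and produces $600K of value gives ROI = ($600K - $400K) / $400K = 0.5, i.e. a 50% return, before any discounting or risk adjustment.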

With this paradigm, principles, practices, and processes become the basis of "project management" and the resulting product or service. But the measures of success are better described by the Shenhar and Dvir model, since they are the direct consequences of all the enablers of that success.

So Here's the Killer Question(s)

  1. If we are working to produce value, do we know the cost of producing that value? Does that cost to produce meet the business goals of those paying us? If it's our own money, does that cost to produce meet our own business goals?
  2. If we are working to produce value, do those paying us - or ourselves - have a time when this value is needed to meet their goals or our goals?
  3. Do those we're working for - or ourselves - have an understanding of what capabilities are needed from our work efforts to meet a business goal, fulfill a mission or accomplish an outcome?
Categories: Project Management

Sponsored Post: Apple, Wargaming.net, PagerDuty, HelloSign, CrowdStrike, Gengo, ScaleOut Software, Couchbase, Tokutek, MongoDB, BlueStripe, AiScaler, Aerospike, LogicMonitor, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Apple is hiring a Senior Engineer in their Mobile Services team. We seek an accomplished server-side engineer capable of delivering an extraordinary portfolio of features and services based on emerging technologies to our internal customers. Please apply here

  • Apple is hiring a Software Engineer in their Messaging Services team. We build the cloud systems that power some of the busiest applications in the world, including iMessage, FaceTime and Apple Push Notifications. You'll have the opportunity to explore a wide range of technologies, developing the server software that is driving the future of messaging and mobile services. Please apply here.

  • Apple is hiring an Enterprise Software Engineer. Apple's Emerging Technology Services group provides a Java based SOA platform for various applications to interact with each other. The platform is designed to handle millions of messages a day with very low latency. We have an immediate opening for a talented Software Engineer in a highly visible team who is passionate about exploring emerging technologies to create elegant scalable solutions. Please apply here

  • Engine Programmer - C/C++. Wargaming|BigWorld is seeking Engine Programmers to join our team in Sydney, Australia. We offer a relocation package, Australian working visa & great salary + bonus. Your primary responsibility will be to work on our PC engine. Please apply here

  • Senior Engineer wanted for large scale, security oriented distributed systems application that offers career growth and independent work environment. Use your talents for good instead of getting people to click ads at CrowdStrike. Please apply here.

  • Ops Engineer - Are you passionate about scaling and automating cloud-based websites? Love Puppet and deployment scripts? Want to take advantage of both your sys-admin and DevOps skills? Join HelloSign as our second Ops Engineer and help us scale as we grow! Apply at http://www.hellosign.com/info/jobs

  • Human Translation Platform Gengo Seeks Sr. DevOps Engineer. Build an infrastructure capable of handling billions of translation jobs, worked on by tens of thousands of qualified translators. If you love playing with Amazon’s AWS, understand the challenges behind release-engineering, and get a kick out of analyzing log data for performance bottlenecks, please apply here.

  • UI Engineer - AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data. AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • The Biggest MongoDB Event Ever Is On. Will You Be There? Join us in New York City June 23-25 for MongoDB World! The conference lineup includes Amazon CTO Werner Vogels and Cloudera Co-Founder Mike Olson for keynote addresses.  You’ll walk away with everything you need to know to build and manage modern applications. Register before April 4 to take advantage of super early bird pricing.

  • Upcoming Webinar: Practical Guide to SQL - NoSQL Migration. Avoid common pitfalls of NoSQL deployment with the best practices in this May 8 webinar with Anton Yazovskiy of Thumbtack Technology. He will review key questions to ask before migration, and differences in data modeling and architectural approaches. Finally, he will walk you through a typical application based on RDBMS and will migrate it to NoSQL step by step. Register for the webinar.
Cool Products and Services
  • PagerDuty helps operations and DevOps engineers resolve problems as quickly as possible. By aggregating errors from all your IT monitoring tools, and allowing easy on-call scheduling that ensures the right alerts reach the right people, PagerDuty increases uptime and reduces on-call burnout—so that you only wake up when you have to. Thousands of companies rely on PagerDuty, including Netflix, Etsy, Heroku, and Github.

  • GigOM Interviews Aerospike at Structure Data 2014 on Application Scalability. Aerospike Technical Marketing Director, Young Paik explains how you can add rocket fuel to your big data application by running the Aerospike database on top of Hadoop for lightning fast user-profile lookups. Watch this interview.

  • Couchbase: NoSQL and the Hybrid Cloud. If a NoSQL database can be deployed on-premise or it can be deployed in the cloud, why can’t it be deployed on-premise and in the cloud? It can, and it should. Read how in this article covering three use cases for hybrid cloud deployments of NoSQL databases: master / slave, cloud burst, and multi-master.

  • Do Continuous MapReduce on Live Data? ScaleOut Software's hServer was built to let you hold your daily business data in-memory, update it as it changes, and concurrently run continuous MapReduce tasks on it to analyze it in real-time. We call this "stateful" analysis. To learn more check out hServer.

  • LogicMonitor is the cloud-based IT performance monitoring solution that enables companies to easily and cost-effectively monitor their entire IT infrastructure stack – storage, servers, networks, applications, virtualization, and websites – from the cloud. No firewall changes needed - start monitoring in only 15 minutes utilizing customized dashboards, trending graphs & alerting.

  • BlueStripe FactFinder Express is the ultimate tool for server monitoring and solving performance problems. Monitor URL response times and see if the problem is the application, a back-end call, a disk, or OS resources.

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.  http://aiscaler.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

Episode 203: Leslie Lamport on Distributed Systems

Leslie Lamport won a Turing Award in 2013 for his work in distributed and concurrent systems. He also designed the document preparation tool LaTeX. Leslie is employed by Microsoft Research, and has recently been working with TLA+, a language that is useful for specifying concurrent systems from a high level. The interview begins with a […]
Categories: Programming

Azure: 99.95% SQL Database SLA, 500 GB DB Size, Improved Performance, Self-Service Restore, and Business Continuity

ScottGu's Blog - Scott Guthrie - Tue, 04/29/2014 - 16:13

Earlier this month at the Build conference, we announced a number of great new improvements coming to SQL Databases on Azure including: an improved 99.95% SLA, support for databases up to 500GB in size, self-service restore capability, and new Active Geo Replication support.  This 3 minute video shows a segment of my keynote where I walked through the new capabilities:


Last week we made these new capabilities available in preview form, and also introduced new SQL Database service tiers that make it easy to take advantage of them.

New SQL Database Service Tiers

Last week we introduced a new Basic and Standard tier option with SQL Databases – which are additions to the existing Premium tier we previously announced.  Collectively these tiers provide a flexible set of offerings that enable you to cost effectively deploy and host SQL Databases on Azure:

  • Basic Tier: Designed for applications with a light transactional workload. Performance objectives for Basic provide a predictable hourly transaction rate.
  • Standard Tier: Standard is the go-to option for cloud-designed business applications. It offers mid-level performance and business continuity features. Performance objectives for Standard deliver predictable per minute transaction rates.
  • Premium Tier: Premium is designed for mission-critical databases. It offers the highest performance levels and access to advanced business continuity features. Performance objectives for Premium deliver predictable per second transaction rates.

You do not need to buy a SQL Server license in order to use any of these pricing tiers – all of the licensing and runtime costs are built into the price, and the databases are automatically managed (high availability, auto-patching and backups are all built in).  We also now provide you the ability to pay for the database at per-day granularity (meaning if you only run the database for a few days you only pay for the days you had it – not the entire month). 

The price for the new SQL Database Basic tier starts as low as $0.16/day ($4.96 per month) for a 2 GB SQL Database.  During the preview period we are providing an additional 50% discount on top of these prices.  You can learn more about the pricing of the new tiers here.

Improved 99.95% SLA and Larger Database Sizes

We are extending the availability SLA of all of the new SQL Database tiers to be 99.95%.  This SLA applies to the Basic, Standard and Premium tier options – enabling you to deploy and run SQL Databases on Azure with even more confidence.

We are also increasing the maximum sizes of databases that are supported:

  • Basic Tier: Supports databases up to 2 GB in size
  • Standard Tier: Supports databases up to 250 GB in size. 
  • Premium Tier: Supports databases up to 500 GB in size.

Note that the pricing model for our service tiers has also changed so that you no longer need to pay a per-database size fee (previously we charged a per-GB rate) - instead we now charge a flat rate per service tier.

Predictable Performance Levels with Built-in Usage Reports

Within the new service tiers, we are also introducing the concept of performance levels, which are a defined level of database resources that you can depend on when choosing a tier.  This enables us to provide a much more consistent performance experience that you can design your application around.

The resources of each service tier and performance level are expressed in terms of Database Throughput Units (DTUs). A DTU provides a way to describe the relative capacity of a performance level based on a blended measure of CPU, memory, and read and write rates. Doubling the DTU rating of a database equates to doubling the database resources.  You can learn more about the performance levels of each service tier here.

Monitoring your resource usage

You can now monitor the resource usage of your SQL Databases via both an API as well as the Azure Management Portal.  Metrics include CPU, reads/writes and memory (not available this week but coming soon).  You can also track your performance usage (as a percentage) relative to the available DTU resources within your service tier level:

Performance Metrics

Dynamically Adjusting your Service Tier

One of the benefits of the new SQL Database Service Tiers is that you can dynamically increase or decrease them depending on the needs of your application.  For example, you can start off on a lower service tier/performance level and then gradually increase the service tier levels as your application becomes popular and you need more resources. 

It is quick and easy to change between service tiers or performance levels — it’s a simple online operation.  Because you now pay for SQL Databases by the day (as opposed to the month) this ability to dynamically adjust your service tier up or down also enables you to leverage the elastic nature of the cloud and save money.

Read this article to learn more about how performance works in the new system and the benchmarks for each service tier.

New Self-Service Restore Support

Have you ever had that sickening feeling when you’ve realized that you inadvertently deleted data within a database and might not have a backup?  We now have built-in self-service restore support with SQL Databases that helps you protect against this.  This support is available in all service tiers (even the Basic Tier).

SQL Database now automatically takes database backups daily and log backups every 5 minutes. The daily backups are also stored in geo-replicated Azure Storage (which will store a copy of them at least 500 miles away from your primary region).

Using the new self-service restore functionality, you can now restore your database to a point in time in the past as defined by the specified backup retention policies of your service tier:

  • Basic Tier: Restore from most recent daily backup
  • Standard Tier: Restore to any point in last 7 days
  • Premium Tier: Restore to any point in last 35 days

Restores can be accomplished using either an API we provide or the Azure Management Portal:


New Active Geo-replication Support

For Premium Tier databases, we are also adding support that enables you to create up to 4 readable secondary databases in any Azure region.  When active geo-replication is enabled, we will ensure that all transactions committed to the database in your primary region are continuously replicated to the databases in the other regions as well:


One of the primary benefits of active geo-replication is that it provides application control over disaster recovery at a database level.  Having cross-region redundancy enables your applications to recover in the event of a disaster (e.g. a natural disaster, etc). 

The new active geo-replication support enables you to initiate/control any failovers – allowing you to shift the primary database to any of your secondary regions:


This provides a robust business continuity offering, and enables you to run mission critical solutions in the cloud with confidence.  You can learn more about this support here.

Start Using the Preview of All of the Above Features Today!

All of the above features are now available to start using in preview form. 

You can sign-up for the preview by visiting our Preview center and clicking the “Try Now” button on the “New Service Tiers for SQL Databases” option.  You can then choose which Azure subscription you wish to enable them for.  Once enabled, you can immediately start creating new Basic, Standard or Premium SQL Databases.

Summary

This update of SQL Database support on Azure provides some great new features that enable you to build even better cloud solutions.  If you don’t already have an Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

An Agile Approach to a House Remodel

You might have noticed I’ve slowed my blogging in the past few weeks. I’m fine. I’ve been a product owner/customer for our new-to-us house remodel.

In the last several weeks, almost every single day, Mark and I have taken some time to go over to the new house to see the progress and provide feedback to the guys working on site. We do work directly with the site project manager. I don’t know how he meets with the subcontractors (painters, plasterers, tilers, etc.). All I know is that we provide him feedback just about every day.

He tells us when it’s time to select the tile, faucets, bathtubs, paint colors, and new front door, decide on the kitchen design, everything. We meet with the people who are our vendors. They send the specs or material to our builder.

When we were working on the design of the house, we iterated on it with the builder. We had at least three iterations on paper. Maybe four. We discussed how we would live in the house with him.

As we discovered things about the house, we modified the design. (Does any of this sound familiar?) Since it’s a house, we haven’t modified structural things, such as how big the addition is, or where the windows are. However, this week, we changed my bedroom closet. That was because I didn’t understand the initial design, and I called a halt to where we were.

Our builder is great. We are trying to be great customers, too. It’s a two-way street. This is why my blogging has been slow. I’m spending time at the house.

Today, I took these pictures, so you can see what I mean about making decisions about paint colors:

The first three images with the blue colors are all in what will be our master bedroom. We were trying to decide between the more intense blue on the left and the other blue on the right. The painted-over colors are from when the painters mistakenly painted our bathroom colors on the bedroom walls.

The purple-gray colors on the right bottom image are from the hallway. We wanted to see what that color would look like in what has the potential to be a dark hallway.

You don’t have to like our colors. We like our colors. Our house is full of teal greens, blues, and purples. We like those colors. If you’ve ever seen me in person, you might understand why. Those are my colors.

Being a product owner/customer is a new experience for me. It’s time-intensive to do it right. You have to make a gazillion little decisions every single day. And, this is with a house, a product that is not malleable in every dimension. A product that is not changeable with a flick of the wrist.

Our house is supposed to be done at the end of May. It will be close. They might need a few more days. Why? Because the kitchen designer is not agile. She is also designing my office closet storage, although maybe not for long. I am running out of patience. (Me? What a surprise!)

Mark and I have really enjoyed the short cycles with this remodel. It’s been intense for us. But, it’s been great to see the house we want and need come together, and so quickly.

Agile works for house building, too. Don’t let anyone tell you otherwise.

Categories: Project Management

2 Times to Play Planning Poker and 1 Time Not To

Mike Cohn's Blog - Tue, 04/29/2014 - 15:00

This post continues my recent series about Planning Poker by focusing on when to estimate.

Because Planning Poker is a consensus-based estimating approach, it is most appropriate to use it on items that require consensus. That is, I recommend using Planning Poker on product backlog items rather than on the tasks that make up a sprint backlog.

Estimates on the user stories of a product backlog serve two purposes:

  1. They allow the product owner to prioritize the product backlog. Consider two product backlog items that the product owner deems to be of equal value. The product owner will want to do first the item that has the lower cost. The estimates given to items are their costs, and so are used by the product owner to make prioritization decisions.
  2. They allow the company to make longer-term predictions. A product owner cannot begin to answer questions like how much can be delivered by a certain date or when a set of features can be delivered without first knowing the size of the work to be performed.
When to Play Planning Poker

Knowing those two reasons for estimating product backlog items is important because it helps answer the question of when a team should estimate. A team should estimate early enough to fulfill these two needs but no earlier than necessary.

Practically, then, this means that a team should estimate with Planning Poker at two different times.

First, a team should play Planning Poker after a meeting like a story-writing workshop in which they have written a large number of product backlog items. I recommend doing such workshops about quarterly.

This allows the product owner and team to identify a larger goal than can be achieved in one sprint, and create the user stories needed to reach the goal.

A typical quarterly story-writing workshop might produce 20 to 50 product backlog items. At a target rate of estimating about 20 items per hour, this could lead to two or so hours spent estimating for each quarter.

The second time a team should play Planning Poker is once per sprint. Some teams will do this as part of a regular product backlog grooming meeting, and I think that’s fine if the whole team participates in grooming.

If the whole team does not participate, another good time to play Planning Poker is following one of the daily scrums late in the sprint. If the team estimates too early in the sprint, there’s a chance they’ll need to repeat the process for any late-arriving stories.

But the team should avoid estimating so late in the sprint that the product owner cannot consider the newly estimated items when deciding what the team will work on in the next sprint.

A Time Not to Play Planning Poker

There’s only one time when I think it’s a mistake to play Planning Poker: at the start of the sprint planning meeting. The first problem with doing it then is that it’s too late for the product owner to adjust priorities based on the new estimates.

Some product owners may be able to rejigger priorities on the fly, but even they could probably do so better with a little more time to think.

A second problem with playing Planning Poker to start sprint planning is that it almost always causes the team to spend too much time estimating. I think this happens because team members walk into the sprint planning meeting ready to think in detail about their user stories.

But with Planning Poker, they are asked instead to think at a high level and put rougher, less precise estimates on the user stories. Many team members seem to have a hard time giving rough estimates in a meeting in which they will next be asked to give much more precise estimates of tasks.

This leads to more involved discussions than I recommend, and that leads to the team taking longer to estimate than when that is done separately. By following the guidelines here, you’ll be able to help your team estimate at the two times they should use Planning Poker, and avoid estimating at the one time they shouldn’t.

Question: When do you play Planning Poker?

Looking for developers to join my group

With Amdocs TeraScale, my previous project, moving into production, I moved to a new role within Amdocs and took over the Technology Research group, which is part of the big data and strategic initiatives business unit.

Now it is time to expand the group and I am looking for a developer-architect and/or senior developer to join my group.

 

  • If you are a technologist at heart and like learning new stuff every day
  • If you can pick up a new technology and be up and running with it in a day or two
  • If you want to tinker with the latest and greatest technologies (big data, in-memory grids, cloud management systems, NFV, columnar databases, etc.)
  • If you want to help shape the technology roadmap of a large corporation

I am looking for you

The positions are located in Raanana, Israel. If you’re interested you can contact me via the contact form, my Twitter, LinkedIn and/or my work mail at arnon.rotemgaloz at amdocs.com

 

Categories: Architecture

Physical Intelligence and How To Live Longer with Skill

I’ve added another category to Sources of Insight:

Physical Intelligence

I think it’s a good way to consolidate, integrate, and synthesize all of the body, health, fitness, and mind-body connection stuff.   I’m also increasingly appreciative of the power of intelligence.    Intelligence provides a nice twist whether we are talking emotional intelligence, financial intelligence, physical intelligence, positive intelligence, social intelligence, spiritual intelligence, etc.

If there’s one post to read on Physical Intelligence, then read the following:

9 Ways to Add 12 Years to Your Life

It’s based on the Blue Zones research.  The Blue Zones are the healthiest places on the planet where people live the longest.

I don’t have a lot of articles on Physical Intelligence yet, but now that I’ve made space for it, I plan to cover a lot more things, including advanced body movements that help you expand what you’re capable of.  It’s worth noting that Tony Robbins actually prioritizes health as a top value, and he uses his physiology to generate outstanding results.  Similarly, Stephen Covey prioritized fitness and enjoyed the freedom that came from the discipline of training his body so that he could run more freely.

Side note – Tony Robbins actually did a bunch of deep research on how to use breathing exercises to clean your system.  It’s a very specific breathing pattern that you can use to activate your lymphatic system through deep diaphragmatic breathing:

Breathe with Skill to Dramatically Improve Your Health

Interestingly, he claims that if you follow this breathing technique, you’ll actually change your white blood cell count.

One more must read post is about sleep patterns:

Larks, Owls, and Hummingbirds

John Medina provides some simple labels for the three typical sleep patterns that people fall into.  A little self-awareness can go a long way in terms of helping you make the most of what you’ve got.  In this case, we spend a lot of time sleeping (at least us Larks), so it’s worth learning what you can about your own sleep needs and preferences, and sometimes a label can help you gain insight, or at least give you a starting point for some deeper research.

Sleep is actually another topic that I’ll dive a bit deeper into in the future because it plays such a key role in our personal effectiveness, and ultimately in our personal power.   In fact, the cornerstone of physical intelligence might actually be the following triad:

Eating, sleeping, and exercising.

Our personal success patterns for each of those areas dramatically impacts the quality of our lives.

If there are particular Physical Intelligence topics you want me to dive deep into, be sure to use my contact form and let me know.

Meanwhile, enjoy browsing the current set of Physical Intelligence articles.

Categories: Architecture, Programming

Why Are Managers Interested In Measurement

Measurement is no longer optional


Measurement is a topic that most IT practitioners would rather avoid discussing. Many practitioners feel that it is not possible to measure software, or that all that matters is whether the customer is satisfied. IT managers tend to have a point of view that incorporates customer satisfaction and at least a notional view of how efficiently they are spending their budget. When managers or other leaders do discuss the topic of measurement, their arguments for measurement tend to begin intellectually.  Arguments begin with statements like, “we need to measure to ensure we meet a model like the CMMI,” or “we need to measure to build knowledge so that we can estimate.” All of these are good reasons to measure; however, many measurement programs are being driven by more basic and powerful needs.

The demand for IT continues to explode in every organization I speak with. The problem is that IT, whether developing, enhancing or maintaining software, is expensive. Couple that expense with the cost to acquire and maintain hardware, and the budgets of some IT organizations become larger than those of moderately industrialized countries – and they are still growing. The pressure that growing IT budgets create on the bottom line means that efficiency can no longer be a dirty word in IT organizations. If efficiency is, or is about to become, important, then organizational measurement can’t be far away.

Much of the effort in the development field over the past ten to twelve years has been focused on effectiveness and customer satisfaction, as evidenced by the Agile movement. Efficiency has begun to creep back into the conversation under the auspices of Lean. The measurement programs focused on customer satisfaction now must be refitted to address time-to-market (how much time is needed to get a unit of work to market), productivity (how much effort is needed to get a unit of work to market), cost efficiency (how much a unit of work costs to bring to market) and quality per unit of work. All of these metrics and the measures they are based on need to be comparable and combinable across the whole of the IT organization.

A roadmap to develop a measurement program can be as straightforward as:

  • Defining Goals and Values
  • Developing Common Measures
  • Mapping the Linkage Between Goals and Common Measures
  • Identifying Measures and Metrics (Including Gap Analysis)
  • Validating Metrics to Needs, Goals and Values
  • Developing Metrics Definitions
  • Mapping Metrics Data Needs
  • Defining an Overall Dashboard

In 2014, many IT organization budgets are beginning to recover and grow; however, because of the huge backlog in demand for IT services and products, the need for IT departments to spend their budgets efficiently is not going to change. What is going to change is the expectation that individual teams solve their own measurement quandary, because all IT work needs to be combined, compared and evaluated. Solid measurement programs have to balance efficiency, effectiveness, quality and customer satisfaction. The process starts with understanding the goals of the organization and then ensuring that what gets measured demonstrates progress against those goals.


Categories: Process Management

The hunt for Google I/O 2014 registration easter eggs

Google Code Blog - Mon, 04/28/2014 - 18:15
By Mónica Bagagem, Google Developer Marketing team

During the past two weeks, 300 of our most loyal developers discovered a registration code for Google I/O 2014 upon completing a space adventure to [37.7829° N, 122.4033° W, Earth], aka the Moscone Center West.
Throughout your hunt for clues, we hope you also had the opportunity to learn more about the variety of documentation and resources available for developers, covering different products and platforms:
  • Google Developers home: the starting point for a diverse set of Google APIs from Cloud, to Games, to Google Wallet. This comprehensive site includes blogs, API documentation, developer tools, and information about Google developer programs, groups, training, and open-source projects.

  • Android and Google Developers YouTube Channels: central resource for developers around the world, of all experience levels, interested in learning more about the Google Developer ecosystem. It includes tutorial videos, high level overviews, and the latest news.
  • Udacity videos: Google Developers has teamed up with Udacity to provide accessible, engaging, and highly effective online education, including cutting-edge classes about Mobile Web Development and HTML5 Game Development.
The lucky Captains who found the leads first were guided by a robot co-pilot called Icarus Odessa (I.O. initials, get it?!) on a space-themed text adventure game filled with starships, asteroids, and a few sci-fi references. We were seriously impressed by the clever strategies you used to discover our clues, and thrilled to see the community interact throughout the quest. If you’re curious to meet Icarus, have some fun playing the adventure game here.


Our goal was to reward you - our developer power users - with the opportunity to experience the magic of I/O first hand. We know that not everyone will be able to attend in person, but you can still join us virtually: visit google.com/io to watch the live stream, download the mobile app, and learn more about Extended I/O events happening near you.

We hope to see you in June!

Mónica Bagagem is part of the Developer Marketing team, working on Google I/O and supporting designer-related efforts. She is a world traveler and a brunch lover.

Posted by Louis Gray, Googler
Categories: Programming

How Disqus Went Realtime with 165K Messages Per Second and Less than .2 Seconds Latency

How do you add realtime functionality to a web scale application? That's what Adam Hitchcock, a Software Engineer at Disqus talks about in an excellent talk: Making DISQUS Realtime (slides).

Disqus had to take their commenting system and add realtime capabilities to it. Not something that's easy to do when, at the time of the talk (2013), they had just hit a billion unique visitors a month.

What Disqus developed is a realtime commenting system called “realertime” that was tested to handle 1.5 million concurrently connected users, 45,000 new connections per second, 165,000 messages/second, with less than .2 seconds latency end-to-end.

The nature of a commenting system is that it is IO bound and has a high fanout; that is, a comment comes in and must be sent out to a lot of readers. It's a problem very similar to the one Twitter must solve.

Disqus' solution was quite interesting, as was the path to their solution. They tried different architectures but settled on a solution built on Python, Django, Nginx Push Stream Module, and Thoonk, all unified by a flexible pipeline architecture. In the process they were able to substantially reduce their server count and easily handle high traffic loads.

At one point in the talk Adam asks if a pipelined architecture is a good one. For Disqus, messages filtering through a series of transforms is a perfect match. And it's a very old idea: Unix System V has long had a STREAMS capability for creating flexible pipeline architectures. It's an incredibly flexible and powerful way of organizing code.
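
The talk doesn't publish Disqus' code, but the pipeline-of-transforms idea is easy to sketch. The stage names below are illustrative assumptions, not Disqus' actual pipeline:

  import json

  def parse(raw):
      return json.loads(raw)

  def drop_deleted(msg):
      # A stage can filter a message out entirely by returning None
      return None if msg.get("deleted") else msg

  def strip_private_fields(msg):
      return {k: v for k, v in msg.items() if k not in ("author_email", "ip_address")}

  def run_pipeline(raw, stages):
      msg = raw
      for stage in stages:
          msg = stage(msg)
          if msg is None:
              return None
      return msg

  pipeline = [parse, drop_deleted, strip_private_fields]
  print(run_pipeline('{"id": 1, "body": "hi", "author_email": "a@b.c"}', pipeline))
  # {'id': 1, 'body': 'hi'}

Because each stage is small and stateless, stages can be reordered, tested in isolation, or moved onto separate processes, which is what makes the style so flexible.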

So let's see how Disqus evolved their realtime commenting architecture and created something both old and new in the process...

Categories: Architecture

New Books for Work

Herding Cats - Glen Alleman - Mon, 04/28/2014 - 17:12

Our current work on forecasting Estimate At Completion, and on connecting the dots between Earned Value Management, Technical Performance Measures, and the sources of Measures of Performance and Measures of Effectiveness, is proceeding. Conference and journal papers are coming in May and June. Here are some books that have informed that effort.

Making Multiple-Objective Decisions, by Mansooreh Mollaghasemi and Julia Pet-Edwards, is a good starting point when faced with deciding anything on a project.

The book is a handbook for decision making, with examples and step-by-step processes for multi-criteria decision making. Almost all decisions involve consideration of multiple objectives that often conflict: cost, technical capabilities, deadlines, safety, appearance, efficiency, etc. In order to decide, information is needed about the tradeoffs involved in the selection process. This, of course, is why estimating many of the parameters is mandatory for any credible decision-making process, and the suggestion that we can make decisions without estimates is essentially nonsense. A companion book is Making Hard Decisions: An Introduction to Decision Analysis, 2nd Edition, by Robert Clemen. This book is about value-focused thinking and decision making. So when we hear about value and spending other people's money, this is a good place to look.

Forecasting and Simulating Software Development Projects, Troy Magennis. This is a book about forecasting software cost and schedule for Kanban and Scrum projects. Starting with Scrum and Kanban, Troy shows how to estimate cost and schedule using a what-if paradigm and his Monte Carlo simulation tool.

Since all variables in all projects are random variables, Monte Carlo is one approach to simulating the outcomes. Method of Moments is another, but MCS is a straightforward approach.

Modeling is the basis of decision making as well. With the model we can ask questions about the future and generate confidence intervals on those answers. The George Box quote - nearly universally misused - that "all models are wrong, some are useful" is in play here. A model is an approximation of a process - in this case, writing software for money. All models are useful to the extent that we understand the processes by which the model was developed and applied.

This is a core process of all estimating and replaces guessing with modeling.
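
To make the what-if idea concrete, here is a minimal Monte Carlo sketch. It is not Troy's tool, and the throughput samples, backlog size, and iteration length are invented; it simply resamples observed past throughput to produce a distribution of how many iterations the remaining work might take.

  import random

  past_throughput = [6, 9, 4, 8, 7, 5, 10, 6]  # stories finished in past iterations (hypothetical)
  backlog = 120                                # remaining stories (hypothetical)
  trials = 10000

  results = []
  for _ in range(trials):
      remaining, iterations = backlog, 0
      while remaining > 0:
          remaining -= random.choice(past_throughput)  # sample a plausible future iteration
          iterations += 1
      results.append(iterations)

  results.sort()
  for pct in (0.50, 0.80, 0.95):
      print(f"{int(pct * 100)}% confidence: {results[int(pct * trials) - 1]} iterations or fewer")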

Towards Improved Project Management Practices: Uncovering the Evidence of Effective Practices Through Empirical Research, Terence John Cooke-Davies

We hear the term empirical all the time but, like Inigo Montoya says, "You keep using that word. I don't think it means what you think it means." Empirical data is gathered from observation. But in the management of projects that data must be used to create error signals from the observed performance - the empirical data - when compared to the target data for the project's desired outcomes.

Failing to have a target to steer toward is called open loop control, and it is a very good way to drive straight into the ditch. So once again, estimates of future desired performance, compared to past statistical performance (a few samples used to compute the mean, with a two-standard-deviation variance of the means, is not credible, by the way), must be in place to forecast future performance.
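
Closing the loop is simple to express in code. A minimal sketch, with invented planned and actual figures, compares cumulative observed performance against the target and produces the error signal to steer by:

  # All figures are invented; a real program would use its own planned baseline
  planned_cum = [10, 22, 35, 50, 66]  # planned cumulative units of work per period
  actual_cum = [9, 19, 30, 41, 52]    # observed (empirical) cumulative units of work

  for period, (plan, actual) in enumerate(zip(planned_cum, actual_cum), start=1):
      variance = actual - plan  # the error signal
      index = actual / plan     # < 1.0 means behind the target
      print(f"period {period}: variance = {variance}, performance index = {index:.2f}")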

Troy's book shows how to deal with all this.

Forecasting Methods and Applications, 3rd Edition, Spyros Makridakis, Steven Wheelwright, and Rob Hyndman. Hyndman's site has everything you need to start forecasting the future using your collected empirical data and the R programming language.

Forecasting has been around since the 1950s, with George Box's methods - the same George Box people misquote about "all models are wrong." Forecasting, again, is all about decision making.

The distinction between external, uncontrolled events and controllable events is often not made. This creates not only confusion, it lays the groundwork for bad decision making. The much-quoted Taleb Black Swans are uncontrolled external events - externalities in the financial marketplace. Projects are rarely impacted by externalities if the proper risk management processes are in place. When they are not in place and the project is not managed, those Black Swans will appear more often. But this is simply bad project management - don't do that.

This book shows how to forecast the future given the past. 
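
For a flavour of the simplest of those methods, here is a hand-rolled single exponential smoothing forecast in Python. The history series and smoothing constant are made up, and Hyndman's R packages do this, and far more, properly:

  def single_exponential_smoothing(series, alpha=0.3):
      """One-step-ahead forecast; alpha weights recent observations more heavily."""
      forecast = series[0]  # initialise with the first observation
      for observed in series[1:]:
          forecast = alpha * observed + (1 - alpha) * forecast
      return forecast

  history = [42, 45, 39, 50, 47, 52, 49]  # hypothetical past observations
  print(f"next-period forecast: {single_exponential_smoothing(history):.1f}")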

So What Can We Do With This Knowledge?

The first thing to do is realize that decision making is a probabilistic process based on the underlying statistics of the processes we are trying to make decisions about. Thinking that we can make decisions in the absence of some form of knowledge about cost, schedule, and technical outcomes is simply not possible. Saying so does not make it so.

Exploring how to make decisions in the absence of estimating - the kind of statistical estimating described in all these books - is unnecessary. These books are a start, but there is a nearly unlimited wealth of information on how to make informed decisions in the presence of uncertainty.

Re-posting Scott Adams Dilbert cartoons of bad management practices is probably good for Scott Adams, but does ZERO to provide corrective actions for that bad management. We all know the problems; how about some solutions? It's actually trivial to point out the problem. And since it's trivial, it's also intellectually lazy.

Read these books, read other books, read papers, explore how other people have addressed the problems of increasing the probability of project success, put in the effort needed to make that increase possible on your project. Stop reading Dilbert and start fixing the problems.

 

Related articles: How to Forecast the Future
Categories: Project Management

Value is Everybody’s Job

Back in 2010, Gartner suggested that Business Value Realization would be Enterprise Architecture finally done right. Relatedly, when people were confused by the scope of Value Realization, all we did was add “Business” up front (i.e. “Business Value Realization”), and that seemed to add instant clarity for people, and they said they got it.

They realized that it was all about extracting business value and accelerating business value.

The most interesting pattern I think I see is not that value is an individual thing. 

It's that any individual can create value in today’s world: with their network, the ways they work, and the technology at their fingertips, they can focus on their end users and continuous learning, and operate without walls.

In fact, the enticing promise of the Enterprise Social vision is comprehensive collaboration.

There was an uprising in the developer world to create customer value -- it was agile. 

It seems like the world is experiencing another uprising (you hear Satya Nadella talk about a focus on individuals, whether in business or in life, on learning, collaborating, and changing the world).

So it's not the CIO, the CEO, etc.

What is the new uprising?

Value is everybody's job.

Categories: Architecture, Programming