
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Designing for Scale - Three Principles and Three Practices from Tapad Engineering

This is a guest post by Toby Matejovsky, Director of Engineering at Tapad (@TapadEng).

Here at Tapad, scaling our technology strategically has been crucial to our immense growth. Over the last four years we’ve scaled our real-time bidding system to handle hundreds of thousands of queries per second. We’ve learned a number of lessons about scalability along that journey.

Here are a few concrete principles and practices we’ve distilled from those experiences:

  • Principle 1: Design for Many
  • Principle 2: Service-Oriented Architecture Beats Monolithic Application
  • Principle 3: Monitor Everything
  • Practice 1: Canary Deployments
  • Practice 2: Distributed Clock
  • Practice 3: Automate To Assist, Not To Control
Principle 1: Design for Many
Categories: Architecture

Do The Math

Herding Cats - Glen Alleman - Mon, 05/11/2015 - 16:18

All project work is probabilistic. All decision making in the presence of probabilistic systems requires making estimates of future emerging outcomes.

But to do this properly we need to have a standard set of terms that can form the basis of understanding the problem and the solution.

When those terms are redefined, for whatever reason, the ability to exchange ideas is lost. For example, there is a popular notion that redefining terms in support of an idea is useful:

  • Forecasting and estimating are different things
    • Estimating is about the past, present, and future
    • Forecasting is an estimate about the future
  • Monte Carlo simulation is the same as bootstrap sampling
    • Monte Carlo is an algorithmic process of drawing samples from a probability distribution function
    • Bootstrapping is resampling existing data from past samples
    • MCS uses a PDF, not past data (a minimal sketch of the difference follows this list)
  • Probabilistic forecasting outperforms estimating every time
    • Probabilistic forecasting IS estimating
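
The difference is easy to show concretely. Here is a minimal sketch in R (my illustration, with invented numbers): Monte Carlo simulation draws samples from an assumed probability distribution function, while bootstrapping resamples data actually observed in the past.

set.seed(42)

# Monte Carlo: sample task durations from an assumed PDF (here a lognormal
# with a median of 10 days) - no past data is required
mc_samples <- rlnorm(10000, meanlog = log(10), sdlog = 0.25)

# Bootstrapping: resample, with replacement, durations actually observed
# on past work - no distributional assumption is required
observed     <- c(8, 9, 9, 10, 11, 12, 14, 15, 9, 10)  # hypothetical history
boot_samples <- sample(observed, 10000, replace = TRUE)

# Both yield a distribution we can query, but they are built differently
quantile(mc_samples,   c(0.5, 0.8))
quantile(boot_samples, c(0.5, 0.8))
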
Related articles The Flaw of Empirical Data Used to Make Decisions About the Future Economics of Software Development Who Builds a House without Drawings? Herding Cats: Decision Analysis and Software Project Management
Categories: Project Management

Mr. Franklin's Advice

Herding Cats - Glen Alleman - Mon, 05/11/2015 - 16:04

"If you fail to plan, you are planning to fail." - Benjamin Franklin

"Plans are nothing; planning is everything." - Dwight D. Eisenhower

So what does this actually mean in the project management domain?

Plans are strategies for the success of the project. Strategies are hypotheses. Hypotheses need to be tested to determine their validity. These tests - in the project domain - come from setting a plan, performing the work, assessing the compliance of the outcomes with the plan, then taking corrective actions in the next iteration of the work.

This seems obvious, but when we hear about the failures in the execution of the plans, we have to wonder what went wrong. Research has shown many Root Causes of project shortfalls. Here are four from our domain:

  • Unrealistic performance expectations, missing Measures of Effectiveness and Measures of Performance.
  • Unrealistic cost and schedule estimates based on inadequate risk adjusted growth models.
  • Inadequate assessments of risk and unmitigated exposure to these risks without proper handling plans.
  • Unanticipated technical issues without alternative plans and solutions to maintain effectiveness.

The root cause of each of these starts with the lack of the following:

Unrealistic Performance Expectations

When we set out to define what performance is needed, we must have a means of testing that this expectation can be achieved. There are several ways of doing this, and it is their absence that creates the root cause:

  • No Prototypes
  • No Modeling and Simulations of performance outcomes
  • No reference design to base modeling on to discover needed changes in baseline system architecture
  • No use of derived products

Unrealistic Cost and Schedule Estimates

  • No basis of estimate
  • Optimism bias
  • No parametric models 
  • No understanding of irreducible uncertainties in duration for work
  • No reference classes.

Inadequate Assessment of Risk

  • Not understanding that "Risk management is how adults manage projects" - Tim Lister
  • No risk register connected to planning and scheduling processes
  • No Monte Carlo assessment of risk impacts on cost and schedule (a minimal sketch follows this list)
  • No risk mitigation in the baseline
  • Inadequate Management Reserve developed from modeling processes
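
A minimal sketch in R makes that Monte Carlo point concrete. The probabilities and impacts below are invented for illustration; the idea is that each risk fires with some probability and adds a duration drawn from a distribution, and the resulting spread is one basis for a modeled Management Reserve.

set.seed(7)
trials   <- 10000
baseline <- 120                              # baseline schedule in days

totals <- replicate(trials, {
  fired   <- runif(3) < c(0.30, 0.10, 0.05)  # does each risk occur this trial?
  impacts <- c(runif(1,  5, 15),             # days added if risk 1 fires
               runif(1, 10, 40),             # ... risk 2
               runif(1, 30, 90))             # ... risk 3
  baseline + sum(impacts[fired])
})

quantile(totals, c(0.50, 0.80))              # schedule at 50% and 80% confidence
mean(totals) - baseline                      # average risk exposure in days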

Unanticipated Technical Issues

  • No Plan B
  • No in depth risk assessment of technologies
  • No "on ramps" or "off ramps" for technology changes

Each of these issues can be addressed through a Systems Engineering process using Measures of Effectiveness, Measures of Performance, and Technical Performance Measures. The planning process makes use of these measures to assess the credibility of the plan and the processes to test the hypothesis. 

Related articles Want To Learn How To Estimate? Debunking Let's Stop Guessing and Start Estimating Complex, Complexity, Complicated Estimates
Categories: Project Management

Sometimes It’s OK to Break the Rules

Making the Complex Simple - John Sonmez - Mon, 05/11/2015 - 16:00

I was sitting in my daughter’s dance recital today, waiting for her to get on stage. I had my iPhone all ready to capture her 2 minutes of fame on video so that I could share it with the rest of the family. Finally, it was her turn to come out onto the stage with […]

The post Sometimes It’s OK to Break the Rules appeared first on Simple Programmer.

Categories: Programming

SPaMCAST 341 – Agile Team Decision Making Essay

Software Process and Measurement Cast - Mon, 05/11/2015 - 00:00

http://www.spamcast.net

Listen Now

Subscribe on iTunes

Software Process and Measurement Cast 341 features our essay titled Agile Team Decision Making. Team-based decision-making requires mechanisms and prerequisites for creating consensus among team members. The prerequisites are a decision to be made, trust, knowledge and the tools to make a decision. No one should assume that team members have the required tools and techniques in their arsenal to effectively make decisions.

Remember:

Jo Ann Sweeney, author of the Explaining Change column, is running her annual Worth Working Summit.  Please visit http://www.worthworkingsummit.com/

Call to action!

Reviews of the Podcast help to attract new listeners. Can you write a review of the Software Process and Measurement Cast and post it on the podcatcher of your choice? Whether you listen on iTunes or any other podcatcher, a review will help to grow the podcast! Thank you in advance!

Re-Read Saturday News

The Re-Read Saturday focus on Eliyahu M. Goldratt and Jeff Cox’s The Goal: A Process of Ongoing Improvement began on February 21st. The Goal has been hugely influential because it introduced the Theory of Constraints, which is central to lean thinking. The book is written as a business novel. Visit the Software Process and Measurement Blog and catch up on the re-read.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast.

Dead Tree Version or Kindle Version 

I am beginning to think of which book will be next. Do you have any ideas?

Upcoming Events

CMMI Institute Global Congress
May 12-13 Seattle, WA, USA
My topic – Agile Risk Management
http://cmmiconferences.com/

DCG will also have a booth!

Also upcoming conferences I will be involved in include ICEAA in June and SQTM in September. More on these great conferences next week.

Next SPaMCast

The next Software Process and Measurement Cast will feature our interview with Ellen Gottesdiener and Mary Gorman. We discussed their great book Discover to Deliver, requirements, and Agile. Ellen and Mary provide penetrating insight into how to work with requirements in an Agile environment, from discovery to delivery and beyond.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management


Re-Read Saturday: The Goal: A Process of Ongoing Improvement. Part 12


Several weeks ago I was discussing why a team was having trouble completing the work that they committed to delivering in a two-week sprint. The team felt that one solution was to shift to a three-week sprint, in other words to increase batch size by 50%. I asked the team to identify the root cause of the problem. While calendar time is a constraint, the bigger problem turned out to be the size of the user stories they were accepting. The team settled on trying an experiment of thinly slicing their user stories and reducing their sprint duration to one week. The combination of smaller stories enforced by an even tighter calendar constraint is a reduction in batch size. They have been at the new cadence for four weeks, have delivered on their commitments for the last three sprints, and have decided to continue the “experiment” for the foreseeable future. Cutting batch size is a solution that can work in manufacturing and in software development, as the toy model below shows.
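
Here is a toy model in R (mine, not Goldratt's or the team's, with invented numbers) of why smaller batches shorten elapsed time even when total work is unchanged. Assume two sequential workstations, each needing one hour per unit, with work moving between them only in full transfer batches:

makespan <- function(units, batch_size) {
  batches <- units / batch_size            # assumes this divides evenly
  s1_done <- (1:batches) * batch_size      # when station 1 finishes batch k
  free_at <- 0                             # when station 2 is next free
  for (k in 1:batches) {
    start   <- max(s1_done[k], free_at)    # wait for the batch and the station
    free_at <- start + batch_size
  }
  free_at
}

makespan(40, 40)  # one big batch:   80 hours
makespan(40, 10)  # smaller batches: 50 hours
makespan(40, 5)   # smaller still:   45 hours

Halving the transfer batch lets the downstream station start sooner, which is the same effect the plant in the book, and the team above, observed.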

Part 1       Part 2       Part 3      Part 4      Part 5      Part 6      Part 7      Part 8    Part 9   Part 10   Part 11

Chapter 29

The plant shows incredible progress. Cutting the batch size in half led to less idle time for non-bottleneck resources (where idle time exists, it has been spread out more diffusely), and work is flowing through the plant faster with lower overall inventory. However, the classic measures, which focus on cost per step in the process rather than the system cost, do not show the improvement. The additional setups have associated costs and effort, which are a problem. Even though the plant is making more completed product with less inventory, the metrics do not show the improvement. Alex and his staff decide to change the measurement basis from 12 months to 2 months, with the rationale that the new process now in place is more indicative of how work will be done going forward. The decision is made despite agreement that Frost, the head of accounting at corporate, and Alex’s boss, Bill Peach, will not approve of the change.

Chapter 29 ends with Jons, the head of sales, bringing Alex a new order for 1,000 items. The client is offering the order to Alex’s company if they can deliver in a month. Jons believes that if the plant can deliver, the client will shift all of its business to the plant. Alex and his team consider the order. They decide not to sacrifice everything else by expediting the order; instead, they counter-propose 250 items every week for four weeks, beginning two weeks after the order is signed. To deliver the order, Alex and his team will need to order components from another company (and have the parts shipped via air freight) and to cut the batch size in half again. The client likes the idea of the staggered delivery (they probably can’t use all 1,000 instantly either).

Chapter 30

The big order, or should we say four smaller orders, is moving well. The first couple of installments have been made on time and Alex’s staff foresee no issues with the rest. The continued reduction in batch size helped the plant meet its goals and has continued to improve the flow through the plant. The plant’s metrics show that inventory is down and throughput has doubled.

Alex receives two messages from Bill Peach. The first is praise for meeting the order and the second is a summons for a plant review. While preparing for the review, accounting discovers the change in the measurement basis (from 12 months to 2 months) that the plant implemented without permission and reacts badly. Lou, the plant accountant, is reprimanded and the numbers are restated. The plant has not met Bill Peach’s demand for a 15% increase.

Jons and Burnside, the client for the big order, arrive at the plant unannounced in a helicopter. They immediately head into the plant. After validating that the last shipment went out on time and without a quality problem, Alex heads into the plant to find Burnside shaking the hand of every person he can find. The ability to deliver on time with great quality has won the plant a huge friend, and Jons confides to Alex that Burnside will sign a huge order next week. Jons goes on to say that with the plant’s quality and cycle time (the time between accepting an order and delivery), they will blow the competition away.

The chapter ends with Alex and Julie, Alex’s estranged wife, deciding to get remarried. All seems to be going well, but there are still 10 chapters left in the book . . .

The summary of previous entries in the re-read of The Goal has been shifted to a new page (click here).   Also, if you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast. Dead Tree Version or Kindle Version


Categories: Process Management

R: Neo4j London meetup group – How many events do people come to?

Mark Needham - Sat, 05/09/2015 - 23:33

Earlier this week the number of members in the Neo4j London meetup group crept past the 2,000 mark, and I thought it’d be fun to re-explore the data that I previously imported into Neo4j.

How often do people come to meetups?

library(RNeo4j)
library(dplyr)
library(ggplot2)  # needed for the plots below
 
graph = startGraph("http://localhost:7474/db/data/")
 
query = "MATCH (g:Group {name: 'Neo4j - London User Group'})-[:HOSTED_EVENT]->(event)<-[:TO]-({response: 'yes'})<-[:RSVPD]-(profile)-[:HAS_MEMBERSHIP]->(membership)-[:OF_GROUP]->(g)
         WHERE (event.time + event.utc_offset) < timestamp()
         RETURN event.id, event.time + event.utc_offset AS eventTime, profile.id, membership.joined"
 
df = cypher(graph, query)
 
> df %>% head()
  event.id    eventTime profile.id membership.joined
1 20616111 1.309372e+12    6436797      1.307285e+12
2 20616111 1.309372e+12   12964956      1.307275e+12
3 20616111 1.309372e+12   14533478      1.307290e+12
4 20616111 1.309372e+12   10793775      1.307705e+12
5 24528711 1.311793e+12   10793775      1.307705e+12
6 29953071 1.314815e+12   10595297      1.308154e+12
byEventsAttended = df %>% count(profile.id)
 
> byEventsAttended %>% sample_n(10)
Source: local data frame [10 x 2]
 
   profile.id  n
1   128137932  2
2   126947632  1
3    98733862  2
4    20468901 11
5    48293132  5
6   144764532  1
7    95259802  1
8    14524524  3
9    80611852  2
10  134907492  2

Now let’s visualise the number of people that have attended a certain number of events:

ggplot(aes(x = n), data = byEventsAttended) + 
  geom_bar(binwidth = 1, fill = "Dark Blue") +
  scale_y_continuous(breaks = seq(0,750,by = 50))

[Bar chart: number of members by count of events attended]

Most people come to one meetup and then there’s a long tail after that with fewer and fewer people coming to lots of meetups.

The chart has lots of blank space due to the sparseness of people on the right hand side. If we exclude any people who’ve attended more than 20 events we might get a more interesting visualisation:

ggplot(aes(x = n), data = byEventsAttended %>% filter(n <= 20)) + 
  geom_bar(binwidth = 1, fill = "Dark Blue") +
  scale_y_continuous(breaks = seq(0,750,by = 50))

[Bar chart: the same plot, filtered to members with 20 or fewer events]

Nicole suggested a more interesting visualisation would be a box plot so I decided to try that next:

ggplot(aes(x = "Attendees", y = n), data = byEventsAttended) +
  geom_boxplot(fill = "grey80", colour = "Dark Blue") +
  coord_flip()

[Box plot: distribution of events attended per member]

This visualisation really emphasises that the majority are between 1 and 3 and it’s much less obvious how many values there are at the higher end. A quick check of the data with the summary function reveals as much:

> summary(byEventsAttended$n)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
  1.000   1.000   2.000   2.837   3.000  69.000
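
One quick addition of my own, not in the original post: a plain frequency table makes the sparse right-hand tail explicit in a way the box plot hides:

# How many members attended exactly n events, for each observed n
table(byEventsAttended$n)

# Or just the heaviest attendees
byEventsAttended %>% arrange(desc(n)) %>% head(5)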


Now to figure out how to move that box plot a bit to the right :)

Categories: Programming

Understanding the 'sender' in segues and using it to pass on data to another view controller

Xebia Blog - Fri, 05/08/2015 - 22:59

One of the downsides of using segues in storyboards is that you often still need to write code to pass on data from the source view controller to the destination view controller. The prepareForSegue(_:sender:) method is the right place to do this. Sometimes you need to manually trigger a segue by calling performSegueWithIdentifier(_:sender:), and it's there you usually know what data you need to pass on. How can we avoid adding extra state variables in our source view controller just for passing on data? A simple trick is to use the sender parameter that both methods have.

The sender parameter is normally used by storyboards to indicate the UI element that triggered the segue, for example a UIButton when pressed or a UITableViewCell when selected. This allows you to determine what triggered the segue in prepareForSegue(_:sender:), and based on that (and of course the segue identifier) take some actions and configure the destination view controller, or even determine that it shouldn't perform the segue at all by returning false in shouldPerformSegueWithIdentifier(_:sender:).

When it's not possible to trigger the segue from a UI element in the Storyboard, you need to use performSegueWithIdentifier(_:sender:) instead to manually trigger it. This might happen when no direct user interaction should trigger the action of some control that was created in code. Maybe you want to execute some additional logic when pressing a button and after that perform the segue. Whatever the situation is, you can use the sender argument to your benefit. You can pass in whatever you may need in prepareForSegue(_:sender:) or shouldPerformSegueWithIdentifier(_:sender:).

Let's have a look at some examples.

[Screenshot: the two view controllers in the storyboard]

Here we have two very simple view controllers. The first has three buttons for different colors. When tapping on any of the buttons, the name of the selected color will be put on a label and it will push the second view controller. The pushed view controller will set its background color to the color represented by the tapped button. To do that, we need to pass on a UIColor object to the target view controller.

Even though this could be handled by creating 3 distinct segues from the buttons directly to the destination view controller, we chose to handle the button tap ourselves and then trigger the segue manually.

You might come up with something like the following code to accomplish this:

class ViewController: UIViewController {

    @IBOutlet weak var label: UILabel!

    var tappedColor: UIColor?

    @IBAction func tappedRed(sender: AnyObject) {
        label.text = "Tapped Red"
        tappedColor = UIColor.redColor()
        performSegueWithIdentifier("ShowColor", sender: sender)
    }

    @IBAction func tappedGreen(sender: AnyObject) {
        label.text = "Tapped Green"
        tappedColor = UIColor.greenColor()
        performSegueWithIdentifier("ShowColor", sender: sender)
    }

    @IBAction func tappedBlue(sender: AnyObject) {
        label.text = "Tapped Blue"
        tappedColor = UIColor.blueColor()
        performSegueWithIdentifier("ShowColor", sender: sender)
    }

    override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
        if segue.identifier == "ShowColor" {
            if let colorViewController = segue.destinationViewController as? ColorViewController {
                colorViewController.color = tappedColor
            }
        }
    }

}

class ColorViewController: UIViewController {

    var color: UIColor?

    override func viewDidLoad() {
        super.viewDidLoad()

        view.backgroundColor = color
    }

}

We created a state variable called tappedColor to keep track of the color that needs to be passed on. It is set in each of the action methods before calling performSegueWithIdentifier("ShowColor", sender: sender) and then read again in prepareForSegue(_:sender:) so we can pass it on to the destination view controller.

The action methods will have the tapped UIButtons set as the sender argument, and since that's the actual element that initiated the action, it makes sense to set that as the sender when performing the segue. So that's what we do in the above code. But since we don't actually use the sender when preparing the segue, we might as well pass on the color directly instead. Here is a new version of the ViewController that does exactly that:

class ViewController: UIViewController {

    @IBOutlet weak var label: UILabel!

    @IBAction func tappedRed(sender: AnyObject) {
        label.text = "Tapped Red"
        performSegueWithIdentifier("ShowColor", sender: UIColor.redColor())
    }

    @IBAction func tappedGreen(sender: AnyObject) {
        label.text = "Tapped Green"
        performSegueWithIdentifier("ShowColor", sender: UIColor.greenColor())
    }

    @IBAction func tappedBlue(sender: AnyObject) {
        label.text = "Tapped Blue"
        performSegueWithIdentifier("ShowColor", sender: UIColor.blueColor())
    }

    override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
        if segue.identifier == "ShowColor" {
            if let colorViewController = segue.destinationViewController as? ColorViewController {
                colorViewController.color = sender as? UIColor
            }
        }
    }

}

This allows us to get rid of our extra tappedColor variable.

It might seem to (and perhaps it does) abuse the sender parameter though, so use it with care and only where appropriate. Be aware of the consequences: if some other code or some element in a Storyboard triggers the same segue (i.e. with the same identifier), then the sender might just be a UI element instead of the object you expected, which will lead to unexpected results and perhaps even crashes when you force cast the sender to something it's not.

You can find the sample code in the form of an Xcode project on https://github.com/lammertw/SegueColorSample.

Announcing Zumero for SQL Server, Release 2.0

Eric.Weblog() - Eric Sink - Fri, 05/08/2015 - 19:00

Zumero for SQL Server (ZSS) is a solution for replication and sync between SQL Server and mobile devices. ZSS can be used to create offline-friendly mobile apps for iOS, Android, Windows Phone, PhoneGap, and Xamarin.

Our 2.0 release is a major step forward in the maturity of the product.

Highlights:

  • Compatibility with Azure SQL -- This release offers improved compatibility with Microsoft Azure SQL Database. Whether you prefer cloud or on-prem, ZSS 2.0 is a robust sync solution.

  • Improved filtering -- In the 2.0 release, filters have become more powerful and easier to use. Arcane limitations of the 1.x filtering feature have been lifted. New capabilities include filtering by date, and filtering of incoming writes.

  • Schema change detection -- The handling of schema changes is now friendlier to the use of other SQL tools. In 1.x, schema changes needed to be performed in the ZSS Manager application. In 2.0, we detect and handle most schema changes automatically, allowing you to integrate ZSS without changing the way you do things.

  • Better UI for configuration -- The ZSS Manager application has been improved to include UI for configuration of multiple backend databases, as well as more fine-grained control of which columns in a table are published for sync.

  • Increased Performance -- Perhaps most important of all, ZSS 2.0 is faster. In some situations, it is a LOT faster.

Lots more info at zumero.com.

 

Stuff The Internet Says On Scalability For May 8th, 2015

Hey, it's HighScalability time:


Not spooky at all. A 1,000 robot self-organizing flash mob.
  • 400 ppm: global CO2 concentration; 13.1 billion: distance in light-years of farthest galaxy
  • Quotable Quotes:
    • Pied Piper: It’s built on a universal compression engine that stacks on any file, data, video or image no matter what size.
    • Bokardo: 1 hour of research saves 10 hours of development time
    • @12Knocksinna: Microsoft uses Cassandra open source tech to help manage the 500+ million events generated by Office 365 hourly (along with SQL and Azure)
    • @antirez: Redis had a lot of client libs ASAP. By reusing the Redis protocol, Disque is getting clients even faster, and 2700 Github stars in 9 days!
    • @blueben: AWS Glacier seems like a great DR option until you realize it costs $180,000 to retrieve your 100TB archive in an emergency.
    • Peter Diamandis: The best way to become a billionaire is to solve a billion-person problem.
    • Cordkillers: YouTube visits up 40% from last year
    • @acroll: "It's about economics not innovation, otherwise we'd all be flying Concorde instead of Jumbo Jets." @JulieMarieMeyer #StrataHadoop
    • @DLoesch: Start time delayed because cable systems are overloaded due to PPV buys. Insane. Don't snooze, don't lose! #MayPac
    • grauenwolf: This is where unit test fanboys piss me off. They claim that they can't use integration tests because they are too slow. I claim that they need integration tests to find their slow queries.
    • nuclearqtip: The open source world needs a standardized trust model for binary artifacts. 
    • Greg Ferro: SDN and SNA are about as similar Model T Ford & any modern car. For the record, no drives a Model T Ford to work everyday. Stop comparing SDN to SNA. Its pointless.
    • Urs Hölzle: Now the decade of work we put into NoSQL is available to everyone using GCP.  One way it shows that we've been working on this longer than anyone else: 99% read latency is 6ms vs ~300ms for other systems.
    • Swardley: Cloud is not about saving money - never was. It's about doing more stuff with exactly the same amount of money. That can cause a real headache in competition. 
    • Johns Hopkins: scientists have discovered that neurons are risk takers: They use minor "DNA surgeries" to toggle their activity levels all day, every day. 

  • Tesla's Powerwall has already sold out. So will Tesla's next gigafactory be a terafactory or a petafactory?

  • Something to keep in mind when hiring: 21% of [NFL] Hall of Fame players were selected in the 4th round or later.

  • Move along, nothing to see here. Brett Slatkin: I wonder how long it will be before people realize that all of this server orchestration business is a waste of time? Ultimately, what you really want is to never think about systems like Borg that schedule processes to run on machines. That's the wrong level of abstraction. You want something like App Engine, vintage 2008 platform as a service, where you run a single command to deploy your system to production with zero configuration.

  • Can any product withstand Aphyr's Jepsen partition torture test? Aerospike, Elasticsearch, MongoDB, RabbitMQ, Riak, Cassandra, Kafka, NuoDB, Postgres, and Redis all had problems when stress tested under network partitions. Not surprising really, as Aphyr says, "Distributed systems design is really hard." That we find problems in popular, well-regarded products indicates that "We need formal theory, written proofs, computer verification, and experimental demonstration that our systems make the tradeoffs we think they make. As systems engineers, we continually struggle to erase the assumption of safety before that assumption causes data loss or downtime. We need to clearly document system behaviors so that users can make the right choices. We must understand our systems in order to explain them–and distributed systems are hard to understand." gmagnusson has a good sense of things: "I admire the work that Aphyr does - though at the end of the day, I need to build systems that work for the problem I'm trying to solve (and I have to choose from real things that are available). These technologies in general are trying to address really hard problems and design and architecture is the art of balancing tradeoffs. Nothing is going to be perfect. Yet."  

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Stockholm and Bucharest

Coding the Architecture - Simon Brown - Fri, 05/08/2015 - 16:41

Following on from CRAFT and SATURN last month, the conference tour continues in May with visits to Stockholm and Bucharest.

First up is DevSum in Stockholm, Sweden, where I'll be speaking about Agility and the essence of software architecture. This session looks at my approach to doing "just enough up front" design and how to introduce these techniques into software development teams.

Later during the same week I'll then be delivering the opening keynote at the I T.A.K.E. Unconference in Bucharest, Romania. My talk here is called Software architecture as code. It's about rethinking the way we describe software architecture and how to create a software architecture model as code for living, up to date, software architecture documentation.

See you in a few weeks.

Categories: Architecture

Task Management for Teams

I’m a fan of monthly plans for meaningful work.

Whether you call it a task list or a To-Do list or a product backlog, it helps to have a good view of the things that you’ll invest your time in.

I’m not a fan of everybody trying to make sense of laundry lists of cells in a spreadsheet.

Time changes what’s important and it’s hard to see the forest for the trees, among rows of tasks that all start to look the same.

One of the most important things I’ve learned to do is to map out work for the month in a more meaningful way.

It works for individuals.  It works for teams.  It works for leaders.

It’s what I’ve used for Agile Results for years on projects small and large, and with distributed teams around the world.  (Agile Results is my productivity method introduced in Getting Results the Agile Way.)

A picture is worth a thousand words, so let’s just look at a sample output and then I’ll walk through it:

[Sample output: a monthly map with Three Wins at the top and an alphabetized list of named work items]

What I’ve found to be the most effective is to focus on a plan for the month – actually take an hour or two the week before the new month.  (In reality, I’ve done this with teams of 10 or more people in 30 minutes or less.  It doesn’t take long if you just dump things fast on the board, and just keep asking people “What else is on our minds?”)

Dive in at a whiteboard with the right people in the room and just list out all the top-of-mind, important things – be exhaustive, then prioritize and prune.

You then step back and identify the 3 most important outcomes (3 Wins for the Month.)

I make sure each work item has a decent name – focused on the noun – so people can refer to it by name (like mini-initiatives that matter.)

I list it alphabetically by the name of the work so it’s easy to manage a large list of very different things.

That’s the key.

Most people try to prioritize the list, but the reality is, you can use each week to pick off the high-value items.   (This is really important.  Most people spend a lot of time prioritizing lists, and re-prioritizing lists, and yet, people tend to be pretty good at prioritizing when they have a quick list to evaluate.   Especially if they know the priorities for the month, and they know any pressing events or deadlines.   This is where clarity pays off.)

The real key is listing the work in alphabetical order so that it’s easy to scan, easy to add new items, and easy to spot duplicates.

Plus, it forces you to actually name the work and treat it more like a thing, and less like some fuzzy idea that’s out there.
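
For those who want to see the shape of the information model, here is a minimal sketch (in R, with invented work items; a whiteboard or plain text works just as well):

# Three Wins for the Month, plus named work items kept in alphabetical order
wins <- c("Billing v2 shipped", "Build time cut in half", "Second tester hired")

work <- sort(c("Billing v2 rollout",
               "Build pipeline upgrade",
               "Customer feedback review",
               "Smoke test automation",
               "Tester interviews"))

work  # alphabetical by name: easy to scan, to extend, and to de-duplicate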

I could go on and on about the benefits, but here are a few of the things that really matter:

  1. It’s super simple.   By keeping it simple, you can actually do it.   It’s the doing, not just the knowing that matters in the end.
  2. It chops big work down to size.   At the same time, it’s easy to quickly right-size.  Rather than bog down in micro-management, this simple list makes it easy to simply list out the work that matters.
  3. It gets everybody in the game.   Everybody gets to look at a whiteboard and plan what a great month will look like.  They get to co-create the journey and dream up what success will look like.   A surprising thing happens when you just identify Three Wins for the Month.

I find a plan for the month is the most useful.   If you plan a month well, the weeks have a better chance of taking care of themselves.   But if you only plan for the week or every two weeks, it’s easy to lose sight of the bigger picture, and the next thing you know, the months go by.  You’re busy, things happen, but the work doesn’t always accrue to something that matters.

This is a simple way to have more meaningful months.

I also can’t say it enough, that it’s less about having a prioritized list, and more about having an easy to glance at map of the work that’s in-flight.   I’m glad the map of the US is not a prioritized list by states.  And I’m glad that the states are well named.  It makes it easy to see the map.  I can then prioritize and make choices on any trip, because I actually have a map to work from, and I can see the big picture all at once, and only zoom in as I need to.

The big idea behind planning tasks and To-Do lists this way is to empower people to make better decisions.

The counter-intuitive part is first exposing a simple view of the map of the work, so it’s easy to see, and this is what enables simpler prioritization when you need it, regardless of which prioritization you use, or which workflow management tool you plug in to.

And, nothing stops you from putting the stuff into spreadsheets or task management tools afterwards, but the high-value part is the forming and storming and conforming around the initial map of the work for the month, so more people can spend their time performing.

May the power of a simple information model help you organize, prioritize, and optimize your outcomes in a more meaningful way.

If you need a deeper dive on this approach, and a basic introduction to Agile Results, here is a good getting started guide for Agile Results in action.

Categories: Architecture, Programming

Negotiating: Zone of Possible Agreement

Win – Win

After I wrote a piece earlier this week on BATNA (best alternative to a negotiated agreement), a friend sent me an email asking whether I had become a salesman. I think he meant it as a slight dig. The answer is no; rather, since I have been in the software field for over 20 years, I actually became a salesman long ago. All of us in IT must be able to sell and negotiate, whether we are dealing in ideas, jobs or commitments.

BATNA represents what you will do if you fail to get an agreement at the end of a negotiation. For example, if you are negotiating for a new house and you can’t come to an agreement (your bid is low or they won’t fix the roof before the sale), perhaps you will look at other houses or rent an apartment. Every party in a negotiation has a walk-away point at which they will give up and accept their BATNA. It should be noted that if a walk-away point or BATNA does not exist for one of the parties, the activity being pursued is more a discussion of terms than a negotiation. The common area between the parties’ walk-away points is known in negotiating circles as the zone of possible agreement (ZOPA). If there is no overlap, there can be no negotiated agreement (on a grand scale, this is where wars occur). Each party in a negotiation will have their own perception of the ZOPA, and that perception establishes the boundaries within which they will pursue compromise to get an agreement. Deals or suggestions that are outside of the ZOPA will generally not be considered by one party or the other, and often lead to the end of the negotiations. Early in my negotiation career, my wife and I wanted to buy a house. We decided to make a low bid for a house to get the negotiations started. Our realtor transmitted the bid and the answer was not a negotiation, but a crisp “no.” Our offer had been below the value of the seller’s BATNA and therefore not within the ZOPA. Frankly, we had not done our homework to try to determine what the ZOPA for the seller, house, and market might be. After that I adopted a different approach, which continues to evolve.
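
A tiny worked example (in R, with invented numbers) makes the ZOPA concrete: it is simply the overlap, if any, between the two parties' walk-away points.

buyer_walk_away  <- 310000   # the most the buyer will pay
seller_walk_away <- 295000   # the least the seller will accept

if (seller_walk_away <= buyer_walk_away) {
  cat("ZOPA:", seller_walk_away, "to", buyer_walk_away, "\n")
} else {
  cat("No ZOPA - no negotiated agreement is possible\n")
}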

As noted in Negotiations: BATNA, establishing your own BATNA before beginning a negotiation is a critical step in a good negotiating process. Be very careful in sharing information that would expose your BATNA, or you might give the person or organization you are negotiating with too much power. After you have determined your BATNA, attempt to establish the other party’s BATNA. There are all sorts of techniques for gathering the information you can use to scope out the other party’s BATNA. These include asking questions, doing market research, evaluating past behavior, and exploring the other side’s interests and values. Just as I suggested that you be very careful with information about your BATNA, the other party will generally be equally guarded. I have often found that as you build relationships over time, more information can be shared up front, which shortens negotiations and generally results in a win-win outcome. In all negotiations, as the negotiation progresses, actual behavior or new information can be used to adjust your perception of the other party’s BATNA and therefore the ZOPA.

Negotiations never occur with perfect information. While knowing the ZOPA will increase the likelihood of not making a boneheaded move that ends the negotiation, your perception of the BATNA of the party you’re negotiating with will be an approximation, and therefore your perception of the ZOPA will also be an approximation. Remember, where no ZOPA exists there are fewer options: walking away, accepting a bad deal, or changing the playing field. Changing the playing field is typically done by “enlarging the pie” through a process of establishing common ground and then building on that common ground. Building on the common ground might include expanding the scope of the deal to change its value. For example, in one negotiation, when both parties came to the conclusion that there was no common ground for a piece of coaching work, we expanded the scope to include a post-coaching mentoring retainer. The deal was different enough to allow us to identify common ground and come to an agreement.

We all have to negotiate; the question is whether we negotiate well and whether we are aware of the tricks of the trade. BATNA and ZOPA are concepts that help negotiators negotiate better. In the end, committing to the right work, buying the right house at the right price, or signing the right services contract is the best outcome for everyone, because it reduces the stress on all parties.


Categories: Process Management

Understanding Software Projects Lecture Series

10x Software Development - Steve McConnell - Thu, 05/07/2015 - 16:24

Check out my new lecture series, "Understanding Software Projects." In this lecture series, I explain The Four Factors Lifecycle Model and how understanding that model means understanding virtually every significant aspect of software project dynamics. Current lectures are always free. Check it out at https://cxlearn.com/catalog/22.

Here's a longer description from the website:

Steve McConnell is the author of software industry classics including Code Complete, Rapid Development, and Software Estimation. He has been recognized as one of the three most influential people in the software industry, along with Bill Gates and Linus Torvalds. 

Join Steve for this Groundbreaking Lecture Series that unlocks the secrets of effective software development. These lectures distill hard-won insights from decades of research and experience. They present learnings from Steve's work with hundreds of companies and thousands of projects. Lectures are 10-20 minutes each and are easy to include in your work day.  

Lecture Series Focus

In this lecture series, Steve explains The Four Factors Lifecycle Model, and he explains how understanding that model means understanding virtually every significant aspect of software project dynamics. Topics include:  

  • The role of Size in the Four Factors Model
  • The role of Uncertainty in the Four Factors Model
  • The role of Human Variation in the Four Factors Model
  • The role of Defects in the Four Factors Model 
  • Numerous case studies that illustrate how to apply the model to gain insights into your software projects  
Benefits 

With the deeper understanding of software projects you gain from this lecture series, you will be able to:  

  • Plan your projects to meet their cost, schedule, quality, and functionality goals
  • Diagnose and correct your project's problems faster and more confidently
  • Accelerate the rate of improvement in your organization
  • Respond appropriately to new developments including new technologies and new software development practices  
Accessing the Lectures 

Although the lectures build on each other, they may also be accessed individually. The series is planned to consist of about 50 lectures total. Lectures will be released through 2015 and 2016. 

Steve's most recent lectures will be complimentary at CxLearn.com for the duration of the lecture series. The full set of archived lectures can be accessed for $99; they are also included in Construx eLearning's All Access Pass. 

 

My Coworkers Suck, What Should I Do?

Making the Complex Simple - John Sonmez - Thu, 05/07/2015 - 16:00

In this episode, I answer an email about working in a toxic environment and not getting along with the team. Full transcript: John:               Hey, this is John Sonmez from simpleprogrammer.com and today I’ve got another email. You can see that I’m trying a new set up here with my videos so you’ll have to let […]

The post My Coworkers Suck, What Should I Do? appeared first on Simple Programmer.

Categories: Programming

Let's Stop Guessing and Start Estimating

Herding Cats - Glen Alleman - Thu, 05/07/2015 - 15:24

What’s the difference between estimate and guess?

One way to distinguish between them is the degree of care taken when we arrive at a conclusion: a conclusion about how much effort the work will take, how much it will cost to perform that work, and whether that work has any risk associated with it.

Estimate is derived from the Latin word aestimare, “to value.” The term estimate is also the origin of estimable, meaning “capable of being estimated” or “worthy of esteem,” and of esteem, meaning “regard.”

To make an estimate means to judge - using some method - the extent, nature, or value of something, with the implication that the result is based on expertise, data, a model, or familiarity. An estimate is the resulting calculation or judgment of the outcome or result. The related term is approximation, meaning “close or near.” Estimates have a measure of nearness to the actual value. We may not be able to know the actual value, but the estimate is close to that value. The confidence in the estimate adds more information about the nearness of the estimate to the actual value.

To guess is to believe or suppose, to form an opinion based on little or no evidence, or to be correct by chance or conjecture. A guess is a thought or idea arrived at by one of these methods. Guess is a synonym for conjecture and surmise, which, like estimate, can be used as verbs or nouns.

One step between a guess and an estimate is an educated guess, a more casual estimate. An idiomatic term for this conclusion is “ballpark figure.” The origin of this American English idiom, which alludes to a baseball stadium, is not certain. One suggestion is that it is related to “in the ballpark,” meaning “close,” in the sense that someone at such a location may not be in a precise spot but is at least in the stadium.

We could have a hunch or an intuition about some outcome, some numerical value. Or we could engage in guesswork or speculation.

Interestingly, “dead reckoning” now means the same thing as guesswork, though it originally referred to navigation based on reliable information. Near synonyms describing thoughts or ideas developed with more rigor include hypothesis and supposition, as well as theory and thesis.

A guess is a casual, spontaneous conclusion. An estimate is based on some thought and/or data.

If those paying you can accept a Wild Ass Guess, then you're probably done. If they have tolerance (risk tolerance) for losing their value at risk if your guess is wrong, then go ahead and guess. Otherwise some form of estimate is likely needed to inform your decision about some outcome in the future that is uncertain.
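
To make the distinction concrete, here is a minimal sketch in R (my illustration, with invented past durations): an estimate derived from data carries its own measure of nearness, which a guess cannot offer.

past_durations <- c(8, 9, 9, 10, 11, 12, 14, 15)  # hypothetical, in days

mean(past_durations)             # point estimate: 11 days
t.test(past_durations)$conf.int  # 95% confidence interval: roughly 9 to 13 days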

Related articles How We Make Decisions is as Important as What We Decide. The Flaw of Empirical Data Used to Make Decisions About the Future Build a Risk Adjusted Project Plan in 6 Steps Want To Learn How To Estimate?
Categories: Project Management

Minimal Viable UX

Xebia Blog - Wed, 05/06/2015 - 20:38

An approach to incorporate UX into the LEAN principle.

User Experience is often interpreted as a process where the ‘UX guru’ holds the ultimate truth in designing for an experience. The guru likes to keep control of his design and doesn’t want to feel less valuable when adopting advice from non-designers; his concern is being reduced to a pixel pusher.

When UX is adopted in a LEAN way, feedback from team members minimizes the risk of the team going down the wrong path. This prevents the guru from perfecting a design whose constraints will only become clear over time and drift away from customer needs. Interaction with the team speeds up development by giving early insight.

Design for User Experience

UX has many different definitions; in the end, it enables the user to perform a task with the help of an interface. All disciplines in a software development team should be aware of the user they are designing or developing for, starting in Sprint Zero. UX is not about setting up mockups, wireframes, prototypes and providing designs; it has to be part of a team culture that every team member can contribute to. We are trying to solve problems, and problems are not solved with design documentation but with efficient, elegant and sophisticated software.

How to get there

Create user awareness

Being aware of the user helps reduce waste and keeps you focused on things you should care about, functionality that adds value in the perception of the customer.

First, use a set of personas, put them on a wall and let your team members align those users with the functionality they are building. Developers can reflect functionality, interaction designers can optimize interface elements and visual designers can align styling with the user.

Second, use a customer journey map. This is a powerful tool. It helps in creating context, gives an overview of the user experience and helps to find gaps.

Prototype quickly

Prototyping becomes easier by the day, thanks to the amount and quality of tools out there. Prototyping can be performed using paper, mockups (Balsamiq) or a web framework such as FramerJS. Pick the type you prefer, one that is suitable for the situation and has the appropriate depth.

Diagram of the iterative design and critique process. Warfel, Todd Zaki. 2009. Prototyping: A Practitioner’s Guide. New York: Rosenfeld Media.

Use small portions of prototypes and validate those with a minimal set of users. This helps you to deliver faster, which again eliminates waste and improves built-in quality. Iterative design helps you to amplify learning. KISS!

Communicate

Involved parties need to be convinced that what you are saying is based on business needs, the product and the people. You need to befriend and understand all involved parties in order to make it work across the board. Besides that, don’t forget your best friends, the users.

If you don't talk to your customers, how will you know how to talk to your customers? - Will Evans

Start planning for Google I/O!

Google Code Blog - Wed, 05/06/2015 - 18:52

Posted by Mike Pegg, reppin' I/O since 2011

Today we launched the official schedule for Google I/O 2015 at google.com/io. At this year’s event, happening May 28-29 in San Francisco, we’ll host more than 200 talks centered around some important topics which matter to you: Design & Develop, to help you build beautiful, powerful apps; Earn & Engage, where we’ll cover tools to grow your user base and create sustainable, successful businesses; and What’s Next, a peek into Google’s emerging platforms. With just over three weeks until Google I/O, start planning your schedule today!

Start building your schedule

Whether you’re attending in person or virtually, you can get started building your schedule. Don’t worry about converting the start and end times to your local time zone, we’ve taken care of that for you. Simply sign in to the I/O website to add talks directly to “My Schedule.” If you’re using Chrome (on Android or desktop), you can enable notifications for events added to your schedule so that you can be sure to catch them. That way, you won’t miss exciting sessions like Astro Teller’s “Helping Moonshots Survive Contact with the Real World” or an update from the ATAP team on some cool new projects they’re working on. All sessions will be livestreamed, so whether you’re watching from one of the 400 I/O Extended Locations around the world or the comfort of your own desk, we’ve got you covered.

Attending in person

In addition to the traditional breakout sessions, which are livestreamed, if you’re attending in person, you’ll also get a chance to go to more than 100 sandbox talks. These intimate, 20-minute talks are often more technical, and the smaller size means that you’ll get a chance to interact directly with the Googlers teaching them. Together, you can roll up your sleeves and tackle topics ranging from “Memory Performance & Tooling” to “What's new in the Google Play Developer Console.” Most sandbox talks will happen twice throughout the two-day event, so you’ll have more chances to participate.

This year, there are over 100 sandbox talks: intimate, 20-minute technical talks where you can roll up your sleeves and interact directly with Googlers.

Don’t forget to save time in your schedule for a code lab or two. Back by popular demand, these self-paced workshops will showcase a variety of technologies from Google on mobile, wearables, and Cloud to name a few. We’ll provide the workstations and tablets for use on-site - just bring yourself any time during the two days of I/O! If you have your own device, Googlers will be on hand to help you get set up so you can jump into it.

See you soon

We’re getting really excited about Google I/O 2015 and today’s schedule is just a preview of what’s to come. We’ll be adding more sessions, sandbox talks, and events to the schedule as we get closer to I/O. But, we can’t give everything away beforehand. Be sure to check the agenda again after the keynote on Day 1, for those top secret talks. We look forward to connecting with you in-person, at I/O Extended or via I/O Live in a few weeks!

Categories: Programming

Varnish Goes Upstack with Varnish Modules and Varnish Configuration Language

This is a guest post by Denis Brækhus and Espen Braastad, developers on the Varnish API Engine from Varnish Software. Varnish has long been used in discriminating backends, so it's interesting to see what they are up to.

Varnish Software has just released Varnish API Engine, a high performance HTTP API Gateway which handles authentication, authorization and throttling all built on top of Varnish Cache. The Varnish API Engine can easily extend your current set of APIs with a uniform access control layer that has built in caching abilities for high volume read operations, and it provides real-time metrics.

Varnish API Engine is built using well known components like memcached, SQLite and most importantly Varnish Cache. The management API is written in Python. A core part of the product is written as an application on top of Varnish using VCL (Varnish Configuration Language) and VMODs (Varnish Modules) for extended functionality.

We would like to use this as an opportunity to show how you can create your own flexible yet still high performance applications in VCL with the help of VMODs.

VMODs (Varnish Modules)
Categories: Architecture