
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Re-read Saturday: The Mythical Man-Month, Part 3 The Surgical Team

The Mythical Man-Month


When we began the re-read of The Mythical Man-Month, my plan was to go through two essays every week. To date, that cadence has been out of reach. Each of the essays is full of incredibly rich ideas that need to be shared, so I am amending the plan to one essay per week. Today we re-read The Surgical Team. In this essay, Brooks addresses the impact of team size and team composition on the ability to deliver large projects.

The concept of a small team did not jump into popular discussion with the Agile Manifesto of 2001. Even before The Mythical Man-Month was published in 1975, the software development industry was beginning to coalesce around the idea that smaller teams were more efficient. Smaller teams are easier to coordinate, and information sharing is easier because there are fewer communication paths among fewer people. The problem Brooks postulated was that the big systems that are needed can’t be built by a single, small team in any reasonable period of time. Efficiency does not always translate to effectiveness. Paraphrasing Brooks, the question we have to ask is: if having an efficient, single small team of first-class people focused on a problem is great, how do you build the large systems that are needed? If small teams can’t build big systems quickly, the solution is either to not build large solutions all at once, to find a mechanism to scale smaller teams, or to revert to brute force (which is often the method of choice).

The brute force method has been the most frequently leveraged answer for building large systems both before and after the publication of The Mythical Man-Month. Brute force means a large project team, typically coordinated by a team of program and project managers. Earlier in my career I was involved in the systems side of bank mergers. During one of the larger mergers I worked on, over 300 coders, business analysts, testers, project managers and others worked together to meet a merger date. Unquestionably the approach taken was brute force, and as the date got closer the amount of force being brought to bear became more obvious. Brute force is problematic because it lacks efficiency and predictability; brute force methods are exposed to the variability of each individual’s capability and productivity. The goal of the systems portion of the bank mergers was not the efficiency of the process, but rather making the date given to the regulators for cut-over to a single system without messing up people’s money and ending up on the front page of The Plain Dealer.

If brute force is anathema (and it should be), a second solution is to use only small, single-purpose teams. Products would evolve as small pieces of functionality are conceived, built and then integrated into a larger whole. Scrum, at the team level, uses small teams to achieve a goal. Team-level Agile embraces the effectiveness of small teams discussed in The Surgical Team; however, it does not address bringing large quantities of tightly integrated functionality to market quickly.

Agile has recognized the need to get large pieces of functionality to market faster than incremental evolution allows, without abandoning the use of small teams, by adding scaling techniques. A Scrum of Scrums is a technique to scale Scrum and other team-level Agile frameworks. Other scaling frameworks include DSDM, SAFe and Scaled Scrum. All of these frameworks leverage semi-autonomous small teams with some form of coordination to keep teams moving in the same direction. Scaling adds some overhead to the process, which reduces the efficiency gains from small teams but allows larger pieces of work to be addressed.

In The Mythical Man-Month, Brooks leverages the metaphor of the surgical team to describe a highly effective AND highly efficient model of a team. In a surgical team the surgeon (responsible party) delivers value with a team that supports him or her. Transferring the surgical team metaphor to a development team, the surgeon writes the code and is responsible for it, while the backup surgeon (Brooks uses the term co-pilot) is the surgeon’s helper and is typically less experienced. The backup is not responsible for the code. The rest of the team supports the surgeon and backup; its goal is to administer, test, remove roadblocks and document the operation or project. While we might dicker about the definition of specific roles and what they are called, the concept of the small, goal-oriented team is not out of line with many Scrum teams in today’s Agile environment.

Scrum and most other Agile techniques move past the concept of teams of individuals with specific solo roles toward teams of more cross-functional individuals. Cross-functional teams tend to yield more of a peer relationship than the hierarchy seen in the surgical team. The flatter team requires more complex communication patterns, which Scrum addresses with techniques like the stand-up meeting. The concept of the Scrum team is a natural evolution of the concepts in The Surgical Team. Scrum tunes the small-team concept to software development, where a number of hands can be in the patient simultaneously if coordinated through a common goal, stand-up meetings, reviews and continuous integration.

 

Previous installments of Re-read Saturday for The Mythical Man-Month

Intro and Tar Pit

The Mythical Man-month Part 2


Categories: Process Management

How To Get Smarter By Making Distinctions

"Whatever you do in life, surround yourself with smart people who'll argue with you." -- John Wooden

There’s a very simple way to get smarter.

You can get smarter by creating categories.

Not only will you get smarter, but you’ll also be more mindful, and you’ll expand your vocabulary, which will improve your ability to think more deeply about a given topic or domain.

In my post, The More Distinctions You Make, the Smarter You Get, I walk through the ins and outs of creating categories to increase your intelligence, and I use the example of “fat.”   I attempt to show how “Fat is bad” isn’t very insightful, and how by breaking “fat” down into categories, you can dive deeper and reveal new insight to drive better decisions and better outcomes.

In this post, I’m going to walk through an example, using “security” as the topic.

The first time I heard the word “security”, it didn’t mean much to me, beyond “protect.”

The next thing somebody taught me, was how I had to focus on CIA:  Confidentiality, Integrity, and Availability.

That was a simple way to break security down into meaningful parts.

And then along came Defense in Depth.   A colleague explained that Defense in Depth meant thinking about security in terms of multiple layers:  Network, Host, Application, and Data.

But then another colleague said, the real key to thinking about security and Defense in Depth, was to think about it in terms of people, process, and technology.

As much as I enjoyed these thought exercises, I didn’t find them actionable enough to actually improve software or application security.  And my job was to help Enterprise developers build better Line-Of-Business applications that were scalable and secure.

So our team went to the drawing board to map out actionable categories to take application security much deeper.

Right off the bat, just focusing on “application” security vs. “network” security or “host” security helped us get more specific and make security more tangible and more actionable from a Line-of-Business application perspective.

Security Categories

Here are the original security categories that we used to map out application security and make it more actionable:

  1. Input and Data Validation
  2. Authentication
  3. Authorization
  4. Configuration Management
  5. Sensitive Data
  6. Session Management
  7. Cryptography
  8. Exception Management
  9. Auditing and Logging

Each of these buckets helped us create actionable principles, patterns, and practices for improving security.

Security Categories Explained

Here is a brief description of each application security category:

Input and Data Validation
How do you know that the input your application receives is valid and safe? Input validation refers to how your application filters, scrubs, or rejects input before additional processing. Consider constraining input through entry points and encoding output through exit points. Do you trust data from sources such as databases and file shares?

Authentication
Who are you? Authentication is the process where an entity proves the identity of another entity, typically through credentials, such as a user name and password.

Authorization
What can you do? Authorization is how your application provides access controls for resources and operations.

Configuration Management
Who does your application run as? Which databases does it connect to? How is your application administered? How are these settings secured? Configuration management refers to how your application handles these operational issues.

Sensitive Data
How does your application handle sensitive data? Sensitive data refers to how your application handles any data that must be protected either in memory, over the network, or in persistent stores.

Session Management
How does your application handle and protect user sessions? A session refers to a series of related interactions between a user and your Web application.

Cryptography
How are you keeping secrets (confidentiality)? How are you tamper-proofing your data or libraries (integrity)? How are you providing seeds for random values that must be cryptographically strong? Cryptography refers to how your application enforces confidentiality and integrity.

Exception Management
When a method call in your application fails, what does your application do? How much do you reveal? Do you return friendly error information to end users? Do you pass valuable exception information back to the caller? Does your application fail gracefully?

Auditing and Logging
Who did what and when? Auditing and logging refer to how your application records security-related events.

As you can see, just by calling out these different categories, you suddenly have a way to dive much deeper and explore application security in depth.

The Power of a Security Category

Let’s use a quick example.  Let’s take Input Validation.

Input Validation is a powerful security category, given how many software security flaws, vulnerabilities, and attacks stem from a lack of input validation, including buffer overflows.

But here’s the interesting thing.   After quite a bit of research and testing, we found a powerful security pattern that could help more applications stand up to more security attacks.  It boiled down to the following principle:

Validate for length, range, format, and type.

That’s a pithy but powerful piece of insight when it comes to implementing software security.

And, when you can’t validate the input, make it safe by sanitizing the output.  And along these lines, keep user input out of the control path, where possible.

All of these insights flow from just focusing on Input Validation as a security category.
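The “validate for length, range, format, and type” principle translates directly into code. Here is a minimal Python sketch of the idea; the field names and limits are my own illustrative assumptions, not from the original guidance:

```python
import re

def validate_username(value):
    """Validate untrusted input for length, format, and type."""
    # Type: reject anything that is not a string
    if not isinstance(value, str):
        return False
    # Length: constrain to a known-good size
    if not (3 <= len(value) <= 32):
        return False
    # Format: allow-list of known-good characters (not a block-list)
    if not re.fullmatch(r"[A-Za-z0-9_]+", value):
        return False
    return True

def validate_age(value):
    """Range and type check for a numeric field."""
    if not isinstance(value, int):
        return False
    return 0 <= value <= 150

print(validate_username("alice_01"))   # True
print(validate_username("x"))          # too short -> False
print(validate_age(42))                # True
```

Note the design choice: the format check is an allow-list of known-good input, which is the opposite of the “looking for known bad patterns” vulnerability listed later in this post.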

Threats, Attacks, Vulnerabilities, and Countermeasures

Another distinction our team made was to think in terms of threats, attacks, vulnerabilities, and countermeasures.  We knew that threats could be intentional and malicious (as in the case of attacks), but they could also be accidental and unintended.

We wanted to identify vulnerabilities as weaknesses that could be addressed in some way.

We wanted to identify countermeasures as the actions to take to help mitigate risks, reduce the attack surface, and address vulnerabilities.

Just by chunking up the application security landscape into threats, attacks, vulnerabilities, and countermeasures, we empowered more people to think more deeply about the application security space.

Security Vulnerabilities Organized by Security Categories

Using the security categories above, we could easily focus on finding security vulnerabilities and group them by the relevant security category.

Here are some examples:

Input/Data Validation

  • Using non-validated input in the Hypertext Markup Language (HTML) output stream
  • Using non-validated input used to generate SQL queries
  • Relying on client-side validation
  • Using input file names, URLs, or user names for security decisions
  • Using application-only filters for malicious input
  • Looking for known bad patterns of input
  • Trusting data read from databases, file shares, and other network resources
  • Failing to validate input from all sources including cookies, query string parameters, HTTP headers, databases, and network resources

Authentication

  • Using weak passwords
  • Storing clear text credentials in configuration files
  • Passing clear text credentials over the network
  • Permitting over-privileged accounts
  • Permitting prolonged session lifetime
  • Mixing personalization with authentication

Authorization

  • Relying on a single gatekeeper
  • Failing to lock down system resources against application identities
  • Failing to limit database access to specified stored procedures
  • Using inadequate separation of privileges

Configuration Management

  • Using insecure administration interfaces
  • Using insecure configuration stores
  • Storing clear text configuration data
  • Having too many administrators
  • Using over-privileged process accounts and service accounts

Sensitive Data

  • Storing secrets when you do not need to
  • Storing secrets in code
  • Storing secrets in clear text
  • Passing sensitive data in clear text over networks

Session Management

  • Passing session identifiers over unencrypted channels
  • Permitting prolonged session lifetime
  • Having insecure session state stores
  • Placing session identifiers in query strings

Cryptography

  • Using custom cryptography
  • Using the wrong algorithm or a key size that is too small
  • Failing to secure encryption keys
  • Using the same key for a prolonged period of time
  • Distributing keys in an insecure manner

Exception Management

  • Failing to use structured exception handling
  • Revealing too much information to the client

Auditing and Logging

  • Failing to audit failed logons
  • Failing to secure audit files
  • Failing to audit across application tiers

Threats and Attacks Organized by Security Categories

Again, using our security categories, we could then group threats and attacks by relevant security categories.

Here are some examples of security threats and attacks organized by security categories:

Input/Data Validation

  • Buffer overflows
  • Cross-site scripting
  • SQL injection
  • Canonicalization attacks
  • Query string manipulation
  • Form field manipulation
  • Cookie manipulation
  • HTTP header manipulation

Authentication

  • Network eavesdropping
  • Brute force attacks
  • Dictionary attacks
  • Cookie replay attacks
  • Credential theft

Authorization

  • Elevation of privilege
  • Disclosure of confidential data
  • Data tampering
  • Luring attacks

Configuration Management

  • Unauthorized access to administration interfaces
  • Unauthorized access to configuration stores
  • Retrieval of clear text configuration secrets
  • Lack of individual accountability

Sensitive Data

  • Accessing sensitive data in storage
  • Accessing sensitive data in memory (including process dumps)
  • Network eavesdropping
  • Information disclosure

Session Management

  • Session hijacking
  • Session replay
  • Man-in-the-middle attacks

Cryptography

  • Loss of decryption keys
  • Encryption cracking

Exception Management

  • Revealing sensitive system or application details
  • Denial of service attacks

Auditing and Logging

  • User denies performing an operation
  • Attacker exploits an application without trace
  • Attacker covers his tracks

Countermeasures Organized by Security Categories

Now here is where the rubber really meets the road.  We could group security countermeasures by security categories to make them more actionable.

Here are example security countermeasures organized by security categories:

Input/Data Validation

  • Do not trust input
  • Validate input: length, range, format, and type
  • Constrain, reject, and sanitize input
  • Encode output

Authentication

  • Use strong password policies
  • Do not store credentials
  • Use authentication mechanisms that do not require clear text credentials to be passed over the network
  • Encrypt communication channels to secure authentication tokens
  • Use HTTPS only with forms authentication cookies
  • Separate anonymous from authenticated pages
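The “use strong password policies” and “do not store credentials” countermeasures above are commonly implemented by storing only a salted, slow hash of the password. This is a hypothetical Python sketch using only the standard library; the salt size and iteration count are illustrative assumptions, and a production system would follow current parameter guidance:

```python
import hashlib
import hmac
import os

def hash_password(password):
    """Return (salt, digest); store only these, never the password itself."""
    salt = os.urandom(16)  # unique random salt per credential
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # hmac.compare_digest avoids leaking information via timing
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```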

Authorization

  • Use least privilege accounts
  • Consider granularity of access
  • Enforce separation of privileges
  • Use multiple gatekeepers
  • Secure system resources against system identities

Configuration Management

  • Use least privileged service accounts
  • Do not store credentials in clear text
  • Use strong authentication and authorization on administrative interfaces
  • Do not use the Local Security Authority (LSA)
  • Avoid storing sensitive information in the Web space
  • Use only local administration

Sensitive Data

  • Do not store secrets in software
  • Encrypt sensitive data over the network
  • Secure the channel

Session Management

  • Partition site by anonymous, identified, and authenticated users
  • Reduce session timeouts
  • Avoid storing sensitive data in session stores
  • Secure the channel to the session store
  • Authenticate and authorize access to the session store

Cryptography

  • Do not develop and use proprietary algorithms (XOR is not encryption. Use platform-provided cryptography)
  • Use the RNGCryptoServiceProvider method to generate random numbers
  • Avoid key management. Use the Windows Data Protection API (DPAPI) where appropriate
  • Periodically change your keys
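The “cryptographically strong random values” advice applies beyond .NET’s RNGCryptoServiceProvider. As one sketch of the same idea, Python’s standard `secrets` module fills the equivalent role (the variable names below are illustrative assumptions):

```python
import secrets

# Cryptographically strong random values, suitable for tokens, keys, and
# salts; never use random.random() for security-sensitive values.
session_token = secrets.token_urlsafe(32)   # URL-safe session identifier
api_key = secrets.token_hex(16)             # 16 random bytes as 32 hex chars
nonce = secrets.token_bytes(12)             # raw bytes, e.g. for an AEAD nonce

print(len(api_key))   # 32
print(len(nonce))     # 12
```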

Exception Management

  • Use structured exception handling (by using try/catch blocks)
  • Catch and wrap exceptions only if the operation adds value/information
  • Do not reveal sensitive system or application information
  • Do not log private data such as passwords
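The exception management countermeasures above can be sketched as: catch at a boundary, log the detail internally, and return only a friendly, generic message to the caller. A hypothetical Python example (the function and messages are my own, for illustration):

```python
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("app")

def transfer_funds(account_id, amount):
    """Fail gracefully without revealing system or application details."""
    try:
        if amount <= 0:
            raise ValueError(f"invalid amount {amount} for {account_id}")
        # ... perform the transfer ...
        return "Transfer complete."
    except ValueError:
        # Log the full detail internally; exc_info captures the stack trace.
        log.error("transfer failed", exc_info=True)
        # Reveal only a generic message to the end user.
        return "The transfer could not be completed. Please try again."

print(transfer_funds("acct-123", -5.0))
```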

Auditing and Logging

  • Identify malicious behavior
  • Know your baseline (know what good traffic looks like)
  • Use application instrumentation to expose behavior that can be monitored
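The “who did what and when” idea behind auditing can be sketched as a structured, append-only record of security-relevant events. The field names below are assumptions for illustration, not a prescribed schema:

```python
import json
import time

def audit(actor, action, success):
    """Emit one structured audit record as a JSON line."""
    record = {
        "ts": time.time(),    # when
        "actor": actor,       # who
        "action": action,     # did what
        "success": success,   # outcome - audit failed logons too
    }
    return json.dumps(record)

# In a real system this line would be appended to a secured audit store.
print(audit("alice", "logon", False))
```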

As you can see, the security countermeasures can easily be reviewed, updated, and moved forward, because the actionable principles are well organized by the security categories.

There are many ways to use category creation to get smarter and get better results.

In the future, I’ll walk through how we created an Agile Security approach, using categories.

Meanwhile, check out my post on The More Distinctions You Make, the Smarter You Get to gain some additional insights into how to use empathy and creating categories to dive deeper, learn faster, and get smarter on any topic you want to take on.

Categories: Architecture, Programming

We Help Our Customers Transform

"Innovation—the heart of the knowledge economy—is fundamentally social." -- Malcolm Gladwell

I’m a big believer in having clarity around what you help your customers do.

I was listening to Satya Nadella’s keynote at the Microsoft Worldwide Partner Conference, and I like how he put it so simply, that we help our customers transform.

Here’s what Satya had to say about how we help our customers transform their business:

“These may seem like technical attributes, but they are key to how we drive business success for our customers, business transformation for our customers, because all of what we do, collectively, is centered on this core goal of ours, which is to help our customers transform.

When you think about any customer of ours, they're being transformed through the power of digital technology, and in particular software.

There isn't a company out there that isn't a software company.

And our goal is to help them differentiate using digital technology.

We want to democratize the use of digital technology to drive core differentiation.

It's no longer just about helping them operate their business.

It is about them excelling at their business using software, using digital technology.

It is about our collective ability to drive agility for our customers.

Because if there is one truth that we are all faced with, and our customers are faced with, it's that things are changing rapidly, and they need to be able to adjust to that.

And so everything we do has to support that goal.

How do they move faster, how do they interpret data quicker, how are they taking advantage of that to take intelligent action.

And of course, cost.

But we'll keep coming back to this theme of business transformation throughout this keynote and throughout WPC, because that's where I want us to center in on.

What's the value we are adding to the core of our customer and their ability to compete, their ability to create innovation.

And anchored on that goal is our technical ambition, is our product ambition.”

Transformation is the name of the game.

You Might Also Like

Satya Nadella is All About Customer Focus

Satya Nadella on a Mobile-First, Cloud-First World

Satya Nadella on Empower Every Person on the Planet

Satya Nadella on Everyone Has To Be a Leader

Satya Nadella on How the Key To Longevity is To Be a Learning Organization

Satya Nadella on Live and Work a Meaningful Life

Satya Nadella on The Future of Software

Categories: Architecture, Programming

Satya Nadella on a Mobile-First, Cloud-First World

You hear Mobile-First, Cloud-First all the time.

But do you ever hear it really explained?

I was listening to Satya Nadella’s keynote at the Microsoft Worldwide Partner Conference, and I like how he walked through how he thinks about a Mobile-First, Cloud-First world.

Here’s what Satya had to say:

“There are a couple of attributes.

When we talk about Mobile-First, we are talking about the mobility of the experience.

What do we mean by that?

As we look out, the computing that we are going to interface with, in our lives, at home and at work, is going to be ubiquitous.

We are going to have sensors that recognize us.

We are going to have computers that we are going to wear on us.

We are going to have computers that we touch, computers that we talk to, the computers that we interact with as holograms.

There is going to be computing everywhere.

But what we need across all of this computing, is our experiences, our applications, our data.

And what enables that is in fact the cloud acting as a control plane that allows us to have that capability to move from device to device, on any given day, at any given meeting.

So that core attribute of thinking of mobility, not by being bound to a particular device, but it's about human mobility, is very core to our vision.

Second, when we think about our cloud, we think distributed computing will remain distributed.

In fact, we think of our servers as the edge of our cloud.

And this is important, because there are going to be many legitimate reasons where people will want digital sovereignty, people will want data residency, there is going to be regulation that we can't anticipate today.

And so we have to think about a distributed cloud infrastructure.

We are definitely going to be one of the key hyper-scale providers.

But we are also going to think about how do we get computing infrastructure, the core compute, storage, network, to be distributed throughout the world.

These may seem like technical attributes, but they are key to how we drive business success for our customers, business transformation for our customers, because all of what we do, collectively, is centered on this core goal of ours, which is to help our customers transform.”

That’s a lot of insight, and very well framed for creating our future and empowering the world.

You Might Also Like

Microsoft Explained: Making Sense of the Microsoft Platform Story

Satya Nadella is All About Customer Focus

Satya Nadella on Empower Every Person on the Planet

Satya Nadella on Everyone Has To Be a Leader

Satya Nadella on How the Key To Longevity is To Be a Learning Organization

Satya Nadella on Live and Work a Meaningful Life

Satya Nadella on The Future of Software

Categories: Architecture, Programming

Empower Every Person on the Planet to Achieve More

It’s great to get back to the basics, and purpose is always a powerful starting point.

I was listening to Satya Nadella’s keynote at the Microsoft Worldwide Partner Conference, and I like how he walked through the Microsoft mission in a mobile-first, cloud-first world.

Here’s what Satya had to say:

“Our mission:  Empowering every person and every business on the planet to achieve more.

(We find that by going back into our history and re-discovering that core sense of purpose, that soul ... a PC in every home, democratizing client/server computing.)

We move forward to a Mobile-First, Cloud-First world.

We care about empowerment.

There is no other ecosystem that is primarily, and solely, built to help customers achieve greatness.

We are focused on helping our customers achieve greatness through digital technology.

We care about both individuals and organizations.  That intersection of people and organizations is the cornerstone of what we represent as excellence.

We are a global company.  We want to make sure that the power of technology reaches every country, every vertical, every organization, irrespective of size.

There will be many goals.

What remains constant is this sense of purpose, the reason why this ecosystem exists.

This is a mission that we go and exercise in a Mobile-First, Cloud-First world.”

If I think back to why I originally joined Microsoft, it was to empower every person on the planet to achieve more.

And the cloud is one powerful enabler.

You Might Also Like

Satya Nadella is All About Customer Focus

Satya Nadella on Everyone Has To Be a Leader

Satya Nadella on How the Key To Longevity is To Be a Learning Organization

Satya Nadella on Live and Work a Meaningful Life

Satya Nadella on The Future of Software

Categories: Architecture, Programming

R: Blog post frequency anomaly detection

Mark Needham - Sat, 07/18/2015 - 00:34

I came across Twitter’s anomaly detection library last year but haven’t yet had a reason to take it for a test run, so having got my blog post frequency data into shape I thought it’d be fun to run it through the algorithm.

I wanted to see if it would detect any periods of time when the number of posts differed significantly – I don’t really have an action I’m going to take based on the results, it’s curiosity more than anything else!

First we need to get the library installed. It’s not on CRAN, so we need to use devtools to install it from the GitHub repository:

install.packages("devtools")
devtools::install_github("twitter/AnomalyDetection")
library(AnomalyDetection)

The expected data format is two columns – one containing a time stamp and the other a count. e.g. using the ‘raw_data’ data frame that is in scope when you add the library:

> library(dplyr)
> raw_data %>% head()
            timestamp   count
1 1980-09-25 14:01:00 182.478
2 1980-09-25 14:02:00 176.231
3 1980-09-25 14:03:00 183.917
4 1980-09-25 14:04:00 177.798
5 1980-09-25 14:05:00 165.469
6 1980-09-25 14:06:00 181.878

In our case the timestamps will be the start date of a week and the count the number of posts in that week. But first let’s get some practice calling the anomaly function using the canned data:

res = AnomalyDetectionTs(raw_data, max_anoms=0.02, direction='both', plot=TRUE)
res$plot

[Plot: anomalies flagged in the sample raw_data time series]

From this visualisation we learn that we should expect both high and low outliers to be identified. Let’s give it a try with the blog post publication data.

We need to get the data into shape so we’ll start by getting a count of the number of blog posts by (week, year) pair:

> df %>% sample_n(5)
                                                           title                date
1425                            Coding: Copy/Paste then refactor 2009-10-31 07:54:31
783  Neo4j 2.0.0-M06 -> 2.0.0-RC1: Working with path expressions 2013-11-23 10:30:41
960                                        R: Removing for loops 2015-04-18 23:53:20
966   R: dplyr - Error in (list: invalid subscript type 'double' 2015-04-27 22:34:43
343                     Parsing XML from the unix terminal/shell 2011-09-03 23:42:11
 
> byWeek = df %>% 
    mutate(year = year(date), week = week(date)) %>% 
    group_by(week, year) %>% summarise(n = n()) %>% 
    ungroup() %>% arrange(desc(n))
 
> byWeek %>% sample_n(5)
Source: local data frame [5 x 3]
 
  week year n
1   44 2009 6
2   37 2011 4
3   39 2012 3
4    7 2013 4
5    6 2010 6

Great. The next step is to translate this data frame into one containing a date representing the start of that week and the number of posts:

> data = byWeek %>% 
    mutate(start_of_week = calculate_start_of_week(week, year)) %>%
    filter(start_of_week > ymd("2008-07-01")) %>%
    select(start_of_week, n)
 
> data %>% sample_n(5)
Source: local data frame [5 x 2]
 
  start_of_week n
1    2010-09-10 4
2    2013-04-09 4
3    2010-04-30 6
4    2012-03-11 3
5    2014-12-03 3

We’re now ready to plug it into the anomaly detection function:

res = AnomalyDetectionTs(data, 
                         max_anoms=0.02, 
                         direction='both', 
                         plot=TRUE)
res$plot

[Plot: anomalies flagged in the weekly blog post counts]

Interestingly I don’t seem to have any low end anomalies – there were a couple of really high frequency weeks when I first started writing and I think one of the other weeks contains a New Year’s Eve when I was particularly bored!

If we group by month instead only the very first month stands out as an outlier:

# byMonth is built analogously to byWeek, grouping by (month, year)
byMonth = df %>% 
  mutate(year = year(date), month = month(date)) %>% 
  group_by(month, year) %>% summarise(n = n()) %>% 
  ungroup()
data = byMonth %>% 
  mutate(start_of_month = ymd(paste(year, month, 1, sep="-"))) %>%
  filter(start_of_month > ymd("2008-07-01")) %>%
  select(start_of_month, n)
res = AnomalyDetectionTs(data, 
                         max_anoms=0.02, 
                         direction='both',       
                         #longterm = TRUE,
                         plot=TRUE)
res$plot

[Plot: anomalies flagged in the monthly blog post counts]

I’m not sure what else to do as far as anomaly detection goes but if you have any ideas please let me know!

Categories: Programming

Estimating and Making Decisions in Presence of Uncertainty

Herding Cats - Glen Alleman - Fri, 07/17/2015 - 18:03

There is a nice post from Trent Hone on No Estimates. It triggered some more ideas about why we estimate, the root cause of the problem #NoEstimates is trying to solve, and a summary of the problem.

A Few Comments

All project work is probabilistic, driven by underlying statistical uncertainties. These uncertainties are of two types - reducible and irreducible. Reducible uncertainty is driven by a lack of information. This information can be increased with direct work; we can "buy down" the uncertainty with testing, alternative designs, and redundancy. Reducible uncertainty is "event based" - a power outage, for example, or D-Day being pushed back a day by weather.

Irreducible uncertainty is just "part of the environment." It's the natural variability embedded in all project work - the "vibrations" of all the variables. This is handled by margin: schedule margin, cost margin, technical margin.
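The relationship between irreducible variability and margin can be illustrated with a small Monte Carlo sketch: sample task durations from a distribution and size the schedule margin from a high-confidence percentile rather than the sum of most-likely values. The task count and distribution parameters below are invented for illustration, not from the post:

```python
import random
import statistics

random.seed(42)

def project_duration(n_tasks=10, mode=5.0, spread=2.0):
    """One simulated project: each task's duration varies irreducibly."""
    # Triangular distribution skewed right: tasks overrun more than they underrun.
    return sum(random.triangular(mode - spread, mode + 2 * spread, mode)
               for _ in range(n_tasks))

trials = [project_duration() for _ in range(10_000)]
most_likely = 10 * 5.0                       # naive single-point estimate
p80 = statistics.quantiles(trials, n=10)[7]  # 80th percentile of the trials

print(f"most likely: {most_likely:.1f} days")
print(f"80% confidence: {p80:.1f} days")
print(f"schedule margin needed: {p80 - most_likely:.1f} days")
```

The gap between the single-point estimate and the 80th percentile is the schedule margin; no amount of added information removes it, because it comes from the natural variability of the work.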

Here's an approach to "managing in the presence of uncertainty"

From my experience in Software Intensive Systems in a variety of domains (ERP, real-time embedded systems, defense, space, nuclear power, pulp and paper, new drug development, heavy manufacturing, and more), #NE is a reaction to Bad Management. This inverts the cause-and-effect model of Root Cause Analysis. The conjecture that "estimates are the smell of dysfunction" - without stating the dysfunction, identifying the corrective action for that dysfunction, applying that corrective action, and then reassessing the conjecture - is a hollow statement. So the entire notion of #NE is a house built on sand.

Lastly, the microeconomics of decision making in software development in the presence of uncertainty means estimating is needed to "decide" between alternatives - opportunity cost. This paradigm is the basis of any non-trivial business governance process.

No Estimates is a solution looking for a problem to solve.

Categories: Project Management

Stuff The Internet Says On Scalability For July 17th, 2015

Hey, it's HighScalability time:


In case you were wondering, the world is weird. Large Hadron Collider discovers new pentaquark particle.

 

  • 3x: Uber bigger than taxi market; 250x: traffic in HotSchedules' DDoS attack; 92%: Apple’s share of the smartphone profit pie; 7: Airbnb rejections
  • Quotable Quotes:
    • Netflix: A slow or unhealthy server is worse than a down server 
    • @inconshreveable: ngrok production servers, max GC pause: Go1.4 (top) vs Go1.5. Holy 85% reduction! /cc Go team
    • Nic Fleming: The fungal internet exemplifies one of the great lessons of ecology: seemingly separate organisms are often connected, and may depend on each other.
    • @IBMResearch: With 20+ billion transistors on new chip, that's a 50% scaling improvement over today’s tech #ibmresearch #7nm 

  • Apple and Google Race to See Who Can Kill the App First. Honest question, how are people supposed to make money in this new world? Apps are quickly becoming just an identity that ties together 10 or so components that appear integrated as part of the OS, but don't look like your app at all. Reminds me of laminar flow. We are seeing a rebirth of CORBA, COM and OLE 2, this time the container is an app bound by deep linking and some ad hoc ways to push messages around. Show developers the money.

  • The dark side of Google 10x: One former exec told Business Insider that the gospel of 10x, which is promoted by top execs including CEO Larry Page, has two sides. “It’s enormously energizing on one side, but on the other it can be totally paralyzing.”

  • Wait, are we going all RAM or all flash? So confusing. MIT Develops Cheaper Supercomputer Clusters By Nixing Costly RAM In Favor Of Flash: researchers presented evidence at the International Symposium on Computer Architecture that if servers executing a distributed computation go to disk for data even just 5 percent of the time, performance takes a hit to where it's comparable with flash memory anyway. 40 servers with 10 terabytes of RAM wouldn't chew through a 10.5TB computation any better than 20 servers with 20TB of flash memory. What's involved here is moving a little computational power off of the servers and onto the chips that control the flash drives.

  • Is disruption merely a Silicon Valley fantasy? Corporate America Hasn’t Been Disrupted: the advantage enjoyed by incumbents, always substantial, has been growing in recent years...more Americans worked for big companies...Large companies are becoming more dominant in part by buying up their rivals...Consolidation could explain at least part of the rising failure rate among startups...The startup rate has declined in every major industry, every state and nearly every city, and the failure rate’s rise has been nearly as universal. 

  • What's a unikernel and why should you care? Amir Chaudhry reveals all in his Unikernels talk given at PolyConf 15. And here's the supporting blog post. Why are we still running applications on top of operating systems? Most applications are single purpose, so why all the complexity? Why are we building software for the cloud the same way we build it for desktops? We can do better with unikernels, where every application is a single-purpose VM with a single address space.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Estimating Processes in Support of Economic Analysis

Herding Cats - Glen Alleman - Fri, 07/17/2015 - 03:34

On any project with significant Value at Risk, Economic Analysis provides visibility into the data needed for decision making. This Value at Risk paradigm is a critical starting point for applying all processes of decision making. The choice of decision process must be matched to the opportunity cost (actually the value of the loss from the alternative not chosen).


  1. Objective - what capabilities need to be produced by this project that the customer will value (in some useful units of measure)? These objectives can be easily described by the Capabilities of the Outcomes. Features, stories, and requirements are of little use to the customer if they do not directly enable a capability to accomplish the business mission or vision. The customer bought the capability, not the feature.
  2. Assumptions and Constraints - there are always assumptions. These are conditions in place that impact the project. 
  3. Alternatives - there is always more than one way to do something. What are the costs for each alternative?
  4. Benefits - what are the measurable benefits for this work? It can be monetary. It can be some intangible benefit.
  5. Costs - what will it cost to produce the value to be delivered to the customer? Along with this cost, what resources are needed? What schedule are these resources available?
  6. Rank Alternatives - with this information we can rank the alternatives in some objective manner. These measures can be assessments of effectiveness or measures of performance.
  7. Sensitivity and Risk Analysis - tradeoffs are always probabilistic in nature, since all project work is probabilistic in nature. Rarely, if ever, are these single-valued, non-varying numbers. That is the case only when the work is complete and no more activities are being performed. Those actual values are useful, but they can be used for making future decisions only if their past statistical behavior is collected. This is the Flaw of Averages problem: no average has value without knowing the variance.
  8. Make a decision - with all this information we can now make decisions. Of course the information about the past can be used, and of course there is information about the future. Both are probabilistic in nature.

With these probabilistic outcomes driven by the underlying statistical processes of all project work, we need to be able to estimate all the values of the random variables and their impact on the processes above.

Next is an example of applying this probabilistic decision making in the presence of uncertainty to cost and schedule assessment. This can be done for other probabilistic variables on the project: Technical Performance Measures, Measures of Effectiveness, Measures of Performance, Key Performance Parameters, and many other ...ilities (maintainability, supportability, survivability, etc.).
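Steps 3 through 8 - costing the alternatives, ranking them, and checking sensitivity - can be sketched with a minimal Monte Carlo comparison. The two cost distributions below are invented for illustration; the point is the Flaw of Averages from step 7: the alternative that is cheaper on average can carry the worse tail risk.

```python
import random

def simulate_cost(sample_cost, n=20_000, seed=7):
    """Return (mean cost, 90th-percentile cost) from a Monte Carlo run."""
    rng = random.Random(seed)
    costs = sorted(sample_cost(rng) for _ in range(n))
    return sum(costs) / n, costs[int(0.9 * n) - 1]

# Alternative A: well-understood work - higher average cost, low variance.
mean_a, p90_a = simulate_cost(lambda rng: rng.gauss(100, 5))
# Alternative B: novel approach - lower average cost, high variance.
mean_b, p90_b = simulate_cost(lambda rng: rng.gauss(95, 30))

print(f"A: mean {mean_a:.0f}, P90 {p90_a:.0f}")
print(f"B: mean {mean_b:.0f}, P90 {p90_b:.0f}")
```

Ranking on the averages alone picks B; ranking at the 90th percentile picks A. The variance, not just the mean, drives the decision.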


Related articles: What Happened to our Basic Math Skills?, Information Technology Estimating Quality, Making Decisions In The Presence of Uncertainty, What's the Smell of Dysfunction?
Categories: Project Management

Blisters: Thoughts on Change

Eww


Change is the currency of every organization involved in developing, enhancing or maintaining software. Change includes the business process being automated, the techniques used to do work, or even the technology upon which the software is built. Very little significant change is frictionless. For change to occur, the need and the benefit to be gained from solving the problem must overcome inertia: the work needed to find a solution and implement it.

I recently purchased a new pair of shoes, an act that I had put off for a bit longer than I should have. The pair of shoes I had owned for three years, but had been wearing nearly every day for the last year, were a pair of Italian loafers. The leather was exquisite, and over the three years I had owned the shoes they had become very old but very comfortable friends. Unfortunately the soles had worn out, and because of how the shoes were constructed, they were not repairable. As a rule, consultants wearing worn-out shoes, however treasured, generally do not confer confidence. The hole in the sole and a need to earn a living established the need for me to change. The final impetus to overcome inertia was delivered when I found an important meeting on my schedule for the next week. Why was there any inertia? Unlike my wife, I can’t order 10 pairs of shoes online and then return seven after trying them on, for a few reasons. First, my need was immediate; the worn-out soles were obvious to anyone sitting near me. Second, I am fairly hard to fit when it comes to dress shoes. Shopping (something I find as enjoyable as a prostate exam) is an event to be avoided. Deciding to change/shop requires an investment in effort and time. Almost every significant organizational change requires the same upfront investment in time, effort and rationalization to break the day-to-day inertia and begin to pursue change.

Breaking through the barrier of inertia by establishing a need and weighing the benefit to be gained by fulfilling that need is only the first step along the path of change. All leaders know that change requires spending blood, sweat and tears to find the correct solution. A team that has decided to change how they are working might have to sort through and evaluate Agile, lean or even classic software development techniques before finding the solution that fits their needs and culture. The process is not terribly different from my shopping for shoes. The shoe story continues with a trip to the local mall with two “major” department stores. Once at the mall I began the process of evaluating options. The process included an hour that I will never get back in one store, being told that there were no size 10.5 shoes in black suitable for an office in stock, and then being offered a pair of 11’s that I could lace up myself to try on. That last caused me to immediately go to another store, where I bought a pair (my normal brand in stylish black). Just like the team finding and deciding on a new development framework, I had to evaluate alternatives, try them on (a sort of prototype) and then negotiate the sale. Change is not frictionless.

Once an organization decides to change and settles on how it will meet its need, implementation remains. All of the groundwork done up to this point is important but not sufficient; effort, and sometimes pain, is still required to implement the change. Teams embracing Agile, kanban or even waterfall will need to learn new concepts, practice those techniques and understand that mistakes will be made. Looping back to the shoe story, I am now suffering through a blister. Organizational process change might not generate physical pain like new shoes; however, the stress of the change has to be accounted for when determining whether the cost of change is less than the gains foreseen from addressing the need.
In the end, change is unavoidable whether we are discussing new shoes or process improvements. The question is rarely whether we will change, but rather when we will change and how big a need we have to generate to offset the effort and aggravation that any change requires.

Now for something completely different!

I need your help! I am proposing a talk at AgileDC (Washington DC, October 26th). The title is

Budgeting, Estimation, Planning, #NoEstimates and the Agile Planning Onion – They ALL make sense!

Can you go to the AgileDC site and like the presentation (click the heart next to the title)? The likes will help get the attention of the organizers! I would also like your feedback on the topic.


Categories: Process Management

Product Manager, Product Owner, or Business Analyst?

Do you have a title such as product manager, product owner, or business analyst?

We hear these titles all the time. What does each do?

Here is how I have seen successful agile projects and programs use people in these positions. Note that I am discussing agile projects and programs.

The product manager creates the roadmap. She has the product vision over the entire life of the product. Typically, the items in the roadmap are larger than epics—they are themes or feature sets.

Product management means thinking strategically about the product. You might require several projects or programs to achieve what the product manager wants as a strategic vision.

Product owners (PO) work with agile teams to translate the strategic vision into Minimum Viable Products. The PO decides when the team(s) have done enough to release. See the release frequency image to understand the cost and value of releasing.

The business analyst may do any of these things. In my experience, I have seen business analysts focus on “what does this requirement/feature/story really mean to the team and/or the product?” I have seen fewer BAs do the strategic visioning of the product over its lifetime. I have seen BAs work with POs when the PO was not available enough for the team. I have seen BAs do great work breaking stories into smaller components of value, not architectural components.

Your team might have different names for these positions. Each team needs the strategic lifetime-of-the-product view; the tactical view for the next iteration or so and the knowledge of how to re-rank the backlog; and the ability to translate stories into small valuable chunks. 

Can one person do each of these things? It depends on the person. I have found it difficult to move quickly from the tactical to the strategic and back again. Maybe you know how. For me, that is a form of multitasking.

The more important questions are: do you have the roles you need, at the time you need them on your team? If you are one of these people, do you know how to perform these roles? If you are outside the organization in some way, do you know what you need to do, to perform these roles?

If you don’t know what to do to help your team, consider participating in Product Owner for Agencies training. Marcus Blankenship and I will help you learn what to do, and coach you in real time as to how to do it best for your team. I hope to see you there.

Categories: Project Management

Episode 232: Mark Nottingham on HTTP/2

Stefan Tilkov talks to Mark Nottingham, chair of the IETF (Internet Engineering Task Force) HTTP Working Group and Internet standards veteran, about HTTP/2, the new version of the Web’s core protocol. The discussion provides a glimpse behind the process of building standards. Topics covered include the history of HTTP versions, differences among those versions, and […]
Categories: Programming

This. Just. This.

In response to an honest comment about some of Instagram's rather "ordinary engineering choices", mikeyk (Co-founder @ Instagram) had what I consider the perfect response:  We (at IG) aren't claiming to be doing revolutionary things on infrastructure--but one thing I found super valuable when scaling Instagram in the early days was having access to stories from other companies on how they've scaled. That's the spirit in which I encourage our engineers to blog about our DB scaling, our search infra, etc--I think the more open we are (as a company, but more broadly as an industry) about technical approaches + solutions, the better off we'll be. This could be the anthem for HS and is a key reason our industry continues to get better. And in case you are interested, here are just a few of those stories from Instagram:

On HackerNews

Categories: Architecture

How To Deal With Criticism When Marketing Yourself

Making the Complex Simple - John Sonmez - Thu, 07/16/2015 - 16:00

In this episode, I talk about dealing with criticism when marketing oneself. Full transcript: John:               Hey, John Sonmez from simpleprogrammer.com. So I got a question about marketing yourself. This is a topic that I like to talk about. If you haven’t checked out my course or package on marketing yourself go to devcareerboost.com and check […]

The post How To Deal With Criticism When Marketing Yourself appeared first on Simple Programmer.

Categories: Programming

Neo4j: The football transfers graph

Mark Needham - Thu, 07/16/2015 - 07:40

Given we’re still in pre-season transfer madness as far as European football is concerned, I thought it’d be interesting to put together a football transfers graph to see whether there are any interesting insights to be had.

It took me a while to find an appropriate source but I eventually came across transfermarkt.co.uk which contains transfers going back at least as far as the start of the Premier League in 1992.

I wrote a quick Python script to create a CSV file of all the transfers. This is what the file looks like:

$ head -n 10 data/transfers.csv
player,from_team,from_team_id,to_team,to_team_id,fee,season
Martin Keown,Everton,29,Arsenal FC,11,"2,10 Mill. £",1992-1993
John Jensen,Bröndby IF,206,Arsenal FC,11,"1,12 Mill. £",1992-1993
Alan Miller,Birmingham,337,Arsenal FC,11,,1992-1993
Jim Will,Sheffield Utd.,350,Arsenal FC,11,,1992-1993
David Rocastle,Arsenal FC,11,Leeds,399,"1,68 Mill. £",1992-1993
Perry Groves,Arsenal FC,11,Southampton FC,180,595 Th. £,1992-1993
Ty Gooden,Arsenal FC,11,Wycombe Wand.,2805,?,1992-1993
Geraint Williams,Derby,22,Ipswich Town,677,525 Th. £,1992-1993
Jason Winters,Chelsea U21,9250,Ipswich Town,677,?,1992-1993

I’m going to create the following graph and then we’ll write some queries which explore chains of transfers involving players and clubs.

2015 07 15 07 28 11

I wrote a few import scripts using Neo4j’s LOAD CSV command, having set up the appropriate indexes first:

create index on :Team(id);
create index on :Season(name);
create index on :Transfer(description);
create index on :Player(name);
// teams
load csv with headers from "file:///Users/markneedham/projects/football-transfers/data/teams.csv" as row
merge (team:Team {id: toint(row.team_id)})
on create set team.name = row.team;
 
// seasons
load csv with headers from "file:///Users/markneedham/projects/football-transfers/data/transfers.csv" as row
merge (season:Season {name: row.season})
ON CREATE SET season.starts =  toint(split(season.name, "-")[0]);
 
// players
load csv with headers from "file:///Users/markneedham/projects/football-transfers/data/transfers.csv" as row
merge (player:Player {name: row.player});
 
// transfers
load csv with headers from "file:///Users/markneedham/projects/football-transfers/data/transfers.csv" as row
match (from:Team {id: toint(row.from_team_id)})
match (to:Team {id: toint(row.to_team_id)})
match (season:Season {name: row.season})
match (player:Player {name: row.player})
 
merge (transfer:Transfer {description: row.player + " from " + from.name + " to " + to.name})
merge (transfer)-[:FROM_TEAM]->(from)
merge (transfer)-[:TO_TEAM]->(to)
merge (transfer)-[:IN_SEASON]->(season)
merge (transfer)-[:PLAYER]->(player);
 
// connect transfers
match (season)<-[:IN_SEASON]-(transfer:Transfer)-[:PLAYER]->(player)
WITH player, season, transfer
ORDER BY player.name, season.starts
WITH player, COLLECT({s: season, t: transfer}) AS transfers
UNWIND range(0, length(transfers)-2) AS idx
WITH player, transfers[idx] AS t1, transfers[idx +1] AS t2
WITH player, t1.t AS t1, t2.t AS t2
MERGE (t1)-[:NEXT]->(t2);

All the files and scripts are on this gist if you want to play around with the data. The only thing you’ll need to change is the file path on each of the ‘LOAD CSV’ lines.

The ‘connect transfers’ query is a bit more complicated than the others – in that one we first order each player’s transfers by season in ascending order and then create a linked list of the player’s transfers.

Now that we’ve got the data loaded let’s find out which player was transferred the most:

match path = (:Transfer)-[:NEXT*0..]->(transfer:Transfer)
where NOT (transfer)-[:NEXT]->()
RETURN path 
ORDER BY LENGTH(path) DESC
LIMIT 1
Graph  22

Which other players have moved teams frequently?

match path = (first:Transfer)-[:NEXT*0..]->(transfer:Transfer),
             (player)<-[:PLAYER]-(transfer)
where NOT ((transfer)-[:NEXT]->()) AND NOT ((first)<-[:NEXT]-())
RETURN player.name, LENGTH(path) AS numberOfTransfers 
ORDER BY numberOfTransfers DESC
LIMIT 10
 
==> +--------------------------------------+
==> | player.name      | numberOfTransfers |
==> +--------------------------------------+
==> | "Craig Bellamy"  | 7                 |
==> | "David Unsworth" | 6                 |
==> | "Andrew Cole"    | 6                 |
==> | "Peter Crouch"   | 6                 |
==> | "Les Ferdinand"  | 5                 |
==> | "Kevin Phillips" | 5                 |
==> | "Mark Hughes"    | 5                 |
==> | "Tommy Wright"   | 4                 |
==> | "Carl Tiler"     | 4                 |
==> | "Don Hutchison"  | 4                 |
==> +--------------------------------------+
==> 10 rows

What are the most frequent combinations of clubs involved in transfers?

match (from)<-[:FROM_TEAM]-(t:Transfer)-[:TO_TEAM]->(to), (t)-[:PLAYER]->(p)
RETURN from.name, to.name, COUNT(*) AS times, COLLECT(p.name) AS players
ORDER BY times DESC
LIMIT 10
 
==> +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> | from.name           | to.name               | times | players                                                                                                                                                                                                    |
==> +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> | "West Ham United"   | "Queens Park Rangers" | 13    | ["Keith Rowland","Iain Dowie","Tim Breacker","Ludek Miklosko","Bertie Brayley","Terrell Forbes","Steve Lomas","Hogan Ephraim","Nigel Quashie","Danny Gabbidon","Kieron Dyer","Robert Green","Gary O'Neil"] |
==> | "Tottenham Hotspur" | "Portsmouth FC"       | 12    | ["Paul Walsh","Andy Turner","Rory Allen","Justin Edinburgh","Tim Sherwood","Teddy Sheringham","Noé Pamarot","Pedro Mendes","Sean Davis","Jermain Defoe","Younès Kaboul","Kevin-Prince Boateng"]            |
==> | "Liverpool FC"      | "West Ham United"     | 12    | ["Julian Dicks","David Burrows","Mike Marsh","Don Hutchison","Neil Ruddock","Titi Camara","Rob Jones","Rigobert Song","Craig Bellamy","Joe Cole","Andy Carroll","Stewart Downing"]                         |
==> | "Manchester United" | "Everton FC"          | 9     | ["Andrey Kanchelskis","John O'Kane","Jesper Blomqvist","Phil Neville","Tim Howard","Louis Saha","Darron Gibson","Sam Byrne","Tom Cleverley"]                                                               |
==> | "Newcastle United"  | "West Ham United"     | 9     | ["Paul Kitson","Shaka Hislop","Stuart Pearce","Wayne Quinn","Lee Bowyer","Kieron Dyer","Scott Parker","Nolberto Solano","Kevin Nolan"]                                                                     |
==> | "Blackburn Rovers"  | "Leicester City"      | 9     | ["Steve Agnew","Tim Flowers","Callum Davidson","John Curtis","Keith Gillespie","Craig Hignett","Nils-Eric Johansson","Bruno Berner","Paul Gallagher"]                                                      |
==> | "Chelsea FC"        | "Southampton FC"      | 8     | ["Ken Monkou","Kerry Dixon","Neil Shipperley","Mark Hughes","Paul Hughes","Graeme Le Saux","Jack Cork","Ryan Bertrand"]                                                                                    |
==> | "Birmingham City"   | "Coventry City"       | 8     | ["David Rennie","John Gayle","Liam Daish","Gary Breen","Stern John","Julian Gray","Lee Carsley","Gary McSheffrey"]                                                                                         |
==> | "Southampton FC"    | "Fulham FC"           | 8     | ["Micky Adams","Kevin Moore","Terry Hurlock","Maik Taylor","Alan Neilson","Luís Boa Morte","Antti Niemi","Chris Baird"]                                                                                    |
==> | "Portsmouth FC"     | "Stoke City"          | 8     | ["Kevin Harper","Lewis Buxton","Anthony Pulis","Vincent Péricard","Asmir Begovic","Marc Wilson","Elliot Wheeler","Alex Grant"]                                                                             |
==> +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> 10 rows

Are there ever situations where players get transferred in both directions?

match (from)<-[:FROM_TEAM]-(t:Transfer)-[:TO_TEAM]->(to), (t)-[:PLAYER]->(player)
where id(from) < id(to)
WITH from, to, COUNT(*) AS times, COLLECT(player.name) AS players
match (to)<-[:FROM_TEAM]-(t:Transfer)-[:TO_TEAM]->(from), (t)-[:PLAYER]->(player)
RETURN from.name, to.name, times, COUNT(*) as otherWayTimes, players, COLLECT(player.name) AS otherWayPlayers
ORDER BY times + otherWayTimes DESC
LIMIT 10
 
==> +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> | from.name           | to.name               | times | otherWayTimes | players                                                                                                                                                                                                    | otherWayPlayers                                                                                                                                                                    |
==> +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> | "Tottenham Hotspur" | "Portsmouth FC"       | 12    | 5             | ["Paul Walsh","Andy Turner","Rory Allen","Justin Edinburgh","Tim Sherwood","Teddy Sheringham","Noé Pamarot","Pedro Mendes","Sean Davis","Jermain Defoe","Younès Kaboul","Kevin-Prince Boateng"]            | ["Jermain Defoe","Niko Kranjcar","Younès Kaboul","Peter Crouch","Darren Anderton"]                                                                                                 |
==> | "West Ham United"   | "Liverpool FC"        | 4     | 12            | ["Julian Dicks","Daniel Sjölund","Yossi Benayoun","Javier Mascherano"]                                                                                                                                     | ["Stewart Downing","Andy Carroll","Joe Cole","Craig Bellamy","Rigobert Song","Titi Camara","Rob Jones","Neil Ruddock","Don Hutchison","Julian Dicks","Mike Marsh","David Burrows"] |
==> | "West Ham United"   | "Queens Park Rangers" | 13    | 2             | ["Keith Rowland","Iain Dowie","Tim Breacker","Ludek Miklosko","Bertie Brayley","Terrell Forbes","Steve Lomas","Hogan Ephraim","Nigel Quashie","Danny Gabbidon","Kieron Dyer","Robert Green","Gary O'Neil"] | ["Andy Impey","Trevor Sinclair"]                                                                                                                                                   |
==> | "West Ham United"   | "Tottenham Hotspur"   | 5     | 8             | ["Jermain Defoe","Frédéric Kanouté","Michael Carrick","Jimmy Walker","Scott Parker"]                                                                                                                       | ["Sergiy Rebrov","Mauricio Taricco","Calum Davenport","Les Ferdinand","Matthew Etherington","Bobby Zamora","Ilie Dumitrescu","Mark Robson"]                                        |
==> | "West Ham United"   | "Portsmouth FC"       | 8     | 5             | ["Martin Allen","Adrian Whitbread","Marc Keller","Svetoslav Todorov","Hayden Foxe","Shaka Hislop","Sébastien Schemmel","Hayden Mullins"]                                                                   | ["Stephen Henderson","Teddy Sheringham","Shaka Hislop","Marc Keller","Lee Chapman"]                                                                                                |
==> | "Newcastle United"  | "West Ham United"     | 9     | 3             | ["Paul Kitson","Shaka Hislop","Stuart Pearce","Wayne Quinn","Lee Bowyer","Kieron Dyer","Scott Parker","Nolberto Solano","Kevin Nolan"]                                                                     | ["Demba Ba","Lee Bowyer","David Terrier"]                                                                                                                                          |
==> | "Birmingham City"   | "Coventry City"       | 8     | 4             | ["David Rennie","John Gayle","Liam Daish","Gary Breen","Stern John","Julian Gray","Lee Carsley","Gary McSheffrey"]                                                                                         | ["Scott Dann","David Burrows","Peter Ndlovu","David Smith"]                                                                                                                        |
==> | "Manchester City"   | "Portsmouth FC"       | 8     | 4             | ["Paul Walsh","Carl Griffiths","Fitzroy Simpson","Eyal Berkovic","David James","Andrew Cole","Sylvain Distin","Tal Ben Haim"]                                                                              | ["Benjani","Gerry Creaney","Kit Symons","Paul Walsh"]                                                                                                                              |
==> | "Blackburn Rovers"  | "Southampton FC"      | 5     | 6             | ["David Speedie","Stuart Ripley","James Beattie","Kevin Davies","Zak Jones"]                                                                                                                               | ["Zak Jones","Egil Östenstad","Kevin Davies","Alan Shearer","Jeff Kenna","Tim Flowers"]                                                                                            |
==> | "AFC Bournemouth"   | "West Ham United"     | 3     | 8             | ["Keith Rowland","Paul Mitchell","Scott Mean"]                                                                                                                                                             | ["Steve Jones","Matt Holland","Mohammed Berthé","Scott Mean","Paul Mitchell","Jamie Victory","Mark Watson","Stephen Purches"]                                                      |
==> +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Any players who go back to the same club they were at previously?

match (player:Player)<-[:PLAYER]-(t1:Transfer)-[:FROM_TEAM]->(from)<-[:TO_TEAM]-(t2:Transfer)-[:FROM_TEAM]->(to),
      (t2)-[:PLAYER]->(player), (t1)-[:TO_TEAM]->(to)
WHERE ID(to) < ID(from)
WITH player, COLLECT([ from.name, " ⥄ ", to.name]) AS teams
RETURN player.name, 
       REDUCE(acc = [], item in teams | acc + REDUCE(acc2 = "", i in item | acc2 + i)) AS thereAndBack
ORDER BY LENGTH(thereAndBack) DESC
LIMIT 10
 
==> +-------------------------------------------------------------------------------------+
==> | player.name       | thereAndBack                                                    |
==> +-------------------------------------------------------------------------------------+
==> | "Mark Stein"      | ["Stoke City ⥄ Chelsea FC","Ipswich Town ⥄ Chelsea FC"]         |
==> | "Peter Beagrie"   | ["Bradford City ⥄ Everton FC","Bradford City ⥄ Wigan Athletic"] |
==> | "Richard Dryden"  | ["Southampton FC ⥄ Stoke City","Southampton FC ⥄ Swindon Town"] |
==> | "Robbie Elliott"  | ["Bolton Wanderers ⥄ Newcastle United"]                         |
==> | "Elliot Grandin"  | ["Blackpool FC ⥄ Crystal Palace"]                               |
==> | "Robert Fleck"    | ["Chelsea FC ⥄ Norwich City"]                                   |
==> | "Paul Walsh"      | ["Portsmouth FC ⥄ Manchester City"]                             |
==> | "Rick Holden"     | ["Manchester City ⥄ Oldham Athletic"]                           |
==> | "Gary McAllister" | ["Liverpool FC ⥄ Coventry City"]                                |
==> | "Lee Bowyer"      | ["West Ham United ⥄ Newcastle United"]                          |
==> +-------------------------------------------------------------------------------------+
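For readers without a graph database to hand, the same “there and back” check can be sketched in plain Python. The transfer tuples below are illustrative stand-ins, not pulled from the real dataset: a player has gone back when both X → Y and Y → X appear among their moves.

```python
from collections import defaultdict

# Hypothetical (player, from_club, to_club) transfer records, mirroring
# the shape of the graph data queried above.
transfers = [
    ("Mark Stein", "Chelsea FC", "Stoke City"),
    ("Mark Stein", "Stoke City", "Chelsea FC"),
    ("Robbie Elliott", "Newcastle United", "Bolton Wanderers"),
    ("Robbie Elliott", "Bolton Wanderers", "Newcastle United"),
    ("Paul Walsh", "Manchester City", "Portsmouth FC"),
]

# Group each player's moves into a set of (from, to) pairs.
moves = defaultdict(set)
for player, src, dst in transfers:
    moves[player].add((src, dst))

# A "there and back" loop exists when the reversed pair is also present.
there_and_back = {}
for player, pairs in moves.items():
    loops = {tuple(sorted(p)) for p in pairs if (p[1], p[0]) in pairs}
    if loops:
        there_and_back[player] = sorted(loops)

print(there_and_back)
```

The `tuple(sorted(...))` normalisation collapses X → Y and Y → X into a single loop, much like the `ID(to) < ID(from)` filter in the Cypher query.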

That’s all I’ve got for now – if you can think of any other interesting avenues to explore let me know and I’ll take a look.

Categories: Programming

Chromecast drives higher visits, engagement and monetization for app developers

Google Code Blog - Wed, 07/15/2015 - 18:06

Posted by Jeanie Santoso, Merchandise Marketing Manager

Chromecast, our first Google Cast device, has seen great success with 17 million devices already sold and over 1.5 billion touches of the Cast button. Consumers now get all the benefits of their easy-to-use personal mobile devices, with content displayed on the largest and most beautiful screen in the house. By adding Google Cast functionality to their apps, developers can gain more visits, higher engagement, and better monetization. Here are four real-world examples showing how very different companies are successfully using Google Cast technology. Read on to learn more about their successes and how to get started.

Comedy Central sees 50% more videos viewed by Chromecast users

The Comedy Central app lets fans watch their favorite shows in full and on demand from mobile devices. The company created a cast-enabled app so users could bring their small screen experience to their TVs. Now with Chromecast, users watch at least 50 percent more video, with 1.5 times more visits than the average Comedy Central app user. “The user adoption and volume we saw immediately at launch was pleasantly surprising,” says Ben Hurst, senior vice president, mobile and emerging platforms, Viacom Music and Entertainment Group. “We feel that Chromecast was a clear success for the Comedy Central app.”

Read the full Comedy Central case study here

Just Dance Now sees 2.5x monetization with Chromecast users

Interactive-game giant Ubisoft adopted Google Cast technology as a new way to experience their Just Dance Now game. As the game requires a controller and a main screen on which the game is displayed, Ubisoft saw Chromecast as the simplest and most accessible way to play. Chromecast represents 30 percent of all songs launched on the game in the US. Chromecast players monetize 2.5 times more than other players: they’re more engaged, and they play longer and more often. Ubisoft also has seen more long-term subscribers with Chromecast. “The best Just Dance Now experience is on a big screen, and Chromecast brings an amazingly quick launch and ease of use for players to get into the game,” says Björn Törnqvist, Ubisoft technical director.

Read the full Just Dance Now case study here

Fitnet sees 35% higher engagement in Chromecast users

Fitnet is a mobile app that delivers video workouts and converts your smartphone’s or tablet’s camera into a biometric sensor to intelligently analyze your synchronicity with the trainer. The app provides a real-time score based on the user’s individual performance. The company turned to Chromecast to provide a compelling, integrated big-screen user experience so users don’t need to stare at a tiny display to follow along. Chromecast users now perform 35 percent better on key engagement metrics Fitnet regards as critical to its success, such as logins, exploring new workouts, and actively engaging in workout content. “Integrating with Google Cast technology has been an excellent investment of our time and resources, and a key feature that has helped us to develop a unique, compelling experience for our users,” says Bob Summers, Fitnet founder and CEO.

Read the full Fitnet case study here

Haystack TV doubled average weekly viewing time

Haystack TV is a personal news channel that lets consumers watch news on any screen, at any time. The company integrated Google Cast technology so users can browse their headline news, choose other videos to play, and even remove videos from their play queue without disrupting the current video on their TV. With Chromecast, average weekly viewing time has doubled. One-third of Haystack TV users now view their news via Chromecast. “We’re in the midst of a revolution in the world of television. More and more people are ‘cutting the cord’ and favoring over-the-top (OTT) services such as Haystack TV,” says Ish Harshawat, Haystack TV co-founder. “Chromecast is the perfect device for watching Haystack TV on the big screen. We wouldn't be where we are today without Chromecast.”

Read the full Haystack TV case study here

Integrate with Google Cast technology today

More and more developers are discovering what Google Cast technology can do for their app. Check out the Google Cast SDK for API references and take a look at our great sample apps to help get you started.

To learn more, visit developers.google.com/cast

Categories: Programming

64 Network DO’s and DON’Ts for Game Engines. Part IIIa: Server-Side

This article originally appeared on ITHare.com. It's one article from an excellent series of articles: Part I. Client Side; Part IIa. Protocols and APIs; Part IIb. Protocols and APIs; Part IIIb. Server-Side (deployment, optimizations, and testing); Part IV. Great TCP-vs-UDP Debate; Part V. UDP; Part VI. TCP.

In Part III of the article, we’ll discuss issues specific to server-side, as well as certain DO’s and DON’Ts related to system testing. Due to the size, part III has been split, and in this part IIIa we’ll concentrate on the issues related to Store-Process-and-Forward architecture.

18. DO consider Event-Driven programming model for Server Side too

As discussed above (see item #1 in Part I), event-driven programming is a must for the client side; it also comes in handy on the server side. Multi-threaded logic is still a nightmare for the server-side [NoBugs2010], and keeping logic single-threaded simplifies development a lot. Whether you think that multi-threaded game logic is normal and single-threaded logic is a big improvement, or that single-threaded game logic is normal and multi-threaded logic is a nightmare, is up to you. What is clear is that if you can keep your game logic single-threaded, you'll be very happy compared to the multi-threaded alternative.

However, unlike the client-side where performance and scalability rarely pose problems, on the server side where you need to serve hundreds of thousands of players, they become really (or, if your project is successful, “really really”) important. I know two ways of handling performance/scalability for games, while keeping logic single-threaded.
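The shape of this model can be sketched in a few lines of Python. This is purely illustrative (the event names and state layout are made up, and the article's engine is not in Python): network threads only enqueue events, and a single logic thread owns all game state, so no locks are needed around it.

```python
import queue

# Thread-safe queue: producer (network) threads put events here; only the
# single logic thread ever reads from it or touches game_state.
events = queue.Queue()
game_state = {"players": {}}

def handle(event):
    # All state mutation happens here, on the one logic thread.
    kind, payload = event
    if kind == "join":
        game_state["players"][payload] = {"score": 0}
    elif kind == "score":
        game_state["players"][payload]["score"] += 1

def run_once():
    # Drain whatever is queued right now, one event at a time, in order.
    while not events.empty():
        handle(events.get())

events.put(("join", "alice"))
events.put(("score", "alice"))
run_once()
```

Because `handle` is the only code that touches `game_state`, reasoning about the logic stays sequential even when many connections feed the queue concurrently.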

18a. Off-loading
Categories: Architecture

A Very Old Version of the PlentyOfFish Architecture that was Just Sold for $575 Million

PlentyOfFish was acquired by the Match Group for $575 million in cash. And it all goes to Markus Frind. Here's the story of the acquisition

Way back in 2009 I wrote an architecture article on PlentyOfFish, which I'll reproduce here for historical perspective. The main theme at that time was how Markus was making great fat stacks of cash from AdSense by running this huge site all by himself on a Microsoft stack.

We know the AdSense goldmine played out long ago. What else has changed? We don't really know. Some time ago we stopped getting updates on PlentyOfFish architecture changes, so that's all we have.

I doubt much remains the same, however. Now 75 people work at PlentyOfFish, there are 90 million registered users, and a whopping 3.6 million daily active users, so something must be happening.

Anyway, here's the old PlentyOfFish Architecture. It still makes for interesting reading. I'm just wondering, when you get done reading, is being sold for $575 Million the ending you would expect?

PlentyOfFish Architecture
Categories: Architecture

Python: UnicodeDecodeError: ‘ascii’ codec can’t decode byte 0xe2 in position 0: ordinal not in range(128)

Mark Needham - Wed, 07/15/2015 - 07:20

I was recently doing some text scrubbing and had difficulty working out how to remove the ‘†’ character from strings.

e.g. I had a string like this:

>>> u'foo †'
u'foo \u2020'

I wanted to get rid of the ‘†’ character and then strip any trailing spaces so I’d end up with the string ‘foo’. I tried to do this in one call to ‘replace’:

>>> u'foo †'.replace(" †", "")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 1: ordinal not in range(128)

It took me a while to work out that the plain byte string “ †” was being decoded as ASCII rather than UTF-8 when combined with the unicode string. Let’s fix that by passing a unicode literal instead:

>>> u'foo †'.replace(u' †', "")
u'foo'

I think the following call to unicode, which I’ve written about before, is equivalent:

>>> u'foo †'.replace(unicode(' †', "utf-8"), "")
u'foo'
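For what it’s worth, Python 3 sidesteps this class of error entirely: `str` is Unicode throughout, so there is no implicit ASCII decode of byte-string arguments. A sketch of the same scrub in Python 3:

```python
# In Python 3 all str literals are Unicode, so mixing literals in a
# replace() call cannot trigger an implicit ASCII decode.
s = "foo \u2020"                        # "foo †"
cleaned = s.replace("\u2020", "").rstrip()
print(cleaned)                          # foo
```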

Now back to the scrubbing!

Categories: Programming

Scaling Agile: Scrum of Scrums – Barnacles

Remember that while barnacles perform important tasks, such as filtering the ocean's water, on the hull of a ship they cause drag and reduce efficiency.

The idea of a Scrum of Scrums (SoS) is fairly simple. A bunch of people get together on a periodic basis to coordinate the work their teams are doing with other teams. The SoS helps everyone involved work together better in order to deliver the maximum value. The SoS typically uses the daily stand-up or scrum as a model. The simplicity of the SoS's logistics and its overall utility often lead to tinkering with the format to address other organizational needs. Four very typical additions include:

1.     SoS Hierarchy. As projects get larger, more than one scrum team is often needed to deliver the functionality and value needed by the business in a timely fashion. Any time more than one team has to work together, you need to coordinate to ensure that the teams don’t get in each other’s way. A simple Scrum of Scrums will typically suffice for coordinating a few teams; however, as soon as the number of teams grows to between 7 and 10, a single SoS will begin to lose its effectiveness. When a SoS gets too large, it generally needs to be split into two, and once you have two SoSs . . . you will need a third. If a project requires ten teams to deliver, three SoSs will be needed: two five-person SoSs would meet and then each send a representative to the third to coordinate and pass information. Hypothetically SoSs could scale infinitely; however, in scenarios requiring more than two layers of SoS, effectiveness usually suffers due to degraded communications and meeting fatigue.

2.     Backlogs. Scrum of Scrums, even those with variable membership, often build up lists of to-do items that need to be tackled by SoS members outside of the meeting. These to-do items often do not belong on the backlog of the everyday team. ANY item that a SoS needs to tackle belongs on a backlog (or a to-do list). Most SoS teams I coach use Scrumban to prioritize and manage the work items on their backlog. Items on the backlog are often varied. For example, some of the items I have seen on SoS backlogs included: tasks for consolidated demos, release planning tasks, activities for coordinating external reviews, training and even team events. Using a backlog allows the SoS to capture and prioritize work that both affects and requires coordination across multiple teams.

3.     Retrospectives. I have never met a team that could not benefit from a retrospective at some point. Scrum of Scrums teams are no different. Standard retrospective techniques can typically be used without modification. One exception to the standard approaches is the SoS retrospective for teams with variable membership, for example where common patterns occur, such as scrum masters meeting one day, technical leads meeting another day and perhaps test leads a third day. In this case I suggest holding three separate retrospectives. In cases where membership purely depends on daily context, I usually invite everyone that attended the SoS within the timeframe being addressed. One side note: I tend to do SoS retrospective(s) over lunch at the end of a sprint to minimize potential time contention with SoS participants’ work on their regular teams.

4.     Planning. Where there is a backlog and a cadence of events, there needs to be planning. A SoS team should plan its activities. Planning should include the number and cadence of meetings, membership and any outside activities that the SoS might need to address. Planning should be done at the beginning of a sprint and then tuned on a daily basis. I typically add 30 minutes to the first SoS meeting to address logistics and planning (a form of sprint planning) and then have the SoS team(s) work the backlog as part of their meetings. Asking the SoS to plan its own schedule sends a message about the need for teams to self-organize and self-manage.

The four typical additions of hierarchy, planning, backlogs and retrospectives are consistent with the principles of Agile. These additions add a layer of transparency and coordination. The simplicity and effectiveness of the SoS (and the daily stand-up, for that matter) often generate suggestions to add extras to the meeting. Use the concept of the time box (for example, limit all meetings of this type to 15 minutes) and the principles of Agile to ensure that additions don’t reduce the effectiveness of the Scrum of Scrums. Remember that while barnacles perform important tasks, such as filtering the ocean’s water, on the hull of a ship they cause drag and reduce efficiency.


Categories: Process Management