
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Micro services architecture principle #1: Each Micro service delivers a single complete business capability

Xebia Blog - Sat, 05/23/2015 - 21:13

Micro services are a hot topic. Because of that, a lot of people are saying a lot of things. To help organizations make the best of this new architectural style, Xebia has defined a set of principles that we feel should be applied when implementing a Micro service Architecture. Over the next couple of days we will cover each of these principles in more detail in a series of blog posts.
This blog explains why a Micro service should deliver a complete business capability.

A complete business capability is a process that can run from start to finish without interruptions or excursions to other services. This means that a business capability should not depend on other services to complete its work.
If a process in a micro service depends on other micro services we would end up in the dependency hell ESBs introduced: in order to service a customer request we need many other services, and therefore if one of them fails everything stops. A more robust solution would be to define a service that handles a process that makes sense to a user. An example is ordering a book in a web shop. This process would start with the selection of a book and end with creating an order. Actually fulfilling the order is a different process that lives in its own service. The fulfillment process might run right after the order process but it doesn’t have to. If the customer orders a PDF version of a book, order fulfillment may be completed right away. If the order was for the print version, all the order service can promise is to ask shipping to send the book. Separating these two processes into different services allows us to make choices about the way each process is completed, making sure that a problem or delay in one service has no impact on other services.

So, building a micro service such that it does a single thing well without interruptions or waiting time is at the foundation of a robust architecture.
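To make the hand-off concrete, here is a minimal sketch in Python. The queue, function names and event shape are illustrative assumptions rather than anything from the original post; the point is only that the order capability completes on its own and publishes an event, instead of calling fulfillment synchronously:

# Minimal sketch of the order/fulfillment split; the in-process deque
# stands in for whatever message broker the services would really share.
from collections import deque

event_bus = deque()

def place_order(book_id, book_format):
    # The order capability completes without calling any other service:
    # it records the order and publishes an event.
    order = {'book': book_id, 'format': book_format, 'status': 'placed'}
    event_bus.append(('OrderPlaced', order))
    return order

def fulfillment_worker():
    # The fulfillment capability runs independently, whenever it can;
    # a delay or failure here never blocks order placement.
    while event_bus:
        event, order = event_bus.popleft()
        if event == 'OrderPlaced':
            if order['format'] == 'pdf':
                order['status'] = 'delivered'           # complete right away
            else:
                order['status'] = 'shipping requested'  # ask shipping to send the book

order = place_order('book-123', 'print')
fulfillment_worker()
print(order['status'])  # shipping requested

If only fulfillment is slow or down, orders keep flowing and the backlog drains later, which is exactly the robustness property described above.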

Re-Read Saturday: The Goal: A Process of Ongoing Improvement. Part 14


I recently had a long discussion about whether it was more important to solve an urgent and specific business problem or to create a culture of process improvement that would avoid crises in the future. My colleague described the immediate problem as threatening to the entire organization. The obvious answer was that the immediate problem needed to be addressed. The question then became whether consultants should be engaged to provide the answer or to help the organization discover the answer. I suggest that doing the latter actually negates the first question by generating a solution to the immediate problem while creating a culture of process improvement. Jonah in The Goal illustrates this nicely. He helped Alex and his management team discover the answer while building a culture of process improvement.

Part 1   Part 2   Part 3   Part 4   Part 5   Part 6   Part 7   Part 8   Part 9   Part 10   Part 11   Part 12   Part 13

Chapter 33 begins with Alex working on assembling his new team. He begins with Lou, the plant accountant. Before Alex can ask him to come with him, Lou explains to Alex that another old measurement has been causing problems with how the plant is perceived and how it behaves. Inventory is accounted for as an asset on the balance sheet even though inventory is really a liability. Since the plant has become more efficient it is carrying less inventory, therefore reducing the assets reported on the balance sheet. During the period when inventory was drawn down to the levels needed by the more efficient process, the plant looked as if its financial position was getting worse. Now that a new equilibrium in inventory has been established the problem is no longer an issue; however, Lou notes that, “measurement should induce the parts [of the process] to do what is good for the organization as a whole.” Lou is ready to help Alex and is pumped to focus on building a better measurement program.

Alex approaches Bob Donovan, the plant production manager, to become the division’s production manager. Bob points out that the Burnside order that sealed Alex’s new deal was engineered. Alex and his management team had not just “taken” the order, but rather had worked out the best way the order could be delivered and then had negotiated a deal that benefited everyone. Bob wants to find a way to create and document a process in which the plant and engineering can be an integral part of sales. A process and documentation are needed so that the plant leadership team does not need to be intimately involved in every order. Bob Donovan wants to stay at the plant and become the new plant manager, and wants Stacey in materials to become the new production manager.

They find Stacey working on a new potential problem. Stacey has identified that there is a class of resources called capacity constraint resources (CCRs). CCRs are resources that have constraints but are not bottlenecks. As the processing of work through bottlenecks is improved, CCRs risk becoming bottlenecks themselves, which will negatively impact productivity. Process improvements need to be made continually across the entire system.

Alex finally turns to Ralph. Ralph points out that he now feels like he is an important part of the team rather than just the computer nerd in the corner. He walks Alex through his ideas for building systems to support engineering, manage buffers and enable better measurement.

The experimentation that led to changing how the plant works has changed how Alex’s management team thinks about their jobs. Asking questions and experimenting with changes to the process those questions generate has yielded a much higher level of involvement and commitment.

Chapter 34 jumps to Alex and Julie sitting in their kitchen drinking tea. They are discussing how each member of Alex’s current team is exploring ideas that might not have an answer. Julie points out that if Jonah had not cut him off by suggesting he trust his own judgment, Alex might be reaching out to Jonah for suggestions rather than trying to work on them as a team.

The discussion of Jonah brings them back to Jonah’s last question to Alex. Jonah had asked, “What are the techniques needed for management?” Julie suggests that since the questions Alex’s management team is currently working on will still be around after Alex moves to his new job, why not engage the team in answering Jonah’s question? They have as much of a stake in the answer as Alex does!

Alex pulls the team together and they spend their first session discussing and drawing the many ways Alex could determine what is going on when he starts the new job. There are many ways to answer the question of what is going on. Each yields a different answer based on differences in perspective, approach and an arbitrary order of arranging the results. The wide range of ways to think about the problem makes it difficult to actually determine a solution. The group agrees to meet the next day.

Chapters 33 and 34 reflect a shift of focus. With the plant saved, Alex is faced with the need to generalize the process that was used so that it can be applied to different problems or scaled up to the next level after his promotion.

Remember that the summaries of previous entries in the re-read of The Goal have been shifted to a new page (here). Also, if you don’t have a copy of the book, buy one. If you use the link below it will support the Software Process and Measurement blog and podcast. Dead Tree Version or Kindle Version


Categories: Process Management

Are You an Integration Specialist?

Some people specialize in a narrow domain.  They are called specialists because they focus on a specific area of expertise, and they build skills in that narrow area.

Rather than focus on breadth, they go for depth.

Others focus on the bigger picture or connecting the dots.  Rather than focus on depth, they go for breadth.

Or do they?

It actually takes a lot of knowledge and depth to be effective at integration and “connecting the dots” in a meaningful way.  It’s like being a skilled entrepreneur or a skilled business developer.   Not just anybody who wants to generalize can be effective.  

True integration specialists are great pattern matchers and have deep skills in putting things together to make a better whole.

I was reading the book Business Development: A Market-Oriented Perspective where Hans Eibe Sørensen introduces the concept of an Integrating Generalist and how they make the world go round.

I wrote a post about it on Sources of Insight:

The Integrating Generalist and the Art of Connecting the Dots

Given the description, I’m not sure which is better, the Integration Specialist or the Integrating Generalist.  The value of the Integrating Generalist is that it breathes new life into people who want to generalize so that they can put the bigger puzzle together.  Rather than de-value generalists, this label puts a very special value on people who are able to fit things together.

In fact, the author claims that it’s Integrating Generalists that make the world go round.

Otherwise, there would be a lot of great pieces and parts, but nothing to bring them together into a cohesive whole.

Maybe that’s a good metaphor for the Integrating Generalist.  While you certainly need all the parts of the car, you also need somebody to make sure that all the parts come together.

In my experience, Integrating Generalists are able to help shape the vision, put the functions that matter in place, and make things happen.

I would say the most effective Program Managers I know do exactly that.

They are the Oil and the Glue for the team because they are able to glue everything together, and, at the same time, remove friction in the system and help people bring out their best, towards a cohesive whole.

It’s synergy in action, in more ways than one.

You Might Also Like

Anatomy of a High-Potential

E-Shape People, Not T-Shape

Generalists vs. Specialists

Categories: Architecture, Programming

Python: Refactoring to iterator

Mark Needham - Sat, 05/23/2015 - 11:14

Over the last week I’ve been building a set of scripts to scrape the events from the Bayern Munich/Barcelona game and I’ve ended up with a few hundred lines of nested for statements, if statements and mutated lists. I thought it was about time I did a bit of refactoring.

The following is a function which takes in a match file and spits out a collection of maps containing times & events.

import bs4
import re
from bs4 import BeautifulSoup
from soupselect import select
 
def extract_events(file):
    match = open(file, 'r')
    soup = BeautifulSoup(match.read())
 
    all_events = []
    for event in select(soup, 'div#live-text-commentary-wrapper div#live-text'):
        for child in event.children:
            if type(child) is bs4.element.Tag:
                all_events.append(child.getText().strip())
 
    for event in select(soup, 'div#live-text-commentary-wrapper div#more-live-text'):
        for child in event.children:
            if type(child) is bs4.element.Tag:
                all_events.append(child.getText().strip())
 
    timed_events = []
    for i in range(0, len(all_events)):
        event = all_events[i]
        time = re.findall("\d{1,2}:\d{2}", event)
        formatted_time = " +".join(time)
        if time:
            timed_events.append({'time': formatted_time, 'event': all_events[i+1]})
    return timed_events

We call it like this:

match_id = "32683310"
for event in extract_events("data/%s" % (match_id))[:10]:
    print event

The file we’re loading is the Bayern Munich vs Barcelona match HTML file which I have saved locally. After we’ve read that into Beautiful Soup we locate the two divs on the page which contain the match events.

We then iterate over that list and create a new list containing (time, event) pairs which we return.

I think we should be able to get to our resulting collection without persisting an intermediate list, but first things first – let’s remove the duplicated for loops:

def extract_events(file):
    match = open(file, 'r')
    soup = BeautifulSoup(match.read())
 
    all_events = []
    events = select(soup, 'div#live-text-commentary-wrapper div#live-text')
    more_events = select(soup, 'div#live-text-commentary-wrapper div#more-live-text')
 
    for event in events + more_events:
        for child in event.children:
            if type(child) is bs4.element.Tag:
                all_events.append(child.getText().strip())
 
    timed_events = []
    for i in range(0, len(all_events)):
        event = all_events[i]
        time = re.findall("\d{1,2}:\d{2}", event)
        formatted_time = " +".join(time)
        if time:
            timed_events.append({'time': formatted_time, 'event': all_events[i+1]})
    return timed_events

The next step is to refactor towards using an iterator. After a bit of reading I realised a generator would make life even easier.

I created a function which returned an iterator of the raw events and plugged that into the original function:

def raw_events(file):
    match = open(file, 'r')
    soup = BeautifulSoup(match.read())
    events = select(soup, 'div#live-text-commentary-wrapper div#live-text')
    more_events = select(soup, 'div#live-text-commentary-wrapper div#more-live-text')
    for event in events + more_events:
        for child in event.children:
            if type(child) is bs4.element.Tag:
                yield child.getText().strip()
 
def extract_events(file):
    all_events = list(raw_events(file))
 
    timed_events = []
    for i in range(0, len(all_events)):
        event = all_events[i]
        time = re.findall("\d{1,2}:\d{2}", event)
        formatted_time = " +".join(time)
        if time:
            timed_events.append({'time': formatted_time, 'event': all_events[i+1]})
    return timed_events

If we run that function we still get the same output as before which is good. Now we need to work out how to clean up the second bit of the code which groups the appropriate rows together.

The goal is that ‘extract_events’ returns an iterator rather than a list – we need to figure out how to iterate over the output of ‘raw_events’ in such a way that when we find a ‘time row’ we can yield that and the row immediately after.

Luckily I found a Stack Overflow post explaining that you can use the ‘next’ function inside an iterator to achieve this:

def extract_events(file):
    events = raw_events(file)
    for event in events:
        time = re.findall("\d{1,2}:\d{2}", event)
        formatted_time = " +".join(time)
        if time:
            yield {'time': formatted_time, 'event': next(events)}

It’s not that much less code than the original function but I think it’s an improvement. Any thoughts/tips to simplify it further are always welcome.
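One further guard is worth considering (my own addition, not from the original post): if a time row ever happened to be the last raw event, the bare next(events) call would raise StopIteration and quietly end the generator. Passing a default to next makes that case explicit:

def extract_events(file):
    events = raw_events(file)
    for event in events:
        time = re.findall("\d{1,2}:\d{2}", event)
        formatted_time = " +".join(time)
        if time:
            # None flags a time row with no following event row,
            # instead of silently stopping the iteration early
            yield {'time': formatted_time, 'event': next(events, None)}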

Categories: Programming

Android Developer Story: Wooga’s fast iterations on Android and Google Play

Android Developers Blog - Sat, 05/23/2015 - 02:34

Posted by Leticia Lago, Google Play team

In order to make the best possible games, Wooga works on roughly 40 concepts and prototypes per year, out of which 10 go into production, around seven soft launch, and only two make it to global launch. It’s what they call “the hit filter.” For their latest title, Agent Alice, they follow up with new episodes every week to maintain player interest and engagement over time.

The ability to quickly iterate on both live games and games under development is therefore key to Wooga’s business model — Android and Google Play provide the tools they need, and mean that new features and updates land on Android first, before they get to other platforms.

Find out more from Sebastian Kriese, Head of Partnerships, and Pal Tamas Feher, Head of Engineering, and learn how the iteration features of Android and Google Play have contributed to successes such as Diamond Dash, Jelly Splash, and Agent Alice.

You can find out more about building successful games businesses on Android and Google Play at Google I/O 2015: in person, on the live stream, or session recordings after the event. Check out the following:

  • Developers connecting the world through Google Play - Hear how the new mobile ecosystem including Google Play and Android are empowering developers to make good on the dream of connecting the world through technology to improve people's lives. This session will be live streamed.
  • Growing games with Google — In addition to consoles, PC, and browser gaming, as well as phone and tablet games, there are emerging fields including virtual reality and mobile games in the living room. This talk covers how Google is helping developers across this broad range of platforms. This session will be live streamed.
  • What’s new in the Google Play Developer Console - Google Play’s new launches will help you acquire more users and improve the quality of your app. Hear an overview of the latest features and how you can start taking advantage of them in the Developer Console.
  • Smarter approaches to app testing — Hear about the new ways Google can help maximize the success of your next app launch with cheaper and easier testing strategies.
Categories: Programming

Game Performance: Explicit Uniform Locations

Android Developers Blog - Sat, 05/23/2015 - 02:32

Posted by Shanee Nishry, Games Developer Advocate

Uniform variables in GLSL are crucial for passing data between the game code on the CPU and the shader program on the graphics card. Unfortunately, up until the availability of OpenGL ES 3.1, using uniforms required some preparation which made the workflow slightly more complicated and wasted time during loading.

Let us examine a simple vertex shader and see how OpenGL ES 3.1 allows us to improve it:

#version 300 es

layout(location = 0) in vec4 vertexPosition;
layout(location = 1) in vec2 vertexUV;

uniform mat4 matWorldViewProjection;

out vec2 outTexCoord;

void main()
{
    outTexCoord = vertexUV;
    gl_Position = matWorldViewProjection * vertexPosition;
}

Note: You might be familiar with this shader from a previous Game Performance article on Layout Qualifiers. Find it here.

We have a single uniform for our world view projection matrix:

uniform mat4 matWorldViewProjection;

The inefficiency appears when you want to assign the uniform value.

You need to use glUniformMatrix4fv or glUniform4f to set the uniform’s value but you also need the handle for the uniform’s location in the program. To get the handle you must call glGetUniformLocation.

GLuint program; // the shader program
float matWorldViewProject[16]; // 4x4 matrix as float array

GLint handle = glGetUniformLocation( program, "matWorldViewProjection" );
glUniformMatrix4fv( handle, 1, false, matWorldViewProject );

That pattern leads to having to call glGetUniformLocation for each uniform in every shader and keeping the handles or worse, calling glGetUniformLocation every frame.

Warning! Never call glGetUniformLocation every frame! Not only is it bad practice but it is slow and bad for your game’s performance. Always call it during initialization and save it somewhere in your code for use in the render loop.

This process is inefficient: it requires you to do more work and costs precious time and performance.

Also take into consideration that you might have multiple shaders with the same uniforms. It would be much better if your code was deterministic and the shader language allowed you to explicitly set the locations of your uniforms so you don’t need to query and manage access handles. This is now possible with Explicit Uniform Locations.

You can set the location for uniforms directly in the shader’s code. They are declared like this:

layout(location = index) uniform type name;

For our example shader it would be:

layout(location = 0) uniform mat4 matWorldViewProjection;

This means you never need to use glGetUniformLocation again, resulting in simpler code, a simpler initialization process and saved CPU cycles.

This is how the example shader looks after the change; the version declaration and the uniform declaration are what changed:

#version 310 es

layout(location = 0) in vec4 vertexPosition;
layout(location = 1) in vec2 vertexUV;

layout(location = 0) uniform mat4 matWorldViewProjection;

out vec2 outTexCoord;

void main()
{
    outTexCoord = vertexUV;
    gl_Position = matWorldViewProjection * vertexPosition;
}

As Explicit Uniform Locations are only supported from OpenGL ES 3.1 we also changed the version declaration to 310.

Now all you need to do to set your matWorldViewProjection uniform value is call glUniformMatrix4fv for the handle 0:

const GLint UNIFORM_MAT_WVP = 0; // Uniform location for WorldViewProjection
float matWorldViewProject[16]; // 4x4 matrix as float array

glUniformMatrix4fv( UNIFORM_MAT_WVP, 1, false, matWorldViewProject );

This change is extremely simple and the improvements can be substantial, producing cleaner code, a cleaner asset pipeline and improved performance. Be sure to make these changes if you are targeting OpenGL ES 3.1 or creating multiple APKs to support a wide range of devices.

To learn more about Explicit Uniform Locations check out the OpenGL wiki page for it which contains valuable information on different layouts and how arrays are represented.

Categories: Programming

Game Performance: Layout Qualifiers

Android Developers Blog - Sat, 05/23/2015 - 02:30

Today, we want to share some best practices on using the OpenGL Shading Language (GLSL) that can optimize the performance of your game and simplify your workflow. Specifically, layout qualifiers make your code more deterministic and increase performance by reducing your work.


Let’s start with a simple vertex shader and change it as we go along.

This basic vertex shader takes position and texture coordinates, transforms the position and outputs the data to the fragment shader:
attribute vec4 vertexPosition;
attribute vec2 vertexUV;

uniform mat4 matWorldViewProjection;

varying vec2 outTexCoord;

void main()
{
  outTexCoord = vertexUV;
  gl_Position = matWorldViewProjection * vertexPosition;
}
Vertex Attribute Index

To draw a mesh on to the screen, you need to create a vertex buffer and fill it with vertex data, including positions and texture coordinates for this example.

In our sample shader, the vertex data may be laid out like this:
struct Vertex
{
  Vector4 Position;
  Vector2 TexCoords;
};
Therefore, we defined our vertex shader attributes like this:
attribute vec4 vertexPosition;
attribute vec2  vertexUV;
To associate the vertex data with the shader attributes, a call to glGetAttribLocation will get the handle of the named attribute. The attribute format is then detailed with a call to glVertexAttribPointer.
GLint handleVertexPos = glGetAttribLocation( myShaderProgram, "vertexPosition" );
glVertexAttribPointer( handleVertexPos, 4, GL_FLOAT, GL_FALSE, 0, 0 );

GLint handleVertexUV = glGetAttribLocation( myShaderProgram, "vertexUV" );
glVertexAttribPointer( handleVertexUV, 2, GL_FLOAT, GL_FALSE, 0, 0 );
But you may have multiple shaders with the vertexPosition attribute, and calling glGetAttribLocation for every shader is a waste of performance which increases the loading time of your game.

Using layout qualifiers you can change your vertex shader attributes declaration like this:
layout(location = 0) in vec4 vertexPosition;
layout(location = 1) in vec2 vertexUV;
To do so you also need to tell the shader compiler that your shader is aimed at GL ES version 3.0. This is done by adding a version declaration:
#version 300 es
Let’s see how this affects our shader; the version declaration, the attribute declarations and the output variable are what change:
#version 300 es

layout(location = 0) in vec4 vertexPosition;
layout(location = 1) in vec2 vertexUV;

uniform mat4 matWorldViewProjection;

out vec2 outTexCoord;

void main()
{
  outTexCoord = vertexUV;
  gl_Position = matWorldViewProjection * vertexPosition;
}
Note that we also changed outTexCoord from varying to out. The varying keyword is deprecated from version 300 es and requires changing for the shader to work.

Note that vertex attribute layout qualifiers and #version 300 es are supported from OpenGL ES 3.0. The desktop equivalent is supported from OpenGL 3.3, using #version 330.

Now you know your position attribute is always at 0 and your texture coordinates will be at 1, and you can bind your vertex format without using glGetAttribLocation:
const int ATTRIB_POS = 0;
const int ATTRIB_UV   = 1;

glVertexAttribPointer( ATTRIB_POS, 4, GL_FLOAT, GL_FALSE, 0, 0 );
glVertexAttribPointer( ATTRIB_UV, 2, GL_FLOAT, GL_FALSE, 0, 0 );
This simple change leads to a cleaner pipeline, simpler code and saved performance during loading time.

To learn more about performance on Android, check out the Android Performance Patterns series.

Posted by Shanee Nishry, Games Developer Advocate
Categories: Programming

Rolling out the red carpet for app owners in Search Console

Google Code Blog - Fri, 05/22/2015 - 19:22

Posted by Hillel Maoz, Engineering Lead, Search Console Team and Mariya Moeva, Webmaster Trends Analyst

Originally posted to the Webmaster Central blog

Wouldn’t it be nifty if you could track where your indexed app content shows up in search results, for which queries, which app pages are most popular, and which ones have errors? Yeah, we thought so too! So we’ve equipped our freshly renamed Search Console with new reports to show you how Google understands and treats your app content in search results.

Our goal is to make Search Console a comprehensive source of information for everyone who cares about search, regardless of the format of their content. So, if you own or develop an app, Search Console is your new go-to place for search stats.

Add your app to Search Console

Simply open Search Console and enter your app name: android-app://com.example. Of course, we’ll only show data to authorized app owners, so you need to use your Google Play account to let Search Console know you have access to the app. If you don’t have access to your app in Google Play, ask an owner to verify the app in Search Console and add you next.

Connect your site to your app

Associating your site with your app is necessary for App Indexing to work. Plus, it helps Google understand and rank the app content better.

Track your app content’s performance in search

The new Search Analytics report provides detailed information on top queries, top app pages, and traffic by country. It also has a comprehensive set of filters, allowing you to narrow down to a specific query type or region, or sort by clicks, impressions, CTR, and positions.

Use the Search Analytics report to compare which app content you consider most important with the content that actually shows up in search and gets the most clicks. If they match, you’re on the right track! Your users are finding and liking what you want them to see. If there’s little overlap, you may need to restructure your navigation, or make the most important content easier to find. Also worth checking in this case: have you provided deep links to all the app content you want your users to find?

Make sure Google understands your app content

If we encounter errors while indexing your app content, we won’t be able to show deep links for those app pages in search results. The Crawl Errors report will show you the type and number of errors we’ve detected.

See your app content the way Google sees it

We’ve created an alpha version of the Fetch as Google tool for apps to help you check if an app URI works and see how Google renders it. It can also be useful for comparing the app content with the webpage content to debug errors such as content mismatch. In many cases, the mismatch errors are caused by blocked resources within the app or by pop-ups asking users to sign in or register. Now you can see and resolve these issues.

To get started on optimizing and troubleshooting your own app, add it to Search Console now. If you want to know more about App Indexing, read about it on our Developer Site. And, as always, you’re welcome to drop by the help forum with more questions.

Categories: Programming

The Dysfunctional Approach to Using "5 Whys"

Herding Cats - Glen Alleman - Fri, 05/22/2015 - 18:29

It's been popular recently in some agile circles to mention using the 5 Whys when asking about dysfunction. This common and misguided approach assumes - wrongly - that causal relationships are linear and that problems come from a single source. For example:

Estimates are the smell of dysfunction. Let's ask the 5 Whys to reveal these dysfunctions

The natural tendency is to assume that in asking the 5 Whys there is a single thread connecting cause and effect from beginning to end. This single source of the problem - the symptom - is labeled the Root Cause. The question is: is that root cause the actual root cause? The core problem is that the 5 Whys is not really seeking a solution, but just eliciting more symptoms masked as causes.

A simple example illustrates the problem from Apollo Root Cause Analysis.

Say we're in the fire prevention business. If preventing fires is our goal, let's look for the causes of fire and determine the corrective actions needed to actually prevent fires from occurring. In this example let's say we've identified 3 potential causes of fire. There is ...

  1. An ignition source
  2. Combustible material
  3. Oxygen

So what is the root cause of the fire? To prevent the fire - and in the follow-on example prevent a dysfunction - we must find at least one cause of the fire that can be acted on, meets the goals and objectives of preventing the fire, AND is within our control.

If we decide to control combustible materials, then the root cause is the combustibles. Same for the oxygen. This can be done by inerting a confined space, say with nitrogen. Same for the ignition sources. This traditional Root Cause Analysis pursues a preventative solution that is within our control and meets the goals and objectives - prevent fire. But this is not actually the pursuit of the Root Cause. By pursuing this approach, we stop on a single cause that may or may not result in the best solution. We're misled into a categorical thinking process that looks for solutions. This doesn't mean there is no root cause. It means a root cause cannot be labeled until we have decided which solutions we are able to implement. The root cause is actually secondary to and contingent on the solution, not the inverse. Only after solutions have been established can we identify the actual root cause of the fire not being prevented.

The notion that estimates are the smell of dysfunction in a software development organization, and that asking the 5 Whys will reveal the Root Cause, is equally flawed.

The need to estimate or not estimate has not been established. It is presumed that it is the estimating process that creates the dysfunction, and then the search - through the 5 Whys - is the false attempt to categorize the root causes of this dysfunction. The supposed dysfunction is then reverse engineered to be connected to the estimating process. This is not only a naïve approach to solving the dysfunction, it inverts the logic by ignoring the need to estimate. Without confirmation that estimates are needed or not needed, the search for the cause of the dysfunction has no purposeful outcome.

The decision that estimates are needed or not needed does not belong to those being asked to produce the estimates. That decision belongs to those consuming the estimate information in the decision making process of the business - those whose money is being spent.

And of course those consuming the estimates need to confirm they are operating their decision making processes in some framework that requires estimates. It could very well be that those providing the money to be spent by those providing the value don't actually need an estimate. The value at risk may be low enough - say 100 hours of development for a DB upgrade. But when the value at risk is sufficiently large - a determination again made by those providing the money - then a legitimate need to know how much, when, and what is made by the business. In this case, decisions are based on the microeconomics of opportunity cost for uncertain outcomes in the future.

This is the basis of estimating and the determination of the real root causes of the problems with estimates. Saying we're bad at estimating is NOT the root cause. And it is never the reason not to estimate. If we are bad at estimating, and if we do have confirmation and optimism biases, then fix them. Remove the impediments to producing credible estimates, because those estimates are needed to make decisions in any non-trivial value at risk work.

Related articles: Let's Get The Dirt On Root Cause Analysis; Essential Reading List for Managing Other People's Money; The Fallacy of the Planning Fallacy; Mr. Franklin's Advice
Categories: Project Management

Stuff The Internet Says On Scalability For May 22nd, 2015

Hey, it's HighScalability time:


Where is the World Brain? San Fernando marshes in Spain (by Cristobal Serrano)
  • 569TB: 500px total data transfer per month; 82% faster: elite athletes' brains; billions and millions: Facebook's graph store read and write load; 1.3 billion: daily Pinterest spam fighting events; 1 trillion: increase in processing power performance over six decades; 5 trillion: Facebook pub-sub messages per day
  • Quotable Quotes:
    • Silicon Valley: “Tell me the truth,” Gavin demands of a staff member. “Is it Windows Vista bad? Zune bad?” “I’m sorry,” the staffer tells Gavin, “but it’s Apple Maps bad!”
    • @garybernhardt: Reminder to people whose "big data" is under a terabyte: servers with 1 TB RAM can be had about $20k. Your data set fits in RAM.
    • @epc: μServices and AWS Lambda are this year’s containers and Docker at #Gluecon
    • orasis: So by this theory the value of a tech startup is the developer's laptops and the value of a yoga studio is the loaner mats.
    • @ajclayton: An average attacker sits on your network for 229 days, collecting information. @StephenCoty #gluecon
    • @mipsytipsy: people don't *cause* problems, they trigger latent conditions that make failures more likely.  @allspaw on post mortems #srecon15europe
    • @pas256: The future of cloud infrastructure is a secure, elastically scalable, highly reliable, and continuously deployed microservices architecture
    • Kevin Marks: The Web is the network
    • @cdixon: We asked for flying cars and all we got was the entire planet communicating instantly via $34 pocket supercomputers 
    • @ajclayton: Uh oh, @pas256 just suggested that something could be called a "nanoservice"...microservices are already old. #gluecon
    • @jamesurquhart: A sign that containers are interim step? Pkging procs better than pkging servers, but not as good as pkging functs? 
    • @markburgess_osl: Let's rename "immutable infrastructure" to "prefab/disposable" infrastructure, to decouple it from the false association with functionalprog
    • @Beaker: Key to startup success: solve a problem that has been solved before but was constrained due to platform tech cost or non-automated ops scale
    • @mooreds: 10M req/month == $45 for lambda.  Cheap. -- @pas256 #gluecon
    • @ajclayton: Microservices "exist on all points of the hype cycle simultaneously" @johnsheehan #gluecon
    • @oztalip: "Treat web server as a library not as a container, start it inside your application, not the other way around!" -@starbuxman #GOTOChgo
    • @sharonclin: If a site doesn't load in 3 sec, 57% abandon, 80% never return.  @krbenedict #m6xchange #Telerik
    • QuirksMode: Tools don’t solve problems any more, they have become the problem.
    • @rzazueta: Was considering taking a shot every time I saw "Microservices" on the #gluecon hashtag. But I've already gone through two livers.
    • @MariaSallis: "If you don't invest in infrastructure, don't invest in microservices" @johnsheehan #gluecon
    • Brian Gallagher: If the world devolved into a single cloud provider, there would be no need for Cloud Foundry.
    • @b6n: startup idea: use technology from the 70s.
    • Stephen Hawking: The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge
    • @aneel: "Monolithic apps have unlimited invisible internal dependencies" -@adrianco #gluecon
    • @windley: microservices don’t reduce complexity, they move it around, from dev to ops. #gluecon
    • @paulsbruce: When everyone has to be an expert in everything, that doesn't scale." @dberkholz @451research #gluecon
    • @oamike: I didn’t do SOA right, I didn’t do REST right, I’m sure as hell not going to do micro services right. #gluecon @kinlane
    • Urs Hölzle: My biggest worry is that regulation will threaten the pace of innovation.
    • @mccrory: There has been an explosion in managed OpenStack solutions - Platform9, MetaCloud, BlueBox
    • @viktorklang: Remember that you heard it here first, CPU L1 cache is the new disk.

  • This is more a measure of the fecundity of the ecosystem than an indication of disease. By its very nature, the magic creation machine that is Silicon Valley must create both wonder and bewilderment. Silicon Valley Is a Big Fat Lie: That gap between the Silicon Valley that enriches the world and the Silicon Valley that wastes itself on the trivial is widening daily.

  • In a liquidity crisis all those promises mean nothing. RadioShack Sold Your Data to Pay Off Its Debts.

  • YouTube has to work at it too. To Take On HBO And Netflix, YouTube Had To Rewire Itself: All of the things that InnerTube has enabled—faster iteration, improved user testing, mobile user analytics, smarter recommendations, and more robust search—have paid off in a big way. As of early 2015, YouTube was finally becoming a destination: On mobile, 80% of YouTube sessions currently originate from within YouTube itself.

  • If you aren't doing web stuff, do you really need to use HTTP? Do you really know why you prefer REST over RPC? There's no reason for API requests to pass through an HTTP stack.

  • If scaling is specialization and the cloud is the computer then why are we still using TCP/IP between services within a datacenter? Remote Direct Memory Access is fast. FaRM: Fast Remote Memory: FaRM’s per-machine throughput of 6.3 million operations per second is 10x that reported for Tao. FaRM’s average latency at peak throughput was 41µs which is 40–50x lower than reported Tao latencies. 

  • MigratoryData with 10 Million Concurrent Connections on a single commodity server. Lots of details on how the benchmark was run and the various configuration options. CPU usage under 50% (with spikes), memory usage was predictable, network traffic was 0.8 Gbps for 168,000 messages per second, 95th Percentile Latency: 374.90 ms. Next up? C100M.

  • Does anyone have a ProductHunt invite that they would be willing to share with me?

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Joe Colantonio is Taking Personal Branding to the Next Level

Making the Complex Simple - John Sonmez - Fri, 05/22/2015 - 16:00

Joe Colantonio is one of those guys who really gets personal branding and niching down. I’ve coached a lot of different developers about building out their brand, but unfortunately a majority of people still don’t really do anything. Not Joe, Joe took action. It’s been amazing to watch Joe expand his brand through his blog, […]

The post Joe Colantonio is Taking Personal Branding to the Next Level appeared first on Simple Programmer.

Categories: Programming

Constraints on the use of Dunbar’s Number

Dunbar’s number represents a theoretical limit on the number of people in a group that can maintain stable social relationships. Stable social relationships are needed to support application of Agile values, principles and techniques. Dunbar’s number is often quoted as 150 people. However, the limit for any individual group is not only a reflection of limits like Dunbar’s number but also of context. If we accept there is some theoretical limit that we can’t scale past, such as Dunbar’s number, we need to ask how other project or environmental factors further constrain the maximum number of people working on a problem. Why go to the trouble of scaling up the number of people working a problem? Many Agilistas would suggest that a single small team is optimal. However, many problems will require a larger collection of teams to deliver value and functionality. Additional contextual drivers that modify the theoretical maximum number of people in a group or a team-of-teams include at least four factors. They are:

  1. Cohesion, or how well people stick together. There are many attributes that can generate cohesion. Examples include: big ideas, goals, nationalities, religions and even corporate identities. Cohesion fosters a common relationship, which helps make groups more willing to put forth effort to achieve an end. For example, it often is hard to achieve a cohesive group when members come from multiple external consultancies. Each organization involved in the group has a different set of organizational goals that will reduce cohesion unless they are subjugated to the project goal. Reducing the number of people below Dunbar’s number makes it easier to use techniques like peer pressure to institutionalize a project vision and increase cohesion.
  2. Complexity is a measure of the number of properties of a project that are outside of the norm. The complexity of a problem reduces the optimal maximum number of people that can be brought to bear, because complexity generally requires either more control and coordination or, alternately, smaller teams to ensure collaboration.
  3. Uncertainty occurs when teams are searching for an answer to a business or technical problem. When a team needs to tackle an unknown business problem or new technology, research is often required. Research generally is constrained to small teams with specialized skills, reducing the optimal maximum group size for this type of endeavor well below Dunbar’s number. As concepts and ideas are discovered they can be rolled out more broadly to be fleshed out, prototyped and implemented, growing the group working on the project closer to Dunbar’s number.
  4. Dependencies between components often mean that work needs to be single threaded (or at least spread less broadly). Dependencies reduce the number of people or teams that can be effectively leveraged.

The idea of increasing the number of people and teams working on a project often appears to be a mechanism for delivering value more quickly. When adding people is suggested, remember that the number of people working on a problem is a constraint that cannot be dealt with by linearly increasing the number of people applied to a problem until you reach a limit such as Dunbar’s number. Context directly impacts how large any group can be before overhead and other constraints reduce effectiveness. It is often said that you can’t get nine women to have a baby in a month. In addition to Dunbar’s number, context plays an important role in defining overall team size.


Categories: Process Management

Software for the Mind

Herding Cats - Glen Alleman - Fri, 05/22/2015 - 00:21

The book Software for Your Head was a seminal work when we were setting up our Program Management Office in 2002 for a mega-project to remove nuclear waste from a very contaminated site in Golden, Colorado.

Here's an adaptation of those ideas to the specifics of our domain and problems

Software for your mind from Glen Alleman

This approach was a subset of a much larger approach to managing in the presence of uncertainty, very high risk, and even higher rewards, all on a deadline and fixed budget. As was stated in the Plan of the Week:
  • Monday - Where are we going this week? 
  • Daily - What are we doing along the way?
  • Friday - Where have we come to?

Do this every week, guided by the 3 year master plan and make sure no one is injured or killed.

That project is documented in the book Making the Impossible Possible, summarized here.

Making the impossible possible from Glen Alleman

Related articles: The Reason We Plan, Schedule, Measure, and Correct; The Flaw of Empirical Data Used to Make Decisions About the Future; There is No Such Thing as Free
Categories: Project Management

Pie in your face - without the mess

Google Code Blog - Thu, 05/21/2015 - 23:26

Posted by Anthony Maurice, Fun Propulsion Labs at Google

Fun Propulsion Labs at Google* is back with an exciting new release for game developers. We’ve updated Pie Noon (our open source Android game) to add support for Google Cardboard, letting you jump into the action directly using your Android phone as a virtual reality headset! Select your targets by looking at them and throw pies with a flick of the switch.

Look out for incoming pie!

We used the Cardboard SDK for Android, which helps simplify common virtual reality tasks like head tracking, rendering for Cardboard, and handling specialized input events. And you might remember us from before, bringing exciting game technologies like FlatBuffers, Pindrop, and Motive, all of which you can see in use in Pie Noon.

You can grab the latest version of Pie Noon on Google Play to try it out, or crack open the source code, and take a look at how we brought an existing game into virtual reality.

* Fun Propulsion Labs is a team within Google that's dedicated to advancing gaming on Android and other platforms.

Categories: Programming

Always-on and Wi-Fi with the latest Android Wear update

Android Developers Blog - Thu, 05/21/2015 - 18:19

Posted by Wayne Piekarski, Developer Advocate

A new update to Android Wear is rolling out with lots of new features like always-on apps, Wi-Fi connectivity, media browsing, emoji input, and more. Let’s discuss some of the great new capabilities that are available in this release.

Always-on apps

Above all, a watch should make it easy to tell the time. That's why most Android Wear watches have always-on displays, so you can see the time without having to shake your wrist or lift your arm to wake up the display. In this release, we're making it possible for apps to be always-on as well.

With always-on functionality, your app can display dynamic data on the device, even when the app is in ambient mode. This is useful if your app displays information that is continuously updated. For example, running apps like Endomondo, MapMyRun, and Runtastic use the always-on screen to let you keep track of how long and far you’ve been running. Zillow keeps you posted about the median price of homes nearby when you’re house-hunting.

Always-on functionality is also useful for apps that may not update data very frequently, but present information that’s useful for reference over a longer period of time. For example, Bring! lets you keep your shopping list right on your wrist, and Golfshot gives you accurate distances from tee to pin. If you’re at the airport and making your way to your gate, American Airlines, Delta, and KLM let you keep all of your flight info a glance away on your watch.

Note: the above apps will not display always-on functionality on your watch until you receive the update for the latest version of Android Wear.

Always-on functionality works similarly to watch faces, in that the power usage of the display and processor is kept to a minimum by reducing the colors and refresh rate of the display. To implement an always-on Activity, you need to make a few small changes to your app’s AndroidManifest.xml, your app’s build.gradle, and the Activity to declare that it supports ambient mode. A code sample and documentation are available to show you how it works. Be sure to tune in to the livestream at Google I/O next week for Android Wear: Your app and the always-on screen.

Wi-Fi connectivity and cloud sync

Many existing Android Wear devices already contain hardware support for Wi-Fi, and this release enables software support for Wi-Fi. The saved Wi-Fi networks on your phone are copied to your watch during setup, and your watch automatically connects to those Wi-Fi networks when it loses Bluetooth connection to your phone. Your watch can then connect to your phone over the Internet, even if they’re not on the same Wi-Fi network.

You should continue to use the Data Layer API for all communications between the watch and phone. By using this standard API, your app will always work, no matter what kind of connectivity the user’s wearable supports. Cloud sync also introduces a new virtual node in the Data Layer called the cloud node, which may be returned in calls to getConnectedNodes(). Learn more in the Multi-wearable support section below.

Multi-wearable support

The release of Google Play services 7.3 now allows support for multiple wearable devices to be paired simultaneously to a single phone or tablet, so you can have a wearable for fitness, and another for dressing up. While DataItems will continue to work in the same way, since they are synchronized to all devices, working with the MessageApi is a little different. When you update your build.gradle to use version 7.3 or higher, getConnectedNodes() from the NodeApi will usually return multiple nodes. There is an extra virtual node added to represent the cloud node used to communicate over Wi-Fi, so all developers need to deal with this situation in their code.

To help simplify finding the right node among many devices, we have added a CapabilityApi, allowing your nodes to announce features they provide, for example downloading images or music. You can also now use the ChannelApi to open up a connection to a specific device to transfer large resources such as images or audio streams, without having to send them to all devices like you would when embedding assets into data items. We have updated our Android Wear samples and documentation to show the best practices in implementing this.

MediaBrowser support

The Android 5.0 release added the ability for apps to browse the media content of another app, via the android.media.browse API. With the latest Android Wear update, if your media playback app supports this API, then you will be able to browse to find the next song directly from your watch. This is the same browse capability used in Android Auto. You implement the API once, and it will work across a variety of platforms. To do so, you just need to allow Android Wear to browse your app in the onGetRoot() method validator. You can also add custom actions to the MediaSession that will appear as controls on the watch. We have a Universal Media Player sample that shows you how to implement this functionality.

Updates to existing devices

The latest version of Android Wear will roll out via an over-the-air (OTA) update to all Android Wear watches over the coming weeks. To take advantage of these new features, you will need to use targetSdkVersion 22 and add the necessary dependencies for always-on support. We have also expanded the collection of emulators available via the SDK Manager, to simulate the experience on all the currently available devices, resolutions, and shapes, including insets like the Moto 360.

In this update, we have also disabled support for apps that use the unofficial, activity-based approach for displaying watch faces, as announced in December. These watch faces will no longer work and should be updated to use the new watch face API.

Since its launch last summer, Android Wear has grown into a platform that gives users many possibilities to personalize their watches, with a variety of shapes and styles, a range of watch bands, and thousands of apps and watch faces. Features such as always-on apps and Wi-Fi allow developers even more flexibility to give users amazing experiences with Android Wear.

Categories: Programming

Why Can’t I Get Going With My Dream

Making the Complex Simple - John Sonmez - Thu, 05/21/2015 - 16:00

In this episode, I answer an email about following through with a plan and getting the ideal results. Full transcript: John:               Hey, this is John Sonmez from simpleprogrammer.com and I’ve got another question for you. This question comes from Michael and Michael says, “Hi John, I was wondering if you had time to help me […]

The post Why Can’t I Get Going With My Dream appeared first on Simple Programmer.

Categories: Programming

Python: UnicodeEncodeError: ‘ascii’ codec can’t encode character u’\xfc’ in position 11: ordinal not in range(128)

Mark Needham - Thu, 05/21/2015 - 07:14

I’ve been trying to write some Python code to extract the players and the team they represented in the Bayern Munich/Barcelona match into a CSV file and had much more difficulty than I expected.

I have some scraping code (which is beyond the scope of this article) which gives me a list of (player, team) pairs that I want to write to disk. The contents of the list are as follows:

$ python extract_players.py
(u'Sergio Busquets', u'Barcelona')
(u'Javier Mascherano', u'Barcelona')
(u'Jordi Alba', u'Barcelona')
(u'Bastian Schweinsteiger', u'FC Bayern M\xfcnchen')
(u'Dani Alves', u'Barcelona')

I started with the following script:

with open("data/players.csv", "w") as file:
    writer = csv.writer(file, delimiter=",")
    writer.writerow(["player", "team"])
 
    for player, team in players:
        print player, team, type(player), type(team)
        writer.writerow([player, team])

And if I run that I’ll see this error:

$ python extract_players.py
...
Bastian Schweinsteiger FC Bayern München <type 'unicode'> <type 'unicode'>
Traceback (most recent call last):
  File "extract_players.py", line 67, in <module>
    writer.writerow([player, team])
UnicodeEncodeError: 'ascii' codec can't encode character u'\xfc' in position 11: ordinal not in range(128)

So it looks like the ‘ü’ in ‘FC Bayern München’ is causing us issues. Let’s try and encode the teams to avoid this:

with open("data/players.csv", "w") as file:
    writer = csv.writer(file, delimiter=",")
    writer.writerow(["player", "team"])
 
    for player, team in players:
        print player, team, type(player), type(team)
        writer.writerow([player, team.encode("utf-8")])
$ python extract_players.py
...
Thomas Müller FC Bayern München <type 'unicode'> <type 'unicode'>
Traceback (most recent call last):
  File "extract_players.py", line 70, in <module>
    writer.writerow([player, team.encode("utf-8")])
UnicodeEncodeError: 'ascii' codec can't encode character u'\xfc' in position 8: ordinal not in range(128)

Now we’ve got the same issue with the ‘ü’ in Müller so let’s encode the players too:

with open("data/players.csv", "w") as file:
    writer = csv.writer(file, delimiter=",")
    writer.writerow(["player", "team"])
 
    for player, team in players:
        print player, team, type(player), type(team)
        writer.writerow([player.encode("utf-8"), team.encode("utf-8")])
$ python extract_players.py
...
Gerard Piqué Barcelona <type 'str'> <type 'unicode'>
Traceback (most recent call last):
  File "extract_players.py", line 70, in <module>
    writer.writerow([player.encode("utf-8"), team.encode("utf-8")])
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 11: ordinal not in range(128)

Now we’ve got a problem with Gerard Piqué because that value has type string rather than unicode. Let’s fix that:

with open("data/players.csv", "w") as file:
    writer = csv.writer(file, delimiter=",")
    writer.writerow(["player", "team"])
 
    for player, team in players:
        if isinstance(player, str):
            player = unicode(player, "utf-8")
        print player, team, type(player), type(team)
        writer.writerow([player.encode("utf-8"), team.encode("utf-8")])

Et voila! All the players are now successfully written to the file.
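Another option, a sketch of my own rather than part of the original post, is to normalize every value in one helper before writing, assuming any plain str values are already UTF-8 encoded bytes:

def to_utf8(value):
    # csv in Python 2 writes byte strings: encode unicode values,
    # pass already-encoded str values through untouched
    if isinstance(value, unicode):
        return value.encode("utf-8")
    return value

with open("data/players.csv", "w") as file:
    writer = csv.writer(file, delimiter=",")
    writer.writerow(["player", "team"])

    for player, team in players:
        writer.writerow([to_utf8(player), to_utf8(team)])

This keeps the type juggling in one place instead of at every call site.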

An alternative approach is to change the default encoding of the whole script to 'UTF-8', like so:

# encoding=utf8
import sys
reload(sys)
sys.setdefaultencoding('utf8')
 
with open("data/players.csv", "w") as file:
    writer = csv.writer(file, delimiter=",")
    writer.writerow(["player", "team"])
 
    for player, team in players:
        print player, team, type(player), type(team)
        writer.writerow([player, team])

It took me a while to figure it out but finally the players are ready to go!

Categories: Programming

We've Been Doing This for 20 Years ...

Herding Cats - Glen Alleman - Thu, 05/21/2015 - 03:58

We've been doing this for 20 years and therefore you can as well

is a common phrase used when people are asked, in what domain does your approach work? Of course, without a test of that idea outside the domain in which the anecdotal example is used, it's going to be hard to know if the idea is actually credible beyond those examples.

So if we hear we've been successful in our domain doing something - or better yet NOT doing something, like say NOT estimating - ask: in what domain have you been successful? Then the critical question: is there any evidence that the success in that domain is transferable to another domain? This briefing provides a framework - from my domain of aircraft development - illustrating that domains vary widely in their needs, constraints, governance processes, and applicable and effective approaches to delivering value.

Paradigm of agile project management from Glen Alleman

Google seems to have forgotten how to advance the slides on the Mac, so click on the presentation title (Paradigm of Agile PM) to advance them. Safari works.

Related articles: The Reason We Plan, Schedule, Measure, and Correct; The Flaw of Empirical Data Used to Make Decisions About the Future; There is No Such Thing as Free; Root Cause Analysis; Domain is King, No Domain Defined, No Way To Test Your Idea; Mr. Franklin's Advice
Categories: Project Management

Database Scaling Redefined: Scaling Demanding Queries, High Velocity Data Modifications and Fast Indexing All At Once for Big Data

This is a guest post by Cihan Biyikoglu, Director of Product Management at Couchbase.

Question: A few million people are out looking for a setup to efficiently live and interact. What is the most optimized architecture they can use?

  1. Build one giant high-rise for everyone,
  2. Build many single-family homes OR
  3. Build something in between?

Schools, libraries, retail stores, corporate HQs and homes are all there to optimize a variety of interactions. Sizes of groups and types of exchange vary drastically… Turns out, what we have chosen to do is build all of the above. To optimize different interactions, different architectures make sense.

While high-rises can be effective for interactions among a high density of people on a small amount of land, it is impractical to build 500-story buildings. It is also hard to add or remove floors as you need them. So high-rises feel awfully like scaling up – a cluster of processors communicating over fast memory that computes fast but has a limited scale ceiling and limited elasticity.

As a home, the single-family architecture works great. A nice backyard to play in and private space for family dinners... You may need to get in your car to interact with other families, BUT it is easy to build more single-family houses: easy elasticity and scale. The single-family structure feels awfully like scaling out, doesn't it? A cluster of commodity machines that communicate over slower networks and come with great elasticity.

“How does this all relate to database scalability?” you ask…

Categories: Architecture

Software Development Linkopedia May 2015

From the Editor of Methods & Tools - Wed, 05/20/2015 - 15:24
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about Agile software development, giving feedback, managing technical debt, normalizing user stories, dependency injection, developer griefs, behavior driven development (BDD) and software architecture.

Web site: The GROWS Method
Blog: The Failure of Agile
Blog: Being A Senior Engineer
Blog: Criticism and Ineffective Feedback
Blog: Your Job Is Not to Write Code
Article: Learn how to manage technical debt from a business perspective
Article: Team Agreements
Article: User Story Normalization
Article: Dependency Injection the Easy Way
Tools: ...