Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

We throw pie with a little help from our friends

Google Code Blog - Thu, 04/09/2015 - 20:15

Posted by Jon Simantov, Fun Propulsion Labs at Google

Originally posted to the Google Open Source blog

Fun Propulsion Labs at Google* is back today with some new releases for game developers. We’ve updated Pie Noon (our open source Android TV game) with networked multi-screen action, and we’ve also added some delicious new libraries we’ve been baking since the original release: the Pindrop audio library and the Motive animation system.

Pie Noon multi-screen action

Got an Android TV and up to 4 friends with Android phones or tablets? You’re ready for some strategic multi-player mayhem in this updated game mode. Plan your next move in secret on your Android phone: will you throw at an opponent, block an incoming attack, or take the risky approach and wait for a larger pie? Choose your target and action, then watch the Android TV to see what happens!


We used the Nearby Connections API from the most recent version of Google Play Games services to easily connect smartphones to your Android TV and turn our original Pie Noon party game into a game of turn-based strategy. You can grab the latest version of Pie Noon from Google Play to try it out, or crack open the source code and take a look at how we used FlatBuffers to encode data across the network in a fast, portable, bandwidth-efficient way.

Pindrop: an open source game audio library

Pindrop is a cross-platform C++ library for managing your in-game audio. It supports cross-compilation to Android, Linux, iOS and OS X. An early version of this code was part of the first Pie Noon release, but it's now available as a separate library that you can use in your own games. Pindrop handles loading and unloading sound banks, tracking sound locations and listeners, prioritization of your audio channels, and more.

Pindrop is built on top of several other pieces of open source technology:

  • SDL Mixer is used as a backend for actually playing the audio.
  • The loading of data and configuration files is handled by our serialization library, FlatBuffers.
  • Our own math library, MathFu, is used for a number of under-the-hood calculations.

You can download the latest open source release from our GitHub page. Documentation is available here and a sample project is included in the source tree. Please feel free to post any questions in our discussion list.

Motive: an open source animation system

The Motive animation system can breathe life into your static scenes. It does this by applying motion to simple variables. For example, if you’d like a flashlight to shine on a constantly-moving target, Motive can animate the flashlight so that it moves smoothly yet responsively.

Motive animates both spline-based motion and procedural motion. These types of motion are not technically difficult, but they are artistically subtle. It's easy to get the math wrong. It's easy to end up with something that moves as required but doesn't quite feel right. Motive does the math and lets you focus on the feeling.
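The post doesn't spell out the math, but it helps to see the kind of spline segment such systems evaluate. A generic cubic Hermite segment (an illustration of the family, not Motive's documented dual cubic spline representation) interpolates the endpoint values p0, p1 and endpoint derivatives m0, m1:

h(t) = (2t^3 - 3t^2 + 1) p0 + (t^3 - 2t^2 + t) m0 + (-2t^3 + 3t^2) p1 + (t^3 - t^2) m1,  for 0 <= t <= 1

Matching both values and derivatives at every knot is what keeps the motion smooth; a sign error in any one of these basis polynomials produces exactly the "moves as required but doesn't quite feel right" artifact described above.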

Motive is scalable. It's designed to be extremely fast. It also has a tight memory footprint, based on dual cubic splines, that's smaller than traditional animation compression. Our hope is that you might consider using Motive as a high-performance back-end to your existing full-featured animation systems.

This initial release of Motive is feature-light since we focused our early efforts on doing something simple very quickly. We support procedural and spline-based animation, but we don't yet support data export from animation packages like Blender or Maya. Motive 1.0 is suitable for props -- trees, cameras, extremities -- but not fully rigged character models. Like all FPL technologies, Motive is open source and cross-platform. Please check out the discussion list, too.

What’s Fun Propulsion Labs at Google?

You might remember us from such Android games as Pie Noon, LiquidFun Paint, and VoltAir, and such cross-platform libraries as MathFu, LiquidFun, and FlatBuffers.

Want to learn more about our team? Check out this recent episode of Game On! with Todd Kerpelman for the scoop!


* Fun Propulsion Labs is a team within Google that's dedicated to advancing gaming on Android and other platforms.

Categories: Programming

The Microeconomics of a Project Driven Organization

Herding Cats - Glen Alleman - Thu, 04/09/2015 - 16:30

The notion that we can ignore - many times willfully - the microeconomics of decision making is common in some development domains. Any project-driven paradigm has many elements, each interacting with the others in random and nonlinear ways, in ways we may not even be able to understand when the maturity of the organization has not yet developed to the level needed to manage in the presence of uncertainty.

[Figure: the project-driven organization]

So When We Say Project What Do We Mean?

The term project has an official meaning in many domains. Work that has a finite duration is a good start. But then what is finite? Work that makes a change to an external condition. But what does change mean, and what is external? In most definitions, operations and maintenance are not budgeted as projects. There are accounting rules that describe projects as well. Once we land on an operational definition of the project, here's a notional picture of the range of projects.

When we hear a suggestion about any process for project management, we need to first establish a domain and a context in that domain to test the idea.

My favorite questionable conjecture is that we can make decisions about the spending of other people's money without estimating the outcomes of those decisions. Making decisions about an uncertain future is the basis of microeconomics.

One framework for making decisions in the presence of uncertainty is organizational governance. Without establishing a governance framework - ranging from one like that pictured below to no governance at all, just DO IT - it's difficult to have a meaningful conversation about the applicability of any project management process.

So when we hear a new and possibly counterintuitive suggestion, start by asking In What Governance Model Do You Think This Idea Might Be Applicable?

[Figure: PPP governance architecture]

 

Related articles

  • Decision Analysis and Software Project Management
  • Incremental Delivery of Features May Not Be Desirable
Categories: Project Management

The Microeconomics of Decision Making in the Presence of Uncertainty - Re-Deux

Herding Cats - Glen Alleman - Thu, 04/09/2015 - 14:59

Microeconomics is a branch of economics that studies the behavior of individuals and small impacting organizations in making decisions on the allocation of limited resources.

All engineering is constrained optimization. How do we take the resources we've been given and deliver the best outcomes? That's what microeconomics is. Unlike models of mechanical engineering or classical physics, the models of microeconomics are never precise. They are probabilistic, driven by the underlying statistical processes of the two primary actors - suppliers and consumers.

Let's look at both in light of the allocation of limited resources paradigm.

  • Supplier = development resources - these are limited in both time and capacity for work, and likely in talent as well, while producing latent defects, which cost time and money to remove.
  • Consumer = those paying for the development resources have limited time and money. Limited money is obvious: they have a budget. Limited time, since the time value of money is part of the Return on Capital equation used by the business. Committing capital (not real capital - software development is usually carried on the books as an expense) needs a time when that capital investment will start to return value.

In both cases time, money, and capacity for productive value are limited (scarce); they compete with each other and with the needs of both the supplier and the consumer. In addition, since the elasticity of labor costs is limited by the market, we can't simply buy cheaper to make up for time and capacity. It's done, of course, but always to the detriment of quality and actual productivity.

So cost is inelastic, time is inelastic, capacity for work is inelastic, and the other attributes of the developed product are constrained. The market need is likely constrained as well. Business needs are rarely elastic - oh, we really didn't need to pay people in the timekeeping system; let's just collect the time sheets, and we'll run payroll when that feature gets implemented.

Enough Knowing, Let's Have Some Doing

With the principles of microeconomics applied to software development, there is one KILLER issue that, if willfully ignored, ends the conversation for any business person trying to operate in the presence of limited resources - time, money, capacity for work.

The decisions being made about these limited resources are being made in the presence of uncertainty. This uncertainty - as mentioned - is based on random processes. Random processes produce imprecise data: data drawn from random variables. Random variables with variances, instability (stochastic processes), and nonlinear stochastic behavior.

Quick Diversion Into Random Variables

There are many mathematical definitions of random variables, but for this post let's use a simple one.

  • A variable is an attribute of a system or project that can take on multiple values. If the value of this variable is fixed, then, for example, the number of people on the project can be known by counting them and writing that number down. When someone asked, you could count and say 16.
  • When the values of the variable are random, the variable can take on a range of values just like the non-random variable, but we don't know exactly what those values will be when we want to use that variable to answer a question. If the variable is a random variable and someone asks what the cost of this project will be when it is done, you'll have to provide a range of values and the confidence for each of the numbers in that range.

A simple example - silly but illustrative - would be that HR wants to buy special shoes for the development team, with the company logo on them. If we could not for some reason (it doesn't matter why) measure the shoe size of all the males on our project, we could estimate how many shoes of what size would be needed from the statistical distribution of male shoe sizes for a large population of male coders.

[Figure: distribution of male shoe sizes]

This would get us close to how many shoes of what size we need to order. This is a notional example, so please don't place an order for actual shoes. But the underlying probability distribution of the values the random variable can take on can tell us about the people working on the project.
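To make the shoe example concrete, here is a minimal simulation sketch. The mean of 10.5 and standard deviation of 1.5 are assumed values for illustration, not data from the post, and the team size of 16 comes from the counting example above.

// Sketch: estimate a shoe order from a population distribution.
// Assumed (hypothetical) parameters: male shoe sizes ~ Normal(10.5, 1.5).
function normalSample(mean, sd) {
  // Box-Muller transform: two uniforms -> one normal deviate
  const u1 = Math.random(), u2 = Math.random();
  return mean + sd * Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
}

const TEAM_SIZE = 16;   // head count from the example above
const TRIALS = 100000;  // simulated draws from the population
const counts = {};

for (let i = 0; i < TRIALS; i++) {
  const size = Math.round(normalSample(10.5, 1.5)); // whole sizes only
  counts[size] = (counts[size] || 0) + 1;
}

// Scale the population distribution down to the 16-person team.
for (const size of Object.keys(counts).sort((a, b) => a - b)) {
  const expected = (counts[size] / TRIALS) * TEAM_SIZE;
  console.log('size ' + size + ': order ~' + expected.toFixed(1) + ' pairs');
}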

Since all the variables on any project are random variables, we can't know their exact values at any one time. But we can know their possible ranges and the probability of any specific value when asked to produce that value for making a decision.

The variability of the population values and its analysis should be seen not as a way of making precise predictions about the project outcomes, but as a way of ensuring that all relevant outcomes produced by these variables have been considered, that they have been evaluated appropriately, and that we have a reasonable sense of what will happen for the multitude of values produced by a specific variable. It provides a way of structuring our thinking about the problem.

Making Decisions In The Presence of Random Variables

To make a decision - a choice among several options - means making an opportunity cost decision based on random data. And if there is only one option, then the choice is either take it or don't.

This means the factors that go into that decision are themselves random variables: labor, productivity, defects, capacity, quality, usability, functionality, produced business capability, time. Each is a random variable, interacting in nonlinear ways with the other random variables.

To make a choice in the presence of this paradigm we must make estimates not only of the behavior of the variables, but also of the behavior of the outcomes.

In other words

To develop software in the presence of limited resources driven by uncertain processes for each resource (time, money, capacity, technical outcomes), we must ESTIMATE the behaviors of these variables that inform our decision.

It's that simple and it's that complex. Anyone conjecturing decisions can be made in the absence of estimates of the future outcomes of that decision is willfully ignoring the Microeconomics of business decision making in the software development domain.
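Here is a minimal sketch of what such an estimate looks like in practice - a Monte Carlo simulation over a few interacting random variables. Every number below is invented for illustration; nothing here comes from the original post.

// Sketch: Monte Carlo estimate of project cost from random variables.
// Hypothetical inputs: effort hours, defect rework fraction, labor rate.
function normalSample(mean, sd) {
  const u1 = Math.random(), u2 = Math.random();
  return mean + sd * Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
}

const TRIALS = 50000;
const totals = [];

for (let i = 0; i < TRIALS; i++) {
  const hours = normalSample(2000, 300);              // base effort
  const rework = Math.max(0, normalSample(0.2, 0.1)); // defect rework share
  const rate = normalSample(110, 10);                 // blended $/hour
  totals.push(hours * (1 + rework) * rate);
}

totals.sort((a, b) => a - b);
const pct = p => totals[Math.floor(p * TRIALS)];
console.log('50% confidence: $' + Math.round(pct(0.5)).toLocaleString());
console.log('80% confidence: $' + Math.round(pct(0.8)).toLocaleString());

The answer to "what will this cost?" is then a distribution, not a single number - exactly the range-plus-confidence form described in the random variable section above.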

For those interested in further exploring the core principles of the software development business beyond this willful ignorance, here's a starting point.

These are the tip of a big pile of books, papers, and journal articles on estimating software systems.

A Final Thought on Empirical Data

Making choices in the presence of uncertainty can be informed by several means:

  • We have data from the past
  • We have a model of the system that can be simulated
  • We have reference classes from which we can extract similar information

This is empirical data. But there are several critically important questions that must be answered if we are not going to be disappointed with our empirical data outcomes:

  • Is the past representative of the future?
  • Is the sample of data from the past sufficient to make sound forecasts of the future? The number of samples needed greatly influences the confidence intervals on the estimates of the future.

Calculating the number of samples needed for a specific level of confidence requires some statistics, but here's a place to start. Suffice it to say, those conjecturing estimates based on past performance (the number of story points in the past) will need to produce the confidence calculation before any non-trivial decisions are made on their data. Without those calculations, the use of past performance will be very sporty when spending other people's money.
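As a starting point, the textbook formula for the sample size needed to estimate a mean is n = (z * sigma / E)^2, where z is the z-score for the desired confidence, sigma the standard deviation, and E the acceptable margin of error. A sketch, with invented velocity numbers:

// Sketch: how many past samples before an average supports a decision?
// n = (z * sigma / E)^2 for estimating a mean.
function samplesNeeded(sd, marginOfError, z = 1.96 /* 95% confidence */) {
  return Math.ceil(Math.pow((z * sd) / marginOfError, 2));
}

// Hypothetical inputs: velocity SD of 8 story points, and we want the
// estimated mean velocity to be within +/- 3 points.
console.log(samplesNeeded(8, 3)); // => 28 past iterations

Twenty-eight iterations of history is far more than most teams have on hand, which is the point: a handful of past sprints rarely supports a high-confidence forecast.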

Thanks to Richard Askew for suggesting the addition of the random variable background.

Categories: Project Management

CodersTrust

From the Editor of Methods & Tools - Thu, 04/09/2015 - 14:21
At a time when crowdfunding and microfinance initiatives are growing in different domains, it is also important to ask ourselves what we could do to make the (software development) world a better place. In this domain, I would like to share with you the initiative of CodersTrust, which currently works with software developers in Bangladesh and India but plans to expand its activity to other areas of the world. The goal of CodersTrust is to "democratize access to education by creating an entirely new education infrastructure to serve the demand ...

Learning Opportunities for All

If you are not on my Pragmatic Manager email list, you might not know about these opportunities to explore several topics with me this month:

An Estimation hangout with Marcus Blankenship this Friday, April 10, 2:30pm EDT. If you have questions, please email me or Marcus. See the Do You Have Questions About Estimation post. Think of this hangout as a clinic, where I can take your questions about estimation and help you address your concerns.

In the Kitchener-Waterloo area April 29-30, I'm doing two workshops that promise to be quite fun as well as educational:

  • Discovering the Leader Inside of You
  • An Agile and Lean Approach to Managing Your Project Portfolio

To see the descriptions, see the KWSQA site.

You do not have to be a manager to participate in either of these workshops. You do need to be inquisitive and willing to try new things. I believe there is only room for two people in the leadership workshop. I think there is room for five people in the project portfolio workshop. Please do sign up now.

 

Categories: Project Management

The Black Magic of Systematically Reducing Linux OS Jitter

How do you systematically identify the sources of jitter and remove them one by one in a low-latency trading system? That was the question asked on the mechanical-sympathy email list. 

Gil Tene, Vice President of Technology and CTO, Co-Founder, Azul Systems gave the sort of answer that comes from the accumulated wisdom born from lots of real-life experience. It's an answer that needed sharing. And here it is:

Finding the cause of hiccups/jitters in a Linux system is black magic. You often look at the spikes and imagine "what could be causing this".

Based on empirical evidence (across many tens of sites thus far) and note-comparing with others, I use a list of "usual suspects" that I blame whenever they are not set to my liking and system-level hiccups are detected. Getting these settings right from the start often saves a bunch of playing around (and no, there is no "priority" to this - you should set them all right before looking for more advice...).

My current starting point for Linux systems that are interested in avoiding many-msec hiccup levels is:

Categories: Architecture

Quantity Versus Quality Is An Illusion

Making the Complex Simple - John Sonmez - Wed, 04/08/2015 - 16:00

In this video, I respond to a question that I’ve been asked a lot. I produce a lot of stuff, but am I sacrificing quality for quantity? I also talk about how I spent only 3 months to publish my book, Soft Skills.

The post Quantity Versus Quality Is An Illusion appeared first on Simple Programmer.

Categories: Programming

Sun Surveyor brings augmented reality to photographers using Google Maps APIs

Google Code Blog - Tue, 04/07/2015 - 23:31

Originally posted to the Google Geo Developers blog

Editor’s note: This post is written by Adam Ratana, developer of Sun Surveyor. Read how Sun Surveyor is using Google Maps APIs to help photographers capture the perfect photo.

Posted by Adam Ratana, developer of Sun Surveyor

I’m a photography enthusiast, and I’m always looking for ways to improve my work. That’s what led me to develop Sun Surveyor, an iOS and Android app that uses Google Maps APIs to visualize the location of the sun and the moon anywhere in the world. The app makes it easy to figure out when the natural lighting will be just right — and get the ideal shot.

Sun Surveyor uses augmented reality to overlay the paths of the sun and moon on a camera’s view, so you can see where in the sky they’ll be at a specific time and place. Photographers can use it to plan their shots ahead of time, and businesses can use it to gauge things like how best to align solar panels to make the most efficient use of the angle of the sun.

The app uses multiple Google Maps APIs, including the Elevation API, the Time Zone API, the Google Maps SDK for iOS and the Google Maps Android API. The Android API, which includes Street View, was particularly helpful. It allowed me to overlay the path of the sun and moon on any Street View location anywhere in the world. For programming details, see this blog post.

The following screen captures give you a sense of how the app works. They show overlays on top of the iconic Half Dome in Yosemite National Park. The first shows the paths of the sun (yellow line) and moon (blue line) over an aerial view of Yosemite Valley. The green line shows the distance between the photographer and the object to be photographed — in this case, Half Dome.

This next screen capture shows how the app looks when in Street View mode. Again, the yellow line shows the sun’s path, and the blue line shows the moon’s path. The green line represents the horizon. You can see how the app lets you plan the right time to get a shot of the sun behind Half Dome: in this particular instance, 8:06 am.

Nearly 500,000 people around the world have downloaded the free version of Sun Surveyor, and many have paid for the full edition. They’re taking remarkable photos as a result, and what started as a hobby for me has turned into a business — thanks to Google Maps APIs.

Categories: Programming

Neo4j: The learning to cycle dependency graph

Mark Needham - Tue, 04/07/2015 - 21:59

Over the past couple of weeks I've been reading about skill building and the breakdown of skills into more manageable chunks, and recently had a chance to break down the skills required to learn to cycle.

I initially sketched out the skill progression but quickly realised I had drawn a dependency graph and thought that putting it into Neo4j would simplify things.

I started out with the overall goal for cycling, which was to 'Be able to cycle through a public park':

MERGE (:Goal:Task {name: "Be able to cycle through a public park"})

This goal is easy for someone who’s already learnt to cycle but if we’re starting from scratch it’s a bit daunting so we need to break it down into a simpler skill that we can practice.

The mini algorithm that we’re going to employ for task breakdown is this:

  1. Can we do the given task now?
  2. Break the task down into something simpler and return to 1.

One of the things to keep in mind is that we won't get the breakdown perfect the first time, so we may need to change it. For a diagram drawn on a piece of paper this would be annoying, but in Neo4j it's just a simple refactoring.

Going back to cycling. Since the goal isn’t yet achievable we need to break that down into something a bit easier. Let’s start with something really simple:

MERGE (task:Task {name: "Take a few steps forward while standing over the bike"})
WITH task
MATCH (goal:Goal:Task {name: "Be able to cycle through a public park"})
MERGE (goal)-[:DEPENDS_ON]->(task)

In the first line we create our new task and then we connect it to our goal which we created earlier.

Graph  9

After we’ve got the hang of walking with the bike we want to get comfortable with cycling forward a few rotations while sitting on the bike but to do that we need to be able to get the bike moving from a standing start. We might also have another step where we cycle forward while standing on the bike as that might be slightly easier.

Let’s update our graph:

// First let's get rid of the relationship between our initial task and the goal
MATCH (initialTask:Task {name: "Take a few steps forward while standing over the bike"})
MATCH (goal:Goal {name: "Be able to cycle through a public park"})
MATCH (goal)-[rel:DEPENDS_ON]->(initialTask)
DELETE rel
 
WITH initialTask, goal, ["Get bike moving from standing start", "Cycle forward while standing", "Cycle forward while sitting"] AS newTasks
 
// Create some nodes for our new tasks
UNWIND newTasks AS newTask
MERGE (t:Task {name: newTask})
WITH initialTask, goal, COLLECT(t) AS newTasks
WITH initialTask, goal, newTasks, newTasks[0] AS firstTask, newTasks[-1] AS lastTask
 
// Connect the last task to the goal
MERGE (goal)-[:DEPENDS_ON]->(lastTask)
 
// And the first task to our initial task
MERGE (firstTask)-[:DEPENDS_ON]->(initialTask)
 
// And all the tasks to each other
FOREACH(i in RANGE(0, length(newTasks) - 2) |
  FOREACH(t1 in [newTasks[i]] | FOREACH(t2 in [newTasks[i+1]] |
    MERGE (t2)-[:DEPENDS_ON]->(t1) 
)))
Graph  10

We don’t strictly need to learn how to cycle while standing up – we could just go straight from getting the bike moving to cycling forward while sitting. Let’s update the graph to reflect that:

MATCH (sitting:Task {name: "Cycle forward while sitting"})
MATCH (moving:Task {name: "Get bike moving from standing start"})
MERGE (sitting)-[:DEPENDS_ON]->(moving)

Graph  11

Once we’ve got the hang of those tasks let’s add in a few more to get us closer to our goal:

WITH [
  {skill: "Controlled stop using brakes/feet", dependsOn: "Cycle forward while sitting"},
  {skill: "Steer around stationary objects", dependsOn: "Controlled stop using brakes/feet"},
  {skill: "Steer around people", dependsOn: "Steer around stationary objects"},
  {skill: "Navigate a small circular circuit", dependsOn: "Steer around stationary objects"},
  {skill: "Navigate a loop of a section of the park", dependsOn: "Navigate a small circular circuit"},
  {skill: "Navigate a loop of a section of the park", dependsOn: "Steer around people"},
  {skill: "Be able to cycle through a public park", dependsOn: "Navigate a loop of a section of the park"}
 
] AS newTasks
 
FOREACH(newTask in newTasks |
  MERGE (t1:Task {name: newTask.skill})   
  MERGE (t2:Task {name: newTask.dependsOn})
  MERGE (t1)-[:DEPENDS_ON]->(t2)
)

Finally let’s get rid of the relationship from our goal to ‘Cycle forward while sitting’ since we’ve replaced that with some intermediate steps:

MATCH (task:Task {name: "Cycle forward while sitting"})
WITH task
MATCH (goal:Goal:Task {name: "Be able to cycle through a public park"})
MATCH (goal)-[rel:DEPENDS_ON]->(task)
DELETE rel

And here’s what the final dependency graph looks like:

Graph  13

Although I put this into Neo4j in order to visualise the dependencies we can now query the data as well. For example, let’s say I know how to cycle forward while sitting on the bike. What steps are there between me and being able to cycle around a park?

MATCH (t:Task {name: "Cycle forward while sitting"}),
      (g:Goal {name: "Be able to cycle through a public park"}),
      path = shortestpath((g)-[:DEPENDS_ON*]->(t))
RETURN path

Graph  14

Or if we want a list of the tasks we need to do next we could restructure the query slightly:

MATCH (t:Task {name: "Cycle forward while sitting"}),
      (g:Goal {name: "Be able to cycle through a public park"}),
      path = shortestpath((t)<-[:DEPENDS_ON*]-(g))
WITH [n in nodes(path) | n.name] AS tasks
UNWIND tasks AS task
RETURN task
 
==> +--------------------------------------------+
==> | task                                       |
==> +--------------------------------------------+
==> | "Cycle forward while sitting"              |
==> | "Controlled stop using brakes/feet"        |
==> | "Steer around stationary objects"          |
==> | "Steer around people"                      |
==> | "Navigate a loop of a section of the park" |
==> | "Be able to cycle through a public park"   |
==> +--------------------------------------------+
==> 6 rows

That's all for now, but I think this is an interesting way of tracking how you'd learn a skill. I'm trying a similar approach for some statistics topics I'm learning about, but I've found the order of tasks isn't so linear there - interestingly, it's much more a graph than a tree.

Categories: Programming

Personas, Scenarios and Stories, Part 1

[Figure: logo glass entry screen]

Personas are a tool to develop an understanding of a user's needs. There are a number of ways personas can be used as teams develop and deliver value. The well-trodden path between personas and value is through user stories. The most effective way to navigate that path from personas to stories is using scenarios as a stepping stone. In the following two essays we will walk through a process to create personas, scenarios and user stories using the example of a beer glass logging app we have used in the past to describe Agile testing. The example is not meant to be complete, but rather to illustrate the path from personas to user stories.

Generating Personas

Many articles on using personas to generate user stories begin with a step that is very close to saying, "get some personas." When Alan Cooper introduced personas as archetypical users of the product or system being developed, he indicated that personas were to be generated based on research done on the target audience of the product. Unfortunately, without guidance on how to create a persona, the idea of doing research has gotten lost. What most people need is a lean process for developing personas. A simple process flow to develop personas is as follows:

  1. Brainstorm an initial set of potential personas
  2. Gather data through research and interviews
  3. Refine list of potential personas
  4. Create the initial persona profiles (use template)
  5. Review, sort and prioritize personas
  6. Finalize (-ish) personas
  7. Post or share personas

Step 1 Gather a cross section of the parties impacted by the system or product. For small teams I generally recommend using the Three Amigos (product owner, lead technical and lead testing personnel), while in larger projects with multiple teams the group tends to grow to include product owners and leads from each team. Use a standard brainstorming technique to generate an initial set of personas. This list will be refined as the process progresses. Common seed questions for the brainstorming session include:

  1. What type of people will use this product or system?
  2. How will this product or system be used?

The goal of the session is to generate an initial list of persona names; however these sessions typically expose a huge amount of information. Write everything down; it will be useful later. The list of personas will not be perfect, nor will it be complete. Do not be concerned as the process uses a review and refinement process to ensure the end product is solid.

Step 2 Gather background information using the initial set of personas as a guide. Research can take many forms, including behavioral surveys, interviews and accumulated team knowledge. If you are going to use surveys to collect data to enhance your personas and you are going to use the data for product decisions, hire a market research firm or organizational psychologist to construct and execute the survey. The most common technique for internal IT projects is interviews. The technique is fairly easy, cheap and generally effective. I recommend creating a formal set of questions before beginning the interview process. The goal of all of the research techniques is for the team to develop a deeper understanding of users and how they will interact with the system.

Step 3 Refine the list of potential personas. Synthesize the original list with the research gathered in step 2 to update the list of personas. The consolidated list should weed out any personas that show substantial overlaps and expose gaps in the original list. In circumstances where the team does not know much about the system or product being built, step 1 may not be possible until research is done; therefore step 2 may be the entry point into the process.

Step 4 Create the initial personas by filling in the standard template we documented in Personas: Nuts and Bolts. I generally find that the process of completing the template continues the refinement process by identifying gaps and overlaps.

Step 5 Review the personas with the WHOLE team(s). Use this step to continue the refinement process and as a mechanism to add detail to the descriptions and behaviors documented in the template. Do not hesitate to add, change or delete personas at this stage. In larger projects with multiple teams begin by doing individual team reviews then consolidate the personas based on the input. The review process allows the team to share a broader perspective. A whole group review also creates a common vision of “who” the product or system is being built for.

During the review process, prioritize the personas. Prioritization reflects the truth that not all personas are created equal; some types of users are simply more important to satisfy. Prioritization provides the team with a mechanism to make decisions. Some of the personas represent critical stakeholders and others represent those that are less involved. Consider using a three-level approach, with the top or primary persona being the most important to satisfy. The second-level persona is important, but not as important as the primary persona; I often find personas at this level provide direct support to the primary persona. The final category of personas is involved, but at a further remove. For example, a pilot would be a primary persona for an airline flight, the ground crew would be a secondary persona and the TSA agents would be a third-level role.

Step 6 Develop an agreement across the project that the personas that have been captured make sense and are as complete as they can be at this point. Always recognize that the personas can evolve and change. The act of generating a public agreement helps teams (or teams of teams) start on the same page.

Step 7 Post the personas on the team wall so that everyone can use them as a reference. In distributed teams, post the personas on the wall in each team room and on the team's homepage (SharePoint, wiki or any other tool being used).

Here is an example persona based on the Beer Logo Glass tracking application:

Persona Name: Tom “The Brewdog” Pinternick (primary persona)

Job: Day job is QA manager, but at night he is a home brewer.

Goal: As a home brewer, Tom likes to visit microbreweries to get a broad exposure to different styles of beer. To mark visits to microbreweries, Tom buys logo pint glasses. Logo pint glasses can be expensive and it does not make sense to collect the same glass twice; therefore he needs to make sure he keeps track of the glasses he has purchased.

Personality: Brewing collectables, including pint glasses, are a badge of honor. Microbreweries without logo pint glasses are anathema to the "Brew Dog." A visit without evidence is a visit that never occurred (regardless of memories).

Lifestyle: Tom “The Brewdog” Pinternick lives with his foot on the accelerator, whether family or work, he tends to be single-minded. The balance between work, hobbies and family is important, but so is keeping score.


Categories: Process Management

Ho Ho Ho! Google's Santa Tracker is now open source

Google Code Blog - Tue, 04/07/2015 - 18:40

Posted by Ankur Kotwal, Software Engineer

The holiday spirit is about giving and though we’re early into April, we’re still in that spirit. Today, we’re announcing that Google's Santa Tracker is now open source on GitHub at google/santa-tracker-web and google/santa-tracker-android. Now you can see how we’ve used many of our developer products to build a fun and engaging experience that runs across the web and Android.

Santa Tracker isn’t just about watching Santa’s progress as he delivers presents on December 24. Visitors can also have fun with the winter-inspired games and an interactive North Pole village while Santa prepares for his big journey throughout the holidays.

Below is a summary of what we’ve released as open source.

Android app
  • The Santa Tracker Android app is a single APK, supporting all devices running Ice Cream Sandwich (4.0) and up. The source code for the app can be found here.
  • Santa’s village is a canvas-based graphical launcher for videos, games and the tracker. In order to span 10,000 pixels in width, the village uses the Android resource hierarchy in a unique way to scale graphics without needing assets for multiple density buckets, thereby reducing the APK size.
  • Games on Santa Tracker Android are built using a combination of technologies such as JBox2D (gumball game), Android view hierarchy (memory match game) and OpenGL with a purpose-built rendering engine (jetpack game).
  • To help with user engagement, the app uses the App Indexing API to enable autocomplete support for Santa Tracker games from Google Search. This is done using deep linking.
Android Wear
  • The custom watch faces on Android Wear provide a personalized touch. Having Santa or one of his friendly elves tell the time brings a smile to all. Building custom watch faces is a lot of fun but providing a performant, battery friendly watch face requires certain considerations. The watch face source code can be found here.
  • Santa Tracker uses notifications to let users know when Santa has started his journey. The notifications are further enhanced to provide a great experience on wearables using custom backgrounds and actions that deep link into the app.
On the web
  • Santa Tracker on the web was built using Polymer, a powerful new library from the Chrome team based on Web Components. Santa Tracker’s use of Polymer demonstrates how easy it is to package code into reusable components. Every scene in Santa's village (games, videos, and interactive pages) is a custom element, only loaded when needed, minimizing the startup cost of Santa Tracker.
  • Santa Tracker's interactive and fun experience is built using the Web Animations API, a standardized JavaScript API for unifying animated content - a huge leap up from using CSS animations. Web Animations can be written imperatively, are supported in all modern browsers via polyfills, and Polymer uses them internally to create its amazing material design effects. Examples of these animations can be found using this GitHub search, and a minimal standalone example follows this list.
  • Santa believes in mobile first; this year's experience was built to be optimized for the mobile web, including a fully responsive design, touch gesture support using Polymer, and new features like meta-theme color and the application manifest for add to homescreen.
  • To provide exceptional localization support, a new i18n-msg component was built, a first for Web Components. Modeled after the Chrome extension i18n system, it enables live page-refresh for development but has a build step for optimization.
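For readers new to the Web Animations API, here is a minimal standalone sketch of the imperative style mentioned above. It is not taken from the Santa Tracker code; the element id is hypothetical.

// element.animate() takes keyframes plus timing options and returns an
// Animation object that can be controlled after the fact.
const sleigh = document.getElementById('sleigh'); // hypothetical element
const flight = sleigh.animate(
  [
    { transform: 'translateX(0)', opacity: 1 },
    { transform: 'translateX(400px)', opacity: 0.5 }
  ],
  { duration: 2000, iterations: Infinity, easing: 'ease-in-out' }
);

// The imperative handle is what CSS animations don't give you:
flight.pause();          // freeze mid-flight
flight.playbackRate = 2; // double speed
flight.play();           // resume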

Now that the source code is also available, developers can see many of the parts that come together to make Santa Tracker. We hope that developers are inspired to make their own magical experiences.

Categories: Programming

Valuing Your Work as an Agile Coach

Mike Cohn's Blog - Tue, 04/07/2015 - 15:00

How should we value our work as agile coaches and consultants? The way I do it is to ask myself if I will have had a positive long-term impact on a team or organization. I’m not particularly interested in short-term impacts. In fact, many coaching engagements could have a negative impact in the short term if I’ve done or suggested anything disruptive.

It would be nice if these changes were always easily and directly measurable. Unfortunately, they really aren’t. To measure the impact of my coaching, we would need at least two identical teams, two identical products, and at least a handful of years.

One team would build their product without my coaching. The other team would build theirs with my coaching. Their sales forces and all other supporting functions would need to be identical.

If all other factors were made equal, though, we could measure the impact of my coaching on that team. We’d simply look at sales for the two products over the handful of years and know which had done better.

In some ways, of course, it will be your clients who determine your value as an agile coach. But sometimes clients are not in a good position to judge value. Some clients want you to parrot back to their teams what they’ve said—regardless of whether that is valuable advice or not. Other clients really do want their teams to receive the best possible advice. These are, of course, the clients that we, as coaches and consultants, treasure.

So ultimately, we are the best judges of the value we add. We can bring a proper long-term view, but we need to look critically at our work. Is our advice helping? Is it pushing people to improve? Is it too disruptive? Not disruptive enough? Is it appropriate for the situation?

A slide in my Certified ScrumMaster class says that a ScrumMaster “unleashes the energy and intelligence of others.” In class I often joke that I want to go home at the end of the day and answer my wife’s question of, “What did you do today?” with, “I unleashed the energy and intelligence of others.”

But, on the days I can do that, I find I’ve delivered value to my clients.

New Advanced services in Apps Script

Google Code Blog - Mon, 04/06/2015 - 22:18

Posted by Kalyan Reddy, Developer Programs Engineer

Originally posted on the Google Apps Developer blog

Apps Script includes many built-in Google services for major products like Gmail and Drive, and lately, we've been working to add other APIs that developers have been clamoring for as advanced Google services. Today, we are launching seven more advanced services, including:

Like all other advanced services in Apps Script, they must first be enabled before use. Once enabled, they are just as easy to use as built-in Apps Script services -- the editor provides autocomplete, and the authentication flow is handled automatically.

Here is a sample using the Apps Activity advanced service that shows how to get a list of users that have performed an action on a particular Google Drive file.


function getUsersActivity() {
  var fileId = 'YOUR_FILE_ID_HERE';
  var pageToken;
  var users = {};
  do {
    var result = AppsActivity.Activities.list({
      'drive.fileId': fileId,
      'source': 'drive.google.com',
      'pageToken': pageToken
    });
    var activities = result.activities;
    for (var i = 0; i < activities.length; i++) {
      var events = activities[i].singleEvents;
      for (var j = 0; j < events.length; j++) {
        var event = events[j];
        users[event.user.name] = true;
      }
    }
    pageToken = result.nextPageToken;
  } while (pageToken);
  Logger.log(Object.keys(users));
}

This function uses the AppsActivity.Activities.list() method, passing in the required parameters drive.fileId and source, and uses page tokens to get the full list of activities. The full list of parameters this method accepts can be found in the Apps Activity API's reference documentation.

Categories: Programming

Rolling upgrade of Docker applications using CoreOS and Consul

Xebia Blog - Mon, 04/06/2015 - 21:17

In the previous blog post, we showed you how to setup a High Available Docker Container Application platform using CoreOS and Consul. In this short blog post, we will show you how easy it is to perform a rolling upgrade of a deployed application.

Architecture

In this blog post we will use the same architecture, but for the deployment we used Vagrant instead of Amazon AWS, which is a little bit snappier to use :-).

[Figure: CoreOS Vagrant cluster]

In the Vagrant architecture we have replaced the AWS Elastic Load Balancer with our own consul-http-router load balancer and moved it inside the cluster. This is an HA proxy which dynamically routes traffic to any of the http routers in the cluster.

Getting Started

In order to get your own container platform as a service running on vagrant, clone the repository and start your cluster.


git clone https://github.com/mvanholsteijn/coreos-container-platform-as-a-service.git
cd coreos-container-platform-as-a-service/vagrant
vagrant up
...
vagrant up
Bringing machine 'core-01' up with 'virtualbox' provider...
Bringing machine 'core-02' up with 'virtualbox' provider...
Bringing machine 'core-03' up with 'virtualbox' provider...
...
Check out the cluster

After the cluster has started, you can use the following command to check whether your cluster is fully operational. You should see 6 units running on each machine.

for node in 1 2 3 ; do
  vagrant ssh -c "systemctl | grep consul" core-0$node
done
...
consul-http-router-lb.service loaded active running consul-http-router-lb
consul-http-router.service loaded active running consul-http-router
consul-server-announcer.service loaded active running Consul Server Announcer
consul-server-registrator.service loaded active running Registrator
consul-server.service loaded active running Consul Server Agent

 

Deploying the application

Once the cluster is running, you can deploy the paas-monitor application. This happens in two stages. First you submit the template:

fleetctl submit paas-monitor\@.service

Then you load and start the new instances:

fleetctl load paas-monitor\@{1..6}.service
fleetctl start paas-monitor\@{1..6}.service
fleetctl list-units | grep paas-monitor

You can now see the paas-monitor in operation on your machine by opening http://paas-monitor.127.0.0.1.xip.io:8080 and clicking on start. You should see something like the table below. Leave this running while we update the application!

host               release  message              # of calls  avg response time  last response time
47ea72be3817:1337  v1       Hello World from v1  53          8                  11
cc4227a493d7:1337  v1       Hello World from v1  59          11                 8
04a58012910c:1337  v1       Hello World from v1  54          7                  6
090caf269f6a:1337  v1       Hello World from v1  58          7                  7
096d01a63714:1337  v1       Hello World from v1  53          7                  9
d43c0744622b:1337  v1       Hello World from v1  55          7                  6

Updating the application

Now we are going to update your application. Normally, you would expect to specify a higher version of the Docker image in the unit file. But instead of changing the version of the image to be executed, change the value of the environment variable RELEASE in the unit template file paas-monitor\@.service.

sed -i -e 's/--env RELEASE=[^ ]*/--env RELEASE=v2/' paas-monitor\@.service

Now that we have changed the unit template file, you should destroy the old unit file and submit the new one.

fleetctl destroy paas-monitor\@.service
fleetctl submit paas-monitor\@.service

Now you have two options, a slow one and a fast one.

Slow Option

The slow option is to iterate over the running instances, stopping them one by one, destroying them and starting a new instance based on the newly submitted template. Because this is boring, repetitive work, we have created a small script that does this :-)


./rolling-upgrade.sh paas-monitor\@.service

Fast Option

The fast option is to start 6 new instances and stop all 6 old ones after the new ones are running.

fleetctl load paas-monitor\@1{1..6}.service
fleetctl start paas-monitor\@1{1..6}.service
fleetctl list-units | grep 'paas-monitor\@1[1-6].service' | grep running
fleetctl stop paas-monitor@{1..6}.service
fleetctl destroy paas-monitor@{1..6}.service

When you watch your monitor, you should see the new instances appear one by one.

host               release  message              # of calls  avg response time  last response time
47ea72be3817:1337  v1       Hello World from v1  53          8                  11
cc4227a493d7:1337  v1       Hello World from v1  59          11                 8
04a58012910c:1337  v1       Hello World from v1  54          7                  6
090caf269f6a:1337  v1       Hello World from v1  58          7                  7
096d01a63714:1337  v1       Hello World from v1  53          7                  9
d43c0744622b:1337  v1       Hello World from v1  55          7                  6
fee39f857479:1337  v2       Hello World from v2  18          7                  9
99c1a5aa3b8b:1337  v2       Hello World from v2  2           7                  9

Conclusion

CoreOS and Consul provide all the basic functionality to manage your Docker containers and perform a rolling upgrade of your application.

 

What do we know about how Meerkat Works?

“The future is live. The future is real-time. The future is now.” I wrote that in 2010 about live video innovator Justin.tv (which pivoted into Twitch.tv). Five years later it appears the future is now once again.

Meerkat has burst on the scene with a viral vengeance, so I became curious. Meerkat is throwing around a lot of live video. It must be chewing up cash at an early-funding-round-crushing rate. How does it work?

Unfortunately, after digging deep, I found few specific details on their backend architecture. What do we know?

  • The cash burning surmise seems to be correct. Meerkat secured another $12 million in funding. The streams will continue to flow. Bandwidth is cheaper than it used to be, but it is still expensive. Aether Wu made an estimate over on Quora: consider a scale of 1m users online simultaneously. Every 20 minutes it costs 100k gigabytes, which means $4k per hour / $96k per day / $2.9m per month. So if we scale the business to 10 times bigger, it is about $1m per day. (A back-of-the-envelope check of this arithmetic follows this list.)

  • Meerkat was three years in development by an Israeli based team of up to 11 developers.

  • Meerkat can handle thousands of live streams while maintaining good video quality. Perhaps implementation details were discussed in a Meerkat session, but, well, you know. Ben Rubin, the thoughtful and nearly ubiquitous founder of Meerkat, wrote on ProductHunt that “I'm in love with HLS. For Meerkat use case HLS is better despite the 10-15 sec delay as it giving advantages in stable, crystal clear quality. We can use it to shift stream to audio only when connection is low, or do all sort of tricks.” HLS is HTTP Live Streaming.
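Working Wu's numbers through (a sketch; the only figures from the estimate are 1m simultaneous users, 100k GB per 20 minutes, and $4k per hour - the unit price and per-user bitrate are derived, not stated):

// Back-of-the-envelope check of the bandwidth estimate quoted above.
const users = 1e6;
const gbPer20Min = 100e3;
const costPerHour = 4000; // dollars

const gbPerHour = gbPer20Min * 3;                       // 300k GB/hour
const pricePerGB = costPerHour / gbPerHour;             // implied CDN price
const perUserGbps = gbPer20Min * 8 / users / (20 * 60); // GB -> Gb, per second

console.log('implied price: $' + pricePerGB.toFixed(4) + '/GB');                   // $0.0133/GB
console.log('implied stream: ' + (perUserGbps * 1000).toFixed(2) + ' Mbps/user');  // 0.67 Mbps
console.log('per day: $' + (costPerHour * 24).toLocaleString());                   // $96,000
console.log('per month: $' + (costPerHour * 24 * 30 / 1e6).toFixed(1) + 'M');      // $2.9M

An implied ~0.7 Mbps per viewer and ~$0.013/GB are both plausible 2015 numbers, so the estimate hangs together internally.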

That’s about it. While the backend architecture remains a mystery, what I did find is still very interesting. It’s the story of how a team creatively hunts and sifts through a space until they create/find that perfect blending of features that makes people fall in love. Twitter did that. SnapChat did that. Now Meerkat has done it. How did they do it?

Stats
Categories: Architecture

5 Ways to Destroy Your Productivity

Making the Complex Simple - John Sonmez - Mon, 04/06/2015 - 16:00

Hey you. Yeah, you. Want to know how to absolutely and utterly destroy your productivity? Good. You’ve come to the right place. Being productive is overrated. I mean really. What good does it get you? The more work you get done, the more work you get asked to do. So, here are a few quick […]

The post 5 Ways to Destroy Your Productivity appeared first on Simple Programmer.

Categories: Programming

Capability Maturity Levels and Implications on Software Estimating

Herding Cats - Glen Alleman - Mon, 04/06/2015 - 14:52

An estimate is the most knowledgeable statement you can make at a particular point in time regarding cost/effort, schedule, staffing, risk, and the ...ilities of the product or service. [1]

CMMI for Estimates

 

Immature versus Mature Software Organizations [3]

Setting sensible goals for improving software development processes requires understanding the difference between immature and mature organizations. In an immature organization, processes are generally improvised by practitioners and their management during the course of the project. Even if a process has been specified, it is not rigorously followed or enforced.

Immature organizations are reactionary, with managers focused on solving immediate crises. Schedules and budgets are routinely exceeded because they are not based on realistic estimates. When hard deadlines are imposed, product functionality and quality are often compromised to meet the schedule.

In immature organizations, there is no objective basis for judging product quality or for solving product or process problems. The result is that product quality is difficult to predict. Activities intended to enhance quality, such as reviews and testing, are often curtailed or eliminated when projects fall behind schedule.

Mature organizations possess an organization-wide ability to manage development and maintenance processes. The process is accurately communicated to existing staff and new employees, and work activities are carried out according to the planned process. The processes mandated are usable and consistent with the way the work actually gets done. These defined processes are updated when necessary, and improvements are developed through controlled pilot tests and/or cost benefit analyses. Roles and responsibilities within the defined process are clear throughout the project and across the organization.

Let's look at the landscape of maturity in estimating the work, from the point of view of those providing the funding for that work.

1. Initial

Projects are small, short, and while important to the customer, not likely critical to the success of the business in terms of cost and schedule. 

  • Informal or no estimating
  • When there are estimates, they are manual, without any process, and likely considered guesses

The result of this level of maturity is poor forecasting of the cost and schedule of the planned work - and surprises for those paying for the work.

2. Managed

Projects may be small, short, and possibly important. Some form of estimating, either from past experience or from decomposition of the planned work, is used to make linear projections of future cost, schedule, and technical performance.

This past performance is usually not adjusted for the variances of the past - it's just an average. As well, the linear average usually doesn't consider changes in the demand for work, technical differences in the work, and other uncertainties in the future of that work.

This is the Flaw of Averages approach to estimating. As well, the effort needed to decompose the work into same-sized chunks is the basis of all good estimating processes. In the space and defense business, the 44-day rule is used to bound the duration of work. This answers the question: how long are you willing to wait before you find out you're late? For us, the answer is no more than one accounting period. In practice, project status - physical percent complete - is taken every Thursday afternoon.
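The Flaw of Averages is easy to demonstrate. In this sketch (invented numbers), three parallel work streams each average 10 days, so a plan built on averages says 10 days - but the project finishes when the slowest stream does:

// Sketch: why planning on averages under-estimates parallel work.
function normalSample(mean, sd) {
  const u1 = Math.random(), u2 = Math.random();
  return mean + sd * Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
}

const TRIALS = 50000;
let sum = 0, onTime = 0;

for (let i = 0; i < TRIALS; i++) {
  // Finish when the slowest of three N(10, 3) streams finishes.
  const finish = Math.max(
    normalSample(10, 3), normalSample(10, 3), normalSample(10, 3));
  sum += finish;
  if (finish <= 10) onTime++;
}

console.log('expected finish: ' + (sum / TRIALS).toFixed(1) + ' days');                 // ~12.5
console.log('P(on time at 10 days): ' + (100 * onTime / TRIALS).toFixed(0) + '%');      // ~12%

The plan of averages is late about seven times out of eight, even though every input was estimated "correctly" on average.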

3. Defined

There is an estimating process, using recorded past performance and statistical adjustments of that past performance. Reference classes are used to model future performance from the past. Parametric estimates can be used with those reference classes or other estimating processes. Function points are common in enterprise IT projects, where interfaces to legacy systems, database topology, user interfaces, and transactions are the basis of the business processes.
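As a sketch of what a parametric model looks like, here is the COCOMO II effort equation (chosen as a familiar published example; the post doesn't name a specific model). A = 2.94 and the nominal exponent E = 1.0997 are the published model constants; the size and cost-driver inputs below are invented:

// Sketch: parametric effort estimate in the COCOMO II style.
// person-months = A * Size^E * (product of effort multipliers)
function cocomoEffortPM(ksloc, eaf = 1.0, exponent = 1.0997) {
  return 2.94 * Math.pow(ksloc, exponent) * eaf;
}

const size = 50; // thousand source lines, hypothetical
console.log(cocomoEffortPM(size).toFixed(0) + ' person-months nominal');       // ~217
console.log(cocomoEffortPM(size, 1.3).toFixed(0) + ' person-months, EAF 1.3'); // ~282

The calibrated constants and the reference data behind them are exactly the "recorded past performance" this maturity level depends on.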

The notion that we've never done this before, so how can we estimate? begs the question: do you have the right development team? This is a past performance issue. Why hire a team that has no understanding of the problem and then ask them to estimate the cost of the solution? You wouldn't hire a plumber to install a water system if she hadn't done it before in some way.

4. Quantitatively Managed

Measures, metrics, and architecture assessments - the Design Structure Matrix is one we use - are used to construct a model of the future. External databases are referenced to compare internal estimates with external experience.

5. Optimized 

There is an estimating organization that supports development, starting with the proposal and continuing through project closeout. As well, there is a risk management organization helping inform the estimates about possible undesirable outcomes in the future.

Resources

[1] Improving Estimate Maturity for More Successful Projects, SEER/Tracer Presentation, March 2010.

[2] Software Engineering Information Repository, Search Results on Estimates

[3] The Capability Maturity Model for Software

Categories: Project Management

SPaMCAST 336 – Yves Hanoulle, Communities and Coaching Retreats

http://www.spamcast.net

Listen Now

Subscribe on iTunes

In this episode of the Software Process and Measurement Cast we feature our interview with Yves Hanoulle, builder of community builders. We discussed collaboration, coaching retreats and the future of Agile. Yves is an Agile thought leader among thought leaders, and he shared his wisdom with the Software Process and Measurement Cast listeners.

Yves’ Bio:

Yves Hanoulle has taken on almost every role in the software development field, from software support, developer, trainer and scrum master to agile coach. Over the last 10 years, Yves has focused on agile coaching. Yves grows community builders. His personal goal is to make his customers independent from him as soon as possible. Yves is the inventor of the Who is Agile series of books and the co-inventor of the leadership game. Although he co-invented Pair Coaching and Coach Retreats, Yves is not interested in being a rock star coach inventing new methodologies; he would rather mix existing ideas like a thought disc jockey, adjusting to the needs of the audience.

Connect with Yves at:

Twitter: @yveshanoulle
LinkedIn: https://www.linkedin.com/in/yveshanoulle

Call to action!

Reviews of the podcast help to attract new listeners. Can you write a review of the Software Process and Measurement Cast and post it on the podcatcher of your choice? Whether you listen on iTunes or any other podcatcher, a review will help to grow the podcast! Thank you in advance!

Re-Read Saturday News

The Re-Read Saturday focus on Eliyahu M. Goldratt and Jeff Cox's The Goal: A Process of Ongoing Improvement began on February 21st. The Goal has been hugely influential because it introduced the Theory of Constraints, which is central to lean thinking. The book is written as a business novel. Visit the Software Process and Measurement Blog and catch up on the re-read.

Note: If you don't have a copy of the book, buy one. If you use the link below it will support the Software Process and Measurement blog and podcast. If not, use the link below and support the podcast at the same time!

Dead Tree Version or Kindle Version 

@stevena510 (Steven Adams) has recommended that the next re-read be Fred Brooks' masterpiece The Mythical Man-Month. I think it is a great idea.

Next

In the next SPaMCAST we feature our essay on Agile release planning *** MEG june 10 – 15 2013****.  Unless your project consists of one or two sprints and a cloud of dust (see three yards and a cloud of dust) you will need to tackle release planning.  It does not have to be as hard as many people want you to believe.  We will have new entries from the Software Sensei (Kim Pries) and Jo Ann Sweeney with her Explaining Change column.

Upcoming Events

QAI Quest 2015
April 20 -21 Atlanta, GA, USA
Scale Agile Testing Using the TMMi
http://www.qaiquest.org/2015/

DCG will also have a booth!

Next SPaMCast

The next Software Process and Measurement Cast will feature our interview with the builder of community builders, Yves Hanoulle.  Yves and I talked Agile communities, coaching retreats, why the factory metaphor for IT is harmful and the future of Agile. A wonderful interview, full of information and ideas that can improve your development environment!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management

SPaMCAST 336 – Yves Hanoulle, Communities and Coaching Retreats

Software Process and Measurement Cast - Sun, 04/05/2015 - 22:00

In this episode of the Software Process and Measurement Cast we feature our interview with Yves Hanoulle, builder of community builders.  We discussed collaboration, coaching retreats and the future of Agile.  Yves is an Agile thought leader among thought leaders, and he shared his wisdom with the Software Process and Measurement Cast listeners.

Yves' Bio:

Yves Hanoulle has taken on almost every role in the software development field, from software support and developer to trainer, scrum master and agile coach. Over the last 10 years, Yves has focused on agile coaching. Yves grows community builders. His personal goal is to make his customers independent of him as soon as possible. Yves is the inventor of the Who is Agile series of books and the co-inventor of the leadership game. Although he co-invented Pair Coaching and Coach Retreats, Yves is not interested in being a rock star coach inventing new methodologies; rather, he wants to mix existing ideas like a thought disc jockey, adjusting to the needs of the audience.

Connect with Yves at:

Twitter: @yveshanoulle
LinkedIn: https://www.linkedin.com/in/yveshanoulle

Call to action!

Reviews of the podcast help to attract new listeners.  Can you write a review of the Software Process and Measurement Cast and post it on the podcatcher of your choice?  Whether you listen on iTunes or any other podcatcher, a review will help to grow the podcast!  Thank you in advance!

Re-Read Saturday News

The Re-Read Saturday focus on Eliyahu M. Goldratt and Jeff Cox’s The Goal: A Process of Ongoing Improvement began on February 21st. The Goal has been hugely influential because it introduced the Theory of Constraints, which is central to lean thinking. The book is written as a business novel. Visit the Software Process and Measurement Blog and catch up on the re-read.

Note: If you don’t have a copy of the book, buy one.  If you use the link below, you will support the Software Process and Measurement blog and podcast at the same time!

Dead Tree Version or Kindle Version 

@stevena510 (Steven Adams) has recommended that the next re-read be Fred Brooks’ masterpiece The Mythical Man-Month.  I think it is a great idea.

Next

In the next SPaMCAST we will feature our essay on Agile release planning.  Unless your project consists of one or two sprints and a cloud of dust (see three yards and a cloud of dust), you will need to tackle release planning.  It does not have to be as hard as many people want you to believe.  We will also have new entries from the Software Sensei (Kim Pries) and Jo Ann Sweeney with her Explaining Change column.

Upcoming Events

QAI Quest 2015
April 20-21, Atlanta, GA, USA
Scale Agile Testing Using the TMMi
http://www.qaiquest.org/2015/

DCG will also have a booth!

Next SPaMCAST

The next Software Process and Measurement Cast will feature our interview with the builder of community builders, Yves Hanoulle.  Yves and I talked Agile communities, coaching retreats, why the factory metaphor for IT is harmful and the future of Agile. A wonderful interview, full of information and ideas that can improve your development environment!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Categories: Process Management

Increasing the tap area of UIButtons made with PaintCode

Xebia Blog - Sun, 04/05/2015 - 21:04

Developer or not, every iPhone user has experienced the case where he or she tries to tap a button (one with an image) and nothing happens. Most likely the user missed the button and pressed next to it. And that's usually not the fault of the user, but of the developer or designer, because the button is too small. The best solution is to use bigger icons (we're only talking about image buttons, not text-only buttons) so it's easier for the user to tap. But sometimes you (or your designer) just want to use a small icon, because it simply looks better. What do you do then?

For buttons with normal images this is very easy. Just make the button bigger. However, for buttons that draw themselves using PaintCode, this is slightly harder. In this blogpost I'll explain why and show two different ways to tackle this problem.

I will not go into the basics of PaintCode. If you're new to PaintCode, have a look at their website or read Freek Wielstra's blogpost Working With PaintCode And Interface Builder In XCode. I will be using the examples from his post as the basis for the examples here, so it's good to read his post first (though not strictly necessary).

To get a better understanding of how a UIButton with an image differs from a UIButton drawn with PaintCode, I will first show what a non-PaintCode button looks like. Below we have a PNG image of 25 by 19 pixels (for non-Retina). Apple recommends buttons be at least 44 by 44 points, so we increase the size of the button. The button has a gray background, and we can see that the image stays its original size and sits nicely in the center while the touch target becomes big enough for the user to easily tap it.

[Screenshot: a 44 by 44 point UIButton with a gray background and the small icon centered inside it]

It behaves like this because of the Control settings in the Attribute Inspector. We could align the content differently or even make it stretch, but stretching would be a bad idea since it's not a vector image and it would look pixelated.
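
For reference, here is a quick sketch of such a plain image button in code ("email-icon" is a hypothetical asset name); the icon keeps its intrinsic size while the frame provides the 44 by 44 point tap target:

import UIKit

// A plain image button: the small icon stays centered while the
// frame provides the recommended 44x44 point tap target.
// "email-icon" is a hypothetical asset name.
let plainButton = UIButton(frame: CGRect(x: 0, y: 0, width: 44, height: 44))
plainButton.setImage(UIImage(named: "email-icon"), forState: .Normal)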

That's why we love PaintCode graphics. They scale to any size, so we can use the same graphic in different places and at different screen scales (non-Retina, Retina @2x, and the iPhone 6 Plus, which uses @3x). But what if we don't want the graphic to scale up to the entire size of the button like above?

A bad solution would be to not use a frame in PaintCode. Yes, the image will stay the size you gave it when you put it in a button, but it will be aligned to the top left. Also, you won't be able to reuse it anywhere else.

If you are really sure that you will never use the image anywhere other than on your button, you can group your vector graphic and tell it to have flexible margins within the frame and a fixed width and height. Using the email icon from Freek Wielstra, this will look something like this:

[Screenshot: the grouped email icon in PaintCode with flexible margins and a fixed width and height]

You can verify that this works by changing the size of the frame. The email icon should always stay in the center, both horizontally and vertically, and it should stay the same size. The same will happen when you use this in a custom UIButton.

Now let's have a look at a better solution, one that will allow us to reuse the graphic at different sizes and have some padding around it in a button. It's easier than you might expect, and it follows the actual rules of a UIButton. The UIButton class has a property named contentEdgeInsets with the following description: "The inset or outset margins for the rectangle surrounding all of the button’s content." That sounds exactly like what we need; our PaintCode graphics are the content of the buttons, right? Unfortunately they're not treated as such, since we are doing the drawing ourselves, which does not adhere to that property. However, we can very easily make it adhere to it:

import UIKit

@IBDesignable
class EmailButton: UIButton {

    override func drawRect(rect: CGRect) {
        // Inset the drawing rect by the button's contentEdgeInsets so the
        // PaintCode graphic gets the same padding a normal image would.
        StyleKit.drawEmail(frame: UIEdgeInsetsInsetRect(rect, contentEdgeInsets), color: tintColor!)
    }

}

And that's really all there is to it. Just make sure to put a frame around your graphics in PaintCode and make them stretch to all sides of the frame. Now you can specify the content insets of your buttons in Interface Builder and your icon will have padding around it.
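
If you prefer to configure the button in code rather than Interface Builder, a minimal sketch (with inset values chosen arbitrarily for illustration) could look like this:

// Hypothetical usage: a 44x44 point button, with the email graphic
// drawn inside 10-12 point margins on each side.
let emailButton = EmailButton(frame: CGRect(x: 0, y: 0, width: 44, height: 44))
emailButton.contentEdgeInsets = UIEdgeInsets(top: 12, left: 10, bottom: 12, right: 10)
emailButton.setNeedsDisplay() // redraw after changing the insets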

[Screenshot: the EmailButton in Interface Builder with content insets applied, showing padding around the email icon]