
Software Development Blogs: Programming, Software Testing, Agile Project Management


Feed aggregator

Two Teams or Not: First Do No Harm (Part 1)

 

Restroom Closed Sign

Sometimes a process change is required!

Coaching is a function of listening, asking questions and then listening some more.  All of this listening and talking has a goal: to lead those being coached down a path of self-discovery and to help them recognize the right choice for action or inaction.  Sometimes the right question is not a question at all, but rather an exercise in visualization.

Recently, when a long-time reader and listener came to me with a question about a team with two sub-teams that were not collaborating well, I saw several paths to suggest.  The first set of paths focused on how people behave during classic Scrum meetings and how the team could structure stories.  However, another path presented itself as I continued to consider options based on the question.

As a reminder, the team is composed of 8 – 10 people using Scrum, but the team operates in two basic silos. One subset works on UI-related stories while a second focuses on backend-related stories. Let’s pretend that after a long discussion with the team on whether there were really two teams in one or whether splitting the stories differently would address the issue, the team was still unsure how they wanted to address the problem.

Another path to self-discovery is to start at “nothing” and determine if the siloization is causing substantial problems. I have found that many times teams feel powerless to address process and organizational structure issues unless they can visualize the problem. Visualization takes the problem out of the theoretical (something feels wrong but I don’t know what it is) and makes it tangible. This is where kanban – or in this case, Scrumban – is valuable as a tool to help the team to identify their own problem. A simple approach to consider would include the following steps:

  1. Visualize the workflow. Identify the major steps the team performs to deliver functionality.  Begin with the backlog and end when your team has completed working on the story (hopefully, this results in the functionality being in the hands of users).  On a whiteboard (butcher paper and sticky notes also work), write the steps across the top with the backlog on the far left and done/production on the far right.  Arrange the steps in the order they happen.

Consider how the tasks related to the two silos interact. If you have two standalone workflows that are independent of each other (independence defined as: each sub-team can draw a story, complete their steps, and put the functionality into production), you have two separate teams living under a single roof. Then the question is: is this a bad thing?  It might not be a problem. Until that issue is tackled, relax and move forward as two teams using Scrumban, or revert to the current method (there is no harm from visualization).  For all other scenarios, go to the next step.  Note: visualization can expose all sorts of process problems that do not relate to the two-team issue.  I suggest not making any process changes until you take the next two steps, which position the team to collect data and structured experience.

In the next installment we will progress from visualization to setting work-in-process (WIP) limits, doing work, and then using data to recognize problems and make changes which lead to a healthy team.


Categories: Process Management

New features for reviews and experiments in Google Play Developer Console app

Android Developers Blog - Wed, 08/10/2016 - 20:07

Posted by Kobi Glick, Google Play team

With over one million apps published through the Google Play Developer Console, we know how important it is to publish with confidence, acquire users, learn about them, and manage your business. Whether reacting to a critical performance issue or responding to a negative review, checking on your apps when and where you need to is invaluable.

The Google Play Developer Console app, launched in May, has already helped thousands of developers stay informed of crucial business updates on the go.

We’re excited to tell you about new features, available today:

  • Receive notifications about new reviews
  • Use filters to find the reviews you want
  • Review and apply store listing experiment results
  • Increase the percentage of a staged rollout, or halt a bad staged rollout

Download the Developer Console app on Google Play and stay on top of your apps and games, wherever you are! Also, get the Playbook for Developers app to stay up-to-date with more features and best practices that will help you grow a successful business on Google Play.

Categories: Programming

Adding a bit more reality to your augmented reality apps with Tango

Google Code Blog - Wed, 08/10/2016 - 19:14

Posted by Sean Kirmani, Software Engineering Intern, Tango

Augmented reality scenes, where a virtual object is placed in a real environment, can surprise and delight people whether they’re playing with dominoes or trying to catch monsters. But without support for environmental lighting, these virtual objects can stick out rather than blend in with their environments. Ambient lighting should bleed onto an object, real objects should be seen in reflective surfaces, and shade should darken a virtual object.

Tango-enabled devices can see the world like we do, and they’re designed to bring mobile augmented reality closer to real reality. To help bring virtual objects to life, we’ve updated the Tango Unity SDK to enable developers to add environmental lighting to their Tango apps. Here’s how to get started:

Let’s dive in!

Before we begin, you’ll need to download the Tango Unity SDK. Then you can follow the steps below to make your reality a little brighter.

Step 1: Create a new Unity project and import the Tango SDK package into the project.

Step 2: Create a new scene. If you need help with this, check out the solar system tutorial from a previous post. Then you’ll add Tango Manager and Tango AR Camera prefabs to your scene and remove the default Main Camera game object. Also remove the artificial directional light. We won’t need that anymore. After doing this, you should see the scene hierarchy like this:

Step 3: In the Tango Manager game object, you’ll want to check Enable Video Overlay and set the method to Texture and Raw Bytes.

Step 4: Under Tango AR Camera, look for the Tango Environmental Lighting component. Make sure that the Enable Environmental Lighting checkbox is checked.

Step 5: Add the game object that you’d like to be environmentally lit to the scene. In our example, we’ll be using a pool ball. So let’s add a new Sphere.

Step 6: Let’s create a new material for our sphere. Go to Create > Material. We’ll be using our environmental lighting shader on this object. Under Shader, select Tango > Environmental Lighting > Standard.

Step 7: Let’s add a texture to our pool ball and tweak our smoothness parameter. The higher the smoothness, the more reflective our object becomes. Rougher objects have more of a diffuse lighting that is softer and spreads over the surface of the object. You can download the pool_ball_texture and import it into your project.

Step 8: Add your new material to your sphere, so you have a nicer looking pool ball.

Step 9: Compile and run the application again. You should be able to see the environment-lit pool ball now!

You can also follow our previous post to place your pool ball on surfaces; you don’t have to worry about your sphere rolling off the surface. Here are some comparison pictures of the pool ball with a static artificial light (left) and with environment lighting (right).

We hope you enjoyed this tutorial combining the joy of environmental lighting with the magic of AR. Stay tuned to this blog for more AR updates and tutorials!

We’re just getting started!

You’ve just created a more realistically lit pool ball that lives in AR. That’s a great start, but there’s a lot more you can do to make a high performance smartphone AR application. Check out our Unity example code on GitHub (especially the Augmented Reality example) to learn more about building a good smartphone AR application.

Categories: Programming

Android Developer Story: Hole19 improves user retention with Android Wear

Android Developers Blog - Wed, 08/10/2016 - 18:48

Posted by Lily Sheringham, Google Play team

Based in Lisbon, Portugal, Hole19 is a golfing app which assists golfers before, during, and after their golfing journey with GPS and a digital scorecard. The app connects the golfing community with shared statistics for performance and golf courses, and now has close to 1 million users across all platforms.

Watch Anthony Douglas, Founder & CEO, and Fábio Carballo, Head Android Developer, explain how Hole19 doubled its number of Android Wear users in 6 months, and improved user engagement and retention on the platform. Also, hear how they are using APIs and the latest Wear 2.0 features to connect users to their golfing data and improve the user experience.


Learn more about how to get started with Android Wear and get the Playbook for Developers app to stay up-to-date with more features and best practices that will help you grow a successful business on Google Play.

Categories: Programming

Stuff The Internet Says On Scalability For August 12th, 2016

Hey, it's HighScalability time:

 

 

The big middle finger to the Olympic Committee. They pulled this video of the incredibly beautiful Olympic cauldron at Rio.

 

If you like this sort of Stuff then please support me on Patreon.
  • 25 years ago: the first website went online; $236M: Pokemon Go revenue in 5 weeks in 3 countries; Several thousand: work on Apple maps; 2500 Nimitz Carriers: weight of iPhone if implemented using tube transistors; $50 trillion: cost of iPhone in 1950, economic output of the world in your hand; 1000x: faster phase-change RAM; 15lbs: Americans heavier than 20 years ago; 2 years: for hacking the IRS; 3.6PB: hypothetical storage pod based on 60 TB SSD; 330,000: cash registers hacked; 162%: increased love for electric cars in China; 

  • Quotable Quotes:
    • @carllerche: it is hard to imagine how a node app could get closer to the metal with only 20MM LOC between the app and the hardware.
    • David Heinemeier Hansson (RoR)~ Lots and lots of huge systems that are running the gosh darn Internet are built by remote people operating asynchronously. You don't think that's good enough for your little shop?
    • Cesarini: Some frameworks that try to automate activities end up failing to hide complexity. They limit the trade-offs you can make, so they cater only to a subset of systems, often with very detailed requirements. 
    • "Uncle" Bob Martin: I have lived through 22 orders of magnitude of growth in hardware.
    • Jovanovic: To use Bitcoin for real-time trades, we need to eliminate its lazy fork-resolution mechanism and adopt strong consistency, a more proactive approach that guarantees transaction persistence.
    • Pedro Ramalhete: one latency distribution plot is worth a thousand throughput measurements
    • @n1ko_w1ll: Impressive numbers: - 80% cut code with #scala - responsive at 90% load with #akka
    • @samkroon: So Aussie government is asking 20 million ppl to login to one web site on the same night... Fail. Should have gone #serverless. #census2016
    • @caitie: "My contribution to RPC is not to make another system based on RPC" @cmeik #NikeTechTalks
    • @krisajenkins: This is your return type: Int / This is your return type on microservices: IO / (Logger (Either HttpError Int)) Microservices: Know the risks.
    • @nosqlonsql: Latency drives throughput if you cannot achieve enough concurrency. Kafka vs Chronicle. Must read by @PeterLawrey
    • reddit: Today's date is 100/1000/10000 in binary
    • @caitie: "The languages we associate with distributed programming are really concurrent languages" @cmeik #NikeTechTalks
    • @goserverless: Lambda down :( #aws #serverless
    • @pkanavos: @goserverless I think I'll PaaS
    • Jan Wedel: So if you plan to build an application from scratch and it is only meant to be used in on-premise scenarios as described, you probably shouldn't go for a microservice architecture.
    • @bmoesta: Any industry that solely focuses on efficiency innovation is on the verge of death. Disruptive innovations that drive progress drive growth
    • flak: It’s quite likely that your crypto will explode sooner or later, and it’s possible that random numbers will be implicated, but it’s very unlikely that some USB gizmo promising “true random” at kilobits per second will save you. Save your money instead.

  • Imagine how much the world has changed in those 25 years. The world's first website went online 25 years ago today. Without the Web, the Internet would probably still be a backwater for researchers. The Web was the Internet's killer app. It's hard to imagine Pokemon is Augmented Reality's killer app. AR needs its "let the people make it bigger and better" technology. Given the balkanization of AR into proprietary silos, AR may never have its Web moment. Will there be an HTTP for AR?

  • The phrase "small, reprogrammable quantum computer" doesn't sound remotely present-tense, but it is: Shantanu Debnath and colleagues at the University of Maryland reveal their new device can solve three algorithms using quantum effects to perform calculations in a single step, where a normal computer would require several operations. Although the new device consists of just five bits of quantum information (qubits), the team said it had the potential to be scaled up to a larger computer...the key to the new device was a system of laser pulses that drove the quantum logic gates, which operate like the switches and transistors that power ordinary computers.

  • Turning programmers into a proper profession, like doctors, is not the way to go. How much do doctors innovate? Very little. Doctors as a profession have been pounded into their current shape by two oppressors: fear of lawsuits and educational debt. Doctors are bound by best practices and oaths to do nothing interesting. What must programmers do constantly? Innovate and do the interesting. By not being a profession we are free to do harm, yes, but we are also able to create. Creation is a better failure mode than ossification. "Uncle" Bob Martin - "The Future of Programming". Nice gloss by Eric Fleming: Long story short, this was really two talks in one. The first speech was about progress in hardware and software from 1945 to 2015. The second talk is about how there is so much growth in the programming field that there are too many young inexperienced people to do it right, which necessitates some self-regulatory body to bring young professionals into the flock. Ironically, the talk he didn't intend to give, the first one, is far more interesting than the talk he did give about how to fix the growing inexperience in the industry.

  • Don't let what happened in Turkey happen to your coup attempt. Learn from experience. Here's your step-by-step guide on How to Overthrow a Government. Presented at, you may be surprised to hear, DefCon. First select from a menu of three regime-change methods: elections, coups and revolution. Next select a crack insurgency team from a handy wizard interface. Then there's a drop down list of intelligence gathering resources and funding options. After a few more clicks just press Go and you have your revolution (you'll certainly choose revolution, you get so many more points that way).

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Quote of the Month August 2016

From the Editor of Methods & Tools - Wed, 08/10/2016 - 09:29
Are there unbreakable laws ruling the process of software development? I asked myself this question while reflecting on a recent project, and the answer leads to many conclusions, some already known and some more revealing. Scientific laws reflect reality and cannot be broken. They have strong implications onto how we build things. For instance, there […]

Avoid Two Teams in One if Possible!

How can this team really work together?

I recently got a question from a long-time reader and listener.  I have removed the name to ensure confidentiality.

Context:

  • The person who asked the question is an experienced Agile leader.
  • The team is not all technically-equal full-stack developers; some developers work on UI stories and others work on backend stories.
  • The team has 8-10 people.

The Problem:

  • During story grooming/sizing, the entire team does not participate equally to offer up their points. UI developers participate on UI stories and are reluctant to chime in on backend work, and vice-versa.

The Question:

  • Scrum seeks to involve the entire team. How can I get everyone involved (or should I)?

Both questions are interesting.  I answered with some advice and some follow-up questions.

Further Thoughts Part 1:

  1. Do you really have two teams that you have grouped together into one for organizational purposes?  Are the stories on your backlog related to each other or could you have two separate backlogs with the resultant code implemented separately?

It is possible that although the organization has decided that this group of people is a single team, they really represent two separate teams serving different masters and pursuing separate goals.  If that is true, there is no reason for them to act as a single team.  That said, I suspect the segregation driven by specialization is the real issue.  The leader (who may or may not be the organizational manager) needs to work on breaking down silos.  Historically, organizations have adopted the manufacturing model as a tool to increase task efficiency. The manufacturing model does not fit software development and maintenance perfectly and often causes local optimizations, which rarely produce the most output (flow) from the system.  Therefore . . .

Further Thoughts Part 2:

  1. Could you reshape your stories so that they would include both UI and backend programmers?  This would create a scenario where they have to collaborate and it makes co-estimating more than an academic exercise.

Reshape the user stories to represent thin slices of the functionality so that someone from each silo needs to be involved in order to deliver. Functionally slicing the stories creates a reason for UI and backend personnel to coordinate and collaborate.  Creating scenarios that both facilitate and require collaboration will give each silo a reason to work together in deciding how much work to commit to in a sprint as a team.  The thin functional slicing drives home the point that unless the story works in production and meets the business need, it isn’t done.  Note: sometimes a story will only impact the back or front end (although I suspect there would need to be testing), but these should be relatively rare IF this is a single team.

Further Thoughts Part 3:

  1. Perhaps this is a scenario where pairing might be an interesting approach to getting team members to have a single vision.

The XP practice of pair programming is when two people and one keyboard collaborate on programming. In this scenario, put a UI and backend person together with a single story, a single keyboard and then lock the door (ok, that’s a little hyperbole). This practice will help break down barriers. Another possible solution is to try mob programming on the first day of the sprint (one keyboard and the WHOLE team collaborating together). Mob programming on the first day of a sprint is a fantastic tool to get the team moving in the same direction and to focus on common issues.

There are other possibilities, such as using a test-first approach (perhaps starting with acceptance test-driven development, ATDD), to help involve the business community.  I have occasionally used ATDD to help get lots of eyes on a problem.  All of these topics should be addressed in a retrospective. Until the team recognizes that they have a problem, it will be tough for them to commit to a change.

 

 


Categories: Process Management

Application Architecture and Ransomware

Coding the Architecture - Simon Brown - Tue, 08/09/2016 - 20:00
Ransomware and Cryptolocker

Ransomware is an increasing threat to many organisations - I recently had a conversation with a (non-IT) friend whose employer had been affected, which is why I’m writing this. These are attacks where a system or data are made inaccessible until a ransom is paid. This form of extortion actually dates back to the 1980s but recent variants, such as Cryptolocker, are very dangerous and destructive on modern networks.

Often the initial infection is via a phishing email that contains a link to a website that, if clicked, will download the malware. This will scan all the files that the user has access to and start encrypting them. Once the files are encrypted, the user will be sent a message telling them of the infection and offering to decrypt the files in return for payment (usually in bitcoins). Of course, the user has no guarantee that their files will be decrypted even if the ransom is paid.

Applications and Processes

If an individual's machine is infected then they might lose all their personal documents. If they are using remote drives and shares, which have multiple users, then the infection may also lock other people's files. If a user has access to a large number of files across an organisation then this could be devastating.

These are all files that a person has access to. This includes any files used by applications along with documents etc. Therefore if a developer or operational user becomes infected, then the system files they have access to can be affected. It’s very common for technical employees to have access to the files of production servers in order to make issue resolution easy. For example: log files, configuration files, data exports/imports etc.

If the technical users have write access to a mapped drive on a production server then it is trivial for the malware to encrypt these files. This may take down the service (if runtime files are affected) or even destroy the data making the service impossible to run even after a reinstall. Remember that your databases will ultimately have their data stored in files on a disk somewhere.

If people with elevated privileges are infected, you can lose entire systems as well as that person's individual files.

Preventative Actions

I won't give advice here on Endpoint Protection (antiviruses etc.) as that is out of scope for this blog, but there are many data related actions you should consider with respect to your applications.

Audit

Many of you will be reading this and thinking "well we don't allow access as you've described here" but technical staff will set up systems to make their jobs easier. Has your organisation ever performed a data audit and classification? Do you know what files, shares and sections of your network each user has access to? If you haven't, then I'd strongly advise you do so - you may be surprised at what you find. There are many commercial and free tools to assist you in doing this.
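
Even before you reach for a dedicated audit tool, a small script can give you a feel for the blast radius of a single account. The sketch below (Python, with a made-up mount point; it is an illustration, not one of the tools referred to above) walks a mapped share and lists every file the account running it could overwrite, and therefore encrypt. A real audit would also need to cover group memberships, ACLs and shares that aren't mounted.

    # A minimal illustration, not a real audit tool: walk a mapped share and list
    # every file that the account running this script can write to - in other words,
    # every file that ransomware running under this account could encrypt.
    # SHARE_ROOT is a hypothetical mount point; point it at the share you want to check.
    import os

    SHARE_ROOT = "/mnt/prod-share"  # placeholder path

    def writable_files(root):
        """Yield the paths under root that the current user has write access to."""
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if os.access(path, os.W_OK):
                    yield path

    if __name__ == "__main__":
        total = 0
        for path in writable_files(SHARE_ROOT):
            print(path)
            total += 1
        print(f"{total} files writable by this account under {SHARE_ROOT}")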

Restrict user access

You should define your users, what groups they are in and what data they have access to. This is good practice anyway (for reasons of privacy, data loss prevention etc) but if you reduce the total number of files accessible, then any infection will have less effect.

File Permissions

If someone really needs access to files do they require write access? Log files and configuration files are a perfect example. A user shouldn't be writing to a log file and if they want to change some configuration then they should go through your normal release process rather than hacking it in manually. If you can't release configuration quickly enough, then your release process may be your real issue...

Don't share users between people and applications

A person shouldn't be using an account used by an application and the applications shouldn't be using personal accounts. Again you may claim this isn't happening but technical users often take shortcuts like this to release quickly (or get around approval processes). A good audit should pick up on this.

Don't use the same user for all applications (or use root!)

It's tempting (for ease of management) to create a single account and get all applications to run as this account. If this account is compromised then all data for all applications are vulnerable. Use specific accounts for applications to reduce lateral movement between systems.

Don't give administration permissions to interactive accounts

If a login account is used to run a web browser or email then it should have restricted permissions. Likewise any administrative account should not be able to run a web browser or email. Separate the concerns!

Analyse your Backup Policy

How do you back up your data? If you are using online backups that are accessible to an infected user, then all your backups may get corrupted too! Maybe you should consider using WORM (write once read many) technology, or at least use separate processes to move and permission backups appropriately once they have been taken.

Some malware may be stealthy and stay on your system for a long time before making itself known. Therefore incremental backups can be corrupted far back in time. Make sure you regularly test your restoration processes too.
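
The article doesn't prescribe a particular product, but if your backups land in object storage, one way to approximate WORM behaviour is a default retention lock that even the credentials doing the backups cannot shorten. The snippet below is a hedged sketch using AWS S3 Object Lock via boto3; the bucket name and retention period are placeholders, and Object Lock normally has to be enabled when the bucket is created. Other storage providers offer similar immutability features.

    # Sketch: give a backup bucket a default compliance-mode retention rule so that
    # newly written objects cannot be overwritten or deleted for 30 days, even by the
    # account that wrote them. Bucket name and retention period are placeholders, and
    # the bucket must already have Object Lock enabled.
    import boto3

    s3 = boto3.client("s3")

    s3.put_object_lock_configuration(
        Bucket="example-backup-bucket",  # hypothetical bucket
        ObjectLockConfiguration={
            "ObjectLockEnabled": "Enabled",
            "Rule": {
                "DefaultRetention": {
                    "Mode": "COMPLIANCE",  # compliance mode: the retention cannot be shortened or removed
                    "Days": 30,
                }
            },
        },
    )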

Conclusion

It's important to remember that your data is the most important part of your application and valuable to your organisation. If something has value then nefarious parties can seek to take advantage of this. It's hard to stop some attacks but you can minimise the damage if you are attacked.

The architecture of a system should take into account where data is stored, how it is permissioned and who/what has access to it. It's very easy to become obsessed with the latest design patterns but basic data management is important and shouldn't be forgotten.

Categories: Architecture

Expand Your Global Reach on Google Play With New Language and Country Analytics

Android Developers Blog - Tue, 08/09/2016 - 18:35

Posted by Rahim Nathwani, Product Manager, App Translation Service

With users in 190 countries around the world, Google Play offers you a truly global audience for your apps and games. Localization is one of the most powerful ways to connect with people in different places, which is why we launched translation support for in-app purchase and Universal App Campaigns earlier this year. With over 30 language translation options available via the Developer Console, we updated our app translation service to help you select the most relevant languages, making it quick and easy to get started.

With the launch of new language and country analytics, you gain access to app install analysis on Google Play, including:

  • Information on the top languages and countries where apps have been installed, broken down to the level of your app’s category
  • The percentage of installs that come from users of those languages
  • Further information to help inform your go-to-market plans for these countries

To make ordering translations easier, we show language bundles that you can add to your order in a single click.

To get started, select Manage translations -> Purchase translations from the Store Listing page in the Google Play Developer Console.

Categories: Programming

Daydream Labs: positive social experiences in VR

Google Code Blog - Tue, 08/09/2016 - 17:52

Posted by Robbie Tilton, UX Designer, Google VR

At Daydream Labs, we have experimented with social interactions in VR. Just like in real reality, people naturally want to share and connect with others in VR. As developers and designers, we are excited to build social experiences that are fun and easy to use—but it’s just as important to make it safe and comfortable for all involved. Over the last year, we’ve learned a few ways to nudge people towards positive social experiences.

What can happen without clear social norms

People are curious and will test the limits of your VR experience. For example, when some people join a multiplayer app or game, they might wonder if they can reach their hand through another player’s head or stand inside another avatar’s body. Even with good intentions, this can make other people feel unsafe or uncomfortable.

For example, in a shopping experiment we built for the HTC Vive, two people could enter a virtual storefront and try on different hats, sunglasses, and accessories. There was no limit to how or where they could place a virtual accessory, so some people stuck hats on friends anywhere they would stick—like in front of their eyes. This had the unfortunate effect of blocking their vision. If they couldn’t remove the hat in front of their eyes with their controllers, they had no other recourse than to take off their headset and end their VR experience.


Protecting user safety

Everyone should feel safe and comfortable in VR. If we can anticipate the actions of others, then we may be able to discourage negative social behavior before it starts. For example, by designing personal space around each user, you can prevent other people from invading that personal space.

We built an experiment around playing poker where we tried new ways to discourage trolling. If someone left their seat at the poker table, their environment desaturated to black and white and their avatar would disappear from the other player’s view. A glowing blue personal space bubble would guide the person back to their seat. We found it’s enough to prevent a player from approaching their opponents to steal chips or invade personal space.


Reward positive behavior

If you want people to interact in positive ways—like high-fiving

Categories: Programming

10 Gameday Failure Testing Scenarios from Obama for America

I have dozens if not hundreds of half finished articles and snippets of ideas in the haunted house that is my Google Docs. Walking the house around midnight, with the lights turned off, of course, I stumbled upon one ghost that has been haunting me since 2012. It is time to perform the ritual of exorcism by just publishing something.

You may or may not remember Obama for America, which in 2012 had a staff of 120 people that built and maintained the infrastructure that helped get out the vote for Obama. 

Harper Reed and Dylan Richard headed up the effort. Around that time they were getting a lot of press. One of the things that interested me was how they held Gameday test events, where they would simulate failure modes in their testing environments. Google calls these DiRT (Disaster Recovery Testing) exercises.

So I asked Harper and Dylan what these exercises actually were and they were kind enough to reply. And I apparently forgot all about it. My apologies. Better late than never? Yah, let's go with that.

Here are some of the failure testing scenarios carried out by the Obama for America team (a sketch of how one of these drills might be scripted follows the list):

  1. Flush memcache
  2. Kill memcache (null route on instances)
  3. Kill replicants (we used security groups to deny access)
  4. Kill master
  5. Kill the backing API (we had a heavy SOA)
  6. Put API in read-only (killing master should accomplish this - but this tests client apps explicitly)
  7. Kill SQS (we used it heavily, particularly for decoupled systems and fall backs)
  8. Emulate an EBS failure (kill all DBs [we used RDS], kill all EBS backed instances)
  9. Emulate full east coast failure (we had a 2 stage failover plan to the west coast - fail to a read only mode which we could do easily, and fail over permanently which would only happen in the case of extended east coast AWS unavailability)
  10. Emulate human error (claim to have done something [scale up, restart a DB, flush the cache, bounce the wsgi proc, etc] but don't actually do it) 
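
To make scenario 3 concrete, here is a hedged sketch of how such a drill might be scripted today with Python and boto3; the security group ID, port and CIDR are made-up placeholders, and this is an illustration rather than the tooling the Obama for America team actually used.

    # Gameday sketch for scenario 3: "kill" the replicas by revoking the security group
    # rule that lets the app tier reach them, then restore the rule when the drill ends.
    # The group ID, port and CIDR below are illustrative placeholders.
    import boto3

    ec2 = boto3.client("ec2")

    REPLICA_SG = "sg-0123456789abcdef0"  # hypothetical security group on the DB replicas
    APP_TIER_CIDR = "10.0.1.0/24"        # hypothetical app-tier subnet
    DB_PORT = 5432

    RULE = {
        "IpProtocol": "tcp",
        "FromPort": DB_PORT,
        "ToPort": DB_PORT,
        "IpRanges": [{"CidrIp": APP_TIER_CIDR}],
    }

    def start_drill():
        """Deny the app tier access to the replicas; reads start failing now."""
        ec2.revoke_security_group_ingress(GroupId=REPLICA_SG, IpPermissions=[RULE])

    def end_drill():
        """Put the rule back once the team has observed and handled the fallout."""
        ec2.authorize_security_group_ingress(GroupId=REPLICA_SG, IpPermissions=[RULE])

    if __name__ == "__main__":
        start_drill()
        input("Replicas unreachable from the app tier. Press Enter to restore access... ")
        end_drill()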

Now there's one less ghost haunting the halls.

Related Articles
Categories: Architecture

The Dangers of a Definition of Ready

Mike Cohn's Blog - Tue, 08/09/2016 - 15:00

Although not as popular as a Definition of Done, some Scrum teams use a Definition of Ready to control what product backlog items can enter an iteration.

You can think of a Definition of Ready as a big, burly bouncer standing at the door of the iteration. Just as a bouncer at a nightclub only lets certain people in—the young, the hip, the stylishly dressed—our Definition-of-Ready bouncer only allows certain user stories to enter the iteration.

And, as each nightclub is free to define who the bouncers should let into the club, each team or organization is free to define its own definition of ready. There is no universal definition of ready that is suggested for all teams.

A Sample Definition of Ready

So what types of stories might our bouncer allow into an iteration? Our bouncer might let stories in that meet rules such as these:

  • The conditions of satisfaction have been fully identified for the story.
  • The story has been estimated and is under a certain size. For example, if the team is using story points, a team might pick a number of points and only allow stories of that size or smaller into the iteration. Often this maximum size is around half of the team’s velocity.
  • The team’s user interface designer has mocked up, or even fully designed, any screens affected by the story.
  • All external dependencies have been resolved, whether the dependency was on another team or on an outside vendor.
A Definition of Ready Defines Pre-Conditions

A Definition of Ready enables a team to specify certain pre-conditions that must be fulfilled before a story is allowed into an iteration. The goal is to prevent problems before they have a chance to start.

For example, by saying that only stories below a certain number of story points can come into an iteration, the team avoids the problem of having brought in a story that is too big to be completed in an iteration.

Similarly, not allowing a story into the iteration that has external dependencies can prevent those dependencies from derailing a story or an entire iteration if the other team fails to deliver as promised.

For example, suppose your team is occasionally dependent on some other team to provide part of the work. Your user stories can only be finished if that other team also finishes their work—and does so early enough in the iteration for your team to integrate the two pieces.

If that team has consistently burned you by not finishing what they said they’d do by the time they said they’d do it, your team might quite reasonably decide to not bring in any story that has a still-open dependency on that particular team.

A Definition of Ready that requires external dependencies to be resolved before a story could be brought into an iteration might be wise for such a team.

A Definition of Ready Is Not Always a Good Idea

So some of the rules our bouncer establishes seem like good ideas. For example, I have no objection to a team deciding not to bring into an iteration stories that are over a certain size.

But some other rules I commonly see on a Definition of Ready can cause trouble—big trouble—for a team. I’ll explain.

A Definition of Ready can be thought of like a gate into the iteration. A set of rules is established and our bouncer ensures that only stories that meet those rules are allowed in.

If these rules include saying that something must be 100 percent finished before a story can be brought into an iteration, the Definition of Ready becomes a huge step towards a sequential, stage-gate approach. This will prevent the team from being agile.

A Definition of Ready Can Lead to Stages and Gates

Let me explain. A stage-gate approach is characterized by a set of defined stages for development. A stage-gate approach also defines gates, or checkpoints. Work can only progress from one stage to the next by passing through the gate.

When I was a young kid, my mom employed a stage-gate approach for dinner. I only got dessert if I ate all my dinner. I was not allowed to eat dinner and dessert concurrently.

As a product development example, imagine a process with separate design and coding stages. To move from design to coding, work must pass through a design-review gate. That gate is put in place to ensure the completeness and thoroughness of the work done in the preceding stage.

When a Definition of Ready includes a rule that something must be done before the next thing can start, it moves the team dangerously close to stage-gate process. And that will hamper the team’s ability to be agile. A stage-gate approach is, after all, another way of describing a waterfall process.

Agile Teams Should Practice Concurrent Engineering

When one thing cannot start until another thing is done, the team is no longer overlapping their work. Overlapping work is one of the most obvious indicators that a team is agile. An agile team should always be doing a little analysis, a little design, a little coding, and a little testing. Putting gates in the development process prevents that from happening.

Agile teams should practice concurrent engineering, in which the various activities to deliver working software overlap. Activities like analysis, design, coding, and testing will never overlap 100%—and that’s not even the goal. The goal is to overlap activities as much as possible.

A stage-gate approach prevents that by requiring certain activities to be 100% complete before other activities can start. And a definition of ready can lead directly to a stage-gate approach if such mandates are included in the Definition of Ready.

That’s why, for most teams, I do not recommend using a Definition of Ready. It’s often unnecessary process overhead. And worse, it can be a large and perilous step backwards toward a waterfall approach.

In some cases, though, I do acknowledge that a Definition of Ready can solve problems and may be worth using.

Using a Definition of Ready Correctly

To use a Definition of Ready successfully, you should avoid including rules that require something be 100 percent done before a story is allowed into the iteration—with the possible exception of dependencies on certain teams or vendors. Further, favor guidelines rather than rules on your Definition of Ready.

So, let me give you an example of a Definition of Ready rule I’d recommend that a team rewrite: “Each story must be accompanied by a detailed mock up of all new screens.”

A rule like this is a gate. It prevents work from overlapping. A team with this rule cannot practice concurrent engineering. No work can occur beyond the gate until a detailed design is completed for each story.

A better variation of this would be something more like: “If the story involves significant new screens, rough mock ups of the new screens have been started and are just far enough along that the team can resolve remaining open issues during the iteration.”

Two things occur with a change like that.

  1. The rule has become a guideline.
  2. We’re allowing work to overlap by saying the screen mockups are sufficiently far along rather than done.

These two changes introduce some subjectivity into the use of a definition of ready. We’re basically telling the bouncer that we still want young, hip and stylishly dressed people in the nightclub. But we’re giving the bouncer more leeway in deciding what exactly “stylishly dressed” means.

SE-Radio Episode 265: Pat Kua on Becoming a Tech Lead

Johannes Thönes talks to Patrick Kua about the role of a technical lead and how people become tech leads. The show covers the definition of a tech lead, the responsibilities of the role and the challenges of becoming a tech lead. Venue: Internet   Related Links Episode 228: Software Architecture Sketches with Simon Brown Article: A […]
Categories: Programming

Mapping Biases to Testing: Confirmation Bias

Xebia Blog - Mon, 08/08/2016 - 20:24
I use terminology from earlier blog posts about biases. If you have missed those posts, read part 1 here. I explain the terminology there. In the second post I wrote about the Anchoring Effect. Let me state the ‘bad news’ up front: you cannot fully avoid the confirmation bias. That’s actually a good thing, because

SPaMCAST 406 ‚Äď Erik van Veenendaal, Quality, Agile and the TMMi

SPaMCAST Logo

http://www.spamcast.net

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 406 features our interview with Erik van Veenendaal.  We discussed Agile testing, risk and testing, the Test Maturity Model Integrated (TMMi), and why in an Agile world quality and testing still matter.

Erik van Veenendaal (www.erikvanveenendaal.nl) is a leading international consultant and trainer, and a recognized expert in the area of software testing and requirement engineering. He is the author of a number of books and papers within the profession, one of the core developers of the TMap testing methodology, a participant in working parties of the International Requirements Engineering Board (IREB). He is one of the founding members of the TMMi Foundation, the lead developer of the TMMi model and currently a member of the TMMi executive committee. Erik is a frequent keynote and tutorial speaker at international testing and quality conferences. For his major contribution to the field of testing, Erik received the European Testing Excellence Award (2007) and the ISTQB International Testing Excellence Award (2015). You can follow Erik on twitter via @ErikvVeenendaal.

Re-Read Saturday News

This week we continue our re-read of Kent Beck’s XP Explained, Second Edition with a discussion of Chapters 14 and 15.  This week we dive into design and scaling. These chapters  address two critical and controversial topics that XP profoundly rethought.

I am still collecting thoughts on what to read next. Is it time to start thinking about what is next: a re-read or a new read?  Thoughts?

Use the link to XP Explained in the show notes when you buy your copy to read along to support both the blog and podcast. Visit the Software Process and Measurement Blog (www.tcagley.wordpress.com) to catch up on past installments of Re-Read Saturday.

Next SPaMCAST

The next Software Process and Measurement Cast will focus on our recent revisit of Test Driven Development (TDD).  TDD is an important feature of XP that can be (and should be) used if quality and efficiency are important to your organization.

We will also have a new column from Steve Tendon (welcome back Steve!)  and Gene Hughson AND maybe one more but we will see!   

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.

 


Categories: Process Management


Extreme Programming Explained, Second Edition: Re-Read Week 8

XP Explained Cover

This week we tackle Chapters 14 and 15 in Kent Beck and Cynthia Andres’s Extreme Programming Explained, Second Edition (2005).  Chapter 14 deals with design.  Software design is a transition point in the life cycle that begins with business requirements and ends in functional software. The design translates the need into a cohesive solution. Chapter 15 covers topics related to scaling.  Scaling concepts focus on using XP to solve larger, more complex problems in more complex organizations.

Next week we will have more on what will follow Extreme Programming Explained, Second Edition (2005); a re-read or a new read?  Thoughts?  In this week’s installment we tackle two concepts central to XP: incremental design and scaling, both approached the XP way.

Chapter 14: Designing: The Value of Time

One of the most important features of XP is the delivery of functionality every week or two weeks (depending on iteration length). In order to facilitate that process, XP embraces the concept of incremental design.

Design is important to software.  Design allows software developers to share and inherit elements and components, or to copy and use pieces of a design in other instantiations or even other applications. While metaphors of physical design are often used in software, those metaphors don’t work for software design and often constrain the options that we consider in the long run.  For example, the design for a physical product such as a bridge is created and set in stone (ok, maybe you can add flag holders and flags, but that is not a significant design extension). Software designs are far more flexible and changeable, and most importantly software designs can be incrementally built.  Once built, a suspension bridge can’t easily be modified into a cantilever bridge; an application running on a three-tier client-server architecture, on the other hand, can be incrementally shifted to the cloud.

The incremental approach of delivering value (rather than a big bang approach) provides the basis for gathering feedback faster.  The faster users can interact with the functionality and then provide feedback, the faster the XP team can adapt.  Beck suggests that the art of designing is to get feedback and then to use the feedback to do only as much design as needed to get the next round of feedback.  Beck is defining a continuous improvement loop that begins with just enough design upfront.

Recently I visited Italy and was fortunate enough to be able to spend time looking at Michelangelo’s David.  The design was inspiring, but unlike the design of David, software is never finished: the team is continuously learning and the technical environment is evolving, which means there are always new and better ways to design the software. If learning, experience, and technical evolution can positively improve the design, then delaying the completion of the design provides the greatest chance of increasing the value and quality of the product.  Timing when, during a project, a design is needed requires considering the value of learning and experience.

The chapter concludes with a set of criteria for evaluating whether a design is simple and useful enough:

  1. The design is appropriate for the intended audience. If those who are going to work on design and code don’t understand the design, then it isn’t appropriate.
  2. Create a design that facilitates communication to the team and stakeholders.
  3. The design should not include duplication of logical structures which can make the design hard to understand.
  4. The design should have the fewest elements possible.  Designing fewer elements requires building and documenting fewer components.

 

Chapter 15: Scaling XP

Chapter 15 is a reminder that scaling has been a topic of conversation for a long time! Beck describes scaling XP along seven dimensions.

Number of people: One of the classic scaling mechanisms is to add people. Earlier in my career, the organization I worked for told clients that it would “darken the skies with SEs” (software engineers) to deal with their projects.  Beck, influenced by the Mythical Man-Month, suggests a different path.

  1. Break the problem down into smaller parts. Breaking work into smaller parts allows better prioritization, supports getting started and getting feedback faster.  Prioritization also helps to highlight items in the backlog that might be gold plating.
  2. Use simple solutions whenever possible. Keeping things simple allows work to be done faster, gets feedback faster and in the longer run is easier to maintain.
  3. Apply a complex solution to any problem that’s left.

Investment: This is not a scaling issue; rather, it is a reflection of how organizations account for work.  XP does not change the accounting rules for what can be expensed or capitalized in projects. Make sure you discuss how XP works with your financial group before you start, to reduce the potential for accounting surprises!

Size of organization: An XP team can be an island in a larger organization by being transparent and maintaining communication. Maintaining communication in a larger organization will require understanding the information needs of those outside of the XP bubble and then finding a mutually agreed upon way to address those needs.  Beck suggests that the project manager role can provide the interface.

Time: XP supports long running projects (scaling using duration) by building a test base from TDD which prevents many of the common maintenance mistakes. TDD tests also act as a history of project or product development.

Problem complexity: Specialization is a common scaling approach used to great effect in manufacturing and, by extension, software development.  The assembly line is an example of using specialization to address problem complexity. XP uses pair programming as a tool to leverage the history of specialization in IT while generating closer cooperation, which improves a team’s ability to scale.  Pairing helps team members learn a bit about each other’s specialty, which deepens cooperation and the ability to share work across the team.

Solution complexity: The XP dictum to break work down and then to build incrementally allows the team (or team-of-teams) to chip away at the solution, deliver and then get feedback.

Consequence of failure: XP practices are built to provide focus on work in a very transparent environment.  Stories or requirements with a high potential consequence of failure may need closer verification and validation. For example, the software that controls x-ray machines.  XP practices that help address the consequence of failure include building in tests using TDD to prove that the work meets needs.  Other tools include pair programming, breaking work into smaller parts and continuous builds. That said, nowhere in XP does it say that you can’t add steps to the flow of work to address mission-, safety- or security-critical requirements.

XP asks us to simplify problems, apply the basic principles and values, leverage the core practices and then to interact and collaborate with teams outside in order to scale to meet the problem, technical and organizational complexity.

Previous installments of Extreme Programming Explained, Second Edition (2005) on Re-read Saturday:

Extreme Programming Explained: Embrace Change Second Edition Week 1, Preface and Chapter 1

Week 2, Chapters 2 – 3

Week 3, Chapters 4 – 5

Week 4, Chapters 6 – 7

Week 5, Chapters 8 – 9

Week 6, Chapters 10 – 11

Week 7, Chapters 12 – 13

 


Categories: Process Management

Stuff The Internet Says On Scalability For August 5th, 2016

Hey, it's HighScalability time:

 

 

What does a 107 football field long battery building Gigafactory look like? A lot like a giant Costco. (tour)

 

If you like this sort of Stuff then please support me on Patreon.
  • 60 billion: Facebook messages per day; 3x: Facebook messages compared to global SMS traffic; $15: min wage increases job growth; 85,000: real world QPS for Twitter's search; 2017: when MRAM finally arrives; $60M: Bitcoin heist, bigger than any bank robbery; 710m: Internet users in China; 

  • Quotable Quotes:
    • @cmeik: When @eric_brewer told me that Go was good for building distributed systems, I couldn't help but think about this.
    • David Rosenthal: We can see the end of the era of data and computation abundance. Dealing with an era of constrained resources will be very different. In particular, enthusiasm for blockchain technology as A Solution To Everything will need to be tempered by its voracious demand for energy.
    • Dr Werner Vogels: What we’ve seen is a revolution where complete applications are being stripped of all their servers, and only code is being run. Quite a few companies are ripping out big pieces of their applications and replacing their servers, their VMs and their containers with just code. Perhaps we no longer have to think about servers.
    • @dsb: agree w serverless future - seeing more startups using that model & entirely eliminates most of my infra diligence questions
    • Emin Gün Sirer: It's too early for a coherent story to emerge from the smoldering ashes of the Bitfinex disaster. 
    • @jeremiahdillon: The coming decades will bring population shrinkage not seen since the Black Death. Good for wages, bad for GDP.
    • Nicole Hemsoth: The chatter is going around, once again, that AWS is looking to deliver a private version of its public cloud infrastructure, something that is not as easy to do as it sounds. 
    • Michael Rabin: I must admit that after many years of work in this area, the efficacy of randomness for so many algorithmic problems is absolutely mysterious to me. It is efficient, it works; but why and how is absolutely mysterious. 
    • Algorithms to Live By: Though it is often said that “bubble sort has no apparent redeeming features,” the research of Ackley and his collaborators suggests that there may be a place for algorithms like Bubble Sort after all. Its very inefficiency—moving items only one position at a time—makes it fairly robust against noise, far more robust than faster algorithms like Mergesort, in which each comparison potentially moves an item a long way. Mergesort’s very efficiency makes it brittle.
    • JoshGlazebrook: Looks like Hitachi (HGST) is still leading in terms of reliability. 
    • @SeanMcElwee: don't argue with capitalists. seize the means of production.
    • jondubois: What the author describes, I would not call 'protocols' - The Bitcoin network is a hosted implementation of the Bitcoin protocol - It is not the protocol itself. Tokens in the context of the Bitcoin protocol itself have no value - The value is derived from the popularity of the infrastructure, not from the popularity of the protocol.

  • Where there is Pokemon there is a way. If you don't make an API someone will. Ingenious third-party tracking services are one reason Pokemon Go is slow: The company says these services were making the servers unreliable. Pokémon Go doesn’t have an API, so it seems that Pokévision and others created countless accounts on many servers around the world using Android emulators. With these emulators, they could fake movements around cities and reverse-engineer the game to create a sort of lightweight API and gather Pokémon data.

  • Two years later it appears Facebook creating a separate Messenger app was a good idea. Go figure. This Is The Smartest Thing Facebook Ever Did: In phase one, Facebook grows the user base. “We’re really at the beginning of phase two,” he said, in which the company focuses on growing organic interactions between people and businesses. Once businesses see this is working, the company launches stage three, in which it asks companies to pay up. This strategy has worked well for the company’s other products: Facebook reported $6.44 billion in sales this year, up 59 percent from a year ago. The company’s profits almost tripled to $2.06 billion.

  • So you want a system where the guberment has the master key to all encrypted systems? What a great idea! Anyone can now print out all TSA master keys.

  • This is from WWI! French gov: "WWI sites will be fully cleared of unexploded ordnance in... 300-900 years." Can you imagine what the aftermath of the cryptowars will be like? Sorry, don't touch that toaster...it will hack your neural lace and make you do crazy shite. Voting booths are all compromised, back to paper. Don't even think of using your all-electric, AI-controlled car. It's now an IDAID (Improvised Destructive AI Device). Remember all those families that drove themselves over the cliff? So sad. After the fifth iteration of this pattern we'll have to melt it all down and start over again, only this time around only steampunk tech will be allowed.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Measuring Test-Driven Development

Measuring TDD is a lot like measuring a cyclone!

Teams and organizations adopt test-driven development for many reasons, including improving software design, functional quality, or time to market, or simply because everyone else is doing it (well, maybe not that last reason…yet).  In order to justify the investment in time, effort and even the cash for consultants and coaches, most organizations want some form of proof that there is a return on investment (ROI) from leveraging TDD. The measurement issue is less that something needs to be measured (I am ignoring the "you can't measure software development" crowd), but rather what constitutes an impact and therefore what really should be measured. Erik van Veenendaal, an internationally recognized testing expert, stated in an interview that will be published on SPaMCAST 406, "unless you spend the time to link your measurement or change program to business needs, they will be short-lived."  Just adopting someone else's best practices in measurement tends to be counterproductive because every organization has different goals and needs.  This means organizations will adopt TDD for different reasons and will need different evidence to assure themselves that they are getting a benefit.  There is NO single measure or metric that proves you are getting the benefit you need from TDD.  That is not to say that TDD can't or should not be measured.  A palette of measures that are commonly used, organized by the generic goal they address:

Goal:  Improve Customer Satisfaction

  • Customer Satisfaction Index: Measure the satisfaction of the product's or project's customers by asking a series of questions and then measuring how their responses change over time.
  • Net Promoter: Ask customers "how likely are you to recommend the product or organization being measured to a friend or colleague?" The difference between the percentage who would recommend the product or project and the percentage who would not shows how customer satisfaction is changing (a worked sketch follows this list).
  • Delivered Defects: A count (or a scaled count, such as defects per unit of work) of delivered defects is often used as a proxy for customer satisfaction.
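
As a worked sketch of the Net Promoter calculation, assuming the common 0–10 survey scale with promoters at 9–10 and detractors at 0–6 (the response data below is invented):

    def net_promoter_score(scores):
        """NPS = % promoters - % detractors, so it ranges from -100 to +100."""
        if not scores:
            raise ValueError("no survey responses")
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100.0 * (promoters - detractors) / len(scores)

    # Hypothetical responses from two survey rounds; the trend, not the
    # absolute number, is what signals a change in customer satisfaction.
    last_quarter = [6, 7, 9, 5, 8, 10, 4]
    this_quarter = [8, 9, 9, 7, 10, 10, 6]
    print(net_promoter_score(last_quarter))   # about -14
    print(net_promoter_score(this_quarter))   # about +43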

Goal:  Decrease the Cost of Development

  • Development and Delivered Defects (combined): The change in the count (or a scaled count, such as defects per unit of work) of defects discovered during development plus those delivered is a direct measure of quality, and quality is directly correlated with cost: as the number of defects created falls, cost goes down.
  • Labor Productivity: The ratio of output per person. Labor productivity measures the efficiency of labor in transforming inputs into a product of higher value.  Improving efficiency is typically linked to decreasing the cost of creating a product or delivering a project (a sketch of both scaled measures follows this list).
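
A minimal sketch of the two scaled measures above; the use of story points as the unit of work and all of the numbers are illustrative assumptions, not recommendations:

    def defect_density(defects_found, units_of_work):
        """Defects per unit of work, e.g. defects per story point delivered."""
        return defects_found / units_of_work

    def labor_productivity(units_of_work, person_months):
        """Output per person, e.g. story points delivered per person-month."""
        return units_of_work / person_months

    # Hypothetical quarter: 300 story points delivered with 18 person-months
    # of effort, and 42 defects found across development and delivery.
    print(defect_density(42, 300))      # 0.14 defects per story point
    print(labor_productivity(300, 18))  # roughly 16.7 story points per person-month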

Goal: Improve Product or Project Time-to-Market

  • Time to Market: A measure of the speed at which an item moves through the development process from backlog to production. This measure is always denominated by calendar time, but the numerator can be either value or size, depending on the specific question the organization needs to answer (a sketch follows this list).
  • Concept to Cash: A measure of the calendar time it takes for an idea to go from being accepted into the portfolio backlog until it is first sold (or delivered) in the marketplace.
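
Both measures reduce to calendar-time arithmetic once the milestones are timestamped; a minimal sketch, with hypothetical milestone names and dates:

    from datetime import date

    def calendar_days(start, end):
        """Elapsed calendar days between two milestones."""
        return (end - start).days

    # Hypothetical milestones for a single backlog item.
    accepted_to_backlog = date(2016, 3, 1)
    moved_to_production = date(2016, 5, 10)
    first_sold = date(2016, 6, 15)

    print(calendar_days(accepted_to_backlog, moved_to_production))  # time to market: 70 days
    print(calendar_days(accepted_to_backlog, first_sold))           # concept to cash: 106 days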

Goal: Improve Product or Project Quality

  • Delivered Defects: A count (or a scaled count, such as defects per unit of work) of delivered defects.  All things being equal, defects that customers (or users) experience negatively affect the perception of product quality: the higher the number of delivered defects, the lower the perception of quality.
  • Defect Removal Efficiency: Defect Removal Efficiency (DRE) is the ratio of the defects found and removed before implementation to the total defects found through some period after delivery (typically thirty to ninety days). A worked sketch of the calculation follows this list.
  • Test Code Coverage: A measure of the number of branches or statements that are exercised by a group of tests. For example, in TDD, when a developer pulls a story from the backlog, he or she writes a series of tests that will prove the story is complete, runs the tests (they should all fail), then writes the code and re-runs the tests, which should all pass. Tests should exercise every line of code written or changed (100% coverage).  In TDD, generally, as code coverage goes up, fewer defects escape discovery and get delivered to someone else.
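
Defect Removal Efficiency is a simple ratio; the sketch below shows the calculation with invented defect counts and assumes a ninety-day post-delivery window:

    def defect_removal_efficiency(pre_release_defects, post_release_defects):
        """DRE as a percentage: defects removed before delivery divided by all
        defects found, including those reported in the post-delivery window
        (typically thirty to ninety days)."""
        total = pre_release_defects + post_release_defects
        if total == 0:
            return 100.0  # nothing found anywhere; treat as fully efficient
        return 100.0 * pre_release_defects / total

    # Hypothetical release: 95 defects caught before delivery, 5 reported
    # by users in the first ninety days.
    print(defect_removal_efficiency(95, 5))  # 95.0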

Goal: Improve Software Design

  • Improved Design: Measuring design is a can of worms.  Attributes that can be measured include reliability, efficiency, maintainability, and usability.  Which design attribute should be measured depends on the needs of the business.  For example, for consumer products, increased usability (how easy the product is to use) might be a critical measure.

Goal: Improve Compliance to Development Techniques (note: compliance is an internal goal and is only tangentially related to business goals; therefore, it should be adopted only if the indirect measures can be traced to delivering stated business goals.)

  • Ask and Count: A simple approach to measuring TDD compliance is, when the code for a story is checked in, either to ask whether TDD test cases were created and run, or to validate that test cases were committed (before and after) along with the code.
  • Change in the Automated Test Suite: Count the number of tests that have been added, changed or deleted on a daily basis. This is a simple accounting approach that can be easily tracked.  As stories are accepted and worked, changes to the automated test base will be apparent (one possible implementation is sketched below).
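
One low-ceremony way to implement the "change in the automated test suite" count is to diff the test directory between two points in time. The sketch below counts added and removed pytest-style test functions using the git command line; the tests/ directory and the test_ naming convention are assumptions that would need to match the project at hand.

    import re
    import subprocess

    def count_test_changes(old_rev, new_rev, test_dir="tests"):
        """Count test functions added and removed between two git revisions.

        Assumes tests are functions named test_* living under test_dir;
        a test that is modified shows up in both counts.
        """
        diff = subprocess.run(
            ["git", "diff", old_rev, new_rev, "--", test_dir],
            capture_output=True, text=True, check=True,
        ).stdout
        added = len(re.findall(r"^\+\s*def test_", diff, re.MULTILINE))
        removed = len(re.findall(r"^-\s*def test_", diff, re.MULTILINE))
        return added, removed

    # Example: compare yesterday's state of the main branch with today's.
    # added, removed = count_test_changes("main@{1.day.ago}", "main")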

TDD is an important mechanism that puts the onus for unit testing directly on the members of the development team.  If you code, you test.  When adopting TDD, compliance measures may be important for showing progress; HOWEVER, they are not as important as measuring business value. Measures and metrics for TDD need to be focused on changing something that is important to the business, such as cost, quality, time-to-market or perhaps usability. Because in the end, that is really what counts!

Are there other options assuming you are going to measure your TDD implementation?


Categories: Process Management

Schell Games gives popular games a twist with Tango

Google Code Blog - Wed, 08/03/2016 - 18:36

Posted by Justin Quimby, Senior Product Manager, Tango

At Tech World last month, our team showed off some of the latest Tango-enabled games. One crowd favorite was Domino World by Schell Games, which will be available on the first Tango-enabled device, Lenovo’s Phab 2 Pro, coming this fall. Schell Games has adapted a few classic games, including Jenga, into smartphone augmented reality, and their developers share their experience and the considerations they kept in mind as they gave dominoes a new twist.

Google: How did your team first hear about Tango technology?

Schell Games: The Tango team invited us to their Game Developer Workshop, where we learned about Tango and the types of apps we could develop for this platform.

Google: You took a classic game, and added AR elements. How did you come to dominoes?

Schell Games: At the Game Developer Workshop, we prototyped three games: a racing game, Jenga and a pet game. Of the three games, people connected the most with Jenga.

People loved sharing a device to play the game together, and they loved that they didn’t have to pick up all the Jenga pieces when the game was over! And from a developer perspective, Jenga was great as it highlighted Tango’s ability to recognize surfaces.

Based on how much people liked Jenga, we decided that Domino World would be our second game. Domino World gives players all the fun of dominoes, but without the setup effort or mess. We were inspired by YouTube videos where people of all ages were doing really creative things with dominoes. Our goal was to bring that experience to the phone as an immersive and fun augmented-reality experience.

Google: Which Tango features did you use in Jenga and Domino World?

Schell Games: We used motion tracking, which lets people walk around their dominoes or Jenga tower. We also used surface detection with the depth camera, so that the device recognizes when objects are placed on a surface.

Google: How does your development approach differ for AR apps versus standard mobile apps?

Schell Games: With Domino World, for example, our approach to augmented reality thrives on reinforcing the feeling that the player’s display is a “window on the world.” Toys and dominoes are (virtually) placed on the actual surfaces around the player, and the game’s controls aid players in manipulating objects in the space in front of them. As a result, the player is naturally encouraged to move around as they view, adjust and otherwise shape their ever-growing creations.

In contrast, traditional touchscreen controls largely work with metaphors of interacting with the screen’s image itself -- drawing on it, pinch-zooming it, etc. As a result, a more traditional touchscreen-controlled Domino World could have influenced players to remain more static and work with the existing view, as opposed to moving around to different vantage points.

Google: We noticed that you use a landscape orientation for Domino World. How did you decide to take that approach?

Schell Games: The decision to use landscape orientation for Domino World is the result of multiple smaller reasons all put together:

  • Many new players have a tendency to initially build wider rather than deeper (possibly due to an instinctive desire to be able to more easily access their domino runs).
  • UI controls at the edges of a landscape layout minimize HUD overlap when working with wider versus deeper runs.
  • A landscape orientation naturally places players’ hands at the device’s corners, which makes for a more stable grip during gameplay.

Google: What surprised you the most while building with Tango?

Schell Games: We were quite surprised at how easy it was to build with the Tango SDK and add Tango functionality to our apps. We used the Unity Engine which made the whole process quite seamless. It took us just over two weeks to build Jenga and 10 weeks to build Domino World from beginning to end.

Google: How do you think Tango will change the way people play games?

Schell Games: Tango makes it easy to play AR games. You don’t need to print and cut out AR trackers or markers to place throughout your room to help orient the phone. Instead, your phone always knows where it is in relation to the AR objects, and you can easily start playing, whether you’re in a living room or on a bus. It’s incredible to have this experience with just your mobile device.

Categories: Programming