Software Development Blogs: Programming, Software Testing, Agile, Project Management

Methods & Tools

Subscribe to Methods & Tools if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Google Developers to open a startup space in San Francisco

Google Code Blog - Thu, 08/18/2016 - 19:10

Posted by Roy Glasberg Global Lead, Launchpad Accelerator

We’re heading to the city of San Francisco this September to open a new space for developers and startups. With over 14,000 sq. ft. at 301 Howard Street, we’ll have more than enough elbow room to train, educate and collaborate with local and international developers and startups.

The space will hold a range of events: Google Developer Group community meetups, Codelabs, Design Sprints, and Tech Talks. It will also host the third class of Launchpad Accelerator, our equity-free accelerator for startups in emerging markets. During each class, over 20 Google teams provide comprehensive mentoring to late-stage app startups who seek to scale and become leaders in their local markets. The 3-month program starts with an all-expenses-paid two week bootcamp at Google HQ.

Developers are in an ever-changing landscape and seek technical training. We’ve also seen a huge surge in the number of developers starting their own companies. Lastly, this is a unique opportunity to bridge the gap between Silicon Valley and emerging markets. To date, Launchpad Accelerator has nearly 50 alumni in India, Indonesia, Brazil and Mexico. Startups in these markets are tackling critical local problems, but they often lack access to the resources and network we have here. This dedicated space will enable us to regularly engage with developers and serve their evolving needs, whether that is to build a product, grow a company or make revenue.

We can’t wait to get started and work with developers to build successful businesses that have a positive impact locally and globally.

Categories: Programming

A Growth Job

Herding Cats - Glen Alleman - Thu, 08/18/2016 - 16:23
  • Is never permanent.
  • Makes you like yourself.
  • Is fun.
  • Is sometimes tedious, painful, frustrating, monotonous, and at the same time gives a sense of accomplishment.
  • Bases compensation on productivity.
  • Is complete: One thinks, plans, manages and is the final judge of one's work.
  • Addresses real needs in the world at large - people want what you do because they need it.
  • Involves risk-taking.
  • Has a few sensible entrance requirements.
  • Ends automatically when a task is completed.
  • Encourages self-competitive excellence.
  • Causes anxiety because you don't necessarily know what you're doing.
  • Is one where you manage your time, money and people, and where you are accountable for specific results, which are evaluated by people you serve.
  • Never involves saying Thank God It's Friday.
  • Is where the overall objectives of the organizations are supported by your work.
  • Is where good judgment is one, maybe the only, job qualification. 
  • Gives every jobholder the chance to influence, sustain or change organizational objectives.
  • Is when you can quit or be fired at any time.
  • Encourages reciprocity and parity between the boss and the bossed.
  • Is when we work from a sense of mission and desire, not obligation and duty.

From If Things Don't Improve Soon, I May Ask You to Fire Me - Richard K. Irish

Related articles: IT Risk Management | Applying the Right Ideas to the Wrong Problem | Build a Risk Adjusted Project Plan in 6 Steps
Categories: Project Management

The Problems with Schedules #Redux #Redux

Herding Cats - Glen Alleman - Wed, 08/17/2016 - 17:35

Here's an article, recently referenced by a #NoEstimates twitter post. The headline is deceiving: the article DOES NOT suggest we don't need deadlines, but that deadlines set without a credible assessment are the source of many problems on large programs...


The Core Problem with Project Success

There are many core Root Causes of program problems. Here are four from research at PARCA:

[Chart: root causes of program problems - Gary Bliss, PARCA]

  • Unrealistic performance expectations missing Measures of Effectiveness and Measures of Performance.
  • Unrealistic Cost and Schedule estimates based on inadequate risk adjusted growth models.
  • Inadequate assessment of risk and unmitigated exposure to these risks without proper handling plans.
  • Unanticipated Technical issues without alternative plans and solutions to maintain effectiveness.

Before diving into the details of these, let me address another issue that has come up around project success and estimates. There is a common chart used to show poor performance of projects that compares Ideal project performance with the Actual project performance. Here's the notional replica of that chart.

[Chart: Ideal versus Actual project performance (notional)]

This chart shows several things

  • The notion of Ideal is just that - notional. All that line says is this was the baseline Estimate at Completion for the project work. It says nothing about the credibility of that estimate, or the possibility that one or all of the Root Causes above are in play.
  • The chart then shows that many projects in the sample population cost more or take longer (costing more).
  • The term Ideal is a misnomer. There is no ideal in the estimating business. Just the estimate.
    • The estimate has two primary attributes - accuracy and precision.
  • The charts (even the notional ones) usually don't say what the accuracy or precision is of the values that make up the line.

So let's look at the estimating process and the actual project performance 

  • There is no such thing as the ideal cost estimate. Estimates are probabilistic. They have a probability distribution function (PDF) around the Mode of the possible values of the estimate. This Mode is the Most Likely value of the estimate. If the PDF is symmetric (as shown above) the upper and lower limits are usually set at some 20/80 bounds. This is typical in our domain. Other domains may vary.
  • This says: here's our estimate, with an 80% confidence.
  • So now if the actual cost, schedule, or some technical parameter falls inside the acceptable range (the confidence interval) it's considered GREEN. This range of variances addresses the uncertainty in both the estimate and the project performance.
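
Those 20/80 bounds can be sketched with a small Monte Carlo simulation. This is a hypothetical illustration, not from the post: the cost figures and the choice of a triangular distribution are assumptions made up for the example.

```python
import random

# A cost estimate as a probability distribution rather than a point
# value. The mode is the Most Likely value; the 20th/80th percentiles
# give the 20/80 bounds. All numbers are invented for illustration.
random.seed(42)

low, most_likely, high = 800, 1000, 1500  # cost in $K, hypothetical
samples = sorted(random.triangular(low, high, most_likely)
                 for _ in range(100_000))

def percentile(sorted_samples, p):
    """Return the p-th percentile of a sorted sample list."""
    idx = int(p / 100 * (len(sorted_samples) - 1))
    return sorted_samples[idx]

p20 = percentile(samples, 20)
p80 = percentile(samples, 80)
print(f"Most Likely (mode): {most_likely}")
print(f"20/80 confidence band: {p20:.0f} .. {p80:.0f}")
# An actual cost inside [p20, p80] would be "GREEN" - within the
# acceptable variance range of the estimate.
```

Note the band is not symmetric around the mode: with a longer upper tail, the overrun side of the interval is wider, which is exactly the information a single ideal line throws away.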

But here are three problems. First, there is no cause stated for that variance. Second, the ideal line can never be ideal. The straight line is the estimate of the cost (and schedule), and that estimate is probabilistic. So the line HAS to have a probability distribution around it - the confidence interval on the range of the estimate. The resulting actual cost or schedule may well be within the acceptable range of the estimate. Third, are the estimates being updated when work is performed or new work is discovered, and are those updates the result of changing scope? You can't state we made our estimate if the scope is changing. This is core Performance Measurement Baseline stuff we use every week where we work.

As well, since the ideal line has no probabilistic attributes in the original paper(s), as shown above, here's how we think about cost, schedule, and technical performance modeling in the presence of the probabilistic and statistical processes of all project work. †

So let's be clear: NO point estimate can be credible. The Ideal line is a point estimate. It's bogus on day one and continues to mislead as more data is captured from projects claimed to not match the original estimate. Without the underlying uncertainties (aleatory and epistemic) in the estimating model, the ideal estimates are worthless. So when the actual numbers come in and don't match the ideal estimate there is NO way to know why.

Was the estimate wrong (and all point estimates are wrong), or was one or all of Mr. Bliss's root causes the cause of the actual variance?

So another issue with the Ideal Line is that there are no confidence intervals around the line. What if the actual cost came inside the acceptable range of the ideal cost? Then would the project be considered on cost and on schedule? Add to that the coupling between cost, schedule, and the technical performance as shown above.

The use of the Ideal is Notional. That's fine if your project is Notional.

What's the reason a project or a collection of projects doesn't match the baselined estimate? That estimate MUST have an accuracy and precision number before being useful to anyone.

  • Essentially that straight line is likely an unquantified point estimate. And ALL point estimates are WRONG, BOGUS, WORTHLESS. (Yes I am shouting on the internet).
  • Don't ever make decisions in the presence of uncertainty with point estimates.
  • Don't ever do analysis of cost and schedule variances without first understanding the accuracy and precision of the original estimate.
  • Don't ever make suggestions to make changes to the processes without first finding the root cause of why the actual performance has a variance with the planned performance.

 So what's the summary so far:

  • All project work is probabilistic, driven by the underlying uncertainty of many processes. These processes are coupled - they have to be for any non-trivial project. What are the coupling factors? The non-linear couplings? Don't know these? Then there's no way to suggest much of anything about the time phased cost and schedule.
  • Knowing the reducible and irreducible uncertainties of the project is the minimal critical success factor for project success.
  • Don't know these? You've doomed the project on day one.

So in the end, any estimate we make at the beginning of the project MUST be updated as the project proceeds. With this past performance data we can make improved estimates of the future performance as shown below. By the way, when the #NoEstimates advocates suggest using past data (empirical data) but don't apply the statistical assessment of that data to produce a confidence interval for the future estimate (a forecast is an estimate of a future outcome), they have only done half the work needed to inform those paying what the likelihood of the future cost, schedule, or technical performance is.
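
That "other half of the work" - turning empirical past data into a forecast with a confidence interval - can be sketched as a bootstrap simulation. The weekly throughput history and backlog size below are invented for illustration.

```python
import random

# Resample observed weekly throughput to simulate many possible
# futures, then read confidence levels off the resulting distribution.
random.seed(7)

past_throughput = [4, 6, 3, 7, 5, 4, 8, 2, 6, 5]  # stories done per week
remaining_stories = 60

def weeks_to_finish(history, backlog):
    """One simulated future: draw past weeks at random until done."""
    done, weeks = 0, 0
    while done < backlog:
        done += random.choice(history)
        weeks += 1
    return weeks

trials = sorted(weeks_to_finish(past_throughput, remaining_stories)
                for _ in range(10_000))
p50 = trials[len(trials) // 2]
p80 = trials[int(0.8 * (len(trials) - 1))]
print(f"50% confident: done in {p50} weeks; 80% confident: {p80} weeks")
```

The point is the pair of numbers, not either one alone: the empirical data produces a distribution of outcomes, and the confidence level is what makes the forecast useful to those paying.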

[Chart: estimates updated with past performance data]

So Now To The Corrective Actions of The Causes of Project Variance

If we take the 4 root causes in the first chart - courtesy of Mr. Gary Bliss, Director, Performance Assessment and Root Cause Analysis (PARCA) - let's see what the first approach is to fix them.

Unrealistic performance expectations missing Measures of Effectiveness and Measures of Performance

  • Defining the Measures of Performance, the resulting Measures of Effectiveness, and the Technical Performance Measures of the resulting project outcomes is a critical success factor.
  • Along with the Key Performance Parameters, these measures define what DONE looks like in units of measure meaningful to the decision makers.
  • Without these measures, those decision makers and those building the products that implement the solution have no way to know what DONE looks like.

Unrealistic Cost and Schedule estimates based on inadequate risk adjusted growth models

  • Here's where estimating comes in. All project work is subject to uncertainty. Reducible (Epistemic) uncertainty and Irreducible (Aleatory) uncertainty. 
  • Here's how to Manage in the Presence of Uncertainty.
  • Both these cause risk to cost, schedule, and technical outcomes.
  • Determining the range of possible values for aleatory and epistemic uncertainties means making estimates from past performance data or parametric models.

Inadequate assessment of risk and unmitigated exposure to these risks without proper handling plans

  • This type of risk is held in the Risk Register.
  • This means making estimates of the probability of occurrence, probability of impact, probability of the cost to mitigate, the probability of any residual risk, the probability of the impact of this residual risk.
  • Risk management means making estimates. 
  • Risk management is how adults manage projects. No risk management, no adult management. No estimating, no adult management.
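
The estimates a Risk Register entry depends on can be sketched as a simple expected-value calculation. The risk names and figures here are hypothetical, chosen only to illustrate the arithmetic behind the bullets above.

```python
# Each entry carries the estimates named above: probability of
# occurrence, cost impact, mitigation cost, and residual risk.
risks = [
    # (name, P(occur), impact $K, mitigation $K, P(residual), residual $K)
    ("Vendor slips delivery", 0.30, 400, 50, 0.10, 100),
    ("Key engineer leaves",   0.15, 250, 20, 0.05, 250),
]

decisions = {}
for name, p_occur, impact, mitigation, p_resid, resid_impact in risks:
    unmitigated = p_occur * impact                    # expected cost if we do nothing
    mitigated = mitigation + p_resid * resid_impact   # cost of the handling plan
    choice = "mitigate" if mitigated < unmitigated else "accept"
    decisions[name] = (choice, unmitigated, mitigated)
    print(f"{name}: unmitigated EV ${unmitigated:.1f}K, "
          f"mitigated EV ${mitigated:.1f}K -> {choice}")
```

Every number in the table is an estimate; without them there is no way to compare the cost of a handling plan against the exposure it removes.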

Unanticipated Technical issues with no alternative plans and solutions to maintain effectiveness

  • Things go wrong, it's called development.
  • When things go wrong, where's Plan B? Maybe even Plan C.

When we hear that we can't estimate, that planning is hard or maybe not even needed, that we can't forecast the future, let's ask some serious questions.

  • Do you know what DONE looks like in meaningful units of measure?
  • Do you have a plan to get to Done when the customer needs you to, for the cost the customer can afford?
  • Do you have the needed resources to reach Done for the planned cost and schedule?
  • Do you know something about the risk to reaching Done and do you have plans to mitigate those risks in some way?
  • Do you have some way to measure physical percent complete toward Done, again in units meaningful to the decision makers, so you can get feedback (variance) from your work to take corrective actions to keep the project going in the right direction?

The answers should be YES to these Five Immutable Principles of Project Success

If not, you're late, over budget, and have a low probability of success on Day One.

†NRO Cost Group Risk Process, Aerospace Corporation, 2003

Related articles: Applying the Right Ideas to the Wrong Problem | Herding Cats: #NoEstimates Book Review - Part 1 | Some More Background on Probability, Needed for Estimating | A Framework for Managing Other Peoples Money | Are Estimates Really The Smell of Dysfunction? | Five Estimating Pathologies and Their Corrective Actions | Qualitative Risk Management and Quantitative Risk Management
Categories: Project Management

Software Development Linkopedia August 2016

From the Editor of Methods & Tools - Wed, 08/17/2016 - 15:25
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about team management, the (new) software crisis, saying no, software testing, user experience, data modeling, Scrum retrospectives, java microservices, Selenium tests and product backlog refinement. Blog: The Principles of Quantum Team […]

Two Teams or Not: First Do No Harm (Part 2)

[Image: a pile of empty pizza boxes - WIP limits are needed to stop waiting in queues.]

Recently a long-time reader and listener came to me with a question about a team with two sub-teams that were not participating well together. In a previous entry we began describing how kanban or Scrumban could be leveraged to help teams identify issues with how they work and then to fix them.  We conclude with the last two steps in a simple approach to leveraging kanban or Scrumban:

  1.   Establish beginning WIP limits for each task. Work in Process (WIP) limits indicate how many items any specific task should control at a time (being worked on or waiting in queue). An easy approach to determining an initial WIP limit for a task is to count the number of people whose primary responsibility is to perform that task (Joe is primarily a coder - count 1 coder) under the assumption that a person can only do one thing at a time (a good assumption), and then use the count of people as the WIP limit. Roles that are spread across multiple people are a tad messier, but start by summing the fraction of time each person who does the role typically spends in that function (round to the nearest whole person for the WIP limit). The initial WIP limit is merely a starting point and should be tuned as constraints and bottlenecks are observed (see the next step).

As the team is determining the WIP limits, think about whether there are tasks that only one person can perform that are necessary for a story to get to production. These steps are potential bottlenecks or constraints.  When developing the WIP limits identify alternates that can perform tasks (remember T-shaped people!).  If members of a silo can participate only in their own silo it will be difficult for them to help fellow team members outside their silo, which can be harmful to team morale.  This type of issue suggests a need for cross training (or pair-programming or mob programming) to begin knowledge transfer.  

  2.   Pull stories from the backlog and get to work! Pull the highest priority stories into the first task or tasks (if you have multiple independent workflows you will have multiple entry points into the flow). When a story is complete it should be pulled into the next task, if that task has not reached its WIP limit. If a story can’t be pulled into the next step, it will have to wait. When stories have to wait, there is a bottleneck and a learning opportunity for the team.

As soon as stories begin to queue up waiting to get to the next step in the flow, hold an ad-hoc retrospective.  Ask the team to determine why there is a bottleneck. One problem might be that the WIP limit of the previous task is too high.  Ask them how to solve the problem.  If they need help getting started ask if the queue of stories is due to a temporary problem (for example, Joe is out due to the flu) and then ask if there is more capacity to tide things over.  If the reason is not temporary (for example, only a single person can do a specific task, or stories are too large and tend to get stuck) ask the team to identify a solution that can be implemented and tested.  The goal is to have the team identify the solution rather than have the solution imposed on them from someone else (think buy-in).
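
The initial-WIP-limit heuristic described above - sum the role fractions, round to the nearest whole person - can be sketched in a few lines. The team roster and role fractions are invented for the example.

```python
# Fraction of time each (hypothetical) team member spends in each role.
# Joe is primarily a coder (1.0); Ana splits her time between coding
# and testing.
time_per_role = {
    "code":   {"Joe": 1.0, "Ana": 0.5},
    "test":   {"Ana": 0.5, "Raj": 1.0},
    "review": {"Lee": 0.4},
}

def initial_wip_limit(fractions):
    """Sum role fractions, round to the nearest whole person (min 1)."""
    return max(1, round(sum(fractions.values())))

wip_limits = {role: initial_wip_limit(f) for role, f in time_per_role.items()}
print(wip_limits)  # a starting point only - tune as bottlenecks appear
```

Roles like "review" here, covered by a fraction of one person, are exactly the potential bottlenecks the text warns about; they are the first candidates for cross-training or pairing.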

Using kanban or Scrumban to identify and generate solutions to how teams work facilitates the development of good teams. Good Agile teams exhibit three attributes:

  • Bounded – Team members belong to the team and the relationships that they develop will be “sticky.”
  • Cross-functional – Cross-functional teams spend less time negotiating hand-offs and tracking down who can or should do any piece of work, thereby reducing the potential for bottlenecks.
  • Self-organized and self-managed – Self-organized and self-managed teams don’t need to wait for permission to make the decisions needed to remove bottlenecks or process constraints.

Overlaying kanban or Scrumban on top of the team’s current process does not change anything... to start with. But it does position the team to take action when they SEE a problem. Visualization of how work is flowing will show the team where bottlenecks occur. The scrum master or coach then needs to challenge the team to eliminate those bottlenecks, promoting the health of the team in the process.

 


Categories: Process Management

A Google Santa Tracker update from Santa's Elves

Google Code Blog - Wed, 08/17/2016 - 00:10

Sam Thorogood, Developer Programs Engineer

Today, we're announcing that the open source version of Google's Santa Tracker has been updated with the Android and web experiences that ran in December 2015. We extended, enhanced and upgraded our code, and you can see how we used our developer products - including Firebase and Polymer - to build a fun, educational and engaging experience.

To get started, you can check out the code on GitHub at google/santa-tracker-web and google/santa-tracker-android. Both repositories include instructions so you can build your own version.

Santa Tracker isn’t just about watching Santa’s progress as he delivers presents on December 24. Visitors can also have fun with the winter-inspired experiences, games and educational content by exploring Santa's Village while Santa prepares for his big journey throughout the holidays.

Below is a summary of what we’ve released as open source.

Android app
  • The Santa Tracker Android app is a single APK, supporting all devices, such as phones, tablets and TVs, running Ice Cream Sandwich (4.0) and up. The source code for the app can be found here.
  • Santa Tracker leverages Firebase features, including Remote Config API, App Invites to invite your friends to play along, and Firebase Analytics to help our elves better understand users of the app.
  • Santa’s Village is a launcher for videos, games and the tracker that responds well to multiple devices such as phones and tablets. There's even an alternative launcher based on the Leanback user interface for Android TVs.

  • Games on Santa Tracker Android are built using many technologies such as JBox2D (gumball game), the Android view hierarchy (memory match game) and OpenGL with a special rendering engine (jetpack game). We've also included a holiday-themed variation of Pie Noon, a fun game that works on Android TV, your phone, and inside Google Cardboard's VR.
Android Wear

  • The custom watch faces on Android Wear provide a personalized touch. Having Santa or one of his friendly elves tell the time brings a smile to all. Building custom watch faces is a lot of fun but providing a performant, battery friendly watch face requires certain considerations. The watch face source code can be found here.
  • Santa Tracker uses notifications to let users know when Santa has started his journey. The notifications are further enhanced to provide a great experience on wearables using custom backgrounds and actions that deep link into the app.
On the web

  • Santa Tracker is mobile-first: this year's experience was built for the mobile web, including an amazing brand-new, interactive yet fully responsive village, with three breakpoints, touch gesture support and support for the Web App Manifest.
  • To help us develop Santa at scale, we've upgraded to Polymer 1.0+. Santa Tracker's use of Polymer demonstrates how easy it is to package code into reusable components. Every house in Santa's Village is a custom element, only loaded when needed, minimizing the startup cost of Santa Tracker.

  • Many of the amazing new games (like Present Bounce) were built with the latest JavaScript standards (ES6) and are compiled to support older browsers via the Google Closure Compiler.
  • Santa Tracker's interactive and fun experience is enhanced using the Web Animations API, a standardized JavaScript API for unifying animated content.
  • We simplified the Chromecast support this year, focusing on a great screensaver that would countdown to the big event on December 24th - and occasionally autoplay some of the great video content from around Santa's Village.

We hope that this update inspires you to make your own magical experiences based on all the interesting and exciting components that came together to make Santa Tracker!

Categories: Programming

SE-Radio Episode 266: Charles Nutter on the JVM as a Language Platform

Charles Nutter talks to Charles Anderson about the JRuby language and the JVM as a platform for implementing programming languages. They discuss JRuby and its implementation on the JVM as an example of a language other than Java on the JVM. Venue: Skype Related Links Charles Nutter on Twitter: https://twitter.com/headius Charles Nutter on GitHub: https://github.com/headius JRuby […]
Categories: Programming

Range of Domains in Software Development

Herding Cats - Glen Alleman - Tue, 08/16/2016 - 17:45

Once again I've encountered a conversation about estimating where there was a broad disconnect between the world I work in - Software Intensive System of Systems - and our approach to Agile software development, and someone claiming things that would be unheard of here.

Here's a briefing I built to sort out where on the spectrum you are before proceeding further, since what works in your domain may actually be forbidden in mine.

So when someone starts stating what can or can't be done, what can or can't be known, what can or can't be a process - ask what domain do you work in?

Paradigm of agile project management from Glen Alleman

Related articles: Agile Software Development in the DOD | How to Think Like a Rocket Scientist - Irreducible Complexity | Herding Cats: Value and the Needed Units of Measure to Make Decisions | The Art of Systems Architecting | Complex Project Management
Categories: Project Management

Sponsored Post: Zohocorp, Exoscale, Host Color, Cassandra Summit, Scalyr, Gusto, LaunchDarkly, Aerospike, VividCortex, MemSQL, AiScaler, InMemory.Net

Who's Hiring?
  • IT Security Engineering. At Gusto we are on a mission to create a world where work empowers a better life. As Gusto's IT Security Engineer you'll shape the future of IT security and compliance. We're looking for a strong IT technical lead to manage security audits and write and implement controls. You'll also focus on our employee, network, and endpoint posture. As Gusto's first IT Security Engineer, you will be able to build the security organization with direct impact to protecting PII and ePHI. Read more and apply here.

Fun and Informative Events
  • High-Scalability Database Beer Bash. Come join Aerospike and like-minded peers on Wednesday, September 7 from 6:30-8:30 PM in San Jose, CA for an informal meet-up of great food and libations. You'll have the chance to learn about Aerospike's high-performance NoSQL database for mission-critical applications, and about the use cases of the companies switching to Aerospike from first-generation NoSQL databases such as Cassandra and Redis. Feel free to invite colleagues and peers! RSVP: bit.ly/DBbeer

  • Join database experts from companies like Apple, ING, Instagram, Netflix, and many more to hear about how Apache Cassandra changes how they build, deploy, and scale at Cassandra Summit 2016. This September in San Jose, California is your chance to network, get certified, and trained on the leading NoSQL, distributed database with an exclusive 20% off with  promo code - Academy20. Learn more at CassandraSummit.org
Cool Products and Services
  • Do you want a simpler public cloud provider but you still want to put real workloads into production? Exoscale gives you VMs with proper firewalling, DNS, S3-compatible storage, plus a simple UI and straightforward API. With datacenters in Switzerland, you also benefit from strict Swiss privacy laws. From just €5/$6 per month, try us free now.

  • High Availability Cloud Servers in Europe: High Availability (HA) is very important on the Cloud. It ensures business continuity and reduces application downtime. High Availability is a standard service on the European Cloud infrastructure of Host Color, active by default for all cloud servers, at no additional cost. It provides uniform, cost-effective failover protection against any outage caused by a hardware or an Operating System (OS) failure. The company uses VMware Cloud computing technology to create Public, Private & Hybrid Cloud servers. See Cloud service at Host Color Europe.

  • Dev teams are using LaunchDarkly’s Feature Flags as a Service to get unprecedented control over feature launches. LaunchDarkly allows you to cleanly separate code deployment from rollout. We make it super easy to enable functionality for whoever you want, whenever you want. See how it works.

  • Scalyr is a lightning-fast log management and operational data platform. It's a tool (actually, multiple tools) that your entire team will love. Get visibility into your production issues without juggling multiple tabs and different services - all of your logs, server metrics and alerts are in your browser and at your fingertips. Loved and used by teams at Codecademy, ReturnPath, Grab, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • InMemory.Net provides a .NET-native in-memory database for analysing large amounts of data. It runs natively on .NET, and provides native .NET, COM & ODBC APIs for integration. It also has an easy-to-use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex measures your database servers’ work (queries), not just global counters. If you’re not monitoring query performance at a deep level, you’re missing opportunities to boost availability, turbocharge performance, ship better code faster, and ultimately delight more customers. VividCortex is a next-generation SaaS platform that helps you find and eliminate database performance problems at scale.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager: Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

 

If any of these items interest you there's a full description of each sponsor below...

Categories: Architecture

The Legend of the 5 Monkeys, the Doctor and the Rose

Xebia Blog - Mon, 08/15/2016 - 17:16
As Product Managers people look up to us to carry the vision, to make sure all the noses are aligned, the troops are rallied and that sort of stuff. But what is it that influences behavior? And what makes your team do what they do? The answer has more to do with you than with

How PayPal Scaled to Billions of Transactions Daily Using Just 8VMs

How did PayPal take a billion-hits-a-day system that might traditionally run on 100s of VMs and shrink it down to run on 8 VMs, stay responsive even at 90% CPU, at transaction densities PayPal has never seen before, with jobs that take 1/10th the time, while reducing costs and allowing for much better organizational growth without growing the compute infrastructure accordingly?

PayPal moved to an Actor model based on Akka. PayPal told their story here: squbs: A New, Reactive Way for PayPal to Build Applications. They open sourced squbs and you can find it here: squbs on GitHub.

The stateful service model still doesn't get enough consideration when projects are choosing a way of doing things. To learn more about stateful services there's an article, Making The Case For Building Scalable Stateful Services In The Modern Era, based on a great talk given by Caitie McCaffrey. And if that doesn't convince you, here's WhatsApp, which used Erlang, an Akka competitor, to achieve incredible throughput: The WhatsApp Architecture Facebook Bought For $19 Billion.

I refer to the above articles because the PayPal article is short on architectural details. It's more about the factors that led to the selection of Akka and the benefits they've achieved by moving to Akka. But it's a very valuable motivating example for doing something different from the status quo. 
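To make the actor idea concrete, here is a minimal sketch of the core pattern squbs builds on. It is in Python rather than Akka's Scala/Java, all names are invented, and it is only an illustration, not PayPal's implementation: each actor owns private state plus a mailbox, and a single thread drains the mailbox, so state is mutated without locks and nothing blocks on shared-state contention.

```python
import queue
import threading

class Actor:
    """A minimal actor: private state plus a mailbox drained by one thread.

    Because only the actor's own thread ever touches its state, no locks
    are needed and messages are handled strictly in arrival order.
    """
    def __init__(self):
        self._mailbox = queue.Queue()
        self._count = 0  # private state, never shared directly
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            msg = self._mailbox.get()
            if msg is None:          # poison pill shuts the actor down
                break
            self._count += msg       # state mutated only on this thread

    def tell(self, msg):
        """Fire-and-forget message send, as in Akka's tell."""
        self._mailbox.put(msg)

    def stop(self):
        self._mailbox.put(None)
        self._thread.join()
        return self._count

counter = Actor()
for _ in range(1000):
    counter.tell(1)
print(counter.stop())  # 1000
```

Akka layers supervision, routing, and clustering on top of this core loop; the sketch only shows why per-actor serialized message handling lets a stateful service stay responsive without lock contention.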

What's wrong with services on lots of VMs approach?

Categories: Architecture

SPaMCAST 407 – Magazine with Cagley, Hughson, Pries, and Tendon

SPaMCAST Logo

http://www.spamcast.net

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 407 includes four separate columns.  We begin with a short essay refreshing the pros and cons of Test Driven Development. Test Driven Development promises a lot of benefits but all is not light, kittens and puppies. Still, TDD is well worth doing if you go into it with your eyes open.

Our second column features Kim Pries, the Software Sensei.  Kim discusses what makes software “good.” The Software Sensei puts the “good” in quotes because it is actually a difficult word to define but Kim is willing to give the discussion a go!

In our third column, we return to Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban, published by J. Ross (buy a copy here). We tackle Chapter 10, which is titled The Thinking Processes. Thinking processes are key to effectively using Agile, Lean, and Kanban processes.  

Gene Hughson anchors the cast with an entry from his Form Follows Function Blog.  In this installment, we discuss the blog entry titled “Learning to Deal with the Inevitable.”  Gene and I discussed change which is inevitable and innovation which is not quite as inevitable.

Re-Read Saturday News

This week we continue our re-read of Kent Beck’s XP Explained, Second Edition with a discussion of Chapters 16 and 17.   Chapter 16 ends Section One with an interview with Brad Jensen.  Section Two addresses the philosophies of XP.  Chapter 17 tells the creation story of XP from Beck’s point of view.

We are going to read The Five Dysfunctions of a Team by Patrick Lencioni (published by Jossey-Bass).  This will be a new book for me, therefore, an initial read (I have not read this book yet), not a re-read!  Steven Adams suggested the book and it has been on my list for a few years! Click the link (The Five Dysfunctions of a Team), buy a copy and in a few weeks, we will begin to read the book together.

Use the link to XP Explained in the show notes when you buy your copy to read along to support both the blog and podcast. Visit the Software Process and Measurement Blog (www.tcagley.wordpress.com) to catch up on past installments of Re-Read Saturday.

Next SPaMCAST

In the next Software Process and Measurement Cast, we will feature our interview with Kupe Kupersmith. Kupe brings his refreshing take on the role of the business analyst in today’s dynamic environment.  This interview was informative, provocative and entertaining.     

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.


Categories: Process Management

SPaMCAST 407 - Magazine with Cagley, Hughson, Pries, and Tendon

Software Process and Measurement Cast - Sun, 08/14/2016 - 22:00

Categories: Process Management

Value and the Needed Units of Measure to Make Decisions

Herding Cats - Glen Alleman - Sun, 08/14/2016 - 20:18

For some reason the notion of value is a big mystery in the agile community. Many blogs, tweets, and books speak about Value as the priority in agile software development:

We focus on Value over cost. We produce Value at the end of every Sprint. Value is the most important aspect of Scrum-based development.

Without units of measure of Value beyond time and money, there can be no basis of comparison between one value based choice and another.

In the Systems Engineering world where we work, there are four critical units of measure for all we do.

  • Measures of Effectiveness - these are operational measures of success that are closely related to the achievements of the mission or operational objectives evaluated in the operational environment, under a specific set of conditions.
    • MOE's are stated in units of measure meaningful to the buyer
    • They focus on capabilities independent of any technical implementation
    • They are connected with mission success
    • MOE's belong to the End User
  • Measures of Performance - characterize physical or functional attributes relating to the system operation, measured or estimated under specific conditions.
    • MOP's are attributes that assure the system's capability to perform
    • They are an assessment of the system to assure it meets the design requirements to satisfy the Measures of Effectiveness 
    • MOP's belong to the project
  • Technical Performance Measures - are attributes that determine how well a system or system element is satisfying or expected to satisfy a technical requirement or goal
    • TPMs assess the design process
    • They define compliance to performance requirements
    • They identify technical risk
    • They are limited to critical thresholds, and
    • They include projected performance
  • Key Performance Parameters - represent the capabilities and characteristics so significant that failure to meet them can be cause for reevaluation, reassessing, or termination of the program
    • KPP's have a threshold or objective value
    • They characterize the major drivers of performance
    • The buyer defines the KPP's during the operational concept development process - KPP's say what DONE looks like
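To illustrate how these units of measure support a decision, here is a hedged Python sketch of checking a candidate design's Measures of Performance against KPP threshold and objective values. All parameter names and numbers below are invented for illustration; real KPP's come from the buyer's operational concept development.

```python
# Key Performance Parameters: a threshold the design must meet, and an
# objective value it should aim for. These figures are hypothetical.
KPPS = {
    "transactions_per_second": {"threshold": 500, "objective": 1000},
    "availability_pct": {"threshold": 99.9, "objective": 99.99},
}

def evaluate(design):
    """Return (meets_all_thresholds, per-KPP report) for a candidate design.

    `design` maps each KPP name to its measured or estimated Measure of
    Performance under the specified conditions.
    """
    report = {}
    for name, levels in KPPS.items():
        measured = design[name]
        report[name] = {
            "measured": measured,
            "meets_threshold": measured >= levels["threshold"],
            "meets_objective": measured >= levels["objective"],
        }
    # Missing any threshold is cause for reevaluation or termination.
    ok = all(r["meets_threshold"] for r in report.values())
    return ok, report

ok, report = evaluate({"transactions_per_second": 750, "availability_pct": 99.95})
print(ok)  # True: both thresholds met, though not every objective
```

The point of the sketch is the structure, not the numbers: a value claim only becomes comparable to another value claim once it is stated against explicit thresholds like these.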

So when you read about value and don't hear about the units of measure of Effectiveness and Performance, along with the associated TPM's and KPP's, it's going to be hard to have any meaningful discussion about the return on investment for the cost needed to produce that value.

Here's how these are related...

[Figure: how Measures of Effectiveness, Measures of Performance, TPMs, and KPPs relate]

Related articles:

  • Capabilities Based Planning First Then Requirements
  • Systems Thinking, System Engineering, and Systems Management
  • What Can Lean Learn From Systems Engineering?
Categories: Project Management

Python: matplotlib/seaborn/virtualenv – Python is not installed as a framework

Mark Needham - Sun, 08/14/2016 - 19:56

Over the weekend I was following The Marketing Technologist’s content based recommender tutorial but ran into the following exception when trying to import the seaborn library:

$ python 5_content_based_recommender/run.py 
Traceback (most recent call last):
  File "5_content_based_recommender/run.py", line 14, in <module>
    import seaborn as sns
  File "/Users/markneedham/projects/themarketingtechnologist/tmt/lib/python2.7/site-packages/seaborn/__init__.py", line 6, in <module>
    from .rcmod import *
  File "/Users/markneedham/projects/themarketingtechnologist/tmt/lib/python2.7/site-packages/seaborn/rcmod.py", line 8, in <module>
    from . import palettes, _orig_rc_params
  File "/Users/markneedham/projects/themarketingtechnologist/tmt/lib/python2.7/site-packages/seaborn/palettes.py", line 12, in <module>
    from .utils import desaturate, set_hls_values, get_color_cycle
  File "/Users/markneedham/projects/themarketingtechnologist/tmt/lib/python2.7/site-packages/seaborn/utils.py", line 12, in <module>
    import matplotlib.pyplot as plt
  File "/Users/markneedham/projects/themarketingtechnologist/tmt/lib/python2.7/site-packages/matplotlib/pyplot.py", line 114, in <module>
    _backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
  File "/Users/markneedham/projects/themarketingtechnologist/tmt/lib/python2.7/site-packages/matplotlib/backends/__init__.py", line 32, in pylab_setup
    globals(),locals(),[backend_name],0)
  File "/Users/markneedham/projects/themarketingtechnologist/tmt/lib/python2.7/site-packages/matplotlib/backends/backend_macosx.py", line 24, in <module>
    from matplotlib.backends import _macosx
RuntimeError: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends. If you are Working with Matplotlib in a virtual enviroment see 'Working with Matplotlib in Virtual environments' in the Matplotlib FAQ

We can see from the stacktrace that seaborn calls matplotlib so that’s where the problem lies. There’s even a page on the matplotlib website suggesting some workarounds.
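For reference, one of the workarounds commonly listed there is switching matplotlib away from the macosx backend, for example via a matplotlibrc file. The path and backend choice below are assumptions about a typical setup, not a guaranteed fix:

```
# ~/.matplotlib/matplotlibrc
# Use a backend that does not require a framework build of Python.
# TkAgg is interactive; Agg is a safe non-interactive fallback.
backend : TkAgg
```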

I’ve come across this error before and been unable to get any of the suggestions to work, but this time I was successful. I needed to create the following function in my bash profile file:


~/.bash_profile

function frameworkpython {
    # Use the system (framework) build of Python, but point PYTHONHOME at
    # the active virtualenv so its installed packages are still found.
    if [[ ! -z "$VIRTUAL_ENV" ]]; then
        PYTHONHOME=$VIRTUAL_ENV /usr/bin/python "$@"
    else
        /usr/bin/python "$@"
    fi
}

And call that function instead of my virtualenv’s python:

$ frameworkpython 5_content_based_recommender/run.py

This time the matplotlib visualisation works:

[Screenshot: the matplotlib visualisation rendering correctly]

#win

Categories: Programming

Extreme Programming Explained, Second Edition: Re-Read Week 9 (Chapters 16 – 17)

XP Explained Cover

This week we reach a bit of a transition in Kent Beck and Cynthia Andres’s Extreme Programming Explained, Second Edition (2005). A short but contextually important entry. Chapter 16 ends Section One with an interview with Brad Jensen.  Section Two addresses the philosophies of XP.  Chapter 17 tells the creation story of XP from Beck’s point of view.

Chapter 16: Interview
The chapter is an interview with Brad Jensen of Sabre Airline Solutions that details his organization’s story of implementing and using XP.  The story provides guidance, cautions and a presentation of the benefits gained from using XP.

Jensen reported that Sabre experienced a substantial reduction in defects.  The reduction in defects was substantial enough to outperform the CMMI Maturity Level 5 organizations that participated in the Bengaluru (Bangalore) SPIN.

Implementing XP was not easy; it was a slow process of establishing buy-in, eventually reaching 80 – 90% penetration. He indicated that at the end of the day they had the courage to fire people who wouldn’t pair program. In essence, those who eventually couldn’t or wouldn’t make the transition were jettisoned so the organization could reap the benefits of quality and productivity.

The interview ends with Jensen’s recommendation to use XP.

Section 2: Philosophy of XP

This section of the book discusses the philosophy of XP.  Over the nine chapters in this section, Beck and Andres explore applying lessons from others to accelerate the adoption of XP, and applying XP in the varied development environments found in today’s world.

Chapter 17: Creation Story
If you reflect, often the difference between the ideas that catch on and those that don’t is less about effectiveness and more a function of their legends or stories. The story serves to anchor the ideas and to place them in context so they can more easily be consumed.  Chip Heath and Dan Heath, in their book Made to Stick, described how ideas become sticky. A story that connects with the listener’s perception of the world is a major step towards making the idea stick in their mind.  The XP creation story described by Beck reflects a project reality that almost everyone in software development and maintenance has experienced.  The fact that we can relate predisposes developers to listen and accept the ideas that are at the core of XP.

XP was “born” on the Chrysler payroll project. Beck was called in to evaluate and later to rescue a large project that had turned into a “train wreck.” The assessment was that if the project were left to continue on its current path, delivery was a pipe dream.  Beck was asked to rescue the project, which provided a laboratory for using XP.  In the book, Beck stated, “My goal laying out the project style was to take everything I knew to be valuable about software engineering and turn the dials to 10.” This was the outline of Extreme Programming. Leveraging XP helped turn the project that had begun as a train wreck into a business success.  The XP creation story presents us with a version of the hero’s journey: where the journey started, the trials along the way, the goal that was attained, and the steps to move forward after the goal was met.

The recitation of the creation story serves multiple purposes.  First, the story makes the point that XP can scale to be used on large projects successfully.  Additionally, the creation story and the interview in Chapter 16 explicitly identify benefits that are attainable using XP, which is important for explaining the value of XP.

Extreme Programming Explained: Embrace Change Second Edition Week 1, Preface and Chapter 1

Week 2, Chapters 2 – 3

Week 3, Chapters 4 – 5

Week 4, Chapters 6 – 7  

Week 5, Chapters 8 – 9

Week 6, Chapters 10 – 11

Week 7, Chapters 12 – 13

Week 8, Chapters 14 – 15

A few quick notes. We are going to read The Five Dysfunctions of a Team by Patrick Lencioni (published by Jossey-Bass).  This will be a new book for me, therefore, an initial read (I have not read this book yet), not a re-read!  Steven Adams suggested the book and it has been on my list for a few years! Click the link (The Five Dysfunctions of a Team), buy a copy and in a few weeks, we will begin to read the book together.

 


Categories: Process Management

Two Teams or Not: First Do No Harm (Part 1)

 

Restroom Closed Sign

Sometimes a process change is required!

Coaching is a function of listening, asking questions and then listening some more.  All of this listening and talking has a goal: to help those being coached down a path of self-discovery and to help them to recognize the right choice for action or inaction.  Sometimes the right question is not a question at all, but rather an exercise of visualization.

Recently when a long-time reader and listener came to me with a question about a team with two sub-teams that were not participating well together, I saw several paths to suggest.  The first set of paths focused on how people behave during classic Scrum meetings and how the team could structure stories.  However, another path presented itself as I continued to consider options based on the question.  

As a reminder, the team is composed of 8 – 10 people using Scrum, but the team operates in two basic silos.  One subset works on UI related stories while a second focuses on the backend related stories. Let’s pretend that after a long discussion with the team on whether there were really two teams in one or whether splitting the stories differently would address the issue, the team was still unsure how they wanted to address the problem.

Another path to self-discovery is to start at “nothing” and determine if the siloization is causing substantial problems.  I have found that many times teams feel powerless to address process and organizational structure issues unless they can visualize the problem.  Visualization takes the problem out of the theoretical (something feels wrong but I don’t know what it is) and makes it tangible.  This is where kanban – or in this case, Scrumban – is valuable as a tool to help the team to identify their own problem.  A simple approach to consider would include the following steps:

  1. Visualize the workflow. Identify the major steps in the process of delivering functionality that is performed by the team.  Begin with the backlog and end when your team has completed working on the story (hopefully, this results with the functionality in the hands of users).  On a whiteboard (butcher paper and sticky notes also work) write the steps across the top with the backlog on the far left and done/production on the far right.  Arrange the steps in the order they happen.

  2. Consider how the tasks related to the two silos interact. If you have two standalone workflows that are independent of each other (independence defined as each sub-team being able to draw a story, complete their steps, and put the functionality in production), you have two separate teams living under a single roof. Then the question is: is this a bad thing?  It might not be a problem. Until that issue is tackled, relax and move forward as two teams using Scrumban, or revert to the current method (there is no harm from visualization). For all other scenarios go to the next step.   Note: visualization can expose all sorts of process problems that do not relate to the two-team issue.  I suggest not making any process changes until you take the next two steps, which position the team to collect data and structured experience.
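For teams that want to experiment before committing to a whiteboard, the visualization exercise above can even be sketched in a few lines of Python. The column names and stories below are invented; the point is simply to make the left-to-right flow, and where the two silos' work meets, tangible:

```python
# A toy model of the board: columns ordered left (backlog) to right (done).
# Column and story names are hypothetical examples.
board = {
    "Backlog": ["story-3", "story-4"],
    "UI":      ["story-2"],   # the first silo's step
    "Backend": ["story-1"],   # the second silo's step
    "Done":    [],
}

def move(board, story, src, dst):
    """Advance a story one step across the board."""
    board[src].remove(story)
    board[dst].append(story)

move(board, "story-1", "Backend", "Done")
print(board["Done"])  # ['story-1']
```

If every story can reach Done without ever visiting the other silo's column, the model is showing you the two-independent-workflows case described above.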

In the next installment we will progress from visualization to assigning work-in-process (WIP) limits, doing work, and then using data to recognize problems and make changes that lead to a healthy team.


Categories: Process Management

New features for reviews and experiments in Google Play Developer Console app

Android Developers Blog - Wed, 08/10/2016 - 20:07

Posted by Kobi Glick, Google Play team

With over one million apps published through the Google Play Developer Console, we know how important it is to publish with confidence, acquire users, learn about them, and manage your business. Whether reacting to a critical performance issue or responding to a negative review, checking on your apps when and where you need to is invaluable.

The Google Play Developer Console app, launched in May, has already helped thousands of developers stay informed of crucial business updates on the go.

We’re excited to tell you about new features, available today:

  • Receive notifications about new reviews

  • Use filters to find the reviews you want

  • Review and apply store listing experiment results

  • Increase the percentage of a staged rollout or halt a bad staged rollout

Download the Developer Console app on Google Play and stay on top of your apps and games, wherever you are! Also, get the Playbook for Developers app to stay up-to-date with more features and best practices that will help you grow a successful business on Google Play.

Categories: Programming

Adding a bit more reality to your augmented reality apps with Tango

Google Code Blog - Wed, 08/10/2016 - 19:14

Posted by Sean Kirmani, Software Engineering Intern, Tango

Augmented reality scenes, where a virtual object is placed in a real environment, can surprise and delight people whether they’re playing with dominoes or trying to catch monsters. But without support for environmental lighting, these virtual objects can stick out rather than blend in with their environments. Ambient lighting should bleed onto an object, real objects should be seen in reflective surfaces, and shade should darken a virtual object.

Tango-enabled devices can see the world like we do, and they’re designed to bring mobile augmented reality closer to real reality. To help bring virtual objects to life, we’ve updated the Tango Unity SDK to enable developers to add environmental lighting to their Tango apps. Here’s how to get started:

Let’s dive in!

Before we begin, you’ll need to download the Tango Unity SDK. Then you can follow the steps below to make your reality a little brighter.

Step 1: Create a new Unity project and import the Tango SDK package into the project.

Step 2: Create a new scene. If you need help with this, check out the solar system tutorial from a previous post. Then you’ll add Tango Manager and Tango AR Camera prefabs to your scene and remove the default Main Camera game object. Also remove the artificial directional light. We won’t need that anymore. After doing this, you should see the scene hierarchy like this:

Step 3: In the Tango Manager game object, you’ll want to check Enable Video Overlay and set the method to Texture and Raw Bytes.

Step 4: Under Tango AR Camera, look for the Tango Environmental Lighting component. Make sure that the Enable Environmental Lighting checkbox is checked.

Step 5: Add your game object that you’d like to be environmental lit to the scene. In our example, we’ll be using a pool ball. So let’s add a new Sphere.

Step 6: Let’s create a new material for our sphere. Go to Create > Material. We’ll be using our environmental lighting shader on this object. Under Shader, select Tango > Environmental Lighting > Standard.

Step 7: Let’s add a texture to our pool ball and tweak our smoothness parameter. The higher the smoothness, the more reflective our object becomes. Rougher objects have more of a diffuse lighting that is softer and spreads over the surface of the object. You can download the pool_ball_texture and import it into your project.

Step 8: Add your new material to your sphere, so you have a nicer looking pool ball.

Step 9: Compile and run the application again. You should be able to see the environment-lit pool ball now!

You can also follow our previous post and be able to place your pool ball on surfaces. You don’t have to worry about your sphere rolling off your surface. Here are some comparison pictures of the pool ball with a static artificial light (left) and with environment lighting (right).

We hope you enjoyed this tutorial combining the joy of environmental lighting with the magic of AR. Stay tuned to this blog for more AR updates and tutorials!

We’re just getting started!

You’ve just created a more realistically lit pool ball that lives in AR. That’s a great start, but there’s a lot more you can do to make a high performance smartphone AR application. Check out our Unity example code on Github (especially the Augmented Reality example) to learn more about building a good smartphone AR application.

Categories: Programming

Android Developer Story: Hole19 improves user retention with Android Wear

Android Developers Blog - Wed, 08/10/2016 - 18:48

Posted by Lily Sheringham, Google Play team

Based in Lisbon, Portugal, Hole19 is a golfing app which assists golfers before, during, and after their golfing journey with GPS and a digital scorecard. The app connects the golfing community with shared statistics for performance and golf courses, and now has close to 1 million users across all platforms.

Watch Anthony Douglas, Founder & CEO, and Fábio Carballo, Head Android Developer, explain how Hole19 doubled its number of Android Wear users in 6 months, and improved user engagement and retention on the platform. Also, hear how they are using APIs and the latest Wear 2.0 features to connect users to their golfing data and improve the user experience.


Learn more about how to get started with Android Wear and get the Playbook for Developers app to stay up-to-date with more features and best practices that will help you grow a successful business on Google Play.

Categories: Programming