Software Development Blogs: Programming, Software Testing, Agile, Project Management


Methods & Tools

Subscribe to Methods & Tools if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator


The Pseudo Science of No Estimates

Herding Cats - Glen Alleman - Mon, 03/28/2016 - 15:28

I listened to the final rant of Jon Stewart (BIG CAUTION: this show is from cable and for adults only, just like risk management) and came away with inspiration for a post, which I've edited a bit to remove the phrases not applicable here.

Pseudoscience and science – the former is a belief based on logical fallacies that is supported by some people who may seem rational; the latter is an actual rational methodology to discover facts about the natural universe. The former is utter bullshit. And the latter is fact. Deal with it. What follows is an almost literal translation of Jon Stewart's last broadcast.

Bullshit is everywhere. There is very little that you will encounter in life that has not been, in some ways, infused with bullshit.

Not all of it bad. Your general, day-to-day, organic free-range bullshit is often necessary. That kind of bullshit in many ways provides important social-contract fertilizer. It keeps people from making each other cry all day.

But then there's the more pernicious bullshit – your premeditated, institutional bullshit, designed to obscure and distract. Designed by whom? The bullshitocracy.

It comes in three basic flavors. One, making bad things sound like good things. "Estimates are the smell of dysfunction, so let's not estimate and the dysfunction will disappear." Because "we're just developers who can't even make high-level estimates of how much it will cost, and we work for bonehead managers who can't tell the difference between a good estimate and a bad estimate" doesn't have the same ring.

"Estimates inhibit creativity, restrict our ability to be flexible, and the other restrictions to our creativity"  sounds better than "we have no clue what we're doing, how much it will cost, or when we'll be done, so juts give us the money so we can start spending," So whenever something’s been titled Pure Agile, or 10X improvement in productivity, searching for the Magic take a good long sniff. Chances are it’s been manufactured in a facility that may contain traces of bullshit

Number two: hiding the bad things under mountains of bullshit complexity. You know, I would love to download Drizzy's latest Meek Mill diss. But I'm not really interested right now in reading Tolstoy's iTunes agreement. So I'll just click agree, even if it grants Apple prima nocte with my spouse. This brings us to the discussion of Value at Risk, and what estimates are for other than to protect Value at Risk. And the willful ignorance of how every business works: projects come with probabilistic events and statistical variance, and those uncertainties must be dealt with for the business to have any hope of surviving.

And finally, it's the bullshit of infinite possibility. The Unicorn approach to solving hard problems, by claiming all big problems can be broken down into little problems. These bullshitters cover their unwillingness to act under the guise of unending inquiry. We can't do anything because we can't possibly know anything in the presence of uncertainty. We cannot take action to improve that knowledge until everyone in the world agrees we're not headed down the slippery slope of governance of how we spend other people's money. Until then, I say it leads to controversy.

Now, the good news is this. Bullshitters have gotten pretty lazy. And their work is easily detected. And looking for it is kind of a pleasant way to pass the time.

So when you encounter some claim – "estimates are the smell of dysfunction," or the latest "there are countless good ways to make decisions: estimates is one" – then ask for evidence. Ask for working examples that can be tested against some set of principles, not just personal anecdotes. If there are no principles, no testable evidence, then ...

The best defense against bullshit is vigilance. So if you smell something, say something.

Related articles:
  • Taxonomy of Logical Fallacies
  • Here, There Be Dragons
  • Late Start = Late Finish
  • Information Technology Estimating Quality
Categories: Project Management

Work Like It Matters

Making the Complex Simple - John Sonmez - Mon, 03/28/2016 - 13:00

Some of you reading this probably have a deadline looming. Some of you probably started reading this on your work computer – or on your phone or some other device – but regardless, you're sitting in an office while you read, and you might not even be the owner of that office. Hopefully, if you have decided to use […]

The post Work Like It Matters appeared first on Simple Programmer.

Categories: Programming

SPaMCAST 387 – Storytelling As A Tool, Critical Roles, QA Career Path

Software Process and Measurement Cast - Sun, 03/27/2016 - 22:00

http://www.spamcast.net

Listen Now

Subscribe on iTunes

The Software Process and Measurement Cast 387 includes three features. The first is our essay on storytelling. Storytelling is a tool that is useful in many scenarios: for presentations, for helping people frame their thoughts, and for gathering information. A story provides both a deeper and more nuanced connection with information than most lists of PowerPoint bullets or even structured requirements documents. The essay provides an excellent supplement to our interview with Jason Little (which you can listen to here).

The second feature this week is Steve Tendon discussing Chapter 9 of Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban, published by J. Ross. Chapter 9 is titled "Critical Roles, Leadership and More". We discuss why leadership roles are important to achieving hyper-productive performance. In Agile and other approaches, it is sometimes easy to overlook the role of leaders outside of the team.

Remember, Steve has a great offer for SPaMCAST listeners. Check out https://tameflow.com/spamcast for a way to get Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach, and Its Application to Scrum and Kanban at 40% off the list price.

Anchoring the cast this week is a visit to the QA Corner. Jeremy Berriault discusses whether a career in testing, and the path that career might take, is an individual or a team sport. Jeremy dispenses useful advice even if you are not involved in testing.

Re-Read Saturday News

This week we are back with Chapter 14 of How to Measure Anything, Finding the Value of "Intangibles in Business", Third Edition by Douglas W. Hubbard on the Software Process and Measurement Blog. Chapter 14 is titled "A Universal Measurement Method". In this chapter, Hubbard provides readers with a process for applying Applied Information Economics.

We will read Commitment – Novel About Managing Project Risk by Olav Maassen and Chris Matts for our next Re-Read. Buy your copy today and start reading (use the link to support the podcast). In the meantime, vote in our poll for the next book. As in past polls, please vote twice or suggest a write-in candidate in the comments. We will run the poll for two more weeks.

Upcoming Events

I will be at QAI Quest 2016 in Chicago, April 18th through April 22nd. I will be teaching a full-day class on Agile Estimation on April 18th and presenting "Budgeting, Estimating, Planning and #NoEstimates: They ALL Make Sense for Agile Testing!" on Wednesday, April 20th. Register now!

I will be speaking at the CMMI Institute's Capability Counts 2016 Conference in Annapolis, Maryland, May 10th and 11th. Register now!

Next SPaMCAST

The next Software Process and Measurement Cast will feature our interview with Dr. Mark Bojeun. Dr. Bojeun returns to the podcast to discuss how a PMO can be a strategic tool for an organization. If a PMO is merely a control point or an administrative function, its value and longevity are at risk. Mark suggests that there is a better way.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself, was published by J. Ross Publishing. We have received unsolicited reviews like the following: "This book will prove that software projects should not be a tedious process, for you or your team." Support SPaMCAST by buying the book here. Available in English and Chinese.


Categories: Process Management


Compile-Time Evaluation in Scala with macros

Xebia Blog - Sun, 03/27/2016 - 15:20
Many 'compiled' languages used to have a strict separation between what happens at 'compile-time' and what happens at 'run-time'. This distinction is starting to fade: JIT compilation moves more of the compile phase to run-time, while conversely various kinds of optimizations do 'run-time' work at compile time. Powerful type systems allow the expression of things previously…

How To Measure Anything, Chapter 14: A Universal Measurement Method: Applied Information Economics


How to Measure Anything, Finding the Value of "Intangibles in Business", Third Edition

Chapter 14 of How to Measure Anything, Finding the Value of "Intangibles in Business", Third Edition is the last chapter in the book. Next week I will spend a few moments reflecting on the value I have gotten from this re-read; HOWEVER, the last chapter continues to deliver content, so let's not get ahead of ourselves. This chapter shows us:

  • A process for applying Applied Information Economics (which I used recently), and that
  • AIE is applicable in nearly every scenario.

Hubbard introduced Applied Information Economics (AIE) in Chapter One (page 9 to be exact).  The methodology includes five steps:

  1. Define the decision. 
  2. Determine what you know. 
  3. Compute the value of additional information. 
  4. Measure where the information value is high. 
  5. Make a decision; act upon it.

AIE is the centerpiece of How To Measure Anything. Chapter 14 brings all the pieces together into an overall process populated with procedures and techniques. Hubbard lays out the application of AIE in four phases (0–3).

Phase 0 is a preparation phase which includes identifying workshop participants, developing the first cut of the measurement questions, and then assigning the workshop participants pre-reading (homework) based on those initial questions. Maximizing the value of the workshops requires priming the participants with homework. The homework makes sure everyone is prepared for the workshops so that time is not wasted coming up to speed. This also helps to reset any organizational anchoring bias.

Phase 1: Hold workshop(s) for problem definition, building a decision model, and developing initial calibrated estimates. Calibration exercises prepare participants to quantify the initial variables as a range at a 90% confidence interval, or as a probability distribution, rather than as a single number.

Phase 2: This phase focuses on analyzing the value of information, making the first cut at the measurement methods, refining those methods, updating the decision model, and then re-running the value of information analysis to make sure we don't have to change the measurement approach. Hubbard points out (and my experience attests) that during this step you often determine that most variables have sufficient certainty, so the organization needs to do no further measurement beyond the calibrated estimate. This step ensures that the variables that move forward in the measurement process add value.

Phase 3: Run a Monte Carlo analysis to refine any of the measurement procedures needed, use the data to make the decisions identified, and generate a final report and presentation (even Hubbard is a consultant; thus, a presentation).
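As an illustration of the Monte Carlo step, here is a minimal sketch (in Java, with entirely hypothetical benefit and cost figures) of turning two calibrated 90% confidence intervals into a probability of loss. It assumes the variables are normally distributed, which is only one of the distribution choices Hubbard discusses:

import java.util.Random;

public class AieMonteCarlo {
    // For a normal distribution, a 90% CI spans about 3.29 standard deviations.
    static double sample(Random r, double lo90, double hi90) {
        double mean = (lo90 + hi90) / 2.0;
        double sd = (hi90 - lo90) / 3.29;
        return mean + sd * r.nextGaussian();
    }

    public static void main(String[] args) {
        Random r = new Random(42);
        int trials = 100_000, losses = 0;
        for (int i = 0; i < trials; i++) {
            double benefit = sample(r, 300_000, 900_000); // hypothetical 90% CI
            double cost    = sample(r, 400_000, 700_000); // hypothetical 90% CI
            if (benefit - cost < 0) losses++;
        }
        System.out.printf("Probability of a net loss: ~%.1f%%%n",
            100.0 * losses / trials);
    }
}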

The basic flow espoused by Hubbard is meant to cut through the standard rationalization to find the real questions; then to determine how to answer those questions using measurement, with an emphasis on making sure the organization does not already have the data needed to answer them; and then to get the data that makes economic sense to collect. The process sounds simple; however, as a practitioner, the problem I have observed is that generating the initial involvement is often difficult and that participants often have pet theories that are difficult to disarm. For example, I once ran across an executive who was firmly convinced that having his software development teams work longer hours would increase productivity (he forgot that productivity equals output divided by input). Therefore, he wanted to measure which monitoring applications would make his developers work more hours. It took several examples to retrain him to recognize that to increase productivity, he had to increase output (functionality) more than he increased input (effort). The process described by Hubbard is extremely useful, but remember that making it work requires both math and facilitation skills.

The remainder of the chapter provides examples that show the concepts in the book in action. The cases cover a wide range of scenarios, from improving logistics (forecasting fuel needs for the Marine Corps) to measuring the value of a department. Each case provides a lesson for the reader; however, three messages make my bottom line:

  • While some say that the data is too hard to get, it usually isn't.
  • Reducing uncertainty often requires only one or a few measures.
  • Framing the question as a reduction in uncertainty means that almost anything is measurable.

These three bottom line lessons summarize the philosophy of How To Measure Anything. But like the process to apply this philosophy, the devil is in the details.

Past installments of the Re-Read Saturday of How To Measure Anything, Third Edition:

Chapter 1: The Challenge of Intangibles

Chapter 2: An Intuitive Measurement Habit: Eratosthenes, Enrico, and Emily

Chapter 3: The Illusions of Intangibles: Why Immeasurables Aren’t

Chapter 4: Clarifying the Measurement Problem

Chapter 5: Calibrated Estimates: How Much Do You Know Now?

Chapter 6: Quantifying Risk Through Modeling

Chapter 7: Quantifying The Value of Information

Chapter 8: The Transition: From What to Measure to How to Measure

Chapter 9: Sampling Reality: How Observing Some Things Tells Us about All Things

Chapter 10: Bayes: Adding To What You Know Now

Chapter 11: Preferences and Attitudes: The Softer Side of Measurement

Chapter 12: The Ultimate Measurement Instrument: Human Judges

Chapter 13: New Measurement Instruments for Management

We continue with the selection process for the next-ish book for the Re-Read Saturday. We will read Commitment – Novel About Managing Project Risk by Olav Maassen and Chris Matts next. Buy your copy today and start reading (use the link to support the podcast). Mr. Adams has suggested that we will blow through the read of this book; therefore, doing the poll now will save time in a few weeks! As in past polls, please vote twice or suggest a write-in candidate in the comments. We will run the poll for two more weeks.

Categories: Process Management


Managing for Happiness FAQ

NOOP.NL - Jurgen Appelo - Sat, 03/26/2016 - 21:27
Managing for Happiness cover (front)

In June 2016, John Wiley & Sons will release my “new” book Managing for Happiness, which will be a re-release of last year’s #Workout book. Some people asked me questions about that.

Why do you re-release the #Workout book with a publisher?

My aim is to be a full-time writer. That means I must sell more books so that I can earn a full income from writing. (Right now, I don’t.) A global publisher can help me with that. A second reason is that I want to reach as many people as possible with my message of better management with fewer managers. A third reason is that wider availability of the book (in bookstores and libraries) is not only good for new readers but also for my reputation as a public speaker.

Categories: Project Management

Common Sense Agile Scaling (Part 1: Intro)

Xebia Blog - Sat, 03/26/2016 - 10:05
Agile is all about running experiments and seeing what works for you. Inspect and adapt. Grow. This also applies to scaling your agile organization. There is no out-of-the-box scaling framework that will fit your organization perfectly and instantly. Experiment, and combine what you need from as many agile models and frameworks as…

Thanks For Ruining Another Game Forever, Computers

Coding Horror - Jeff Atwood - Fri, 03/25/2016 - 23:29

In 2006, after visiting the Computer History Museum's exhibit on Chess, I opined:

We may have reached an inflection point. The problem space of chess is so astonishingly large that incremental increases in hardware speed and algorithms are unlikely to result in meaningful gains from here on out.

So. About that. Turns out I was kinda … totally completely wrong. The number of possible moves, or "problem space", of Chess is indeed astonishingly large, estimated to be 10^50:

100,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000

Deep Blue was interesting because it forecast a particular kind of future, a future where specialized hardware enabled brute force attack of the enormous chess problem space, as its purpose built chess hardware outperformed general purpose CPUs of the day by many orders of magnitude. How many orders of magnitude? In the heady days of 1997, Deep Blue could evaluate 200 million chess positions per second. And that was enough to defeat Kasparov, the highest ever ranked human player – until 2014 at least. Even though one of its best moves was the result of a bug.

200,000,000

In 2006, about ten years later, according to the Fritz Chess benchmark, my PC could evaluate only 4.5 million chess positions per second.

4,500,000

Today, about twenty years later, that very same benchmark says my PC can evaluate a mere 17.2 million chess positions per second.

17,200,000

Ten years, four times faster. Not bad! Part of that is I went from dual to quad core, and these chess calculations scale almost linearly with the number of cores. An eight core CPU, no longer particularly exotic, could probably achieve ~28 million on this benchmark today.

28,000,000

I am not sure the scaling is exactly linear, but it's fair to say that even now, twenty years later, a modern 8 core CPU is still about an order of magnitude slower at the brute force task of evaluating chess positions than what Deep Blue's specialized chess hardware achieved in 1997.
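As a back-of-envelope check of those claims, using only the figures quoted above (and the same linear-scaling assumption the post itself hedges on), the gap works out to roughly a factor of six, loosely consistent with the order-of-magnitude framing:

public class ChessScaling {
    public static void main(String[] args) {
        double deepBlue1997 = 200e6;  // positions/sec, specialized chess hardware
        double quadCore2016 = 17.2e6; // positions/sec, Fritz benchmark above
        double eightCore = (quadCore2016 / 4) * 8; // assumes near-linear core scaling
        System.out.printf("8-core estimate: ~%.1fM positions/sec%n", eightCore / 1e6);
        System.out.printf("Deep Blue was still ~%.0fx faster%n",
            deepBlue1997 / eightCore);
    }
}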

But here's the thing: none of that speedy brute forcing matters today. Greatly improved chess programs running on mere handheld devices can perform beyond grandmaster level.

In 2009 a chess engine running on slower hardware, a 528 MHz HTC Touch HD mobile phone running Pocket Fritz 4 reached the grandmaster level – it won a category 6 tournament with a performance rating of 2898. Pocket Fritz 4 searches fewer than 20,000 positions per second. This is in contrast to supercomputers such as Deep Blue that searched 200 million positions per second.

As far as chess goes, despite what I so optimistically thought in 2006, it's been game over for humans for quite a few years now. The best computer chess programs, vastly more efficient than Deep Blue, combined with modern CPUs which are now finally within an order of magnitude of what Deep Blue's specialized chess hardware could deliver, play at levels way beyond what humans can achieve.

Chess: ruined forever. Thanks, computers. You jerks.

Despite this resounding defeat, there was still hope for humans in the game of Go. The number of possible moves, or "problem space", of Go is estimated to be 10^170:

1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000

Remember that Chess had a mere fifty zeroes there? Go has more possible moves than there are atoms in the universe.

Wrap your face around that one.

Deep Blue was a statement about the inevitability of eventually being able to brute force your way around a difficult problem with the constant wind of Moore's Law at your back. If Chess is the quintessential European game, Go is the quintessential Asian game. Go requires a completely different strategy. Go means wrestling with a problem that is essentially impossible for computers to solve in any traditional way.

A simple material evaluation for chess works well – each type of piece is given a value, and each player receives a score depending on his/her remaining pieces. The player with the higher score is deemed to be 'winning' at that stage of the game.

However, Chess programmers innocently asking Go players for an evaluation function would be met with disbelief! No such simple evaluation exists. Since there is only a single type of piece, only the number each player has on the board could be used for a simple material heuristic, and there is almost no discernible correlation between the number of stones on the board and what the end result of the game will be.

Analysis of a problem this hard, with brute force completely off the table, is colloquially called "AI", though that term is a bit of a stretch to me. I prefer to think of it as building systems that can learn from experience, aka machine learning. Here's a talk which covers DeepMind learning to play classic Atari 2600 videogames. (Jump to the 10 minute mark to see what I mean.)

As impressive as this is – and it truly is – bear in mind that games as simple as Pac-Man still remain far beyond the grasp of DeepMind. But what happens when you point a system like that at the game of Go?

DeepMind built a system, AlphaGo, designed to see how far they could get with those approaches in the game of Go. AlphaGo recently played one of the best Go players in the world, Lee Sedol, and defeated him in a stunning 4-1 display. Being the optimist that I am, I guessed that DeepMind would win one or two games, but a near total rout like this? Incredible. In the space of just 20 years, computers went from barely beating the best humans at Chess, with a problem space of 10^50, to definitively beating the best humans at Go, with a problem space of 10^170. How did this happen?

Well, a few things happened, but one unsung hero in this transformation is the humble video card, or GPU.

Consider this breakdown of the cost of floating point operations over time, measured in dollars per gigaflop:

1961: $8,300,000,000
1984: $42,780,000
1997: $42,000
2000: $1,300
2003: $100
2007: $52
2011: $1.80
2012: $0.73
2013: $0.22
2015: $0.08

What's not clear in this table is that after 2007, all the big advances in FLOPS came from gaming video cards designed for high speed real time 3D rendering, and as an incredibly beneficial side effect, they also turn out to be crazily fast at machine learning tasks.
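To put the table in perspective, here is a quick sketch that derives the halving time of gigaflop cost from just the two endpoints of the table; the result is only as good as those two data points:

public class FlopsCost {
    public static void main(String[] args) {
        double costPerGflop1961 = 8.3e9, costPerGflop2015 = 0.08;
        int years = 2015 - 1961;
        double factor = costPerGflop1961 / costPerGflop2015; // ~1e11 overall
        double halvings = Math.log(factor) / Math.log(2);    // ~37 halvings
        System.out.printf("Cost fell ~%.1ex; halving every ~%.1f years%n",
            factor, years / halvings);
    }
}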

The Google Brain project had just achieved amazing results – it learned to recognize cats and people by watching movies on YouTube. But it required 2,000 CPUs in servers powered and cooled in one of Google's giant data centers. Few have computers of this scale. Enter NVIDIA and the GPU. Bryan Catanzaro in NVIDIA Research teamed with Andrew Ng's team at Stanford to use GPUs for deep learning. As it turned out, 12 NVIDIA GPUs could deliver the deep-learning performance of 2,000 CPUs.

Let's consider a related case of highly parallel computation. How much faster is a GPU at password hashing?

Radeon 7970: 8,213.6 M c/s
6-core AMD CPU: 52.9 M c/s

Only 155 times faster right out of the gate. No big deal. On top of that, CPU performance has largely stalled in the last decade. While more and more cores are placed on each die, which is great when the problems are parallelizable – as they definitely are in this case – the actual performance improvement of any individual core over the last 5 to 10 years is rather modest.

But GPUs are still doubling in performance every few years. Consider password hash cracking expressed in the rate of hashes per second:

GTX 295 (2009): 25k
GTX 690 (2012): 54k
GTX 780 Ti (2013): 100k
GTX 980 Ti (2015): 240k

The latter video card is the one in my machine right now. It's likely the next major revision from Nvidia, due later this year, will double these rates again.

(While I'm at it, I'd like to emphasize how much it sucks to be an 8 character password in today's world. If your password is only 8 characters, that's perilously close to no password at all. That's also why your password is (probably) too damn short. In fact, we just raised the minimum allowed password length on Discourse to 10 characters, because annoying password complexity rules are much less effective in reality than simply requiring longer passwords.)
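To see why two extra characters matter so much, here is a rough sketch using the Radeon 7970 rate from the table above; it assumes a fast hash and a purely alphanumeric alphabet, so real-world numbers will vary widely:

public class CrackTime {
    public static void main(String[] args) {
        double hashesPerSec = 8_213.6e6; // Radeon 7970 figure quoted above
        double alphabet = 62.0;          // a-z, A-Z, 0-9
        for (int len : new int[] {8, 10, 12}) {
            double keyspace = Math.pow(alphabet, len);
            double days = keyspace / hashesPerSec / 86_400;
            System.out.printf("%2d chars: ~%.1e guesses, ~%.1f days to exhaust%n",
                len, keyspace, days);
        }
    }
}

Roughly: on this single card, an 8-character space falls in hours, 10 characters take a few years, and 12 characters take millennia.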

Distributed AlphaGo used 1202 CPUs and 176 GPUs. While that doesn't sound like much, consider that as we've seen, each GPU can be up to 150 times faster at processing these kinds of highly parallel datasets — so those 176 GPUs were the equivalent of adding ~26,400 CPUs to the task. Or more!

Even if you don't care about video games, they happen to have a profound accidental impact on machine learning improvements. Every time you see a new video card release, don't think "slightly nicer looking games"; think "wow, hash cracking and AI just got 2× faster … again!"

I'm certainly not making the same mistake I did when looking at Chess in 2006. (And in my defense, I totally did not see the era of GPUs as essential machine learning aids coming, even though I am a gamer.) If AlphaGo was intimidating today, having soundly beaten the best human Go player in the world, it'll be no contest after a few more years of GPUs doubling and redoubling their speeds again.

AlphaGo, broadly speaking, is the culmination of two very important trends in computing:

  1. Huge increases in parallel processing power driven by consumer GPUs and videogames, which started in 2007. So if you're a gamer, congratulations! You're part of the problem-slash-solution.

  2. We're beginning to build sophisticated (and combined) algorithmic approaches for entirely new problem spaces that are far too vast to even begin being solved by brute force methods alone. And these approaches clearly work, insofar as they mastered one of the hardest games in the world, one that many thought humans would never be defeated in.

Great. Another game ruined forever by computers. Jerks.

Based on our experience with Chess, and now Go, we know that computers will continue to beat us at virtually every game we play, in the same way that dolphins will always swim faster than we do. But what if that very same human mind was capable of not only building the dolphin, but continually refining it until they arrived at the world's fastest minnow? Where Deep Blue was the more or less inevitable end result of brute force computation, AlphaGo is the beginning of a whole new era of sophisticated problem solving against far more enormous problems. AlphaGo's victory is not a defeat of the human mind, but its greatest triumph.

(If you'd like to learn more about the powerful intersection of sophisticated machine learning algorithms and your GPU, read this excellent summary of AlphaGo and then download the DeepMind Atari learner and try it yourself.)

Categories: Programming

Registering OAuth clients for Google Sign-In

Android Developers Blog - Fri, 03/25/2016 - 22:19

Posted by Isabella Chen, Software Engineer, and Laurence Moroney, Developer Advocate

Starting with Google Play services 8.3, we did a major revamp of the Google Sign-In APIs, supporting both client and server auth. Behind the scenes, these APIs use OAuth 2.0 tokens to ensure secure authentication and authorization. To maintain security, we provide tools in the Google Developers Console to register the clients using these tokens.

In this post, we’ll discuss the important task of registering OAuth clients for Google Sign-In, and the tools that we offer to make this as easy as possible.

Here are some scenarios that might apply to you:

  1. Start by creating a project in the Google Developers Console, which registers the client app on your behalf.
  2. If you have a backend server in your project, you’ll need an OAuth client ID for it, too.
  3. And don't forget to register OAuth clients for other test and release versions of your app, too!

In this post, we’ll cover some details on this process and address common pitfalls.

Getting Started - Create a Project in the Google Developers Console.

If you have not used Google Sign-In before, you can start integrating the API into your app by following the 'Get a configuration file' steps on this site. You'll be taken to a setup wizard that will create an OAuth 2.0 client ID, as shown in Figure 1.

Figure 1. Configuring your app

Once you’ve specified your app, you’ll be taken to a screen to choose and configure services such as Google Sign-In, Cloud Messaging or Google Analytics that you want your app to be able to use.

Choose Google Sign-In. In order to use it, you'll need the SHA-1 of the signing certificate for your Android app. This can be either a debug or a release certificate; for the purposes of this post we'll look at a debug one, but keep in mind that you'll need to repeat this process for each package / certificate pair you end up using (described in the last section below).

You can get the debug SHA-1 using the keytool command like this:

keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android
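For a release build, the same command points at your release keystore instead; the path and alias below are placeholders for your own values, and keytool will prompt for the store password:

keytool -list -v -keystore /path/to/release.keystore -alias your-release-alias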

Once you have your SHA-1, enter it as seen in Figure 2.

Figure 2. Enabling Google Sign-in

Now that your project is set up, you can get started with integrating the Sign-In API. But if you need to configure your project to work with a backend server or additional package name / keystores, keep reading the sections below.

Server Config - Ensure your server is registered within the same project.

If you have your own web or cloud server with data for your application, you’ll need OAuth credentials for your backend. Details on doing this can be found in the ID token and server auth code documentation.

Before using these flows, you’ll need to make sure you register your web server correctly in the Google Developers Console. Once there, you’ll be asked to select your project. See Figure 3.

Figure 3. Going directly to a project in the Google Developers Console.

Once you've selected your project, press the 'Continue' button, and you'll go directly to the Credentials tab where all credential types are managed. Check the "OAuth 2.0 client IDs" section, and you will see the "Web client" and "Android client for com.my.package.name" entries that were created for you by the setup wizard. See Figure 4.

Figure 4. The Credentials Tab on the Developers Console - Web server OAuth client info

Take note of the Client ID for your Web client; you'll need it for both your app and server, as illustrated below. (If you created your project in the past and there's no OAuth 2.0 client ID with Type "Web application", you will need to create one by selecting 'New Credentials' -> 'OAuth client ID'.)

If you use an ID token flow for backend authentication, when you start developing your Android app, request an ID token in your GoogleSignInOptions, supplying the web client ID for your server:

GoogleSignInOptions gso =
    new GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN)
        .requestIdToken(serverClientId)
        .requestEmail()
        .build();
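Once sign-in completes, the ID token is available on the returned account object. A minimal sketch, assuming the Play services 8.3-era Auth.GoogleSignInApi (error handling omitted):

// In onActivityResult(), after the sign-in intent returns:
GoogleSignInResult result = Auth.GoogleSignInApi.getSignInResultFromIntent(data);
if (result.isSuccess()) {
    GoogleSignInAccount acct = result.getSignInAccount();
    String idToken = acct.getIdToken(); // send this to your backend over HTTPS
}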

And then on your server, set the same OAuth client ID for your web application to be the audience:

GoogleIdTokenVerifier verifier =
    new GoogleIdTokenVerifier.Builder(transport, jsonFactory)
        .setAudience(Arrays.asList(serverClientId))
        .setIssuer("https://accounts.google.com")
        .build();
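A sketch of using that verifier follows; idTokenString stands in for the token string your client posted to the server:

GoogleIdToken idToken = verifier.verify(idTokenString);
if (idToken != null) {
    GoogleIdToken.Payload payload = idToken.getPayload();
    String userId = payload.getSubject(); // stable Google account ID
    String email = payload.getEmail();
}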

Successful verification will allow you to authenticate and issue a session for this newly signed-in user.

Alternatively, if you are using the server auth code flow for backend access to Google APIs, request a server auth code in your GoogleSignInOptions on Android, again supplying the web client ID for your server:

GoogleSignInOptions gso =
    new GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN)
        .requestScopes(new Scope(Scopes.DRIVE_APPFOLDER))
        .requestServerAuthCode(serverClientId)
        .requestEmail()
        .build();

And then on the server, both the OAuth client ID and the "Client secret" will be useful. The server SDK from Google can directly consume a downloaded JSON configuration file. You can click the download icon to download the JSON file (as shown in Figure 4) and use the code below to construct GoogleClientSecrets:

GoogleClientSecrets clientSecrets =
    GoogleClientSecrets.load(
        JacksonFactory.getDefaultInstance(),
        new FileReader(PATH_TO_CLIENT_SECRET_FILE));

At which point you can access authenticated Google APIs on behalf of the signed-in user. Note that the "client secret" is really a secret that you should never reveal in your Android client.
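For completeness, here is a sketch of exchanging that auth code for tokens on the server using the Google OAuth client library's GoogleAuthorizationCodeTokenRequest; authCode is assumed to be the server auth code sent up from the Android client, and the empty redirect URI is what the mobile flow typically uses (verify against the current documentation):

GoogleTokenResponse tokenResponse =
    new GoogleAuthorizationCodeTokenRequest(
        transport,
        jsonFactory,
        clientSecrets.getDetails().getClientId(),
        clientSecrets.getDetails().getClientSecret(),
        authCode,  // server auth code from the Android client
        "")        // redirect URI; empty for codes from mobile clients
    .execute();
String accessToken = tokenResponse.getAccessToken();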

Handling multiple environments - Registering other client IDs for your project.

Note that it is common for apps to have different package names as well as different certificates (and thus SHA-1 keys) for various environments (such as for different developers, or for test and release builds). Google uses your package name together with your SHA-1 signing-certificate fingerprint to uniquely identify your Android application. It's important to register every package name + SHA-1 fingerprint pair in the Google Developers Console.

For example, to register the release version of this package, select 'New Credentials' -> 'OAuth client ID', shown in Figure 5 below, and then follow the steps to add the package name and production keystore SHA-1.

Figure 5. The Credentials Tab on the Developers Console - create additional OAuth client ID

Now you are ready to handle the different environments where your app might be running and release to your users!

Hopefully, this has been helpful to you in understanding how to register for OAuth keys to keep your apps and servers secure. For more information, check out the Google Developers homepage for Identity.

Categories: Programming

Stuff The Internet Says On Scalability For March 25th, 2016


Did you know there's a field called computational aesthetics? Neither did I. It's cool though.

 

If you like this sort of Stuff then please consider offering your support on Patreon.
  • 51%: of billion-dollar startups founded by immigrants; 2.8 billion: Twitter metric ingestion service writes per minute; 1 billion: Urban Airship push notifications a day; 1.5 billion: Slack messages sent per month; 35 million: server nodes in the world; 10: more regions will be added to Google Cloud;  697 million: WeChat active monthly users; 

  • Quotable Quotes:
    • Dark Territory: When officials in the Air Force or the NSA neglected to let Microsoft (or Cisco, Google, Intel, or any number of other firms) know about vulnerabilities in its software, when they left a hole unplugged so they could exploit the vulnerability in a Russian, Chinese, Iranian, or some other adversary’s computer system, they also left American citizens open to the same exploitations—whether by wayward intelligence agencies or by cyber criminals, foreign spies, or terrorists who happened to learn about the unplugged hole, too. 
    • @xaprb: If you adopt a microservices architecture with 1000x more things to monitor, you should not expect your monitoring cost to stay the same.
    • The Swrve Monetization Report 2016: almost half of all the revenue generated in mobile gaming comes from just 0.19 percent of users.
    • Nassim Taleb: Now some empiricism. Consider that almost all tech companies "in the tails" were not started by "funding". Take companies you are familiar with: Microsoft, Apple, Google, Facebook. These companies started with risk-taking. Funding came in small amounts, way later.
    • @leegomes: In a big shift, Google says a go-anywhere self-driving car might not be ready for 30 years.
    • Google’s Eric Schmidt: Machine learning will be basis of ‘every huge IPO’ in five years.
    • @brendangregg: "Memory bandwidth is the number one issue we see today" Denis at Facebook
    • @ogrisel: PostgreSQL 9.6 will support parallel aggregation! TPC-H Q1 @ 100GB benchmark shows linear scaling up to 30 workers 
    • @sarah_edo: The hardest part of being a developer isn't the code, it's learning that the entire internet is put together with peanut butter and goblins.
    • @beaucronin: "Cryptocurrencies are an emergent property of the Internet – almost a fifth protocol"
    • Thomas Frey: We are moving toward an era of megaprojects. We’ll finish the Pan-American Highway with a 25-mile bridge over the Darien Gap in Panama. 
    • @samphippen: “Do you expect me to talk?” “No Mr. Bond, I expect you to be willing to relocate to san francisco"
    • @brendanbaker: Outside of the core people, who actually know what they're doing, AI is talked about like gamification was three years ago.
    • @RichRogersHDS: Did you know? "The collective noun for a group of programmers is a merge-conflict." - @omervk
    • @jbeda: This is how you know Google is serious about cloud. Real money on real facilities. 
    • Farhad Manjoo: The lesson so far in the on-demand world is that Uber is the exception, not the norm. Uber, but for Uber — and not much else.
    • @DKThomp: Airbnb woulda made a killing in 1900: One third of urban families used to make 10%+ of their income from "lodgers" 
    • @AstroKatie: "We can make 'smart drones'!" "Your chatbot became a Nazi in like a day." "OK good point."
    • @adrianco: I agree GCP are setup for next gen apps, think they are missing out on where most of the $ are being spent in the short term.
    • @EdwardTufte: Like book publishers and Silicon Valley, the further the distance from content production, the greater the money. 
    • Biz Carson: Slack grew from 80 to 385 employees in 14 months
    • Chip Overclock®: One of those things is being evidence-based. Don't guess. Test. Measure. Look and see. Ask. If you can avoid guessing, do so.

  • Impressive demo of the new smaller, less dorky-looking Meta augmented reality headset. Here's a hands-on report. The development kit is $949. This most likely will be the next app-store-level opportunity, so it might be smart to get on it now. The Gold Rush phase is still in the future. The uses are obvious to anyone who reads Science Fiction. This is a TED talk, so of course no details on performance, etc. What are the backend infrastructure opportunities? Hopefully they'll keep all that open instead of building another walled garden.

  • Is artificial intelligence ready to rule the world? IMHO: No. You would need a large training set, and the problem is we have so few good examples of ruling the world successfully. You could create a simulated world in VR to generate training data, but that's just another spin on the long history of Utopian thinking. We should probably learn to govern ourselves first before we pitch it over to an AI.

  • "It's better to have a media strategy than a security strategy." That's Greg Ferro commenting in an episode of Network Break on Home Depot's paltry $19.5 million fine for their massive 2014 data breach. Why pay for security when there's no downside? It's not like people stopped shopping at Home Depot. 

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Refactoring to Microservices - Introducing a Process Manager

Xebia Blog - Fri, 03/25/2016 - 14:15
A while ago I described the first part of our journey to refactor a monolith to microservices (see here). While this was a useful first step, a lot can be improved. I was inspired by Greg Young's course at Skills Matter, see CQRS/DDD course. Because I think it’s useful to reflect on the steps you

Basics: Difference Between Models, Frameworks, and Methodologies


Nesting Easter eggs show each layer of the process improvement architecture

One of my children owned a matryoshka (Russian nesting doll) that is now somewhere in our attic. I was always struck by how one piece fit within the other and how getting the assembly out of order generated a mess. I think I learned more from the toy than my children did. The matryoshka doll is a wonderful metaphor for models, frameworks, and methodologies. A model represents the outside shell, into which a framework fits, followed by the next doll, a methodology, placed inside the framework. Like models, frameworks, and methodologies, each individual doll is unique, but each is related to the whole group of dolls.

 

Models, the outside layer of our doll, are an abstraction that provides a rough definition of the practices and inter-relationships needed by an organization to deliver a product or service. Models are valuable if they are theoretically consistent, fit the real world, and have predictive power. All firms or groups use a model as a pattern to generate a structure. For example, the hierarchical corporation is a common model for commercial organizations (visualize an organization chart). The Capability Maturity Model Integration - Development (CMMI-DEV) is a model leveraged in many software development organizations. The CMMI provides a core set of practices that organizations have typically needed to deliver software; it defines an outline of what needs to be done, but not how or in what order. The model that is chosen influences the frameworks and methodologies that will be leveraged by different layers of the organization. For example, an organization using a lean start-up model for its corporate governance might not see the CMMI-DEV model as viable for its development organization (we will discuss this common misperception later on the blog).

Frameworks, the next inner layer in our process architecture matryoshka doll, provide the structure needed to implement a model (or some part of a model). Shifting our operational metaphor for a moment to that of a skyscraper, the framework is the lattice-like frame that supports all of the components of the superstructure. The Scaled Agile Framework (SAFe) is a framework that leverages other frameworks and methods as tools to deliver value. SAFe defines the flow of work from an organization's portfolio to the Agile teams that will develop and integrate the code. The framework leverages other frameworks and methodologies such as DevOps, Scrum, Kanban, and Extreme Programming.

Methodologies, nestled inside of frameworks, provide an approach to achieve a specific goal. In software development, methodologies define (or impose) a disciplined set of processes so that developing software is more predictable and more efficient. The difference between Agile and heavier methodologies is the amount of structure that is defined. Agile methodologies provide just enough structure to allow the method to embrace the principles stated in the Agile Manifesto; Extreme Programming is one example of such a methodology.

Each layer of our process architecture matryoshka doll is a step closer to a core set of steps and tasks. The doll metaphor is not perfect. Some small start-up organizations may not seem to need the structure of a framework, or may even eschew a methodology until they grow. In an interview for the Software Process and Measurement Cast, Vinay Patankar, CEO of Process Street, said that as he and his partner began their start-up they used a code-and-fix approach, but growth forced the adoption of Agile as a framework combined with Scrum and Extreme Programming (at least in part) as methodologies. A model provides an environment in which to implement a framework; you would not implement SAFe inside a waterfall model. Methodologies are the tools that translate a framework into something actionable. Without one or more of the layers, what remains of the doll might make a better rattle than a tool to deliver software.


Categories: Process Management

Introducing the Google API Console

Google Code Blog - Thu, 03/24/2016 - 23:25

Posted by Israel Shalom, Product Manager

Every day, hundreds of thousands of developers send millions of requests to Google APIs, from Maps to YouTube. Thousands of developers visit the console for credentials, quota, and more -- and we want to give them a better and more streamlined experience.

Starting today, we’ll gradually roll out the API Console at console.developers.google.com focusing entirely on your Google API experience. There, you’ll find a significantly cleaner, simpler interface: instead of 20+ sections in the navigation bar, you’ll see API Manager, Billing and Permissions only:

Figure 1: API Console home page

Figure 2: Navigation section for API Console

console.cloud.google.com will remain unchanged. It’ll point to the Cloud Console, which includes the entire suite of Google Cloud Platform services, just like before. And while the two are different destinations, your underlying resources remain the same: projects created on Cloud Console will still be accessible on API Console, and vice versa.

The purpose of the new API Console is to let you complete common API-related tasks quickly. For instance, we know that once you enable an API in a new project, the next step is usually to create credentials. That’s why we’ve built the credentials wizard: a quick and convenient way for you to figure out what kind of credentials you need, and add them right after enabling an API:

Figure 3: After enabling an API, you’re prompted to go to Credentials

Figure 4: Credentials wizard

Over time, we will continue to tailor the API Console experience for the many developers out there who use Google’s APIs. So if you’re one of these users, we encourage you to try out API Console and use the feedback button to let us know what you think!

Categories: Programming

SE-Radio Episode 253: Fred George on Developer Anarchy

Fred George talks with Eberhard about “Developer Anarchy” – a development approach Fred has been using very successfully in different organizations. Developer Anarchy is a manager-less process. All team members write code. There are no stories. Instead developers figure out how to reach specific business goals. Besides writing code some team members have additional responsibilities: […]
Categories: Programming

Deliver Fast or Deliver as Planned (Update)

Herding Cats - Glen Alleman - Thu, 03/24/2016 - 16:47

A popular mantra in Agile is deliver fast, deliver often. In some domains this may be applicable. Where we work and where agile is becoming the norm, we have another view.

Deliver as planned

The Plan for the delivery of value is shown in the Product Roadmap and implemented in the Cadence Release Plan, or sometimes in the Capabilities Release plan. Fast is a term replaced by Planned. The Plan is based on a Capabilities Based Plan. This Plan shows the increasing maturity of the Capabilities produced by the project. These Capabilities are needed by the business to accomplish the business case or fulfill a Mission. 

Capabilities Flow

Showing up fast is defined by showing up when needed. The need is defined in the Capabilities Based Planning process. 

Capabilities Based Planning

A Capability is the ability to accomplish something of Value. Here's a sample of what a Capability sounds like.

(Screenshot: a sample Capability statement)

These are mission and business case terms, defined by the owners of the mission or Business Case. If you show up Fast, that also means you can't show up Early, for a simple reason: Early means you may not be able to put that Value to work. It may mean that Value is not needed yet, and that Value may have to change by the time we are ready to use it. This is the role of the Plan.

In what order do we need which Capabilities - with all their associated technical and operational requirements fulfilled - and at what cost, effectiveness, and performance?

A final example - one of my favorites - is the notion of the intent of the commander as the basis for defining capabilities. I have a colleague who was General Schwarzkopf's logistics officer in the first Gulf War. She was an Army Colonel and one of a small number of women combatants at the time. There are many more now, but she was a pioneer. One of the reasons the US Army was able to move up the coast so rapidly prior to crossing into Iraq was her and her staff's planning skills. The notion of agile is the basis of all military process, not just 5 coders in a room with their customer.

So this statement says it all in terms of needed Capabilities

(Image: quotation from General Patton on the commander's intent)

So when you hear deliver fast and deliver often, ask a simple question: what are we delivering? Is that deliverable arriving in the right order for the end user - the customer, the warfighter? Are there any predecessors to that deliverable that have to be in place for the FAST deliverable to be of any use?

This is the role of a Capabilities Based Plan. If your project has no interdependencies, if everything that is produced can be used as a standalone deliverable - arriving in any order - then Capabilities Based Planning is not likely to be of much value. And that's fine. But when we enter the Agile At Scale domain - ERP, Enterprise IT, Software Intensive System of Systems - we've got a different issue. Order does matter. Fast is no longer of much value. As planned and as needed are the Critical Success Factors.

And a final thought

If you're going to Deliver Fast, do you have a Plan for how to do that? If not, how in the world are you going to deliver fast if you don't know what you are going to deliver, when that delivery will be done, and how you are going to deliver that value? Without a plan of some sort, how can you assert that the naive notion of deliver fast and deliver often can ever be executed? It's just a platitude; there are no actionable outcomes without a Plan for how to do that.

Related articles
  • The Customer Bought a Capability
  • Integrated Master Plan and Department of Energy Program Management
  • Start with Problem, Than Suggest The Solution
  • The Reason We Plan, Schedule, Measure, and Correct
  • Architecture-Centered ERP Systems in the Manufacturing Domain
  • IT Risk Management
  • Why Guessing is not Estimating and Estimating is not Guessing
  • Making Decisions In The Presence of Uncertainty
Categories: Project Management