
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Pushing vs. Pulling Work in Your Agile Project

If you’re thinking about agile or trying to use it, you probably started with iterations in some form. You tried (and might still be trying) to estimate what you can fit into an iteration. That’s called “pushing” work, where you commit to some number of items of work in advance.

And, if you have to service interruptions, such as support on previous projects, or support the production server, or help another project, you might find that you can’t “push” very well. You have trouble predicting what you can do every two weeks. While the iteration duration is predictable, what you predict for the content of your iterations is not. And, if you try to have longer iterations, they are even less predictable.

On the other hand, you might try “pull” agile, where you set up a kanban board, visualize your flow of value, and see where you have capacity in your team and where you don’t. Flow doesn’t have the built-in notion of project/work cadence. On the other hand, it’s great for visualizing all the work and seeing where you have bottlenecks.

Here’s the problem: there is No One Right Way to manage the flow of work through your team. Every team is different.

Iterations provide a cadence for replanning, demos, and retrospectives. That cadence, that project rhythm, helps to set everyone’s expectations about what a team can deliver and when.

Kanban provides the team a perspective on where the work is, and if the team measures it, the delays between stages of work. Kanban helps a team see its bottlenecks.

Here are some options you might consider:

  • Add a kanban board that visualizes your workflow before you change anything. Gather some data about what’s happening. Are your stories quite large, so you have more pressure for more deliverables? Do you have more interruptions than you have project work?
  • Reduce the iteration duration. Interruptions are a sign that you might need to change priorities. Some teams discover they can move to one-week iterations and manage the support needs.
  • Ask questions such as these for incoming non-project work: “When do you need this done? Is it more or less important or valuable than the project work?”
  • Make sure you are a cross-functional team. Teams can commit to finishing work in iterations. A group of people who are not interdependent has trouble committing to iterations.

Teams who use only iterations may not know the workflow they really have, or whether they have more project work or more support/interrupting work. Adding an urgent queue might help everyone see the work. And more explicit columns for analysis, dev & unit test, testing, and waiting (as possibilities), in addition to the urgent queue, might help the team see where they spend time.
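
To make that idea concrete, here is a minimal sketch (my illustration, not from the original post; the column names and WIP limits are invented examples) of a board with an urgent queue and explicit columns, in Python:

from collections import OrderedDict

# Hypothetical columns, including the urgent queue suggested above.
# The WIP limits are example values, not a recommendation.
board = OrderedDict((col, []) for col in
                    ["urgent", "analysis", "dev & unit test", "testing", "waiting", "done"])
wip_limits = {"dev & unit test": 3, "testing": 2}

def pull(card, from_col, to_col):
    """Pull a card into a column only if that column has capacity."""
    limit = wip_limits.get(to_col)
    if limit is not None and len(board[to_col]) >= limit:
        return False  # column is full: a visible bottleneck
    board[from_col].remove(card)
    board[to_col].append(card)
    return True

board["urgent"].append("support the production server")
board["analysis"].append("story 42")
print(pull("story 42", "analysis", "dev & unit test"))  # True: capacity available

Even this toy version makes the bottleneck question explicit: a pull fails when the receiving column is at its limit, which is exactly the signal the board is supposed to surface.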

Some teams try to work in two-week (or longer) iterations, but the organization needs more frequent change. Kanban might help visualize this, and allow the team to change what they commit to.

Some POs don’t realize they need to ask questions about value for all the work coming into a team. If the work bypasses a PO, that’s a problem. If the PO does not rank the work, that’s a problem. And, if the team can’t finish anything in a one-week iteration, that’s a sign of interdependencies or stories that are too large for a team. (There might be other problems. Those are the symptoms I see most often.)

You can add a kanban board inside your iteration approach to agile. You might consider moving to flow entirely with specific dates for project cadence. For example, you might say, “We do demos every month on the 5th and the 19th of the month.” That would provide a cadence for your project.

You have choices about pushing work or pulling work (or both) for your project. Consider which will provide you the most value.

P.S. If you or your PO has trouble making smaller stories, consider my workshop, Practical Product Ownership.

Categories: Project Management

Software Development Conferences Forecast November 2016

From the Editor of Methods & Tools - Tue, 11/29/2016 - 16:00
Here is a list of software development related conferences and events on Agile project management ( Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP), DevOps and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods […]

The Core Principles of Agile

Mike Cohn's Blog - Tue, 11/29/2016 - 16:00

I recently watched a video titled, "Why Is Modern Art So Bad?"

One of the points of the video was that for centuries, art improved because artists demanded of themselves that they meet the highest standards of excellence.

The video claimed that this aspiration in art was eventually pushed out by the claim that "beauty is in the eye of the beholder." All art then became personal expression and essentially anything could be art.

And we've probably all been to museums and seen--or at least heard of--exhibits that left us wondering, "How is that art?"

Without standards of excellence in art, anything can be art.

Agile without Standards

And the same applies to agile. Without standards of excellence for agile, anyone can call anything agile.

I encounter this today in talking to companies that are agile because the boss has declared them agile. They don't deliver frequently. They don't iterate towards solutions. They don't seek continuous improvement. Teams are not empowered and self-organizing. But they must be agile because someone has slapped that label on their process.

Worse, we all see this today in heavyweight methodologies that don't resemble the agile described in the Agile Manifesto. But they must be agile because it's right there in the name of the process.

Many of us, as experienced agilists, can recognize what is truly agile when we see it. Yet agile is hard to define. It's more than just the four value statements or 12 principles of the Agile Manifesto. Or is it less than those?

What Do You Think?

What do you think are the core principles or elements of agility? Please share your thoughts in the comments below.

What I’ve Been Writing Lately

You might have noticed I have not written as much in this blog for the past couple of months as I normally do. That’s because I’ve been involved in another writing project that’s taken a ton of my time.

I’m part of the writing team for the Agile Practice Guide. The Guide is a joint venture between the Agile Alliance and the PMI. See Bridging Mindsets: Creating the Agile Practice Guide.

We work in timeboxes, iterating and incrementing over the topics in the guide. We sometimes pair-write, although we more often write and review each other’s work as a pair.

If you would like to review the guide as a subject matter expert, send me an email. You’ll have about a month to review the guide, starting in approximately January 2017. I am not sure of the dates yet, because I am not sure if we will finish all our writing when we originally thought. Yes, our project has risks!

Categories: Project Management

It’s that time again: Google Code-in starts today!

Google Code Blog - Mon, 11/28/2016 - 21:33
Originally posted on Google Open Source Blog
By Mary Radomile, Open Source Programs Office
Today marks the start of the 7th year of Google Code-in (GCI), our pre-university contest introducing students to open source development. GCI takes place entirely online and is open to students between the ages of 13 and 17 around the globe.
The concept is simple: complete bite-sized tasks (at your own pace) created by 17 participating open source organizations on topic areas you find interesting:

  • Coding
  • Documentation/Training
  • Outreach/Research
  • Quality Assurance
  • User Interface

Tasks take an average of 3-5 hours to complete and include the guidance of a mentor to help along the way. Complete one task? Get a digital certificate. Three tasks? Get a sweet Google t-shirt. Finalists get a hoodie. Grand Prize winners get a trip to Google headquarters in California.

Over the last 6 years, 3213 students from 99 countries have successfully completed tasks in GCI. Intrigued? Learn more about GCI by checking out our rules and FAQs. And please visit our contest site and read the Getting Started Guide.

Teachers, if you are interested in getting your students involved in Google Code-in you can find resources here to help you get started.

Categories: Programming

How to Make Your Database 200x Faster Without Having to Pay More?

This is a guest repost by Barzan Mozafari, an assistant professor at the University of Michigan and an advisor to a new startup, snappydata.io, that recently launched an open source OLTP + OLAP database built on Spark.

Almost everyone these days is complaining about performance in one way or another. It’s not uncommon for database administrators and programmers to constantly find themselves in a situation where their servers are maxed out, or their queries are taking forever. This frustration is way too common for all of us. The solutions are varied. The most typical one is squinting at the query and blaming the programmer for not being smarter with their query. Maybe they could have used the right index or a materialized view, or just rewritten their query in a better way. Other times, you might have to spin up a few more nodes if your company is using a cloud service. In other cases, when your servers are overloaded with too many slow queries, you might set different priorities for different queries so that at least the more urgent ones (e.g., CEO queries) finish faster. When the DB does not support priority queues, your admin might even cancel your queries to free up some resources for the more urgent queries.

No matter which one of these experiences you’ve had, you’re probably familiar with the pain of having to wait for slow queries or having to pay for more cloud instances or buying faster and bigger servers. Most people are familiar with traditional database tuning and query optimization techniques, which come with their own pros and cons. So we’re not going to talk about those here. Instead, in this post, we’re going to talk about more recent techniques that are far less known to people and in many cases actually lead to much better performance and saving opportunities.

To start, consider these scenarios:

Categories: Architecture

Quote of the Month November 2016

From the Editor of Methods & Tools - Mon, 11/28/2016 - 13:32
There is surely no team sport in which every player on the field is not accurately aware of the score at any and every moment of play. Yet in software development it is not uncommon to find team members who do not know the next deadline, or what their colleagues are doing. Nor is it […]

SPaMCAST 419 – Notes on Distributed Stand-ups, QA Corner, Configuration Management, Software Sensei


http://www.spamcast.net

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 419 features our essay on eight quick hints on dealing with stand-up meetings on distributed teams. Distributed Agile teams require a different level of care and feeding than a co-located team in order to ensure that they are as effective as possible. Remember an update on the old adage: distributed teams, you can’t live with them and you can’t live without them.

We also have a column from the Software Sensei, Kim Pries. In this installment, Kim talks about the Fullan Change Model. In the Fullan Change Model, all change stems from a moral purpose. Reach out to Kim on LinkedIn.

Jon M Quigley brings the next installment of his Alpha and Omega of Product Development to the podcast. In this installment, Jon begins a 3-part series on configuration management. Configuration management might not be glamorous, but it is hugely important to getting work done with quality. One of the places you can find Jon is at Value Transformation LLC.

Anchoring the cast this week is Jeremy Berriault and his QA Corner. Jeremy explored exploratory testing in this installment of the QA Corner. Also, Jeremy has a new blog! Check out the QA Corner!

Re-Read Saturday News

The read/re-read of The Five Dysfunctions of a Team by Patrick Lencioni (published by Jossey-Bass) continues on the Blog. Lencioni’s model of team dysfunctions is illustrated through a set of crises that show the common problems that make teams into dysfunctional collections of individuals. The current entry features the sections titled Leaks through Plowing On.

Takeaways from this week include:

  • Partial information leads to misinterpretations.
  • Executives need to be ultimately loyal to the executive team rather than their siloed organizations.
  • Productive conflict requires facilitation to learn.

Visit the Software Process and Measurement Cast blog to participate in this and previous re-reads.

Next SPaMCAST

The Software Process and Measurement Cast 420 will feature our interview with John Hunter. John returns to the podcast to discuss building capability in the organization and understanding the impact of variation. We also talked about Deming and why people tack the word improvement on almost anything!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.


Categories: Process Management


Estimates, Forecasts, Projections

Herding Cats - Glen Alleman - Sun, 11/27/2016 - 15:33

The terms of statistics and probability are misused in many domains, politics being one. The #NoEstimates advocates are among the most prolific abusers of these terms. Here are the mathematical definitions: not the Wikipedia definitions, not the self-made definitions used to support their conjectures.

Estimates

  • An Estimate is a value inferred for a population of values based on data collected from a sample of that population. The estimate can also be produced parametrically or through a simulation (Monte Carlo is common, but Method of Moments is another we use).
    • Estimates can be about the past, present, or future.
    • We can estimate the number of clams from the Pleistocene era that are in the shale formations near our house.
    • We can estimate the number of people sitting in Folsom Field for last night’s game against Utah. The Buffs won and are now the Pac-12 South Champs.
    • We can estimate the total cost, total duration, and the probability that all the Features will be delivered on the program we are working for the US Government. Or ANY software project for that matter.
  • Estimates have precision and accuracy.
    • These values are estimates as well.
    • The estimated completion cost for this program is $357,000,000 with an accuracy of $200,000 and a precision of $300,000.
    • Another way to speak about the estimated cost is: This program will cost $357,000,000 or less with 80% confidence.
  • An estimate is a statistic about a whole population of possible values from a previous reference period, or from a model that can generate possible values given the conditions of the model.

An estimate is the calculated approximation of a result
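
To make the simulation idea above concrete, here is a minimal Monte Carlo sketch (my illustration, not Alleman’s; the task durations are invented) that produces an estimate with an 80% confidence bound, in Python:

import random

# Three hypothetical tasks; duration in days is uncertain and modeled
# with triangular distributions: (low, most likely, high).
tasks = [(3, 5, 9), (2, 4, 8), (5, 8, 15)]

def one_trial():
    # random.triangular takes (low, high, mode)
    return sum(random.triangular(low, high, mode) for low, mode, high in tasks)

trials = sorted(one_trial() for _ in range(10_000))
p80 = trials[int(0.80 * len(trials))]
print(f"The work will take {p80:.1f} days or less, with 80% confidence")

The percentile read off the sorted trials is exactly the “$X or less with 80% confidence” form of estimate described above.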

Forecasts

  • Forecasts speculate future values for a population of possible values with a certain level of confidence, based on the current and past values, as an expectation (prediction) of what will happen:
    • This is the basis of weather forecasting.
    • If you listen carefully to the weather forecast, it says there is a 30% chance of snow next week over the forecast area.
    • We live at the mouth of a canyon at 5,095', and if there is a 30% chance of snow in Boulder (8 miles south), there is a much lower chance in our neighborhood.
    • Business forecasts, weather forecasts, and traffic forecasts are typical. These forecasts usually come from models of the process being forecast. NOAA and NCAR are in our town, so there is lots of forecasting going on. Weather as well as climate.
    • It is not so typical to forecast the cost of a project or forecast the delivery date. Those are estimated values.
    • For example, a financial statement presents, to the best of the responsible party’s knowledge and belief, an entity’s expected financial position, results of operations, and cash flows. [2]
  • In a forecast, the assumptions represent expectations of actual future events.
    • Sales forecasts
    • Revenue growth
    • Weather forecasts
    • Forecasts of cattle prices in the spring

A forecast is a prediction of some outcome in the future. Forecasts are based on estimating the processes that produce the forecast. The underlying statistical models (circulation, thermal models) of weather forecasting are estimates of the compressible fluid flow of gases and moisture in the atmosphere (way oversimplified).

Projections/Prediction

  • Projections indicate what future values may exist for a population of values if the assumed patterns of change were to occur. Projections are not a prediction that the population will change in that manner.
    • Projected revenue for GE aircraft engine sales in 2017 was the subject of an article in this week’s Aviation Week & Space Technology.
    • A projection simply indicates a future value for the population if the set of underlying assumptions occurs.

A prediction says something about the future.

Project cost, schedule, and technical performance Estimates

All projects contain uncertainty. Uncertainty comes in two forms - aleatory (irreducible) and epistemic (reducible). If we’re going to make decisions in the presence of these uncertainties, we need to estimate what their values are, what the range of values is, how stable the underlying processes that generate these values are, how these values interact with all the elements of the project, and what impact these ranges of values will have on the probability of success of our project.

Project decisions in the presence of uncertainty cannot be made without estimates. Anyone claiming otherwise does not understand the statistics and probability of outcomes on projects.

As well, anyone who claims estimates are a waste, not needed, misused by management, or any other dysfunction, is doing them wrong. So, as we said at Rocky Flats - Don’t Do Stupid Things On Purpose. Which means when you do hear those phrases, you’ll know they are Doing Stupid Things on Purpose.

And when you hear, “we don’t need estimates, we need budget,” remember:

In a world of limited funds, as a project manager, Product Owner, or even sole contributor, you’re constantly deciding how to get the most return for your investment. The more accurate your estimate of project cost is, the better able you will be to manage your project’s budget.

Another example of not understanding the probability and statistics of projects and the businesses that fund them: there are two estimates needed for all projects that operate in the presence of uncertainty:

  • Estimate at Completion (EAC)
    • EAC is the expected cost of the project when it is complete.
    • This can be calculated bottom-up from the past performance and future projections of performance for the project’s work - which in the future will be an estimate.
  • Estimate to Complete (ETC)
    • ETC is the expected cost to complete the project.
  • The ETC is used to calculate the EAC:
    • EAC = Actual Costs to Date (AC) + Estimated Cost to Complete (ETC).
    • EAC = Actual performance to date / Some Index of Performance.

This last formula is universal and can be used no matter the software development method.
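
Here is that formula as a short, hedged sketch (the figures are invented for illustration), using the cost performance index (CPI = EV / AC) as the index of performance, in Python:

# Hypothetical status of a project, in dollars.
bac = 1_000_000.0  # Budget at Completion: total planned cost
ac = 400_000.0     # Actual Cost to date
ev = 320_000.0     # Earned Value (BCWP): budgeted cost of work performed

cpi = ev / ac               # index of performance = 0.8, i.e., over cost
etc = (bac - ev) / cpi      # Estimate to Complete the remaining work
eac = ac + etc              # Estimate at Completion

print(f"CPI = {cpi:.2f}, ETC = ${etc:,.0f}, EAC = ${eac:,.0f}")
# CPI = 0.80, ETC = $850,000, EAC = $1,250,000

Note that ac + (bac - ev) / cpi collapses to bac / cpi here, which is the “actual performance to date / some index of performance” form above.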

  • Agile has such a formula - it’s called the Burn Down Chart. We’re burning down story points at some rate. If we continue at this rate, we will be done by this estimated date (see the sketch after this list).
  • Same for traditional projects. We’re spending at a current rate - the run rate - and if we keep spending at this rate, the project will cost that much.
  • Earned Value Management provides the same EAC and can also provide an Estimated Completion Date (ECD).
  • Earned Schedule provides a better ECD.
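
The burn-down version of the same idea, as a minimal sketch with invented numbers:

from datetime import date, timedelta

remaining_points = 120   # story points left in the backlog (invented)
velocity = 20            # points actually burned down per two-week sprint
sprint = timedelta(weeks=2)

sprints_left = -(-remaining_points // velocity)  # ceiling division
done_by = date(2016, 11, 27) + sprints_left * sprint
print(f"At {velocity} points/sprint, we estimate we are done by {done_by}")

The projected date is only as good as the assumption that the observed velocity continues, which is the point: the projection is itself an estimate.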

Wrapup

Nothing in progression can rest on its original plan - Thomas Monson [5]

All project work is driven by uncertainty. Uncertainty is modeled by random variables. These variables can represent aleatory uncertainty or epistemic uncertainty. This uncertainty creates risk, and as stated here often:

Risk Management is How Adults Manage Projects - Tim Lister

So if you hear the conjecture that decisions can be made in the presence of uncertainty without estimates, you’ll now know that is a complete load of crap; run away.

If this topic interests you, here’s a Bibliography of materials for estimating and lots of other topics in agile software development that is updated all the time. Please read and use these when you hear unsubstantiated claims around estimating in the presence of uncertainty. Making estimates is our business, and this resource has served us well over the decades.

Other Resources

  1. Australian Bureau of Statistics
  2. Financial Forecasts and Projections, AT §301.06, AICPA, 2015
  3. Earned Value Management in EIA-748-C
  4. Earned Schedule uses the same values as Earned Value to produce an estimated completion date, www.earnedschedule.com. I started using ES at Rocky Flats to explain to the steel workers that their productivity, as measured in Budgeted Cost for Work Performed (BCWP or EV), meant they were late. ES told them how late and provided the date of the projected completion of the planned work.
  5. Project Management Analytics: A Data-Driven Approach to Making Rational and Effective Project Decisions, Harjit Singh, Pearson FT Press; 1st Edition, November 12, 2015.


Categories: Project Management

Rethinking Equivalence Class Partitioning, Part 1

James Bach’s Blog - Sun, 11/27/2016 - 13:41

Wikipedia’s article on equivalence class partitioning (ECP) is a great example of the poor thinking and teaching and writing that often passes for wisdom in the testing field. It’s narrow and misleading, serving to imply that testing is some little game we play with our software, rather than an open investigation of a complex phenomenon.

(No, I’m not going to edit that article. I don’t find it fun or rewarding to offer my expertise in return for arguments with anonymous amateurs. Wikipedia is important because it serves as a nearly universal reference point when criticizing popular knowledge, but just like popular knowledge itself, it is not fixable. The populus will always prevail, and the populus is not very thoughtful.)

In this article I will comment on the Wikipedia post. In a subsequent post I will describe ECP my way, and you can decide for yourself if that is better than Wikipedia.

“Equivalence partitioning or equivalence class partitioning (ECP)[1] is a software testing technique that divides the input data of a software unit into partitions of equivalent data from which test cases can be derived.”

Not exactly. There’s no reason why ECP should be limited to “input data” as such. The ECP thought process may be applied to output, or even versions of products, test environments, or test cases themselves. ECP applies to anything you might be considering to do that involves any variations that may influence the outcome of a test.

Yes, ECP is a technique, but a better word for it is “heuristic.” A heuristic is a fallible method of solving a problem. ECP is extremely fallible, and yet useful.

“In principle, test cases are designed to cover each partition at least once. This technique tries to define test cases that uncover classes of errors, thereby reducing the total number of test cases that must be developed.”

This text is pretty good. Note the phrase “In principle” and the use of the word “tries.” These are softening words, which are important because ECP is a heuristic, not an algorithm.

Speaking in terms of “test cases that must be developed,” however, is a misleading way to discuss testing. Testing is not about creating test cases. It is for damn sure not about the number of test cases you create. Testing is about performing experiments. And the totality of experimentation goes far beyond such questions as “what test case should I develop next?” The text should instead say “reducing test effort.”

“An advantage of this approach is reduction in the time required for testing a software due to lesser number of test cases.”

Sorry, no. The advantage of ECP is not in reducing the number of test cases. Nor is it even about reducing test effort, as such (even though it is true that ECP is “trying” to reduce test effort). ECP is just a way to systematically guess where the bigger bugs probably are, which helps you focus your efforts. ECP is a prioritization technique. It also helps you explain and defend those choices. Better prioritization does not, by itself, allow you to test with less effort, but we do want to stumble into the big bugs sooner rather than later. And we want to stumble into them with more purpose and less stumbling. And if we do that well, we will feel comfortable spending less effort on the testing. Reducing effort is really a side effect of ECP.

“Equivalence partitioning is typically applied to the inputs of a tested component, but may be applied to the outputs in rare cases. The equivalence partitions are usually derived from the requirements specification for input attributes that influence the processing of the test object.”

Typically? Usually? Has this writer done any sort of research that would substantiate that? No.

ECP is a process that we all do informally, not only in testing but in our daily lives. When you push open a door, do you consciously decide to push on a specific square centimeter of the metal push plate? No, you don’t. You know that for most doors it doesn’t matter where you push. All pushable places are more or less equivalent. That is ECP! We apply ECP to anything that we interact with.

Yes, we apply it to output. And yes, we can think of equivalence classes based on specifications, but we also think of them based on all other learning we do about the software. We perform ECP based on all that we know. If what we know is wrong (for instance if there are unexpected bugs) then our equivalence classes will also be wrong. But that’s okay, if you understand that ECP is a heuristic and not a golden ticket to perfect testing.

“The fundamental concept of ECP comes from equivalence class which in turn comes from equivalence relation. A software system is in effect a computable function implemented as an algorithm in some implementation programming language. Given an input test vector some instructions of that algorithm get covered, ( see code coverage for details ) others do not…”

At this point the article becomes Computer Science propaganda. This is why we can’t have nice things in testing: as soon as the CS people get hold of it, they turn it into a little logic game for gifted kids, rather than a pursuit worthy of adults charged with discovering important problems in technology before it’s too late.

The fundamental concept of ECP has nothing to do with computer science or computability. It has to do with logic. Logic predates computers. An equivalence class is simply a set. It is a set of things that share some property. The property of interest in ECP is utility for exploring a particular product risk. In other words, an equivalence class in testing is an assertion that any member of that particular group of things would be more or less equally able to reveal a particular kind of bug if it were employed in a particular kind of test.

If I define a “test condition” as something about a product or its environment that could be examined in a test, then I can define equivalence classes like this: An equivalence class is a set of tests or test conditions that are equivalent with respect to a particular product risk, in a particular context.

This implies that two inputs which are not equivalent for the purposes of one kind of bug may be equivalent for finding another kind of bug. It also implies that if we model a product incorrectly, we will also be unable to know the true equivalence classes. Actually, considering that bugs come in all shapes and sizes, to have the perfectly correct set of equivalence classes would be the same as knowing, without having tested, where all the bugs in the product are. This is because ECP is based on guessing what kind of bugs are in the product.
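
As a concrete (and deliberately fallible) sketch of that definition (mine, not Bach’s; the risk and the boundary below are guesses), ECP amounts to partitioning candidate tests by a suspected risk and picking one representative per class, in Python:

# Guessed risk: a very long input overflows the screen when displayed.
# The classes below are assertions of equivalence, not established facts.

def guessed_class(value):
    if value == "":
        return "empty input"
    if len(value) > 80:   # guessed boundary: wider than the display
        return "overflows display"
    return "fits on display"

candidates = ["", "abc", "hello world", "x" * 500]

representatives = {}
for value in candidates:
    # keep the first member we see of each class as its representative
    representatives.setdefault(guessed_class(value), value)

for cls, rep in representatives.items():
    print(f"{cls}: test with {rep[:12]!r}")

If the guess about the risk or the boundary is wrong, the classes are wrong too, which is why this is a heuristic for prioritizing effort and not a guarantee of coverage.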

If you read the technical stuff about Computer Science in the Wikipedia article, you will see that the author has decided that two inputs which cover the same code are therefore equivalent for bug finding purposes. But this is not remotely true! This is a fantasy propagated by people who I suspect have never tested anything that mattered. Off the top of my head, code-coverage-as-gold-standard ignores performance bugs, requirements bugs, usability bugs, data type bugs, security bugs, and integration bugs. Imagine two tests that cover the same code, and both involve input that is displayed on the screen, except that one includes an input which is so long that when it prints it goes off the edge of the screen. This is a bug that the short input didn’t find, even though both inputs are “valid” and “do the same thing” functionally.

The Fundamental Problem With Most Testing Advice Is…

The problem with most testing advice is that it is either uncritical folklore that falls apart as soon as you examine it, or else it is misplaced formalism that doesn’t apply to realistic open-ended problems. Testing advice is better when it is grounded in a general systems perspective as well as a social science perspective. Both of these perspectives understand and use heuristics. ECP is a powerful, ubiquitous, and rather simple heuristic, whose utility comes from and is limited by your mental model of the product. In my next post, I will walk through an example of how I use it in real life.

Categories: Testing & QA

Five Dysfunctions of a Team, Patrick Lencioni: Re-Read Week 9

 


In this week’s re-read of The Five Dysfunctions of a Team by Patrick Lencioni (Jossey-Bass, Copyright 2002, 33rd printing), we continue with the third section of the book. This section begins the second off-site by wrestling with defining who their first team is. The concept of the first team can be summarized as the team to which a member owes their ultimate work loyalty. Any scenario that includes a hierarchy of teams faces this issue.

(Remember to buy a copy from the link above and read along.) We are well over halfway through this book and I will begin soliciting ideas for the next book soon.

Leaks
Kathryn discovers that others in the organization seem to know bits and pieces of what was going on in the offsite meetings. Opinions are being formed based on incomplete knowledge. Opinions based on partial knowledge tend to be interpreted through the filter of individual and team boundaries. Misperceptions can lead to confusion and infighting amongst teams.

Offsite Number Two
The second offsite began with Kathryn bringing up the topic of what the executive team had told their subordinates about the first offsite meeting. The issue was not that they had communicated, but rather how they had handled confidential conversations among the team, and the question of which team they owed their ultimate loyalty to.

The question became who the executive team members are loyal to: their subordinates or the executive team. Several of the executive team were closer to their subordinates than they were to the management team. Members of a team that has different ultimate loyalties than the executive team (and by extension the organization) will tend to put the needs of their other team ahead of the organization. This can lead to a type of local optimization in which one team’s performance is optimized at the expense of optimizing the performance of the organization. The discussion of shifting loyalties to the executive team caused consternation. Team members equated shifting loyalties to bad management and abandoning carefully crafted relationships that had been built and nurtured over time.

Plowing on
Kathryn moved the team on by asking how they were doing. What she got was a tactical discussion of the impact of JR quitting and Nick taking over the sales function. What she really wanted was the team’s perception of how they were doing as a team. The answer was that the team still had not learned to discuss hard problems, such as resource allocation.

Carlos suggested that the organization was engineering-heavy, which caused Martin to go into defense mode, refusing to accept or discuss the possibility. Kathryn defused the negative conflict by facilitating it into more productive conflict that broke down walls between the individuals, by ensuring that everyone developed an understanding that they all wanted the best for the company. In the end, Martin went to the whiteboard and educated the team on what everyone in engineering was doing. Martin’s executive team peers were surprised by everything going on within engineering. The rest of the team engaged, going to the whiteboard to facilitate and add their input to the discussion of resources. The conversation ended with a decision to make changes so that some engineers moved into a sales support role, because that was what was best for the organization and not just for Martin and the engineering group. The section and conversation ended with a lunch break and Kathryn’s announcement that they would talk about dealing with interpersonal discomfort and holding each other accountable next.

Three quick takeaways:

  • Partial information leads to misinterpretations.
  • Executives need to be ultimately loyal to the executive team rather than their siloed organizations.
  • Productive conflict requires facilitation to learn.

Previous Installments in the re-read of The Five Dysfunctions of a Team by Patrick Lencioni:

Week 1 – Introduction through Observations

Week 2 – The Staff through the End Run

Week 3 – Drawing the Line through Pushing Back

Week 4 – Entering Danger through Rebound

Week 5 – Awareness through Goals

Week 6 – Deep Tissue through Exhibition

Week 7 – Film Noir through Application

Week 8 – On-site through Fireworks


Categories: Process Management

Quote of the Day

Herding Cats - Glen Alleman - Sat, 11/26/2016 - 22:35

When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind. -- Lord Kelvin, Popular Lectures and Addresses, 1889.

Without numbers, without some means of assessing the situation, the outcomes, the impacts, any conjecture is of little use.

Categories: Project Management

Stuff The Internet Says On Scalability For November 25th, 2016

Hey, it's HighScalability time:

 

Margaret Hamilton was honored with the Presidential Medal of Freedom for writing Apollo guidance software. Oddly, she's absent from best programmers of all time lists.

 

If you like this sort of Stuff then please support me on Patreon.
  • 98 seconds: before camera infected with malware; zeptosecond: smallest fragment of time ever measured; 50%: Google Cloud cheaper than AWS; 50%: of the world is on-line;

  • Quotable Quotes:
    • @skamille: Sometimes I think that human societies just weren't meant to scale to billions of people sharing arbitrary information
    • @joshk0: At @GetArbor we use #kubernetes to host a 30K QPS ad-tech serving platform. Maybe smaller than Pokemon Go but nothing to sneeze at.
    • HFT Guy: 2016 should be remembered as the year Google became a better choice than AWS. If 50% cheaper is not a solid argument, I don’t know what is.
    • Glenn Marcus: Hybrid [Progressive Web App] development takes 260% more effort man hours than Native development.
    • Bruce Schneier: I want to suggest another way of thinking about it in that everything is now a computer: This is not a phone. It’s a computer that makes phone calls. A refrigerator is a computer that keeps things cold. ATM machine is a computer with money inside. Your car is not a mechanical device with a computer. It’s a computer with four wheels and an engine… And this is the Internet of Things, and this is what caused the DDoS attack we’re talking about.
    • Bruce Schneier: I don’t like this. I like the world where the internet can do whatever it wants, whenever it wants, at all times. It’s fun. This is a fun device. But I’m not sure we can do that anymore.
    • southpolesteve: [Lambda] is cheaper and simpler to operate than our previous ec2+Opsworks setup. We get code to production faster and spend more time on actual business problems vs infrastructure problems.
    • Carlo Rovelli: Meaning = Information + Evolution
    • chadscira: We have been using Rancher as well... It allowed us to move away from DO and AWS. Now most of our infra is from OVH :). It's been smooth sailing. Because of massive costs savings we were able to just reinvest it in our own redundancy. Also 12-factor apps are pretty damn resilient.
    • Fiahil: Making separate [Google] accounts might not be enough considering they allegedly banned accounts related to each others by recovery address. Why would you think they would not do the same with accounts sharing occasionally the same laptop, the same ip address, and the same first and last name ?
    • @swardley: Arghhh, one of those "can IBM beat Amazon?" .... the answer has three parts 1) the game has become harder  2) yes it could  3) no it won't
    • fest: Replaying the sensor inputs and evaluating new estimated state is a really good way of debugging failures (because you can't just stop the system mid-air and evaluate internal state). It also helps with regression test suite and trying out new algorithms quickly.
    • @Tibocut: «Institutions prefer to have trillions sitting still than redistributing them towards opportunities» @asymco https://youtu.be/nD8QszyiVTY  at 2h45
    • @AlanaMassey: A gathering of two or more average looking white men is referred to by biologists as "a podcast."
    • @RyanHoliday: "How slow men are in matters when they believe they have time and how swift they are when necessity drives them to it." Machiavelli
    • agataygurturk: We use route53 health checks to invoke API gateway and thus the backend Lambda.
    • Paul Biggar: Yeah, BDSM. It’s San Francisco. Everyone’s into distributed systems and BDSM.
    • @mims: Since the Apollo program, we've privatized the R&D that drives all innovation. That might be a problem.
    • Backblaze:  We have fewer drives because over the last quarter we swapped out more than 3,500 2 terabyte (TB) HGST and WDC hard drives for 2,400 8 TB Seagate drives. So we have fewer drives, but more data.
    • @lee_newcombe: Fun finding from my talk earlier.  40 attendees: 37 on cloud, 3 about to start.  Only one trying serverless.  There's your opportunity folks
    • Resilience Thinking: In resilient systems everything is not necessarily connected to everything else. Overconnected systems are susceptible to shocks and they are rapidly transmitted through the system. A resilient system opposes such a trend; it would maintain or create a degree of modularity.

  • Security expert Rob Graham with a stunning blow by blow twitter story of a botnet infecting his brand new security camera. The whole process starts within 98 seconds of putting the camera on the internet, which is far faster than an ordinary mortal can configure the device to be secure. This was a cheap camera that had good reviews. At some point we need to think about all this too cheap equipment as being funded by a Botnet Subsidy. It's almost too much of a coincidence that all these cheap devices, meant to be bought like candy in the mass consumer market, have such obviously poor security. Maybe it's not an accident? See also, Pre-installed Backdoor On 700 Million Android

  • Their profit margin is your opportunity. With The Era of Cloud Price Discounts Is Fading and the cost of metal continuing to decrease, is now a good time to consider transitioning to bare metal on-premise type infrastructures? The incentives are now coming into alignment. Kubernetes: Finally...A True Cloud Platform by Sam Ghods, Co-founder, Box makes a good case for Kubernetes as the only truly portable infrastructure option.

  • This is both pure genius and a sure sign of the apocalypse. Exclusive Interview: How Jared Kushner Won Trump The White House. Democrats may have thought they had a technological lead because of the last presidential election, but it turns out they were fighting the last war. Technology changed and they did not. Old: targeting, organizing and motivating voters. New: Moneyball meets Social Media with a twist of message tailoring, sentiment manipulation and machine learning. If this presidential election could be represented as a battle between Peter Thiel and Eric Schmidt: Thiel triumphed. Traditional microtargeting is almost quaint. Now, using Facebook's ability to target users with dark posts, a newsfeed message seen by no one aside from the users being targeted, each user can be shown a world specifically tailored to push and prod their particular buttons. For an explanation see The Secret Agenda of a Facebook Quiz. That's why it's both genius and apocalyptical. Things will never be the same. 

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Kubernetes: Spinning up a Neo4j 3.1 Causal Cluster

Mark Needham - Fri, 11/25/2016 - 17:55

A couple of weeks ago I wrote a blog post explaining how I’d created a Neo4j causal cluster using docker containers directly and for my next pet project I wanted to use Kubernetes as an orchestration layer so that I could declaratively change the number of servers in my cluster.

I’d never used Kubernetes before but I saw a presentation showing how to use it to create an Elastic cluster at the GDG Cloud meetup a couple of months ago.

In that presentation I was introduced to the idea of a PetSet, an abstraction exposed by Kubernetes that allows us to manage a set of pods (containers) which have a fixed identity. The documentation explains it better:

A PetSet ensures that a specified number of “pets” with unique identities are running at any given time. The identity of a Pet is comprised of:

  • a stable hostname, available in DNS
  • an ordinal index
  • stable storage: linked to the ordinal & hostname

In my case I need to have a stable hostname because each member of a Neo4j cluster is given a list of other cluster members with which it can create a new cluster or join an already existing one. This is the first use case described in the documentation:

PetSet also helps with the 2 most common problems encountered managing such clustered applications:

  • discovery of peers for quorum
  • startup/teardown ordering

So the first thing we need to do is create some stable storage for our pods to use.

We’ll create a cluster of 3 members so we need to create one PersistentVolume for each of them. The following script does the job:

volumes.sh

for i in $(seq 0 2); do
  cat <<EOF | kubectl create -f -
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv${i}
  labels:
    type: local
    app: neo4j
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/${i}"
EOF
 
  cat <<EOF | kubectl create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: datadir-neo4j-${i}
  labels:
    app: neo4j
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
done;

If we run this script it’ll create 3 volumes which we can see by running the following command:

$ kubectl get pv
NAME      CAPACITY   ACCESSMODES   STATUS    CLAIM                     REASON    AGE
pv0       1Gi        RWO           Bound     default/datadir-neo4j-0             7s
pv1       1Gi        RWO           Bound     default/datadir-neo4j-1             7s
pv2       1Gi        RWO           Bound     default/datadir-neo4j-2             7s
$ kubectl get pvc
NAME              STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
datadir-neo4j-0   Bound     pv0       1Gi        RWO           26s
datadir-neo4j-1   Bound     pv1       1Gi        RWO           26s
datadir-neo4j-2   Bound     pv2       1Gi        RWO           25s

Next we need to create a PetSet template. After a lot of iterations I ended up with the following:

# Headless service to provide DNS lookup
apiVersion: v1
kind: Service
metadata:
  labels:
    app: neo4j
  name: neo4j
spec:
  clusterIP: None
  ports:
    - port: 7474
  selector:
    app: neo4j
---
# new API name
apiVersion: "apps/v1alpha1"
kind: PetSet
metadata:
  name: neo4j
spec:
  serviceName: neo4j
  replicas: 3
  template:
    metadata:
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
        pod.beta.kubernetes.io/init-containers: '[
            {
                "name": "install",
                "image": "gcr.io/google_containers/busybox:1.24",
                "command": ["/bin/sh", "-c", "echo \"
                unsupported.dbms.edition=enterprise\n
                dbms.mode=CORE\n
                dbms.connectors.default_advertised_address=$HOSTNAME.neo4j.default.svc.cluster.local\n
                dbms.connectors.default_listen_address=0.0.0.0\n
                dbms.connector.bolt.type=BOLT\n
                dbms.connector.bolt.enabled=true\n
                dbms.connector.bolt.listen_address=0.0.0.0:7687\n
                dbms.connector.http.type=HTTP\n
                dbms.connector.http.enabled=true\n
                dbms.connector.http.listen_address=0.0.0.0:7474\n
                causal_clustering.raft_messages_log_enable=true\n
                causal_clustering.initial_discovery_members=neo4j-0.neo4j.default.svc.cluster.local:5000,neo4j-1.neo4j.default.svc.cluster.local:5000,neo4j-2.neo4j.default.svc.cluster.local:5000\n
                causal_clustering.leader_election_timeout=2s\n
                  \" > /work-dir/neo4j.conf" ],
                "volumeMounts": [
                    {
                        "name": "confdir",
                        "mountPath": "/work-dir"
                    }
                ]
            }
        ]'
      labels:
        app: neo4j
    spec:
      containers:
      - name: neo4j
        image: "neo4j/neo4j-experimental:3.1.0-M13-beta3-enterprise"
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
          name: discovery
        - containerPort: 6000
          name: tx
        - containerPort: 7000
          name: raft
        - containerPort: 7474
          name: browser
        - containerPort: 7687
          name: bolt
        securityContext:
          privileged: true
        volumeMounts:
        - name: datadir
          mountPath: /data
        - name: confdir
          mountPath: /conf
      volumes:
      - name: confdir
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.alpha.kubernetes.io/storage-class: anything
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

The main thing I had trouble with was getting the members of the cluster to talk to each other. The default docker config uses hostnames but I found that pods were unable to contact each other unless I specified the FQDN in the config file. We can run the following command to create the PetSet:

$ kubectl create -f neo4j.yaml 
service "neo4j" created
petset "neo4j" created

We can check if the pods are up and running by executing the following command:

$ kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
neo4j-0   1/1       Running   0          2m
neo4j-1   1/1       Running   0          14s
neo4j-2   1/1       Running   0          10s

And we can tail neo4j’s log files like this:

$ kubectl logs neo4j-0
Starting Neo4j.
2016-11-25 16:39:50.333+0000 INFO  Starting...
2016-11-25 16:39:51.723+0000 INFO  Bolt enabled on 0.0.0.0:7687.
2016-11-25 16:39:51.733+0000 INFO  Initiating metrics...
2016-11-25 16:39:51.911+0000 INFO  Waiting for other members to join cluster before continuing...
2016-11-25 16:40:12.074+0000 INFO  Started.
2016-11-25 16:40:12.428+0000 INFO  Mounted REST API at: /db/manage
2016-11-25 16:40:13.350+0000 INFO  Remote interface available at http://neo4j-0.neo4j.default.svc.cluster.local:7474/
$ kubectl logs neo4j-1
Starting Neo4j.
2016-11-25 16:39:53.846+0000 INFO  Starting...
2016-11-25 16:39:56.212+0000 INFO  Bolt enabled on 0.0.0.0:7687.
2016-11-25 16:39:56.225+0000 INFO  Initiating metrics...
2016-11-25 16:39:56.341+0000 INFO  Waiting for other members to join cluster before continuing...
2016-11-25 16:40:16.623+0000 INFO  Started.
2016-11-25 16:40:16.951+0000 INFO  Mounted REST API at: /db/manage
2016-11-25 16:40:17.607+0000 INFO  Remote interface available at http://neo4j-1.neo4j.default.svc.cluster.local:7474/
$ kubectl logs neo4j-2
Starting Neo4j.
2016-11-25 16:39:57.828+0000 INFO  Starting...
2016-11-25 16:39:59.166+0000 INFO  Bolt enabled on 0.0.0.0:7687.
2016-11-25 16:39:59.176+0000 INFO  Initiating metrics...
2016-11-25 16:39:59.329+0000 INFO  Waiting for other members to join cluster before continuing...
2016-11-25 16:40:19.216+0000 INFO  Started.
2016-11-25 16:40:19.675+0000 INFO  Mounted REST API at: /db/manage
2016-11-25 16:40:21.029+0000 INFO  Remote interface available at http://neo4j-2.neo4j.default.svc.cluster.local:7474/

I wanted to log into the servers from my host machine’s browser so I setup port forwarding for each of the servers:

$ kubectl port-forward neo4j-0 7474:7474 7687:7687

We can then get an overview of the cluster by running the following procedure:

CALL dbms.cluster.overview()
 
╒════════════════════════════════════╤═════════════════════════════════════════════════════╤════════╕
│id                                  │addresses                                            │role    │
╞════════════════════════════════════╪═════════════════════════════════════════════════════╪════════╡
│81d8e5e2-02db-4414-85de-a7025b346e84│[bolt://neo4j-0.neo4j.default.svc.cluster.local:7687,│LEADER  │
│                                    │ http://neo4j-0.neo4j.default.svc.cluster.local:7474]│        │
├────────────────────────────────────┼─────────────────────────────────────────────────────┼────────┤
│347b7517-7ca0-4b92-b9f0-9249d46b2ad3│[bolt://neo4j-1.neo4j.default.svc.cluster.local:7687,│FOLLOWER│
│                                    │ http://neo4j-1.neo4j.default.svc.cluster.local:7474]│        │
├────────────────────────────────────┼─────────────────────────────────────────────────────┼────────┤
│a5ec1335-91ce-4358-910b-8af9086c2969│[bolt://neo4j-2.neo4j.default.svc.cluster.local:7687,│FOLLOWER│
│                                    │ http://neo4j-2.neo4j.default.svc.cluster.local:7474]│        │
└────────────────────────────────────┴─────────────────────────────────────────────────────┴────────┘

So far so good. What if we want to have 5 servers in the cluster instead of 3? We can run the following command to increase our replica size:

$ kubectl patch petset neo4j -p '{"spec":{"replicas":5}}'
"neo4j" patched

Let’s run that procedure again:

CALL dbms.cluster.overview()
 
╒════════════════════════════════════╀═════════════════════════════════════════════════════╀════════╕
β”‚id                                  β”‚addresses                                            β”‚role    β”‚
β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════════════════════════════════════════════════════β•ͺ════════║
β”‚81d8e5e2-02db-4414-85de-a7025b346e84β”‚[bolt://neo4j-0.neo4j.default.svc.cluster.local:7687,β”‚LEADER  β”‚
β”‚                                    β”‚ http://neo4j-0.neo4j.default.svc.cluster.local:7474]β”‚        β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚347b7517-7ca0-4b92-b9f0-9249d46b2ad3β”‚[bolt://neo4j-1.neo4j.default.svc.cluster.local:7687,β”‚FOLLOWERβ”‚
β”‚                                    β”‚ http://neo4j-1.neo4j.default.svc.cluster.local:7474]β”‚        β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚a5ec1335-91ce-4358-910b-8af9086c2969β”‚[bolt://neo4j-2.neo4j.default.svc.cluster.local:7687,β”‚FOLLOWERβ”‚
β”‚                                    β”‚ http://neo4j-2.neo4j.default.svc.cluster.local:7474]β”‚        β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚28613d06-d4c5-461c-b277-ddb3f05e5647β”‚[bolt://neo4j-3.neo4j.default.svc.cluster.local:7687,β”‚FOLLOWERβ”‚
β”‚                                    β”‚ http://neo4j-3.neo4j.default.svc.cluster.local:7474]β”‚        β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚2eaa0058-a4f3-4f07-9f22-d310562ad1ecβ”‚[bolt://neo4j-4.neo4j.default.svc.cluster.local:7687,β”‚FOLLOWERβ”‚
β”‚                                    β”‚ http://neo4j-4.neo4j.default.svc.cluster.local:7474]β”‚        β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Neat! And it’s just as easy to scale back down to 3:

$ kubectl patch petset neo4j -p '{"spec":{"replicas":3}}'
"neo4j" patched

CALL dbms.cluster.overview()
 
╒════════════════════════════════════╀══════════════════════════════════════════════════════╀════════╕
β”‚id                                  β”‚addresses                                             β”‚role    β”‚
β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════════════════════════════════════════════════════β•ͺ════════║
β”‚81d8e5e2-02db-4414-85de-a7025b346e84β”‚[bolt://neo4j-0.neo4j.default.svc.cluster.local:7687, β”‚LEADER  β”‚
β”‚                                    β”‚http://neo4j-0.neo4j.default.svc.cluster.local:7474]  β”‚        β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚347b7517-7ca0-4b92-b9f0-9249d46b2ad3β”‚[bolt://neo4j-1.neo4j.default.svc.cluster.local:7687, β”‚FOLLOWERβ”‚
β”‚                                    β”‚http://neo4j-1.neo4j.default.svc.cluster.local:7474]  β”‚        β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚a5ec1335-91ce-4358-910b-8af9086c2969β”‚[bolt://neo4j-2.neo4j.default.svc.cluster.local:7687, β”‚FOLLOWERβ”‚
β”‚                                    β”‚http://neo4j-2.neo4j.default.svc.cluster.local:7474]  β”‚        β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Next I need to look at how we can add read replicas into the cluster. These don’t take part in the membership/quorum algorithm, so I think I’ll be able to use the more common ReplicationController/Pod architecture for those.
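I haven’t tried this yet, but a rough sketch might look like the following — the image tag and the env-var names are assumptions based on the stock neo4j Docker image’s setting-to-env-var convention:

apiVersion: v1
kind: ReplicationController
metadata:
  name: neo4j-replica
spec:
  replicas: 2
  selector:
    app: neo4j-replica
  template:
    metadata:
      labels:
        app: neo4j-replica
    spec:
      containers:
      - name: neo4j
        image: neo4j:3.1.0-enterprise        # hypothetical tag
        env:
        - name: NEO4J_dbms_mode              # assumes the image maps this to dbms.mode
          value: READ_REPLICA
        - name: NEO4J_causal__clustering_initial__discovery__members
          value: neo4j-0.neo4j.default.svc.cluster.local:5000   # point at the core servers
        ports:
        - containerPort: 7474
        - containerPort: 7687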

If you want to play around with this, the code is available as a gist. I’m using minikube for all my experiments but I’ll hopefully get around to trying this on GCE or AWS soon.

Categories: Programming

Vanity Metrics Have a Downside!

Danger!

Vanity metrics are not merely an inconvenience; they can be harmful to teams and organizations. Vanity metrics can elicit three major categories of poor behavior.

  1. Distraction. You get what you measure. Vanity metrics can lead teams or organizations into putting time and effort into practices or work products that don’t improve the delivery of value.
  2. Trick teams or organizations into believing they have answers when they don’t. A close cousin to distraction is the belief that the numbers are providing insight into how to improve value delivery when what is being measured isn’t connected to the flow of value. For example, an organization that measures the raw number of stories delivered across the department should not draw many inferences about the velocity of stories delivered on a month-to-month basis.
  3. Make teams or organizations feel good without providing guidance. Another kissing cousin to distraction is metrics that don’t provide guidance. Metrics that don’t provide guidance steal time from work that could provide real value because they require time to collect, analyze, and discuss. On Twitter, Greger Wikstrand recently pointed out:

@TCagley actually, injuries and sick days are very good inverse indicators of general management ability

While I agree with Greger’s statement, his assessment is premised on someone using the metric to affect how work is done. All too often, metrics such as injuries and sick days are used to communicate with the outside world rather than to provide guidance on how work is delivered.

Vanity metrics can distract teams and organizations by sapping time and energy from delivering value. Teams and organizations should invest their energy in collecting metrics that help them make decisions. A simple test for every measure or metric is to ask: Based on the number or trend, do you know what you need to do? If the answer is β€˜no’, you have the wrong metric.


Categories: Process Management

Nomad 0.5 configuration templates: consul-template is dead! long live consul-template!

Xebia Blog - Thu, 11/24/2016 - 09:20
Or... has Nomad made the Consul-template tool obsolete? If you employ Consul or Vault to provide service discovery or secrets management to your applications, you will love the freshly released 0.5 version of the Nomad workload scheduler: it includes a new 'template' feature to dynamically generate configuration files from Consul and Vault data for the jobs it…
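To give a flavour of the feature, the new block in a Nomad 0.5 job file looks roughly like this (a minimal sketch; the paths and template are made up):

template {
  source      = "local/app.conf.tpl"   # a consul-template template, path hypothetical
  destination = "local/app.conf"
  change_mode = "restart"              # restart the task when the rendered file changes
}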

Final update to Android 7.1 Developer Preview

Android Developers Blog - Thu, 11/24/2016 - 01:05

Posted by Dave Burke, VP of Engineering

Today we're rolling out an update to the Android 7.1 Developer Preview -- the last before we release the final Android 7.1.1 platform to the ecosystem. Android 7.1.1 includes the developer features already available on Pixel and Pixel XL devices and adds optimizations and bug fixes on top of the base Android 7.1 platform. With Developer Preview 2, you can make sure your apps are ready for Android 7.1.1 and the consumers that will soon be running it on their devices.

As highlighted in October, we're also expanding the range of devices that can receive this Developer Preview update to Nexus 5X, Nexus 6P, Nexus 9, and Pixel C.

If you have a supported device that's enrolled in the Android Beta Program, you'll receive an update to Developer Preview 2 over the coming week. If you haven't enrolled your device yet, just visit the site to enroll your device and get the update.

In early December, we'll roll out Android 7.1.1 to the full lineup of supported devices as well as Pixel and Pixel XL devices.

What's in this update?

Developer Preview 2 is a release candidate for Android 7.1.1 that you can use to complete your app development and testing in preparation for the upcoming final release. It includes near-final system behaviors and UI, along with the latest bug fixes and optimizations across the system and Google apps.

It also includes the developer features and APIs (API level 25) already introduced in Developer Preview 1. If you haven't explored the developer features, you'll want to take a look at app shortcuts, round icon resources, and image keyboard support, among others -- you can see the full list of developer features here.

With Developer Preview 2, we're also updating the SDK build and platform tools in Android Studio, the Android 7.1.1 platform, and the API Level 25 emulator system images. The latest version of the support library (25.0.1) is also available for you to add image keyboard support, bottom navigation, and other features for devices running API Level 25 or earlier.
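In Gradle terms, the support library update is just a version bump on whichever modules your app already uses (a sketch; the module list here is illustrative):

dependencies {
    compile 'com.android.support:appcompat-v7:25.0.1'
    compile 'com.android.support:design:25.0.1'    // bottom navigation lives here
}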

For details on API Level 25 check out the API diffs and the updated API reference on the developer preview site.

Get your apps ready for Android 7.1

Now is the time to optimize your apps to look their best on Android 7.1.1. To get started, update to Android Studio 2.2.2 and then download the API Level 25 platform, emulator system images, and tools through the SDK Manager in Android Studio.

After installing the API Level 25 SDK, you can update your project's compileSdkVersion to 25 to build and test against the new APIs. If you're doing compatibility testing, we recommend updating your app's targetSdkVersion to 25 to test your app with compatibility behaviors disabled. For details on how to set up your app with the API Level 25 SDK, see Set up the Preview.
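Concretely, those are one-line changes in the module-level build.gradle (a minimal sketch — the minSdkVersion and build-tools version are placeholders, so keep your own values):

android {
    compileSdkVersion 25            // build against the API Level 25 SDK
    buildToolsVersion "25.0.0"      // assumed build-tools version
    defaultConfig {
        minSdkVersion 16            // hypothetical
        targetSdkVersion 25         // test with compatibility behaviors disabled
    }
}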

If you're adding app shortcuts or circular launcher icons to your app, you can use Android Studio's built-in Image Asset Studio to quickly help you create icons of different sizes that meet the material design guidelines. You can test your round icons on the Google APIs emulator for API Level 25, which includes support for round icons and the new Google Pixel Launcher.

Android Studio and the Google APIs emulator let you quickly create and test your round icon assets.

If you're adding image keyboard support, you can use the Messenger and Google Keyboard apps included in the preview system images for testing as they include support for this new API.

Scale your tests using Firebase Test Lab for Android

To help scale your testing, make sure to take advantage of Firebase Test Lab for Android and run your tests in the cloud at no charge during the preview period on all virtual devices including the Developer Preview 2 (API 25). You can use the automated crawler (Robo Test) to test your app without having to write any test scripts, or you can upload your own instrumentation (e.g. Espresso) tests. You can upload your tests here.

Publish your apps to alpha, beta or production channels in Google Play

After you've finished final testing, you can publish your updates compiled against, and optionally targeting, API 25 to Google Play. You can publish to your alpha, beta, or even production channels in the Google Play Developer Console. In this way, you can push your app updates to users whose devices are running Android 7.1, such as Pixel and Android Beta devices.

Get Developer Preview 2 on Your Eligible Device

If you have an eligible device that's already enrolled in the Android Beta Program, the device will get the Developer Preview 2 update over the coming week. No action is needed on your part. If you aren't yet enrolled in the program, the easiest way to get started is by visiting android.com/beta and opting in your eligible Android phone or tablet -- you'll soon receive this preview update over-the-air. As always, you can also download and flash this update manually.

As mentioned above, this Developer Preview update is available for Nexus 5X, Nexus 6P, Nexus 9, and Pixel C devices.

We're expecting to launch the final release of Android 7.1.1 in just a few weeks. Starting in December, we'll roll out Android 7.1.1 to the full lineup of supported preview devices, as well as the recently launched Pixel and Pixel XL devices. At that time, we'll also push the sources to AOSP, so our device manufacturer partners can bring this new platform update to consumers on their devices.

Meanwhile, we continue to welcome your feedback in the Developer Preview issue tracker, N Preview Developer community, or Android Beta community as we work towards the final consumer release in December!

Categories: Programming

Iterations and Increments

Agile is iterative and incremental development and frequent delivery with cultural change for transparency.

What do the words iterative and incremental mean?

Iterative means we take a one-piece-at-a-time approach to creating an entire feature. Iterative approaches manage the technical risk. We learn about the risk as we iterate through the entire feature set.

Incremental means we deliver those pieces of value. Incremental approaches manage the schedule risk, because we deliver finished work.

Agile works because we manage both technical and schedule risk. We iterate over a feature set, delivering chunks of value. (One reason to deliver value often is so we can change what the team does next.)

Here’s a way to think about this if I use a feature set called “secure login.” Now, no one wants to log in. But, people want to have security for payment. So secure login might be a way to get to secure payment. The theme, or what I have been calling a feature set, is “Secure login.” A feature set is several related features that deliver a theme.

If you want to iterate on the feature set, you might deliver these valuable increments (I first wrote about this in How to Use Continuous Planning):

  1. Already existing user can log in.
  2. Users can change their password.
  3. Add new user as a user.
  4. Add new user as admin.
  5. Prevent unwanted users from logging in: bots, certain IP addresses, or a physical location. (This is three separate stories.)

If you implement the first story, you can use a flat file. You can still use a flat file for the second story. Once you start to add more than 10 users, you might want to move to some sort of database. That would be a story. It’s not “Create a database.” The story is “Explore options for how to add 10, 100, 1000, 10000 users to our site so we can see what the performance and reliability implications are.”

Notice the “explore” as part of the story. That would lead to a spike to generate options that the team can discuss with the PO. Some options have performance implications.

Every time the team iterates over the feature set, they deliver an increment. Since many teams use timeboxes, they use “iterations” as the timebox. (If you use Scrum, you use sprints.) Notice the term “iterate over the feature set.”

In incremental life cycles, such as staged delivery, the team would finish all the features in the one feature set. Incremental life cycles did not necessarily timebox the incremental development. In iterative life cycles, such as spiral or RUP, the team would develop prototypes of features, or even partially finish features, but the final integration and test would occur only after all the iterative development was done.

In agile, we iterate over a feature set, delivering incremental value. If you don’t finish your stories, you’re in an iterative life cycle. If you don’t limit the features you finish and finish “all” of them, you are in an incremental life cycle.

There is No One Right Way to select a life cycle for your project. If you want to use agile, you iterate over a feature set in a short time, delivering chunks of value.

If you are having trouble slicing your stories so you can create feature sets, see my Product Owner Workshop (starting in January). Super early bird expires this coming Friday.

Categories: Project Management