
Software Development Blogs: Programming, Software Testing, Agile Project Management


Feed aggregator

Stuff The Internet Says On Scalability For January 16th, 2015

Hey, it's HighScalability time:


First people to free-climb the Dawn Wall of El Capitan using nothing but stone knives and bearskins (pics). 
  • $3.3 trillion: mobile revenue in 2014; ~10%: the difference between a good SpaceX landing and a crash; 6: hours for which quantum memory was held stable 
  • Quotable Quotes:
    • @stevesi: "'If you had bought the computing power found inside an iPhone 5S in 1991, it would have cost you $3.56 million.'"
    • @imgurAPI: Where do you buy shares in data structures? The Stack Exchange
    • @postwait: @xaprb agreed. @circonus does per-second monitoring, but *retain* one minute for 7 years; that plus histograms provides magical insight.
    • @iamaaronheld: A single @awscloud datacenter consumes enough electricity to send 24 DeLoreans back in time
    • @rstraub46: "We are becoming aware that the major questions regarding technology are not technical but human questions" - Peter Drucker, 1967
    • @Noahpinion: Behavioral economics IS the economics of information. via @CFCamerer 
    • @sheeshee: "decentralize all the things" (guess what everybody did in the early 90ies & why we happily flocked to "services". ;)
    • New Clues: The Internet is no-thing at all. At its base the Internet is a set of agreements, which the geeky among us (long may their names be hallowed) call "protocols," but which we might, in the temper of the day, call "commandments."

  • Can't agree with this. We Suck at HTTP. HTTP is just a transport. It should only deliver transport related error codes. Application errors belong in application messages, not spread all over the stack. 

  • Apple has lost the functional high ground. It's funny how microservices are hot and one of its wins is the independent evolution of services. Apple's software releases now make everything tied together. It's a strategy tax. The watch just extends the rigidity of the structure. But this is a huge upgrade. Apple is moving to a cloud multi-device sync model, which is a complete revolution. It will take a while for all this to shake out. 

  • This is so cool, I've never heard of Cornelis Drebbel (1620s) before or about his amazing accomplishments. The Vulgar Mechanic and His Magical Oven: His oven is one of the earliest devices that gave human control away to a machine and thus can be seen as a forerunner of the smart machine, the self-deciding automaton, the thinking robot.

  • Do you think there's a DevOps identity crisis, as Baron Schwartz suggests? Does DevOps have a messaging and positioning problem? Is DevOps just old wine in a new skin? Is DevOps made up of echo chambers? I don't know, but an interesting analysis by Baron.

  • How does Hyper-threading double your CPU throughput?: So if you are optimizing for higher throughput – that may be fine. But if you are optimizing for response time, then you may consider running with HT turned off.

  • Underdog.io shares what's Inside Datadog’s Tech Stack: Python, JavaScript and Go; the front-end is built with D3 and React; data stores include Kafka, Redis, Cassandra, S3, Elasticsearch and PostgreSQL; DevOps tooling includes Chef, Capistrano, Jenkins, Hubot, and others.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Bandita Joarder on How Presence is Something You Can Learn

Bandita is one of the most amazing leaders in the technology arena.

She’s not just technical; she also has business skills and executive presence.

But she didn’t start out that way.

She had to learn presence from the school of hard knocks. Many people think presence is something that either you have or you don’t.

Bandita proves otherwise.

Here is a guest post by Bandita Joarder on how presence is something you can learn:

Presence is Something You Can Learn

It’s a personal story.  It’s an empowering story.  It’s a story of a challenge and a change, and of how learning the power of presence helped Bandita move forward in her career.

Enjoy.

Categories: Architecture, Programming

Qualitative Risk Management and Quantitative Risk Management

Herding Cats - Glen Alleman - Fri, 01/16/2015 - 14:56


Qualitative risk analysis includes methods for prioritizing the identified risks for further action, such as risk response.

The project members must revisit qualitative risk analysis during the project’s lifecycle. When the team repeats qualitative analysis for individual risks, trends may emerge in the results. These trends can indicate the need for more or less risk management action on particular risks or even show whether a risk mitigation plan is working.

Quantitative risk analysis is a way of numerically estimating the probability that a project will meet its cost and time objectives. Quantitative analysis is based on a simultaneous evaluation of the impact of all identified and quantified risks, using Monte Carlo simulation.

Quantitative risk analysis simulation starts with the model of the project and either its project schedule or its cost estimate, depending on the objective. The degree of uncertainty in each schedule activity and each line‐item cost element is represented by a probability distribution. The probability distribution is usually specified by determining the optimistic, the most likely, and the pessimistic values for the activity or cost element. This is typically called the “3‐point estimate.” The three points are estimated by the project team or other subject matter experts who focus on the schedule or cost elements one at a time.
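To make the mechanics concrete, here is a minimal sketch (not from the CalTrans guide referenced below) of a Monte Carlo simulation over 3-point estimates, in Java. The three activity estimates, the triangular distribution, and the 80% confidence target are illustrative assumptions.

import java.util.Arrays;
import java.util.Random;

public class ScheduleRiskSimulation {
  private static final Random RNG = new Random();

  // Sample a triangular distribution defined by a 3-point estimate
  // (optimistic, most likely, pessimistic) via inverse transform sampling.
  static double triangular(double min, double mode, double max) {
    double u = RNG.nextDouble();
    double cut = (mode - min) / (max - min);
    return (u < cut)
        ? min + Math.sqrt(u * (max - min) * (mode - min))
        : max - Math.sqrt((1 - u) * (max - min) * (max - mode));
  }

  public static void main(String[] args) {
    // Hypothetical 3-point estimates (in days) for three schedule activities.
    double[][] tasks = { {5, 8, 14}, {3, 4, 9}, {10, 12, 20} };
    int trials = 10_000;
    double[] totals = new double[trials];
    for (int t = 0; t < trials; t++) {
      double total = 0;
      for (double[] task : tasks) {
        total += triangular(task[0], task[1], task[2]);
      }
      totals[t] = total;
    }
    Arrays.sort(totals);
    // The 80th percentile is the duration we could commit to with 80% confidence.
    System.out.printf("P80 total duration: %.1f days%n", totals[(int) (trials * 0.8)]);
  }
}

In a real analysis the activities would come from the project schedule or cost model, and correlations between activities would also need to be modeled.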

Risk Response

Capturing, classifying, prioritizing and analyzing risks is necessary for project success, but not sufficient. 

Mitigating (handling) the risk is also needed. This is done in one of four ways: †

  • Avoid. Risk can be avoided by removing the cause of the risk or executing the project in a different way while still aiming to achieve project objectives. Not all risks can be avoided or eliminated, and for others, this approach may be too expensive or time‐consuming. However, this should be the first strategy considered.

  • Transfer. Transferring risk involves finding another party who is willing to take responsibility for its management, and who will bear the liability of the risk should it occur. The aim is to ensure that the risk is owned and managed by the party best able to deal with it effectively. Risk transfer usually involves payment of a premium, and the cost‐effectiveness of this must be considered when deciding whether to adopt a transfer strategy.

  • Mitigate. Risk mitigation reduces the probability and/or impact of an adverse risk event to an acceptable threshold. Taking early action to reduce the probability and/or impact of a risk is often more effective than trying to repair the damage after the risk has occurred. Risk mitigation may require resources or time and thus presents a tradeoff between doing nothing versus the cost of mitigating the risk.

  • Acceptance. This strategy is adopted when it is not possible or practical to respond to the risk by the other strategies, or a response is not warranted by the importance of the risk. When the project manager and the project team decide to accept a risk, they are agreeing to address the risk if and when it occurs. A contingency plan, workaround plan and/or contingency reserve may be developed for that eventuality.

† Project Risk Management: A Scalable Approach, Risk Management Task Group, CalTrans, June 2012.

Related articles: The Actual Science in Management Science · What is Governance? · The Myth and Half-Truths of "Myths and Half-Truths"
Categories: Project Management

How Google Analytics helps you make better decisions for your apps

Android Developers Blog - Fri, 01/16/2015 - 01:12

Posted by Russell Ketchum, Lead Product Manager, Google Analytics for Mobile Apps

Knowing how your customers use your app is the foundation to keeping them happy and engaged. It’s important to track downloads and user ratings, but the key to building a successful business is using data to dive deeper into understanding the full acquisition funnel and what makes users stick around.

Google Analytics is the easiest way to understand more about what your users are doing inside your app on Google Play, while also simultaneously tracking your users across the web and other mobile platforms. To show how Google Analytics can help, we've created a new "Analyze" section on the Android Developers website for you to check out. We provide guidance on how to design a measurement plan and implement effective in-app analytics – and take advantage of features only available between Google Play and Google Analytics.

The Google Play Referral Flow in Analytics

Google Analytics for mobile apps provides a comprehensive view into your app’s full user lifecycle, including user acquisition, composition, in-app behavior, and key conversions. Our Analytics Academy course on mobile app analytics is also a great resource to learn the fundamentals.

Eltsoft LLC, a foreign language learning and education app developer for Android, recognized early on the impact Google Analytics would have on the company's ability to quickly improve its apps and meet user needs.

Analytics has really helped us to track the effectiveness of the changes to our app. I would say six months ago, that our success was a mystery. The data said we were doing well, but the whys were not clear. Therefore, we couldn’t replicate or push forward. But today, we understand what’s happening and can project our future success. We have not only the data, but can control certain variables allowing us to understand that data. - Jason Byrne, Eltsoft LLC

Here are some powerful tips to make the most of Google Analytics:

  1. Understand the full acquisition funnel. Uniquely integrated with the Google Play Developer Console, Google Analytics gives you a comprehensive view of the Google Play Referral Flow. By linking Analytics to the Developer Console, you can track useful data on how users move through the acquisition flow from your marketing efforts to the Google Play store listing to the action of launching the app. If you find that a significant number of users browse your app in Google Play, but don’t install it, for example, you can then focus your efforts on improving your store listing.
  2. Unlock powerful insights on in-app purchases. Monitoring in-app purchases in the Google Play Developer Console will show you the total revenue your app is generating, but it does not give you the full picture about your paying users. By instrumenting your app with Google Analytics ecommerce tracking, you’ll get a fuller understanding of what paying users do inside your app. For example, you can find out which acquisition channels deliver users who stay engaged and go on to become the highest value users.
  3. Identify roadblocks and common paths with the Behavior Flow. Understanding how users move through your app is best done with in-app analytics. With Google Analytics, you can easily spot if a significant percentage of users leave your app during a specific section. For example, if you see significant drop off on a certain level of your game, you may want to make that level easier, so that more users complete the level and progress through the game. Similarly, if you find users who complete a tutorial stay engaged with your app, you might put the tutorial front and center for first-time users.
  4. Segment your audience to find valuable insights. Aggregated data can help you answer questions about overall trends in your app. If you want to unlock deeper insights about what drives your users’ behavior, you can slice and dice your data using segmentation, such as demographics, behavior, or install date. If something changes in one of your key metrics, segmentation can help you get to the root of the issue -- for example, was a recent app update unpopular with users from one geographic area, or were users with a certain device or carrier affected by a bug?
  5. Use custom data to measure what matters for your business. Simply activating the Google Analytics library gives you many out-of-the-box metrics without additional work, such as daily and monthly active users, session duration, breakdowns by country, and many more variables. However, it’s likely that your app has many user actions or data types that are unique to it, which are critical to building an engaged user base. Google Analytics provides events, custom dimensions, and custom metrics so you can craft a measurement strategy that fits your app and business (see the sketch after this list).
  6. No more one-size-fits-all ad strategy. If you’re a developer using AdMob to monetize your app, you can now see all of your Analytics data in the AdMob dashboard. Running a successful app business is all about reaching the right user with the right ad or product at the right time. If you create specific user segments in Google Analytics, you can target each segment with different ad products. For example, try targeting past purchasers with in-app purchase ads, while monetizing users who don’t purchase through targeted advertising.
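As a concrete illustration of the custom event tracking mentioned above, here is a minimal hedged sketch using the Google Analytics v4 SDK for Android (com.google.android.gms.analytics). The property ID, event category/action, and custom dimension index are placeholders, and the API names should be verified against the current documentation for your SDK version.

import android.content.Context;
import com.google.android.gms.analytics.GoogleAnalytics;
import com.google.android.gms.analytics.HitBuilders;
import com.google.android.gms.analytics.Tracker;

public class AnalyticsHelper {
  private final Tracker tracker;

  public AnalyticsHelper(Context context) {
    // "UA-XXXXXXXX-Y" is a placeholder property ID.
    tracker = GoogleAnalytics.getInstance(context).newTracker("UA-XXXXXXXX-Y");
  }

  // Send a custom event, e.g. when a user finishes the tutorial.
  public void trackTutorialCompleted() {
    tracker.send(new HitBuilders.EventBuilder()
        .setCategory("Tutorial")
        .setAction("completed")
        // Dimension index 1 is an assumed slot configured in the GA property.
        .setCustomDimension(1, "first-session")
        .build());
  }
}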

By measuring your app performance on a granular level, you will be able to make better decisions for your business. Successful developers build their measurement plan at the same time as building their app in order to set goals and track progress against key success metrics, but it’s never too late to start.

Choose the implementation that works best for your app to get started with Google Analytics today and find out more about what you can do in the new “Analyze” section of developers.android.com.

Categories: Programming

DevOps Primer: A Tool to Scale Agile

DevOps requires participation and cooperation.


There is a general consensus that Agile frameworks are effective for delivering business value. Approaches and frameworks such as DevOps are often leveraged to ensure that Agile is both effective AND efficient. Using a DevOps approach becomes even more critical as projects, programs and organizations get larger and require more collaboration and coordination. DevOps is a crucial tool for helping teams ensure deployment readiness and for keeping the proliferation of technical environments and tools effective when scaling Agile.

Implementing DevOps requires the involvement of a number of roles to deliver business value collaboratively. Involvement requires participation to be effective. Participation between developers, QA and TechOps personnel as part of the same value stream begins at planning. The Scaled Agile Framework Enterprise (SAFe) makes a strong statement about involvement and participation by integrating DevOps in the Agile Release Train (one of SAFe’s core structures). Integrating the concept of DevOps in the flow of a project or program helps to ensure that the team takes steps to maintain environments and deployment readiness that extend from the code base to the needed technical environments to build, test and share.

Deployment readiness includes a significant number of activities, all of which require broad involvement. Examples of these activities include:

  1. Version control. Version control is needed for the code (and all code like objects) so that the product can be properly built and that what is in the build is understood (and supposed to be there). Version control generally requires software tools and mutually agreed upon processes, conventions and standards.
  2. Build automation. Build automation pulls together files, objects and other assets into an executable (or consumable for non-code artifacts) form in a repeatable manner without (or with minimal) human interaction, handling all of the required processes and steps, such as compilation, packaging or generation of installers. Build automation generally deploys and validates code to development or testing environments. Similar to version control, build automation requires tools, processes, conventions, standards and coding the automation.
  3. Deployment automation. Deployment automation is often a specialized version of build automation whose target is production environments. Deployment automation pushes and installs the final build to the production environment. Automation reduces the overhead, and reduces the chance for error (and therefore saves effort).

Professional teams that build software solutions typically require multiple technical environments during the product lifecycle. A typical progression of environments might be development, test (various) and staging. Generally the staging environment (just prior to production) should be the most production-like, while development and test environments will have tools and attributes that make development and testing easier. Each of these environments needs to be provisioned and managed. DevOps brings that provisioning and management closer to the teams generating the code, reducing wait times. Automation of the provisioning can give development and testing teams more control over the technical environments (under the watchful eye of the TechOps groups).

As projects and programs become larger, the classic separation of development from TechOps will slow a project down, make it more difficult to deliver often and potentially generate problems in delivery. Implementing DevOps shortens the communication channels so that development, QA and TechOps personnel can collaborate on the environments and tools needed to deliver value faster, better and cheaper. Automating substantial portions of the processes needed to build and deploy code and to manage the technical environments further improves the ability of the team or teams to deliver value. The savings in time, effort and defects can be redirected to deliver more value for the organization.


Categories: Process Management

All The World's A Random Process

Herding Cats - Glen Alleman - Thu, 01/15/2015 - 18:00

When we hear about making decisions in the absence of estimates - estimates of the impact of that decision, of the cost of making it, or of the value of the best of all the possible alternatives that might result (the opportunity cost, which is the basis of Microeconomics) - we need to stop and think.

Is it actually possible to make a decision without knowing these things?

The answer is NO. But of course the answer is also YES: decisions can't be made well in the absence of those estimates, yet they are made all the time. A little joke in the extreme sports domain, in which our son participates, goes like this.

What are the last four words uttered by a 22 year old back country skier in Crested Butte before arriving at the emergency room?

Hey everyone watch this!

Any estimate of the probability of clearing the 20 foot gap? Oh Hell No, let's go...


The decision making process here is the same as the decision making process on projects. There are uncertainties that create risk. There are uncertainties that are irreducible and there are uncertainties that are reducible. Risk of crashing and breaking your collarbone. Risk of crashing the project and breaking the bank or breaking the customer relationship. 

Both these uncertainty types must be addressed if the project has a chance of success, just as both uncertainty types need to be addressed if there is a chance of landing the jump without breaking something in a very painful way.

There is a set of environmental conditions, on the project and on the slopes, that can be helpful as well as a hindrance to success. Modeling those is the starting point for making the decision to proceed. This is the taxonomy of uncertainty that must be assessed before proceeding with the decision. 

If you're 22 years old and believe you're immortal, then assessing these risks is rarely necessary. It's the let's just try this view of the world. After breaking both collar bones (separate occasions), crashing mountain bikes as well as crashing on skis and being transported down the mountain in a Ski Patrol Sled, feedback prevails and a more mature assessment of the outcome results.

The word uncertainty has a variety of meanings and has a variety of synonyms: error, information, vagueness, scatter, unknown, discord, undefined, ambiguous, probability, stochastic, distribution, confidence, and chance. These create confusion and from the confusion the opportunity to ignore them.

To evaluate the outcomes of our decisions, we need data

This data comes from a model of the world that allows us to translate our observations into information. In this model there are two types of abstraction: Aleatory and Epistemic. Aleatory implies an inherent randomness in the outcomes of the process subject to our decision making. Flipping a coin is modeled as an aleatory process, as is rolling dice. When flipping the coin, the random but observable data is the result. The underlying probability function for flipping the coin has no observable quantities - we can see all the processes that go into holding the coin, flipping it with our fingers, the air, the rotation of the earth, etc., but we can't model the world of coin flipping directly. Instead we can only observe the results from that model.

This is many times the case on projects. The underlying physical processes, which themselves may be deterministic, can't be observed. So all we get is the probability that an outcome will occur. This is a Bernoulli model of the process. 

A Bernoulli trial is an experiment outcome that can be assigned to one of two possible states (e.g., success / failure, heads / tails, yes / no). The outcomes are assigned to two values, 0 and 1. A Bernoulli process is obtained by repeating the same Bernoulli trial, where each trial is independent. If the outcome assigned to the value 1 has probability p, it can be shown that the summation of n Bernoulli trials is binomial distributed.
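To see this concretely, here is a minimal sketch in Java (not from the original post) that repeats an independent Bernoulli trial and shows that the sum of n trials spreads out in a binomial shape:

import java.util.Random;

public class BernoulliProcessDemo {
  public static void main(String[] args) {
    Random rng = new Random();
    double p = 0.5;          // probability assigned to outcome "1" (e.g., heads)
    int n = 20;              // Bernoulli trials per experiment
    int experiments = 100_000;
    int[] histogram = new int[n + 1];
    for (int e = 0; e < experiments; e++) {
      int sum = 0;
      for (int i = 0; i < n; i++) {
        if (rng.nextDouble() < p) sum++;   // one independent Bernoulli trial
      }
      histogram[sum]++;      // the sum of n trials is Binomial(n, p) distributed
    }
    for (int k = 0; k <= n; k++) {
      // Print a crude histogram bar (String.repeat requires Java 11+).
      System.out.printf("%2d: %s%n", k, "#".repeat(histogram[k] / 500));
    }
  }
}

We never observe the "mechanism" of the coin, only the distribution of results - which is exactly the situation on projects.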

The Epistemic uncertainty of the processes, both slope style skiing and projects, represents how precise our state of knowledge is about the world model. We can measure the temperature of the snow, we can measure the performance of the database server. We know the wind speed at the top of the kicker, we know the density of the defects on the code base from our last quality assurance review.

The epistemic uncertainty of the process pertains to the degree of knowledge of the model and its parameters. Epistemic comes from the Greek word episteme (knowledge).

We Need Aleatory and Epistemic information to make a decision

The system we want to make a decision about has a reality that can be modeled in some way, to some degree of confidence. When it is suggested otherwise, that simply is not true. Only when Black Swans are introduced - the Unknown Unknowns - does this model not work. In the project world, those UNK UNKs mean one of three things:

  1. We couldn't know - it was a surprise. We didn't understand our world model.
  2. We didn't know - we didn't have enough time or money to find out, or we were simply too lazy to find out. The world model was understandable, but we didn't look hard enough.
  3. We don't want to know - let's just try it and see what happens. We know the world model, but don't want to acknowledge the consequences.

This last situation is best represented in the famous Rumsfeld quote about UNK UNKs. He apparently failed to read The Histories by Herodotus (c. 484-425 BCE), who cautioned against going into that part of the world and engaging in a ground war. It turned out badly for Alexander the Great.

So if you're the sort that accepts that decisions can be made in the absence of estimating the cost and impact of that decision - you're in the last two categories. 

A Final Thought

When it is suggested that businesses are seeking deterministic or predictable outcomes - which of course they are not, nor can they, since all business processes are probabilistic - remember that such processes exist in only a few domains.

Such precise processes are the antithesis of the aleatory. This is the type of model most familiar to scientists and engineers and includes relationships such as E = mc², F = ma, and F = (G × m₁ × m₂) / r². So if you work with classical mechanics or the like, you can look for predictability. But if you work in the real world of projects or the business of projects - All The World's a Random Process - behave accordingly.

Related articles: Decision Making in the Presence of Uncertainty · The Actual Science in Management Science · Planning is the basis of decision making in the presence of uncertainty · Confidence vs. Credibility Intervals · What is Governance? · Your Project Needs a Budget and Other Things · The False Notion of "we haven't seen this before"
Categories: Project Management

Interlocks in Agile

Software Requirements Blog - Seilevel.com - Thu, 01/15/2015 - 16:30
One of the questions we often get from companies/projects trying to move to Agile methodology is how does an Agile team deal with interlocks, especially when those interlocks are on the Waterfall methodology and want their requirements and commitment a year to a year and a half in advance? To be clear and upfront, I […]
Categories: Requirements

Does Manual Testing Have a Future?

Making the Complex Simple - John Sonmez - Thu, 01/15/2015 - 16:00

In this video, I tackle whether or not manual testing has a future or whether someone who is a manual tester should look to move on to a different role.

The post Does Manual Testing Have a Future? appeared first on Simple Programmer.

Categories: Programming

Software Architecture Articles of 2014

From the Editor of Methods & Tools - Thu, 01/15/2015 - 15:07
When software features are distributed on multiple infrastructures (server, mobile, cloud) that need to communicate and synchronize, a sound and reactive software architecture is key to the success and evolution of business functions. Here are seven software architecture articles published in 2014 that can help you understand the basic topics and the current trends in software architecture: Agile, Cloud, SOA, Security and even a little bit of data modeling. * Designing Software in a Distributed World This is an overview of what is involved in designing services that use distributed computing ...

Monitoring Akka with Kamon

Xebia Blog - Thu, 01/15/2015 - 13:49

Kamon is a framework for monitoring the health and performance of applications based on akka, the popular actor system framework often used with Scala. It provides good quick indicators, but also allows in-depth analysis.

Tracing

Beyond just collecting local metrics per actor (e.g. message processing times and mailbox size), Kamon is unique in that it also monitors message flow between actors.

Essentially, Kamon introduces a TraceContext that is maintained across asynchronous calls: it uses AOP to pass the context along with messages. None of your own code needs to change.

Because of convenient integration modules for Spray/Play, a TraceContext can be automatically started when an HTTP request comes in.

If nothing else, this can be easily combined with the Logback converter shipped with Kamon: simply logging the token is of great use right out of the gate.

Dashboarding

Kamon does not come with a dashboard by itself (though some work in this direction is underway).

Instead, it provides 3 'backends' to post the data to (4 if you count the 'LogReporter' backend that just dumps some statistics into Slf4j): 2 on-line services (NewRelic and DataDog), and statsd (from Etsy).

statsd might seem like a hassle to set up, as it needs additional components such as grafana/graphite to actually browse the statistics. Kamon fortunately provides a correctly set-up docker container to get you up and running quickly. We unfortunately ran into some issues with the image uploaded to the Docker Hub Registry, but building it ourselves from the definition on github resolved most of these.

Implementation

We found the source code to Kamon to be clear and to-the-point. While we're generally no great fan of AspectJ, for this purpose the technique seems to be quite well-suited.

'Monkey-patching' a core part of your stack like this can of course be dangerous, especially with respect to performance considerations. Unless you enable the heavier analyses (which are off by default and clearly marked), it seems this could be fairly light - but of course only real tests will tell.

Getting Started

Most Kamon modules are enabled by adding their respective akka extension. We found the quickest way to get started is to:

  • Add the Kamon dependencies to your project as described in the official getting started guide
  • Enable the Metrics and LogReporter extensions in your akka configuration
  • Start your application with AspectJ run-time weaving enabled. How to do this depends on how you start your application. We used the sbt-aspectj plugin.

Enabling AspectJ weaving can require a little bit of twiddling, but adding the LogReporter should give you quick feedback on whether you were successful: it should start periodically logging metrics information.

Next steps are:

  • Enabling Spray or Play plugins
  • Adding the trace token to your logging
  • Enabling other backends (e.g. statsd)
  • Adding custom application-specific metrics and trace points
Conclusion

Kamon looks like a healthy, useful tool that not only has great potential, but also provides some great quick wins.

The documentation that is available is of great quality, but there are some parts of the system that are not so well covered. Luckily, the source code is very approachable.

It is clear the Kamon project is not very popular yet, judging by some of the rough edges we encountered. These, however, seem to be mostly superficial: the core ideas and implementation seems solid. We highly recommend taking a look.

 

Remco Beckers

Arnout Engelen

Testing on the Toilet: Prefer Testing Public APIs Over Implementation-Detail Classes

Google Testing Blog - Wed, 01/14/2015 - 18:35
by Andrew Trenk

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.


Does this class need to have tests?
class UserInfoValidator {
  public void validate(UserInfo info) {
    if (info.getDateOfBirth().isInFuture()) { throw new ValidationException(); }
  }
}
Its method has some logic, so it may be a good idea to test it. But what if its only user looks like this?
public class UserInfoService {
  private UserInfoValidator validator;
  public void save(UserInfo info) {
    validator.validate(info); // Throw an exception if the value is invalid.
    writeToDatabase(info);
  }
}
The answer is: it probably doesn’t need tests, since all paths can be tested through UserInfoService. The key distinction is that the class is an implementation detail, not a public API.
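As a hedged illustration (not part of the original TotT episode), a test through the public API might look like the following, assuming JUnit 4 and a hypothetical test-data helper userWithBirthDate:

import org.junit.Test;

public class UserInfoServiceTest {
  private final UserInfoService service = new UserInfoService();

  @Test
  public void saveAcceptsValidUserInfo() {
    // userWithBirthDate is a hypothetical helper that builds a UserInfo.
    service.save(userWithBirthDate("1980-01-01"));  // should not throw
  }

  @Test(expected = ValidationException.class)
  public void saveRejectsBirthDateInFuture() {
    service.save(userWithBirthDate("2999-01-01"));
  }
}

Both paths through UserInfoValidator are exercised without the tests ever naming it.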

A public API can be called by any number of users, who can pass in any possible combination of inputs to its methods. You want to make sure these are well-tested, which ensures users won’t see issues when they use the API. Examples of public APIs include classes that are used in a different part of a codebase (e.g., a server-side class that’s used by the client-side) and common utility classes that are used throughout a codebase.

An implementation-detail class exists only to support public APIs and is called by a very limited number of users (often only one). These classes can sometimes be tested indirectly by testing the public APIs that use them.

Testing implementation-detail classes is still useful in many cases, such as if the class is complex or if the tests would be difficult to write for the public API class. When you do test them, they often don’t need to be tested in as much depth as a public API, since some inputs may never be passed into their methods (in the above code sample, if UserInfoService ensured that UserInfo were never null, then it wouldn’t be useful to test what happens when null is passed as an argument to UserInfoValidator.validate, since it would never happen).

Implementation-detail classes can sometimes be thought of as private methods that happen to be in a separate class, since you typically don’t want to test private methods directly either. You should also try to restrict the visibility of implementation-detail classes, such as by making them package-private in Java.

Testing implementation-detail classes too often leads to a couple of problems:

- Code is harder to maintain since you need to update tests more often, such as when changing a method signature of an implementation-detail class or even when doing a refactoring. If testing is done only through public APIs, these changes wouldn’t affect the tests at all.

- If you test a behavior only through an implementation-detail class, you may get false confidence in your code, since the same code path may not work properly when exercised through the public API. You also have to be more careful when refactoring, since it can be harder to ensure that all the behavior of the public API will be preserved if not all paths are tested through the public API.
Categories: Testing & QA

StackExchange's Performance Dashboard

StackExchange created a very cool performance dashboard that looks to be updated from real system metrics. Wouldn't it be fascinating if every site had a similar dashboard?

The dashboard contains information like: 560 million page views per month, 260,000 sustained connections, 34 TB of data transferred per month, and 9 web servers with 48GB of RAM handling 185 req/s at 15% CPU usage. There are 4 SQL servers, 2 redis servers, 3 tag engine servers, 3 elasticsearch servers, and 2 HAProxy servers, along with stats on each.

There's also an excellent discussion thread on reddit that goes into more interesting details, with questions being answered by folks from StackExchange. 

StackExchange is still doing innovative work and is very much an example worth learning from. They've always danced to their own tune and it's a catchy tune at that. More at StackOverflow Update: 560M Pageviews A Month, 25 Servers, And It's All About Performance.

Categories: Architecture

Exploring Akka Stream's TCP Back Pressure

Xebia Blog - Wed, 01/14/2015 - 15:48

Some years ago, when Reactive Streams lived in utopia, we got the assignment to build a high-volume message broker. A considerable amount of the code in the solution we delivered back then was dedicated to preventing this broker from being flooded with messages in case an endpoint became slow.

How would we have solved this problem today with the shiny new Akka Reactive Stream (experimental) implementation just within reach?

In this blog we explore Akka Streams in general and TCP streams in particular. Moreover, we show how much more easily we can solve the challenge we faced back then using Streams.

A use-case for TCP Back Pressure

The high-volume message broker mentioned in the introduction basically did the following:

  • Read messages (from syslog) from a TCP socket
  • Parse the message
  • Forward the message to another system via a TCP connection

For optimal throughput, multiple TCP connections were available, which allowed delivering messages to the endpoint system in parallel. The broker was supposed to handle about 4000 - 6000 messages per second. The following schema shows the noteworthy components and the message flow:

[Diagram: message flow from the syslog source through the broker to the endpoint connections]

Naturally we chose Akka as framework to implement this application. Our approach was to have an Actor for every TCP connection to the endpoint system. An incoming message was then forwarded to one of these connection Actors.

The biggest challenge was related to back pressure: how could we prevent our connection Actors from being flooded with messages in case the endpoint system slowed down or was not available? With 6000 messages per second an Actor's mailbox is flooded very quickly.

Another requirement was that message buffering had to be done by the client application, which was syslog. Syslog has excellent facilities for that. Durable mailboxes or the like were out of the question. Therefore, we had to find a way to pull only as many messages into our broker as it could deliver to the endpoint. In other words: provide our own back pressure implementation.

A considerable amount of code of the solution we delivered back then was dedicated to back pressure. During one of our re-occurring innovation days we tried to figure out how much easier the back pressure challenge would have been if Akka Streams would have been available.

Akka Streams in a nutshell

In case you are new to Akka Streams, here is some basic information to help you understand the rest of this blog.

The core ingredients of a Reactive Stream consist of three building blocks:

  • A Source that produces some values
  • A Flow that performs some transformation of the elements produced by a Source
  • A Sink that consumes the transformed values of a Flow

Akka Streams provide a rich DSL through which transformation pipelines can be composed using the mentioned three building blocks.

A transformation pipeline executes asynchronously. For that to work it requires a so-called FlowMaterializer, which will execute every step of the pipeline. A FlowMaterializer uses Actors for the pipeline's execution, even though from a usage perspective you are unaware of that.

A basic transformation pipeline looks as follows:


  import akka.stream.scaladsl._
  import akka.stream.FlowMaterializer
  import akka.actor.ActorSystem

  implicit val actorSystem = ActorSystem()
  implicit val materializer = FlowMaterializer()

  val numberReverserFlow: Flow[Int, String] = Flow[Int].map(_.toString.reverse)

  numberReverserFlow.runWith(Source(100 to 200), ForeachSink(println))

We first create a Flow that consumes Ints and transforms them into reversed Strings. For the Flow to run we call the runWith method with a Source and a Sink. After runWith is called, the pipeline starts executing asynchronously.

The exact same pipeline can be expressed in various ways, such as:


    //Use the via method on the Source to pass in the Flow
    Source(100 to 200).via(numberReverserFlow).to(ForeachSink(println)).run()

    //Directly call map on the Source.
    //The disadvantage of this approach is that the transformation logic cannot be re-used.
    Source(100 to 200).map(_.toString.reverse).to(ForeachSink(println)).run()

For more information about Akka Streams you might want to have a look at this Typesafe presentation.

A simple reverse proxy with Akka Streams

Lets move back to our initial quest. The first task we tried to accomplish was to create a stream that accepts data from an incoming TCP connection, which is forwarded to a single outgoing TCP connection. In that sense this stream was supposed to act as a typical reverse-proxy that simply forwards traffic to another connection. The only remarkable quality compared to a traditional blocking/synchronous solution is that our stream operates asynchronously while preserving back-pressure.

import java.net.InetSocketAddress
import akka.actor.ActorSystem
import akka.stream.FlowMaterializer
import akka.stream.io.StreamTcp
import akka.stream.scaladsl.ForeachSink

implicit val system = ActorSystem("one-to-one-proxy")
implicit val materializer = FlowMaterializer()

val serverBinding = StreamTcp().bind(new InetSocketAddress("localhost", 6000))

val sink = ForeachSink[StreamTcp.IncomingConnection] { connection =>
      println(s"Client connected from: ${connection.remoteAddress}")
      connection.handleWith(StreamTcp().outgoingConnection(new InetSocketAddress("localhost", 7000)).flow)
}
val materializedServer = serverBinding.connections.to(sink).run()

serverBinding.localAddress(materializedServer)

First we create the mandatory instances every Akka reactive Stream requires, which is an ActorSystem and a FlowMaterializer. Then we create a server binding using the StreamTcp Extension that listens to incoming traffic on localhost:6000. With the ForeachSink[StreamTcp.IncomingConnection] we define how to handle the incoming data for every StreamTcp.IncomingConnection by passing a flow of type Flow[ByteString, ByteString]. This flow consumes ByteStrings of the IncomingConnection and produces a ByteString, which is the data that is sent back to the client.

In our case the flow of type Flow[ByteString, ByteString] is created by means of the StreamTcp().outgoingConnection(endpointAddress).flow. It forwards a ByteString to the given endpointAddress (here localhost:7000) and returns its response as a ByteString as well. This flow could also be used to perform some data transformations, like parsing a message.

Parallel reverse proxy with a Flow Graph

Forwarding a message from one connection to another will not meet our self-defined requirements. We need to be able to forward messages from a single incoming connection to a configurable number of outgoing connections.

Covering this use-case is slightly more complex. For it to work we make use of the flow graph DSL.


  import akka.util.ByteString
  import akka.stream.scaladsl._
  import akka.stream.scaladsl.FlowGraphImplicits._

  private def parallelFlow(numberOfConnections:Int): Flow[ByteString, ByteString] = {
    PartialFlowGraph { implicit builder =>
      val balance = Balance[ByteString]
      val merge = Merge[ByteString]
      UndefinedSource("in") ~> balance

      1 to numberOfConnections map { _ =>
        balance ~> StreamTcp().outgoingConnection(new InetSocketAddress("localhost", 7000)).flow ~> merge
      }

      merge ~> UndefinedSink("out")
    } toFlow (UndefinedSource("in"), UndefinedSink("out"))
  }

We construct a flow graph that makes use of the junction vertices Balance and Merge, which allow us to fan out the stream to several other streams. For the number of parallel connections we want to support, we create a fan-out flow starting with a Balance vertex, followed by an OutgoingConnection flow, which is then merged back with a Merge vertex.

From an API perspective we faced the challenge of how to connect this flow to our IncomingConnection. Almost all flow graph examples take a concrete Source and Sink implementation as starting point, whereas the IncomingConnection does neither expose a Source nor a Sink. It only accepts a complete flow as input. Consequently, we needed a way to abstract the Source and Sink since our fan-out flow requires them.

The flow graph API offers the PartialFlowGraph class for that, which allows you to work with abstract Sources and Sinks (UndefinedSource and UndefinedSink). We needed quite some time to figure out how they work: simply declaring an UndefinedSource/Sink without a name won't work. It is essential that you give the UndefinedSource/Sink a name which is identical to the one used in the UndefinedSource/Sink passed into the toFlow method. A bit more documentation on this topic would help.

Once the fan-out flow is created, it can be passed to the handleWith method of the IncomingConnection:

...
val sink = ForeachSink[StreamTcp.IncomingConnection] { connection =>
      println(s"Client connected from: ${connection.remoteAddress}")
      val parallelConnections = 20
      connection.handleWith(parallelFlow(parallelConnections))
    }
...

As a result, this implementation delivers all incoming messages to the endpoint system in parallel while still preserving back-pressure. Mission completed!

Testing the Application

To test our solution we wrote two helper applications:

  • A blocking client that pumps as many messages as possible into a socket connection to the parallel reverse proxy
  • A server that delays responses with a configurable latency in order to mimic a slow endpoint. The parallel reverse proxy forwards messages via one of its connections to this endpoint.

The following chart depicts the increase in throughput with the increase in the number of connections. Due to the nondeterministic concurrent behavior there are some spikes in the results, but the trend shows a clear correlation between throughput and the number of connections:

[Chart: throughput vs. number of parallel connections]

End-to-end solution

The end-to-end solution can be found here.
By changing the numberOfConnections variable you can see the impact on performance yourself.

Check it out! ...and go with the flow ;-)

Information about TCP back pressure with Akka Streams

At the time of this writing there was not much information available about Akka Streams, due to the fact that it is one of the newest toys of the Typesafe factory. The following resources helped us get started:

DevOps Primer: Who Is Involved


Implementing the concept of DevOps requires that a number of roles work together in a larger team, all focused on three simple goals. Until now these teams, even though focused on the greater good of the organization, have operated as silos. Operating in silos forces each team or department to maximize its team-specific goals. Maximizing the efficiency of a specific step or process in the flow of work required to deliver value to the business may not yield the most effective or efficient solution overall. DevOps takes a more holistic approach, using systems thinking to view the software value chain. Instead of seeing three or more silos of activity (Agile team, QA and TechOps), a holistic approach sees the software process as a single value chain. The value chain begins with an idea and ends when support is no longer needed. The software value stream can be considered a flow of products and services that are provisioned to deliver a service. Provisioning is a metaphor that can be used to highlight who is involved in delivering software in a DevOps environment.

Provisioning is a term often used in the delivery of telecommunications products and services (and in other industries) to describe providing a user with a service and everything needed to use it. Providing the service may include the equipment, network, software, training and support necessary to begin and to continue using the service. The service is not complete and provisioned until the user can use it in a manner that meets their needs. Viewing delivery as provisioning enforces a systems view of the processes and environments needed.

Developing, deploying and supporting any software-centric service requires a wide range of roles, products and services that are often consolidated into three categories: development teams, QA/testing, and technical operations (TechOps). Examples of TechOps roles include configuration and environment management, security, network, release management and tool support, just to name a few. TechOps is charged with providing the environment for services to be delivered and ensuring that those environments are safe, resilient and stable (among any number of other attributes).

The roles of development teams are fairly straightforward (that is not to say they are not complex or difficult). Teams, whether Agile or waterfall, build services in a development environment, and those services then migrate through other environments until they are resident in some sort of production environment or environments. Development, QA and TechOps must understand and either create or emulate these environments to ensure that the business needs are met (and that the software runs). Additionally, the development, enhancement and maintenance process generally uses a wide range of tools to make writing, building, debugging, testing, promoting and installing code easier. These tools are part of the environment needed to develop and deliver software services.

QA or testing roles help ensure that what is being built works, meets the definition of done and delivers the expected value. The process of testing often requires specialized environments to ensure integration and control. In a similar manner, testing often requires tools for test automation, data preparation and even exploratory testing.

TechOps is typically involved in providing the environment or environments needed to deliver software. Constructing and granting access to environments can often cause bottlenecks and constraints in the flow of value to the user. An organization embracing DevOps will actively pursue bottlenecks and constraints that slow the delivery of software. For example, many organizations leverage automation to give development teams more control over nonproduction environments. Automation shortens the time spent waiting for another department to take action and frees TechOps personnel to be actively involved in projects AND to manage the overall organizational technical environment.

DevOps helps to remove the roadblocks that slow the delivery of value by ensuring that Agile teams, QA and TechOps personnel work together so that environmental issues don’t get in the way. We can conceive of DevOps as the intersection of Agile teams, QA and TechOps; however, what is more important is the interaction. Interaction builds trust and empowerment so that the flow through the development, test and production environments is smooth. The environments used to build software services are critical, and they will need to be provisioned regardless of which Agile and lean techniques you are using. Even relatively common processes require specific software and storage to function. Consider the tools and coordination needed to use continuous builds and automated testing. If the flow of work needs to stop and wait until an environment is ready, the delivery of value will be delayed.


Categories: Process Management

Efficient Game Textures with Hardware Compression

Android Developers Blog - Tue, 01/13/2015 - 20:43

Posted by Shanee Nishry, Developer Advocate

As you may know, high resolution textures contribute to better graphics and a more impressive game experience. Adaptive Scalable Texture Compression (ASTC) helps solve many of the challenges involved, including reducing memory footprint and loading time, and can even increase performance and battery life.

If you have a lot of textures, you are probably already compressing them. Unfortunately, not all compression algorithms are made equal. PNG, JPG and other common formats are not GPU friendly. Some of the highest-quality algorithms today are proprietary and limited to certain GPUs. Until recently, the only broadly supported GPU accelerated formats were relatively primitive and produced poor results.

With the introduction of ASTC, a new compression technique invented by ARM and standardized by the Khronos group, we expect to see dramatic changes for the better. ASTC promises to be both high quality and broadly supported by future Android devices. But until devices with ASTC support become widely available, it’s important to understand the variety of legacy formats that exist today.

We will examine the preferred compression formats that are supported on the GPU to help you reduce .apk size and the loading times of your game.

Texture Compression

Popular compressed formats include PNG and JPG, which can’t be decoded directly by the GPU. As a consequence, they need to be decompressed before copying them to the GPU memory. Decompressing the textures takes time and leads to increased loading times.

A better option is to use hardware accelerated formats. These formats are lossy but have the advantage of being designed for the GPU.

This means they do not need to be decompressed before being copied, which results in decreased loading times for the player and may even lead to increased performance due to hardware optimizations.
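For example, a pre-compressed texture is uploaded directly to the GPU with glCompressedTexImage2D. Here is a minimal sketch (my own illustration, not from the original post) for ETC1 data on Android; the data buffer is assumed to hold raw ETC1 texel data (e.g., a PKM file with its 16-byte header stripped):

import android.opengl.GLES20;
import java.nio.ByteBuffer;

public final class CompressedTextureLoader {
  // ETC1 internal format constant from the OES_compressed_ETC1_RGB8_texture extension.
  private static final int GL_ETC1_RGB8_OES = 0x8D64;

  // Uploads pre-compressed ETC1 texel data; no CPU-side decompression happens.
  // Must be called on the GL thread with a current OpenGL ES context.
  public static int upload(ByteBuffer data, int width, int height) {
    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
    GLES20.glCompressedTexImage2D(GLES20.GL_TEXTURE_2D, 0, GL_ETC1_RGB8_OES,
        width, height, 0, data.remaining(), data);
    return tex[0];
  }
}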

Hardware Accelerated Formats

Hardware accelerated formats have many benefits. As mentioned before, they help improve loading times and the runtime memory footprint.

Additionally, these formats help improve performance, battery life and reduce heating of the device, requiring less bandwidth while also consuming less energy.

There are two categories of hardware accelerated formats, standard and proprietary. This table shows the standard formats:

ETC1 - Supported on all Android devices with OpenGL ES 2.0 and above. Does not support alpha channel.
ETC2 - Requires OpenGL ES 3.0 and above.
ASTC - Higher quality than ETC1 and ETC2. Supported with the Android Extension Pack.

As you can see, with higher OpenGL support you gain access to better formats. There are proprietary formats to replace ETC1, delivering higher quality and alpha channel support. These are shown in the following table:

ATC - Available with Adreno GPUs.
PVRTC - Available with PowerVR GPUs.
DXT1 - S3 DXT1 texture compression. Supported on devices running the Nvidia Tegra platform.
S3TC - S3 texture compression, nonspecific to DXT variant. Supported on devices running the Nvidia Tegra platform.

That’s a lot of formats, revealing a different problem. How do you choose which format to use?

To best support all devices you need to create multiple apks using different texture formats. The Google Play developer console allows you to add multiple apks and will deliver the right one to the user based on their device. For more information check this page.

When a device only supports OpenGL ES 2.0, it is recommended to use a proprietary format to get the best results possible; this means making an apk for each GPU family.

On devices with access to OpenGL ES 3.0 you can use ETC2. The GL_COMPRESSED_RGBA8_ETC2_EAC format is an improved version of ETC1 with added alpha support.

The best case is when the device supports the Android Extension Pack. Then you should use the ASTC format which has better quality and is more efficient than the other formats.
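If you select formats at runtime rather than (or in addition to) shipping multiple APKs, you can query the GL extension string. A minimal sketch, assuming an Android app with a current OpenGL ES context; the extension names below are the standard ones, but verify them against your target devices:

import android.opengl.GLES20;

public final class TextureFormatSupport {
  // Must be called on the GL thread after a context is current.
  public static boolean supportsAstc() {
    String ext = GLES20.glGetString(GLES20.GL_EXTENSIONS);
    return ext != null && ext.contains("GL_KHR_texture_compression_astc_ldr");
  }

  public static boolean supportsPvrtc() {
    String ext = GLES20.glGetString(GLES20.GL_EXTENSIONS);
    return ext != null && ext.contains("GL_IMG_texture_compression_pvrtc");
  }
}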

Adaptive Scalable Texture Compression (ASTC)

The Android Extension Pack has ASTC as a standard format, removing the need to have different formats for different devices.

In addition to being supported on modern hardware, ASTC also offers improved quality over other GPU formats by having full alpha support and better quality preservation.

ASTC is a block based texture compression algorithm developed by ARM. It offers multiple block footprints and bitrate options to lower the size of the final texture. The higher the block footprint, the smaller the final file but possibly more quality loss.

Note that some images compress better than others. Images with similar neighboring pixels tend to have better quality compared to images with vastly different neighboring pixels.

Let’s examine a texture to better understand ASTC:

This bitmap is 1.1MB uncompressed and 299KB when compressed as PNG.

Compressing the Android jellybean jar texture into ASTC through the Mali GPU Texture Compression Tool yields the following results.

Block Footprint: 4x4 / 6x6 / 8x8
Memory: 262KB / 119KB / 70KB
[Images omitted: compressed output, difference map, and 5x enhanced difference map for each block footprint]

As you can see, the highest quality (4x4) bitrate for ASTC already gains over PNG in memory size. Unlike PNG, this gain stays even after copying the image to the GPU.

The tradeoff comes in the detail, so it is important to carefully examine textures when compressing them to see how much compression is acceptable.

Conclusion

Using hardware accelerated textures in your games will help you reduce the size of your .apk, runtime memory use, and loading times.

Improve performance on a wider range of devices by uploading multiple apks with different GPU texture formats and declaring the texture type in the AndroidManifest.xml.

If you are aiming for high end devices, make sure to use ASTC which is included in the Android Extension Pack.

Categories: Programming

The Actual Science in Management Science

Herding Cats - Glen Alleman - Tue, 01/13/2015 - 19:38

Planning for an uncertain future calls for a shift in information management — from single numbers to probability distributions — in order to correct the "flaw of averages."

This, in turn, gives rise to the prospect of a Chief Probability Officer to manage the distributions that underlie risk, real portfolios, real options and many other activities in the global economy.

 - Sam Savage, Stefan Scholtes and Daniel Zweidler

There are some very serious misunderstandings going around about how management in the presence of uncertainty takes place in business. The basic conjecture is:

Management Science's Quest: in Search of Predictability †

Let's start with a basic fact for all projects and all business processes: everything is a stochastic process. Searching for predictability, then, is not a goal for any informed business or technical person or organization. If it is, that alone defines the maturity of that person or organization. It happens, but it signals up front little understanding of the underlying stochastic processes that create probabilistic outcomes of everything.

Here's a quick review of both processes in play in all activities.

[Figure: Probability and Statistics]

In the Decision Making Business there are four reasons why decisions are hard:

  1. Decisions are hard because of their complexity.
  2. Decisions are difficult because of the inherent uncertainty of the situation.
  3. A decision maker may be interested in working toward multiple objectives, but progress in one direction may impede progress in other directions.
  4. A problem may be more difficult if different perspectives lead to different conclusions. 

So, to start with the notion of predictability: it is simply not possible, in any real project or business domain, to speak about predictability in the absence of the underlying statistical processes that create probabilistic outcomes.

Any credible business or technical manager knows this. If predictability is assumed or even desired, then the naivety of the manager is the most likely source, or perhaps intentional ignorance of the statistical and probabilistic nature of business and technical processes. Predictability is not possible in the sense of absolutes, only probabilities.

So let's look at some less than informed concepts that are popular in some circles ...

  • Predictability is a form of causality - prediction is separated from the source of the predictions, and certainly the causality associated with a prediction need not be there. Bayesian statistics and Monte Carlo simulation need not connect the predicted outcomes with the source of those outcomes, other than through the random variables drawn from a generating function (see the sketch after this list).
  • Planning rests on the assumption we can predict - a plan is a strategy for guiding our efforts to change something in the future or arrive at some place in the future. The strategy is a hypothesis, and that hypothesis needs an experiment to test the current situation to determine if it will result in the desired outcomes in the future. This is the core of the design of experiments we all learned in high school science class. Plans describe an emerging outcome.
  • Goals change with the observation of reality - this dynamic adaptation process is what we, in the Agile community, call a feedback loop. This is true, but a target value is needed against which to compare that feedback information and generate an error signal. This is called Closed Loop Control and is the foundation of all control systems, including Statistical Process Control, and of control systems that adapt in the presence of emerging dynamic systems. This is the basis of learning systems in stochastic adaptive control.
  • Management techniques must not be based on the existence of a perfect, predictable future - this is a naive understanding of management. Perfect, predictable futures simply do not exist anywhere for anything. All processes are random processes, many of them not even stationary random processes.
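To ground the Monte Carlo point above, here is a toy sketch (mine, not from the post; all numbers are illustrative) of the "flaw of averages" on a three-task plan. Each task duration is drawn from a triangular distribution (min 5, most likely 10, max 30 days), so a plan built by summing the most-likely values promises 30 days, while the mean of each task is actually 15 days:

    import java.util.Random;

    // Toy Monte Carlo: compare a "sum of most-likely estimates" plan (30 days)
    // against the simulated distribution of total duration.
    public final class FlawOfAverages {
        // Inverse-CDF sampling from a triangular(a, c, b) distribution.
        static double triangular(Random rng, double a, double c, double b) {
            double u = rng.nextDouble();
            double fc = (c - a) / (b - a);
            return u < fc
                    ? a + Math.sqrt(u * (b - a) * (c - a))
                    : b - Math.sqrt((1 - u) * (b - a) * (b - c));
        }

        public static void main(String[] args) {
            Random rng = new Random(7);
            int trials = 100_000;
            int metPlan = 0;
            double sum = 0;
            for (int i = 0; i < trials; i++) {
                double total = 0;
                for (int t = 0; t < 3; t++) total += triangular(rng, 5, 10, 30);
                sum += total;
                if (total <= 30.0) metPlan++;
            }
            System.out.printf("mean total     = %.1f days%n", sum / trials);
            System.out.printf("P(total <= 30) = %.3f%n", metPlan / (double) trials);
        }
    }

The mean total comes out near 45 days, and the 30-day plan is met only a small fraction of the time: the single number and the probability distribution tell very different stories, which is exactly the shift from numbers to distributions the opening quote calls for.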

The suggestions above indicate a lack of understanding of the fundamentals of making decisions in the presence of uncertainty, as described in the book Making Hard Decisions. The journals Operations Research and Management Science will put the science back in management science that those conjecturing the topics above seem to have missed.

Journal papers, many books, and related sources show that the suggestion that we can't make decisions in the presence of uncertainty, and simple-minded conjectures like:

The basic problem with most perspectives on management today is that they are static analyses of a future environment. And all decisions are made because we believe we can predict the future.

are simply not true, and better insight as to why they are not true can be had through straightforward research, available by joining INFORMS or a variety of other professional societies.

So perhaps before making unsubstantiated claims about how modern statistical and probabilistic management processes are applied to business, some homework might be in order.

Related articles:
  • What is Governance?
  • Your Project Needs a Budget and Other Things
  • The False Notion of "we haven't seen this before"
  • Engineering in the face of uncertainty: Stochastic solutions to structural problems
  • A comparison of MC and QMC simulation for the valuation of interest rate derivatives
Categories: Project Management

“A Poor Craftsman blames the (requirements) tool”

Software Requirements Blog - Seilevel.com - Tue, 01/13/2015 - 16:00
The decision to use a requirements tool depends on a number of factors: size of the project, number of system interlocks, global teams, and specific business objectives tied to real dollar figures, to name a few. In this post I will describe the key benefits of using a requirements management tool early on to capture […]
Categories: Requirements

If it needs to happen: Schedule it!

Mike Cohn's Blog - Tue, 01/13/2015 - 15:00

The following is a guest post from Lisa Crispin. Lisa is the co-author with Janet Gregory of "Agile Testing: A Practical Guide for Testers and Agile Teams" and the newly published "More Agile Testing: Learning Journeys for the Whole Team". I highly recommend both of these books--in fact, I recommend reading everything Lisa and Janet write. In the following post, Lisa argues the benefits of scheduling anything that's important. I am a fanatic for doing this. Over the holiday I put fancy new batteries in my smoke detectors that are supposed to last 10 years. So I put a note in my calendar to replace them in 9 years. But, don't schedule time to read Lisa's guest post--just do it now. --Mike

During the holidays, some old friends came over to our house for lunch. We hadn’t seen each other in a few months, though we live 10 minutes apart. We had so much fun catching up. As they prepared to leave, my friend suggested, “Let’s pick a date to get together again soon. So often, six months go by without our seeing each other.” We got out our calendars, and penciled in a date a few weeks away. The chances are good that we will achieve our goal of meeting up soon.

Scheduling time for important activities is key in my professional life, too. Here’s a recent example. My current team has only three testers, and we all have other duties as well, such as customer support. We have multiple code bases, products and platforms to test, so we’re always juggling priorities.

Making time

The product owner for our iOS product wanted us to experiment with automating some regression smoke tests through its UI. Another tester on the team and I decided we’d pair on a spike for this. However, we had so many competing priorities, we kept putting it off. As much as we try to avoid multi-tasking, it seems there is always some urgent interruption.

Finally, we created a recurring daily meeting on the calendar, scheduled shortly after lunchtime when few other meetings were going on. As soon as we did that, we started making the time we needed. We might miss a day here or there, but we’re much more disciplined about taking our time for the iOS test automation. As a result, we made steady, if slow, progress, and achieved many of our goals.

Scheduling help

Even though both of us were new to iOS, pairing gave us courage, and two heads were better than one. We’d still get stuck, though, and we needed the expertise of programmers and testers with experience automating iOS tests. Simply adding a meeting to the calendar with the right people has gotten us the help we needed. Even busy people can spare 30 minutes or an hour out of their week. Our iOS team is in another city two time zones away. If we put a meeting on the calendar with a Zoom meeting link, we can easily make contact at the appointed time. Screensharing enables us to make progress quickly, so we can stick to short time boxes.

Another way we ensured time on our schedule for the automation work was to add stories for it to our backlog. For example, we had stories for specific scripts, starting with writing an automated test for logging into the iOS app. Once we had some working test scripts, we added infrastructure-type chores, for example, getting the tests running from the command line so we can add them to the team’s continuous integration later. These stories make our efforts more visible. As a result, team members anticipate our meeting invitations and think of ideas to overcome challenges with the automation.

Time for testing

Putting time on the calendar works when I need to pair with a programmer to understand a story better, or when we need help with exploratory testing for a major new feature. I can always ask for help at the morning standup, but if we don’t set a specific time, it’s easy for everyone to get involved with other priorities and forget.

The calendar is your friend. Once you create a meeting, you might still need to negotiate what time works for everyone involved, but you’ve started the collaboration process. Of course, if it’s easy to simply walk over to someone’s desk to ask a question, or pick up the phone if they’re not co-located, do that. But if your team is like ours, where programmers and other roles pair full time, and there’s always a lot going on, a scheduled meeting helps everyone plan the time for it.

If you have a tough project ahead, find a pair and set up a recurring meeting to work together. If you need one-off help, add a time-boxed meeting for today or tomorrow. If you need the whole team to brainstorm about some testing issues, schedule a meeting for the time of day with the fewest distractions. And if you haven’t seen an old friend in too long, schedule a date for that, too!

Does Agile Apply to Your Project?

I have a new column posted at projectmanagement.com. It’s called Does Agile Apply to Your Project? (You might need a free registration.)
Categories: Project Management