
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

The Google Test and Development Environment - Pt. 2: Dogfooding and Office Software

Google Testing Blog - Thu, 03/27/2014 - 22:40
by Anthony Vallone

This is the second in a series of articles about our work environment. See the first.

There are few things as frustrating as getting hampered in your work by a bug in a product you depend on. What if it’s a product developed by your company? Do you report/fix the issue or just work around it and hope it’ll go away soon? In this article, I’ll cover how and why Google dogfoods its own products.

Dogfooding

Google makes heavy use of its own products. We have a large ecosystem of development/office tools and use them for nearly everything we do. Because we use them on a daily basis, we can dogfood releases company-wide before launching to the public. These dogfood versions often have features unavailable to the public but may be less stable. Instability is exactly what you want in your tools, right? Or, would you rather that frustration be passed on to your company’s customers? Of course not!

Dogfooding is an important part of our test process. Test teams do their best to find problems before dogfooding, but we all know that testing is never perfect. We often get dogfood bug reports for edge and corner cases not initially covered by testing. We also get many comments about overall product quality and usability. This internal feedback has, on many occasions, changed product design.

Not surprisingly, test-focused engineers often have a lot to say during the dogfood phase. I don’t think there is a single public-facing product that I have not reported bugs on. I really appreciate the fact that I can provide feedback on so many products before release.

Interested in helping to test Google products? Many of our products have feedback links built-in. Some also have Beta releases available. For example, you can start using Chrome Beta and help us file bugs.

Office software

From system design documents to test plans to discussions about beer brewing techniques, our products are used internally. A company’s choice of office tools can have a big impact on productivity, and it is fortunate for Google that we have such a comprehensive suite. The tools have a consistently simple UI (no manual required), perform very well, encourage collaboration, and auto-save in the cloud. Now that I am used to these tools, I would certainly have a hard time going back to the tools of previous companies I have worked for. I’m sure I would forget to click the save buttons for years to come.

Examples of tools frequently used by engineers:
  • Google Drive Apps (Docs, Sheets, Slides, etc.) are used for design documents, test plans, project data, data analysis, presentations, and more.
  • Gmail and Hangouts are used for email and chat.
  • Google Calendar is used to schedule all meetings, reserve conference rooms, and set up video conferencing using Hangouts.
  • Google Maps is used to map office floors.
  • Google Groups are used for email lists.
  • Google Sites are used to host team pages, engineering docs, and more.
  • Google App Engine hosts many corporate, development, and test apps.
  • Chrome is our primary browser on all platforms.
  • Google+ is used for organizing internal communities on topics such as food or C++, and for socializing.

Thoughts?

We are interested to hear your thoughts on this topic. Do you dogfood your company’s products? Do your office tools help or hinder your productivity? What office software and tools do you find invaluable for your job? Could you use Google Docs/Sheets for large test plans?

(Continue to part 3)
Categories: Testing & QA

The Google Test and Development Environment - Pt. 3: Code, Build, and Test

Google Testing Blog - Thu, 03/27/2014 - 22:40
by Anthony Vallone

This is the third in a series of articles about our work environment. See the first and second.

I will never forget the awe I felt when running my first load test on my first project at Google. At previous companies I’ve worked for, running a substantial load test took quite a bit of resource planning and preparation. At Google, I wrote less than 100 lines of code and was simulating tens of thousands of users after just minutes of prep work. The ease with which I was able to accomplish this is due to the impressive coding, building, and testing tools available at Google. In this article, I will discuss these tools and how they affect our test and development process.

Coding and building

The tools and process for coding and building make it very easy to change production and test code. Even though we are a large company, we have managed to remain nimble. In a matter of minutes or hours, you can edit, test, review, and submit code to head. We have achieved this without sacrificing code quality by heavily investing in tools, testing, and infrastructure, and by prioritizing code reviews.

Most production and test code is in a single, company-wide source control repository (open source projects like Chromium and Android have their own). There is a great deal of code sharing in the codebase, and this provides an incredible suite of code to build on. Most code is also in a single branch, so the majority of development is done at head. All code is also navigable, searchable, and editable from the browser. You’ll find code in numerous languages, but Java, C++, Python, Go, and JavaScript are the most common.

Have a strong preference for a particular editor? Engineers are free to choose from many IDEs and editors. The most common are Eclipse, Emacs, Vim, and IntelliJ, but many others are used as well. Engineers who are passionate about their preferred editors have built up and shared some truly impressive editor plugins/tooling over the years.

Code reviews for all submissions are enforced via source control tooling. This also applies to test code, as our test code is held to the same standards as production code. The reviews are done via web-based code review tools that even include automatically generated test results. The process is very streamlined and efficient. Engineers can change and submit code in any part of the repository, but it must get reviewed by owners of the code being changed. This is great, because you can easily change code that your team depends on, rather than merely request a change to code you do not own.

The Google build system is used for building most code, and it is designed to work across many languages and platforms. It is remarkably simple to define and build targets. You won’t be needing that old Makefile book.

Running jobs and tests

We have some pretty amazing machine and job management tools at Google. There is a generally available pool of machines in many data centers around the globe. The job management service makes it very easy to start jobs on arbitrary machines in any of these data centers. Failing machines are automatically removed from the pool, so tests rarely fail due to machine issues. With a little effort, you can also set up monitoring and pager alerting for your important jobs.

From any machine you can spin up a massive number of tests and run them in parallel across many machines in the pool, via a single command. Each of these tests is run in a standard, isolated environment, so we rarely run into the “it works on my machine!” issue.

Before code is submitted, you can run presubmit tests that find every test depending transitively on the change and run them all. You can also define presubmit rules that run checks on a code change and verify that tests were run before allowing submission.

Once you’ve submitted test code, the build and test system automatically registers the test, and starts building/testing continuously. If the test starts failing, your team will get notification emails. You can also visit a test dashboard for your team and get details about test runs and test data. Monitoring the build/test status is made even easier with our build orbs designed and built by Googlers. These small devices will glow red if the build starts failing. Many teams have had fun customizing these orbs to various shapes, including a Statue of Liberty with a glowing torch.

Statue of LORBerty
Running larger integration and end-to-end tests takes a little more work, but we have some excellent tools to help with these tests as well: Integration test runners, hermetic environment creation, virtual machine service, web test frameworks, etc.

The impact

So how do these tools actually affect our productivity? For starters, the code is easy to find, edit, review, and submit. Engineers are free to choose tools that make them most productive. Before and after submission, running small tests is trivial, and running large tests is relatively easy. Since tests are easy to create and run, it’s fairly simple to maintain a green build, which most teams do most of the time. This allows us to spend more time on real problems and less on things that shouldn’t even be problems. It allows us to focus on creating rigorous tests. It dramatically accelerates a development process in which Gmail can be prototyped in a day and service features can be coded, tested, and released on a daily schedule. And, of course, it lets us focus on the fun stuff.

Thoughts?

We are interested to hear your thoughts on this topic. Google has the resources to build tools like this, but would small or medium-sized companies benefit from a similar investment in their infrastructure? Did Google create the infrastructure or did the infrastructure create Google?

Categories: Testing & QA

Minimizing Unreproducible Bugs

Google Testing Blog - Thu, 03/27/2014 - 22:39


by Anthony Vallone

Unreproducible bugs are the bane of my existence. Far too often, I find a bug, report it, and hear back that it’s not a bug because it can’t be reproduced. Of course, the bug is still there, waiting to prey on its next victim. These types of bugs can be very expensive due to increased investigation time and overall lifetime. They can also have a damaging effect on product perception when users reporting these bugs are effectively ignored. We should be doing more to prevent them. In this article, I’ll go over some obvious, and maybe not so obvious, development/testing guidelines that can reduce the likelihood of these bugs occurring.


Avoid and test for race conditions, deadlocks, timing issues, memory corruption, uninitialized memory access, memory leaks, and resource issues

I am lumping many bug types together in this section, but they are all related by how we test for them and by how disproportionately hard they are to reproduce and debug. The root cause and its effect can be separated by milliseconds or hours, and stack traces might be nonexistent or misleading. A system may fail in strange ways when exposed to unusual traffic spikes or insufficient resources. Race conditions and deadlocks may only be discovered during unique traffic patterns or resource configurations. Timing issues may only be noticed when many components are integrated and their performance parameters and failure/retry/timeout delays create a chaotic system. Memory corruption or uninitialized memory access may go unnoticed for a large percentage of calls but become fatal for rare states. Memory leaks may be negligible unless the system is exposed to load for an extended period of time.

Guidelines for development:

  • Simplify your synchronization logic. If it’s too hard to understand, it will be difficult to reproduce and debug complex concurrency problems.
  • Always obtain locks in the same order. This is a tried-and-true guideline to avoid deadlocks, but I still see code that breaks it periodically. Define an order for obtaining multiple locks and never change that order (a minimal sketch follows this list).
  • Don’t optimize by creating many fine-grained locks, unless you have verified that they are needed. Extra locks increase concurrency complexity.
  • Avoid shared memory, unless you truly need it. Shared memory access is very easy to get wrong, and the bugs may be quite difficult to reproduce.
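
To make the lock-ordering guideline concrete, here is a minimal C++ sketch; the Account struct and Transfer function are hypothetical, and the point is simply that ordering acquisition by a stable key means two concurrent transfers can never deadlock:

#include <mutex>

struct Account {
  int id;  // stable key used to order lock acquisition
  double balance;
  std::mutex mu;
};

// Locks are always taken in id order, so Transfer(a, b) and
// Transfer(b, a) cannot deadlock each other.
void Transfer(Account& from, Account& to, double amount) {
  Account& first = (from.id < to.id) ? from : to;
  Account& second = (from.id < to.id) ? to : from;
  std::lock_guard<std::mutex> lock_first(first.mu);
  std::lock_guard<std::mutex> lock_second(second.mu);
  from.balance -= amount;
  to.balance += amount;
}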

Guidelines for testing:

  • Stress test your system regularly. You don't want to be surprised by unexpected failures when your system is under heavy load.
  • Test timeouts. Create tests that mock/fake dependencies to test timeout code. If your timeout code does something bad, it may cause a bug that only occurs under certain system conditions (a sketch of this technique follows this list).
  • Test with debug and optimized builds. You may find that a well-behaved debug build works fine, but the system fails in strange ways once optimized.
  • Test under constrained resources. Try reducing the number of data centers, machines, processes, threads, available disk space, or available memory. Also try simulating reduced network bandwidth.
  • Test for longevity. Some bugs require a long period of time to reveal themselves. For example, persistent data may become corrupt over time.
  • Use dynamic analysis tools like memory debuggers, ASan, TSan, and MSan regularly. They can help identify many categories of unreproducible memory/threading issues.
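
To illustrate the timeout guideline above, here is a minimal sketch using a hand-rolled fake clock; the Clock interface and WaitForReply function are hypothetical stand-ins for whatever time abstraction your codebase uses:

#include <cstdint>
#include <functional>

// Hypothetical clock interface so tests can control time directly.
class Clock {
 public:
  virtual ~Clock() {}
  virtual int64_t NowMillis() = 0;
};

class FakeClock : public Clock {
 public:
  int64_t NowMillis() override { return now_ms_; }
  void AdvanceMillis(int64_t ms) { now_ms_ += ms; }
 private:
  int64_t now_ms_ = 0;
};

// Code under test: polls a dependency but gives up after timeout_ms.
bool WaitForReply(Clock* clock, std::function<bool()> reply_arrived,
                  int64_t timeout_ms) {
  const int64_t deadline = clock->NowMillis() + timeout_ms;
  while (!reply_arrived()) {
    if (clock->NowMillis() >= deadline) return false;  // timeout path
  }
  return true;
}

// Deterministic test of the timeout branch: the fake dependency never
// replies, and each poll advances fake time by 100 ms.
bool TimesOutAsExpected() {
  FakeClock clock;
  return !WaitForReply(&clock,
                       [&clock] { clock.AdvanceMillis(100); return false; },
                       500);
}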


Enforce preconditions

I’ve seen many well-meaning functions with a high tolerance for bad input. For example, consider this function:

void ScheduleEvent(int timeDurationMilliseconds) {
  if (timeDurationMilliseconds <= 0) {
    timeDurationMilliseconds = 1;
  }
  ...
}

This function is trying to help the calling code by adjusting the input to an acceptable value, but it may be doing damage by masking a bug. The calling code may be experiencing any number of problems described in this article, and passing garbage to this function will always work fine. The more functions that are written with this level of tolerance, the harder it is to trace back to the root cause, and the more likely it becomes that the end user will see garbage. Enforcing preconditions, for instance by using asserts, may actually cause a higher number of failures for new systems, but as systems mature, and many minor/major problems are identified early on, these checks can help improve long-term reliability.

Guidelines for development:

  • Enforce preconditions in your functions unless you have a good reason not to.
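
For contrast, here is what a precondition-enforcing version of ScheduleEvent might look like; this is a sketch using the standard assert macro, though many codebases use richer CHECK-style macros that stay enabled in production:

#include <cassert>

void ScheduleEvent(int timeDurationMilliseconds) {
  // Fail fast at the call site instead of silently "fixing" bad input.
  assert(timeDurationMilliseconds > 0);
  // ... schedule the event ...
}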


Use defensive programming

Defensive programming is another tried-and-true technique that is great at minimizing unreproducible bugs. If your code calls a dependency to do something, and that dependency quietly fails or returns garbage, how does your code handle it? You could test for situations like this via mocking or faking, but it’s even better to have your production code do sanity checking on its dependencies. For example:

double GetMonthlyLoanPayment() {
  double rate = GetTodaysInterestRateFromExternalSystem();
  if (rate < 0.001 || rate > 0.5) {
    throw BadInterestRate(rate);
  }
  ...
}

Guidelines for development:

  • When possible, use defensive programming to verify the work of dependencies with known risks of failure, such as user-provided data, I/O operations, and RPC calls.

Guidelines for testing:

  • Use fuzz testing to test your system’s hardiness against bad data (a sketch follows).
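
As one possible illustration, here is roughly what a fuzz target looks like with LLVM’s libFuzzer; ParseConfig is a hypothetical stand-in for your own code, and any coverage-guided fuzzer with a similar entry point works the same way:

#include <cstddef>
#include <cstdint>
#include <string>

// Hypothetical parser under test; a real fuzz target calls your own code.
bool ParseConfig(const std::string& input) {
  return !input.empty();  // stand-in implementation
}

// libFuzzer-style entry point: the fuzzer calls this repeatedly with
// generated inputs and watches for crashes, hangs, and sanitizer reports.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
  ParseConfig(std::string(reinterpret_cast<const char*>(data), size));
  return 0;  // non-zero return values are reserved by the fuzzer
}

Pairing a fuzzer with the sanitizers mentioned earlier (ASan, MSan) is particularly effective, since many input-triggered memory bugs do not crash on their own.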


Don’t hide all errors from the user

There has been a trend in recent years toward hiding failures from users at all costs. In many cases, it makes perfect sense, but in some, we have gone overboard. Code that is very quiet and permissive during minor failures will allow an uninformed user to continue working in a failed state. The software may ultimately reach a fatal tipping point, and all the error conditions that led to failure have been ignored. If the user doesn’t know about the prior errors, they will not be able to report them, and you may not be able to reproduce them.

Guidelines for development:

  • Only hide errors from the user when you are certain that there is no impact to system state or the user.
  • Any error with impact to the user should be reported to the user with instructions for how to proceed. The information shown to the user, combined with data available to an engineer, should be enough to determine what went wrong.


Test error handling

The most common section of code to remain untested is error handling code. Don’t skip test coverage here. Bad error handling code can cause unreproducible bugs and creates great risk if it does not handle fatal errors well.

Guidelines for testing:

  • Always test your error handling code. This is usually best accomplished by mocking or faking the component triggering the error (a sketch follows this list).
  • It’s also a good practice to examine your log quality for all types of error handling.
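
A sketch of that mocking approach using Google Test and Google Mock, echoing the loan example from earlier; the RateSource interface, BadInterestRate class, and this variant of GetMonthlyLoanPayment are hypothetical:

#include "gmock/gmock.h"
#include "gtest/gtest.h"

// Hypothetical error type, echoing the loan example earlier in this article.
class BadInterestRate {
 public:
  explicit BadInterestRate(double) {}
};

// Hypothetical dependency interface, so tests can inject failures.
class RateSource {
 public:
  virtual ~RateSource() {}
  virtual double GetTodaysRate() = 0;
};

class MockRateSource : public RateSource {
 public:
  MOCK_METHOD0(GetTodaysRate, double());
};

// Code under test: must reject garbage from its dependency.
double GetMonthlyLoanPayment(RateSource* source) {
  double rate = source->GetTodaysRate();
  if (rate < 0.001 || rate > 0.5) {
    throw BadInterestRate(rate);
  }
  return 0.0;  // real payment computation elided
}

TEST(LoanPaymentTest, RejectsGarbageRateFromDependency) {
  MockRateSource source;
  // Force the dependency to return an absurd value.
  EXPECT_CALL(source, GetTodaysRate()).WillOnce(testing::Return(-1.0));
  // The error handling path should reject it rather than propagate garbage.
  EXPECT_THROW(GetMonthlyLoanPayment(&source), BadInterestRate);
}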


Check for duplicate keys

If unique identifiers or data access keys are generated using random data or are not guaranteed to be globally unique, duplicate keys may cause data corruption or concurrency issues. Key duplication bugs are very difficult to reproduce.

Guidelines for development:

  • Try to guarantee uniqueness of all keys.
  • When not possible to guarantee unique keys, check if the recently generated key is already in use before using it.
  • Watch out for potential race conditions here and avoid them with synchronization (see the sketch below).
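
A minimal sketch of the check-before-use idea; GenerateRandomKey is a hypothetical stand-in, and the single lock makes the check-and-insert atomic, per the race condition warning above:

#include <cstdlib>
#include <mutex>
#include <set>
#include <string>

// Hypothetical key generator; a real one would use a better entropy source.
std::string GenerateRandomKey() { return std::to_string(std::rand()); }

class KeyRegistry {
 public:
  // Retries until an unused key is found. The check and the insert happen
  // under one lock, so two callers can never claim the same key.
  std::string ClaimUniqueKey() {
    std::lock_guard<std::mutex> lock(mu_);
    std::string key = GenerateRandomKey();
    while (in_use_.count(key) > 0) {
      key = GenerateRandomKey();
    }
    in_use_.insert(key);
    return key;
  }

 private:
  std::mutex mu_;
  std::set<std::string> in_use_;
};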


Test for concurrent data access

Some bugs only reveal themselves when multiple clients are reading/writing the same data. Your stress tests might be covering cases like these, but if they are not, you should have special tests for concurrent data access. Cases like these are often unreproducible. For example, a user may have two instances of your app running against the same account, and they may not realize this when reporting a bug.

Guidelines for testing:

  • Always test for concurrent data access if it’s a feature of the system. Actually, even if it’s not a feature, verify that the system rejects it. Testing concurrency can be challenging. An approach that usually works for me is to create many worker threads that simultaneously attempt access and a master thread that monitors and verifies that some number of attempts were indeed concurrent, blocked or allowed as expected, and all were successful. Programmatic post-analysis of all attempts and changing system state may also be necessary to ensure that the system behaved well. A stripped-down sketch of this approach follows.
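
Here is a stripped-down sketch of that worker/master pattern; AttemptAccess is a hypothetical operation against the shared system, and a real test would add the post-analysis described above:

#include <atomic>
#include <thread>
#include <vector>

// Hypothetical: read or write the shared resource under test.
bool AttemptAccess(int /*worker_id*/) { return true; }

// Launch many simultaneous attempts, then have the main (master) thread
// verify that every attempt either succeeded or was rejected cleanly.
bool RunConcurrentAccessTest(int num_workers) {
  std::atomic<int> successes(0);
  std::vector<std::thread> workers;
  for (int i = 0; i < num_workers; ++i) {
    workers.emplace_back([i, &successes] {
      if (AttemptAccess(i)) {
        successes.fetch_add(1);
      }
    });
  }
  for (std::thread& t : workers) {
    t.join();  // master waits for all attempts to finish
  }
  // Post-analysis goes here: check counts, invariants, and final state.
  return successes.load() == num_workers;
}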


Steer clear of undefined behavior and non-deterministic access to data

Some APIs and basic operations have warnings about undefined behavior when in certain states or provided with certain input. Similarly, some data structures do not guarantee an iteration order (example: Java’s Set). Code that ignores these warnings may work fine most of the time but fail in unusual ways that are hard to reproduce.

Guidelines for development:

  • Understand when the APIs and operations you use might have undefined behavior and prevent those conditions.
  • Do not depend on data structure iteration order unless it is guaranteed. It is a common mistake to depend on the ordering of sets or associative arrays (see the example below).
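
The same caveat applies to C++’s unordered containers; in this small example the printed order is unspecified and may differ across standard library implementations, or even as elements are added and the table rehashes:

#include <iostream>
#include <unordered_set>

int main() {
  std::unordered_set<int> ids = {3, 1, 4, 5, 9, 2, 6};
  // Do not write code (or tests) that assumes any particular order here.
  for (int id : ids) {
    std::cout << id << " ";
  }
  std::cout << std::endl;
  return 0;
}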


Log the details for errors or test failures

Issues described in this article can be easier to reproduce and debug when the logs contain enough detail to understand the conditions that led to an error.

Guidelines for development:

  • Follow good logging practices, especially in your error handling code.
  • If logs are stored on a user’s machine, create an easy way for them to provide you the logs.

Guidelines for testing:

  • Save your test logs for potential analysis later.


Anything to add?

Have I missed any important guidelines for minimizing these bugs? What is your favorite hard-to-reproduce bug that you discovered and resolved?

Categories: Testing & QA

Taking Chrome Experiments to the TV

Google Code Blog - Thu, 03/27/2014 - 19:00
By Igor Clark, Google Creative Lab

With the release of the Google Cast SDK, making interactive experiences for the TV is now as easy as making interactive stuff for the web.

Google Creative Lab and Hook Studios took the SDK for a spin to make Photowall for Chromecast: a new Chrome Experiment that lets people collaborate with images on the TV.

Anyone with a Chromecast can set up a Photowall on their TV and have friends start adding photos to it from their phones and tablets in real time.

So how does it work?

The wall-hosting apps communicate with the Chromecast using the Google Cast SDK’s sender and receiver APIs. A simple call to the requestSession method using the Chrome API or launchApplication on the iOS/Android APIs is all it takes to get started. From there, communication with the Chromecast is helped along using the Receiver API’s getCastMessageBus method and a sendMessage call from the Chrome, iOS or Android APIs.

Using the Google Cast SDK makes it easy to launch a session on a Chromecast device. While a host is creating their new Photowall, they simply select which Chromecast they would like to use for displaying the photos. After a few simple steps, a unique five-digit code is generated that allows guests to connect to the wall from their mobile devices.

The Chromecast device then loads the Photowall application and begins waiting for setup to complete. Once ready, the Chromecast displays the newly-generated wall code and waits for photos to start rolling in. If at any point the Chromecast loses power or internet connection, the device can be relaunched with an existing Photowall right from the administration dashboard.

Tying it all together: The mesh

A mesh network connects the Photowall’s host, the photo-sharing guests, and the Chromecast. The devices communicate with each other via websockets managed by a Google Compute Engine-powered node.js server application. A Google App Engine app coordinates wall creation, authentication and photo storage on the server side, using the App Engine Datastore.

After a unique code has been generated during the Photowall creation process, the App Engine app looks for a Compute Engine instance to use for websocket communication. The instance is then told to route websocket traffic flagged with the new wall’s unique code to all devices that are members of the Photowall with that code.

The instance’s address and the wall code are returned to the App Engine app. When a guest enters the wall code into the photo-sharing app on their browser, the App Engine app returns the address of the Compute Engine websocket server associated with that code. The app then connects to that server and joins the appropriate websocket/mesh network, allowing for two-way communication between the host and guests.

Why is this necessary? If a guest uploads a photo that the host decides to delete for whatever reason, the guest needs to be notified immediately so that they don’t try to take further action on it themselves.

A workaround for websockets

Using websockets this way proved to be challenging on iOS devices. When a device is locked or goes to sleep, the websocket connection should be terminated. However, on iOS it seems that JavaScript execution can be halted before the websocket close event is fired. This means that we are unaware of the disconnection, and when the phone is unlocked again we are left unaware that the connection has been dropped.

To get around this inconsistent websocket disconnection issue, we implemented a check approximately every 5 seconds to examine the ready state of the socket. If it has disconnected we reconnect and continue monitoring. Messages are buffered in the event of a disconnection and sent in order when a connection is reestablished.
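
The Photowall client code is JavaScript running in the browser, but the reconnect-and-buffer idea is language-agnostic. Here is a minimal C++ sketch of the same logic, with a hypothetical Socket interface standing in for the browser WebSocket:

#include <deque>
#include <functional>
#include <string>
#include <utility>

// Hypothetical stand-in for the browser WebSocket.
class Socket {
 public:
  virtual ~Socket() {}
  virtual bool IsOpen() = 0;
  virtual void Send(const std::string& msg) = 0;
};

// Buffers messages while disconnected and flushes them, in order, once a
// periodic check (driven by a ~5 second timer) sees an open socket again.
class BufferedSender {
 public:
  BufferedSender(Socket* socket, std::function<void()> reconnect)
      : socket_(socket), reconnect_(std::move(reconnect)) {}

  void Send(const std::string& msg) {
    buffer_.push_back(msg);
    Flush();
  }

  // Called periodically to catch disconnects the close event never reported.
  void CheckConnection() {
    if (!socket_->IsOpen()) {
      reconnect_();
    }
    Flush();
  }

 private:
  void Flush() {
    while (socket_->IsOpen() && !buffer_.empty()) {
      socket_->Send(buffer_.front());
      buffer_.pop_front();
    }
  }

  Socket* socket_;
  std::function<void()> reconnect_;
  std::deque<std::string> buffer_;
};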

Custom photo editing

The heart of the Photowall mobile web application is photo uploading. We created a custom photo editing experience for guests wanting to add their photos to a Photowall. They can upload photos directly from their device’s camera or choose one directly from its gallery. Then comes the fun stuff: cropping, doodling and captioning.

Photowall for Chromecast has been a fun opportunity to throw out everything we know about what a photo slideshow could be. And it’s just one example of what the Chromecast is capable of beyond content streaming. We barely scratched the surface of what the Google Cast SDK can do. We’re excited to see what’s next for Chromecast apps, and to build another.

For more on what’s under the hood of Photowall for Chromecast, you can tune in to our Google Developers Live event for an in-depth discussion on Thursday, April 3rd, 2014 at 2pm PDT.

Igor Clark is Creative Tech Lead at Google Creative Lab. The Creative Lab is a small team of makers and thinkers whose mission is to remind the world what it is they love about Google.

Posted by Louis Gray, Googler
Categories: Programming

Strategy: Cache Stored Procedure Results

Caching is not new of course, but I don't think I've heard of caching stored procedure results before. It's like memoization in the database. Brent Ozar covers this idea in How to Cache Stored Procedure Results.

The benefits are the usual ones for doing work in the database: it doesn't require per-developer, per-app work; just code it once in the stored proc and it works for everyone, everywhere, for all time. The disadvantage is the usual as well: it adds extra load to a probably already busy database, so it should only be applied to heavy computations.

Brent positions this strategy as an emergency bandaid to apply when you need to take pressure off a database now. Developers can then work on moving the cache off the database and into its own tier. Interesting idea. And as the comments show, the implementation is never as simple as it seems.

Categories: Architecture

How To Market Yourself as a Software Developer Is LIVE!

Making the Complex Simple - John Sonmez - Thu, 03/27/2014 - 16:00

Well, the day has finally arrived. A HUGE amount of work went into creating this package and many sleepless nights were spent getting everything ready, but… How To Market Yourself as a Software Developer is now LIVE! You can go here to get the course. If you have any questions, don’t hesitate to ask. And […]

The post How To Market Yourself as a Software Developer Is LIVE! appeared first on Simple Programmer.

Categories: Programming

My Views On Test Driven Development

Making the Complex Simple - John Sonmez - Thu, 03/27/2014 - 16:00

I used to be a huge supporter of TDD, but lately, I’ve become much more pragmatic about it. In this video, I discuss test driven development and what my current views on it are. Full transcript: John: Hey, John Sonmez from simpleprogrammer.com. This week, I want to talk about, yes, a technical topic. I’m going […]

The post My Views On Test Driven Development appeared first on Simple Programmer.

Categories: Programming

Bringing together the best of PaaS and IaaS

Google Code Blog - Thu, 03/27/2014 - 16:00
By Navneet Joneja, Cloud Platform Team

Cross-posted from the Google Cloud Platform blog

Editor’s note: This post is a follow-up to the announcements we made on March 25th at Google Cloud Platform Live.

For many developers, building a cloud-native application begins with a fundamental decision. Are you going to build it on Infrastructure as a Service (IaaS), or Platform as a Service (PaaS)? Will you build large pieces of plumbing yourself so that you have complete flexibility and control, or will you cede control over the environment to get high productivity?

You shouldn’t have to choose between the openness, flexibility and control of IaaS, and the productivity and auto-management of PaaS. Describing solutions declaratively and taking advantage of intelligent management systems that understand and manage deployments leads to higher availability and quality of service. This frees engineers up to focus on writing code and significantly reduces the need to carry a pager.

Today, we’re introducing Managed Virtual Machines and Deployment Manager. These are our first steps towards enabling developers to have the best of both worlds.

Managed Virtual Machines

At Google Cloud Platform Live we took the first step towards ending the PaaS/IaaS dichotomy by introducing Managed Virtual Machines. With Managed VMs, you can build your application (or components of it) using virtual machines running in Google Compute Engine while benefiting from the auto-management and services that Google App Engine provides. This allows you to easily use technology that isn’t built into one of our managed runtimes, whether that is a different programming language, native code, or direct access to the file system or network stack. Further, if you find you need to ssh into a VM in order to debug a particularly thorny issue, it’s easy to “break glass” and do just that.

Moving from an App Engine runtime to a managed VM can be as easy as adding one line to your app.yaml file:

vm: true

At Cloud Platform Live, we also demonstrated how the next stage in the evolution of Managed VMs will allow you to bring your own runtime to App Engine, so you won’t be limited to the runtimes we support out of the box.

Managed Virtual Machines will soon launch in Limited Preview, and you can request access here starting today.

Introducing Google Cloud Deployment Manager

A key part of deploying software at scale is ensuring that configuration happens automatically from a single source of truth. This is because accumulated manual configuration often results in “snowflakes” - components that are unique and almost impossible to replicate - which in turn makes services harder to maintain, scale and troubleshoot.

These best practices are baked into the App Engine and Managed VM toolchains. Now, we’d like to make it easy for developers who are using unmanaged VMs to also take advantage of declarative configuration and foundational management capabilities like health-checking and auto-scaling. So, we’re launching Google Cloud Deployment Manager - a new service that allows you to create declarative deployments of Cloud Platform resources that can then be created, actively health monitored, and auto-scaled as needed.

Deployment Manager gives you a simple YAML syntax to create parameterizable templates that describe your Cloud Platform projects, including:
  • The attributes of any Compute Engine virtual machines (e.g. instance type, network settings, persistent disk, VM metadata).
  • Health checks and auto-scaling
  • Startup scripts that can be used to launch applications or other configuration management software (like Puppet, Chef or SaltStack)
Templates can be re-used across multiple deployments. Over time, we expect to extend Deployment Manager to cover additional Cloud Platform resources.

Deployment Manager enables you to think in terms of logical infrastructure, where you describe your service declaratively and let Google’s management systems deploy and manage their health on your behalf. Please see the Deployment Manager documentation to learn more and to sign up for the Limited Preview.

We believe that combining flexibility and openness with the ease and productivity of auto-management and a simple tool-chain is the foundation of the next-generation cloud. Managed VMs and Deployment Manager are the first steps we’re taking towards delivering that vision.

Update on operating systems

We introduced support for SuSE and Red Hat Enterprise Linux on Compute Engine in December. Today, SuSE is Generally Available, and we announced Open Preview for Red Hat Enterprise Linux last week. We’re also announcing the limited preview for Windows Server 2008 R2, and you can sign up for access now. Windows Server will be offered at $0.02 per hour for the f1-micro and g1-small instances and $0.04 per core per hour for all other instances (Windows prices are in addition to normal VM charges).

Simple, lower prices

As we mentioned on Tuesday, we think pricing should be simpler and more closely track cost reductions as a result of Moore’s law. So we’re making several changes to our pricing effective April 1, 2014.

First, we’ve cut virtual machine prices by up to 53%:
  • We’ve dropped prices by 32% across the board.
  • We’re introducing sustained-use discounts, which lower your effective price as your usage goes up. Discounts start when you use a VM for over 25% of the month and increase with usage. When you use a VM for an entire month, this amounts to an additional 30% discount.
What’s more, you don’t need to sign up for anything, make any financial commitments or pay any upfront fees. We automatically give you the best price for every VM you run, and you still only pay for the minutes that you use.

Here’s what that looks like for our standard 1-core (n1-standard-1) instance.

Finally, we’ve drastically simplified pricing for App Engine, and lowered pricing for instance-hours by 37.5%, dedicated memcache by 50% and Datastore writes by 30%. In addition, many services, including SNI SSL and PageSpeed, are now offered to all applications at no extra cost.

We hope you find these new capabilities useful, and look forward to hearing from you! If you haven’t yet done so, you can sign up for Google Cloud Platform here.

Navneet Joneja, Senior Product Manager

Posted by Louis Gray, Googler
Categories: Programming

Being a Great Software Development Manager

From the Editor of Methods & Tools - Thu, 03/27/2014 - 15:33
Any manager, when goals are not being met, identifies the impediments and whenever possible removes them. A good manager goes further by identifying and removing potential impediments before they lead to the goals not being met. A great manager makes this seem easy.

Tony Bowden, Socialtext. Source: Managing the Unmanageable, Mickey W. Mantle & Ron Lichty, Addison-Wesley

Code of Ethics: A Proposal

A code of ethics is a compilation of ethical principles brought together into a framework that can be used to guide behavior.  I recently asked friends I work with how many codes of ethics they are bound by, and after a bit of discussion the average was four.  Examples include: IFPUG, PMI, IEEE, SEI, society and religions.  Kevin Brennan, Vice President for Professional Development of IIBA, tweeted me that a group needs a code of ethics to define itself as a profession, and that codes are required for certification bodies under ISO 17024.

I would be the last person to suggest that codes of ethics are a bad idea.  However, the proliferation of codes combined with their relative complexity does give me pause.  The complexity of common codes of ethics is demonstrated below:

  • IIBA: Code of Ethical Conduct – 22 items
  • PMI : PMI’s Code of Ethics and Professional Conduct – 36 items
  • IEEE: Software Engineering Code of Ethics and Professional Practice – 80 items

All three of these codes are good; however, I doubt many people can recall any of their specifics, which greatly reduces their overall effectiveness.  Layer those on top of association codes and corporate codes, like the great code of ethics from Lockheed (approximately 17 items), and the complexity level goes up further.  I said all of this complexity gives me pause because I would like to see process improvement professionals embrace a code of ethics, but I do not want to increase the level of ethical complexity unless it has value.  I think we can keep this dead simple.  The code I am proposing is:

Process Improvement Code of Ethics

  1. Treat others as you would like to be treated.  (Golden Rule)
  2. Follow a process to create and maintain processes. (eat your own dog food)
  3. Meet the commitments you make.
  4. Avoid personal conflicts of interest.
  5. Only pursue changes that benefit the organization.
  6. Make decisions as transparently as humanly possible.

I would suggest that this code is simple and to the point.  Note: I am making the assumption that you’re adhering to laws and that lying, cheating and stealing your neighbor’s Butterfinger are not on the table, based on other codes of ethics.

What is missing? If you adopted these ethical guidelines, how would they affect your projects? How would they affect your professional life? I am looking forward to your input, reactions and suggestions. I would also like your opinions on whether ethics and process improvement should be discussed in the same context.


Categories: Process Management

Dart Flight School - Successful Take-Off

Google Code Blog - Wed, 03/26/2014 - 22:15
By Seth Ladd, Developer Advocate

In celebration of Dart 1.0, the global developer community organized over 120 Dart Flight School events, and the response was overwhelming. Throughout February, 8500 developers learned how to build modern web (and server!) apps with Dart and AngularDart. Attendees got their Dart wings in Laos, France, Uganda, San Francisco, New Delhi, Bolivia and everywhere in between.

If you missed out, you can watch this Introduction to AngularDart video, build your first Dart app with the Darrrt Pirate Badge code lab, and try the AngularDart code lab.

Here are some of our favorite photos -- some events really embraced the theme!


+Kasper Lund, co-founder of Dart, speaking inside a decommissioned 747 at a Flight School hosted by GDG Netherlands.


GDG Seattle hosted their Flight School in the Museum of Flight.

 


8 cities in China held simultaneous events over Hangouts on Air (on Air, get it?). GDGs in Beijing, Hangzhou, Lanzhou, Shanghai, Suzhou, Xiamen, Xi’an, and Zhangjiakou participated.

Check out more photo highlights from around the world.

Thank you to the amazing community organizers, speakers, volunteers, and attendees that made this possible.

Next time, space!

Seth Ladd is a Developer Advocate on Dart. He's a web engineer, book author, conference organizer, and loves a game of badminton.

Posted by Louis Gray, Googler
Categories: Programming

Google I/O 2014

Google Code Blog - Wed, 03/26/2014 - 20:30
By Billy Rutledge, Director of Developer Relations

Today we launched the Google I/O 2014 website at google.com/io. Play with the experiment, get a preview of this year's conference, and continue to follow the Google Developers blog for updates on the event.

Registration:

Now, on to what I know you're waiting to hear about most. A month ago, we mentioned that this year’s registration process would be different. You won't need to scramble the second registration opens, as we will not be implementing a first-come-first-served model this year. Instead, registration will remain open from April 8 - 10 and you can apply any time during this window. We'll randomly select applicants after the window closes on April 10, and send ticket purchase confirmation emails shortly thereafter.

So sit back, relax, sleep in, and visit the Google I/O website from April 8-10 when the registration window is open. For full details, visit our Help page.

I/O Extended & Live:

If you can't make it to San Francisco, or would just rather experience I/O on your own schedule, we'll be bringing I/O to you in two ways. Watch the livestream of the keynote and sessions from the comfort of your home or office. Or, attend an I/O Extended event in your area. More details on these programs will be available soon.

We're working hard to make sure Google I/O 2014 is the best I/O yet. We hope to see you in June!

Billy Rutledge, Director of Developer Relations

Posted by Louis Gray, Googler
Categories: Programming

Oculus Causes a Rift, but the Facebook Deal Will Avoid a Scaling Crisis for Virtual Reality

Facebook has been teasing us. While many of their recent acquisitions have been surprising, shocking is the only word that adequately describes Facebook's 5-day whirlwind acquisition of Oculus, the immersive virtual reality visionaries, for a now paltry-sounding $2 billion.

The backlash is a pandemic, jumping across social networks with the speed only a meme powered by the directly unaffected can generate.

For more than 30 years VR has been the dream burning in the heart of every science fiction fan. Now that this future might finally be here, Facebook’s ownage makes it seem like a wonderful and hopeful timeline has been choked off, killing the Metaverse before it even had a chance to begin.

For the many who voted for an open future with their Kickstarter dollars, there’s a deep and personal sense of betrayal, despite Facebook’s promise to leave Oculus alone. The intensity of the reaction is because Oculus matters to people. It's new, it's different, it creates a better future. It's important in a way sending messages or taking pictures never can be.

Let’s use Andy Baio as a sane example of the loyal opposition:

I can palpably feel the oxygen sucked out of the room. Infinite bright possibilities from indie game devs, quietly shelved.

Jigsus backs this up on reddit:

Within the last hour EVERY friend I know was developing a rift game has canceled. That's around 11 projects just gone.

So WTF? Why in this reality would Oculus sell to Facebook? At first blush it makes little sense. A more mismatched couple could hardly be created in a fantasy immersive 3D game.

Yet there's a reason and a method here that's not only about a sweet payoff. When you look deeper at the Facebook deal it’s about Oculus' existential need to scale. And scale is something Facebook is very, very good at.

Let’s explore why this deal makes more sense for Oculus than it might first appear and the central role scalability plays in the decision making...

Categories: Architecture

New developer experiences for Cloud Platform

Google Code Blog - Wed, 03/26/2014 - 17:30
By Cody Bratt, Google Cloud Platform team

Cross-posted from the Google Cloud Platform blog

Editor's note: This post is a follow-up to the announcements we made on March 25th at Google Cloud Platform Live.

Yesterday, we unveiled a new set of developer experiences for Google Cloud Platform that are inspired by the work we've done inside of Google to improve our developers' productivity. We want to walk you through these experiences in more detail and explain how we think they can help you focus on developing your applications and growing your business.

Isolating production problems faster

Understanding how applications are running in production can be challenging and sometimes it's unavoidable that errors will make it into production. We've started adding new capabilities to Cloud Platform this week that make it easier to isolate and fix production problems in your application running on Google App Engine.

We are adding a 'Diff' link to the Releases page (shown below) which will bring you to a rollup of all the commits that happened between deployments and the source code changes they introduced. Invaluable when you are trying to isolate a production issue.

You can see here where Brad introduced an error into production.

But, looking at source changes can still be like looking for a needle in a haystack. This is why we're working to combine data from your application running in production and link it with your source code.
In the new and improved logs viewer, we aggregate logs from all your instances in a single view in near real time. Of course, with high traffic levels, just browsing is unlikely to be helpful. You can filter based on a number of different criteria including regular expressions and the time when you saw an issue. We've also improved the overall usability by letting you scroll continuously rather than viewing 20 logs per page at a time.

Ordinarily, debugging investigations from the logs viewer would require you to find the code, track down the right file and line and ensure it's the same version that was deployed in production. This can cause you to completely lose context. In order to address this, we've added links from stack traces to the associated code causing them for push-to-deploy users. In one click you are brought to the source viewer at the revision causing the problem with the associated line highlighted.

Shortening your time to fix

You may have noticed the 'Edit' buttons in the source viewer. We're introducing this because we think that finding the error in production is only part of the effort required when things go awry. We continue to ask ourselves how we can make it even faster for you to fix problems. One of the ways we do this inside Google today is to make the fix immediately from the browser.

So, we're bringing this to you by making it possible to edit a file in place directly from the source viewer in the Console. You can simply click the 'Edit' button, make your changes to that file, and click the 'Commit' button. No local setup required. We also know that sometimes fixes aren't that simple, and we're investigating ways we can make it easy to seamlessly open your files in a desktop editor or IDE directly from the Console.

Fix simple problems directly in the browser.
Trigger your push-to-deploy setup instantly.

Since we've integrated this with your push-to-deploy configuration, the commit triggers any build and test steps you have to ensure your code is fully vetted before it reaches production. Further, since we've built this on top of Git, each change is fully attributed and traceable in your Git repository. This is not SSHing into a machine and hacking on code in production.

An integrated ecosystem of tools you know

Speaking of using other editors, we believe that you are most productive when you have access to all the tools you know and love. That's why we're committed to creating a well integrated set of experiences across those tools, rather than asking you to switch. With the Git push-to-deploy feature we launched last year, we wanted to make it easy for you to deploy your application utilizing standard Git commands while giving you free private hosted Git repositories. In addition, we understand that many of you host your source code on GitHub, and in fact so does the Google Cloud Platform team. As you can see from the new 'Releases' page we showed this week, we're introducing the ability to connect your GitHub repository to push-to-deploy. We will register a post-commit hook with GitHub to give you all the deployment benefits without moving your source code. Just push to the master branch of your GitHub repository and your code will be deployed!

Push-to-deploy now supports Java builds.

Next, we're excited to introduce new release types for you to use with push-to-deploy. As you can see above, this project is set up to trigger a build, run all your unit tests, and if they all pass, deploy. Taking a peek under the covers, we utilize a standard Google Compute Engine virtual machine that you own running in your project to perform this build and test. In this case, Google has automatically provisioned the machine with Maven and Jenkins and everything you need to build and run your tests. Your build and tests can be as complex as they need to be, and they can use all the resources available on that machine. What's more, all the builds will be done on a clean machine, ensuring reliable, repeatable builds on every release. We're starting with Maven-based Java builds, but working to release support for other languages, test frameworks and build systems in the future.

The release history showing build, test and deployment status.

Tying this together, we've simplified getting started on your projects by introducing the new 'gcloud' command in the Google Cloud SDK. The Google Cloud SDK contains all the tools and libraries you need to create and manage resources on the Cloud Platform. We recently posted some Cloud SDK tips and tricks. Now, using the 'gcloud init' command, we make setting up a local copy of your project fast by finding your project's associated Git repository and downloading the source code, so all you have to do is change directories and open your favorite editor.
Once you are initialized, all you need to do is start writing code. After you're done, running 'git push' will push your changes back to the remote repository and trigger your push-to-deploy flow. Then, all your changes will also be available to view on the 'Source' page in the Console.

We'll be rolling out these features to you in stages over the coming weeks, but if you are interested in being a trusted tester for any of them, please contact us here. We're very excited to hear how you're using these tools and what features we can build on top of them to make your developer experience even more seamless and fast. We're just getting started.

Cody Bratt is a product manager on the Google Cloud Platform team

Posted by Louis Gray, Googler
Categories: Programming

Project and Process Improvement Ethics: A Primer

Religion is just one way people learn ethics.

Religion is just one way people learn ethics.

I recently overheard an offhanded comment that went something like, “if you aren’t cheating, you’re not trying hard enough.”  What are ethics?  What is the purpose of ethical frameworks?  Why should they matter to those who manage process improvement?

Wikipedia defines ethics as a branch of philosophy that addresses questions about morality—that is, concepts such as good vs. bad, noble vs. ignoble, right vs. wrong, and matters of justice, love, peace, and virtue.  Hardly the stuff of project management or process improvement, however there is a branch of ethics called applied ethics (doesn’t that sound much more practical?) that seeks to address our daily business lives.

Interest in ethics waxes and wanes over time; they tend to wax when things go awry.  Obvious examples abound, like when Enron  went up in flames, ethics became a hot topic, at MCI during their accounting debacle and even during the BP Oil well crisis. But there are less obvious examples such as the spin to under play a risk in a status report or when someone occasionally conflates effort and progress when a project is behind.  Unless a framework or code is in place to highlight these transgressions, large or small, they go unnoticed and nothing can be changed.

Most ethical frameworks serve two common purposes. The first purpose is to guide behavior so that it is predictable.  Codes of conduct provide a path to guide both the organization’ and the individual’s actions (to an extent they can be different).  Codes of ethics, codes of conduct and the effort to enforce them help to identify deviant behavior before it can injure the organization. The second purpose is as an announcement to the larger world of the higher order rules an organization intends to follow.

The ethics enshrined in these frameworks evolve to guide behavior and provide all affected parties with an understanding of how people (and people proxies) should act.  Rules, laws and manifestos (statements of principles and ethics) are how ethics are applied in the real. The rule, “Thou shall not install un-unit tested code” creates a set of expected behaviors and a set of obligations on all parties.  Living up to the rule is a matter of ethics.  The translation of ethics into codes of conduct, rules, laws or other codes provide a line in the sand so that you can judge whether an action is ethical or not.  The more black and white the ethics rule is the easier it is to follow in real time.

Most corporate codes of conduct or ethics do not address some of the more specific issues with which a project manager of a process improvement projects will need to wrestle.  What can be done?

I begin my process improvement projects by establishing a process improvement manifesto.  The exercise has many benefits; however the primary goal is to help empower the team to make the decisions without having to get permission.  The manifesto acts a basis to form very specific code of ethics to shorten the decision making loop which will improve efficiency for normal IT projects and process improvement projects.


Categories: Process Management

Are integration tests worth the hassle?

Actively Lazy - Tue, 03/25/2014 - 22:19

Whether or not you write integration tests can be a religious argument: either you believe in them or you don’t. What we even mean by integration tests can lead to an endless semantic argument.

What do you mean?

Unit tests are easy to define: they test a single unit (a single class, a single method) and make a single assertion on the behaviour of that method. You probably need mocks (again, depending on your religious views on mocking).

Integration tests, as far as I’m concerned, test a deployed (or at least deployable) version of your code, outside in, as close to what your “user” will do as possible. If you’re building a website, use Selenium WebDriver. If you’re writing a web service, write a test client and make requests to a running instance of your service. Get as far outside your code as you reasonably can to mimic what your user will do, and do that. Test that your code, when integrated, actually works.

In between these two extremes exist varying degrees of mess, which some people call integration testing. E.g. testing a web service by instantiating your request handler class and passing a request to it programmatically, letting it run through to the database. This is definitely not unit testing, as it’s hitting the database. But it’s not a complete integration test either, as it misses a layer: what if HTTP requests to your service never get routed to your handler? How would you know?

What’s the problem then?

Integration tests are slow. By definition, you’re interacting with a running application which you have to spin up, setup, interact with, tear down and clean up afterwards. You’re never going to get the speed you do with unit tests. I’ve just started playing with NCrunch, a background test runner for Visual Studio – which is great, but you can’t get it running your slow, expensive integration tests all the time. If your unit tests take 30 seconds to run, I’ll bet you run them before every checkin. If your integration tests take 20 minutes to run, I bet you don’t run them.

You can end up duplicating lower level tests. If you’re following a typical two level approach of writing a failing integration test, then writing unit tests that fail then pass until eventually your integration test passes – there is an inevitable overlap between the integration test and what the unit tests cover. This is expected and by design, but can seem like repetition. When your functionality changes, you’ll have at least two tests to change.

They aren’t always easy to write. If you have a specific case to test, you’ll need to set up the environment exactly right. If your application interacts with other services / systems you’ll have to stub them so you can provide canned data. This may be non-trivial. In most environments I’ve worked in, the biggest cost of setting up good integration tests is the necessary evil of building test infrastructure: faking out web services, third parties, messaging systems, databases, blah blah. It all takes time and maintenance and slows down your development process.

Finally, integration tests can end up covering uninteresting parts of the application repeatedly, meaning some changes are spectacularly expensive in terms of updating the tests. For example, if your application has a central menu system and you change it, how many test cases need to change? If your website has a login form and you massively change the process, how many test cases require a logged-in user?

Using patterns like the page object pattern you can code your tests to minimize this, but it’s not always easy to avoid this class of failure entirely (a sketch of the pattern follows). I’ve worked in too many companies where, even with the best of intentions, the integration tests end up locking in a certain way of working that you either stick with or declare bankruptcy and just delete the failing tests.
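
For readers unfamiliar with the page object pattern, here is a rough sketch; it is written in C++ with a hypothetical Browser wrapper for brevity, but WebDriver page objects in Java or C# have the same shape. The idea is that only this one class knows how the login page is structured, so a redesigned login form means one change rather than dozens:

#include <string>

// Hypothetical thin wrapper over a browser automation API.
class Browser {
 public:
  void Type(const std::string& element_id, const std::string& text) {}
  void Click(const std::string& element_id) {}
};

// Page object: the single place in the test suite that encodes the
// structure of the login page.
class LoginPage {
 public:
  explicit LoginPage(Browser* browser) : browser_(browser) {}

  void LogIn(const std::string& user, const std::string& password) {
    browser_->Type("username", user);
    browser_->Type("password", password);
    browser_->Click("submit");
  }

 private:
  Browser* browser_;
};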

What are the advantages then?

Integration tests give you confidence that your application actually works from your user’s perspective. I’d never recommend covering every possible edge case with integration tests – but a happy-path test for a piece of functionality and a failure-case gives you good confidence that the most basic aspects of any given feature work. The complex edge cases you can unit test, but an overall integration test helps you ensure that the feature is basically integrated and you haven’t missed something obvious that unit tests wouldn’t cover.

Your integration tests can be pretty close to acceptance tests. If you’re using a BDD type approach, you should end up with quite readable test definitions that sufficiently technical users could understand. This helps you validate that the basic functionality is as the user expects, not just that it works to what you expected.

What goes wrong?

The trouble is, if integration tests are hard to write, you won’t write them. You’ll find another piece of test infrastructure you need to invest in, decide it isn’t worth it this time, and skip it. If your approach relies on integration tests to get decent coverage of parts of your application – especially true for the UI layer – then skipping them means you can end up with a lot less coverage than you’d like.

Some time ago I was working on a WPF desktop application – I wanted to write integration tests for it. The different libraries for testing WPF applications are basically all crap. Each one of them failed to meet my needs in some annoying, critical way. What I wanted was WebDriver for WPF. So I started writing one. The trouble is, the vagaries of the Windows UI eventing system mean this is hard. After a lot of time spent investing in test infrastructure instead of writing integration tests, I still had a barely usable testing framework that left all sorts of common cases untestable.

Because I couldn’t write integration tests and unit testing WPF UI code can be hard, I’d only unit test the most core internal functionality – this left vast sections of the WPF UI layer untested. Eventually, it became clear this wasn’t acceptable and we returned to the old-school approach of writing unit tests (and unit tests alone) to get as close to 100% coverage as is practical when some of your source code is written in XML.

This brings us back full circle: we have good unit test coverage for a feature, but no integration tests to verify that all the different units hang together correctly in a deployed application. But where the trade-off is little test coverage, or decent test coverage with systematic blind spots, what’s the best alternative?

Conclusion

Should you write integration tests? If you can do so easily: yes! If you’re writing a web service, integration tests are much easier to write than for almost any other type of application. If you’re writing a relatively traditional, not-too-JavaScript-heavy website, WebDriver is awesome (and the only practical way to get decent cross-browser confidence). If you’re writing very complex UI code (WPF or JavaScript), it might be very hard to write decent integration tests.

This is where your test approach blurs into architecture: as much as possible, your architecture needs to make testing easy. Subtle changes to how you structure your application can make it easier to get decent test coverage: design the application so that different elements are easy to test in isolation (e.g. separate the UI layer from a business-logic service). You don’t get quite fully integrated tests, but you minimize the opportunity for bugs to slip through the cracks.
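
A sketch of the kind of separation meant here, in Java with invented names: the business rule sits behind an interface and is unit-testable with no UI running, while the UI layer stays so thin that a couple of integration tests – or none – can cover it.

    // Business logic behind an interface: fully testable in isolation.
    interface AccountService {
        boolean canWithdraw(String accountId, long amountCents);
    }

    class DefaultAccountService implements AccountService {
        @Override
        public boolean canWithdraw(String accountId, long amountCents) {
            return amountCents > 0 && amountCents <= balanceOf(accountId);
        }
        private long balanceOf(String accountId) {
            return 10_000; // stand-in for a repository/database lookup
        }
    }

    // The UI layer only formats and delegates: little room for bugs to hide.
    class WithdrawalController {
        private final AccountService service;
        WithdrawalController(AccountService service) { this.service = service; }

        String submit(String accountId, long amountCents) {
            return service.canWithdraw(accountId, amountCents) ? "OK" : "Insufficient funds";
        }
    }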

Whether or not you write integration tests is fundamentally a question of what tests your architectural choices require you to write to get confidence in your code.


Categories: Programming, Testing & QA

Google Cloud Platform Live: Blending IaaS and PaaS, Moore’s Law for the cloud

Google Code Blog - Tue, 03/25/2014 - 18:00
By Urs Hölzle, Senior Vice President

Editor's note: Tune in to Google Cloud Platform Live for more information about our announcements. And join us during our 27-city Google Cloud Platform Roadshow, which kicks off in Paris on April 7.

Today, at Google Cloud Platform Live we're introducing the next set of improvements to Cloud Platform: lower and simpler pricing, cloud-based DevOps tooling, Managed Virtual Machines (VM) for App Engine, real-time Big Data analytics with Google BigQuery, and more.

Industry-leading, simplified pricing

The original promise of cloud computing was simple: virtualize hardware, pay only for what you use, with no upfront capital expenditures and lower prices than on-premise solutions. But pricing hasn't followed Moore's Law: over the past five years, hardware costs improved by 20-30% annually but public cloud prices fell at just 8% per year.

We think cloud pricing should track Moore's Law, so we're simplifying and reducing prices for our various on-demand, pay-as-you-go services by 30-85%:
  • Compute Engine reduced by 32% across all sizes, regions, and classes.
  • App Engine pricing simplified, with significant reductions in database operations and front-end compute instances.
  • Cloud Storage is now priced at a consistent 2.6 cents per GB. That's roughly 68% less for most customers.
  • Google BigQuery on-demand prices reduced by 85%.

Sustained-Use discounts

In addition to lower on-demand prices, you'll save even more money with Sustained-Use Discounts for steady-state workloads. Discounts start automatically when you use a VM for over 25% of any calendar month. When you use a VM for an entire month, you save an additional 30% over the new on-demand prices, for a total reduction of 53% from our original prices (the two cuts compound: 0.68 × 0.70 ≈ 0.47 of the original price).

With our new pricing and sustained use discounts, you get the best performance at the lowest price in the industry. No upfront payments, no lock-in, and no need to predict future use.

Making developers more productive in the cloud

We're also introducing features that make development more productive:
  • Build, test, and release in the cloud, with minimal setup or changes to your workflow. Simply commit a change with git and we'll run a clean build and all unit tests.
  • Aggregated logs across all your instances, with filtering and search tools.
  • Detailed stack traces for bugs, with one-click access to the exact version of the code that caused the issue. You can even make small code changes right in the browser.
We're working on even more features to ensure that our platform is the most productive place for developers. Stay tuned.

Introducing Managed Virtual Machines

You shouldn't have to choose between the flexibility of VMs and the auto-management and scaling provided by App Engine. Managed VMs let you run any binary inside a VM and turn it into a part of your App Engine app with just a few lines of code. App Engine will automatically manage these VMs for you.

Expanded Compute Engine operating system support

We now support Windows Server 2008 R2 on Compute Engine in limited preview, and Red Hat Enterprise Linux and SUSE Linux Enterprise Server are now available to everyone.

Real-Time Big Data

BigQuery lets you run interactive SQL queries against datasets of any size in seconds using a fully managed service, with no setup and no configuration. Starting today, with BigQuery Streaming, you can ingest 100,000 records per second per table with near-instant updates, so you can analyze massive data streams in real time. Yet, BigQuery is very affordable: on-demand queries now only cost $5 per TB and 5 GB/sec reserved query capacity starts at $20,000/month, 75% lower than other providers.
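
To make the streaming model concrete, here is a hedged sketch of a single-row streaming insert. The announcement itself concerns the underlying tabledata.insertAll API; this example uses the later google-cloud-bigquery Java client, and the dataset, table, and row contents are invented:

    import com.google.cloud.bigquery.BigQuery;
    import com.google.cloud.bigquery.BigQueryOptions;
    import com.google.cloud.bigquery.InsertAllRequest;
    import com.google.cloud.bigquery.InsertAllResponse;
    import com.google.cloud.bigquery.TableId;
    import java.util.Map;

    public class StreamOneRow {
        public static void main(String[] args) {
            BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
            TableId table = TableId.of("my_dataset", "events"); // hypothetical names
            InsertAllResponse response = bigquery.insertAll(
                InsertAllRequest.newBuilder(table)
                    // the insert id lets BigQuery de-duplicate retried rows
                    .addRow("event-0001", Map.of("user", "alice", "action", "login"))
                    .build());
            if (response.hasErrors()) {
                System.err.println("Insert failed: " + response.getInsertErrors());
            }
        }
    }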

Conclusion

This is an exciting time to be a developer and build apps for a global audience. Today we've focused a lot on productivity, making it easier to build and test in the cloud, using the tools you're already familiar with. Managed VMs give you the freedom to combine flexible VMs with the auto-management of App Engine. BigQuery allows big data analysis to just work, at any scale.

And on top of all of that, we're making it more affordable than it's ever been before, reintroducing Moore's Law to the cloud: the cost of virtualized hardware should fall in line with the cost of the underlying real hardware. And you automatically get discounts for sustained use with no long-term contracts, no lock-in, and no upfront costs, so you get the best price and the best performance without needing a PhD in Finance.

We've made a lot of progress this first quarter and you'll hear even more at Google I/O in June.

Posted by Louis Gray, Googler
Categories: Programming

Architect Salary Survey

Software Architecture Zen - Pete Cripp - Tue, 03/25/2014 - 14:48
The first-ever architecture-specific salary survey has just been published in the UK by FMC Technology. In total, over 1,000 architects responded to the survey. The report looks at architect roles in six main areas:
  • Architecture Management
  • Enterprise Architecture
  • Business Architecture
  • Information Architecture
  • Application Architecture
  • Technology Architecture
It also looks at 2013 pay rises, regional and industry differences, and motivational factors.
The results can be viewed here.

Also on my WordPress blog here.
Categories: Architecture

Software Development Conferences Forecast March 2014

From the Editor of Methods & Tools - Tue, 03/25/2014 - 14:04
Here is a list of software development related conferences and events on Agile (Scrum, Lean, Kanban), software testing, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & Tools software development magazine:
  • Big Data TechCon, March 31-April 2, Boston, USA
  • Software Testing Analysis & Review Canada Conference, April 5-9 2014, Toronto, Canada
  • Agile Adria Conference, April 7-8 2014, Tuheljske Toplice, Croatia
  • Optional Conference, April 8-9 2014, Budapest, Hungary
  • GOTO Amsterdam 2014, ...

3 Roles That Need to be Involved in Agile Estimating with Planning Poker

Mike Cohn's Blog - Tue, 03/25/2014 - 13:00

When estimating your product backlog with Planning Poker, it’s important to have the right people participate. Let’s go over who needs to be there, and what the job of each is during an agile estimating meeting.

First, of course, we start with the team. All of a Scrum team’s members should be present when playing Planning Poker. You may be tempted to estimate without the individuals in some role—don’t.

An estimate put on a product backlog item (most frequently a user story) needs to represent the total effort to deliver that item. It’s not an estimate of all the work except database changes or all the work except testing.

Those individuals need to be there because they’re needed to deliver many of the product backlog items. They’re also there because they will often have something to contribute to discussions of work they may not directly be involved in.

I’ve seen many examples where a comment by a tester about his or her own work made a programmer realize he or she had overlooked something.

Programmers are, of course, the best at estimating programming work. And testers are best at estimating testing, and so on. But that doesn’t mean someone with a specialty in one skill won’t have something to add in a discussion about something that will be done by someone else.

The ScrumMaster should also be present at any meeting where the team estimates product backlog items. A team’s ScrumMaster is its facilitator, and so should participate in all regularly scheduled meetings – and a Planning Poker session is no exception.

But should the ScrumMaster estimate alongside the rest of the team? If the ScrumMaster is also a technical contributor such as a programmer, tester, designer, analyst, DBA, etc., then definitely.

By right of contributing in that way, the ScrumMaster holds up a Planning Poker card just like the others.

However, a pure ScrumMaster should estimate only by invitation of the rest of the team. If the others say, “Hey, we have to estimate, why don’t you?” then the ScrumMaster holds up a card each round.

A ScrumMaster who has a solid technical background (in any role) will be especially likely to estimate along with the others.

That brings us to the product owner—and, yes, by all means the product owner needs to be present during any Planning Poker estimating session. The product owner describes each item to the team before they estimate it.

Also, team members will have questions—“what should we do if … ?” “What if the user does … ?”—and the product owner needs to be present to answer those questions.

Product owners generally do not hold up cards during Planning Poker sessions. My official answer, however, is the same as for the ScrumMaster: if the team invites the product owner to estimate, the product owner should do so. This is pretty rare, though – it’s far more common for a ScrumMaster to be invited to estimate.

So, like many things in Scrum, Planning Poker is a whole-team activity.