
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Get Up And Code 047: Miguel Castro Knows How To Stay Fit

Making the Complex Simple - John Sonmez - Sat, 03/29/2014 - 15:00

I finally got a chance to talk with Miguel Castro about fitness and diet and how he incorporates them into his busy life. Miguel has an awesome story about how he lost quite a bit of fat and replaced it with muscle. Full transcript below: John: Hey everyone, welcome back to the Get Up and CODE […]

The post Get Up And Code 047: Miguel Castro Knows How To Stay Fit appeared first on Simple Programmer.

Categories: Programming

Optimism: The Good and The Bad

It is good to be optimistic, but we still want airbags…

Is optimism a good thing? Optimistic people live longer and are typically happier than less optimistic people. Michael Scheier and Charles Carver report in Effects of Optimism on Psychological and Physical Well-Being that optimism may lead a person to cope more adaptively with stress. Unfortunately, the personal benefits of optimism do not tend to extend to projects and teams. So why is optimism good for most people while it is a problem for teams?

  • Optimism causes risks to be overlooked.
  • Optimism causes estimates to be missed.
  • Optimism causes plans to be slipped.
  • Optimism causes the need for heroism.

In people, optimism is a great thing, but for project managers optimistic realism (defined as optimism balanced with “REAL” data) is far healthier. Despite my plea, most teams and leaders eschew realism and plan optimistically. We are trained to be problem solvers, trained that we can surmount any problem by sheer force of will, and therefore we plan optimistically. Planning optimistically only works when the plan is based on measured performance plus planned innovation. Innovation must be planned because projects cannot count on serendipity to achieve their goals, and neither teams nor organizations can count on lightning striking twice without a plan and process to attract it.

Agile teams of all shapes and flavors need to separate how they view themselves from how they view their role. Personal optimism is great, but professional optimism should be replaced with optimistic realism. Realism is seeing the forest for the trees and planning to make sure you achieve your goals. A self-managing team must see risk, predict the future, and plan for innovation and change in order to deliver in a timely, accurate, and efficient manner. Teams must focus on performance because they are charged with delivering functionality while making a customer happy. It takes optimistic realism to deliver. In short:

  • Optimistic realism causes risks to be identified.
  • Optimistic realism recognizes capacity.
  • Optimistic realism supports making flexible plans.
  • Optimistic realism fosters teams.

Optimism without measurement and data as a balance is unrealistic.


Categories: Process Management

Stuff The Internet Says On Scalability For March 28th, 2014

Hey, it's HighScalability time:


Looks like a multiverse, if you can keep it.
  • Quotable Quotes:
    • @abt_programming: "I am a Unix Creationist. I believe the world was created on January 1, 1970 and as prophesized, will end on January 19, 2038" - @teropa
    • @demisbellot: Cloud prices are hitting attractive price points, one more 40-50% drop and there'd be little reason to go it alone.
    • @scott4arrows: Dentist "Do you floss regularly?" Me "Do you back up your computer data regularly?"
    • @avestal: "I Kickstarted the Oculus Rift, what do I get?" You get a lesson in how capitalism works.
    • @mtabini: “$20 charger that cost $2 to make.” Not pictured here: the $14 you pay for the 10,000 charger iterations that never made it to production.
    • @strlen: "I built the original assembler in JS, because it's what I prefer to use when I need to get down to bare metal." - Adm. Grace Hopper
    • tedchs: I'd like to propose a new rule for Hacker News: only if you have built the thing you're saying someone should save money by building themselves, may you say the person should build that thing.
    • lamby: Bezos predicted they would be good over the long term but said that he didn’t want to repeat “Steve Jobs’s mistake” of pricing the iPhone in a way that was so fantastically profitable that the smartphone market became a magnet for competition.
    • @PariseauTT: I feel a Netflix case study coming...everybody get your drinks ready...#AWSSummit
    • seanmccann: That's no different than startups of the past having to pay thousands to millions of dollars to setup servers. These days those same servers can be setup in minutes for a fraction of the cost. Timing is everything. Sometimes the current market pricing for a commodity is too expensive to make your business viable today but in the future that will not be the case. Just depends how long that will take. 
    • Petyr 'Littlefinger' Baelish: Chaos isn't a pit. Chaos is a ladder. Many who try to climb it fail and never get to try again. The fall breaks them. And some, are given a chance to climb. They refuse, they cling to the realm or the gods or love. Illusions. Only the ladder is real. The climb is all there is.
  • We turn those who first climb the peaks of tall mountains into heroes. But what of the mountain? Mariana Mazzucato, The Entrepreneurial State: Debunking Private vs. Public Sector Myths: Every feature of the iPhone was created, originally, by multi-decade government-funded research. From DARPA came the microchip, the Internet, the micro hard drive, the DRAM cache, and Siri. From the Department of Defense came GPS, cellular technology, signal compression, and parts of the liquid crystal display and multi-touch screen (joining funding from the CIA, the National Science Foundation, and the Department of Energy, which, by the way, developed the lithium-ion battery.) CERN in Europe created the Web. Steve Jobs’ contribution was to integrate all of them beautifully.

  • This is not an April Fools' joke, as the name might make a certain sort of mind consider. WebScaleSQL: MySQL goes web scale with contributions from MySQL engineering teams at Facebook, Google, LinkedIn, and Twitter. It includes lots of good work on the test suite, performance enhancements, and features to make scaling easier. So many forks at the table (MariaDB, Percona, webscale).

  • This is why we can't have nice code. Martin Sústrik argues In the Defense of Spaghetti Code, quite successfully I think, that it's often clearer to have one large 1500-line function than a refactored great big ball of mud. Commenters argue from a perfect-world stance that if you do X from the start and then continue the same practices over the years and across hundreds of programmers, then code can be perfected. Not so. The nature of many domains is that they are just messy. At a certain level of abstraction messiness can be hidden, but when you are in the guts of a thing, messiness is preserved.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge...

Categories: Architecture

No Discount for Charity

NOOP.NL - Jurgen Appelo - Fri, 03/28/2014 - 16:14
Management 3.0 Workout

This week I was asked if I offer a discount for charity organizations for my new one-day workshops.

I said, “No”.

I don’t believe in discounts for charity for various reasons. But I’m also sure some people will hate me for saying “No” when I don’t offer a good explanation. “Jurgen is such a bad selfish person! Boo!” Well, yes that’s true. But let me offer you my thoughts.

The post No Discount for Charity appeared first on NOOP.NL.

Categories: Project Management

Why Johnny Can't Estimate or the Dystopia of Poor Estimating Practices

Herding Cats - Glen Alleman - Fri, 03/28/2014 - 04:28

There is a popular myth that the estimating problem in software development starts with comparing software development to bridge design and development. This lays the groundwork for claiming that software development is not like any other engineering discipline and that estimating can't be done for software projects.

That somehow the software project paradigm is exempt from the Five Immutable Principles of every other project domain. These immutable principles are:

  1. Do we know what DONE looks like in units of measure meaningful to the decision makers? These measures are effectiveness and performance. Measures of Effectiveness are operational measures closely related to the achievement of the mission or operational objectives, evaluated in an operational environment under a specific set of conditions. Measures of Performance characterize the physical or functional attributes of system operation, measured or estimated under specific conditions.
  2. Do we have a PLAN - not a schedule, but a strategy - to reach this done state in a logical sequence of work activities? This sequence delivers the needed capabilities to support the mission or business case and maximizes the ROI of each capability.
  3. Do we have some understanding of the RESOURCES needed to execute this plan? These resources include the staff, facilities, money, tools, and environments needed for success.
  4. Have we identified all the uncertainties - reducible and non-reducible - that will create risks to our success?
  5. Have we established some means of measuring progress toward our goal in units of physical percent complete?

If we are building bridges, the answers to these questions are pretty clear. We are bending metal into money in ways well established from the past. For software projects, these questions still have clear and concise answers, although the answers are developed not from blueprints but by other means.

Economics of writing software for money

If we accept these immutable principles of project success, there are a few other immutable principles for the management of a business that writes software for money - whether internal projects, where funding comes from the company, or external projects, where the money comes from a customer.

  1. All value is exchanged for the consumption of time and money.
  2. This value cannot be assessed in the absence of knowing the cost to produce that value.

First let's look at Barry Boehm's seminal work, Software Engineering Economics. Some have suggested this idea is outdated. They would be wrong, especially since that suggestion - being wrong - is not backed by any experience or reference of managing the balance sheet of a for-profit software project. Internal projects where cost and cost performance are outside the person's management responsibility don't count.

Show me the money (that you have been held accountable for)

And by accountable I mean you lose your job when things go bad financially.

Now, with the advent of agile software development, Barry's concepts from long ago need to be updated for iterative and incremental development. Barry, by the way, instituted the Spiral method in the DOD, replacing the Waterfall method, and recently instituted the evolutionary method, replacing the Spiral method, starting with Section 803 of the National Defense Authorization Act.

Core business processes are still in place, no matter the software development method. Those processes haven't been overturned by agile or any other process. We still exchange money for value, and the rate of that exchange must be a positive ratio:

Value / Cost > 1.0

This must hold if we expect to stay in business for long. Monetizing value is often difficult up front. Monetizing the work effort may appear difficult to some, but in fact it is not as difficult as it appears. There are many tools, processes, books, training courses, and professional organizations with field-proven solutions for estimating and managing cost in the software development domain.
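
To make that arithmetic concrete, here is a minimal sketch of the check in code. It is an illustration only - the figures are hypothetical placeholders, not numbers from Boehm's work or from any real project.

#include <iostream>

// Sketch: is the exchange of money for value a positive ratio?
// All figures are hypothetical placeholders.
int main() {
  const double monetized_value = 1250000.0;  // value delivered, in dollars
  const double estimated_cost = 1000000.0;   // cost to produce it, in dollars

  const double ratio = monetized_value / estimated_cost;
  std::cout << "Value / Cost = " << ratio << "\n";
  if (ratio <= 1.0) {
    std::cout << "This work consumes more money than the value it returns.\n";
  }
  return 0;
}

The point of the sketch is only that the comparison requires both numbers: you cannot evaluate the ratio, in code or in a business case, without an estimate of cost.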

Back to the Future on Estimating Software Development Cost (and Schedule)

  • The estimates aren't for the developers, although development management is a critical user of the estimates, and the developers are critical contributors to the estimates. The estimates are for the business and the decision processes of the business. Business takes funds and turns them into products and services. Bending metal into money is a common phrase in the hard-goods business. Bending software into money works for the software development (for money) business, internal or external.
  • Developers are closest to the work, but they are not likely the best at estimating the work. That is not because they don't know what needs to be done - if they don't know what needs to be done, they'll produce bad estimates no matter what.
  • Decisions on how to spend the company's money start and end - in any profit-focused company - with when will we earn back our investment?
    • What's our profit margin?
    • What's our ROI, IRR, breakeven day, analysis of alternatives?
    • What staffing impacts will result from starting this project?
    • When will this project complete, so we know when the spend profile and staffing burden will change?
    • What's the expected completion date, so we can transition to the business and then to operations, and know when to switch from what we are doing now to the new operation?
  • Knowing the cost of the products or services provided in exchange for money is the basis of any business strategy.
    • People give you money.
    • You expend effort (cost).
    • The difference you keep, minus your overhead, fringe, and benefits.
    • If you wait till the end to do this calculation, it's too late to change the course of your efforts.

This approach comes from early in my career, a time when Barry Boehm worked at TRW along with me and thousands of others. My boss took us young guns aside one day and gave us his standard speech on a payday Friday.

Everyone take out your paycheck. Look in the upper left corner. It says Bank of America and TRW and the branch address. That's NOT where the money comes from to pay you. The money comes from the US Air Force (TDRSS was our program). They pay us to deliver the Statement of Work items for the amount of money we said we would, on the date (more or less) we said we would. Keep doing that - within the allowable variances - and you'll keep getting those checks you can take across the street to deposit in your account.

The customer provides money to do the work needed to provide value - in the TDRSS case, providing the internet in space. When the cost of providing the value exceeds the budget for providing the value, we lose money and likely go out of business.

Not knowing when you're headed for trouble on cost, schedule, or technical performance is simply bad business. So having an estimated performance target to steer against is mandatory for business success. It can't be any clearer than that. Without that steering target your project is under open loop management, and you'll be in the ditch before you can steer away from it.

Related articles Facts and Fallacies of Estimating Software Cost and Schedule
Categories: Project Management

A Really Simple Checklist For Change Readiness Assessment

A wrapping paper change barbarian

When you begin a change process it is sometimes important to have a reminder of the critical points required to start the journey. If you were getting ready for a vacation, the checklist might include identifying who is going and who is in charge, deciding on a destination, a map, hotel reservations, and a list of people to tell that you are going. Beginning a process improvement project is not very different. I have constructed a simple checklist with five of the most critical requirements for preparing to embrace a journey of change. My critical five are:

1. An Identified Goal

2. Proper Sponsorship

3. Sufficient Budget

4. A Communication Strategy and Plan

5. A Tactical Plan

The first item on the checklist is an identified goal. The goal is the definition of where you want to go - the destination in the vacation analogy. A goal provides direction and a filter to understand the potential impact of constraints. Examples of goals can range from something as simple as “reduce the cost of projects” to something as complex as “attain CMMI Maturity Level 5”. The goal also sets the table for discussing all of the other items on the checklist, such as the required budget. One piece of advice: make sure your goal can be concisely and simply stated. In my opinion, simplicity increases the chance that the goal will be broadly remembered, which reduces the number of times you will need to explain it, which in turn increases the amount of time available for progress.

Proper sponsorship is next on the list. Sponsorship is important because it provides the basis for the authority needed to propel change. There are many different types and levels of sponsorship. The word “proper” is used in this line item to remind you that no one type of sponsorship fits all events and organizational cultures. One flavor of sponsor is “the barbarian.” The barbarian will lead the charge, but is typically less collaborative and more of a command-driven personality. Barbarians tend to be viewed as zealots who harness their belief structure to provide single-minded energy toward the goal they are focused on. Having a barbarian as a sponsor can infuse change projects with an enormous amount of power. The bookend to the barbarian type of sponsorship is “the bureaucrat.” Sponsorship from a bureaucrat is very different. Instead of leading the charge, bureaucrats tend to organize and control it. They may provide guidance, but they rarely get directly involved in the fray. These examples show two varieties of sponsorship, each of which fits different organizations. In a life-or-death situation I would want a barbarian for a sponsor; if I were effecting incremental changes in a command-and-control organization, the bureaucrat would make more sense. Remember, sponsorship is important because sponsorship gives you access to power.

Budget is next on the checklist. The term budget covers a wide range of ground, from money to the availability of human resources (effort). The budget answers the question “how much of the organization's formal resources can you apply?” The budget that ends up being identified to support change is always less than what seems to be needed. Use this constraint as a tool to motivate your team to find innovations on the way to attaining the goal, rather than as a reason to rein in your goal.

The first plan I recommend building is an organizational change management plan (OCMP). The OCMP is a means to frame how your project is going to transform the future state of the organization. It integrates the project roles and responsibilities with the requirements for communication, training, oversight, and reporting, and with the strategies for addressing resistance and reinforcement activities. We will address the concepts of organizational change management in a later essay. The OCMP is a mixture of a high-level map and a how-to document; it is critical for ensuring you are as focused on how you are changing the organization as on the tasks required to define and implement specific processes.

Finally, you will need a tactical plan that lays out the tasks you need to accomplish and the order in which they need to be done. The focus and breadth of the tactical plan will differ depending on the project management technique you use. For example, if you use a time-boxed technique like Scrum, your tactical plan will focus on identifying tasks for the current sprint based on the backlog of items required to reach your goal. Regardless of the planning technique - sprint backlog or detailed project schedule, it doesn't matter - you must have a tactical plan or risk falling into random activity. Use the technique that conforms to your project's needs and your organization's culture. The bottom line is that you need to understand the activities, and the order in which they occur, to get to your goal.

Change is difficult to accomplish in the best of times and almost impossible if you fail to start properly. The simple checklist for change readiness described in this paper was developed to help you focus on a set of topics that need to be considered when beginning any process improvement project. Are there other areas that should be on the list? Can each topic area be decomposed into finer levels of granularity? I believe the answer to both is certainly yes, and I would urge you to augment and decompose the list - and, further, to share your results. In any case, a checklist that focuses you on getting your sponsorship, goals, budget, and plans in order can only help you start well.


Categories: Process Management

"I'll Send You the Deck"

Software Architecture Zen - Pete Cripp - Fri, 03/28/2014 - 00:02
Warning, this is a rant!

I’m sure we’ve all been here. You’re in a meeting or on a conference call or just having a conversation with a colleague discussing some interesting idea or proposal which he or she has previous experience of and at some point they issue the immortal words “I’ll send you the deck”.

Read more...
Categories: Architecture

Two New Videos About Testing at Google

Google Testing Blog - Thu, 03/27/2014 - 22:41
by Anthony Vallone

We have two excellent, new videos to share about testing at Google. If you are curious about the work that our Test Engineers (TEs) and Software Engineers in Test (SETs) do, you’ll find both of these videos very interesting.

The Life at Google team produced a video series called Do Cool Things That Matter. This series includes a video of an SET and a TE (Sean Jordan and Yvette Nameth) discussing their work on the Google Maps team.

Meet Yvette and Sean from the Google Maps Test Team



The Google Students team hosted a Hangouts On Air event with several Google SETs (Diego Salas, Karin Lundberg, Jonathan Velasquez, Chaitali Narla, and Dave Chen) discussing the SET role.

Software Engineers in Test at Google - Covering your (Code)Bases



Interested in joining the ranks of TEs or SETs at Google? Search for Google test jobs.

Categories: Testing & QA

Optimal Logging

Google Testing Blog - Thu, 03/27/2014 - 22:41
by Anthony Vallone

How long does it take to find the root cause of a failure in your system? Five minutes? Five days? If you answered close to five minutes, it’s very likely that your production system and tests have great logging. All too often, seemingly unessential features like logging, exception handling, and (dare I say it) testing are an implementation afterthought. Like exception handling and testing, you really need to have a strategy for logging in both your systems and your tests. Never underestimate the power of logging. With optimal logging, you can even eliminate the necessity for debuggers. Below are some guidelines that have been useful to me over the years.


Channeling Goldilocks

Never log too much. Massive, disk-quota-burning logs are a clear indicator that little thought was put into logging. If you log too much, you’ll need to devise complex approaches to minimize disk access, maintain log history, archive large quantities of data, and query these large sets of data. More importantly, you’ll make it very difficult to find valuable information in all the chatter.

The only thing worse than logging too much is logging too little. There are normally two main goals of logging: help with bug investigation and event confirmation. If your log can’t explain the cause of a bug or whether a certain transaction took place, you are logging too little.

Good things to log:
  • Important startup configuration
  • Errors
  • Warnings
  • Changes to persistent data
  • Requests and responses between major system components
  • Significant state changes
  • User interactions
  • Calls with a known risk of failure
  • Waits on conditions that could take measurable time to satisfy
  • Periodic progress during long-running tasks
  • Significant branch points of logic and conditions that led to the branch
  • Summaries of processing steps or events from high level functions - Avoid logging every step of a complex process in low-level functions.

Bad things to log:
  • Function entry - Don’t log a function entry unless it is significant or logged at the debug level.
  • Data within a loop - Avoid logging from many iterations of a loop. It is OK to log from iterations of small loops or to log periodically from large loops.
  • Content of large messages or files - Truncate or summarize the data in some way that will be useful to debugging.
  • Benign errors - Errors that are not really errors can confuse the log reader. This sometimes happens when exception handling is part of successful execution flow.
  • Repetitive errors - Do not repetitively log the same or similar error. This can quickly fill a log and hide the actual cause. Frequency of error types is best handled by monitoring. Logs only need to capture detail for some of those errors.


There is More Than One Level

Don't log everything at the same log level. Most logging libraries offer several log levels, and you can enable certain levels at system startup. This provides a convenient control for log verbosity.

The classic levels are:
  • Debug - verbose and only useful while developing and/or debugging.
  • Info - the most popular level.
  • Warning - strange or unexpected states that are acceptable.
  • Error - something went wrong, but the process can recover.
  • Critical - the process cannot recover, and it will shut down or restart.

Practically speaking, only two log configurations are needed:
  • Production - Every level is enabled except debug. If something goes wrong in production, the logs should reveal the cause.
  • Development & Debug - While developing new code or trying to reproduce a production issue, enable all levels.
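
As a rough sketch of how level gating works in code - assuming a hypothetical Logger class rather than any particular logging library:

#include <iostream>
#include <string>

// Hypothetical level-gated logger, for illustration only.
enum class LogLevel { kDebug, kInfo, kWarning, kError, kCritical };

class Logger {
 public:
  explicit Logger(LogLevel min_level) : min_level_(min_level) {}

  // Messages below the configured level are dropped.
  void Log(LogLevel level, const std::string& message) {
    if (level < min_level_) return;
    std::cerr << message << "\n";
  }

 private:
  LogLevel min_level_;
};

int main() {
  Logger production(LogLevel::kInfo);  // production: everything except debug
  Logger debugging(LogLevel::kDebug);  // development/debug: all levels

  production.Log(LogLevel::kDebug, "dropped in production");
  production.Log(LogLevel::kError, "always recorded");
  debugging.Log(LogLevel::kDebug, "recorded while debugging");
  return 0;
}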


Test Logs Are Important Too

Log quality is equally important in test and production code. When a test fails, the log should clearly show whether the failure was a problem with the test or production system. If it doesn't, then test logging is broken.

Test logs should always contain:
  • Test execution environment
  • Initial state
  • Setup steps
  • Test case steps
  • Interactions with the system
  • Expected results
  • Actual results
  • Teardown steps


Conditional Verbosity With Temporary Log Queues

When errors occur, the log should contain a lot of detail. Unfortunately, detail that led to an error is often unavailable once the error is encountered. Also, if you’ve followed advice about not logging too much, your log records prior to the error record may not provide adequate detail. A good way to solve this problem is to create temporary, in-memory log queues. Throughout processing of a transaction, append verbose details about each step to the queue. If the transaction completes successfully, discard the queue and log a summary. If an error is encountered, log the content of the entire queue and the error. This technique is especially useful for test logging of system interactions.
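
Here is a bare-bones sketch of the technique; the class name and structure are illustrative, not from any particular codebase:

#include <iostream>
#include <string>
#include <vector>

// Conditional verbosity: buffer verbose detail in memory and only
// emit it if the transaction fails.
class TransactionLog {
 public:
  void AddDetail(const std::string& detail) { queue_.push_back(detail); }

  // On success, discard the queue and log a summary.
  void Succeed(const std::string& summary) {
    queue_.clear();
    std::cerr << "INFO: " << summary << "\n";
  }

  // On failure, dump the entire queue followed by the error.
  void Fail(const std::string& error) {
    for (const std::string& detail : queue_) {
      std::cerr << "DETAIL: " << detail << "\n";
    }
    queue_.clear();
    std::cerr << "ERROR: " << error << "\n";
  }

 private:
  std::vector<std::string> queue_;
};

A transaction handler would call AddDetail at each step, then Succeed or Fail once the outcome is known.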


Failures and Flakiness Are Opportunities

When production problems occur, you’ll obviously be focused on finding and correcting the problem, but you should also think about the logs. If you have a hard time determining the cause of an error, it's a great opportunity to improve your logging. Before fixing the problem, fix your logging so that the logs clearly show the cause. If this problem ever happens again, it’ll be much easier to identify.

If you cannot reproduce the problem, or you have a flaky test, enhance the logs so that the problem can be tracked down when it happens again.

Using failures to improve logging should happen throughout the development process. While writing new code, try to refrain from using debuggers and rely only on the logs. Do the logs describe what is going on? If not, the logging is insufficient.


Might As Well Log Performance Data

Logged timing data can help debug performance issues. For example, it can be very difficult to determine the cause of a timeout in a large system, unless you can trace the time spent on every significant processing step. This can be easily accomplished by logging the start and finish times of calls that can take measurable time:
  • Significant system calls
  • Network requests
  • CPU intensive operations
  • Connected device interactions
  • Transactions
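
One simple way to capture this data is a scoped timer that logs the start and finish of a block. A minimal sketch, with hypothetical names:

#include <chrono>
#include <iostream>
#include <string>

// Logs the start of a block on construction and the elapsed time on
// destruction. Illustrative only.
class ScopedTimer {
 public:
  explicit ScopedTimer(std::string label)
      : label_(std::move(label)), start_(std::chrono::steady_clock::now()) {
    std::cerr << label_ << ": start\n";
  }

  ~ScopedTimer() {
    auto elapsed = std::chrono::steady_clock::now() - start_;
    std::cerr << label_ << ": finished in "
              << std::chrono::duration_cast<std::chrono::milliseconds>(elapsed)
                     .count()
              << " ms\n";
  }

 private:
  std::string label_;
  std::chrono::steady_clock::time_point start_;
};

void FetchUserProfile() {
  ScopedTimer timer("fetch_user_profile");  // logs start and finish times
  // ... network request that can take measurable time ...
}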


Following the Trail Through Many Threads and Processes

You should create unique identifiers for transactions that involve processing across many threads and/or processes. The initiator of the transaction should create the ID, and it should be passed to every component that performs work for the transaction. This ID should be logged by each component when logging information about the transaction. This makes it much easier to trace a specific transaction when many transactions are being processed concurrently.
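
A stripped-down sketch of the idea follows. For brevity the ID here is a process-local counter; a real distributed system would need a globally unique ID, such as a large random number minted by the initiator.

#include <atomic>
#include <cstdint>
#include <iostream>

std::atomic<uint64_t> next_transaction_id{1};

// Every component tags its log lines with the transaction ID.
void LogWithId(uint64_t txn_id, const char* component, const char* message) {
  std::cerr << "txn=" << txn_id << " [" << component << "] " << message << "\n";
}

void StorageComponent(uint64_t txn_id) {
  LogWithId(txn_id, "storage", "writing record");
}

void HandleRequest() {
  // The initiator of the transaction creates the ID...
  const uint64_t txn_id = next_transaction_id.fetch_add(1);
  LogWithId(txn_id, "frontend", "request received");
  // ...and passes it to every component that works on the transaction.
  StorageComponent(txn_id);
}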


Monitoring and Logging Complement Each Other

A production service should have both logging and monitoring. Monitoring provides a real-time statistical summary of the system state. It can alert you if a percentage of certain request types is failing, if the system is experiencing unusual traffic patterns, if performance is degrading, or if other anomalies occur. In some cases, this information alone will clue you in to the cause of a problem. However, in most cases, a monitoring alert is simply a trigger for you to start an investigation. Monitoring shows the symptoms of problems. Logs provide details and state on individual transactions, so you can fully understand the cause of problems.

Categories: Testing & QA

The Google Test and Development Environment - Pt. 1: Office and Equipment

Google Testing Blog - Thu, 03/27/2014 - 22:40
by Anthony Vallone

When conducting interviews, I often get questions about our workspace and engineering environment. What IDEs do you use? What programming languages are most common? What kind of tools do you have for testing? What does the workspace look like?

Google is a company that is constantly pushing to improve itself. Just like software development itself, most environment improvements happen via a bottom-up approach. All engineers are responsible for fine-tuning, experimenting with, and improving our process, with a goal of eliminating barriers to creating products that amaze.

Office space and engineering equipment can have a considerable impact on productivity. I’ll focus on these areas of our work environment in this first article of a series on the topic.

Office layout

Google is a highly collaborative workplace, so the open floor plan suits our engineering process. Project teams composed of Software Engineers (SWEs), Software Engineers in Test (SETs), and Test Engineers (TEs) all sit near each other or in large rooms together. The test-focused engineers are involved in every step of the development process, so it’s critical for them to sit with the product developers. This keeps the lines of communication open.

Google Munich
The office space is far from rigid, and teams often rearrange desks to suit their preferences. The facilities team recently finished renovating a new floor in the New York City office, and after a day of engineering debates on optimal arrangements and white board diagrams, the floor was completely transformed.

Besides the main office areas, there are lounge areas to which Googlers go for a change of scenery or a little peace and quiet. If you are trying to avoid becoming a casualty of The Great Foam Dart War, lounges are a great place to hide.

Google Dublin
Working with remote teams

Google’s worldwide headquarters is in Mountain View, CA, but it’s a very global company, and our project teams are often distributed across multiple sites. To help keep teams well connected, most of our conference rooms have video conferencing equipment. We make frequent use of this equipment for team meetings, presentations, and quick chats.

Google Boston
What’s at your desk?

All engineers get high-end machines and have easy access to data center machines for running large tasks. A new member on my team recently mentioned that his Google machine has 16 times the memory of the machine at his previous company.

Most Google code runs on Linux, so the majority of development is done on Linux workstations. However, those who work on client code for Windows, OS X, or mobile develop on the relevant OSes. For displays, each engineer has a choice of either two 24-inch monitors or one 30-inch monitor. We also get our choice of laptop, picking from various models of Chromebook, MacBook, or Linux machine. These come in handy when going to meetings or lounges, or when working remotely.

Google Zurich
Thoughts?

We are interested to hear your thoughts on this topic. Do you prefer an open-office layout, cubicles, or private offices? Should test teams be embedded with development teams, or should they operate separately? Do the benefits of offering engineers high-end equipment outweigh the costs?

(Continue to part 2)
Categories: Testing & QA

The Google Test and Development Environment - Pt. 2: Dogfooding and Office Software

Google Testing Blog - Thu, 03/27/2014 - 22:40
by Anthony Vallone

This is the second in a series of articles about our work environment. See the first.

There are few things as frustrating as getting hampered in your work by a bug in a product you depend on. What if it’s a product developed by your company? Do you report/fix the issue or just work around it and hope it’ll go away soon? In this article, I’ll cover how and why Google dogfoods its own products.

Dogfooding

Google makes heavy use of its own products. We have a large ecosystem of development/office tools and use them for nearly everything we do. Because we use them on a daily basis, we can dogfood releases company-wide before launching to the public. These dogfood versions often have features unavailable to the public but may be less stable. Instability is exactly what you want in your tools, right? Or, would you rather that frustration be passed on to your company’s customers? Of course not!

Dogfooding is an important part of our test process. Test teams do their best to find problems before dogfooding, but we all know that testing is never perfect. We often get dogfood bug reports for edge and corner cases not initially covered by testing. We also get many comments about overall product quality and usability. This internal feedback has, on many occasions, changed product design.

Not surprisingly, test-focused engineers often have a lot to say during the dogfood phase. I don’t think there is a single public-facing product that I have not reported bugs on. I really appreciate the fact that I can provide feedback on so many products before release.

Interested in helping to test Google products? Many of our products have feedback links built-in. Some also have Beta releases available. For example, you can start using Chrome Beta and help us file bugs.

Office software

From system design documents, to test plans, to discussions about beer brewing techniques, our products are used internally. A company’s choice of office tools can have a big impact on productivity, and it is fortunate for Google that we have such a comprehensive suite. The tools have a consistently simple UI (no manual required), perform very well, encourage collaboration, and auto-save in the cloud. Now that I am used to these tools, I would certainly have a hard time going back to the tools of previous companies where I have worked. I’m sure I would forget to click the save buttons for years to come.

Examples of frequently used tools by engineers:
  • Google Drive Apps (Docs, Sheets, Slides, etc.) are used for design documents, test plans, project data, data analysis, presentations, and more.
  • Gmail and Hangouts are used for email and chat.
  • Google Calendar is used to schedule all meetings, reserve conference rooms, and set up video conferencing using Hangouts.
  • Google Maps is used to map office floors.
  • Google Groups are used for email lists.
  • Google Sites are used to host team pages, engineering docs, and more.
  • Google App Engine hosts many corporate, development, and test apps.
  • Chrome is our primary browser on all platforms.
  • Google+ is used for organizing internal communities on topics such as food or C++, and for socializing.

Thoughts?

We are interested to hear your thoughts on this topic. Do you dogfood your company’s products? Do your office tools help or hinder your productivity? What office software and tools do you find invaluable for your job? Could you use Google Docs/Sheets for large test plans?

(Continue to part 3)
Categories: Testing & QA

The Google Test and Development Environment - Pt. 3: Code, Build, and Test

Google Testing Blog - Thu, 03/27/2014 - 22:40
by Anthony Vallone

This is the third in a series of articles about our work environment. See the first and second.

I will never forget the awe I felt when running my first load test on my first project at Google. At previous companies where I’ve worked, running a substantial load test took quite a bit of resource planning and preparation. At Google, I wrote less than 100 lines of code and was simulating tens of thousands of users after just minutes of prep work. The ease with which I was able to accomplish this is due to the impressive coding, building, and testing tools available at Google. In this article, I will discuss these tools and how they affect our test and development process.

Coding and building

The tools and process for coding and building make it very easy to change production and test code. Even though we are a large company, we have managed to remain nimble. In a matter of minutes or hours, you can edit, test, review, and submit code to head. We have achieved this without sacrificing code quality by heavily investing in tools, testing, and infrastructure, and by prioritizing code reviews.

Most production and test code is in a single, company-wide source control repository (open source projects like Chromium and Android have their own). There is a great deal of code sharing in the codebase, and this provides an incredible suite of code to build on. Most code is also in a single branch, so the majority of development is done at head. All code is also navigable, searchable, and editable from the browser. You’ll find code in numerous languages, but Java, C++, Python, Go, and JavaScript are the most common.

Have a strong preference for an editor? Engineers are free to choose from many IDEs and editors. The most common are Eclipse, Emacs, Vim, and IntelliJ, but many others are used as well. Engineers who are passionate about their preferred editors have built up and shared some truly impressive editor plugins/tooling over the years.

Code reviews for all submissions are enforced via source control tooling. This also applies to test code, as our test code is held to the same standards as production code. The reviews are done via web-based code review tools that even include automatically generated test results. The process is very streamlined and efficient. Engineers can change and submit code in any part of the repository, but it must get reviewed by owners of the code being changed. This is great, because you can easily change code that your team depends on, rather than merely request a change to code you do not own.

The Google build system is used for building most code, and it is designed to work across many languages and platforms. It is remarkably simple to define and build targets. You won’t be needing that old Makefile book.

Running jobs and tests

We have some pretty amazing machine and job management tools at Google. There is a generally available pool of machines in many data centers around the globe. The job management service makes it very easy to start jobs on arbitrary machines in any of these data centers. Failing machines are automatically removed from the pool, so tests rarely fail due to machine issues. With a little effort, you can also set up monitoring and pager alerting for your important jobs.

From any machine you can spin up a massive number of tests and run them in parallel across many machines in the pool, via a single command. Each of these tests is run in a standard, isolated environment, so we rarely run into the “it works on my machine!” issue.

Before code is submitted, presubmit tests can be run that will find all tests that depend transitively on the change and run them. You can also define presubmit rules that run checks on a code change and verify that tests were run before allowing submission.

Once you’ve submitted test code, the build and test system automatically registers the test and starts building/testing continuously. If the test starts failing, your team will get notification emails. You can also visit a test dashboard for your team and get details about test runs and test data. Monitoring the build/test status is made even easier with our build orbs, designed and built by Googlers. These small devices glow red if the build starts failing. Many teams have had fun customizing these orbs into various shapes, including a Statue of Liberty with a glowing torch.

Statue of LORBerty
Running larger integration and end-to-end tests takes a little more work, but we have some excellent tools to help with these tests as well: Integration test runners, hermetic environment creation, virtual machine service, web test frameworks, etc.

The impact

So how do these tools actually affect our productivity? For starters, the code is easy to find, edit, review, and submit. Engineers are free to choose tools that make them most productive. Before and after submission, running small tests is trivial, and running large tests is relatively easy. Since tests are easy to create and run, it’s fairly simple to maintain a green build, which most teams do most of the time. This allows us to spend more time on real problems and less on the things that shouldn’t even be problems. It allows us to focus on creating rigorous tests. It dramatically accelerates the development process that can prototype Gmail in a day and code/test/release service features on a daily schedule. And, of course, it lets us focus on the fun stuff.

Thoughts?

We are interested to hear your thoughts on this topic. Google has the resources to build tools like this, but would small or medium-sized companies benefit from a similar investment in their infrastructure? Did Google create the infrastructure, or did the infrastructure create Google?

Categories: Testing & QA

Minimizing Unreproducible Bugs

Google Testing Blog - Thu, 03/27/2014 - 22:39


by Anthony Vallone

Unreproducible bugs are the bane of my existence. Far too often, I find a bug, report it, and hear back that it’s not a bug because it can’t be reproduced. Of course, the bug is still there, waiting to prey on its next victim. These types of bugs can be very expensive due to increased investigation time and overall lifetime. They can also have a damaging effect on product perception when users reporting these bugs are effectively ignored. We should be doing more to prevent them. In this article, I’ll go over some obvious, and maybe not so obvious, development/testing guidelines that can reduce the likelihood of these bugs occurring.


Avoid and test for race conditions, deadlocks, timing issues, memory corruption, uninitialized memory access, memory leaks, and resource issues

I am lumping together many bug types in this section, but they are all related somewhat by how we test for them and how disproportionately hard they are to reproduce and debug. The root cause and effect can be separated by milliseconds or hours, and stack traces might be nonexistent or misleading. A system may fail in strange ways when exposed to unusual traffic spikes or insufficient resources. Race conditions and deadlocks may only be discovered during unique traffic patterns or resource configurations. Timing issues may only be noticed when many components are integrated and their performance parameters and failure/retry/timeout delays create a chaotic system. Memory corruption or uninitialized memory access may go unnoticed for a large percentage of calls but become fatal for rare states. Memory leaks may be negligible unless the system is exposed to load for an extended period of time.

Guidelines for development:

  • Simplify your synchronization logic. If it’s too hard to understand, it will be difficult to reproduce and debug complex concurrency problems.
  • Always obtain locks in the same order. This is a tried-and-true guideline to avoid deadlocks, but I still see code that breaks it periodically. Define an order for obtaining multiple locks and never change that order (see the sketch after this list).
  • Don’t optimize by creating many fine-grained locks, unless you have verified that they are needed. Extra locks increase concurrency complexity.
  • Avoid shared memory, unless you truly need it. Shared memory access is very easy to get wrong, and the bugs may be quite difficult to reproduce.
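
As a complement to the lock-ordering guideline above, the standard library can acquire multiple locks deadlock-free for you. A minimal C++11 sketch, illustrative only:

#include <mutex>

std::mutex account_mutex;
std::mutex ledger_mutex;

void Transfer() {
  // std::lock acquires both mutexes using a deadlock-avoidance
  // algorithm, regardless of the order callers name them in.
  std::lock(account_mutex, ledger_mutex);
  std::lock_guard<std::mutex> account_lock(account_mutex, std::adopt_lock);
  std::lock_guard<std::mutex> ledger_lock(ledger_mutex, std::adopt_lock);
  // ... update both structures under both locks ...
}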

Guidelines for testing:

  • Stress test your system regularly. You don't want to be surprised by unexpected failures when your system is under heavy load.
  • Test timeouts. Create tests that mock/fake dependencies to test timeout code. If your timeout code does something bad, it may cause a bug that only occurs under certain system conditions.
  • Test with debug and optimized builds. You may find that a well behaved debug build works fine, but the system fails in strange ways once optimized.
  • Test under constrained resources. Try reducing the number of data centers, machines, processes, threads, available disk space, or available memory. Also try simulating reduced network bandwidth.
  • Test for longevity. Some bugs require a long period of time to reveal themselves. For example, persistent data may become corrupt over time.
  • Use dynamic analysis tools like memory debuggers, ASan, TSan, and MSan regularly. They can help identify many categories of unreproducible memory/threading issues.


Enforce preconditions

I’ve seen many well-meaning functions with a high tolerance for bad input. For example, consider this function:

void ScheduleEvent(int timeDurationMilliseconds) {
  if (timeDurationMilliseconds <= 0) {
    timeDurationMilliseconds = 1;
  }
  ...
}

This function is trying to help the calling code by adjusting the input to an acceptable value, but it may be doing damage by masking a bug. The calling code may be experiencing any number of problems described in this article, and passing garbage to this function will always work fine. The more functions that are written with this level of tolerance, the harder it is to trace back to the root cause, and the more likely it becomes that the end user will see garbage. Enforcing preconditions, for instance by using asserts, may actually cause a higher number of failures for new systems, but as systems mature, and many minor/major problems are identified early on, these checks can help improve long-term reliability.
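
For contrast, a version of the same function that enforces its precondition rather than masking bad input might look like this - here using the standard assert macro, though many codebases prefer their own CHECK-style variant:

#include <cassert>

void ScheduleEvent(int timeDurationMilliseconds) {
  // Fail fast on garbage input so the bug surfaces near its source
  // instead of being silently adjusted away.
  assert(timeDurationMilliseconds > 0);
  // ... schedule the event ...
}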

Guidelines for development:

  • Enforce preconditions in your functions unless you have a good reason not to.


Use defensive programming

Defensive programming is another tried-and-true technique that is great at minimizing unreproducible bugs. If your code calls a dependency to do something, and that dependency quietly fails or returns garbage, how does your code handle it? You could test for situations like this via mocking or faking, but it’s even better to have your production code do sanity checking on its dependencies. For example:

double GetMonthlyLoanPayment() {
  double rate = GetTodaysInterestRateFromExternalSystem();
  if (rate < 0.001 || rate > 0.5) {
    throw BadInterestRate(rate);
  }
  ...
}

Guidelines for development:

  • When possible, use defensive programming to verify the work of your dependencies with known risks of failure like user-provided data, I/O operations, and RPC calls.

Guidelines for testing:

  • Use fuzz testing to test your system's hardiness when faced with bad data.


Don’t hide all errors from the user

There has been a trend in recent years toward hiding failures from users at all costs. In many cases, it makes perfect sense, but in some, we have gone overboard. Code that is very quiet and permissive during minor failures will allow an uninformed user to continue working in a failed state. The software may ultimately reach a fatal tipping point, and all the error conditions that led to failure have been ignored. If the user doesn’t know about the prior errors, they will not be able to report them, and you may not be able to reproduce them.

Guidelines for development:

  • Only hide errors from the user when you are certain that there is no impact to system state or the user.
  • Any error with impact to the user should be reported to the user with instructions for how to proceed. The information shown to the user, combined with data available to an engineer, should be enough to determine what went wrong.


Test error handling

The most common section of code to remain untested is error handling code. Don’t skip test coverage here. Bad error handling code can cause unreproducible bugs, and it creates great risk if it does not handle fatal errors well.

Guidelines for testing:

  • Always test your error handling code. This is usually best accomplished by mocking or faking the component triggering the error.
  • It’s also a good practice to examine your log quality for all types of error handling.


Check for duplicate keys

If unique identifiers or data access keys are generated using random data or are not guaranteed to be globally unique, duplicate keys may cause data corruption or concurrency issues. Key duplication bugs are very difficult to reproduce.

Guidelines for development:

  • Try to guarantee uniqueness of all keys.
  • When it is not possible to guarantee unique keys, check whether a newly generated key is already in use before using it (see the sketch after this list).
  • Watch out for potential race conditions here and avoid them with synchronization.
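
A sketch of that generate-and-check approach, with the check guarded by a lock to avoid the race. The names are illustrative, and many real systems push this check into the datastore itself:

#include <mutex>
#include <random>
#include <string>
#include <unordered_set>

class KeyAllocator {
 public:
  std::string Allocate() {
    std::lock_guard<std::mutex> lock(mutex_);  // no check/insert races
    std::string key;
    do {
      key = GenerateRandomKey();
    } while (in_use_.count(key) > 0);  // retry until the key is unused
    in_use_.insert(key);
    return key;
  }

 private:
  std::string GenerateRandomKey() {
    static const char kAlphabet[] = "abcdefghijklmnopqrstuvwxyz0123456789";
    std::uniform_int_distribution<int> pick(0, sizeof(kAlphabet) - 2);
    std::string key;
    for (int i = 0; i < 8; ++i) key += kAlphabet[pick(rng_)];
    return key;
  }

  std::mutex mutex_;
  std::mt19937 rng_{std::random_device{}()};
  std::unordered_set<std::string> in_use_;
};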


Test for concurrent data access

Some bugs only reveal themselves when multiple clients are reading/writing the same data. Your stress tests might cover cases like these, but if they do not, you should have special tests for concurrent data access. Cases like these are often unreproducible. For example, a user may have two instances of your app running against the same account and not realize it when reporting a bug.

Guidelines for testing:

  • Always test for concurrent data access if it’s a feature of the system. Actually, even if it’s not a feature, verify that the system rejects concurrent access. Testing concurrency can be challenging. An approach that usually works for me is to create many worker threads that simultaneously attempt access, plus a master thread that monitors and verifies that some number of attempts were indeed concurrent, that each was blocked or allowed as expected, and that all were successful (see the sketch below). Programmatic post-analysis of all attempts and of the changing system state may also be necessary to ensure that the system behaved well.
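
Here is a condensed sketch of that worker/master approach. Atomic counters stand in for the real verification logic, and the system under test is elided:

#include <atomic>
#include <thread>
#include <vector>

std::atomic<int> in_flight{0};
std::atomic<int> max_concurrent{0};
std::atomic<int> successes{0};

void Worker() {
  // Track how many workers are inside the critical section at once.
  int now = ++in_flight;
  int prev = max_concurrent.load();
  while (now > prev && !max_concurrent.compare_exchange_weak(prev, now)) {
  }
  // ... perform the concurrent read/write against the system under test ...
  ++successes;
  --in_flight;
}

int main() {
  std::vector<std::thread> workers;
  for (int i = 0; i < 100; ++i) workers.emplace_back(Worker);
  for (auto& t : workers) t.join();
  // The "master" check: some attempts overlapped and all succeeded.
  return (max_concurrent.load() > 1 && successes.load() == 100) ? 0 : 1;
}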


Steer clear of undefined behavior and non-deterministic access to data

Some APIs and basic operations have warnings about undefined behavior when in certain states or provided with certain input. Similarly, some data structures do not guarantee an iteration order (example: Java’s Set). Code that ignores these warnings may work fine most of the time but fail in unusual ways that are hard to reproduce.

Guidelines for development:

  • Understand when the APIs and operations you use might have undefined behavior and prevent those conditions.
  • Do not depend on data structure iteration order unless it is guaranteed. It is a common mistake to depend on the ordering of sets or associative arrays (see the sketch after this list).
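
A small illustration of the iteration-order point, using C++'s std::unordered_map, whose order is unspecified, and imposing an explicit order instead:

#include <algorithm>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

int main() {
  std::unordered_map<std::string, int> counts = {
      {"alpha", 1}, {"beta", 2}, {"gamma", 3}};

  // Risky: iterating counts directly may produce a different order on
  // another run, platform, or library version.
  std::vector<std::string> keys;
  for (const auto& entry : counts) keys.push_back(entry.first);

  // Safe: impose a deterministic order explicitly.
  std::sort(keys.begin(), keys.end());
  for (const auto& key : keys) {
    std::cout << key << " = " << counts[key] << "\n";
  }
  return 0;
}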


Log the details for errors or test failures

Issues described in this article can be easier to reproduce and debug when the logs contain enough detail to understand the conditions that led to an error.

Guidelines for development:

  • Follow good logging practices, especially in your error handling code.
  • If logs are stored on a user’s machine, create an easy way for them to provide you the logs.

Guidelines for testing:

  • Save your test logs for potential analysis later.


Anything to add?

Have I missed any important guidelines for minimizing these bugs? What is your favorite hard-to-reproduce bug that you discovered and resolved?

Categories: Testing & QA

Taking Chrome Experiments to the TV

Google Code Blog - Thu, 03/27/2014 - 19:00
By Igor Clark, Google Creative Lab

With the release of the Google Cast SDK, making interactive experiences for the TV is now as easy as making interactive stuff for the web.

Google Creative Lab and Hook Studios took the SDK for a spin to make Photowall for Chromecast: a new Chrome Experiment that lets people collaborate with images on the TV.

Anyone with a Chromecast can set up a Photowall on their TV and have friends start adding photos to it from their phones and tablets in real time.

So how does it work?

The wall-hosting apps communicate with the Chromecast using the Google Cast SDK’s sender and receiver APIs. A simple call to the requestSession method using the Chrome API or launchApplication on the iOS/Android APIs is all it takes to get started. From there, communication with the Chromecast is helped along using the Receiver API’s getCastMessageBus method and a sendMessage call from the Chrome, iOS or Android APIs.

Using the Google Cast SDK makes it easy to launch a session on a Chromecast device. While a host is creating their new Photowall, they simply select which Chromecast they would like to use for displaying the photos. After a few simple steps, a unique five-digit code is generated that allows guests to connect to the wall from their mobile devices.

The Chromecast device then loads the Photowall application and begins waiting for setup to complete. Once ready, the Chromecast displays the newly-generated wall code and waits for photos to start rolling in. If at any point the Chromecast loses power or internet connection, the device can be relaunched with an existing Photowall right from the administration dashboard.

Tying it all together: The mesh

A mesh network connects the Photowall’s host, the photo-sharing guests, and the Chromecast. The devices communicate with each other via websockets managed by a Google Compute Engine-powered node.js server application. A Google App Engine app coordinates wall creation, authentication and photo storage on the server side, using the App Engine Datastore.

After a unique code has been generated during the Photowall creation process, the App Engine app looks for a Compute Engine instance to use for websocket communication. The instance is then told to route websocket traffic flagged with the new wall’s unique code to all devices that are members of the Photowall with that code.

The instance’s address and the wall code are returned to the App Engine app. When a guest enters the wall code into the photo-sharing app in their browser, the App Engine app returns the address of the Compute Engine websocket server associated with that code. The app then connects to that server and joins the appropriate websocket/mesh network, allowing two-way communication between the host and the guests.

Why is this necessary? If a guest uploads a photo that the host decides to delete for whatever reason, the guest needs to be notified immediately so that they don’t try to take further action on it themselves.

A workaround for websockets

Using websockets this way proved to be challenging on iOS devices. When a device is locked or goes to sleep, the websocket connection should be terminated. However, on iOS it seems that JavaScript execution can be halted before the websocket close event fires. This means we are unaware of the disconnection, and when the phone is unlocked we still do not know that the connection has been dropped.

To get around this inconsistent websocket disconnection issue, we implemented a check approximately every 5 seconds to examine the ready state of the socket. If it has disconnected we reconnect and continue monitoring. Messages are buffered in the event of a disconnection and sent in order when a connection is reestablished.

Custom photo editing

The heart of the Photowall mobile web application is photo uploading. We created a custom photo editing experience for guests wanting to add their photos to a Photowall. They can upload photos directly from their device’s camera or choose one from its gallery. Then comes the fun stuff: cropping, doodling and captioning.

Photowall for Chromecast has been a fun opportunity to throw out everything we thought we knew about what a photo slideshow could be. And it’s just one example of what the Chromecast is capable of beyond content streaming. We’ve barely scratched the surface of what the Google Cast SDK can do. We’re excited to see what’s next for Chromecast apps, and to build another.

For more on what’s under the hood of Photowall for Chromecast, you can tune in to our Google Developers Live event for an in-depth discussion on Thursday, April 3rd, 2014 at 2pm PDT.

Igor Clark is Creative Tech Lead at Google Creative Lab. The Creative Lab is a small team of makers and thinkers whose mission is to remind the world what it is they love about Google.

Posted by Louis Gray, Googler
Categories: Programming

Strategy: Cache Stored Procedure Results

Caching is not new of course, but I don't think I've heard of caching stored procedure results before. It's like memoization in the database. Brent Ozar covers this idea in How to Cache Stored Procedure Results.

The benefits are the usual ones for doing work in the database: it doesn't require per-developer, per-app work; just code it once in the stored proc and it works for everyone, everywhere, for all time. The disadvantage is the usual as well: it adds extra load to a probably already busy database, so it should only be applied to heavy computations.

Brent positions this strategy as an emergency bandaid to apply when you need to take pressure off a database now. Developers can then work on moving the cache off the database and into its own tier. Interesting idea. And as the comments show the implementation is never as simple as it seems.
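As a sketch of that follow-up step, here is what a minimal application-tier memoization wrapper might look like in TypeScript; the proc-running function and the TTL are hypothetical placeholders:

    // Memoize an expensive stored-proc call in the app tier instead of
    // in the database. runExpensiveProc and the TTL are placeholders.
    const cache = new Map<string, { value: unknown; expires: number }>();

    async function cachedProcResult(
      key: string,
      runExpensiveProc: () => Promise<unknown>,
      ttlMs = 60_000
    ): Promise<unknown> {
      const hit = cache.get(key);
      if (hit && hit.expires > Date.now()) return hit.value; // cache hit
      const value = await runExpensiveProc();                // cache miss
      cache.set(key, { value, expires: Date.now() + ttlMs });
      return value;
    }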

Categories: Architecture

How To Market Yourself as a Software Developer Is LIVE!

Making the Complex Simple - John Sonmez - Thu, 03/27/2014 - 16:00

Well, the day has finally arrived. A HUGE amount of work went into creating this package and many sleepless nights were spent getting everything ready, but… How To Market Yourself as a Software Developer is now LIVE! You can go here to get the course. If you have any questions, don’t hesitate to ask. And […]

The post How To Market Yourself as a Software Developer Is LIVE! appeared first on Simple Programmer.

Categories: Programming

My Views On Test Driven Development

Making the Complex Simple - John Sonmez - Thu, 03/27/2014 - 16:00

I used to be a huge supporter of TDD, but lately, I’ve become much more pragmatic about it. In this video, I discuss test driven development and what my current views on it are. Full transcript: John:               Hey, John Sonmez from simpleprogrammer.com.  This week, I want to talk about, yes, a technical topic.  I’m going […]

The post My Views On Test Driven Development appeared first on Simple Programmer.

Categories: Programming

Bringing together the best of PaaS and IaaS

Google Code Blog - Thu, 03/27/2014 - 16:00
By Navneet Joneja, Cloud Platform Team

Cross-posted from the Google Cloud Platform blog

Editor’s note: This post is a follow-up to the announcements we made on March 25th at Google Cloud Platform Live.

For many developers, building a cloud-native application begins with a fundamental decision. Are you going to build it on Infrastructure as a Service (IaaS), or Platform as a Service (PaaS)? Will you build large pieces of plumbing yourself so that you have complete flexibility and control, or will you cede control over the environment to get high productivity?

You shouldn’t have to choose between the openness, flexibility and control of IaaS, and the productivity and auto-management of PaaS. Describing solutions declaratively and taking advantage of intelligent management systems that understand and manage deployments leads to higher availability and quality of service. This frees engineers up to focus on writing code and significantly reduces the need to carry a pager.

Today, we’re introducing Managed Virtual Machines and Deployment Manager. These are our first steps towards enabling developers to have the best of both worlds.

Managed Virtual Machines

At Google Cloud Platform Live we took the first step towards ending the PaaS/IaaS dichotomy by introducing Managed Virtual Machines. With Managed VMs, you can build your application (or components of it) using virtual machines running in Google Compute Engine while benefiting from the auto-management and services that Google App Engine provides. This allows you to easily use technology that isn’t built into one of our managed runtimes, whether that is a different programming language, native code, or direct access to the file system or network stack. Further, if you find you need to ssh into a VM in order to debug a particularly thorny issue, it’s easy to “break glass” and do just that.

Moving from an App Engine runtime to a managed VM can be as easy as adding one line to your app.yaml file:

vm: true
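For context, a complete app.yaml might look something like this minimal, illustrative sketch (the runtime and handler values are placeholders, not taken from a real app):

    # Illustrative app.yaml sketch -- runtime and handler values are
    # placeholders, not taken from a real Photowall or sample app.
    runtime: python27
    api_version: 1
    threadsafe: true
    vm: true          # the one line that moves this app onto a Managed VM
    handlers:
    - url: /.*
      script: main.app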

At Cloud Platform Live, we also demonstrated how the next stage in the evolution of Managed VMs will allow you to bring your own runtime to App Engine, so you won’t be limited to the runtimes we support out of the box.

Managed Virtual Machines will soon launch in Limited Preview, and you can request access here starting today.

Introducing Google Cloud Deployment Manager

A key part of deploying software at scale is ensuring that configuration happens automatically from a single source of truth. This is because accumulated manual configuration often results in “snowflakes” - components that are unique and almost impossible to replicate - which in turn makes services harder to maintain, scale and troubleshoot.

These best practices are baked into the App Engine and Managed VM toolchains. Now, we’d like to make it easy for developers who are using unmanaged VMs to also take advantage of declarative configuration and foundational management capabilities like health-checking and auto-scaling. So, we’re launching Google Cloud Deployment Manager - a new service that allows you to create declarative deployments of Cloud Platform resources that can then be created, actively health monitored, and auto-scaled as needed.

Deployment Manager gives you a simple YAML syntax to create parameterizable templates that describe your Cloud Platform projects, including:
  • The attributes of any Compute Engine virtual machines (e.g. instance type, network settings, persistent disk, VM metadata).
  • Health checks and auto-scaling
  • Startup scripts that can be used to launch applications or other configuration management software (like Puppet, Chef or SaltStack)
Templates can be re-used across multiple deployments; an illustrative sketch follows below. Over time, we expect to extend Deployment Manager to cover additional Cloud Platform resources.
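The exact template schema is covered in the documentation linked below; purely as an illustration of the declarative idea, a template might read along these lines (all field names here are hypothetical, not the real schema):

    # Hypothetical sketch only -- see the Deployment Manager docs for
    # the real schema. The point: declare the VM shape, a health check,
    # and autoscaling bounds once, then stamp out deployments from it.
    name: photowall-frontend
    instanceTemplate:
      machineType: n1-standard-1
      zone: us-central1-a
      startupScript: |
        #!/bin/bash
        # e.g. bootstrap Puppet/Chef/SaltStack or start the app directly
        /opt/app/start.sh
    healthCheck:
      path: /healthz
      port: 8080
    autoscaling:
      minReplicas: 2
      maxReplicas: 10
      targetCpuUtilization: 0.6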

Deployment Manager enables you to think in terms of logical infrastructure: you describe your service declaratively and let Google’s management systems deploy it and manage its health on your behalf. Please see the Deployment Manager documentation to learn more and to sign up for the Limited Preview.

We believe that combining flexibility and openness with the ease and productivity of auto-management and a simple tool-chain is the foundation of the next-generation cloud. Managed VMs and Deployment Manager are the first steps we’re taking towards delivering that vision.

Update on operating systems

We introduced support for SuSE and Red Hat Enterprise Linux on Compute Engine in December. Today, SuSE is Generally Available, and we announced Open Preview for Red Hat Enterprise Linux last week. We’re also announcing the limited preview for Windows Server 2008 R2, and you can sign up for access now. Windows Server will be offered at $0.02 per hour for the f1-micro and g1-small instances and $0.04 per core per hour for all other instances (Windows prices are in addition to normal VM charges).

Simple, lower prices

As we mentioned on Tuesday, we think pricing should be simpler and more closely track cost reductions as a result of Moore’s law. So we’re making several changes to our pricing effective April 1, 2014.

First, we’ve cut virtual machine prices by up to 53%:
  • We’ve dropped prices by 32% across the board.
  • We’re introducing sustained-use discounts, which lower your effective price as your usage goes up. Discounts start when you use a VM for over 25% of the month and increase with usage. When you use a VM for an entire month, this amounts to an additional 30% discount.
What’s more, you don’t need to sign up for anything, make any financial commitments or pay any upfront fees. We automatically give you the best price for every VM you run, and you still only pay for the minutes that you use.
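To see how the numbers work, here is a sketch of the sustained-use arithmetic in TypeScript. It assumes an incremental quartile schedule (each successive quarter of the month billed at 100%, 80%, 60% and 40% of the base rate), which is consistent with both figures above: no discount at 25% usage, and 30% off for a full month:

    // Sustained-use arithmetic under the assumed quartile schedule.
    // Running a VM all month bills (25+20+15+10)% = 70% of list price,
    // i.e. the 30% full-month discount mentioned above.
    function effectiveMonthlyFraction(usage: number): number {
      // usage: fraction of the month the VM ran, 0..1
      const rates = [1.0, 0.8, 0.6, 0.4];
      let billed = 0;
      for (let q = 0; q < 4; q++) {
        const inQuartile = Math.min(Math.max(usage - q * 0.25, 0), 0.25);
        billed += inQuartile * rates[q];
      }
      return billed;
    }

    console.log(effectiveMonthlyFraction(1.0));  // 0.70 -> 30% discount
    console.log(effectiveMonthlyFraction(0.25)); // 0.25 -> no discount yet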

Here’s what that looks like for our standard 1-core (n1-standard-1) instance (the original post includes a pricing chart at this point).

Finally, we’ve drastically simplified pricing for App Engine, and lowered pricing for instance-hours by 37.5%, dedicated memcache by 50% and Datastore writes by 30%. In addition, many services, including SNI SSL and PageSpeed, are now offered to all applications at no extra cost.

We hope you find these new capabilities useful, and look forward to hearing from you! If you haven’t yet done so, you can sign up for Google Cloud Platform here.

Navneet Joneja, Senior Product Manager

Posted by Louis Gray, Googler
Categories: Programming

Being a Great Software Development Manager

From the Editor of Methods & Tools - Thu, 03/27/2014 - 15:33
Any manager, when goals are not being met, identifies the impediments and whenever possible removes them. A good manager goes further by identifying and removing potential impediments before they lead to the goals not being met. A great manager makes this seem easy.
Tony Bowden, Socialtext
Source: Managing the Unmanageable, Mickey W. Mantle & Ron Lichty, Addison-Wesley

Code of Ethics: A Proposal

A code of ethics is a compilation of ethical principles brought together into a framework that can be used to guide behavior.  I recently asked friends I work with how many codes of ethics they are bound by, and after a bit of discussion the average was four.  Examples include: IFPUG, PMI, IEEE, SEI, society and religions.  Kevin Brennan, Vice President for Professional Development of IIBA, tweeted me that a group needs a code of ethics to define itself as a profession, and that such codes are required for certification bodies under ISO 17024.

I would be the last person to suggest that codes of ethics are a bad idea.  However, the proliferation of codes combined with their relative complexity does give me pause.  An example of the complexity of common codes of ethics is demonstrated below:

  • IIBA: Code of Ethical Conduct – 22 items
  • PMI : PMI’s Code of Ethics and Professional Conduct – 36 items
  • IEEE: Software Engineering Code of Ethics and Professional Practice – 80 items

All three of these codes are good; however, I suspect very few people can recall any of their specifics, which greatly reduces their overall effectiveness.  Layer corporate codes, like the great code of ethics from Lockheed (approximately 17 items), on top of association codes and the complexity level goes up further.  I said all of this complexity gives me pause because I would like to see process improvement professionals embrace a code of ethics, but I do not want to increase the level of ethical complexity unless it adds value.  I think we can keep this dead simple.  The code I am proposing is:

Process Improvement Code of Ethics

  1. Treat others as you would like to be treated.  (Golden Rule)
  2. Follow a process to create and maintain processes. (eat your own dog food)
  3. Meet the commitments you make.
  4. Avoid personal conflicts of interest.
  5. Only pursue changes that benefit the organization.
  6. Make decisions as transparently as humanly possible.

I would suggest that this code is simple and to the point.  Note: I am making the assumption that you’re adhering to laws and that lying, cheating and stealing your neighbor’s Butterfinger are not on the table, based on other codes of ethics.

What is missing? If you adopted these ethical guidelines, how would they affect your projects? How would they affect your professional life? I am looking forward to your input, reactions and suggestions. I would also like your opinions on whether ethics and process improvement should be discussed in the same context.


Categories: Process Management