Software Development Blogs: Programming, Software Testing, Agile, Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Testing & QA

Software Development Linkopedia October 2016

From the Editor of Methods & Tools - Wed, 10/19/2016 - 11:02
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about the life and career of a software developer, technical debt, DevOps, software testing, metrics, microservices, API and mobile testing. Blog: Being A Developer After 40 Blog: Tech Debt Snowball – […]

Quote of the Month October 2016

From the Editor of Methods & Tools - Wed, 10/12/2016 - 12:00
If you blindly accept what clients say they want and proceed with a project on that basis, both you and the clients may be in for a rude awakening. You will have built something the clients cannot use. Often in the process of building a solution, the clients learn what they need is not the […]

A Critical Review of the IoT Hype

From the Editor of Methods & Tools - Thu, 10/06/2016 - 16:58
According to the Gartner Hype Cycle 2015, Internet of Things (IoT) is on the “Peak of Inflated Expectations”. So it’s time to take a good hard look past the promises of IoT, to see what it is really accomplishing. What is IoT really? What do companies mean when they dream about the IoT revolution? […]

Accountability for What You Say is Dangerous and That’s Okay

James Bach’s Blog - Sat, 10/01/2016 - 20:33

[Note: I offered Maaret Pyhäjärvi the right to review this post and suggest edits to it before I published it. She declined.]

A few days ago I was keynoting at the New Testing Conference, in New York City, and I used a slide that has offended some people on Twitter. This blog post is intended to explore that and hopefully improve the chances that if you think I’m a bad guy, you are thinking that for the right reasons and not making a mistake. It’s never fun for me to be a part of something that brings pain to other people. I believe my actions were correct, yet still I am sorry that I caused Maaret hurt, and I will try to think of ways to confer better in the future.

Here’s the theme of this post: Getting up in front of the world to speak your mind is a dangerous process. You will be misunderstood, and that will feel icky. Whether or not you think of yourself as a leader, speaking at a conference IS an act of leadership, and leadership carries certain responsibilities.

I long ago learned to let go of the outcome when I speak in public. I throw the ideas out there, and I do that as an American Aging Overweight Left-Handed Atheist Married Father-And-Father-Figure Rough-Mannered Bearded Male Combative Aggressive Assertive High School Dropout Self-Confident Freedom-Loving Sometimes-Unpleasant-To-People-On-Twitter Intellectual. I know that my ideas will not be considered in a neutral context, but rather in the context of how people feel about all that. I accept that.  But, I have been popular and successful as a speaker in the testing world, so maybe, despite all the difficulties, enough of my message and intent gets through, overall.

What I can’t let go of is my responsibility to my audience and the community at large to speak the truth and to do so in a compassionate and reasonable way. Regardless of what anyone else does with our words, I believe we speakers need to think about how our actions help or harm others. I think a lot about this.

Let me clarify. I’m not saying it’s wrong to upset people or to have disagreement. We have several different culture wars (my reviewers said “do you have to say wars?”) going on in the software development and testing worlds right now, and they must continue or be resolved organically in the marketplace of ideas. What I’m saying is that anyone who speaks out publicly must try to be cognizant of what words do and accept the right of others to react.

Although I’m surprised and certainly annoyed by the dark interpretations some people are making of what I did, the burden of such feelings is what I took on when I first put myself forward as a public scold about testing and software engineering, a quarter century ago. My annoyance about being darkly interpreted is not your problem. Your problem, assuming you are reading this and are interested in the state of the testing craft, is to feel what you feel and think what you think, then react as best fits your conscience. Then I listen and try to debug the situation, including helping you debug yourself while I debug myself. This process drives the evolution of our communities. Jay Philips, Ash Coleman, Mike Talks, Ilari Henrik Aegerter, Keith Klain, Anna Royzman, Anne-Marie Charrett, David Greenlees, Aaron Hodder, Michael Bolton, and my own wife all approached me with reactions that helped me write this post. Some others approached me with reactions that weren’t as helpful, and that’s okay, too.

Leadership and The Right of Responding to Leaders

In my code of conduct, I don’t get to say “I’m not a leader.” I can say no one works for me and no one has elected me, but there is more to leadership than that. People with strong voices and ideas gain a certain amount of influence simply by virtue of being interesting. I made myself interesting, and some people want to hear what I have to say. But that comes with an implied condition that I behave reasonably. The community, over time, negotiates what “reasonable” means. I am both a participant and a subject of those negotiations. I recommend that we hold each other accountable for our public, professional words. I accept accountability for mine. I insist that this is true for everyone else. Please join me in that insistence.

People who speak at conferences are tacitly asserting that they are thought leaders: that they deserve to influence the community. If that influence comes with a rule that “you can’t talk about me without my permission” it would have a chilling effect on progress. You can keep to yourself, of course; but if you exercise your power of speech in a public forum you cannot cry foul when someone responds to you. Please join me in my affirmation that we all have the right of response when a speaker takes the microphone to keynote at a conference.

Some people have pointed out that it’s not okay to talk back to performers in a comedy show or Broadway play. Okay. So is that what a conference is to you? I guess I believe that conferences should not be for show. Conferences are places for conferring. However, I can accept that some parts of a conference might be run like infomercials or circus acts. There could be a place for that.

The Slide

Here is the slide I used the other day:

[slide image: “maaret”]

Before I explain this slide, try to think what it might mean. What might its purposes be? That’s going to be difficult, without more information about the conference and the talks that happened there. Here are some things I imagine may be going through your mind:

  • There is someone whose name is Maaret who James thinks he’s different from.
  • He doesn’t trust nice people. Nice people are false. Is Maaret nice and therefore he doesn’t trust her, or does Maaret trust nice people and therefore James worries that she’s putting herself at risk?
  • Is James saying that niceness is always false? That seems wrong. I have been nice to people whom I genuinely adore.
  • Is he saying that it is sometimes false? I have smiled and shaken hands with people I don’t respect, so, yes, niceness can be false. But not necessarily. Why didn’t he put qualifying language there?
  • He likes debate and he thinks that Maaret doesn’t? Maybe she just doesn’t like bad debate. Did she actually say she doesn’t like debate?
  • What if I don’t like debate, does that mean I’m not part of this community?
  • He thinks excellence requires attention and energy and she doesn’t?
  • Why is James picking on Maaret?

Look, if all I saw was this slide, I might be upset, too. So, whatever your impression is, I will explain the slide.

Like I said, I was speaking at a conference in NYC. Also keynoting was Maaret Pyhäjärvi. We were both speaking about the testing role. I have some strong disagreements with Maaret about the social situation of testers. But as I watched her talk, I was a little surprised at how I agreed with the text and basic concepts of most of Maaret’s actual slides, and a lot of what she said. (I was surprised because Maaret and I have a history. We have clashed in person and on Twitter.) I was a bit worried that some of what I was going to say would seem like a rehash of what she just did, and I didn’t want to seem like I was papering over the serious differences between us. That’s why I decided to add a contrast slide to make sure our differences weren’t lost in the noise. This means a slide that highlights differences, instead of points of connection. There were already too many points of connection.

The slide was designed specifically:

  • for people to see who were in a specific room at a specific time.
  • for people who had just seen a talk by Maaret which established the basis of the contrast I was making.
  • about differences between two people who are both in the spotlight of public discourse.
  • to express views related to technical culture, not general social culture.
  • to highlight the difference between two talks for people who were about to see the second talk that might seem similar to the first talk.
  • for a situation where both I and Maaret were present in the room during the only time that this slide would ever be seen (unless someone tweeted it to people who would certainly not understand the context).
  • as talking points to accompany my live explanation (which is on video and I assume will be public, someday).
  • for a situation where I had invited anyone in the audience, including Maaret, to ask me questions or make challenges.

These people had just seen Maaret’s talk and were about to see mine. In the room, I explained the slide and took questions about it. Maaret herself spoke up about it, and I publicly thanked her for doing so. It wasn’t something I was posting with no explanation or context. Nor was it part of the normal slides of my keynote.

Now I will address some specific issues that came up on Twitter:

1. On Naming Maaret

Maaret has expressed the belief that no one should name another person in their talk without getting their permission first. I vigorously oppose that notion. It’s completely contrary to the workings of a healthy society. If that principle is acceptable, then you must agree that there should be no free press. Instead, I would say if you stand up and speak in the guise of an expert, then you must be personally accountable for what you say. You are fair game to be named and critiqued. And the weird thing is that Maaret herself, regardless of what she claims to believe, behaves according to my principle of freedom to call people out. She, herself, tweeted my slide and talked about me on Twitter without my permission. Of course, I think that is perfectly acceptable behavior, so I’m not complaining. But it does seem to illustrate that community discourse is more complicated than “be nice” or “never cause someone else trouble with your speech” or “don’t talk about people publicly unless they gave you permission.”

2. On Being Nice

Maaret had a slide in her talk about how we can be kind to each other even though we disagree. I remember her saying the word “nice” but she may have said “kind” and I translated that into “nice” because I believed that’s what she meant. I react to that because, as a person who believes in the importance of integrity and debate over getting along for the sake of appearances, I observe that exhortations to “be nice” or even to “be kind” are often used when people want to quash disturbing ideas and quash the people who offer them. “Be nice” is often code for “stop arguing.” If I stop arguing, much of my voice goes away. I’m not okay with that. No one who believes there is trouble in the world should be okay with that. Each of us gets to have a voice.

I make protests about things that matter to me, you make protests about things that matter to you.

I think we need a way of working together that encourages debate while fostering compassion for each other. I use the word compassion because I want to get away from ritualized command phrases like “be nice.” Compassion is a feeling that you cultivate, rather than a behavior that you conform to or simulate. Compassion is an antithesis of “Rules of Order” and other lists of commandments about courtesy. Compassion is real. Throughout my entire body of work you will find that I promote real craftsmanship over just following instructions. My concern about “niceness” is the same kind of thing.

Look at what I wrote: I said “I don’t trust nice people.” That’s a statement about my feelings and it is generally true, all things being equal. I said “I’m not nice.” Yet, I often behave in pleasant ways, so what did I mean? I meant I seek to behave authentically and compassionately, which looks like “nice” or “kind”, rather than to imagine what behavior would trick people into thinking I am “nice” when indeed I don’t like them. I’m saying people over process, folks.

I was actually not claiming that Maaret is untrustworthy because she is nice, and my words don’t say that. Rather, I was complaining about the implications of following Maaret’s dictum. I was offering an alternative: be authentic and compassionate, then “niceness” and acts of kindness will follow organically. Yes, I do have a worry that Maaret might say something nice to me and I’ll have to wonder “what does that mean? is she serious or just pretending?” Since I don’t want people to worry about whether I am being real, I just tell them “I’m not nice.” If I behave nicely it’s either because I feel genuine good will toward you or because I’m falling down on my responsibility to be honest with you. That second thing happens, but it’s a lapse. (I do try to stay out of rooms with people I don’t respect so that I am not forced to give them opinions they aren’t willing or able to process.)

I now see that my sentence “I want to be authentic and compassionate” could be seen as an independent statement connected to “how I differ from Maaret,” implying that I, unlike her, am authentic and compassionate. That was an errant construction and does not express my intent. The orange text on that line indicated my proposed policy, in the hope that I could persuade her to see it my way. It was not an attack on her. I apologize for that confusion.

3. Debate vs. Dialogue

Maaret had earlier said she doesn’t want debate, but rather dialogue. I have heard this from other Agilists and I find it disturbing. I believe this is code for “I want the freedom to push my ideas on other people without the burden of explaining or defending those ideas.” That’s appropriate for a brainstorming session, but at some point, the brainstorming is done and the judging begins. I believe debate is absolutely required for a healthy professional community. I’m guided in this by dialectical philosophy, the history of scientific progress, the history of civil rights (in fact, all of politics), and the modern adversarial justice system. Look around you. The world is full of heartfelt disagreement. Let’s deal with it. I helped create the culture of small invitational peer conferences in our industry which foster debate. We need those more than ever.

But if you don’t want to deal with it, that’s okay. All that means is that you accept that there is a wall between your friends and those other people whom you refuse to debate with. I will accept the walls if necessary but I would rather resolve the walls. That’s why I open myself and my ideas for debate in public forums.

Debate is not a process of sticking figurative needles into other people. Debate is the exchange of views with the goal of resolving our differences while being accountable for our words and actions. Debate is a learning process. I have occasionally heard from people I think are doing harm to the craft that they believe I debate for the purposes of hurting people instead of trying to find resolution. This is deeply insulting to me, and to anyone who takes his vocation seriously. What’s more, considering that these same people express the view that it’s important to be “nice,” it’s not even nice. Thus, they reveal themselves to be unable to follow their own values. I worry that “Dialogue not debate” is a slogan for just another power group trying to suppress its rivals. Beware the Niceness Gang.

I understand that debating with colleagues may not be fun. But I’m not doing it for fun. I’m doing it because it is my responsibility to build a respectable craft. All testing professionals share this responsibility. Debate serves another purpose, too, managing the boundaries between rival value systems. Through debate we may discover that we occupy completely different paradigms; schools of thought. Debate can’t bridge gaps between entirely different world views, and yet I have a right to my world view just as you have a right to yours.

Jay Philips said on Twitter:

@jamesmarcusbach pointless 2debate w/ U because in your mind you’re right. Slide &points shouldn’t have happened @JokinAspiazu @ericproegler

— Jay Philips (@jayphilips) September 30, 2016

I admire Jay. I called her and we had a satisfying conversation. I filled her in on the context and she advised me to write this post.

One thing that came up is something very important about debate: the status of ideas is not the only thing that gets modified when you debate someone; what also happens is an evolution of feelings.

Yes I think “I’m right.” I acted according to principles I think are eternal and essential to intellectual progress in society. I’m happy with those principles. But I also have compassion for the feelings of others, and those feelings may hold sway even though I may be technically right. For instance, Maaret tweeted my slide without my permission. That is copyright violation. She’s objectively “wrong” to have done that. But that is irrelevant.

[Note: Maaret points out that this is legal under the fair use doctrine. Of course, that is correct. I forgot about fair use. Of course, that doesn’t change the fact that though I may feel annoyed by her selective publishing of my work, that is irrelevant, because I support her option to do that. I don’t think it was wise or helpful for her to do that, but I wouldn’t seek to bar her from doing so. I believe in freedom to communicate, and I would like her to believe in that freedom, too.]

I accept that she felt strongly about doing that, so I [would] choose to waive my rights. I feel that people who tweet my slides, in general, are doing a service for the community. So while I appreciate copyright law, I usually feel okay about my stuff getting tweeted.

I hope that Jay got the sense that I care about her feelings. If Maaret were willing to engage with me she would find that I care about her feelings, too. This does not mean she gets whatever she wants, but it’s a factor that influences my behavior. I did offer her the chance to help me edit this post, but again, she refused.

4. Focus and Energy

Maaret said that eliminating the testing role is a good thing. I worry it will lead to the collapse of craftsmanship. She has a slide that says “from tester to team member,” a sentiment she has also expressed on Twitter and one that led me to say that I no longer consider her a tester. She confirmed to me that I hurt her feelings by saying that, and indeed I felt bad saying it, except that it is an extremely relevant point. What does it mean to be a tester? This is important to debate. Maaret has confirmed publicly (when I asked a question about this during her talk) that she didn’t mean to denigrate testing by dismissing the value of a testing role on projects. But I don’t agree that we can have it both ways. The testing role, I believe, is a necessary prerequisite for maintaining a healthy testing craft. My key concern is the dilution of focus and energy that would otherwise go to improving the testing craft. This is lost when the role is lost.

This is not an attack on Maaret’s morality. I am worried she is promoting too much generalism for the good of the craft, and she is worried I am promoting too much specialism. This is a matter of professional judgment and perspective. It cannot be settled, I think, but it must be aired.

The Slide Should Not Have Been Tweeted But It’s Okay That It Was

I don’t know what Maaret was trying to accomplish by tweeting my slide out of context. Suffice it to say what is right there on my slide: I believe in authenticity and compassion. If she was acting out of authenticity and compassion then more power to her. But the slide cannot be understood in isolation. People who don’t know me, or who have any axe to grind about what I do, are going to cry “what a cruel man!” My friends contacted me to find out more information.

I want you to know that the slide was one part of a bigger picture that depicts my principled objection to several matters involving another thought leader. That bigger picture is: two talks, one room, all people present for it, a lot of oratory by me explaining the slide, as well as back and forth discussion with the audience. Yes, there were people in the room who didn’t like hearing what I had to say, but “don’t offend anyone, ever” is not a rule I can live by, and neither can you. After all, I’m offended by most of the talks I attend.

Although the slide should not have been tweeted, I accept that it was, and that doing so was within the bounds of acceptable behavior. As I announced at the beginning of my talk, I don’t need anyone to make a safe space for me. Just follow your conscience.

What About My Conscience?
  • My conscience is clean. I acted out of true conviction to discuss important matters. I used a style familiar to anyone who has ever seen a public debate, or read an opinion piece in the New York Times. I didn’t set out to hurt Maaret’s feelings and I don’t want her feelings to be hurt. I want her to engage in the debate about the future of the craft and be accountable for her ideas. I don’t agree that I was presuming too much in doing so.
  • Maaret tells me that my slide was “stupid and hurtful.” I believe she and I do not share certain fundamental values about conferring. I will no longer be conferring with her, until and unless those differences are resolved.
  • Compassion is important to me. I will continue to examine whether I am feeling and showing the compassion for my fellow humans that they are due. These conversations and debates I have with colleagues help me do that.
  • I agree that making a safe space for students is important. But industry consultants and pundits should be able to cope with the full spectrum, authentic, principled reactions by their peers. Leaders are held to a higher standard, and must be ready and willing to defend their ideas in public forums.
  • The reaction on Twitter gave me good information about a possible trend toward fragility in the Twitter-facing part of the testing world. There seems to be a significant group of people who prize complete safety over the value that comes from confrontation. In the next conference I help arrange, I will set more explicit ground rules, rather than assuming people share something close to my own sense of what is reasonable to do and expect.
  • I will also start thinking, for each slide in my presentation: “What if this gets tweeted out of context?”

(Oh, and to those who compared me to Donald Trump… Can you even imagine him writing a post like this in response to criticism? BELIEVE ME, he wouldn’t.)

Categories: Testing & QA

Software Development Conferences Forecast September 2016

From the Editor of Methods & Tools - Wed, 09/28/2016 - 15:01
Here is a list of software development related conferences and events on Agile project management (Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP), DevOps and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods […]

Testing on the Toilet: What Makes a Good End-to-End Test?

Google Testing Blog - Wed, 09/21/2016 - 23:59
by Adam Bender

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

An end-to-end test tests your entire system from one end to the other, treating everything in between as a black box. End-to-end tests can catch bugs that manifest across your entire system. In addition to unit and integration tests, they are a critical part of a balanced testing diet, providing confidence about the health of your system in a near production state. Unfortunately, end-to-end tests are slower, more flaky, and more expensive to maintain than unit or integration tests. Consider carefully whether an end-to-end test is warranted, and if so, how best to write one.

Let's consider how an end-to-end test might work for the following "login flow":

[login flow diagram]
In order to be cost effective, an end-to-end test should focus on aspects of your system that cannot be reliably evaluated with smaller tests, such as resource allocation, concurrency issues and API compatibility. More specifically:
  • For each important use case, there should be one corresponding end-to-end test. This should include one test for each important class of error. The goal is to keep your total end-to-end test count low.
  • Be prepared to allocate at least one week a quarter per test to keep your end-to-end tests stable in the face of issues like slow and flaky dependencies or minor UI changes.
  • Focus your efforts on verifying overall system behavior instead of specific implementation details; for example, when testing login behavior, verify that the process succeeds independent of the exact messages or visual layouts, which may change frequently.
  • Make your end-to-end test easy to debug by providing an overview-level log file, documenting common test failure modes, and preserving all relevant system state information (e.g., screenshots, database snapshots).
End-to-end tests also come with some important caveats:
  • System components that are owned by other teams may change unexpectedly and break your tests. This increases overall maintenance cost, but can highlight incompatible changes.
  • It may be more difficult to make an end-to-end test fully hermetic; leftover test data may alter future tests and/or production systems. Where possible, keep your test data ephemeral.
  • An end-to-end test often necessitates multiple test doubles (fakes or stubs) for underlying dependencies; they can, however, have a high maintenance burden as they drift from the real implementations over time.
Categories: Testing & QA

Acceptance Tests & Agile Teams Coaching in Methods & Tools Fall 2016 issue

From the Editor of Methods & Tools - Tue, 09/20/2016 - 13:13
Methods & Tools – the free e-magazine for software developers, testers and project managers – has published its Fall 2016 issue that discusses alternatives to acceptance tests, Agile transformation, software project estimation, Agile coaching and the following free software tools: DbFit, generjee, Mox. Methods & Tools Fall 2016 issue content: * Alternatives to Acceptance Tests […]

Software Development Linkopedia September 2016

From the Editor of Methods & Tools - Tue, 09/13/2016 - 15:11
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about being a better developer, software architecture, tech leadership, shrinking the product backlog, customer journey maps, using sprint data, distributed testing, domain driven design, continuous delivery and testing microservices. Blog: Finding […]

What Test Engineers do at Google

Google Testing Blog - Mon, 09/12/2016 - 16:00
by Matt Lowrie, Manjusha Parvathaneni, Benjamin Pick, and Jochen Wuttke

Test engineers (TEs) at Google are a dedicated group of engineers who use proven testing practices to foster excellence in our products. We orchestrate the rapid testing and releasing of products and features our users rely on. Achieving this velocity requires creative and diverse engineering skills that allow us to advocate for our users. By building testable user journeys into the process, we ensure reliable products. TEs are also the glue that bring together feature stakeholders (product managers, development teams, UX designers, release engineers, beta testers, end users, etc.) to confirm successful product launches. Essentially, every day we ask ourselves, “How can we make our software development process more efficient to deliver products that make our users happy?”.

The TE role grew out of the desire to make Google’s early free products, like Search, Gmail and Docs, better than similar paid products on the market at the time. Early on in Google’s history, a small group of engineers believed that the company’s “launch and iterate” approach to software deployment could be improved with continuous automated testing. They took it upon themselves to promote good testing practices to every team throughout the company, via some programs you may have heard about: Testing on the Toilet, the Test Certified Program, and the Google Test Automation Conference (GTAC). These efforts resulted in every project taking ownership of all aspects of testing, such as code coverage and performance testing. Testing practices quickly became commonplace throughout the company and engineers writing tests for their own code became the standard. Today, TEs carry on this tradition of setting the standard of quality which all products should achieve.

Historically, Google has sustained two separate job titles related to product testing and test infrastructure, which has caused confusion. We often get asked what the difference is between the two. The rebranding of the Software engineer, tools and infrastructure (SETI) role, which now concentrates on engineering productivity, has been addressed in a previous blog post. What this means for test engineers at Google is an enhanced responsibility of being the authority on product excellence. We are expected to uphold testing standards company-wide, both programmatically and persuasively.

Test engineer is a unique role at Google. As TEs, we define and organize our own engineering projects, bridging gaps between engineering output and end-user satisfaction. To give you an idea of what TEs do, here are some examples of challenges we need to solve on any particular day:
  • Automate a manual verification process for product release candidates so developers have more time to respond to potential release-blocking issues.
  • Design and implement an automated way to track and surface Android battery usage to developers, so that they know immediately when a new feature will drain users' batteries.
  • Quantify if a regenerated data set used by a product, which contains a billion entities, is better quality than the data set currently live in production.
  • Write an automated test suite that validates if content presented to a user is of an acceptable quality level based on their interests.
  • Read an engineering design proposal for a new feature and provide suggestions about how and where to build in testability.
  • Investigate correlated stack traces submitted by users through our feedback tracking system, and search the code base to find the correct owner for escalation.
  • Collaborate on determining the root cause of a production outage, then pinpoint tests that need to be added to prevent similar outages in the future.
  • Organize a task force to advise teams across the company about best practices when testing for accessibility.
Over the next few weeks leading up to GTAC, we will also post vignettes of actual TEs working on different projects at Google, to showcase the diversity of the Google Test Engineer role. Stay tuned!
Categories: Testing & QA

Docker orchestration with Rancher

Agile Testing - Grig Gheorghiu - Fri, 09/09/2016 - 20:27
For the last month or so I've been experimenting with Rancher as the orchestration layer for Docker-based deployments. I've been pretty happy with it so far. Here are some of my notes and a few tips and tricks. I also recommend reading through the very good Rancher documentation. In what follows I'll assume that the cluster management engine used by Rancher is its own engine called Cattle. Rancher also supports Kubernetes, Mesos and Docker Swarm.

Running the Rancher server

I provisioned an EC2 instance, installed Docker on it, then ran this command to launch the Rancher server as a Docker container (it will also get launched automatically if you reboot the EC2 instance):


# docker run -d --restart=always -p 8080:8080 rancher/server

Creating Rancher environments
It's important to think about the various environments you want to manage in Rancher. If you have multiple projects to manage with Rancher, as well as multiple infrastructure environments such as development, staging and production, I recommend creating one Rancher environment per project/infrastructure-environment combination: for example, Rancher environments called proj1dev, proj1stage and proj1prod, and similarly for other projects (proj2dev, proj2stage, proj2prod, etc.).
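The naming convention is mechanical, so it can be generated in a loop; a throwaway sketch (the project names are just examples):

```shell
# Print the Rancher environment name for each project/infrastructure-environment pair
for proj in proj1 proj2; do
  for env in dev stage prod; do
    echo "${proj}${env}"
  done
done
```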
Tip: Since all containers in the same Rancher environment can by default connect to all other containers in that Rancher environment, having a project/infrastructure-environment combination as detailed above will provide good isolation and security from one project to another, and from one infrastructure environment to another within the same project. I recommend you become familiar with Rancher environments by reading more about them in the documentation.
In what follows I'll assume the current environment is proj1dev.
Creating Rancher API key pairs
Within each environment, create an API key pair. Copy and paste the two keys (one access key and one secret access key) somewhere safe.
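A convenient place for those keys is an .envvars file of plain KEY=value lines; this is a hypothetical example (every value below is a placeholder, not a real key or URL):

```shell
# Hypothetical .envvars file -- all values are placeholders
RANCHER_URL=http://your-rancher-server-name.example.com:8080
RANCHER_API_ACCESS_KEY=your-access-key
RANCHER_API_SECRET_KEY=your-secret-key
```

A file in this format can be both sourced by wrapper scripts (with `set -a; . .envvars; set +a` so the variables get exported) and passed to rancher-compose via --env-file.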

Adding Rancher hosts
Within each environment, you need to add Rancher hosts. They are the compute nodes that will run the various Docker containers that you will orchestrate with Rancher. In my case, I provisioned two hosts per environment as EC2 instances running Docker.
In the Rancher UI, when you go to Infrastructure -> Hosts and click the Add Host button, you should see a docker run command that you can run on each host in order to launch the Rancher Agent on that host. Something like this:
# docker run -d --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.0.2 http://your-rancher-server-name.example.com:8080/v1/scripts/5536854597A70149E388:1473267600000:rfQVqxXcvIPulNw72fUOQG66iGM
Note that you need to allow UDP ports 500 and 4500 from each Rancher host to/from any other host and to/from the Rancher server. This is because Rancher uses IPSec tunnels for inter-host communication. The Rancher hosts also need to talk to the Rancher server over port 8080 (or whatever port you have exposed for the Rancher server container).
Adding ECR registries
We use ECR as our Docker registry. Within each environment, I had to add our ECR registry. In the Rancher UI, I went to Infrastructure -> Registries, then clicked Add Registry and chose Custom as the registry type. In the attribute fields, I specified:
  • Address: my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com
  • Email: none
  • Username: AWS
  • Password: the result of running these commands (you need to install and configure the awscli for this to work):
    • apt-get install python-pip; pip install awscli
    • aws configure (specify the keys for an IAM user allowed to access the ECR registry)
    • aws ecr get-login | cut -d ' ' -f 6
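To see why `cut -d ' ' -f 6` extracts the password: `aws ecr get-login` prints a complete `docker login` command line, and the password is the sixth space-separated token. A quick illustration against a fake login line (the password shown is made up):

```shell
# Sample shape of `aws ecr get-login` output; FAKEPASSWORD stands in for the real token
login_cmd='docker login -u AWS -p FAKEPASSWORD -e none https://my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com'
echo "$login_cmd" | cut -d ' ' -f 6   # prints FAKEPASSWORD
```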

Application architecture
For this example I will consider an application composed of a Web application based on Apache/PHP running in 2 or more containers and mounting its shared files (configuration, media) over NFS. The Web app talks to a MySQL database server mounting its data files over NFS. The Web app containers are behind one or more instances of a Rancher load balancer, and the Rancher LB instances are fronted by an Amazon Elastic Load Balancer.
Rancher stacks
A 'stack' in Rancher corresponds to a set of services defined in a docker-compose YAML file. These services can also have Rancher-specific attributes (such as desired number of containers aka 'scale', health checks, etc) defined in a special rancher-compose YAML file. I'll show plenty of examples of these files in what follows. My stack naming convention will be projname-environment-stacktype, for example proj1-development-nfs, proj1-development-database etc.
Tip: Try to experiment with creating stacks in the Rancher UI, then either view or export their configurations via the stack settings button in the UI:

This was a lifesaver for me, especially for lower-level stacks such as NFS or Rancher load balancers. Exporting the configuration will download a zip file containing two files: docker-compose.yml and rancher-compose.yml. It will save you from figuring out on your own the exact syntax you need to use in these files.
Creating an NFS stack
One of the advantages of using Rancher is that it offers an extensive catalog of services ready to be used within your infrastructure. One such service is Convoy NFS. To use it, I started out by going to the Catalog menu option in the Rancher UI, then selecting Convoy NFS. In the following screen I specified proj1-development-nfs as the stack name, as well as the NFS server's IP address and mount point.


Note that I had already set up an EC2 instance to act as an NFS server. I attached an EBS volume per project/environment. So in the example above, I exported a directory called /nfs/development/proj1.
After launching the NFS stack, you should see it in the Stacks screen in the Rancher UI. The stack will consist of 2 services, one called convoy-nfs and the other called convoy-nfs-storagepool:

Once the NFS stack is up and running, you can export its configuration as explained above.

To create or update a stack programmatically, I used the rancher-compose utility and wrapped it inside shell scripts. Here is an example of a shell script that calls rancher-compose to create an NFS stack:
$ cat rancher-nfssetup.sh
#!/bin/bash

COMMAND=$@

rancher-compose -p proj1-development-nfs --url $RANCHER_URL --access-key $RANCHER_API_ACCESS_KEY --secret-key $RANCHER_API_SECRET_KEY --env-file .envvars --file docker-compose-nfssetup.yml --rancher-file rancher-compose.yml $COMMAND

Note that there is no command line option for the target Rancher environment. It suffices to use the Rancher API keys for a given environment in order to target that environment.
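Because the API keys select the target environment, all the wrapper scripts in this post share the same shape; here is a generic dry-run template (the rancher_stack function name is mine, and the echo only prints the command it would run; drop the echo to actually execute it):

```shell
# Generic rancher-compose wrapper sketch. Credentials come from the environment;
# only the stack name and compose file change from stack to stack.
rancher_stack() {
  stack=$1; compose_file=$2; shift 2
  echo rancher-compose -p "$stack" --url "$RANCHER_URL" \
    --access-key "$RANCHER_API_ACCESS_KEY" --secret-key "$RANCHER_API_SECRET_KEY" \
    --env-file .envvars --file "$compose_file" --rancher-file rancher-compose.yml "$@"
}

rancher_stack proj1-development-nfs docker-compose-nfssetup.yml up -d
```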

Here is the docker-compose file for this stack, which I obtained by exporting the stack configuration from the UI:
$ cat docker-compose-nfssetup.yml
convoy-nfs-storagepool:
  labels:
    io.rancher.container.create_agent: 'true'
  command:
    - storagepool-agent
  image: rancher/convoy-agent:v0.9.0
  volumes:
    - /var/run:/host/var/run
    - /run:/host/run
convoy-nfs:
  labels:
    io.rancher.scheduler.global: 'true'
    io.rancher.container.create_agent: 'true'
  command:
    - volume-agent-nfs
  image: rancher/convoy-agent:v0.9.0
  pid: host
  privileged: true
  volumes:
    - /lib/modules:/lib/modules:ro
    - /proc:/host/proc
    - /var/run:/host/var/run
    - /run:/host/run
    - /etc/docker/plugins:/etc/docker/plugins
Here is the portion of my rancher-compose.yml file that has to do with the NFS stack, again obtained by exporting the NFS stack configuration:
convoy-nfs-storagepool:
  scale: 1
  health_check:
    port: 10241
    interval: 2000
    unhealthy_threshold: 3
    strategy: recreate
    response_timeout: 2000
    request_line: GET /healthcheck HTTP/1.0
    healthy_threshold: 2
  metadata:
    mount_dir: /nfs/development/proj1
    nfs_server: 172.31.41.108
convoy-nfs:
  health_check:
    port: 10241
    interval: 2000
    unhealthy_threshold: 3
    strategy: recreate
    response_timeout: 2000
    request_line: GET /healthcheck HTTP/1.0
    healthy_threshold: 2
  metadata:
    mount_dir: /nfs/development/proj1
    nfs_server: 172.31.41.108
    mount_opts: ''

To create the NFS stack, all I need to do at this point is to call:

$ ./rancher-nfssetup.sh up

To inspect the logs for the stack, I can call:

$ ./rancher-nfssetup.sh logs

Note that I passed various arguments to the rancher-compose utility. Most of them are specified as environment variables. This allows me to add the bash script to version control without worrying about credentials, secrets etc. I also use the --env-file .envvars option, which allows me to define environment variables in the .envvars file and have them interpolated by rancher-compose in the various yml files it uses.
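The interpolation rancher-compose performs on those yml files is the same `$VARIABLE` substitution the shell does; a toy demonstration of the idea (demo.yml is a scratch file, not part of the real setup):

```shell
# Shell analogy for rancher-compose's --env-file interpolation:
# the $NFS_SERVER reference is expanded when the heredoc is written out.
export NFS_SERVER=172.31.41.108
cat > /tmp/demo.yml <<EOF
metadata:
  nfs_server: $NFS_SERVER
EOF
cat /tmp/demo.yml
```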
Creating volumes using the NFS stack
One of my goals was to attach NFS-based volumes to Docker containers in my infrastructure. To do this, I needed to create volumes in Rancher. One way to do it is to go to Infrastructure -> Storage in the Rancher UI, then go to the area corresponding to the NFS stack you want and click Add Volume, giving the volume a name and a description. Doing it manually is well and good, but I wanted to do it automatically, so I used another bash script around rancher-compose together with another docker-compose file:
$ cat rancher-volsetup.sh
#!/bin/bash

COMMAND=$@

rancher-compose -p proj1-development-volsetup --url $RANCHER_URL --access-key $RANCHER_API_ACCESS_KEY --secret-key $RANCHER_API_SECRET_KEY --env-file .envvars --file docker-compose-volsetup.yml --rancher-file rancher-compose.yml $COMMAND

$ cat docker-compose-volsetup.yml
volsetup:
  image: ubuntu:14.04
  labels:
    io.rancher.container.start_once: true
  volumes:
    - volMysqlData:/var/lib/mysql
    - volAppShared:/var/www/shared
  volume_driver: proj1-development-nfs
A few things to note in the docker-compose-volsetup.yml file:
  • I used the ubuntu:14.04 Docker image and I attached two volumes, one called volMysqlData and one called volAppShared. The first will be mounted on the Docker container as /var/lib/mysql and the second as /var/www/shared. These are arbitrary paths, since my goal was just to create the volumes as Rancher resources.
  • I wanted the volsetup service to run once so that the volumes get created, then stop. For that, I used the special Rancher label io.rancher.container.start_once: true
  • I used as the volume_driver the NFS stack proj1-development-nfs I created above. This is important, because I want these volumes to be created within this NFS stack.
I used the following commands to create and start the proj1-development-volsetup stack, then to show its logs, and finally to shut it down and remove its containers, which are not needed anymore once the volumes get created:

./rancher-volsetup.sh up -d
sleep 30
./rancher-volsetup.sh logs
./rancher-volsetup.sh down
./rancher-volsetup.sh rm --force
I haven't figured out yet how to remove a Rancher stack programmatically, so for these 'helper' type stacks I had to use the Rancher UI to delete them. At this point, if you look in the /nfs/development/proj1 directory on the NFS server, you should see 2 directories with the same names as the volumes we created.
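One likely route for deleting a stack without the UI is the Rancher REST API; in the Cattle-era v1 API a stack shows up as an "environment" resource. This is an untested sketch that only prints the request it would make (the endpoint path and the stack ID are assumptions; check your server's /v1 API browser for the real resource name and ID):

```shell
# Untested sketch: delete a Rancher stack via the v1 API, dry run only.
# Remove the `echo` to actually send the request.
STACK_ID=1e5   # hypothetical stack id, taken from the API browser/UI
echo curl -s -u "$RANCHER_API_ACCESS_KEY:$RANCHER_API_SECRET_KEY" \
  -X DELETE "$RANCHER_URL/v1/environments/$STACK_ID"
```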
Creating a database stack
So far I haven't used any custom Docker images. For the database layer of my application, I will want to use a custom image which I will push to the Amazon ECR registry. I will use this image in a docker-compose file in order to set up and start the database in Rancher.
I have a directory called db containing the following Dockerfile:
$ cat Dockerfile
FROM percona

VOLUME /var/lib/mysql

COPY etc/mysql/my.cnf /etc/mysql/my.cnf
COPY scripts/db_setup.sh /usr/local/bin/db_setup.sh

I have a customized MySQL configuration file my.cnf (in my local directory db/etc/mysql) which gets copied to the Docker image as /etc/mysql/my.cnf. I also have a db_setup.sh bash script in my local directory db/scripts which gets copied to /usr/local/bin in the Docker image. In this script I grant rights to a MySQL user used by the Web app, and I also load a MySQL dump file if it exists:
$ cat scripts/db_setup.sh
#!/bin/bash
set -e

host="$1"

until mysql -h "$host" -uroot -p$MYSQL_ROOT_PASSWORD -e "SHOW DATABASES"; do
  >&2 echo "MySQL is unavailable - sleeping"
  sleep 1
done

>&2 echo "MySQL is up - executing GRANT statement"
mysql -h "$host" -uroot -p$MYSQL_ROOT_PASSWORD \
  -e "GRANT ALL ON $MYSQL_DATABASE.* TO $MYSQL_USER@'%' IDENTIFIED BY \"$MYSQL_PASSWORD\""

>&2 echo "Starting to load SQL dump"
mysql -h "$host" -uroot -p$MYSQL_ROOT_PASSWORD $MYSQL_DATABASE < /dbdump/$MYSQL_DUMP_FILE
>&2 echo "Finished loading SQL dump"
Note that the database name, database user name and password, as well as the MySQL root password are all passed in environment variables.
To build this Docker image, I ran:
$ docker build -t my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/db:proj1-development .
Note that I tagged the image with the proj1-development tag.
To push this image to Amazon ECR, I first called:
$(aws ecr get-login)
then:
$ docker push my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/db:proj1-development
To run the db_setup.sh script inside a Docker container in order to set up the database, I put together the following docker-compose file:
$ cat docker-compose-dbsetup.yml
ECRCredentials:
  environment:
    AWS_REGION: $AWS_REGION
    AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  labels:
    io.rancher.container.pull_image: always
    io.rancher.container.create_agent: 'true'
    io.rancher.container.agent.role: environment
    io.rancher.container.start_once: true
  tty: true
  image: objectpartners/rancher-ecr-credentials
  stdin_open: true

db:
  image: my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/db:proj1-development
  labels:
    io.rancher.container.pull_image: always
    io.rancher.scheduler.affinity:host_label: dbsetup=proj1
  volumes:
    - volMysqlData:/var/lib/mysql
  volume_driver: proj1-development-nfs
  environment:
    - MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD

dbsetup:
  image: my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/db:proj1-development
  labels:
    io.rancher.container.pull_image: always
    io.rancher.container.start_once: true
    io.rancher.scheduler.affinity:host_label: dbsetup=proj1
  command: /usr/local/bin/db_setup.sh db
  links:
    - db:db
  volumes:
    - volMysqlData:/var/lib/mysql
    - /dbdump/proj1:/dbdump
  volume_driver: proj1-development-nfs
  environment:
    - MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD
    - MYSQL_DATABASE=$MYSQL_DATABASE
    - MYSQL_USER=$MYSQL_USER
    - MYSQL_PASSWORD=$MYSQL_PASSWORD
    - MYSQL_DUMP_FILE=$MYSQL_DUMP_FILE
A few things to note:
  • there are 3 services in this docker-compose file
    • an ECRCredentials service, which connects to Amazon ECR and allows the ECR image db:proj1-development to be used by the other 2 services
    • a db service which runs a Docker container based on the db:proj1-development ECR image, and which launches a MySQL database with the root password set to the value of the MYSQL_ROOT_PASSWORD environment variable
    • a dbsetup service that also runs a Docker container based on the db:proj1-development ECR image, but instead of the default command, which would run MySQL, it runs the db_setup.sh script (specified in the command directive); this service also uses environment variables specifying the database to be loaded from the SQL dump file, as well as the user and password that will get grants to that database
  • the dbsetup service links to the db service via the links directive
  • the dbsetup service is a 'run once then stop' type of service, which is why it has the label io.rancher.container.start_once: true attached
  • both the db and the dbsetup service will run on a Rancher host with the label 'dbsetup=proj1'; this is because we want to load the SQL dump from a file that the dbsetup service can find
    • we will put this file on a specific Rancher host in a directory called /dbdump/proj1, which will then be mounted by the dbsetup container as /dbdump
    • the db_setup.sh script will then load the SQL file called MYSQL_DUMP_FILE from the /dbdump directory
    • this can also work if we'd just put the SQL file in the same NFS volume as the MySQL data files, but I wanted to experiment with host labels in this case
  • wherever NFS volumes are used, for example for volMysqlData, the volume_driver needs to be set to the proper NFS stack, proj1-development-nfs in this case
It goes without saying that mounting the MySQL data files from NFS is a potential performance bottleneck, so you probably wouldn't do this in production. I wanted to experiment with NFS in Rancher, and the performance I've seen in development and staging for some of our projects doesn't seem too bad.
To run a Rancher stack based on this docker-compose-dbsetup.yml file, I used this bash script:
$ cat rancher-dbsetup.sh
#!/bin/bash
COMMAND=$@
rancher-compose -p proj1-development-dbsetup --url $RANCHER_URL --access-key $RANCHER_API_ACCESS_KEY --secret-key $RANCHER_API_SECRET_KEY --env-file .envvars --file docker-compose-dbsetup.yml --rancher-file rancher-compose.yml $COMMAND
Note that all environment variables referenced in the docker-compose-dbsetup.yml file are set in the .envvars file.
I wanted to run the proj1-development-dbsetup stack and then shut down its services once the dbsetup service completes. I used these commands as part of a bash script:
./rancher-dbsetup.sh up -d
while :
do
        ./rancher-dbsetup.sh logs --lines "10" > dbsetup.log 2>&1
        grep 'Finished loading SQL dump' dbsetup.log
        result=$?
        if [ $result -eq 0 ]; then
            break
        fi
        echo Waiting 10 seconds for DB load to finish...
        sleep 10
done
./rancher-dbsetup.sh logs
./rancher-dbsetup.sh down
./rancher-dbsetup.sh rm --force
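One caveat with this pattern is that it spins forever if the dump never finishes; a variant with a bounded number of attempts (same polling idea, shown here against a local dbsetup.log; the max_tries value is arbitrary):

```shell
# Polling with a retry cap instead of an infinite loop
max_tries=30
tries=0
while :; do
    tries=$((tries+1))
    if grep -q 'Finished loading SQL dump' dbsetup.log 2>/dev/null; then
        echo "DB load finished"
        break
    fi
    if [ "$tries" -ge "$max_tries" ]; then
        echo "Gave up waiting for DB load"
        break
    fi
    sleep 10
done
```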
Once the database is set up, I want to launch MySQL and keep it running so it can be used by the Web application. I have a separate docker-compose file for that:
$ cat docker-compose-dblaunch.yml
ECRCredentials:
  environment:
    AWS_REGION: $AWS_REGION
    AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  labels:
    io.rancher.container.pull_image: always
    io.rancher.container.create_agent: 'true'
    io.rancher.container.agent.role: environment
    io.rancher.container.start_once: true
  tty: true
  image: objectpartners/rancher-ecr-credentials
  stdin_open: true

db:
  image: my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/db:proj1-development
  labels:
    io.rancher.container.pull_image: always
  volumes:
    - volMysqlData:/var/lib/mysql
  volume_driver: proj1-development-nfs
The db service is similar to the one in the docker-compose-dbsetup.yml file. In this case the database is all set up, so we don't need anything except the NFS volume to mount the MySQL data files from.
As usual, I have a bash script that calls docker-compose in order to create a stack called proj1-development-database:
$ cat rancher-dblaunch.sh
#!/bin/bash
COMMAND=$@
rancher-compose -p proj1-development-database --url $RANCHER_URL --access-key $RANCHER_API_ACCESS_KEY --secret-key $RANCHER_API_SECRET_KEY --env-file .envvars --file docker-compose-dblaunch.yml --rancher-file rancher-compose.yml $COMMAND
I call this script like this:
./rancher-dblaunch.sh up -d
At this point, the proj1-development-database stack is up and running and contains the db service running as a container on one of the Rancher hosts in the Rancher 'proj1dev' environment.
Creating a Web application stack

So far, I've been using either off-the-shelf or slightly customized Docker images. For the Web application stack I will be using more heavily customized images. The building block is a 'base' image whose Dockerfile contains directives for installing commonly used packages and for adding users.

Here is the Dockerfile for a 'base' image running Ubuntu 14.04:

FROM ubuntu:14.04

RUN apt-get update && \
    apt-get install -y ntp build-essential binutils zlib1g-dev \
                       git acl cronolog lzop unzip mcrypt expat xsltproc python-pip curl language-pack-en-base
RUN pip install awscli

RUN adduser --uid 501 --ingroup www-data --shell /bin/bash --home /home/myuser myuser
RUN mkdir /home/myuser/.ssh
COPY files/myuser_authorized_keys /home/myuser/.ssh/authorized_keys
RUN chown -R myuser:www-data /home/myuser/.ssh && \
    chmod 700 /home/myuser/.ssh && \
    chmod 600 /home/myuser/.ssh/authorized_keys 

When I built this image, I tagged it as my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/base:proj1-development.

Here is the Dockerfile for an image (based on the base image above) that installs Apache, PHP 5.6 (using a custom apt repository), RVM, Ruby and the compass gem:

FROM  my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/base:proj1-development

RUN export LC_ALL=en_US.UTF-8 && export LANG=en_US.UTF-8 && \
        apt-get install -y mysql-client-5.6 software-properties-common && add-apt-repository ppa:ondrej/php5-5.6

RUN apt-get update && \
    apt-get install -y --allow-unauthenticated apache2 apache2-utils libapache2-mod-php5 \
                       php5 php5-mcrypt php5-curl php-pear php5-gd \
                       php5-dev php5-mysql php5-readline php5-xsl php5-xmlrpc php5-intl

# Install composer
RUN curl -sSL https://getcomposer.org/composer.phar -o /usr/bin/composer \
    && chmod +x /usr/bin/composer \
    && composer selfupdate

# Install rvm and compass gem for SASS image compilation

RUN curl https://raw.githubusercontent.com/rvm/rvm/master/binscripts/rvm-installer -o /tmp/rvm-installer.sh && \
        chmod 755 /tmp/rvm-installer.sh && \
        gpg --keyserver hkp://keys.gnupg.net --recv-keys D39DC0E3 && \
        /tmp/rvm-installer.sh stable --path /home/myuser/.rvm --auto-dotfiles --user-install && \
        /home/myuser/.rvm/bin/rvm get stable && \
        /home/myuser/.rvm/bin/rvm reload && \
        /home/myuser/.rvm/bin/rvm autolibs 3

RUN /home/myuser/.rvm/bin/rvm install ruby-2.2.2  && \
        /home/myuser/.rvm/bin/rvm alias create default ruby-2.2.2 && \
        /home/myuser/.rvm/wrappers/ruby-2.2.2/gem install bundler && \
        /home/myuser/.rvm/wrappers/ruby-2.2.2/gem install compass

COPY files/apache2-foreground /usr/local/bin/
EXPOSE 80
CMD ["apache2-foreground"]

When I built this image, I tagged it as my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/apache-php:proj1-development.

With these 2 images as building blocks, I put together 2 more images, one for building artifacts for the Web application, and one for launching it.

Here is the Dockerfile for an image that builds the artifacts for the Web application:

FROM my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/apache-php:proj1-development

ADD ./scripts/app_setup.sh /usr/local/bin/

The heavy lifting takes place in the app_setup.sh script. That's where you would do things such as pull a specified git branch from the application repo on GitHub, then run composer (if it's a PHP app) or other build tools in order to generate the artifacts necessary for running the application. At the end of this script, I generate a tar.gz of the code + any artifacts and upload it to S3 so I can use it when I generate the Docker image for the Web app.
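The post doesn't show app_setup.sh itself, so here is a hypothetical skeleton of what such a script could look like, in dry-run form: the run() helper just prints each command instead of executing it (swap it for eval "$@" in real use), and every path, URL and flag below is an assumption, not the author's actual script:

```shell
#!/bin/bash
# Hypothetical app_setup.sh skeleton, dry run only: run() prints instead of executing.
run() { echo "+ $*"; }

run git clone --branch "$GIT_BRANCH" "$GIT_URL" /tmp/app
run composer install --working-dir=/tmp/app --no-dev
run tar czf "/tmp/$AWS_S3_RELEASE_FILENAME.tar.gz" -C /tmp/app .
run aws s3 --region "$AWS_S3_REGION" cp \
  "/tmp/$AWS_S3_RELEASE_FILENAME.tar.gz" "s3://$AWS_S3_RELEASE_BUCKET/"
```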

When I built this image, I tagged it as my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/appsetup:proj1-development.

To actually run a Docker container based on the appsetup image, I used this docker-compose file:

$ cat docker-compose-appsetup.yml
ECRCredentials:
  environment:
    AWS_REGION: $AWS_REGION
    AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  labels:
    io.rancher.container.pull_image: always
    io.rancher.container.create_agent: 'true'
    io.rancher.container.agent.role: environment
    io.rancher.container.start_once: true
  tty: true
  image: objectpartners/rancher-ecr-credentials
  stdin_open: true

appsetup:
  image: my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/appsetup:proj1-development
  labels:
    io.rancher.container.pull_image: always
  command: /usr/local/bin/app_setup.sh
  external_links:
    - proj1-development-database/db:db
  volumes:
    - volAppShared:/var/www/shared
  volume_driver: proj1-development-nfs
  environment:
    - GIT_URL=$GIT_URL
    - GIT_BRANCH=$GIT_BRANCH
    - AWS_S3_REGION=$AWS_S3_REGION
    - AWS_S3_ACCESS_KEY_ID=$AWS_S3_ACCESS_KEY_ID
    - AWS_S3_SECRET_ACCESS_KEY=$AWS_S3_SECRET_ACCESS_KEY
    - AWS_S3_RELEASE_BUCKET=$AWS_S3_RELEASE_BUCKET
    - AWS_S3_RELEASE_FILENAME=$AWS_S3_RELEASE_FILENAME

Some things to note:
  • the command executed when a Docker container based on the appsetup service is launched is /usr/local/bin/app_setup.sh, as specified in the command directive
    • the app_setup.sh script runs commands that connect to the database, hence the need for the appsetup service to link to the MySQL database running in the proj1-development-database stack launched above; for that, I used the external_links directive
  • the appsetup service mounts an NFS volume (volAppShared) as /var/www/shared
    • the volume_driver needs to be proj1-development-nfs
    • before running the service, I created proper application configuration files under /nfs/development/proj1/volAppShared on the NFS server, specifying things such as the database server name (which needs to be 'db', since that is the alias under which the database container is linked), the database name, user name and password, etc.
  • the appsetup service uses various environment variables referenced in the environment directive; it will pass these variables to the app_setup.sh script
To run the appsetup service, I used another bash script around the rancher-compose command:
$ cat rancher-appsetup.sh
#!/bin/bash
COMMAND=$@
rancher-compose -p proj1-development-appsetup --url $RANCHER_URL --access-key $RANCHER_API_ACCESS_KEY --secret-key $RANCHER_API_SECRET_KEY --env-file .envvars --file docker-compose-appsetup.yml --rancher-file rancher-compose.yml $COMMAND

Tip: When using its Cattle cluster management engine, Rancher does not add services linked to each other as static entries in /etc/hosts on the containers. Instead, it provides an internal DNS service so that containers in the same environment can reach each other by DNS names as long as they link to each other in docker-compose files. If you go to a shell prompt inside a container, you can ping other containers by name even from one Rancher stack to another. For example, from a web container in the proj1-development-app stack you can ping a database container in the proj1-development-database stack linked in the docker-compose file as db and you would get back a name of the type db.proj1-development-app.rancher.internal.
Tip: There is no need to expose ports from containers within the same Rancher environment. I spent many hours troubleshooting issues related to ports and making sure ports are unique across stacks, only to realize that the internal ports that the services listen on (3306 for MySQL, 80 and 443 for Apache) are reachable from the other containers in the same Rancher environment. The only ports you need exposed to the external world in the architecture I am describing are the load balancer ports, as I'll describe below.
Here is the Dockerfile for an image that runs the Web application:
FROM my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/apache-php:proj1-development
# disable interactive functions
ARG DEBIAN_FRONTEND=noninteractive

RUN a2enmod headers \
&& a2enmod rewrite \
&& a2enmod ssl

RUN rm -rf /etc/apache2/ports.conf /etc/apache2/sites-enabled/*
ADD etc/apache2/sites-enabled /etc/apache2/sites-enabled
ADD etc/apache2/ports.conf /etc/apache2/ports.conf

ADD release /var/www/html/release
RUN chown -R myuser:www-data /var/www/html/release
This image is based on the apache-php image but adds Apache customizations, as well as the release directory obtained from the tar.gz file uploaded to S3 by the appsetup service.

When I built this image, I tagged it as my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/app:proj1-development.

Code deployment

My code deployment process is a bash script (which can be used standalone, or as part of a Jenkins job, or can be turned into a Jenkins pipeline) that first runs the appsetup service in order to generate a tar.gz of the code and artifacts, then downloads it from S3 and uses it as the local release directory to be copied into the app image. The script then pushes the app Docker image to Amazon ECR. The environment variables are either defined in an .envvars file or passed via Jenkins parameters. The script assumes that the Dockerfile for the app image is in the current directory, and that the etc directory structure used for the Apache files in the app image is also in the current directory (they are all checked into the project repository, so Jenkins will find them).

./rancher-appsetup.sh up -d
sleep 20
cp /dev/null appsetup.log
while :
do
        ./rancher-appsetup.sh logs >> appsetup.log 2>&1
        grep 'Restarting web server apache2' appsetup.log
        result=$?
        if [ $result -eq 0 ]; then
            break
        fi
        echo Waiting 10 seconds for app code deployment to finish...
        sleep 10
done
./rancher-appsetup.sh logs
./rancher-appsetup.sh down
./rancher-appsetup.sh rm --force
# download release.tar.gz from S3 and unpack it
set -a
. .envvars
set +a
export AWS_ACCESS_KEY_ID=$AWS_S3_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=$AWS_S3_SECRET_ACCESS_KEY
rm -rf $AWS_S3_RELEASE_FILENAME.tar.gz
aws s3 --region $AWS_S3_REGION ls s3://$AWS_S3_RELEASE_BUCKET/
aws s3 --region $AWS_S3_REGION cp s3://$AWS_S3_RELEASE_BUCKET/$AWS_S3_RELEASE_FILENAME.tar.gz .
tar xfz $AWS_S3_RELEASE_FILENAME.tar.gz
# build app docker image and push it to ECR
cat << "EOF" > awscreds[default]aws_access_key_id=$AWS_ACCESS_KEY_IDaws_secret_access_key=$AWS_SECRET_ACCESS_KEYEOF
export AWS_SHARED_CREDENTIALS_FILE=./awscreds $(aws ecr --region=$AWS_REGION get-login)/usr/bin/docker build -t my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/app:proj1-development ./usr/bin/docker push my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/app:proj1-development

Launching the app service

At this point, the Docker image for the app service has been pushed to Amazon ECR, but the service itself hasn't been started. To do that, I use this docker-compose file:

$ cat docker-compose-app.yml
ECRCredentials:
  environment:
    AWS_REGION: $AWS_REGION
    AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  labels:
    io.rancher.container.pull_image: always
    io.rancher.container.create_agent: 'true'
    io.rancher.container.agent.role: environment
    io.rancher.container.start_once: true
  tty: true
  image: objectpartners/rancher-ecr-credentials
  stdin_open: true

app:
  image: my_ecr_registry_id.dkr.ecr.my_region.amazonaws.com/app:proj1-development
  labels:
    io.rancher.container.pull_image: always
  external_links:
    - proj1-development-database/db:db
  volumes:
    - volAppShared:/var/www/shared
  volume_driver: proj1-development-nfs

Nothing is very different about this file compared to the ones I've shown so far. The app service mounts the volAppShared NFS volume as /var/www/shared, and links to the MySQL database service db already running in the proj1-development-database Rancher stack, giving it the name 'db'.

To run the app service, I use this bash script wrapping rancher-compose:

$ cat rancher-app.sh
#!/bin/bash

COMMAND=$@

rancher-compose -p proj1-development-app --url $RANCHER_URL --access-key $RANCHER_API_ACCESS_KEY --secret-key $RANCHER_API_SECRET_KEY --env-file .envvars --file docker-compose-app.yml --rancher-file rancher-compose.yml $COMMAND

Since the proj1-development-app stack may already be running with an old version of the app Docker image, I will invoke rancher-app.sh with the force-upgrade option of the rancher-compose command:

./rancher-app.sh up -d --force-upgrade --confirm-upgrade --pull --batch-size "1"

This will perform a rolling upgrade of the app service by stopping its containers one at a time (as indicated by the batch-size parameter), pulling the latest Docker image for the app service, and then starting each container again. Speaking of containers in the plural, you can specify how many containers should run at all times for the app service by adding these lines to rancher-compose.yml:

app:
  scale: 2

In my case, I want 2 containers to run at all times. If you stop one container from the Rancher UI, you will see another one restarted automatically by Rancher in order to preserve the value specified for the 'scale' parameter.

Creating a load balancer stack

When I started to run load balancers in Rancher, I created them via the Rancher UI: I created a new stack, then added a load balancer service to it. It took me a while to figure out that I could then export the stack configuration to generate a docker-compose file and a rancher-compose snippet to add to my main rancher-compose.yml file.

Here is the docker-compose file I use:

$ cat docker-compose-lbsetup.yml
lb:
  ports:
  - 8000:80
  - 8001:443
  external_links:
  - proj1-development-app/app:app
  labels:
    io.rancher.loadbalancer.ssl.ports: '8001'
    io.rancher.loadbalancer.target.proj1-development-app/app: proj1.dev.mydomain.com:8000=80,8001=443
  tty: true
  image: rancher/load-balancer-service
  stdin_open: true

The ports directive tells the load balancer which ports to expose externally and which ports on the target service to map them to. In this example, port 8000 is exposed externally and mapped to port 80 on the target service, and port 8001 is exposed externally and mapped to port 443 on the target service.

The external_links directive tells the load balancer which service to load balance. In this example, it is the app service in the proj1-development-app stack.

The labels directive configures layer 7 load balancing by letting you specify a domain name whose requests should be sent to a specific port. In this example, I want HTTP requests arriving on port 8000 for proj1.dev.mydomain.com sent to port 80 on the target containers for the app service, and HTTPS requests arriving on port 8001 for the same proj1.dev.mydomain.com name sent to port 443 on the target containers.

I could have also added a new line under labels, specifying that I want requests for proj1-admin.dev.mydomain.com coming on port 8000 to be sent to a different port on the target containers, assuming that I had Apache configured to listen on that port. You can read more about the load balancing features available in Rancher in the documentation.
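As a hypothetical illustration of that extra rule (the proj1-admin hostname and the target port 8080 are assumptions, and the rule follows the comma-separated format of the target label shown above), the labels section might become:

```yaml
  labels:
    io.rancher.loadbalancer.ssl.ports: '8001'
    # extra comma-separated rule: requests for proj1-admin.dev.mydomain.com
    # arriving on port 8000 go to port 8080 in the app containers,
    # assuming Apache is configured to listen on 8080 there
    io.rancher.loadbalancer.target.proj1-development-app/app: proj1.dev.mydomain.com:8000=80,8001=443,proj1-admin.dev.mydomain.com:8000=8080
```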

Here is the load balancer section in rancher-compose.yml:

lb:
  scale: 2
  load_balancer_config:
    haproxy_config: {}
  default_cert: proj1.dev.mydomain.com
  health_check:
    port: 42
    interval: 2000
    unhealthy_threshold: 3
    healthy_threshold: 2
    response_timeout: 2000

Note that there is a mention of a default_cert. This is an SSL key + cert that I uploaded to Rancher via the UI by going to Infrastructure -> Certificates and that I named proj1.dev.mydomain.com. The Rancher Catalog does contain an integration for Let's Encrypt but I haven't had a chance to test it yet (from the Rancher Catalog: "The Let's Encrypt Certificate Manager obtains a free (SAN) SSL Certificate from the Let's Encrypt CA and adds it to Rancher's certificate store. Once the certificate is created it is scheduled for auto-renewal 14-days before expiration. The renewed certificate is propagated to all applicable load balancer services.")

Note also that the scale value is 2, which means that there will be 2 containers for the lb service.

Tip: In the Rancher UI, you can open a shell into any container, or view the logs for any container, by going to the Settings icon of that container and choosing Execute Shell or View Logs.

Tip: Rancher load balancers are based on haproxy. You can open a shell into a container running for the lb service, then look at the haproxy configuration file in /etc/haproxy/haproxy.cfg. To troubleshoot haproxy issues, you can enable UDP logging in /etc/rsyslog.conf by removing the comments before the following 2 lines:

#$ModLoad imudp
#$UDPServerRun 514

then restarting the rsyslog service. After that, you can restart the haproxy service and inspect its log file at /var/log/haproxy.log.
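The uncommenting step can be scripted with sed. Here is a minimal sketch that operates on a demo copy rather than the live /etc/rsyslog.conf (the service restarts are left as comments since they depend on the container):

```shell
# Create a demo file containing the two commented directives,
# then strip the leading '#' with sed, as you would in /etc/rsyslog.conf.
printf '#$ModLoad imudp\n#$UDPServerRun 514\n' > rsyslog.conf.demo
sed -i -e 's/^#\$ModLoad imudp/$ModLoad imudp/' \
       -e 's/^#\$UDPServerRun 514/$UDPServerRun 514/' rsyslog.conf.demo
cat rsyslog.conf.demo
# on the real lb container, follow with:
#   service rsyslog restart && service haproxy restart
#   tail -f /var/log/haproxy.log
```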
To run the lb service, I use this bash script:

$ cat rancher-lbsetup.sh
#!/bin/bash

COMMAND=$@

rancher-compose -p proj1-development-lb --url $RANCHER_URL --access-key $RANCHER_API_ACCESS_KEY --secret-key $RANCHER_API_SECRET_KEY --env-file .envvars --file docker-compose-lbsetup.yml --rancher-file rancher-compose.yml $COMMAND

I want to do a rolling upgrade of the lb service in case anything has changed, so I invoke the rancher-compose wrapper script in a similar way to the one for the app service:

./rancher-lbsetup.sh up -d --force-upgrade --confirm-upgrade --batch-size "1"

Putting it all together in Jenkins

First I created a GitHub repository with the following structure:

  • All docker-compose-*.yml files
  • The rancher-compose.yml file
  • All rancher-*.sh bash scripts wrapping the rancher-compose command
  • A directory for the base Docker image (containing its Dockerfile and any other files that need to go into that image)
  • A directory for the apache-php Docker image
  • A directory for the db Docker image
  • A directory for the appsetup Docker image
  • A Dockerfile in the current directory for the app Docker image
  • An etc directory in the current directory used by the Dockerfile for the app image

Each project/environment combination has a branch created in this GitHub repository. For example, for the proj1 development environment I would create a proj1dev branch which would then contain any customizations I need for this project -- usually stack names, Docker tags, Apache configuration files under the etc directory.
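That branching step can be sketched with plain git. The repository and identity names below are placeholders, and the demo initializes a local repo rather than cloning from GitHub:

```shell
# Stand-in for the real GitHub repository holding the compose files,
# wrapper scripts, and Dockerfile directories.
git init -q demo-rancher-repo
git -C demo-rancher-repo config user.email "jenkins@example.com"   # placeholder identity
git -C demo-rancher-repo config user.name "Jenkins"
git -C demo-rancher-repo commit -q --allow-empty -m "shared compose files and scripts"

# One branch per project/environment combination, e.g. proj1 development,
# holding stack names, Docker tags, and Apache configs under etc:
git -C demo-rancher-repo checkout -q -b proj1dev
git -C demo-rancher-repo rev-parse --abbrev-ref HEAD
```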

My end goal was to use Jenkins to drive the launching of the Rancher services and the deployment of the code. Eventually I will use a Jenkins Pipeline to string together the various steps of the workflow, but for now I have 5 individual Jenkins jobs which all check out the proj1dev branch of the GitHub repo above. The jobs contain shell-type build steps where I call the various rancher-*.sh wrapper scripts around rancher-compose. The Jenkins jobs also take parameters corresponding to the environment variables used in the docker-compose files and in the wrapper scripts. I use the Credentials section in Jenkins to store secrets such as the Rancher API keys, AWS keys, S3 keys, ECR keys, etc. On the Jenkins master and executor nodes I installed the rancher and rancher-compose CLI utilities (I downloaded the rancher CLI from the footer of the Rancher UI).

Job #1 builds the Docker images discussed above: base, apache-php, db, and appsetup (but not the app image yet).

Job #2 runs rancher-nfssetup.sh and rancher-volsetup.sh in order to set up the NFS stack and the volumes used by the dbsetup, appsetup, db and app services.

Job #3 runs rancher-dbsetup.sh and rancher-dblaunch.sh in order to set up the database via the dbsetup service, then launch the db service.

At this point, everything is ready for deployment of the application.

Job #4 is the code deployment job. It runs the sequence of steps detailed in the Code Deployment section above.

Job #5 is the rolling upgrade job for the app service and the lb service. If those services have never been started before, they will get started. If they are already running, they will be upgraded in a rolling fashion, batch-size containers at a time as I detailed above.

When a new code release needs to be pushed to the proj1dev Rancher environment, I would just run job #4 followed by job #5. Obviously you can string these jobs together in a Jenkins Pipeline, which I intend to do next.

Some more Rancher tips and tricks

Quote of the Month September 2016

From the Editor of Methods & Tools - Thu, 09/08/2016 - 15:20
The [Product Owner] role is still incredibly broad, incredibly nuanced and incredibly challenging for a single individual to do well. I’ve probably met 2-300 product owners over the last five or so years and I think perhaps 3-5 of them were doing an outstanding job across all the aspects of the role. So we’re still […]

Breeding Unicorn Software Developers

From the Editor of Methods & Tools - Wed, 09/07/2016 - 16:24
What does the next generation of software developers look like? If I believe the rumors, the guides, the white noise on the internet and what my clients are searching for, the new hire is an excellent, cheap, “T-Shaped” newbie with a lot of experience, the right agile mindset and she’s ready to be a cultural […]

How Michael Bolton and I Collaborate on Articles

James Bach’s Blog - Mon, 09/05/2016 - 07:28

(Someone posted a question on Quora asking how Michael and I write articles together. This is the answer I gave, there.)

It begins with time. We take our time. We rarely write on a deadline, except for fun, self-imposed deadlines that we can change if we really want to. For Michael and me, the quality of our writing always dominates over any other consideration.

Next is our commitment to each other. Neither one of us can contemplate releasing an article that the other of us is not proud of and happy with. Each of us gets to “stop ship” at any time, for any reason. We develop a lot of our work through debate, and sometimes the debate gets heated. I have had many colleagues over the years who tired of my need to debate even small issues. Michael understands that. When our debating gets too hot, as it occasionally does, we know how to stop, take a break if necessary, and remember our friendship.

Then comes passion for the subject. We don’t even try to write articles about things we don’t care about. Otherwise, we couldn’t summon the energy for the debate and the study that we put into our work. Michael and I are not journalists. We don’t function like reporters talking about what other people do. You will rarely find us quoting other people in our work. We speak from our own experiences, which gives us a sort of confidence and authority that comes through in our writing.

Our review process also helps a lot. Most of the work we do is reviewed by other colleagues. For our articles, we use more reviewers. The reviewers sometimes give us annoying responses, and they generally aren’t as committed to debating as we are. But we listen to each one and do what we can to answer their concerns without sacrificing our own vision. The responses can be annoying when a reviewer reads something into our article that we didn’t put there; some assumption that may make sense according to someone else’s methodology but not for our way of thinking. But after taking some time to cool off, we usually add more to the article to build a better bridge to the reader. This is especially true when more than one reviewer has a similar concern. Ultimately, of course, pleasing people is not our mission. Our mission is to say something true, useful, important, and compassionate (in that order of priority, at least in my case). Note that “amiable” and “easy to understand” or “popular” are not on that short list of highest priorities.

As far as the mechanisms of collaboration go, it depends on who “owns” it. There are three categories of written work: my blog, Michael’s blog, and jointly authored standalone articles. For the latter, we use Google Docs until we have a good first draft. Sometimes we write simultaneously on the same paragraph; more normally we work on different parts of it. If one of us is working on it alone he might decide to re-architect the whole thing, subject, of course, to the approval of the other.

After the first full draft (our recent automation article went through 28 revisions, according to Google Docs, over 14 weeks, before we reached that point), one of us will put it into Word and format it. At some point one of us will become the “article boss” and manage most of the actual editing to get it done, while the other one reviews each draft and comments. One heuristic of reviewing we frequently use is to turn change-tracking off for the first re-read, if there have been many changes. That way whichever of us is reviewing is less likely to object to a change based purely on attachment to the previous text, rather than having an actual problem with the new text.

For the blogs, usually we have a conversation, then the guy who’s going to publish it on his blog writes a draft and does all the editing while getting comments from the other guy. The publishing party decides when to “ship” but will not do so over the other party’s objections.

I hope that makes it reasonably clear.

(Thanks to Michael Bolton for his review.)

Categories: Testing & QA

Attitude versus Knowledge in Software Development

From the Editor of Methods & Tools - Tue, 08/30/2016 - 08:52
In her article for the Summer 2016 issue of Methods & Tools, “Hiring for Agility,” Nadia Smith suggested considering an interesting difference between the need to recruit for “Agile”, defined as the knowledge and experience of Agile practices, and to recruit for “agility”, defined as attitude, values and behavior. Without focusing on Agile, this approach […]

Software Development Conferences Forecast August 2016

From the Editor of Methods & Tools - Wed, 08/24/2016 - 15:34
Here is a list of software development related conferences and events on Agile project management ( Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP), DevOps and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods […]

GTAC 2014 Wrap-up

Google Testing Blog - Sun, 08/21/2016 - 17:33
by Anthony Vallone on behalf of the GTAC Committee

On October 28th and 29th, GTAC 2014, the eighth GTAC (Google Test Automation Conference), was held at the beautiful Google Kirkland office. The conference was completely packed with presenters and attendees from all over the world (Argentina, Australia, Canada, China, many European countries, India, Israel, Korea, New Zealand, Puerto Rico, Russia, Taiwan, and many US states), bringing with them a huge diversity of experiences.


Speakers from numerous companies and universities (Adobe, American Express, Comcast, Dropbox, Facebook, FINRA, Google, HP, Medidata Solutions, Mozilla, Netflix, Orange, and University of Waterloo) spoke on a variety of interesting and cutting edge test automation topics.

All of the slides and video recordings are now available on the GTAC site. Photos will be available soon as well.


This was our most popular GTAC to date, with over 1,500 applicants and almost 200 of those for speaking. About 250 people filled our venue to capacity, and the live stream had a peak of about 400 concurrent viewers with 4,700 playbacks during the event. And, there was plenty of interesting Twitter and Google+ activity during the event.


Our goal in hosting GTAC is to make the conference highly relevant and useful for, not only attendees, but the larger test engineering community as a whole. Our post-conference survey shows that we are close to achieving that goal:



If you have any suggestions on how we can improve, please comment on this post.

Thank you to all the speakers, attendees, and online viewers who made this a special event once again. To receive announcements about the next GTAC, subscribe to the Google Testing Blog.

Categories: Testing & QA

GTAC 2015 Coming to Cambridge (Greater Boston) in November

Google Testing Blog - Sun, 08/21/2016 - 17:33
Posted by Anthony Vallone on behalf of the GTAC Committee


We are pleased to announce that the ninth GTAC (Google Test Automation Conference) will be held in Cambridge (Greatah Boston, USA) on November 10th and 11th (Toozdee and Wenzdee), 2015. So, tell everyone to save the date for this wicked good event.

GTAC is an annual conference hosted by Google, bringing together engineers from industry and academia to discuss advances in test automation and the test engineering computer science field. It’s a great opportunity to present, learn, and challenge modern testing technologies and strategies.

You can browse presentation abstracts, slides, and videos from previous years on the GTAC site.

Stay tuned to this blog and the GTAC website for application information and opportunities to present at GTAC. Subscribing to this blog is the best way to get notified. We're looking forward to seeing you there!

Categories: Testing & QA

GTAC 2015: Call for Proposals & Attendance

Google Testing Blog - Sun, 08/21/2016 - 17:32
Posted by Anthony Vallone on behalf of the GTAC Committee

The GTAC (Google Test Automation Conference) 2015 application process is now open for presentation proposals and attendance. GTAC will be held at the Google Cambridge office (near Boston, Massachusetts, USA) on November 10th - 11th, 2015.

GTAC will be streamed live on YouTube again this year, so even if you can’t attend in person, you’ll be able to watch the conference remotely. We will post the live stream information as we get closer to the event, and recordings will be posted afterward.

Speakers
Presentations are targeted at student, academic, and experienced engineers working on test automation. Full presentations are 30 minutes and lightning talks are 10 minutes. Speakers should be prepared for a question and answer session following their presentation.

Application
For presentation proposals and/or attendance, complete this form. We will be selecting about 25 talks and 200 attendees for the event. The selection process is not first come first serve (no need to rush your application), and we select a diverse group of engineers from various locations, company sizes, and technical backgrounds (academic, industry expert, junior engineer, etc).

Deadline
The due date for both presentation and attendance applications is August 10th, 2015.

Fees
There are no registration fees, but speakers and attendees must arrange and pay for their own travel and accommodations.

More information
You can find more details at developers.google.com/gtac.

Categories: Testing & QA

The Deadline to Apply for GTAC 2015 is Monday Aug 10

Google Testing Blog - Sun, 08/21/2016 - 17:32
Posted by Anthony Vallone on behalf of the GTAC Committee


The deadline to apply for GTAC 2015 is this Monday, August 10th, 2015. There is a great deal of interest to both attend and speak, and we’ve received many outstanding proposals. However, it’s not too late to submit your proposal for consideration. If you would like to speak or attend, be sure to complete the form by Monday.

We will be making regular updates to the GTAC site (developers.google.com/gtac/2015/) over the next several weeks, and you can find conference details there.

For those that have already signed up to attend or speak, we will contact you directly by mid-September.

Categories: Testing & QA

Announcing the GTAC 2015 Agenda

Google Testing Blog - Sun, 08/21/2016 - 17:31
by Anthony Vallone on behalf of the GTAC Committee 

We have completed the selection and confirmation of all speakers and attendees for GTAC 2015. You can find the detailed agenda at: developers.google.com/gtac/2015/schedule.

Thank you to all who submitted proposals!

There is a lot of interest in GTAC once again this year with about 1400 applicants and about 200 of those for speaking. Unfortunately, our venue only seats 250. We will livestream the event as usual, so fret not if you were not selected to attend. Information about the livestream and other details will be posted on the GTAC site soon and announced here.

Categories: Testing & QA