Software Development Blogs: Programming, Software Testing, Agile, Project Management

Methods & Tools

Subscribe to Methods & Tools if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Social Side of Code, Database CI and REST API Testing in Methods & Tools Winter 2015 issue

From the Editor of Methods & Tools - Mon, 01/11/2016 - 14:29
Methods & Tools – the free e-magazine for software developers, testers and project managers – has published its Winter 2015 issue that discusses the social side of code, database continuous integration and testing REST APIs. Methods & Tools Winter 2015 issue content: * Meet the Social Side of Your Codebase * Database Continuous Integration * […]

Quote of the Day

Herding Cats - Glen Alleman - Mon, 01/11/2016 - 00:47

The impression that "our problems are different" is a common disease that afflicts management the world over. They are different, to be sure, but the principles that will help improve the quality of product and service are universal in nature. - W. Edwards Deming

So when we hear that some new and controversial conjecture is the solution to the smell of dysfunction, ask for tangible, testable, verifiable evidence that this new approach is not a solution looking for a problem to solve.

Categories: Project Management

SPaMCAST 376 – Women In Tech, Microservices, Capabilities and More

Software Process and Measurement Cast - Sun, 01/10/2016 - 23:00

Listen Now

Subscribe on iTunes

This week we are doing something special. Right after the New Year holiday, all of the regulars from the Software Process and Measurement Cast gathered virtually to discuss the topics we felt would be important in 2016.  The panel for the discussion was comprised of Jeremy Berriault (The QA Corner), Steve Tendon (The TameFlow Approach), Kim Pries (The Software Sensei), Gene Hughson (Form Follows Function) and myself. We had a lively discussion that included the topics of women in tech, microservices, capabilities, business/IT integration and a lot more.

Help grow the podcast by reviewing the SPaMCAST on iTunes, Stitcher or your favorite podcatcher/player and then share the review! Help your friends find the Software Process and Measurement Cast. After all, friends help friends find great podcasts!

Re-Read Saturday News

We continue the re-read of How to Measure Anything: Finding the Value of "Intangibles" in Business, Third Edition by Douglas W. Hubbard on the Software Process and Measurement Blog. In Chapter 4, we focused on two questions. The first is getting the reader to answer what decision the measurement is supposed to support. The second is, what is the definition of the thing being measured in terms of observable consequences?

Upcoming Events

I am facilitating the CMMI Capability Challenge. This new competition showcases thought leaders who are building organizational capability and improving performance. Listeners will be asked to vote on the winning idea, which will be presented at the CMMI Institute’s Capability Counts 2016 conference. The next CMMI Capability Challenge session will be held on January 12 at 1 PM EST.

http://cmmiinstitute.com/conferences#thecapabilitychallenge

The Challenge will continue on February 17th at 11 AM.

In other events, I will give a webinar titled "Discover The Quality of Your Testing Process" on January 19, 2016, at 11:00 am EST.
Organizations that seek to understand and improve their current testing capabilities can use the Test Maturity Model integration (TMMi) as a guide for best practices. The TMMi is the industry standard model of testing capabilities. Comparing your testing organization’s performance to the model provides a gap analysis and outlines a path towards greater capabilities and efficiency. This webinar will walk attendees through a testing assessment that delivers a baseline of performance and a set of prioritized process improvements.

Next SPaMCAST

The next Software Process and Measurement Cast will feature our essay on empathy. Coaching is a key tool to help individuals and teams reach peak performance. One of the key attributes of a good coach is empathy. Critical to understanding the role that empathy plays in coaching is understanding the definition of empathy. As a coach, if you can’t connect with those you are coaching, you will not succeed.

We will also have new columns from Kim Pries (The Software Sensei) and Gene Hughson (Form Follows Function).

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself, was published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.


Categories: Process Management


Uptime Funk - Best Sysadmin Parody Video Ever!

This is so good! Perfect for your Monday morning jam.

 

Uptime Funk is a music video (parody of Uptown Funk) from SUSECon 2015 in Amsterdam. My favorite:

I'm all green (hot patch)
Called a Penguin and Chameleon
I'm all green (hot patch)
Call Torvalds and Kroah-Hartman
It’s too hot (hot patch)
Yo, say my name you know who I am
It’s too hot (hot patch)
I ain't no simple code monkey
Nuthin's down
Categories: Architecture

How To Measure Anything, Third Edition, Chapter 4: Clarifying the Measurement Problem

HTMA

How to Measure Anything: Finding the Value of "Intangibles" in Business, Third Edition

Chapter 4 of How to Measure Anything: Finding the Value of "Intangibles" in Business, Third Edition, is titled "Clarifying the Measurement Problem." In this chapter Hubbard focuses on two questions. The first is getting the reader to answer what decision the measurement is supposed to support. The second is what is the definition of the thing being measured in terms of observable consequences. These questions sound very basic; however, I found myself asking variations of them more than once recently when reviewing a relatively mature measurement program. Answering these questions is often at the heart of defining the real mission of any measurement or measurement program and whether a measure will have value.

Chapter 4 begins the second section of the book titled Before you Measure, in which Hubbard begins a deeper dive into his measurement approach initially identified in Chapter One.  The framework is:

  1. Define the decision. This step includes defining not only the dilemma we are attempting to solve with measurement, but also all of the relevant variables in unambiguous terms. Chapter 4 focuses on this step.
  2. Determine what you know. This step is about determining the amount of uncertainty in the decision and measures defined in step 1.
  3. Compute the value of additional information. If the value of additional information is low, then you are done (go to step 5).
  4. Measure where the information value is high. After collecting new data, repeat steps 2 and 3 until further measurement does not provide enough additional value.
  5. Make a decision; act upon it.
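The loop in steps 2 through 4 can be sketched in a few lines of Python. This is only an illustration of the framework's shape, not Hubbard's actual method: the `value_of_information` and `measure` functions are hypothetical stand-ins for computations the book develops later.

```python
# Minimal sketch of steps 2-4 of the framework above. The functions
# passed in are hypothetical stand-ins, not Hubbard's actual formulas.
def measurement_loop(uncertainty, value_of_information, measure, cost_threshold=1.0):
    """Keep measuring while additional information is worth more than it costs."""
    while value_of_information(uncertainty) > cost_threshold:
        uncertainty = measure(uncertainty)  # step 4: measure where value is high
    return uncertainty  # step 5: make a decision and act on it

# Toy usage: each measurement halves the remaining uncertainty, and the
# value of more information is proportional to that uncertainty.
remaining = measurement_loop(10.0, lambda u: u, lambda u: u / 2)
```

The point of the sketch is the stopping rule: measurement continues only while the next measurement is expected to be worth its cost.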

All measurement begins by defining the decision that will be made. The question you need to ask and answer is: what is the problem or dilemma for which a decision needs to be made? In order to truly answer that question, you need to articulate, clearly and unambiguously, the specific action the measurement will inform. Failure to correctly identify the purpose will lead to debates later when the ambiguities are exposed. For example, I recently listened to a debate on whether an organization’s productivity (defined as delivered functionality per staff month of effort) had increased. The debate had broken down into a fierce argument over what delivered functionality meant and whose effort would be included in the definition of a staff month. All of these ambiguities stemmed from a lack of finality on what decision the organization was trying to make with the measure, and therefore what needed to be measured.

Part of the definitional problem is a failure to understand the requirements needed to make a decision. Hubbard provides a set of criteria that need to be met in a decision-making scenario:

  • A decision requires two or more realistic alternatives.
  • A decision has uncertainty.
  • A decision has risk. Risk is the potential for negative consequences if the wrong alternative is taken.
  • A decision has a decision maker.

Failure to meet any one of these criteria means you are not facing a decision-making scenario. For example, if you were deciding whether to go out for lunch and there were no restaurants, food trucks or hot dog carts, you would not really have a decision to make. Likewise, if there were only one restaurant, there would be no realistic alternatives to choose between, and therefore no decision to be made.

Modeling a decision is a mechanism to lay bare any remaining ambiguity. Any decision can be modeled. For example, weighted shortest job first (WSJF), a technique for prioritization (a form of decision making), is often used to model which piece of work should be done first. Models can be simple, such as a cost-benefit analysis, or complex, such as a computer simulation or Monte Carlo analysis. Hubbard suggests that every decision is modeled, even if the model is only expressed through intuition.
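The WSJF model mentioned above is simple enough to show concretely. This sketch uses invented jobs and cost-of-delay figures purely for illustration; the scoring rule (cost of delay divided by job duration, highest score first) is the standard WSJF formulation.

```python
# Illustrative WSJF prioritization: score = cost of delay / job duration.
# The jobs and their numbers are made up for the example.
jobs = [
    {"name": "A", "cost_of_delay": 8, "duration": 2},
    {"name": "B", "cost_of_delay": 9, "duration": 9},
    {"name": "C", "cost_of_delay": 3, "duration": 1},
]

for job in jobs:
    job["wsjf"] = job["cost_of_delay"] / job["duration"]

# The highest-scoring (shortest, most delay-sensitive) job goes first.
ranked = sorted(jobs, key=lambda j: j["wsjf"], reverse=True)
```

Note how job B, despite having the highest cost of delay, ranks last because it is also the longest: the model makes that trade-off explicit instead of leaving it to intuition.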

Decisions require risk and uncertainty. As with many other measurement concepts, understanding what risk and uncertainty mean is critical to being able to measure anything. The chapter concludes with a discussion of the definitions of uncertainty and risk.

Uncertainty is the lack of complete certainty about the outcome of a decision. Uncertainty reflects that there exists more than one possible outcome of a decision.

Measurement of uncertainty is a set of probabilities assigned to the possible outcomes. For example, there are two possibilities for the weather tomorrow: precipitation or no precipitation. The measurement of uncertainty might be expressed as a 60% chance of rain (from which a 40% chance of no rain can be inferred).

Risk is the uncertainty that a loss or some other bad thing will occur.

Measurement of risk is a quantification of the set of possibilities that combines the probability of occurrence with the quantified impact of an outcome. For example, the risk in a decision to spend thirty dollars on perishable food for a picnic could be expressed as a 60% chance of rain tomorrow with a potential loss of $30 for the picnic lunch.
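The picnic example above reduces to one multiplication: a probability-weighted loss, which is a common way to express risk as a single number.

```python
# The picnic example: risk quantified as probability of the bad
# outcome times its impact.
p_rain = 0.60        # measured uncertainty: 60% chance of rain tomorrow
loss_if_rain = 30.0  # the $30 spent on perishable picnic food

expected_loss = p_rain * loss_if_rain  # dollars of risk carried by the decision
```

Expressing risk this way makes alternatives comparable: a decision with a 60% chance of losing $30 carries the same expected loss as one with a 30% chance of losing $60.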

Clarifying the measurement problem requires defining what we mean. Definition begins with unambiguously defining the decision to be made. Once we know the decision that needs to be made, we can define and measure uncertainty and risk for each of the possible outcomes.

Previous Installments in Re-read Saturday, How to Measure Anything: Finding the Value of "Intangibles" in Business, Third Edition

How To Measure Anything, Third Edition, Introduction

HTMA, Chapter 1

HTMA, Chapter 2

HTMA, Chapter 3


Categories: Process Management

Get your app featured on the first smartphone with Project Tango from Lenovo

Android Developers Blog - Sat, 01/09/2016 - 04:56

Originally posted on Google Developers Blog

Posted by Johnny Lee, Technical Project Lead, Project Tango

Today, at CES, Lenovo announced the development of the first consumer-ready smartphone with Project Tango. By adding a few extra sensors and some computer vision software, Project Tango transforms your smartphone into a magic lens that lets you place digital information on your physical world.


*Renderings only. Not the official Lenovo device.

To support the continued growth of the ecosystem, we’re also inviting developers from around the world to submit their ideas for gaming and utility apps created using Project Tango. We’ll pick the best ideas and provide funding and engineering support to help bring them to life, as part of the app incubator. Even better, the finished apps will be featured on Lenovo’s upcoming device. The submission period closes on February 15, 2016.

All you need to do is tell us about your idea and explain how Project Tango technologies will enable new experiences. Additionally, we’ll ask you to include the following materials:

  • Project schedule including milestones for development (we’ll reach out to the selected developers by March 15, 2016)
  • Visual mockups of your idea including concept art
  • Smartphone app screenshots and videos, such as captured app footage
  • Appropriate narrative including storyboards, etc.
  • Breakdown of your team and its members
  • One pager introducing your past app portfolio and your company profile

For some inspiration, Lowe's Home Improvement teamed with developer Elementals Web to demonstrate a use case they are each working on for the launch. In the app, you can point your Project Tango-enabled smartphone at your kitchen to see where a new refrigerator or dishwasher might fit virtually.


Elsewhere, developer Schell Games lets you play virtual Jenga on any surface with friends. But this time, there is no cleanup involved when the blocks topple over.


There are also some amazing featured apps for Project Tango on Google Play. You can pick up your own Project Tango Tablet Development Kit here to brainstorm new fun and immersive experiences that use the space around you. Apply now!

Categories: Programming

Running headless Selenium WebDriver tests in Docker containers

Agile Testing - Grig Gheorghiu - Sat, 01/09/2016 - 00:24
In my previous post, I showed how to install firefox in headless mode on an Ubuntu box and how to use Xvfb to allow Selenium WebDriver scripts to run against firefox in headless mode.

Here I want to show how to run each Selenium test suite in a Docker container, so that each suite gets access to its own firefox browser. This makes it easy to parallelize the test runs, and thus allows you to load test your Web infrastructure with real-life test cases.

Install docker-engine on Ubuntu 14.04

We import the dockerproject.org signing key and apt repo into our apt repositories, then we install the linux-image-extra and docker-engine packages.

# apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
# echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" > /etc/apt/sources.list.d/docker.list
# apt-get update
# apt-get install linux-image-extra-$(uname -r)
# apt-get install docker-engine

Start the docker service and verify that it is operational

Installing docker-engine actually starts up docker as well, but to start the service you do:

# service docker start

To verify that the docker service is operational, run a container based on the public “hello-world” Docker image:

# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b901d36b6f2f: Pull complete
0a6ba66e537a: Pull complete
Digest: sha256:8be990ef2aeb16dbcb9271ddfe2610fa6658d13f6dfb8bc72074cc1ca36966a7
Status: Downloaded newer image for hello-world:latest
Hello from Docker.
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker Hub account:
https://hub.docker.com
For more examples and ideas, visit:
https://docs.docker.com/userguide/

Pull the ubuntu:trusty public Docker image
# docker pull ubuntu:trusty
trusty: Pulling from library/ubuntu
fcee8bcfe180: Pull complete
4cdc0cbc1936: Pull complete
d9e545b90db8: Pull complete
c4bea91afef3: Pull complete
Digest: sha256:3a7f4c0573b303f7a36b20ead6190bd81cabf323fc62c77d52fb8fa3e9f7edfe
Status: Downloaded newer image for ubuntu:trusty

# docker images
REPOSITORY    TAG      IMAGE ID       CREATED        VIRTUAL SIZE
ubuntu        trusty   c4bea91afef3   3 days ago     187.9 MB
hello-world   latest   0a6ba66e537a   12 weeks ago   960 B

Build custom Docker image for headless Selenium WebDriver testing
I created a directory called selwd on my host Ubuntu 14.04 box, and in that directory I created this Dockerfile:

FROM ubuntu:trusty
RUN echo "deb http://ppa.launchpad.net/mozillateam/firefox-next/ubuntu trusty main" > /etc/apt/sources.list.d/mozillateam-firefox-next-trusty.list
RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys CE49EC21
RUN apt-get update
RUN apt-get install -y firefox xvfb python-pip
RUN pip install selenium
RUN mkdir -p /root/selenium_wd_tests
ADD sel_wd_new_user.py /root/selenium_wd_tests
ADD xvfb.init /etc/init.d/xvfb
RUN chmod +x /etc/init.d/xvfb
RUN update-rc.d xvfb defaults
CMD (service xvfb start; export DISPLAY=:10; python /root/selenium_wd_tests/sel_wd_new_user.py)

This Dockerfile tells docker, via the FROM instruction, to create an image based on the ubuntu:trusty image that we pulled before (if we hadn’t pulled it, it would be pulled the first time our image was built). The various RUN instructions specify commands to be run at build time. The above instructions add the Firefox Beta repository and key to the apt repositories inside the image, then install firefox, xvfb and python-pip. Then they install the selenium Python package via pip and create a directory structure for the Selenium tests.

The ADD instructions copy local files to the image. In my case, I copy one Selenium WebDriver Python script, and an init.d-type file for starting Xvfb as a service (by default it starts in the foreground, which is not something I want inside a Docker container).

The last two RUN instructions make the /etc/init.d/xvfb script executable and run update-rc.d to install it as a service. The xvfb script is the usual init.d wrapper around a command, in my case this command:

PROG="/usr/bin/Xvfb"
PROG_OPTIONS=":10 -ac"

Here is a gist for the xvfb.init script for reference.

Finally, the CMD instruction specifies what gets executed when a container based on this image starts up (assuming no other commands are given in the ‘docker run’ command-line for this container). The CMD instruction in the Dockerfile above starts up the xvfb service (which connects to DISPLAY 10 as specified in the xvfb init script), sets the DISPLAY environment variable to 10, then runs the Selenium WebDriver script sel_wd_new_user.py, which will launch firefox in headless mode and execute its commands against it.

Here’s the official documentation for Dockerfile instructions. To build a Docker image based on this Dockerfile, run:

# docker build -t selwd:v1 .

selwd is the name of the image and v1 is a tag associated with this name. The dot . tells docker to look for a Dockerfile in the current directory.

The build process will take a while initially because it will install all the dependencies necessary for the packages we are installing with apt. Every time you make a modification to the Dockerfile, you need to run ‘docker build’ again, but subsequent runs will be much faster.

Run Docker containers based on the custom image
At this point, we are ready to run Docker containers based on the selwd image we created above.

Here’s how to run a single container:

# docker run --rm selwd:v1
In this format, the command specified in the CMD instruction inside the Dockerfile will get executed, then the container will stop. This is exactly what we need: we run our Selenium WebDriver tests against headless firefox, inside their own container isolated from any other container.

The output of the ‘docker run’ command above is:


Starting : X Virtual Frame Buffer
.
----------------------------------------------------------------------
Ran 1 test in 40.441s

OK

(or a traceback if the Selenium test encountered an error)

Note that we also specified the --rm flag to ‘docker run’ so that the container gets removed once it stops; otherwise these short-lived containers will be kept around and will pile up, as you can see for yourself if you run:

# docker ps -a
CONTAINER ID   IMAGE      COMMAND                  CREATED          STATUS                       PORTS   NAMES
6c9673e59585   selwd:v1   "/bin/bash"              5 minutes ago    Exited (130) 5 seconds ago           modest_mccarthy
980651e1b167   selwd:v1   "/bin/sh -c '(service"   9 minutes ago    Exited (0) 8 minutes ago             stupefied_turing
4a9b2f4c8c28   selwd:v1   "/bin/sh -c '(service"   13 minutes ago   Exited (0) 12 minutes ago            nostalgic_ride
9f1fa953c83b   selwd:v1   "/bin/sh -c '(service"   13 minutes ago   Exited (0) 12 minutes ago            admiring_ride
c15b180832f6   selwd:v1   "/bin/sh -c '(service"   13 minutes ago   Exited (0) 12 minutes ago            jovial_booth
...etc

If you do have large numbers of containers that you want to remove in one go, use this command:

# docker rm `docker ps -aq`

For troubleshooting purposes, we can run a container in interactive mode (with the -i and -t flags) and specify a shell command to be executed on startup, which will override the CMD instruction in the Dockerfile:

# docker run -it selwd:v1 /bin/bash
root@6c9673e59585:/#
At the bash prompt, you can run the shell commands specified by the Dockerfile CMD instruction in order to see interactively what is going on. The official ‘docker run’ documentation has lots of details.

One other thing I found useful for troubleshooting Selenium WebDriver scripts running against headless firefox was to have the scripts take screenshots during their execution with the save_screenshot command:

driver.save_screenshot("before_place_order.png")

# Click Place Order
driver.find_element_by_xpath("//*[@id='order_submit_button']").click()

driver.save_screenshot("after_place_order.png")

I then inspected the PNG files to see what was going on.

Running multiple Docker containers for load testing

Because our Selenium WebDriver tests run isolated in their own Docker containers, we can run N containers in parallel to do a poor man’s load testing of our site. We’ll use the -d option to ‘docker run’ to run each container in ‘detached’ mode. Here is a bash script that launches COUNT Docker containers, where COUNT is the 1st command line argument, or 2 by default:


#!/bin/bash
COUNT=$1
if [ -z "$COUNT" ]; then
  COUNT=2
fi

for i in `seq 1 $COUNT`; do
  docker run -d selwd:v1
done

The output of the script consists of a list of container IDs, one for each container that was launched.

Note that if you launch a container in detached mode with -d, you can’t specify the --rm flag to have the container removed automatically when it stops. You will need to periodically clean up your containers with the command I referenced above (docker rm `docker ps -aq`).
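Since the test scripts themselves are Python, the bash launcher above can also be sketched in Python with subprocess. This is an illustrative alternative, not part of the original post; it assumes the docker CLI is on the PATH and the selwd:v1 image exists, and the `runner` parameter is a hook added here so the function can be exercised without docker.

```python
# Rough Python equivalent of the bash launcher above (assumes the
# 'docker' CLI and the selwd:v1 image built earlier in the post).
import subprocess

def launch_containers(count, image="selwd:v1", runner=subprocess.check_output):
    """Start `count` detached containers and return their container IDs."""
    ids = []
    for _ in range(count):
        # 'docker run -d' prints the new container's ID on stdout
        out = runner(["docker", "run", "-d", image])
        ids.append(out.decode().strip())
    return ids
```

The returned IDs can then be fed to ‘docker logs’ as shown below to inspect each container’s test output.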

To inspect the output of the Selenium scripts in the containers that were launched, first get the container IDs:
# docker ps -a
CONTAINER ID   IMAGE      COMMAND                  CREATED             STATUS                         PORTS   NAMES
6fb931689c03   selwd:v1   "/bin/sh -c '(service"   About an hour ago   Exited (0) About an hour ago           grave_northcutt
1b82ef59ad46   selwd:v1   "/bin/sh -c '(service"   About an hour ago   Exited (0) About an hour ago           admiring_fermat

Then run ‘docker logs <container_id>’ to see the output for a specific container:
# docker logs 6fb931689c03
Starting : X Virtual Frame Buffer
.
----------------------------------------------------------------------
Ran 1 test in 68.436s

OK
Have fun load testing your site!

Stuff The Internet Says On Scalability For January 8th, 2016

Hey, it's HighScalability time:


Finally, a clear diagram of Amazon's industry impact. (MARK A. GARLICK)

 

If you like this Stuff then please consider supporting me on Patreon.
  • 150: # of globular clusters in the Milky Way; 800 million: Facebook Messenger users; 180,000: high-res images of the past; 1 exaflops: 1 million trillion floating-point operations per second; 10%: of Google's traffic is now IPv6; 100 milliseconds: time it takes to remember; 35: percent of all US Internet traffic used by Netflix; 125 million: hours of content delivered each day by Netflix's CDN;

  • Quotable Quotes:
    • Erik DeBenedictis: We could build an exascale computer today, but we might need a nuclear reactor to power it
    • wstrange: What I really wish the cloud providers would do is reduce network egress costs. They seem insanely expensive when compared to dedicated servers.
    • rachellaw: What's fascinating is the bot-bandwagon is mirroring the early app market. With apps, you downloaded things to do things. With bots, you integrate them into things, so they'll do it for you. 
    • erichocean: The situation we're in today with RAM is pretty much the identical situation with the disks of yore.
    • @bernardgolden: @Netflix will spend 2X what HBO does on programming in 2016? That's an amazing stat. 
    • @saschasegan: Huawei's new LTE modem has 18 LTE bands. Qualcomm's dominance of LTE is really ending this year.
    • Unruly Places: The rise of placelessness, on top of the sense that the whole planet is now minutely known and surveilled, has given this dissatisfaction a radical edge, creating an appetite to find places that are off the map and that are somehow secret, or at least have the power to surprise us.
    • @mjpt777: Queues are everywhere. Recognise them, make them first class, model and monitor them for telemetry.
    • Guido de Croon:  the robot exploits the impending instability of its control system to perceive distances. This could be used to determine when to switch off its propellers during landing, for instance.
    • @gaberivera: In the future, all major policy questions will be settled by Twitter debates between venture capitalists
    • Craig McLuckie: It’s not obvious until you start to actually try to run massive numbers of services that you experience an incredible productivity that containers bring
    • Brian Kirsch: One of the biggest things when you look at the benefits of container-based virtualization is its ability to squeeze more and more things onto a single piece of hardware for cost savings. While that is good for budgets, it is excessively horrible when things go bad.
    • @RichardWarburto: It still surprises me that configuration is most popular user of strong consistency models atm. Is config more important than data
    • @jamesurquhart: Five years ago I predicted CFO would stop complaining about up front cost, and start asking to reduce monthly bill. Seeing that happen now.
    • @martinkl: Communities in a nutshell… • Databases research: “In fsync we trust” • Distributed systems research: “In majority vote we trust”
    • @BoingBoing: Tax havens hold $7.6 trillion; 8% of world's total wealth
    • @DrQz: Amazon's actual profits are still tiny, relying heavily on its AWS cloud business.
    • hadagribble: we need to view fast storage as something other than disk behind a block interface and slow memory, especially with all the different flavours of fast persistent storage that seem to be on the horizon. For the one's that attach to the memory bus, the PMFS-style [1] approach of treating them like a file-system for discoverability and then mmaping to allow them to be accessed as memory is pretty attractive.

  • EC2 with a 5% price reduction on certain things in certain places. Not exactly the race to the bottom one would hope for in a commodity market, which means the cloud is not a commodity. Happy New Year – EC2 Price Reduction (C4, M4, and R3 Instances).

  • Since the locus of the Internet is centering on a command line interface in the form of messaging, chatbot integrations may be giving APIs a second life, assuming they are let inside the walled garden. The next big thing in computing is called 'ChatOps,' and it's already happening inside Slack. The advantage chatops has over the old Web + API mashup dream is that messaging platforms come built-in with a business model/app store, a large and growing user base, and network effects. Facebook’s Secret Chat SDK Lets Developers Build Messenger Bots. Slack apps. WeChat API. Telegram API. Alexa API. Google's Voice Actions. How about Siri or iMessage? Nope. njovin likes it: I've worked with the new Chat SDK and our customers' use cases aren't geared toward forcing (or even encouraging) users into using Facebook Messenger. Most of them are just trying to meet demand from their customers. In our particular case, we have customers with a lot of international travelers who have access to data while abroad but not necessarily SMS. IMO it's a lot better than having a dedicated app you have to download to interact with a specific brand.

  • The world watched a lot of porn this year. If you like analytics you'll love Pornhub’s 2015 Year in Review: In 2015 alone, we streamed 75GB of data a second; bandwidth used is 1,892 petabytes; 4,392,486,580 hours of video were watched; 21.2 billion visits.

  • A very interesting way to frame the issue. On the dangers of a blockchain monoculture: The Bitcoin blockchain: the world’s worst database. Would you use a database with these features?
    • Uses approximately the same amount of electricity per transaction as could power an average American household for a day.
    • Supports 3 transactions / second across a global network with millions of CPUs/purpose-built ASICs.
    • Takes over 10 minutes to “commit” a transaction.
    • Doesn’t acknowledge accepted writes: requires you to read your writes, but at any given time you may be on a blockchain fork, meaning your write might not actually make it into the “winning” fork of the blockchain (and no, just making it into the mempool doesn’t count). In other words: “blockchain technology” cannot by definition tell you if a given write is ever accepted/committed except by reading it out of the blockchain itself (and even then).
    • Can only be used as a transaction ledger denominated in a single currency, or to store/timestamp a maximum of 80 bytes per transaction.
    • But it’s decentralized!

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

How to Win in the New Year

Making the Complex Simple - John Sonmez - Fri, 01/08/2016 - 14:00

“Dry January”, “turning over a new leaf”, “the new me” are all terms you hear regularly from friends, family, and colleagues as a new year dawns. A month later and 88% of people who set New Year’s resolutions have failed already. This is a shocking statistic. Why, as humans, are we so bad at keeping […]

The post How to Win in the New Year appeared first on Simple Programmer.

Categories: Programming

Get your app featured on the first smartphone with Project Tango from Lenovo

Google Code Blog - Fri, 01/08/2016 - 03:30

Posted by Johnny Lee, Technical Project Lead, Project Tango

Today, at CES, Lenovo announced the development of the first consumer-ready smartphone with Project Tango. By adding a few extra sensors and some computer vision software, Project Tango transforms your smartphone into a magic lens that lets you place digital information on your physical world.


*Renderings only. Not the official Lenovo device.

To support the continued growth of the ecosystem, we’re also inviting developers from around the world to submit their ideas for gaming and utility apps created using Project Tango. We’ll pick the best ideas and provide funding and engineering support to help bring them to life, as part of the app incubator. Even better, the finished apps will be featured on Lenovo’s upcoming device. The submission period closes on February 15, 2016.

All you need to do is tell us about your idea and explain how Project Tango technologies will enable new experiences. Additionally, we’ll ask you to include the following materials:

  • Project schedule including milestones for development –– we’ll reach out to the selected developers by March 15, 2016
  • Visual mockups of your idea including concept art
  • Smartphone app screenshots and videos, such as captured app footage
  • Appropriate narrative including storyboards, etc.
  • Breakdown of your team and its members
  • One pager introducing your past app portfolio and your company profile

For some inspiration, Lowe's Home Improvement teamed with developer Elementals Web to demonstrate a use case they are each working on for the launch. In the app, you can point your Project Tango-enabled smartphone at your kitchen to see where a new refrigerator or dishwasher might fit virtually.


Elsewhere, developer Schell Games lets you play virtual Jenga on any surface with friends. But this time, there is no cleanup involved when the blocks topple over.


There are also some amazing featured apps for Project Tango on Google Play. You can pick up your own Project Tango Tablet Development Kit here to brainstorm new fun and immersive experiences that use the space around you. Apply now!

Categories: Programming

Top Ten Blog Entries 2015

2015!

2015!

We continue showcasing our 2015 efforts with the ten most accessed blog articles written in 2015.  As with the recap of the ten most downloaded episodes of the Software Process and Measurement Cast, the blog entries that caught our readers’ eyes were widely varied.  I am thrilled that two entries from our Re-Read Saturday feature made the top ten list, because the Software Process and Measurement Blog tends to get its most traffic on weekdays, which often means that Saturday and Sunday posts are seen less. The inference is that the blog is being accessed primarily from work (we work very hard to keep the blog and the podcast safe for the office for exactly that reason).

1. Budgeting, Estimation, Planning, #NoEstimates and the Agile Planning Onion

2. The Difference Between a Persona and an Actor

3. Re-read Saturday: Consolidating Gains and Producing More Change, John P. Kotter Chapter 9

4. The House of Lean, or Is That The House of Quality?

5. Prioritization: Weighted Shortest Job First

6. You Are Not Agile If . . .

7. Project Management Is Dead (Refined)

8. You Are Not Agile (If You Only Do Scrum)

9. Re-read Saturday: Anchoring New Approaches in The Culture, John P. Kotter Chapter 10

10. DevOps Primer: Definition

Almost every article was part of a two-to-four article thread that allowed us to explore a topic in depth. I must note that we build a mind map for each thread, adding topics and ideas as we research and write the articles. In almost every case, as we finish a thread, we can conclude with the phrase, “to be continued.”

For those that like numbers, traffic on the blog was up by 16.7% between 2014 and 2015 even though we went from daily publishing to four times a week. We may experiment with publishing days later this year but the frequency feels just about right.

We are working on a two month rolling content calendar for 2016 and would be happy to have your input on each of the topics and on future topics. Planned threads for the remainder of January and February:

  1. Contrasting Peer Reviews and Gate Reviews
  2. Agile Acceptance Testing – Revisited and Refined
  3. Minimum Viable Product
  4. Leadership is Not a Task

Current Re-Read Saturday Book:

  • How to Measure Anything – Douglas W. Hubbard

I will be polling for the next book in the series sometime in February. However, feel free to suggest books now (maybe even something from the list that our 2015 podcast interviewees suggested).

Thank you for your eyes, thoughts, ideas, comments, likes, tweets, and reblogs. We will work hard to continue to bring great content to you in 2016!


Categories: Process Management

Running Selenium WebDriver tests using Firefox headless mode on Ubuntu

Agile Testing - Grig Gheorghiu - Thu, 01/07/2016 - 23:58
Selenium IDE is a very good tool for recording and troubleshooting Selenium tests, but you are limited to clicking around in a GUI. For a better testing workflow, including load testing, you need to use Selenium WebDriver, which can programmatically drive a browser and run Selenium test cases.

In its default mode, WebDriver will launch a browser and run the test scripts in the browser, then exit. If you like to work exclusively from the command line, then you need to look into running the browser in headless mode. Fortunately, this is easy to do with Firefox on Ubuntu. Here’s what you need to do:

Install the official Firefox Beta PPA:
$ sudo apt-add-repository ppa:mozillateam/firefox-next

(this will add the file /etc/apt/sources.list.d/mozillateam-firefox-next-trusty.list and also fetch the PPA’s key, which enables your Ubuntu system to verify that the packages in the PPA have not been interfered with since they were built)

Run apt-get update:
$ sudo apt-get update

Install firefox and xvfb (the X windows virtual framebuffer) packages:
$ sudo apt-get install firefox xvfb

Run Xvfb in the background and specify a display number (10 in my example):
$ Xvfb :10 -ac &

Set the DISPLAY variable to the number you chose:
$ export DISPLAY=:10

Test that you can run firefox in the foreground with no errors:
$ firefox
(kill it with Ctrl-C)

Now run your regular Selenium WebDriver scripts (no modifications required if they already use Firefox as their browser).
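If you drive your test runs from a script, the manual Xvfb steps above can be wrapped from Python as well. This is a sketch: the helper names are mine, the display number is the same arbitrary :10 as above, and Xvfb must actually be installed for the launch to succeed.

```python
import os
import subprocess

def xvfb_command(display=10):
    """Command line for the Xvfb step above: Xvfb :10 -ac."""
    return ["Xvfb", ":%d" % display, "-ac"]

def headless_env(display=10):
    """Copy of the current environment with DISPLAY pointed at Xvfb."""
    env = os.environ.copy()
    env["DISPLAY"] = ":%d" % display
    return env

# Usage (requires Xvfb to be installed, as above):
#   xvfb = subprocess.Popen(xvfb_command())
#   subprocess.check_call(["python", "selenium_webdriver_new_user.py"],
#                         env=headless_env())
#   xvfb.terminate()
```

The same environment dictionary works for any command that needs the virtual display, not just the Selenium script.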

Here is an example of a script I have written in Python, which clicks on a category link in an e-commerce store, adds an item to the cart, then starts filling out the user’s information in the cart:
# -*- coding: utf-8 -*-
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.common.exceptions import NoAlertPresentException
import unittest, random

class SelWebdriverNewUser(unittest.TestCase):
  def setUp(self):
    self.driver = webdriver.Firefox()
    self.driver.implicitly_wait(20)
    self.base_url = "http://myhost.mycompany.com/"
    self.verificationErrors = []
    self.accept_next_alert = True

  def test_sel_webdriver_new_user(self):
    driver = self.driver
    HOST = "myhost.mycompany.com"
    RANDINT = int(random.random() * 10000)
    driver.get("https://" + HOST)

    # Click on category link
    driver.find_element_by_xpath("//*[@id='nav']/ol/li[3]/a").click()
    # Click on sub-category link
    driver.find_element_by_xpath("//*[@id='top']/body/div/div[2]/div[2]/div/div[2]/ul/li[4]/a/span").click()
    # Click on product image
    driver.find_element_by_xpath("//*[@id='product-collection-image-374']").click()
    # Click Checkout button
    driver.find_element_by_xpath("//*[@id='checkout-button']/span/span").click()
    driver.find_element_by_id("billing:firstname").clear()
    driver.find_element_by_id("billing:firstname").send_keys("selenium%d_fname" % RANDINT)
    driver.find_element_by_id("billing:lastname").clear()
    driver.find_element_by_id("billing:lastname").send_keys("selenium%d_lname" % RANDINT)
    # Click Place Order
    driver.find_element_by_xpath("//*[@id='order_submit_button']").click()

  def is_element_present(self, how, what):
    try: self.driver.find_element(by=how, value=what)
    except NoSuchElementException: return False
    return True

  def is_alert_present(self):
    try: self.driver.switch_to_alert()
    except NoAlertPresentException: return False
    return True

  def close_alert_and_get_its_text(self):
    try:
      alert = self.driver.switch_to_alert()
      alert_text = alert.text
      if self.accept_next_alert:
        alert.accept()
      else:
        alert.dismiss()
      return alert_text
    finally: self.accept_next_alert = True

  def tearDown(self):
    self.driver.quit()
    self.assertEqual([], self.verificationErrors)

if __name__ == "__main__":
  unittest.main()

To run this script, you first need to install the selenium Python package:
$ sudo pip install selenium

Then run the script (called selenium_webdriver_new_user.py in my case):
$ python selenium_webdriver_new_user.py

After hopefully not so long of a wait, you should see a successful test run:
$ python selenium_webdriver_new_user.py
.
----------------------------------------------------------------------
Ran 1 test in 29.317s

OK

A few notes regarding Selenium WebDriver scripts.

I was stumped for a while when I was trying to use the “find_element_by_id” form of finding an HTML element on a Web page. It was working fine in Selenium IDE, but Selenium WebDriver couldn’t find that element. I had to resort to finding all elements via their XPath using “find_element_by_xpath”. Fortunately, Chrome, for example, makes it easy to right-click an element on a page, choose Inspect, then right-click the HTML code for the element and choose Copy -> Copy XPath to get its XPath, which can then be pasted into the Selenium WebDriver script.

I also had to use time.sleep(N) (where N is in seconds for Python) to wait for certain elements of the page to load asynchronously. I know it’s not best practice, but it works.
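A more robust pattern than a fixed sleep is to poll for a condition with a timeout. Selenium packages this as WebDriverWait with expected_conditions, but the core idea fits in a few lines as a generic sketch (my own helper, not Selenium’s API):

```python
import time

def wait_until(predicate, timeout=20, poll=0.5):
    """Poll predicate() until it returns a truthy value or the timeout
    expires. Returns the truthy value, else raises TimeoutError.
    This mirrors what Selenium's WebDriverWait does for element lookups,
    but without a fixed worst-case sleep on every call."""
    deadline = time.time() + timeout
    while True:
        result = predicate()
        if result:
            return result
        if time.time() >= deadline:
            raise TimeoutError("condition not met within %ss" % timeout)
        time.sleep(poll)

# Hypothetical usage inside the test class above:
#   wait_until(lambda: self.is_element_present("id", "billing:firstname"))
```

Unlike time.sleep(N), this returns as soon as the element appears, and only pays the full timeout when something is genuinely wrong.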

SE-Radio Show 246: John Wilkes on Borg and Kubernetes

John Wilkes from Google talks with Charles Anderson about managing large clusters of machines. The discussion starts with Borg, Google’s internal cluster management program. John discusses what Borg does and what it provides to programmers and system administrators. He also describes Kubernetes, an open-source cluster management system recently developed by Google using lessons learned from […]
Categories: Programming


Google Drive: Uploading & downloading files plus the new v3 API redux

Google Code Blog - Thu, 01/07/2016 - 20:51

Originally posted on Google Apps Developer Blog

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Apps

In case you missed it last week, the Google Drive team announced the release of the next version of their API. Today, we dig deeper into details about the release with developers. In the latest edition of the Launchpad Online developer video series, you'll get everything you need to know about the new release (v3), as well as its relationship with the previous version (v2).

This jam-packed episode features an introduction to the new API, an interview with a Google Drive engineer about the API design and a code walkthrough of real source code you can use today (as with all my Launchpad Online episodes). This time, it's a command-line script that performs Google Drive file uploads and downloads, presented first in v2 followed by a how-to segment on migrating it step-by-step to v3. In addition, the uploading segment includes the option of converting to Google Apps formats while the download portion covers exporting to alternative formats such as PDF®.


To get started using the Drive API, check out the links to the official documentation above (v2 or v3) where you’ll also find quickstart samples in a variety of programming languages to the left. For a deeper dive into both Python code samples covered here, including v3 migration, start with the first of two related posts posted to my blog.

If you’re new to the Launchpad Online, we share technical content aimed at novice Google developers: current tools with a little bit of code to help you launch your next app. Please give us your feedback below and tell us what topics you would like to see in future episodes!

Categories: Programming

Google Apps Script: Tracking add-on usage with Google Analytics

Google Code Blog - Thu, 01/07/2016 - 17:16

Originally posted on Google Apps Developers blog

Posted by Romain Vialard, a Google Developer Expert and developer of Yet Another Mail Merge, a Google Sheets add-on.

Google Apps Script makes it easy to create and publish add-ons for Google Sheets, Docs, and Forms. There are now hundreds of add-ons available and many are reaching hundreds of thousands of users. Google Analytics is one of the best tools to learn what keeps those users engaged and what should be improved to make an add-on more successful.

Cookies and User Identification

Add-ons run inside Google Sheets, Docs, and Forms where they can display content in dialogs or sidebars. These custom interfaces are served by the Apps Script HTML service, which offers client-side HTML, CSS, and JS with a few limitations.

Among those limitations, cookies aren’t persistent. The Google Analytics cookie will be recreated each time a user re-opens your dialog or sidebar, with a new client ID every time. So, Analytics will see each new session as if initiated by a new user, meaning the number of sessions and number of users should be very similar.


Fortunately, it’s possible to use localStorage to store the client ID — a better way to persist user information instead of cookies. After this change, your user metrics should be far more accurate.

Add-ons can also run via triggers, executing code at a recurring interval or when a user performs an action like opening a document or responding to a Google Form. In those cases, there’s no dialog or sidebar, so you should use the Google Analytics Measurement Protocol (see policies on the use of this service) to send user interaction data directly to Google Analytics servers via the UrlFetch service in Google Apps Script.

A Client ID is also required in that case, so I recommend using the Apps Script User properties service. Most examples on the web show how to generate a unique Client ID for every call to Analytics but this won’t give you an accurate user count.

You can also send the client ID generated on client side to the server so as to use the same client ID for both client and server calls to Analytics, but at this stage, it is best to rely on the optional User ID in Google Analytics. While the client ID represents a client / device, the User ID is unique to each user and can easily be used in add-ons as users are authenticated. You can generate a User ID on the server side, store it among the user properties, and reuse it for every call to Analytics (both on the client and the server side).

Custom Dimensions & Metrics

In add-ons, we usually rely on event tracking and not page views. It is possible to add different parameters on each event thanks to categories, actions, labels and value, but it’s also possible to add much more info by using custom dimensions & metrics.

For example, the Yet Another Mail Merge add-on is mostly used to send emails, and we have added many custom dimensions to better understand how it is used. For each new campaign (batch of emails sent), we record data linked to the user (e.g. free or paying customer, gmail.com or Google for Work / EDU user) and data linked to the campaign (e.g. email size, email tracking activated or not). You can then reuse those custom dimensions inside custom reports & dashboards.


Once you begin to leverage all that, you can get very insightful data. Until October 2015, Yet Another Mail Merge let you send up to 100 emails per day for free. But we’ve discovered with Analytics that most people sending more than 50 emails in one campaign were actually sending 100 emails - all the free quota they could get - but we failed to motivate them to switch to our paid plan.


As a result of this insight, we have reduced this free plan to 50 emails/day and at the same time introduced a referral program, letting users get more quota for free (they still don’t pay but they invite more users so it’s interesting for us). With this change, we have greatly improved our revenue and scaled user growth.

Of course, we also use Google Analytics to track the efficiency of our referral program.

To help you get started in giving you more insight into your add-ons, below are some relevant pages from our documentation on the tools described in this post. We hope this information will help your apps become more successful!:


Romain Vialard profile | website

Romain Vialard is a Google Developer Expert. After some years spent as a Google Apps consultant, he is now focused on products for Google Apps users, including add-ons such as Yet Another Mail Merge and Form Publisher.

Categories: Programming

How To Write Professional Emails That People Won’t Ignore

Making the Complex Simple - John Sonmez - Thu, 01/07/2016 - 17:00

Have you ever tried to write a professional email but failed? Your boss ignored you, you didn’t get that job, etc… The thing is that there are certain points that need to be highlighted in order to write a winning email. In this video I highlight some important things you should have in mind in […]

The post How To Write Professional Emails That People Won’t Ignore appeared first on Simple Programmer.

Categories: Programming

Let's Donate Our Organs and Unused Cloud Cycles to Science

There’s a long history of donating spare compute cycles for worthy causes. Most of those efforts were started in the Desktop Age. Now, in the Cloud Age, how can we donate spare compute capacity? How about through a private spot market?

There are cycles to spare. Public Cloud Usage trends:

  • Instances are underutilized with average utilization rates between 8-9%

  • 24% of instance reservations are unused

Maybe all that CapEx sunk into Reserved Instances can be put to some use? Maybe over provisioned instances could be added to the resource pool as well? That’s a lot of power Captain. How could it be put to good use?

There is a need to crunch data. For science. Here’s a great example as described in This is how you count all the trees on Earth. The idea is simple: from satellite pictures count the number of trees. It’s an embarrassingly parallel problem, perfect for the cloud. NASA had a problem. Their cloud is embarrassingly tiny. 400 hypervisors shared amongst many projects. Analysing all the data would take 10 months. An unthinkable amount of time in this Real-time Age. So they used the spot market on AWS.

The upshot? The test run cost a measly $80, which means that NASA can process data collected for an entire UTM zone for just $250. The cost for all 11 UTM zones in sub-Saharan Africa and the use of all four satellites comes in at just $11,000.
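Those figures hang together as simple back-of-envelope arithmetic (a sketch, assuming the quoted $250 per-zone cost is for a single satellite):

```python
cost_per_zone_per_satellite = 250   # USD, from the article
utm_zones = 11                      # sub-Saharan Africa
satellites = 4

total = cost_per_zone_per_satellite * utm_zones * satellites
print(total)  # 11000 USD, matching the quoted $11,000
```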

“We have turned what was a $200,000 job into a $10,000 job and we went from 100 days to 10 days [to complete],” said Hoot. “That is something scientists can build easily into their budget proposals.”

That last quote, That is something scientists can build easily into their budget proposals, stuck in my craw.

Imagine how much science could get done if you didn’t have the budget proposal process slowing down the future? Especially when we know there are so many free cycles available that are already attached to well supported data processing pipelines. How could those cycles be freed up to serve a higher purpose?

Netflix shows the way with their internal spot market. Netflix has so many cloud resources at their disposal, a pool of 12,000 unused reserved instances at peak times, that they created their own internal spot market to drive better utilization. The whole beautiful setup is described in Creating Your Own EC2 Spot Market, Creating Your Own EC2 Spot Market -- Part 2, and High Quality Video Encoding at Scale.

The win: By leveraging the internal spot market Netflix measured the equivalent of a 210% increase in encoding capacity.
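The mechanism is easy to caricature in code. Here is a toy sketch of the idea, not Netflix’s implementation: idle reserved capacity goes into a pool, batch jobs borrow instances from it, and the owner preempts borrowers back when real traffic needs the capacity.

```python
class SpotPool:
    """Toy internal spot market: lend idle reserved instances,
    preempt them when the owner needs capacity back."""

    def __init__(self, reserved):
        self.reserved = reserved   # total reserved instances
        self.owner_used = 0        # instances running primary workloads
        self.borrowed = []         # borrower job IDs, oldest first

    def idle(self):
        return self.reserved - self.owner_used - len(self.borrowed)

    def borrow(self, job_id):
        """A batch job (e.g. a video encode) takes one idle instance."""
        if self.idle() <= 0:
            return False
        self.borrowed.append(job_id)
        return True

    def reclaim(self, n):
        """Owner traffic grows by n instances: preempt newest borrowers
        first, since they have the least work to lose."""
        preempted = []
        self.owner_used += n
        while self.reserved - self.owner_used < len(self.borrowed):
            preempted.append(self.borrowed.pop())
        return preempted
```

A real system also has to deal with instance shapes, priorities, and checkpointing preempted work; this only captures the pool/preempt cycle that makes the idle reservations usable at all.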

Netflix has a long and glorious history of sharing and open sourcing their tools. It seems likely when they perfect their spot market infrastructure it could be made generally available.

Perhaps the Netflix spot market could be extended so unused resources across the Clouds could advertise themselves for automatic integration into a spot market usable by scientists to crunch data and solve important world problems.

Perhaps donated cycles could even be charitable contributions that could help offset the cost of the resource? My wife is a tax accountant and she says this is actually true, under the right circumstances.

This kind of idea has a long history with me. When AWS first started, I, like a lot of people, wondered: how can I make money off this gold rush? That was before we knew Amazon itself was going to make most of the tools to sell to the miners. The idea of exploiting underutilized resources fascinated me for some reason. That is, after all, what VMs do for physical hardware: exploit the underutilized resources of powerful machines. And it is in some ways the idea behind our modern economy. Yet even today software architectures aren’t such that we reach anything close to full utilization of our hardware resources.

What I wanted to do was create a memcached system that allowed developers to sell their unused memory capacity (and later CPU, network, and storage) to other developers as cheap dynamic pools of memcached storage. Get your cache dirt cheap, and developers could make some money back on underused resources. A very similar idea to the spot market notion. But without homomorphic encryption the security issues were daunting, even assuming Amazon would allow it. With the advent of the Container Age, sharing a VM is now far more secure, and Amazon shouldn’t have a problem with the idea if it’s for science. I hope.

Categories: Architecture

Quote of the Month January 2016

From the Editor of Methods & Tools - Wed, 01/06/2016 - 16:19
Testing by itself does not improve software quality. Test results are an indicator of quality, but in and of themselves, they don’t improve it. Trying to improve software quality by increasing the amount of testing is like trying to lose weight by weighing yourself more often. What you eat before you step onto the scale […]