Software Development Blogs: Programming, Software Testing, Agile, Project Management

Feed aggregator

More tips and tricks for running Gatling in Docker containers

Agile Testing - Grig Gheorghiu - Fri, 07/01/2016 - 18:11
This post is a continuation of my previous one on "Running Gatling load tests in Docker containers via Jenkins". As I continued to set up Jenkins jobs to run Gatling tests, I found the need to separate those tests for different environments - development, staging and production. The initial example I showed contained a single setup, which is not suitable for multiple environments.

Here is my updated Gatling directory structure:

gatling
gatling/conf
gatling/conf/production
gatling/conf/production/gatling.conf
gatling/conf/staging
gatling/conf/staging/gatling.conf
gatling/Dockerfile
gatling/results
gatling/user-files
gatling/user-files/data
gatling/user-files/data/production-urls.csv
gatling/user-files/data/staging-urls.csv
gatling/user-files/simulations
gatling/user-files/simulations/development
gatling/user-files/simulations/production
gatling/user-files/simulations/production/Simulation.scala
gatling/user-files/simulations/staging
gatling/user-files/simulations/staging/Simulation.scala

Note that I created a separate directory under simulations for each environment (development, staging, production), each with its own simulation files.

I also created a data directory under user-files, because that is the default location for CSV files used by Gatling feeders.

Most importantly, I created a separate configuration directory (staging, production) under gatling/conf, each directory containing its own customized gatling.conf file. I started by copying the gatling-defaults.conf file from GitHub to gatling/conf/staging/gatling.conf and gatling/conf/production/gatling.conf respectively.

Here is what I customized in staging/gatling.conf:

mute = true # When set to true, don't ask for simulation name nor run description
simulations = user-files/simulations/staging

I customized production/gatling.conf in a similar way:

mute = true # When set to true, don't ask for simulation name nor run description
simulations = user-files/simulations/production
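
For context, both of these settings live inside nested HOCON blocks in gatling.conf, so in the actual file the staging edits sit roughly like this (surrounding settings omitted; structure as I recall it from gatling-defaults.conf):

gatling {
  core {
    mute = true    # When set to true, don't ask for simulation name nor run description
    directory {
      simulations = user-files/simulations/staging
    }
  }
}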

Setting mute to true is important because, without it, running Gatling in a Docker container would crash with a NullPointerException while waiting for user input for the simulation ID:

Select simulation id (default is 'gatlingsimulation'). Accepted characters are a-z, A-Z, 0-9, - and _
Exception in thread "main" java.lang.NullPointerException
	at io.gatling.app.Selection$Selector.loop$1(Selection.scala:127)
	at io.gatling.app.Selection$Selector.askSimulationId(Selection.scala:135)
	at io.gatling.app.Selection$Selector.selection(Selection.scala:50)
	at io.gatling.app.Selection$.apply(Selection.scala:33)
	at io.gatling.app.Gatling.runIfNecessary(Gatling.scala:75)
	at io.gatling.app.Gatling.start(Gatling.scala:65)
	at io.gatling.app.Gatling$.start(Gatling.scala:57)
	at io.gatling.app.Gatling$.fromArgs(Gatling.scala:49)
	at io.gatling.app.Gatling$.main(Gatling.scala:43)
	at io.gatling.app.Gatling.main(Gatling.scala)

The other customization was to point the simulations attribute to the specific staging or production sub-directories.
Since the CSV files containing URLs to be load tested are also environment-specific, I modified the Simulation.scala files to take this into account. I also added 2 JAVA_OPTS variables that can be passed at runtime for HTTP basic authentication. Here is the new Crawl object (compare with the one from my previous post):
object Crawl {

  val feeder = csv("staging-urls.csv").random

  val userName = System.getProperty("username")
  val userPass = System.getProperty("password")

  val crawl = exec(feed(feeder)
    .exec(http("${loc}")
    .get("${loc}")
    .basicAuth(userName, userPass)))
}
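
As an aside, the duplication between the staging and production Simulation.scala files could be reduced further by reading the CSV file name from a system property too, just like the credentials. Here is a minimal sketch (the -Durls property is hypothetical, not part of my actual setup; it assumes the same imports as the simulation file):

object Crawl {
  // Hypothetical: choose the URL file at runtime via -Durls=production-urls.csv;
  // falls back to the staging file when the property is not set.
  val urlsFile = System.getProperty("urls", "staging-urls.csv")
  val feeder = csv(urlsFile).random

  val userName = System.getProperty("username")
  val userPass = System.getProperty("password")

  val crawl = exec(feed(feeder)
    .exec(http("${loc}")
    .get("${loc}")
    .basicAuth(userName, userPass)))
}
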
One more thing is needed: making Gatling use a specific configuration file instead of its default conf/gatling.conf. To do that, I set GATLING_CONF as an ENV variable in the Dockerfile, so its value can be overridden via a 'docker run' command-line parameter. Here is the Dockerfile:
# Gatling is a highly capable load testing tool.
#
# Documentation: http://gatling.io/docs/2.2.2/
# Cheat sheet: http://gatling.io/#/cheat-sheet/2.2.2

FROM java:8-jdk-alpine

MAINTAINER Denis Vazhenin

# working directory for gatling
WORKDIR /opt

# gatling version
ENV GATLING_VERSION 2.2.2

# create directory for gatling install
RUN mkdir -p gatling

# install gatling
RUN apk add --update wget && \
  mkdir -p /tmp/downloads && \
  wget -q -O /tmp/downloads/gatling-$GATLING_VERSION.zip \
  https://repo1.maven.org/maven2/io/gatling/highcharts/gatling-charts-highcharts-bundle/$GATLING_VERSION/gatling-charts-highcharts-bundle-$GATLING_VERSION-bundle.zip && \
  mkdir -p /tmp/archive && cd /tmp/archive && \
  unzip /tmp/downloads/gatling-$GATLING_VERSION.zip && \
  mv /tmp/archive/gatling-charts-highcharts-bundle-$GATLING_VERSION/* /opt/gatling/

# change context to gatling directory
WORKDIR /opt/gatling

# set directories below to be mountable from host
VOLUME ["/opt/gatling/conf", "/opt/gatling/results", "/opt/gatling/user-files"]

# set environment variables
ENV PATH /opt/gatling/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV GATLING_HOME /opt/gatling
ENV GATLING_CONF /opt/gatling/conf
ENV JAVA_OPTS ""

ENTRYPOINT ["gatling.sh"]
Finally, here is how I invoke 'docker run' to tie everything together:
docker run --rm \
  -v ${WORKSPACE}/gatling/conf:/opt/gatling/conf \
  -v ${WORKSPACE}/gatling/user-files:/opt/gatling/user-files \
  -v ${WORKSPACE}/gatling/results:/opt/gatling/results \
  -e GATLING_CONF="/opt/gatling/conf/staging" \
  -e JAVA_OPTS="-Dusers=$USERS -Dduration=$DURATION -Dusername=myusername -Dpassword=mypass" \
  /PATH/TO/DOCKER/REGISTRY/gatling
Note the GATLING_CONF variable passed with -e, with the value /opt/gatling/conf/staging. Also note the username and password parameters passed via JAVA_OPTS.

Happy load testing!

Building for Billions

Google Code Blog - Fri, 07/01/2016 - 17:37

Originally posted on Android Developers blog

Posted by Sam Dutton, Ankur Kotwal, Developer Advocates; Liz Yepsen, Program Manager

‘TOP-UP WARNING.’ ‘NO CONNECTION.’ ‘INSUFFICIENT BANDWIDTH TO PLAY THIS RESOURCE.’

These are common warnings for many smartphone users around the world.

To build products that work for billions of users, developers must address key challenges: limited or intermittent connectivity, device compatibility, varying screen sizes, high data costs, and short-lived batteries. We first presented developers.google.com/billions and related Android and Web resources at Google I/O last month, and today you can watch the video presentations about Android or the Web.

These best practices can help developers reach billions by delivering exceptional performance across a range of connections, data plans, and devices. g.co/dev/billions will help you:

Seamlessly transition between slow, intermediate, and offline environments

Your users move from place to place, from speedy wireless to patchy or expensive data. Manage these transitions by storing data, queueing requests, optimizing image handling, and performing core functions entirely offline.

Provide the right content for the right context

Keep context in mind - how and where do your users consume your content? Selecting text and media that works well across different viewport sizes, keeping text short (for scrolling on the go), providing a simple UI that doesn’t distract from content, and removing redundant content can all increase perception of your app’s quality while giving real performance gains like reduced data transfer. Once these practices are in place, localization options can grow audience reach and increase engagement.

Optimize for mobile hardware

Ensure your app or Web content is served and runs well for your widest possible addressable market, covering all actively used OS versions, while still following best practices, by testing on virtual or actual devices in target markets. Native Android apps should set minimum and target SDKs. Also, remember low cost phones have smaller amounts of RAM; apps should therefore adjust usage accordingly and minimize background running. For in-depth information on minimizing APK size, check out this series of Medium posts. On the Web, optimize JavaScript CPU usage, avoid raster image rendering, and minimize resource requests. Find out more here.

Reduce battery consumption

Low cost phones usually have shorter battery life. Users are sensitive to battery consumption levels, and excessive consumption can lead to a high uninstall rate or avoidance of your site. Benchmark your battery usage against sessions on other pages or apps, or by using tools such as Battery Historian, and avoid long-running processes which drain batteries.

Conserve data usage

Whatever you’re building, conserve data usage in three simple steps: understand loading requirements, reduce the amount of data required for interaction, and streamline navigation so users get what they want quickly. Conserving data on behalf of your users (and with native apps, offering configurable network usage) helps retain data-sensitive users -- especially those on prepaid plans or contracts with limited data -- as even “unlimited” plans can become expensive when roaming or if unexpected fees are applied.

Have another insight, or a success launching in low-connectivity conditions or on low-cost devices? Let us know on our G+ post.

Categories: Programming

Stuff The Internet Says On Scalability For July 1st, 2016

Hey, it's HighScalability time:


If you can't explain it with Legos then you don't really understand it.


If you like this sort of Stuff then please support me on Patreon.

  • 700 trillion: more pixels in Google's Satellite Map; 9,000km: length of new undersea internet cable from Oregon to Japan; 60 terabits per second: that undersea internet cable again; 12%: global average connection speed increase; 76%: WeChat users who spend more than 100RMB ($15) per month; 5 liters: daily pay in beer for Pyramid workers; 680: number of rubber bands it takes to explode a watermelon; 1,000: new Amazon services this year; $15 billion: amount Uber has raised; 7 million: # of feathers on each bird in Piper; 5.8 million: square feet in the Tesla Gigafactory; 2x: full-duplex chip could double phone-network data capacity.

  • Quotable Quotes:
    • @hyc_symas: A shame everyone is implementing on top of HTTP today. Contemporary "protocol design" is a sick joke.
    • @f3ew: Wehkamp lost dev and accept environments 5 days before launch. Shit happens.  48 hours to recovery. #devopsdays
    • Greg Linden: Ultimately, [serverless computing] this is a good thing, making compute more efficient by allowing more overlapping workloads and making it easier to move compute around. But it does seem like compute on demand could cannibalize renting VMs.
    • @viktorklang: What if we started doing only single-core chips with massive eDRAM on-package and PCI-E peer-writes MPI between? (Micro-blade machines?)
    • Robert Graham: Programmers fetishize editors. Real programmers, those who produce a lot of code, fetishize debuggers
    • @jasonhand: "Systems are becoming more ephemeral and we have to find a way to deal with it" (regarding monitoring) - @adrianco #monitorama
    • @aphyr: Queues *can* improve fault tolerance, but this only happens if they don't lose your messages. The only one I know of that doesn't is Kafka.
    • Tom Simon: There are, accordingly, two ways of reading books; but infinitely many ways to divide up the act of reading into two classes.
    • Puppet: High-performing IT organizations deploy 200 times more frequently than low performers, with 2,555 times faster lead times.
    • @benzobot: “The system scaled with the number of engineers - more engineers, more metrics.” #monitorama
    • fizx: You seem to have installation confused with administration. Off the top of my head you forgot security, monitoring, logging config, backups, handling common production issues such as splitbrains, write multiplication, garbage collection snafus, upgrades between versions with questionably compatible internal apis.
    • @Mark_J_Perry: This has to be one of the most remarkable achievements ever: Global Poverty Fell Below 10% for 1st Time in 2015
    • ewams: If you are a services company, he is right, you should be focusing on outcomes. But, if you can't tell me in 2-3 sentences what problem you are solving and how it benefits the customer you are doing it wrong.
    • @retrohack3r: Dance like nobody is watching. Encrypt like everyone is.
    • @GundersenMarius: ES6 + HTTP/2 + Service Workers + Bloom-filter = efficient module loading without bundlin mariusgundersen.net/module-pusher/ 
    • @timperrett: Distributed systems are about protocols, not implementations. Forget languages, protocols are everything.
    • steveblank: What’s holding large companies back?...companies bought into the false premise that they exist to maximize shareholder value – which said “keep the stock price high.” As a consequence, corporations used metrics like return on net assets (RONA), return on capital deployed, and internal rate of return (IRR) to measure efficiency. These metrics make it difficult for a company that wants to invest in long-term innovation.
    • Greg Linden: Like a lot of things at Amazon, this went through many stages of wrong before we got it right. As I remember it, this went through some unpleasant, heavyweight, and inefficient RPC (esp. CORBA) and pub-sub architectures before an unsanctioned skunkworks project built iquitos for lightweight http-based microservices 
    • @rawrthis:  "We all die." Except my legacy stack. That crap will live forever. #devopsdays
    • Jim Handy: The industry already has more than enough DRAM wafer capacity for the foreseeable future. Why is this happening?  The answer is relatively simple: the gigabytes per wafer on a DRAM wafer are growing faster than the market’s demand for gigabytes.
    • SwellJoe: A container doesn't consume complexity and emit order. The complexity is still in there; you still have to build your containers in a way that is replicable, reliable, and automatable. I'm not necessarily saying configuration management is the only way to address that complexity, but it does have to be addressed in a container-based environment.

  • Deep Learning desires data, so if you want to build an AI that learns how to program, this is how you would go about it: bring all the open source code into your giant, voracious, data-crunching maw. Making open source data more available: Today, [GitHub] we're delighted to announce that, in collaboration with Google, we are releasing a collection of additional BigQuery tables to expand on the GitHub Archive...This 3TB+ dataset comprises the largest released source of GitHub activity to date. It contains activity data for more than 2.8 million open source GitHub repositories including more than 145 million unique commits, over 2 billion different file paths.

  • In almost all cases, the single-threaded implementation outperforms all the others, sometimes by an order of magnitude, despite the distributed systems using 16-128 cores. Scalability! But at what COST? The paper is, at its heart, a criticism of how the performance of current research systems are evaluated. The authors focus on the field of graph processing, but their arguments extend to most distributed computation research where performance is a key factor. They observe that most systems are currently evaluated in terms of their scalability, how their performance changes as more compute resources are used, but that this metric is often both useless and misleading.

  • hinkley: The older I get the more I feel like we're an accelerated version of the fashion industry. At least in fashion you can make an excuse that the design is thirty years old and most people don't remember the last time we did this. With software it's every six or seven. It's hard not to judge my peers for having such short memories. We were in the midst of one of these upheavals when I first started, and so I learned programming in that environment. It also means I have one more cycle than most people near my age. Now it all looks the same to me, and I understand those people who wanted to be more conservative. In fact I probably owe some people an apology.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Innovation and Magical Thinking

I like baseball.  I can’t tell you the number of times I have spent the afternoon listening to a game and hearing the announcers expound on batting averages and on-base percentages as I puttered around the house. As someone with a background in quantitative analysis, I understand that the chances of a game-winning grand slam in the bottom of the ninth inning by a player who has never hit a major-league home run are small.  However, in my mind’s eye, I can see the event happening and even believe that it will occur simply because I am listening. This example is one form of magical thinking.  Magical thinking occurs when we attribute a causal or synchronistic relationship between actions or events that can’t be justified by reason and observation.  In the current business environment, innovation and magical thinking are often intertwined. Innovation without the ground game of implementation and continuous improvement is magical thinking.

Innovation, by definition, represents a substantial deviation from the thought processes of the past.  For example, Scrum and Extreme Programming (XP) are seen as innovations when compared to waterfall software development. Both are considered revolutionary, and when (and often if) adopted require substantial organizational transformations.  Organizational change is rarely easy; therefore, innovations are rarely adopted whole but rather reflect step-wise transformations.  Kent Beck, in Extreme Programming Explained, Second Edition (2005), espoused the values of improvement and baby steps.  Once an innovation is identified, it needs to be implemented and then improved upon.  Believing that an innovation in its own right will change or save any organization is simply magical thinking.  Any innovation is only useful if it is USED and delivers value.

Even today, organizations consider Scrum, XP, and other Agile frameworks to be innovations under the “if it is new to me, it is new” mantra.  Regardless of the difficulty defending that definition of innovation, any organization that thinks it can “adopt” Agile without addressing implementation will up-end the status quo, causing disruptions and dislocations. The Agile transformation fails when the bridge between innovation and value is built on some sort of magical thinking.  Several years ago I was having a drink with a friend on Broadway in New York City.  The friend was describing a new innovative development framework his firm was adopting.  When I asked how it was going to be implemented, I was told that the CEO had mandated its use AND, because it was so cool, everybody would want to use it.  That was the whole implementation plan. Simply put, he had fallen prey to magical thinking. Within six months both the framework and he were gone from the firm. Agile thinking has reinforced the idea that starting quickly, generating feedback, and then reacting to that feedback reduces risk and generates value quickly. Unless you have the luxury of implementing an innovative idea, concept or product in a new organization without a past, just hoping that something will happen won’t generate change.

In 1932 Frank Whittle was granted a patent for the turbojet.  Using the current state and predictability attributes noted in Innovation and Change, the turbojet was certainly not business as usual and could not be predicted from past events.  Whittle’s work was an innovation; however, due to testing and development issues, the RAF dismissed the idea prior to World War 2. The introduction of the jet fell to others.  Innovation did not translate to competitive advantage because of a failure in implementation. Innovation is an important step on the path to competitive advantage; however, it is simply a step. Unless innovation is combined with an implementation, we are just dealing with magical thinking.


Categories: Process Management

Python: BeautifulSoup – Insert tag

Mark Needham - Thu, 06/30/2016 - 22:28

I’ve been scraping the Game of Thrones wiki in preparation for a meetup at Women Who Code next week and while attempting to extract character allegiances I wanted to insert missing line breaks to separate different allegiances.

I initially tried creating a line break like this:

>>> from bs4 import BeautifulSoup
>>> tag = BeautifulSoup("<br />", "html.parser")
>>> tag
<br/>

It looks like it should work but later on in my script I check the ‘name’ attribute to work out whether I’ve got a line break and it doesn’t return the value I expected it to:

>>> tag.name
u'[document]'

My script assumes it’s going to return the string ‘br’ so I needed another way of creating the tag. The following does the trick:

>>> from bs4 import Tag
>>> tag = Tag(name = "br")
>>> tag
<br></br>
>>> tag.name
'br'
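
For what it's worth, another option is to ask an existing soup to create the tag with new_tag. The tag then picks up the soup's builder, so it knows 'br' is a void element and serializes accordingly. A quick sketch with made-up markup:

>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup("<p>before</p>", "html.parser")
>>> br = soup.new_tag("br")
>>> br.name
'br'
>>> soup.p.insert_after(br)
>>> soup
<p>before</p><br/>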

That’s all for now, back to scraping for me!

Categories: Programming

Announcing turndown of the Google Feed API

Google Code Blog - Thu, 06/30/2016 - 21:10

Posted by Dan Ciruli, Product Manager, Google Cloud Platform Team

The Google Feed API was one of Google’s original AJAX APIs, first announced in 2007. It had a good run. However, interest and use of the API has waned over time, and it is running on API infrastructure that is now two generations old at Google.

Along with many of our other free APIs, in April 2012 we announced that we would be deprecating it in three years’ time. As of April 2015, the deprecation period has elapsed. While we have continued to operate the API in the interim, it is now time to announce the turndown.

As a final courtesy to developers, we plan to operate the API until September 29, 2016, when Google Feed API will cease operation. Please ensure that your application is no longer using this API by then.

Google appreciates how important APIs and developer trust are and we do not take decisions like this one lightly. We remain committed to providing great services and being open and communicative about their statuses.

For those developers who find RSS an essential part of their workflow, there are commercial alternatives that may fit your use case very well.

Categories: Programming

Universal rendering with SwiftShader, now open source

Google Code Blog - Thu, 06/30/2016 - 21:01

Originally Posted on Chromium Blog


Posted by Nicolas Capens, Software Engineer and Pixel Pirate
SwiftShader is a software library for high-performance graphics rendering on the CPU. Google already uses this library in multiple products, including Chrome, Android development tools, and cloud services. Starting today, SwiftShader is fully open source, expanding its pool of potential applications.


Since 2009, Chrome has used SwiftShader to enable 3D rendering on systems that can’t fully support hardware-accelerated rendering. While 3D content like WebGL is written for a GPU, some users’ devices don’t have graphics hardware capable of executing this content. Others may have drivers with serious bugs which can make 3D rendering unreliable, or even impossible. Chrome uses SwiftShader on these systems in order to ensure 3D web content is available to all users.

[Image] Chrome running without SwiftShader on a machine with an inadequate GPU (left) cannot run the WebGL Globe experiment. The same machine with SwiftShader enabled (right) is able to fully render the content.


SwiftShader implements the same OpenGL ES graphics API used by Chrome and Android. Making SwiftShader open source will enable other browser vendors to support 3D content universally and move the web platform forward as a whole. In particular, unconditional WebGL support allows web developers to create more engaging content, such as casual games, educational apps, collaborative content creation software, product showcases, virtual tours, and more. SwiftShader also has applications in the cloud, enabling rendering on GPU-less systems.


To provide users with the best performance, SwiftShader uses several techniques to efficiently perform graphics calculations on the CPU. Dynamic code generation enables tailoring the code towards the tasks at hand at run-time, as opposed to the more common compile-time optimization. This complex approach is simplified through the use of Reactor, a custom C++ embedded language with an intuitive imperative syntax. SwiftShader also uses vector operations in SIMT fashion, together with multi-threading technology, to increase parallelism across the CPU’s available cores and vector units. This enables real-time rendering for uses such as app streaming on Android.


Developers can access the SwiftShader source code from its git repository. Sign up for the mailing list to stay up to date on the latest developments and collaborate with other SwiftShader developers from the open-source community.
Categories: Programming

New video tips to help news publishers find success on Google Play

Android Developers Blog - Wed, 06/29/2016 - 17:57

Posted by Tamzin Taylor - Strategic Partner Lead, Google Play

Today we have released a three-part video series, ‘Tips for your news app on Google Play’, where you can find actionable tips and learn best practices for developing, launching and monetising a high quality news app. The video series accompanies the recently published News Publisher Playbook.

Watch the video series to learn:

  • 10 tips on how to design and develop your News app
  • 10 tips to help you launch your News app and start gaining readers
  • 10 tips to engage your readers and monetize your News app

You can also get the News Publisher Playbook on the Play Store to help you develop a successful news mobile strategy on Android. It includes tips on mobile website optimization, how to create a Google Play Newsstand edition, how to improve your native app, and more.

Give us your feedback

Once you’ve checked out the video series, we’d love to hear your feedback so we can continue to help you find success and achieve your business objectives. Leave a comment or a thumbs up, and subscribe to the Android Developers YouTube channel!

Also, check out our other videos in the Tips for Success on Google Play series, including the recent video on 10 tips to build an app for billions of users.

For more best practices to find success on Google Play, get the new Playbook for Developers app.

Categories: Programming

Scaling Hotjar's Architecture: 9 Lessons Learned

Hotjar offers free website analytics, so they have a challenging mission: handle hundreds of millions of requests per day from mostly free users. Marc von Brockdorff, Co-Founder & Director of Engineering at Hotjar, summarized the lessons they've learned in 9 Lessons Learned Scaling Hotjar's Tech Architecture To Handle 21,875,000 Requests Per Hour.

In response to the criticism that their architecture looks like a hot mess, Erik Näslund, Chief Architect at Hotjar, gives the highlights of their architecture:

  • We use nginx + lua for the really hot code paths where python doesn't quite cut it. No language is perfect and you might have to break out of your comfort zone and use something different every now and then.
  • Redis, Memcached, Postgres, Elasticsearch and S3 are all suitable for different kinds of data storage and we eventually needed them all to be able to query and store data in a cost effective way. We didn't start out using 5 different data-stores though...it's something that we "grew into".
  • Each application server is a (majestic) monolith. Micro-services are one way of architecting things, monoliths are another - I'm still waiting to be convinced that one way is superior to the other when it comes to a smaller team of developers.

What have they learned?

Categories: Architecture

25 Ways to Get Better Results

Have you ever wondered why some people seem to be really good at getting results, while others struggle?

What are the true secrets of productivity that set one person apart from another, and help them achieve high performance?

I’ve studied the challenge of personal productivity deeply to really understand the difference that makes the difference.

I’ve put together a list of 25 Keys to Getting Better Results that cut to the chase and get to the bottom line of extreme productivity:

25 Keys to Getting Better Results

This list of productivity secrets is from my best-selling productivity book, Getting Results the Agile Way, and reflects the best of what I’ve learned in terms of what really makes some people stand out when it comes to personal productivity, high performance, and making things happen.

After all, who doesn’t want better results in work and life?

You can read the list of productivity secrets, but here I want to touch on a few key ideas.

Vision

Vision is probably the single most important starting point when it comes to getting better results. 

It’s hard to do anything if you don’t know what it’s supposed to be or do when it’s done.

If you can see the end-in-mind, then you are off to a good start.

And if you feel really good about the end-in-mind, then you have something to pull you forward versus trying to push yourself forward.

When your vision pulls you forward, you are on the right path.

Value

Value is in the eye of the beholder and in the eye of the stakeholder.

Value is also the ultimate short-cut.

If you know what good looks like or if you know what’s actually valued, you can refocus your efforts on high-value activities that produce those outcomes.

When you don’t know who the value is for, or what good looks like, you are in trouble.

That’s why it’s important to check with the people you are producing value for to see whether you are actually nailing their pains, needs, and desired outcomes.

If not, no problem – learn and adapt.

Learn early, learn often, and really get curious about which of your activities produce the highest value results.

Velocity

Speed is the name of the game.

As John Wooden would often say, “Be quick, but don’t hurry.”

In other words, make your moves quickly, but with intention, and with quality.

Quality comes through practice and repetition.  It’s how you learn.

Try things.  But try them quickly, and experiment with how you produce results.

Use speed as your friend to learn faster, and to build momentum.

Don’t get bogged down.  Use speed to cut through your challenges, quickly prioritize your best bets, and create a flow of continuous value.

Sometimes you will need to slow down to speed up.

But more often than not, you will need to speed up, so that you are effectively taking massive action.

Few challenges withstand the onslaught of massive action, as long as you keep on learning and improving.

Well, that’s about it for now.

I hope that gives you at least a bit of an edge that you can use every day to get better, faster, simpler results in work and life.

Enjoy.

Categories: Architecture, Programming

Product Owners and Learning, Part 5

When I think of POs and the team, I think of learning in several loops:

  • The PO learns when the team finishes small features or creates a prototype so the PO can see what the team is thinking/delivering.
  • The team learns more about its process and what the PO wants.
  • If the Product Manager sees the demo, the Product Manager sees the progress against the roadmap and can integrate that learning into thinking about what the product needs now, and later.

Note that no one can learn if they can’t see progress against the backlog and roadmap.

There are two inter-related needs: Small stories so the team can deliver and seeing how those small stories fit into the big picture.

I don’t know how to separate these two needs in agile. If you can’t deliver something small, no one (the team, the PO, the customer) can learn from it. If you don’t deliver, you can’t change the big picture (or the small picture) of where the product is headed. If you can’t change, you find yourself not delivering the product you want when you want. It’s a mess.

When you don’t have small stories and you can’t deliver value frequently, you end up with interdependent features. These features don’t have to be interdependent. The interdependencies arise from the organization (who does what). People think they are talking about interdependencies in the features, but a root cause of those interdependencies is the fact that the features are not small and coherent. See my curlicue features post.

That means that the PO needs to learn about the features in depth. BAs can certainly help. Product Managers can help. And, the PO is with the team more often than the Product Manager. The PO needs to help the team realize when they have a structure that does not work for small features. Or, the PO may not know how to create feature sets out of a humongous feature. The team and the PO have to work together to get the most value from the team on a regular basis.

This is why I see the learning at several levels:

  • The Product Manager works with the customers to understand what customers need when, and when to ignore customers. It is both true that the customer is always right and the customer does not know what she wants. (I won’t tell you how long it took me to get a smart phone. Now, I don’t know how I could live without one. You cannot depend on only customers to guide your product decisions.)
  • The PO Value Team discusses the ranking/when the customers need which features. When I see great PO Value teams, they start discussing when to have which features from the feature sets.
  • The PO (and BA) work with the team to learn what the team can do when so they can provide small stories. They also learn from the team when the team delivers finished work.

The larger the features the less feedback and the less learning.

So, I’ve written a lot here. Let me summarize.

Part 1 was about the “problem” of only addressing features, not the defects or technical debt. If you have a big picture, you can see the whole product as you want it, over time. For me, the PO “problem” is that the PO cannot be outside-facing and inward-working at the same time. It is not possible for one human to do so.

Part 2 was about how you can think about making smaller stories, so you have feature sets, not one humungous feature.

Part 3 was about ranking. If you think about value, you are on the right track. I particularly like value for learning. That might mean the team spikes, or delivers some quick wins, or several features across many feature sets (breadth-first, not depth-first). Or, it could mean you attack some technical debt or defects. What is most valuable for you now? (If you, as a PO, have feature-itis, you are doing yourself and your team a disservice. Think about the entire customer experience.)

Part 4 talked about how you might want to organize a Product Owner value team. Only the PO works with the team on a backlog, and the PO does not have to do “everything” alone.

If you would like to learn how to be a practical, pragmatic Product Owner, please join me at the Practical Product Owner workshop, beginning Aug 23, 2016. You will learn by working on your roadmaps, stories, and your particular challenges.  You will learn how to deliver what your customers value and need—all your customers, including your product development team.

Categories: Project Management

The Purpose Alignment Model

Xebia Blog - Wed, 06/29/2016 - 09:45
When scaling Agile/Scrum, we invariably run into the alignment vs. autonomy problem. In short, you cannot have autonomous, self-directing teams if they have no clue what direction they should go. Or, even shorter, alignment breeds autonomy. But how do we create alignment? And what tools can we use to quickly evaluate whether or not what

Two Differences Between Innovation and Change

Innovations are limited!

Innovation is a word that has seen heavy use for a long time.  In the many uses of the word innovation, the term has been ascribed an equally wide range of meanings.  At one end of the spectrum are definitions that suggest that anything that deviates from the norm can be construed as an innovation.  One adage holds, “if it’s new to me, it is new.”  However, definitions of this sort conflate the terms “change” and “innovation”.  At the other end of the spectrum, some definitions provide a clear separation between evolutionary and discontinuous change. In narrower definitions, innovation is a subset of change.  In software development, business or even–more broadly–life, change is inevitable and continuous while innovation is not inevitable and far more abrupt. In practical terms, change and innovation often differ in a number of critical attributes.

Current State Knowledge Requirements: Change efforts require establishing a timeframe and knowledge of the environment that will be impacted before and after the change.  Understanding both states allows us to determine whether the change effort was successful. For example, we can assess the change in a software program by comparing two versions of the code. Innovation does not require knowledge of the starting point, because the starting point did not previously exist. The introduction of COBOL was an innovation: one day COBOL did not exist, and then–BOOM–it existed.  The lack of a past that provides context is a strong indicator that something is an innovation.

Predictability:  Change is often relatively predictable, often building from the past towards the future.  Consider the evolution of Ruby from its public release in 1995 (version 0.95) to its most recent release in April 2016 (version 2.3.1), or the change in processing power predicted by Moore’s Law.  Innovation is far less predictable because it is less anchored to a current state.

Why do we care?  The focus of much online discussion is on the need for innovation. Innovation is important for the economy and for individual firms.  Innovation rearranges the playing field and can disrupt whole industries.  Uber is an oft-cited business model innovation that has yielded creative destruction.  At the risk of calling forth the trolls, I would suggest that Lyft, meanwhile, is more reflective of change than discontinuous innovation.  Innovation is important. However, managing and directing change are important too, because, in the end, the only things we can count on are death, taxes, and change.


Categories: Process Management

Running Gatling load tests in Docker containers via Jenkins

Agile Testing - Grig Gheorghiu - Wed, 06/29/2016 - 00:16
Gatling is a modern load testing tool written in Scala. As part of the Jenkins setup I am in charge of, I wanted to run load tests using Gatling against a collection of pages for a given website. Here are my notes on how I managed to do this.

Running Gatling as a Docker container locally

There is a Docker image already available on DockerHub, so you can simply pull down the image locally:


$ docker pull denvazh/gatling:2.2.2
Instructions on how to run a container based on this image are available on GitHub:
$ docker run -it --rm \
  -v /home/core/gatling/conf:/opt/gatling/conf \
  -v /home/core/gatling/user-files:/opt/gatling/user-files \
  -v /home/core/gatling/results:/opt/gatling/results \
  denvazh/gatling:2.2.2
Based on these instructions, I created a local directory called gatling, and under it I created 3 sub-directories: conf, results and user-files. I left the conf and results directories empty, and under user-files I created a simulations directory containing a Gatling load test scenario written in Scala. I also created a file in the user-files directory called urls.csv, containing a header named loc and a URL per line for each page that I want to load test.
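
Putting it all together, my local layout looks like this:

gatling
gatling/conf
gatling/results
gatling/user-files
gatling/user-files/urls.csv
gatling/user-files/simulations
gatling/user-files/simulations/Simulation.scala
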
Assuming the current directory is gatling, here are examples of these files:
$ cat user-files/urls.csv
loc
https://my.website.com
https://my.website.com/category1
https://my.website.com/category2/product3
$ cat user-files/simulations/Simulation.scala
package my.gatling.simulation
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class GatlingSimulation extends Simulation {

  val httpConf = http
    .baseURL("http://127.0.0.1")
    .acceptHeader("text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8")
    .doNotTrackHeader("1")
    .acceptLanguageHeader("en-US,en;q=0.5")
    .acceptEncodingHeader("gzip, deflate")
    .userAgentHeader("Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Firefox/31.0")

  val scn1 = scenario("Scenario1")
    .exec(Crawl.crawl)

  val userCount = Integer.getInteger("users", 1)
  val durationInSeconds = java.lang.Long.getLong("duration", 10L)

  setUp(
    scn1.inject(rampUsers(userCount) over (durationInSeconds seconds))
  ).protocols(httpConf)
}

object Crawl {

  val feeder = csv("/opt/gatling/user-files/urls.csv").random

  val crawl = exec(feed(feeder)
    .exec(http("${loc}")
    .get("${loc}")))
}

I won't go through the different ways of writing Gatling load test scenarios here. There are good instructions on the Gatling website -- see the Quickstart and the Advanced Tutorial. The scenario above reads the file urls.csv, randomly picks a URL from it, and runs a load test against that URL.
I do want to point out 2 variables in the above script:

  val userCount = Integer.getInteger("users", 1)
  val durationInSeconds = java.lang.Long.getLong("duration", 10L)
These variables specify the max number of users we want to ramp up to, and the duration of the ramp-up. They are used in the inject call:

scn1.inject(rampUsers(userCount) over (durationInSeconds seconds))

The special thing about these two variables is that they are read from Java system properties, which you can set via JAVA_OPTS. So if you pass a -Dusers Java option and a -Dduration Java option, Gatling will pick them up and set the userCount and durationInSeconds variables accordingly. This is a good thing, because it allows you to specify those numbers outside of Gatling, without hardcoding them in your simulation script. Here is more info on passing parameters via the command line to Gatling.
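
To make this concrete: outside of Docker, the same parameters could be passed to the bundled launcher along these lines (illustrative values; gatling.sh passes whatever is in JAVA_OPTS along to the JVM):

$ JAVA_OPTS="-Dusers=10 -Dduration=60" bin/gatling.sh -s MySimulationName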

While pulling the Gatling docker image and running it is the simplest way to run Gatling, I prefer to understand what's going on in that image. I started off by getting the Dockerfile from GitHub:

$ cat Dockerfile
# Gatling is a highly capable load testing tool.
#
# Documentation: http://gatling.io/docs/2.2.2/
# Cheat sheet: http://gatling.io/#/cheat-sheet/2.2.2

FROM java:8-jdk-alpine

MAINTAINER Denis Vazhenin

# working directory for gatling
WORKDIR /opt

# gatling version
ENV GATLING_VERSION 2.2.2

# create directory for gatling install
RUN mkdir -p gatling

# install gatling
RUN apk add --update wget && \
  mkdir -p /tmp/downloads && \
  wget -q -O /tmp/downloads/gatling-$GATLING_VERSION.zip \
  https://repo1.maven.org/maven2/io/gatling/highcharts/gatling-charts-highcharts-bundle/$GATLING_VERSION/gatling-charts-highcharts-bundle-$GATLING_VERSION-bundle.zip && \
  mkdir -p /tmp/archive && cd /tmp/archive && \
  unzip /tmp/downloads/gatling-$GATLING_VERSION.zip && \
  mv /tmp/archive/gatling-charts-highcharts-bundle-$GATLING_VERSION/* /opt/gatling/

# change context to gatling directory
WORKDIR /opt/gatling

# set directories below to be mountable from host
VOLUME ["/opt/gatling/conf", "/opt/gatling/results", "/opt/gatling/user-files"]

# set environment variables
ENV PATH /opt/gatling/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV GATLING_HOME /opt/gatling

ENTRYPOINT ["gatling.sh"]
I then added a way to pass JAVA_OPTS via an environment variable. I added this line after the ENV GATLING_HOME line:
ENV JAVA_OPTS ""
I dropped this Dockerfile in my gatling directory, then built a local Docker image off of it:
$ docker build -t gatling:local .
I then invoked 'docker run' to launch a container based on this image, using the CSV and simulation files from above. The current directory is still gatling.

$ docker run --rm \
  -v `pwd`/conf:/opt/gatling/conf \
  -v `pwd`/user-files:/opt/gatling/user-files \
  -v `pwd`/results:/opt/gatling/results \
  -e JAVA_OPTS="-Dusers=10 -Dduration=60" \
  gatling:local -s MySimulationName

Note the -s flag which denotes a simulation name (which can be any string you want). If you don't specify this flag, the gatling.sh script which is the ENTRYPOINT in the container will wait for some user input and you will not be able to fully automate your load test.

Another thing to note is the use of JAVA_OPTS. In the example above, I pass -Dusers=10 and -Dduration=60 as the two JAVA_OPTS parameters. The JAVA_OPTS variable itself is passed to 'docker run' via the -e option, which tells Docker to replace the default value for ENV JAVA_OPTS (which is "") with the value passed with -e.

Running Gatling as a Docker container from Jenkins

Once you have a working Gatling container locally, you can upload the Docker image built above to a private Docker registry. I used a private EC2 Container Registry (ECR).  

I also added the gatling directory and its sub-directories to a GitHub repository called devops.

In Jenkins, I created a new "Freestyle project" job with the following properties:

  • Parameterized build with 2 string parameters: USERS (default value 10) and DURATION in seconds (default value 60)
  • Git repository - add URL and credentials for the devops repository which contains the gatling files
  • An "Execute shell" build command similar to this one:
docker run --rm \
  -v ${WORKSPACE}/gatling/conf:/opt/gatling/conf \
  -v ${WORKSPACE}/gatling/user-files:/opt/gatling/user-files \
  -v ${WORKSPACE}/gatling/results:/opt/gatling/results \
  -e JAVA_OPTS="-Dusers=$USERS -Dduration=$DURATION" \
  /PATH/TO/DOCKER/REGISTRY/gatling -s MyLoadTest


Note that we mount the gatling directories as Docker volumes, similarly to when we ran the Docker container locally, only this time we specify ${WORKSPACE} as the base directory. The 2 string parameters USERS and DURATION are passed as variables in JAVA_OPTS.
A nice thing about running Gatling via Jenkins is that the reports are available in the Workspace directory of the project. If you go to the Gatling project we created in Jenkins, click on Workspace, then on gatling, then results, you should see directories named gatlingsimulation-TIMESTAMP for each Gatling run. Each of these directories should have an index.html file, which will show you the Gatling report dashboard. Pretty neat.

SE-Radio Episode 261: David Heinemeier Hansson on the State of Rails, Monoliths, and More

David Heinemeier Hansson, creator of the Ruby on Rails framework and a partner at the software development company Basecamp, talks to Stefan Tilkov about the state of Ruby on Rails and its suitability for long-term development. He addresses some of its common criticisms, such as perceived usefulness for only simple problems, claimed lack of scalability, […]
Categories: Programming

Daydream Labs: animating 3D objects in VR

Google Code Blog - Tue, 06/28/2016 - 17:37

Rob Jagnow, Software Engineer, Google VR

Whether you're playing a game or watching a video, VR lets you step inside a new world and become the hero of a story. But what if you want to tell a story of your own?

Producing immersive 3D animation can be difficult and expensive. It requires complex software to set keyframes with splined interpolation or costly motion capture setups to track how live actors move through a scene. Professional animators spend considerable effort to create sequences that look expressive and natural.

At Daydream Labs, we've been experimenting with ways to reduce technical complexity and even add a greater sense of play when animating in VR. In one experiment we built, people could bring characters to life by picking up toys, moving them through space and time, and then replay the scene.


As we saw people play with the animation experiment we built, we noticed a few things:

The need for complex metaphors goes away in VR: What can be complicated in 2D can be made intuitive in 3D. Instead of animating with graph editors or icons representing location, people could simply reach out, grab a virtual toy, and carry it through the scene. These simple animations had a handmade charm that conveyed a surprising degree of emotion.

The learning curve drops to zero: People were already familiar with how to interact with real toys, so they jumped right in and got started telling their stories. They didn't need a lengthy tutorial, and they were able to modify their animations and even add new characters without any additional help.

People react to virtual environments the same way they react to real ones: When people entered a playful VR environment, they understood it was safe space to play with the toys around them. They felt comfortable performing and speaking in funny voices. They took more risks knowing the virtual environment was designed for play.

To create more intricate animations, we also built another experiment that let people independently animate the joints of a single character. It let you record your character’s movement as you separately animated the feet, hands, and head — just like you would with a puppet.


VR allows us to rethink software and make certain use cases more natural and intuitive. While this kind of animation system won’t replace professional tools, it can allow anyone to tell their own stories. There are many examples of using VR for storytelling, especially with video and animation, and we’re excited to see new perspectives as more creators share their stories in VR.

Categories: Programming

The Ultimate Tester: Sharing Knowledge

Xebia Blog - Tue, 06/28/2016 - 09:50
In the past three blog posts we have explored some aspects of being an Ultimate Tester: How we can add value, how our curiosity helps us to test the crazy stuff and how we can build quality in. We learn a lot about these things during work time (and hopefully during personal time as well),