Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Google Code Blog

XLA - TensorFlow, compiled

Mon, 03/06/2017 - 22:18
By the XLA team within Google, in collaboration with the TensorFlow team


One of the design goals and core strengths of TensorFlow is its flexibility. TensorFlow was designed to be a flexible and extensible system for defining arbitrary data flow graphs and executing them efficiently in a distributed manner using heterogeneous computing devices (such as CPUs and GPUs).

But flexibility is often at odds with performance. While TensorFlow aims to let you define any kind of data flow graph, it’s challenging to make all graphs execute efficiently because TensorFlow optimizes each op separately. When an op has an efficient implementation or when each op is a relatively heavyweight operation, all is well; otherwise, the user can still compose the desired operation out of lower-level ops, but such a composition is not guaranteed to run in the most efficient way.


This is why we’ve developed XLA (Accelerated Linear Algebra), a compiler for TensorFlow. XLA uses JIT compilation techniques to analyze the TensorFlow graph created by the user at runtime, specialize it for the actual runtime dimensions and types, fuse multiple ops together and emit efficient native machine code for them - for devices like CPUs, GPUs and custom accelerators (e.g. Google’s TPU).


Fusing composable ops for increased performance

Consider the tf.nn.softmax op, for example. It computes the softmax activations of its parameter as follows:
softmax(logits)[i] = exp(logits[i]) / sum_j exp(logits[j])

Softmax can be implemented as a composition of primitive TensorFlow ops (exponent, reduction, elementwise division, etc.):
softmax = exp(logits) / reduce_sum(exp(logits), dim)

This could potentially be slow, due to the extra data movement and materialization of temporary results that aren’t needed outside the op. Moreover, on co-processors like GPUs such a decomposed implementation could result in multiple “kernel launches” that make it even slower.

XLA is the secret compiler sauce that helps TensorFlow optimize compositions of primitive ops automatically. TensorFlow, augmented with XLA, retains flexibility without sacrificing runtime performance, by analyzing the graph at runtime, fusing ops together and producing efficient machine code for the fused subgraphs.

For example, a decomposed implementation of softmax as shown above would be optimized by XLA to be as fast as the hand-optimized compound op.
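As a minimal sketch of how this JIT path is switched on (assuming a TensorFlow 1.0-era build with XLA compiled in; the session-level flag below is illustrative rather than the only option):

import tensorflow as tf

# Ask TensorFlow to JIT-compile eligible subgraphs with XLA.
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

# The decomposed softmax from above; XLA can fuse the exp, reduce_sum and divide.
logits = tf.random_normal([8, 10])
softmax = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis=-1, keep_dims=True)

with tf.Session(config=config) as sess:
    print(sess.run(softmax))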

More generally, XLA can take whole subgraphs of TensorFlow operations and fuse them into efficient loops that require a minimal number of kernel launches. For example:
Many of the operations in this graph can be fused into a single element-wise loop. Consider a single element of the bias vector being added to a single element from the matmul result, for example. The result of this addition is a single element that can be compared with 0 (for ReLU). The result of the comparison can be exponentiated and divided by the sum of exponents of all inputs, resulting in the output of softmax. We don’t really need to create the intermediate arrays for matmul, add, and ReLU in memory.


s[j] = softmax[j](ReLU(bias[j] + matmul_result[j]))

A fused implementation can compute the end result within a single element-wise loop, without allocating needless memory. In more advanced scenarios, these operations can even be fused into the matrix multiplication.


XLA helps TensorFlow retain its flexibility while eliminating performance concerns.

On internal benchmarks, XLA shows up to 50% speedups over TensorFlow without XLA on Nvidia GPUs. The biggest speedups come, as expected, in models with long sequences of elementwise operations that can be fused to efficient loops. However, XLA should still be considered experimental, and some benchmarks may experience slowdowns.


In this talk from the TensorFlow Developer Summit, Chris Leary and Todd Wang describe how TensorFlow can make use of XLA, JIT, AOT, and other compilation techniques to minimize execution time and make the most of computing resources.

Extreme specialization for executable size reduction

In addition to improved performance, TensorFlow models can benefit from XLA for restricted-memory environments (such as mobile devices) due to the executable size reduction it provides. tfcompile is a tool that leverages XLA for ahead-of-time (AOT) compilation - a whole graph is compiled to XLA, which then emits tight machine code that implements the ops in the graph. Coupled with a minimal runtime, this scheme provides considerable size reductions.
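For context, tfcompile is typically driven from a Bazel BUILD rule. The sketch below follows the pattern documented in the tensorflow/compiler/aot tutorial, but the target name, file names and C++ class name are placeholders, not values from this post (Starlark, Bazel's configuration language, is a Python dialect):

load("//tensorflow/compiler/aot:tfcompile.bzl", "tf_library")

tf_library(
    name = "my_model",                   # illustrative target name
    graph = "my_frozen_graph.pb",        # frozen GraphDef to compile
    config = "my_model.config.pbtxt",    # declares the graph's feeds and fetches
    cpp_class = "mynamespace::MyModel",  # generated C++ class to link against
)

The generated class exposes buffers for the declared feeds and fetches, so the binary only links the compiled graph plus the minimal runtime mentioned above.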

For example, given a 3-deep, 60-wide stacked LSTM model on android-arm, the original TF model size is 2.6 MB (1 MB runtime + 1.6 MB graph); when compiled with XLA, the size goes down to 600 KB.
This size reduction is achieved by the full specialization of the model implied by its static compilation. When the model runs, the full power and flexibility of the TensorFlow runtime is not required - only the ops implementing the actual graph the user is interested in are compiled to native code. That said, the performance of the code emitted by the CPU backend of XLA is still far from optimal; this part of the project requires more work.

Support for alternative backends and devices

To execute TensorFlow graphs on a new kind of computing device today, one has to re-implement all the TensorFlow ops (kernels) for the new device. Depending on the device, this can be a very significant amount of work.

By design, XLA makes supporting new devices much easier by adding custom backends. Since TensorFlow can target XLA, one can add a new device backend to XLA and thus enable it to run TensorFlow graphs. XLA provides a significantly smaller implementation surface for new devices, since XLA operations are just the primitives (recall that XLA handles the decomposition of complex ops on its own). We’ve documented the process for adding a custom backend to XLA on this page. Google uses this mechanism to target TPUs from XLA.

Conclusion and looking forward

XLA is still in the early stages of development. It is showing very promising results for some use cases, and it is clear that TensorFlow can benefit even more from this technology in the future. We decided to release XLA to the TensorFlow GitHub repository early to solicit contributions from the community and to provide a convenient surface for optimizing TensorFlow for various computing devices, as well as retargeting the TensorFlow runtime and models to run on new kinds of hardware.

Categories: Programming

Introducing Python Fire, a library for automatically generating command line interfaces

Fri, 03/03/2017 - 18:47
By David Bieber, Software Engineer on Google Brain

Originally posted on the Google Open Source Blog

Today we are pleased to announce the open-sourcing of Python Fire. Python Fire generates command line interfaces (CLIs) from any Python code. Simply call the Fire function in any Python program to automatically turn that program into a CLI. The library is available from PyPI via `pip install fire`, and the source is available on GitHub.

Python Fire will automatically turn your code into a CLI without you needing to do any additional work. You don't have to define arguments, set up help information, or write a main function that defines how your code is run. Instead, you simply call the `Fire` function from your main module, and Python Fire takes care of the rest. It uses inspection to turn whatever Python object you give it -- whether it's a class, an object, a dictionary, a function, or even a whole module -- into a command line interface, complete with tab completion and documentation, and the CLI will stay up-to-date even as the code changes.

To illustrate this, let's look at a simple example.

#!/usr/bin/env python
import fire

class Example(object):
  def hello(self, name='world'):
    """Says hello to the specified name."""
    return 'Hello {name}!'.format(name=name)

def main():
  fire.Fire(Example)

if __name__ == '__main__':
  main()

When the Fire function is run, our command will be executed. Just by calling Fire, we can now use the Example class as if it were a command line utility.

$ ./example.py hello
Hello world!
$ ./example.py hello David
Hello David!
$ ./example.py hello --name=Google
Hello Google!

Of course, you can continue to use this module like an ordinary Python library, enabling you to use the exact same code both from Bash and Python. If you're writing a Python library, then you no longer need to update your main method or client when experimenting with it; instead you can simply run the piece of your library that you're experimenting with from the command line. Even as the library changes, the command line tool stays up to date.

At Google, engineers use Python Fire to generate command line tools from Python libraries. We have an image manipulation tool built by using Fire with the Python Imaging Library, PIL. In Google Brain, we use an experiment management tool built with Fire, allowing us to manage experiments equally well from Python or from Bash.

Every Fire CLI comes with an interactive mode. Run the CLI with the `--interactive` flag to launch an IPython REPL with the result of your command, as well as other useful variables already defined and ready to use. Be sure to check out Python Fire's documentation for more on this and the other useful features Fire provides.

Between Python Fire's simplicity, generality, and power, we hope you find it a useful library for your own projects.
Categories: Programming


Apply now to Launchpad Accelerator---now including Africa and Europe!

Wed, 03/01/2017 - 20:09
Posted By: Roy Glasberg, Global Lead, Launchpad Program & Accelerator

After recently hosting another amazing group of startups for Launchpad Accelerator, we're ready to kick things off again with the next class! Apply here by 9am PST on April 24, 2017.

Starting today, we'll be accepting applications from growth-stage innovative tech startups from these countries:
  • Asia: India, Indonesia, Thailand, Vietnam, Malaysia and the Philippines
  • Latin America: Argentina, Brazil, Chile, Colombia and Mexico
And we're delighted to expand the program to countries in Africa and Europe for the first time!
  • Africa: Kenya, Nigeria and South Africa
  • Europe: Czech Republic, Hungary and Poland
The equity-free program will begin on July 17th, 2017 at the Google Developers Launchpad Space in San Francisco and will include 2 weeks of all-expense-paid training.

What are the benefits?

The training at Google HQ includes intensive mentoring from 20+ Google teams, and expert mentors from top technology companies and VCs in Silicon Valley. Participants receive equity-free support, credits for Google products, PR support and continue to work closely with Google back in their home country during the 6-month program.

What do we look for when selecting startups?

Each startup that applies to the Launchpad Accelerator is considered holistically and with great care. Below are general guidelines behind our process to help you understand what we look for in our candidates.

All startups in the program must:
  • Be a technological startup.
  • Be targeting their local markets.
  • Have proven product-market fit (beyond ideation stage).
  • Be based in the countries listed above.
Additionally, we are interested in what kind of startup you are. We also consider:
  • The problem you are trying to solve. How does it create value for users? How are you addressing a real challenge for your home city, country or region?
  • Does your management team have a leadership mindset and the drive to become an influencer?
  • Will you share what you learn in Silicon Valley for the benefit of other startups in your local ecosystem?
If you're based outside of these countries, stay tuned, as we expect to add more countries to the program in the future.

We can't wait to learn more about your startup and work together to solve your challenges and help grow your business.
Categories: Programming


“Level up” your gaming business with new innovations for apps

Wed, 03/01/2017 - 15:09
Originally shared on the Inside AdMob Blog
Posted by Sissie Hsiao, Product Director, Mobile Advertising, Google. Last played Fire Emblem Heroes for Android

Mobile games mean more than just fun. They mean business. Big business. According to App Annie, game developers should capture almost half of the $189B global market for in-app purchases and advertising by 2020 [1].

Later today, at the Game Developers Conference (GDC) in San Francisco, I look forward to sharing a series of new innovations across ad formats, monetization tools and measurement insights for apps.

  • New playable and video ad formats to get more people into your game
  • Integrations to help you create better monetization experiences 
  • Measurement tools that provide insights about how players are interacting with your game
Let more users try your game with a playable ad format

There’s no better way for a new user to experience your game than to actually play it. So today, we introduced playables, an interactive ad format in Universal App Campaigns that allows users to play a lightweight version of your game, right when they see it in any of the 1M+ apps in the Google Display Network.

Jam City’s playable ad for Cookie Jam

Playables help you get more qualified installs from users who tried your game in the ad and made the choice to download it for more play time. By attracting already-engaged users into your app, playables help you drive the long-term outcomes you care about — rounds played, levels beat, trophies won, purchases made and more.

"Jam City wants to put our games in the hands of more potential players as quickly as possible. Playables get new users into the game right from the ad, which we've found drives more engagement and long-term customer value." Josh Yguado, President & COO Jam City, maker of Panda Pop and Cookie Jam.

Playables will be available for developers through Universal App Campaigns in the coming months, and will be compatible with HTML5 creatives built through Google Web Designer or third-party agencies.

Improve the video experience with ads designed for mobile viewing

Most mobile video ad views on the Google Display Network are watched on devices held vertically [2]. This can create a poor experience when users encounter video ad creatives built for horizontal viewing.

Developers using Universal App Campaigns will soon be able to use an auto-flip feature that automatically orients your video ads to match the way users are holding their phones. If you upload a horizontal video creative in AdWords, we will automatically create a second, vertical version for you.

Cookie Jam horizontal video and vertical-optimized video created through auto-flip technology

The auto-flip feature uses Google's machine learning technology to identify the most important objects in every frame of your horizontal video creative. It then produces an optimized, vertical version of your video ad that highlights those important components of your original asset. Early tests show that click-through rates are about 20% higher on these dynamically-generated vertical videos than on horizontal video ads watched vertically [3].

Unlock new business with rewarded video formats, and free, unlimited reporting

Developers have embraced AdMob's platform to mediate rewarded video ads as a way to let users watch ads in exchange for an in-app reward. Today, we are delighted to announce that we are bringing Google’s video app install advertising demand from AdWords to AdMob, significantly increasing rewarded demand available to developers. Advertisers that use Universal App Campaigns can seamlessly reach this engaged, game-playing audience using your existing video creatives.

We are also investing in better measurement tools for developers by bringing the power of Firebase Analytics to more game developers with a generally available C++ SDK and an SDK for Unity, a leading gaming engine.

C++ and Unity developers can now access Firebase Analytics for real-time player insights

With Firebase Analytics, C++ and Unity developers can now capture billions of daily events — like level completes and play time — to get more nuanced player insights and gain a deeper understanding of metrics like daily active users, average revenue per user and player lifetime value.

This is an exciting time to be a game developer. It’s been a privilege to meet so many of you at GDC 2017 and learn about the amazing games that you’re all building. We hope the innovations we announced today help you grow long-term gaming businesses and we look forward to continuing on this journey with you.

Until next year, GDC!

1 - App Monetization Report, November 2016, App Annie
2 - More than 80% of video ad views in mobile apps on the Google Display Network are from devices held vertically, Google Internal Data
3 - Google Internal Data
Categories: Programming

Adding text and shapes with the Google Slides API

Thu, 02/23/2017 - 20:26
Originally shared on the G Suite Developers Blog

Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite
When the Google Slides team launched its very first API last November, it immediately opened up a whole new class of applications. These applications have the ability to interact with the Slides service, so you can perform operations on presentations programmatically. Since its launch, we've published several videos to help you realize some of those possibilities.
Today, we're releasing the latest Slides API tutorial in our video series. This one goes back to basics a bit: adding text to presentations. But we also discuss shapes—not only adding shapes to slides, but also adding text within shapes. Most importantly, we cover one best practice when using the API: create your own object IDs. By doing this, developers can execute more requests while minimizing API calls.



Developers use insertText requests to tell the API to add text to slides. This is true whether you're adding text to a textbox, a shape or table cell. Similar to the Google Sheets API, all requests are made as JSON payloads sent to the API's batchUpdate() method. Here's the JavaScript for inserting text in some object (objectID) on a slide:
{
  "insertText": {
    "objectId": objectID,
    "text": "Hello World!\n"
  }
}
Adding shapes is a bit more challenging, as you can see from its sample JSON structure:

{
  "createShape": {
    "shapeType": "SMILEY_FACE",
    "elementProperties": {
      "pageObjectId": slideID,
      "size": {
        "height": {
          "magnitude": 3000000,
          "unit": "EMU"
        },
        "width": {
          "magnitude": 3000000,
          "unit": "EMU"
        }
      },
      "transform": {
        "unit": "EMU",
        "scaleX": 1.3449,
        "scaleY": 1.3031,
        "translateX": 4671925,
        "translateY": 450150
      }
    }
  }
}
Placing or manipulating shapes or images on slides requires more information so the cloud service can properly render these objects. Be aware that it does involve some math, as you can see from the Page Elements page in the docs as well as the Transforms concept guide. In the video, I drop a few hints and good practices so you don't have to start from scratch.
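One small piece of that math that trips people up is the unit: sizes and transforms are expressed in EMU (English Metric Units), where 1 inch = 914,400 EMU and 1 point = 12,700 EMU. A hypothetical helper like this keeps the conversions straight (the function names are mine, not part of the API):

EMU_PER_INCH = 914400
EMU_PER_POINT = 12700

def inches_to_emu(inches):
    return int(round(inches * EMU_PER_INCH))

def points_to_emu(points):
    return int(round(points * EMU_PER_POINT))

For instance, the 3,000,000 EMU height in the sample above works out to roughly 3.3 inches.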

Regardless of how complex your requests are, if you have at least one, say in an array named requests, you'd make an API call with the aforementioned batchUpdate() method, which in Python looks like this (assuming SLIDES is the service endpoint and a presentation ID of deckID):

SLIDES.presentations().batchUpdate(presentationId=deckID,
    body=requests).execute()
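As a hedged illustration of the create-your-own-object-IDs best practice mentioned earlier (the ID, shape type and variable names below are assumptions for the sketch, not values from the post), choosing your own objectId lets a createShape request and an insertText request targeting that new shape travel in the same batch:

shape_id = 'myTextBox_001'  # client-chosen object ID (illustrative)
requests = {'requests': [
    {'createShape': {
        'objectId': shape_id,
        'shapeType': 'TEXT_BOX',
        'elementProperties': {'pageObjectId': slideID},
    }},
    {'insertText': {'objectId': shape_id, 'text': 'Hello World!\n'}},
]}
SLIDES.presentations().batchUpdate(presentationId=deckID, body=requests).execute()

Because the second request can refer to the ID immediately, both operations complete in a single API call.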
For a detailed look at the complete code sample featured in the DevByte, check out the deep dive post. As you can see, adding text is fairly straightforward. If you want to learn how to format and style that text, check out the Formatting Text post and video as well as the text concepts guide.
To learn how to perform text search-and-replace, say to replace placeholders in a template deck, check out the Replacing Text & Images post and video as well as the merging data into slides guide. We hope these developer resources help you create that next great app that automates the task of producing presentations for your users!
Categories: Programming

Debug TensorFlow Models with tfdbg

Fri, 02/17/2017 - 23:32
Posted by Shanqing Cai, Software Engineer, Tools and Infrastructure.

We are excited to share TensorFlow Debugger (tfdbg), a tool that makes debugging machine learning (ML) models in TensorFlow easier.
TensorFlow, Google's open-source ML library, is based on dataflow graphs. A typical TensorFlow ML program consists of two separate stages:
  1. Setting up the ML model as a dataflow graph by using the library's Python API,
  2. Training or performing inference on the graph by using the Session.run() method.
If errors and bugs occur during the second stage (i.e., the TensorFlow runtime), they are difficult to debug.

To understand why that is the case, note that to standard Python debuggers, the Session.run() call is effectively a single statement and does not expose the running graph's internal structure (nodes and their connections) and state (output arrays or tensors of the nodes). Lower-level debuggers such as gdb cannot organize stack frames and variable values in a way relevant to TensorFlow graph operations. A specialized runtime debugger has been among the most frequently raised feature requests from TensorFlow users.

tfdbg addresses this runtime debugging need. Let's see tfdbg in action with a short snippet of code that sets up and runs a simple TensorFlow graph to fit a simple linear equation through gradient descent.

import numpy as np
import tensorflow as tf
import tensorflow.python.debug as tf_debug
xs = np.linspace(-0.5, 0.49, 100)
x = tf.placeholder(tf.float32, shape=[None], name="x")
y = tf.placeholder(tf.float32, shape=[None], name="y")
k = tf.Variable([0.0], name="k")
y_hat = tf.multiply(k, x, name="y_hat")
sse = tf.reduce_sum((y - y_hat) * (y - y_hat), name="sse")
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.02).minimize(sse)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

sess = tf_debug.LocalCLIDebugWrapperSession(sess)
for _ in range(10):
  sess.run(train_op, feed_dict={x: xs, y: 42 * xs})

As the highlighted line in this example shows, the session object is wrapped as a class for debugging (LocalCLIDebugWrapperSession), so calling the run() method will launch the command-line interface (CLI) of tfdbg. Using mouse clicks or commands, you can proceed through the successive run calls, inspect the graph's nodes and their attributes, and visualize the complete history of the execution of all relevant nodes in the graph through the list of intermediate tensors. By using the invoke_stepper command, you can let the Session.run() call execute in "stepper mode", in which you can step to nodes of your choice, observe and modify their outputs, followed by further stepping actions, in a way analogous to debugging procedural languages (e.g., in gdb or pdb).

A class of issues frequently encountered in developing TensorFlow ML models is the appearance of bad numerical values (infinities and NaNs) due to overflow, division by zero, log of zero, etc. In large TensorFlow graphs, finding the source of such values can be tedious and time-consuming. With the help of the tfdbg CLI and its conditional breakpoint support, you can quickly identify the culprit node. The video below demonstrates how to debug infinity/NaN issues in a neural network with tfdbg:

A screencast of the TensorFlow Debugger in action, from this tutorial.
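As a concrete sketch of that conditional-breakpoint workflow (a single extra line, assuming the wrapped session from the example above), tfdbg ships a ready-made inf/NaN tensor filter; register it once and then trigger it from the tfdbg CLI with run -f has_inf_or_nan:

# Break on the first intermediate tensor that contains an infinity or a NaN.
sess.add_tensor_filter("has_inf_or_nan", tf_debug.has_inf_or_nan)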

Compared with alternative debugging options such as Print Ops, tfdbg requires fewer lines of code change, provides more comprehensive coverage of the graphs, and offers a more interactive debugging experience. It will speed up your model development and debugging workflows. It offers additional features such as offline debugging of dumped tensors from server environments and integration with tf.contrib.learn. To get started, please visit this documentation. This research paper lays out the design of tfdbg in greater detail.

The minimum required TensorFlow version for tfdbg is 0.12.1. To report bugs, please open issues on TensorFlow's GitHub Issues Page. For general usage help, please post questions on Stack Overflow using the tag tensorflow.
Acknowledgements
This project would not be possible without the help and feedback from members of the Google TensorFlow Core/API Team and the Applied Machine Intelligence Team.





Categories: Programming

Announcing TensorFlow 1.0

Wed, 02/15/2017 - 20:43
Posted By: Amy McDonald Sandjideh, Technical Program Manager, TensorFlow

In just its first year, TensorFlow has helped researchers, engineers, artists, students, and many others make progress with everything from language translation to early detection of skin cancer and preventing blindness in diabetics. We're excited to see people using TensorFlow in over 6000 open-source repositories online.


Today, as part of the first annual TensorFlow Developer Summit, hosted in Mountain View and livestreamed around the world, we're announcing TensorFlow 1.0:


It's faster: TensorFlow 1.0 is incredibly fast! XLA lays the groundwork for even more performance improvements in the future, and tensorflow.org now includes tips & tricks for tuning your models to achieve maximum speed. We'll soon publish updated implementations of several popular models to show how to take full advantage of TensorFlow 1.0 - including a 7.3x speedup on 8 GPUs for Inception v3 and 58x speedup for distributed Inception v3 training on 64 GPUs!


It's more flexible: TensorFlow 1.0 introduces a high-level API for TensorFlow, with tf.layers, tf.metrics, and tf.losses modules. We've also announced the inclusion of a new tf.keras module that provides full compatibility with Keras, another popular high-level neural networks library.
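As a rough sketch of what those modules look like in use (the layer sizes and names here are illustrative, not taken from the announcement):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])
labels = tf.placeholder(tf.int64, [None])

hidden = tf.layers.dense(x, 128, activation=tf.nn.relu)                      # tf.layers
logits = tf.layers.dense(hidden, 10)
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)  # tf.losses
accuracy, accuracy_update = tf.metrics.accuracy(                             # tf.metrics
    labels=labels, predictions=tf.argmax(logits, 1))
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)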


It's more production-ready than ever: TensorFlow 1.0 promises Python API stability (details here), making it easier to pick up new features without worrying about breaking your existing code.

Other highlights from TensorFlow 1.0:

  • Python APIs have been changed to resemble NumPy more closely. For this and other backwards-incompatible changes made to support API stability going forward, please use our handy migration guide and conversion script.
  • Experimental APIs for Java and Go
  • Higher-level API modules tf.layers, tf.metrics, and tf.losses - brought over from tf.contrib.learn after incorporating skflow and TF Slim
  • Experimental release of XLA, a domain-specific compiler for TensorFlow graphs, that targets CPUs and GPUs. XLA is rapidly evolving - expect to see more progress in upcoming releases.
  • Introduction of the TensorFlow Debugger (tfdbg), a command-line interface and API for debugging live TensorFlow programs.
  • New Android demos for object detection and localization, and camera-based image stylization.
  • Installation improvements: Python 3 docker images have been added, and TensorFlow's pip packages are now PyPI compliant. This means TensorFlow can now be installed with a simple invocation of pip install tensorflow.

We're thrilled to see the pace of development in the TensorFlow community around the world. To hear more about TensorFlow 1.0 and how it's being used, you can watch the TensorFlow Developer Summit talks on YouTube, covering recent updates from higher-level APIs to TensorFlow on mobile to our new XLA compiler, as well as the exciting ways that TensorFlow is being used:



Click here for a link to the livestream and video playlist (individual talks will be posted online later in the day).


The TensorFlow ecosystem continues to grow with new techniques like Fold for dynamic batching and tools like the Embedding Projector along with updates to our existing tools like TensorFlow Serving. We're incredibly grateful to the community of contributors, educators, and researchers who have made advances in deep learning available to everyone. We look forward to working with you on forums like GitHub issues, Stack Overflow, @TensorFlow, the discuss@tensorflow.org group, and at future events.



Categories: Programming

G Suite Developer Sessions at Google Cloud Next 2017

Wed, 02/08/2017 - 19:18
Originally posted on the G Suite Developers Blog

Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite

There are over 200 sessions happening next month at Google Cloud's Next 2017 conference in San Francisco... so many choices! Along with content geared towards Google Cloud Platform, this year features the addition of G Suite so all 3 pillars of cloud computing (IaaS, PaaS, SaaS) are represented!


There are already thousands of developers including Independent Software Vendors (ISVs) creating solutions to help schools and enterprises running the G Suite collaboration and productivity suite (formerly Google Apps). If you're thinking about becoming one, consider building applications that extend, enhance, and integrate G Suite apps and data with other mission critical systems to help businesses and educational institutions succeed.


Looking for inspiration? Here's a preview of some of the sessions that current and potential G Suite developers should consider:


The first is Automating internal processes using Apps Script and APIs for Docs editors. Not only will you hear directly from senior engineers on the Google Sheets & Google Slides REST API teams, but you'll also find out how existing customers are already doing so! Can't wait to get started with these APIs? Here are the intro blog post & video for the latest Google Sheets API as well as the intro blog post & video for the Google Slides API. Part of the talk also covers Google Apps Script, the JavaScript-in-the-cloud solution that gives developers programmatic access to authorized G Suite data along with the ability to connect to other Google and external services.


If that's not enough Apps Script for you, or you're new to that technology, swing by to hear its Product Manager give you an introduction in his talk, Using Google Apps Script to automate G Suite. If you haven't heard of Apps Script before, you'll be wondering why you haven't until now! If you want a head start, here's a quick intro video to give you an idea of what you can do with it!


Did you know that Apps Script also powers "add-ons" which extend the functionality of Google Docs, Sheets, and Forms? Then come to "Building G Suite add-ons with Google Apps Script". Learn how you can leverage the power of Apps Script to build custom add-ons for your business, or monetize by making them available in the G Suite Marketplace where administrators or employees can install your add-ons for their organizations.


In addition to Apps Script apps, all your Google Docs, Sheets, and Slides documents live in Google Drive. But did you know that Drive is not just for individual file storage? Hear directly from a Drive Product Manager on how you can, "[Use] Drive APIs to customize Google Drive." With the Drive API and Team Drives, you can extend what Drive can do for your organization. One example from the most recent Google I/O tells the story of how WhatsApp used the Drive API to back up all your conversations! To get started with your own Drive API integration, check out this blog post and short video. Confused by when you should use Google Drive or Google Cloud Storage? I've got an app, err video, for that too! :-)


Not a software engineer but still code as part of your profession? Want to build a custom app for your department or line of business without having to worry about IT overhead? You may have heard about Google App Maker, our low-code development tool that does exactly that. Curious to learn more about it? Hear directly from its Product Manager lead in his talk entitled, "Citizen developers building low-code apps with App Maker on G Suite."


All of these talks are just waiting for you at Next, the best place to get your feet wet developing for G Suite, and of course, the Google Cloud Platform. Start by checking out the session schedule. Next will also offer many opportunities to meet and interact with industry peers along with representatives from all over Google who love the cloud. Register today and see you in San Francisco!




Categories: Programming

Introducing Google Developers India: A Local YouTube Channel for India’s Mobile Development Revolution

Tue, 02/07/2017 - 21:36
Posted By Peter Lubbers, Senior Program Manager

Today, we're launching the Google Developers India channel: a brand new YouTube channel tailored for Indian developers. The channel will include original content like interviews with local experts, developer spotlights, technical tutorials, and complete Android courses to help you be a successful developer.

Why India?

By 2018, India will have the largest developer base in the world with over 4 million developers. Our initiative to train 2 million Indian developers, along with the tremendous popularity of mobile development in the country and the desire to build better mobile apps, will be best catered by an India-specific developers channel featuring Indian developers, influencers, and experts.



Here is a taste of what's to come in 2017:
  • Tech Interviews: Advice from India's top developers, influencers and tech experts.
  • Developer Stories: Inspirational stories of Indian developers.
  • DevShow India: A weekly show that will keep new and seasoned developers updated on all the news, trainings, and APIs from Google.
  • Skilled to Scaled: A real life developer journey that takes us from the germination of an idea for an app, all the way to monetizing it on Google Play.
So what's next?


The channel is live now. Click here to check it out.


Categories: Programming

What’s in an AMP URL?

Mon, 02/06/2017 - 20:00
Posted by Alex Fischer, Software Engineer, Google Search.

TL;DR: Today, we're adding a feature to the AMP integration in Google Search that allows users to access, copy, and share the canonical URL of an AMP document. But before diving deeper into the news, let's take a step back to elaborate more on URLs in the AMP world and how they relate to the speed benefits of AMP.

What's in a URL? On the web, a lot - URLs and origins represent, to some extent, trust and ownership of content. When you're reading a New York Times article, a quick glimpse at the URL gives you a level of trust that what you're reading represents the voice of the New York Times. Attribution, brand, and ownership are clear.

Recent product launches in different mobile apps and the recent launch of AMP in Google Search have blurred this line a little. In this post, I'll first try to explain the reasoning behind some of the technical decisions we made and make sense of the different kinds of AMP URLs that exist. I'll then outline changes we are making to address the concerns around URLs.

To start with, AMP documents have three different kinds of URLs:
  • Original URL: The publisher's document written in the AMP format. http://www.example.com/amp/doc.html
  • AMP Cache URL: The document served through an AMP Cache (e.g., all AMPs served by Google are served through the Google AMP Cache). Most users will never see this URL. https://www-example-com.cdn.ampproject.org/c/www.example.com/amp/doc.html
  • Google AMP Viewer URL: The document displayed in an AMP viewer (e.g., when rendered on the search result page). https://www.google.com/amp/www.example.com/amp/doc.html



Although having three different URLs with different origins for essentially the same content can be confusing, there are two main reasons why these different URLs exist: caching and pre-rendering. Both are large contributors to AMP's speed, but require new URLs and I will elaborate on why that is.
AMP Cache URLs

Let's start with AMP Cache URLs. Paul Bakaus, a Google Developer Advocate for AMP, has an excellent post describing why AMP Caches exist. Paul's post goes into great detail describing the benefits of AMP Caches, but it doesn't quite answer the question why they require new URLs. The answer to this question comes down to one of the design principles of AMP: build for easy adoption. AMP tries to solve some of the problems of the mobile web at scale, so its components must be easy to use for everyone.

There are a variety of options to get validation, proximity to users, and other benefits provided by AMP Caches. For a small site, however, that doesn't manage its own DNS entries, doesn't have engineering resources to push content through complicated APIs, or can't pay for content delivery networks, a lot of these technologies are inaccessible.

For this reason, the Google AMP Cache works by means of a simple URL "transformation." A webmaster only has to make their content available at some URL and the Google AMP Cache can then cache and serve the content through Google's world-wide infrastructure through a new URL that mirrors and transforms the original. It's as simple as that. Leveraging an AMP Cache using the original URL, on the other hand, would require the webmaster to modify their DNS records or reconfigure their name servers. While some sites do just that, the URL-based approach is easier to use for the vast majority of sites.
AMP Viewer URLs

In the previous section, we learned about Google AMP Cache URLs -- URLs that point to the cached version of an AMP document. But what about www.google.com/amp URLs? Why are they needed? These are "AMP Viewer" URLs and they exist because of pre-rendering.
AMP's built-in support for privacy and resource-conscientious pre-rendering is rarely talked about and often misunderstood. AMP documents can be pre-rendered without setting off a cascade of resource fetches, without hogging up users' CPU and memory, and without running any privacy-sensitive analytics code. This works regardless of whether the embedding application is a mobile web page or a native application. The need for new URLs, however, comes mostly from mobile web implementations, so I am using Google's mobile search result page (SERP) as an illustrative example.
How does pre-rendering work?

When a user performs a Google search that returns AMP-enabled results, some of these results are pre-rendered behind the scenes. When the user clicks on a pre-rendered result, the AMP page loads instantly.

Pre-rendering works by loading a hidden iframe on the embedding page (the search result page) with the content of the AMP page and an additional parameter that indicates that the AMP document is only being pre-rendered. The JavaScript component that handles the lifecycle of these iframes is called "AMP Viewer".

The AMP Viewer pre-renders an AMP document in a hidden iFrame.

The user's browser loads the document and the AMP runtime and starts rendering the AMP page. Since all other resources, such as images and embeds, are managed by the AMP runtime, nothing else is loaded at this point. The AMP runtime may decide to fetch some resources, but it will do so in a resource and privacy sensible way.

When a user clicks on the result, all the AMP Viewer has to do is show the iframe that the browser has already rendered and let the AMP runtime know that the AMP document is now visible.
As you can see, this operation is incredibly cheap - there is no network activity or hard navigation to a new page involved. This leads to a near-instant loading experience of the result.
Where do google.com/amp URLs come from?

All of the above happens while the user is still on the original page (in our example, that's the search results page). In other words, the user hasn't gone to a different page; they have just viewed an iframe on the same page and so the browser doesn't change the URL.

We still want the URL in the browser to reflect the page that is displayed on the screen and make it easy for users to link to. When users hit refresh in their browser, they expect the same document to show up and not the underlying search result page. So the AMP viewer has to manually update this URL. This happens using the History API. This API allows the AMP Viewer to update the browser's URL bar without doing a hard navigation.

The question is what URL the browser should be updated to. Ideally, this would be the URL of the result itself (e.g., www.example.com/amp/doc.html); or the AMP Cache URL (e.g., www-example-com.cdn.ampproject.org/www.example.com/amp/doc.html). Unfortunately, it can't be either of those. One of the main restrictions of the History API is that the new URL must be on the same origin as the original URL (reference). This is enforced by browsers (for security reasons), but it means that in Google Search, this URL has to be on the www.google.com origin.
Why do we show a header bar?

The previous section explained restrictions on URLs that an AMP Viewer has to handle. These URLs, however, can be confusing and misleading. They can open up the doors to phishing attacks. If an AMP page showed a login page that looks like Google's and the URL bar says www.google.com, how would a user know that this page isn't actually Google's? That's where the need for additional attribution comes in.

To provide appropriate attribution of content, every AMP Viewer must make it clear to users where the content that they're looking at is coming from. And one way of accomplishing this is by adding a header bar that displays the "true" origin of a page.

What's next?

I hope the previous sections made it clear why these different URLs exist and why there needs to be a header in every AMP viewer. We have heard how you feel about this approach and the importance of URLs. So what next? As you know, we want to be thoughtful in what we do and ensure that we don't break the speed and performance users expect from AMP pages.

Since the launch of AMP in Google Search in Feb 2016, we have taken the following steps:
  • All Google URLs (i.e., the Google AMP cache URL and the Google AMP viewer URL) reflect the original source of the content as best as possible: www.google.com/amp/www.example.com/amp/doc.html.
  • When users scroll down the page to read a document, the AMP viewer header bar hides, freeing up precious screen real-estate.
  • When users visit a Google AMP viewer URL on a platform where the viewer is not available, we redirect them to the canonical page for the document.
In addition to the above, many users have requested a way to access, copy, and share the canonical URL of a document. Today, we're adding support for this functionality in the form of an anchor button in the AMP Viewer header on Google Search. This feature allows users to use their browser's native share functionality by long-tapping on the link that is displayed.


In the coming weeks, the Android Google app will share the original URL of a document when users tap on the app's share button. This functionality is already available on the iOS Google app.

Lastly, we're working on leveraging upcoming web platform APIs that allow us to improve this functionality even further. One such API is the Web Share API that would allow AMP viewers to invoke the platform's native sharing flow with the original URL rather than the AMP viewer URL.

We at Google have every intention of making the AMP experience as good as we can for both users and publishers. A thriving ecosystem is very important to us and attribution, user trust, and ownership are important pieces of this ecosystem. I hope this blog post helps clear up the origin of the three URLs of AMP documents, their role in making AMP fast, and our efforts to further improve the AMP experience in Google Search. Lastly, an ecosystem can only flourish with your participation: give us feedback and get involved with AMP.

Categories: Programming

Introducing Associate Android Developer Certification by Google

Wed, 02/01/2017 - 21:27
Originally posted on Android Developer Blog

The Associate Android Developer Certification program was announced at Google I/O 2016, and launched a few months later. Since then, over 322 Android developers spanning 61 countries have proven their competency and earned the title of Google Certified Associate Android Developer.
To establish a professional standard for what it means to be an Associate Android developer in the current job market, Google created this certification, which allows us to recognize developers who have proven themselves to uphold that standard.

We conducted a job task analysis to determine the required competencies and content of the certification exam. Through field research and interviews with experts, we identified the knowledge, work practices, and essential skills expected of an Associate Android developer.

The certification process consists of a performance-based exam and an exit interview. The certification fee includes three exam attempts. The cost for certification is $149 USD, or 6500 INR if you reside in India. After payment, the exam will be available for download, and you have 48 hours to complete and submit it for grading.

In the exam, you will implement missing features and debug an Android app using Android Studio. If you pass, you will undergo an exit interview where you will answer questions about your exam and demonstrate your knowledge of Associate Android Developer competencies.

Check out this short video for a quick overview of the Associate Android Developer certification process:



Earning your AAD Certification signifies that you possess the skills expected of an Associate Android developer, as determined by Google. You can showcase your credential on your resume and display your digital badge on your social media profiles. As a member of the AAD Alumni Community, you will also have access to program benefits focused on increasing your visibility as a certified developer.

Test your Android development skills and earn the title of Google Certified Associate Android Developer. Visit g.co/devcertification to get started!


Categories: Programming

New resources for building inclusive tech hubs

Tue, 01/31/2017 - 20:10
Posted by Amy Schapiro and the Women Techmakers team 

For the tech industry to thrive and create groundbreaking technology that supports the global ecosystem, it is critical to increase the diversity and inclusion of communities that make the technology. To support this global network of tech hubs - incubators, community organizations, accelerators and coworking spaces - Women Techmakers partnered with Change Catalyst to develop an in-depth video series and set of guides on how to build inclusive technology hubs.


Watch the videos on the Women Techmakers YouTube channel, and access the how-to guides on the Change Catalyst site [via this link].


For more information about Women Techmakers, Google's global program supporting women in technology, and to join the Membership program, visit womentechmakers.com.
Categories: Programming

Welcoming Fabric to Google

Wed, 01/18/2017 - 22:26
Originally posted on the Firebase Blog

Posted by Francis Ma, Firebase Product Manager

Almost eight months ago, we launched the expansion of Firebase to help developers build high-quality apps, grow their user base, and earn more money across iOS, Android and the Web. We've already seen great adoption of the platform, which brings together the best of Google's core businesses from Cloud to mobile advertising.

Our ultimate goal with Firebase is to free developers from so much of the complexity associated with modern software development, giving them back more time and energy to focus on innovation.

As we work towards that goal, we've continued to improve Firebase, working closely with our user community. We recently introduced major enhancements to many core features, including Firebase Analytics, Test Lab and Cloud Messaging, as well as added support for game developers with a C++ SDK and Unity plug-in.


We're deeply committed to Firebase and are doubling down on our investment to solve developer challenges.
Fabric and Firebase Joining Forces

Today, we're excited to announce that we've signed an agreement to acquire Fabric to continue the great work that Twitter put into the platform. Fabric will join Google's Developer Product Group, working with the Firebase team. Our missions align closely: help developers build better apps and grow their business.
As a popular, trusted tool over many years, we expect that Crashlytics will become the main crash reporting offering for Firebase and will augment the work that we have already done in this area. While Fabric was built on the foundation of Crashlytics, the Fabric team leveraged its success to launch a broad set of important tools, including Answers and Fastlane. We'll share further details in the coming weeks after we close the deal, as we work closely together with the Fabric team to determine the most efficient ways to further combine our strengths. During the transition period, Digits, the SMS authentication services, will be maintained by Twitter.


The integration of Fabric is part of our larger, long-term effort of delivering a comprehensive suite of features for iOS, Android and mobile Web app development.

This is a great moment for the industry and a unique opportunity to bring the best of Firebase with the best of Fabric. We're committed to making mobile app development seamless, so that developers can focus more of their time on building creative experiences.
Categories: Programming

Silence speaks louder than words when finding malware

Tue, 01/17/2017 - 23:06
Originally posted on Android Developer Blog

Posted by Megan Ruthven, Software Engineer
In Android Security, we're constantly working to better understand how to make Android devices operate more smoothly and securely. One security solution included on all devices with Google Play is Verify apps. Verify apps checks if there are Potentially Harmful Apps (PHAs) on your device. If a PHA is found, Verify apps warns the user and enables them to uninstall the app.

But sometimes devices stop checking up with Verify apps. This may happen for a non-security related reason, like buying a new phone, or it could mean something more concerning is going on. When a device stops checking up with Verify apps, it is considered Dead or Insecure (DOI). An app with a high enough percentage of DOI devices downloading it is considered a DOI app. We use the DOI metric, along with the other security systems, to help determine if an app is a PHA to protect Android users. Additionally, when we discover vulnerabilities, we patch Android devices with our security update system. This blog post explores the Android Security team's research to identify the security-related reasons that devices stop working and prevent it from happening in the future.
Flagging DOI Apps
To understand this problem more deeply, the Android Security team correlates app install attempts and DOI devices to find apps that harm the device in order to protect our users.
With these factors in mind, we then focus on 'retention'. A device is considered retained if it continues to perform periodic Verify apps security check-ups after an app download. If it doesn't, it's considered potentially dead or insecure (DOI). An app's retention rate is the percentage of all retained devices that downloaded the app in one day. Because retention is a strong indicator of device health, we work to maximize the ecosystem's retention rate. Therefore, we use an app DOI scorer, which assumes that all apps should have a similar device retention rate. If an app's retention rate is a couple of standard deviations lower than average, the DOI scorer flags it. A common way to calculate the number of standard deviations from the average is called a Z-score. In terms of the quantities defined below, the Z-score of an app's retention rate is:

Z = (x/N - p) / sqrt(p * (1 - p) / N)

  • N = Number of devices that downloaded the app.
  • x = Number of retained devices that downloaded the app.
  • p = Probability that a device downloading any app will be retained.

In this context, we call the Z-score of an app's retention rate its DOI score. An app has a statistically significantly lower retention rate if its Z-score is much less than -3.7. This means that, if the null hypothesis were true, there would be much less than a 0.01% chance of the Z-score's magnitude being this large. Here, the null hypothesis is that the app's lower retention rate is simply chance, independent of what the app actually does.
This lets extreme apps (those with a low retention rate and a high number of downloads) percolate to the top of the DOI list. From there, we combine the DOI score with other information to determine whether to classify the app as a PHA. We then use Verify apps to remove existing installs of the app and prevent future installs.
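To make the scoring concrete, here's a minimal sketch in Python of how a DOI score might be computed from the quantities defined above. The function names and sample numbers are illustrative rather than the production scorer; only the -3.7 threshold comes from the discussion above.

import math

def doi_score(retained_devices, total_devices, baseline_retention):
    """Z-score of an app's retention rate against the expected baseline.

    retained_devices   -- x, retained devices that downloaded the app
    total_devices      -- N, devices that downloaded the app
    baseline_retention -- p, probability that a device downloading any app is retained
    """
    observed_rate = retained_devices / total_devices
    std_dev = math.sqrt(baseline_retention * (1 - baseline_retention) / total_devices)
    return (observed_rate - baseline_retention) / std_dev

def is_flagged(retained_devices, total_devices, baseline_retention, threshold=-3.7):
    # Flag apps whose retention rate is far below the norm.
    return doi_score(retained_devices, total_devices, baseline_retention) < threshold

# Illustrative numbers: 90,000 of 100,000 downloading devices retained, baseline 95%.
print(is_flagged(90_000, 100_000, 0.95))  # True: retention is far below the expected rate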
Difference between a regular and DOI app download on the same device.
Results in the wild
Among others, the DOI score flagged many apps in three well-known malware families: Hummingbad, Ghost Push, and Gooligan. Although they behave differently, the DOI scorer flagged over 25,000 apps in these three malware families because they can degrade the Android experience to such an extent that a non-negligible number of users factory reset or abandon their devices. This approach gives us another perspective from which to discover PHAs and block them before they gain popularity. Without the DOI scorer, many of these apps would have escaped the extra scrutiny of a manual review.
The DOI scorer, together with the rest of Android's anti-malware work, is one of multiple layers protecting users and developers on Android. For an overview of Android's security and transparency efforts, check out our page.
Categories: Programming

Google AMP Cache, Optimizations for Slow Networks, and the Need for Speed

Wed, 01/11/2017 - 23:11
Posted by Huibao Lin and Eyal Peled, Software Engineers, Google
Editor’s Note: This blog post was amended to remove the term “AMP Lite”. This was a code name for a project to make AMP better for slow networks but many readers interpreted this as a separate version of AMP. We apologize for the confusion.
At Google we believe in designing products with speed as a core principle. The Accelerated Mobile Pages (AMP) format helps ensure that content reliably loads fast, but we can do even better.

Smart caching is one of the key ingredients in the near-instant AMP experiences users get in products like Google Search and Google News & Weather. With caching, content is generally physically closer to the users requesting it, so bytes take a shorter trip over the wire to reach the user. In addition, using a single common infrastructure like a cache provides greater consistency in page serving times, even though the content originates from many hosts, which might have very different, and much larger, latency in serving the content as compared to the cache.

Faster and more consistent delivery are the major reasons why pages served in Google Search's AMP experience come from the Google AMP Cache. The Cache's unified content serving infrastructure opens up the exciting possibility to build optimizations that scale to improve the experience across hundreds of millions of documents served. Making it so that any document would be able to take advantage of these benefits is one of the main reasons the Google AMP Cache is available for free to anyone to use.

In this post, we'll highlight two improvements we've recently introduced: (1) optimized image delivery and (2) enabling content to be served more successfully in bandwidth-constrained conditions.

Image optimizations by the Google AMP Cache
On average across the web, images make up 64% of the bytes of a page. This means images are a very promising target for impactful optimizations.

Applying image optimizations is an effective way to cut bytes on the wire. The Google AMP Cache employs the image optimization stack used by the PageSpeed Modules and Chrome Data Compression. (Note that in order to make these transformations, the Google AMP Cache disregards the "Cache-Control: no-transform" header.) Sites can get the same image optimizations on their origin by installing PageSpeed on their server.

Here's a rundown of some of the optimizations we've made:

1) Removing data which is invisible or difficult to see
We remove image data that is invisible to users, such as thumbnail and geolocation metadata. For JPEG images, we also reduce quality and color samples if they are higher than necessary. To be exact, we reduce JPEG quality to 85 and color samples to 4:2:0 — i.e., one color sample per four pixels. Compressing a JPEG to quality higher than this or with more color samples takes more bytes, but the visual difference is difficult to notice.

The reduced image data is then exhaustively compressed. We've found that these optimizations reduce bytes by 40%+ while not being noticeable to the user's eye.

2) Converting images to WebP format
Some image formats are more mobile-friendly. We convert JPEG to WebP for supported browsers. This transformation leads to an additional 25%+ reduction in bytes with no loss in quality.

3) Adding srcset
We add "srcset" if it has not been included. This applies to "amp-img" tags with "src" but no "srcset" attribute. The operation includes expanding "amp-img" tag as well as resizing the image to multiple dimensions. This reduces the byte count further on devices with small screens.

4) Using lower quality images under some circumstances
We decrease the quality of JPEG images when there is an indication that this is desired by the user or for very slow network conditions (as discussed below). For example, we reduce JPEG image quality to 50 for Chrome users who have turned on Data Saver. This transformation leads to another 40%+ byte reduction to JPEG images.

The following example shows the image before (left) and after (right) optimizations. Originally the image is 241,260 bytes; after applying Optimizations 1, 2, and 4 it is 25,760 bytes. After the optimizations the image looks essentially the same, but 89% of the bytes have been saved.
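The Cache's real pipeline is built on the PageSpeed stack, but the same kinds of transformations can be roughly sketched with Pillow in Python. The quality and subsampling values come from the rundown above; the function names and the Data Saver switch are just illustrative.

from io import BytesIO
from PIL import Image

def optimize_jpeg(jpeg_bytes, data_saver=False):
    """Re-encode a JPEG: metadata is dropped, quality is capped, chroma is 4:2:0."""
    img = Image.open(BytesIO(jpeg_bytes))
    out = BytesIO()
    # Optimizations 1 and 4: quality 85 normally, 50 when the user opted into Data Saver.
    quality = 50 if data_saver else 85
    # subsampling=2 means 4:2:0, i.e. one color sample per four pixels.
    img.save(out, "JPEG", quality=quality, subsampling=2, optimize=True)
    return out.getvalue()

def to_webp(jpeg_bytes):
    """Optimization 2: convert JPEG to WebP for browsers that advertise support."""
    img = Image.open(BytesIO(jpeg_bytes))
    out = BytesIO()
    img.save(out, "WEBP", quality=85)
    return out.getvalue()

# Example usage:
# smaller = optimize_jpeg(open("photo.jpg", "rb").read(), data_saver=True)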



AMP for Slow Network Conditions
Many people around the world access the internet with slow connection speeds or on devices with low RAM, and we've found that some AMP pages are not optimized for these severely bandwidth-constrained users. For this reason, Google has also started an effort to remove even more bytes from AMP pages for these users.

With this initiative, we apply all of the above optimizations to images. In particular, we always use lower quality levels (see Bullet 4 above).

In addition, we optimize external fonts by using the amp-font tag, setting the font loading timeout to 0 seconds so pages can be displayed immediately, regardless of whether the external font was previously cached or not.

We will be rolling out these optimizations for bandwidth-constrained users in several countries, such as Vietnam and Malaysia, and for owners of low-RAM devices globally. Note that these optimizations may modify the fine details of some images, but they do not affect other parts of the page, including ads.

* * *

All told, we see a combined 45% reduction in bytes across all optimizations listed above.
We hope to go even further in making more efficient use of users' data to provide even faster AMP experiences.
Categories: Programming

Some Tips for Boosting your App's Quality in 2017

Wed, 01/04/2017 - 21:59
Originally posted on the Firebase blog by Doug Stevenson, Developer Advocate


I've got to come clean with everyone: I'm making no new year's resolutions for 2017. Nor did I make any in 2016. In fact, I don't think I've ever made one! It's not so much that I take a dim view of new year's resolutions. I simply suspect that I would likely break them by the end of the week, and feel bad about it!

One thing I've found helpful in the past is to see the new year as a pivoting point to try new things, and also improve the work I'm already doing. For me (and I hope for you too!), 2017 will be a fresh year for boosting app quality.

The phrase "app quality" can take a bunch of different meanings, based on what you value in the software you create and use. As a developer, traditionally, this means fixing bugs that cause problems for your users. It could also be a reflection of the amount of delight your users take in your app. All of this gets wrapped up into the one primary metric that we have to gauge the quality of a mobile app, which is your app's rating on the store where it's published. I'm sure every one of you who has an app on a storefront has paid much attention to the app's rating at some point!

Firebase provides some tools you can use to boost your app's quality, and if you're not already using them, maybe a fresh look at those tools would be helpful this year?

Firebase Crash Reporting
The easiest tool to get started with is Firebase Crash Reporting. It takes little to no code to integrate into your iOS or Android app, and once you do, the Firebase console will start showing crashes that are happening to your users. This gives you a "hit list" of problems to fix.
One thing I find ironic about being involved with the Crash Reporting team is how we view the influx of total crashes received as we monitor our system. Like any good developer product, we strive to grow adoption, which means we celebrate graphs that go "up and to the right". So, in a strange sense, we like to see more crashes, because that means more developers are using our stuff! But for all of you developers out there, more crashes is obviously a *bad* thing, and you want to make those numbers go down! So, please, don't be like us - make your crash report graphs go down and to the right in 2017!

Firebase Test Lab for Android
Even better than fixing problems for your users is fixing those problems before they ever reach your users. For your Android apps, you can use Firebase Test Lab to help ensure that your apps work great for your users across a growing variety of actual devices that we manage. Traditionally, it's been kind of a pain to acquire and manage a good selection of devices for testing. However, with Test Lab, you simply upload your APK and tests, and it will install and run them on our devices. After the tests complete, we'll provide all the screenshots, videos, and logs of everything that happened for you to examine in the Firebase console.

With Firebase Test Lab for Android now available with generous daily quotas at no charge for projects on the free Spark tier, 2017 is a great time to get started with that. And, if you haven't set up your Android app builds in a continuous integration environment, you could set that up, then configure it to run your tests automatically on Test Lab.

If you're the kind of person who likes writing tests for your code (which is, admittedly, not very many of us!), it's natural to get those tests running on Test Lab. But, for those of us who aren't maintaining a test suite with our codebase, we can still use Test Lab's automated Robo test to get automated test coverage right away, with no additional lines of code required. That's not quite the same as having a comprehensive suite of tests, so maybe 2017 would be a good time to learn more about architecting "testable" apps, and how those practices can raise the bar of quality for your app. I'm planning on writing more about this later this year, so stay tuned to the Firebase Blog for more!

Firebase Remote Config
At its core, Firebase Remote Config is a tool that lets you configure your app using parameters that you set up in the Firebase console. It can be used to help manage the quality of your app, and there are a couple of neat tricks you can do with it. Maybe this new year brings new opportunities to give them a try!

First of all, you can use Remote Config to carefully roll out a new feature to your users. It works like this:
  1. Code your new feature and gate access to it behind a Remote Config boolean parameter. If the value is 'false', your users don't see the feature. Make 'false' the default value in the app.
  2. Configure that parameter in the Firebase console to also be initially 'false' for everyone.
  3. Publish your app to the store.
  4. When it's time to start rolling out the new feature to a small segment of users, configure the parameter to be 'true' for, say, five percent of your user base.
  5. Stay alert for new crashes in Firebase Crash Reporting, as well as feedback from your users.
  6. If there is a problem with the new feature, immediately roll back the new feature by setting the parameter to 'false' in the console for everyone.
  7. Or, if things are looking good, increase the percentage over time until you reach 100% of your users.


This is much safer than publishing your new feature to everyone with a single app update, because you now have the option to immediately disable a feature that turns out to have a serious problem, without having to build and publish a whole new version of your app. And, if you can act quickly, most of your users will never encounter the problem to begin with. This works well with the email alerts you get from Firebase Crash Reporting when a new crash is observed.
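On the client, the gating logic from Step 1 boils down to something like the sketch below. This is not the Remote Config SDK itself: fetch_flag stands in for whatever fetch-and-activate call your platform's SDK provides, and the parameter name new_feature_enabled is made up for the example.

NEW_FEATURE_DEFAULT = False  # Step 1: the safe in-app default is 'false'

def new_feature_enabled(fetch_flag):
    """Return the remotely configured flag, falling back to the in-app default."""
    try:
        return bool(fetch_flag("new_feature_enabled"))
    except Exception:
        # If the fetch fails (offline, missing parameter), users keep the old behavior.
        return NEW_FEATURE_DEFAULT

def home_screen(fetch_flag):
    if new_feature_enabled(fetch_flag):
        return "home screen with the new feature"    # only the rollout segment sees this
    return "home screen with the existing flow"      # everyone else is unaffected

# Simulating the console: the flag is 'true' for a user in the 5% segment...
print(home_screen(lambda key: {"new_feature_enabled": True}[key]))
# ...and missing (or 'false') for everyone else, so the safe default applies.
print(home_screen(lambda key: {}[key]))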

Another feature of Remote Config is the ability to experiment with some aspect of your app in order to find out what works better for the users of your app, then measure the results in Firebase Analytics. I don't know about you, but I'm typically pretty bad at guessing what people actually prefer, and sometimes I'm surprised at how people might actually *use* an app! Don't guess - instead, do an experiment and know /for certain/ what delights your users more! It stands to reason that apps finely tuned like this can get better ratings and make more money.

Firebase Realtime Database
It makes sense that if you make it easier for your users to perform tasks in your app, they will enjoy using it more, and they will come back more frequently. One thing I have always disliked is having to check for new information by refreshing, or by navigating back and forward again. Apps that are always fresh and up to date, without requiring me to take action, are more pleasant to use.

You can achieve this for your app by making effective use of Firebase Realtime Database to deliver relevant data directly to your users the moment it changes in the database. Realtime Database is reactive by nature, because the client API is designed for you to set up listeners at data locations that are triggered whenever the data changes. This is far more convenient than having to poll an API endpoint repeatedly to check for changes, and it's also much more respectful of the user's mobile data and battery life. Users associate this feeling of delight with apps of high quality.
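As a rough illustration of the listener pattern, here's a server-side sketch using the Firebase Admin SDK for Python; the mobile client SDKs follow the same attach-a-listener idea. The credential file, database URL, and data path are placeholders.

import firebase_admin
from firebase_admin import credentials, db

# Placeholder credentials and database URL for the sketch.
cred = credentials.Certificate("service-account.json")
firebase_admin.initialize_app(cred, {"databaseURL": "https://your-project.firebaseio.com"})

def on_change(event):
    # Called once with the current data, then again every time it changes --
    # no polling, so whatever you render from this stays fresh automatically.
    print("path:", event.path, "data:", event.data)

# Attach a listener at a data location; keep the registration so you can close() it later.
registration = db.reference("/messages/latest").listen(on_change)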

What does 2017 have in store for your app?
I hope you'll join me this year in putting more effort into making our users even more delighted. If you're with me, feel free to tweet me at @CodingDoug and tell me what you're up to in 2017!
Categories: Programming

Start with a line, let the planet complete the picture

Thu, 12/15/2016 - 19:03
Jeff Nusz, Data Arts Team

Take a break this holiday season and paint with satellite images of the Earth through a new experiment called Land Lines. The project lets you explore Google Earth images in unexpected ways through gesture. Earth provides the palette; your fingers, the paintbrush.
There are two ways to explore–drag or draw. "Draw" to find satellite images that match your every line. "Drag" to create an infinite line of connected rivers, highways and coastlines. Here's a quick demo:


Everything runs in real time in your phone's web browser without any servers. The responsiveness of the project is a result of using machine learning, data optimization, and vantage-point trees to analyze the images and store that data.

We preprocessed the images using a combination of OpenCV's Structured Forests machine-learning-based edge detection and ImageJ's Ridge Detection library. This culled the initial dataset of over fifty thousand high-res images down to just a few thousand, selected for the presence of strong lines, as shown in the example below. What would ordinarily take days was completed in just a few hours.


Example output from the line detection processing. The dominant line is highlighted in red while secondary lines are highlighted in green.
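As a rough sketch of the Structured Forests half of that preprocessing, here's how it might look with OpenCV's ximgproc module (from opencv-contrib-python); the pretrained model file and image names are placeholders, and the Ridge Detection step is not shown.

import cv2
import numpy as np

# Load the pretrained Structured Forests model (placeholder path).
detector = cv2.ximgproc.createStructuredEdgeDetection("model.yml.gz")

img = cv2.imread("satellite_tile.jpg")                  # BGR, uint8
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
edges = detector.detectEdges(np.float32(rgb) / 255.0)   # per-pixel edge strength in [0, 1]

# A crude "does this tile contain strong lines?" filter, loosely in the spirit of the culling step.
keep_tile = (edges > 0.5).mean() > 0.01
print(keep_tile)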

In the drawing exploration, we stored the resulting data in a vantage-point tree. This enabled us to efficiently run gesture matching against all the images and have results appear in milliseconds.
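A vantage-point tree makes that kind of nearest-match lookup cheap because it prunes most of the dataset on every query. The compact sketch below is not the project's implementation, just the idea: pick a vantage point, split the remaining points at the median distance, and skip whole subtrees that cannot contain a closer match.

import random
from math import dist  # Euclidean distance (Python 3.8+)

class VPTree:
    def __init__(self, points):
        self.point = points[0] if points else None
        self.threshold = 0.0
        self.inside = self.outside = None
        rest = points[1:]
        if not rest:
            return
        distances = [dist(self.point, p) for p in rest]
        self.threshold = sorted(distances)[len(distances) // 2]  # median split
        inner = [p for p, d in zip(rest, distances) if d <= self.threshold]
        outer = [p for p, d in zip(rest, distances) if d > self.threshold]
        self.inside = VPTree(inner) if inner else None
        self.outside = VPTree(outer) if outer else None

    def nearest(self, query, best=None, best_d=float("inf")):
        if self.point is None:
            return best, best_d
        d = dist(query, self.point)
        if d < best_d:
            best, best_d = self.point, d
        near, far = (self.inside, self.outside) if d <= self.threshold else (self.outside, self.inside)
        if near is not None:
            best, best_d = near.nearest(query, best, best_d)
        # Only visit the far side if it could still hold a closer point.
        if far is not None and abs(d - self.threshold) < best_d:
            best, best_d = far.nearest(query, best, best_d)
        return best, best_d

# Illustrative 2-D "gesture descriptors"; the real project matched richer line features.
points = [(random.random(), random.random()) for _ in range(10_000)]
tree = VPTree(points)
print(tree.nearest((0.5, 0.5)))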


An early example of gesture matching using vantage point trees, where the drawn input is on the right and the closest results on the left.


Another example of user gesture analysis, where the drawn input is on the right and the closest results on the left.

Built in collaboration with Zach Lieberman, Land Lines is an experiment in big visual data that explores themes of connection. We tried several machine learning libraries in our development process. The learnings from that experience can be found in the case study, while the project code is available open source on GitHub. Start with a line at g.co/landlines.
Categories: Programming

Formatting text with the Google Slides API

Wed, 12/14/2016 - 19:18
Originally posted on G Suite Developers blog

Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite

It's common knowledge that presentations use a set of images to impart ideas to the audience. As a result, one of the best practices for creating great slide decks is to minimize the overall amount of text. That means that if you do have text in a presentation, the (few) words you use must have high impact and be visually appealing. This is even more true when the slides are generated by a software application, say using the Google Slides API, rather than being crafted by hand.

The G Suite team recently launched the first Slides API, opening up a whole new category of applications. Since then, we've published several videos to help you realize some of those possibilities, showing you how to replace text and images in slides as well as how to generate slides from spreadsheet data. To round out this trifecta of key API use cases, we're adding text formatting to the conversation.

Developers manipulate text in Google Slides by sending API requests. Similar to the Google Sheets API, these requests come in the form of JSON payloads sent to the API's batchUpdate() method. Here's the JavaScript for inserting text in some shape (shapeID) on a slide:

{
  "insertText": {
    "objectId": shapeID,
    "text": "Hello World!\n"
  }
}
In the video, developers learn that writing text, as in the request above, is less complex than reading or formatting it, because both of the latter require developers to know how text on a slide is structured. Notice that for writing, just the text, and optionally an index, is all that's required. (That index defaults to zero if not provided.)

Assuming "Hello World!" has been successfully inserted in a shape on a slide, a request to bold just the "Hello" looks like this:

{
  "updateTextStyle": {
    "objectId": shapeID,
    "style": {
      "bold": true
    },
    "textRange": {
      "type": "FIXED_RANGE",
      "startIndex": 0,
      "endIndex": 5
    },
    "fields": "bold"
  }
}
If you've got at least one request, like the ones above, in an array named requests, you'd ask the API to execute them all with just one call, which in Python looks like this (assuming SLIDES is your service endpoint and the slide deck ID is deckID):
SLIDES.presentations().batchUpdate(presentationId=deckID,
    body={'requests': requests}).execute()
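Putting the pieces together, a minimal end-to-end sketch in Python might look like the following; the service-account file, presentation ID, and shape ID are placeholders you'd replace with your own.

from google.oauth2 import service_account
from googleapiclient.discovery import build

# Placeholder credentials and IDs for the sketch.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/presentations"])
SLIDES = build("slides", "v1", credentials=creds)

deckID = "YOUR_PRESENTATION_ID"
shapeID = "YOUR_SHAPE_ID"

requests = [
    {"insertText": {"objectId": shapeID, "text": "Hello World!\n"}},
    {"updateTextStyle": {
        "objectId": shapeID,
        "style": {"bold": True},
        "textRange": {"type": "FIXED_RANGE", "startIndex": 0, "endIndex": 5},
        "fields": "bold"}},
]

SLIDES.presentations().batchUpdate(
    presentationId=deckID, body={"requests": requests}).execute()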

To better understand text structure and styling in Google Slides, check out the text concepts guide in the documentation. For a detailed look at the complete code sample featured in the DevByte, check out the deep dive post. To see more samples for common API operations, take a look at this page. We hope the videos and all these developer resources help you create that next great app that automates producing highly impactful presentations for your users!

Categories: Programming