
Google Code Blog

Run Apps Script code from anywhere using the Execution API

Fri, 09/25/2015 - 20:39

Originally posted to the Google Apps Developer blog

Posted by Edward Jones, Software Engineer, Google Apps Script and Wesley Chun, Developer Advocate, Google Apps

Have you ever wanted a server API that modifies cells in a Google Sheet, a way to execute a Google Apps Script app from outside of Google Apps, or a way to use Apps Script as an API platform? Today, we’re excited to announce you can do all that and more with the Google Apps Script Execution API.

The Execution API allows developers to execute scripts from any client (browser, server, mobile, or any device). You provide the authorization, and the Execution API will run your script. If you’re new to Apps Script, it’s simply JavaScript code hosted in the cloud that can access authorized Google Apps data using the same technology that powers add-ons. The Execution API extends the ability to execute Apps Script code and unlocks the power of Docs, Sheets, Forms, and other supported services for developers.
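To make that concrete, here is a minimal sketch of calling the Execution API from Python with the google-api-python-client. It assumes you already hold OAuth2 credentials in creds and that your script is deployed as an API executable; the script ID and the function name appendRowToSheet are placeholders, not names from the announcement:

from httplib2 import Http
from apiclient.discovery import build

SCRIPT_ID = 'YOUR_SCRIPT_ID'  # placeholder: the project key of your deployed script

# Build an Execution API client using previously obtained OAuth2 credentials.
service = build('script', 'v1', http=creds.authorize(Http()))

# Describe which Apps Script function to run and the arguments to pass it.
request = {
    'function': 'appendRowToSheet',            # placeholder function name
    'parameters': [['2015-09-25', 'signup']],  # placeholder arguments
}

# Execute the function in the cloud and read back its return value.
response = service.scripts().run(scriptId=SCRIPT_ID, body=request).execute()
print(response.get('response', {}).get('result'))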

One of our launch partners, Pear Deck, used the new API to create an interactive presentation tool that connects students to teachers by converting slide decks into interactive experiences. Their app calls the Execution API to automatically generate a Google Doc customized for each student, so everyone gets a personalized set of notes from the presentation. Without the use of Apps Script, their app would be limited to using PDFs and other static file types. Check out the video below to see how it works.

Bruce McPherson, a Google Developer Expert (GDE) for Google Apps, says: “The Execution API is a great tool for enabling what I call ‘incremental transition’ from Microsoft Office (and VBA) to Apps (and Apps Script). A mature Office workflow may involve a number of processes currently orchestrated by VBA, with data in various formats and locations. It can be a challenge to move an entire workload in one step, especially an automated process with many moving parts. This new capability enables the migration of data and process in manageable chunks.” You can find some of Bruce’s sample migration code using the Execution API here.

The Google Apps Script Execution API is live and ready for you to use today. To get started, check out the developer documentation and quickstarts. We invite you to show us what you build with the Execution API!

Categories: Programming

Learn about Vulkan and 3D Graphics: Coffee with Shannon Woods

Fri, 09/18/2015 - 22:35

Posted by Laurence Moroney, Developer Advocate

Vulkan is the new generation, open standard API for efficient access to graphics and compute on modern GPUs. In this episode of Coffee with a Googler, Laurence meets with Shannon Woods, a Technical Program Manager in Google’s rendering teams to talk about plumbing code from your app down to the GPU!

Historically, mobile apps have used OpenGL ES to communicate with the GPU, but the hardware and the API have evolved separately, impacting efficiency. Vulkan has been designed to organize the graphics space in much the same way as the underlying GPU, so it can be more efficient.

Android will support both OpenGL ES and Vulkan, so developers can choose which API is right for them — and with Vulkan, precise control over the commands executed by the GPU allows for great optimization, as well as parallelization of code.

We also learn about the famous Utah Teapot, a standard reference object for 3D modellers, and how it is found in popular culture -- such as showing up in most animated movies. Have you spotted it?

Watch this episode for some great guidance from Shannon on what you need to do as a developer to prepare for Vulkan, and how using it could benefit your apps!

Categories: Programming

The Polymer Summit 2015 Roundup

Wed, 09/16/2015 - 18:23

Posted by Taylor Savage, Product Manager, Polymer

Yesterday in Amsterdam we kicked off the first ever Polymer Summit, joined live by 800 developers. We focused on three key themes: Develop, Design and Deploy, giving concrete advice on how you can build your web app from start to finish. You can watch a replay of the keynote here.

It has been amazing to see how much the community has grown and how far the project has come: what started as an experiment in a new way of developing on the web platform has steadily grown into the range of tools, product lines, and community contributions we saw presented throughout the Summit. Since Polymer 1.0 launched in May we’ve seen more than 150,000 public facing pages created with Polymer.

In case you missed any of the sessions, we’ve consolidated all of the recordings below:




Be sure to visit our YouTube channel for the session recordings. For the latest news and upcoming Polymer events, subscribe to the Polymer blog and follow us on Twitter @Polymer.

Categories: Programming

Google Calendar API invites users to great experiences

Tue, 09/15/2015 - 17:59

Posted by Wesley Chun, Developer Advocate, Google Apps

Have you ever booked a dining reservation, plane ticket, hotel room, concert ticket, or seats to the game from your favorite app, only to have to exit that booking app to enter the details into your calendar? It doesn’t make for a friendly user experience. Why can’t today’s apps do that for you automatically?

In case you missed episode 97 of #GoogleDev100 the other week, it aims to show how app developers can streamline that process with the help of the Google Calendar API. A short Python script, anchored by the following snippet, illustrates how easy it is to programmatically add calendar events:

CALENDAR = discovery.build('calendar', 'v3', http=creds.authorize(Http()))
GMT_OFF = '-07:00'   # PDT/MST/GMT-7
EVENT = {
    'summary': 'Dinner with friends',
    'start': {'dateTime': '2015-09-18T19:00:00%s' % GMT_OFF},
    'end':   {'dateTime': '2015-09-18T22:00:00%s' % GMT_OFF},
    'attendees': [
        {'email': ''},   # attendee addresses elided in the original post
        {'email': ''},
    ],
}
e = CALENDAR.events().insert(calendarId='primary', body=EVENT).execute()

For a deeper dive into the script, check out the corresponding blog post. With code like that, your app can automatically insert relevant events into your users’ calendars, saving them the effort of doing it manually. One of the surprising aspects is that a limited set of actions, such as RSVPing, is even available to non-Google Calendar users. By the way, inserting events is just the beginning. Developers can also delete or update events instantly in case that upcoming dinner gets pushed back a few weeks. Events can even be repeated with a recurrence rule, and attachments are supported so you can provide your users a PDF of the concert tickets they just booked. Those are just some of the things the API is capable of.
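To make those extra capabilities concrete, here is a rough sketch that builds on the snippet above. It reuses the CALENDAR service object and GMT_OFF offset; the weekly count and the new dates are placeholder values, not taken from the episode:

# A weekly dinner, repeated ten times, expressed with an RFC 5545 recurrence rule.
WEEKLY_DINNER = {
    'summary': 'Dinner with friends',
    'start': {'dateTime': '2015-09-18T19:00:00%s' % GMT_OFF},
    'end':   {'dateTime': '2015-09-18T22:00:00%s' % GMT_OFF},
    'recurrence': ['RRULE:FREQ=WEEKLY;COUNT=10'],
}
created = CALENDAR.events().insert(calendarId='primary', body=WEEKLY_DINNER).execute()

# Dinner pushed back a few weeks? Patch only the fields that changed.
CALENDAR.events().patch(calendarId='primary', eventId=created['id'], body={
    'start': {'dateTime': '2015-10-09T19:00:00%s' % GMT_OFF},
    'end':   {'dateTime': '2015-10-09T22:00:00%s' % GMT_OFF},
}).execute()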

Ready to get started? Much more information, including code samples in Java, PHP, .NET, Android, iOS, and more, can be found in the Google Calendar API documentation. If you’re new to the Launchpad Online developer series, we share technical content aimed at novice Google developers… the latest tools and features with a little bit of code to help you launch that app. Please give us your feedback below and tell us what topics you would like to see in future episodes!

Categories: Programming

Coffee With a Googler: Learn about App Indexing and Search

Mon, 09/14/2015 - 20:15

Posted by Laurence Moroney, Developer Advocate.

App Indexing helps you get your mobile app found in Google Search. Once your app is indexed, mobile users who search for content related to your app can be guided directly to your app, helping you to increase your install base and improve user engagement.

In this episode of Coffee with a Googler, Laurence meets with Jennifer Lin from the App Indexing team, who demonstrates the possibilities!

Jennifer shares that Google has indexed over 50 billion deep links into apps, with searches returning these links to users, taking them directly into your app. She shares how the Daily Mail newspaper in the UK saw a 22% boost in search impressions, and app users spent around 20% more time reading and sharing articles when they came in via a deep link from Search. Additionally, Tabelog, a premier restaurant review app and site in Japan, saw an increase of 9.6% in page views within their app, and a 63% increase in Search impressions after adding their app to the index.

When searching with Google Search on your phone, if the app is already installed, and has content that matches what you’re looking for -- you can be directed straight into the app to get a very rich experience. Alternatively, if the app isn’t yet installed, but has matching content, you can be guided through an install experience for the app, without losing context, so that when the app launches, you’ll go straight to the content you were looking for! Jennifer demos both scenarios using real apps, showing how straightforward the user experience is.

You can learn more about App Indexing, including how to get started, on the Google Developers App Indexing site. For more information about other Google Search for Developers APIs, check out the Google Developers site.

Categories: Programming

Get Ready for the Polymer Summit 2015

Thu, 09/10/2015 - 19:14

Posted by Taylor Savage, Product Manager, Polymer

The Polymer Summit is almost here! We’ll officially kick off live from Amsterdam at 9:00AM GMT+2 this coming Tuesday, September 15th. To get the most out of the event, make sure to check out the speaker list and talk schedule on our site.

Can’t join us in person? Don’t worry, we’ve got you covered! You can tune into the summit livestream: we will stream the keynote and all sessions over the course of the event. If you want us to send you a reminder to tune into the livestream, sign up here. We’ll also be publishing all of the talks as videos on the Chrome Developers YouTube Channel.

We’re looking forward to seeing you in person or remotely on Tuesday. Don’t forget to join the social conversations at #PolymerSummit!

Categories: Programming

100 days of Google Dev

Tue, 09/08/2015 - 21:21

Posted by Reto Meier, Team Lead, Scalable Developer Advocacy

For the past 100 days, Google Developers has delivered a series of daily videos to keep you informed about everything you need to develop, engage and earn.

We’ve covered everything from the Android Marshmallow launch to how you can get started developing with beacons:

...and continued our coverage of everything Polymer and Geo:

Thank you for following along and learning with us about all the ways you can use Google tools to make your apps awesome. Let us know what your favourite video was using #GoogleDev100. In the meantime, check out this short sizzle reel looking back at our most memorable moments -- we hope you’ve enjoyed watching them as much as we’ve enjoyed making them:

Categories: Programming

Coffee with a Googler: Learn about Google Voice Actions

Sat, 09/05/2015 - 00:07

Posted by Laurence Moroney, Developer Advocate

Google Voice Actions let your users quickly complete tasks in your app using voice commands. It’s a great way to drive usage of your app, and now users’ voice action requests can lead directly from Search to your Android app. In this episode of Coffee With a Googler, Laurence meets with Sunil Vemuri, product manager of Google Voice Actions.

Sunil tells us about how the speech field has progressed, and how the quality of algorithms for detecting speech has drastically improved in a short space of time. In 2013, the average error rate for speech detection was 23 percent -- almost a quarter of all words weren’t recognized. By 2015, at Google I/O, we announced that the rate was down to 8 percent, and it continues to get better.

The episode will also share how developers can get started with building for voice actions using System Actions, where the voice action can be routed from Google Search directly to your app by declaring an intent to capture that action. If you need voice actions that aren’t in the system, you can also set Custom Actions. A developer can tell Google the phrases that they’d like to have triggered (e.g. ‘Ok Google, Turn on the Lights on MyApp’) and the Google app can then fire off the Intent that you specify. In addition, you can build Voice Interactions where your app can ask the user follow-up questions before performing an action. For example, when the user asks to play some music, the app could ask for the genre.

You can learn more about Voice Actions, how they work, and how to get started at the Google Developers site for Voice Actions.

If you have any questions for Laurence or Sunil, please leave them in the comments below.

If there are any guests, technologies, or anything Google that you’d like us to chat about over Coffee, please also drop us a line!

Categories: Programming

Expanding our developer video channel lineup

Thu, 09/03/2015 - 19:07

Posted by Reto Meier

Starting today, the Android Developers, Chrome Developers, and Google Developers YouTube channels will host the videos that apply to each specific topic area. By subscribing to each channel, you will only be notified about content that matches your interests.

The Google Developers YouTube channel has been bringing you content across many platforms and product offerings to help inspire, inform, and delight you. Recently, we’ve been posting a variety of recurring shows that cover many broad topics across all of our developer offerings, such as Android Performance Patterns, Polycasts and Coffee With A Googler.

As we produce more and more videos, covering an ever increasing range of topics, we want to make it easier for you to find the information you need.

This means that the Android Developers Channel will host content that is more focused on Android, such as Android Performance Patterns. Similarly, the Chrome Developers Channel will host more web-focused content, such as Polycasts, HTTP203, Totally Tooling Tips, and New in Chrome. The Google Developers Channel will continue to broadcast broader Google developer-focused content, like our DevBytes covering Google Play services releases and our Coffee With A Googler series.

We look forward to bringing you lots more video to inspire, inform, and delight. To avoid missing any of it, you can subscribe to each of our YouTube channels using the following links; also be sure to turn notifications on in YouTube’s settings (more info here) so that you get updates as we post new content:

Google Developers | Android Developers | Chrome Developers

Categories: Programming

Wow your users with Google Cast

Wed, 09/02/2015 - 23:34

Posted by Alex Danilo, Developer Advocate

When you develop applications for Google Cast, you’re building a true multi-screen experience to ‘wow’ your users and provide a unique perspective. Part of hitting that wow factor is making the app enjoyable and easy to use.

While designing the Google Cast user experience, we performed a huge amount of user testing to refine a model that works for your users in as many scenarios as possible.

The video below gives a quick explanation of the overall user experience for Google Cast enabled applications.

We’ve also produced some targeted videos to highlight important aspects of the core Google Cast design principles.

The placement of the Cast icon is one of the most important UX guidelines, since it directly affects your users’ familiarity with the ability to Cast. Watch this explanation to help understand why we designed it that way:

Another important design consideration is how the connection between your application and the Google Cast device should work and that’s covered in this short video:

When your users are connected to a Google Cast device that’s playing sound, it’s vital that they can control the audio volume easily. Here’s another video that covers volume control in Cast enabled applications:

To get more detailed information about our UX design principles, we have great documentation and a convenient UX guidelines checklist.

By following the Google Cast UX guidelines in your app, you will give your users a great interactive experience that’ll wow them and have them coming back for more!

Join fellow developers in the Cast Developers Google+ community for more tips, tricks and pointers to all kinds of development resources.

Categories: Programming

Docker and Containers: Coffee With A Googler meets Brian Dorsey

Mon, 08/31/2015 - 22:15

Posted by Laurence Moroney, Developer Advocate

If you’ve worked with Web or cloud tech over the last 18 months, you’ll have heard about Containers and about how they let you spend more time on building software, instead of managing infrastructure. In this episode of Coffee with a Googler, we chat with Brian Dorsey about the benefits of using Containers on Google Cloud Platform to simplify infrastructure management.

Important discussion topics covered in this episode include:

  • Containers improve the developer experience. Regardless of how large the final deployment is, they are there to make it easier for you to succeed.
  • Kubernetes -- an open source project to allow you to manage containers and fleets of containers.

Brian shares an example from Julia Ferraioli who used Containers (with Docker) to configure a Minecraft server, with many plugins, and Kubernetes to manage it.

You can learn more about Google Cloud Platform, including Docker and Kubernetes, at the Google Cloud Platform site.

Categories: Programming

Angular 1 and Angular 2 integration: the path to seamless upgrade

Fri, 08/28/2015 - 20:05

Originally posted on the Angular blog.

Posted by Misko Hevery, Software Engineer, Angular

Do you have an existing Angular 1 application and are wondering about upgrading to Angular 2? Then read on and learn about our plans to support incremental upgrades.


Good news!

    • We're enabling mixing of Angular 1 and Angular 2 in the same application.
    • You can mix Angular 1 and Angular 2 components in the same view.
    • Angular 1 and Angular 2 can inject services across frameworks.
    • Data binding works across frameworks.

Why Upgrade?

Angular 2 provides many benefits over Angular 1, including dramatically better performance, more powerful templating, lazy loading, simpler APIs, easier debugging, much better testability, and much more. Here are a few of the highlights:

Better performance

We’ve focused on many scenarios to make your apps snappy: 3x to 5x faster on initial render and re-render.

    • Faster change detection through monomorphic JS calls
    • Template precompilation and reuse
    • View caching
    • Lower memory usage / VM pressure
    • Linear (blindingly-fast) scalability with observable or immutable data structures
    • Dependency injection supports incremental loading
More powerful templating
    • Removes need for many directives
    • Statically analyzable - future tools and IDEs can discover errors at development time instead of run time
    • Allows template writers to determine binding usage rather than hard-wiring in the directive definition
Future possibilities

We've decoupled Angular 2's rendering from the DOM. We are actively working on supporting the following other capabilities that this decoupling enables:

    • Server-side rendering. Enables blindingly fast initial render and web-crawler support.
    • Web Workers. Move your app and most of Angular to a Web Worker thread to keep the UI smooth and responsive at all times.
    • Native mobile UI. We're enthusiastic about supporting the Web Platform in mobile apps. At the same time, some teams want to deliver fully native UIs on their iOS and Android mobile apps.
    • Compile as build step. Angular apps parse and compile their HTML templates. We're working to speed up initial rendering by moving the compile step into your build process.
Angular 1 and 2 running together

Angular 2 offers dramatic advantages over Angular 1 in performance, simplicity, and flexibility. We're making it easy for you to take advantage of these benefits in your existing Angular 1 applications by letting you seamlessly mix in components and services from Angular 2 into a single app. By doing so, you'll be able to upgrade an application one service or component at a time over many small commits.

For example, you may have an app that looks something like the diagram below. To get your feet wet with Angular 2, you decide to upgrade the left nav to an Angular 2 component. Once you're more confident, you decide to take advantage of Angular 2's rendering speed for the scrolling area in your main content area.

For this to work, four things need to interoperate between Angular 1 and Angular 2:

    • Dependency injection
    • Component nesting
    • Transclusion
    • Change detection

To make all this possible, we're building a library named ng-upgrade. You'll include ng-upgrade and Angular 2 in your existing Angular 1 app, and you'll be able to mix and match at will.

You can find full details and pseudocode in the original upgrade design doc or read on for an overview of the details on how this works. In future posts, we'll walk through specific examples of upgrading Angular 1 code to Angular 2.

Dependency Injection

First, we need to solve for communication between parts of your application. In Angular, the most common pattern for calling another class or function is through dependency injection. Angular 1 has a single root injector, while Angular 2 has a hierarchical injector. Upgrading services one at a time implies that the two injectors need to be able to provide instances from each other.

The ng-upgrade library will automatically make all of the Angular 1 injectables available in Angular 2. This means that your Angular 1 application services can now be injected anywhere in Angular 2 components or services.

Exposing an Angular 2 service to an Angular 1 injector will also be supported, but will require you to provide a simple mapping configuration.

The result is that services can be easily moved one at a time from Angular 1 to Angular 2 over independent commits and communicate in a mixed-environment.

Component Nesting and Transclusion

In both versions of Angular, we define a component as a directive which has its own template. For incremental migration, you'll need to be able to migrate these components one at a time. This means that ng-upgrade needs to enable components from each framework to nest within each other.

To solve this, ng-upgrade will allow you to wrap Angular 1 components in a facade so that they can be used in an Angular 2 component. Conversely, you can wrap Angular 2 components to be used in Angular 1. This will fully work with transclusion in Angular 1 and its analog of content-projection in Angular 2.

In this nested-component world, each template is fully owned by either Angular 1 or Angular 2 and will fully follow its syntax and semantics. This is not an emulation mode which mostly looks like the other framework, but actual execution in each framework, depending on the type of component. This means that components which are upgraded to Angular 2 will get all of the benefits of Angular 2, not just better syntax.

This also means that an Angular 1 component will always use Angular 1 Dependency Injection, even when used in an Angular 2 template, and an Angular 2 component will always use Angular 2 Dependency Injection, even when used in an Angular 1 template.

Change Detection

Mixing Angular 1 and Angular 2 components implies that Angular 1 scopes and Angular 2 components are interleaved. For this reason, ng-upgrade will make sure that the change detection (Scope digest in Angular 1 and Change Detectors in Angular 2) are interleaved in the same way to maintain a predictable evaluation order of expressions.

ng-upgrade takes this into account and bridges the scope digest from Angular 1 and change detection in Angular 2 in a way that creates a single cohesive digest cycle spanning both frameworks.

Typical application upgrade process

Here is an example of what an Angular 1 project upgrade to Angular 2 may look like.

  1. Include the Angular 2 and ng-upgrade libraries with your existing application
  2. Pick a component which you would like to migrate
    1. Edit an Angular 1 directive's template to conform to Angular 2 syntax
    2. Convert the directive's controller/linking function into Angular 2 syntax/semantics
    3. Use ng-upgrade to export the directive (now a Component) as an Angular 1 component (this is needed if you wish to call the new Angular 2 component from an Angular 1 template)
  3. Pick a service which you would like to migrate
    1. Most services should require minimal to no change.
    2. Configure the service in Angular 2
    3. (optionally) re-export the service into Angular 1 using ng-upgrade if it's still used by other parts of your Angular 1 code.
  4. Repeat steps #2 and #3 in an order convenient for your application
  5. Once no more services/components need to be converted, drop the top-level Angular 1 bootstrap and replace it with the Angular 2 bootstrap.

Note that each individual change can be checked in separately and the application keeps working, letting you continue to release updates as you wish.

We are not planning on adding support for allowing non-component directives to be usable on both sides. We think most non-component directives are not needed in Angular 2, as they are supported directly by the new template syntax (e.g. ng-click vs (click)).


I heard Angular 2 doesn't support 2-way bindings. How will I replace them?

Actually, Angular 2 supports two-way data binding and ng-model, though with slightly different syntax.

When we set out to build Angular 2 we wanted to fix issues with the Angular digest cycle. To solve this we chose to create a unidirectional data-flow for change detection. At first it was not clear to us how the two way forms data-binding of ng-model in Angular 1 fits in, but we always knew that we had to make forms in Angular 2 as simple as forms in Angular 1.

After a few iterations we managed to fix what was broken with multiple digests and still retain the power and simplicity of ng-model in Angular 1.

Two-way data binding has a new syntax: [(property-name)]="expression", to make it explicit that the expression is bound in both directions. Because for most scenarios this is just a small syntactic change, we expect migration to be easy.

As an example, if in Angular 1 you have: <input type="text" ng-model="" />

You would convert to this in Angular 2: <input type="text" [(ng-model)]="" />

What languages can I use with Angular 2?

Angular 2 APIs fully support coding in today's JavaScript (ES5), the next version of JavaScript (ES6 or ES2015), TypeScript, and Dart.

While it's a perfectly fine choice to continue with today's JavaScript, we'd like to recommend that you explore ES6 and TypeScript (which is a superset of ES6) as they provide dramatic improvements to your productivity.

ES6 provides much improved syntax and interoperable standards for common libraries like promises and modules. TypeScript gives you dramatically better code navigation, automated refactoring in IDEs, documentation, error detection, and more.

Both ES6 and TypeScript are easy to adopt as they are supersets of today's ES5. This means that all your existing code is valid and you can add their features a little at a time.

What should I do with $watch in our codebase?

tl;dr: $watch expressions need to be moved into declarative annotations. Those that don't fit there should take advantage of observables (reactive programming style).

In order to gain speed and predictability, in Angular 2 you specify watch expressions declaratively. The expressions are either in HTML templates and are auto-watched (no work for you), or have to be declaratively listed in the directive annotation.

One additional benefit from this is that Angular 2 applications can be safely minified/obfuscated for smaller payload.

For inter-application communication, Angular 2 offers observables (reactive programming style).

What can I do today to prepare myself for the migration?

Follow the best practices and build your application using components and services in Angular 1 as described in the AngularJS Style Guide.

Wasn't the original upgrade plan to use the new Component Router?

The upgrade plan that we announced at ng-conf 2015 was based on upgrading a whole view at a time and having the Component Router handle communication between the two versions of Angular.

The feedback we received was that while yes, this was incremental, it wasn't incremental enough. We went back and redesigned, arriving at the plan described above.

Are there more details you can share?

Yes! In the Angular 1 to Angular 2 Upgrade Strategy design doc.

We're working on a series of upcoming posts on related topics including:

  1. Mapping your Angular 1 knowledge to Angular 2.
  2. A set of FAQs on details around Angular 2.
  3. Detailed migration guide with working code samples.

See you back here soon!

Categories: Programming

Learn app monetization best practices with Udacity and Google

Wed, 08/26/2015 - 18:17

Posted by Ido Green, Developer Advocate

There is no higher form of user validation than having customers support your product with their wallets. However, the path to a profitable business is not necessarily an easy one. There are many strategies to pick from and a lot of little things that impact the bottom line. If you are starting a new business (or thinking how to improve the financial situation of your current startup), we recommend this course we've been working on with Udacity!

This course blends instruction with real-life examples to help you effectively develop, implement, and measure your monetization strategy. By the end of this course you will be able to:

  • Choose & implement a monetization strategy relevant to your service or product.
  • Set performance metrics & monitor the success of a strategy.
  • Know when it might be time to change methods.

Go try it: the course is available on Udacity (course ud518).

We hope you will enjoy and earn from it!

Categories: Programming

Breaking the SQL Barrier: Google BigQuery User-Defined Functions

Tue, 08/25/2015 - 16:55

Posted by Thomas Park, Senior Software Engineer, Google BigQuery

Many types of computations can be difficult or impossible to express in SQL. Loops, complex conditionals, and non-trivial string parsing or transformations are all common examples. What can you do when you need to perform these operations but your data lives in a SQL-based Big data tool? Is it possible to retain the convenience and speed of keeping your data in a single system, when portions of your logic are a poor fit for SQL?

Google BigQuery is a fully managed, petabyte-scale data analytics service that uses SQL as its query interface. As part of our latest BigQuery release, we are announcing support for executing user-defined functions (UDFs) over your BigQuery data. This gives you the ability to combine the convenience and accessibility of SQL with the option to use a familiar programming language, JavaScript, when SQL isn’t the right tool for the job.

How does it work?

BigQuery UDFs are similar to map functions in MapReduce. They take one row of input and produce zero or more rows of output, potentially with a different schema.

Below is a simple example that performs URL decoding. Although BigQuery provides a number of built-in functions, it does not have a built-in for decoding URL-encoded strings. However, this functionality is available in JavaScript, so we can extend BigQuery with a simple User-Defined Function to decode this type of data:

function decodeHelper(s) {
  try {
    return decodeURI(s);
  } catch (ex) {
    return s;
  }
}

// The UDF.
function urlDecode(r, emit) {
  emit({title: decodeHelper(r.title),
        requests: r.num_requests});
}
BigQuery UDFs are functions with two formal parameters. The first parameter is a variable to which each input row will be bound. The second parameter is an “emitter” function. Each time the emitter is invoked with a JavaScript object, that object will be returned as a row to the query.

In the above example, urlDecode is the UDF that will be invoked from BigQuery. It calls a helper function decodeHelper that uses JavaScript’s built-in decodeURI function to transform URL-encoded data into UTF-8.

Note the use of try / catch in decodeHelper. Data is sometimes dirty! If we encounter an error decoding a particular string for any reason, the helper returns the original, un-decoded string.

To make this function visible to BigQuery, it is necessary to include a registration call in your code that describes the function, including its input columns and output schema, and a name that you’ll use to reference the function in your SQL:

bigquery.defineFunction(
  'urlDecode',                 // Name used to call the function from SQL.

  ['title', 'num_requests'],   // Input column names.

  // JSON representation of the output schema.
  [{name: 'title', type: 'string'},
   {name: 'requests', type: 'integer'}],

  urlDecode                    // The UDF reference.
);

The UDF can then be invoked by the name “urlDecode” in the SQL query, with a source table or subquery as an argument. The following query looks for the most-visited French Wikipedia articles from April 2015 that contain a cédille character (ç) in the title:

SELECT requests, title
FROM urlDecode(
  SELECT title, SUM(requests) AS num_requests
  FROM [fh-bigquery:wikipedia.pagecounts_201504]
  WHERE language = 'fr'
  GROUP BY title)
WHERE title LIKE '%ç%'
ORDER BY requests DESC

This query processes data from a 5.6 billion row / 380 GB dataset and generally runs in less than two minutes. The cost? About $1.37, at the time of this writing.

To use a UDF in a query, it must be described via UserDefinedFunctionResource elements in your JobConfiguration request. UserDefinedFunctionResource elements can either contain inline JavaScript code or pointers to code files stored in Google Cloud Storage.
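As a rough sketch of what that request body can look like when assembled by hand (field names follow the BigQuery jobs.insert REST reference; the query string, the UDF source variable, and the Cloud Storage path are placeholders):

udf_js = '...'  # the urlDecode function and its defineFunction registration, as shown above

job_body = {
    'configuration': {
        'query': {
            'query': 'SELECT requests, title FROM urlDecode(...)',  # your UDF query
            'userDefinedFunctionResources': [
                # Either embed the JavaScript directly in the request...
                {'inlineCode': udf_js},
                # ...or point at a code file stored in Google Cloud Storage (placeholder URI).
                {'resourceUri': 'gs://my-bucket/udfs/url_decode.js'},
            ],
        }
    }
}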

Under the hood

JavaScript UDFs are executed on instances of Google V8 running on Google servers. Your code runs close to your data in order to minimize added latency.

You don’t have to worry about provisioning hardware or managing pipelines to deal with data import / export. BigQuery automatically scales with the size of the data being queried in order to provide good query performance.

In addition, you only pay for what you use - there is no need to forecast usage or pre-purchase resources.

Developing your function

Interested in developing your JavaScript UDF without running up your BigQuery bill? Here is a simple browser-based widget that allows you to test and debug UDFs.

Note that not all JavaScript functionality supported in the browser is available in BigQuery. For example, anything related to the browser DOM is unsupported, including Window and Document objects, and any functions that require them, such as atob() / btoa().

Tips and tricks

Pre-filter input

In our URL-decoding example, we are passing a subquery as the input to urlDecode rather than the full table. Why?

There are about 5.6 billion rows in [fh-bigquery:wikipedia.pagecounts_201504]. However, one of the query predicates will filter the input data down to the rows where language is “fr” (French) - this is about 262 million rows. If we ran the UDF over the entire table and did the language and cédille filtering in a single WHERE clause, that would cause the JavaScript framework to process over 21 times more rows than it would with the filtered subquery. This equates to a lot of CPU cycles wasted doing unnecessary data conversion and marshalling.

If your input can easily be filtered down before invoking a UDF by using native SQL predicates, doing so will usually lead to a faster (and potentially cheaper) query.

Avoid persistent mutable state

You must not store and access mutable state across UDF execution for different rows. The following contrived example illustrates this error:

// myCode.js
var numRows = 0;

function dontDoThis(r, emit) {
  emit({rowCount: ++numRows});
}

// The query.
SELECT max(rowCount) FROM dontDoThis(myTable);

This is a problem because BigQuery will shard your query across multiple nodes, each of which has independent V8 instances and will therefore accumulate separate values for numRows.

Expand SELECT *

You cannot execute SELECT * FROM urlDecode(...) at this time; you must explicitly list the columns being selected from the UDF: SELECT requests, title FROM urlDecode(...)

For more information about BigQuery User-Defined Functions, see the full feature documentation.

Categories: Programming

Beacons, the Internet of things, and more: Coffee with Timothy Jordan

Sat, 08/22/2015 - 00:31

Posted by Laurence Moroney, Developer Advocate

In this episode of Coffee With a Googler, Laurence meets with Developer Advocate Timothy Jordan to talk about all things Ubiquitous Computing at Google. Learn about the platforms and services that help developers reach their users wherever it makes sense.

We discuss Brillo, which extends the Android Platform to 'Internet of Things' embedded devices, as well as Weave, which is a services layer that helps all those devices work together seamlessly.

We also chat about beacons and how they can give context to the world around you, making the user experience simpler. Traditionally, users need to tell you about their location and other types of context, but with beacons, the environment can speak to you. When it comes to developing for beacons, Timothy introduces us to Eddystone, a protocol specification for Bluetooth Low Energy (BLE) beacons; the Proximity Beacon API, which allows developers to register a beacon and associate data with it; and the Nearby Messages API, which helps your app 'sight' and retrieve data about nearby beacons.

Timothy and his team have produced a new Udacity series on ubiquitous computing that you can access for free! Take the course to learn more about ubiquitous computing, the design paradigms involved, and the technical specifics for extending to Android Wear, Google Cast, Android TV, and Android Auto.

Also, don't forget to join us for a ubiquitous computing summit on November 9th & 10th in San Francisco. Sign up here and we'll keep you updated.

Categories: Programming

Project Tango I/O Apps now released in Google Play

Fri, 08/21/2015 - 20:15

Posted by Larry Yang, Lead Product Manager, Project Tango

At Google I/O, we showed the world many of the cool things you can do with Project Tango. Now you can experience it yourself by downloading these apps from Google Play onto your Project Tango Tablet Development Kit.

A few examples of creative experiences include:

MeasureIt is a sample application that shows how easy it is to measure general distances. Just point a Project Tango device at two or more points. No tape measures and step ladders required.

Constructor is a sample 3D content creation tool where you can scan a room and save the scan for further use.

Tangosaurs lets you walk around and dig up hidden fossils that unlock a portal into a virtual dinosaur world.

Tango Village and Multiplayer VR are simple apps that demonstrate how Project Tango’s motion tracking enables you to walk around VR worlds without requiring an input device.

Tango Blaster lets you blast swarms of robots in a virtual world, and can even work with the Tango device mounted on a toy gun.

We also showed a few partner apps that are now available in Google Play. Break A Leg is a fun VR experience where you’re a magician performing tricks on stage.

SideKick’s Castle Defender uses Project Tango’s depth perception capability to place a virtual world onto a physical playing surface.

Defective Studio’s VRMT is a world-building sandbox designed to let anyone create, collaborate on, and share their own virtual worlds and experiences. VRMT gives you libraries of props and intuitive tools, to make the virtual creation process as streamlined as possible.

We hope these applications inspire you to use Project Tango’s motion tracking, area learning and depth perception technologies to create 3D experiences. We encourage you to explore the physical space around the user, including precise navigation without GPS, windows into virtual 3D worlds, measurement of spaces, and games that know where they are in the room and what’s around them.

As we mentioned in our previous post, Project Tango Tablet Development Kits will go on sale in the Google Store in Denmark, Finland, France, Germany, Ireland, Italy, Norway, Sweden, Switzerland and the United Kingdom starting August 26.

We have a lot more to share over the coming months! Sign-up for our monthly newsletter to keep up with the latest news. Connect with the 5,000 other developers in our Google+ community. Get help from other developers by using the Project Tango tag in Stack Overflow. See what others are creating on our YouTube channel. And share your story on Twitter with #ProjectTango.

Join us on our journey.

Categories: Programming

Polymer Summit Schedule Released!

Thu, 08/20/2015 - 20:12

Posted by Taylor Savage, Product Manager

We’re excited to announce that the full speaker list and talk schedule has been released for the first ever Polymer Summit! Find the latest details on our newly launched site here. Look forward to talks about topics like building full apps with Polymer, Polymer and ES6, adaptive UI with Material Design, and performance patterns in Polymer.

The Polymer Summit will start on Monday, September 14th with an evening of Code Labs, followed by a full day of talks on Tuesday, September 15th. All of this will be happening at the Muziekgebouw aan ‘t IJ, right on the IJ river in downtown Amsterdam. All tickets to the summit were claimed on the first day, but you can sign up for the waitlist to be notified, should any more tickets become available.

Can’t make it to the summit? Sign up here if you’d like to receive updates on the livestream, and tune in live on September 15th. We’ll also be publishing all of the talks as videos on the Google Developers YouTube Channel.

Categories: Programming

What’s in a message? Getting attachments right with the Google beacon platform

Thu, 08/20/2015 - 19:17

Posted by Hoi Lam, Developer Advocate

If your users’ devices know where they are in the world – the place that they’re at, or the objects they’re near – then your app can adapt or deliver helpful information when it matters most. Beacons are a great way to explicitly label real-world locations and contexts, but how does your app get the message that it’s at platform 9 instead of the shopping mall, or that the user is standing in front of a food truck rather than just hanging out in the parking lot?

With the Google beacon platform, you can associate information with registered beacons by using attachments in Proximity Beacon API, and serve those attachments back to users’ devices as messages via the Nearby Messages API. In this blog post, we will focus on how we can use attachments and messages most effectively, making our apps more context-aware.
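For example, attaching a piece of data to a beacon you have already registered is a single call to the Proximity Beacon API. The sketch below uses the Python google-api-python-client; it assumes creds holds OAuth2 credentials, and the beacon name, namespaced type, and payload are placeholders rather than values from this post:

import base64
from httplib2 import Http
from apiclient.discovery import build

# Build a Proximity Beacon API client with previously obtained OAuth2 credentials.
service = build('proximitybeacon', 'v1beta1', http=creds.authorize(Http()))

# Attachment data is an opaque, base64-encoded blob filed under a namespaced type.
attachment = {
    'namespacedType': 'my-project/station-info',    # placeholder namespace/type
    'data': base64.b64encode(b'Kings Cross, Platform 9').decode('utf-8'),
}

service.beacons().attachments().create(
    beaconName='beacons/3!0123456789abcdef',        # placeholder beacon name
    body=attachment,
).execute()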

Think per message, not per beacon

Suppose you are creating an app for a large train station. You’ll want to provide different information to the user who just arrived and is looking for the ticket machine, as opposed to the user who just wants to know where to stand to be the closest to her reserved seat. In this instance, you’ll want more than one beacon to label important places, such as the platform, entrance hall and waiting area. Some of the attachments for each beacon will be the same (e.g. the station name), others will be different (e.g. platform number). This is where the design of Proximity Beacon API, and the Nearby Messages API in Android and iOS helps you out.

When your app retrieves the beacon attachments via the Nearby Messages API, each attachment will appear as an individual message, not grouped by beacon. In addition, Nearby Messages will automatically de-duplicate any attachments (even if they come from different beacons). So the situation looks like this:

This design has several advantages:

  • It abstracts the API away from implementation (beacon in this case), so if in the future we have other kinds of devices which send out messages, we can adopt them easily.
  • Built-in deduplication means that you do not need to build your own logic to avoid reacting more than once to the same message, such as the station name in the above example.
  • You can add finer grained context messages later on, without re-deploying.

In designing your beacon user experience, think about the context of your user and the places and objects that are important for your app, then label those places. The Proximity Beacon API makes beacon management easy, and the Nearby Messages API abstracts the hardware away, allowing you to focus on creating relevant and timely experiences. The beacons themselves should be transparent to the user.

Using beacon attachments with external resources

In most cases, the data you store in attachments will be self-contained and will not need to refer to an external database. However, there are several exceptions where you might want to keep some data separately:

  • Large data items such as pictures and videos.
  • Where the data resides on a third party database system that you do not control.
  • Confidential or sensitive data that should not be stored in beacon attachments.
  • If you run a proprietary authentication system that relies on your own database.

In these cases, you might need to use a more generic identifier in the beacon attachment to lookup the relevant data from your infrastructure.

Combining the virtual and the real worlds

With beacons, we have an opportunity to delight users by connecting the virtual world of personalization and contextual awareness with real world places and things that matter most. Through attachments, the Google beacon platform delivers a much richer context for your app that goes beyond the beacon identifier and enables your apps to better serve your users. Let’s build the apps that connect the two worlds!

Categories: Programming

Saving a life through technology - Coffee with a Googler

Fri, 08/14/2015 - 18:05

Posted by Laurence Moroney, Developer Advocate

In this week’s Coffee with a Googler, we’re joined by Heidi Dohse from the Google Cloud team to talk about how she saved her own life through technology. At Google she works on the Cloud Platform team that supports our genomics work, and has a passion for the future of the delivery of health care.

When she was a child, Heidi Dohse had an erratic heartbeat, but, without knowing anything about it, she just ignored it. As a teen she became a competitive windsurfer and skier, and as part of a surgery on her knee between seasons, she had an EKG and discovered that her heart was beating irregularly at 270 bpm.

She had an experimental surgery and was immediately given a pacemaker, and became a young heart patient, expecting not to live much longer. Years later, Heidi is now on her 7th pacemaker, but it doesn’t stop her from staying active as an athlete. She’s a competitive cyclist, often racing hundreds of miles, and climbing tens of thousands of feet as she races.

At the core of all this is her use of technology. She has carefully managed her health by collecting and analyzing the data from her pacemaker. That data goes beyond just heartbeat and includes things such as gyroscope readings, oxygen utilization, muscle stimulation, and electrical impulses, so she can proactively manage her health.

It’s the future of health care -- instead of seeing a doctor for an EKG every few months, with this technology and other wearables one can constantly check their health data, know ahead of time if any health issues are developing, and pre-empt them.

Learn more in the video.

Categories: Programming

Learn about Google Translate in Coffee with a Googler

Mon, 08/10/2015 - 18:35

Posted by Laurence Moroney, Developer Advocate

Over the past few months, we’ve been bringing you 'Coffee with a Googler', giving you a peek at people working on cool stuff that you might find inspirational and useful. Starting with this week’s episode, we’re going to accompany each video with a short post for more details, while also helping you make the most of the tech highlighted in the video.

This week we meet with MacDuff Hughes from the Google Translate team. Google Translate uses statistics-based translation. By finding very large numbers of examples of translations from one language to another, it uses statistics to see how various phrases are treated, so it can make reasonable estimates of the correct, natural-sounding phrases in the target language. For common phrases there are many candidate translations, so the engine chooses among them within the context of the passage that the phrase is in. Images can also be translated: point your mobile device at printed text and it will translate it into your preferred language for you.

Translate is adding languages all the time, and part of its mission is to serve languages that are less frequently used, such as Gaelic, Welsh or Maori, in the same manner as the more commonly used ones, such as English, Spanish and French. To this end, Translate supports more than 90 languages. In the quest to constantly improve translation, the technology provides a way for the community to validate translations; this is particularly valuable for less commonly used languages, effectively helping them to grow and thrive. It also enhances the machine translation by keeping people involved.

You can learn more about Google Translate, and the translate app here.

Developers have a couple of options to use Translate:

  • The Free Website Translate plugin that you can add to your site and have translations available immediately.
  • The Cloud-based Translate API that you can use to build apps or sites that have translation in them (a short sketch of calling it follows below).
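As a rough idea of what a Translate API call looks like from Python with the google-api-python-client (the API key is a placeholder, and error handling is omitted):

from apiclient.discovery import build

# The Translate API authenticates with a simple API key (placeholder below).
service = build('translate', 'v2', developerKey='YOUR_API_KEY')

result = service.translations().list(
    source='en',         # translate from English...
    target='cy',         # ...to Welsh, one of the less widely used languages Translate supports
    q=['Hello, world'],  # one or more strings to translate
).execute()

print(result['translations'][0]['translatedText'])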

Watch the episode here:

Categories: Programming