Google Code Blog
Adding text and shapes with the Google Slides API

Thu, 02/23/2017 - 20:26
Originally shared on the G Suite Developers Blog

Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite
When the Google Slides team launched their very first API last November, it immediately opened up a whole new class of applications. These applications have the ability to interact with the Slides service, so you can perform operations on presentations programmatically. Since its launch, we've published several videos to help you realize some of those possibilities.
Today, we're releasing the latest Slides API tutorial in our video series. This one goes back to basics a bit: adding text to presentations. But we also discuss shapes—not only adding shapes to slides, but also adding text within shapes. Most importantly, we cover one best practice when using the API: create your own object IDs. By doing this, developers can execute more requests while minimizing API calls.



Developers use insertText requests to tell the API to add text to slides. This is true whether you're adding text to a text box, a shape, or a table cell. Similar to the Google Sheets API, all requests are made as JSON payloads sent to the API's batchUpdate() method. Here's the JavaScript for inserting text in some object (objectID) on a slide:
{
  "insertText": {
    "objectId": objectID,
    "text": "Hello World!\n"
  }
}
Adding shapes is a bit more challenging, as you can see from its sample JSON structure:

{
  "createShape": {
    "shapeType": "SMILEY_FACE",
    "elementProperties": {
      "pageObjectId": slideID,
      "size": {
        "height": {
          "magnitude": 3000000,
          "unit": "EMU"
        },
        "width": {
          "magnitude": 3000000,
          "unit": "EMU"
        }
      },
      "transform": {
        "unit": "EMU",
        "scaleX": 1.3449,
        "scaleY": 1.3031,
        "translateX": 4671925,
        "translateY": 450150
      }
    }
  }
}
Placing or manipulating shapes or images on slides requires more information so the cloud service can properly render these objects. Be aware that it does involve some math, as you can see from the Page Elements page in the docs as well as the Transforms concept guide. In the video, I drop a few hints and good practices so you don't have to start from scratch.

Regardless of how complex your requests are, if you have at least one, say in an array named requests, you'd make an API call with the aforementioned batchUpdate() method, which in Python looks like this (assuming SLIDES is the service endpoint and a presentation ID of deckID):

SLIDES.presentations().batchUpdate(presentationId=deckID,
    body={'requests': requests}).execute()
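To make that best practice concrete (creating your own object IDs), here's a minimal sketch of ours, not from the original post, that assigns the new shape a self-chosen ID so the insertText request can target it within the same batch. It assumes the SLIDES endpoint, deckID, and slideID from above; the ID scheme itself is purely illustrative:

import uuid

# Self-assigned object ID (hypothetical scheme); Slides IDs must be 5-50 chars.
shape_id = 'shape_%s' % uuid.uuid4().hex[:8]

requests = [
    {'createShape': {
        'objectId': shape_id,
        'shapeType': 'TEXT_BOX',
        'elementProperties': {
            'pageObjectId': slideID,
            'size': {
                'height': {'magnitude': 3000000, 'unit': 'EMU'},
                'width':  {'magnitude': 3000000, 'unit': 'EMU'},
            },
            'transform': {
                'unit': 'EMU', 'scaleX': 1, 'scaleY': 1,
                'translateX': 4671925, 'translateY': 450150,
            },
        },
    }},
    # Because we chose the ID ourselves, no extra read call is needed to
    # discover the shape's ID before writing text into it.
    {'insertText': {'objectId': shape_id, 'text': 'Hello World!\n'}},
]

SLIDES.presentations().batchUpdate(
    presentationId=deckID, body={'requests': requests}).execute()

Because the ID is chosen client-side, there's no extra read call to discover the shape's ID before writing text into it, which is exactly how one batch can carry more requests with fewer API calls.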
For a detailed look at the complete code sample featured in the DevByte, check out the deep dive post. As you can see, adding text is fairly straightforward. If you want to learn how to format and style that text, check out the Formatting Text post and video as well as the text concepts guide.
To learn how to perform text search-and-replace, say to replace placeholders in a template deck, check out the Replacing Text & Images post and video as well as the merging data into slides guide. We hope these developer resources help you create that next great app that automates the task of producing presentations for your users!
Categories: Programming

Debug TensorFlow Models with tfdbg

Fri, 02/17/2017 - 23:32
Posted by Shanqing Cai, Software Engineer, Tools and Infrastructure.

We are excited to share TensorFlow Debugger (tfdbg), a tool that makes debugging of machine learning (ML) models in TensorFlow easier.
TensorFlow, Google's open-source ML library, is based on dataflow graphs. A typical TensorFlow ML program consists of two separate stages:
  1. Setting up the ML model as a dataflow graph by using the library's Python API,
  2. Training or performing inference on the graph by using the Session.run() method.
If errors and bugs occur during the second stage (i.e., the TensorFlow runtime), they are difficult to debug.

To understand why that is the case, note that to standard Python debuggers, the Session.run() call is effectively a single statement and does not expose the running graph's internal structure (nodes and their connections) and state (output arrays or tensors of the nodes). Lower-level debuggers such as gdb cannot organize stack frames and variable values in a way relevant to TensorFlow graph operations. A specialized runtime debugger has been among the most frequently raised feature requests from TensorFlow users.

tfdbg addresses this runtime debugging need. Let's see tfdbg in action with a short snippet of code that sets up and runs a simple TensorFlow graph to fit a simple linear equation through gradient descent.

import numpy as np
import tensorflow as tf
from tensorflow.python import debug as tf_debug

xs = np.linspace(-0.5, 0.49, 100)
x = tf.placeholder(tf.float32, shape=[None], name="x")
y = tf.placeholder(tf.float32, shape=[None], name="y")
k = tf.Variable([0.0], name="k")
y_hat = tf.multiply(k, x, name="y_hat")
sse = tf.reduce_sum((y - y_hat) * (y - y_hat), name="sse")
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.02).minimize(sse)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

sess = tf_debug.LocalCLIDebugWrapperSession(sess)
for _ in range(10):
    sess.run(train_op, feed_dict={x: xs, y: 42 * xs})

As the highlighted line in this example shows, the session object is wrapped in a debugging class (LocalCLIDebugWrapperSession), so calling its run() method will launch the command-line interface (CLI) of tfdbg. Using mouse clicks or commands, you can proceed through the successive run calls, inspect the graph's nodes and their attributes, and visualize the complete history of the execution of all relevant nodes in the graph through the list of intermediate tensors. By using the invoke_stepper command, you can let the Session.run() call execute in "stepper mode", in which you can step to nodes of your choice, observe and modify their outputs, and then continue stepping, in a way analogous to debugging procedural languages (e.g., in gdb or pdb).

A class of frequently encountered issues in developing TensorFlow ML models is the appearance of bad numerical values (infinities and NaNs) due to overflow, division by zero, log of zero, etc. In large TensorFlow graphs, finding the source of such values can be tedious and time-consuming. With the help of the tfdbg CLI and its conditional breakpoint support, you can quickly identify the culprit node. The video below demonstrates how to debug infinity/NaN issues in a neural network with tfdbg:
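As a sketch of how that conditional breakpoint support can be wired up (assuming the wrapped session from the snippet above), tfdbg ships a predefined tensor filter, has_inf_or_nan, that you register on the wrapper:

from tensorflow.python import debug as tf_debug

# Wrap the session as before, then register tfdbg's built-in filter that
# matches any intermediate tensor containing an inf or NaN value.
sess = tf_debug.LocalCLIDebugWrapperSession(sess)
sess.add_tensor_filter("has_inf_or_nan", tf_debug.has_inf_or_nan)

# Inside the tfdbg CLI, `run -f has_inf_or_nan` then executes Session.run()
# calls until the filter first fires, stopping at the offending tensor.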

A screencast of the TensorFlow Debugger in action, from this tutorial.

Compared with alternative debugging options such as Print Ops, tfdbg requires fewer lines of code change, provides more comprehensive coverage of the graphs, and offers a more interactive debugging experience. It will speed up your model development and debugging workflows. It offers additional features such as offline debugging of dumped tensors from server environments and integration with tf.contrib.learn. To get started, please visit this documentation. This research paper lays out the design of tfdbg in greater detail.

The minimum required TensorFlow version for tfdbg is 0.12.1. To report bugs, please open issues on TensorFlow's GitHub Issues Page. For general usage help, please post questions on Stack Overflow using the tag tensorflow.
Acknowledgements
This project would not be possible without the help and feedback from members of the Google TensorFlow Core/API Team and the Applied Machine Intelligence Team.





Categories: Programming

Announcing TensorFlow 1.0

Wed, 02/15/2017 - 20:43
Posted By: Amy McDonald Sandjideh, Technical Program Manager, TensorFlow

In just its first year, TensorFlow has helped researchers, engineers, artists, students, and many others make progress with everything from language translation to early detection of skin cancer and preventing blindness in diabetics. We're excited to see people using TensorFlow in over 6000 open-source repositories online.


Today, as part of the first annual TensorFlow Developer Summit, hosted in Mountain View and livestreamed around the world, we're announcing TensorFlow 1.0:


It's faster: TensorFlow 1.0 is incredibly fast! XLA lays the groundwork for even more performance improvements in the future, and tensorflow.org now includes tips & tricks for tuning your models to achieve maximum speed. We'll soon publish updated implementations of several popular models to show how to take full advantage of TensorFlow 1.0 - including a 7.3x speedup on 8 GPUs for Inception v3 and 58x speedup for distributed Inception v3 training on 64 GPUs!


It's more flexible: TensorFlow 1.0 introduces a high-level API for TensorFlow, with tf.layers, tf.metrics, and tf.losses modules. We've also announced the inclusion of a new tf.keras module that provides full compatibility with Keras, another popular high-level neural networks library.
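To give a feel for the new high-level modules, here's a small sketch of ours (not from the announcement) wiring tf.layers and tf.losses together; the shapes and unit counts are arbitrary:

import tensorflow as tf

# Placeholders for a toy 10-feature, 3-class classification problem.
x = tf.placeholder(tf.float32, shape=[None, 10])
labels = tf.placeholder(tf.int32, shape=[None])

# tf.layers provides ready-made building blocks such as dense layers...
hidden = tf.layers.dense(x, units=32, activation=tf.nn.relu)
logits = tf.layers.dense(hidden, units=3)

# ...and tf.losses provides matching loss functions.
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)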


It's more production-ready than ever: TensorFlow 1.0 promises Python API stability (details here), making it easier to pick up new features without worrying about breaking your existing code.

Other highlights from TensorFlow 1.0:

  • Python APIs have been changed to resemble NumPy more closely. For this and other backwards-incompatible changes made to support API stability going forward, please use our handy migration guide and conversion script.
  • Experimental APIs for Java and Go
  • Higher-level API modules tf.layers, tf.metrics, and tf.losses - brought over from tf.contrib.learn after incorporating skflow and TF Slim
  • Experimental release of XLA, a domain-specific compiler for TensorFlow graphs, that targets CPUs and GPUs. XLA is rapidly evolving - expect to see more progress in upcoming releases.
  • Introduction of the TensorFlow Debugger (tfdbg), a command-line interface and API for debugging live TensorFlow programs.
  • New Android demos for object detection and localization, and camera-based image stylization.
  • Installation improvements: Python 3 docker images have been added, and TensorFlow's pip packages are now PyPI compliant. This means TensorFlow can now be installed with a simple invocation of pip install tensorflow.

We're thrilled to see the pace of development in the TensorFlow community around the world. To hear more about TensorFlow 1.0 and how it's being used, you can watch the TensorFlow Developer Summit talks on YouTube, covering recent updates from higher-level APIs to TensorFlow on mobile to our new XLA compiler, as well as the exciting ways that TensorFlow is being used:



Click here for a link to the livestream and video playlist (individual talks will be posted online later in the day).


The TensorFlow ecosystem continues to grow with new techniques like Fold for dynamic batching and tools like the Embedding Projector, along with updates to our existing tools like TensorFlow Serving. We're incredibly grateful to the community of contributors, educators, and researchers who have made advances in deep learning available to everyone. We look forward to working with you on forums like GitHub issues, Stack Overflow, @TensorFlow, the discuss@tensorflow.org group, and at future events.



Categories: Programming

G Suite Developer Sessions at Google Cloud Next 2017

Wed, 02/08/2017 - 19:18
Originally posted on the G Suite Developers Blog

Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite

There are over 200 sessions happening next month at Google Cloud's Next 2017 conference in San Francisco... so many choices! Along with content geared towards Google Cloud Platform, this year features the addition of G Suite, so all three pillars of cloud computing (IaaS, PaaS, SaaS) are represented!


There are already thousands of developers, including Independent Software Vendors (ISVs), creating solutions to help schools and enterprises that run the G Suite collaboration and productivity suite (formerly Google Apps). If you're thinking about becoming one, consider building applications that extend, enhance, and integrate G Suite apps and data with other mission-critical systems to help businesses and educational institutions succeed.


Looking for inspiration? Here's a preview of some of the sessions that current and potential G Suite developers should consider:


The first is Automating internal processes using Apps Script and APIs for Docs editors. Not only will you hear directly from senior engineers on the Google Sheets & Google Slides REST API teams, but you'll also find out how existing customers are already using these APIs! Can't wait to get started? Here are the intro blog post & video for the latest Google Sheets API as well as the intro blog post & video for the Google Slides API. Part of the talk also covers Google Apps Script, the JavaScript-in-the-cloud solution that gives developers programmatic access to authorized G Suite data along with the ability to connect to other Google and external services.


If that's not enough Apps Script for you, or you're new to that technology, swing by to hear its Product Manager give you an introduction in his talk, Using Google Apps Script to automate G Suite. If you haven't heard of Apps Script before, you'll be wondering why you haven't until now! If you want a head start, here's a quick intro video to give you an idea of what you can do with it!


Did you know that Apps Script also powers "add-ons" which extend the functionality of Google Docs, Sheets, and Forms? Then come to "Building G Suite add-ons with Google Apps Script". Learn how you can leverage the power of Apps Script to build custom add-ons for your business, or monetize by making them available in the G Suite Marketplace where administrators or employees can install your add-ons for their organizations.


In addition to Apps Script apps, all your Google Docs, Sheets, and Slides documents live in Google Drive. But did you know that Drive is not just for individual file storage? Hear directly from a Drive Product Manager on how you can, "[Use] Drive APIs to customize Google Drive." With the Drive API and Team Drives, you can extend what Drive can do for your organization. One example from the most recent Google I/O tells the story of how WhatsApp used the Drive API to back up all your conversations! To get started with your own Drive API integration, check out this blog post and short video. Confused by when you should use Google Drive or Google Cloud Storage? I've got an app, err video, for that too! :-)


Not a software engineer but still code as part of your profession? Want to build a custom app for your department or line of business without having to worry about IT overhead? You may have heard about Google App Maker, our low-code development tool that does exactly that. Curious to learn more about it? Hear directly from its Product Manager lead in his talk entitled, "Citizen developers building low-code apps with App Maker on G Suite."


All of these talks are just waiting for you at Next, the best place to get your feet wet developing for G Suite, and of course, the Google Cloud Platform. Start by checking out the session schedule. Next will also offer many opportunities to meet and interact with industry peers along with representatives from all over Google who love the cloud. Register today and see you in San Francisco!




Categories: Programming

Introducing Google Developers India: A Local YouTube Channel for India’s Mobile Development Revolution

Tue, 02/07/2017 - 21:36
Posted By Peter Lubbers, Senior Program Manager

Today, we're launching the Google Developers India channel: a brand new YouTube channel tailored for Indian developers. The channel will include original content like interviews with local experts, developer spotlights, technical tutorials, and complete Android courses to help you be a successful developer.

Why India?

By 2018, India will have the largest developer base in the world, with over 4 million developers. Given our initiative to train 2 million Indian developers, the tremendous popularity of mobile development in the country, and the desire to build better mobile apps, these developers will be best served by an India-specific channel featuring Indian developers, influencers, and experts.



Here is a taste of what's to come in 2017:
  • Tech Interviews: Advice from India's top developers, influencers and tech experts.
  • Developer Stories: Inspirational stories of Indian developers.
  • DevShow India: A weekly show that will keep new and seasoned developers updated on all the news, trainings, and APIs from Google.
  • Skilled to Scaled: A real-life developer journey that takes us from the germination of an idea for an app all the way to monetizing it on Google Play.
So what's next?


The channel is live now. Click here to check it out.


Categories: Programming

What’s in an AMP URL?

Mon, 02/06/2017 - 20:00
Posted by Alex Fischer, Software Engineer, Google Search.

TL;DR: Today, we're adding a feature to the AMP integration in Google Search that allows users to access, copy, and share the canonical URL of an AMP document. But before diving deeper into the news, let's take a step back to elaborate more on URLs in the AMP world and how they relate to the speed benefits of AMP.

What's in a URL? On the web, a lot - URLs and origins represent, to some extent, trust and ownership of content. When you're reading a New York Times article, a quick glimpse at the URL gives you a level of trust that what you're reading represents the voice of the New York Times. Attribution, brand, and ownership are clear.

Recent product launches in different mobile apps and the recent launch of AMP in Google Search have blurred this line a little. In this post, I'll first try to explain the reasoning behind some of the technical decisions we made and make sense of the different kinds of AMP URLs that exist. I'll then outline changes we are making to address the concerns around URLs.

To start with, AMP documents have three different kinds of URLs:
  • Original URL: The publisher's document written in the AMP format. http://www.example.com/amp/doc.html
  • AMP Cache URL: The document served through an AMP Cache (e.g., all AMPs served by Google are served through the Google AMP Cache). Most users will never see this URL. https://www-example-com.cdn.ampproject.org/c/www.example.com/amp/doc.html
  • Google AMP Viewer URL: The document displayed in an AMP viewer (e.g., when rendered on the search result page). https://www.google.com/amp/www.example.com/amp/doc.html



Although having three different URLs with different origins for essentially the same content can be confusing, there are two main reasons why these different URLs exist: caching and pre-rendering. Both are large contributors to AMP's speed, but they require new URLs, and I will elaborate on why that is.
AMP Cache URLs
Let's start with AMP Cache URLs. Paul Bakaus, a Google Developer Advocate for AMP, has an excellent post describing why AMP Caches exist. Paul's post goes into great detail describing the benefits of AMP Caches, but it doesn't quite answer the question of why they require new URLs. The answer to this question comes down to one of the design principles of AMP: build for easy adoption. AMP tries to solve some of the problems of the mobile web at scale, so its components must be easy to use for everyone.

There are a variety of options to get validation, proximity to users, and other benefits provided by AMP Caches. For a small site, however, that doesn't manage its own DNS entries, doesn't have engineering resources to push content through complicated APIs, or can't pay for content delivery networks, a lot of these technologies are inaccessible.

For this reason, the Google AMP Cache works by means of a simple URL "transformation." A webmaster only has to make their content available at some URL and the Google AMP Cache can then cache and serve the content through Google's world-wide infrastructure through a new URL that mirrors and transforms the original. It's as simple as that. Leveraging an AMP Cache using the original URL, on the other hand, would require the webmaster to modify their DNS records or reconfigure their name servers. While some sites do just that, the URL-based approach is easier to use for the vast majority of sites.
AMP Viewer URLs
In the previous section, we learned about Google AMP Cache URLs -- URLs that point to the cached version of an AMP document. But what about www.google.com/amp URLs? Why are they needed? These are "AMP Viewer" URLs and they exist because of pre-rendering.
AMP's built-in support for privacy and resource-conscientious pre-rendering is rarely talked about and often misunderstood. AMP documents can be pre-rendered without setting off a cascade of resource fetches, without hogging up users' CPU and memory, and without running any privacy-sensitive analytics code. This works regardless of whether the embedding application is a mobile web page or a native application. The need for new URLs, however, comes mostly from mobile web implementations, so I am using Google's mobile search result page (SERP) as an illustrative example.
How does pre-rendering work?
When a user performs a Google search that returns AMP-enabled results, some of these results are pre-rendered behind the scenes. When the user clicks on a pre-rendered result, the AMP page loads instantly.

Pre-rendering works by loading a hidden iframe on the embedding page (the search result page) with the content of the AMP page and an additional parameter that indicates that the AMP document is only being pre-rendered. The JavaScript component that handles the lifecycle of these iframes is called "AMP Viewer".

The AMP Viewer pre-renders an AMP document in a hidden iframe.

The user's browser loads the document and the AMP runtime and starts rendering the AMP page. Since all other resources, such as images and embeds, are managed by the AMP runtime, nothing else is loaded at this point. The AMP runtime may decide to fetch some resources, but it will do so in a resource- and privacy-sensible way.

When a user clicks on the result, all the AMP Viewer has to do is show the iframe that the browser has already rendered and let the AMP runtime know that the AMP document is now visible.
As you can see, this operation is incredibly cheap - there is no network activity or hard navigation to a new page involved. This leads to a near-instant loading experience of the result.
Where do google.com/amp URLs come from?
All of the above happens while the user is still on the original page (in our example, that's the search results page). In other words, the user hasn't gone to a different page; they have just viewed an iframe on the same page, and so the browser doesn't change the URL.

We still want the URL in the browser to reflect the page that is displayed on the screen and make it easy for users to link to. When users hit refresh in their browser, they expect the same document to show up and not the underlying search result page. So the AMP viewer has to manually update this URL. This happens using the History API. This API allows the AMP Viewer to update the browser's URL bar without doing a hard navigation.

The question is what URL the browser should be updated to. Ideally, this would be the URL of the result itself (e.g., www.example.com/amp/doc.html); or the AMP Cache URL (e.g., www-example-com.cdn.ampproject.org/www.example.com/amp/doc.html). Unfortunately, it can't be either of those. One of the main restrictions of the History API is that the new URL must be on the same origin as the original URL (reference). This is enforced by browsers (for security reasons), but it means that in Google Search, this URL has to be on the www.google.com origin.
Why do we show a header bar?
The previous section explained restrictions on URLs that an AMP Viewer has to handle. These URLs, however, can be confusing and misleading. They can open the door to phishing attacks. If an AMP page showed a login page that looks like Google's and the URL bar says www.google.com, how would a user know that this page isn't actually Google's? That's where the need for additional attribution comes in.

To provide appropriate attribution of content, every AMP Viewer must make it clear to users where the content that they're looking at is coming from. And one way of accomplishing this is by adding a header bar that displays the "true" origin of a page.

What's next?
I hope the previous sections made it clear why these different URLs exist and why there needs to be a header in every AMP viewer. We have heard how you feel about this approach and the importance of URLs. So what's next? As you know, we want to be thoughtful in what we do and ensure that we don't break the speed and performance users expect from AMP pages.

Since the launch of AMP in Google Search in Feb 2016, we have taken the following steps:
  • All Google URLs (i.e., the Google AMP cache URL and the Google AMP viewer URL) reflect the original source of the content as best as possible: www.google.com/amp/www.example.com/amp/doc.html.
  • When users scroll down the page to read a document, the AMP viewer header bar hides, freeing up precious screen real-estate.
  • When users visit a Google AMP viewer URL on a platform where the viewer is not available, we redirect them to the canonical page for the document.
In addition to the above, many users have requested a way to access, copy, and share the canonical URL of a document. Today, we're adding support for this functionality in the form of an anchor button in the AMP Viewer header on Google Search. This feature allows users to use their browser's native share functionality by long-tapping on the link that is displayed.


In the coming weeks, the Android Google app will share the original URL of a document when users tap on the app's share button. This functionality is already available on the iOS Google app.

Lastly, we're working on leveraging upcoming web platform APIs that allow us to improve this functionality even further. One such API is the Web Share API that would allow AMP viewers to invoke the platform's native sharing flow with the original URL rather than the AMP viewer URL.

We at Google have every intention of making the AMP experience as good as we can for both users and publishers. A thriving ecosystem is very important to us, and attribution, user trust, and ownership are important pieces of this ecosystem. I hope this blog post helps clear up the origin of the three URLs of AMP documents, their role in making AMP fast, and our efforts to further improve the AMP experience in Google Search. Lastly, an ecosystem can only flourish with your participation: give us feedback and get involved with AMP.

Categories: Programming

Introducing Associate Android Developer Certification by Google

Wed, 02/01/2017 - 21:27
Originally posted on Android Developer Blog

The Associate Android Developer Certification program was announced at Google I/O 2016, and launched a few months later. Since then, over 322 Android developers spanning 61 countries have proven their competency and earned the title of Google Certified Associate Android Developer.
To establish a professional standard for what it means to be an Associate Android developer in the current job market, Google created this certification, which allows us to recognize developers who have proven themselves to uphold that standard.

We conducted a job task analysis to determine the required competencies and content of the certification exam. Through field research and interviews with experts, we identified the knowledge, work practices, and essential skills expected of an Associate Android developer.

The certification process consists of a performance-based exam and an exit interview. The certification fee includes three exam attempts. The cost for certification is $149 USD, or 6500 INR if you reside in India. After payment, the exam will be available for download, and you have 48 hours to complete and submit it for grading.

In the exam, you will implement missing features and debug an Android app using Android Studio. If you pass, you will undergo an exit interview where you will answer questions about your exam and demonstrate your knowledge of Associate Android Developer competencies.

Check out this short video for a quick overview of the Associate Android Developer certification process:



Earning your AAD Certification signifies that you possess the skills expected of an Associate Android developer, as determined by Google. You can showcase your credential on your resume and display your digital badge on your social media profiles. As a member of the AAD Alumni Community, you will also have access to program benefits focused on increasing your visibility as a certified developer.

Test your Android development skills and earn the title of Google Certified Associate Android Developer. Visit g.co/devcertification to get started!


Categories: Programming

New resources for building inclusive tech hubs

Tue, 01/31/2017 - 20:10
Posted by Amy Schapiro and the Women Techmakers team 

For the tech industry to thrive and create groundbreaking technology that supports the global ecosystem, it is critical to increase the diversity and inclusion of the communities that make the technology. To support this global network of tech hubs - incubators, community organizations, accelerators and coworking spaces - Women Techmakers partnered with Change Catalyst to develop an in-depth video series and set of guides on how to build inclusive technology hubs.


Watch the videos on the Women Techmakers YouTube channel, and access the how-to guides on the Change Catalyst site.


For more information about Women Techmakers, Google's global program supporting women in technology, and to join the Membership program, visit womentechmakers.com.
Categories: Programming

Welcoming Fabric to Google

Wed, 01/18/2017 - 22:26
Originally posted on the Firebase Blog

Posted by Francis Ma, Firebase Product Manager

Almost eight months ago, we launched the expansion of Firebase to help developers build high-quality apps, grow their user base, and earn more money across iOS, Android and the Web. We've already seen great adoption of the platform, which brings together the best of Google's core businesses from Cloud to mobile advertising.

Our ultimate goal with Firebase is to free developers from so much of the complexity associated with modern software development, giving them back more time and energy to focus on innovation.

As we work towards that goal, we've continued to improve Firebase, working closely with our user community. We recently introduced major enhancements to many core features, including Firebase Analytics, Test Lab and Cloud Messaging, as well as added support for game developers with a C++ SDK and Unity plug-in.


We're deeply committed to Firebase and are doubling down on our investment to solve developer challenges.
Fabric and Firebase Joining Forces

Today, we're excited to announce that we've signed an agreement to acquire Fabric to continue the great work that Twitter put into the platform. Fabric will join Google's Developer Product Group, working with the Firebase team. Our missions align closely: help developers build better apps and grow their business.
Crashlytics has been a popular, trusted tool for many years, and we expect it to become the main crash reporting offering for Firebase, augmenting the work that we have already done in this area. While Fabric was built on the foundation of Crashlytics, the Fabric team leveraged its success to launch a broad set of important tools, including Answers and Fastlane. We'll share further details in the coming weeks after we close the deal, as we work closely together with the Fabric team to determine the most efficient ways to further combine our strengths. During the transition period, Digits, the SMS authentication service, will be maintained by Twitter.


The integration of Fabric is part of our larger, long-term effort of delivering a comprehensive suite of features for iOS, Android and mobile Web app development.

This is a great moment for the industry and a unique opportunity to bring the best of Firebase with the best of Fabric. We're committed to making mobile app development seamless, so that developers can focus more of their time on building creative experiences.
Categories: Programming

Silence speaks louder than words when finding malware

Tue, 01/17/2017 - 23:06
Originally posted on Android Developer Blog

Posted by Megan Ruthven, Software Engineer
In Android Security, we're constantly working to better understand how to make Android devices operate more smoothly and securely. One security solution included on all devices with Google Play is Verify apps. Verify apps checks if there are Potentially Harmful Apps (PHAs) on your device. If a PHA is found, Verify apps warns the user and enables them to uninstall the app.

But sometimes devices stop checking up with Verify apps. This may happen for a non-security-related reason, like buying a new phone, or it could mean something more concerning is going on. When a device stops checking up with Verify apps, it is considered Dead or Insecure (DOI). An app with a high enough percentage of DOI devices downloading it is considered a DOI app. We use the DOI metric, along with the other security systems, to help determine if an app is a PHA, to protect Android users. Additionally, when we discover vulnerabilities, we patch Android devices with our security update system. This blog post explores the Android Security team's research to identify the security-related reasons that devices stop working and prevent it from happening in the future.
Flagging DOI Apps
To understand this problem more deeply, the Android Security team correlates app install attempts and DOI devices to find apps that harm the device in order to protect our users.
With these factors in mind, we then focus on 'retention'. A device is considered retained if it continues to perform periodic Verify apps security check-ups after an app download. If it doesn't, it's considered potentially dead or insecure (DOI). An app's retention rate is the percentage of all retained devices that downloaded the app in one day. Because retention is a strong indicator of device health, we work to maximize the ecosystem's retention rate. Therefore, we use an app DOI scorer, which assumes that all apps should have a similar device retention rate. If an app's retention rate is a couple of standard deviations lower than average, the DOI scorer flags it. A common way to calculate the number of standard deviations from the average is called a Z-score. With the quantities defined below, the Z-score of an app's retention rate is:

Z = (x - N*p) / sqrt(N*p*(1 - p))
  • N = Number of devices that downloaded the app.
  • x = Number of retained devices that downloaded the app.
  • p = Probability of a device downloading any app will be retained.

In this context, we call the Z-score of an app's retention rate a DOI score. The DOI score indicates an app has a statistically significant lower retention rate if the Z-score is much less than -3.7. This means that if the null hypothesis is true, there is much less than a 0.01% chance of the magnitude of the Z-score being that high. In this case, the null hypothesis means the app accidentally correlated with a lower retention rate, independent of what the app does.
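Translated directly into code, the statistic looks like this (a sketch of ours, using the variable names defined above):

import math

def doi_score(N, x, p):
    """Z-score of an app's retention rate.

    N: number of devices that downloaded the app
    x: number of retained devices that downloaded the app
    p: probability that a device downloading any app will be retained
    """
    return (x - N * p) / math.sqrt(N * p * (1 - p))

# An app is flagged when its score falls far below zero,
# e.g. doi_score(N, x, p) < -3.7.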
This allows for percolation of extreme apps (with low retention rate and high number of downloads) to the top of the DOI list. From there, we combine the DOI score with other information to determine whether to classify the app as a PHA. We then use Verify apps to remove existing installs of the app and prevent future installs of the app.
Difference between a regular and DOI app download on the same device.
Results in the wild
Among others, the DOI score flagged many apps in three well-known malware families: Hummingbad, Ghost Push, and Gooligan. Although they behave differently, the DOI scorer flagged over 25,000 apps in these three families of malware because they can degrade the Android experience to such an extent that a non-negligible number of users factory reset or abandon their devices. This approach provides us with another perspective to discover PHAs and block them before they gain popularity. Without the DOI scorer, many of these apps would have escaped the extra scrutiny of a manual review.
The DOI scorer and all of Android's anti-malware work is one of multiple layers protecting users and developers on Android. For an overview of Android's security and transparency efforts, check out our page.
Categories: Programming

Google AMP Cache, Optimizations for Slow Networks, and the Need for Speed

Wed, 01/11/2017 - 23:11
Posted by Huibao Lin and Eyal Peled, Software Engineers, Google
Editor’s Note: This blog post was amended to remove the term “AMP Lite”. This was a code name for a project to make AMP better for slow networks but many readers interpreted this as a separate version of AMP. We apologize for the confusion.
At Google we believe in designing products with speed as a core principle. The Accelerated Mobile Pages (AMP) format helps ensure that content reliably loads fast, but we can do even better.

Smart caching is one of the key ingredients in the near instant AMP experiences users get in products like Google Search and Google News & Weather. With caching, we can make content be, in general, physically closer to the users who are requesting it so that bytes take a shorter trip over the wire to reach the user. In addition, using a single common infrastructure like a cache provides greater consistency in page serving times despite the content originating from many hosts, which might have very different—and much larger—latency in serving the content as compared to the cache.

Faster and more consistent delivery are the major reasons why pages served in Google Search's AMP experience come from the Google AMP Cache. The Cache's unified content serving infrastructure opens up the exciting possibility to build optimizations that scale to improve the experience across hundreds of millions of documents served. Making it so that any document would be able to take advantage of these benefits is one of the main reasons the Google AMP Cache is available for free to anyone to use.

In this post, we'll highlight two improvements we've recently introduced: (1) optimized image delivery and (2) enabling content to be served more successfully in bandwidth-constrained conditions.

Image optimizations by the Google AMP Cache
On average across the web, images make up 64% of the bytes of a page. This means images are a very promising target for impactful optimizations.

Applying image optimizations is an effective way for cutting bytes on the wire. The Google AMP Cache employs the image optimization stack used by the PageSpeed Modules and Chrome Data Compression. (Note that in order to make the above transformations, the Google AMP Cache disregards the "Cache-Control: no-transform" header.) Sites can get the same image optimizations on their origin by installing PageSpeed on their server.

Here's a rundown of some of the optimizations we've made:

1) Removing data which is invisible or difficult to see
We remove image data that is invisible to users, such as thumbnail and geolocation metadata. For JPEG images, we also reduce quality and color samples if they are higher than necessary. To be exact, we reduce JPEG quality to 85 and color samples to 4:2:0 — i.e., one color sample per four pixels. Compressing a JPEG to quality higher than this or with more color samples takes more bytes, but the visual difference is difficult to notice.

The reduced image data is then exhaustively compressed. We've found that these optimizations reduce bytes by 40%+ while not being noticeable to the user's eye.

2) Converting images to WebP format
Some image formats are more mobile-friendly. We convert JPEG to WebP for supported browsers. This transformation leads to an additional 25%+ reduction in bytes with no loss in quality.
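As a rough illustration of Optimizations 1 and 2 (ours, not the Cache's actual PageSpeed-based stack), the same kinds of transformations can be sketched with the Pillow imaging library in Python:

from PIL import Image

img = Image.open("original.jpg")

# Optimization 1: saving without EXIF drops metadata; cap JPEG quality at 85,
# use 4:2:0 chroma subsampling, and spend extra CPU on an exhaustive encode.
img.save("optimized.jpg", "JPEG", quality=85, subsampling="4:2:0", optimize=True)

# Optimization 2: re-encode as WebP for browsers that support it.
img.save("optimized.webp", "WEBP", quality=85)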

3) Adding srcset
We add "srcset" if it has not been included. This applies to "amp-img" tags with "src" but no "srcset" attribute. The operation includes expanding "amp-img" tag as well as resizing the image to multiple dimensions. This reduces the byte count further on devices with small screens.

4) Using lower quality images under some circumstances
We decrease the quality of JPEG images when there is an indication that this is desired by the user or for very slow network conditions (as discussed below). For example, we reduce JPEG image quality to 50 for Chrome users who have turned on Data Saver. This transformation leads to another 40%+ byte reduction to JPEG images.

The following example shows the same image before (left) and after (right) optimization. Originally the image has 241,260 bytes, and after applying Optimizations 1, 2, & 4 it becomes 25,760 bytes. After the optimizations the image looks essentially the same, but 89% of the bytes have been saved.



AMP for Slow Network Conditions
Many people around the world access the internet with slow connection speeds or on devices with low RAM, and we've found that some AMP pages are not optimized for these severely bandwidth-constrained users. For this reason, Google has also started an effort to remove even more bytes from AMP pages for these users.

With this initiative, we apply all of the above optimizations to images. In particular, we always use lower quality levels (see Bullet 4 above).

In addition, we optimize external fonts by using the amp-font tag, setting the font loading timeout to 0 seconds so pages can be displayed immediately, regardless of whether the external font was previously cached or not.

We will be rolling out these optimizations for bandwidth-constrained users in several countries, such as Vietnam and Malaysia, and for holders of low-RAM devices globally. Note that these optimizations may modify the fine details of some images, but they do not affect other parts of the page, including ads.

* * *

All told, we see a combined 45% reduction in bytes across all optimizations listed above.
We hope to go even further in making more efficient use of users' data to provide even faster AMP experiences.
Categories: Programming

Some Tips for Boosting your App's Quality in 2017

Wed, 01/04/2017 - 21:59
Originally posted on the Firebase blog by Doug Stevenson, Developer Advocate


I've got to come clean with everyone: I'm making no new year's resolutions for 2017. Nor did I make any in 2016. In fact, I don't think I've ever made one! It's not so much that I take a dim view of new year's resolutions. I simply suspect that I would likely break them by the end of the week, and feel bad about it!

One thing I've found helpful in the past is to see the new year as a pivoting point to try new things, and also improve the work I'm already doing. For me (and I hope for you too!), 2017 will be a fresh year for boosting app quality.

The phrase "app quality" can take a bunch of different meanings, based on what you value in the software you create and use. As a developer, traditionally, this means fixing bugs that cause problems for your users. It could also be a reflection of the amount of delight your users take in your app. All of this gets wrapped up into the one primary metric that we have to gauge the quality of a mobile app, which is your app's rating on the store where it's published. I'm sure every one of you who has an app on a storefront has paid much attention to the app's rating at some point!

Firebase provides some tools you can use to boost your app's quality, and if you're not already using them, maybe a fresh look at those tools would be helpful this year?

Firebase Crash Reporting
The easiest tool to get started with is Firebase Crash Reporting. It takes little to no code to integrate it into your iOS or Android app, and once you do, the Firebase console will start showing crashes that are happening to your users. This gives you a "hit list" of problems to fix.
One thing I find ironic about being involved with the Crash Reporting team is how we view the influx of total crashes received as we monitor our system. Like any good developer product, we strive to grow adoption, which means we celebrate graphs that go "up and to the right". So, in a strange sense, we like to see more crashes, because that means more developers are using our stuff! But for all of you developers out there, more crashes is obviously a *bad* thing, and you want to make those numbers go down! So, please, don't be like us - make your crash report graphs go down and to the right in 2017!

Firebase Test Lab for Android
Even better than fixing problems for your users is fixing those problems before they ever reach your users. For your Android apps, you can use Firebase Test Lab to help ensure that your apps work great for your users on a growing variety of actual devices that we manage. Traditionally, it's been kind of a pain to acquire and manage a good selection of devices for testing. However, with Test Lab, you simply upload your APK and tests, and it will install and run them on our devices. After the tests complete, we'll provide all the screenshots, videos, and logs of everything that happened for you to examine in the Firebase console.

With Firebase Test Lab for Android now available with generous daily quotas at no charge for projects on the free Spark tier, 2017 is a great time to get started with that. And, if you haven't set up your Android app builds in a continuous integration environment, you could set that up, then configure it to run your tests automatically on Test Lab.

If you're the kind of person who likes writing tests for your code (which is, admittedly, not very many of us!), it's natural to get those tests running on Test Lab. But, for those of us who aren't maintaining a test suite with our codebase, we can still use Test Lab's automated Robo test to get automated test coverage right away, with no additional lines of code required. That's not quite the same as having a comprehensive suite of tests, so maybe 2017 would be a good time to learn more about architecting "testable" apps, and how those practices can raise the bar of quality for your app. I'm planning on writing more about this later this year, so stay tuned to the Firebase Blog for more!

Firebase Remote Config
At its core, Firebase Remote Config is a tool that lets you configure your app using parameters that you set up in the Firebase console. It can be used to help manage the quality of your app, and there are a couple of neat tricks you can do with it. Maybe this new year brings new opportunities to give them a try!

First of all, you can use Remote Config to carefully roll out a new feature to your users. It works like this:
  1. Code your new feature and restrict its access to the user by a Remote Config boolean parameter. If the value is 'false', your users don't see the feature. Make 'false' the default value in the app.
  2. Configure that parameter in the Firebase console to also be initially 'false' for everyone.
  3. Publish your app to the store.
  4. When it's time to start rolling out the new feature to a small segment of users, configure the parameter to be 'true' for, say, five percent of your user base.
  5. Stay alert for new crashes in Firebase Crash Reporting, as well as feedback from your users.
  6. If there is a problem with the new feature, immediately roll back the new feature by setting the parameter to 'false' in the console for everyone.
  7. Or, if things are looking good, increase the percentage over time until you reach 100% of your users.


This is much safer than publishing your new feature to everyone with a single app update, because now you have the option to immediately disable a serious problem, and without having to build and publish a whole new version of your app. And, if you can act quickly, most of your users will never encounter the problem to begin with. This works well with the email alerts you get from Firebase Crash Reporting when a new crash is observed.

Another feature of Remote Config is the ability to experiment with some aspect of your app in order to find out what works better for your users, then measure the results in Firebase Analytics. I don't know about you, but I'm typically pretty bad at guessing what people actually prefer, and sometimes I'm surprised at how people actually *use* an app! Don't guess - instead, do an experiment and know *for certain* what delights your users more! It stands to reason that apps finely tuned like this can get better ratings and make more money.

Firebase Realtime Database
It makes sense that if you make it easier for your users to perform tasks in your app, they will enjoy using it more, and they will come back more frequently. One thing I have always disliked is having to check for new information by refreshing, or by navigating back and forward again. Apps that are always fresh and up to date, without requiring me to take action, are more pleasant to use.

You can achieve this for your app by making effective use of Firebase Realtime Database to deliver relevant data directly to your users at the moment it changes in the database. Realtime Database is reactive by nature, because the client API is designed for you to set up listeners at data locations that get triggered in the event of a change. This is far more convenient than having to poll an API endpoint repeatedly to check for changes, and it is also much more respectful of the user's mobile data and battery life. Users associate this feeling of delight with apps of high quality.
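Outside the mobile and web SDKs, the same push-not-poll idea is also exposed through the Realtime Database REST API via Server-Sent Events. Here's a rough Python sketch of that model (the database URL is hypothetical, auth is omitted, and this is an illustration rather than the listener API described above):

import json
import requests

# Hypothetical database location; streaming uses the SSE protocol.
url = "https://your-app.firebaseio.com/messages.json"
resp = requests.get(url, headers={"Accept": "text/event-stream"}, stream=True)

# Each change arrives the moment it happens -- no polling loop hammering
# the endpoint, and no manual "refresh" needed in the UI.
for raw in resp.iter_lines():
    line = raw.decode("utf-8")
    if line.startswith("data:"):
        payload = json.loads(line[len("data:"):].strip() or "null")
        if payload is not None:
            # `path` is where in the tree the change occurred; `data` is the new value.
            print(payload["path"], payload["data"])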

What does 2017 have in store for your app?
I hope you'll join me this year in putting more effort into making our users even more delighted. If you're with me, feel free to tweet me at @CodingDoug and tell me what you're up to in 2017!
Categories: Programming

Start with a line, let the planet complete the picture

Thu, 12/15/2016 - 19:03
Jeff Nusz, Data Arts Team

Take a break this holiday season and paint with satellite images of the Earth through a new experiment called Land Lines. The project lets you explore Google Earth images in unexpected ways through gesture. Earth provides the palette; your fingers, the paintbrush.
There are two ways to explore: drag or draw. "Draw" to find satellite images that match your every line. "Drag" to create an infinite line of connected rivers, highways and coastlines. Here's a quick demo:


Everything runs in real time in your phone's web browser without any servers. The responsiveness of the project is a result of using machine learning, data optimization, and vantage-point trees to analyze the images and store that data.

We preprocessed the images using a combination of OpenCV's Structured Forests machine-learning-based edge detection and ImageJ's Ridge Detection library. This culled the initial dataset of over fifty thousand high-res images down to just a few thousand, selected for their presence of lines, as shown in the example below. What ordinarily would take days was completed in just a few hours.


Example output from the line detection processing. The dominant line is highlighted in red while secondary lines are highlighted in green.

In the drawing exploration, we stored the resulting data in a vantage-point tree. This enabled us to efficiently run gesture matching against all the images and have results appear in milliseconds.


An early example of gesture matching using vantage point trees, where the drawn input is on the right and the closest results on the left.


Another example of user gesture analysis, where the drawn input is on the right and the closest results on the left.
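To make the vantage-point tree idea concrete, here is a compact, generic Python sketch of ours (the project's real implementation lives in its open-source repo); dist must be a true metric for the pruning step to be valid:

import random

class VPTree:
    """Vantage-point tree for nearest-neighbor search in a metric space."""

    def __init__(self, points, dist):
        self.dist = dist
        # Pick a vantage point; a random choice keeps the tree balanced on average.
        i = random.randrange(len(points))
        self.vp = points[i]
        rest = points[:i] + points[i + 1:]
        self.radius = 0.0
        self.inside = self.outside = None
        if rest:
            distances = [dist(self.vp, p) for p in rest]
            self.radius = sorted(distances)[len(distances) // 2]  # median distance
            inner = [p for p, d in zip(rest, distances) if d <= self.radius]
            outer = [p for p, d in zip(rest, distances) if d > self.radius]
            if inner:
                self.inside = VPTree(inner, dist)
            if outer:
                self.outside = VPTree(outer, dist)

    def nearest(self, query, best=None):
        d = self.dist(self.vp, query)
        if best is None or d < best[0]:
            best = (d, self.vp)
        # Search the side the query falls on first; only cross the boundary
        # if a closer point could still exist there (triangle inequality).
        near, far = ((self.inside, self.outside) if d <= self.radius
                     else (self.outside, self.inside))
        if near is not None:
            best = near.nearest(query, best)
        if far is not None and abs(d - self.radius) < best[0]:
            best = far.nearest(query, best)
        return best

In Land Lines' case, the stored points would be the precomputed line data for each satellite image, and dist would compare that data against the user's drawn gesture.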

Built in collaboration with Zach Lieberman, Land Lines is an experiment in big visual data that explores themes of connection. We tried several machine learning libraries in our development process. The learnings from that experience can be found in the case study, while the project code is available open-source on GitHub. Start with a line at g.co/landlines.
Categories: Programming

Formatting text with the Google Slides API

Wed, 12/14/2016 - 19:18
Originally posted on G Suite Developers blog

Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite

It's common knowledge that presentations utilize a set of images to impart ideas to the audience. As a result, one of the best practices for creating great slide decks is to minimize the overall amount of text. This means that if you do have text in a presentation, the (few) words you use must have high impact and be visually appealing. This is even more true when the slides are generated by a software application, say using the Google Slides API, rather than being crafted by hand.

The G Suite team recently launched the first Slides API, opening up a whole new category of applications. Since then, we've published several videos to help you realize some of those possibilities, showing you how to replace text and images in slides as well as how to generate slides from spreadsheet data. To round out this trifecta of key API use cases, we're adding text formatting to the conversation.

Developers manipulate text in Google Slides by sending API requests. Similar to the Google Sheets API, these requests come in the form of JSON payloads sent to the API's batchUpdate() method. Here's the JavaScript for inserting text in some shape (shapeID) on a slide:

{
  "insertText": {
    "objectId": shapeID,
    "text": "Hello World!\n"
  }
}

In the video, developers learn that writing text, such as with the request above, is less complex than reading or formatting text, because both of the latter require developers to know how text on a slide is structured. Notice that for writing, just the copy, and optionally an index, are all that's required. (That index defaults to zero if not provided.)

Assuming "Hello World!" has been successfully inserted in a shape on a slide, a request to bold just the "Hello" looks like this:

{
  "updateTextStyle": {
    "objectId": shapeID,
    "style": {
      "bold": true
    },
    "textRange": {
      "type": "FIXED_RANGE",
      "startIndex": 0,
      "endIndex": 5
    },
    "fields": "bold"
  }
}
If you've got at least one request, like the ones above, in an array named requests, you'd ask the API to execute them with just one call, which in Python looks like this (assuming SLIDES is your service endpoint and the slide deck ID is deckID):
SLIDES.presentations().batchUpdate(presentationId=deckID,
    body={'requests': requests}).execute()
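Putting the pieces together, here's a rough, self-contained sketch (our illustration, with the same SLIDES, deckID, and shapeID assumptions as above) that inserts and then bolds text in a single round trip:

requests = [
    {'insertText': {
        'objectId': shapeID,
        'text': 'Hello World!\n',
    }},
    # Bold just "Hello" (character indexes 0-5 of the text inserted above).
    {'updateTextStyle': {
        'objectId': shapeID,
        'style': {'bold': True},
        'textRange': {'type': 'FIXED_RANGE', 'startIndex': 0, 'endIndex': 5},
        'fields': 'bold',
    }},
]

SLIDES.presentations().batchUpdate(
    presentationId=deckID, body={'requests': requests}).execute()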

To better understand text structure & styling in Google Slides, check out the text concepts guide in the documentation. For a detailed look at the complete code sample featured in the DevByte, check out the deep dive post. To see more samples for common API operations, take a look at this page. We hope the videos and all these developer resources help you create that next great app that automates producing highly impactful presentations for your users!

Categories: Programming

Google Developers YouTube Channel Reaches 1 Million Subscribers!

Wed, 12/14/2016 - 01:00
Posted by David Unger, Program Manager

Earlier this week, the Google Developers YouTube channel crossed the 1 million subscribers threshold. This is a monumental achievement for us, and we are extremely honored that so many of you have found our content valuable enough to click that red ‘Subscribe’ button.
The Google Developers YouTube channel has been bringing you content for just over 8 years, covering major developer events, like Google I/O and Playtime, as well as providing best practices on the latest tools to help you build high quality experiences! In that time, we’ve shared over 2,000 videos that have been viewed over 100 million times. Here is a look back at how we got to this milestone:

We are gearing up for another year of videos to help developers all over the world. To avoid missing any of it, you can subscribe to each of our YouTube channels using the following links:
Google Developers | Android Developers | Chrome Developers | Firebase
Categories: Programming

Scratch Blocks update: Making it easier to develop coding apps for kids

Tue, 12/13/2016 - 19:02

Posted by Champika Fernando, Product Manager, Google, and Kasia Chmielinski, Product Lead, MIT Scratch Team

We want to empower developers to build great creative learning apps for kids. That's why, earlier this year, we announced Scratch Blocks, a free, open-source project created by the MIT Scratch and Google Kids Coding teams. Together, we are building this highly tinkerable and playful block-based programming grammar based on MIT's popular Scratch language and Blockly's architecture. With Scratch Blocks, developers can integrate Scratch-style coding into apps for kids.

Today, we're excited to share our progress in a number of areas:

1. Designing for tinkerability

Research from the MIT Media Lab has highlighted the importance of providing children with opportunities for quick experimentation and rapid cycles of iteration. For example, the Scratch programming environment makes it easy for kids to adjust the code while it's running, as well as try coding blocks by just tapping on them. Since our initial announcement in May, we've focused on supporting this type of tinkerability in the Scratch Blocks project by making it very easy for developers to connect Scratch Blocks directly to the Scratch VM (a related open-source project being developed by MIT). In this model, instead of blocks being converted to a text-based language like JavaScript which is then interpreted, the blocks themselves are the code. The result is a more tinkerable experience for the end-user.

2. Designing for all levels

Computational thinking[1] is a valuable skill for everyone. In order to support developers building a wide diversity of coding experiences with Scratch Blocks, we've designed two related block grammars that can be used in a variety of contexts. One grammar uses icon-based blocks that connect horizontally, while the other uses text-based blocks that connect vertically.

We started by developing the horizontal grammar, which is well-suited for beginners of all ages due to its simplicity and limited number of blocks; additionally, this grammar is easier to manipulate on small screens. You can see an example of the horizontal icon-based grammar in Code a Snowflake (an activity in this year's Google Santa Tracker) built by the Google Kids Coding Team. More recently, we've added the vertical grammar, which supports a wider range of complex concepts. The horizontal grammar can also be translated into vertical blocks, making it possible to transition between the grammars. We imagine this will be useful in a number of situations, including designing for multiple screen sizes, or as an element of the app's learning experience.

3. Designing for all devices

We're building Scratch Blocks for a world that is increasingly mobile, where people of all ages will tinker with code in a variety of environments. We've improved the mobile experience in many key areas, both in Scratch Blocks and in the underlying Blockly project:

  • Redesigned blocks for improved touchability
  • Fast loading of large projects on low-powered devices
  • Optimization of block manipulation and code editing on touch screens
  • Redesigned multi-touch support for a better experience on touch devices

How to get involved

These first six months of Scratch Blocks have been a lot of work - and a ton of fun. To stay up to date on the project, check out our GitHub project, and learn more on our Developer Page.

[1] Learn more about Computational Thinking

Categories: Programming

Announcing updates to Google’s Internet of Things platform: Android Things and Weave

Tue, 12/13/2016 - 18:08
Originally posted on Android Developer Blog

Posted by Wayne Piekarski, Developer Advocate for IoT

The Internet of Things (IoT) will bring computing to a whole new range of devices. Today we're announcing two important updates to our IoT developer platform to make it faster and easier for you to create these smart, connected products.

We're releasing a Developer Preview of Android Things, a comprehensive way to build IoT products with the power of Android, one of the world's most supported operating systems. Now any Android developer can quickly build a smart device using Android APIs and Google services, while staying highly secure with updates direct from Google. We incorporated the feedback from Project Brillo to include familiar tools such as Android Studio, the Android Software Development Kit (SDK), Google Play Services, and Google Cloud Platform. And in the coming months, we will provide Developer Preview updates to bring you the infrastructure for securely pushing regular OS patches, security fixes, and your own updates, as well as built-in Weave connectivity and more.

There are several turnkey hardware solutions available for you to get started building real products with Android Things today, including Intel Edison, NXP Pico, and Raspberry Pi 3. You can easily scale to large production runs with custom designs of these solutions, while continuing to use the same Board Support Package (BSP) from Google.

We are also updating the Weave platform to make it easier for all types of devices to connect to the cloud and interact with services like the Google Assistant. Device makers like Philips Hue and Samsung SmartThings already use Weave, and several others like Belkin WeMo, LiFX, Honeywell, Wink, TP-Link, and First Alert are implementing it. Weave provides all the cloud infrastructure, so that developers can focus on building their products without investing in cloud services. Weave also includes a Device SDK for supported microcontrollers and a management console. The Weave Device SDK currently supports schemas for light bulbs, smart plugs and switches, and thermostats. In the coming months we will be adding support for additional device types, custom schemas/traits, and a mobile application API for Android and iOS. Finally, we're also working towards merging Weave and Nest Weave to enable all classes of devices to connect with each other in a secure and reliable way. So whether you started with Google Weave or Nest Weave, there is a path forward in the ecosystem.

This is just the beginning of the IoT ecosystem we want to build with you. To get started, check out Google's IoT developer site, or go directly to the Android Things, Weave, and Google Cloud Platform sites for documentation and code samples. You can also join Google's IoT Developers Community on Google+ to get the latest updates and share and discuss ideas with other developers.

Categories: Programming

Start building Actions on Google

Thu, 12/08/2016 - 18:03

Posted by Jason Douglas, PM Director for Actions on Google

The Google Assistant brings together all of the technology and smarts we've been building for years, from the Knowledge Graph to Natural Language Processing. To be a truly successful Assistant, it should be able to connect users across the apps and services in their lives, which is why it's so important to enable an ecosystem where developers can bring diverse and unique services to users through the Google Assistant.

In October, we previewed Actions on Google, the developer platform for the Google Assistant. Actions on Google further enhances the Assistant user experience by enabling you to bring your services to the Assistant. Starting today, you can build Conversation Actions for Google Home and request to become an early access partner for upcoming platform features.

Conversation Actions for Google Home

Conversation Actions let you engage your users to deliver information, services, and assistance. And the best part? It really is a conversation -- users won't need to enable a skill or install an app, they can just ask to talk to your action. For now, we've provided two developer samples of what's possible: just say "Ok Google, talk to Number Genie" or try "Ok Google, talk to Eliza" for the classic 1960s AI exercise.

You can get started today by visiting the Actions on Google website for developers. To help create a smooth, straightforward development experience, we worked with a number of development partners, including conversational interaction development tools API.AI and Gupshup, analytics tools DashBot and VoiceLabs, and consulting companies such as Assist, Notify.IO, Witlingo and Spoken Layer. We also created a collection of samples and voice user interface (VUI) resources, or you can check out the integrations from our early access partners as they roll out over the coming weeks.

Introduction to Conversation Actions by Wayne Piekarski

Coming soon: Actions for Pixel and Allo + Support for Purchases and Bookings

Today is just the start, and we're excited to see what you build for the Google Assistant. We'll continue to add more platform capabilities over time, including the ability to make your integrations available across the various Assistant surfaces like Pixel phones and Google Allo. We'll also enable support for purchases and bookings as well as deeper Assistant integrations across verticals. Developers who are interested in creating actions using these upcoming features should register for our early access partner program and help shape the future of the platform.

Build, explore and let us know what you think about Actions on Google! And to stay in the loop, be sure to sign up for our newsletter, join our Google+ community, and use the “actions-on-google” tag on StackOverflow.
Categories: Programming

Modifying email signatures with the Gmail API

Wed, 12/07/2016 - 21:12
Originally posted on G Suite Developer blog

Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite

The Gmail API team introduced a new settings feature earlier this year, and today, we're going to explore some of that goodness, showing developers how to update Gmail user settings with the API.

Email continues to be a dominant form of communication, personally and professionally, and our email signature serves as both a lightweight introduction and a business card. It's also a way to slip in a sprinkling of your personality. Wouldn't it be interesting if you could automatically change your signature whenever you wanted without using the Gmail settings interface every time? That is exactly what our latest video is all about.

If your app has already created a Gmail API service endpoint, say in a variable named GMAIL, and you have the YOUR_EMAIL email address whose signature should be changed as well as the text of the new signature, updating it via the API is pretty straightforward, as illustrated by this Python call to the GMAIL.users().settings().sendAs().patch() method:

signature = {'signature': '"I heart cats."  ~anonymous'}
GMAIL.users().settings().sendAs().patch(userId='me',
sendAsEmail=YOUR_EMAIL, body=signature).execute()
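
If you're wondering where GMAIL comes from, here's a minimal sketch, assuming the google-api-python-client library and previously-obtained OAuth2 credentials in a creds variable (creds is hypothetical, not from the post); it also reads the alias back to confirm the patch took effect:

from googleapiclient.discovery import build

# Build the Gmail service endpoint used in the snippet above;
# creds is a hypothetical OAuth2 credentials object you've already obtained.
GMAIL = build('gmail', 'v1', credentials=creds)

# After patching, fetch the send-as alias to verify the new signature.
sender = GMAIL.users().settings().sendAs().get(
        userId='me', sendAsEmail=YOUR_EMAIL).execute()
print(sender.get('signature'))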

For more details about the code sample used in the requests above as well as in the video, check out the deep dive post. In addition to email signatures, other settings the API can modify include: filters, forwarding (addresses and auto-forwarding), IMAP and POP settings to control external email access, and the vacation responder. Be aware that while API access to most settings is available for any G Suite Gmail account, a few sensitive operations, such as modifying send-as aliases or forwarding, are restricted to users with domain-wide authority.

Developers interested in using the Gmail API to access email threads and messages instead of settings can check out this other video where we show developers how to search for threads with a minimum number of messages, say to look for the most discussed topics from a mailing list. Regardless of your use-case, you can find out more about the Gmail API in the developer documentation. If you're new to the API, we suggest you start with the overview page which can point you in the right direction!

Be sure to subscribe to the Google Developers channel and check out other episodes in the G Suite Dev Show video series.

Categories: Programming

AMP Cache Updates

Tue, 12/06/2016 - 00:39

Posted by John Coiner, Software Engineer

Today we are announcing a change to the domain scheme of the Google AMP Cache. Beginning soon, the Google AMP Cache will serve each site from its own subdomain of https://cdn.ampproject.org. This change will allow content served from the Google AMP Cache to be protected by the fundamental security model of the web: the HTML5 origin.

No immediate changes are required for most publishers of AMP documents. However, to benefit from the additional security, it is recommended that all AMP publishers update their CORS implementation in preparation for the new Google AMP Cache URL scheme. The Google AMP Cache will continue to support existing URLs, but those URLs will eventually redirect to the new URL scheme.

How subdomain names will be created on the Google AMP Cache

The subdomains created by the Google AMP Cache will be human-readable when character limits and technical specs allow, and will closely resemble the publisher's own domain.

When possible, the Google AMP Cache will create each subdomain by first converting the AMP document domain from IDN (punycode) to UTF-8. Every "-" (dash) will be replaced with "--" (two dashes), and every "." (dot) will be replaced with a "-" (dash). For example, pub.com will map to pub-com.cdn.ampproject.org. Where technical limitations prevent a human-readable subdomain, a one-way hash will be used instead.
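
To make that mapping concrete, here's a small Python sketch of the human-readable case described above; the one-way hash fallback and the spec's other edge cases are deliberately omitted:

def amp_cache_subdomain(domain):
    # Convert the domain from IDN (punycode) to UTF-8 where applicable.
    utf8 = domain.encode('ascii').decode('idna')
    # Replace every '-' with '--' first, then every '.' with '-'.
    return utf8.replace('-', '--').replace('.', '-')

print(amp_cache_subdomain('pub.com'))       # pub-com
print(amp_cache_subdomain('amp-site.org'))  # amp--site-org

Note that the replacement order matters: doubling the dashes first guarantees a dash produced from a dot can never be confused with a dash that was in the original domain.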

Updates needed for hosts and service providers with remote endpoints

Due to the changes described above, CORS endpoints will begin seeing requests with new origins. The following updates will be required:

  • Expand request acceptance to the new subdomain: Sites that currently only accept CORS requests from https://cdn.ampproject.org and the publisher's own origins must update their systems to accept requests from https://[pub-com].cdn.ampproject.org, https://cdn.ampproject.org, and the AMP publisher's own origins (see the sketch after this list).
  • Tighten request acceptance for security: Sites that currently accept CORS requests from https://*.ampproject.org as described in the AMP spec, can improve security by restricting acceptance to requests from https://[pub-com].cdn.ampproject.org, https://cdn.ampproject.org, and the AMP publisher's own origins. Support for https://*.ampproject.org is no longer necessary.
  • Support for new subdomain pattern by ads, analytics, and other technology providers: Service providers such as analytics and ads vendors that have a CORS endpoint will also need to ensure that their systems accept requests from the Google AMP Cache's subdomains (e.g. https://ampbyexample-com.cdn.ampproject.org), in addition to their own hosts.
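
As a rough illustration of the first two bullets, here's a hedged Flask sketch; Flask is our choice for illustration, not anything the AMP project prescribes, and pub.com with its pub-com cache subdomain is a hypothetical publisher:

from flask import Flask, jsonify, request

app = Flask(__name__)

# The publisher's own origin plus the old and new Google AMP Cache origins.
ALLOWED_ORIGINS = {
    'https://pub.com',                     # hypothetical publisher origin
    'https://pub-com.cdn.ampproject.org',  # its new cache subdomain
    'https://cdn.ampproject.org',          # legacy cache origin
}

@app.route('/amp-data')
def amp_data():
    origin = request.headers.get('Origin', '')
    if origin not in ALLOWED_ORIGINS:
        return '', 403
    resp = jsonify(items=[])
    # Echo back only the specific, vetted origin rather than a wildcard.
    resp.headers['Access-Control-Allow-Origin'] = origin
    return resp

A real AMP endpoint has a few more requirements (the AMP CORS spec also covers source-origin checks), so treat this purely as a starting point for the allowlist change.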
Retrieving the Google AMP Cache URL

For platforms that display AMP documents and serve from the Google AMP Cache, the best way to retrieve Google AMP Cache URLs is to continue using the Google AMP Cache URL API. The Google AMP Cache URL API will be updated in Q1 2017 to return the new cache URL scheme that includes the subdomain.

You can use an interactive tool to find the Google AMP Cache subdomain generated for each site over at ampbyexample.com.

Timing and testing resources

Google Search is planning to begin using the new URL scheme as soon as possible and is monitoring sites' compatibility.

In addition, a developer testing sandbox is available at g.co/ampdemo/cache to help ensure a smooth transition. After making the updates described above, please use the sandbox to test accessing your site via Google Search. The sandbox loads AMP pages using the new domain scheme, so if you spot CORS-related errors in this configuration, these issues should be addressed to avoid errors when the domain scheme change is fully rolled out.

Categories: Programming