Software Development Blogs: Programming, Software Testing, Agile, Project Management

Google Code Blog
Updated: 2 hours 33 min ago

How to set up Ads on your AMP Pages

7 hours 53 min ago

Posted by Arudea Mahartianto, Google AMP Specialist

From conception, the open source Accelerated Mobile Pages Project has had a clear goal: to make the mobile web experience better and faster for users. This extends beyond content to creating a user-first approach to advertising as well.

To realize this vision, the AMP team created an advertising solution that follows four core principles:

  • Faster is better - There is no reason ads in AMP can’t be as fast as the AMP document itself.
  • Beautiful matters - Ensure ads in AMP are beautiful and relevant.
  • Security is a must - Require all creatives to utilize the HTTPS protocol.
  • We’re better together - AMP isn’t about supporting a single advertising entity, but an entire industry. Success requires broad industry participation.

Ads in AMP are delivered using the amp-ad component. Using this component you can configure your ads in a number of ways, such as the width, height, layout mode and ad loading strategy. Different ad networks might allow even more options.

Here is an example of a DoubleClick responsive ad implementation in AMP:

<amp-ad
  width="414"
  height="457"
  layout="responsive"
  type="doubleclick"
  data-slot="/35096353/amptesting/image/flex">
</amp-ad>

The type attribute tells the amp-ad component which ad platform to use. In this case we want DoubleClick, so the type value is doubleclick. For an above-the-fold responsive ad implementation, use layout="fixed-height" instead and limit the ad height so users get a fast-loading, content-focused experience from the very start.
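
As a rough sketch (not from the original post; the slot path is simply reused from the snippet above and the height is arbitrary), an above-the-fold fixed-height version might look like this:

<amp-ad
  height="250"
  layout="fixed-height"
  type="doubleclick"
  data-slot="/35096353/amptesting/image/flex">
</amp-ad>

With layout="fixed-height" the width attribute is left out (or set to auto), so only the height of the slot is fixed while the ad spans the available width.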

Any attributes starting with data- in amp-ad are ad platform-specific attributes, including the data-slot attribute in the snippet above. Each ad platform will have different attributes available to configure. For example, compare the above DoubleClick example with another AMP ad example that uses the Rubicon platform:

<amp-ad
  width="320"
  height="50"
  type="rubicon"
  data-method="smartTag"
  data-account="14062"
  data-site="70608"
  data-zone="335918"
  data-size="43">
</amp-ad>

For more amp-ad implementation examples, please check out AMP By Example. You can also check out the amp-ad documentation for the complete list of supported ad networks and their configuration semantics.

The team is also developing newer, better ways to bring the benefits of AMP to the ads ecosystem with initiatives like AMP for Ads and AMP Ad Landing Pages. These solutions will enable advertisers to design creatives and ad landing pages that are more consistent with the AMP experience publishers are bringing to users. We believe this will bring us closer to the goal of making the entire mobile web experience faster and better for everybody.

Categories: Programming

Learn by doing with the Udacity VR Developer Nanodegree

Tue, 09/27/2016 - 18:37

Posted by Nathan Martz, Product Manager, Google VR

With Google Cardboard and Daydream, our Google VR team is working to bring virtual reality to everyone. In addition to making VR more accessible by using the smartphone in your pocket, we recently launched the Google VR SDK out of beta, with native integration for Unity and UE4, to help make it easier for more developers to join the fold.

To further support and encourage new developers to build VR experiences, we’ve partnered with Udacity to create the VR Developer Nanodegree. Students will learn how to create 3D environments, define behaviors, and make VR experiences comfortable, immersive, and performant.


Even with more than 50 million installs of Google Cardboard apps on Google Play, these are still the early days of VR. Students who complete the VR Developer Nanodegree learn by doing, and will graduate having completed a portfolio of VR experiences.

Learn more and sign up to receive VR Developer Nanodegree program updates at https://www.udacity.com/vr

Categories: Programming

Shopping made simple with Tango and WayfairView

Tue, 09/27/2016 - 16:58

Posted by Sophie Miller, Tango Business Development

Window shopping and showrooms let us imagine what that couch might look like in our living room or if that stool is the right height, but Tango can help take out the guesswork using augmented reality. Place virtual furniture in your real room, walk around, and try different colors.

Tango-enabled apps like WayfairView make it easy to visualize and rearrange new furniture in your home. We sat down with the Wayfair team to learn more about their app and see how Tango helps power new AR shopping experiences:

Google: Please tell us about your Tango app.

Mike: Wayfair offers a massive selection of products online. We believe that the ability for customers to visualize products in their living space augments our online experience, and solves real customer problems such as: “Will this product fit in my space?” and “Will this match the rest of my environment?”

Why are you excited for your customers to start using WayfairView?

One of the biggest barriers that online shopping poses is the inability for a customer to get a good sense of how a product would fit in their room, and what it would look like in their living space. With WayfairView, we aim to help our customers better visualize our products - going above and beyond a flat, 2D image and providing them with an accurate 3D rendering of what the full-size item could look like in their home. Not only is this a great extension of the customer experience, it’s also a practical approach to figure out how the product fits into the user’s space before ordering it.


How did you get started developing for Tango?

I signed up to buy a dev kit in 2014 because I was personally interested in scanning 3D objects and environments. I ended up using it for a hackathon to build the first prototype of what is now WayfairView. One of my teammates, Shrenik Sadalgi, has always been interested in AR technology and had participated in Tango hackathons in years prior. He thought this particular flavor of AR, i.e. markerless AR in the form factor of a mobile device, had the potential of providing a seamless, easy user experience for Wayfair customers.

Was there something unique to the Tango platform that made it particularly appealing?

AR technology has been around for a while, but Tango is making it accessible by providing the technology in a way that is user friendly. Specifically, the Tango platform excels in accurate tracking, which allowed Wayfair’s R&D team to focus on building a great experience for our customers. No markers, no HMDs, no cords that can get tangled, but still powerful.

What were some of the challenges you faced building for Tango?

The biggest challenge Wayfair faces with AR technology is more about the experience than the device, which is in big part thanks to Tango. Our goal was to introduce an entirely new way of shopping for furniture in a way that is user friendly. Not having to worry about the inner workings of Tango helped us focus on making the furniture look as real as possible, scaling the app with our massive catalog, and getting to market in a short period of time.

What surprised you during the Tango development process?

The learning curve for Tango was minimal. We were able to get started very quickly using example code. It was pretty remarkable how the stability of the platform (primarily the tracking) kept improving over the period of time that we worked on the app.

Which platform did you build your Tango app on, and why?

We wrote the core of the app using Unity in C# - we wanted all the 2D UI to be in native Android to match the Wayfair native Android experience. This also gave us the opportunity to re-use code from the existing Wayfair Android app. We saw significant performance improvements by using native Android to create the 2D UI as well, which also makes the UI easier to update when the next UI theme of Android comes along.

What features can customers look forward to in a future WayfairView update?

We would love to add the ability to search for products by space: imagine drawing a cube in your real space and finding all products that fit the space. We also want to allow users to stack virtual products on top of each other to help them visualize how a virtual table lamp would look on top of a virtual table. Of course, we also want to make the products look even more real and add more products that can be visualized on WayfairView.

How do you think that this will change the way people shop for household goods?

WayfairView makes it easier than ever for customers to visualize online goods in their home at full scale, giving them an extra level of confidence when making an online purchase. We believe Tango has the potential to become a ubiquitous technology, just like smartphone cameras and mobile GPS. Ultimately, we anticipate that this will further accelerate the shift from brick and mortar to online.

We also imagine that WayfairView will be a very useful tool for our designers as they share their design proposal and vision with their customers.

Categories: Programming

How to set up Analytics on your AMP pages

Mon, 09/26/2016 - 18:57

Originally posted on Google Analytics blog

Posted by Arudea Mahartianto, Google AMP Specialist

In the digital world, whether you’re writing stories for your loyal readers, creating creative content that your fans love, helping the digital community, or providing items and services for your customer, understanding your audience is at the heart of it all. Key to unlocking that information is access to tools for measuring your audience and understanding their behavior. In addition to making your page load faster, Accelerated Mobile Pages (AMP) provides multiple analytics options without compromising on performance.

You can choose to use a solution like amp-pixel that behaves like a simple tracking pixel. It uses a single URL that allows variable substitutions, so it’s very customizable. See the amp-pixel documentation for more detail.
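
For example, a bare-bones amp-pixel tag might look like the sketch below (the endpoint is a placeholder, not from the post; TITLE, CANONICAL_URL and RANDOM are AMP's built-in substitution variables that are expanded when the request is made):

<amp-pixel src="https://example.com/pixel?title=TITLE&url=CANONICAL_URL&rand=RANDOM"></amp-pixel>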

The amp-analytics component, on the other hand, is a powerful solution that recognizes many types of event triggers to help you collect specific metrics. Since amp-analytics is supported by multiple analytics providers, this means you can use amp-analytics to configure multiple endpoints and data sets. AMP then manages all of the instrumentation to come up with the data specified and shares it with these analytics solution providers.

To use amp-analytics, include the component library in your document's <head>:

<script async custom-element="amp-analytics"
    src="https://cdn.ampproject.org/v0/amp-analytics-0.1.js"></script>

And then include the component as follows (for these examples, make sure to specify your own account number instead of the placeholder):

<amp-analytics type="googleanalytics">
  <script type="application/json">
  {
    "vars": {
      "account": "UA-YYYY-Y"
    },
    "triggers": {
      "defaultPageview": {
        "on": "visible",
        "request": "pageview",
        "vars": {
          "title": "Name of the Article"
        }
      }
    }
  }
  </script>
</amp-analytics>

The JSON format is flexible enough to describe several different types of events, and it does not include any JavaScript code, which could potentially lead to mistakes.

Expanding the above example, we can add another trigger, clickOnHeader:

<amp-analytics type="googleanalytics">
  <script type="application/json">
  {
    "vars": {
      "account": "UA-YYYY-Y"
    },
    "triggers": {
      "defaultPageview": {
        "on": "visible",
        "request": "pageview",
        "vars": {
          "title": "Name of the Article"
        }
      },
      "clickOnHeader": {
        "on": "click",
        "selector": "#header",
        "request": "event",
        "vars": {
          "eventCategory": "examples",
          "eventAction": "clicked-header"
        }
      }
    }
  }
  </script>
</amp-analytics>

For a detailed description of the data sets you can request, as well as the complete list of analytics providers supporting amp-analytics, check out the amp-analytics documentation. You can also see more implementation examples on the AMP By Example site.

If you want to conduct a user experience experiment on your AMP pages, such as an A/B test, you can use the amp-experiment element. Any configuration done in this element will also be exposed to amp-analytics and amp-pixel, so you can easily do a statistical analysis of your experiment.
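
As a minimal sketch (the experiment and variant names below are placeholders, not from the post), an amp-experiment setup includes the component script in the page's <head> and then declares named variants with traffic allocations in a JSON block:

<script async custom-element="amp-experiment"
    src="https://cdn.ampproject.org/v0/amp-experiment-0.1.js"></script>

<amp-experiment>
  <script type="application/json">
  {
    "button-color-experiment": {
      "variants": {
        "treatment": 50,
        "control": 50
      }
    }
  }
  </script>
</amp-experiment>

The variant each visitor is assigned to can then be referenced from your amp-analytics or amp-pixel requests through AMP's variant substitution variables.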

There are still plenty of ongoing developments for AMP analytics to help you gain insights as you AMPlify the user experience on your site. Visit the AMP Project roadmap to see a summary of what the team is cooking up. If you see some features missing, please file a request on GitHub.

Categories: Programming

Google VR SDK graduates out of beta

Thu, 09/22/2016 - 18:00

Posted by Nathan Martz, Product Manager, Google VR

At Google I/O, we announced Daydream—Google's platform for high quality, mobile virtual reality—and released early developer resources to get the community started with building for Daydream. Since then, the team has been hard at work, listening to feedback and evolving these resources into a suite of powerful developer tools.

Today, we are proud to announce that the Google VR SDK 1.0 with support for Daydream has graduated out of beta, and is now available on the Daydream developer site. Our updated SDK simplifies common VR development tasks so you can focus on building immersive, interactive mobile VR applications for Daydream-ready phones and headsets, and supports integrated asynchronous reprojection, high fidelity spatialized audio, and interactions using the Daydream controller.

To make it even easier to start developing with the Google VR SDK 1.0, we’ve partnered with Unity and Unreal so you can use the game engines and tools you’re already familiar with. We’ve also updated the site with full documentation, reference sample apps, and tutorials.

Native Unity integration

This release marks the debut of native Daydream integration in Unity, which enables Daydream developers to take full advantage of all of Unity’s optimizations in VR rendering. It also adds support for features like head tracking, deep linking, and easy Android manifest configuration. Many Daydream launch apps are already working with the newest integration features, and you can now download the new Unity binary here and the Daydream plugin here.

Native UE4 integration

We’ve made significant improvements to our UE4 native integration that will help developers build better production-quality Daydream apps. The latest version introduces Daydream controller support in the editor, a neck model, new rendering optimizations, and much more. UE4 developers can download the source here.

Get started today

While the first Daydream-ready phones and headset are coming this fall, you can start developing high-quality Daydream apps right now with the Google VR SDK 1.0 and the DIY developer kit.

We’re also opening applications to our Daydream Access Program (DAP) so we can work closely with even more developers building great content for Daydream. Submit your Daydream app proposal to apply to be part of our DAP.

When you create content for the Daydream platform, you know your apps will work seamlessly across every Daydream-ready phone and headset. Daydream is just getting started, and we’re looking forward to working together to help you build new immersive, interactive VR experiences. Stay tuned for more information about Daydream-ready phones and the Daydream headset and controller coming soon.

Categories: Programming

Increased account security via OAuth 2.0 token revocation

Wed, 09/21/2016 - 23:35
Originally posted on Google Apps Developers Blog

Posted by Michael Winser, Product Lead, Google Apps and Wesley Chun, Developer Advocate, Google Apps

Last week, we clarified the expectations and responsibilities when accessing Google user data via OAuth 2.0. Today, we’re announcing that in order to better protect users, we are increasing account security for enterprise Gmail users effective October 5, 2016. On that date, a new policy will take effect: when users in a Google Apps domain change their password on or after that date, the OAuth 2.0 tokens of apps that access their mailboxes using Gmail-based authorization scopes will be revoked. Please note that users will not notice any specific changes on this date and their applications will continue to work. It is only when a user changes their password from that point moving forward that their Gmail-related tokens become invalid.

Developers should modify their applications to handle HTTP 400 or 401 error codes resulting from revoked tokens and prompt their users to go through the OAuth flow again to re-authorize those apps, such that they can access the user’s mailbox again (additional details below). Late last year, we announced a similar, planned change to our security policy that impacted a broader set of authorization scopes. We later decided not to move forward with that change for Apps customers and began working on a less impactful update as described above.

What is a revoked token?

A revoked OAuth 2.0 token no longer provides access to a user’s resources. Any attempt to use a revoked token in API calls will result in an error. Any existing token strings will no longer have any value and should be discarded. Applications accessing Google APIs should be modified to handle failed API calls.

Token revocation itself is not a new feature. Users have always been able to revoke access to applications in Security Checkup, and Google Apps admins have the ability to do the same in the Admin console. In addition, tokens that were not used for extended periods of time have always been subject to expiration or revocation. This change in our security policy will likely increase the rate of revoked tokens that applications see, since in some cases the process will now take place automatically.

What APIs and scopes are impacted?

To achieve the security benefits of this policy change with minimal admin confusion and end-user disruption, we’ve decided to limit its application to mail scopes only and to exclude Apps Script tokens. Apps installed via the Google Apps Marketplace are also not subject to the token revocation. Once this change is in effect, third-party mail apps like Apple Mail and Thunderbird, as well as other applications that use multiple scopes that include at least one mail scope, will stop accessing data upon password reset until a new OAuth 2.0 token has been granted. Your application will need to detect this scenario, notify the user that your application has lost access to their account data, and prompt them to go through the OAuth 2.0 flow again.

Mobile mail applications are also included in this policy change. For example, users who use the native mail application on iOS will have to re-authorize with their Google account credentials when their password has been changed. This new behavior for third-party mail apps on mobile aligns with the current behavior of the Gmail apps on iOS and Android, which also require re-authorization upon password reset.

How can I determine if my token was revoked?

Both short-lived access tokens and long-lived refresh tokens will be revoked when a user changes their password. Using a revoked access token to access an API or to generate a new access token will result in either HTTP 400 or 401 errors. If your application uses a library to access the API or handle the OAuth flow, then these errors will likely be thrown as exceptions. Consult the library’s documentation for information on how to catch these exceptions. NOTE: because HTTP 400 errors may occur for a variety of reasons, expect the payload from a 400 due to a revoked token to be similar to the following:

{
  "error_description": "Token has been revoked.",
  "error": "invalid_grant"
}

How should my application handle revoked tokens?

This change emphasizes that token revocation should be considered a normal condition, not an error scenario. Your application should expect and detect the condition, and your UI should be optimized for restoring tokens.

To ensure that your application works correctly, we recommend doing the following:

  • Add error handling code around API calls and token refreshes that can detect revoked tokens.
  • Upon detecting a revoked token, disable any application features that rely on Google API access until the user can re-authorize your application. For example, suspend any recurring background jobs that sync data with a Google API which may be affected.
  • Notify the user that access has been revoked and prompt them to re-authorize access to their resources.
    • If your app interacts directly with the user, you will need to prompt the user to re-authorize, i.e., send an email to the user and/or show them an alert the next time they open your application.
    • However, if your app runs independently of the user, say a background app that uses the Gmail API, you'll need to notify the user through email or some other mechanism.
    • Provide a streamlined UI for re-authorizing access. Avoid having users navigate through your application to find the original setting.
    • Note that revoked tokens will result in similar error messages regardless of how the token was revoked. Your messaging should not assume that the token was revoked due to a password change.

If your application uses incremental authorization to accrue multiple scopes in the same token, you should track which features and scopes a given user has enabled. The end result is that if your app requested and obtained authorization for multiple scopes, and at least one of them is a mail scope, that token will be revoked, meaning you will need to prompt your user to re-authorize for all scopes originally granted.

Many applications use tokens to perform background or server-to-server API calls. Users expect this background activity to continue reliably. Since this policy change also affects those apps, this makes prompt notification requesting re-authorization even more important.

What is the timeline for this change?

To summarize, properly configured applications should handle invalid tokens in general, whether from expiration, non-existence, or revocation, as normal conditions. We encourage developers to make any necessary changes to give their users the best experience possible. The policy change is planned to take effect on October 5, 2016.

Please see this Help Center article and FAQ for more details and the full list of mail scopes. Moving forward, any additional scopes to be added to the policy will be communicated in advance. We will provide those details as they become available.

Categories: Programming

Launchpad Build Event Series - Sub-Saharan Africa

Wed, 09/21/2016 - 21:23
Posted by Mercy Orangi, Developer Ecosystem Community Manager

Back in May at Google I/O, we announced the expansion of Firebase, a mobile platform that enables you to quickly develop high-quality applications, grow your user base and earn more money. To help developers better understand the range of features in Firebase, our Developer Relations team in Sub-Saharan Africa will be hosting the Launchpad Build Event Series in Sub-Saharan Africa. The first leg will be held in Lagos (22nd Sep), followed by Nairobi (26th Sep) and finally Cape Town (29th Sep).

Launchpad Build is an event series aimed at raising awareness, amongst intermediate and expert developers with an existing Web or Android application, around important tools available today.

At this event, engage in talks and hands-on codelabs focused on Firebase Analytics, Firebase Cloud Messaging, Firebase Crash Reporting, Firebase Test Lab, Pirate Metrics, Serverless with Firebase, TensorFlow and much more. Through the Launchpad Build event, developers will get the skills and resources necessary to start using Firebase in their applications.

This is a technical event, with multiple sessions on Firebase, facilitated by Googlers and Google Developer Experts from around the world.

For further information, visit the Launchpad Build Event Series Sub-Saharan Africa Website.

Register now: bit.ly/lpbuildssa2016

Applicants will be contacted with necessary details.

Categories: Programming

Making Conversational Interfaces Easier to Build

Mon, 09/19/2016 - 21:30
Posted by Scott Huffman, VP of Engineering

We have been investing in the core machine learning technologies that enable natural language interfaces for years. To continue that investment, we’re excited to welcome API.AI to Google!

API.AI has a proven track record for helping developers design, build and continuously improve their conversational interfaces. Over 60,000 developers are using API.AI to build conversational experiences, for environments such as Slack, Facebook Messenger and Kik, to name just a few. API.AI offers one of the leading conversational user interface platforms and they’ll help Google empower developers to continue building great natural language interfaces.

Stay tuned for more details on integrations into Google. And if you’re already using API.AI, keep building your conversational interfaces, and if you’re not, start today!

Categories: Programming

Apply now for the Google Developers Launchpad Accelerator

Wed, 09/07/2016 - 19:20

Posted by Roy Glasberg, Global Lead, Launchpad Accelerator

We’re delighted to open our call for applications for the third class of the Launchpad Accelerator. If you are a late-stage app startup from Brazil, India, Indonesia, or Mexico, we encourage you to apply here by October 24, 2016. Based outside of these countries? Stay tuned, as we expect to add more countries to the program in the future!

The equity-free program will begin on January 30, 2017 at the new Google Developers Launchpad Space in San Francisco and will include 2 weeks of all-expense-paid training.

What are the benefits?

During the kick-off bootcamp we deliver in-depth technical and business mentoring that enables our startups to tackle their specific challenges and successfully scale. Launchpad mentors hail from around the world and more than 20 teams across Google. In total, startups receive access to Google’s expertise and resources for 6 months.

What do we look for when selecting startups?

Each startup that applies to the Launchpad Accelerator is considered holistically and with great care. Below are general guidelines behind our process to help you understand what we look for in our candidates.

All startups in the program must:

  • Be a technological startup.
  • Be targeting their local markets.
  • Be based in Brazil, India, Indonesia, or Mexico.
  • Have proven product-market fit (beyond ideation stage).

Additionally, we are interested in what kind of startup you are. We also consider:

  • The problem you are trying to solve. How does it create value for users? How are you addressing a real challenge for your home city, country or region?
  • Does the management team have a leadership mindset and the drive to become an influencer? Will they share what they learn in Silicon Valley for the benefit of other startups in their local ecosystem?

We look forward to learning more about your startup and working closely with you on building a successful business that has both a local and global impact.

Categories: Programming

Get ready! DevFest Season has started!

Thu, 09/01/2016 - 20:33

Posted by Ale Borba and Adriana Cerundolo, Developer Relations Program Managers

Today kicks off the annual DevFest season, a series of developer community-run events which will be happening over the next 3 months. Google Developer Group (GDG) chapters from around the world will host #DevFest16 events, bringing together developers to exchange knowledge, share ideas, and express their passion for technology.

At its core, #DevFest16 is powered by a shared belief that when developers come together to exchange ideas, amazing things can happen! It is an opportunity for GDGs to share speakers and event resources with each other to put on quality events uniquely tailored to the needs of the local developer communities. Attendees can expect technical content around Google developer technologies, including Firebase, Google Cloud Platform, machine learning with TensorFlow, web development and much more.

We anticipate that 50k developers from over 70 countries will participate in #DevFest16 this year; from Canada to Australia, throughout South America, Africa, Europe and Asia, GDGs will be joining together to bring you an enriching developer experience. Go find a DevFest near you!

Happy Festing!

Categories: Programming

Closure Compiler in JavaScript

Wed, 08/31/2016 - 17:01
Posted by Sam Thorogood, Developer Programs Engineer

The Closure Compiler was originally released, in Java, back in 2009. Today, we're announcing the very same Closure Compiler is now available in pure JavaScript, for use without Java. It's designed to run under NodeJS with support for some popular build tools.

If you've not heard of the Closure Compiler, it's a JavaScript optimizer, transpiler and type checker, which compiles your code into a high-performance, minified version. Nearly every web frontend at Google uses it to serve the smallest, fastest code possible.

It supports new features in ES2015, such as let, const, arrow functions, and provides polyfills for ES2015 methods not supported everywhere. To help you write better, maintainable and scalable code, the compiler also checks syntax, correct use of types, and provides warnings for many JavaScript gotchas. To find out more about the compiler itself, including tutorials, head to Google Developers.

How does this work?

This isn't a rewrite of Closure in JavaScript. Instead, we compile the Java source to JS to run under Node, or even inside a plain old browser. Every post or resource you see about Closure Compiler will also apply to this version.

To find out more about Closure Compiler's internals, be sure to check out this post by Dimitris (who works on the Closure team at Google), other posts on the Closure Tools blog, or read an exploratory post about Closure and how it can help your project in 2016.

Note that the JS version is experimental. It may not perform in the same way as the native Java version, but we believe it's an interesting new addition to the compiler landscape, and the Closure team will be working to improve and support it over time.

How can I use it?

To include the JS version of Closure Compiler in your project, add it as a dependency via NPM:


npm install --save-dev google-closure-compiler-js

To then use the compiler with Gulp, you can add a task like this:

// in your gulpfile.js
const gulp = require('gulp');
const compiler = require('google-closure-compiler-js').gulp();

gulp.task('script', function() {
  // select your JS code here
  return gulp.src('./src/**/*.js', {base: './'})
      .pipe(compiler({
        compilation_level: 'SIMPLE',
        warning_level: 'VERBOSE',
        output_wrapper: '(function(){\n%output%\n}).call(this)',
        js_output_file: 'output.min.js',  // outputs single file
        create_source_map: true
      }))
      .pipe(gulp.dest('./dist'));
});

If you'd like to migrate from google-closure-compiler (which requires Java), you'll have to use gulp.src() or equivalents to load your JavaScript before it can be compiled. As this compiler runs in pure JavaScript, the compiler cannot load or save files from your filesystem directly.

For more information, check out Usage, supported Flags, or a demo project. Not all flags supported in the Java release are currently available in this experimental version. However, the compiler will let you know via exception if you've hit any missing ones.

Categories: Programming

Tango developer workshop brings stories to life

Mon, 08/29/2016 - 17:21

Posted by Eitan Marder-Eppstein, Senior Software Engineer for Tango

Technology helps us connect and communicate with others -- from sharing commentary and photos on social media to posting a video with breaking news, digital tools enable us to craft stories and share them with the world.

Tango can enhance storytelling by bringing augmented reality into our surroundings. Recently, the Tango team hosted a three-day developer workshop around how to use this technology to tell incredible stories through mobile devices. The workshop included a wide range of participants, from independent filmmakers and developers to producers and creatives at major media companies. By the end of the workshop, a number of new app prototypes had been created. Here are some of the workshop highlights:

  • The New York Times experimented with ways to connect people with news stories by creating 3D models of the places where the events happened.
  • The Wall Street Journal prototyped an app called ViewPoint to bring location-based stories to life. When you’re in front of a monument, for example, you can see AR content and pictures that someone else took at that site.
  • Line experimented with bringing 3D characters to life. For example, app users could see AR superheroes in front of them, and then their friend could jump into the characters’ costumes.
  • Google’s Mobile Vision Team brought music to life by letting people point their phones at various objects and visualize the vibrations that music makes on them.

We even had an independent developer use Tango to create a real-time video stabilization tool. We’re looking forward to seeing these apps—and many more—come to life. If you want to start building your own storytelling and visual communication apps for augmented reality, check out our developer page and join our G+ community.

Categories: Programming

Modernizing OAuth interactions in Native Apps for Better Usability and Security

Mon, 08/22/2016 - 22:29

Posted by William Denniss, Product Manager, Identity and Authentication

The Identity team is constantly striving to help Google users sign-in to third-party applications with their Google account in a secure and seamless way, and enable users to share select information from their account such as their calendar or contact information with other apps, when they wish to do so.

Under the hood these interactions happen via OAuth requests, and over the years Google has supported a number of ways for developers to implement OAuth flows with us. With improved security and usability in mind, we will soon be ending support for one of these ways. In the coming months, we will no longer allow OAuth requests to Google in embedded browsers known as “web-views”, such as the WebView UI element on Android and UIWebView/WKWebView on iOS, and equivalents on Windows and OS X.

Using the device browser for OAuth requests instead of an embedded web-view can improve the usability of your apps significantly: users only need to sign in to Google once per device, improving conversion rates of sign-in and authorization flows in your app. Modern “in-app browser tab” patterns available on some operating systems, such as Chrome Custom Tabs on Android and SFSafariViewController on iOS, offer further UX improvements for browser-based OAuth flows.

In contrast, the outdated method of using embedded browsers for OAuth means a user must sign-in to Google each time, instead of using the existing logged-in session from the device. The device browser also provides improved security as apps are able to inspect and modify content in a web-view, but not content shown in the browser.

To help you migrate, we offer libraries and samples that follow modern best practices which you can use:

  • Google Sign-In for Android and iOS, our recommended SDK for sign-in and OAuth with Google Accounts.
  • AppAuth for Android, iOS, and OS X, an open source OAuth client library that can be used with Google and other OAuth providers. We also offer GTMAppAuth (for iOS and OS X), a library which enables AppAuth support for the Google APIs Client Library for Objective-C, and the GTM Session Fetcher projects.
  • Google Sign-in and OAuth Examples for Windows, examples demonstrating how to use the browser to authenticate Google users in various Windows environments such as Universal Windows Platform (UWP), console and desktop apps.

You can also read protocol-level documentation for our standards-based support of OAuth for Native Apps, and an IETF best current practice draft on this topic.

Versions of Google Sign-In on iOS prior to version 3.0 don’t support the current industry best practices of the in-app browser tab, and therefore are also deprecated. If you use Google Sign-In, please update to the latest version to get all the recent security and usability improvements. For now, this policy does not remove our support of WebView on iOS 8, however we may start to display notices encouraging users to upgrade their device for better security.

The rollout schedule for the deprecation of web-views for OAuth requests to Google is as follows. Starting October 20, 2016, we will prevent new OAuth clients from using web-views on platforms with a viable alternative, and will phase in user-facing notices for existing OAuth clients. On April 20, 2017, we will start blocking OAuth requests using web-views for all OAuth clients on platforms where viable alternatives exist.

If you have any questions with the migration, please post to Stack Overflow tagged with “google-oauth”.

Categories: Programming

Google Developers to open a startup space in San Francisco

Thu, 08/18/2016 - 19:10

Posted by Roy Glasberg, Global Lead, Launchpad Accelerator

We’re heading to the city of San Francisco this September to open a new space for developers and startups. With over 14,000 sq. ft. at 301 Howard Street, we’ll have more than enough elbow room to train, educate and collaborate with local and international developers and startups.

The space will hold a range of events: Google Developer Group community meetups, Codelabs, Design Sprints, and Tech Talks. It will also host the third class of Launchpad Accelerator, our equity-free accelerator for startups in emerging markets. During each class, over 20 Google teams provide comprehensive mentoring to late-stage app startups who seek to scale and become leaders in their local markets. The 3-month program starts with an all-expenses-paid two week bootcamp at Google HQ.

Developers are in an ever-changing landscape and seek technical training. We’ve also seen a huge surge in the number of developers starting their own companies. Lastly, this is a unique opportunity to bridge the gap between Silicon Valley and emerging markets. To date, Launchpad Accelerator has nearly 50 alumni in India, Indonesia, Brazil and Mexico. Startups in these markets are tackling critical local problems, but they often lack access to the resources and network we have here. This dedicated space will enable us to regularly engage with developers and serve their evolving needs, whether that is to build a product, grow a company or make revenue.

We can’t wait to get started and work with developers to build successful businesses that have a positive impact locally and globally.

Categories: Programming

A Google Santa Tracker update from Santa's Elves

Wed, 08/17/2016 - 00:10

Sam Thorogood, Developer Programs Engineer

Today, we're announcing that the open source version of Google's Santa Tracker has been updated with the Android and web experiences that ran in December 2015. We extended, enhanced and upgraded our code, and you can see how we used our developer products - including Firebase and Polymer - to build a fun, educational and engaging experience.

To get started, you can check out the code on GitHub at google/santa-tracker-web and google/santa-tracker-android. Both repositories include instructions so you can build your own version.

Santa Tracker isn’t just about watching Santa’s progress as he delivers presents on December 24. Visitors can also have fun with the winter-inspired experiences, games and educational content by exploring Santa's Village while Santa prepares for his big journey throughout the holidays.

Below is a summary of what we’ve released as open source.

Android app
  • The Santa Tracker Android app is a single APK, supporting all devices, such as phones, tablets and TVs, running Ice Cream Sandwich (4.0) and up. The source code for the app can be found here.
  • Santa Tracker leverages Firebase features, including Remote Config API, App Invites to invite your friends to play along, and Firebase Analytics to help our elves better understand users of the app.
  • Santa’s Village is a launcher for videos, games and the tracker that responds well to multiple devices such as phones and tablets. There's even an alternative launcher based on the Leanback user interface for Android TVs.

  • Games on Santa Tracker Android are built using many technologies such as JBox2D (gumball game), the Android view hierarchy (memory match game) and OpenGL with a special rendering engine (jetpack game). We've also included a holiday-themed variation of Pie Noon, a fun game that works on Android TV, your phone, and inside Google Cardboard's VR.
Android Wear

  • The custom watch faces on Android Wear provide a personalized touch. Having Santa or one of his friendly elves tell the time brings a smile to all. Building custom watch faces is a lot of fun but providing a performant, battery friendly watch face requires certain considerations. The watch face source code can be found here.
  • Santa Tracker uses notifications to let users know when Santa has started his journey. The notifications are further enhanced to provide a great experience on wearables using custom backgrounds and actions that deep link into the app.
On the web

  • Santa Tracker is mobile-first: this year's experience was built for the mobile web, including an amazing, brand-new, interactive yet fully responsive village, with three breakpoints, touch gesture support and support for the Web App Manifest.
  • To help us develop Santa at scale, we've upgraded to Polymer 1.0+. Santa Tracker's use of Polymer demonstrates how easy it is to package code into reusable components. Every house in Santa's Village is a custom element, only loaded when needed, minimizing the startup cost of Santa Tracker.

  • Many of the amazing new games (like Present Bounce) were built with the latest JavaScript standards (ES6) and are compiled to support older browsers via the Google Closure Compiler.
  • Santa Tracker's interactive and fun experience is enhanced using the Web Animations API, a standardized JavaScript API for unifying animated content.
  • We simplified the Chromecast support this year, focusing on a great screensaver that would count down to the big event on December 24th - and occasionally autoplay some of the great video content from around Santa's Village.

We hope that this update inspires you to make your own magical experiences based on all the interesting and exciting components that came together to make Santa Tracker!

Categories: Programming

Adding a bit more reality to your augmented reality apps with Tango

Wed, 08/10/2016 - 19:14

Posted by Sean Kirmani, Software Engineering Intern, Tango

Augmented reality scenes, where a virtual object is placed in a real environment, can surprise and delight people whether they’re playing with dominoes or trying to catch monsters. But without support for environmental lighting, these virtual objects can stick out rather than blend in with their environments. Ambient lighting should bleed onto an object, real objects should be seen in reflective surfaces, and shade should darken a virtual object.

Tango-enabled devices can see the world like we do, and they’re designed to bring mobile augmented reality closer to real reality. To help bring virtual objects to life, we’ve updated the Tango Unity SDK to enable developers to add environmental lighting to their Tango apps. Here’s how to get started:

Let’s dive in!

Before we begin, you’ll need to download the Tango Unity SDK. Then you can follow the steps below to make your reality a little brighter.

Step 1: Create a new Unity project and import the Tango SDK package into the project.

Step 2: Create a new scene. If you need help with this, check out the solar system tutorial from a previous post. Then you’ll add Tango Manager and Tango AR Camera prefabs to your scene and remove the default Main Camera game object. Also remove the artificial directional light. We won’t need that anymore. After doing this, you should see the scene hierarchy like this:

Step 3: In the Tango Manager game object, you’ll want to check Enable Video Overlay and set the method to Texture and Raw Bytes.

Step 4: Under Tango AR Camera, look for the Tango Environmental Lighting component. Make sure the Enable Environmental Lighting checkbox is checked.

Step 5: Add your game object that you’d like to be environmental lit to the scene. In our example, we’ll be using a pool ball. So let’s add a new Sphere.

Step 6: Let’s create a new material for our sphere. Go to Create > Material. We’ll be using our environmental lighting shader on this object. Under Shader, select Tango > Environmental Lighting > Standard.

Step 7: Let’s add a texture to our pool ball and tweak our smoothness parameter. The higher the smoothness, the more reflective our object becomes. Rougher objects have more diffuse lighting that is softer and spreads over the surface of the object. You can download the pool_ball_texture and import it into your project.

Step 8: Add your new material to your sphere, so you have a nicer looking pool ball.

Step 9: Compile and run the application again. You should now be able to see the environmentally lit pool ball!

You can also follow our previous post and be able to place your pool ball on surfaces. You don’t have to worry about your sphere rolling off your surface. Here are some comparison pictures of the pool ball with a static artificial light (left) and with environment lighting (right).

We hope you enjoyed this tutorial combining the joy of environmental lighting with the magic of AR. Stay tuned to this blog for more AR updates and tutorials!

We’re just getting started!

You’ve just created a more realistically lit pool ball that lives in AR. That’s a great start, but there’s a lot more you can do to make a high-performance smartphone AR application. Check out our Unity example code on GitHub (especially the Augmented Reality example) to learn more about building a good smartphone AR application.

Categories: Programming

Daydream Labs: positive social experiences in VR

Tue, 08/09/2016 - 17:52

Posted by Robbie Tilton, UX Designer, Google VR

At Daydream Labs, we have experimented with social interactions in VR. Just like in real reality, people naturally want to share and connect with others in VR. As developers and designers, we are excited to build social experiences that are fun and easy to use—but it’s just as important to make it safe and comfortable for all involved. Over the last year, we’ve learned a few ways to nudge people towards positive social experiences.

What can happen without clear social norms

People are curious and will test the limits of your VR experience. For example, when some people join a multiplayer app or game, they might wonder if they can reach their hand through another player’s head or stand inside another avatar’s body. Even with good intentions, this can make other people feel unsafe or uncomfortable.

For example, in a shopping experiment we built for the HTC Vive, two people could enter a virtual storefront and try on different hats, sunglasses, and accessories. There was no limit to how or where they could place a virtual accessory, so some people stuck hats on friends anywhere they would stick—like in front of their eyes. This had the unfortunate effect of blocking their vision. If they couldn’t remove the hat in front of their eyes with their controllers, they had no other recourse than to take off their headset and end their VR experience.


Protecting user safety

Everyone should feel safe and comfortable in VR. If we can anticipate the actions of others, then we may be able to discourage negative social behavior before it starts. For example, by designing personal space around each user, you can prevent other people from invading that personal space.

We built an experiment around playing poker where we tried new ways to discourage trolling. If someone left their seat at the poker table, their environment desaturated to black and white and their avatar would disappear from the other player’s view. A glowing blue personal space bubble would guide the person back to their seat. We found it’s enough to prevent a player from approaching their opponents to steal chips or invade personal space.


Reward positive behavior

If you want people to interact in positive ways—like high-fiving

Categories: Programming

Schell Games gives popular games a twist with Tango

Wed, 08/03/2016 - 18:36

Posted by Justin Quimby, Senior Product Manager Tango

At Tech World last month, our team showed off some of the latest Tango-enabled games. One crowd favorite was Domino World by Schell Games, which will be available on the first Tango-enabled device, Lenovo’s Phab 2 Pro, coming this fall. Schell Games has adapted a few classic games, including Jenga, into smartphone augmented reality, and their developers share their experience and the considerations they kept in mind as they gave dominoes a new twist.

Google: How did your team first hear about Tango technology?

Schell Games: The Tango team invited us to their Game Developer Workshop, where we learned about Tango and the types of apps we could develop for this platform.

Google: You took a classic game, and added AR elements. How did you come to dominoes?

Schell Games: At the Game Developer Workshop, we prototyped three games: a racing game, Jenga and a pet game. Of the three games, people connected the most with Jenga.

People loved sharing a device to play the game together—and they loved that they didn’t have to pick up all the Jenga pieces when the game was over! And from a developer perspective, Jenga was great as it highlighted Tango’s ability to recognize surfaces.

Based on how much people liked Jenga, we decided that Domino World would be our second game. Domino World gives players all the fun of dominoes, but without the setup effort or mess. We were inspired by YouTube videos where people of all ages were doing really creative things with dominoes. Our goal was to bring that experience to the phone as an immersive and fun augmented-reality experience.

Google: Which Tango features did you use in Jenga and Domino World?

Schell Games: We used motion tracking, which lets people walk around their dominoes or Jenga tower. We also used surface detection with the depth camera, so that the device recognizes when objects are placed on a surface.

Google: How does your development approach differ for AR apps versus standard mobile apps?

Schell Games: With Domino World, for example, our approach to augmented reality thrives on reinforcing the feeling that the player’s display is a “window on the world.” Toys and dominoes are (virtually) placed on the actual surfaces around the player, and the game’s controls aid players in manipulating objects in the space in front of them. As a result, the player is naturally encouraged to move around as they view, adjust and otherwise shape their ever-growing creations.

In contrast, traditional touchscreen controls largely work with metaphors of interacting with the screen’s image itself -- drawing on it, pinch-zooming it, etc. As a result, a more traditional touchscreen-controlled Domino World could have influenced players to remain more static and work with the existing view, as opposed to moving around to different vantage points.

Google: We noticed that you use a landscape orientation for Domino World. How did you decide to take that approach?

Schell Games: The decision to use landscape orientation for Domino World is the result of multiple smaller reasons all put together:

  • Many new players have a tendency to initially build wider versus deeper (possibly due to an instinctive desire to be able to more easily access their domino runs).
  • UI controls at the edges of a landscape layout minimize HUD overlap when working with wider versus deeper runs.
  • A landscape orientation naturally places players’ hands at the device’s corners, which makes for a more stable grip during gameplay.

Google: What surprised you the most while building with Tango?

Schell Games: We were quite surprised at how easy it was to build with the Tango SDK and add Tango functionality to our apps. We used the Unity Engine which made the whole process quite seamless. It took us just over two weeks to build Jenga and 10 weeks to build Domino World from beginning to end.

Google: How do you think Tango will change the way people play games?

Schell Games: Tango makes it easy to play AR games. You don’t need to print and cut out AR trackers or markers to place throughout your room to help orient the phone. Instead, your phone always knows where it is in relation to the AR objects and you can easily start playing—whether you’re in a living room or on a bus. It’s incredible to have this experience with just your mobile device.

Categories: Programming

Autotrack turns 1.0

Tue, 08/02/2016 - 22:31

Posted by Philip Walton, Developer Programs Engineer

Autotrack is a JavaScript library built for use with analytics.js that provides developers with a wide range of plugins to track the most common user interactions relevant to today's modern web.

The first version of autotrack for analytics.js was released on Github earlier this year, and since then the response and adoption from developers has been amazing. The project has been starred over a thousand times, and sites using autotrack are sending millions of hits to Google Analytics every single day.

Today I'm happy to announce that we've released autotrack version 1.0, which includes several new plugins, improvements to the existing plugins, and tons of new ways to customize autotrack to meet your needs.

Note: autotrack is not an official Google Analytics product and does not qualify for Google Analytics 360 support. It is maintained by members of the Google Analytics developer platform team and is primarily intended for a developer audience.

New plugins

Based on the feedback and numerous feature requests we received from developers over the past few months, we've added the following new autotrack plugins:

Impression Tracker

The impression tracker plugin allows you to track when an element is visible within the browser viewport. This lets you much more reliably determine whether a particular advertisement or call-to-action button was seen by the user.

Impression tracking has been historically tricky to implement on the web, particularly in a way that doesn't degrade the performance of your site. This plugin leverages new browser APIs that are specifically designed to track these kinds of interactions in a highly performant way.

Clean URL Tracker

If your analytics implementation sends pageviews to Google Analytics without modifying the URL, then you've probably experienced the problem of seeing multiple different page paths in your reports that all point to the same place. Here's an example:

  • /contact
  • /contact/
  • /contact?hl=en
  • /contact/index.html

The clean URL tracker plugin avoids this problem by letting you set your preferred URL format (e.g. strip trailing slashes, remove index.html filenames, remove query parameters, etc.), and the plugin automatically updates all page URLs based on your preference before sending them to Google Analytics.

Note: setting up View Filters in your Google Analytics view settings is another way to modify the URLs sent to Google Analytics.

Page Visibility Tracker

It's becoming increasingly common for users to visit sites on the web and then leave them open in an inactive browser tab for hours or even days. And when users return to your site, they often won't reload the page, especially if your site fetches new content in the background.

If your site implements just the default javascript tracking snippet, these types of interactions will never be captured.

The page visibility tracker plugin takes a more modern approach to what should constitute a pageview. In addition to tracking when a page gets loaded, it also tracks when the visibility state of the page changes (i.e. when the tab goes into or comes out of the background). These additional interaction events give you more insight into how users behave on your site.

Updates and improvements

In addition to the new plugins added to autotrack, the existing plugins have undergone some significant improvements, most notably in the ability to customize them to your needs.

All plugins that send data to Google Analytics now give you 100% control over precisely what fields get sent, allowing you to set, modify, or remove anything you want. This gives advanced users the ability to set their own custom dimensions on hits or change the interaction setting to better reflect how they choose to measure bounce rate.

Users upgrading from previous versions of autotrack should refer to the upgrade guide for a complete list of changes (note: some of the changes are incompatible with previous versions).

Who should use autotrack

Perhaps the most common question we received after the initial release of autotrack is who should use it. This was especially true of Google Tag Manager users who wanted to take advantage of some of the more advanced autotrack features.

Autotrack is a developer project intended to demonstrate and streamline some advanced tracking techniques with Google Analytics, and it's primarily intended for a developer audience. Autotrack will be a good fit for small to medium sized developer teams who already have analytics.js on their website or who prefer to manage their tracking implementation in code.

Large teams and organizations, those with more complex collaboration and testing needs, and those with tagging needs beyond just Google Analytics should instead consider using Google Tag Manager. While Google Tag Manager does not currently support custom analytics.js plugins like those that are part of autotrack, many of the same tracking techniques are easy to achieve with Tag Manager’s built-in triggers, and others may be achieved by pushing data layer events based on custom code on your site or in Custom HTML tags in Google Tag Manager. Read Google Analytics Events in the Google Tag Manager help center to learn more about automatic event tracking based on clicks and form submissions.

Next steps

If you're not already using autotrack but would like to, check out the installation and usage section of the documentation. If you already use autotrack and want to upgrade to the latest version, be sure to read the upgrade guide first.

To get a sense of what the data captured by autotrack looks like, the Google Analytics Demos & Tools site includes several reports displaying its own autotrack usage data. If you want to go deeper, the autotrack library is open source and can be a great learning resource. Have a read through the plugin source code to get a better understanding of how some of the advanced analytics.js features work.

Lastly, if you have feedback or suggestions, please let us know. You can report bugs or submit any issues on Github.

Categories: Programming

Mobile web and machine learning solutions: Case studies from Launchpad Accelerator

Thu, 07/28/2016 - 19:27

Roy Glasberg, Global Lead, Launchpad and Launchpad Accelerator

Last month, the second cohort of Launchpad Accelerator, Google’s high-touch global program for late-stage startups, came and conquered their app challenges with the help of mentors at Google HQ.

What did they learn that they’d like to share with developers across the world? Check out the video below for solutions from 3 different startups, and an in-depth review of MagicPin’s mobile web challenge and solution.


Startup:

MagicPin from India is a social network app that curates a local user base around locations, allowing merchants to connect with these specific audiences.

Mobile web challenge:

In India, downloading an app requires a high commitment. On average a user would keep 5 or 6 apps on their phone. According to Anshoo Sharma, Co-Founder and CEO of MagicPin, “If you want to be the next app that they download, there is a high barrier.”

Jordan Adler, Google Developer Advocate: “Devices in markets like India have limited space (on average 128 MB of memory), and when you add in system features only 40 MB of user space is left. And if a typical APK is a few megabytes, you can only have a few apps before you have to stop downloading.”

Solution:

Jordan Adler: “One of the great things about Progressive Web Apps is you don’t have to request the commitment (to download an app) upfront. You can start to build a relationship with the user through the web interface, and over time the web app can become more like a native app, it can be housed on a device, cache content and work offline.”

Anshoo Sharma: “In the last 1.5 weeks we have been here we have already launched a micro version of our platform on Progressive Web Apps. And the experience is great! Without using the (mobile) app people can get as good an experience.”

About Launchpad Accelerator

Launchpad Accelerator is a six-month accelerator that enables late-stage app startups from emerging markets to successfully scale. Here's a two-minute video about the Accelerator.

Categories: Programming