Software Development Blogs: Programming, Software Testing, Agile, Project Management


Google Code Blog

I/O session: Location and Proximity Superpowers: Eddystone + Google Beacon Platform

Mon, 07/25/2016 - 19:10

Originally posted on Geo Developers blog

Bluetooth beacons mark important places and objects in a way that your phone understands. Last year, we introduced the Google beacon platform, including Eddystone, Nearby Messages and the Proximity Beacon API, which helps developers build beacon-powered proximity and location features in their apps.
Since then, we’ve learned that when deployment of physical infrastructure is involved, it’s important to get the best possible value from your investment. That’s why the Google beacon platform works differently from the traditional approach.
We don’t think of beacons as only pointing to a single feature in an app, or a single web resource. Instead, the Google beacon platform enables extensible location infrastructure that you can manage through your Google Developer project and reuse many times. Each beacon can take part in several different interactions: through your app, through other developers’ apps, through Google services, and the web. All of this functionality works transparently across Eddystone-UID and Eddystone-EID -- because using our APIs means you never have to think about monitoring for the individual bytes that a beacon is broadcasting.
For example, we’re excited that the City of Amsterdam has adopted Eddystone and the newly released publicly visible namespace feature for the foundation of their open beacon network. Or, through Nearby Notifications, Eddystone and the Google beacon platform enable explorers of the BFG Dream Jar Trail to discover cloud-updateable content in Dream Jars across London.
To make getting started as easy as possible, we’ve provided a set of tools to help developers, including links to beacon manufacturers that can help you with Eddystone, Beacon Tools (for Android and iOS), the Beacon Dashboard, a codelab and of course our documentation. And, if you were not able to attend Google I/O in person this year, you can watch my session, Location and Proximity Superpowers: Eddystone + Google Beacon Platform. We can’t wait to see what you build!
About Peter: I am a Product Manager for the Google beacon platform, including the open beacon format Eddystone, and Google's cloud services that integrate beacon technology with first and third party apps. When I’m not working at Google I enjoy taking my dog, Oscar, for walks on Hampstead Heath.
Categories: Programming

Building for Billions

Fri, 07/01/2016 - 17:37

Originally posted on Android Developers blog

Posted by Sam Dutton, Ankur Kotwal, Developer Advocates; Liz Yepsen, Program Manager

‘TOP-UP WARNING.’ ‘NO CONNECTION.’ ‘INSUFFICIENT BANDWIDTH TO PLAY THIS RESOURCE.’

These are common warnings for many smartphone users around the world.

To build products that work for billions of users, developers must address key challenges: limited or intermittent connectivity, device compatibility, varying screen sizes, high data costs, short-lived batteries. We first presented developers.google.com/billions and related Android and Web resources at Google I/O last month, and today you can watch the video presentations about Android or the Web.

These best practices can help developers reach billions by delivering exceptional performance across a range of connections, data plans, and devices. g.co/dev/billions will help you:

Seamlessly transition between slow, intermediate, and offline environments

Your users move from place to place, from speedy wireless to patchy or expensive data. Manage these transitions by storing data, queueing requests, optimizing image handling, and performing core functions entirely offline.
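
As a rough sketch of the request-queueing idea on the web (the helper names here are ours, not from the post), a page can hold requests while navigator.onLine reports being offline and replay them once the browser fires its online event:

// Minimal offline request queue for the browser (illustrative sketch).
// Requests made while offline are held and replayed when connectivity returns.
var pendingRequests = [];

function sendOrQueue(url, options) {
  if (navigator.onLine) {
    return fetch(url, options);
  }
  // Offline: remember the request and settle the promise after replay.
  return new Promise(function(resolve, reject) {
    pendingRequests.push({ url: url, options: options, resolve: resolve, reject: reject });
  });
}

// Flush the queue as soon as the browser reports connectivity.
window.addEventListener('online', function() {
  while (pendingRequests.length > 0) {
    var req = pendingRequests.shift();
    fetch(req.url, req.options).then(req.resolve, req.reject);
  }
});

A production app would persist the queue (for example in IndexedDB) so requests survive a page reload, but the event-driven shape stays the same.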

Provide the right content for the right context

Keep context in mind - how and where do your users consume your content? Selecting text and media that works well across different viewport sizes, keeping text short (for scrolling on the go), providing a simple UI that doesn’t distract from content, and removing redundant content can all increase perception of your app’s quality while giving real performance gains like reduced data transfer. Once these practices are in place, localization options can grow audience reach and increase engagement.

Optimize for mobile hardware

Ensure your app or Web content is served and runs well for your widest possible addressable market, covering all actively used OS versions, while still following best practices, by testing on virtual or actual devices in target markets. Native Android apps should set minimum and target SDKs. Also, remember low cost phones have smaller amounts of RAM; apps should therefore adjust usage accordingly and minimize background running. For in-depth information on minimizing APK size, check out this series of Medium posts. On the Web, optimize JavaScript CPU usage, avoid raster image rendering, and minimize resource requests. Find out more here.

Reduce battery consumption

Low cost phones usually have shorter battery life. Users are sensitive to battery consumption levels and excessive consumption can lead to a high uninstall rate or avoidance of your site. Benchmark your battery usage against sessions on other pages or apps, or using tools such as Battery Historian, and avoid long-running processes which drain batteries.

Conserve data usage

Whatever you’re building, conserve data usage in three simple steps: understand loading requirements, reduce the amount of data required for interaction, and streamline navigation so users get what they want quickly. Conserving data on behalf of your users (and with native apps, offering configurable network usage) helps retain data-sensitive users -- especially those on prepaid plans or contracts with limited data -- as even “unlimited” plans can become expensive when roaming or if unexpected fees are applied.
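
Where the browser exposes it, the Network Information API's saveData flag gives one concrete signal to act on. A small sketch (the image URLs are hypothetical), feature-detecting before use:

// Sketch: respect the user's data-saving preference when picking assets.
// navigator.connection is not available in every browser, so feature-detect.
function prefersLightAssets() {
  var connection = navigator.connection;
  return Boolean(connection && connection.saveData);
}

var imageUrl = prefersLightAssets()
    ? '/images/hero-small.jpg'   // hypothetical low-res variant
    : '/images/hero-large.jpg';  // hypothetical full-res variant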

Have another insight, or a success launching in low-connectivity conditions or on low-cost devices? Let us know on our G+ post.

Categories: Programming

Announcing turndown of the Google Feed API

Thu, 06/30/2016 - 21:10

Posted by Dan Ciruli, Product Manager, Google Cloud Platform Team

The Google Feed API was one of Google’s original AJAX APIs, first announced in 2007. It had a good run. However, interest and use of the API has waned over time, and it is running on API infrastructure that is now two generations old at Google.

Along with many of our other free APIs, in April 2012 we announced that we would be deprecating it in three years’ time. As of April 2015, the deprecation period has elapsed. While we have continued to operate the API in the interim, it is now time to announce the turndown.

As a final courtesy to developers, we plan to operate the API until September 29, 2016, when Google Feed API will cease operation. Please ensure that your application is no longer using this API by then.

Google appreciates how important APIs and developer trust are and we do not take decisions like this one lightly. We remain committed to providing great services and being open and communicative about their statuses.

For those developers who find RSS an essential part of their workflow, there are commercial alternatives that may fit your use case very well.

Categories: Programming

Universal rendering with SwiftShader, now open source

Thu, 06/30/2016 - 21:01

Originally Posted on Chromium Blog


Posted by Nicolas Capens, Software Engineer and Pixel Pirate
SwiftShader is a software library for high-performance graphics rendering on the CPU. Google already uses this library in multiple products, including Chrome, Android development tools, and cloud services. Starting today, SwiftShader is fully open source, expanding its pool of potential applications.


Since 2009, Chrome has used SwiftShader to enable 3D rendering on systems that can’t fully support hardware-accelerated rendering. While 3D content like WebGL is written for a GPU, some users’ devices don’t have graphics hardware capable of executing this content. Others may have drivers with serious bugs which can make 3D rendering unreliable, or even impossible. Chrome uses SwiftShader on these systems in order to ensure 3D web content is available to all users.

Chrome running without SwiftShader on a machine with an inadequate GPU (left) cannot run the WebGL Globe experiment. The same machine with SwiftShader enabled (right) is able to fully render the content.


SwiftShader implements the same OpenGL ES graphics API used by Chrome and Android. Making SwiftShader open source will enable other browser vendors to support 3D content universally and move the web platform forward as a whole. In particular, unconditional WebGL support allows web developers to create more engaging content, such as casual games, educational apps, collaborative content creation software, product showcases, virtual tours, and more. SwiftShader also has applications in the cloud, enabling rendering on GPU-less systems.


To provide users with the best performance, SwiftShader uses several techniques to efficiently perform graphics calculations on the CPU. Dynamic code generation enables tailoring the code towards the tasks at hand at run-time, as opposed to the more common compile-time optimization. This complex approach is simplified through the use of Reactor, a custom C++ embedded language with an intuitive imperative syntax. SwiftShader also uses vector operations in SIMT fashion, together with multi-threading technology, to increase parallelism across the CPU’s available cores and vector units. This enables real-time rendering for uses such as app streaming on Android.


Developers can access the SwiftShader source code from its git repository. Sign up for the mailing list to stay up to date on the latest developments and collaborate with other SwiftShader developers from the open-source community.
Categories: Programming

Daydream Labs: animating 3D objects in VR

Tue, 06/28/2016 - 17:37

Rob Jagnow, Software Engineer, Google VR

Whether you're playing a game or watching a video, VR lets you step inside a new world and become the hero of a story. But what if you want to tell a story of your own?

Producing immersive 3D animation can be difficult and expensive. It requires complex software to set keyframes with splined interpolation or costly motion capture setups to track how live actors move through a scene. Professional animators spend considerable effort to create sequences that look expressive and natural.

At Daydream Labs, we've been experimenting with ways to reduce technical complexity and even add a greater sense of play when animating in VR. In one experiment we built, people could bring characters to life by picking up toys, moving them through space and time, and then replaying the scene.


As we saw people play with the animation experiment we built, we noticed a few things:

The need for complex metaphors goes away in VR: What can be complicated in 2D can be made intuitive in 3D. Instead of animating with graph editors or icons representing location, people could simply reach out, grab a virtual toy, and carry it through the scene. These simple animations had a handmade charm that conveyed a surprising degree of emotion.

The learning curve drops to zero: People were already familiar with how to interact with real toys, so they jumped right in and got started telling their stories. They didn't need a lengthy tutorial, and they were able to modify their animations and even add new characters without any additional help.

People react to virtual environments the same way they react to real ones: When people entered a playful VR environment, they understood it was a safe space to play with the toys around them. They felt comfortable performing and speaking in funny voices. They took more risks knowing the virtual environment was designed for play.

To create more intricate animations, we also built another experiment that let people independently animate the joints of a single character. It let you record your character’s movement as you separately animated the feet, hands, and head — just like you would with a puppet.


VR allows us to rethink software and make certain use cases more natural and intuitive. While this kind of animation system won’t replace professional tools, it can allow anyone to tell their own stories. There are many examples of using VR for storytelling, especially with video and animation, and we’re excited to see new perspectives as more creators share their stories in VR.

Categories: Programming

TensorFlow v0.9 now available with improved mobile support

Mon, 06/27/2016 - 19:56

Posted by Pete Warden, Software Engineer

When we started building TensorFlow, supporting mobile devices was a top priority. We were already supporting many of Google’s mobile apps like Translate, Maps, and the Google app, which use neural networks running on devices. We knew that we had to make mobile a first-class part of open source TensorFlow.

TensorFlow has been available to developers on Android since launch, and today we're happy to add iOS in v0.9 of TensorFlow, along with Raspberry Pi support and new compilation options.

To build TensorFlow on iOS we’ve created a set of scripts, including a makefile, to drive the cross-compilation process. The makefile can also help you build TensorFlow without using Bazel, which is not always available.

All this is in the latest TensorFlow distribution. You can read more by visiting our Mobile TensorFlow guide and the documentation in our iOS samples and Android sample. The mobile samples allow you to classify images using the ImageNet Inception v1 classifier.

These mobile samples are just the beginning -- we'd love your help and your contributions. Tag social media posts with #tensorflow so we can hear about your projects!

See the full TensorFlow 0.9.0 release notes here.

Categories: Programming

New Google Cast SDK released for Android and iOS

Mon, 06/27/2016 - 19:28

Posted by Adam Champy, Product Manager for Google Cast SDK

Google Cast makes it easy for developers to extend their mobile experience to the most beautiful screens and speakers in the home.

At Google I/O, we announced our new Google Cast SDK. This new SDK focuses on making development for Cast quicker, more reliable, and easier to maintain. We’ve introduced full state management that helps you implement the right abstraction between your app and Google Cast. We’ve also delivered a full Cast user experience, matching the Google Cast design checklist.

Today we are releasing this SDK for Android and iOS Senders, including an introductory video, full documentation, and reference sample apps and codelab tutorials for both platforms. Initial developer feedback is that first-time implementations can save significant development time compared with our previous SDKs.


A few things we’ve announced will be coming in the next few months, including a customizable Expanded Controller and customization options for the Mini Controller, to help accelerate development even further.

Drop by our Cast developer site to learn about the new SDK and APIs, and join our developer community on Google+ at g.co/googlecastdev to discuss this with other developers.

Categories: Programming

Project Bloks: Making code physical for kids

Mon, 06/27/2016 - 16:46

Originally posted on Google Research Blog


Posted by Steve Vranakis and Jayme Goldstein, Executive Creative Director and Project Lead, Google Creative Lab
At Google, we’re passionate about empowering children to create and explore with technology. We believe that when children learn to code, they’re not just learning how to program a computer—they’re learning a new language for creative expression and are developing computational thinking: a skillset for solving problems of all kinds.
In fact, it’s a skillset whose importance is being recognised around the world—from President Obama’s CS4All program to the inclusion of Computer Science in the UK National Curriculum. We’ve long supported and advocated the furthering of CS education through programs and platforms such as Blockly, Scratch Blocks, CS First and Made w/ Code.
Today, we’re happy to announce Project Bloks, a research collaboration between Google, Paulo Blikstein (Stanford University) and IDEO with the goal of creating an open hardware platform that researchers, developers and designers can use to build physical coding experiences. As a first step, we’ve created a system for tangible programming and built a working prototype with it. We’re sharing our progress before conducting more research over the summer to inform what comes next.
Physical codingKids are inherently playful and social. They naturally play and learn by using their hands, building stuff and doing things together. Making code physical - known as tangible programming - offers a unique way to combine the way children innately play and learn with computational thinking.
Project Bloks is preceded and shaped by a long history of educational theory and research in the area of hands-on learning. From Friedrich Froebel, Maria Montessori and Jean Piaget’s pioneering work in the area of learning by experience, exploration and manipulation, to the research started in the 1970s by Seymour Papert and Radia Perlman with LOGO and TORTIS. This exploration has continued to grow and includes a wide range of research and platforms.
However, designing kits for tangible programming is challenging—requiring the resources and time to develop both the software and the hardware. Our goal is to remove those barriers. By creating an open platform, Project Bloks will allow designers, developers and researchers to focus on innovating, experimenting and creating new ways to help kids develop computational thinking. Our vision is that, one day, the Project Bloks platform becomes for tangible programming what Blockly is for on-screen programming.

The Project Bloks system

We’ve designed a system that developers can customise, reconfigure and rearrange to create all kinds of different tangible programming experiences. The Project Bloks system is made up of three core components: the “Brain Board”, “Base Boards” and “Pucks”. When connected together, they create a set of instructions which can be sent to connected devices, things like toys or tablets, over WiFi or Bluetooth.

Pucks: abundant, inexpensive, customisable physical instructions

Pucks are what make the Project Bloks system so versatile. They help bring the infinite flexibility of software programming commands to tangible programming experiences. Pucks can be programmed with different instructions, such as ‘turn on or off’, ‘move left’ or ‘jump’. They can also take the shape of many different interactive forms—like switches, dials or buttons. With no active electronic components, they’re also incredibly cheap and easy to make. At a minimum, all you'd need to make a puck is a piece of paper and some conductive ink. Pucks allow for the cheap and easy creation and customisation of an endless number of different domain-specific physical instructions.

Base Boards: a modular design for diverse tangible programming experiences

Base Boards read a Puck’s instruction through a capacitive sensor. They act as a conduit for a Puck’s command to the Brain Board. Base Boards are modular and can be connected in sequence and in different orientations to create different programming flows and experiences. Each Base Board is fitted with a haptic motor and LEDs that can be used to give end-users real-time feedback on their programming experience. The Base Boards can also trigger audio feedback from the Brain Board’s built-in speaker.

Brain Board: control any device that has an API over WiFi or Bluetooth

The Brain Board is the processing unit of the system, built on a Raspberry Pi Zero. It also provides the other boards with power, and contains an API to receive and send data to the Base Boards. It sends the Base Boards’ instructions to any device with WiFi or Bluetooth connectivity and an API.

As a whole, the Project Bloks system can take on different form factors and be made out of different materials. This means developers have the flexibility to create diverse experiences that can help kids develop computational thinking: from composing music using functions to playing around with sensors or anything else they care to invent.

The Coding Kit

To show how designers, developers, and researchers might make use of the system, the Project Bloks team worked with IDEO to create a reference device, called the Coding Kit. It lets kids learn basic concepts of programming by allowing them to put code bricks together to create a set of instructions that can be sent to control connected toys and devices—anything from a tablet, to a drawing robot or educational tools for exploring science like LEGO® Education WeDo 2.0.

What’s next?

We are looking for participants (educators, developers, parents and researchers) from around the world who would like to help shape the future of Computer Science education by remotely taking part in our research studies later in the year. If you would like to be part of our research study or simply receive updates on the project, please sign up. If you want more context and detail on Project Bloks, you can read our position paper. Finally, a big thank you to the team beyond Google who’ve helped us get this far—including the pioneers of tangible learning and programming who’ve inspired us and informed so much of our thinking.
Categories: Programming

Introducing Firebase Authentication

Thu, 06/23/2016 - 21:35

Originally posted on Firebase blog

Posted by Laurence Moroney, Developer Advocate and Alfonso Gómez Jordana, Associate Product Manager

For most developers, building an authentication system for your app can feel a lot like paying taxes. Both are relatively hard-to-understand tasks that you have no choice but to do, and both can have big consequences if you get them wrong. No one ever started a company to pay taxes and no one ever built an app just so they could create a great login system. They just seem to be inescapable costs.

But now, you can at least free yourself from the auth tax. With Firebase Authentication, you can outsource your entire authentication system to Firebase so that you can concentrate on building great features for your app. Firebase Authentication makes it easier to get your users signed-in without having to understand the complexities behind implementing your own authentication system. It offers a straightforward getting started experience, optional UX components designed to minimize user friction, and is built on open standards and backed by Google infrastructure.

Implementing Firebase Authentication is relatively fast and easy. From the Firebase console, just choose from the popular login methods that you want to offer (like Facebook, Google, Twitter and email/password) and then add the Firebase SDK to your app. Your app will then be able to connect securely with the real time database, Firebase storage or to your own custom back end. If you have an auth system already, you can use Firebase Authentication as a bridge to other Firebase features.
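
To give a flavor of the client side, here is a minimal sketch of email/password sign-in with the namespaced Firebase Web SDK (the credentials are placeholders, and the exact shape of the resolved value varies across SDK versions):

// Sketch: email/password sign-in with the Firebase Web SDK.
// Assumes the Firebase SDK has been loaded and initialized for your project.
firebase.auth()
    .signInWithEmailAndPassword('jane@example.com', 'a-placeholder-password')
    .then(function(user) {
      console.log('Signed in as', user.email);
    })
    .catch(function(error) {
      console.error('Sign-in failed:', error.code, error.message);
    });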

Firebase Authentication also includes an open source UI library that streamlines building the many auth flows required to give your users a good experience. Password resets, account linking, and login hints that reduce the cognitive load around multiple login choices - they are all pre-built with Firebase Authentication UI. These flows are based on years of UX research optimizing the sign-in and sign-up journeys on Google, YouTube and Android. It includes Smart Lock for Passwords on Android, which has led to significant improvements in sign-in conversion for many apps. And because Firebase UI is open source, the interface is fully customizable so it feels like a completely natural part of your app. If you prefer, you are also free to create your own UI from scratch using our client APIs.

And Firebase Authentication is built around openness and security. It leverages OAuth 2.0 and OpenID Connect, industry standards designed for security, interoperability, and portability. Members of the Firebase Authentication team helped design these protocols and used their expertise to weave in latest security practices like ID tokens, revocable sessions, and native app anti-spoofing measures to make your app easier to use and avoid many common security problems. And code is independently reviewed by the Google Security team and the service is protected in Google’s infrastructure.

Fabulous uses Firebase Auth to quickly implement sign-in

Fabulous uses Firebase Authentication to power their login system. Fabulous is a research-based app incubated in Duke University’s Center for Advanced Hindsight. Its goal is to help users to embark on a journey to reset poor habits, replacing them with healthy rituals, with the ultimate goal of improving health and well-being.

The developers of Fabulous wanted to implement an onboarding flow that was easy to use, required minimal updates, and reduced friction for the end user. They wanted an anonymous option so that users could experiment with the app before signing up. They also wanted to support multiple login types and a sign-in flow consistent with the look and feel of the app.

“I was able to implement auth in a single afternoon. I remember that I spent weeks before creating my own solution that I had to update each time the providers changed their API”
- Amine Laadhari, Fabulous CTO.

Malang Studio cut time-to market by months using Firebase Auth

Chu-Day is an application (available on Android and iOS) that helps couples to never forget the dates that matter most to them. It was created by the Korean firm Malang Studio, that develops character-centric, gamified lifestyle applications.

Generally, countdown and anniversary apps do not require users to sign-in, but Malang Studio wanted to make Chu-day special, and differentiate it from others by offering the ability to connect couples so they could jointly countdown to a special anniversary date. This required a sign-in feature, and in order to prevent users from dropping out, Chu-day needed to make the sign-in process seamless.

Malang Studio was able to integrate an onboarding flow for their apps, using Facebook and Google Sign-In, in one day, without having to worry about server deployment or databases. In addition, Malang Studio has also been taking advantage of the Firebase User Management Console, which helped them develop and test their sign-in implementation as well as manage their users:

“Firebase Authentication required minimum configuration so implementing social account signup was easy and fast. User management feature provided in the console was excellent and we could easily implement our user auth system.”
- Marc Yeongho Kim, CEO / Founder from Malang Studio

For more about Firebase Authentication, visit the developers site and watch our I/O 2016 session, “Best practices for a great sign-in experience.”

Categories: Programming

Introducing the Android Basics Nanodegree

Wed, 06/22/2016 - 22:50

Posted by Shanea King-Roberson, Lead Program Manager (Twitter: @shaneakr, Instagram: @theshanea)


Do you have an idea for an app but you don’t know where to start? There are over 1 billion Android devices worldwide, providing a way for you to deliver your ideas to the right people at the right time. Google, in partnership with Udacity, is making Android development accessible and understandable to everyone, so that regardless of your background, you can learn to build apps that improve the lives of people around you.

Enroll in the new Android Basics Nanodegree. This series of courses and services teaches you how to build simple Android apps--even if you have little or no programming experience. Take a look at some of the apps built by our students:

The app "ROP Tutorial" built by student Arpy Vanyan raises awareness of a potentially blinding eye disorder called Retinopathy of Prematurity that can affect newborn babies.

And user Charles Tommo created an app called “Dr Malaria” that teaches people ways to prevent malaria.

With courses designed by Google, you can learn skills that are applicable to building apps that solve real world problems. You can learn at your own pace to use Android Studio (Google’s official tool for Android app development) to design app user interfaces and implement user interactions using the Java programming language.

The courses walk you through step-by-step on how to build an order form for a coffee shop, an app to track pets in a shelter, an app that teaches vocabulary words from the Native American Miwok tribe, and an app on recent earthquakes in the world. At the end of the course, you will have an entire portfolio of apps to share with your friends and family.

Upon completing the Android Basics Nanodegree, you also have the opportunity to continue your learning with the Career-track Android Nanodegree (for intermediate developers). The first 50 participants to finish the Android Basics Nanodegree have a chance to win a scholarship for the Career-track Android Nanodegree. Please visit udacity.com/legal/scholarship for additional details and eligibility requirements. You now have a complete learning path to help you become a technology entrepreneur or, most importantly, build very cool Android apps for yourself, your communities, and even the world.

All of the individual courses that make up this Nanodegree are available online for no charge at udacity.com/google. In addition, Udacity provides paid services, including access to coaches, guidance on your project, help staying on track, career counseling, and a certificate upon completion for a fee.

You will be exposed to introductory computer science concepts in the Java programming language, as you learn the following skills.

  • Build app user interfaces
  • Implement user interactions
  • Store information in a database
  • Pull data from the internet into your app
  • Identify and fix unexpected behavior in the app
  • Localize your app to support other languages

To enroll in the Android Basics Nanodegree program, click here.

See you in class!

Categories: Programming

Introducing the Google Sheets API v4: Transferring data from a SQL database to a Sheet

Thu, 06/16/2016 - 17:23

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Apps

At Google I/O 2016, we launched a new Google Sheets API—click here to watch the entire announcement. The updated API includes many new features that weren’t available in previous versions, including access to functionality found in the Sheets desktop and mobile user interfaces. My latest DevByte video shows developers how to get data into and out of a Google Sheet programmatically, walking through a simple script that reads rows out of a relational database and transfers the data to a brand new Google Sheet.

Let’s take a sneak peek at the code covered in the video. Assuming that SHEETS has been established as the API service endpoint, SHEET_ID is the ID of the Sheet to write to, and data is an array with all the database rows, this is the only call developers need to make to write that raw data into the Sheet:


SHEETS.spreadsheets().values().update(spreadsheetId=SHEET_ID,
    range='A1', body=data, valueInputOption='RAW').execute()

Reading rows out of a Sheet is even easier. With SHEETS and SHEET_ID again, this is all you need to read and display those rows:

rows = SHEETS.spreadsheets().values().get(spreadsheetId=SHEET_ID,
    range='Sheet1').execute().get('values', [])
for row in rows:
    print(row)

If you’re ready to get started, take a look at the Python or other quickstarts in a variety of languages before checking out the DevByte. If you want a deeper dive into the code covered in the video, check out the post at my Python blog. Once you get going with the API, one of the challenges developers face is in constructing the JSON payload to send in API calls—the common operations samples can really help you with this. Finally, if you’re ready to get going with a meatier example, check out our JavaScript codelab where you’ll write a sample Node.js app that manages customer orders for a toy company, the database of which is used in this DevByte, preparing you for the codelab.

We hope all these resources help developers create amazing applications and awesome tools with the new Google Sheets API! Please subscribe to our channel, give us your feedback below, and tell us what topics you would like to see in future episodes!

Categories: Programming

Web Standards and Coffee with Googler Alex Danilo

Thu, 06/09/2016 - 22:14

Posted by Laurence Moroney, Developer Advocate

“Without standards, things don’t work right,” said Alex Danilo, a Googler working on the HTML5 specs, trying to help us all build a better web.

In 1999, the Mars Climate Orbiter mission failed because of a bug, where onboard software represented output in one standard of measurement, while a different software module needed data in a different format. Alex discusses many other examples of how the lack of industry standards can result in problems, such as early rail systems having different gauge widths in different states, impeding travel.

Alex works with the Web Platform Working Group, whose charter is to continue the development of the HTML language, improving client-side application development, including APIs and markup vocabularies.

He shares with us details of the upcoming HTML 5.1, a refinement of HTML 5, showing us the great validator tool that makes it easier for developers to ensure that their markup meets standards, and the Test the Web Forward initiative to help uncover bugs and compatibility issues between browsers.

You can learn more about Google and Web development at the Web Fundamentals site.

Categories: Programming

Surface new proximity-based experiences to users with Nearby

Thu, 06/09/2016 - 17:16

Posted by Akshay Kannan, Product Manager

Today we're launching Nearby on Android, a new surface for users to discover and interact with the things around them. This extends the Nearby APIs we launched last year, which make it easy to discover and communicate with other nearby devices and beacons. Earlier this year, we also started experimenting with Physical Web beacons in Chrome for Android. With Nearby, we’re taking this a step further.

Imagine pulling up a barcode scanner when you’re at the store, or discovering an audio tour while you’re exploring a museum–these are the sorts of experiences that Nearby can enable. To make this possible, we're allowing developers to associate their mobile app or a website with a beacon.



A number of developers have already been building compelling proximity-based experiences using beacons and Nearby.

Getting started is simple. First, get some Eddystone beacons -- you can order these from any one of our Eddystone-certified manufacturers. Android devices and other BLE-equipped smart devices can also be configured to broadcast in the Eddystone format.

Second, configure your beacon to point to your desired experience. This can be a mobile web page using the Physical Web, or you can link directly to an experience in your app. For users who don’t have your app, you can either provide a mobile web fallback or request a direct app install.
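
When a beacon should point into an app, the association is typically registered as an attachment through the Proximity Beacon API. As a hedged sketch of what that REST call can look like (the beacon name, namespaced type, and OAuth token below are all placeholders):

// Sketch: attach data to a registered beacon via the Proximity Beacon API.
// Attachment data must be base64-encoded; names and token are placeholders.
var oauthToken = 'YOUR_OAUTH_TOKEN'; // placeholder OAuth 2.0 access token
var beaconName = 'beacons/3!0123456789abcdef0123456789abcdef'; // hypothetical
var attachment = {
  namespacedType: 'my-project/string', // hypothetical namespace/type
  data: btoa('https://example.com/audio-tour') // payload your app interprets
};

fetch('https://proximitybeacon.googleapis.com/v1beta1/' + beaconName + '/attachments', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer ' + oauthToken,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify(attachment)
}).then(function(res) { return res.json(); }).then(console.log);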

Nearby has started rolling out to users as part of the upcoming Google Play Services release and will work on Android devices running 4.4 (KitKat) and above. Check out our developer documentation to get started. To learn more about Nearby Notifications in Android, also check out our I/O 2016 session, starting at 17:10.

Categories: Programming

Daydream Labs: VR plays well with others

Tue, 06/07/2016 - 17:04

Posted by Rob Jagnow, Software Engineer, Google VR

At Daydream Labs, we pair engineers with designers to rapidly prototype virtual reality concepts, and we’ve already started to share our learnings with the VR community. This week, we focus on social. In many of our experiments, we’ve found that being in VR with others amplifies and improves experiences in VR, as long as you take a few things into account. Here’s what we’ve learned so far:

Simplicity can be powerful: Avatars (or the virtual representations of people in VR) can be simplified to just a floating head with googly eyes and still convey a surprising amount of emotion, intent, and social information. Eyes give people a location to look to and speak towards, but they also increase face-to-face communication by making even basic avatars feel more human. When we combine this with hands and a spatially-located voice, it comes together to create a sense of shared presence.



Connecting the real and the virtual: Even when someone is alone in VR, you can make them feel connected. For example, you can continue to carry a conversation even if you’re not in VR with them. Your voice can serve as a subtle reminder that they’re spanning two spaces—the real and the virtual. This asymmetric experience can be a fun way to help ground party games where one player is in VR but other players aren’t, like with charades or Pictionary.

But when someone else joins that virtual world with them, we’ve seen time and time again that the real world melts away. For most multiplayer activities, this is ideal because it makes the experience incredibly engaging.



Join the party: When you first start a VR experience with others, it can be tough to know where to begin. After all, it’s easier to join a party than to start one! Create shared goals for multi-player experiences. When you give people something to play with together, it can help them break the ice, allow them to make friends, and have more fun in VR.

You think you know somebody: Lastly, people who know each other offline immediately notice differences in a person’s stature or height in VR. We can re-calibrate environments to play with height and scale values to build a VR world where everyone appears to be the same height. Or we can adjust display settings to make each person feel like they’re the tallest person in the room. Height is such a powerful social cue in the real world, and we can tune these settings in VR to nudge people into having more friendly, prosocial interactions.

If you’d like to learn more about Daydream Labs and what we’ve learned so far, check out our recent Lessons Learned from VR Prototyping talk at Google I/O.

Categories: Programming

Announcing the Certification of Agencies as part of Google Developers Agency Program

Tue, 06/07/2016 - 16:58

Posted by Uttam Kumar Tripathi, Global Lead, Developer Agency Program

Back in December 2015, we shared our initial plans to offer a unique program to software development agencies working on mobile apps.

The Agency Program is an effort by Google’s Developer Relations team to work closely with development agencies around the world and help them build high quality user experiences. It includes providing agencies with personalized training through local events and hangouts, dedicated content, priority support from product and developer relations teams, and early access to upcoming developer products.

Over the past few months, the program drew a lot of interest from hundreds of agencies, and we have since successfully launched it in a number of countries, including India, the UK, Russia, Indonesia, the USA and Canada.

After working with various agencies for several months, the Agency Program has now launched certification for those partners that have undergone the required training and demonstrated excellence in building Android applications using our platforms. We hope that doing so will make it easier for clients who are looking to hire an agency to make an informed decision, while also pushing the entire development agency ecosystem to improve.

The list of our first set of certified agencies is available here.


We do plan to review and add more agencies to this list over the year and also expand the program to other countries.

Categories: Programming

Behind the scenes: Firebass ARG Challenge

Mon, 06/06/2016 - 21:47

Originally posted on Firebase blog

Posted by Karin Levi, Firebase Marketing Manager

This year's Google I/O was an exciting time for Firebase. In addition to sharing the many innovations in our platform, we also hatched a time-traveling digital fish named Firebass.

Firebass is an Alternate Reality Game (ARG) that lives across a variety of static web pages. If you haven’t played it yet, you might want to stop reading now and go fishing. After you’ve caught the Firebass and passed the challenge, come back -- we’re going to talk about how we built Firebass.

How we began

We partnered with Instrument, a Portland-based digital creative agency, to help us create an ARG. We chose an ARG because it allowed us to utilize developers’ own software tools and ingenuity for game functionality.

Our primary objective behind Firebass was to make you laugh, while teaching you a little bit about the new version of Firebase. The payoff for us? We had a blast building it. The payoff for you? A chance to win a free ticket to I/O 2017.

To begin, we needed to establish a central character and theme. Through brainstorming and a bit of serendipity, Firebass was born. Firebass is the main character who has an instinctive desire to time-travel back through prior eras of the web. Through developing the story, we had the chance to revisit the old designs and technologies from the past that we all find memorable -- as you can imagine, this was really fun.

Getting started

We put together a functional prototype of the first puzzle to test with our own developers here at Google. This helped us gauge both the puzzles’ enjoyment level and their difficulty. Puzzle clues were created by thinking of various ways to obfuscate information that developers would be able to recognize and manipulate. Ideas included encoding information in binary, base64, hex, inside images, and other assets such as audio files.

The core goal with each of the puzzles was to make them logical but not too difficult -- we wanted to make sure players stayed engaged. The bulk of the game’s content was stored in Firebase, which allowed us to prevent players from accessing certain game details too early by inspecting the source code. As an added bonus, this also allowed us to demonstrate a use-case for Firebase remote data storage.

Driving the game forward

One of our first challenges was to find a way to communicate a story through static web pages. Our solution was to create a fake command line interface that acted as an outlet for Firebass to interact with players.

In order to ground our time travel story further, we kept the location of Firebass consistent at https://probassfinders.foo/ but changed the design with each puzzle era.

Continuing the journey

After establishing the Pro Bass Finders site and fake terminal as the centerpieces of the game, we focused on fleshing out the rest of the puzzle mechanics. Each puzzle began with the era-specific design of the Pro Bass Finders home page. We then designed new puzzle pieces and additional pages to support them. An example of this was creating a fake email archive to hide additional clues.

Another clue involved the QR code pieces in puzzle 2.

The QR codes demonstrate Firebase time-based read permissions and provide a way to keep players revisiting the site prior to reaching the end of puzzle 2. There were a total of three pieces of a QR code that each displayed at different times during the day. It was really fun and impressive to see all of the different ways players were able to come up with the correct answer. The full image translates to ‘Locating’, making the answer the letter ‘L’, but many players managed to solve this without needing to read the QR code. You're all smart cookies.
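
The post doesn't show the rules it used, but as a sketch of the idea, Firebase Realtime Database security rules can gate reads on the server clock via the built-in now variable (the paths and timestamps below are invented for illustration):

{
  "rules": {
    "qrPieces": {
      "piece1": {
        // Readable only during a fixed window; 'now' is server time in ms.
        ".read": "now >= 1466467200000 && now < 1466496000000",
        ".write": false
      }
    }
  }
}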

Final part of the Puzzle

Puzzle 3 encompassed our deep nostalgia for the early web, and we did our best to authentically represent the anti-design look and feel of the 90s.

In one of the clues, we demonstrated Firebase Storage by storing an audio file remotely. Solving this required players to reference Firebase documentation to finish writing the code to retrieve the file.


<!-- connect to Firebase Storage below -->
<script>
console.log('TODO: Complete connection to Firebase Storage');
var storageRef = firebase.app().storage().ref();
var file = storageRef.child('spectrogram.wav');

// TODO: Get download URL for file (https://developers.google.com/firebase/docs/storage/web/download-files)
</script>
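
For readers curious how the missing step might look: with that era's Firebase Web SDK, a storage reference exposes getDownloadURL(), which resolves to a fetchable URL. One possible completion (a sketch, not the game's official answer):

// One way to finish the snippet: ask Firebase Storage for a download
// URL for the 'file' reference above, then play the audio it points to.
file.getDownloadURL()
    .then(function(url) {
      var audio = new Audio(url);
      audio.play();
    })
    .catch(function(error) {
      console.error('Could not fetch file:', error);
    });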
The finale

While the contest was still active, players who completed the game were given a URL to submit their information for a chance to win a ticket to Google I/O 2017. After the contest was closed, we simply changed the final success message to provide a URL directly to the Firebass Gift Shop, a treasure in and of itself. :)

Until next time

This was an unforgettable experience with a fervently positive reaction. When puzzle 3 unlocked, server traffic increased 30x! The community response in sharing photos, Slack channels, music, jokes, posts, etc. was incredible. And all because of one fish. We can’t wait to see all the swimmer winners next year at I/O 2017. Until then, try playing the game yourself at firebase.foo. Thank you, Firebass. Long may you swim.

><(((°<
Categories: Programming

Auto-generating Google Forms

Mon, 06/06/2016 - 21:07

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Apps


function createForm() {
  // create & name Form
  var item = "Speaker Information Form";
  var form = FormApp.create(item)
      .setTitle(item);

  // single line text field
  item = "Name, Title, Organization";
  form.addTextItem()
      .setTitle(item)
      .setRequired(true);

  // multi-line "text area"
  item = "Short biography (4-6 sentences)";
  form.addParagraphTextItem()
      .setTitle(item)
      .setRequired(true);

  // radiobuttons
  item = "Handout format";
  var choices = ["1-Pager", "Stapled", "Soft copy (PDF)", "none"];
  form.addMultipleChoiceItem()
      .setTitle(item)
      .setChoiceValues(choices)
      .setRequired(true);

  // (multiple choice) checkboxes
  item = "Microphone preference (if any)";
  choices = ["wireless/lapel", "handheld", "podium/stand"];
  form.addCheckboxItem()
      .setTitle(item)
      .setChoiceValues(choices);
}

If you’re ready to get started, you can find more information, including another intro code sample, in the Google Forms reference section of the Apps Script docs. In the video, I challenge viewers to enhance the code snippet above to read in “forms data” from an outside source, such as a Google Sheet, Google Doc, or even an external database (accessible via Apps Script’s JDBC Service), to generate multiple Forms. What else can you do with Forms?
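
As one possible starting point for that challenge (the Sheet ID and column layout below are assumptions, not from the video), a function could read one question per row from a Sheet and emit a Form item for each:

// Sketch: generate a Form from rows in a Sheet. Assumes column A holds
// question titles and column B a 'required' flag ('yes'/'no').
function createFormFromSheet() {
  var sheet = SpreadsheetApp.openById('YOUR_SHEET_ID').getSheets()[0];
  var rows = sheet.getDataRange().getValues();
  var form = FormApp.create('Generated Form');
  rows.forEach(function(row) {
    form.addTextItem()
        .setTitle(row[0])
        .setRequired(String(row[1]).toLowerCase() === 'yes');
  });
  Logger.log('Form URL: ' + form.getPublishedUrl());
}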

One example is illustrated by this Google Docs add-on I created for users to auto-generate Google Forms from a formatted Google Doc. If you’re looking to do integration with a variety of Google services, check out this advanced Forms quickstart that uses Google Sheets, Docs, Calendar, and Gmail! Finally, Apps Script also powers add-ons for Google Forms. To learn how to write those, check out this Forms add-on quickstart.

We hope the DevByte and all these examples inspire you to create awesome tools with Google Forms, and take the manual creation burden off your shoulders! If you’re new to the Launchpad Online developer series, we share technical content aimed at novice Google developers, as well as discuss the latest tools and features to help you build your app. Please subscribe to our channel, give us your feedback below, and tell us what topics you would like to see in future episodes!

Categories: Programming

Introducing Firebase Remote Config

Fri, 06/03/2016 - 18:14
Posted by Todd Kerpelman, Firebase Developer Advocate and Safa Alai, Remote Config Product Manager

Turning a great app into a successful business requires more than simply releasing your app and calling it a day. You need to quickly adapt to your users’ feedback, test out new features and deliver content that your users care about most.

This is what Firebase Remote Config is made for. By allowing you to change the look and feel of your app from the cloud, Firebase Remote Config enables you to stay responsive to your users’ needs. Firebase Remote Config also enables you to deliver different content to different users, so you can run experiments, gradually roll out features, and even deliver customized content based on how your users interact within your app.

Let's look at what you can accomplish when you wire up your app to work with Remote Config.

Update your app without updating your app

We've all had the experience of shipping an app and discovering soon afterwards that it was less than perfect. Maybe you had incorrect or confusing text that your users don't like. Maybe you made a level in your game too difficult, and players aren't able to progress past it. Or maybe it was something as simple as adding an animation that takes too long to complete.

Traditionally, you'd need to fix these kinds of mistakes by updating those values in your app's code, building and publishing a new version of your app, and then waiting for all your users to download the new version.

But if you've wired up your app for Remote Config in the Firebase platform, you can quickly and easily change those values directly in the cloud. Remote Config can download those new values the next time your user starts your app and address your users' needs, all without having to publish a new version of your app.

Deliver the Right Content to the Right People

Firebase Remote Config allows you to deliver different configurations to targeted groups of users by making use of conditions, which use targeting rules to deliver specific values for different users. For example, you can send down custom Remote Config data to your users in different countries. Or, you can send down different data sets separately to iOS and Android devices.

You can also deliver different values based on audiences you've defined in Firebase Analytics for some more sophisticated targeting. So if you want to change the look of your in-app store just for players who have visited your store in the past, but haven't purchased anything yet, that's something you can do by creating Remote Config values just for that audience.

Run A/B Tests and Gradual Rollouts

Remote Config conditions also allow you to deliver different values to random sets of users. You can take advantage of this feature to run A/B tests or to gradually rollout new features.

If you are launching a new feature in your app but aren't sure if your audience is going to love it, you can hide it behind a flag in your code. Then, you can change the value of that flag using Remote Config to turn the feature on or off. By defining a "My New Feature Experiment" condition that is active for, say, 10% of the population, you can turn on this new feature for a small subset of your users, and make sure it's a great experience before you turn it on for the rest of your population.
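
The post describes the Android and iOS SDKs; purely as an illustration of the flag pattern (Remote Config later came to the web as well), here is a sketch using the modular Firebase JS SDK, where the parameter name and feature function are made up:

// Sketch: gating a feature behind a Remote Config boolean flag.
import { initializeApp } from 'firebase/app';
import { getRemoteConfig, fetchAndActivate, getBoolean } from 'firebase/remote-config';

const app = initializeApp({ /* your Firebase project config */ });
const remoteConfig = getRemoteConfig(app);

// Local default: the feature ships dark until the server says otherwise.
remoteConfig.defaultConfig = { my_new_feature_enabled: false };

function enableMyNewFeature() {
  // Hypothetical app code that turns the new feature on.
  console.log('New feature is on for this user.');
}

fetchAndActivate(remoteConfig).then(() => {
  // 'my_new_feature_enabled' is a made-up parameter name; its value is
  // controlled by a condition (e.g. "10% of users") in the console.
  if (getBoolean(remoteConfig, 'my_new_feature_enabled')) {
    enableMyNewFeature();
  }
});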

Similarly, you can run A/B tests by supplying different values to different population groups. Want to see if people are more likely to complete a purchase if your in-app purchase button says, "Buy now" or "Checkout"? That's the kind of experiment you can easily run using A/B tests.

If you want to track the results of these A/B tests, you can do that today by setting a user property in Firebase Analytics based on your experiment. Then, you can filter any of your Firebase Analytics reports (like whether or not the user started the purchase process) by this property. Watch this space for news on upcoming improvements to A/B testing.

A Fabulous Improvement in Retention

Many of our early partners have already been using Firebase Remote Config to test out changes within their apps.

Fabulous, an app incubated at Duke University's Center for Advanced Hindsight and designed to help people adopt better lifestyle habits, wanted to experiment with its getting-started flow to see which methods were most effective for getting users up and running in the app. They not only A/B tested changes like images, text, and button labels, but they also A/B tested the entire onboarding process by using Remote Config to determine what dialogs people saw and in what order.

Thanks to their experiments with Remote Config, Fabulous was able to increase the number of people who completed their onboarding flow from 42% to 64%, and their one-day retention rate by 27%.

Research has shown that an average app loses the majority of its users in the first 3 days, so making these kinds of improvements to your app's onboarding process -- and confirming their effectiveness by conducting A/B tests -- can be crucial to ensuring the long-term success of your app.

Is Your App Wired Up?

When you use Remote Config, you can supply all of your default values locally on the device, then only send down new values from the cloud where they differ from your defaults. This gives you the flexibility to wire up every value in your app to be potentially configurable through Remote Config, while keeping your network calls lightweight because you're only sending down changes. So feel free to take all your hard-coded strings, constants, and that AppConstants file you've got sitting around (it's okay, we all have one), and wire 'em up for Remote Config!

Firebase Remote Config is part of the Firebase platform and is available for free on both iOS and Android. If you want to find out more, please see our documentation and be sure to explore all the features of the Firebase SDK.

Categories: Programming