Google Code Blog

Learn app monetization best practices with Udacity and Google

Wed, 08/26/2015 - 18:17

Posted by Ido Green, Developer Advocate

There is no higher form of user validation than having customers support your product with their wallets. However, the path to a profitable business is not necessarily an easy one. There are many strategies to pick from and a lot of little things that impact the bottom line. If you are starting a new business (or thinking how to improve the financial situation of your current startup), we recommend this course we've been working on with Udacity!

This course blends instruction with real-life examples to help you effectively develop, implement, and measure your monetization strategy. By the end of this course you will be able to:

  • Choose & implement a monetization strategy relevant to your service or product.
  • Set performance metrics & monitor the success of a strategy.
  • Know when it might be time to change methods.

Go try it at: udacity.com/course/app-monetization--ud518

We hope you will enjoy and earn from it!

Categories: Programming

Breaking the SQL Barrier: Google BigQuery User-Defined Functions

Tue, 08/25/2015 - 16:55

Posted by Thomas Park, Senior Software Engineer, Google BigQuery

Many types of computations can be difficult or impossible to express in SQL. Loops, complex conditionals, and non-trivial string parsing or transformations are all common examples. What can you do when you need to perform these operations but your data lives in a SQL-based big data tool? Is it possible to retain the convenience and speed of keeping your data in a single system, when portions of your logic are a poor fit for SQL?

Google BigQuery is a fully managed, petabyte-scale data analytics service that uses SQL as its query interface. As part of our latest BigQuery release, we are announcing support for executing user-defined functions (UDFs) over your BigQuery data. This gives you the ability to combine the convenience and accessibility of SQL with the option to use a familiar programming language, JavaScript, when SQL isn’t the right tool for the job.

How does it work?

BigQuery UDFs are similar to map functions in MapReduce. They take one row of input and produce zero or more rows of output, potentially with a different schema.

Below is a simple example that performs URL decoding. Although BigQuery provides a number of built-in functions, it does not have a built-in for decoding URL-encoded strings. However, this functionality is available in JavaScript, so we can extend BigQuery with a simple User-Defined Function to decode this type of data:



function decodeHelper(s) {
  try {
    return decodeURI(s);
  } catch (ex) {
    return s;
  }
}

// The UDF.
function urlDecode(r, emit) {
  emit({title: decodeHelper(r.title),
        requests: r.num_requests});
}

BigQuery UDFs are functions with two formal parameters. The first parameter is a variable to which each input row will be bound. The second parameter is an “emitter” function. Each time the emitter is invoked with a JavaScript object, that object will be returned as a row to the query.

In the above example, urlDecode is the UDF that will be invoked from BigQuery. It calls a helper function decodeHelper that uses JavaScript’s built-in decodeURI function to transform URL-encoded data into UTF-8.

Note the use of try / catch in decodeHelper. Data is sometimes dirty! If we encounter an error decoding a particular string for any reason, the helper returns the original, un-decoded string.

To make this function visible to BigQuery, it is necessary to include a registration call in your code that describes the function, including its input columns and output schema, and a name that you’ll use to reference the function in your SQL:



bigquery.defineFunction(
  'urlDecode',                // Name used to call the function from SQL.

  ['title', 'num_requests'],  // Input column names.

  // JSON representation of the output schema.
  [{name: 'title', type: 'string'},
   {name: 'requests', type: 'integer'}],

  urlDecode                   // The UDF reference.
);

The UDF can then be invoked by the name “urlDecode” in the SQL query, with a source table or subquery as an argument. The following query looks for the most-visited French Wikipedia articles from April 2015 that contain a cédille character (ç) in the title:



SELECT requests, title
FROM
  urlDecode(
    SELECT
      title, SUM(requests) AS num_requests
    FROM
      [fh-bigquery:wikipedia.pagecounts_201504]
    WHERE language = 'fr'
    GROUP EACH BY title
  )
WHERE title LIKE '%ç%'
ORDER BY requests DESC
LIMIT 100

This query processes data from a 5.6 billion row / 380 GB dataset and generally runs in less than two minutes. The cost? About $1.37, at the time of this writing.

To use a UDF in a query, it must be described via UserDefinedFunctionResource elements in your JobConfiguration request. UserDefinedFunctionResource elements can either contain inline JavaScript code or pointers to code files stored in Google Cloud Storage.
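For illustration, here is a minimal sketch of the relevant portion of a jobs.insert request body, following the BigQuery v2 REST API; the query text, the inline code, and the Cloud Storage URI are placeholders:

{
  "configuration": {
    "query": {
      "query": "SELECT requests, title FROM urlDecode(...) LIMIT 100",
      "userDefinedFunctionResources": [
        {"inlineCode": "function urlDecode(r, emit) { ... }"},
        {"resourceUri": "gs://my-bucket/lib/decode.js"}
      ]
    }
  }
}

Each element of the array carries either inline code or a pointer to a code file in Cloud Storage.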

Under the hood

JavaScript UDFs are executed on instances of Google V8 running on Google servers. Your code runs close to your data in order to minimize added latency.

You don’t have to worry about provisioning hardware or managing pipelines to deal with data import / export. BigQuery automatically scales with the size of the data being queried in order to provide good query performance.

In addition, you only pay for what you use - there is no need to forecast usage or pre-purchase resources.

Developing your function

Interested in developing your JavaScript UDF without running up your BigQuery bill? Here is a simple browser-based widget that allows you to test and debug UDFs.

Note that not all JavaScript functionality supported in the browser is available in BigQuery. For example, anything related to the browser DOM is unsupported, including Window and Document objects, and any functions that require them, such as atob() / btoa().

Tips and tricks

Pre-filter input

In our URL-decoding example, we are passing a subquery as the input to urlDecode rather than the full table. Why?

There are about 5.6 billion rows in [fh-bigquery:wikipedia.pagecounts_201504]. However, one of the query predicates will filter the input data down to the rows where language is “fr” (French) - this is about 262 million rows. If we ran the UDF over the entire table and did the language and cédille filtering in a single WHERE clause, that would cause the JavaScript framework to process over 21 times more rows than it would with the filtered subquery. This equates to a lot of CPU cycles wasted doing unnecessary data conversion and marshalling.

If your input can easily be filtered down before invoking a UDF by using native SQL predicates, doing so will usually lead to a faster (and potentially cheaper) query.

Avoid persistent mutable state

You must not store or access mutable state across UDF executions for different rows. The following contrived example illustrates this error:



// myCode.js
var numRows = 0;

function dontDoThis(r, emit) {
  emit({rowCount: ++numRows});
}

// The query.
SELECT max(rowCount) FROM dontDoThis(myTable);

This is a problem because BigQuery will shard your query across multiple nodes, each of which has independent V8 instances and will therefore accumulate separate values for numRows.

Expand SELECT *

You cannot execute SELECT * FROM urlDecode(...) at this time; you must explicitly list the columns being selected from the UDF: SELECT requests, title FROM urlDecode(...)

For more information about BigQuery User-Defined Functions, see the full feature documentation.

Categories: Programming

Beacons, the Internet of things, and more: Coffee with Timothy Jordan

Sat, 08/22/2015 - 00:31

Posted by Laurence Moroney, Developer Advocate

In this episode of Coffee With a Googler, Laurence meets with Developer Advocate Timothy Jordan to talk about all things Ubiquitous Computing at Google. Learn about the platforms and services that help developers reach their users wherever it makes sense.

We discuss Brillo, which extends the Android Platform to 'Internet of Things' embedded devices, as well as Weave, which is a services layer that helps all those devices work together seamlessly.

We also chat about beacons and how they can give context to the world around you, making the user experience simpler. Traditionally, users need to tell you about their location, and other types of context. But with beacons, the environment can speak to you. When it comes to developing for beacons, Timothy introduces us to Eddystone, a protocol specification for Bluetooth Low Energy (BLE) beacons, the Proximity Beacon API that allows developers to register a beacon and associate data with it, and the Nearby Messages API, which helps your app 'sight' and retrieve data about nearby beacons.

Timothy and his team have produced a new Udacity series on ubiquitous computing that you can access for free! Take the course to learn more about ubiquitous computing, the design paradigms involved, and the technical specifics for extending to Android Wear, Google Cast, Android TV, and Android Auto.

Also, don't forget to join us for a ubiquitous computing summit on November 9th & 10th in San Francisco. Sign up here and we'll keep you updated.

Categories: Programming

Project Tango I/O Apps now released in Google Play

Fri, 08/21/2015 - 20:15

Posted by Larry Yang, Lead Product Manager, Project Tango

At Google I/O, we showed the world many of the cool things you can do with Project Tango. Now you can experience it yourself by downloading these apps on Google Play onto your Project Tango Tablet Development Kit.

A few examples of creative experiences include:

MeasureIt is a sample application that shows how easy it is to measure general distances. Just point a Project Tango device at two or more points. No tape measures or step ladders required.

Constructor is a sample 3D content creation tool where you can scan a room and save the scan for further use.

Tangosaurs lets you walk around and dig up hidden fossils that unlock a portal into a virtual dinosaur world.

Tango Village and Multiplayer VR are simple apps that demonstrate how Project Tango’s motion tracking enables you to walk around VR worlds without requiring an input device.

Tango Blaster lets you blast swarms of robots in a virtual world, and can even work with the Tango device mounted on a toy gun.

We also showed a few partner apps that are now available in Google Play. Break A Leg is a fun VR experience where you’re a magician performing tricks on stage.

SideKick’s Castle Defender uses Project Tango’s depth perception capability to place a virtual world onto a physical playing surface.

Defective Studio’s VRMT is a world-building sandbox designed to let anyone create, collaborate on, and share their own virtual worlds and experiences. VRMT gives you libraries of props and intuitive tools, to make the virtual creation process as streamlined as possible.

We hope these applications inspire you to use Project Tango’s motion tracking, area learning and depth perception technologies to create 3D experiences. We encourage you to build experiences that explore the physical space around the user, including precise navigation without GPS, windows into virtual 3D worlds, measurement of spaces, and games that know where they are in the room and what’s around them.

As we mentioned in our previous post, Project Tango Tablet Development Kits will go on sale in the Google Store in Denmark, Finland, France, Germany, Ireland, Italy, Norway, Sweden, Switzerland and the United Kingdom starting August 26.

We have a lot more to share over the coming months! Sign-up for our monthly newsletter to keep up with the latest news. Connect with the 5,000 other developers in our Google+ community. Get help from other developers by using the Project Tango tag in Stack Overflow. See what others are creating on our YouTube channel. And share your story on Twitter with #ProjectTango.

Join us on our journey.

Categories: Programming

Polymer Summit Schedule Released!

Thu, 08/20/2015 - 20:12

Posted by Taylor Savage, Product Manager

We’re excited to announce that the full speaker list and talk schedule has been released for the first ever Polymer Summit! Find the latest details on our newly launched site here. Look forward to talks about topics like building full apps with Polymer, Polymer and ES6, adaptive UI with Material Design, and performance patterns in Polymer.

The Polymer Summit will start on Monday, September 14th with an evening of Code Labs, followed by a full day of talks on Tuesday, September 15th. All of this will be happening at the Muziekgebouw aan ‘t IJ, right on the IJ river in downtown Amsterdam. All tickets to the summit were claimed on the first day, but you can sign up for the waitlist to be notified, should any more tickets become available.

Can’t make it to the summit? Sign up here if you’d like to receive updates on the livestream and tune in live on September 15th on polymer-project.org/summit. We’ll also be publishing all of the talks as videos on the Google Developers YouTube Channel.

Categories: Programming

What’s in a message? Getting attachments right with the Google beacon platform

Thu, 08/20/2015 - 19:17

Posted by Hoi Lam, Developer Advocate

If your users’ devices know where they are in the world – the place that they’re at, or the objects they’re near – then your app can adapt or deliver helpful information when it matters most. Beacons are a great way to explicitly label real-world locations and contexts, but how does your app get the message that it’s at platform 9 instead of the shopping mall, or that the user is standing in front of a food truck rather than just hanging out in the parking lot?

With the Google beacon platform, you can associate information with registered beacons by using attachments in Proximity Beacon API, and serve those attachments back to users’ devices as messages via the Nearby Messages API. In this blog post, we will focus on how we can use attachments and messages most effectively, making our apps more context-aware.

Think per message, not per beacon

Suppose you are creating an app for a large train station. You’ll want to provide different information to the user who just arrived and is looking for the ticket machine, as opposed to the user who just wants to know where to stand to be the closest to her reserved seat. In this instance, you’ll want more than one beacon to label important places, such as the platform, entrance hall and waiting area. Some of the attachments for each beacon will be the same (e.g. the station name), others will be different (e.g. platform number). This is where the design of Proximity Beacon API, and the Nearby Messages API in Android and iOS helps you out.

When your app retrieves the beacon attachments via the Nearby Messages API, each attachment will appear as an individual message, not grouped by beacon. In addition, Nearby Messages will automatically de-duplicate any attachments, even if they come from different beacons.

This design has several advantages:

  • It abstracts the API away from implementation (beacon in this case), so if in the future we have other kinds of devices which send out messages, we can adopt them easily.
  • Built-in deduplication means that you do not need to build your own logic to react to the same message, such as the station name in the above example.
  • You can add finer grained context messages later on, without re-deploying.

In designing your beacon user experience, think about the context of your user, the places and objects that are important for your app, and then label those places. The Proximity Beacon API makes beacon management easy, and the Nearby Messages API abstracts the hardware away, allowing you to focus on creating relevant and timely experiences. The beacons themselves should be transparent to the user.
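To make that concrete, here is a minimal sketch of the receiving side on Android, assuming a connected GoogleApiClient built with Nearby.MESSAGES_API on Google Play services 7.8 or later; the exact subscribe overloads vary slightly between Play services releases:

// Each beacon attachment arrives as its own Message; duplicates are
// already filtered out by Nearby, even across beacons.
MessageListener listener = new MessageListener() {
  @Override
  public void onFound(Message message) {
    // message.getType() is the attachment's type within its namespace,
    // message.getContent() is the raw attachment payload.
    String info = new String(message.getContent());
    Log.i("Station", message.getType() + ": " + info);
  }
};

// BLE_ONLY restricts discovery to beacons; call after the client connects.
Nearby.Messages.subscribe(client, listener, Strategy.BLE_ONLY);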

Using beacon attachments with external resources

In most cases, the data you store in attachments will be self-contained and will not need to refer to an external database. However, there are several exceptions where you might want to keep some data separately:

  • Large data items such as pictures and videos.
  • Where the data resides on a third party database system that you do not control.
  • Confidential or sensitive data that should not be stored in beacon attachments.
  • If you run a proprietary authentication system that relies on your own database.

In these cases, you might need to use a more generic identifier in the beacon attachment to look up the relevant data from your infrastructure.
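As an illustration, an attachment created via the Proximity Beacon API could carry nothing but such an identifier. The namespaced type and payload below are hypothetical; the data field is the base64 encoding of {"exhibitId":"exhibit-42"}, which your app would resolve against your own backend:

{
  "namespacedType": "com.example.myapp/exhibit-ref",
  "data": "eyJleGhpYml0SWQiOiJleGhpYml0LTQyIn0="
}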

Combining the virtual and the real worlds

With beacons, we have an opportunity to delight users by connecting the virtual world of personalization and contextual awareness with real world places and things that matter most. Through attachments, the Google beacon platform delivers a much richer context for your app that goes beyond the beacon identifier and enables your apps to better serve your users. Let’s build the apps that connect the two worlds!

Categories: Programming

Saving a life through technology - Coffee with a Googler

Fri, 08/14/2015 - 18:05

Posted by Laurence Moroney, Developer Advocate

In this week’s Coffee with a Googler, we’re joined by Heidi Dohse from the Google Cloud team to talk about how she saved her own life through technology. At Google she works on the Cloud Platform team that supports our genomics work, and has a passion for the future of the delivery of health care.

When she was a child, Heidi Dohse had an erratic heartbeat, but, without knowing anything about it, she just ignored it. As a teen she became a competitive windsurfer and skier, and during knee surgery between seasons, she had an EKG and discovered that her heart was beating irregularly at 270 bpm.

She had an experimental surgery and was immediately given a pacemaker, and became a young heart patient, expecting not to live much longer. Years later, Heidi is now on her 7th pacemaker, but it doesn’t stop her from staying active as an athlete. She’s a competitive cyclist, often racing hundreds of miles, and climbing tens of thousands of feet as she races.

At the core of all this is her use of technology. She has carefully managed her health by collecting and analyzing the data from her pacemaker. Because the data goes beyond just heartbeat, and includes things such as gyroscope readings, oxygen utilization, muscle stimulation and electrical impulses, she can proactively manage her health.

It’s the future of health care -- instead of seeing a doctor for an EKG every few months, with this technology and other wearables, people can constantly check their health data, spot potential issues ahead of time, and act to pre-empt them.

Learn more in the video.

Categories: Programming

Learn about Google Translate in Coffee with a Googler

Mon, 08/10/2015 - 18:35

Posted by Laurence Moroney, Developer Advocate

Over the past few months, we’ve been bringing you 'Coffee with a Googler', giving you a peek at people working on cool stuff that you might find inspirational and useful. Starting with this week’s episode, we’re going to accompany each video with a short post for more details, while also helping you make the most of the tech highlighted in the video.

This week we meet with MacDuff Hughes from the Google Translate team. Google Translate uses statistics-based translation. By finding very large numbers of examples of translations from one language to another, it uses statistics to see how various phrases are treated, so it can make reasonable estimates of the correct, natural-sounding phrases in the target language. For common phrases, there are many candidate translations, so the engine converts them within the context of the passage that the phrase is in. Images can also be translated: point your mobile device at printed text and it will be translated into your preferred language.

Translate is adding languages all the time, and part of its mission is to serve languages that are less frequently used, such as Gaelic, Welsh or Maori, in the same manner as the more commonly used ones, such as English, Spanish and French. To this end, Translate supports more than 90 languages. In the quest to constantly improve translation, the technology provides a way for the community to validate translations. This is particularly valuable for less commonly used languages, effectively helping them to grow and thrive, and it enhances the machine translation by keeping people involved too.

You can learn more about Google Translate and the Translate app here.

Developers have a couple of options to use translate:

  • The Free Website Translate plugin that you can add to your site and have translations available immediately.
  • The Cloud-based Translate API that you can use to build apps or sites that have translation in them.

Watch the episode here:


Categories: Programming

Lean ops for startups: 4 leaders share their secrets

Wed, 08/05/2015 - 20:05

Posted by Ori Weinroth, Google Cloud Platform Marketing

As a CTO, VP R&D, or CIO at a technology startup you typically need to maneuver and make the most out of limited budgets. Chances are, you’ve never had your CEO walk in and tell you, “We’ve just closed our Series A round. You now have unlimited funding to launch version 1.5.”

So how do you extract real value from what you’re given to work with? We’re gathering four startup technology leaders for a free webinar discussion around exactly that: their strategies and tactics for operating lean. They will cover key challenges and share tips and tricks for:

  • Reducing burn rate by making smart tech choices
  • Getting the most out of a critical but finite resource - your dev team
  • Avoiding vendor lock-in so as to maximize cost efficiencies

We’ve invited four technology leaders from some of today’s most dynamic startups.

Sign up for our Lean Ops Webinar in your timezone to hear their take:

Americas
Wednesday, 13 August 2015
11:00 AM PT
[Click here to register]

Europe, Middle East and Africa
Wednesday, 13 August 2015
10:00 AM (UK), 11:00 AM (France), 12:00 PM (Israel)
[Click here to register]

Asia Pacific
Wednesday, 13 August 2015
10:30 AM (India), 1:00 PM (Singapore/Hong Kong), 3:00 PM (Sydney, AEDT)
[Click here to register]

Our moderator will be Amir Shevat, senior program manager at Google Developer Relations. We look forward to an insightful and open discussion and hope you can attend.

Categories: Programming

Project Tango Tablet Development Kits coming to select countries

Tue, 08/04/2015 - 19:32

Posted by Larry Yang, Product Manager, Project Tango

Project Tango Tablet Development Kits are available in South Korea and Canada starting today, and on August 26, will be available in Denmark, Finland, France, Germany, Ireland, Italy, Norway, Sweden, Switzerland, and the United Kingdom. The dev kit is intended for software developers only. To order a device, visit the Google Store.

Project Tango is a mobile platform that uses computer vision to give devices the ability to sense the world around them. The Project Tango Tablet Development Kit is a standard Android device plus a wide-angle camera, a depth sensing camera, accurate sensor timestamping, and a software stack that exposes this new computer vision technology to application developers. Learn more on our website.

The Project Tango community is growing. We’ve shipped more than 3,000 developer devices so far. Developers have already created hundreds of applications that enable users to explore the physical space around them, including precise navigation without GPS, windows into virtual 3D worlds, measurement of physical spaces, and games that know where they are in the room and what’s around them. And we have an app development contest in progress right now.

We’ve released 13 software updates that make it easier to create Area Learning experiences with new capabilities such as high-accuracy and building-scale ADFs, more accurate re-localization, indoor navigation, and GPS/maps alignment. Depth Perception improvements include the addition of real-time textured and Unity meshing. Unity developers can take advantage of an improved Unity lifecycle. The updates have also included improvements in IMU characterization, performance, thermal management and drift-reduction. Visit our developer site for details.

We have a lot more to share over the coming months. Sign-up for our monthly newsletter to keep up with the latest news. Join the conversation in our Google+ community. Get help from other developers by using the Project Tango tag in Stack Overflow. See what other’s are saying on our YouTube channel. And share your story on Twitter with #ProjectTango.

Join us on our journey.

Categories: Programming

Using schema.org markup to promote your critic reviews within Google Search

Tue, 08/04/2015 - 18:23

Posted by Jonathan Wald, Product Manager

When Google announced Rich Snippets for reviews six years ago, it provided publishers with an entirely new way to promote their content by incorporating structured markup into their webpages. Since then, structured data has only become more important to Google Search and we’ve been building out the Knowledge Graph to better understand the world, the web, and users’ queries. When a user asks “did ex machina get good reviews?”, Google is now aware of the semantics - recognizing that the user wants critic reviews for the 2015 film Ex Machina and, equally importantly, where to find them.

With the recent launch of critic reviews in the Knowledge Graph, we’ve leveraged this technology to once again provide publishers with an opportunity to increase the discoverability and consumption of their reviews using markup. This feature, available across mobile, tablet, and desktop, organizes publishers’ reviews into a prominent card at the top of the page.

By using markup to identify their reviews and how they relate to Knowledge Graph entities, publishers can increase the visibility of their reviews and expose their reviews to a new audience whenever a Knowledge Graph card for a reviewed entity is surfaced.

Critic reviews are currently launched for movie entities, but we’re expanding the feature to other verticals like TV shows and books this year! Publishers with long-form reviews for these verticals can get up and running by selecting snippets from their reviews and then adding schema.org markup to their webpages. This process, detailed in our critic reviews markup instructions, allows publishers to communicate to Google which snippet they prefer, what URL is associated with the review, and other metadata about the reviewed item that allows us to ensure that we’re showing the right review for the right entity.

Google can understand a variety of markup formats, including the JSON-LD data format, which makes it easier than ever to incorporate structured data about reviews into your webpage! Get started here.
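For a sense of what such markup can look like, here is an illustrative JSON-LD sketch for a movie review; all names, URLs and the review snippet are hypothetical, and the authoritative list of required properties is in the critic reviews markup instructions:

{
  "@context": "http://schema.org",
  "@type": "Review",
  "itemReviewed": {
    "@type": "Movie",
    "name": "Ex Machina",
    "sameAs": "https://en.wikipedia.org/wiki/Ex_Machina_(film)"
  },
  "reviewBody": "A taut, cerebral thriller about minds, human and otherwise.",
  "url": "http://reviews.example.com/ex-machina",
  "author": {"@type": "Person", "name": "Jane Critic"},
  "publisher": {"@type": "Organization", "name": "Example Review Site"}
}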


Categories: Programming

#NoHacked: How to avoid being the target of hackers

Fri, 07/31/2015 - 19:03

Originally posted by the Webmaster Central Blog.

If you publish anything online, one of your top priorities should be security. Getting hacked can negatively affect your online reputation and result in loss of critical and private data. Over the past year Google has noticed a 180% increase in the number of sites getting hacked. While we are working hard to combat this trend, there are steps you can take to protect your content on the web.

This week, Google Webmasters has launched a second #NoHacked campaign. We’ll be focusing on how to protect your site from hacking and give you better insight into how some of these hacking campaigns work. You can follow along with #NoHacked on Twitter and Google+. We’ll also be wrapping up with a Google Hangout focused on security where you can ask our security experts questions.

We’re kicking off the campaign with some basic tips on how to keep your site safe on the web.

1. Strengthen your account security

Creating a password that’s difficult to guess or crack is essential to protecting your site. For example, your password might contain a mixture of letters, numbers, symbols, or be a passphrase. Length matters: the longer your password, the harder it will be to guess. There are many resources on the web that can test how strong your password is. Testing a password similar to yours (never enter your actual password on other sites) can give you an idea of its strength.

Also, it’s important to avoid reusing passwords across services. Attackers often try known username and password combinations obtained from leaked password lists or hacked services to compromise as many accounts as possible.

You should also turn on 2-Factor Authentication for accounts that offer this service. This can greatly increase your account’s security and protect you from a variety of account attacks. We’ll be talking more about the benefits of 2-Factor Authentication in two weeks.

2. Keep your site’s software updated

One of the most common ways for a hacker to compromise your site is through insecure software on your site. Be sure to periodically check your site for any outdated software, especially updates that patch security holes. If you use a web server like Apache, nginx or commercial web server software, make sure you keep your web server software patched. If you use a Content Management System (CMS) or any plug-ins or add-ons on your site, make sure to keep these tools updated with new releases. Also, sign up for the security announcement lists for your web server software and your CMS, if you use one. Consider completely removing any add-ons or software that you don't need on your website -- aside from creating possible risks, they also might slow down the performance of your site.

3. Research how your hosting provider handles security issues

Your hosting provider’s policy for security and cleaning up hacked sites is an important factor to consider when choosing a hosting provider. If you use a hosting provider, contact them to see if they offer on-demand support to clean up site-specific problems. You can also check online reviews to see if they have a track record of helping users with compromised sites clean up their hacked content.

If you control your own server or use Virtual Private Server (VPS) services, make sure that you’re prepared to handle any security issues that might arise. Server administration is very complex, and one of the core tasks of a server administrator is making sure your web server and content management software is patched and up to date. If you don't have a compelling reason to do your own server administration, you might find it well worth your while to see if your hosting provider offers a managed services option.

4. Use Google tools to stay informed of potential hacked content on your site

It’s important to have tools that can help you proactively monitor your site. The sooner you can find out about a compromise, the sooner you can work on fixing your site.

We recommend you sign up for Search Console if you haven’t already. Search Console is Google’s way of communicating with you about issues on your site including if we have detected hacked content. You can also set up Google Alerts on your site to notify you if there are any suspicious results for your site. For example, if you run a site selling pet accessories called www.example.com, you can set up an alert for [site:example.com cheap software] to alert you if any hacked content about cheap software suddenly starts appearing on your site. You can set up multiple alerts for your site for different spammy terms. If you’re unsure what spammy terms to use, you can use Google to search for common spammy terms.

We hope these tips will keep your site safe on the web. Be sure to follow our social campaigns and share any tips or tricks you might have about staying safe on the web with the #NoHacked hashtag.

If you have any additional questions, you can post in the Webmaster Help Forums where a community of webmasters can help answer your questions. You can also join our Hangout on Air about Security on August 26th.

Posted by Eric Kuan, Webmaster Relations Specialist and Yuan Niu, Webspam Analyst

Categories: Programming

Easier Auth for Google Cloud APIs: Introducing the Application Default Credentials feature.

Mon, 07/20/2015 - 19:27

Originally posted to the Google Cloud Platform blog

When you write applications that run on Google Compute Engine instances, you might want to connect them to Google Cloud Storage, Google BigQuery, and other Google Cloud Platform services. Those services use OAuth2, the global standard for authorization, to help ensure that only the right callers can make the right calls. Unfortunately, OAuth2 has traditionally been hard to use. It often requires specialized knowledge and a lot of boilerplate auth setup code just to make an initial API call.

Today, with Application Default Credentials (ADC), we're making things easier. In many cases, all you need is a single line of auth code in your app:

Credential credential = GoogleCredential.getApplicationDefault();

If you're not already familiar with auth concepts, including 2LO, 3LO, and service accounts, you may find this introduction useful.

ADC takes all that complexity and packages it behind a single API call. Under the hood, it makes use of:

  • 2-legged vs. 3-legged OAuth (2LO vs. 3LO) -- OAuth2 includes support for user-owned data, where the user, the API provider, and the application developer all need to participate in the authorization dance. Most Cloud APIs don't deal with user-owned data, and therefore can use much simpler two-party flows between the API provider and the application developer.
  • gcloud CLI -- while you're developing and debugging your app, you probably already use the gcloud command-line tool to explore and manage Cloud Platform resources. ADC lets your application piggyback on the auth flows in gcloud, so you only have to set up your credentials once.
  • service accounts -- if your application runs on Google App Engine or Google Compute Engine, it automatically has access to the built-in "service account," which helps the API provider trust that the API calls are coming from a trusted source. ADC lets your application benefit from that trust.
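As a concrete sketch, here is roughly what ADC looks like in a small Java program that talks to Cloud Storage; the choice of service, scopes and application name is illustrative, not required by ADC itself:

import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.http.HttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.storage.Storage;
import com.google.api.services.storage.StorageScopes;

public class AdcExample {
  public static void main(String[] args) throws Exception {
    HttpTransport transport = GoogleNetHttpTransport.newTrustedTransport();
    JsonFactory jsonFactory = JacksonFactory.getDefaultInstance();

    // The single line of auth code: picks up the GCE/GAE service account
    // in production, or your gcloud credentials during development.
    GoogleCredential credential = GoogleCredential.getApplicationDefault();
    if (credential.createScopedRequired()) {
      credential = credential.createScoped(StorageScopes.all());
    }

    Storage storage = new Storage.Builder(transport, jsonFactory, credential)
        .setApplicationName("adc-example")  // hypothetical application name
        .build();
    // The client is now authorized, e.g. storage.buckets().list("my-project").
  }
}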

You can find more about Google Application Default Credentials here. This is available for Java, Python, Node.js, Ruby, and Go. Libraries for PHP and .NET are in development.

Categories: Programming

Chromecast drives higher visits, engagement and monetization for app developers

Wed, 07/15/2015 - 18:06

Posted by Jeanie Santoso, Merchandise Marketing Manager

Chromecast, our first Google Cast device, has seen great success with 17 million devices already sold and over 1.5 billion touches of the Cast button. Consumers now get all the benefits of their easy-to-use personal mobile devices, with content displayed on the largest and most beautiful screen in the house. By adding Google Cast functionality to their apps, developers can gain higher visits, engagement, and monetization. Here are four real-world examples showing how very different companies are successfully using Google Cast technology. Read on to learn more about their successes and how to get started.

Comedy Central sees 50% more videos viewed by Chromecast users

The Comedy Central app lets fans watch their favorite shows in full and on demand from mobile devices. The company created a cast-enabled app so users could bring their small screen experience to their TVs. Now with Chromecast, users watch at least 50 percent more video, with 1.5 times more visits than the average Comedy Central app user. “The user adoption and volume we saw immediately at launch was pleasantly surprising,” says Ben Hurst, senior vice president, mobile and emerging platforms, Viacom Music and Entertainment Group. “We feel that Chromecast was a clear success for the Comedy Central app.”

Read the full Comedy Central case study here

Just Dance Now sees 2.5x monetization with Chromecast users

Interactive-game giant Ubisoft adopted Google Cast technology as a new way to experience their Just Dance Now game. As the game requires a controller and a main screen on which the game is displayed, Ubisoft saw Chromecast as the simplest and most accessible way to play. Chromecast represents 30 percent of all songs launched on the game in the US. Chromecast players monetize 2.5 times more than other players: they’re more engaged, and they play longer and more often. Ubisoft has also seen more long-term subscribers with Chromecast. “The best Just Dance Now experience is on a big screen, and Chromecast brings an amazingly quick launch and ease of use for players to get into the game,” says Björn Törnqvist, Ubisoft technical director.

Read the full Just Dance Now case study here

Fitnet sees 35% higher engagement in Chromecast users

Fitnet is a mobile app that delivers video workouts and converts your smartphone’s or tablet’s camera into a biometric sensor to intelligently analyze your synchronicity with the trainer. The app provides a real-time score based on the user’s individual performance. The company turned to Chromecast to provide a compelling, integrated big-screen user experience so users don’t need to stare at a tiny display to follow along. Chromecast users now perform 35 percent better on key engagement metrics Fitnet regards as critical to its success, such as logins, exploring new workouts, and actively engaging in workout content. “Integrating with Google Cast technology has been an excellent investment of our time and resources, and a key feature that has helped us to develop a unique, compelling experience for our users,” says Bob Summers, Fitnet founder and CEO.

Read the full Fitnet case study here

Haystack TV doubled average weekly viewing time

Haystack TV is a personal news channel that lets consumers watch news on any screen, at any time. The company integrated Google Cast technology so users can browse their headline news, choose other videos to play, and even remove videos from their play queue without disrupting the current video on their TV. With Chromecast, average weekly viewing time has doubled. One-third of Haystack TV users now view their news via Chromecast. “We’re in the midst of a revolution in the world of television. More and more people are ‘cutting the cord’ and favoring over-the-top (OTT) services such as Haystack TV,” says Ish Harshawat, Haystack TV co-founder. “Chromecast is the perfect device for watching Haystack TV on the big screen. We wouldn't be where we are today without Chromecast.”

Read the full Haystack TV case study here

Integrate with Google Cast technology today

More and more developers are discovering what Google Cast technology can do for their app. Check out the Google Cast SDK for API references and take a look at our great sample apps to help get you started.

To learn more, visit developers.google.com/cast

Categories: Programming

Lighting the way with BLE beacons

Tue, 07/14/2015 - 16:06

Posted by Chandu Thota, Engineering Director and Matthew Kulick, Product Manager

Just like lighthouses have helped sailors navigate the world for thousands of years, electronic beacons can be used to provide precise location and contextual cues within apps to help you navigate the world. For instance, a beacon can label a bus stop so your phone knows to have your ticket ready, or a museum app can provide background on the exhibit you’re standing in front of. Today, we’re beginning to roll out a new set of features to help developers build apps using this technology. This includes a new open format for Bluetooth low energy (BLE) beacons to communicate with people’s devices, a way for you to add this meaningful data to your apps and to Google services, as well as a way to manage your fleet of beacons efficiently.

Eddystone: an open BLE beacon format

Working closely with partners in the BLE beacon industry, we’ve learned a lot about the needs and the limitations of existing beacon technology. So we set out to build a new class of beacons that addresses real-life use-cases, cross-platform support, and security.

At the core of what it means to be a BLE beacon is the frame format—i.e., a language—that a beacon sends out into the world. Today, we’re expanding the range of use cases for beacon technology by publishing a new and open format for BLE beacons that anyone can use: Eddystone. Eddystone is robust and extensible: It supports multiple frame types for different use cases, and it supports versioning to make introducing new functionality easier. It’s cross-platform, capable of supporting Android, iOS or any platform that supports BLE beacons. And it’s available on GitHub under the open-source Apache v2.0 license, for everyone to use and help improve.

By design, a beacon is meant to be discoverable by any nearby Bluetooth Smart device via its identifier, which is a public signal. At the same time, privacy and security are really important, so we built in a feature called Ephemeral Identifiers (EIDs), which change frequently and allow only authorized clients to decode them. EIDs will enable you to securely do things like find your luggage once you get off the plane or find your lost keys. We’ll publish the technical specs of this design soon.
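To give a flavor of the format, here is a small sketch of how an app might classify Eddystone frames, based on the published spec: a beacon broadcasts its frame as BLE service data for the 16-bit service UUID 0xFEAA, and the first byte of that data identifies the frame type. The helper below is illustrative only:

public class EddystoneFrames {
  // Frame-type values from the Eddystone specification.
  private static final byte TYPE_UID = (byte) 0x00; // opaque beacon identifier
  private static final byte TYPE_URL = (byte) 0x10; // compressed URL
  private static final byte TYPE_TLM = (byte) 0x20; // telemetry (battery, temperature, ...)

  /** Returns a human-readable name for the frame carried in 0xFEAA service data. */
  public static String frameType(byte[] serviceData) {
    if (serviceData == null || serviceData.length == 0) return "empty";
    switch (serviceData[0]) {
      case TYPE_UID: return "UID";
      case TYPE_URL: return "URL";
      case TYPE_TLM: return "TLM";
      default:       return "unknown";
    }
  }
}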


Eddystone for developers: Better context for your apps

Eddystone offers two key developer benefits: better semantic context and precise location. To support these, we’re launching two new APIs. The Nearby API for Android and iOS makes it easier for apps to find and communicate with nearby devices and beacons, such as a specific bus stop or a particular art exhibit in a museum, providing better context. And the Proximity Beacon API lets developers associate semantic location (i.e., a place associated with a lat/long) and related data with beacons, stored in the cloud. This API will also be used in existing location APIs, such as the next version of the Places API.

Eddystone for beacon manufacturers: Single hardware for multiple platforms

Eddystone’s extensible frame formats allow hardware manufacturers to support multiple mobile platforms and application scenarios with a single piece of hardware. An existing BLE beacon can be made Eddystone-compliant with a simple firmware update. Because we built Eddystone as an open, extensible and interoperable protocol, we’ll also introduce an Eddystone certification process in the near future, working closely with hardware manufacturing partners. We already have a number of partners that have built Eddystone-compliant beacons.

Eddystone for businesses: Secure and manage your beacon fleet with ease

As businesses move from validating their beacon-assisted apps to deploying beacons at scale in places like stadiums and transit stations, hardware installation and maintenance can be challenging: which beacons are working, broken, missing or displaced? So starting today, beacons that implement Eddystone’s telemetry frame (Eddystone-TLM) in combination with the Proximity Beacon API’s diagnostic endpoint can help deployers monitor their beacons’ battery health and displacement—common logistical challenges with low-cost beacon hardware.

Eddystone for Google products: New, improved user experiences

We’re also starting to improve Google’s own products and services with beacons. Google Maps launched beacon-based transit notifications in Portland earlier this year, to help people get faster access to real-time transit schedules for specific stations. And soon, Google Now will also be able to use this contextual information to help prioritize the most relevant cards, like showing you menu items when you’re inside a restaurant.

We want to make beacons useful even when a mobile app is not available; to that end, the Physical Web project will be using Eddystone beacons that broadcast URLs to help people interact with their surroundings.

Beacons are an important way to deliver better experiences for users of your apps, whether you choose to use Eddystone with your own products and services or as part of a broader Google solution like the Places API or Nearby API. The ecosystem of app developers and beacon manufacturers is important in pushing these technologies forward and the best ideas won’t come from just one company, so we encourage you to get some Eddystone-supported beacons today from our partners and begin building!

Update (July 16, 2015, 11:30 AM PST): To clarify, beacons registered with proper place identifiers (as defined in our Places API) will be used in Place Picker. You have to use the Proximity Beacon API to map a beacon to a place identifier. See the post on Google's Geo Developer Blog for more details.

Categories: Programming

Connect With the World Around You Through Nearby APIs

Tue, 07/14/2015 - 16:03

Posted by Akshay Kannan, Product Manager

Mobile phones have made it easy to communicate with anyone, whether they’re right next to you or on the other side of the world. The great irony, however, is that those interactions can often feel really awkward when you're sitting right next to someone.

Today, it takes several steps -- whether it’s exchanging contact information, scanning a QR code, or pairing via bluetooth -- to get a simple piece of information to someone right next to you. Ideally, you should be able to just turn to them and do so, the same way you do in the real world.

This is why we built Nearby. Nearby provides a proximity API, Nearby Messages, for iOS and Android devices to discover and communicate with each other, as well as with beacons.

Nearby uses a combination of Bluetooth, Wi-Fi, and inaudible sound (using the device’s speaker and microphone) to establish proximity. We’ve incorporated Nearby technology into several products, including Chromecast Guest Mode, Nearby Players in Google Play Games, and Google Tone.

With the latest release of Google Play services 7.8, the Nearby Messages API becomes available to all developers across iOS and Android devices (Gingerbread and higher). Nearby doesn’t use or require a Google Account. The first time an app calls Nearby, users get a permission dialog to grant that app access.
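For a feel of the API, here is a minimal sketch of publish and subscribe, assuming a connected GoogleApiClient named client that was built with Nearby.MESSAGES_API; error handling is omitted:

// Publish a small payload to devices around you.
Message message = new Message("hello nearby".getBytes());
Nearby.Messages.publish(client, message);

// Listen for messages published by nearby devices.
Nearby.Messages.subscribe(client, new MessageListener() {
  @Override
  public void onFound(Message m) {
    Log.i("Nearby", "Found: " + new String(m.getContent()));
  }

  @Override
  public void onLost(Message m) {
    Log.i("Nearby", "Lost: " + new String(m.getContent()));
  }
});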

A few of our partners have built creative experiences to show what's possible with Nearby.

Edjing Pro uses Nearby to let DJs publish their tracklist to people around them. The audience can vote on tracks that they like, and their votes are updated in real time.

Trello uses Nearby to simplify sharing. Share a Trello board to the people around you with a tap of a button.

Pocket Casts uses Nearby to let you find and compare podcasts with people around you. Open the Nearby tab in Pocket Casts to view a list of podcasts that people around you have, as well as podcasts that you have in common with others.

Trulia uses Nearby to simplify the house hunting process. Create a board and use Nearby to make it easy for the people around you to join it.

To learn more, visit developers.google.com/nearby.

Categories: Programming

This is Material Design Lite

Mon, 07/13/2015 - 18:05

Posted by Addy Osmani, Staff Developer Platform Engineer

Back in 2014, Google published the material design specification with a goal to provide guidelines for good design and beautiful UI across all device form factors. Today we are releasing our first effort to bring this to websites using vanilla CSS, HTML and JavaScript. We’re calling it Material Design Lite (MDL).

MDL makes it easy to add a material design look and feel to your websites. The “Lite” part of MDL comes from several key design goals: MDL has few dependencies, making it easy to install and use. It is framework-agnostic, meaning MDL can be used with any of the rapidly changing landscape of front-end tool chains. MDL has a low overhead in terms of code size (~27KB gzipped), and a narrow focus—enabling material design styling for websites.

Get started now and give it a spin or try one of our examples on CodePen.

MDL is a complementary implementation to the Paper elements built with Polymer. The Paper elements are fully encapsulated components that can be used individually or composed together to create a material design-style site, and support more advanced user interaction. That said, MDL can be used alongside the Polymer element counterparts.

Out-of-the-box Templates

MDL optimises for websites heavy on content, such as marketing pages, text articles and blogs. We've built responsive templates to show the breadth of sites that can be created using MDL; they can be downloaded from our Templates page. We hope these inspire you to build great looking sites.

The templates include blogs, text-heavy content sites, dashboards, standalone articles, and more.

Technical details and browser support

MDL includes a rich set of components, including material design buttons, text-fields, tooltips, spinners and many more. It also includes a responsive grid and breakpoints that adhere to the new material design adaptive UI guidelines.

The MDL sources are written in Sass using BEM. While we hope you'll use our theme customizer or pre-built CSS, you can also download the MDL sources from GitHub and build your own version. The easiest way to use MDL is by referencing our CDN, but you can also download the CSS or import MDL via npm or Bower.
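As an illustration, getting started via the CDN can be as small as the snippet below; the version number in the paths reflects the release current at the time of writing, so check the getting-started docs for the latest:

<!-- MDL styles, icon font and scripts from the CDN. -->
<link rel="stylesheet" href="https://fonts.googleapis.com/icon?family=Material+Icons">
<link rel="stylesheet" href="https://storage.googleapis.com/code.getmdl.io/1.0.0/material.indigo-pink.min.css">
<script defer src="https://storage.googleapis.com/code.getmdl.io/1.0.0/material.min.js"></script>

<!-- A material design raised button with a ripple effect. -->
<button class="mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect">
  Click me
</button>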

The complete MDL experience works in all modern evergreen browsers (Chrome, Firefox, Opera, Edge) and Safari, but gracefully degrades to CSS-only in browsers like IE9 that don’t pass our Cutting-the-mustard test. Our browser compatibility matrix has the most up to date information on the browsers MDL officially supports.

More questions?

We've been working with the designers evolving material design to build in additional thinking for the web. This includes working on solutions for responsive templates, high-performance typography and missing components like badges. MDL is spec compliant for today and provides guidance on aspects of the spec that are still being evolved. As with the material design spec itself, your feedback and questions will help us evolve MDL, and in turn, how material design works on the web.

We’re sure you have plenty of questions and we have tried to cover some of them in our FAQ. Feel free to hit us up on GitHub or Stack Overflow if you have more. :)

Wrapping up

MDL is built on the core technologies of the web you already know and use every day—CSS, HTML and JS. By adopting MDL into your projects, you gain access to an authoritative and highly curated implementation of material design for the web. We can’t wait to see the beautiful, modern, responsive websites you’re going to build with Material Design Lite.

Categories: Programming

Cast Remote Display API: Processing

Fri, 07/10/2015 - 18:08

Posted by Leon Nicholls, Developer Programs Engineer

Remote Display on Google Cast allows your app to display on both your mobile device and your Cast device at the same time. Processing is a programming language that allows artists and hobbyists to create advanced graphics and interactive exhibitions. By putting these two things together we were able to quickly create stunning visual art and display it on the big screen just by bringing our phone to the party or gallery. This article describes how we added support for the Google Cast Remote Display APIs to Processing for Android and how you can too.

An example app from the popular Processing toxiclibs library on Cast. Download the code and run it on your own Chromecast!

A little background

Processing has its own IDE and has many contributed libraries that hide the technical details of various input, output and rendering technologies. Users of Processing with just basic programming skills can create complicated graphical scenes and visualizations.

To write a program in the Processing IDE you create a “sketch”, which involves adding code to life-cycle callbacks that initialize and draw the scene. You can run the sketch as a Java program on your desktop. You can also enable support for Processing for Android and then run the same sketch as an app on your Android mobile device. Processing for Android also supports touch events and sensor data for interacting with the generated apps.

Instead of just viewing the graphics on the small screen of the Android device, we can do better by projecting the graphics on a TV screen. Google Cast Remote Display APIs makes it easy to bring graphically intensive apps to Google Cast receivers by using the GPUs, CPUs and sensors available on the mobile devices you already have.

How we did it

Adding support for Remote Display involved modifying the Processing for Android Mode source code. To compile the Android Mode you first need to compile the source code of the Processing IDE. We started with the source code of the current stable release version 2.2.1 of the Processing IDE and compiled it using its Ant build script (detailed instructions are included along with the code download). We then downloaded the Android SDK and source code for the Android Mode 0232. After some minor changes to its build config to support the latest Android SDK version, we used Ant to build the Android Mode zip file. The zip file was unzipped into the Processing IDE modes directory.

We then used the IDE to open one of the Processing example sketches and exported it as an Android project. In the generated project we replaced the processing-core.jar library with the source code for Android Mode. We also added a Gradle build config to the project and then imported the project into Android Studio.

The main Activity for a Processing app is a descendent of the Android Mode PApplet class. The PApplet class uses a GLSurfaceView for rendering 2D and 3D graphics. We needed to change the code to use that same GLSurfaceView for the Remote Display API.

It is a requirement in the Google Cast Design Checklist for the Cast button to be visible on all screens. We changed PApplet to be an ActionBarActivity so that we can show the Cast button in the action bar. The Cast button was added by using a MediaRouteActionProvider. To only list Google Cast devices that support Remote Display, we used a MediaRouteSelector with an App ID we obtained from the Google Cast SDK Developer Console for a Remote Display Receiver.

Next, we created a class called PresentationService that extends CastRemoteDisplayLocalService. The service allows the app to keep the remote display running even when it goes into the background. The service requires a CastPresentation instance for displaying its content. The CastPresentation instance uses the GLSurfaceView from the PApplet class for its content view. However, setting the CastPresentation content view requires some changes to PApplet so that the GLSurfaceView isn’t initialized in its onCreate, but waits until the service onRemoteDisplaySessionStarted callback is invoked.

When the user selects a Cast device in the Cast button menu and the MediaRouter onRouteSelected event is called, the service is started with CastRemoteDisplayLocalService.startService. When the user disconnects from a Cast device using the Cast button, MediaRouter onRouteUnselected event is called and the service is stopped by using CastRemoteDisplayLocalService.stopService.
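Putting these pieces together, a sketch of the service and the start/stop calls might look like this. PresentationService matches the class described above, while ProcessingPresentation (a CastPresentation subclass whose content view wraps the sketch's GLSurfaceView), REMOTE_DISPLAY_APP_ID, notificationSettings and callbacks are placeholders you would build yourself; exact callback signatures may vary across Google Play services releases:

public class PresentationService extends CastRemoteDisplayLocalService {
  private CastPresentation presentation;

  @Override
  public void onCreatePresentation(Display display) {
    // Invoked once the remote display session starts; it is now safe to
    // attach the GLSurfaceView, rather than in the Activity's onCreate.
    presentation = new ProcessingPresentation(getApplicationContext(), display);
    presentation.show();
  }

  @Override
  public void onDismissPresentation() {
    if (presentation != null) {
      presentation.dismiss();
      presentation = null;
    }
  }
}

// In the Activity: start casting when the user selects a route...
CastRemoteDisplayLocalService.startService(activity, PresentationService.class,
    REMOTE_DISPLAY_APP_ID, castDevice, notificationSettings, callbacks);

// ...and stop when the route is unselected.
CastRemoteDisplayLocalService.stopService();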

For the mobile display, we display an image bitmap and forward the PApplet touch events to the existing surfaceTouchEvent method. When you run the Android app, you can use touch gestures on the display of the mobile device to control the interaction on the TV. Take a look at this video of some of the Processing apps running on a Chromecast.

Most of the new code is contained in the PresentationService and RemoteDisplayHelper classes. Your mobile device needs to have at least Android KitKat and Google Play services version 7.5.71.

You can too

Now you can try the Remote Display APIs in your Processing apps. Instead of changing the generated code every time you export your Android Mode project, we recommend that you use our project as a base and simply copy your generated Android code and libraries to our project. Then simply modify the project build file and update the manifest to start the app with your sketch’s main Activity.

For a more detailed description of how to use the Remote Display APIs, read our developer documentation. We are eager to see what Processing artists can do with this code in their projects.

Categories: Programming

What’s new with Google Fit: Distance, Calories, Meal data, and new apps and wearables

Tue, 06/30/2015 - 18:52

Posted by Angana Ghosh, Lead Product Manager, Google Fit

To help users keep track of their physical activity, we recently updated the Google Fit app with some new features, including an Android Wear watch face that helps users track their progress throughout the day. We also added new data types to the Google Fit SDK, and new partners are now tracking data (such as nutrition and sleep) that developers can use in their own apps. Find out how to integrate Google Fit into your app and read on to check out some of the cool new stuff you can do.


Distance traveled per day

The Google Fit app now computes the distance traveled per day. Subscribe to it using the Recording API and query it using the History API.
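For example, a subscription might look like this (a sketch assuming a connected GoogleApiClient, mClient, with the appropriate Fitness scopes; TAG and mClient are our own names, and imports are omitted):

Fitness.RecordingApi.subscribe(mClient, DataType.TYPE_DISTANCE_DELTA)
        .setResultCallback(new ResultCallback<Status>() {
            @Override
            public void onResult(Status status) {
                if (status.isSuccess()) {
                    Log.i(TAG, "Subscribed to distance data");
                }
            }
        });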

Calories burned per day

If a user has entered their details into the Google Fit app, the app now computes their calories burned per day. Subscribe to it using the Recording API and query it using the History API.
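A sketch of reading that daily total with the History API (again assuming a connected GoogleApiClient, mClient; readDailyTotal is available in recent Play services versions):

Fitness.HistoryApi.readDailyTotal(mClient, DataType.TYPE_CALORIES_EXPENDED)
        .setResultCallback(new ResultCallback<DailyTotalResult>() {
            @Override
            public void onResult(DailyTotalResult result) {
                DataSet total = result.getTotal();
                if (total != null && !total.isEmpty()) {
                    float kcal = total.getDataPoints().get(0)
                            .getValue(Field.FIELD_CALORIES).asFloat();
                    Log.i(TAG, "Calories burned today: " + kcal);
                }
            }
        });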

Nutrition data from LifeSum, Lose It!, and MyFitnessPal

LifeSum and Lose It! are now writing nutrition data, such as calories consumed, macronutrients (proteins, carbs, fats), and micronutrients (vitamins and minerals), to Google Fit. MyFitnessPal will start writing this data soon, too. Query it from Google Fit using the History API.
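A sketch of such a query (assuming a connected GoogleApiClient, mClient, and startTime/endTime timestamps in milliseconds; when using await, call readData off the main thread):

DataReadRequest request = new DataReadRequest.Builder()
        .read(DataType.TYPE_NUTRITION)
        .setTimeRange(startTime, endTime, TimeUnit.MILLISECONDS)
        .build();

DataReadResult result =
        Fitness.HistoryApi.readData(mClient, request).await(1, TimeUnit.MINUTES);
for (DataSet dataSet : result.getDataSets()) {
    for (DataPoint dp : dataSet.getDataPoints()) {
        // FIELD_NUTRIENTS maps nutrient names (e.g. "calories", "protein") to values.
        Log.i(TAG, "Nutrition entry: " + dp.getValue(Field.FIELD_NUTRIENTS));
    }
}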

Sleep activity from Basis Peak and Sleep as Android

Basis Peak and Sleep as Android are now writing sleep activity segments to Google Fit. Query this data using the History API.

New workout sessions and activity data from even more great apps and fitness wearables!

Endomondo, Garmin, Daily Burn, Basis Peak, and the Xiaomi Mi Band are new Google Fit partners that will allow users to store their workout sessions and activity data. Developers can access this data with the user's permission, and it will also be shown in the Google Fit app.

How are developers using the Google Fit platform?

Partners like LifeSum and Lose It! are reading all-day activity to help users keep track of their physical activity in their favorite fitness app.

Runkeeper now shows a Google Now card to its users encouraging them to “work off” their meals, based on their meals written to Google Fit by other apps.

Instaweather has integrated Google Fit into a new Android Wear watch face that they're testing in beta. To try out the face, first join this Google+ community and then follow the link to join the beta and download the app.

We hope you enjoy checking out these Google Fit updates. Thanks to all our partners for making it possible! Find out more about integrating the Google Fit SDK into your app.

Categories: Programming

Quake® III on your TV with Cast Remote Display API

Wed, 06/24/2015 - 18:45

Posted by Leon Nicholls, Developer Programs Engineer and Antonio Fontan, Software Engineer

At Google I/O 2015 we announced the new Google Cast Remote Display APIs for Android and iOS that make it easy for mobile developers to bring graphically intensive apps or games to Google Cast receivers. Now you can use the powerful GPUs, CPUs and sensors of the mobile device in your pocket to render both a local display and a virtual one to the TV. This dual display model also allows you to design new game experiences for the display on the mobile device to show maps, game pieces and private game information.

We wanted to show you how easy it is to take an existing high-performance game and run it on a Chromecast, so we decided to port the classic Quake® III Arena open source engine to support Cast Remote Display. We reached out to id Software, and they thought it was a cool idea too. When all was said and done, we were able to present the game in 720p at 60 fps during our 2015 I/O session, "Google Cast Remote Display APIs for Games"!

During the demo we used a wired USB game controller to play the game, but we've also experimented with using the mobile device's sensors, a Bluetooth controller, a toy gun, and even a dance mat as game controllers.

Since you're probably wondering how you can do this too, here are the details of how we added Cast Remote Display to Quake. The game engine was not modified in any way, and the whole process took less than a day, with most of our time spent removing UI code not needed for the demo. We started with an existing source port of Quake III to Android, which incorporates code from the kwaak3 and ioquake3 projects.

Next, we registered a Remote Display App ID using the Google Cast SDK Developer Console. There’s no need to write a Cast receiver app as the Remote Display APIs are supported natively by all Google Cast receivers.

To render the local display, we converted the existing main Activity to an ActionBarActivity. To discover devices and let the user select a Cast device to connect to, we added a Cast button to the action bar using a MediaRouteActionProvider. We then set the MediaRouteSelector for the MediaRouter using the App ID we had obtained, and added a callback listener using MediaRouter.addCallback. We also modified the existing code to display an image bitmap on the local display.
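The discovery lifecycle follows the usual MediaRouter pattern; a sketch (mMediaRouter, mMediaRouteSelector and mMediaRouterCallback are fields set up as described above):

@Override
protected void onResume() {
    super.onResume();
    // Start scanning for Cast devices that match our selector.
    mMediaRouter.addCallback(mMediaRouteSelector, mMediaRouterCallback,
            MediaRouter.CALLBACK_FLAG_REQUEST_DISCOVERY);
}

@Override
protected void onPause() {
    // Stop active scanning while the Activity is not in the foreground.
    mMediaRouter.removeCallback(mMediaRouterCallback);
    super.onPause();
}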

To render the remote display, we extended CastPresentation and called setContentView with the game’s existing GLSurfaceView instance. Think of the CastPresentation as the Activity for the remote display. The game audio engine was also started at that point.
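In outline, such a presentation could look like this (a sketch; GamePresentation and the way the GLSurfaceView is passed in are our own choices, not the demo's exact code):

public class GamePresentation extends CastPresentation {

    private final GLSurfaceView mSurfaceView;

    public GamePresentation(Context context, Display display, GLSurfaceView surfaceView) {
        super(context, display);
        mSurfaceView = surfaceView;
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Reuse the game's existing GL surface as the remote display content;
        // the game audio engine is also started at this point.
        setContentView(mSurfaceView);
    }
}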

Next we created a service extending CastRemoteDisplayLocalService which would then create an instance of our CastPresentation class. The service will manage the remote display even when the local app goes into the background. The service automatically provides a convenient notification to allow the user to dismiss the remote display.

We then start the service when the MediaRouter onRouteSelected event is called, using CastRemoteDisplayLocalService.startService, and stop it when the MediaRouter onRouteUnselected event is called, using CastRemoteDisplayLocalService.stopService.

For a more detailed description of how to use the Remote Display APIs, read our developer documentation. We have also published a UX-compliant sample app on GitHub.

You can download the code that we used for the demo. To run the app, you have to compile it using Gradle or Android Studio. You will also need to copy the "baseq3" folder from your Quake III game to the "qiii4a" folder in the root of your Android device's SD card. Your mobile device needs to be running at least Android KitKat and Google Play services version 7.5.71.

With 17 million Chromecast devices sold and 1.5 billion touches of the Cast button, the opportunity for developers is huge, and it’s simple to add this extra functionality to an existing game. We're eager to see what amazing experiences you create using the Cast Remote Display APIs.

QUAKE II © 1997 and QUAKE III © 1999 id Software LLC, a ZeniMax Media company. QUAKE is a trademark or registered trademark of id Software LLC in the U.S. and/or other countries. QUAKE game assets used under license from id Software LLC. All Rights Reserved.

QIII4A © 2012 n0n3m4. GNU General Public License.

Q3E © 2012 n0n3m4. GNU General Public License.

Categories: Programming