
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Programming

Announcing the Simple Programmer Podcast

Making the Complex Simple - John Sonmez - Tue, 05/26/2015 - 15:30

If you’ve been following Simple Programmer for a while, you might have been wondering when I was going to create a Simple Programmer podcast. Well, that day has finally come. In fact, by the time you are reading this, 4 episodes have already been published! I’ve wanted to create an official Simple Programmer podcast for […]

The post Announcing the Simple Programmer Podcast appeared first on Simple Programmer.

Categories: Programming

Neo4j: The foul revenge graph

Mark Needham - Tue, 05/26/2015 - 08:03

Last week I was showing the foul graph to my colleague Alistair, who came up with the idea of running a ‘foul revenge’ query to find out which players gained revenge for a foul with one of their own later in the match.

Queries like this are very path centric and therefore work well in a graph. To recap, this is what the foul graph looks like:

[Image: the foul graph model]

The first thing that we need to do is connect the fouls in a linked list based on time so that we can query their order more easily.

We can do this with the following query:

MATCH (foul:Foul)-[:COMMITTED_IN_MATCH]->(match)
WITH foul,match
ORDER BY match.id, foul.sortableTime
WITH match, COLLECT(foul) AS fouls
FOREACH(i in range(0, length(fouls) -2) |
  FOREACH(foul1 in [fouls[i]] | FOREACH (foul2 in [fouls[i+1]] |
    MERGE (foul1)-[:NEXT]->(foul2)
)));

This query collects fouls grouped by match and then adds a ‘NEXT’ relationship between adjacent fouls. The graph now looks like this:

[Image: the foul graph with NEXT relationships between adjacent fouls]

Now let’s find the revenge foulers in the Bayern Munich vs Barcelona match. We’re looking for the following pattern:

[Image: the revenge foul pattern we are matching]

This translates to the following Cypher query:

match (foul1:Foul)-[:COMMITTED_AGAINST]->(app1)-[:COMMITTED_FOUL]->(foul2)-[:COMMITTED_AGAINST]->(app2)-[:COMMITTED_FOUL]->(foul1),
      (player1)-[:MADE_APPEARANCE]->(app1), (player2)-[:MADE_APPEARANCE]->(app2),
      (foul1)-[:COMMITTED_IN_MATCH]->(match:Match {id: "32683310"})<-[:COMMITTED_IN_MATCH]-(foul2)
WHERE (foul1)-[:NEXT*]->(foul2)
RETURN player2.name AS firstFouler, player1.name AS revengeFouler, foul1.time, foul1.location, foul2.time, foul2.location

I’ve added in a few extra parts to the pattern to pull out the players involved and to find the revenge foulers in a specific match – the Bayern Munich vs Barcelona Semi Final 2nd leg.


We end up with the following revenge fouls:

[Table: revenge fouls returned by the query]

We can see here that Dani Alves actually gains revenge on Bastian Schweinsteiger twice for a foul he made in the 10th minute.

If we tweak the query to the following we can get a visual representation of the revenge fouls as well:

match (foul1:Foul)-[:COMMITTED_AGAINST]->(app1)-[:COMMITTED_FOUL]->(foul2)-[:COMMITTED_AGAINST]->(app2)-[:COMMITTED_FOUL]->(foul1),
      (player1)-[:MADE_APPEARANCE]->(app1), (player2)-[:MADE_APPEARANCE]->(app2),
      (foul1)-[:COMMITTED_IN_MATCH]->(match:Match {id: "32683310"})<-[:COMMITTED_IN_MATCH]-(foul2),
      (foul1)-[:NEXT*]->(foul2)
RETURN *

[Image: graph visualisation of the revenge fouls]

At the moment I’ve restricted the revenge concept to single matches but I wonder whether it’d be more interesting to create a linked list of fouls which crosses matches between teams in the same season.

The code for all of this is on GitHub – the README is a bit sketchy at the moment but I’ll be fixing that up soon.

Categories: Programming

Game Performance: Geometry Instancing

Android Developers Blog - Tue, 05/26/2015 - 01:57

Posted by Shanee Nishry, Games Developer Advocate

Imagine a beautiful virtual forest with countless trees, plants and vegetation, or a stadium with countless people in the crowd cheering. If you are heroic you might like the idea of an epic battle between armies.

Rendering many meshes is needed to create a beautiful scene like a forest, a cheering crowd or an army, but doing so naively is quite costly and reduces the frame rate. Fortunately this can be done efficiently using a simple technique called Geometry Instancing.

Geometry instancing can be used in 2D games for rendering a large number of sprites, or in 3D for things like particles, characters and environment.

The NDK code sample More Teapots, which demonstrates the content of this article, can be found with the NDK inside the samples folder and in the git repository.

Support and Extensions

Geometry instancing is available from OpenGL ES 3.0 onwards and on OpenGL ES 2.0 devices which support the GL_NV_draw_instanced or GL_EXT_draw_instanced extensions. More information on how to use the extensions is shown in the More Teapots demo.

Overview

Submitting draw calls causes OpenGL to queue commands to be sent to the GPU; this has an expensive overhead which may affect performance. This overhead grows when changing states such as the alpha blending function, active shader, textures and buffers.

Geometry Instancing is a technique that combines multiple draws of the same mesh into a single draw call, resulting in reduced overhead and potentially increased performance. This works even when different transformations are required.

The algorithm

To explain how Geometry Instancing works let’s quickly overview traditional drawing.

Traditional Drawing

To draw a mesh you’d usually prepare a vertex buffer and an index buffer, bind your shader and buffers, set your uniforms such as a World View Projection matrix and make a draw call.

To draw multiple instances using the same mesh you set new uniform values for the transformations and other data and call draw again. This is repeated for every instance.

Drawing with Geometry Instancing

Geometry Instancing reduces CPU overhead by reducing the sequence described above into a single buffer and draw call.

It works by using an additional buffer which contains the custom per-instance data needed by your shader, such as transformations, color and lighting data.

The first change to your workflow is to create the additional buffer at the initialization stage.

To put it into code let’s define an example per-instance data that includes a world view projection matrix and a color:

C++

struct PerInstanceData
{
 Mat4x4 WorldViewProj;
 Vector4 Color;
};

You also need to add the structure to your shader. The easiest way is by creating a Uniform Block with an array:

GLSL

#define MAX_INSTANCES 512

layout(std140) uniform PerInstanceData {
    struct
    {
        mat4      uMVP;
        vec4      uColor;
    } Data[ MAX_INSTANCES ];
};

Note that uniform blocks have limited sizes. You can find the maximum number of bytes you can use by querying for GL_MAX_UNIFORM_BLOCK_SIZE using glGetIntegerv.

Example:

GLint max_block_size = 0;
glGetIntegerv( GL_MAX_UNIFORM_BLOCK_SIZE, &max_block_size );

Bind the uniform block on the CPU in your program’s initialization stage:

C++

#define MAX_INSTANCES 512
#define BINDING_POINT 1
GLuint shaderProgram; // Compiled shader program

// Bind Uniform Block
GLuint blockIndex = glGetUniformBlockIndex( shaderProgram, "PerInstanceData" );
glUniformBlockBinding( shaderProgram, blockIndex, BINDING_POINT );

And create a corresponding uniform buffer object:

C++

// Create Instance Buffer
GLuint instanceBuffer;

glGenBuffers( 1, &instanceBuffer );
glBindBuffer( GL_UNIFORM_BUFFER, instanceBuffer );
glBindBufferBase( GL_UNIFORM_BUFFER, BINDING_POINT, instanceBuffer );

// Initialize buffer size
glBufferData( GL_UNIFORM_BUFFER, MAX_INSTANCES * sizeof( PerInstanceData ), NULL, GL_DYNAMIC_DRAW );

The next step is to update the instance data every frame to reflect changes to the visible objects you are going to draw. Once you have your new instance buffer you can draw everything with a single call to glDrawElementsInstanced.

You update the instance buffer using glMapBufferRange. This function locks the buffer and retrieves a pointer to the byte data allowing you to copy your per-instance data. Unlock your buffer using glUnmapBuffer when you are done.

Here is a simple example for updating the instance data:

const int NUM_SCENE_OBJECTS = …; // number of objects visible in your scene which share the same mesh

// Bind the buffer
glBindBuffer( GL_UNIFORM_BUFFER, instanceBuffer );

// Retrieve pointer to map the data
PerInstanceData* pBuffer = (PerInstanceData*) glMapBufferRange( GL_UNIFORM_BUFFER, 0,
                NUM_SCENE_OBJECTS * sizeof( PerInstanceData ),
                GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_RANGE_BIT );

// Iterate the scene objects
for ( int i = 0; i < NUM_SCENE_OBJECTS; ++i )
{
    pBuffer[ i ].WorldViewProj = ... // Copy World View Projection matrix
    pBuffer[ i ].Color = …               // Copy color
}

glUnmapBuffer( GL_UNIFORM_BUFFER ); // Unmap the buffer

And finally you can draw everything with a single call to glDrawElementsInstanced or glDrawArraysInstanced (depending if you are using an index buffer):

glDrawElementsInstanced( GL_TRIANGLES, NUM_INDICES, GL_UNSIGNED_SHORT, 0,
                NUM_SCENE_OBJECTS );

You are almost done! There is just one more step to do. In your shader you need to make use of the new uniform buffer object for your transformations and colors. In your shader main program:

void main()
{
    …
    gl_Position = PerInstanceData.Data[ gl_InstanceID ].uMVP * inPosition;
    outColor = PerInstanceData.Data[ gl_InstanceID ].uColor;
}

You might have noticed the use of gl_InstanceID. This is a predefined OpenGL vertex shader variable that tells your program which instance it is currently drawing. Using this variable your shader can index into the instance data and match the correct transformation and color for every vertex.

That’s it! You are now ready to use Geometry Instancing. If you are drawing the same mesh multiple times in a frame make sure to implement Geometry Instancing in your pipeline! This can greatly reduce overhead and improve performance.

Categories: Programming

Power Great Gaming with New Analytics from Play Games

Android Developers Blog - Tue, 05/26/2015 - 01:22

By Ben Frenkel, Google Play Games team

A few weeks ago at the Game Developers Conference (GDC), we announced Play Games Player Analytics, a new set of free reports to help you manage your games business and understand in-game player behavior. Today, we’re excited to make these new tools available to you in the Google Play Developer Console.

Analytics is a key component of running a game as a service, which is increasingly becoming a necessity for running a successful mobile gaming business. When you take a closer look at large developers that do this successfully, you find that they do three things really well:

  • Manage their business to revenue targets
  • Identify hot spots in their business metrics so they can continuously focus on the game updates that will drive the most impact
  • Use analytics to understand how players are progressing, spending, and churning

“With player engagement and revenue data living under one roof, developers get a level of data quality that is simply not available to smaller teams without dedicated staff. As the tools evolve, I think Google Play Games Player Analytics will finally allow indie devs to confidently make data-driven changes that actually improve revenue.”

Kevin Pazirandeh
Developer of Zombie Highway 2

With Player Analytics, we wanted to make these capabilities available to the entire developer ecosystem on Google Play in a frictionless, easy-to-use way, freeing up your precious time to create great gaming experiences. Small studios, including the makers of Zombie Highway 2 and Bombsquad, have already started to see the benefits and impact of Player Analytics on their business.

Further, if you integrate with Google Play game services, you get this set of analytics with no incremental effort. But, for a little extra work, you can also unlock another set of high impact reports by integrating Google Play game services Events, starting with the Sources and Sinks report, a report to help you balance your in-game economy.

If you already have a game integrated with Google Play game services, go check out the new reports in the Google Play Developer Console today. For everyone else, enabling Player Analytics is as simple as adding a handful of lines of code to your game to integrate Google Play game services.
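
For illustration, a minimal sketch of that integration is shown below (sign-in failure handling and connection callbacks are omitted, and the activity name is a placeholder): build a GoogleApiClient with the Games API and connect it.

import android.app.Activity;
import android.os.Bundle;

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.games.Games;

public class GameActivity extends Activity {

    private GoogleApiClient apiClient;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // The "handful of lines": a client with the Games API added.
        // Sign-in / connection failure handling is omitted from this sketch.
        apiClient = new GoogleApiClient.Builder(this)
                .addApi(Games.API)
                .addScope(Games.SCOPE_GAMES)
                .build();
    }

    @Override
    protected void onStart() {
        super.onStart();
        apiClient.connect();
    }

    @Override
    protected void onStop() {
        super.onStop();
        apiClient.disconnect();
    }
}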

Manage your business to revenue targets

Set your spend target in Player Analytics by choosing a daily goal

To help assess the health of your games business, Player Analytics enables you to select a daily in-app purchase revenue target and then assess how you're doing against that goal through the Target vs Actual report depicted below. Learn more.

Identify hot spots using benchmarks with the Business Drivers report

Ever wonder how your game’s performance stacks up against other games? Player Analytics tells you exactly how well you are doing compared to similar games in your category.

Metrics highlighted in red are below the benchmark. Arrows indicate whether a metric is trending up or down, and any cell with the icon can be clicked to see more details about the underlying drivers of the change. Learn more.

Track player retention by new user cohort

In the Retention report, you can see the percentage of players that continued to play your game on each of the seven days after installing it.

Learn more.

See where players are spending their time, struggling, and churning with the Player Progression report

Measured by the number of achievements players have earned, the Player Progression funnel helps you identify where your players are struggling and churning to help you refine your game and, ultimately, improve retention. Add more achievements to make progression tracking more precise.

Learn more.

Manage your in-game economy with the Sources and Sinks report

The Sources and Sinks report helps you balance your in-game economy by showing the relationship between how quickly players are earning or buying and using resources.

For example, Eric Froemling, the one-man developer of BombSquad, used the Sources & Sinks report to help balance the rate at which players earned and spent tickets.

Read more about Eric’s experience with Player Analytics in his recent blog post.

To enable the Sources and Sinks report you will need to create and integrate Play game services Events that track sources of premium currency (e.g., gold coins earned), and sinks of premium currency (e.g., gold coins spent to buy in-app items).
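
As a rough sketch of what those Events look like in code (the event IDs below are hypothetical placeholders; real IDs are generated when you define the Events in the Developer Console), incrementing a source and a sink could be as simple as:

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.games.Games;

public class EconomyTracker {

    // Hypothetical event IDs; replace with the IDs of your own Events.
    private static final String EVENT_COINS_EARNED = "YOUR_EARNED_EVENT_ID";
    private static final String EVENT_COINS_SPENT = "YOUR_SPENT_EVENT_ID";

    private final GoogleApiClient apiClient;

    public EconomyTracker(GoogleApiClient apiClient) {
        this.apiClient = apiClient;
    }

    // Call whenever the player earns premium currency (a "source").
    public void recordCoinsEarned(int amount) {
        Games.Events.increment(apiClient, EVENT_COINS_EARNED, amount);
    }

    // Call whenever the player spends premium currency (a "sink").
    public void recordCoinsSpent(int amount) {
        Games.Events.increment(apiClient, EVENT_COINS_SPENT, amount);
    }
}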

Categories: Programming

Integrate Play data into your workflow with data exports

Android Developers Blog - Tue, 05/26/2015 - 00:09

Posted by Frederic Mayot, Google Play team

The Google Play Developer Console makes a wealth of data available to you so you have the insight needed to successfully publish, grow, and monetize your apps and games. We appreciate that some developers want to access and analyze their data beyond the visualization offered today in the Developer Console, which is why we’ve made financial information, crash data, and user reviews available for export. We're now also making all the statistics on your apps and games (installs, ratings, GCM usage, etc.) accessible via Google Cloud Storage.

New Reports section in the Google Play Developer Console

We’ve added a Reports tab to the Developer Console so that you can view and access all available data exports in one place.

A reliable way to access Google Play data

This is the easiest and most reliable way to download your Google Play Developer Console statistics. You can access all of your reports, including install statistics, reviews, crashes, and revenue.

Programmatic access to Google Play data

This new Google Cloud Storage access will open up a wealth of possibilities. For instance, you can now programmatically:

  • import install and revenue data into your in-house dashboard
  • run custom analysis
  • import crashes and ANRs into your bug tracker
  • import reviews into your CRM to monitor feedback and reply to your users

Your data is available in a Google Cloud Storage bucket, which is most easily accessed using gsutil. To get started, follow these three simple steps to access your reports:

  1. Install the gsutil tool.
    • Authenticate to your account using your Google Play Developer Console credentials.
  2. Find your reporting bucket ID on the new Reports section.
    • Your bucket ID begins with: pubsite_prod_rev (example: pubsite_prod_rev_1234567890)
  3. Use the gsutil ls command to list directories/reports and gsutil cp to copy the reports. Your reports are organized in directories by package name, as well as year and month of their creation.

Read more about exporting report data in the Google Play Developer Help Center.

Note about data ownership on Google Play and Cloud Platform: Your Google Play developer account is gaining access to a dedicated, read-only Google Cloud Storage bucket owned by Google Play. If you’re a Google Cloud Storage customer, the rest of your data is unaffected and not connected to your Google Play developer account. Google Cloud Storage customers can find out more about their data storage on the terms of service page.

Categories: Programming

A New Reference App for Multi-device Applications

Android Developers Blog - Mon, 05/25/2015 - 23:57
It is now possible to bring the benefits of your app to your users wherever they happen to be, no matter what device they have near them. Today we’re releasing a reference sample that shows how to implement such a service with an app that works across multiple Android form-factors. This sample, the Universal Music Player, is a bare-bones but functional reference app that supports multiple devices and form factors in a single codebase. It is compatible with Android Auto, Android Wear, and Google Cast devices. Give it a try and easily adapt your own app for wherever your users are, be that a phone, watch, TV, car, or more!

[Screenshots: playback controls and album art on the lock screen; the Google Cast icon on the application toolbar; controlling playback through Android Auto; controlling playback on an Android Wear watch]
This sample uses a number of new features in Android 5.0 Lollipop, like MediaStyle notifications, MediaSession and MediaBrowserService. They make it easy to implement media browsing and playback on multiple devices with a single version of your app.

Check out the source code and let your users enjoy your app from wherever they like.

Posted by Renato Mangini, Senior Developer Platform Engineer, Google Developer Platform Team
Categories: Programming

Drive app installs through App Indexing

Android Developers Blog - Mon, 05/25/2015 - 23:56

Posted by Lawrence Chang, Product Manager

You’ve invested time and effort into making your app an awesome experience, and we want to help people find the great content you’ve created. App Indexing has already been helping people engage with your Android app after they’ve installed it — we now have 30 billion links within apps indexed. Starting this week, people searching on Google can also discover your app if they haven’t installed it yet. If you’ve implemented App Indexing, when indexed content from your app is relevant to a search done on Google on Android devices, people may start to see app install buttons for your app in search results. Tapping these buttons will take them to the Google Play store where they can install your app, then continue straight on to the right content within it.

App installs through app indexing

With the addition of these install links, we are starting to use App Indexing as a ranking signal for all users on Android, regardless of whether they have your app installed or not. We hope that Search will now help you acquire new users, as well as re-engage your existing ones. To get started, visit g.co/AppIndexing and to learn more about the other ways you can integrate with Google Search, visit g.co/DeveloperSearch.

Categories: Programming

There's a lot to explore with Google Play services 7.3

Android Developers Blog - Mon, 05/25/2015 - 23:52

Posted by Ian Lake, Developer Advocate

Today, we’re excited to give you new tools to build better apps with the rollout of Google Play services 7.3. With new Android Wear APIs, the addition of nutrition data to Google Fit, improvements to retrieving the user’s activity and location, and better support for optional APIs, there’s a lot to explore in this release.

Android Wear

Google Play services 7.3 extends the Android Wear network by enabling you to connect multiple Wear devices to a single mobile device.

While the DataApi will automatically sync DataItems across all nodes in the Wear network, the directed nature of the MessageApi is faced with new challenges. What node do you send a message to when the NodeApi starts showing multiple nodes from getConnectedNodes()? This is exactly the use case for the new CapabilityApi, which allows different nodes to advertise that they provide a specific functionality (say, the phone node being able to download images from the internet). This allows you to replace a generic NodeListener with a more specific CapabilityListener, getting only connection results and a list of nodes that have the specific functionality you need. We’ve updated the Sending and Receiving Messages training to explore this new functionality.
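
To sketch what that lookup might look like (the capability name here is a made-up example that the phone app would advertise, for instance in its wear.xml), finding a node that can download images could go roughly like this:

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.wearable.CapabilityApi;
import com.google.android.gms.wearable.CapabilityInfo;
import com.google.android.gms.wearable.Node;
import com.google.android.gms.wearable.Wearable;

public class CapabilityFinder {

    // Hypothetical capability name advertised by the phone app.
    private static final String IMAGE_DOWNLOAD_CAPABILITY = "download_images";

    // Returns the id of a reachable node advertising the capability, or null.
    // Blocking call: run this off the main thread.
    public static String findImageDownloadNode(GoogleApiClient client) {
        CapabilityApi.GetCapabilityResult result = Wearable.CapabilityApi.getCapability(
                client, IMAGE_DOWNLOAD_CAPABILITY, CapabilityApi.FILTER_REACHABLE).await();

        CapabilityInfo info = result.getCapability();
        for (Node node : info.getNodes()) {
            if (node.isNearby()) {
                return node.getId(); // prefer a directly connected node
            }
        }
        return info.getNodes().isEmpty() ? null : info.getNodes().iterator().next().getId();
    }
}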

Another new addition for Android Wear is the ChannelApi, which provides a bidirectional data connection between two nodes. While assets are the best way to efficiently add binary data to the data layer for synchronization to all devices, this API focuses on sending larger binary data directly between specific nodes. This comes in two forms: sending full files via the sendFile() method (perfect for later offline access) or opening an OutputStream to stream real time binary data. We hope this offers a flexible, low level API to complement the DataApi and MessageApi.
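
A minimal sketch of the file-sending case is shown below; the channel path is a hypothetical identifier, and the blocking calls would need to run off the main thread:

import android.net.Uri;

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.wearable.Channel;
import com.google.android.gms.wearable.ChannelApi;
import com.google.android.gms.wearable.Wearable;

public class FileSender {

    // Hypothetical path identifying this channel to the receiving node.
    private static final String CHANNEL_PATH = "/photo_transfer";

    // Opens a channel to the given node and sends a local file over it.
    public static void sendFile(GoogleApiClient client, String nodeId, Uri fileUri) {
        ChannelApi.OpenChannelResult result =
                Wearable.ChannelApi.openChannel(client, nodeId, CHANNEL_PATH).await();
        Channel channel = result.getChannel();
        if (channel != null) {
            // sendFile() streams the whole file; use getOutputStream() instead
            // when you want to stream real-time binary data.
            channel.sendFile(client, fileUri).await();
            channel.close(client);
        }
    }
}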

We’ve updated our samples with these changes in mind so go check them out here!

Google Fit

Google Fit makes building fitness apps easier with fitness-specific APIs for retrieving sensor data like current location and speed, collecting and storing activity data in Google Fit’s open platform, and automatically aggregating that data into a single view of the user’s fitness data.

To make it even easier to retrieve up-to-date information, Google Play Services 7.3 adds a new method to the HistoryApi: readDailyTotal(). This automatically aggregates data for a given DataType from midnight on the current day through now, giving you a single DataPoint. For TYPE_STEP_COUNT_DELTA, this method does not require any authentication, making it possible to retrieve the current number of steps for today from any application whether on mobile devices or on Android Wear - great for watch faces!
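
As an illustrative sketch (the 30-second timeout is an arbitrary choice), reading today's step total with a connected GoogleApiClient might look roughly like this:

import java.util.concurrent.TimeUnit;

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.fitness.Fitness;
import com.google.android.gms.fitness.data.DataPoint;
import com.google.android.gms.fitness.data.DataSet;
import com.google.android.gms.fitness.data.DataType;
import com.google.android.gms.fitness.data.Field;
import com.google.android.gms.fitness.result.DailyTotalResult;

public class StepCounter {

    // Returns today's step total, or 0 if nothing has been recorded yet.
    // Blocking call: run off the main thread.
    public static int getTodaysSteps(GoogleApiClient client) {
        DailyTotalResult result = Fitness.HistoryApi
                .readDailyTotal(client, DataType.TYPE_STEP_COUNT_DELTA)
                .await(30, TimeUnit.SECONDS);

        DataSet total = result.getTotal();
        if (total == null || total.isEmpty()) {
            return 0;
        }
        DataPoint today = total.getDataPoints().get(0);
        return today.getValue(Field.FIELD_STEPS).asInt();
    }
}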

Google Fit is also augmenting its existing data types with granular nutrition information, including protein, fat, cholesterol, and more. By leveraging these details about the user’s diet, developers can help users stay more informed about their health and fitness.

Location

LocationRequest is the heart of the FusedLocationProviderApi, encapsulating the type and frequency of location information you’d like to receive. An important, but small change to LocationRequest is the addition of a maximum wait time for location updates via setMaxWaitTime(). By using a value at least two times larger than the requested interval, the system can batch location updates together, reducing battery usage and, on some devices, actually improving location accuracy.
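
A small sketch of such a batched request follows; the interval and maximum wait time are example values only:

import com.google.android.gms.location.LocationRequest;

public class LocationRequests {

    // Builds a batched request: a 30 second interval with up to 2 minutes of batching.
    // Using a max wait time of at least twice the interval lets the system batch updates.
    public static LocationRequest createBatchedRequest() {
        return LocationRequest.create()
                .setPriority(LocationRequest.PRIORITY_BALANCED_POWER_ACCURACY)
                .setInterval(30000)
                .setMaxWaitTime(120000);
    }
}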

For any ongoing location requests, it is important to know that you will continue to get good location data back. The SettingsApi is still incredibly useful for confirming that user settings are optimal before you put in a LocationRequest; however, it isn’t the best approach for continual monitoring. For that, you can use the new LocationCallback class in place of your existing LocationListener to receive LocationAvailability updates in addition to location updates, giving you a simple callback whenever settings might have changed which will affect the current set of LocationRequests. You can also use FusedLocationProviderApi’s getLocationAvailability() to retrieve the current state on demand.
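
Putting that together, a hedged sketch of requesting updates with a LocationCallback (the handler bodies are placeholders) could look like this:

import android.os.Looper;

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.location.LocationAvailability;
import com.google.android.gms.location.LocationCallback;
import com.google.android.gms.location.LocationRequest;
import com.google.android.gms.location.LocationResult;
import com.google.android.gms.location.LocationServices;

public class LocationMonitor {

    private final LocationCallback callback = new LocationCallback() {
        @Override
        public void onLocationResult(LocationResult result) {
            // Handle the (possibly batched) locations here.
        }

        @Override
        public void onLocationAvailability(LocationAvailability availability) {
            if (!availability.isLocationAvailable()) {
                // Settings or conditions changed; consider re-checking with the SettingsApi.
            }
        }
    };

    public void start(GoogleApiClient client, LocationRequest request) {
        LocationServices.FusedLocationApi.requestLocationUpdates(
                client, request, callback, Looper.getMainLooper());
    }

    public void stop(GoogleApiClient client) {
        LocationServices.FusedLocationApi.removeLocationUpdates(client, callback);
    }
}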

Connecting to Google Play services

One of the biggest benefits of GoogleApiClient is that it provides a single connection state, whether you are connecting to a single API or multiple APIs. However, this made it hard to work with APIs that might not be available on all devices, such as the Wearable API. This release makes it much easier to work with APIs that may not always be available with the addition of an addApiIfAvailable() method ensuring that unavailable APIs do not hold up the connection process. The current state for each API can then be retrieved via getConnectionResult(), giving you a way to check at runtime whether an API is available and connected.
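
A rough sketch of that pattern, assuming a client that requires location but treats the Wearable API as optional, might look like:

import android.content.Context;

import com.google.android.gms.common.ConnectionResult;
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.location.LocationServices;
import com.google.android.gms.wearable.Wearable;

public class ClientFactory {

    // Builds a client where the Wearable API is optional rather than required.
    public static GoogleApiClient build(Context context) {
        return new GoogleApiClient.Builder(context)
                .addApi(LocationServices.API)     // required API
                .addApiIfAvailable(Wearable.API)  // optional API
                .build();
    }

    // After connecting, check whether the optional API actually came up.
    public static boolean isWearableAvailable(GoogleApiClient client) {
        ConnectionResult result = client.getConnectionResult(Wearable.API);
        return result.isSuccess();
    }
}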

While GoogleApiClient’s connection process already takes care of checking for Google Play services availability, if you are not using GoogleApiClient, you’ll find many of the static utility methods in GooglePlayServicesUtil such as isGooglePlayServicesAvailable() have now been moved to the singleton GoogleApiAvailability class. We hope the move away from static methods helps you when writing tests, ensuring your application can properly handle any error cases.
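
For example, a minimal availability check with the new singleton (the request code is an arbitrary placeholder) might look like this:

import android.app.Activity;

import com.google.android.gms.common.ConnectionResult;
import com.google.android.gms.common.GoogleApiAvailability;

public class PlayServicesCheck {

    private static final int REQUEST_RESOLVE_PLAY_SERVICES = 1001; // arbitrary request code

    // Returns true if Play services is usable; otherwise shows the standard error dialog.
    public static boolean checkPlayServices(Activity activity) {
        GoogleApiAvailability availability = GoogleApiAvailability.getInstance();
        int status = availability.isGooglePlayServicesAvailable(activity);
        if (status == ConnectionResult.SUCCESS) {
            return true;
        }
        if (availability.isUserResolvableError(status)) {
            availability.getErrorDialog(activity, status, REQUEST_RESOLVE_PLAY_SERVICES).show();
        }
        return false;
    }
}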

SDK is now available!

Google Play services 7.3 is now available: get started with the updated SDK now!

To learn more about Google Play services and the APIs available to you through it, visit the Google Play services section on the Android Developer site.

Categories: Programming

New Android Code Samples

Android Developers Blog - Mon, 05/25/2015 - 23:50

Posted by Rich Hyndman, Developer Advocate

A new set of Android code samples, covering Android Wear, Android for Work, NFC and Screen capturing, have been committed to our Google Samples repository on GitHub. Here’s a summary of the new code samples:

XYZTouristAttractions

This sample mimics a real world mobile and Android Wear app. It has a more refined design and also provides a practical example of how a mobile app would interact and communicate with its Wear counterpart.

The app itself is modeled after a hypothetical tourist attractions experience that notifies the user when they are in close proximity to notable points of interest. In parallel, the Wear component shows tourist attraction images and summary information, and provides quick actions for nearby tourist attractions in a GridViewPager UI component.

DeviceOwner - A Device Owner is a specialized type of device administrator that can control device security and configuration. This sample uses the DevicePolicyManager to demonstrate how to use device owner features, including configuring global settings (e.g. automatic time and time-zone) and setting the default launcher.

NfcProvisioning - This sample demonstrates how to use NFC to provision a device with a device owner. This sample sets up the peer device with the DeviceOwner sample by default. You can rewrite the configuration to use any other device owner.

NFC BeamLargeFiles - A demonstration of how to transfer large files via Android Beam on Android 4.1 and above. After the initial handshake over NFC, file transfer will take place over a secondary high-speed communication channel such as Bluetooth or WiFi Direct.

ScreenCapture - The MediaProjection API was added in Android Lollipop and allows you to easily capture screen contents and/or record system audio. The ScreenCapture sample demonstrates how to use the API to capture device screen in real time and show it on a SurfaceView.

As an additional bonus, the Santa Tracker Android app, including three games, two watch-faces and other goodies, was also recently open sourced and is now available on GitHub.

As with all the Android samples, you can also easily access these new additions in Android Studio using the built-in Import Samples feature, and they’re also available through our Samples Browser.

Check out a sample today to help you with your development!

Categories: Programming

Always-on and Wi-Fi with the latest Android Wear update

Android Developers Blog - Mon, 05/25/2015 - 23:27

Posted by Wayne Piekarski, Developer Advocate

A new update to Android Wear is rolling out with lots of new features like always-on apps, Wi-Fi connectivity, media browsing, emoji input, and more. Let’s discuss some of the great new capabilities that are available in this release.

Always-on apps

Above all, a watch should make it easy to tell the time. That's why most Android Wear watches have always-on displays, so you can see the time without having to shake your wrist or lift your arm to wake up the display. In this release, we're making it possible for apps to be always-on as well.

With always-on functionality, your app can display dynamic data on the device, even when the app is in ambient mode. This is useful if your app displays information that is continuously updated. For example, running apps like Endomondo, MapMyRun, and Runtastic use the always-on screen to let you keep track of how long and far you’ve been running. Zillow keeps you posted about the median price of homes nearby when you’re house-hunting.

Always-on functionality is also useful for apps that may not update data very frequently, but present information that’s useful for reference over a longer period of time. For example, Bring! lets you keep your shopping list right on your wrist, and Golfshot gives you accurate distances from tee to pin. If you’re at the airport and making your way to your gate, American Airlines, Delta, and KLM let you keep all of your flight info a glance away on your watch.

Note: the above apps will not display always-on functionality on your watch until you receive the update for the latest version of Android Wear.

Always-on functionality works similarly to watch faces, in that the power usage of the display and processor is kept to a minimum by reducing the colors and refresh rate of the display. To implement an always-on Activity, you need to make a few small changes to your app's AndroidManifest.xml, your app’s build.gradle, and the Activity to declare that it supports ambient mode. A code sample and documentation are available to show you how it works. Be sure to tune in to the livestream at Google I/O next week for Android Wear: Your app and the always-on screen.
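
The linked code sample and documentation are the definitive reference; purely as a sketch (the layout and view ids here are hypothetical), an always-on activity built on the wearable support library's WearableActivity could look roughly like this:

import android.os.Bundle;
import android.support.wearable.activity.WearableActivity;
import android.widget.TextView;

public class TrackerActivity extends WearableActivity {

    private TextView statsView; // hypothetical view showing live stats

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_tracker); // hypothetical layout
        statsView = (TextView) findViewById(R.id.stats); // hypothetical view id

        // Declare that this activity wants to stay visible in ambient mode.
        setAmbientEnabled();
    }

    @Override
    public void onEnterAmbient(Bundle ambientDetails) {
        super.onEnterAmbient(ambientDetails);
        // Switch to a low-power look: white on black, no anti-aliasing, minimal updates.
        statsView.getPaint().setAntiAlias(false);
    }

    @Override
    public void onUpdateAmbient() {
        super.onUpdateAmbient();
        // Called roughly once a minute while ambient; refresh the displayed data here.
    }

    @Override
    public void onExitAmbient() {
        super.onExitAmbient();
        // Restore the full-color, interactive UI.
        statsView.getPaint().setAntiAlias(true);
    }
}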

Wi-Fi connectivity and cloud sync

Many existing Android Wear devices already contain hardware support for Wi-Fi, and this release enables software support for Wi-Fi. The saved Wi-Fi networks on your phone are copied to your watch during setup, and your watch automatically connects to those Wi-Fi networks when it loses Bluetooth connection to your phone. Your watch can then connect to your phone over the Internet, even if they’re not on the same Wi-Fi network.

You should continue to use the Data Layer API for all communications between the watch and phone. By using this standard API, your app will always work, no matter what kind of connectivity the user’s wearable supports. Cloud sync also introduces a new virtual node in the Data Layer called the cloud node, which may be returned in calls to getConnectedNodes(). Learn more in the Multi-wearable support section below.

Multi-wearable support

The release of Google Play services 7.3 now allows support for multiple wearable devices to be paired simultaneously to a single phone or tablet, so you can have a wearable for fitness, and another for dressing up. While DataItems will continue to work in the same way, since they are synchronized to all devices, working with the MessageApi is a little different. When you update your build.gradle to use version 7.3 or higher, getConnectedNodes() from the NodeApi will usually return multiple nodes. There is an extra virtual node added to represent the cloud node used to communicate over Wi-Fi, so all developers need to deal with this situation in their code.

To help simplify finding the right node among many devices, we have added a CapabilityApi, allowing your nodes to announce features they provide, for example downloading images or music. You can also now use the ChannelApi to open up a connection to a specific device to transfer large resources such as images or audio streams, without having to send them to all devices like you would when embedding assets into data items. We have updated our Android Wear samples and documentation to show the best practices in implementing this.

MediaBrowser support

The Android 5.0 release added the ability for apps to browse the media content of another app, via the android.media.browse API. With the latest Android Wear update, if your media playback app supports this API, then you will be able to browse to find the next song directly from your watch. This is the same browse capability used in Android Auto. You implement the API once, and it will work across a variety of platforms. To do so, you just need to allow Android Wear to browse your app in the onGetRoot() method validator. You can also add custom actions to the MediaSession that will appear as controls on the watch. We have a Universal Media Player sample that shows you how to implement this functionality.
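
As a rough sketch of that onGetRoot() validation (the caller check and catalogue lookup are placeholders, and the service must also be declared in the manifest with the android.media.browse.MediaBrowserService intent filter), a media browse service might look like this:

import java.util.ArrayList;
import java.util.List;

import android.media.browse.MediaBrowser.MediaItem;
import android.os.Bundle;
import android.service.media.MediaBrowserService;

public class MusicService extends MediaBrowserService {

    private static final String MEDIA_ROOT_ID = "root";       // hypothetical root id
    private static final String EMPTY_ROOT_ID = "empty_root"; // root that exposes nothing

    @Override
    public BrowserRoot onGetRoot(String clientPackageName, int clientUid, Bundle rootHints) {
        // Validate the caller: return a browsable root for clients you trust
        // (your own app, Android Auto, Android Wear) and an empty root otherwise.
        if (isKnownCaller(clientPackageName, clientUid)) {
            return new BrowserRoot(MEDIA_ROOT_ID, null);
        }
        return new BrowserRoot(EMPTY_ROOT_ID, null);
    }

    @Override
    public void onLoadChildren(String parentId, Result<List<MediaItem>> result) {
        if (EMPTY_ROOT_ID.equals(parentId)) {
            result.sendResult(new ArrayList<MediaItem>());
            return;
        }
        // Build and return the browse hierarchy for trusted callers.
        result.sendResult(loadChildrenFor(parentId));
    }

    // Placeholder validation; a real app would check package names and signatures.
    private boolean isKnownCaller(String clientPackageName, int clientUid) {
        return true;
    }

    // Placeholder catalogue lookup.
    private List<MediaItem> loadChildrenFor(String parentId) {
        return new ArrayList<MediaItem>();
    }
}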

Updates to existing devices

The latest version of Android Wear will roll out via an over-the-air (OTA) update to all Android Wear watches over the coming weeks. To take advantage of these new features, you will need to use targetSdkVersion 22 and add the necessary dependencies for always-on support. We have also expanded the collection of emulators available via the SDK Manager, to simulate the experience on all the currently available devices, resolutions, and shapes, including insets like the Moto 360.

In this update, we have also disabled support for apps that use the unofficial, activity-based approach for displaying watch faces, as announced in December. These watch faces will no longer work and should be updated to use the new watch face API.

Since its launch last summer, Android Wear has grown into a platform that gives users many possibilities to personalize their watches, with a variety of shapes and styles, a range of watch bands, and thousands of apps and watch faces. Features such as always-on apps and Wi-Fi allow developers even more flexibility to give users amazing experiences with Android Wear.

Categories: Programming

Quickly build an XL Deploy plugin for deploying container applications to CoreOS

Xebia Blog - Mon, 05/25/2015 - 21:57

You can use fleetctl and script files to deploy your container applications to CoreOS. However, using XL Deploy for deployment automation is a great solution when you need to deploy and track versions of many applications. What does it take to create an XL Deploy plugin to deploy these container applications to your CoreOS clusters?

XL Deploy can be extended with custom plugins which add deployment capabilities. Using XL Rules, custom plugins can be created quickly with limited effort. In this blog you can read how a plugin can be created in a matter of hours.

In a number of blog posts, Mark van Holsteijn explained how to create a high available Docker container platform using CoreOS and Consul. In these posts shell scripts (with fleetctl commands) are used to deploy container applications. Based on these scripts I have built an XL Deploy plugin which deploys fleet unit configuration files to a CoreOS cluster.

 

Deploying these container applications using XL Deploy has a number of advantages:

  • Docker containers can be deployed without creating, adjusting and maintaining scripts for individual applications.
  • XL Deploy will track and report the applications deployed to the CoreOS clusters.
  • Additional deployment scenarios can be added with limited effort.
  • Deployments will be consistent and configuration is managed across environments.
  • XL Deploy permissions can be used to control (direct) access to the CoreOS cluster(s).

 

Building an XL Deploy plugin is fast, since you can:

  • Reuse existing XL Deploy capabilities, like the Overthere plugin.
  • Utilize XL Deploy template processing to inject property values in rules and deploy scripts.
  • Exploit the XL Deploy unified deployment model to get the deltas which drive the required fleetctl deployment commands for any type of deployment (new, update, undeploy and rollback deployments).
  • Use XML- and script-based rules to build deployment tasks.

Getting started

  • Install XL Deploy; you can download a free edition here. If you are not familiar with XL Deploy, read the getting started documentation.
  • Next add the plugin resources to the ext directory of your XL Deploy installation. You can find the plugin resources in this GitHub repository. Add the synthetic.xml and xl-rules.xml files from the repository root. In addition, add the scripts directory and its contents. Restart XL Deploy.
  • Next, set up a CoreOS cluster. This blog post explains how you can set up such a platform locally.
  • Now you can connect to XL Deploy using your browser. On the deployment tab you can import the sample application, located in the sample-app folder of the plugin resources GitHub repository.
  • You can now set up the target deployment container based on the Overthere.SshHost configuration item type. Verify that you can connect to your CoreOS cluster using this XL Deploy container.
  • Next, you can set up an XL Deploy environment, which contains your target deployment container.
  • Now you can use the deployment tab to deploy and undeploy your fleet configuration file applications.

 

Building the plugin

The plugin consists of two xml files and a number of script files. Below you find a description of the plugin implementation steps.

The CoreOS container application deployments are based on fleet unit configuration files. So, first we create an XL Deploy configuration item type definition which represents such a file. This XL Deploy deployed type is defined in the XL Deploy synthetic.xml file. The snippet below shows the contents of this file. I have assigned the name “fleet.DeployedUnit”.

[Image: the synthetic.xml snippet defining the fleet.DeployedUnit type]

The definition contains a container-type attribute. The Overthere.SshHost container is referenced. The plugin can simply use the Overthere.SshHost container type to connect to the CoreOS cluster and to execute fleet commands.

Furthermore, I have added two properties. One property can be used to specify the number of instances. Note that XL Deploy dictionaries can be utilized to define the number of instances for each environment separately. The second property is a flag which controls whether instances will be started (or only submitted and loaded).

If you want to deploy a fleet configuration file using fleetctl, you can issue the following three commands: submit, load and start. In the plugin, I have created a separate script file for each of these fleetctl commands. The image below shows the script file to load a fleet configuration file. This load script uses the file name property and numberOfInstances property of the “fleet.DeployedUnit” configuration item.

[Image: the load-unit script file]

Finally, the plugin can be completed with XML-based rules which create the deployment steps. The image below shows the rule which adds steps to [1] submit the unit configuration and [2] load the unit when (a version of) the application is deployed.

[Image: the xl-rules.xml rule that adds the submit and load steps]

Using rules, you can easily define logic to add deployment steps. These steps can closely resemble the commands you perform when you are using fleetctl. For this plugin I have utilized XML-based rules only. Using script rules, you can add more intelligence to your plugin. For example, the logic of the restart script can be converted to rules and more fine-grained deployment steps.

More information

If you are interested in building your own XL Deploy plugin, the XL Deploy product documentation contains tutorials which will get you started.

If you want to know how you can create a High Available Docker Container Platform Using CoreOS And Consul, the following blogs are a great starting point:

Microservices architecture principle #2: Autonomy over coordination

Xebia Blog - Mon, 05/25/2015 - 16:13

Microservices are a hot topic. Because of that a lot of people are saying a lot of things. To help organizations make the best of this new architectural style Xebia has defined a set of principles that we feel should be applied when implementing a Microservice Architecture. Over the next couple of days we will cover each of these principles in more detail in a series of blog posts.
This blog explains why we prefer autonomy of services over coordination between services.

Our Xebia colleague Serge Beaumont posted "Autonomy over Coordination" in a tweet earlier this year, and for me it summarised one of the crucial aspects of creating an agile, scalable and robust IT system or organisational structure. Autonomy over coordination is closely related to the business capabilities described in the previous post in this series: each capability should be implemented in one microservice. Once you have defined your business capabilities correctly, the dependencies between those capabilities are minimised. Therefore minimal coordination between capabilities is required, leading to optimal autonomy. Increased autonomy for a microservice gives it freedom to evolve without impacting other services: the optimal technology can be used, it can scale without having to scale others, etc. For the team responsible for the service the advantages are similar: the autonomy enables them to make optimal choices that make their team function at its best.

The drawbacks of less autonomy and more coordination are evident and we have all experienced them. For example, a change leads to a snowball of dependent changes that must be deployed at the same moment, making changes to a module requires approval of other teams, not being able to scale up a compute-intensive function without scaling the whole system, ... the list is endless.

So in summary, pay attention to defining your business capabilities (microservices) in such a manner that autonomy is maximised; it will give you both organisational and technical advantages.

Some Advice On Becoming a Lead Developer

Making the Complex Simple - John Sonmez - Mon, 05/25/2015 - 16:00

Becoming a lead developer or technical lead on a team is a great responsibility and can be an excellent career opportunity, but the transition to this role can be a little jarring. There is a big difference between being responsible for yourself and your work and being at least partially responsible for the work of an […]

The post Some Advice On Becoming a Lead Developer appeared first on Simple Programmer.

Categories: Programming

Python: Joining multiple generators/iterators

Mark Needham - Mon, 05/25/2015 - 00:51

In my previous blog post I described how I’d refactored some scraping code I’ve been working on to use iterators and ended up with a function which returned a generator containing all the events for one BBC live text match:

match_id = "32683310"
events = extract_events("data/raw/%s" % (match_id))
 
>>> print type(events)
<type 'generator'>

The next thing I wanted to do is get the events for multiple matches which meant I needed to glue together multiple generators into one big generator.

itertools’ chain function does exactly what we want:

itertools.chain(*iterables)

Make an iterator that returns elements from the first iterable until it is exhausted, then proceeds to the next iterable, until all of the iterables are exhausted. Used for treating consecutive sequences as a single sequence.

First let’s try it out on a collection of range generators:

import itertools
gens = [(n*2 for n in range(0, 3)), (n*2 for n in range(4,7))]
>>> gens
[<generator object <genexpr> at 0x10ff3b140>, <generator object <genexpr> at 0x10ff7d870>]
 
output = itertools.chain()
for gen in gens:
  output = itertools.chain(output, gen)

Now if we iterate through ‘output’ we’d expect to see the doubled values from both generators, i.e. 0, 2, 4 followed by 8, 10, 12:

>>> for item in output:
...   print item
...
0
2
4
8
10
12

Exactly as we expected! Our scraping code looks like this once we plug the chaining in:

matches = ["32683310", "32683303", "32384894", "31816155"]
 
raw_events = itertools.chain()
for match_id in matches:
    raw_events = itertools.chain(raw_events, extract_events("data/raw/%s" % (match_id)))

‘raw_events’ now contains a single generator that we can iterate through and process the events for all matches.

Categories: Programming

Microservices architecture principle #1: Each Microservice delivers a single complete business capability

Xebia Blog - Sat, 05/23/2015 - 21:13

Microservices are a hot topic. Because of that a lot of people are saying a lot of things. To help organizations make the best of this new architectural style Xebia has defined a set of principles that we feel should be applied when implementing a Microservice Architecture. Over the next couple of days we will cover each of these principles in more detail in a series of blog posts.
This blog explains why a Microservice should deliver a complete business capability.

A complete business capability is a process that can be finished consecutively without interruptions or excursions to other services. This means that a business capability should not depend on other services to complete its work.
If a process in a microservice depends on other microservices, we would end up in the dependency hell that ESBs introduced: in order to service a customer request we need many other services, and therefore if one of them fails everything stops. A more robust solution would be to define a service that handles a process that makes sense to a user. An example is ordering a book in a web shop. This process would start with the selection of a book and end with creating an order. Actually fulfilling the order is a different process that lives in its own service. The fulfillment process might run right after the order process, but it doesn’t have to. If the customer orders a PDF version of a book, order fulfillment may be completed right away. If the order was for the print version, all the order service can promise is to ask shipping to send the book. Separating these two processes in different services allows us to make choices about the way the process is completed, making sure that a problem or delay in one service has no impact on other services.

So, building a microservice such that it does a single thing well without interruptions or waiting time is at the foundation of a robust architecture.

Are You an Integration Specialist?

Some people specialize in a narrow domain.  They are called specialists because they focus on a specific area of expertise, and they build skills in that narrow area.

Rather than focus on breadth, they go for depth.

Others focus on the bigger picture or connecting the dots.  Rather than focus on depth, they go for breadth.

Or do they?

It actually takes a lot of knowledge and depth to be effective at integration and “connecting the dots” in a meaningful way.  It’s like being a skilled entrepreneur or a skilled business developer.   Not just anybody who wants to generalize can be effective.  

True integration specialists are great pattern matchers and have deep skills in putting things together to make a better whole.

I was reading the book Business Development: A Market-Oriented Perspective where Hans Eibe Sørensen introduces the concept of an Integrating Generalist and how they make the world go round.

I wrote a post about it on Sources of Insight:

The Integrating Generalist and the Art of Connecting the Dots

Given the description, I’m not sure which is better, the Integration Specialist or the Integrating Generalist.  The value of the Integrating Generalist is that it breathes new life into people that want to generalize so that they can put the bigger puzzle together.  Rather than de-value generalists, this label puts a very special value on people that are able to fit things together.

In fact, the author claims that it’s Integrating Generalists that make the world go round.

Otherwise, there would be a lot of great pieces and parts, but nothing to bring them together into a cohesive whole.

Maybe that’s a good metaphor for the Integrating Generalist.  While you certainly need all the parts of the car, you also need somebody to make sure that all the parts come together.

In my experience, Integrating Generalists are able to help shape the vision, put the functions that matter in place, and make things happen.

I would say the most effective Program Managers I know do exactly that.

They are the Oil and the Glue for the team because they are able to glue everything together, and, at the same time, remove friction in the system and help people bring out their best, towards a cohesive whole.

It’s synergy in action, in more ways than one.

You Might Also Like

Anatomy of a High-Potential

E-Shape People, Not T-Shape

Generalists vs. Specialists

Categories: Architecture, Programming

Python: Refactoring to iterator

Mark Needham - Sat, 05/23/2015 - 11:14

Over the last week I’ve been building a set of scripts to scrape the events from the Bayern Munich/Barcelona game and I’ve ended up with a few hundred lines of nested for statements, if statements and mutated lists. I thought it was about time I did a bit of refactoring.

The following is a function which takes in a match file and spits out a collection of maps containing times & events.

import bs4
import re
from bs4 import BeautifulSoup
from soupselect import select
 
def extract_events(file):
    match = open(file, 'r')
    soup = BeautifulSoup(match.read())
 
    all_events = []
    for event in select(soup, 'div#live-text-commentary-wrapper div#live-text'):
        for child in event.children:
            if type(child) is bs4.element.Tag:
                all_events.append(child.getText().strip())
 
    for event in select(soup, 'div#live-text-commentary-wrapper div#more-live-text'):
        for child in event.children:
            if type(child) is bs4.element.Tag:
                all_events.append(child.getText().strip())
 
    timed_events = []
    for i in range(0, len(all_events)):
        event = all_events[i]
        time =  re.findall("\d{1,2}:\d{2}", event)
        formatted_time = " +".join(time)
        if time:
            timed_events.append({'time': formatted_time, 'event': all_events[i+1]})
    return timed_events

We call it like this:

match_id = "32683310"
for event in extract_events("data/%s" % (match_id))[:10]:
    print event

The file we’re loading is the Bayern Munich vs Barcelona match HTML file which I have saved locally. After we’ve got that read into beautiful soup we locate the two divs on the page which contain the match events.

We then iterate over that list and create a new list containing (time, event) pairs which we return.

I think we should be able to get to our resulting collection without persisting an intermediate list, but first things first – let’s remove the duplicated for loops:

def extract_events(file):
    match = open(file, 'r')
    soup = BeautifulSoup(match.read())
 
    all_events = []
    events = select(soup, 'div#live-text-commentary-wrapper div#live-text')
    more_events = select(soup, 'div#live-text-commentary-wrapper div#more-live-text')
 
    for event in events + more_events:
        for child in event.children:
            if type(child) is bs4.element.Tag:
                all_events.append(child.getText().strip())
 
    timed_events = []
    for i in range(0, len(all_events)):
        event = all_events[i]
        time =  re.findall("\d{1,2}:\d{2}", event)
        formatted_time = " +".join(time)
        if time:
            timed_events.append({'time': formatted_time, 'event': all_events[i+1]})
    return timed_events

The next step is to refactor towards using an iterator. After a bit of reading I realised a generator would make life even easier.

I created a function which returned an iterator of the raw events and plugged that into the original function:

def raw_events(file):
    match = open(file, 'r')
    soup = BeautifulSoup(match.read())
    events = select(soup, 'div#live-text-commentary-wrapper div#live-text')
    more_events = select(soup, 'div#live-text-commentary-wrapper div#more-live-text')
    for event in events + more_events:
        for child in event.children:
            if type(child) is bs4.element.Tag:
                yield child.getText().strip()
 
def extract_events(file):
    all_events = list(raw_events(file))
 
    timed_events = []
    for i in range(0, len(all_events)):
        event = all_events[i]
        time =  re.findall("\d{1,2}:\d{2}", event)
        formatted_time = " +".join(time)
        if time:
            timed_events.append({'time': formatted_time, 'event': all_events[i+1]})
    return timed_events

If we run that function we still get the same output as before which is good. Now we need to work out how to clean up the second bit of the code which groups the appropriate rows together.

The goal is that ‘extract_events’ returns an iterator rather than a list – we need to figure out how to iterate over the output of ‘raw_events’ in such a way that when we find a ‘time row’ we can yield that and the row immediately after.

Luckily I found a Stack Overflow post explaining that you can use the ‘next’ function inside an iterator to achieve this:

def extract_events(file):
    events = raw_events(file)
    for event in events:
        time =  re.findall("\d{1,2}:\d{2}", event)
        formatted_time = " +".join(time)
        if time:
            yield {'time': formatted_time, 'event': next(events)}

It’s not that much less code than the original function but I think it’s an improvement. Any thoughts/tips to simplify it further are always welcome.

Categories: Programming

Android Developer Story: Wooga’s fast iterations on Android and Google Play

Android Developers Blog - Sat, 05/23/2015 - 02:34

Posted by Leticia Lago, Google Play team

In order to make the best possible games, Wooga works on roughly 40 concepts and prototypes per year, out of which 10 go into production, around seven soft launch, and only two make it to global launch. It’s what they call “the hit filter." For their latest title, Agent Alice, they follow up with new episodes every week to maintain player interest and engagement over time.

The ability to quickly iterate on both live games and games under development is therefore key to Wooga’s business model. Android and Google Play provide them the tools they need, meaning that new features and updates are made on Android first, before they get to other platforms.

Find out more from Sebastian Kriese, Head of Partnerships, and Pal Tamas Feher, Head of Engineering, and learn how the iteration features of Android and Google Play have contributed to successes such as Diamond Dash, Jelly Splash, and Agent Alice.

You can find out more about building successful games businesses on Android and Google Play at Google I/O 2015: in person, on the live stream, or session recordings after the event. Check out the following:

  • Developers connecting the world through Google Play - Hear how the new mobile ecosystem including Google Play and Android are empowering developers to make good on the dream of connecting the world through technology to improve people's lives. This session will be live streamed.
  • Growing games with Google — In addition to consoles, PC, and browser gaming, as well as phone and tablet games, there are emerging fields including virtual reality and mobile games in the living room. This talk covers how Google is helping developers across this broad range of platforms. This session will be live streamed.
  • What’s new in the Google Play Developer Console - Google Play’s new launches will help you acquire more users and improve the quality of your app. Hear an overview of the latest features and how you can start taking advantage of them in the Developer Console.
  • Smarter approaches to app testing — Hear about the new ways Google can help maximize the success of your next app launch with cheaper and easier testing strategies.
Categories: Programming

Game Performance: Explicit Uniform Locations

Android Developers Blog - Sat, 05/23/2015 - 02:32

Posted by Shanee Nishry, Games Developer Advocate

Uniform variables in GLSL are crucial for passing data between the game code on the CPU and the shader program on the graphics card. Unfortunately, before OpenGL ES 3.1 using uniforms required some preparation, which made the workflow slightly more complicated and wasted time during loading.

Let us examine a simple vertex shader and see how OpenGL ES 3.1 allows us to improve it:

#version 300 es

layout(location = 0) in vec4 vertexPosition;
layout(location = 1) in vec2 vertexUV;

uniform mat4 matWorldViewProjection;

out vec2 outTexCoord;

void main()
{
    outTexCoord = vertexUV;
    gl_Position = matWorldViewProjection * vertexPosition;
}

Note: You might be familiar with this shader from a previous Game Performance article on Layout Qualifiers (included further down this page).

We have a single uniform for our world view projection matrix:

uniform mat4 matWorldViewProjection;

The inefficiency appears when you want to assign the uniform value.

You need to use glUniformMatrix4fv or glUniform4f to set the uniform’s value but you also need the handle for the uniform’s location in the program. To get the handle you must call glGetUniformLocation.

GLuint program; // the shader program
float matWorldViewProject[16]; // 4x4 matrix as float array

GLint handle = glGetUniformLocation( program, "matWorldViewProjection" );
glUniformMatrix4fv( handle, 1, false, matWorldViewProject );

That pattern means calling glGetUniformLocation for each uniform in every shader and keeping all the handles around, or worse, calling glGetUniformLocation every frame.

Warning! Never call glGetUniformLocation every frame! Not only is it bad practice but it is slow and bad for your game’s performance. Always call it during initialization and save it somewhere in your code for use in the render loop.
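
For illustration, a minimal sketch of that query-and-cache pattern might look like this (the function names and structure are illustrative, not from the original article):

GLint handleMatWVP = -1; // cached uniform handle

void InitShader( GLuint program )
{
    // Query the location once, after the program has been linked
    handleMatWVP = glGetUniformLocation( program, "matWorldViewProjection" );
}

void RenderFrame( const float matWorldViewProject[16] )
{
    // The program is assumed to be bound with glUseProgram;
    // reuse the cached handle instead of querying it again every frame
    glUniformMatrix4fv( handleMatWVP, 1, false, matWorldViewProject );
}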

This process is inefficient: it requires extra work and costs precious time and performance.

Also take into consideration that you might have multiple shaders with the same uniforms. It would be much better if your code were deterministic and the shader language allowed you to set the locations of your uniforms explicitly, so you don’t need to query and manage access handles. This is now possible with Explicit Uniform Locations.

You can set the location for uniforms directly in the shader’s code. They are declared like this:

layout(location = index) uniform type name;

For our example shader it would be:

layout(location = 0) uniform mat4 matWorldViewProjection;

This means you never need to use glGetUniformLocation again, resulting in simpler code, a simpler initialization process, and saved CPU cycles.

This is how the example shader looks after the change (the version declaration and the uniform declaration are the lines that changed):

#version 310 es

layout(location = 0) in vec4 vertexPosition;
layout(location = 1) in vec2 vertexUV;

layout(location = 0) uniform mat4 matWorldViewProjection;

out vec2 outTexCoord;

void main()
{
    outTexCoord = vertexUV;
    gl_Position = matWorldViewProjection * vertexPosition;
}

As Explicit Uniform Locations are only supported from OpenGL ES 3.1 onwards, we also changed the version declaration to 310 es.

Now all you need to do to set your matWorldViewProjection uniform value is call glUniformMatrix4fv for the handle 0:

const GLint UNIFORM_MAT_WVP = 0; // Uniform location for WorldViewProjection
float matWorldViewProject[16]; // 4x4 matrix as float array

glUniformMatrix4fv( UNIFORM_MAT_WVP, 1, false, matWorldViewProject );

This change is extremely simple and the improvements can be substantial, producing cleaner code, a cleaner asset pipeline, and improved performance. Be sure to make these changes if you are targeting OpenGL ES 3.1 or creating multiple APKs to support a wide range of devices.
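
If a single APK has to run on devices both with and without OpenGL ES 3.1, one option is to pick the shader variant at run time. Here is a rough sketch; the file names are illustrative and not from the original article:

const char* versionString = (const char*) glGetString( GL_VERSION );
// On OpenGL ES the string has the form "OpenGL ES <major>.<minor> ..."
int major = 0, minor = 0;
sscanf( versionString, "OpenGL ES %d.%d", &major, &minor );

bool hasExplicitUniformLocations = ( major > 3 ) || ( major == 3 && minor >= 1 );
const char* vertexShaderFile = hasExplicitUniformLocations
    ? "scene_310.vert"   // #version 310 es, explicit uniform locations
    : "scene_300.vert";  // #version 300 es, glGetUniformLocation fallback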

To learn more about Explicit Uniform Locations, check out the OpenGL wiki page, which contains valuable information on different layouts and how arrays are represented.

Categories: Programming

Game Performance: Layout Qualifiers

Android Developers Blog - Sat, 05/23/2015 - 02:30

Today, we want to share some best practices on using the OpenGL Shading Language (GLSL) that can optimize the performance of your game and simplify your workflow. Specifically, layout qualifiers make your code more deterministic and increase performance by reducing your work.


Let’s start with a simple vertex shader and change it as we go along.

This basic vertex shader takes position and texture coordinates, transforms the position and outputs the data to the fragment shader:

attribute vec4 vertexPosition;
attribute vec2 vertexUV;

uniform mat4 matWorldViewProjection;

varying vec2 outTexCoord;

void main()
{
  outTexCoord = vertexUV;
  gl_Position = matWorldViewProjection * vertexPosition;
}

Vertex Attribute Index

To draw a mesh onto the screen, you need to create a vertex buffer and fill it with vertex data, including positions and texture coordinates for this example (a small buffer-upload sketch follows the struct below).

In our sample shader, the vertex data may be laid out like this:

struct Vertex
{
  Vector4 Position;
  Vector2 TexCoords;
};
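
Filling a vertex buffer with this data might look roughly like the following. This is a minimal sketch: the variable names and the GL_STATIC_DRAW usage flag are illustrative, not from the original article.

GLuint vertexBuffer;
glGenBuffers( 1, &vertexBuffer );
glBindBuffer( GL_ARRAY_BUFFER, vertexBuffer );

// vertices is assumed to point to an array of vertexCount Vertex structs
glBufferData( GL_ARRAY_BUFFER, vertexCount * sizeof( Vertex ), vertices, GL_STATIC_DRAW );
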
Therefore, we defined our vertex shader attributes like this:

attribute vec4 vertexPosition;
attribute vec2 vertexUV;

To associate the vertex data with the shader attributes, a call to glGetAttribLocation will get the handle of the named attribute. The attribute format is then detailed with a call to glVertexAttribPointer.

GLint handleVertexPos = glGetAttribLocation( myShaderProgram, "vertexPosition" );
// The stride and offsets match the interleaved Vertex struct above
glVertexAttribPointer( handleVertexPos, 4, GL_FLOAT, GL_FALSE, sizeof( Vertex ), (void*) offsetof( Vertex, Position ) );

GLint handleVertexUV = glGetAttribLocation( myShaderProgram, "vertexUV" );
glVertexAttribPointer( handleVertexUV, 2, GL_FLOAT, GL_FALSE, sizeof( Vertex ), (void*) offsetof( Vertex, TexCoords ) );

But you may have multiple shaders with the vertexPosition attribute, and calling glGetAttribLocation for every shader wastes performance and increases the loading time of your game.

Using layout qualifiers, you can change your vertex shader attribute declarations like this:

layout(location = 0) in vec4 vertexPosition;
layout(location = 1) in vec2 vertexUV;

To do so you also need to tell the shader compiler that your shader targets GLSL ES version 3.00 (OpenGL ES 3.0). This is done by adding a version declaration:

#version 300 es

Let’s see how this affects our shader (the attribute declarations, the version declaration, and the varying/out keyword are the lines that changed):

#version 300 es

layout(location = 0) in vec4 vertexPosition;
layout(location = 1) in vec2 vertexUV;

uniform mat4 matWorldViewProjection;

out vec2 outTexCoord;

void main()
{
  outTexCoord = vertexUV;
  gl_Position = matWorldViewProjection * vertexPosition;
}

Note that we also changed outTexCoord from varying to out. The varying keyword is no longer available from version 300 es, so it must be changed for the shader to work.

Note that Vertex Attribute qualifiers and #version 300 es are supported from OpenGL ES 3.0. The desktop equivalent is supported from OpenGL 3.3, using #version 330.

Now you know your position attribute is always at 0 and your texture coordinates are at 1, and you can bind your shader format without using glGetAttribLocation:

const int ATTRIB_POS = 0;
const int ATTRIB_UV   = 1;

// The stride and offsets match the interleaved Vertex struct shown earlier
glVertexAttribPointer( ATTRIB_POS, 4, GL_FLOAT, GL_FALSE, sizeof( Vertex ), (void*) offsetof( Vertex, Position ) );
glVertexAttribPointer( ATTRIB_UV, 2, GL_FLOAT, GL_FALSE, sizeof( Vertex ), (void*) offsetof( Vertex, TexCoords ) );
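
The same fixed indices can be used wherever the attribute state is set up, for example when enabling the attribute arrays (a small illustrative sketch):

glEnableVertexAttribArray( ATTRIB_POS );
glEnableVertexAttribArray( ATTRIB_UV );
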
This simple change leads to a cleaner pipeline, simpler code, and time saved during loading.

To learn more about performance on Android, check out the Android Performance Patterns series.

Posted by Shanee Nishry, Games Developer Advocate
Categories: Programming