Software Development Blogs: Programming, Software Testing, Agile Project Management


Feed aggregator

Readers and Listeners!

Part of the enjoyment of publishing the Software Process and Measurement Blog and Podcast is interacting with my listeners, readers and interviewees!

Contests on the blog are one of the ways we provide listeners with a little bit of lagniappe. The Software Process and Measurement Cast 330 features an interview with Anthony Mersino. The interview about Agile, coaching, organizational change AND his new book Agile Project Management provided great advice to the podcast listeners. In a great gesture, Anthony sponsored a contest for a copy of his book. The winner is Paul Laberge! Paul is a long-time reader of the blog and listener to the podcast (and frequently writes to discuss ideas or to argue about cell phone operating systems). Here is a picture of Paul and his new book.

[photo: Paul Laberge with his copy of Agile Project Management]

Paul, thank you for being part of both the podcast and the blog! Anthony, thank you for providing the book and the contest! Interested in doing a review, Paul?

On another front, recently I attended the CMMI® Global Conference and met Hiroshi Kobayashi. Hiroshi is also a listener, reader and often comments on the blog. I find it incredible to meet people in person that I interact with on the podcast and blog. Interacting with members of the SPaMCAST community keeps me motivated. Here is a picture of Hiroshi and myself (thanks to Paul Byrnes for taking the picture).

[photo: Hiroshi Kobayashi and me at the CMMI Global Conference, taken by Paul Byrnes]

Hiroshi is a Senior Executive Manager of the CMMI Consulting Division at System Information Company in Japan. He is also a PMI-certified Project Management Professional and a Scrum Alliance Certified Scrum Master.

Hiroshi, I look forward to collaborating on the blog and podcast!

A message to all readers and listeners! If you are going to be at ICEAA, June 9-12 in San Diego, please let me know or find me at the conference. I would like to meet you in person, hear your story, and talk about ideas, concepts and even the weather!

My sincere thanks to Paul, Hiroshi and ALL of my readers and listeners. Interacting with you enriches my blogging and podcasting experience and I hope it enriches yours!


Categories: Process Management

Get ready for Google I/O 2015

Google Code Blog - Tue, 05/26/2015 - 19:24

Posted by Mike Pegg, reppin' I/O since 2011

Google I/O is almost here! We’ll officially kick off live from the Moscone Center in San Francisco at 9:30AM PDT this Thursday, May 28th. While we’re putting the finishing touches on the keynote, sessions, sandbox talks, and code labs, we wanted to provide you with some tips to get ready to experience I/O, either in person or offsite.

Navigate the conference with the Web & Android apps

To get the most out of Google I/O, make sure to download the I/O Android App and/or add the I/O web app to your mobile homescreen (both work offline!). From either, you can plan your schedule, view the venue map, and keep up with the latest I/O details. We just updated the website this morning, optimizing it for real-time content, as well as the Android app on Google Play - make sure to download the latest version (3.3.2) before the conference starts.

Attending in person?

New this year, keynote access will be assigned on a first-come, first-served basis during badge pickup. Be sure to swing by Moscone West tomorrow, Wednesday, May 27th between 9AM-8PM PDT to pick up your badge (full badge pick-up schedule). Don’t forget to bring your government-issued photo ID and a copy of your ticket (on your phone or a printed copy). If you’re an Academic attendee, please remember to bring proof of eligibility. You might want to read through the pro tips in our FAQ before you arrive to learn how to best navigate the conference.

Last but not least, we’re looking forward to kicking back and relaxing with you at the After Hours party during the evening of Day 1. Expect good food, good drinks, and a few Googley surprises. Be sure to check your email during the event for further instructions.

Attending remotely?

Can’t join us in person? Don’t worry, we’ve got you covered! Whether you’re looking to experience I/O with other devs in your neighborhood, or if you’ll be streaming it live from your couch, here are some ways you can connect with I/O in real time:

  • I/O Extended: There’s still time to find an I/O Extended event happening near you. We have 460+ events happening in 91+ countries, so your chances of finding one near you are pretty good. These events are organized by Google Developer Groups, Student Ambassadors, and local developers, and include activities such as code labs, hackathons and more.
  • I/O Live: Tune into I/O Live on the web at google.com/io or via the I/O Android app. We will live stream the keynote and all sessions over the course of the event. During breaks, you can watch live interviews happening right from Moscone and educational pre-recorded content. If you want to bring the live stream and/or the #io15 conversation to your audience, simply customize our I/O Live widget and embed it on your site or blog.
  • #io15request: Send us your questions about what’s happening at I/O and a team of onsite Googlers will do their best to track down an answer for you. To submit a request, just make a public post on Google+ or Twitter with the #io15request hashtag. We’ll only be replying to requests made on May 28-29, in English, French or German. Learn more.
  • I/O in photos: Be sure to check out our photo stream during the event. We’ll upload photos in real time, live from Moscone.

We’re looking forward to seeing you in person or remotely on Thursday and Friday. Don’t forget to join the social conversation at #io15!

Categories: Programming

Sponsored Post: Tumblr, Power Admin, Learninghouse, MongoDB, Internap, Aerospike, SignalFx, InMemory.Net, Couchbase, VividCortex, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Make Tumblr fast, reliable and available for hundreds of millions of visitors and tens of millions of users.  As a Site Reliability Engineer you are a software developer with a love of highly performant, fault-tolerant, massively distributed systems. Apply here now! 

  • At Scalyr, we're analyzing multi-gigabyte server logs in a fraction of a second. That requires serious innovation in every part of the technology stack, from frontend to backend. Help us push the envelope on low-latency browser applications, high-speed data processing, and reliable distributed systems. Help extract meaningful data from live servers and present it to users in meaningful ways. At Scalyr, you’ll learn new things, and invent a few of your own. Learn more and apply.

  • UI Engineer - AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data - AppDynamics, leader in next-generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • 90 Days. 1 Bootcamp. A whole new life. Interested in learning how to code? Concordia St. Paul's Coding Bootcamp is an intensive, fast-paced program where you learn to be a software developer. In this full-time, 12-week on-campus course, you will learn either .NET or Java and acquire the skills needed for entry-level developer positions. For more information, read the Guide to Coding Bootcamp or visit bootcamp.csp.edu.

  • June 2nd – 4th, Santa Clara: Register for the largest NoSQL event of the year, Couchbase Connect 2015, and hear how innovative companies like Cisco, TurboTax, Joyent, PayPal, Nielsen and Ryanair are using our NoSQL technology to solve today’s toughest big data challenges. Register Today.

  • The Art of Cyberwar: Security in the Age of Information. Cybercrime is an increasingly serious issue both in the United States and around the world; the estimated annual cost of global cybercrime has reached $100 billion, with over 1.5 million victims per day affected by data breaches, DDoS attacks, and more. Learn about the current state of cybercrime and the cybersecurity professionals charged with combating it in The Art of Cyberwar: Security in the Age of Information, provided by Russell Sage Online, a division of The Sage Colleges.

  • MongoDB World brings together over 2,000 developers, sysadmins, and DBAs in New York City on June 1-2 to get inspired, share ideas and get the latest insights on using MongoDB. Organizations like Salesforce, Bosch, the Knot, Chico’s, and more are taking advantage of MongoDB for a variety of ground-breaking use cases. Find out more at http://mongodbworld.com/ but hurry! Super Early Bird pricing ends on April 3.
Cool Products and Services
  • Here's a little quiz for you: What do these companies all have in common? Symantec, RiteAid, CarMax, NASA, Comcast, Chevron, HSBC, Sauder Woodworking, Syracuse University, USDA, and many, many more? Maybe you guessed it? Yep! They are all customers who use and trust our software, PA Server Monitor, as their monitoring solution. Try it out for yourself and see why we’re trusted by so many. Click here for your free, 30-Day instant trial download!

  • Turn chaotic logs and metrics into actionable data. Scalyr replaces all your tools for monitoring and analyzing logs and system metrics. Imagine being able to pinpoint and resolve operations issues without juggling multiple tools and tabs. Get visibility into your production systems: log aggregation, server metrics, monitoring, intelligent alerting, dashboards, and more. Trusted by companies like Codecademy and InsideSales. Learn more and get started with an easy 2-minute setup. Or see how Scalyr is different if you're looking for a Splunk alternative or Loggly alternative.

  • Instructions for implementing Redis functionality in Aerospike. Aerospike Director of Applications Engineering, Peter Milne, discusses how to obtain the semantic equivalent of Redis operations, on simple types, using Aerospike to improve scalability, reliability, and ease of use. Read more.

  • SQL for Big Data: Price-performance Advantages of Bare Metal. When building your big data infrastructure, price-performance is a critical factor to evaluate. Data-intensive workloads with the capacity to rapidly scale to hundreds of servers can escalate costs beyond your expectations. The inevitable growth of the Internet of Things (IoT) and fast big data will only lead to larger datasets, and a high-performance infrastructure and database platform will be essential to extracting business value while keeping costs under control. Read more.

  • SignalFx: just launched an advanced monitoring platform for modern applications that's already processing 10s of billions of data points per day. SignalFx lets you create custom analytics pipelines on metrics data collected from thousands or more sources to create meaningful aggregations--such as percentiles, moving averages and growth rates--within seconds of receiving data. Start a free 30-day trial!

  • InMemory.Net provides a .NET native in-memory database for analysing large amounts of data. It runs natively on .NET and provides native .NET, COM & ODBC APIs for integration. It also has an easy-to-use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex goes beyond monitoring and measures the system's work on your MySQL and PostgreSQL servers, providing unparalleled insight and query-level analysis. This unique approach ultimately enables your team to work more effectively, ship more often, and delight more customers.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost-effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

Staying Ahead of the Curve

From the Editor of Methods & Tools - Tue, 05/26/2015 - 15:45
We all want to stay ahead of the curve – after all, that’s what you go to a conference for. But have you ever considered how being ahead of the curve might be dangerous? Using a new language before you understand it, putting a technology into production so you can learn it, abandoning “old practices” […]

Announcing the Simple Programmer Podcast

Making the Complex Simple - John Sonmez - Tue, 05/26/2015 - 15:30

If you’ve been following Simple Programmer for a while, you might have been wondering when I was going to create a Simple Programmer podcast. Well, that day has finally come. In fact, by the time you are reading this, 4 episodes have already been published! I’ve wanted to create an official Simple Programmer podcast for […]

The post Announcing the Simple Programmer Podcast appeared first on Simple Programmer.

Categories: Programming

Product Backlog Refinement (Grooming)

Mike Cohn's Blog - Tue, 05/26/2015 - 15:00

The following was originally published in Mike Cohn's monthly newsletter. If you like what you're reading, sign up to have this content delivered to your inbox weeks before it's posted on the blog, here.

Product backlog refinement—sometimes called product backlog grooming in reference to keeping the backlog clean and orderly—is a meeting that is held near the end of one sprint to ensure the backlog is ready for the next sprint.

During a product backlog refinement meeting, the team and product owner discuss the top items on the product backlog. The team is given a chance to ask the questions that would normally arise during sprint planning:

  • What should we do if the user enters invalid data here?
  • Are all users allowed to access this part of the system?
  • What happens if…?

By asking these questions earlier, the product owner is given a chance to arrive at answers to any questions he or she may not be prepared to answer immediately.

If those questions were asked for the first time in sprint planning, and too many could not be answered, it might be necessary to put a high-priority product backlog item aside and not work on it during the sprint.

These questions do not need to be fully resolved in a backlog refinement meeting. Rather, the product owner needs to address them just enough that the team feels confident the story can be adequately discussed during the coming planning meeting.

Backlog refinement in that sense is really a checkpoint rather than an effort to fully resolve issues.

I like to hold the product backlog refinement meetings three days before the end of the current sprint. This gives the product owner sufficient time to act on any issues that are identified. Some teams find that a shorter meeting every week, rather than one per sprint, is better suited to their cadence, and that is, of course, fine.

Unlike other Scrum meetings, I do not think the product backlog refinement meeting requires the participation of the whole team.

If you think about a whole team meeting three days before the end of the sprint, there is likely to be someone who will be frantically busy—someone who, if forced to attend one more meeting, may have to work late to finish the work of the sprint.

I’d prefer to conduct the meeting without such team members. As long as the same team members don’t miss the meeting each sprint, I think it’s fine to conduct backlog refinement meetings with about half the team plus the product owner and ScrumMaster.

Early Release of Agile and Lean Program Management Available

I have finished integrating comments from the early review of Agile and Lean Program Management: Scaling Collaboration Across the Organization. I decided that the book was good enough to release to the general public.

I find it difficult to release books in progress. The in-progress part challenges my perfection rules. I also know that some of you who want this book will wait until it’s done, or worse, available in paper.

However, since this is an agile and lean book, it seems nuts to not release it, even though it is not quite done.

If you get the book, please send me comments about what confused you, what you thought was crazy, and anything else.

Thanks so much!

 

Categories: Project Management

Neo4j: The foul revenge graph

Mark Needham - Tue, 05/26/2015 - 08:03

Last week I was showing the foul graph to my colleague Alistair, who came up with the idea of running a ‘foul revenge’ query to find out which players gained revenge for a foul with one of their own later in the match.

Queries like this are very path-centric and therefore work well in a graph. To recap, this is what the foul graph looks like:

[image: the foul graph model]

The first thing that we need to do is connect the fouls in a linked list based on time so that we can query their order more easily.

We can do this with the following query:

MATCH (foul:Foul)-[:COMMITTED_IN_MATCH]->(match)
WITH foul,match
ORDER BY match.id, foul.sortableTime
WITH match, COLLECT(foul) AS fouls
FOREACH(i in range(0, length(fouls) -2) |
  FOREACH(foul1 in [fouls[i]] | FOREACH (foul2 in [fouls[i+1]] |
    MERGE (foul1)-[:NEXT]->(foul2)
)));

This query collects fouls grouped by match and then adds a ‘NEXT’ relationship between adjacent fouls. The graph now looks like this:

[image: the foul graph with NEXT relationships between adjacent fouls]

Now let’s find the revenge foulers in the Bayern Munich vs Barcelona match. We’re looking for the following pattern:

[image: the revenge foul pattern]

This translates to the following Cypher query:

match (foul1:Foul)-[:COMMITTED_AGAINST]->(app1)-[:COMMITTED_FOUL]->(foul2)-[:COMMITTED_AGAINST]->(app2)-[:COMMITTED_FOUL]->(foul1),
      (player1)-[:MADE_APPEARANCE]->(app1), (player2)-[:MADE_APPEARANCE]->(app2),
      (foul1)-[:COMMITTED_IN_MATCH]->(match:Match {id: "32683310"})<-[:COMMITTED_IN_MATCH]-(foul2)
WHERE (foul1)-[:NEXT*]->(foul2)
RETURN player2.name AS firstFouler, player1.name AS revengeFouler, foul1.time, foul1.location, foul2.time, foul2.location

I’ve added in a few extra parts to the pattern to pull out the players involved and to find the revenge foulers in a specific match – the Bayern Munich vs Barcelona Semi Final 2nd leg.


We end up with the following revenge fouls:

[table: the revenge fouls found in the match]

We can see here that Dani Alves actually gains revenge on Bastian Schweinsteiger twice for a foul he made in the 10th minute.

If we tweak the query to the following we can get a visual representation of the revenge fouls as well:

match (foul1:Foul)-[:COMMITTED_AGAINST]->(app1)-[:COMMITTED_FOUL]->(foul2)-[:COMMITTED_AGAINST]->(app2)-[:COMMITTED_FOUL]->(foul1),
      (player1)-[:MADE_APPEARANCE]->(app1), (player2)-[:MADE_APPEARANCE]->(app2),
      (foul1)-[:COMMITTED_IN_MATCH]->(match:Match {id: "32683310"})<-[:COMMITTED_IN_MATCH]-(foul2),
      (foul1)-[:NEXT*]->(foul2)
RETURN *

[image: graph visualization of the revenge fouls]

At the moment I’ve restricted the revenge concept to single matches but I wonder whether it’d be more interesting to create a linked list of fouls which crosses matches between teams in the same season.

The code for all of this is on github – the README is a bit sketchy at the moment but I’ll be fixing that up soon.

Categories: Programming

Game Performance: Geometry Instancing

Android Developers Blog - Tue, 05/26/2015 - 01:57

Posted by Shanee Nishry, Games Developer Advocate

Imagine a beautiful virtual forest with countless trees, plants and vegetation, or a stadium with countless people in the crowd cheering. If you are heroic you might like the idea of an epic battle between armies.

Rendering a lot of meshes is required to create a beautiful scene like a forest, a cheering crowd or an army, but doing so is quite costly and reduces the frame rate. Fortunately, this is possible using a simple technique called Geometry Instancing.

Geometry instancing can be used in 2D games for rendering a large number of sprites, or in 3D for things like particles, characters and environment.

The NDK code sample More Teapots, demoing the content of this article, can be found with the NDK inside the samples folder and in the git repository.

Support and Extensions

Geometry instancing is available from OpenGL ES 3.0, and on OpenGL ES 2.0 devices which support the GL_NV_draw_instanced or GL_EXT_draw_instanced extensions. More information on how to use the extensions is shown in the More Teapots demo.
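On ES 2.0 devices you can detect these extensions at runtime before choosing the instanced code path. A minimal sketch (error handling omitted; assumes the standard ES 2.0 headers and <cstring> are included):

C++

// Check the extension string once at initialization (ES 2.0 path only).
const char* extensions = (const char*) glGetString( GL_EXTENSIONS );
bool hasInstancing =
    strstr( extensions, "GL_EXT_draw_instanced" ) != NULL ||
    strstr( extensions, "GL_NV_draw_instanced" ) != NULL;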

Overview

Submitting draw calls causes OpenGL to queue commands to be sent to the GPU. This has an expensive overhead which may affect performance. The overhead grows when changing states such as the alpha blending function, active shader, textures and buffers.

Geometry Instancing is a technique that combines multiple draws of the same mesh into a single draw call, resulting in reduced overhead and potentially increased performance. This works even when different transformations are required.

The algorithm

To explain how Geometry Instancing works, let’s quickly review traditional drawing.

Traditional Drawing

To draw a mesh you’d usually prepare a vertex buffer and an index buffer, bind your shader and buffers, set your uniforms such as a World View Projection matrix, and make a draw call.

To draw multiple instances using the same mesh you set new uniform values for the transformations and other data and call draw again. This is repeated for every instance.

Drawing with Geometry Instancing

Geometry Instancing reduces CPU overhead by reducing the sequence described above into a single buffer and draw call.

It works by using an additional buffer which contains custom per-instance data needed by your shader, such as transformations, color, light data.

The first change to your workflow is to create the additional buffer at initialization time.

To put it into code let’s define an example per-instance data that includes a world view projection matrix and a color:

C++

struct PerInstanceData
{
 Mat4x4 WorldViewProj;
 Vector4 Color;
};

You also need to add the structure to your shader. The easiest way is by creating a Uniform Block with an array:

GLSL

#define MAX_INSTANCES 512

layout(std140) uniform PerInstanceData {
    struct
    {
        mat4      uMVP;
        vec4      uColor;
    } Data[ MAX_INSTANCES ];
};

Note that uniform blocks have limited sizes. You can find the maximum number of bytes you can use by querying for GL_MAX_UNIFORM_BLOCK_SIZE using glGetIntegerv.

Example:

GLint max_block_size = 0;
glGetIntegerv( GL_MAX_UNIFORM_BLOCK_SIZE, &max_block_size );

Bind the uniform block on the CPU in your program’s initialization stage:

C++

#define MAX_INSTANCES 512
#define BINDING_POINT 1
GLuint shaderProgram; // Compiled shader program

// Bind Uniform Block
GLuint blockIndex = glGetUniformBlockIndex( shaderProgram, "PerInstanceData" );
glUniformBlockBinding( shaderProgram, blockIndex, BINDING_POINT );

And create a corresponding uniform buffer object:

C++

// Create Instance Buffer
GLuint instanceBuffer;

glGenBuffers( 1, &instanceBuffer );
glBindBuffer( GL_UNIFORM_BUFFER, instanceBuffer );
glBindBufferBase( GL_UNIFORM_BUFFER, BINDING_POINT, instanceBuffer );

// Initialize buffer size
glBufferData( GL_UNIFORM_BUFFER, MAX_INSTANCES * sizeof( PerInstanceData ), NULL, GL_DYNAMIC_DRAW );

The next step is to update the instance data every frame to reflect changes to the visible objects you are going to draw. Once you have your new instance buffer you can draw everything with a single call to glDrawElementsInstanced.

You update the instance buffer using glMapBufferRange. This function locks the buffer and retrieves a pointer to the byte data allowing you to copy your per-instance data. Unlock your buffer using glUnmapBuffer when you are done.

Here is a simple example for updating the instance data:

const int NUM_SCENE_OBJECTS = …; // number of objects visible in your scene which share the same mesh

// Bind the buffer
glBindBuffer( GL_UNIFORM_BUFFER, instanceBuffer );

// Retrieve pointer to map the data
PerInstanceData* pBuffer = (PerInstanceData*) glMapBufferRange( GL_UNIFORM_BUFFER, 0,
                NUM_SCENE_OBJECTS * sizeof( PerInstanceData ),
                GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_RANGE_BIT );

// Iterate the scene objects
for ( int i = 0; i < NUM_SCENE_OBJECTS; ++i )
{
    pBuffer[ i ].WorldViewProj = ... // Copy World View Projection matrix
    pBuffer[ i ].Color = …               // Copy color
}

glUnmapBuffer( GL_UNIFORM_BUFFER ); // Unmap the buffer

And finally you can draw everything with a single call to glDrawElementsInstanced or glDrawArraysInstanced (depending if you are using an index buffer):

glDrawElementsInstanced( GL_TRIANGLES, NUM_INDICES, GL_UNSIGNED_SHORT, 0,
                NUM_SCENE_OBJECTS );

You are almost done! There is just one more step to do. In your shader you need to make use of the new uniform buffer object for your transformations and colors. In your shader main program:

void main()
{
    …
    gl_Position = PerInstanceData.Data[ gl_InstanceID ].uMVP * inPosition;
    outColor = PerInstanceData.Data[ gl_InstanceID ].uColor;
}

You might have noticed the use of gl_InstanceID. This is a predefined OpenGL vertex shader variable that tells your program which instance it is currently drawing. Using this variable, your shader can properly iterate the instance data and match the correct transformation and color for every vertex.

That’s it! You are now ready to use Geometry Instancing. If you are drawing the same mesh multiple times in a frame make sure to implement Geometry Instancing in your pipeline! This can greatly reduce overhead and improve performance.

Categories: Programming

Power Great Gaming with New Analytics from Play Games

Android Developers Blog - Tue, 05/26/2015 - 01:22

By Ben Frenkel, Google Play Games team

A few weeks ago at the Game Developers Conference (GDC), we announced Play Games Player Analytics, a new set of free reports to help you manage your games business and understand in-game player behavior. Today, we’re excited to make these new tools available to you in the Google Play Developer Console.

Analytics is a key component of running a game as a service, which is increasingly becoming a necessity for a successful mobile gaming business. When you take a closer look at large developers that do this successfully, you find that they do three things really well:

  • Manage their business to revenue targets
  • Identify hot spots in their business metrics so they can continuously focus on the game updates that will drive the most impact
  • Use analytics to understand how players are progressing, spending, and churning

“With player engagement and revenue data living under one roof, developers get a level of data quality that is simply not available to smaller teams without dedicated staff. As the tools evolve, I think Google Play Games Player Analytics will finally allow indie devs to confidently make data-driven changes that actually improve revenue.”

Kevin Pazirandeh
Developer of Zombie Highway 2

With Player Analytics, we wanted to make these capabilities available to the entire developer ecosystem on Google Play in a frictionless, easy-to-use way, freeing up your precious time to create great gaming experiences. Small studios, including the makers of Zombie Highway 2 and Bombsquad, have already started to see the benefits and impact of Player Analytics on their business.

Further, if you integrate with Google Play game services, you get this set of analytics with no incremental effort. But, for a little extra work, you can also unlock another set of high impact reports by integrating Google Play game services Events, starting with the Sources and Sinks report, a report to help you balance your in-game economy.

If you already have a game integrated with Google Play game services, go check out the new reports in the Google Play Developer Console today. For everyone else, enabling Player Analytics is as simple as adding a handful of lines of code to your game to integrate Google Play game services.

Manage your business to revenue targets

Set your spend target in Player Analytics by choosing a daily goal

To help assess the health of your games business, Player Analytics enables you to select a daily in-app purchase revenue target and then assess how you're doing against that goal through the Target vs Actual report depicted below. Learn more.

Identify hot spots using benchmarks with the Business Drivers report

Ever wonder how your game’s performance stacks up against other games? Player Analytics tells you exactly how well you are doing compared to similar games in your category.

Metrics highlighted in red are below the benchmark. Arrows indicate whether a metric is trending up or down, and any cell with the icon can be clicked to see more details about the underlying drivers of the change. Learn more.

Track player retention by new user cohort

In the Retention report, you can see the percentage of players that continued to play your game on each of the seven days after installing it.

Learn more.

See where players are spending their time, struggling, and churning with the Player Progression report

Measured by the number of achievements players have earned, the Player Progression funnel helps you identify where your players are struggling and churning to help you refine your game and, ultimately, improve retention. Add more achievements to make progression tracking more precise.

Learn more.

Manage your in-game economy with the Sources and Sinks report

The Sources and Sinks report helps you balance your in-game economy by showing the relationship between how quickly players are earning or buying and using resources.

For example, Eric Froemling, the one-man developer of BombSquad, used the Sources & Sinks report to help balance the rate at which players earned and spent tickets.

Read more about Eric’s experience with Player Analytics in his recent blog post.

To enable the Sources and Sinks report you will need to create and integrate Play game services Events that track sources of premium currency (e.g., gold coins earned), and sinks of premium currency (e.g., gold coins spent to buy in-app items).
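As an illustration only (not code from the post), the sketch below shows what such an integration might look like, assuming hypothetical event IDs created in the Developer Console:

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.games.Games;

public class EconomyTracker {
    // Placeholder event IDs; use the IDs generated for your game
    // in the Google Play Developer Console.
    private static final String EVENT_COINS_EARNED = "YOUR_SOURCE_EVENT_ID";
    private static final String EVENT_COINS_SPENT = "YOUR_SINK_EVENT_ID";

    private final GoogleApiClient apiClient;

    public EconomyTracker(GoogleApiClient apiClient) {
        this.apiClient = apiClient;
    }

    // Report a source: premium currency entering the economy.
    public void onCoinsEarned(int amount) {
        Games.Events.increment(apiClient, EVENT_COINS_EARNED, amount);
    }

    // Report a sink: premium currency leaving the economy.
    public void onCoinsSpent(int amount) {
        Games.Events.increment(apiClient, EVENT_COINS_SPENT, amount);
    }
}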

Categories: Programming

Integrate Play data into your workflow with data exports

Android Developers Blog - Tue, 05/26/2015 - 00:09

Posted by Frederic Mayot, Google Play team

The Google Play Developer Console makes a wealth of data available to you so you have the insight needed to successfully publish, grow, and monetize your apps and games. We appreciate that some developers want to access and analyze their data beyond the visualization offered today in the Developer Console, which is why we’ve made financial information, crash data, and user reviews available for export. We're now also making all the statistics on your apps and games (installs, ratings, GCM usage, etc.) accessible via Google Cloud Storage.

New Reports section in the Google Play Developer Console

We’ve added a Reports tab to the Developer Console so that you can view and access all available data exports in one place.

A reliable way to access Google Play data

This is the easiest and most reliable way to download your Google Play Developer Console statistics. You can access all of your reports, including install statistics, reviews, crashes, and revenue.

Programmatic access to Google Play data

This new Google Cloud Storage access will open up a wealth of possibilities. For instance, you can now programmatically:

  • import install and revenue data into your in-house dashboard
  • run custom analysis
  • import crashes and ANRs into your bug tracker
  • import reviews into your CRM to monitor feedback and reply to your users

Your data is available in a Google Cloud Storage bucket, which is most easily accessed using gsutil. To get started, follow these three simple steps to access your reports:

  1. Install the gsutil tool.
    • Authenticate to your account using your Google Play Developer Console credentials.
  2. Find your reporting bucket ID on the new Reports section.
    • Your bucket ID begins with: pubsite_prod_rev (example: pubsite_prod_rev_1234567890)
  3. Use the gsutil ls command to list directories/reports and gsutil cp to copy the reports, as sketched below. Your reports are organized in directories by package name, as well as year and month of their creation.
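For example, reusing the illustrative bucket ID from step 2 (stats/installs is one of the report directories; adjust the path to whatever gsutil ls actually shows for your account):

# List the top-level report directories in your bucket
gsutil ls gs://pubsite_prod_rev_1234567890

# Copy all install reports to the current directory
gsutil cp -r gs://pubsite_prod_rev_1234567890/stats/installs .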

Read more about exporting report data in the Google Play Developer Help Center.

Note about data ownership on Google Play and Cloud Platform: Your Google Play developer account is gaining access to a dedicated, read-only Google Cloud Storage bucket owned by Google Play. If you’re a Google Cloud Storage customer, the rest of your data is unaffected and not connected to your Google Play developer account. Google Cloud Storage customers can find out more about their data storage on the terms of service page.

Categories: Programming

A New Reference App for Multi-device Applications

Android Developers Blog - Mon, 05/25/2015 - 23:57
It is now possible to bring the benefits of your app to your users wherever they happen to be, no matter what device they have near them. Today we’re releasing a reference sample that shows how to implement such a service with an app that works across multiple Android form-factors. This sample, the Universal Music Player, is a bare-bones but functional reference app that supports multiple devices and form factors in a single codebase. It is compatible with Android Auto, Android Wear, and Google Cast devices. Give it a try and easily adapt your own app for wherever your users are, be that a phone, watch, TV, car, or more!

[screenshots: playback controls and album art on the lock screen; the Google Cast icon on the application toolbar; controlling playback through Android Auto; controlling playback on an Android Wear watch]
This sample uses a number of new features in Android 5.0 Lollipop, like MediaStyle notifications, MediaSession and MediaBrowserService. They make it easy to implement media browsing and playback on multiple devices with a single version of your app.

Check out the source code and let your users enjoy your app from wherever they like.

Posted by Renato Mangini, Senior Developer Platform Engineer, Google Developer Platform Team
Categories: Programming

Drive app installs through App Indexing

Android Developers Blog - Mon, 05/25/2015 - 23:56

Posted by Lawrence Chang, Product Manager

You’ve invested time and effort into making your app an awesome experience, and we want to help people find the great content you’ve created. App Indexing has already been helping people engage with your Android app after they’ve installed it — we now have 30 billion links within apps indexed. Starting this week, people searching on Google can also discover your app if they haven’t installed it yet. If you’ve implemented App Indexing, when indexed content from your app is relevant to a search done on Google on Android devices, people may start to see app install buttons for your app in search results. Tapping these buttons will take them to the Google Play store where they can install your app, then continue straight on to the right content within it.

App installs through app indexing

With the addition of these install links, we are starting to use App Indexing as a ranking signal for all users on Android, regardless of whether they have your app installed or not. We hope that Search will now help you acquire new users, as well as re-engage your existing ones. To get started, visit g.co/AppIndexing and to learn more about the other ways you can integrate with Google Search, visit g.co/DeveloperSearch.

Categories: Programming

There's a lot to explore with Google Play services 7.3

Android Developers Blog - Mon, 05/25/2015 - 23:52

Posted by Ian Lake, Developer Advocate

Today, we’re excited to give you new tools to build better apps with the rollout of Google Play services 7.3. With new Android Wear APIs, the addition of nutrition data to Google Fit, improvements to retrieving the user’s activity and location, and better support for optional APIs, there’s a lot to explore in this release.

Android Wear

Google Play services 7.3 extends the Android Wear network by enabling you to connect multiple Wear devices to a single mobile device.

While the DataApi will automatically sync DataItems across all nodes in the Wear network, the directed nature of the MessageApi is faced with new challenges. What node do you send a message to when the NodeApi starts showing multiple nodes from getConnectedNodes()? This is exactly the use case for the new CapabilityApi, which allows different nodes to advertise that they provide a specific functionality (say, the phone node being able to download images from the internet). This allows you to replace a generic NodeListener with a more specific CapabilityListener, getting only connection results and a list of nodes that have the specific functionality you need. We’ve updated the Sending and Receiving Messages training to explore this new functionality.

Another new addition for Android Wear is the ChannelApi, which provides a bidirectional data connection between two nodes. While assets are the best way to efficiently add binary data to the data layer for synchronization to all devices, this API focuses on sending larger binary data directly between specific nodes. This comes in two forms: sending full files via the sendFile() method (perfect for later offline access) or opening an OutputStream to stream real time binary data. We hope this offers a flexible, low level API to complement the DataApi and MessageApi.

We’ve updated our samples with these changes in mind so go check them out here!

Google Fit

Google Fit makes building fitness apps easier with fitness specific APIs on retrieving sensor data like current location and speed, collecting and storing activity data in Google Fit’s open platform, and automatically aggregating that data into a single view of the user’s fitness data.

To make it even easier to retrieve up-to-date information, Google Play Services 7.3 adds a new method to the HistoryApi: readDailyTotal(). This automatically aggregates data for a given DataType from midnight on the current day through now, giving you a single DataPoint. For TYPE_STEP_COUNT_DELTA, this method does not require any authentication, making it possible to retrieve the current number of steps for today from any application whether on mobile devices or on Android Wear - great for watch faces!
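As a rough sketch (client setup, permissions and error handling omitted; await() means this must run off the main thread):

import java.util.concurrent.TimeUnit;

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.fitness.Fitness;
import com.google.android.gms.fitness.data.DataSet;
import com.google.android.gms.fitness.data.DataType;
import com.google.android.gms.fitness.data.Field;
import com.google.android.gms.fitness.result.DailyTotalResult;

public class StepTotalReader {
    // Read today's aggregated step count with a single call.
    public static long readTodaySteps(GoogleApiClient client) {
        DailyTotalResult result = Fitness.HistoryApi
                .readDailyTotal(client, DataType.TYPE_STEP_COUNT_DELTA)
                .await(30, TimeUnit.SECONDS);
        if (!result.getStatus().isSuccess()) {
            return 0; // no data available
        }
        DataSet total = result.getTotal();
        return total.isEmpty()
                ? 0
                : total.getDataPoints().get(0).getValue(Field.FIELD_STEPS).asInt();
    }
}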

Google Fit is also augmenting its existing data types with granular nutrition information, including protein, fat, cholesterol, and more. By leveraging these details about the user’s diet, developers can help users stay more informed about their health and fitness.

Location

LocationRequest is the heart of the FusedLocationProviderApi, encapsulating the type and frequency of location information you’d like to receive. An important but small change to LocationRequest is the addition of a maximum wait time for location updates via setMaxWaitTime(). By using a value at least two times larger than the requested interval, the system can batch location updates together, reducing battery usage and, on some devices, actually improving location accuracy.
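A minimal sketch of a batched request (the intervals and priority are illustrative, not recommendations):

import com.google.android.gms.location.LocationRequest;

public class BatchedLocationRequest {
    // Ask for updates every 10 seconds, but allow the system to batch
    // deliveries so the device wakes up at most about once a minute.
    public static LocationRequest build() {
        return LocationRequest.create()
                .setPriority(LocationRequest.PRIORITY_BALANCED_POWER_ACCURACY)
                .setInterval(10000)      // desired interval, milliseconds
                .setMaxWaitTime(60000);  // >= 2x the interval enables batching
    }
}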

For any ongoing location requests, it is important to know that you will continue to get good location data back. The SettingsApi is still incredibly useful for confirming that user settings are optimal before you put in a LocationRequest, however, it isn’t the best approach for continual monitoring. For that, you can use the new LocationCallback class in place of your existing LocationListener to receive LocationAvailability updates in addition to location updates, giving you a simple callback whenever settings might have changed which will affect the current set of LocationRequests. You can also use FusedLocationProviderApi’s getLocationAvailability() to retrieve the current state on demand.

Connecting to Google Play services

One of the biggest benefits of GoogleApiClient is that it provides a single connection state, whether you are connecting to a single API or multiple APIs. However, this made it hard to work with APIs that might not be available on all devices, such as the Wearable API. This release makes it much easier to work with APIs that may not always be available with the addition of an addApiIfAvailable() method ensuring that unavailable APIs do not hold up the connection process. The current state for each API can then be retrieved via getConnectionResult(), giving you a way to check at runtime whether an API is available and connected.
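A sketch of how this might look, treating the Wearable API as optional (the API choices are illustrative):

import android.content.Context;

import com.google.android.gms.common.ConnectionResult;
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.location.LocationServices;
import com.google.android.gms.wearable.Wearable;

public class ApiClientFactory {
    // The Wearable API is optional; if it is unavailable on this device,
    // the client still connects for the required APIs.
    public static GoogleApiClient build(Context context) {
        return new GoogleApiClient.Builder(context)
                .addApi(LocationServices.API)     // required
                .addApiIfAvailable(Wearable.API)  // optional
                .build();
    }

    // After connecting, check whether the optional API is actually usable.
    public static boolean isWearableAvailable(GoogleApiClient client) {
        ConnectionResult result = client.getConnectionResult(Wearable.API);
        return result.isSuccess();
    }
}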

While GoogleApiClient’s connection process already takes care of checking for Google Play services availability, if you are not using GoogleApiClient, you’ll find many of the static utility methods in GooglePlayServicesUtil such as isGooglePlayServicesAvailable() have now been moved to the singleton GoogleApiAvailability class. We hope the move away from static methods helps you when writing tests, ensuring your application can properly handle any error cases.

SDK is now available!

Google Play services 7.3 is now available: get started with the updated SDK now!

To learn more about Google Play services and the APIs available to you through it, visit the Google Play services section on the Android Developer site.

Categories: Programming

New Android Code Samples

Android Developers Blog - Mon, 05/25/2015 - 23:50

Posted by Rich Hyndman, Developer Advocate

A new set of Android code samples, covering Android Wear, Android for Work, NFC and Screen capturing, have been committed to our Google Samples repository on GitHub. Here’s a summary of the new code samples:

XYZTouristAttractions

This sample mimics a real world mobile and Android Wear app. It has a more refined design and also provides a practical example of how a mobile app would interact and communicate with its Wear counterpart.

The app itself is modeled after a hypothetical tourist attractions experience that notifies the user when they are in close proximity to notable points of interest. In parallel, the Wear component shows tourist attraction images and summary information, and provides quick actions for nearby tourist attractions in a GridViewPager UI component.

DeviceOwner - A Device Owner is a specialized type of device administrator that can control device security and configuration. This sample uses the DevicePolicyManager to demonstrate how to use device owner features, including configuring global settings (e.g., automatic time and time zone) and setting the default launcher.

NfcProvisioning - This sample demonstrates how to use NFC to provision a device with a device owner. This sample sets up the peer device with the DeviceOwner sample by default. You can rewrite the configuration to use any other device owner.

NFC BeamLargeFiles - A demonstration of how to transfer large files via Android Beam on Android 4.1 and above. After the initial handshake over NFC, file transfer will take place over a secondary high-speed communication channel such as Bluetooth or WiFi Direct.

ScreenCapture - The MediaProjection API was added in Android Lollipop and allows you to easily capture screen contents and/or record system audio. The ScreenCapture sample demonstrates how to use the API to capture device screen in real time and show it on a SurfaceView.

As an additional bonus, the Santa Tracker Android app, including three games, two watch-faces and other goodies, was also recently open sourced and is now available on GitHub.

As with all the Android samples, you can also easily access these new additions in Android Studio using the built in Import Samples feature and they’re also available through our Samples Browser.

Check out a sample today to help you with your development!

Categories: Programming

Always-on and Wi-Fi with the latest Android Wear update

Android Developers Blog - Mon, 05/25/2015 - 23:27

Posted by Wayne Piekarski, Developer Advocate

A new update to Android Wear is rolling out with lots of new features like always-on apps, Wi-Fi connectivity, media browsing, emoji input, and more. Let’s discuss some of the great new capabilities that are available in this release.

Always-on apps

Above all, a watch should make it easy to tell the time. That's why most Android Wear watches have always-on displays, so you can see the time without having to shake your wrist or lift your arm to wake up the display. In this release, we're making it possible for apps to be always-on as well.

With always-on functionality, your app can display dynamic data on the device, even when the app is in ambient mode. This is useful if your app displays information that is continuously updated. For example, running apps like Endomondo, MapMyRun, and Runtastic use the always-on screen to let you keep track of how long and far you’ve been running. Zillow keeps you posted about the median price of homes nearby when you’re house-hunting.

Always-on functionality is also useful for apps that may not update data very frequently, but present information that’s useful for reference over a longer period of time. For example, Bring! lets you keep your shopping list right on your wrist, and Golfshot gives you accurate distances from tee to pin. If you’re at the airport and making your way to your gate, American Airlines, Delta, and KLM let you keep all of your flight info a glance away on your watch.

Note: the above apps will not display always-on functionality on your watch until you receive the update for the latest version of Android Wear.

Always-on functionality works similarly to watch faces, in that the power usage of the display and processor is kept to a minimum by reducing the colors and refresh rate of the display. To implement an always-on Activity, you need to make a few small changes to your app's AndroidManifest.xml, your app’s build.gradle, and the Activity to declare that it supports ambient mode. A code sample and documentation are available to show you how it works. Be sure to tune in to the livestream at Google I/O next week for Android Wear: Your app and the always-on screen.
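In outline, the Activity side of those changes looks something like the sketch below (TimerActivity is a hypothetical example; see the official sample for the exact manifest and build.gradle changes):

import android.os.Bundle;
import android.support.wearable.activity.WearableActivity;

public class TimerActivity extends WearableActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setAmbientEnabled(); // declare support for ambient (always-on) mode
    }

    @Override
    public void onEnterAmbient(Bundle ambientDetails) {
        super.onEnterAmbient(ambientDetails);
        // Switch to a low-power UI: black background, minimal color.
    }

    @Override
    public void onUpdateAmbient() {
        super.onUpdateAmbient();
        // Refresh the displayed data; called roughly once a minute in ambient.
    }

    @Override
    public void onExitAmbient() {
        super.onExitAmbient();
        // Restore the full-color interactive UI.
    }
}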

Wi-Fi connectivity and cloud sync

Many existing Android Wear devices already contain hardware support for Wi-Fi, and this release enables software support for Wi-Fi. The saved Wi-Fi networks on your phone are copied to your watch during setup, and your watch automatically connects to those Wi-Fi networks when it loses Bluetooth connection to your phone. Your watch can then connect to your phone over the Internet, even if they’re not on the same Wi-Fi network.

You should continue to use the Data Layer API for all communications between the watch and phone. By using this standard API, your app will always work, no matter what kind of connectivity the user’s wearable supports. Cloud sync also introduces a new virtual node in the Data Layer called the cloud node, which may be returned in calls to getConnectedNodes(). Learn more in the Multi-wearable support section below.

Multi-wearable support

The release of Google Play services 7.3 now allows support for multiple wearable devices to be paired simultaneously to a single phone or tablet, so you can have a wearable for fitness, and another for dressing up. While DataItems will continue to work in the same way, since they are synchronized to all devices, working with the MessageApi is a little different. When you update your build.gradle to use version 7.3 or higher, getConnectedNodes() from the NodeApi will usually return multiple nodes. There is an extra virtual node added to represent the cloud node used to communicate over Wi-Fi, so all developers need to deal with this situation in their code.

To help simplify finding the right node among many devices, we have added a CapabilityApi, allowing your nodes to announce features they provide, for example downloading images or music. You can also now use the ChannelApi to open up a connection to a specific device to transfer large resources such as images or audio streams, without having to send them to all devices like you would when embedding assets into data items. We have updated our Android Wear samples and documentation to show the best practices in implementing this.

MediaBrowser support

The Android 5.0 release added the ability for apps to browse the media content of another app, via the android.media.browse API. With the latest Android Wear update, if your media playback app supports this API, then you will be able to browse to find the next song directly from your watch. This is the same browse capability used in Android Auto. You implement the API once, and it will work across a variety of platforms. To do so, you just need to allow Android Wear to browse your app in your onGetRoot() method's client validation. You can also add custom actions to the MediaSession that will appear as controls on the watch. We have a Universal Media Player sample that shows you how to implement this functionality.
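A sketch of that validation step (MusicService, MEDIA_ROOT_ID and isKnownCaller() are hypothetical names; a real check should verify the caller's package signature, not just its name):

import java.util.Collections;
import java.util.List;

import android.media.browse.MediaBrowser;
import android.os.Bundle;
import android.service.media.MediaBrowserService;

public class MusicService extends MediaBrowserService {
    private static final String MEDIA_ROOT_ID = "root";

    @Override
    public BrowserRoot onGetRoot(String clientPackageName, int clientUid,
                                 Bundle rootHints) {
        // Returning null denies browsing; allow known callers such as
        // Android Wear and Android Auto.
        if (!isKnownCaller(clientPackageName, clientUid)) {
            return null;
        }
        return new BrowserRoot(MEDIA_ROOT_ID, null);
    }

    @Override
    public void onLoadChildren(String parentId,
                               Result<List<MediaBrowser.MediaItem>> result) {
        // Return the browsable items for parentId; empty in this sketch.
        result.sendResult(Collections.<MediaBrowser.MediaItem>emptyList());
    }

    private boolean isKnownCaller(String packageName, int uid) {
        return true; // placeholder validation
    }
}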

Updates to existing devices

The latest version of Android Wear will roll out via an over-the-air (OTA) update to all Android Wear watches over the coming weeks. To take advantage of these new features, you will need to use targetSdkVersion 22 and add the necessary dependencies for always-on support. We have also expanded the collection of emulators available via the SDK Manager, to simulate the experience on all the currently available devices, resolutions, and shapes, including insets like the Moto 360.

In this update, we have also disabled support for apps that use the unofficial, activity-based approach for displaying watch faces, as announced in December. These watch faces will no longer work and should be updated to use the new watch face API.

Since the launch of Android Wear last summer, Android Wear has grown into a platform that gives users many possibilities to personalize their watches, with a variety of shapes and styles, a range of watch bands, and thousands of apps and watch faces. Features such as always-on apps and Wi-Fi allow developers even more flexibility to give users amazing experiences with Android Wear.

Categories: Programming

Quickly build a XL Deploy plugin for deploying container applications to CoreOS

Xebia Blog - Mon, 05/25/2015 - 21:57

You can use fleetctl and script files to deploy your container applications to CoreOS. However, using XL Deploy for deployment automation is a great solution when you need to deploy and track versions of many applications. What does it take to create an XL Deploy plugin to deploy these container applications to your CoreOS clusters?

XL Deploy can be extended with custom plugins which add deployment capabilities. Using XL Rules, custom plugins can be created quickly with limited effort. In this blog you can read how a plugin can be created in a matter of hours.

In a number of blog posts, Mark van Holsteijn explained how to create a highly available Docker container platform using CoreOS and Consul. In these posts, shell scripts (with fleetctl commands) are used to deploy container applications. Based on these scripts, I have built an XL Deploy plugin which deploys fleet unit configuration files to a CoreOS cluster.

 

Deploying these container applications using XL Deploy has a number of advantages:

  • Docker containers can be deployed without creating, adjusting and maintaining scripts for individual applications.
  • XL Deploy will track and report the applications deployed to the CoreOS clusters.
  • Additional deployment scenarios can be added with limited effort.
  • Deployments will be consistent and configuration is managed across environments.
  • XL Deploy permissions can be used to control (direct) access to the CoreOS cluster(s).

 

Building an XL Deploy plugin is fast, since you can:

  • Reuse existing XL Deploy capabilities, like the Overthere plugin.
  • Utilize XL Deploy template processing to inject property values in rules and deploy scripts.
  • Exploit the XL Deploy unified deployment model to get the deltas which drive the required fleetctl deployment commands for any type of deployment (new, update, undeploy and rollback deployments).
  • Use XML- and script-based rules to build deployment tasks.

Getting started

  • Install XL Deploy; you can download a free edition here. If you are not familiar with XL Deploy, read the getting started documentation.
  • Next, add the plugin resources to the ext directory of your XL Deploy installation. You can find the plugin resources in this Github repository. Add the synthetic.xml and xl-rules.xml files from the repository root. In addition, add the scripts directory and its contents. Restart XL Deploy.
  • Next, set up a CoreOS cluster. This blog post explains how you can set up such a platform locally.
  • Now you can connect to XL Deploy using your browser. On the deployment tab you can import the sample application, located in the sample-app folder of the plugin resources Github repository.
  • You can now set up the target deployment container based on the Overthere.SshHost configuration item type. Verify that you can connect to your CoreOS cluster using this XL Deploy container.
  • Next, you can set up an XL Deploy environment, which contains your target deployment container.
  • Now you can use the deployment tab to deploy and undeploy your fleet configuration file applications.

 

Building the plugin

The plugin consists of two xml files and a number of script files. Below you find a description of the plugin implementation steps.

The CoreOS container application deployments are based on fleet unit configuration files. So, first we create an XL Deploy configuration item type definition which represents such a file. This XL Deploy deployed type is defined in the XL Deploy synthetic.xml file. The snippet below shows the contents of this file. I have assigned the name “fleet.DeployedUnit”.

[screenshot: synthetic.xml]

The definition contains a container-type attribute which references the Overthere.SshHost container. The plugin can simply use the Overthere.SshHost container type to connect to the CoreOS cluster and to execute fleet commands.

Furthermore, I have added two properties. One property can be used to specify the number of instances. Note that XL Deploy dictionaries can be utilized to define the number of instances for each environment separately. The second property is a flag which controls whether instances will be started (or only submitted and loaded).
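Since the original snippet is shown only as a screenshot, the sketch below gives an impression of what such a type definition looks like. The attribute names and defaults are assumptions based on the description above, not a copy of the original file:

<type type="fleet.DeployedUnit" extends="udm.BaseDeployed"
      deployable-type="fleet.Unit" container-type="overthere.SshHost">
    <generate-deployable type="fleet.Unit" extends="udm.BaseDeployable"/>
    <!-- Number of unit instances; can be set per environment via dictionaries -->
    <property name="numberOfInstances" kind="integer" default="1"/>
    <!-- Whether instances are started, or only submitted and loaded -->
    <property name="startUnit" kind="boolean" default="true"/>
</type>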

If you want to deploy a fleet configuration file using fleetctl, you can issue the following three commands: submit, load and start. In the plugin, I have created a separate script file for each of these fleetctl commands. The caption below shows the script file to load a fleet configuration file. This load script uses the file name property and numberOfInstances property of the “fleet.DeployedUnit” configuration item.

[screenshot: the load-unit script]
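Stripped of the plugin's templating, the three fleetctl steps amount to something like this (the unit names are illustrative):

fleetctl submit myapp@.service   # register the unit template with the cluster
fleetctl load myapp@1.service    # schedule an instance onto a machine
fleetctl start myapp@1.service   # start the loaded instance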

Finally, the plugin can be completed with XML-based rules which create the deployment steps. The caption below shows the rule which adds steps to [1] submit the unit configuration and [2] load the unit when (a version of) the application is deployed.

[screenshot: xl-rules.xml]

Using rules, you can easily define logic to add deployment steps. These steps can closely resemble the commands you perform when you are using fleetctl. For this plugin I have utilized XML-based rules only. Using script rules, you can add more intelligence to your plugin. For example, the logic of the restart script can be converted to rules and more fine-grained deployment steps.

More information

If you are interested in building your own XL Deploy plugin, the XL Deploy product documentation contains tutorials which will get you started.

If you want to know how you can create a highly available Docker container platform using CoreOS and Consul, the following blogs are a great starting point:

Appknox Architecture - Making the Switch from AWS to the Google Cloud

This is a guest post by dhilipsiva, Full-Stack & DevOps Engineer at Appknox.

Appknox helps detect and fix security loopholes in mobile applications. Securing your app is as simple as submitting your store link. We upload your app, scan for security vulnerabilities, and report the results. 

What's notable about our stack:
  • Modular Design. We took modularity so far that we decoupled our front-end from our back-end. This architecture has many advantages that we'll talk about later in the post.
  • Switch from AWS to Google Cloud. We made our code largely vendor independent so we were able to easily make the switch from AWS to the Google Cloud. 
Primary Languages
  1. Python & Shell for the Back-end
  2. CoffeeScript and LESS for Front-end
Our Stack
  1. Django
  2. Postgres (Migrated from MySQL)
  3. RabbitMQ
  4. Celery
  5. Redis
  6. Memcached
  7. Varnish
  8. Nginx
  9. Ember
  10. Google Compute 
  11. Google Cloud Storage
Architecture
Categories: Architecture

Microservices architecture principle #2:Autonomy over coordination

Xebia Blog - Mon, 05/25/2015 - 16:13

Microservices are a hot topic. Because of that a lot of people are saying a lot of things. To help organizations make the best of this new architectural style Xebia has defined a set of principles that we feel should be applied when implementing a Microservice Architecture. Over the next couple of days we will cover each of these principles in more detail in a series of blog posts.
This blog explains why we prefer autonomy of services over coordination between services.

Our Xebia colleague Serge Beaumont posted "Autonomy over Coordination" in a tweet earlier this year, and for me it summarised one of the crucial aspects of creating an agile, scalable and robust IT system or organisational structure. Autonomy over coordination is closely related to the business capabilities described in the previous post in this series: each capability should be implemented in one microservice. Once you have defined your business capabilities correctly, the dependencies between those capabilities are minimised, so minimal coordination between capabilities is required, leading to optimal autonomy. Increased autonomy for a microservice gives it freedom to evolve without impacting other services: the optimal technology can be used, it can scale without having to scale others, etc. For the team responsible for the service the advantages are similar: the autonomy enables them to make optimal choices that make their team function at its best.

The drawbacks of less autonomy and more coordination are evident, and we all have experienced them. For example: a change leads to a snowball of dependent changes that must be deployed at the same moment; making changes to a module requires the approval of other teams; a compute-intensive function cannot be scaled up without scaling the whole system... the list is endless.

So in summary, pay attention to defining your business capabilities (microservices) in such a manner that autonomy is maximised; it will give you both organisational and technical advantages.

Some Advice On Becoming a Lead Developer

Making the Complex Simple - John Sonmez - Mon, 05/25/2015 - 16:00

Becoming a lead developer or technical lead on a team is a great responsibility and can be an excellent career opportunity, but the transition to this role can be a little jarring. There is a big difference between being responsible for yourself and your work and being at least partially responsible for the work of an […]

The post Some Advice On Becoming a Lead Developer appeared first on Simple Programmer.

Categories: Programming