
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Programming

Take the Harder Road

In this video, I talk about why it is usually a good idea to take the harder road when faced with two choices, and also about how important it is to specialize.

The post Take the Harder Road appeared first on Simple Programmer.

Categories: Programming

Docker/Neo4j: Port forwarding on Mac OS X not working

Mark Needham - 8 hours 32 min ago

Prompted by Ognjen Bubalo’s excellent blog post I thought it was about time I tried running Neo4j in a Docker container on my MacBook Pro to make it easier to play around with different data sets.

I got the container up and running by following Ognjen’s instructions and had the following ports forwarded to my host machine:

$ docker ps
CONTAINER ID        IMAGE                 COMMAND                CREATED             STATUS              PORTS                                              NAMES
c62f8601e557        tpires/neo4j:latest   "/bin/bash -c /launc   About an hour ago   Up About an hour    0.0.0.0:49153->1337/tcp, 0.0.0.0:49154->7474/tcp   neo4j

This should allow me to access Neo4j on port 49154 but when I tried to access that host:port pair I got a connection refused message:

$ curl -v http://localhost:49154
* Adding handle: conn: 0x7ff369803a00
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7ff369803a00) send_pipe: 1, recv_pipe: 0
* About to connect() to localhost port 49154 (#0)
*   Trying ::1...
*   Trying 127.0.0.1...
*   Trying fe80::1...
* Failed connect to localhost:49154; Connection refused
* Closing connection 0
curl: (7) Failed connect to localhost:49154; Connection refused

My first thought was that maybe Neo4j hadn’t started up correctly inside the container, so I checked the logs:

$ docker logs --tail=10 c62f8601e557
10:59:12.994 [main] INFO  o.e.j.server.handler.ContextHandler - Started o.e.j.w.WebAppContext@2edfbe28{/webadmin,jar:file:/usr/share/neo4j/system/lib/neo4j-server-2.1.5-static-web.jar!/webadmin-html,AVAILABLE}
10:59:13.449 [main] INFO  o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@192efb4e{/db/manage,null,AVAILABLE}
10:59:13.699 [main] INFO  o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@7e94c035{/db/data,null,AVAILABLE}
10:59:13.714 [main] INFO  o.e.j.w.StandardDescriptorProcessor - NO JSP Support for /browser, did not find org.apache.jasper.servlet.JspServlet
10:59:13.715 [main] INFO  o.e.j.server.handler.ContextHandler - Started o.e.j.w.WebAppContext@3e84ae71{/browser,jar:file:/usr/share/neo4j/system/lib/neo4j-browser-2.1.5.jar!/browser,AVAILABLE}
10:59:13.807 [main] INFO  o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@4b6690b1{/,null,AVAILABLE}
10:59:13.819 [main] INFO  o.e.jetty.server.ServerConnector - Started ServerConnector@495350f0{HTTP/1.1}{c62f8601e557:7474}
10:59:13.900 [main] INFO  o.e.jetty.server.ServerConnector - Started ServerConnector@23ad0c5a{SSL-HTTP/1.1}{c62f8601e557:7473}
2014-11-27 10:59:13.901+0000 INFO  [API] Server started on: http://c62f8601e557:7474/
2014-11-27 10:59:13.902+0000 INFO  [API] Remote interface ready and available at [http://c62f8601e557:7474/]

Nope! It’s up and running perfectly fine, which suggested the problem was with port forwarding.

I eventually found my way to Chris Jones’ ‘How to use Docker on OS X: The Missing Guide’, which explained the problem:

The Problem: Docker forwards ports from the container to the host, which is boot2docker, not OS X.

The Solution: Use the VM’s IP address.

So to access Neo4j on my machine I need to use the VM’s IP address rather than localhost. We can get the VM’s IP address like so:

$ boot2docker ip
 
The VM's Host only interface IP address is: 192.168.59.103

Let’s strip out that surrounding text though:

$ boot2docker ip 2> /dev/null
192.168.59.103

Now if we cURL using that IP instead:

$ curl -v http://192.168.59.103:49154
* About to connect() to 192.168.59.103 port 49154 (#0)
*   Trying 192.168.59.103...
* Adding handle: conn: 0x7fd794003a00
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fd794003a00) send_pipe: 1, recv_pipe: 0
* Connected to 192.168.59.103 (192.168.59.103) port 49154 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.30.0
> Host: 192.168.59.103:49154
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json; charset=UTF-8
< Access-Control-Allow-Origin: *
< Content-Length: 112
* Server Jetty(9.0.5.v20130815) is not blacklisted
< Server: Jetty(9.0.5.v20130815)
<
{
  "management" : "http://192.168.59.103:49154/db/manage/",
  "data" : "http://192.168.59.103:49154/db/data/"
* Connection #0 to host 192.168.59.103 left intact

Happy days!
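Since I’ll never remember that IP address, the two commands can also be combined into one (assuming a bash-style shell):

$ curl http://$(boot2docker ip 2> /dev/null):49154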

Chris has solutions to lots of other common problems people come across when using Docker with Mac OS X so it’s worth having a flick through his post.

Categories: Programming

How to implement validation callbacks in AngularJS 1.3

Xebia Blog - Wed, 11/26/2014 - 09:21

In my current project we've recently switched from AngularJS 1.2 to 1.3. Except for a few breaking changes the upgrade was quite trivial. However, after diving into the changelog we noticed that the way AngularJS handles form validation changed drastically. Since we're working on a greenfield application we decided it was worth the effort to rewrite the validation logic. The main argument for this was that the validation we had could be drastically simplified by using the new validation pipeline.

This article is aimed at AngularJS developers interested in the new validation pipeline offered by AngularJS 1.3. Except for a small introduction, this article will not be about all the different aspects related to validating forms. I will showcase 2 different cases in which we had to come up with custom solutions:

  • Displaying additional information after successful validation
  • Validating equality of multiple password fields

What has changed?

Whereas in AngularJS 1.2 we could use $parsers and $formatters for form validation, AngularJS 1.3 introduces the concepts of $validators and $asyncValidators. As the names suggest, the former is for synchronous validations on the client-side, while the latter is for asynchronous validations such as server-side checks using HTTP calls.

All validators are directives that register themselves on a specific ngModel by adding a function to either ngModel.$validators or ngModel.$asyncValidators. When validating, all $validators run before the $asyncValidators are executed.
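For a synchronous check this boils down to a few lines inside a directive's link function (a minimal sketch; evenNumber is a made-up validator name, and its result ends up in ngModel.$error.evenNumber):

ngModel.$validators.evenNumber = function (modelValue, viewValue) {
  var value = modelValue || viewValue;
  // leave empty values to the 'required' validator; returning true marks the field valid
  return !value || parseInt(value, 10) % 2 === 0;
};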

AngularJS 1.3 also utilises the HTML5 validation API wherever possible. For each of the HTML5 validation attributes, AngularJS offers a directive. For example, for minlength, add ng-minlength to your input field to incorporate the minimum length check.

When it comes down to showing error messages we can rely on the $error property of an ngModel. Whenever one of the validators fails, it will add the name of that validator to the $error property. Using the brand new ngMessages module we can then easily display a specific error message depending on the type of validator.

Displaying additional information after successful validation

Implementing the new validation pipeline came with a few challenges. The biggest was that we had quite a few use cases in which, after successfully validating a field, we wanted to display some data returned by the web service. Below I will discuss how we've solved this.

The directive itself is very simple and merely does the following:

  1. Clear the data displayed next to the field. If the user has already entered text and the validation succeeded, the data from the validation call will be shown next to the input field. If the user were then to change the input's value and it failed to validate, the data displayed next to the field would be stale. To prevent this we first clear the data displayed next to the field at the start of each validation.
  2. Validate the content against the web service using the HelloResource. Besides returning the promise the resource gives us, we invoke the callback() method when the promise is successfully resolved. (AngularJS marks the field valid when the returned promise resolves and invalid when it rejects.)
  3. Display data returned by the HTTP call using a callback method
'use strict';

angular.module('angularValidators')
  .directive('validatorWithCallback', function (HelloResource) {
    return {
      require: 'ngModel',
      link: function (scope, element, attrs, ngModel) {
        function callback(response) {}

        ngModel.$asyncValidators.validateWithCallback = function (modelValue, viewValue) {
          callback('');

          var value = modelValue || viewValue;

          return HelloResource.get({name: value}).$promise.then(function (response) {
            callback(response);
          });
        };
      }
    };
});

We can add the validator to our input by adding the validator-with-callback attribute to the input we would like to validate.

<form name="form">
    <input type="text" name="name" ng-model="name" required validator-with-callback />
</form>
Implementing the clear and callback

Because this directive should be independent of any specific model, we have to find a way to pass the target object to the directive. To accomplish this we add a value to the validator-with-callback attribute. We also change the ng-model attribute value to name.value; why this is required will be explained later on. To finish we also add a div that is only displayed when the form element is valid and shows the value of name.detail.

<form name="form">
    <input type="text" name="name" ng-model="name.value" required validator-with-callback="name" />
    <div ng-if="form.name.$valid">{{name.detail}}</div>
</form>

The scope's $eval method can be used to resolve the object from the attribute's value. Displaying the data won't work if we simply supply and overwrite a plain scoped value (e.g. $scope.data), since ng-if creates a child scope and we need to update a property on a shared object (the usual Angular "dot rule"). We therefore add a scoped object name which contains 2 properties: value and detail. Note: the naming is not important.

We will add a controller to our HTML file which will be responsible for setting the default value for our scoped object name. As shown in our HTML view above, the value property will be used for storing the value of the field. The detail property will be used to store the response from the web service call and display it to the user.

'use strict';

angular.module('angularValidators')
  .controller('ValidationController', function ($scope) {
    $scope.name = {
      value: undefined,
      detail: ''
    };
});

The last thing is changing the directive implementation to retrieve the target object and implement the clear and callback methods. We do this by retrieving the value from the validator-with-callback attribute by calling scope.$eval(attrs.validatorWithCallback). When we have the target object we can implement the callback method.

'use strict';

angular.module('angularValidators')
  .directive('validatorWithCallback', function (HelloResource) {
    return {
      require: 'ngModel',
      link: function (scope, element, attrs, ngModel) {
        var target = scope.$eval(attrs.validatorWithCallback);

        function callback(response) {
          if (_.isUndefined(target)) {
            return;
          }

          target.detail = response.msg;
        }
        ngModel.$asyncValidators.validateWithCallback = function (modelValue, viewValue) {
          ...omitted...
        };
      }
    };
});

This is all that's needed to create a directive with a callback method. This callback method uses data returned by the web service to populate the detail property, but it can of course be adjusted to do anything we desire.

Validating equality of multiple password fields

The second example we would like to show is a synchronous validator that validates the values of 2 different fields. The use case we had for this was 2 password fields which were required to be equal.

Requirements
  • (In)validate both fields when the user changes the value of either one of them and they are (not) equal
  • The validator successfully validates the field if the second input has not been touched or the value of the second input is empty.
  • Only display 1 error message, and only when no other validators (required & ng-minlength) have failed
Implementation

We start off by creating the 2 different form elements, each with both a required and an ng-minlength validator. We also add a button to the form to show how enabling/disabling the button depending on the validity of the form works.

Both password fields also have the validate-must-equal-to="other_field_name" attribute. This indicates that we wish to validate the value of this field against the field named by the attribute. We also add a form-name="form" attribute to pass the name of the form to our directive. This is needed for invalidating the other input on our form without hardcoding the form name inside the directive, thus keeping the directive fully independent of form and field names.

To conclude we also conditionally show or hide the error-displaying containers. For the errors related to the first input field we also specify that they should not be displayed if the notEqualTo error has been set by our directive. This ensures that no empty div is displayed when our validator invalidates the first field.

<form name="form">
    <input type="password" name="password" ng-model="password" required ng-minlength="8" validate-must-equal-to="password2" form-name="form" />
    <div ng-messages="form.password.$error" ng-if="form.password.$touched && form.password.$invalid && !form.password.$error.notEqualTo">
        <div ng-message="required">This field is required</div>
        <div ng-message="minlength">Your password must be at least 8 characters long</div>
    </div>
    <br />
    <input type="password" name="password2" ng-model="password2" required ng-minlength="8" validate-must-equal-to="password" form-name="form" />
    <div ng-messages="form.password2.$error" ng-if="form.password2.$touched && form.password2.$invalid">
        <div ng-message="required">This field is required</div>
        <div ng-message="minlength">Your password must be at least 8 characters long</div>
    </div>

    <p>The submit button will only be enabled when the entire form is valid</p>
    <button ng-disabled="form.$invalid">Submit</button>
</form>

The validator itself is again very compact. Basically all we want is to retrieve the value from the input and pass it to an isEqualToOther method which returns a boolean. At the beginning of the link method we also check whether the form-name attribute is provided. If not, we throw an error, to communicate to any developer reusing this directive that it requires the form name to function correctly. Unfortunately, at this moment there is no other way to communicate the additional mandatory attribute.

'use strict';

angular.module('angularValidators')
  .directive('validateMustEqualTo', function () {
    return {
      require: 'ngModel',
      link: function (scope, element, attrs, ngModel) {
        if (_.isUndefined(attrs.formName)) {
          throw 'For this directive to function correctly you need to supply the form-name attribute';
        }

        function isEqualToOther(value) {
          ...omitted...
        }

        ngModel.$validators.notEqualTo = function (modelValue, viewValue) {
          var value = modelValue || viewValue;

          return isEqualToOther(value);
        };
      }
    };
  });

The isEqualToOther method itself does the following:

  • Retrieve the other input form element
  • Throw an error if it cannot be found which again means this directive won't function as intended
  • Retrieve the value from the other input and validate the field if the input has not been touched or the value is empty
  • Compare both values
  • Set the validity of the other field depending on the comparison
  • Return the comparison to (in)validate the field this directive is linked to
'use strict';

angular.module('angularValidators')
  .directive('validateMustEqualTo', function () {
    return {
      require: 'ngModel',
      link: function (scope, element, attrs, ngModel) {
        ...omitted...

        function isEqualToOther(value) {
          var otherInput = scope[attrs.formName][attrs.validateMustEqualTo];
          if (_.isUndefined(otherInput)) {
            throw 'Cannot retrieve the second field to compare with from the scope';
          }

          var otherValue = otherInput.$modelValue || otherInput.$viewValue;
          if (otherInput.$untouched || _.isEmpty(otherValue)) {
            return true;
          }

          var isEqual = (value === otherValue);

          otherInput.$setValidity('notEqualTo', isEqual);

          return isEqual;
        }

        ngModel.$validators.notEqualTo = function (modelValue, viewValue) {
          ...omitted...
        };
      }
    };
  });
Alternative solution

An alternative solution to the validate-must-equal-to directive could be to implement a directive that encapsulates both password fields and has a scoped function that would handle validation using ng-blur on the fields or a $watch on both properties. However, this approach does not use the out-of-the-box validation pipeline and we would thus have to extend the logic in the form button's ng-disabled to allow the user to submit the form.
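For completeness, a wrapper along those lines might look roughly like this (my own sketch of the road not taken; the passwordPair name and its first-field/second-field attributes are made up):

'use strict';

angular.module('angularValidators')
  .directive('passwordPair', function () {
    return {
      link: function (scope, element, attrs) {
        // watch the two model properties named in the attributes and expose a flag on the scope
        scope.$watchGroup([attrs.firstField, attrs.secondField], function (values) {
          scope.passwordsMatch = (values[0] === values[1]);
        });
      }
    };
  });

The submit button's ng-disabled would then need to become something like form.$invalid || !passwordsMatch, which is exactly the extra logic the out-of-the-box pipeline saves us from.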

Conclusion

AngularJS 1.3 introduces a new validation pipeline which is incredibly easy to use. However, when faced with more advanced validation rules it becomes clear that certain features (like the callback mechanism) are lacking, so we had to come up with custom solutions. In this article we've shown you 2 different validation cases which extend the standard pipeline.

Demo application

I've set up a stand-alone demo application which can be cloned from GitHub. This demo includes both validators and Karma tests that cover all the different scenarios. Please feel free to use and modify this code as you feel appropriate.

Because Reading is Fundamental

Coding Horror - Jeff Atwood - Wed, 11/26/2014 - 02:21

Most discussions show a bit of information next to each user:

What message does this send?

  • The only number you can control printed next to your name is post count.
  • Everyone who reads this will see your current post count.
  • The more you post, the bigger that number next to your name gets.

If I have learned anything from the Internet, it is this: be very, very careful when you put a number next to someone's name. Because people will do whatever it takes to make that number go up.

If you don't think deeply about exactly what you're encouraging, why you're encouraging it, and all the things that may happen as a result of that encouragement, you may end up with … something darker. A lot darker.

Printing a post count number next to every user's name implies that the more you post, the better things are. The more you talk, the better the conversations become. Is this the right message to send to everyone in a discussion? More fundamentally, is this even true?

I find that the value of conversations has little to do with how much people are talking. I find that too much talking has a negative effect on conversations. Nobody has time to listen to the resulting massive stream of conversation; they end up just waiting for their turn to pile on and talk, too. The best conversations are with people who spend most of their time listening. The number of times you've posted in a given topic is not a leaderboard; it's a record of failing to communicate.

Consider the difference between a chat room and a discussion. Chat is a never-ending flow of disconnected, stream of consciousness sentences that you can occasionally dip your toes in to get the temperature of the water, and that's about it. Discussion is the process of lobbing paragraphs back and forth that results in an evolution of positions as your mutual understanding becomes more nuanced. We hope.

The Ars Banana Experiment

Ars Technica ran a little experiment in 2011. When they posted Guns at home more likely to be used stupidly than in self defense, embedded in the last sentence of the seventh paragraph of the article was this text:

If you have read this far, please mention Bananas in your comment below. We're pretty sure 90% of the respondants to this story won't even read it first.

The first person to do this is on page 3 of the resulting discussion, comment number 93. Or as helpfully visualized by Brandon Gorrell:

Plenty of talking, but how many people actually read up to paragraph 7 (of 11) of the source article before they rushed to comment on it?

The Slate Experiment

In You Won't Finish This Article, Farhad Manjoo dares us to read to the end.

Only a small number of you are reading all the way through articles on the Web. I’ve long suspected this, because so many smart-alecks jump in to the comments to make points that get mentioned later in the piece.

But most of us won't.

He collected a bunch of analytics data based on real usage to prove his point:

These experiments demonstrate that we don't need to incentivize talking. There's far too much talking already. We badly need to incentivize listening.

And online, listening = reading. That old school program from my childhood was right, so deeply fundamentally right. Reading. Reading Is Fundamental.

Let's say you're interested in World War II. Who would you rather have a discussion with about that? The guy who just skimmed the Wikipedia article, or the gal who read the entirety of The Rise and Fall of the Third Reich?

This emphasis on talking and post count also unnecessarily penalizes lurkers. If you've posted five times in the last 10 years, but you've read every single thing your community has ever written, I can guarantee that you, Mr. or Mrs. Lurker, are a far more important part of that community's culture and social norms than someone who posted 100 times in the last two weeks. Value to a community should be measured as much by how much you've read as by how much you've talked.

So how do we encourage reading, exactly?

You could do crazy stuff like require commenters to enter some fact from the article, or pass a basic quiz about what the article contained, before allowing them to comment on that article. On some sites, I think this would result in a huge improvement in the quality of the comments. It'd add friction to talking, which isn't necessarily a bad thing, but it's a negative, indirect way of forcing reading by denying talking. Not ideal.

I have some better ideas.

  1. Remove interruptions to reading, primarily pagination.

    Here's a radical idea: when you get to the bottom of the page, load the next damn page automatically. Isn't that the most natural thing to want when you reach the end of the page, to read the next one? Is there any time that you've ever been on the Internet reading an article, reached the bottom of page 1, and didn't want to continue reading? Pagination is nothing more than an arbitrary barrier to reading, and it needs to die a horrible death.

    There are sites that go even further here, such as The Daily Beast, which actually loads the next article when you reach the end of the one you are currently reading. Try it out and see what you think. I don't know that I'd go that far (I like to pick the next thing I read, thanks very much), but it's interesting.

  2. Measure read times and display them.

    What I do not measure, I cannot display as a number next to someone's name, and cannot properly encourage. In Discourse we measure how long each post has been visible in the browser for every (registered) user who encounters that post. Read time is a key metric we use to determine who we trust, and which are the best posts that people actually read. If you aren't willing to visit a number of topics and spend time actually listening to us, why should we talk to you – or trust you?

    Forget clicks, forget page loads, measure read time! We've been measuring read times extensively since launch in 2013 and it turns out we're in good company: Medium and Upworthy both recently acknowledged the intrinsic power of this metric.

  3. Give rewards for reading.

    I know, that old saw, gamification, but if you're going to reward someone, do it for the right things and the right reasons. For example, we created a badge for reading to the end of a long 100+ post topic. And our trust levels are based heavily on how often people return and how much they read, and virtually not at all on how much they post.

    To see live reading rewards in action, try this classic New York Times article. There's even a badge for reading half the article!

  4. Update in real time.

    Online we tend to read these conversations as they're being written, as people are engaging in live conversations. So if new content arrives, figure out a way to dynamically rez it in without interrupting people's read position. Preserve the back and forth, real time dynamic of an actual conversation. Show votes and kudos and likes as they arrive. If someone edits their post, bring that in too. All of this goes a long way toward making a stuffy old debate feel like a living, evolving thing versus a long distance email correspondence.

These are strategies I pursued with Discourse, because I believe Reading Is Fundamental. Not just in grade school, but in your life, in my life, in every aspect of online community. To the extent that Discourse can help people learn to be better listeners and better readers – not just more talkative – we are succeeding.

If you want to become a true radical, if you want to have deeper insights and better conversations, spend less time talking and more reading.

Categories: Programming

R: dplyr – Select ‘random’ rows from a data frame

Mark Needham - Wed, 11/26/2014 - 01:01

Frequently I find myself wanting to take a sample of the rows in a data frame where just taking the head isn’t enough.

Let’s say we start with the following data frame:

data = data.frame(
    letter = sample(LETTERS, 50000, replace = TRUE),
    number = sample (1:10, 50000, replace = TRUE)
    )

And we’d like to sample 10 rows to see what it contains. We’ll start by generating 10 random numbers to represent row numbers using the sample function:

> randomRows = sample(1:length(data[,1]), 10, replace=T)
> randomRows
 [1]  8723 18772  4964 36134 27467 31890 16313 12841 49214 15621

We can then pass that list of row numbers into dplyr’s slice function like so:

> data %>% slice(randomRows)
   letter number
1       Z      4
2       F      1
3       Y      6
4       R      6
5       Y      4
6       V     10
7       R      6
8       D      6
9       J      7
10      E      2

If we’re using that throughout our code base then we might want to pull out a function like so:

pickRandomRows = function(df, numberOfRows = 10) {
  df %>% slice(sample(1:nrow(df), numberOfRows, replace = TRUE))
}

And then call it like so:

> data %>% pickRandomRows()
   letter number
1       W      5
2       Y      3
3       E      6
4       Q      8
5       M      9
6       H      9
7       E     10
8       T      2
9       I      5
10      V      4
 
> data %>% pickRandomRows(7)
  letter number
1      V      7
2      N      4
3      W      1
4      N      8
5      G      7
6      V      1
7      N      7
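As an aside, recent versions of dplyr (0.2 onwards, if memory serves) also ship a sample_n function which does this directly, so the helper above may not be needed at all:

> data %>% sample_n(10)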
Categories: Programming

Sky Force 2014 Reimagined for Android TV

Android Developers Blog - Mon, 11/24/2014 - 22:39
By Jamil Moledina, Games Strategic Partnerships Lead, Google Play

In the coming months, we’ll be seeing more media players, like the recently released Nexus Player, and TVs from partners with Android TV built in, hit the market. While there’s plenty of information available about the technical aspects of adapting your app or game to Android TV, it’s also useful to consider design changes to optimize for the living room. That way you can provide lasting engagement for existing fans as well as new players discovering your game in this new setting. Here are three things one developer did, and how you can do them too.

Infinite Dreams is an indie studio out of Poland, co-founded by hardcore game fans Tomasz Kostrzewski and Marek Wyszyński. With Sky Force 2014 TV, they brought their hit arcade style game to Android TV in a particularly clever way. The mobile-based version of Sky Force 2014 reimagined the 2004 classic by introducing stunning 3D visuals, and a free-to-download business model using in-app purchasing and competitive tournaments to increase engagement. In bringing Sky Force 2014 to TV, they found ways to factor in the play style, play sessions, and real-world social context of the living room, while paying homage to the title’s classic arcade heritage. As Wyszyński puts it, “We decided not to take any shortcuts, we wanted to make the game feel like it was designed to be played on TV.”

Orientation

For starters, Sky Force 2014 is played vertically on a smartphone or tablet, also known as portrait mode. In the game, you’re piloting a powerful fighter plane flying up the screen over a scrolling landscape, targeting waves of steampunk enemies coming down at you. You can see far enough up the screen, enabling you to plan your attacks and dodge enemies in advance.

Vertical play on the mobile version

When bringing the game to TV, the quickest approach would have been to preserve that vertical orientation of the gameplay by pillarboxing the field of play.

With Sky Force 2014, Infinite Dreams considered their options, and decided to scale the gameplay horizontally, in landscape mode, and recompose the view and combat elements. You’re still aiming up the screen, but the world below and the enemies coming at you are filling out a much wider field of view. They also completely reworked the UI to be comfortably operated with a gamepad or simple remote. From Wyszyński’s point of view, “We really didn't want to just add support for remote and gamepad on top of what we had because we felt it would not work very well.” This approach gives the play experience a much more immersive field of view, putting you right there in the middle of the action. More information on designing for landscape orientation can be found here.

Multiplayer

Like all mobile game developers building for the TV, Infinite Dreams had to figure out how to adapt touch input onto a controller. Sky Force 2014 TV accepts both remote control and gamepad controller input. Both are well-tuned, and fighter handling is natural and responsive, but Infinite Dreams didn’t stop there. They took the opportunity to add cooperative multiplayer functionality to take advantage of the wider field of view from a TV. In this way, they not only scaled the visuals of the game to the living room, but also factored in that it’s a living room where people play together. Given the extended lateral patterns of advancing enemies, multiplayer strategies emerge, like “divide and conquer,” or “I got your back” for players of different skill levels. More information about adding controller support to your Android game can be found here, handling controller actions here, and mapping each player’s paired controllers here.
Players battle side by side in the Android TV version
Business Model

Infinite Dreams is also experimenting with monetization and extending play session length. The TV version replaces several $1.99 in-app purchases and timers with a try-before-you-buy model which charges $4.99 after playing the first 2 levels for free. We’ve seen this single purchase model prove successful with other arcade action games like Mediocre’s Smash Hit for smartphones and tablets, in which the purchase unlocks checkpoint saves. We’re also seeing strong arcade action games like Vector Unit’s Beach Buggy Racing and Ubisoft’s Hungry Shark Evolution retain their existing in-app purchase models for Android TV. More information on setting up your games for these varied business models can be found here. We’ll be tracking and sharing these variations in business models on Android TV, including variations in premium, as the Android TV platform grows.

Reflecting on the work involved in making these changes, Wyszyński says, “From a technical point of view the process was not really so difficult – it took us about a month of work to incorporate all of the features and we are very happy with the results.” Take a moment to check out Sky Force 2014 TV on a Nexus Player and the other games in the Android TV collection on Google Play, most of which made no design changes and still play well on a TV. Consider your own starting point, take a look at the Android TV section on our developer blog, and build the version of your game that would be most satisfying to players on the couch.

Join the discussion on

+Android Developers
Categories: Programming

CITCON Europe 2014 wrap-up

Xebia Blog - Mon, 11/24/2014 - 19:33

On the 19th and 20th of September CITCON (pronounced "kit-con") took place in Zagreb, Croatia. CITCON is dedicated to continuous integration and testing. It brings together some of the most interesting people of the European testing and continuous integration community. These people also determine the topics of the conference.

They can do this because CITCON is an Open Space conference. If you're not familiar with the concept of Open Space, check out Wikipedia. On Friday evening, attendees can pitch their proposals. Through dot voting and (constant) shuffling of the schedule, the attendees create their conference program.

In this post we'll wrap up a few topics that were discussed.

Polytesting

Peter Zsoldos (@zsepi) went into his most recent brain-spin: polytesting. If I have a set of requirements, is it feasible to apply those requirements at different levels of my application, say, component, integration and UI level? This sounds very appealing because you can perform ATDD at different levels.
This approach is particularly interesting because it has the potential to keep you focused on the required functionality all the way. You'll need good, concrete requirements for this to work.

Microservices

Microservices are a hot topic nowadays. The promise of small, isolated units with clear interfaces is tempting. There are generally two types of architectures that can be applied. The most common one is where there is no central entity, and services communicate to each other directly.

Douglas Squirrel (@douglasquirrel) explained an alternative architecture by using a central pub-sub "database" to which each service is allowed to publish "facts". Douglas deliberately used the term facts to describe single items that are considered true at a specific point in time ("events" is too generic a term).

The second model comes closer to mechanisms such as event sourcing (or even ESBs if you take it to the extreme). One of the advantages of this approach is that, because facts are stored, it's possible to construct new functionality based on existing facts. For example, you could use this functionality in a game to create leaderboards and, at a later stage, create leaderboards per continent, country, or whatever seems fit.

Unit testing

Arjan Molenaar introduced a flaming hot topic this year: "unit testing is a waste". Inspired by recent discussions of DHH, Martin Fowler, and Kent Beck, Arjan tried to find out the opinions of the CITCON crowd. Most of the people contributing to the discussion must have been working in consultancy, because the main conclusion was "It depends".

Whether unit testing is worth the effort mainly depends on the goals that people try to achieve when writing their unit tests. Some people write them from a TDD perspective. They use tests to guide themselves through development cycles, making sure they haven't made little errors. If this helps you, then please keep doing it! If it does not really help, well ...

Other people write unit tests from a regression perspective, or at least maintain them for regression testing. This part caused the most discussion. How useful are unit tests for regression testing purposes? Are you really catching regression if you isolate a single unit?

The growing interest in microservices also sheds new light on this discussion. In the future, when microservices will be widely adopted, we will be working with much smaller codebases. They might be so small and clear that unit tests are no longer required to guide us through the development process.

CI scaling

Another trending topic was scaling CI systems. It was good to see that the ideas we have at Xebia were consistent with the ideas we heard at CITCON. First off, the solution for everything (and world peace, it seems) is microservices. Unfortunately, some of us, for the time being, must deal with monolithic codebases. Luckily there are still options for your growing CI system, even though for now it remains one big chunk of code.

The staged pipeline: you test the things most likely to break first. Basically, you break your test suite up into multiple test suites and run them at separate stages in the pipeline.

But how do you determine what is most likely to break and what to test where? Tests that are most likely to break are those that are closely linked to the code changes, so run them first. Also, determine high-risk areas for your application. These areas can be identified based on trends (in failing tests) or simply through human analysis. To determine where to run the different test suites is mainly a matter of speed versus confidence. You want fast feedback so you don't want to push all your tests to the end of the pipeline. But you also don't want to wait forever before you know you can move on to the next thing.

Beer brewing for process refinement

Who isn't interested in beer brewing? Tom Denley (@scarytom) proposed a session on home brewing and its analogy to software development. Because Arjan is a home brewer himself, this seemed like an obvious session for him.

In addition to Tom explaining the process of brewing, we discussed how we got into brewing. In both cases, the first brew was made by taking a can of hopped malt syrup, adding yeast, and producing a beer from there. For your second beer, you replace the can of syrup with malt extract powder and dark malt (for flavour). At a later stage, you can replace the malt extract with ground malt.

What we basically do is start with the end in mind. If you're starting with continuous delivery, it is considered good practice to do the same: get your application deployed in production as soon as possible and optimise your process from your deployed app back to source code.

Again, it was a good conference with some nice take-aways. Next year's episode will most likely take place in Finland. The year after that... The Netherlands?

Testing cheatsheet

Xebia Blog - Mon, 11/24/2014 - 19:00

Sometimes it is not clear to everybody how unit tests relate to e2e tests. This cheatsheet I created describes in one page:

  1. The different definitions
  2. Different structures of the tests
  3. The importance of unit tests
  4. The importance of e2e tests
  5. External versus internal quality
  6. E2E and unit tests living next to each other

Feel free to download and use it in your project if you feel there is a confusion of tongues between unit and e2e tests.

Download: TestingCheatSheet

 

I Spoke at Oredev This Year

Making the Complex Simple - John Sonmez - Mon, 11/24/2014 - 17:00

I’ve been waiting to put this post up until the videos from my talks at Oredev were put online, but since two of the three are online, and I wasn’t sure if the third was actually going to go up, I decided to go ahead and put up the post now. I don’t speak at a lot of conferences, and ... Read More

The post I Spoke at Oredev This Year appeared first on Simple Programmer.

Categories: Programming

f(by) Minsk 2014

Phil Trelford's Array - Mon, 11/24/2014 - 09:40

This weekend Evelina, Yan and I had the pleasure of speaking at f(by), the first dedicated functional conference in Belarus. It was a short hop by train to Minsk from Vilnius, where we had been attending Build Stuff. Sergey Tihon, of F# Weekly fame, was waiting for us at the train station to guide us to the hotel with a short tour of the city.

The venue was a large converted loft space, by the river and not far from the central station, with great views over the city. The event attracted over 100 developers from across the region, and we were treated to tea and tasty local cakes during the breaks.

FuncBy Group Photo

Evelina was the first of us to speak and got a great response to her talk on Understanding Social Networks with F#.

Evelina at f(by)

The slides and samples are available on Evelina’s github repository.

Next up Yan presented Learn you to tame complex APIs with F# powered DSLs:

Learn you to tame complex APIs with F#-powered DSLs from Yan Cui

My talk was another instalment of F# Eye for the C# Guy.

Unicorns

The talk introduces F# from the perspective of a C# developer using live samples covering syntax, F#/C# interop, unit testing, data access via F# Type Providers and F# to JS with FunScript.

In one example we looked at CO2 emissions using World Bank data (using FSharp.Data) in a line chart (using FSharp.Charting):

[gb;uk;by] => (fun i -> i.``CO2 emissions (kg per 2005 PPP $ of GDP)``)

 

CO2 emissions line chart

Many thanks to Alina for inviting us, and the Minsk F# community for making us feel very welcome.

Categories: Programming

R: dplyr – “Variables not shown”

Mark Needham - Sun, 11/23/2014 - 02:02

I recently ran into a problem where the result of applying some operations to a data frame wasn’t being output the way I wanted.

I started with this data frame:

words = function(numberOfWords, lengthOfWord) {
  w = c(1:numberOfWords)  
  for(i in 1:numberOfWords) {
    w[i] = paste(sample(letters, lengthOfWord, replace=TRUE), collapse = "")
  }
  w
}
 
numberOfRows = 100
df = data.frame(a = sample (1:numberOfRows, 10, replace = TRUE),
                b = sample (1:numberOfRows, 10, replace = TRUE),
                name = words(numberOfRows, 10))

I wanted to group the data frame by a and b and output a comma separated list of the associated names. I started with this:

> df %>% 
    group_by(a,b) %>%
    summarise(n = n(), words = paste(name, collapse = ",")) %>%
    arrange(desc(n)) %>%
    head(5)
 
Source: local data frame [5 x 4]
Groups: a
 
   a  b  n
1 19 90 10
2 24 36 10
3 29 20 10
4 29 80 10
5 62 54 10
Variables not shown: words (chr)

Unfortunately the words column was excluded. I came across this Stack Overflow post, which suggested that the print.tbl_df function was the one responsible for filtering columns.

Browsing the docs I found a couple of ways to override this behaviour:

> df %>% 
    group_by(a,b) %>%
    summarise(n = n(), words = paste(name, collapse = ",")) %>%
    arrange(desc(n)) %>%
    head(5) %>%
    print(width = Inf)

or

> options(dplyr.width = Inf)
> df %>% 
    group_by(a,b) %>%
    summarise(n = n(), words = paste(name, collapse = ",")) %>%
    arrange(desc(n)) %>%
    head(5)

And now we see this output instead:

Source: local data frame [5 x 4]
Groups: a
 
   a  b  n                                                                                                         words
1 19 90 10 dfhtcgymxt,zpemxbpnri,rfmkksuavp,jxaarxzdzd,peydpxjizc,trdzchaxiy,arthnxbaeg,kjbpdvvghm,kpvsddlsua,xmysfcynxw
2 24 36 10 wtokzdfecx,eprsvpsdcp,kzgxtwnqli,jbyuicevrn,klriuenjzu,qzgtmkljoy,bonbhmqfaz,uauoybprrl,rzummfbkbx,icyeorwzxl
3 29 20 10 ebubytlosp,vtligdgvqw,ejlqonhuit,jwidjvtark,kmdzcalblg,qzrlewxcsr,eckfgjnkys,vfdaeqbfqi,rumblliqmn,fvezcdfiaz
4 29 80 10 wputpwgayx,lpawiyhzuh,ufykwguynu,nyqnwjallh,abaxicpixl,uirudflazn,wyynsikwcl,usescualww,bkvsowfaab,gfhyifzepx
5 62 54 10 beuegfzssp,gfmegjtrys,wkubhvnkkk,rkhgprxttb,cwsrzulnpo,hzkvjbiywc,gbmiupnlbw,gffovxwtok,uxadfrjvdn,aojjfhxygs

Much better!

Categories: Programming

Want to preview our new DataGrid for Xamarin.Forms?

Eric.Weblog() - Eric Sink - Fri, 11/21/2014 - 19:00
tl;dr

Zumero.DataGrid is a Xamarin.Forms control for displaying data in rows and columns.

If you would be interested in testing and previewing a trial version of this product, email us:

  • datagrid-testing@zumero.com

Details
  • We plan to submit this to the Xamarin Component Store where (pending Xamarin's approval) it would be a commercial product. Pricing has not been set.

  • You'll receive a file in Xamarin Component format (a .xam file) which you will need to install into your IDE.
    (command line: xamarin-component.exe install ZumeroDataGrid-1.0.0.xam)
    The component file contains the libraries for all platforms, plus documentation and samples.

  • The test component is a trial version. It overlays the contents of some (randomly chosen) grid cells with an orange label that says "Trial". It is otherwise identical to the full version.

  • This component is unrelated to the one I mentioned in a blog entry back on 3 October. That approach was based on drawing. It was not capable of hosting Xamarin.Forms views, so it couldn't do things like in-place editing. This component is entirely different.

  • We are eager to hear feedback of any kind.

Features
  • Support for cell contents to be any Xamarin.Forms.View
  • Scrolling, both horizontal and vertical
  • Optional top frozen header row
  • Optional left frozen column
  • Separate View template for each column
  • View recycling (If you have 1000 rows but only 10 can be visible, DataGrid creates 10 Views and reuses them.)
  • Data row objects (provided to the DataGrid as an IList<object>)
  • Strong integration with the Xamarin.Forms binding mechanism
  • Automatic updates if an ObservableCollection is used for rows and columns
  • Configurable row height (for all rows in the grid)
  • Configurable column widths (can be different for each column)
  • Configurable spacing between rows and columns
  • Works with C#, F#, and XAML
  • Support for iOS, Android, and Windows Phone
  • Full API documentation in MonoDoc
  • Technical support service from Zumero
Screenshots

 

Episode 215: Gang of Four – 20 Years Later

Johannes Thönes talks with Erich Gamma, Ralph Johnson and Richard Helm from the Gang of Four about the 20th anniversary of their book Design Patterns. They discuss the following topics: the definition of a design pattern and each guest’s favorite design pattern; the origins of the book in architecture workshops; the writing of the book […]
Categories: Programming

Google Analytics Demos & Tools

Google Code Blog - Thu, 11/20/2014 - 18:05
As a member of the Google Analytics Developer Relations team, I often hear from our community that they want to do more with GA but don't always know how. They know the basics but want to see full examples and demos that show how things should be built.

Well, we've been listening, and today I'm proud to announce the launch of Google Analytics Demos & Tools, a new website geared toward helping Google Analytics developers tackle the challenges they face most often.

The site aims to make experienced developers more productive (we use it internally all the time) and to show new users what's possible and inspire them to leverage the platform to improve their business through advanced measurement and analysis.

Some highlights of the site include a full-featured Enhanced Ecommerce demo with code samples for both Google Analytics and Google Tag Manager, a new Account Explorer tool to help you quickly find the IDs you need for various Google Analytics services and 3rd party integrations, several examples of easy-to-build custom dashboards, and some old favorites like the Query Explorer.

Google Analytics Demos & Tools not only shows off Google Analytics technologies, it also uses them under the hood. All pages that require authorization use the Embed API to log users in, and usage statistics, including outbound link clicks, authorization status, client-side exceptions, and numerous other user interaction events are measured using analytics.js.
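As a concrete illustration, measuring an outbound link click takes only a couple of lines of analytics.js (a minimal sketch; the a.external selector and the 'Outbound Link' category are my own illustrative names, not necessarily what the site uses):

// minimal sketch: send a Google Analytics event when an outbound link is clicked
var link = document.querySelector('a.external');
link.addEventListener('click', function () {
  ga('send', 'event', 'Outbound Link', 'click', link.href);
});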

Every page that makes use of a Google Analytics technology lists that information in the footer, making it easy for developers to see how all the pieces fit together. In addition, the entire site is open sourced and available on Github, so you can dive in and see exactly how everything works.

Feedback is welcome and appreciated!

Posted By Philip Walton, Developer Programs Engineer
Categories: Programming

How Do You Estimate What You Don’t Know?

Making the Complex Simple - John Sonmez - Thu, 11/20/2014 - 16:00

In this video I answer a question about how to estimate something that you don’t know how to do yet.

The post How Do You Estimate What You Don’t Know? appeared first on Simple Programmer.

Categories: Programming

musiXmatch drives user engagement through innovation

Android Developers Blog - Thu, 11/20/2014 - 15:54

Posted by Leticia Lago, Google Play team

musiXmatch is an app that offers Android users the unique and powerful feature FloatingLyrics. FloatingLyrics pops up a floating window showing synched lyrics as users listen to tracks on their favorite player and music services. It’s achieved through a seamless integration with intents on the platform, something that’s technically possible only on Android.

As a result musiXmatch has seen “a dramatic increase in terms of engagement,” says founder Max Ciociola, “which has been two times more active users and even two times more the average time they spend in the app.”

The ability to deliver lyrics to a range of different devices — such as Chromecast, Android TV, and Android Wear — is creating opportunities for musiXmatch. It’s helping them turn their app into a smart companion for their users and getting them closer to their goal of reaching 100 million people.

In the following video, Max and Android engineer Sebastiano Gottardo talk about the unique capabilities that Android offers to musiXmatch:

To learn about achieving great user engagement and retention and reaching more users through different form factors, be sure to check out these resources:

  • Convert installs to active users — Watch this video to hear from Matteo Vallone, Partner Development Manager for Google Play, about the best practices in engaging and retaining app users using intents, identity, context, and rich notifications as well as delivering a cross-platform user experience.
  • Expanding to new form factors: Tablet, Wear & TV — Watch this panel discussion with Google experts talking about cross-platform opportunities and answering developer questions.
Join the discussion on

+Android Developers
Categories: Programming

Chinese Developers Can Now Offer Paid Applications to Google Play Users in More Than 130 Countries

Android Developers Blog - Thu, 11/20/2014 - 03:07

By Ellie Powers, product manager for Google Play

Google Play is the largest digital store for Android users to discover and purchase their favorite mobile apps and games, and the ecosystem is continuing to grow globally. Over the past year, we’ve expanded the list of countries where app developers can sign up to be merchants on Google Play, totaling 60 countries, including Lebanon, Jordan, Oman, Pakistan, Puerto Rico, Qatar and Venezuela most recently.

As part of that continued effort, we’re excited to announce merchant support in China, enabling local developers to export and sell their apps to Google Play users in more than 130 countries. Chinese developers can now offer both free and paid applications through various monetization models, including in-app purchasing and subscriptions. For revenue generated on Google Play, developers will receive payment to their Chinese bank accounts via USD wire transfers.

If you develop Android apps in China and want to start distributing your apps to a global audience through Google Play, visit play.google.com/apps/publish and register as a developer. If you want to sell apps and in-app products, you'll need to also sign up for a Google Wallet merchant account, which is available on the “Revenue” page in the Google Play Developer Console. After you’ve uploaded your apps, you can set prices in the Developer Console and later receive reports on your revenue. You’ll receive your developer payouts via wire transfer. For more details, please visit our developer help center.

We look forward to continuing to roll out Google Play support to developers in many more countries around the world.

Chinese developers can now offer paid applications to Google Play users in more than 130 countries

Posted by Ellie Powers, Google Play product manager

Google Play is the world's largest app store, where Android users can discover and purchase their favorite mobile applications and games, and this ecosystem is growing rapidly around the world. Over the past year, we have expanded the list of countries where app developers can register as merchants on Google Play to 60, most recently adding Lebanon, Jordan, Oman, Pakistan, Puerto Rico, Qatar and Venezuela.

As part of our continued effort to improve Google Play, we are pleased to announce merchant support in China, enabling Chinese developers to sell their applications to Google Play users in 130 countries. Chinese developers can now offer both free and paid applications through various monetization models, including in-app purchases and subscriptions. Revenue generated on Google Play will be paid out to developers' Chinese bank accounts via USD wire transfer.

If you develop Android applications in China and want to distribute them to a global audience through Google Play, visit play.google.com/apps/publish and set up your Google Play developer account. If you want to sell paid applications or in-app products, you will also need to register for a Google Wallet merchant account, which you can set up from the "Revenue" page in the Google Play Developer Console. After uploading your applications, you can set prices in the Developer Console and will then receive revenue reports; payouts are made by wire transfer.

We will continue to add merchant support for Google Play in more countries, so stay tuned.

For more details, please visit our developer help center.

Join the discussion on

+Android Developers
Categories: Programming

Keeping Your Saved Games in the Cloud

Android Developers Blog - Wed, 11/19/2014 - 19:47

Posted by Todd Kerpelman, Developer Advocate

Saved Games Are the Future!

I think most of us have at least one or two games we play obsessively. Me? I'm a Sky Force 2014 guy. But maybe you're into matching colorful objects, battling monsters, or helping avians with their rage management issues. Either way, there's probably some game where you've spent hours upon hours upgrading your squad, reaching higher and higher levels, or unlocking every piece of bonus content in the game.

Now imagine losing all of that progress when you decide to get a new phone. Or reinstall your game. Yikes!

That's why, when Google Play Games launched, one of the very first features we included was the ability for users to save their game to the cloud using a service called the AppState API. For developers who didn't need an entire server-based infrastructure to run their game, but didn't want to lose players when they upgraded their devices, this feature was a real life-saver.

But many developers wanted even more. With AppState, you were limited to 4 slots of 256k of data each, and for some games, this just wasn't enough. So this past year at Google I/O, we launched an entirely new Saved Games feature, powered by Google Drive. This gave you huge amounts of space (up to 3MB per saved game with unlimited save slots), the ability to save a screenshot and metadata with your saved games, and some nice features like showing your player's saved games directly in the Google Play app.

...But AppState is Yesterday's News

Since the introduction of Saved Games, we've seen enough titles happily using the service and heard enough positive feedback from developers that we're convinced that Saved Games is the better offering and the way to go in the future. With that in mind, we've decided to start deprecating the old cloud save system using AppState and are encouraging everybody who's still using it to switch over to the new Saved Games feature (referred to in our libraries as "Snapshots").

What does this mean for you as a game developer?

If you haven't yet added Saved Games to your game, now would be the perfect time! The holidays are coming up and your players are going to start getting new devices over the next couple of months. Wouldn't it be great if they could take your game's progress with them? Unless, I guess, "not retaining users" is part of your business plan.

If you're already using the new Saved Games / Snapshot system, put your feet up and relax. None of these changes affect you. Okay, now put your feet down, and get back to work. You probably have a seasonal update to work on, don't you?

If you're using the old AppState system, you should start migrating your players' data over to the new Saved Games service. Luckily, it's easy to include both systems in the same game, so you should be able to migrate your users' data without their ever knowing. The process would probably work a little something like this:

  • Enable the new Saved Game service for your game by
    • Adding the Drive.SCOPE_APPFOLDER scope to your list of scopes in your GoogleApiClient.
    • Turning on Saved Games for your game in the Google Play Developer Console.
  • Next, when your app tries to load the user's saved game
    • First see if any saved game exists using the new Saved Games service. If there is, go ahead and use it.
    • Otherwise, grab their saved game from the AppState service.
  • When you save the user's game back to the cloud, save it using the new Saved Games service.
  • And that should be it! The next time your user loads up your game, it will find their saved data in the new Saved Games service, and they'll be all set.
  • We've built a sample app that demonstrates how to perform these steps in your application, so we encourage you to check it out.
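To make the shape of that migration concrete, here is a rough Java sketch of the load path. This is my own illustration rather than code from the post or the official sample; it assumes a connected GoogleApiClient that requested the Drive.SCOPE_APPFOLDER scope, the snapshot name and AppState slot are placeholders, exact Play Games Services signatures of the era may differ slightly, and snapshot conflict handling is omitted.

import java.io.IOException;

import com.google.android.gms.appstate.AppStateManager;
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.common.api.ResultCallback;
import com.google.android.gms.games.Games;
import com.google.android.gms.games.snapshot.Snapshots;

public class SavedGameMigrator {
    private static final String SAVE_NAME = "my_save";  // hypothetical snapshot name
    private static final int APP_STATE_KEY = 0;         // the slot your game used with AppState

    // Try the new Saved Games (Snapshots) service first; fall back to the old AppState data.
    public void loadSavedGame(final GoogleApiClient client) {
        Games.Snapshots.open(client, SAVE_NAME, true).setResultCallback(
                new ResultCallback<Snapshots.OpenSnapshotResult>() {
                    @Override
                    public void onResult(Snapshots.OpenSnapshotResult result) {
                        try {
                            byte[] data = result.getSnapshot().getSnapshotContents().readFully();
                            if (data != null && data.length > 0) {
                                onSaveLoaded(data);  // a snapshot exists: use it
                                return;
                            }
                        } catch (IOException e) {
                            // no usable snapshot; fall through to the AppState fallback
                        }
                        loadFromAppState(client);
                    }
                });
    }

    // Read the legacy AppState slot; next time the player saves, write only to Snapshots.
    private void loadFromAppState(GoogleApiClient client) {
        AppStateManager.load(client, APP_STATE_KEY).setResultCallback(
                new ResultCallback<AppStateManager.StateResult>() {
                    @Override
                    public void onResult(AppStateManager.StateResult result) {
                        AppStateManager.StateLoadedResult loaded = result.getLoadedResult();
                        if (loaded != null) {
                            onSaveLoaded(loaded.getLocalData());
                        }
                    }
                });
    }

    private void onSaveLoaded(byte[] data) {
        // Deserialize the bytes into your game state here.
    }
}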

In a few months, we will be modifying the old AppState service to be read-only. You'll still be able to read your user's old cloud save games and transfer them to the new Saved Games service, but you'll no longer be able to save games using the old service. We are evaluating early Q2 of 2015 to make this change, which should give you enough time to push your "start using Saved Games" update to the world.

If you want to find out more about Saved Games and how they work, feel free to review our documentation, our sample applications, or our Game On! videos. And we look forward to many more hours of gaming, no matter how many times we switch devices.

Join the discussion on

+Android Developers
Categories: Programming

Coding Android TV games is easy as pie

Android Developers Blog - Wed, 11/19/2014 - 19:28
Posted by Alex Ames, Fun Propulsion Labs at Google*

We’re pleased to announce Pie Noon, a simple game created to demonstrate multi-player support on the Nexus Player, an Android TV device. Pie Noon is an open source, cross-platform game written in C++ which supports:

  • Up to 4 players using Bluetooth controllers.
  • Touch controls.
  • Google Play Games Services sign-in and leaderboards.
  • Other Android devices (you can play on your phone or tablet in single-player mode, or against human adversaries using Bluetooth controllers).

Pie Noon serves as a demonstration of how to use the SDL library in Android games as well as Google technologies like Flatbuffers, Mathfu, fplutil, and WebP.

  • Flatbuffers provides efficient serialization of the data loaded at run time for quick loading times. (Examples: schema files and loading compiled Flatbuffers)
  • Mathfu drives the rendering code, particle effects, scene layout, and more, allowing for efficient mathematical operations optimized with SIMD. (Example: particle system)
  • fplutil streamlines the build process for Android, making iteration faster and easier. Our Android build script makes use of it to easily compile and run on Android devices.
  • WebP compresses image assets more efficiently than jpg or png file formats, allowing for smaller APK sizes.

You can download the game in the Play Store and the latest open source release from our GitHub page. We invite you to learn from the code to see how you can implement these libraries and utilities in your own Android games. Take advantage of our discussion list if you have any questions, and don’t forget to throw a few pies while you’re at it!

* Fun Propulsion Labs is a team within Google that's dedicated to advancing gaming on Android and other platforms.

Join the discussion on

+Android Developers
Categories: Programming

Launchpad Online for developers: Getting Started with Google APIs

Google Code Blog - Wed, 11/19/2014 - 19:00
Google APIs give you the power to build rich integrations with our most popular applications, including Google Maps, YouTube, Gmail, Google Drive and more. If you are new to our developer features, we want to give you a quick jumpstart on how to use them effectively and build amazing things.

As you saw last week, we’re excited to partner with the Startup Launch team to create “Launchpad Online”, a new video series for a global audience. It joins their existing cadre, which already includes the established “Root Access” and “How I:” series, as well as a “Global Spotlight” series on some of the startups that have benefitted from the Startup Launch program. The new series focuses on developers who are new to one or more Google APIs. Each episode helps get new users off the ground with the basics, shows experienced developers more advanced techniques, or introduces new API features, all in as few lines of code as possible.

The series assumes you have some experience developing software and are comfortable using languages like Javascript and Python. Be inspired by a short working example that you can drop into your own app and customize as desired, letting you “take off” with that great idea. Hopefully you had a chance to view our intro video last week and are ready for more!

Once you’ve got your developers ready to get started, learn how to use the Google Developers Console to set up a project (see video to the right), and walk through common security code required for your app to access Google APIs. When you’ve got this “foundation” under your belt, you’ll be ready to build your first app… listing your files on Google Drive. You’ll learn something new and see some code that makes it happen in each future episode… we look forward to having you join us soon on Launchpad Online!

Posted by Wesley Chun, Developer Advocate, Google

+Wesley Chun (@wescpy) is an engineer, author of the bestselling Core Python books, and a Developer Advocate at Google, specializing in Google Drive, Google Apps Script, Gmail, Google Apps, cloud computing, developer training, and academia. He loves traveling worldwide to meet Google users everywhere, whether at a developers conference, user group meeting, or on a university campus!
Categories: Programming