
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Programming

Shell: Create a comma separated string

Mark Needham - Fri, 06/23/2017 - 13:26

I recently needed to generate a string with comma separated values, based on iterating a range of numbers.

e.g. for n = 3 we should get the following output:

foo-0,foo-1,foo-2

I only had the shell available to me so I couldn’t shell out into Python or Ruby for example. That means it’s bash scripting time!

If we want to iterate a range of numbers and print them out on the screen we can write the following code:

n=3
for i in $(seq 0 $(($n > 0? $n-1: 0))); do 
  echo "foo-$i"
done

foo-0
foo-1
foo-2

Combining them into a string is a bit more tricky, but luckily I found a great blog post by Andreas Haupt which shows what to do. Andreas is solving a more complicated problem than me but these are the bits of code that we need from the post.

n=3
combined=""

for i in $(seq 0 $(($n > 0? $n-1: 0))); do 
  token="foo-$i"
  combined="${combined}${combined:+,}$token"
done
echo $combined

foo-0,foo-1,foo-2

This won’t work if you set n<0 but that’s ok for me! I’ll let Andreas explain how it works:

  • ${combined:+,} will return either a comma (if combined exists and is set) or nothing at all.
  • In the first iteration of the loop combined is not yet set, so nothing is output.
  • In subsequent iterations combined is set, so a comma is output.

We can see it in action by printing out the value of $combined after each iteration of the loop:

n=3
combined=""

for i in $(seq 0 $(($n > 0 ? $n-1: 0))); do 
  token="foo-$i"
  combined="${combined}${combined:+,}$token"
  echo $combined
done

foo-0
foo-0,foo-1
foo-0,foo-1,foo-2

Looks good to me!
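
As an aside, if GNU seq is available, its format (-f) and separator (-s) flags can build the same string in one call. This is a sketch of an alternative, not part of the original post's approach:

n=3
seq -f "foo-%g" -s "," 0 $(($n > 0 ? $n-1 : 0))

foo-0,foo-1,foo-2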

The post Shell: Create a comma separated string appeared first on Mark Needham.

Categories: Programming

What’s new in WebView security

Android Developers Blog - Thu, 06/22/2017 - 22:29
Posted by Xiaowen Xin and Renu Chaudhary, Android Security Team

The processing of external and untrusted content is often one of the most important functions of an app. A newsreader shows the top news articles and a shopping app displays the catalog of items for sale. This comes with associated risks as the processing of untrusted content is also one of the main ways that an attacker can compromise your app, i.e. by passing you malformed content.

Many apps handle untrusted content using WebView, and we've made many improvements in Android over the years to protect it and your app against compromise. With Android Lollipop, we started delivering WebView as an independent APK, updated every six weeks from the Play store, so that we can get important fixes to users quickly. With the newest WebView, we've added a couple more important security enhancements.

Isolating the renderer process in Android O

Starting with Android O, WebView will have the renderer running in an isolated process separate from the host app, taking advantage of the isolation between processes provided by Android that has been available for other applications.

Similar to Chrome, WebView now provides two levels of isolation:

  1. The rendering engine has been split into a separate process. This insulates the host app from bugs or crashes in the renderer process and makes it harder for a malicious website that can exploit the renderer to then exploit the host app.
  2. To further contain it, the renderer process is run within an isolated process sandbox that restricts it to a limited set of resources. For example, the rendering engine cannot write to disk or talk to the network on its own. It is also bound to the same seccomp filter (blogpost on seccomp is coming soon) as used by Chrome on Android. The seccomp filter reduces the number of system calls the renderer process can access and also restricts the allowed arguments to the system calls.
Incorporating Safe Browsing

The newest version of WebView incorporates Google's Safe Browsing protections to detect and warn users about potentially dangerous sites. When correctly configured, WebView checks URLs against Safe Browsing's malware and phishing database and displays a warning message before users visit a dangerous site. On Chrome, this helpful information is displayed more than 250 million times a month, and now it's available in WebView on Android.

Enabling Safe Browsing

To enable Safe Browsing for all WebViews in your app, add in a manifest tag:

<manifest>
     <meta-data android:name="android.webkit.WebView.EnableSafeBrowsing"
                android:value="true" />
      . . .
     <application> . . . </application>
</manifest>

Because WebView is distributed as a separate APK, Safe Browsing for WebView is available today for devices running Android 5.0 and above. With just one added line in your manifest, you can update your app and improve security for most of your users immediately.

Categories: Programming

Avoiding deeply nested component trees

Xebia Blog - Thu, 06/22/2017 - 09:52

By passing child components down instead of data you can avoid passing data down through many levels of components. It also makes your components more reusable. Even multibrand components become much easier to build. Overall it is a pattern which improves your frontend code a lot! The Problem When building frontends you will pass data […]

The post Avoiding deeply nested component trees appeared first on Xebia Blog.

Ending support for Android Market on Android 2.1 and lower

Android Developers Blog - Tue, 06/20/2017 - 19:00
Posted by Maximilian Ruppaner, Software Engineer on Google Play

On June 30, 2017, Google will be ending support for the Android Market app on Android 2.1 Eclair and older devices. When this change happens, users on these devices will no longer be able to access, or install other apps from, the Android Market. The change will happen without a notification on the device, due to technical restrictions in the original Android Market app.

It has been 7 years since Android 2.1 Eclair launched. Most app developers are no longer supporting these Android versions in their apps given these devices now account for only a small number of installs.

We will still be supporting later versions of Android Market for as long as feasible. Google Play, the replacement for Android Market, is available on Android 2.2 and above.

Categories: Programming

Brotli Compression in Google Display Ads

Google Code Blog - Tue, 06/20/2017 - 17:00
Posted by Michael Burns, Software Engineer, Publisher Tagging & Ads Latency Team

Our goal is to help publishers monetize their content and build sustainable businesses through advertising products that allow sites to load as fast as possible to minimize impact to user experience.

Almost two years ago, our compression team announced a new compression algorithm called Brotli. Today, we are happy to announce that the Brotli compression algorithm is now being used to compress Google Display Ads whenever possible. In our experiments, we see data savings of 15% in aggregate over standard gzip compression, and in some instances, a savings of over 40%! This reduces the amount of data sent to end users by tens of thousands of gigabytes every day! This also results in faster page loads and less battery consumption.

We hope results like this will encourage wider adoption and will advance web standards such as Brotli compression.
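
If you're curious whether a given response is actually served with Brotli, one quick check is to request it while advertising br in the Accept-Encoding header and look at the Content-Encoding response header. A rough sketch with curl; the URL here is just a placeholder, and the server will only reply with Brotli if it supports it for that resource:

# Fetch the resource, discard the body, and print only the Content-Encoding header
curl -s -o /dev/null -D - -H "Accept-Encoding: br, gzip" "https://example.com/ad-script.js" | grep -i "^content-encoding"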

Categories: Programming

SE-Radio Episode 294: Asaf Yigal on Machine Learning in Log Analysis

Asaf Yigal talks with SE Radio’s Edaena Salinas about machine learning in log analysis. The discussion starts with an overview of the structure of logs and what information they can contain. Asaf discusses what the log analysis process looks like without machine learning, and the role of humans in this, before moving on […]
Categories: Programming

Semantic Time support now available on the Awareness APIs

Android Developers Blog - Fri, 06/16/2017 - 18:00
Posted by Ritesh Nayak M, Product Manager

Last year at I/O we launched the Awareness API, a simple yet powerful API that let developers use signals such as Location, Weather, Time and User Activity to build contextually relevant app experiences.

Available via Google Play services, the Awareness API offers two ways to take advantage of context signals within your app. The Snapshot API lets your app request information about the user's current context, while the Fence API lets your app react when the user's context changes and matches a certain set of conditions. For example, "tell me whenever the user is walking and their headphones are plugged in".

Until now, you could specify a time fence on the Awareness APIs but were restricted to using absolute/canonical representations of time. Based on developer feedback, we realized that the flexibility of the API with regard to building time fences did not support the higher level abstractions people use when they think and talk about time. "This weekend", "on the next holiday", "after sunset", are all very common and colloquial ways of expressing time. That's why we're adding Semantic time support to these APIs starting today.

For example, if you were building a fitness app that prompts users every morning to start their routine, or a reading app that turns on night mode after dusk, you would have to query a third-party API for sunrise/sunset information at the user's location and then write an Awareness fence with those canonical time values. With our latest update, you can use our TIME_INSTANT_SUNRISE and TIME_INSTANT_SUNSET constants and let the platform manage all the complexity for you.

Let's look at an example. Suppose you're building a fitness app which prompts users on Tuesdays and Thursdays around sunrise to begin their morning workout. You can set up this trigger using the following lines of code.

// A sun-state-based fence that is TRUE only on Tuesday and Thursday during Sunrise 
AwarenessFence.and(
    TimeFence.aroundTimeInstant(TimeFence.TIME_INSTANT_SUNRISE,
            -10 * ONE_MINUTE_MILLIS, 5 * ONE_MINUTE_MILLIS),
    AwarenessFence.or(
        TimeFence.inIntervalOfDay(TimeFence.DAY_OF_WEEK_TUESDAY,
                0, ONE_DAY_MILLIS),
        TimeFence.inIntervalOfDay(TimeFence.DAY_OF_WEEK_THURSDAY,
                0, ONE_DAY_MILLIS)));

One of our favorite semantic time features is public holidays. Every country, and the regions within it, has different holidays. Assume you're building a local hiking & adventure app that wants to show users activities they can enjoy on a holiday that falls on a Friday or a Monday. You can use a combination of day and holiday flags to identify this state for all your users around the world. You can do this with just 3 lines of code and have it work in any part of the world.

// A local-time fence that is TRUE only on public holidays in the
// device locale that fall on Fridays or Mondays.
AwarenessFence.and(
    TimeFence.inTimeInterval(TimeFence.TIME_INTERVAL_HOLIDAY),
    AwarenessFence.or(
        TimeFence.inIntervalOfDay(TimeFence.DAY_OF_WEEK_FRIDAY,
                9 * ONE_HOUR_MILLIS, 11 * ONE_HOUR_MILLIS),
        TimeFence.inIntervalOfDay(TimeFence.DAY_OF_WEEK_MONDAY,
                9 * ONE_HOUR_MILLIS, 11 * ONE_HOUR_MILLIS)));

In both example cases, Awareness does the heavy lifting of localizing time and holidays based on the device locale settings.

We're excited to see what problems you'll solve using this powerful API. Please join our mailing list to get updates about this and other Context APIs at Google.

Categories: Programming

scikit-learn: Random forests – Feature Importance

Mark Needham - Fri, 06/16/2017 - 06:55

As I mentioned in a blog post a couple of weeks ago, I’ve been playing around with the Kaggle House Prices competition and the most recent thing I tried was training a random forest regressor.

Unfortunately, although it gave me better results locally it got a worse score on the unseen data, which I figured meant I’d overfitted the model.

I wasn’t really sure how to work out if that theory was true or not, but by chance I was reading Chris Albon’s blog and found a post where he explains how to inspect the importance of every feature in a random forest. Just what I needed!

Stealing from Chris’ post I wrote the following code to work out the feature importance for my dataset:

Prerequisites

import numpy as np
import pandas as pd

from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# We'll use this library to make the display pretty
from tabulate import tabulate

Load Data

train = pd.read_csv('train.csv')

# the model can only handle numeric values so filter out the rest
data = train.select_dtypes(include=[np.number]).interpolate().dropna()

Split train/test sets

y = train.SalePrice
X = data.drop(["SalePrice", "Id"], axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, test_size=.33)

Train model

clf = RandomForestRegressor(n_jobs=2, n_estimators=1000)
model = clf.fit(X_train, y_train)

Feature Importance

headers = ["name", "score"]
values = sorted(zip(X_train.columns, model.feature_importances_), key=lambda x: x[1] * -1)
print(tabulate(values, headers, tablefmt="plain"))
name                 score
OverallQual    0.553829
GrLivArea      0.131
BsmtFinSF1     0.0374779
TotalBsmtSF    0.0372076
1stFlrSF       0.0321814
GarageCars     0.0226189
GarageArea     0.0215719
LotArea        0.0214979
YearBuilt      0.0184556
2ndFlrSF       0.0127248
YearRemodAdd   0.0126581
WoodDeckSF     0.0108077
OpenPorchSF    0.00945239
LotFrontage    0.00873811
TotRmsAbvGrd   0.00803121
GarageYrBlt    0.00760442
BsmtUnfSF      0.00715158
MasVnrArea     0.00680341
ScreenPorch    0.00618797
Fireplaces     0.00521741
OverallCond    0.00487722
MoSold         0.00461165
MSSubClass     0.00458496
BedroomAbvGr   0.00253031
FullBath       0.0024245
YrSold         0.00211638
HalfBath       0.0014954
KitchenAbvGr   0.00140786
BsmtFullBath   0.00137335
BsmtFinSF2     0.00107147
EnclosedPorch  0.000951266
3SsnPorch      0.000501238
PoolArea       0.000261668
LowQualFinSF   0.000241304
BsmtHalfBath   0.000179506
MiscVal        0.000154799

So OverallQual is quite a good predictor but then there’s a steep fall to GrLivArea before things really tail off after WoodDeckSF.

I think this is telling us that a lot of these features aren’t useful at all and can be removed from the model. There are also a bunch of categorical/factor variables that have been stripped out of the model but might be predictive of the house price.

These are the next things I’m going to explore:

  • Make the categorical variables numeric (perhaps by using one hot encoding for some of them)
  • Remove the most predictive features and build a model that only uses the other features

The post scikit-learn: Random forests – Feature Importance appeared first on Mark Needham.

Categories: Programming

Android Things Developer Preview 4.1

Android Developers Blog - Thu, 06/15/2017 - 22:41
Posted by Wayne Piekarski, Developer Advocate for IoT

Today, we're releasing a new Developer Preview 4.1 (DP4.1) of Android Things, with updates for new supported hardware and bug fixes to the platform. Android Things is Google's platform to enable Android Developers to create Internet of Things (IoT) devices, and seamlessly scale from prototype to production.

New hardware

A new Pico i.MX6UL revision B board has been released, which supports many common external peripherals from partners such as Adafruit and Pimoroni. There were some prototype Pico i.MX6UL boards made available to some early beta testers, and these are not compatible with DP4.1.

Improvements

DP4.1 also includes some performance improvements since DP4, such as boot time optimizations that improve the startup time of i.MX7D based hardware. This Developer Preview also includes a version of Google Play Services, specifically optimized for IoT devices. This new IoT variant is a lot smaller and optimized for use with Android Things, and requires the use of play-services 11.0.0 or later in your build.gradle. For more information about the supported features in the IoT variant of Google Play Services, see the information page.

Google I/O

Android Things had a large presence at Google I/O this year, with 7 talks covering different aspects of Android Things for developers, and these are available as videos in a playlist for those who could not attend:

  • What’s New In Google’s IoT Platform? Ubiquitous Computing at Google
  • Bringing Device Production to Everyone With Android Things
  • From Prototype to Production Devices with Android Things
  • Developing for Android Things Using Android Studio
  • Security for IoT on Android Things
  • Using Google Cloud and TensorFlow on Android Things
  • Building for Enterprise IoT Using Android Things and Google Cloud Platform

Google I/O also had a codelab area, where attendees could sit down and try out Android Things development with some simple guided codelabs. These codelabs are available for anyone to try at https://codelabs.developers.google.com/?cat=IoT

Feedback

Thank you to all the developers who submitted feedback for the previous developer previews. Please continue sending us your feedback by filing bug reports and feature requests, and asking any questions on stackoverflow. To download images for DP4.1, visit the Android Things download page and find the changes in the release notes. You can also join Google's IoT Developers Community on Google+, a great resource to get updates and discuss ideas, with over 5,600 members.

Categories: Programming

Welcome to your New Home on Android TV

Android Developers Blog - Thu, 06/15/2017 - 18:43
Posted by Paul Saxman, Android Devices and Media Developer Relations Lead

Android TV brings rich app experiences and entertainment to the biggest screen in your house, and with Android O, we’re making it even easier for users to access content from their favorite apps. We’ve built a new, content-centric home screen experience for Android TV, and we're bringing the Google Assistant to the platform as well. These features put content that users want to access a few clicks, or spoken words, away.

The New Android TV Home Screen

The new Android TV home screen organizes video content into channels and programs in a way that’s familiar to TV viewers. Each Android TV app can publish multiple channels, which are represented as rows of programs on the home screen. Apps add relevant programs on each channel, and update these programs and channels as users access content or when new content is available. To help engage users, programs can include a video preview, which is automatically played when a user focuses on a program. Users can configure which channels they wish to see on the home screen, and the ordering of channels, so the themes and shows they’re interested in are quick and easy to access.

In addition to channels for your app, the top of the new Android TV home screen includes a quick launch bar for users' favorite apps, and a special Watch Next channel. This channel contains programs based on the viewing habits of the user.

The APIs for creating and maintaining channels and programs are part of the TvProvider APIs, which are distributed as an Android Support Library module with Android O. To get started using these APIs, visit the Android O Developer Preview site for an overview, and try out the Android TV Channels and Programs codelab for a first-hand experience building an Android TV app for Android O.

Later this year, Nexus Players will receive the new Android TV home experience as an OTA update. If you wish to build and test apps for the new interface today, however, you can use the Android TV emulator or Nexus Player device images that are part of the latest Android O Developer Preview.

The Google Assistant on Android TV

The Google Assistant on Android TV, coming later this year, will allow users to quickly find and access content using their voice. Because the Assistant is context-aware, it can help users narrow down what content to play. Users will also be able to access the Assistant to control playback, even while a video or music is playing. And since the Assistant can control compatible smart home devices, a simple voice request can dim the lights to create an ideal movie viewing environment. When the Google Assistant comes to Android TV, it will launch in the US on Android devices running M, N, and O.

We're looking forward to seeing how developers take advantage of the new Android TV home screen. We welcome feedback, so please visit the Android TV Developer Community on G+ to share your thoughts and ideas!

Categories: Programming

Reduce friction with the new Location APIs

Android Developers Blog - Wed, 06/14/2017 - 19:16
Posted by Aaron Stacy, Software Engineer, Google Play services

The 11.0.0 release of the Google Play services SDK includes a new way to access LocationServices. The new APIs do not require your app to manually manage a connection to Google Play services through a GoogleApiClient. This reduces boilerplate and common pitfalls in your app.

Read more below, or head straight to the updated location samples on GitHub.

Why not use GoogleApiClient?

The LocationServices APIs allow you to access device location, set up geofences, prompt the user to enable location on the device and more. In order to access these services, the app must connect to Google Play services, which can involve error-prone connection logic. For example, can you spot the crash in the app below?

Note: we'll assume our app has the ACCESS_FINE_LOCATION permission, which is required to get the user's exact location using the LocationServices APIs.

public class MainActivity extends AppCompatActivity implements
        GoogleApiClient.OnConnectionFailedListener {

  @Override
  public void onCreate(@Nullable Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    GoogleApiClient client = new GoogleApiClient.Builder(this)
        .enableAutoManage(this, this)
        .addApi(LocationServices.API)
        .build();
    client.connect();

    PendingResult result = 
         LocationServices.FusedLocationApi.requestLocationUpdates(
                 client, LocationRequest.create(), pendingIntent);

    result.setResultCallback(new ResultCallback() {
      @Override
      public void onResult(@NonNull Status status) {
        Log.d(TAG, "Result: " + status.getStatusMessage());
      }
    });
  }

  // ...
}

If you pointed to the requestLocationUpdates() call, you're right! That call throws an IllegalStateException, since the GoogleApiClient has not yet connected. The call to connect() is asynchronous.

While the code above looks like it should work, it's missing a ConnectionCallbacks argument to the GoogleApiClient builder. The call to request location updates should only be made after the onConnected callback has fired:

public class MainActivity extends AppCompatActivity implements 
        GoogleApiClient.OnConnectionFailedListener,
        GoogleApiClient.ConnectionCallbacks {

  private GoogleApiClient client;

  @Override
  protected void onCreate(@Nullable Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    client = new GoogleApiClient.Builder(this)
        .enableAutoManage(this, this)
        .addApi(LocationServices.API)
        .addConnectionCallbacks(this)
        .build();

    client.connect();
  }

  @Override
  public void onConnected(@Nullable Bundle bundle) {
    PendingResult result = 
            LocationServices.FusedLocationApi.requestLocationUpdates(
                    client, LocationRequest.create(), pendingIntent);
    
    result.setResultCallback(new ResultCallback() {
      @Override
      public void onResult(@NonNull Status status) {
        Log.d(TAG, "Result: " + status.getStatusMessage());
      }
    });
  }

  // ...
}

Now the code works, but it's not ideal for a few reasons:

  • It would be hard to refactor into shared classes if, for instance, you wanted to access Location Services in multiple activities.
  • The app connects optimistically in onCreate even if Location Services are not needed until later (for example, after user input).
  • It does not handle the case where the app fails to connect to Google Play services.
  • There is a lot of boilerplate connection logic before getting started with location updates.
A better developer experience

The new LocationServices APIs are much simpler and will make your code less error prone. The connection logic is handled automatically, and you only need to attach a single completion listener:

public class MainActivity extends AppCompatActivity {

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    FusedLocationProviderClient client =
            LocationServices.getFusedLocationProviderClient(this);

    client.requestLocationUpdates(LocationRequest.create(), pendingIntent)
        .addOnCompleteListener(new OnCompleteListener() {
          @Override
          public void onComplete(@NonNull Task task) {
            Log.d("MainActivity", "Result: " + task.getResult());
          }
        });
  }
}

The new API immediately improves the code in a few ways:

  • The API calls automatically wait for the service connection to be established, which removes the need to wait for onConnected before making requests.
  • It uses the Task API which makes it easier to compose asynchronous operations.
  • The code is self-contained and could easily be moved into a shared utility class or similar.
  • You don't need to understand the underlying connection process to start coding.
What happened to all of the callbacks?

The new API will automatically resolve certain connection failures for you, so you don't need to write code for things like prompting the user to update Google Play services. Rather than exposing connection failures globally in the onConnectionFailed method, connection problems will fail the Task with an ApiException:

    client.requestLocationUpdates(LocationRequest.create(), pendingIntent)
        .addOnFailureListener(new OnFailureListener() {
          @Override
          public void onFailure(@NonNull Exception e) {
            if (e instanceof ApiException) {
              Log.w(TAG, ((ApiException) e).getStatusMessage());
            } else {
              Log.w(TAG, e.getMessage());
            }
          }
        });
Try it for yourself

Try the new LocationServices APIs out for yourself in your own app or head over to the android-play-location samples on GitHub and see more examples of how the new clients reduce boilerplate and simplify logic.

Categories: Programming

Kubernetes: Which node is a pod on?

Mark Needham - Wed, 06/14/2017 - 09:49

When running Kubernetes on a cloud provider, rather than locally using minikube, it’s useful to know which node a pod is running on.

The normal command to list pods doesn’t contain this information:

$ kubectl get pod
NAME           READY     STATUS    RESTARTS   AGE       
neo4j-core-0   1/1       Running   0          6m        
neo4j-core-1   1/1       Running   0          6m        
neo4j-core-2   1/1       Running   0          2m        

I spent a while searching for a command that I could use before I came across Ta-Ching Chen’s blog post while looking for something else.

Ta-Ching points out that we just need to add the flag -o wide to our original command to get the information we require:

$ kubectl get pod -o wide
NAME           READY     STATUS    RESTARTS   AGE       IP           NODE
neo4j-core-0   1/1       Running   0          6m        10.32.3.6    gke-neo4j-cluster-default-pool-ded394fa-0kpw
neo4j-core-1   1/1       Running   0          6m        10.32.3.7    gke-neo4j-cluster-default-pool-ded394fa-0kpw
neo4j-core-2   1/1       Running   0          2m        10.32.0.10   gke-neo4j-cluster-default-pool-ded394fa-kp68

Easy!
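
If you only care about the pod-to-node mapping, kubectl can also print just those columns, or return the node for a single pod via jsonpath. A small sketch reusing the pods from the example above:

$ kubectl get pod -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName
NAME           NODE
neo4j-core-0   gke-neo4j-cluster-default-pool-ded394fa-0kpw
neo4j-core-1   gke-neo4j-cluster-default-pool-ded394fa-0kpw
neo4j-core-2   gke-neo4j-cluster-default-pool-ded394fa-kp68

$ kubectl get pod neo4j-core-0 -o jsonpath='{.spec.nodeName}'
gke-neo4j-cluster-default-pool-ded394fa-0kpw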

The post Kubernetes: Which node is a pod on? appeared first on Mark Needham.

Categories: Programming

Recognizing Android Excellence on Google Play

Android Developers Blog - Tue, 06/13/2017 - 17:00
Posted by Kacey Fahey, Developer Marketing, Google Play

Every day, developers around the world are hard at work creating high quality apps and games on Android. As developers strive to deliver amazing experiences for an ever-growing, diverse user base, we've seen a significant increase in the level of polish and quality of apps and games on Google Play.

As part of our efforts to recognize this content on the Play Store, today we're launching Android Excellence. The new collections will showcase apps and games that deliver incredible user experiences on Android, use many of our best practices, and have great design, technical performance, localization, and device optimization.

Android Excellence collections will refresh quarterly and can be found within the revamped Editors' Choice section of the Play Store – which includes app and game reviews curated by our editorial team.

Congrats to our first group of Android Excellence apps and games!

Android Excellence Apps

  • AliExpress by Alibaba Mobile
  • B&H Photo Video by B&H Photo Video
  • Citymapper by Citymapper Limited
  • Drivvo by Drivvo
  • drupe by drupe
  • Evernote by Evernote Corporation
  • HotelTonight by HotelTonight
  • Kitchen Stories by Kitchen Stories
  • Komoot by komoot GmbH
  • Lifesum by Lifesum
  • Memrise by Memrise
  • Pocket by Read It Later
  • Runtastic Running & Fitness by Runtastic
  • Skyscanner by Skyscanner Ltd
  • Sleep as Android by Urbandroid Team
  • Vivino by Vivino

Android Excellence Games

  • After the End: Forsaken Destiny by NEXON M Inc.
  • CATS: Crash Arena Turbo Stars by ZeptoLab
  • Golf Clash by Playdemic
  • Hitman GO by Square Enix Ltd
  • Horizon Chase by Aquiris Game Studio S.A
  • Kill Shot Bravo by Hothead Games
  • Lineage Red Knights by NCSOFT Corporation
  • Nonstop Knight by flaregames
  • PAC-MAN 256 - Endless Maze by Bandai Namco Entertainment Europe
  • Pictionary™ by Etermax
  • Reigns by DevolverDigital
  • Riptide GP: Renegade by Vector Unit
  • Star Wars™: Galaxy of Heroes by Electronic Arts
  • Titan Brawl by Omnidrone
  • Toca Blocks by Toca Boca
  • Transformers: Forged to Fight by Kabam

Stay up-to-date on best practices to succeed on Play and get the latest news and videos with the new beta version of our Playbook app for developers.

Categories: Programming

Introducing Team Drives for developers

Google Code Blog - Mon, 06/12/2017 - 18:00
Originally posted by Hodie Meyers, Product Manager, Google Drive, and Wesley Chun (@wescpy), Developer Advocate, G Suite on the G Suite Developers Blog

Enterprises are always looking for ways to operate more efficiently, and equipping developers with the right tools can make a difference. We launched Team Drives this year to bring the best of what users love about Drive to enterprise teams. We also updated the Google Drive API, so that developers can leverage Team Drives in the apps they build.

In this latest G Suite Dev Show video, we cover how you can leverage the functionality of Team Drives in your apps. The good news is you don't have to learn a completely new API—Team Drives features are built into the Drive API so you can build on what you already know. Check it out:

By the end of this video, you'll be familiar with four basic operations to help you build Team Drives functionality right into your apps:

  1. How to create Team Drives (see the sketch after this list)
  2. How to add members/users to your Team Drives
  3. How to create folders in Team Drives (just like creating a regular Drive folder)
  4. How to upload/import files to Team Drives folders (just like uploading files to regular folders)
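
As a rough sketch of the first operation, a Team Drive can be created with a plain HTTPS call to the Drive API v3 teamdrives collection. The access token, requestId, and drive name below are placeholders chosen for illustration, and the call assumes the token was issued with an appropriate Drive scope:

# requestId is an idempotency key you generate; retrying with the same id
# should not create a duplicate Team Drive.
curl -X POST "https://www.googleapis.com/drive/v3/teamdrives?requestId=unique-request-id-123" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "Project Resources"}'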

Want to explore the code further? Check out the deep dive blog post. In all, the Drive API can help a variety of developers create solutions that work with both Google Drive and Team Drives. Whether you're an Independent Software Vendor (ISV), System Integrator (SI) or work in IT, there are many ways to use the Drive API to enhance productivity, help your company migrate to G Suite, or build tools to automate workflows.

Team Drives features are available in both Drive API v2 and v3, and more details can be found in the Drive API documentation. We look forward to seeing what you build with Team Drives!

Categories: Programming

Money made easily with the new Google Play Billing Library

Android Developers Blog - Mon, 06/12/2017 - 17:00
Posted by Neto Marin, Developer Advocate

Many developers want to make money through their apps, but it's not always easy to deal with all the different types of payment methods. We launched the Google Play In-app Billing API v3 in 2013, helping developers offer in-app products and subscriptions within their apps. Year after year, we've added features to the API, like subscription renewal, upgrades and downgrades, free trials, introductory pricing, promotion codes, and more.

Based on your feedback, we’re pleased to announce the Play Billing Library - Developer Preview 1. This library aims to simplify the development process when it comes to billing, allowing you to focus your efforts on implementing logic specific to your app, such as application architecture and navigation structure. The library includes several convenient classes and features for you to use when integrating your Android apps with the In-app Billing API. The library also provides an abstraction layer on top of the Android Interface Definition Language (AIDL) service, making it easier for you to define the interface between your app and the In-app Billing API.

Easy to get started and easy to use

Starting with the Play Billing Library Developer Preview release, the minimum supported API level is Android 2.2 (API level 8), and the minimum supported In-app Billing API is version 3.

In-app billing relies on the Google Play Store, which handles the communication between your app and Google's Play billing service. To use Google Play billing features, your app must request the com.android.vending.BILLING permission in your AndroidManifest.xml file.

To use the library, add the following dependency in your build.gradle file:

dependencies {
    ...
    compile 'com.android.billingclient:billing:dp-1'
}

After this quick setup process, you're ready to start using the Play Billing Library in your app and can connect to the In-app Billing API, query for available products, start the purchase flow, and more.

Sample updated: Trivial Drive V2

With a new library comes a refreshed sample! To help you to understand how to implement in-app billing in your app using the new Play Billing Library, we've rewritten the Trivial Drive sample from the ground up.

Since we released Trivial Drive back in 2013, many new features, devices, and platforms have been added to the Android ecosystem. To reflect this evolution, the Trivial Drive v2 sample now runs on Android TV and Android Wear.

Give it a try!

Before integrating within your app, you can try the Play Billing Library with the codelab published during Google I/O 2017: Buy and Subscribe: Monetize your app on Google Play.

In this codelab, you will start with a simplified version of Trivial Drive V2 that lets users "drive", and then you will add in-app billing to it. You'll learn how to integrate purchases and subscriptions as well as the best practices for developing reliable apps that handle purchases.

If you are looking for a step-by-step guide on how to sell in-app products from your app using the Play Billing Library, check out our new training class, explaining how to prepare your application, add products for purchase, start the purchase flow, and much more.

We want your feedback

We look forward to hearing your feedback about this new library. Visit the Play Billing Library site, the library reference, and the new version of the Trivial Drive sample. If you have issues or questions, file a bug report on the Google Issue Tracker, and for issues and suggestions on the sample, contact us on the Trivial Drive issues page.

For technical questions on implementation, library usage, and best practices, you can use the tags google-play and play-billing-library on Stackoverflow or visit the community pages on our Google+ page.

Categories: Programming

Property-based testing in Java with JUnit-Quickcheck - Part 2: Generators

Xebia Blog - Mon, 06/12/2017 - 09:45

In Part 1 of this tutorial we created a Property-based test (PBT) from a normal JUnit test with basic types. Now let us extend the domain object PostalParcel with a list of Products. All examples are written with Java 8 and can be downloaded from my gitlab repository. Write an unit test for the function deliveryCosts with […]

The post Property-based testing in Java with JUnit-Quickcheck - Part 2: Generators appeared first on Xebia Blog.

TeamCity running on Docker

In one of the sessions at JaxLondon, Paul Stack mentioned they were running TeamCity in containers at HashiCorp. Because I do quite a number of trainings, demos, and talks about Continuous Delivery, having the CI server and agents portable and containerised is a big win for me. After I saw that JetBrains has an official Docker image [for the server and the agents] on Docker Hub, I decided to do it sooner rather than later.
There are quite a few things I will cover to get a good feel for containers.

Step 1: Setup
I will use Docker Toolbox on my Mac to create the TeamCity server and agents. Two folders are required on my host for the TeamCity server: a data folder and a logs folder, to be introduced as volumes to the server container.


Step 2: Creating VirtualBox VMs

I have Docker Toolbox installed on my Mac. Why not Docker for Mac? Purely because I want to rely on VirtualBox to manage my machines and keep environment variables for the VirtualBox VMs.

mymac:~ demokritos$ docker-machine create --driver virtualbox teamcityserver
mymac:~ demokritos$ docker-machine start teamcityserver
Starting "teamcityserver"...
(teamcityserver) Waiting for an IP...
Machine "teamcityserver" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses.
You may need to re-run the `docker-machine env` command.
mymac:~ demokritos$ docker-machine env teamcityserver
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/demokritos/.docker/machine/machines/teamcityserver"
export DOCKER_MACHINE_NAME="teamcityserver"
# Run this command to configure your shell:
# eval $(docker-machine env teamcityserver)
mymac:~ demokritos$ eval $(docker-machine env teamcityserver)


Step 3: Create and share our volumes

We need to create the folders, give permission to our group [my user is in the wheel group], and share the folders with Docker. You can use `stat $folder` to display the permissions.

mymac:~ demokritos$ sudo mkdir -p /opt/teamcity_server/logs
mymac:~ demokritos$ sudo mkdir -p /data/teamcity_server/datadir
mymac:~ demokritos$ sudo chmod g+rw /opt/teamcity_server/logs
mymac:~ demokritos$ sudo chmod g+rw /data/teamcity_server/datadir

And share them in the Docker preferences (screenshot: 2 folders to share).

This will avoid errors like:
docker: Error response from daemon: error while creating mount source path ‘/opt/teamcity_server/logs’: mkdir /opt/teamcity_server/logs: permission denied.

Step 4: Run the container

sudo docker run -it --name teamcityserver \
-e TEAMCITY_SERVER_MEM_OPTS="-Xmx2g -XX:MaxPermSize=270m \
-XX:ReservedCodeCacheSize=350m" \
-v /data/teamcity_server/datadir:/data/teamcity_server/datadir \
-v /opt/teamcity_server/logs:/opt/teamcity_server/logs \
-p 50004:8111 jetbrains/teamcity-server

If you get an error like:

docker: Error response from daemon: Conflict.
The container name "/teamcityserver" is already in use
by container 4143c2d13192b8020f066b13a2c033750b4ac1ac7d54e822a6b31a5f47489647.
You have to remove (or rename) that container to be able to reuse that name.

Then, if you can find the container with `docker ps -aq`, you can remove it in your terminal; if not, open a new terminal and remove it, e.g.:

 docker rm 4143c2d13192 

There is a long discussion on moby’s GitHub site, if you are interested…

And TC server is ready to be configured… Next, we will set up the agents…
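
For the agents, JetBrains also publishes an official jetbrains/teamcity-agent image on Docker Hub. A minimal sketch of starting one against this server; the SERVER_URL value reuses the docker-machine IP and port mapping from above, and the agent conf volume path is my assumption based on the image's documentation, so check the README on Docker Hub:

# Start an agent container and point it at the TeamCity server started above
sudo docker run -it --name teamcityagent1 \
-e SERVER_URL="http://192.168.99.100:50004" \
-v /data/teamcity_agent/conf:/data/teamcity_agent/conf \
jetbrains/teamcity-agent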


Categories: Programming

Property-based testing in Java with JUnit-Quickcheck - Part 1: The basics

Xebia Blog - Mon, 06/12/2017 - 06:48

To be able to show you what Property-based testing (PBT) is, let's start by grasping the concept of a property in programming languages. Since this is a Java tutorial, I will start with Oracle and their definition of a property in their glossary: Characteristics of an object that users can set, such as the color of a window. Property […]

The post Property-based testing in Java with JUnit-Quickcheck - Part 1: The basics appeared first on Xebia Blog.

Introducing Blockly 1.0 for Android and iOS

Google Code Blog - Fri, 06/09/2017 - 18:44
Posted by Erik Pasternak and the Kids Coding Team

Over the past five years, developers have created hundreds of projects with Blockly, our open source library for creating block-based coding experiences. These have ranged from education platforms like Code.org to electronics kits like littleBits and even Android app creation tools like MIT App Inventor. Last year, we also announced our collaboration with the Scratch Team to develop Scratch Blocks—a fork of Blockly optimized for creating coding apps for kids.

Today, we're finalizing our 1.0 release of Blockly on Android and iOS. These versions have everything you need to use Blockly natively in your mobile app, including:

  • Blockly's standard UI
  • Custom blocks, toolbox categories, and layouts
  • Functions, variables, mutators, and extensions
  • Code generation in JavaScript, Python, Dart, PHP, and Lua
  • Internationalization support (including for RTL languages)

While our 1.0 update today is focused on native mobile, we've also made several updates to the web project over the past six months. We've made major improvements to performance and testing, added more structured APIs, and improved touch support for the mobile web. In addition, we improved Internet Explorer and Edge support; Blockly is fully supported on IE10+.

We've done a lot of work to ease cross platform development, too! All blocks can now be defined by JSON, allowing a single set of block definitions to be used for web, iOS, and Android. Check out the documentation for more details on all three platforms.

Get started right away with our iOS Codelab (Android coming soon)! To learn more about Blockly, check out the above intro video, visit our developer site, join our mailing list, or jump right into the code for web, Android, or iOS.

Categories: Programming