Software Development Blogs: Programming, Software Testing, Agile Project Management


Programming

Luigi: An ExternalProgramTask example – Converting JSON to CSV

Mark Needham - Sat, 03/25/2017 - 15:09

I’ve been playing around with Luigi, a Python library for building pipelines of batch jobs, and I struggled to find an example of an ExternalProgramTask, so this is my attempt at filling that void.

Luigi - the Python data library for building data science pipelines

I’m building a little data pipeline to get data from the meetup.com API and put it into CSV files that can be loaded into Neo4j using the LOAD CSV command.

The first task I created calls the /groups endpoint and saves the result into a JSON file:

import luigi
import requests
import json
from collections import Counter

class GroupsToJSON(luigi.Task):
    key = luigi.Parameter()
    lat = luigi.Parameter()
    lon = luigi.Parameter()

    def run(self):
        seed_topic = "nosql"
        uri = "https://api.meetup.com/2/groups?&topic={0}&lat={1}&lon={2}&key={3}".format(seed_topic, self.lat, self.lon, self.key)

        r = requests.get(uri)
        all_topics = [topic["urlkey"] for result in r.json()["results"] for topic in result["topics"]]
        c = Counter(all_topics)

        topics = [entry[0] for entry in c.most_common(10)]

        groups = {}
        for topic in topics:
            uri = "https://api.meetup.com/2/groups?&topic={0}&lat={1}&lon={2}&key={3}".format(topic, self.lat, self.lon, self.key)
            r = requests.get(uri)
            for group in r.json()["results"]:
                groups[group["id"]] = group

        with self.output().open('w') as groups_file:
            json.dump(list(groups.values()), groups_file, indent=4, sort_keys=True)

    def output(self):
        return luigi.LocalTarget("/tmp/groups.json")

We define a few parameters at the top of the class, which are passed in when the task is executed. The most interesting lines of the run function are the last couple, where we write the JSON to a file: self.output() refers to the target defined in the output function, which in this case is /tmp/groups.json.
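
If you want to sanity-check this task on its own before wiring up the rest of the pipeline, you can invoke it directly and pass the parameters on the command line (this assumes the code lives in blog.py, as in the full run shown later):

$ PYTHONPATH="." luigi --module blog --local-scheduler GroupsToJSON --key xxx --lat 51.5072 --lon 0.1275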

Now we need a task to convert that JSON file into CSV format. The jq command-line tool does this job well, so we'll use it. The following task wraps the call:

from luigi.contrib.external_program import ExternalProgramTask

class GroupsToCSV(ExternalProgramTask):
    file_path = "/tmp/groups.csv"
    key = luigi.Parameter()
    lat = luigi.Parameter()
    lon = luigi.Parameter()

    def program_args(self):
        return ["./groups.sh", self.input()[0].path, self.output().path]

    def output(self):
        return luigi.LocalTarget(self.file_path)

    def requires(self):
        yield GroupsToJSON(self.key, self.lat, self.lon)

groups.sh

#!/bin/bash

in="${1}"   # JSON file produced by GroupsToJSON
out="${2}"  # CSV file to write

# Write the header row, then let jq flatten each group into one CSV row.
echo "id,name,urlname,link,rating,created,description,organiserName,organiserMemberId" > "${out}"
jq -r '.[] | [.id, .name, .urlname, .link, .rating, .created, .description, .organizer.name, .organizer.member_id] | @csv' "${in}" >> "${out}"

I wanted to call jq directly from the Python code, but I couldn't figure out how to do it, so putting that code in a shell script is my workaround.
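
For what it's worth, one way to avoid the shell script entirely is to drop ExternalProgramTask and shell out to jq from a plain luigi.Task using subprocess, capturing jq's stdout instead of relying on shell redirection. Here's a minimal sketch along those lines, building on the classes above; it assumes jq is on the PATH, and the class name GroupsToCSVDirect is made up for illustration:

import subprocess

class GroupsToCSVDirect(luigi.Task):
    key = luigi.Parameter()
    lat = luigi.Parameter()
    lon = luigi.Parameter()

    def requires(self):
        yield GroupsToJSON(self.key, self.lat, self.lon)

    def output(self):
        return luigi.LocalTarget("/tmp/groups.csv")

    def run(self):
        header = ("id,name,urlname,link,rating,created,"
                  "description,organiserName,organiserMemberId")
        jq_filter = ('.[] | [.id, .name, .urlname, .link, .rating, .created, '
                     '.description, .organizer.name, .organizer.member_id] | @csv')
        # jq prints the CSV rows to stdout; capture them instead of using ">>".
        rows = subprocess.check_output(
            ["jq", "-r", jq_filter, self.input()[0].path])
        with self.output().open('w') as csv_file:
            csv_file.write(header + "\n")
            csv_file.write(rows.decode("utf-8"))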

The last piece of the puzzle is a wrapper task that launches the others:

import os

class Meetup(luigi.WrapperTask):
    def run(self):
        print("Running Meetup")

    def requires(self):
        key = os.environ['MEETUP_API_KEY']
        lat = os.getenv('LAT', "51.5072")
        lon = os.getenv('LON', "0.1275")

        yield GroupsToCSV(key, lat, lon)

Now we’re ready to run the tasks:

$ PYTHONPATH="." luigi --module blog --local-scheduler Meetup
DEBUG: Checking if Meetup() is complete
DEBUG: Checking if GroupsToCSV(key=xxx, lat=51.5072, lon=0.1275) is complete
INFO: Informed scheduler that task   Meetup__99914b932b   has status   PENDING
DEBUG: Checking if GroupsToJSON(key=xxx, lat=51.5072, lon=0.1275) is complete
INFO: Informed scheduler that task   GroupsToCSV_xxx_51_5072_0_1275_e07372cebf   has status   PENDING
INFO: Informed scheduler that task   GroupsToJSON_xxx_51_5072_0_1275_e07372cebf   has status   PENDING
INFO: Done scheduling tasks
INFO: Running Worker with 1 processes
DEBUG: Asking scheduler for work...
DEBUG: Pending tasks: 3
INFO: [pid 4452] Worker Worker(salt=970508581, workers=1, host=Marks-MBP-4, username=markneedham, pid=4452) running   GroupsToJSON(key=xxx, lat=51.5072, lon=0.1275)
INFO: [pid 4452] Worker Worker(salt=970508581, workers=1, host=Marks-MBP-4, username=markneedham, pid=4452) done      GroupsToJSON(key=xxx, lat=51.5072, lon=0.1275)
DEBUG: 1 running tasks, waiting for next task to finish
INFO: Informed scheduler that task   GroupsToJSON_xxx_51_5072_0_1275_e07372cebf   has status   DONE
DEBUG: Asking scheduler for work...
DEBUG: Pending tasks: 2
INFO: [pid 4452] Worker Worker(salt=970508581, workers=1, host=Marks-MBP-4, username=markneedham, pid=4452) running   GroupsToCSV(key=xxx, lat=51.5072, lon=0.1275)
INFO: Running command: ./groups.sh /tmp/groups.json /tmp/groups.csv
INFO: [pid 4452] Worker Worker(salt=970508581, workers=1, host=Marks-MBP-4, username=markneedham, pid=4452) done      GroupsToCSV(key=xxx, lat=51.5072, lon=0.1275)
DEBUG: 1 running tasks, waiting for next task to finish
INFO: Informed scheduler that task   GroupsToCSV_xxx_51_5072_0_1275_e07372cebf   has status   DONE
DEBUG: Asking scheduler for work...
DEBUG: Pending tasks: 1
INFO: [pid 4452] Worker Worker(salt=970508581, workers=1, host=Marks-MBP-4, username=markneedham, pid=4452) running   Meetup()
Running Meetup
INFO: [pid 4452] Worker Worker(salt=970508581, workers=1, host=Marks-MBP-4, username=markneedham, pid=4452) done      Meetup()
DEBUG: 1 running tasks, waiting for next task to finish
INFO: Informed scheduler that task   Meetup__99914b932b   has status   DONE
DEBUG: Asking scheduler for work...
DEBUG: Done
DEBUG: There are no more tasks to run at this time
INFO: Worker Worker(salt=970508581, workers=1, host=Marks-MBP-4, username=markneedham, pid=4452) was stopped. Shutting down Keep-Alive thread
INFO: 
===== Luigi Execution Summary =====

Scheduled 3 tasks of which:
* 3 ran successfully:
    - 1 GroupsToCSV(key=xxx, lat=51.5072, lon=0.1275)
    - 1 GroupsToJSON(key=xxx, lat=51.5072, lon=0.1275)
    - 1 Meetup()

This progress looks 🙂 because there were no failed tasks or missing external dependencies

===== Luigi Execution Summary =====

Looks good! Let’s quickly look at our CSV file:

$ head -n10 /tmp/groups.csv 
id,name,urlname,link,rating,created,description,organiserName,organiserMemberId
1114381,"London NoSQL, MySQL, Open Source Community","london-nosql-mysql","https://www.meetup.com/london-nosql-mysql/",4.28,1208505614000,"

Meet others in London interested in NoSQL, MySQL, and Open Source Databases.

","Sinead Lawless",185675230 1561841,"Enterprise Search London Meetup","es-london","https://www.meetup.com/es-london/",4.66,1259157419000,"

Enterprise Search London is a meetup for anyone interested in building search and discovery experiences — from intranet search and site search, to advanced discovery applications and beyond.

Disclaimer: This meetup is NOT about SEO or search engine marketing.

What people are saying:

  • ""Join this meetup if you have a passion for enterprise search and user experience that you would like to share with other able-minded practitioners."" — Vegard Sandvold
  • ""Full marks for vision and execution. Looking forward to the next Meetup."" — Martin White
  • “Consistently excellent” — Helen Lippell

Sweet! And what if we run it again?

$ PYTHONPATH="." luigi --module blog --local-scheduler Meetup
DEBUG: Checking if Meetup() is complete
INFO: Informed scheduler that task   Meetup__99914b932b   has status   DONE
INFO: Done scheduling tasks
INFO: Running Worker with 1 processes
DEBUG: Asking scheduler for work...
DEBUG: Done
DEBUG: There are no more tasks to run at this time
INFO: Worker Worker(salt=172768377, workers=1, host=Marks-MBP-4, username=markneedham, pid=4531) was stopped. Shutting down Keep-Alive thread
INFO: 
===== Luigi Execution Summary =====

Scheduled 1 tasks of which:
* 1 present dependencies were encountered:
    - 1 Meetup()

Did not run any tasks
This progress looks 🙂 because there were no failed tasks or missing external dependencies

===== Luigi Execution Summary =====

As expected, nothing happens since our dependencies are already satisfied, and we have our first Luigi pipeline up and running.
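
Incidentally, that no-op second run falls out of Luigi's default completeness check: a task counts as done once the target returned by its output function exists, so deleting /tmp/groups.csv would make the next run regenerate it. A tiny illustration of the mechanism (the Touch class is hypothetical, not part of the pipeline above):

class Touch(luigi.Task):
    def output(self):
        return luigi.LocalTarget("/tmp/touch.marker")

    def run(self):
        # Writing the target is what marks the task complete; on the next
        # invocation Luigi sees the file and skips run() entirely.
        with self.output().open('w') as marker:
            marker.write("done")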


Categories: Programming

Thunderbolting Your Video Card

Coding Horror - Jeff Atwood - Fri, 03/24/2017 - 10:08

When I wrote about The Golden Age of x86 Gaming, I implied that, in the future, it might be an interesting, albeit expensive, idea to upgrade your video card via an external Thunderbolt 3 enclosure.

I'm here to report that the future is now.

Yes, that's right, I paid $500 for an external Thunderbolt 3 enclosure to fit a $600 video card, all to enable a plug-in upgrade of a GPU on a Skull Canyon NUC that itself cost around $1000 fully built. I know, it sounds crazy, and … OK fine, I won't argue with you. It's crazy.

This matters mostly because of 4k, aka 2160p, aka 3840 × 2160, aka Ultra HD.

4k compared to 1080p

Plain old regular HD, aka 1080p, aka 1920 × 1080, is one quarter the size of 4k, and one quarter the work. By today's GPU standards, HD is pretty much easy mode. It's not even interesting. No offense to console fans, or anything.

Late in 2016, I got a 4k OLED display and it … kind of blew my mind. I have never seen blacks so black, colors so vivid, on a display so thin. It made my previous 2008 era Panasonic plasma set look lame. It's so good that I'm now a little angry that every display that my eyes touch isn't OLED already. I even got into nerd fights over it, and to be honest, I'd still throw down for OLED. It is legitimately that good. Come at me, bro.

Don't believe me? Well, guess which display in the below picture is OLED? Go on, guess:

Guess which screen is OLED?

@andrewbstiles if it was physically possible to have sex with this TV I.. uh.. I'd take it on long, romantic walks

— Jeff Atwood (@codinghorror) August 13, 2016

There's a reason every site that reviews TVs had to recalibrate their results when they reviewed the 2016 OLED sets.

In my extended review at Reference Home Theater, I call it “the best looking TV I’ve ever reviewed.” But we aren’t alone in loving the E6. Vincent Teoh at HDTVtest writes, “We’re not even going to qualify the following endorsement: if you can afford it, this is the TV to buy.” Rtings.com gave the E6 OLED the highest score of any TV the site has ever tested. Reviewed.com awarded it a 9.9 out of 10, with only the LG G6 OLED (which offers the same image but better styling and sound for $2,000 more) coming out ahead.

But I digress.

Playing games at 1080p in my living room was already possible. But now that I have an incredible 4k display in the living room, it's a whole other level of difficulty. Not just twice as hard – and remember current consoles barely manage to eke out 1080p at 30fps in most games – but four times as hard. That's where external GPU power comes in.

The cool technology underpinning all of this is Thunderbolt 3. The Thunderbolt 3 cable bundled with the Razer Core is rather … diminutive. There's a reason for this.

Is there a maximum cable length for Thunderbolt 3 technology?

Thunderbolt 3 passive cables have maximum lengths.

  • 0.5m TB 3 (40Gbps)
  • 1.0m TB 3 (20Gbps)
  • 2.0m TB 3 (20Gbps)

In the future we will offer active cables which will provide 40Gbps of bandwidth at longer lengths.

40Gbps is, for the record, an insane amount of bandwidth. Let's use our rule of thumb based on ultra-common Gigabit Ethernet, that 1 gigabit ≈ 120 megabytes/second, and we arrive at 4.8 gigabytes/second. Zow.
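
As a quick back-of-envelope check, that arithmetic fits in a couple of lines of Python (used purely as a calculator, with the same numbers quoted above):

# ~1 gigabit/second of line rate moves roughly 120 megabytes/second of payload.
gbps = 40
mb_per_gbit = 120
print("{:.1f} GB/second".format(gbps * mb_per_gbit / 1000.0))  # 4.8 GB/second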

That's more than enough bandwidth to run even the highest of high end video cards, but it is not without overhead. There's a mild performance hit for running the card externally, on the order of 15%. There's also a further performance hit of 10% if you are in "loopback" mode on a laptop where you don't have an external display, so the video frames have to be shuttled back from the GPU to the internal laptop display.

This may look like a gamer-only thing, but surprisingly, it isn't. What you get is the general purpose ability to attach any PCI express card to any computer with a Thunderbolt 3 port and, for the most part, it just works!

Linus breaks it down and answers all your most difficult questions:

Please watch the above video closely if you're actually interested in this stuff; it is essential. I'll add some caveats of my own after working with the Razer Core for a while:

  • Make sure the video card you plan to put into the Razer Core is not too tall, or too wide. You can tell if a card is going to be too tall by looking at pictures of the mounting rear bracket. If the card extends significantly above the standard rear mounting bracket, it won't fit. If the card takes more than 2 slots in width, it also won't fit, but this is more rare. Depth (length) is rarely an issue.

  • There are four fans in the Razer Core and although it is reasonably quiet, it's not super silent or anything. You may want to mod the fans. The Razer Core is a remarkably simple device internally; it's really just a power supply, some Thunderbolt 3 bridge logic, and a PCI Express slot. I agree with Linus that the #1 area Razer could improve in the future, beyond generally getting the price down, is to use fewer and larger fans that run quieter.

  • If you're putting a heavy hitter GPU in the Razer Core, I'd try to avoid blower style cards (the ones that exhaust heat from the rear) in favor of those that cool with large fans blowing down and around the card. Dissipating 150w+ is no mean feat and you'll definitely need to keep the enclosure in open air … and of course within 0.5 meters of the computer it's connected to.

  • There is no visible external power switch on the Razer Core. It doesn't power on until you connect a TB3 cable to it. I was totally not expecting that. But once connected, it powers up and the Windows 10 Thunderbolt 3 drivers kick in and ask you to authorize the device, which I did (always authorize). Then it spun a bit, detected the new GPU, and suddenly I had multiple graphics cards active on the same computer. I also installed the latest Nvidia drivers just to make sure everything was ship shape.

  • It's kinda ... weird having multiple GPUs simultaneously active. I wanted to make the Razer Core display the only display, but you can't really turn off the built in GPU – you can select "only use display 2", that's all. I got into several weird states where windows were opening on the other display and I had to mess around a fair bit to get things locked down to just one display. You may want to consider whether you have both "displays" connected for troubleshooting, or not.

And then, there I am, playing Lego Marvel in splitscreen co-op at glorious 3840 × 2160 UltraHD resolution on an amazing OLED display with my son. It is incredible.

Beyond the technical "because I could", I am wildly optimistic about the future of external Thunderbolt 3 expansion boxes, and here's why:

  • The main expense and bottleneck in any stonking gaming rig is, by far, the GPU. It's also the item you are most likely to need to replace a year or two from now.

  • The CPU and memory speeds available today are so comically fast that even a low-end $120 i3-7100 makes zero difference in real world gaming at 1080p or higher … if you're OK with 30fps minimum. If you bump up to $200, you can get a quad-core i5-7500 that guarantees you 60fps minimum everywhere.

  • If you prefer a small system or a laptop, an external GPU makes it so much more flexible. Because CPU and memory speeds are already so fast, 99.9% of the time your bottleneck is the GPU, and almost any small device you can buy with a Thunderbolt 3 port can now magically transform into a potent gaming rig with a single plug. Thunderbolt 3 may be a bit cutting edge today, but more and more devices are shipping with Thunderbolt 3. Within a few years, I predict TB3 ports will be as common as USB3 ports.

  • A general purpose external PCI express enclosure will be usable for a very long time. My last seven video card upgrades were plug and play PCI Express cards that would have worked fine in any computer I've built in the last ten years.

  • External GPUs are not meaningfully bottlenecked by Thunderbolt 3 bandwidth; the impact is 15% to 25%, and perhaps even less over time as drivers and implementations mature. While Thunderbolt 3 has "only" PCI Express x4 bandwidth, many benchmarkers have noted that moving a GPU from PCI Express x16 to x8 has almost no effect on performance. And there's always Thunderbolt 4 on the horizon.

The future, as they say, is already here – it's just not evenly distributed.

I am painfully aware that costs need to come down. Way, way down. The $499 Razer Core is well made, on the vanguard of what's possible, a harbinger of the future, and fantastically enough, it does even more than what it says on the tin. But it's not exactly affordable.

I would absolutely love to see a modest, dedicated $200 external Thunderbolt 3 box that included an inexpensive current-gen GPU. This would clobber any onboard GPU on the planet. Let's compare my Skull Canyon NUC, which has Intel's fastest ever, PS4 class embedded GPU, with the modest $150 GeForce GTX 1050 Ti:

1920 × 1080, high detail (Skull Canyon NUC → GTX 1050 Ti):

  • Bioshock Infinite: 15 → 79 fps
  • Rise of the Tomb Raider: 12 → 49 fps
  • Overwatch: 43 → 114 fps

As predicted, that's a 3x-5x stompdown. Mac users lamenting their general lack of upgradeability, hear me: this sort of box is exactly what you want and need. Imagine if Apple were to embrace upgrading their laptops and all-in-one systems via Thunderbolt 3.

I know, I know. It's a stretch. But a man can dream … of externally upgradeable GPUs. That are too expensive, sure, but they are here, right now, today. They'll only get cheaper over time.

Categories: Programming

Monitor Your Mesos Cluster with StackState

Xebia Blog - Fri, 03/24/2017 - 09:28

This post is part 2 in a 4-part series about Container Monitoring. Post 1 dives into some of the new challenges containers and microservices create and the information you should focus on. This article describes how to monitor your Mesos cluster. Apache Mesos is a distributed systems kernel at the heart of the Mesosphere DC/OS and is designed for […]


5 Tips for launching successful apps and games on Google Play

Android Developers Blog - Thu, 03/23/2017 - 19:24
Posted by Adam Gutterman, Go-To-Market Strategic Lead, Google Play Games

Last month at the Game Developers Conference (GDC), we held a developer panel focused on sharing best practices for building successful app and game businesses. Check out 5 tips for developers, both large and small, as shared by our gaming partners at Electronic Arts (EA), Hutch Games, Nix Hydra, Space Ape Games and Omnidrone.



1. Test, test, test
The best time to test is before you launch, so test boldly and test a lot! Nix Hydra recommends testing creative, including art style and messaging, as well as gameplay mechanics, onboarding flows and anything else you're not sure about. Gathering feedback from real users in advance of launching can highlight what's working and what can be improved to ensure your game's in the best shape possible at launch.
2. Store listing experiments
Run experiments on all of your store listing page assets. Taking bold risks instead of making assumptions allows you to see the impact of different variables with your actual user base on Google Play. Test in different regions to ensure your store listing page is optimized for each major market, as they often perform differently.

3. Early Access program

Space Ape Games recently used Early Access to test different onboarding experiences and gameplay control methods in their game. Finding the right combination led them to double-digit growth in D1 retention. Gathering these results in advance of launch helped the team fine tune and polish the game, minimizing risk before releasing to the masses.

"Early Access is cool because you can ask the big questions and get real answers from real players," Joe Raeburn, Founding Product Guy at Space Ape Games.
Watch the Android Developer Story below to hear how Omnidrone benefits from Early Access using strong user feedback to improve retention, engagement and monetization in their game.


Mobile game developer Omnidrone benefits from Early Access.
4. Pre-registration

Electronic Arts has run more than 5 pre-registration campaigns on Google Play. Pre-registration allows them to start marketing and build awareness for titles with a clear call-to-action before launch. This gives them a running start on launch day: having built a group of users to activate upon the game's release, they see a jump in D1 installs.

5. Seek feedback

All partners strongly recommended seeking feedback early and often. Feedback tells both sides of the story, by pointing out what's broken as well as what you're doing right. Find the right time and channels to request feedback, whether they be in-game, social, email, or even through reading and responding to reviews within the Google Play store.

If you're a startup with an upcoming launch on Google Play, or you've recently launched an app or game and are interested in opportunities like Early Access and pre-registration, get in touch with us so we can work with you.

Watch sessions from Google Developer Day at GDC17 on the Android Developers YouTube channel to learn tips for success. Also, visit the Android Developers website to stay up-to-date with features and best practices that will help you grow a successful business on Google Play.



Categories: Programming

Diverse protections for a diverse ecosystem: Android Security 2016 Year in Review

Android Developers Blog - Wed, 03/22/2017 - 18:49
Posted by Adrian Ludwig & Mel Miller, Android Security Team
Today, we're sharing the third annual Android Security Year In Review, a comprehensive look at our work to protect more than 1.4 billion Android users and their data.

Our goal is simple: keep our users safe. In 2016, we improved our abilities to stop dangerous apps, built new security features into Android 7.0 Nougat, and collaborated with device manufacturers, researchers, and other members of the Android ecosystem. For more details, you can read the full Year in Review report or watch our webinar.


Protecting you from PHAs
It's critical to keep people safe from Potentially Harmful Apps (PHAs) that may put their data or devices at risk. Our ongoing work in this area requires us to find ways to track and stop existing PHAs, and anticipate new ones that haven't even emerged yet.
Over the years, we've built a variety of systems to address these threats, such as application analyzers that constantly review apps for unsafe behavior, and Verify Apps which regularly checks users' devices for PHAs. When these systems detect PHAs, we warn users, suggest they think twice about downloading a particular app, or even remove the app from their devices entirely.

We constantly monitor threats and improve our systems over time. Last year's data reflected those improvements: Verify Apps conducted 750 million daily checks in 2016, up from 450 million the previous year, enabling us to reduce the PHA installation rate in the top 50 countries for Android usage.

Google Play continues to be the safest place for Android users to download their apps. Installs of PHAs from Google Play decreased in nearly every category:
  • Trojans: now 0.016 percent of installs, down 51.5 percent from 2015
  • Hostile downloaders: now 0.003 percent of installs, down 54.6 percent from 2015
  • Backdoors: now 0.003 percent of installs, down 30.5 percent from 2015
  • Phishing apps: now 0.0018 percent of installs, down 73.4 percent from 2015
By the end of 2016, only 0.05 percent of devices that downloaded apps exclusively from Play contained a PHA, down from 0.15 percent in 2015.

Still, there's more work to do for devices overall, especially those that install apps from multiple sources. While only 0.71 percent of all Android devices had PHAs installed at the end of 2016, that was a slight increase from about 0.5 percent in the beginning of 2015. Using improved tools and the knowledge we gained in 2016, we think we can reduce the number of devices affected by PHAs in 2017, no matter where people get their apps.
New security protections in Nougat
Last year, we introduced a variety of new protections in Nougat, and continued our ongoing work to strengthen the security of the Linux Kernel.
  • Encryption improvements: In Nougat, we introduced file-based encryption, which enables each user profile on a single device to be encrypted with a unique key. If you have personal and work accounts on the same device, for example, the key from one account can't unlock data from the other. More broadly, encryption of user data has been required for capable Android devices since late 2014, and we now see that feature enabled on over 80 percent of Android Nougat devices.
  • New audio and video protections: We did significant work to improve security and re-architect how Android handles video and audio media. One example: we now store different media components in individual sandboxes, where previously they lived together. Now if one component is compromised, it doesn't automatically have permissions to other components, which helps contain any additional issues.
  • Even more security for enterprise users: We introduced a variety of new enterprise security features including "Always On" VPN, which protects your data from the moment your device boots up and ensures it isn't traveling from a work phone to your personal device via an insecure connection. We also added security policy transparency, process logging, improved Wi-Fi certification handling, and client certification improvements to our growing set of enterprise tools.
Working together to secure the Android ecosystem
Sharing information about security threats between Google, device manufacturers, the research community, and others helps keep all Android users safer. In 2016, our biggest collaborations were our monthly security updates program and ongoing partnership with the security research community.

Security updates are regularly highlighted as a pillar of mobile security—and rightly so. We launched our monthly security updates program in 2015, following the public disclosure of a bug in Stagefright, to help accelerate patching security vulnerabilities across devices from many different device makers. This program expanded significantly in 2016:
  • More than 735 million devices from 200+ manufacturers received a platform security update in 2016.
  • We released monthly Android security updates throughout the year for devices running Android 4.4.4 and up—that accounts for 86.3 percent of all active Android devices worldwide.
  • Our carrier and hardware partners helped expand deployment of these updates, releasing updates for over half of the top 50 devices worldwide in the last quarter of 2016.
We provided monthly security updates for all supported Pixel and Nexus devices throughout 2016, and we're thrilled to see our partners invest significantly in regular updates as well. There's still a lot of room for improvement, however: about half of devices in use at the end of 2016 had not received a platform security update in the previous year. We're working to increase device security updates by streamlining our security update program to make it easier for manufacturers to deploy security patches, and by releasing A/B updates to make it easier for users to apply those patches.

On the research side, our Android Security Rewards program grew rapidly: we paid researchers nearly $1 million for their reports in 2016. In parallel, we worked closely with various security firms to identify and quickly fix issues that may have posed risks to our users.

We appreciate all of the hard work by Android partners, external researchers, and teams at Google that led to the progress the ecosystem has made with security in 2016. But it doesn't stop there. Keeping you safe requires constant vigilance and effort. We're looking forward to new insights and progress in 2017 and beyond.
Categories: Programming

Docker container secrets on AWS ECS

Xebia Blog - Wed, 03/22/2017 - 08:42

Almost every application needs some kind of secret to do its work. There are all kinds of ways to provide these to containers, but it all comes down to the following five: save the secrets inside the image, provide the secrets through ENV variables, provide the secrets through volume mounts, use a secrets […]


O-MG, the Developer Preview of Android O is here!

Android Developers Blog - Wed, 03/22/2017 - 01:55

Posted by Dave Burke, VP of Engineering

Since the first launch in 2008, the Android project has thrived on the incredible feedback from our vibrant ecosystems of app developers and device makers, as well as of course our users. More recently, we've been pushing hard on improving our engineering processes so we can share our work earlier and more openly with our partners.

So, today, I'm excited to share a first developer preview of the next version of the OS: Android O. The usual caveats apply: it's early days, there are more features coming, and there's still plenty of stabilization and performance work ahead of us. But it's booting :).

Over the course of the next several months, we'll be releasing updated developer previews, and we'll be doing a deep dive on all things Android at Google I/O in May. In the meantime, we'd love your feedback on trying out new features, and of course testing your apps on the new OS.

What's new in O?

Android O introduces a number of new features and APIs to use in your apps. Here are just a few new things for you to start trying in this first Developer Preview:

Background limits: Building on the work we began in Nougat, Android O puts a big priority on improving a user's battery life and the device's interactive performance. To make this possible, we've put additional automatic limits on what apps can do in the background, in three main areas: implicit broadcasts, background services, and location updates. These changes will make it easier to create apps that have minimal impact on a user's device and battery. Background limits represent a significant change in Android, so we want every developer to get familiar with them. Check out the documentation on background execution limits and background location limits for details.
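
As a concrete illustration of the kind of change involved (a generic sketch, not code from the documentation): an implicit broadcast such as CONNECTIVITY_ACTION, already restricted for manifest-declared receivers in Nougat, should instead be registered at runtime while your app is in the foreground:

    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;
    import android.content.IntentFilter;
    import android.net.ConnectivityManager;

    public class ConnectivityWatcher extends BroadcastReceiver {
        @Override
        public void onReceive(Context context, Intent intent) {
            // React to connectivity changes while the app is actually in use.
        }

        // Register in onStart() and unregister in onStop() instead of declaring
        // the receiver in the manifest, where implicit broadcasts are limited.
        public static ConnectivityWatcher register(Context context) {
            ConnectivityWatcher watcher = new ConnectivityWatcher();
            context.registerReceiver(watcher,
                    new IntentFilter(ConnectivityManager.CONNECTIVITY_ACTION));
            return watcher;
        }
    }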

Notification channels: Android O also introduces notification channels, which are new app-defined categories for notification content. Channels let developers give users fine-grained control over different kinds of notifications — users can block or change the behavior of each channel individually, rather than managing all of the app's notifications together.

Notification channels let users control your app's notification categories

Android O also adds new visuals and grouping to notifications that make it easier for users to see what's going on when they have an incoming message or are glancing at the notification shade.
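
To make the channels model concrete, here is a minimal sketch of creating a channel and posting a notification to it. The channel id and name are our own illustrative choices, and the builder method names follow the API shape as documented for the preview, so they may still change:

    import android.app.Notification;
    import android.app.NotificationChannel;
    import android.app.NotificationManager;
    import android.content.Context;

    public class ChannelDemo {
        public static void notifyOnChannel(Context context) {
            NotificationManager nm =
                    (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);

            NotificationChannel channel = new NotificationChannel(
                    "updates",                               // app-defined channel id (our own choice)
                    "Product updates",                       // user-visible channel name
                    NotificationManager.IMPORTANCE_DEFAULT);
            nm.createNotificationChannel(channel);           // safe to call repeatedly

            Notification notification = new Notification.Builder(context)
                    .setChannelId("updates")                 // route this notification to the channel
                    .setSmallIcon(android.R.drawable.stat_notify_chat) // placeholder icon
                    .setContentTitle("New content available")
                    .build();
            nm.notify(1, notification);
        }
    }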

Autofill APIs: Android users already depend on a range of password managers to autofill login details and repetitive information, which makes setting up new apps or placing transactions easier. Now we are making this work more easily across the ecosystem by adding platform support for autofill. Users can select an autofill app, similar to the way they select a keyboard app. The autofill app stores and secures user data, such as addresses, user names, and even passwords. For apps that want to handle autofill, we're adding new APIs to implement an Autofill service.

PIP for handsets and new windowing features: Picture in Picture (PIP) display is now available on phones and tablets, so users can continue watching a video while they're answering a chat or hailing a car. Apps can put themselves in PIP mode from the resumed or a pausing state where the system supports it, and you can specify the aspect ratio and a set of custom interactions (such as play/pause). Other new windowing features include a new app overlay window for apps to use instead of system alert window, and multi-display support for launching an activity on a remote display.
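
A minimal sketch of what entering PIP might look like, assuming an Activity that declares android:supportsPictureInPicture in its manifest (class and method names follow the preview documentation as we understand it and may change):

    import android.app.Activity;
    import android.app.PictureInPictureParams;
    import android.util.Rational;

    public class VideoActivity extends Activity {
        // Call this when the user navigates away while playback should continue.
        private void minimizeToPip() {
            PictureInPictureParams params = new PictureInPictureParams.Builder()
                    .setAspectRatio(new Rational(16, 9)) // keep a 16:9 window
                    .build();
            enterPictureInPictureMode(params);
        }
    }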

Font resources in XML: Fonts are now a fully supported resource type in Android O. Apps can now use fonts in XML layouts as well as define font families in XML — declaring the font style and weight along with the font files.

Adaptive icons: To help you integrate better with the device UI, you can now create adaptive icons that the system displays in different shapes, based on a mask selected by the device. The system also animates interactions with the icons, and uses them in the launcher, shortcuts, Settings, sharing dialogs, and in the overview screen.

Adaptive icons display in a variety of shapes across different device models.

Wide-gamut color for apps: Android developers of imaging apps can now take advantage of new devices that have a wide-gamut color capable display. To display wide gamut images, apps will need to enable a flag in their manifest (per activity) and load bitmaps with an embedded wide color profile (AdobeRGB, Pro Photo RGB, DCI-P3, etc.).

Connectivity: For the ultimate in audio fidelity, Android O now also supports high-quality Bluetooth audio codecs such as LDAC codec. We're also adding new Wi-Fi features as well, like Wi-Fi Aware, previously known as Neighbor Awareness Networking (NAN). On devices with the appropriate hardware, apps and nearby devices can discover and communicate over Wi-Fi without an Internet access point. We're working with our hardware partners to bring Wi-Fi Aware technology to devices as soon as possible.

The Telecom framework is extending the ConnectionService APIs to enable third-party calling apps to integrate with the System UI and operate seamlessly with other audio apps. For instance, apps can have their calls displayed and controlled in different kinds of UIs, such as car head units.

Keyboard navigation: With the advent of Google Play apps on Chrome OS and other large form factors, we're seeing a resurgence of keyboard navigation use within these apps. In Android O we focused on building a more reliable, predictable model for "arrow" and "tab" navigation that aids both developers and end users.

AAudio API for Pro Audio: AAudio is a new native API that's designed specifically for apps that require high-performance, low-latency audio. Apps using AAudio read and write data via streams. In the Developer Preview we're releasing an early version of this new API to get your feedback.

WebView enhancements: In Android Nougat we introduced an optional multiprocess mode for WebView that moved the handling of web content into an isolated process. In Android O, we're enabling multiprocess mode by default and adding an API to let your app handle errors and crashes, for enhanced security and improved app stability. As a further security measure, you can now opt in your app's WebView objects to verify URLs through Google Safe Browsing.
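
For the crash-handling API, a sketch of the general shape (assuming the WebViewClient callback described in the preview docs): returning true tells the framework your app has handled the renderer's death instead of being killed along with it.

    import android.webkit.RenderProcessGoneDetail;
    import android.webkit.WebView;
    import android.webkit.WebViewClient;

    public class RecoveringWebViewClient extends WebViewClient {
        @Override
        public boolean onRenderProcessGone(WebView view, RenderProcessGoneDetail detail) {
            if (detail.didCrash()) {
                // A real crash in web content: log it, then rebuild the WebView.
            } else {
                // The system killed the renderer (e.g. low memory): recreate quietly.
            }
            return true; // handled; returning false kills the app process
        }
    }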

Java 8 Language APIs and runtime optimizations: Android now supports several new Java Language APIs, including the new java.time API. In addition, the Android Runtime is faster than ever before, with improvements of up to 2x on some application benchmarks.
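
For instance, code along these lines (a plain Java sketch) can now compile against the Android SDK without a backport library, subject to the documented subset of supported APIs:

    import java.time.ZoneId;
    import java.time.ZonedDateTime;
    import java.time.format.DateTimeFormatter;

    public class TimeDemo {
        // java.time offers immutable, timezone-aware types in place of
        // java.util.Date and Calendar.
        public static String utcTimestamp() {
            ZonedDateTime now = ZonedDateTime.now(ZoneId.of("UTC"));
            return now.format(DateTimeFormatter.ISO_OFFSET_DATE_TIME);
        }
    }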

Partner platform contributions: Hardware manufacturers and silicon partners have accelerated fixes and enhancements to the Android platform in the O release. For example, Sony has contributed more than 30 feature enhancements including the LDAC codec and 250 bug fixes to Android O.

Get started in a few simple steps

First, make your app compatible to give your users a seamless transition to Android O. Just download a device system image or emulator system image, install your current app, and test -- the app should run and look great, and handle behavior changes properly. After you've made any necessary updates, we recommend publishing to Google Play right away without changing the app's platform targeting.

Building with Android O

When you're ready, dive in to O in depth to learn about everything you can take advantage of for your app. Visit the O Developer Preview site for details on the preview timeline, behavior changes, new APIs, and support resources.

Plan how your app will support background limits and other changes. Try out some of the great new features in your app -- notification channels, PIP, adaptive icons, font resources in XML, autosizing TextView, and many others. To make it easier to explore the new APIs in Android O, we've brought the API diff report online, along with the Android O API reference.

The latest canary version of Android Studio 2.4 includes new features to help you get started with Android O. You can download and set up the O preview SDK from inside Android Studio, then use Android O's XML font resources and autosizing TextView in the Layout Editor. Watch for more Android O support coming in the weeks ahead.

We're also releasing an alpha version of the 26.0.0 support library for you to try. This version adds a number of new APIs and increases the minSdkVersion to 14. Check out the release notes for details.

Preview updates

The O Developer Preview includes an updated SDK with system images for testing on the official Android Emulator and on Nexus 5X, Nexus 6P, Nexus Player, Pixel, Pixel XL and Pixel C devices. If you're building for wearables, there's also an emulator for testing Android Wear 2.0 on Android O.

We plan to update the preview system images and SDK regularly throughout the O Developer Preview. This initial preview release is for developers only and not intended for daily or consumer use, so we're making it available by manual download and flash only. Downloads and instructions are here.

As we get closer to a final product, we'll be inviting consumers to try it out as well, and we'll open up enrollments through Android Beta at that time. Stay tuned for details, but for now please note that Android Beta is not currently available for Android O.

Give us your feedback

As always, your feedback is crucial, so please let us know what you think — the sooner we hear from you, the more of your feedback we can integrate. When you find issues, please report them here. We've moved to a more robust tool, Issue Tracker, which is also used internally at Google to track bugs and feature requests during product development. We hope you'll find it easier to use.

Categories: Programming

What craftsmanship means to me

Actively Lazy - Tue, 03/21/2017 - 22:39

Over a decade ago now I got my first team lead role. It was a reasonably unexpected promotion when the existing team lead left shortly after I joined. This baptism of fire introduced me to line management, but also made me question my career choice. But it was, in hindsight, the beginning of a new journey: of becoming a software craftsman.

With barely 5 years' experience I was certainly no senior developer. And yet here I had been thrust into a team lead role. With so little experience I made many, many mistakes and was probably a pretty rubbish boss for the three other guys on the team. I tried my best. But the whole process was very draining. Worse, I started to see programming at a more abstract level. In charge of a team, I could see that all we were was a factory for turning requirements into working code. The entire process began to feel like turning a handle: feed the team requirements, some praise and a little coffee, and out comes working code.

In the end, a lot of software ends up being very similar: how many CRUD apps does the world really need? Turns out billions of them. And yet, in conception, they’re not massively exciting. Take a piece of data from the user, shovel it back to the database. Take some data out of the database, show it to the user. All very pedestrian. All very repetitive. In this environment it’s easy to become disillusioned with the process of building software. A pointless handle turning exercise.

I moved on from this baptism of fire to my first proper management role. Whereas previously I was still writing code, now I was effectively a full-time manager. I was the team’s meeting and bullshit buffer. It took a lot of buffering. There was a lot of bullshit. I think we even once managed a meeting to discuss why productivity was so poor: maybe the vast number of meetings I was required to attend each day? Or could it have been the 300 emails a day that arrived in my inbox?

If I was disillusioned with the process of writing software before, I now became disillusioned with the entire industry. A large company, little more than a creche for adults, continuing forwards more out of momentum than anything else. Plenty of emails and meetings every day to stop you from having to worry too much about any of that pesky work business.

It was then that I opened my eyes and saw there was a community outside. That programmers across the world were meeting up and discussing what we do. The first thing I saw was the agile community – but even back then it already looked like a vast pyramid scheme. But I was encouraged that there was something larger happening than the dysfunctional companies I kept finding myself working for.

Then Sandro Mancuso and I started talking about software craftsmanship. He introduced me to this movement that seemed to be exactly what I thought was missing in the industry. Not the agile money-go-round, but a movement where the focus is on doing the job right; on life-long learning; on taking pride in your work.

Not long afterwards Sandro and I set up the London Software Craftsmanship Community, which quickly snowballed. It seems we weren't alone in believing that the job can be done well, that the job should be done well. Soon hundreds of developers joined the community.

The first immediate consequence of my involvement in the software craftsmanship community was discovering a new employer: TIM Group. A company that genuinely has a focus on software built well, with pair programming and TDD. A company where you can take pride in a job done well. The most professional software organisation I’ve worked in. They’re almost certainly still hiring, so if you’re looking, you should definitely talk to them.

Finally I’d found the antidote to my disillusionment with how software is often built: the reason I was frustrated is that it was being built badly. That companies often encourage software to be built slapdash and without care, either implicitly or sometimes even explicitly. If building software feels like just turning a handle it’s because you’re not learning anything. If you’re not learning, it’s because you’re not trying to get better at the job. Don’t tell me you’re already perfect at writing software, I don’t believe it.

Through software craftsmanship I rediscovered my love of programming. My love of a job done well. The fine focus on details that has always interested me. But not just the fine details of the code itself: the fine details of how we build it. The mechanics of TDD done well, of how it should feel. I discovered that as I became more senior not only did I find I had so much more to learn, but now I could also teach others. Not only can I take pride in a job done well, but pride in helping others improve, pride in their job done well.


Categories: Programming, Testing & QA

Introducing Android Native Development Kit r14

Android Developers Blog - Tue, 03/21/2017 - 21:58
Posted by Dan Albert, Android NDK Tech Lead

Android NDK r14
The latest version of the Android Native Development Kit (NDK), Android NDK r14, is now available for download. It is also available in the SDK manager via Android Studio.

So what's new in r14? The full changelog can be seen here, but the highlights include the following:
  • Updated all the platform headers to unified headers (covered in detail below)
  • LTO with Clang now works on Darwin and Linux
  • libc++ has been updated. You can now use thread_local for statics with non-trivial destructors (Clang only)
  • RenderScript is back!

Unified Headers

We've completely redone how we ship platform header files in the NDK. Rather than having one set of headers for every target API level, there's now a single set of headers. The availability of APIs for each Android platform is guarded in these headers by #if __ANDROID_API__ >= __ANDROID_API_FOO__ preprocessor directives.

The prior approach relied on periodically-captured snapshots of the platform headers. This meant that any time we fixed a header-only bug, the fix was only available in the latest version aside from the occasional backport. Now bugfixes are available regardless of your NDK API level.

Aside from bugfixes, this also means you'll have access to modern Linux UAPI headers at every target version. This will mostly be important for people porting existing Linux code (especially low-level things). Something important to keep in mind: just because you have the headers doesn't mean you're running on a device with a kernel new enough to support every syscall. As always with syscalls, ENOSYS is a possibility.

Beyond the Linux headers, you'll also have modern headers for OpenGL, OpenSLES, etc. This should make it easier to conditionally use new APIs when you have an older target API level. The GLES3 headers are now accessible on Ice Cream Sandwich even though that library wasn't available until KitKat. You will still need to use all the API calls via dlopen/dlsym, but you'll at least have access to all the constants and #defines that you would need for invoking those functions.
Note that we'll be removing the old headers from the NDK with r16, so the sooner you file bugs, the smoother the transition will go.

Caveats

The API #ifdef guards do not exist in third-party headers like those found in OpenGL. In those cases you'll receive a link-time error (undefined reference) rather than a compile-time error if you use an API that is not available in your targeted API level.

Standalone toolchains using GCC are not supported out of the box (nor will they be). To use GCC, pass -D__ANDROID_API__=$API when compiling.

Enabling Unified Headers in Your Build

To ease the transition from the legacy headers to the unified headers, we haven't enabled the new headers by default, though we'll be doing this in r15. How you opt in to unified headers will depend on your build system.

ndk-build
In your Application.mk:

    APP_UNIFIED_HEADERS := true
You can also set this property from the command-line like this:

    $ ndk-build APP_UNIFIED_HEADERS=true

If you're using ndk-build via Gradle with externalNativeBuild, specify the following configuration settings in build.gradle:

    android {
      ...
      defaultConfig {
        ...
        externalNativeBuild {
          ndkBuild {
            ...
            arguments "APP_UNIFIED_HEADERS=true"
          }
        }
      }
    }

CMake

When configuring your build, set ANDROID_UNIFIED_HEADERS=ON. This will usually take the form of invoking CMake with cmake -DANDROID_UNIFIED_HEADERS=ON $OTHER_ARGS.

If you're using CMake via Gradle with externalNativeBuild, you can use:

    android {
      ...
      defaultConfig {
        ...
        externalNativeBuild {
          cmake {
            ...
            arguments "-DANDROID_UNIFIED_HEADERS=ON"
          }
        }
      }
    }

Standalone Toolchains

When creating your standalone toolchain, pass --unified-headers. Note that this option is not currently available in the legacy script, make-standalone-toolchain.sh, but only in make_standalone_toolchain.py.

Experimental Gradle Plugin

Coming soon! Follow along here.

Custom Build System?

We've got you covered. Instructions on adding support for unified headers to your build system can be found here.

For additional information about unified headers, see our docs and the tracking bug. If you're looking ahead to future releases, the most up-to-date version of the documentation is in the master branch.
Categories: Programming

Get a sneak peek at Android Nougat 7.1.2

Android Developers Blog - Tue, 03/21/2017 - 19:21
Posted by Dave Burke, VP of Engineering

The next maintenance release for Android Nougat -- 7.1.2 -- is just around the corner! To get the recipe just right, starting today, we're rolling out a public beta to eligible devices that are enrolled in the Android Beta Program, including Pixel and Pixel XL, Nexus 5X, Nexus Player, and Pixel C devices. We're also preparing an update for Nexus 6P that we expect to release soon.

Android 7.1.2 is an incremental maintenance release focused on refinements, so it includes a number of bugfixes and optimizations, along with a small number of enhancements for carriers and users.

If you'd like to try the public beta for Android 7.1.2, the easiest way is through the Android Beta Program. If you have an eligible device that's already enrolled, you're all set -- your device will get the public beta update in the next few days and no action is needed on your part. If your device isn't enrolled, it only takes a moment to visit android.com/beta and opt-in your eligible Android phone or tablet -- you'll soon receive the public beta update over-the-air. As always, you can also download and flash this update manually.

We're expecting to launch the final release of Android 7.1.2 in just a couple of months. Like the beta, it will be available for Pixel, Pixel XL, Nexus 5X, Nexus 6P, Nexus Player, and Pixel C devices. Meanwhile, we welcome your feedback or requests in the Android Beta community as we work towards the final over-the-air update. Thanks for being part of the public beta!
Categories: Programming

TDD is not about unit tests

Xebia Blog - Tue, 03/21/2017 - 14:42

-- Dave Farley & Arjan Molenaar
On many occasions when we visit a customer, we're told the development team is doing TDD. Often, though, a team is writing unit tests, but it's not doing TDD. This is an important distinction. Unit tests are useful things. Unit testing, though, says nothing about how to create […]

The post TDD is not about unit tests appeared first on Xebia Blog.

Python 3: TypeError: Object of type ‘dict_values’ is not JSON serializable

Mark Needham - Sun, 03/19/2017 - 17:40

I’ve recently upgraded to Python 3 (I know, took me a while!) and realised that one of my scripts that writes JSON to a file no longer works!

This is a simplified version of what I’m doing:

>>> import json
>>> x = {"mark": {"name": "Mark"}, "michael": {"name": "Michael"}  } 
>>> json.dumps(x.values())
Traceback (most recent call last):
  File "", line 1, in 
  File "/usr/local/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/__init__.py", line 231, in dumps
    return _default_encoder.encode(obj)
  File "/usr/local/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/usr/local/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "/usr/local/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/encoder.py", line 180, in default
    o.__class__.__name__)
TypeError: Object of type 'dict_values' is not JSON serializable

Python 2.7 would be perfectly happy:

>>> json.dumps(x.values())
'[{"name": "Michael"}, {"name": "Mark"}]'

The difference is in the results returned by the values method:

# Python 2.7.10
>>> x.values()
[{'name': 'Michael'}, {'name': 'Mark'}]

# Python 3.6.0
>>> x.values()
dict_values([{'name': 'Mark'}, {'name': 'Michael'}])
>>> 

Python 3 no longer returns a list; instead we have a dict_values wrapper around the data.

Luckily this is easy to resolve – we just need to wrap the call to values with a call to list:

>>> json.dumps(list(x.values()))
'[{"name": "Mark"}, {"name": "Michael"}]'

This version works with Python 2.7 as well, so if I accidentally run the script with an old version the world isn't going to explode.

The post Python 3: TypeError: Object of type ‘dict_values’ is not JSON serializable appeared first on Mark Needham.

Categories: Programming

The Container Monitoring Problem

Xebia Blog - Thu, 03/16/2017 - 21:16

This post is part 1 in a 4-part series about Docker, Kubernetes and Mesos monitoring. This article dives into some of the new challenges containers and microservices create and the metrics you should focus on. Containers are a solution to the problem of how to get software to run reliably when moved from one environment […]

The post The Container Monitoring Problem appeared first on Xebia Blog.

Tips from developers Peak and Soundcloud on how to grow your startup on Google Play

Android Developers Blog - Thu, 03/16/2017 - 21:15

Posted by Francesca Di Felice, Developer Marketing at Google Play
At Playtime 2016, Google Play's series of developer events, we met with top app and game developers from around the world to share learnings on how to build successful businesses on Google Play. Several startups, including game developer Peaklabs and audio platform SoundCloud, presented on stage their own best practices for growth, which you might find helpful.

Testing for growth, by Peak

Hear from Kevin Shanahan, Product Manager from Peak, a brain training app, on how to grow sustainably.



  • Test lots of ideas: You can't be sure of what will work and what won't, so you need to test lots of ideas. Peak ran four different tests to try to increase conversions to Pro (their subscriber offering):
  1. Made the ability to replay games a Pro feature
  2. Reduced price of Pro by 25% in top 2 markets
  3. Bundled add-on modules from partners into Pro
  4. Showed a preview of Pro-only content
          One of these tests resulted in a 50% increase in conversions.

  • Get the basics right: Start with a great product and have a data-informed culture. Don't only test app features; experimenting with your store listing using store listing experiments is also important.
  • Build a robust A/B testing process: Having a well-defined A/B testing process and a system for tracking your experiments is key to testing quickly and effectively.

Improving user retention, by SoundCloud

Andy Carvell, former Product Manager at SoundCloud, an online audio distribution platform that enables its users to upload, record, promote, and share their originally-created sounds, explains how they focus on retention to improve growth.

 
  • Design your retention strategy: Apps with poor retention grow slowly. To increase your retention you should:
    • Convert new users to repeat visitors by providing a strong onboarding experience and taking a high-touch approach during the first days and weeks.
    • Increase visit frequency within this group by providing frequent, timely, and relevant messaging about content or activity on the platform.
    • Target returning users who haven't been seen over the last period and are at risk of churning by giving them reasons to come back for another session before you lose them.
    • Re-activate lapsed (long-term churned) users with campaigns that remind them about your app and offer an incentive to return.
  • Build 'growth machines': Create repeatable processes that testing has proven to positively impact retention, retain users, and prevent churn.
  • Use activity notifications in a personalised and effective way: At SoundCloud plenty of things happen when users are not in the app that might be relevant to them, for example new content releases or social interactions. They tested 5 new notification types, always keeping a control group to track the impact, and managed to increase retention by 5%. Watch the video above for more of Andy's tips on making better use of notifications.

Other speakers, such as Silicon Valley VC Greylock, have also shared their tips for startup growth. Watch more sessions from this year's Playtime events to learn best practices from other apps and game partners, and the Google Play team. Get the Playbook for Developers app to stay up to date with news and tips to help you grow a successful business on Google Play.

Categories: Programming

Android Developer Story: Wallapop improves user conversions with store listing experiments on Google Play

Android Developers Blog - Thu, 03/16/2017 - 21:07
Posted by Lily Sheringham, Developer Marketing, Google Play
Wallapop is a mobile app developer based in Barcelona, Spain. The app gives users a platform for buying and selling second-hand items with people nearby, creating a virtual flea market through geolocation. Wallapop now has over 70% of their user base on Android.

Watch Agus Gomez, Co-Founder & CEO, and Marta Gui, Growth Hacking Manager, explain how using store listing experiments has increased their conversion rate by 17%, and has allowed them to optimize organic installs.


Learn more about store listing experiments. Get the Playbook for Developers app to stay up-to-date with more features and best practices that will help you grow a successful business on Google Play.


Categories: Programming

Discover and celebrate the best local games at Indonesia Games Contest

Android Developers Blog - Thu, 03/16/2017 - 21:03

Posted by David Yin, Business Development Manager, Indonesia, Google Play.

It is a great time to be a mobile game developer on Android, with the opportunity to reach more than a billion global users on Google Play. At the same time, developers in fast-growing mobile markets like Indonesia have an additional opportunity in the form of a huge local audience that is hungry for local content. We have already seen thousands of Indonesian developers launch high quality, locally relevant games for this new audience, such as "Tahu Bulat" & "Tebak Gambar".

In our continuous quest to discover, nurture, and showcase the best games from Indonesia, we are really happy to announce the Indonesia Games Contest. This contest celebrates the passion and great potential of local game developers, and provides an opportunity to raise awareness of your game among global and local industry experts, together with gamers from across Indonesia. It's also a chance to showcase your creativity and win cool prizes.
Entering the contest

The contest is only open to developers based in Indonesia who have published a new game on Google Play after 1 January 2016. Make sure to visit our contest website for the full list of eligibility criteria and terms. A quick summary of the process is below:
  1. If you are eligible, submit your game by 19 March 2017.
  2. Entries will be reviewed by the Google Play team and industry experts, and up to 15 finalists will be announced in early April 2017.
  3. The finalists will get to showcase their games at the final event in Jakarta on 26 April 2017.
  4. The winner and runners-up will be announced at the final event.
To get started

Visit our contest website to find out more about the contest and submit your game.
Terima Kasih!


Categories: Programming

Engaging users during major events: How The Guardian used innovative notifications

Android Developers Blog - Thu, 03/16/2017 - 20:59
Posted By Tamzin Taylor, Partner Development at Google Play

Major sporting, cultural, and political events present an opportunity to re-engage users if you can find a relevant and unique way to serve them information. For example, The Guardian was able to substantially increase user engagement with its mobile app during the recent US election by using new notifications functionality in Android 7.0 Nougat. While notifications themselves are nothing new, The Guardian used innovative techniques and design elements to give their users a rich, real-time update on the election results as they happened.
How The Guardian innovated with notifications

Users who opted in received a single, continuously updating notification that persisted on their lock screen as results came in on election night. The notification used avatars of the candidates and a progress bar to bring the information to life.




The notification showed the most up-to-date numbers of electoral votes won and states called, an indication of which swing states had been called, and the breakdown of the popular vote between the two leading candidates.
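
The Guardian hasn't published the code behind this, but the underlying technique is standard notification machinery: re-issue a notification with the same id so it updates in place, silence repeat alerts, and drive a progress bar from the live data. A rough sketch, with all ids, numbers, and names purely illustrative:

    import android.app.Notification;
    import android.app.NotificationManager;
    import android.content.Context;

    public class ResultsNotifier {
        private static final int RESULTS_ID = 42; // reusing the id updates the card in place

        public static void postUpdate(Context ctx, int votesA, int votesB) {
            NotificationManager nm =
                    (NotificationManager) ctx.getSystemService(Context.NOTIFICATION_SERVICE);
            Notification n = new Notification.Builder(ctx)
                    .setSmallIcon(android.R.drawable.stat_notify_sync) // placeholder icon
                    .setContentTitle("Election results")
                    .setContentText(votesA + " vs " + votesB + " electoral votes")
                    .setProgress(270, Math.max(votesA, votesB), false) // 270 votes to win
                    .setOngoing(true)          // keep it pinned while results come in
                    .setOnlyAlertOnce(true)    // update silently rather than re-alerting
                    .build();
            nm.notify(RESULTS_ID, n);          // same id, so this replaces the last update
        }
    }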

"Having the ability to have a constantly updating notification on screen, allowed us to keep our users engaged throughout election night". – Rob Phillips from The Guardian
Another important feature was the ability to notify users of major updates with a link to detailed information and analysis. In order to do this, the Guardian allowed the newsroom teams to push notifications of major events, such as when the 270 vote mark was passed.

"Our newsroom could let our readers know in real time when there was a serious milestone, and we were able to deliver 101 unique notifications during the course of the evening. The clear menu options acted as key drivers to our journalism as the news unfolded, and meant we could get our readers connected with our content when they were most receptive". – Rob Phillips from The Guardian
Results and next steps
The engagement results were impressive:
  • 170K people signed up to see the alert, with 122K users interacting with the alert
  • Users interacted with the notification around 620K times in total, an average of 5.1 interactions per user
  • 74% of users who saw the notification tapped through to the main live blog
  • 25% of users who saw the notification tapped through to our full results content
Finally, perhaps the most impressive statistic is that promoting live updates via the notification resulted in a 103% increase in daily installs during election week.

"By providing our users with the ability to quickly and easily check information, to highlight major moments and to direct people to where to find more information, we can deliver value to our readers, helping them make sense of the events wherever they are, quickly and succinctly. After all, that's what we're here to do as a news company, and we're delighted that the new functionality on Nougat lets us do that" – Rob Phillips from The Guardian
On the back of the success of using Android N capabilities for live notifications, the Guardian plans to test the same approach with sports content, and explore how it could be applied more extensively to other major events like The Oscars and the Super Bowl.


Categories: Programming