Software Development Blogs: Programming, Software Testing, Agile Project Management


Programming

Improvements for smaller app downloads on Google Play

Android Developers Blog - Fri, 07/22/2016 - 18:55

Posted by Anthony Morris, SWE Google Play

Google Play continues to grow rapidly, as Android users installed over 65 billion apps in the last year from the Google Play Store. We’re also seeing developers move to update their apps more frequently to push great new content, patch security vulnerabilities, and iterate quickly on user feedback.

However, many users are sensitive to the amount of data they use, especially if they are not on Wi-Fi. Google Play is investing in improvements to reduce the data that needs to be transferred for app installs and updates, while making data cost more transparent to users.

Read on to understand the updates and learn some tips for ways to optimize the size of your APK.

New Delta algorithm to reduce the size of app updates

For approximately 98% of app updates from the Play Store, only changes (deltas) to APK files are downloaded and merged with the existing files, reducing the size of updates. We recently rolled out a delta algorithm, bsdiff, that further reduces patches by up to 50% or more compared to the previous algorithm. Bsdiff is specifically targeted to produce more efficient deltas of native libraries by taking advantage of the specific ways in which compiled native code changes between versions. To be most effective, native libraries should be stored uncompressed (compression interferes with delta algorithms).

An example from Chrome:

Patch description          Previous patch size   Bsdiff size
M46 to M47 major update    22.8 MB               12.9 MB
M47 minor update           15.3 MB               3.6 MB

Apps that don’t have uncompressed native libraries can see a 5% decrease in size on average, compared to the previous delta algorithm.

Applying the delta algorithm to APK Expansion Files to further reduce update size

APK Expansion Files allow you to include additional large files up to 2GB in size (e.g. high resolution graphics or media files) with your app, which is especially popular with games. We have recently expanded our delta and compression algorithms to apply to these APK Expansion Files in addition to APKs, reducing the download size of initial installs by 12%, and updates by 65% on average.

Clearer size information in the Play Store

Alongside the improvements to reduce download size, we also made information displayed about data used and download sizes in the Play Store clearer. You can now see actual download sizes, not the APK file size, in the Play Store. If you already have an app, you will only see the update size. These changes are rolling out now.

Example 1: Showing new “Download size” of APK

Example 2: Showing new “Update size” of APK

Tips to reduce your download sizes

1. Optimize for the right size measurements: Users care about download size (i.e. how many bytes are transferred when installing or updating an app), and they care about disk size (i.e. how much space the app takes up on disk). It’s important to note that neither of these is the same as the original APK file size, nor are they necessarily correlated.


Chrome example:

                           Compressed native library   Uncompressed native library
APK size                   39 MB                        52 MB (+25%)
Download size (install)    29 MB                        29 MB (no change)
Download size (update)     29 MB                        21 MB (-29%)
Disk size                  71 MB                        52 MB (-26%)

Chrome found that the initial download size remained the same when they stopped compressing the native library in their APK, even though the APK size increased, because Google Play already performs compression for downloads. They also found that the update size decreased, as deltas are more effective with uncompressed files, and that the disk size decreased, as you no longer need a compressed copy of the native library.

2. Reduce your APK size: Remove unnecessary data from the APK like unused resources and code.

3. Optimize parts of your APK to make them smaller: Use more efficient file formats, for example WebP instead of JPEG, and use Proguard to remove unused code (see the sketch below for a quick WebP size comparison).
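As a quick illustration of the WebP tip, here is a throwaway sketch for comparing file sizes when a JPEG is re-encoded as WebP. The paths and quality value are hypothetical, and in practice you would convert resources at build time (e.g. with the cwebp tool) rather than on-device; this is just a way to experiment with the size trade-off.

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class WebPSizeCheck {

    // Re-encode a JPEG as WebP and return the new file's size in bytes.
    public static long jpegToWebP(File jpeg, File webp, int quality) throws IOException {
        // decodeFile returns null if the source can't be parsed; a real
        // implementation would check for that.
        Bitmap bitmap = BitmapFactory.decodeFile(jpeg.getAbsolutePath());
        try (FileOutputStream out = new FileOutputStream(webp)) {
            bitmap.compress(Bitmap.CompressFormat.WEBP, quality, out);
        }
        return webp.length();
    }
}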

Read more about reducing APK sizes and watch the I/O 2016 session ‘Putting Your App on a Diet’ to learn from Wojtek Kaliński how to reduce the size of your APK.

Categories: Programming

Mahout/Hadoop: org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4

Mark Needham - Fri, 07/22/2016 - 14:55

I’ve been working my way through Dragan Milcevski’s mini tutorial on using Mahout to do content based filtering on documents and reached the final step where I needed to read in the generated item-similarity files.

I got the example compiling by using the following Maven dependency:

<dependency>
      <groupId>org.apache.mahout</groupId>
      <artifactId>mahout-core</artifactId>
      <version>0.9</version>
</dependency>

Unfortunately when I ran the code I ran into a version incompatibility problem:

Exception in thread "main" org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4
	at org.apache.hadoop.ipc.Client.call(Client.java:1113)
	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
	at com.sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
	at com.sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
	at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
	at org.apache.hadoop.hdfs.DFSClient.createNamenode(DFSClient.java:183)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:281)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:245)
	at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:100)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1446)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1464)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:263)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:124)
	at com.markhneedham.mahout.Similarity.getDocIndex(Similarity.java:86)
	at com.markhneedham.mahout.Similarity.main(Similarity.java:25)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)

Version 0.9.0 of mahout-core was published in early 2014, so I expect it was built against an earlier version of Hadoop than the one I’m using (2.7.2).

I tried updating the Hadoop dependencies that appeared in the stack trace, but to no avail:

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.7.2</version>
</dependency>
 
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.7.2</version>
</dependency>

When stepping through the stack trace I noticed that my program was still using an old version of hadoop-core, so with one last throw of the dice I decided to try explicitly excluding that:

<dependency>
    <groupId>org.apache.mahout</groupId>
    <artifactId>mahout-core</artifactId>
    <version>0.9</version>
 
    <exclusions>
        <exclusion>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>

And amazingly it worked. Now, finally, I can see how similar my documents are!
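For reference, here is a minimal sketch of reading those generated item-similarity sequence files back from HDFS. The HDFS URL and output path are assumptions rather than the actual values from my setup:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.util.ReflectionUtils;

public class ReadItemSimilarity {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://localhost:8020"); // assumed local HDFS

        // Hypothetical location of one of the item-similarity output files.
        Path path = new Path("/user/markneedham/item-similarity/part-r-00000");

        try (SequenceFile.Reader reader =
                     new SequenceFile.Reader(conf, SequenceFile.Reader.file(path))) {
            // Instantiate key/value holders from the classes recorded in the file.
            Writable key = (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
            Writable value = (Writable) ReflectionUtils.newInstance(reader.getValueClass(), conf);
            while (reader.next(key, value)) {
                System.out.println(key + " -> " + value);
            }
        }
    }
}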

Categories: Programming

Hadoop: DataNode not starting

Mark Needham - Fri, 07/22/2016 - 14:31

In my continued playing with Mahout I eventually decided to give up using my local file system and use a local Hadoop installation instead, since that seems to have much less friction when following any examples.

Unfortunately all my attempts to upload any files from my local file system to HDFS were being met with the following exception:

java.io.IOException: File /user/markneedham/book2.txt could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1448)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:690)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:342)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1350)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1346)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:742)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1344)
 
at org.apache.hadoop.ipc.Client.call(Client.java:905)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
at $Proxy0.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy0.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:928)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:811)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427)

I eventually realised, from looking at the output of jps, that the DataNode wasn’t actually starting up which explains the error message I was seeing.

A quick look at the log files showed what was going wrong:


/usr/local/Cellar/hadoop/2.7.1/libexec/logs/hadoop-markneedham-datanode-marks-mbp-4.zte.com.cn.log

2016-07-21 18:58:00,496 WARN org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException: Incompatible clusterIDs in /usr/local/Cellar/hadoop/hdfs/tmp/dfs/data: namenode clusterID = CID-c2e0b896-34a6-4dde-b6cd-99f36d613e6a; datanode clusterID = CID-403dde8b-bdc8-41d9-8a30-fe2dc951575c
2016-07-21 18:58:00,496 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to /0.0.0.0:8020. Exiting.
java.io.IOException: All specified directories are failed to load.
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:477)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1361)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1326)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:316)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:801)
        at java.lang.Thread.run(Thread.java:745)
2016-07-21 18:58:00,497 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to /0.0.0.0:8020
2016-07-21 18:58:00,602 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2016-07-21 18:58:02,607 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2016-07-21 18:58:02,608 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2016-07-21 18:58:02,610 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:

I’m not sure how my clusterIDs got out of sync, although I expect it happened because I reformatted HDFS at some stage without realising. There are other ways of solving this problem, but the quickest for me was to just nuke the DataNode’s data directory, which the log file told me sits here:

sudo rm -r /usr/local/Cellar/hadoop/hdfs/tmp/dfs/data/current

I then re-ran the hstart script that I stole from this tutorial, and everything, including the DataNode this time, started up correctly:

$ jps
26736 NodeManager
26392 DataNode
26297 NameNode
26635 ResourceManager
26510 SecondaryNameNode

And now I can upload local files to HDFS again. #win!

Categories: Programming

Mahout: Exception in thread “main” java.lang.IllegalArgumentException: Wrong FS: file:/… expected: hdfs://

Mark Needham - Thu, 07/21/2016 - 18:57

I’ve been playing around with Mahout over the last couple of days to see how well it works for content based filtering.

I started following a mini tutorial from Stack Overflow but ran into trouble on the first step:

bin/mahout seqdirectory \
--input file:///Users/markneedham/Downloads/apache-mahout-distribution-0.12.2/foo \
--output file:///Users/markneedham/Downloads/apache-mahout-distribution-0.12.2/foo-out \
-c UTF-8 \
-chunk 64 \
-prefix mah
16/07/21 21:19:20 INFO AbstractJob: Command line arguments: {--charset=[UTF-8], --chunkSize=[64], --endPhase=[2147483647], --fileFilterClass=[org.apache.mahout.text.PrefixAdditionFilter], --input=[file:///Users/markneedham/Downloads/apache-mahout-distribution-0.12.2/foo], --keyPrefix=[mah], --method=[mapreduce], --output=[file:///Users/markneedham/Downloads/apache-mahout-distribution-0.12.2/foo-out], --startPhase=[0], --tempDir=[temp]}
16/07/21 21:19:20 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/07/21 21:19:20 INFO deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
16/07/21 21:19:20 INFO deprecation: mapred.compress.map.output is deprecated. Instead, use mapreduce.map.output.compress
16/07/21 21:19:20 INFO deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS: file:/Users/markneedham/Downloads/apache-mahout-distribution-0.12.2/foo, expected: hdfs://localhost:8020
	at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:646)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:194)
	at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
	at org.apache.mahout.text.SequenceFilesFromDirectory.runMapReduce(SequenceFilesFromDirectory.java:156)
	at org.apache.mahout.text.SequenceFilesFromDirectory.run(SequenceFilesFromDirectory.java:90)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
	at org.apache.mahout.text.SequenceFilesFromDirectory.main(SequenceFilesFromDirectory.java:64)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
	at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
	at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:152)
	at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:195)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

I was trying to run the command against the local file system on my laptop, which should have been possible according to the instructions. I couldn’t find any flag that I could pass to Mahout to tell it not to use HDFS, but I eventually stumbled on someone else experiencing a similar problem.

It turns out the last time I was playing around with Hadoop, in late 2015, I’d actually configured HDFS as the default file system and had completely forgotten. I needed to comment out the following config:

/usr/local/Cellar/hadoop/2.7.1/libexec/etc/hadoop/core-site.xml

<property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:8020</value>
</property>

I commented that property out and all was happy with the (Hadoop) world again.

Categories: Programming

The Ultimate Tester: Wrap-Up

Xebia Blog - Thu, 07/21/2016 - 16:14
To everyone who has read all or some of the past blog posts in this series: thank you so much for reading. I hope I have given you some food for thought on where you can improve as a tester (or developer who tests!). In four blog posts, we explored what it takes to become

Our Answer To the Alert Storm: Introducing Team View Alerts

Xebia Blog - Thu, 07/21/2016 - 11:39
As a Dev or Ops it’s hard to focus on the things that really matter. Applications, systems, tools and other environments are generating notifications at a frequency and amount greater than you are able to cope with. It's a problem for every Dev and Ops professional. Alerts are used to identify trends, spikes or dips

Neo4j: Cypher – Detecting duplicates using relationships

Mark Needham - Wed, 07/20/2016 - 18:32

I’ve been building a graph of computer science papers on and off for a couple of months and now that I’ve got a few thousand loaded in I realised that there are quite a few duplicates.

They’re not duplicates in the sense that multiple entries have the same identifier; rather, they have different identifiers but seem to be the same paper!

e.g. there are a couple of papers titled ‘Authentication in the Taos operating system’:

http://dl.acm.org/citation.cfm?id=174614


http://dl.acm.org/citation.cfm?id=168640


This is the same paper published in two different journals as far as I can tell.

Now in this case it’s quite easy to just do a string similarity comparison of the titles of these papers and realise that they’re identical. I’ve previously used the excellent dedupe library to do this, and there’s also an excellent talk from Berlin Buzzwords 2014 where the author uses locality-sensitive hashing to achieve a similar outcome.

However, I was curious whether I could use any of the relationships these papers have to detect duplicates rather than just relying on string matching.

This is what the graph looks like:

[diagram of the graph model]

We’ll start by writing a query to see how many common references the different Taos papers have:

MATCH (r:Resource {id: "168640"})-[:REFERENCES]->(other)
WITH r, COLLECT(other) as myReferences
 
UNWIND myReferences AS reference
OPTIONAL MATCH path = (other)-[:REFERENCES]->(reference)
WITH other, COUNT(path) AS otherReferences, SIZE(myReferences) AS myReferences
WITH other, 1.0 * otherReferences / myReferences AS similarity WHERE similarity > 0.5
 
RETURN other.id, other.title, similarity
ORDER BY similarity DESC
LIMIT 10
╒════════╀═══════════════════════════════════════════╀══════════╕
β”‚other.idβ”‚other.title                                β”‚similarityβ”‚
β•žβ•β•β•β•β•β•β•β•β•ͺ═══════════════════════════════════════════β•ͺ══════════║
β”‚168640  β”‚Authentication in the Taos operating systemβ”‚1         β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚174614  β”‚Authentication in the Taos operating systemβ”‚1         β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

This query:

  • picks one of the Taos papers and finds its references
  • finds other papers which reference those same papers
  • calculates a similarity score based on how many common references they have
  • returns papers that have more than 50% of the same references with the most similar ones at the top

I tried it with other papers to see how it fared:

Performance of Firefly RPC

╒════════╀════════════════════════════════════════════════════════════════╀══════════════════╕
β”‚other.idβ”‚other.title                                                     β”‚similarity        β”‚
β•žβ•β•β•β•β•β•β•β•β•ͺ════════════════════════════════════════════════════════════════β•ͺ══════════════════║
β”‚74859   β”‚Performance of Firefly RPC                                      β”‚1                 β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚77653   β”‚Performance of the Firefly RPC                                  β”‚0.8333333333333334β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚110815  β”‚The X-Kernel: An Architecture for Implementing Network Protocolsβ”‚0.6666666666666666β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚96281   β”‚Experiences with the Amoeba distributed operating system        β”‚0.6666666666666666β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚74861   β”‚Lightweight remote procedure call                               β”‚0.6666666666666666β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚106985  β”‚The interaction of architecture and operating system design     β”‚0.6666666666666666β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚77650   β”‚Lightweight remote procedure call                               β”‚0.6666666666666666β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Authentication in distributed systems: theory and practice

╒════════╀══════════════════════════════════════════════════════════╀══════════════════╕
β”‚other.idβ”‚other.title                                               β”‚similarity        β”‚
β•žβ•β•β•β•β•β•β•β•β•ͺ══════════════════════════════════════════════════════════β•ͺ══════════════════║
β”‚121160  β”‚Authentication in distributed systems: theory and practiceβ”‚1                 β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚138874  β”‚Authentication in distributed systems: theory and practiceβ”‚0.9090909090909091β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Sadly it’s not as simple as finding 100% matches on references! I expect the later revisions of a paper added more content and therefore additional references.

What if we look for author similarity as well?

MATCH (r:Resource {id: "121160"})-[:REFERENCES]->(other)
WITH r, COLLECT(other) as myReferences
 
UNWIND myReferences AS reference
OPTIONAL MATCH path = (other)-[:REFERENCES]->(reference)
WITH r, other, COUNT(path) AS otherReferences, SIZE(myReferences) AS myReferences
WITH r, other, 1.0 * otherReferences / myReferences AS referenceSimilarity
WHERE referenceSimilarity > 0.5
 
MATCH (r)<-[:AUTHORED]-(author)
WITH r, other, referenceSimilarity, COLLECT(author) AS myAuthors
 
UNWIND myAuthors AS author
OPTIONAL MATCH path = (other)<-[:AUTHORED]-(author)
WITH other, referenceSimilarity, COUNT(path) AS otherAuthors, SIZE(myAuthors) AS myAuthors
WITH other, referenceSimilarity, 1.0 * otherAuthors / myAuthors AS authorSimilarity
WHERE authorSimilarity > 0.5
 
RETURN other.id, other.title, referenceSimilarity, authorSimilarity
ORDER BY (referenceSimilarity + authorSimilarity) DESC
LIMIT 10
╒════════╀══════════════════════════════════════════════════════════╀═══════════════════╀════════════════╕
β”‚other.idβ”‚other.title                                               β”‚referenceSimilarityβ”‚authorSimilarityβ”‚
β•žβ•β•β•β•β•β•β•β•β•ͺ══════════════════════════════════════════════════════════β•ͺ═══════════════════β•ͺ════════════════║
β”‚121160  β”‚Authentication in distributed systems: theory and practiceβ”‚1                  β”‚1               β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚138874  β”‚Authentication in distributed systems: theory and practiceβ”‚0.9090909090909091 β”‚1               β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
╒════════╀══════════════════════════════╀═══════════════════╀════════════════╕
β”‚other.idβ”‚other.title                   β”‚referenceSimilarityβ”‚authorSimilarityβ”‚
β•žβ•β•β•β•β•β•β•β•β•ͺ══════════════════════════════β•ͺ═══════════════════β•ͺ════════════════║
β”‚74859   β”‚Performance of Firefly RPC    β”‚1                  β”‚1               β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚77653   β”‚Performance of the Firefly RPCβ”‚0.8333333333333334 β”‚1               β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

I’m sure I could find some other papers where neither of these similarities worked well but it’s an interesting start.

I think the next step is to build up a training set of pairs of documents that are and aren’t similar to each other. We could then train a classifier to determine whether two documents are identical.

But that’s for another day!

Categories: Programming

Connecting your App to a Wi-Fi Device

Android Developers Blog - Wed, 07/20/2016 - 18:21

Posted by Rich Hyndman, Android Developer Advocate

With the growth of the Internet of Things, connecting Android applications to Wi-Fi enabled devices is becoming more and more common. Whether you’re building an app for a remote viewfinder, to set up a connected light bulb, or to control a quadcopter, if it’s Wi-Fi based you will need to connect to a hotspot that may not have Internet connectivity.

From Lollipop onwards the OS became a little more intelligent, allowing multiple network connections and not routing data to networks that don’t have Internet connectivity. That’s very useful for users, as they don’t lose connectivity when they’re near Wi-Fi networks with captive portals. Data routing APIs were added for developers, so you can ensure that only the appropriate app traffic is routed over the Wi-Fi connection to the external device.

To make the APIs easier to understand, it is good to know that there are 3 sets of networks available to developers:

  • WiFiManager#startScan returns a list of available Wi-Fi networks. They are primarily identified by SSID.
  • WiFiManager#getConfiguredNetworks returns a list of the Wi-Fi networks configured on the device, also indexed by SSID, but they are not necessarily currently available.
  • ConnectivityManager#getAllNetworks returns a list of networks that the phone is interacting with. This is necessary as, from Lollipop onwards, a device may be connected to multiple networks at once: Wi-Fi, LTE, Bluetooth, etc. The current state of each is available by calling ConnectivityManager#getNetworkInfo and is identified by a network ID.
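As a small sketch of that third list, this is roughly how you might enumerate the networks ConnectivityManager is interacting with (the class and method names below are illustrative, not from the original post):

import android.content.Context;
import android.net.ConnectivityManager;
import android.net.Network;
import android.net.NetworkInfo;
import android.util.Log;

public class NetworkLister {
    private static final String TAG = "NetworkLister";

    public static void logAllNetworks(Context context) {
        ConnectivityManager cm = (ConnectivityManager)
                context.getSystemService(Context.CONNECTIVITY_SERVICE);

        // On Lollipop and above the device may be interacting with
        // several networks at once (Wi-Fi, LTE, Bluetooth, ...).
        for (Network network : cm.getAllNetworks()) {
            NetworkInfo info = cm.getNetworkInfo(network);
            if (info != null) {
                Log.d(TAG, info.getTypeName() + " connected=" + info.isConnected());
            }
        }
    }
}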

In all versions of Android you start by scanning for available Wi-Fi networks with WiFiManager#startScan, iterate through the ScanResults looking for the SSID of your external Wi-Fi device. Once you’ve found it you can check if it is already a configured network using WifiManager#getConfiguredNetworks and iterating through the WifiConfigurations returned, matching on SSID. It’s worth noting that the SSIDs of the configured networks are enclosed in double quotes, whilst the SSIDs returned in ScanResults are not.

If your network is configured you can obtain the network ID from the WifiConfiguration object. Otherwise you can configure it using WifiManager#addNetwork and keep track of the network id that is returned.

To connect to the Wi-Fi network, register a BroadcastReceiver that listens for WifiManager.NETWORK_STATE_CHANGED_ACTION and then call WifiManager.enableNetwork(int netId, boolean disableOthers), passing in your network ID. The enableNetwork call disables all the other Wi-Fi access points for the next scan, locates the one you’ve requested, and connects to it. When you receive the network broadcasts you can check with WifiManager#getConnectionInfo that you’re successfully connected to the correct network. But, on Lollipop and above, if that network doesn’t have internet connectivity, network requests will not be routed to it.
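Putting those steps together, here is a minimal sketch of the lookup-and-connect flow. The SSID is a hypothetical example, the network is assumed to be open, and the BroadcastReceiver registration is omitted:

import android.content.Context;
import android.net.wifi.WifiConfiguration;
import android.net.wifi.WifiManager;

public class WifiConnector {

    // Hypothetical SSID of the external Wi-Fi device.
    private static final String DEVICE_SSID = "MyViewfinder";

    public static boolean connect(Context context) {
        WifiManager wifi = (WifiManager) context.getApplicationContext()
                .getSystemService(Context.WIFI_SERVICE);

        // Configured SSIDs are wrapped in double quotes; ScanResult SSIDs are not.
        String quotedSsid = "\"" + DEVICE_SSID + "\"";

        // Check whether the network is already configured.
        int netId = -1;
        for (WifiConfiguration conf : wifi.getConfiguredNetworks()) {
            if (quotedSsid.equals(conf.SSID)) {
                netId = conf.networkId;
                break;
            }
        }

        // Otherwise configure it (as an open network in this sketch).
        if (netId == -1) {
            WifiConfiguration conf = new WifiConfiguration();
            conf.SSID = quotedSsid;
            conf.allowedKeyManagement.set(WifiConfiguration.KeyMgmt.NONE);
            netId = wifi.addNetwork(conf);
        }

        // Disable the other access points for the next scan and connect.
        return netId != -1 && wifi.enableNetwork(netId, true);
    }
}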

Routing network requests

To direct all the network requests from your app to an external Wi-Fi device, call ConnectivityManager#setProcessDefaultNetwork on Lollipop devices, and on Marshmallow call ConnectivityManager#bindProcessToNetwork instead, which is a direct API replacement. Note that these calls require android.permission.INTERNET; otherwise they will just return false.

Alternatively, you may want to route only some of your app traffic to the Wi-Fi device while the rest goes to the Internet over the mobile network.
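One way to do that is to request a Network object for the Wi-Fi transport and open connections through it; a minimal sketch, where the device URL is a hypothetical example:

import android.content.Context;
import android.net.ConnectivityManager;
import android.net.Network;
import android.net.NetworkCapabilities;
import android.net.NetworkRequest;

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class DeviceTraffic {

    public static void routeToDevice(Context context) {
        ConnectivityManager cm = (ConnectivityManager)
                context.getSystemService(Context.CONNECTIVITY_SERVICE);

        // Ask for a Wi-Fi network; it need not provide Internet access.
        NetworkRequest request = new NetworkRequest.Builder()
                .addTransportType(NetworkCapabilities.TRANSPORT_WIFI)
                .build();

        cm.requestNetwork(request, new ConnectivityManager.NetworkCallback() {
            @Override
            public void onAvailable(Network network) {
                try {
                    // Connections opened via this Network object use the Wi-Fi
                    // device; everything else keeps using the default network.
                    HttpURLConnection connection = (HttpURLConnection)
                            network.openConnection(new URL("http://192.168.1.1/status"));
                    connection.getResponseCode();
                    connection.disconnect();
                } catch (IOException e) {
                    // Handle the error.
                }
            }
        });
    }
}

In a real app the network I/O in onAvailable should of course happen off the main thread.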

Now you can keep your users connected whilst they benefit from your innovative Wi-Fi enabled products.

Categories: Programming

Android Developer Story: StoryToys finds success in the β€˜Family’ section on Google Play

Android Developers Blog - Wed, 07/20/2016 - 17:21

Posted by Lily Sheringham, Google Play team

Based in Dublin, Ireland, StoryToys is a leading publisher of interactive books and games for children. Like most kids’ app developers, they faced the challenges of engaging with the right audiences to get their content discovered. Since the launch of the Family section on Google Play, StoryToys has experienced an uplift of 270% in revenue and an increase of 1300% in downloads.

Hear Emmet O’Neill, Chief Product Officer, and Gavin Barrett, Commercial Director, discuss how the Family section creates a trusted and creative space for families to find new content. Also hear how beta testing, localized pricing and more has allowed StoryToys’ flagship app, My Very Hungry Caterpillar, to significantly increase engagement and revenue.

Learn more about Google Play for Families and get the Playbook for Developers app to stay up-to-date with more features and best practices that will help you grow a successful business on Google Play.

Categories: Programming

Strictly Enforced Verified Boot with Error Correction

Android Developers Blog - Wed, 07/20/2016 - 01:51

Posted by Sami Tolvanen, Software Engineer

Overview

Android uses multiple layers of protection to keep users safe. One of these layers is verified boot, which improves security by using cryptographic integrity checking to detect changes to the operating system. Android has alerted users about system integrity issues since Marshmallow, but starting with devices first shipping with Android 7.0, we require verified boot to be strictly enforcing. This means that a device with a corrupt boot image or verified partition will not boot, or will boot in a limited capacity with user consent. Such strict checking, though, means that non-malicious data corruption, which previously would have been less visible, could now have a greater impact on functionality.

By default, Android verifies large partitions using the dm-verity kernel driver, which divides the partition into 4 KiB blocks and verifies each block against a signed hash tree as it is read. A single corrupted byte will therefore make an entire block inaccessible when dm-verity is in enforcing mode, with the kernel returning EIO errors to userspace on any access to the verified partition’s data.

This post describes our work in improving dm-verity robustness by introducing forward error correction (FEC), and explains how this allowed us to make the operating system more resistant to data corruption. These improvements are available to any device running Android 7.0 and this post reflects the default implementation in AOSP that we ship on our Nexus devices.

Error-correcting codes

Using forward error correction, we can detect and correct errors in source data by shipping redundant encoding data generated using an error-correcting code. The exact number of errors that can be corrected depends on the code used and the amount of space allocated for the encoding data.

Reed-Solomon is one of the most commonly used error-correcting code families, and is readily available in the Linux kernel, which makes it an obvious candidate for dm-verity. These codes can correct up to ⌊t/2⌋ unknown errors and up to t known errors, also called erasures, when t encoding symbols are added.

A typical RS(255, 223) code that generates 32 bytes of encoding data for every 223 bytes of source data can correct up to 16 unknown errors in each 255 byte block. However, using this code results in ~15% space overhead, which is unacceptable for mobile devices with limited storage. We can decrease the space overhead by sacrificing error correction capabilities. An RS(255, 253) code can correct only one unknown error, but also has an overhead of only 0.8%.
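As a sanity check on those overhead figures, the encoding overhead of an RS(n, k) code is (n - k) / k:

\[
\mathrm{RS}(255, 223):\ \frac{255 - 223}{223} = \frac{32}{223} \approx 14.3\%,
\qquad
\mathrm{RS}(255, 253):\ \frac{255 - 253}{253} = \frac{2}{253} \approx 0.8\%
\]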

An additional complication is that block-based storage corruption often occurs for an entire block and sometimes spans multiple consecutive blocks. Because Reed-Solomon is only able to recover from a limited number of corrupted bytes within relatively short encoded blocks, a naive implementation is not going to be very effective without a huge space overhead.

Recovering from consecutive corrupted blocks

In the changes we made to dm-verity for Android 7.0, we used a technique called interleaving to allow us to recover not only from the loss of an entire 4 KiB source block, but also from several consecutive blocks, while significantly reducing the space overhead required to achieve usable error correction capabilities compared to the naive implementation.

Efficient interleaving means mapping each byte in a block to a separate Reed-Solomon code, with each code covering N bytes across the corresponding N source blocks. A trivial interleaving where each code covers a consecutive sequence of N blocks already makes it possible for us to recover from the corruption of up to (255 - N) / 2 blocks, which for RS(255, 223) would mean 64 KiB, for example.

An even better solution is to maximize the distance between the bytes covered by the same code by spreading each code over the entire partition, thereby increasing the maximum number of consecutive corrupted blocks an RS(255, N) code can handle on a partition consisting of T blocks to ⌈T/N⌉ × (255 - N) / 2.

Interleaving with distance D and block size B.

An additional benefit of interleaving, when combined with the integrity verification already performed by dm-verity, is that we can tell exactly where the errors are in each code. Because each byte of the code covers a different source block, and we can verify the integrity of each block using the existing dm-verity metadata, we know which of the bytes contain errors. Being able to pinpoint erasure locations allows us to effectively double our error correction performance to at most ⌈T/N⌉ × (255 - N) consecutive blocks.

For a ~2 GiB partition with 524256 4 KiB blocks and RS(255, 253), the maximum distance between the bytes of a single code is 2073 blocks. Because each code can recover from two erasures, using this method of interleaving allows us to recover from up to 4146 consecutive corrupted blocks (~16 MiB). Of course, if the encoding data itself gets corrupted or we lose more than two of the blocks covered by any single code, we cannot recover anymore.
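Working through that arithmetic with the numbers quoted above (T = 524256 blocks, N = 253):

\[
\left\lceil \tfrac{T}{N} \right\rceil \times (255 - N)
= \left\lceil \tfrac{524256}{253} \right\rceil \times 2
= 2073 \times 2 = 4146 \text{ blocks},
\qquad
4146 \times 4\,\mathrm{KiB} \approx 16.2\,\mathrm{MiB}
\]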

While making error correction feasible for block-based storage, interleaving does have the side effect of making decoding slower, because instead of reading a single block, we need to read multiple blocks spread across the partition to recover from an error. Fortunately, this is not a huge issue when combined with dm-verity and solid-state storage as we only need to resort to decoding if a block is actually corrupted, which still is rather rare, and random access reads are relatively fast even if we have to correct errors.

Conclusion

Strictly enforced verified boot improves security, but can also reduce reliability by increasing the impact of disk corruption that may occur on devices due to software bugs or hardware issues.

The new error correction feature we developed for dm-verity makes it possible for devices to recover from the loss of up to 16-24 MiB of consecutive blocks anywhere on a typical 2-3 GiB system partition with only 0.8% space overhead and no performance impact unless corruption is detected. This improves the security and reliability of devices running Android 7.0.

Categories: Programming

SE-Radio Episode 263: Camille Fournier on Real-World Distributed Systems

Stefan Tilkov talks to Camille Fournier about the challenges developers face when building distributed systems. Topics include the definition of a distributed system, whether developers can avoid building them at all, and what changes occur once they choose to. They also talk about the role distributed consensus tools such as Apache Zookeeper play, and whether […]
Categories: Programming

Final Developer Preview before Android 7.0 Nougat begins rolling out

Android Developers Blog - Mon, 07/18/2016 - 18:45

Posted by Dave Burke, VP of Engineering

As we close in on the public rollout of Android 7.0 Nougat to devices later this summer, today we’re releasing Developer Preview 5, the last milestone of this preview series. Last month’s Developer Preview included the final APIs for Nougat; this preview gives developers the near-final system updates for all of the supported preview devices, helping you get your app ready for consumers.

Here’s a quick rundown of what’s included in the final Developer Preview of Nougat:

  • System images for Nexus and other preview devices
  • An emulator that you can use for doing the final testing of your apps to make sure they’re ready
  • The final N APIs (API level 24) and latest system behaviors and UI
  • The latest bug fixes and optimizations across the system and in preinstalled apps

Working with this latest Developer Preview, you should make sure your app handles all of the system behavior changes in Android N, like Doze on the Go, background optimizations, screen zoom, permissions changes, and more. Plus, you can take advantage of new developer features in Android N such as Multi-window support, Direct Reply and other notifications enhancements, Direct boot, new emojis and more.

Publish your apps to alpha, beta or production channels in Google Play

After testing your apps with Developer Preview 5 you should publish the updates to Google Play soon. We recommend compiling against, and optionally targeting, API 24 and then publishing to your alpha, beta, or production channels in the Google Play Developer Console. A great strategy is to use Google Play’s beta testing feature to get early feedback from a small group of users, including Developer Preview users, and then do a staged rollout as you release the updated app to all users.

How to get Developer Preview 5

If you are already enrolled in the Android Beta program, your devices will get the Developer Preview 5 update right away; no action is needed on your part. If you aren’t yet enrolled in Android Beta, the easiest way to get started is to visit android.com/beta and opt in your eligible Android phone or tablet; you’ll soon receive this preview update over-the-air. As always, you can also download and flash this update manually. The Nougat Developer Preview is available for Nexus 6, Nexus 5X, Nexus 6P, Nexus 9, and Pixel C devices, as well as General Mobile 4G [Android One] devices.

Thanks so much for all of your feedback so far. Please continue to share feedback or requests either in the N Developer Preview issue tracker, N Preview Developer community, or Android Beta community as we work towards the consumer release later this summer. Android Nougat is almost here!

Also, the Android engineering team will host a Reddit AMA on r/androiddev to answer all your technical questions about the platform tomorrow, July 19 from 12-2 PM (Pacific Time). We look forward to addressing your questions!

Categories: Programming

Announcing the Google Play Indie Games Festival in San Francisco, Sept. 24

Android Developers Blog - Thu, 07/14/2016 - 17:01
Posted by Jamil Moledina, Google Play, Games Strategic Lead

If you’re an indie game developer, you know that games are a powerful medium of expression of art, whimsy, and delight. Being on Google Play can help you reach over a billion users and build a successful, global business. That’s why we recently introduced programs, like the Indie Corner, to help more gamers discover your works of art.

To further celebrate and showcase the passion and innovation of indie game developers, we’re hosting the Google Play Indie Games Festival at the Terra Gallery in San Francisco, on September 24.

This is a great opportunity for you to showcase your indie title to the public, increase your network, and compete to win great prizes, such as Tango devices, free tickets for Google I/O 2017, and Google ad campaign support. Admission will be free and players will get the chance to play and vote on their favorites.

If you’re interested in showcasing your game, we’re accepting submissions now through August 14. We’ll then select high-quality games that are both innovative and fun for the festival. Submissions are open to US and Canadian developers with 15 or fewer full-time staff. Only games published on or after January 1, 2016, or those to be published by December 31, 2016, are eligible. See complete rules.

We encourage virtual reality and augmented reality game submissions that use the Google VR SDK and the Tango Tablet Development Kit.

At the end of August, we’ll announce the group of indies to be featured at the festival.

You can learn more about the event here. We can’t wait to see what innovative and fun experiences you share with us!

Categories: Programming

Android Wear 2.0 Developer Preview 2

Android Developers Blog - Tue, 07/12/2016 - 18:26

Posted by Hoi Lam, Android Wear Developer Advocate

At Google I/O 2016, we launched the Android Wear 2.0 Developer Preview, which gives developers early access to the next major release of Android Wear. Since I/O, feedback from the developer community has helped us identify bugs and shape our product direction. Thank you!

Today, we are releasing the second developer preview with new functionalities and bug fixes. Prior to the consumer release, we plan to release additional updates, so please send us your feedback early and often. Please keep in mind that this preview is a work in progress, and is not yet intended for daily use.

What’s new?
  • Platform API 24 - We have incremented the Android Platform API version number to 24 to match Nougat. You can now update your Android Wear 2.0 Preview project’s compileSdkVersion to API 24, and we recommend that you also update targetSdkVersion to API 24.
  • Wearable Drawers Enhancements - We launched the wearable drawers as part of the Android Wear 2.0 Preview 1, along with UX guidelines on how to best integrate the navigation drawer and action drawer in your Android Wear app. In Preview 2, we have added additional support for wearable drawer peeking, to make it easier for users to access these drawers as they scroll. Other UI improvements include automatic peek view and navigation drawer closure and showing the first action in WearableActionDrawer’s peek view. For developers that want to make custom wearable drawers, we’ve added peek_view and drawer_content attributes to WearableDrawerView. And finally, navigation drawer contents can now be updated by calling notifyDataSetChanged.
  • Wrist Gestures: Since last year, users have been able to scroll through the notification stream via wrist gestures. We have now opened this system to developers to use within their applications. This helps improve single hand usage, for when your users need their other hand to hold onto their shopping or their kids. See the code sample below to get started with gestures in your app:
public class MainActivity extends Activity {
  ...
  @Override /* KeyEvent.Callback */
  public boolean onKeyDown(int keyCode, KeyEvent event) {
    switch (keyCode) {
      case KeyEvent.KEYCODE_NAVIGATE_NEXT:
        Log.d(TAG, "Next");
        break;
      case KeyEvent.KEYCODE_NAVIGATE_PREVIOUS:
        Log.d(TAG, "Previous");
        break;
    }
    // If you did not handle the event, let it be handled by the next
    // possible element, as deemed by the Activity.
    return super.onKeyDown(keyCode, event);
  }
}

Get started and give us feedback!

The Android Wear 2.0 Developer Preview includes an updated SDK with tools and system images for testing on the official Android emulator, the LG Watch Urbane 2nd Edition LTE, and the Huawei Watch.

To get started, follow these steps:

  1. Take a video tour of the Android Wear 2.0 developer preview
  2. Update to Android Studio v2.1.1 or later
  3. Visit the Android Wear 2.0 Developer Preview site for downloads and documentation
  4. Get the emulator system images through the SDK Manager or download the device system images from the developer preview downloads page
  5. Test your app with your supported device or emulator
  6. Give us feedback

We will update this developer preview over the next few months based on your feedback. The sooner we hear from you, the more we can include in the final release, so don't be shy!

Categories: Programming

SE Radio Episode 262: Software Quality with Bill Curtis

Sven Johann talks with Bill Curtis about software quality. They start with what software quality is and then discuss examples of systems that failed to achieve their quality goals (e.g. ObamaCare) and the consequences. They then go on to the role of architecture in the overall quality of the system and how to achieve it […]
Categories: Programming

Python: Scraping elements relative to each other with BeautifulSoup

Mark Needham - Mon, 07/11/2016 - 07:01

Last week we hosted a Game of Thrones based intro to Cypher at the Women Who Code London meetup and in preparation had to scrape the wiki to build a dataset.

I’ve built lots of datasets this way and it’s a painless experience as long as the pages make liberal use of CSS classes and/or IDs.

Unfortunately the Game of Thrones wiki doesn’t really do that, so I had to find another way to extract the data I wanted – extracting elements based on their position relative to more prominent elements on the page.

For example, I wanted to extract Arya Stark‘s allegiances which look like this on the page:

[screenshot of the Allegiance section]

We don’t have a direct route to her allegiances but we do have an indirect path via the h3 element with the text ‘Allegiance’.

The following code gets us the ‘Allegiance’ element:

from bs4 import BeautifulSoup
 
file_name = "Arya_Stark"
wikia = BeautifulSoup(open("data/wikia/characters/{0}".format(file_name), "r"), "html.parser")
allegiance_element = [tag for tag in wikia.find_all('h3') if tag.text == "Allegiance"]
 
> print allegiance_element
[<h3 class="pi-data-label pi-secondary-font">Allegiance</h3>]

Now we need to work out the relative position of the div containing the houses. It’s inside the same parent div so I thought it’d probably be the next sibling:

next_element = allegiance_element[0].next_sibling
 
> print next_element

Nope. Nothing! Hmmm, wonder why:

> print next_element.name, type(next_element)
None <class 'bs4.element.NavigableString'>

Ah, empty string. Maybe it’s the one after that?

next_element = allegiance_element[0].next_sibling.next_sibling
 
> print next_element.contents
[<a href="/wiki/House_Stark" title="House Stark">House Stark</a>, <br/>, <a href="/wiki/Faceless_Men" title="Faceless Men">Faceless Men</a>, u' (Formerly)']

Hoorah! After this it became a case of working out how the text was structured and pulling out what I wanted.

The code I ended up with is on github if you want to recreate it yourself.

Categories: Programming

Neo4j 3.0 Drivers – Failed to save the server ID and the certificate received from the server

Mark Needham - Mon, 07/11/2016 - 06:21

I’ve been using the Neo4j Java Driver on various local databases over the past week and ran into the following certificate problem a few times:

org.neo4j.driver.v1.exceptions.ClientException: Unable to process request: General SSLEngine problem
	at org.neo4j.driver.internal.connector.socket.SocketClient.start(SocketClient.java:88)
	at org.neo4j.driver.internal.connector.socket.SocketConnection.<init>(SocketConnection.java:63)
	at org.neo4j.driver.internal.connector.socket.SocketConnector.connect(SocketConnector.java:52)
	at org.neo4j.driver.internal.pool.InternalConnectionPool.acquire(InternalConnectionPool.java:113)
	at org.neo4j.driver.internal.InternalDriver.session(InternalDriver.java:53)
Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
	at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1431)
	at sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:535)
	at sun.security.ssl.SSLEngineImpl.writeAppRecord(SSLEngineImpl.java:1214)
	at sun.security.ssl.SSLEngineImpl.wrap(SSLEngineImpl.java:1186)
	at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:469)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.wrap(TLSSocketChannel.java:270)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.runHandshake(TLSSocketChannel.java:131)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.<init>(TLSSocketChannel.java:95)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.<init>(TLSSocketChannel.java:77)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.<init>(TLSSocketChannel.java:70)
	at org.neo4j.driver.internal.connector.socket.SocketClient$ChannelFactory.create(SocketClient.java:251)
	at org.neo4j.driver.internal.connector.socket.SocketClient.start(SocketClient.java:75)
	... 14 more
Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
	at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
	at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1728)
	at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:304)
	at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
	at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1497)
	at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:212)
	at sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
	at sun.security.ssl.Handshaker$1.run(Handshaker.java:919)
	at sun.security.ssl.Handshaker$1.run(Handshaker.java:916)
	at java.security.AccessController.doPrivileged(Native Method)
	at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1369)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.runDelegatedTasks(TLSSocketChannel.java:142)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.unwrap(TLSSocketChannel.java:203)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.runHandshake(TLSSocketChannel.java:127)
	... 19 more
Caused by: java.security.cert.CertificateException: Unable to connect to neo4j at `localhost:10003`, because the certificate the server uses has changed. This is a security feature to protect against man-in-the-middle attacks.
If you trust the certificate the server uses now, simply remove the line that starts with `localhost:10003` in the file `/Users/markneedham/.neo4j/known_hosts`.
The old certificate saved in file is:
-----BEGIN CERTIFICATE-----
7770ee598be69c8537b0e576e62442c84400008ca0d3e3565b379b7cce9a51de
0fd4396251df2e8da50eb1628d44dcbca3fae5c8fb9c0adc29396839c25eb0c8
 
-----END CERTIFICATE-----
The New certificate received is:
-----BEGIN CERTIFICATE-----
01a422739a39625ee95a0547fa99c7e43fbb33c70ff720e5ae4a8408421aa63b
2fe4f5d6094c5fd770ed1ad214dbdc428a6811d0955ed80d48cc67d84067df2c
 
-----END CERTIFICATE-----
 
	at org.neo4j.driver.internal.connector.socket.TrustOnFirstUseTrustManager.checkServerTrusted(TrustOnFirstUseTrustManager.java:153)
	at sun.security.ssl.AbstractTrustManagerWrapper.checkServerTrusted(SSLContextImpl.java:936)
	at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1484)
	... 28 more

I got a bit lazy and just nuked the file mentioned in the error message – /Users/markneedham/.neo4j/known_hosts – which led to this error the next time I called the driver from my application:

Failed to save the server ID and the certificate received from the server to file /Users/markneedham/.neo4j/known_hosts.
Server ID: localhost:10003
Received cert:
-----BEGIN CERTIFICATE-----
933c7ec5d6a1b876bd186dc6d05b04478ae771262f07d26a4d7d2e6b7f71054c
3e6b7c172474493b7fe93170d940b9cc3544661c7966632361649f2fda7c66be
 
-----END CERTIFICATE-----

I recreated the file with no content, tried again, and it worked fine. Alternatively, when working with local databases, we can avoid the issue altogether by turning off encryption:

Config config = Config.build().withEncryptionLevel( Config.EncryptionLevel.NONE ).toConfig();
 
try ( Driver driver = GraphDatabase.driver( "bolt://localhost:7687", config );
      Session session = driver.session() )
{
   // use the driver
}
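
If you’d rather keep encryption on, another option is to point the driver’s trust-on-first-use strategy at a different known hosts file, so a stale ~/.neo4j/known_hosts can’t get in the way. A minimal sketch, assuming the 1.0 driver’s Config.TrustStrategy API (the file path here is just an example):

import java.io.File;

import org.neo4j.driver.v1.Config;
import org.neo4j.driver.v1.Driver;
import org.neo4j.driver.v1.GraphDatabase;
import org.neo4j.driver.v1.Session;

// trust-on-first-use, but with the certificates stored in a file we control
Config config = Config.build()
        .withTrustStrategy( Config.TrustStrategy.trustOnFirstUse( new File( "/tmp/known_hosts" ) ) )
        .toConfig();

try ( Driver driver = GraphDatabase.driver( "bolt://localhost:7687", config );
      Session session = driver.session() )
{
   // use the driver
}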
Categories: Programming

R: Sentiment analysis of morning pages

Mark Needham - Sat, 07/09/2016 - 07:36

A couple of months ago I came across a cool blog post by Julia Silge where she runs a sentiment analysis algorithm over her tweet stream to see how her tweet sentiment has varied over time.

I wanted to give it a try but couldn’t figure out how to get a dump of my tweets, so I decided to try it out on the text from my morning pages writing, which I’ve been experimenting with for a few months.

Here’s an explanation of morning pages if you haven’t come across it before:

Morning Pages are three pages of longhand, stream of consciousness writing, done first thing in the morning.

*There is no wrong way to do Morning Pages* – they are not high art.

They are not even “writing.” They are about anything and everything that crosses your mind – and they are for your eyes only.

Morning Pages provoke, clarify, comfort, cajole, prioritize and synchronize the day at hand. Do not over-think Morning Pages: just put three pages of anything on the page…and then do three more pages tomorrow.

Most of my writing is complete gibberish, but I thought it’d be fun to see how my mood changes over time and whether the analysis identifies any peaks or troughs in sentiment that I could then look into further.

I’ve got one file per day so we’ll start by building a data frame containing the text, one row per day:

library(syuzhet)
library(lubridate)
library(ggplot2)
library(scales)
library(reshape2)
library(dplyr)
 
root="/path/to/files"
files = list.files(root)
 
df = data.frame(file = files, stringsAsFactors=FALSE)
df$fullPath = paste(root, df$file, sep = "/")
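# get_text_as_string comes from syuzhet and reads a whole file into a single string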
df$text = sapply(df$fullPath, get_text_as_string)

We end up with a data frame with 3 fields:

> names(df)
 
[1] "file"     "fullPath" "text"

Next we’ll run the sentiment analysis function – syuzhet#get_nrc_sentiment – over the data frame and get a score for each type of sentiment for each entry:

get_nrc_sentiment(df$text) %>% head()
 
  anger anticipation disgust fear joy sadness surprise trust negative positive
1     7           14       5    7   8       6        6    12       14       27
2    11           12       2   13   9      10        4    11       22       24
3     6           12       3    8   7       7        5    13       16       21
4     5           17       4    7  10       6        7    13       16       37
5     4           13       3    7   7       9        5    14       16       25
6     7           11       5    7   8       8        6    15       16       26

Now we’ll merge these columns into our original data frame:

df = cbind(df, get_nrc_sentiment(df$text))
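# the files are named yyyy-mm-dd.<extension>, so the date can be parsed straight from the file name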
df$date = ymd(sapply(df$file, function(file) unlist(strsplit(file, "[.]"))[1]))
df %>% select(-text, -fullPath, -file) %>% head()
 
  anger anticipation disgust fear joy sadness surprise trust negative positive       date
1     7           14       5    7   8       6        6    12       14       27 2016-01-02
2    11           12       2   13   9      10        4    11       22       24 2016-01-03
3     6           12       3    8   7       7        5    13       16       21 2016-01-04
4     5           17       4    7  10       6        7    13       16       37 2016-01-05
5     4           13       3    7   7       9        5    14       16       25 2016-01-06
6     7           11       5    7   8       8        6    15       16       26 2016-01-07

Finally we can build some ‘sentiment over time’ charts like Julia has in her post:

posnegtime <- df %>% 
  group_by(date = cut(date, breaks="1 week")) %>%
  summarise(negative = mean(negative), positive = mean(positive)) %>% 
  melt
 
names(posnegtime) <- c("date", "sentiment", "meanvalue")
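# reorder the factor levels to (positive, negative) so that positive picks up the green in scale_colour_manual below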
posnegtime$sentiment = factor(posnegtime$sentiment,levels(posnegtime$sentiment)[c(2,1)])
 
ggplot(data = posnegtime, aes(x = as.Date(date), y = meanvalue, group = sentiment)) +
  geom_line(size = 2.5, alpha = 0.7, aes(color = sentiment)) +
  geom_point(size = 0.5) +
  ylim(0, NA) + 
  scale_colour_manual(values = c("springgreen4", "firebrick3")) +
  theme(legend.title=element_blank(), axis.title.x = element_blank()) +
  scale_x_date(breaks = date_breaks("1 month"), labels = date_format("%b %Y")) +
  ylab("Average sentiment score") + 
  ggtitle("Sentiment Over Time")

[chart: average positive and negative sentiment per week]

So overall it seems like my writing displays more positive sentiment than negative, which is nice to know. The chart shows a rolling one-week average, and there isn’t a single week where negative sentiment outweighs positive.

I thought it’d be fun to drill into the highest negative and positive days to see what was going on there:

> df %>% filter(negative == max(negative)) %>% select(date)
 
        date
1 2016-03-19
 
> df %>% filter(positive == max(positive)) %>% select(date)
 
        date
1 2016-01-05
2 2016-06-20

On the 19th March I was really frustrated because my boiler had broken down and I had to buy a new one – I’d completely forgotten how annoyed I was, so thanks, sentiment analysis, for reminding me!

I couldn’t find anything particularly positive on the 5th January or 20th June. The 5th January was the day after my birthday, so perhaps I was happy about that, but I couldn’t see any particular evidence that this was the case.

Playing around with the get_nrc_sentiment function, it does seem to identify positive sentiment where I wouldn’t say there is any. For example, here are some sentences from my writing today:

> get_nrc_sentiment("There was one section that I didn't quite understand so will have another go at reading that.")
 
  anger anticipation disgust fear joy sadness surprise trust negative positive
1     0            0       0    0   0       0        0     0        0        1
> get_nrc_sentiment("Bit of a delay in starting my writing for the day...for some reason was feeling wheezy again.")
 
  anger anticipation disgust fear joy sadness surprise trust negative positive
1     2            1       2    2   1       2        1     1        2        2

I don’t think there’s any positive sentiment in either of those sentences, yet the function finds three units of positive sentiment between them! It would be interesting to see whether I fare any better with Stanford’s sentiment extraction tool, which you can use with syuzhet but which requires a bit of setup first.

I’ll give that a try next, but in terms of getting an overview of my mood I thought I might get a better picture by looking at the difference between positive and negative sentiment rather than the absolute values.

The following code does the trick:

difftime <- df %>% 
  group_by(date = cut(date, breaks="1 week")) %>%
  summarise(diff = mean(positive) - mean(negative))
 
ggplot(data = difftime, aes(x = as.Date(date), y = diff)) +
  geom_line(size = 2.5, alpha = 0.7) +
  geom_point(size = 0.5) +
  ylim(0, NA) + 
  scale_colour_manual(values = c("springgreen4", "firebrick3")) +
  theme(legend.title=element_blank(), axis.title.x = element_blank()) +
  scale_x_date(breaks = date_breaks("1 month"), labels = date_format("%b %Y")) +
  ylab("Average sentiment difference score") + 
  ggtitle("Sentiment Over Time")

[chart: weekly average of positive sentiment minus negative sentiment]

This one identifies peak happiness in mid-January/February. We can find the peak day for this measure as well:

> df %>% mutate(diff = positive - negative) %>% filter(diff == max(diff)) %>% select(date)
 
        date
1 2016-02-25

Or if we want to see the individual scores:

> df %>% mutate(diff = positive - negative) %>% filter(diff == max(diff)) %>% select(-text, -file, -fullPath)
 
  anger anticipation disgust fear joy sadness surprise trust negative positive       date diff
1     0           11       2    3   7       1        6     6        3       31 2016-02-25   28

After reading through the entry for this day I’m wondering if the individual pieces of sentiment might be more interesting than the positive/negative score.

On the 25th February I was:

  • quite excited about reading a distributed systems book I’d just bought (I know?!)
  • thinking about how to apply the tag clustering technique to meetup topics
  • preparing my submission to PyData London and thinking about what was gonna go in it
  • thinking about the soak testing we were about to start doing on our project

Each of those is a type of anticipation, so it makes sense that this day scores highly. I looked through some other days which rank highly for anticipation and couldn’t figure out what I was anticipating, so even this is a bit hit and miss!
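
One avenue is to chart each emotion separately rather than just the positive/negative totals. As a rough sketch, reusing the data frame built above:

emotions <- df %>%
  select(date, anger, anticipation, disgust, fear, joy, sadness, surprise, trust) %>%
  group_by(date = cut(date, breaks = "1 week")) %>%
  summarise_each(funs(mean)) %>%
  melt(id.vars = "date", variable.name = "emotion", value.name = "meanvalue")

ggplot(data = emotions, aes(x = as.Date(date), y = meanvalue, colour = emotion)) +
  geom_line(size = 1, alpha = 0.7) +
  theme(axis.title.x = element_blank()) +
  scale_x_date(breaks = date_breaks("1 month"), labels = date_format("%b %Y")) +
  ylab("Average emotion score") +
  ggtitle("Individual emotions over time")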

I have a few avenues to explore further but if you have any other ideas for what I can try next let me know in the comments.

Categories: Programming

Changes to Trusted Certificate Authorities in Android Nougat

Android Developers Blog - Fri, 07/08/2016 - 18:41

Posted by Chad Brubaker, Android Security team

In Android Nougat, we’ve changed how Android handles trusted certificate authorities (CAs) to provide safer defaults for secure app traffic. Most apps and users should not be affected by these changes or need to take any action. The changes include:

  • Safe and easy APIs to trust custom CAs.
  • Apps that target API Level 24 and above no longer trust user or admin-added CAs for secure connections, by default.
  • All devices running Android Nougat offer the same standardized set of system CAs – no device-specific customizations.

For more details on these changes and what to do if you’re affected by them, read on.

Safe and easy APIs

Apps have always been able to customize which certificate authorities they trust. However, we saw apps making mistakes due to the complexities of the Java TLS APIs. To address this, we improved the APIs for customizing trust.

User-added CAs

Protection of all application data is a key goal of the Android application sandbox. Android Nougat changes how applications interact with user- and admin-supplied CAs. By default, apps that target API level 24 will, by design, not honor such CAs unless the app explicitly opts in. This safe-by-default setting reduces application attack surface and encourages consistent handling of network and file-based application data.

Customizing trusted CAs

Customizing the CAs your app trusts on Android Nougat is easy using the Network Security Config. Trust can be specified across the whole app or only for connections to certain domains, as needed. Below are some examples for trusting a custom or user-added CA, in addition to the system CAs. For more examples and details, see the full documentation.

Trusting custom CAs for debugging

To allow your app to trust custom CAs only for local debugging, include something like this in your Network Security Config. The CAs will only be trusted while your app is marked as debuggable.

<network-security-config>  
      <debug-overrides>  
           <trust-anchors>  
                <!-- Trust user added CAs while debuggable only -->
                <certificates src="user" />  
           </trust-anchors>  
      </debug-overrides>  
 </network-security-config>
Trusting custom CAs for a domain

To allow your app to trust custom CAs for a specific domain, include something like this in your Network Security Config.

<network-security-config>  
      <domain-config>  
           <domain includeSubdomains="true">internal.example.com</domain>  
           <trust-anchors>  
                <!-- Only trust the CAs included with the app  
                     for connections to internal.example.com -->  
                <certificates src="@raw/cas" />  
           </trust-anchors>  
      </domain-config>  
 </network-security-config>
Trusting user-added CAs for some domains

To allow your app to trust user-added CAs for multiple domains, include something like this in your Network Security Config.

<network-security-config>  
      <domain-config>  
           <domain includeSubdomains="true">userCaDomain.com</domain>  
           <domain includeSubdomains="true">otherUserCaDomain.com</domain>  
           <trust-anchors>  
                  <!-- Trust preinstalled CAs -->  
                  <certificates src="system" />  
                  <!-- Additionally trust user added CAs -->  
                  <certificates src="user" />  
           </trust-anchors>  
      </domain-config>  
 </network-security-config>
Trusting user-added CAs for all domains except some

To allow your app to trust user-added CAs for all domains, except for those specified, include something like this in your Network Security Config.

<network-security-config>  
      <base-config>  
           <trust-anchors>  
                <!-- Trust preinstalled CAs -->  
                <certificates src="system" />  
                <!-- Additionally trust user added CAs -->  
                <certificates src="user" />  
           </trust-anchors>  
      </base-config>  
      <domain-config>  
           <domain includeSubdomains="true">sensitive.example.com</domain>  
           <trust-anchors>  
                <!-- Only allow sensitive content to be exchanged  
                     with the real server and not any user or  
                     admin configured MiTMs -->  
                <certificates src="system" />  
           </trust-anchors>  
      </domain-config>  
 </network-security-config>
Trusting user-added CAs for all secure connections

To allow your app to trust user-added CAs for all secure connections, add this in your Network Security Config.

<network-security-config>  
      <base-config>  
            <trust-anchors>  
                <!-- Trust preinstalled CAs -->  
                <certificates src="system" />  
                <!-- Additionally trust user added CAs -->  
                <certificates src="user" />  
           </trust-anchors>  
      </base-config>  
 </network-security-config>
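
Whichever variant you use, the Network Security Config only takes effect once it is referenced from your app’s manifest – for example, if the config is saved as res/xml/network_security_config.xml:

<manifest xmlns:android="http://schemas.android.com/apk/res/android">  
      <application android:networkSecurityConfig="@xml/network_security_config">  
           ...  
      </application>  
 </manifest>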
Standardized set of system-trusted CAs

To provide a more consistent and more secure experience across the Android ecosystem, beginning with Android Nougat, compatible devices trust only the standardized system CAs maintained in AOSP.

Previously, the set of preinstalled CAs bundled with the system could vary from device to device. This could lead to compatibility issues when some devices did not include CAs that apps needed for connections, as well as potential security issues if CAs that did not meet our security requirements were included on some devices.

What if I have a CA I believe should be included on Android?

First, be sure that your CA needs to be included in the system. The preinstalled CAs are only for CAs that meet our security requirements because they affect the secure connections of most apps on the device. If you need to add a CA for connecting to hosts that use that CA, you should instead customize your apps and services that connect to those hosts. For more information, see the Customizing trusted CAs section above.

If you operate a CA that you believe should be included in Android, first complete the Mozilla CA Inclusion Process and then file a feature request against Android to have the CA added to the standardized set of system CAs.

Categories: Programming

Verbal Aikido for Product Managers

Xebia Blog - Sun, 07/03/2016 - 13:41
"Well eh ok, I guess so" mumbled the student in the training exercise where he was practicing how to say no to feature gluttony. I decided to give the class an additional exercise to awaken their inner diplomat. β€œDiplomacy is the art of telling people to go to hell in such a way that they