
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Programming

Neo4j: Graphing the 'My name is…I work' Twitter meme

Mark Needham - Tue, 02/28/2017 - 16:50

Over the last few days I’ve been watching the chain of ‘My name is…’ tweets kicked off by DHH with interest. As I understand it, the idea is to show that coding interview riddles/hard tasks on a whiteboard are ridiculous.

Hello, my name is David. I would fail to write bubble sort on a whiteboard. I look code up on the internet all the time. I don't do riddles.

— DHH (@dhh) February 21, 2017

Other people quoted that tweet and added their own piece, and yesterday Eduardo Hernacki suggested that traversing this chain of tweets seemed tailor-made for Neo4j.

@eduardohki is someone traversing all this stuff? #Neo4j

— Eduardo Hernacki (@eduardohki) February 28, 2017

Michael was quickly on the scene and created a Cypher query which calls the Twitter API and creates a Neo4j graph from the resulting JSON response. The only tricky bit is creating a ‘bearer token’ but Jason Kotchoff has a helpful gist showing how to generate one from your Twitter consumer key and consumer secret.
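
If you prefer to script that step, a rough Python sketch of the token exchange (Twitter's standard application-only auth flow; the requests library and the placeholder credentials below are my own additions, not part of Michael's query) looks something like this:

import base64
import requests

# Placeholder credentials: substitute your own Twitter consumer key and secret.
CONSUMER_KEY = "YOUR_CONSUMER_KEY"
CONSUMER_SECRET = "YOUR_CONSUMER_SECRET"

# Application-only auth: exchange base64(key:secret) for a bearer token.
credentials = base64.b64encode(
    (CONSUMER_KEY + ":" + CONSUMER_SECRET).encode("utf-8")).decode("utf-8")
response = requests.post(
    "https://api.twitter.com/oauth2/token",
    headers={"Authorization": "Basic " + credentials,
             "Content-Type": "application/x-www-form-urlencoded;charset=UTF-8"},
    data={"grant_type": "client_credentials"})
print(response.json()["access_token"])  # paste this value into the :param below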

Now that we’ve got our bearer token, let’s create a parameter to store it. Type the following in the Neo4j browser:

:param bearer: ''

Now we’re ready to query the Twitter API. We’ll start with the search API and find all tweets which contain the text ‘”my name” “I work”‘. That will return a JSON response containing lots of tweets. We’ll then create a node for each tweet it returns, a node for the user who posted the tweet, a node for the tweet it quotes, and relationships to glue them all together.

We’re going to use the apoc.load.jsonParams procedure from the APOC library to help us import the data. If you want to follow along you can use a Neo4j sandbox instance which comes with APOC installed. For your local Neo4j installation, grab the APOC jar and put it into your plugins folder before restarting Neo4j.

This is the query in full:

WITH 'https://api.twitter.com/1.1/search/tweets.json?count=100&result_type=recent&lang=en&q=' as url, {bearer} as bearer

CALL apoc.load.jsonParams(url + "%22my%20name%22%20is%22%20%22I%20work%22",{Authorization:"Bearer "+bearer},null) yield value

UNWIND value.statuses as status
WITH status, status.user as u, status.entities as e
WHERE status.quoted_status_id is not null

// create a node for the original tweet
MERGE (t:Tweet {id:status.id}) 
ON CREATE SET t.text=status.text,t.created_at=status.created_at,t.retweet_count=status.retweet_count, t.favorite_count=status.favorite_count

// create a node for the author + a POSTED relationship from the author to the tweet
MERGE (p:User {name:u.screen_name})
MERGE (p)-[:POSTED]->(t)

// create a MENTIONED relationship from the tweet to any users mentioned in the tweet
FOREACH (m IN e.user_mentions | MERGE (mu:User {name:m.screen_name}) MERGE (t)-[:MENTIONED]->(mu))

// create a node for the quoted tweet and create a QUOTED relationship from the original tweet to the quoted one
MERGE (q:Tweet {id:status.quoted_status_id})
MERGE (t)-[:QUOTED]->(q)

// repeat the above steps for the quoted tweet
WITH t as t0, status.quoted_status as status WHERE status is not null
WITH t0, status, status.user as u, status.entities as e

MERGE (t:Tweet {id:status.id}) 
ON CREATE SET t.text=status.text,t.created_at=status.created_at,t.retweet_count=status.retweet_count, t.favorite_count=status.favorite_count

MERGE (t0)-[:QUOTED]->(t)

MERGE (p:User {name:u.screen_name})
MERGE (p)-[:POSTED]->(t)

FOREACH (m IN e.user_mentions | MERGE (mu:User {name:m.screen_name}) MERGE (t)-[:MENTIONED]->(mu))

MERGE (q:Tweet {id:status.quoted_status_id})
MERGE (t)-[:QUOTED]->(q);

The resulting graph looks like this:

MATCH p=()-[r:QUOTED]->() RETURN p LIMIT 25

[Graph visualisation]

A more interesting query would be to find the path from DHH to Eduardo, which we can do with the following query:

match path = (dhh:Tweet {id: 834146806594433025})<-[:QUOTED*]-(eduardo:Tweet{id: 836400531983724545})
UNWIND NODES(path) AS tweet
MATCH (tweet)<-[:POSTED]-(user)
RETURN tweet, user

This query:

  • starts from DHH’s tweet
  • traverses all QUOTED relationships until it finds Eduardo’s tweet
  • collects all those tweets and then finds the author
  • returns the tweet and the author

And this is the output:

[Graph visualisation]

I ran a couple of other queries against the Twitter API to hydrate some nodes that we hadn’t set all the properties on – you can see all the queries on this gist.

For the next couple of days I also have a sandbox running at https://10-0-1-157-32898.neo4jsandbox.com/browser/. You can log in using the credentials readonly/twitter.

If you have any questions/suggestions let me know in the comments, @markhneedham on twitter, or email the Neo4j DevRel team – devrel@neo4j.com.


Categories: Programming

Fixing "HNS failed with error : Unspecified error" on docker-compose for Windows

Xebia Blog - Mon, 02/27/2017 - 00:44

The past few days I worked quite a lot with docker-compose on my Windows machine, and after something strange happened that crashed the machine, I was no longer able to start any containers that had network connectivity with each other. Every time I used the command-line docker-compose up, I would get […]


Azure Hidden Gems: Resource Policies

Xebia Blog - Sun, 02/26/2017 - 16:30

Today I want to show a really useful Azure feature to help you with the governance of your Azure Subscriptions: Azure Resource Policies: Resource policies enable you to establish conventions for resources in your organization. By defining conventions, you can control costs and more easily manage your resources. For example, you can specify that only […]


Keeping up to Date with the Support Library

Android Developers Blog - Fri, 02/24/2017 - 19:25
Posted by Agustin Fonts, Product Manager, Android Support Library

It's important to keep current when you're dealing with technology. That’s why we're constantly working to improve the quality of our software, particularly libraries that are linked into your apps, such as the Support Library.  The Support Library is a suite of libraries that provides backward compatibility along with additional features across many Android releases.

We have just released version 25.2 of the Support Library.  If you're making use of the android.support.v7.media.MediaRouter class in revision 25.1.1 or 25.1.0, we strongly recommend that you update due to a known issue.  If you haven't updated recently, you've missed out on some great bug fixes such as these:

25.2:
  • Corrected a severe mediarouter issue in which using an A2DP Bluetooth device and media routing APIs could cause the device to become unresponsive, requiring a reboot
  • Showing a slide presentation with screen mirroring no longer causes the device to disconnect from Wi-Fi
  • Media button now properly handles media apps that did not register themselves with setMediaButtonReceiver()
  • TextInputLayout correctly overlays hint and text if text is set by XML (AOSP issue 230171)
  • Corrected a memory leak in MediaControllerCompat (AOSP issue 231441)
  • RecyclerView no longer crashes when recycling view holders (AOSP issue 225762)
Reporting (and fixing) a Bug
The Support Library is developed by the Android Framework and Developer Relations teams, and, just like the Android platform, you can file bugs using the AOSP issue tracker, or submit fixes to our Git repository. Your feedback is critical in helping us to make the Support Library the most productive environment to use for developing Android applications.
Categories: Programming

Adding text and shapes with the Google Slides API

Google Code Blog - Thu, 02/23/2017 - 20:26
Originally shared on the G Suite Developers Blog

Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite
When the Google Slides team launched their very first API last November, it immediately opened up a whole new class of applications. These applications have the ability to interact with the Slides service, so you can perform operations on presentations programmatically. Since its launch, we've published several videos to help you realize some of those possibilities.
Today, we're releasing the latest Slides API tutorial in our video series. This one goes back to basics a bit: adding text to presentations. But we also discuss shapes: not only adding shapes to slides, but also adding text within shapes. Most importantly, we cover one best practice when using the API: create your own object IDs. By doing this, developers can execute more requests while minimizing API calls.



Developers use insertText requests to tell the API to add text to slides. This is true whether you're adding text to a textbox, a shape or table cell. Similar to the Google Sheets API, all requests are made as JSON payloads sent to the API's batchUpdate() method. Here's the JavaScript for inserting text in some object (objectID) on a slide:
{
    "insertText": {
        "objectId": objectID,
        "text": "Hello World!\n"
    }
}
Adding shapes is a bit more challenging, as you can see from its sample JSON structure:

{
    "createShape": {
        "shapeType": "SMILEY_FACE",
        "elementProperties": {
            "pageObjectId": slideID,
            "size": {
                "height": {
                    "magnitude": 3000000,
                    "unit": "EMU"
                },
                "width": {
                    "magnitude": 3000000,
                    "unit": "EMU"
                }
            },
            "transform": {
                "unit": "EMU",
                "scaleX": 1.3449,
                "scaleY": 1.3031,
                "translateX": 4671925,
                "translateY": 450150
            }
        }
    }
}
Placing or manipulating shapes or images on slides requires more information so the cloud service can properly render these objects. Be aware that it does involve some math, as you can see from the Page Elements page in the docs as well as the Transforms concept guide. In the video, I drop a few hints and good practices so you don't have to start from scratch.
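
As a point of reference for that math (this helper is my own illustration, not from the post): sizes and translations are expressed in English Metric Units, where 1 inch = 914,400 EMU and 1 cm = 360,000 EMU, so a small conversion function makes the magnitudes easier to reason about:

# Illustrative helper, assuming the standard EMU conversion factors.
EMU_PER_INCH = 914400
EMU_PER_CM = 360000

def inches_to_emu(inches):
    """Convert inches to the English Metric Units used by the Slides API."""
    return int(round(inches * EMU_PER_INCH))

print(inches_to_emu(3.28))  # ~3000000 EMU, roughly the shape size used above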

Regardless of how complex your requests are, if you have at least one, say in an array named requests, you'd make an API call with the aforementioned batchUpdate() method, which in Python looks like this (assuming SLIDES is the service endpoint and a presentation ID of deckID):

SLIDES.presentations().batchUpdate(presentationId=deckID,
    body=requests).execute()
For a detailed look at the complete code sample featured in the DevByte, check out the deep dive post. As you can see, adding text is fairly straightforward. If you want to learn how to format and style that text, check out the Formatting Text post and video as well as the text concepts guide.
To learn how to perform text search-and-replace, say to replace placeholders in a template deck, check out the Replacing Text & Images post and video as well as the merging data into slides guide. We hope these developer resources help you create that next great app that automates the task of producing presentations for your users!
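
As a hedged sketch of what such a search-and-replace request can look like (the placeholder text and replacement value below are invented for illustration; SLIDES and deckID are assumed to be set up as above):

# Hypothetical example: swap a template placeholder for a real value across a deck.
body = {"requests": [{
    "replaceAllText": {
        "containsText": {"text": "{{customer-name}}", "matchCase": True},
        "replaceText": "Ada Lovelace"
    }
}]}
SLIDES.presentations().batchUpdate(presentationId=deckID, body=body).execute()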
Categories: Programming

Neo4j: How do null values even work?

Mark Needham - Thu, 02/23/2017 - 00:28

Every now and then I find myself wanting to import a CSV file into Neo4j and I always get confused with how to handle the various null values that can lurk within.

Let’s start with an example that doesn’t have a CSV file in sight. Consider the following list and my attempt to only return null values:

WITH [null, "null", "", "Mark"] AS values
UNWIND values AS value
WITH value WHERE value = null
RETURN value

(no changes, no records)

Hmm, that’s weird. I’d have expected that to at least keep the first value in the collection. What about if we do the inverse?

WITH [null, "null", "", "Mark"] AS values
UNWIND values AS value
WITH value WHERE value <> null
RETURN value

(no changes, no records)

Still nothing! Let’s try returning the output of our comparisons rather than filtering rows:

WITH [null, "null", "", "Mark"] AS values
UNWIND values AS value
RETURN value, value = null AS outcome

╒═══════╤═════════╕
│"value"│"outcome"│
╞═══════╪═════════╡
│null   │null     │
├───────┼─────────┤
│"null" │null     │
├───────┼─────────┤
│""     │null     │
├───────┼─────────┤
│"Mark" │null     │
└───────┴─────────┘

Ok so that isn’t what we expected. Everything has an ‘outcome’ of ‘null’! What about if we want to check whether the value is the string “Mark”?

WITH [null, "null", "", "Mark"] AS values
UNWIND values AS value
RETURN value, value = "Mark" AS outcome

╒═══════╤═════════╕
│"value"│"outcome"│
╞═══════╪═════════╡
│null   │null     │
├───────┼─────────┤
│"null" │false    │
├───────┼─────────┤
│""     │false    │
├───────┼─────────┤
│"Mark" │true     │
└───────┴─────────┘

From executing this query we learn that if one side of a comparison is null then the return value is always going to be null.

So how do we exclude a row if it’s null?

It turns out we have to use the ‘is’ keyword rather than using the equality operator. Let’s see what that looks like:

WITH [null, "null", "", "Mark"] AS values
UNWIND values AS value
WITH value WHERE value is null
RETURN value

╒═══════╕
│"value"│
╞═══════╡
│null   │
└───────┘

And the positive case:

WITH [null, "null", "", "Mark"] AS values
UNWIND values AS value
WITH value WHERE value is not null
RETURN value

╒═══════╕
│"value"│
╞═══════╡
│"null" │
├───────┤
│""     │
├───────┤
│"Mark" │
└───────┘

What if we want to get rid of empty strings?

WITH [null, "null", "", "Mark"] AS values
UNWIND values AS value
WITH value WHERE value <> ""
RETURN value

╒═══════╕
│"value"│
╞═══════╡
│"null" │
├───────┤
│"Mark" │
└───────┘

Interestingly, that also gets rid of the null value, which I hadn’t expected. But if we look for values matching the empty string:

WITH [null, "null", "", "Mark"] AS values
UNWIND values AS value
WITH value WHERE value = ""
RETURN value

╒═══════╕
│"value"│
╞═══════╡
│""     │
└───────┘

It’s not there either! Hmm what’s going on here:

WITH [null, "null", "", "Mark"] AS values
UNWIND values AS value
RETURN value, value = "" AS isEmpty, value <> "" AS isNotEmpty

╒═══════╤═════════╤════════════╕
│"value"│"isEmpty"│"isNotEmpty"│
╞═══════╪═════════╪════════════╡
│null   │null     │null        │
├───────┼─────────┼────────────┤
│"null" │false    │true        │
├───────┼─────────┼────────────┤
│""     │true     │false       │
├───────┼─────────┼────────────┤
│"Mark" │false    │true        │
└───────┴─────────┴────────────┘

null values seem to get filtered out for every type of equality match unless we explicitly check that a value ‘is null’.

So how do we use this knowledge when we’re parsing CSV files using Neo4j’s LOAD CSV tool?

Let’s say we have a CSV file that looks like this:

$ cat nulls.csv
name,company
"Mark",
"Michael",""
"Will",null
"Ryan","Neo4j"

So none of the first three rows have a value for ‘company’. I don’t have any value at all, Michael has an empty string, and Will has a null value. Let’s see how LOAD CSV interprets this:

load csv with headers from "file:///nulls.csv" AS row
RETURN row

╒═════════════════════════════════╕
│"row"                            │
╞═════════════════════════════════╡
│{"name":"Mark","company":null}   │
├─────────────────────────────────┤
│{"name":"Michael","company":""}  │
├─────────────────────────────────┤
│{"name":"Will","company":"null"} │
├─────────────────────────────────┤
│{"name":"Ryan","company":"Neo4j"}│
└─────────────────────────────────┘

We’ve got the full sweep of all the combinations from above. We’d like to create a Person node for each row but only create a Company node and associated ‘WORKS_FOR’ relationship if an actual company is defined – we don’t want to create a null company.

So we only want to create a company node and ‘WORKS_FOR’ relationship for the Ryan row.

The following query does the trick:

load csv with headers from "file:///nulls.csv" AS row
MERGE (p:Person {name: row.name})
WITH p, row
WHERE row.company <> "" AND row.company <> "null"
MERGE (c:Company {name: row.company})
MERGE (p)-[:WORKS_FOR]->(c)

Added 5 labels, created 5 nodes, set 5 properties, created 1 relationship, statement completed in 117 ms.

And if we visualise what’s been created:

[Graph visualisation]

Perfect. Perhaps this behaviour is obvious but it always trips me up so hopefully it’ll be useful to someone else as well!

There’s also a section on the Neo4j developer pages describing even more null scenarios that’s worth checking out.


Categories: Programming

14 extensions that can enrich your daily VSTS usage

Xebia Blog - Wed, 02/22/2017 - 22:54

Using VSTS on a daily basis I find that I add a regular list of VSTS Marketplace extensions to my VSTS environment. I find them convenient and helping me to get the most out of VSTS. The list below is primarily focussed on the Work and Code area and not so much on the Build […]


Publish your app with confidence from the Google Play Developer Console

Android Developers Blog - Wed, 02/22/2017 - 19:20
Posted by Kobi Glick, Product Manager, Google Play

Publishing a new app, or app update, is an important and exciting milestone for every developer. In order to make the process smoother and more trackable, we're announcing the launch of a new way to publish apps on Google Play with some new features. The changes will give you the ability to manage your app releases with more confidence via a new manage releases page in the Google Play Developer Console.




Manage your app updates with clarity and control

The new manage releases page is where you upload alpha, beta, and production releases of your app. From here, you can see important information and the status of all your releases across tracks.

The new manage releases page.
Easier access to existing and new publishing features

Publishing an app or update is a big step, and one that every developer wants to have confidence in taking. To help, we've added two new features.
First, we've added a validation step that highlights potential issues before you publish. The new "review and rollout" page will appear before you confirm the roll out of a new app and flag if there are validation errors or warnings. This new flow will make the app release process easier, especially for apps using multi-APK. It also provides new information; for example, in cases where you added new permissions to your app, the system will highlight it.


Second, it's now simpler to perform and track staged rollouts during the publishing flow. With staged rollouts, you can release your update to a growing percentage of users, giving you a chance to catch and address any issues before affecting your whole audience.

If you want to review the history of your releases, it is now possible to track them granularly and download previous APKs.

Finally we've added a new artifacts library under manage releases where you can find all the files that help you manage a release.
Start using the new manage releases page today
You can access the new manage releases page in the Developer Console. Visit the Google Play Developer Help Center for more information. With these changes, we're helping you to publish, track and manage your app with confidence on Google Play.


Categories: Programming

New features in Xcode 8.2 Simulator

Xebia Blog - Wed, 02/22/2017 - 14:01

In the release notes of Xcode 8.2, Apple introduced features for their new version of Xcode. In this blog I will explain how to use these new features. Read more

First Steps in gRPC Bindings for React Native

Xebia Blog - Wed, 02/22/2017 - 13:54

When you want to use gRPC in your React Native app there is no official support yet, but that shouldn’t stop you! In this post I’ll show you how we designed an implementation with type safety in mind and successfully called a service remotely from React Native on Android. Read more

Friction in Software

Actively Lazy - Tue, 02/21/2017 - 21:39

Friction can be a very powerful force when building software. The things that are made easier or harder can dramatically influence how we work. I’d like to discuss three areas where I’ve seen friction at work: dependency injection, code reviews and technology selection.

DI Frameworks

A few years ago a colleague and I discussed this and came to the conclusion that the reason most DI frameworks suck (I’m looking in particular at you, Spring) is that they make adding new dependencies so damned easy! There’s absolutely no friction. Maybe a little XML (shudder) or just a tiny little attribute. It’s so easy!

So when we started a new, greenfield project, we decided to put our theory to the test and introduced just a little bit of friction to dependency injection. I’ve written before about the basic scheme we adopted and the AOP endpoint it reached. But the end result was, I believe, very successful. After a couple of years of development we still had of the order of only 10-20 dependencies. The friction we’d introduced was light (add a couple of lines to a single class), but it was sufficient to act as a constant reminder not to just add a new dependency because it was easy.

Code Reviews

I was reminded of this recently when discussing code reviews. I have mixed feelings about code reviews: I’ve seen them work well, and it is better to have code reviews than not to have them; but it’s better still to pair program. But not all teams, not all developers, like pair programming – so code reviews exist. The trouble with code reviews is they can provide a form of friction.

If you & I are pairing on a piece of work, we will discuss the various trade-offs as we go: do we spend time on this, do we refactor that, etc etc. The constant judgements about what warrants attention and what can be left for another day are verbalised and agreed. In general I find the code written while pairing is high in quality but also remains tightly focused on task. The long rambling refactors I’ve been guilty of in the past disappear and the lazy “quick hacks” we all try and explain away to ourselves, aren’t so easy to gloss over when pairing.

But code reviews exist outside of this dynamic. In the cold light of the following day, someone uninvolved reviews your work and passes judgement on whether they think it’s up to scratch. It’s easy to see why this becomes combative: rather than being collaborative it can be seen as a judgement being passed, on not only the code but the author, too.

When reviewing code it is easy to set a very high bar, higher than you might set for yourself and higher than you might have agreed when pairing. Now, does this mean the comments aren’t valid? Absolutely not! You’re right, there is a test case missing here, although my change is unrelated, I should have added the missing test case. And you’re right this code is a mess; it was a mess before I was here and made a simple edit; but you’re right, I should have tidied it up. Everyone should practice code gardening.

These are all perfectly valid comments. But they create a form of friction. When I worked on a team that relied on these code reviews you knew you were going to get comments: so you kept the commit small, so as to minimize the diff. A small diff minimizes the amount of extra tests you could be asked to write. A small diff keeps most of the existing mess out of the review, so you won’t be asked to start refactoring.

Now, this seems dysfunctional: we’re deliberately trying to optimize a smooth passage through the review process, instead of optimizing for code quality. Worse than this though was what never happened: refactoring commits. Looking back I realise that the only code reviews I saw (as both reviewer and reviewee) were for feature changes. There were never any code reviews submitted for purely technical debt reduction. Sure, there’d be some individual commits in amongst the feature changes. But never any dedicated, multi-commit sessions, whose sole aim was to improve the code base. Which was a shame, because like any legacy code base, there was scope for improvement.

Compare this to teams that don’t do code reviews, where I’ve tended to see more effort spent on reducing technical debt. Without fearing an endless cycle of review comments, developers are free to embark on refactoring efforts (that may or may not even work out!) – but at least they can try. Instead, code reviews provide a form of friction that might actually hurt code quality in the long run.

Technology Selection

I was talking to another colleague recently who is convinced that Hibernate is still the best way to get data in and out of a relational database. I can’t really work out how to persuade people they’re wrong – surely using Hibernate is enough to persuade you? Especially in a large, legacy code base – the pain that Hibernate causes is obvious. Yet plenty of people still believe in Hibernate. There are even people that still believe in Spring. Whether or not they still believe in the tooth fairy is unclear.

But I think technology selection is another area where friction is important. When contemplating moving away from something well-known and well used in industry like Spring or Hibernate there is a lot of friction. There are new technologies to learn, new approaches to understand and new risks to manage. This all adds friction, so sometimes it’s easiest just to stick with what we know. Sometimes it really is the right choice – the technology you have expertise in is the one you’ll be most productive in immediately. But there are longer term questions too, which are much harder to answer: will the team eventually be more productive using technology X than technology Y?

Friction in software is a powerful process: we’re very lazy creatures, constantly trying to optimise. Anything that slows us down or gets in our way quickly gets side-stepped or worked around. We can use this knowledge as a tool to guide developer behaviour; but sometimes we need to be aware of how friction can change behaviours for the worse as well.


Categories: Programming, Testing & QA

Build flexible layouts with FlexboxLayout

Android Developers Blog - Tue, 02/21/2017 - 19:48
Posted by Takeshi Hagikura, Developer Programs Engineer

At Google I/O last year we announced ConstraintLayout, which enables you to build complex layouts while maintaining a flat view hierarchy. It is also fully supported in Android Studio's Visual Layout Editor.

At the same time, we open sourced FlexboxLayout to bring the same functionalities of the CSS Flexible Layout module to Android. Here are some cases where FlexboxLayout is particularly effective.

FlexboxLayout can be interpreted as an advanced LinearLayout because both layouts align their child views sequentially. The significant difference between LinearLayout and FlexboxLayout is that FlexboxLayout has a feature for wrapping.

That means if you add the flexWrap="wrap" attribute, FlexboxLayout puts a view on a new line if there is not enough space left in the current line, as shown in the picture below.


One layout for various screen sizes

With that characteristic in mind, let's take a case where you want to put views sequentially but have them move to new lines if the available space changes (due to a device factor, orientation changes or the window resizing in the multi-window mode).


Nexus5X portrait


Nexus5X landscape

Pixel C with multi window mode enabled, divider line on the left.

Pixel C with multi window mode enabled, divider line on the middle.

Pixel C with multi window mode enabled, divider line on the right.
You would need to define multiple DP-bucket layouts (such as layout-600dp, layout-720dp, layout-1020dp) to handle various screen sizes with traditional layouts such as LinearLayout or RelativeLayout. But the dialog above is built with a single FlexboxLayout.

The technique used in the example is setting the flexWrap="wrap" as explained above,

<com.google.android.flexbox.FlexboxLayout
     android:layout_width="match_parent"
     android:layout_height="wrap_content"
     app:flexWrap="wrap">
then you can get the following layout where child views are aligned to a new line instead of overflowing their parent.




Another technique I'd like to highlight is setting the layout_flexGrow attribute on an individual child. This helps improve the look of the final layout when free space is left over. The layout_flexGrow attribute works similarly to the layout_weight attribute in LinearLayout. That means FlexboxLayout will distribute the remaining space according to the layout_flexGrow value set on each child in the same line.

The example below assumes each child has the layout_flexGrow attribute set to 1, so free space will be evenly distributed among them.
<android.support.design.widget.TextInputLayout
     android:layout_width="100dp"
     android:layout_height="wrap_content"
     app:layout_flexGrow="1">



You can check out the complete layout xml file in the GitHub repository.
RecyclerView integration

Another advantage of FlexboxLayout is that it can be integrated with RecyclerView. With the latest alpha release, the new FlexboxLayoutManager extends RecyclerView.LayoutManager, so you can now make use of the Flexbox functionalities in a scrollable container in a much more memory-efficient way.

Note that you can still achieve a scrollable Flexbox container with FlexboxLayout wrapped with ScrollView. But you are likely to experience jankiness or even an OutOfMemoryError if the number of items contained in the layout is large, as FlexboxLayout doesn't take view recycling into account for the views that go off the screen as the user scrolls.

(If you would like to learn more about RecyclerView in detail, you can check out the videos from the Android UI toolkit team, such as 1 and 2.)

A real-world example where the RecyclerView integration is useful is apps like the Google Photos app or news apps, both of which expect a large number of items while needing to handle items of various widths.

One example is found in the demo application in the FlexboxLayout repository. As you can see in the repository, each image shown in RecyclerView has a different width. But by setting the flexWrap setting to wrap,

FlexboxLayoutManager layoutManager = new FlexboxLayoutManager();
layoutManager.setFlexWrap(FlexWrap.WRAP);
and setting the flexGrow attribute to a positive value for each child (as you can see, you can configure the attributes through FlexboxLayoutManager and FlexboxLayoutManager.LayoutParams for child attributes instead of configuring them from XML),
void bindTo(Drawable drawable) {
  mImageView.setImageDrawable(drawable);
  ViewGroup.LayoutParams lp = mImageView.getLayoutParams();
  if (lp instanceof FlexboxLayoutManager.LayoutParams) {
    FlexboxLayoutManager.LayoutParams flexboxLp = 
        (FlexboxLayoutManager.LayoutParams) mImageView.getLayoutParams();
    flexboxLp.setFlexGrow(1.0f);
  }
}
you can see every image fits within the layout nicely regardless of the screen orientation.



If you would like to see a complete FlexboxLayout example, you can check out the demo apps in the FlexboxLayout repository.

What's next?

Check out the full documentation for other attributes to build flexible layouts tailored for your needs. We're very open to hearing your feedback; if you find any issues or have feature requests, please file an issue on the GitHub repository.



Categories: Programming

Design by contract using GraphQL

Xebia Blog - Mon, 02/20/2017 - 17:20

When interfacing between systems it is good practice to think about the interface design prior to developing the systems. GraphQL can be a useful tool to write down these design decisions using its schema definition language. Even when you are not using GraphQL itself in production. GraphQL’s schema can be used to generate a mock […]


Neo4j: Analysing a CSV file using LOAD CSV and Cypher

Mark Needham - Sun, 02/19/2017 - 23:39

Last week we ran our first online meetup in several years and I wanted to analyse the stats that YouTube lets you download for an event.

The file I downloaded looked like this:

$ cat ~/Downloads/youtube_stats_pW9boJoUxO0.csv 
Video IDs:, pW9boJoUxO0, Start time:, Wed Feb 15 08:57:55 2017, End time:, Wed Feb 15 10:03:10 2017
Playbacks, Peak concurrent viewers, Total view time (hours), Average session length (minutes)
348, 112, 97.125, 16.7456896552, 

Country code, AR, AT, BE, BR, BY, CA, CH, CL, CR, CZ, DE, DK, EC, EE, ES, FI, FR, GB, HU, IE, IL, IN, IT, LB, LU, LV, MY, NL, NO, NZ, PK, PL, QA, RO, RS, RU, SE, TR, US, VN, ZA
Playbacks, 2, 2, 1, 14, 1, 10, 2, 1, 1, 1, 27, 1, 1, 1, 3, 1, 25, 54, 1, 4, 6, 8, 1, 1, 1, 1, 1, 23, 1, 1, 1, 1, 1, 1, 2, 6, 22, 1, 114, 1, 1
Peak concurrent viewers, 2, 1, 1, 4, 1, 5, 1, 1, 0, 0, 11, 1, 1, 1, 2, 1, 6, 25, 1, 3, 3, 2, 1, 1, 1, 1, 1, 9, 1, 1, 0, 1, 0, 1, 1, 3, 7, 0, 44, 1, 0
Total view time (hours), 1.075, 0.0166666666667, 0.175, 2.58333333333, 0.00833333333333, 3.01666666667, 0.858333333333, 0.0583333333333, 0.0, 0.0, 8.69166666667, 0.8, 0.0166666666667, 0.0583333333333, 0.966666666667, 0.0166666666667, 4.20833333333, 20.8333333333, 0.00833333333333, 1.39166666667, 1.75, 0.766666666667, 0.00833333333333, 0.15, 0.0333333333333, 1.05833333333, 0.0333333333333, 7.36666666667, 0.0583333333333, 0.916666666667, 0.0, 0.00833333333333, 0.0, 0.00833333333333, 0.4, 1.10833333333, 5.28333333333, 0.0, 32.7333333333, 0.658333333333, 0.0
Average session length (minutes), 32.25, 0.5, 10.5, 11.0714285714, 0.5, 18.1, 25.75, 3.5, 0.0, 0.0, 19.3148148148, 48.0, 1.0, 3.5, 19.3333333333, 1.0, 10.1, 23.1481481481, 0.5, 20.875, 17.5, 5.75, 0.5, 9.0, 2.0, 63.5, 2.0, 19.2173913043, 3.5, 55.0, 0.0, 0.5, 0.0, 0.5, 12.0, 11.0833333333, 14.4090909091, 0.0, 17.2280701754, 39.5, 0.0

I want to look at the country-specific stats, so the first 4 lines aren’t interesting to me:

$ tail -n+5 youtube_stats_pW9boJoUxO0.csv > youtube.csv

I then put the youtube.csv file into the import directory of Neo4j and wrote the following query to return a row representing each country and its score for each of the metrics:

load csv with headers from "file:///youtube.csv" AS row
WITH [key in keys(row) where key <> "Country code"] AS keys, row, row["Country code"] AS heading
UNWIND keys AS key
RETURN key AS country, heading AS key, row[key] AS value

╒═════════╤═══════════╤═══════╕
│"country"│"key"      │"value"│
╞═════════╪═══════════╪═══════╡
│" SE"    │"Playbacks"│"22"   │
├─────────┼───────────┼───────┤
│" GB"    │"Playbacks"│"54"   │
├─────────┼───────────┼───────┤
│" FR"    │"Playbacks"│"25"   │
├─────────┼───────────┼───────┤
│" RS"    │"Playbacks"│"2"    │
├─────────┼───────────┼───────┤
│" LV"    │"Playbacks"│"1"    │
└─────────┴───────────┴───────┘

Now I want to create a node representing each country and create a property for each of the metrics. Since the property names are going to be dynamic I’ll make use of the APOC library which I drop into my plugins directory. I then tweaked the query to create the nodes:

load csv with headers from "https://dl.dropboxusercontent.com/u/14493611/youtube.csv" AS row
WITH [key in keys(row) where key <> "Country code"] AS keys, row, row["Country code"] AS heading
UNWIND keys AS key
WITH key AS country, heading AS key, row[key] AS value
MERGE (c:Country {name: replace(country, " ", "")})
WITH *
CALL apoc.create.setProperty(c, key, toInteger(value))
YIELD node
RETURN COUNT(*)

We can now see which country provided the most viewers:

MATCH (n:Country) 
RETURN n.name, n.Playbacks AS playbacks, n.`Total view time (hours)` AS viewTimeInHours, n.`Peak concurrent viewers` AS peakConcViewers, n.`Average session length (minutes)` AS aveSessionMins
ORDER BY playbacks DESC
LIMIT 10

╒════════╤═══════════╤═════════════════╤═════════════════╤════════════════╕
│"n.name"│"playbacks"│"viewTimeInHours"│"peakConcViewers"│"aveSessionMins"│
╞════════╪═══════════╪═════════════════╪═════════════════╪════════════════╡
│"US"    │"114"      │"32"             │"44"             │"17"            │
├────────┼───────────┼─────────────────┼─────────────────┼────────────────┤
│"GB"    │"54"       │"20"             │"25"             │"23"            │
├────────┼───────────┼─────────────────┼─────────────────┼────────────────┤
│"DE"    │"27"       │"8"              │"11"             │"19"            │
├────────┼───────────┼─────────────────┼─────────────────┼────────────────┤
│"FR"    │"25"       │"4"              │"6"              │"10"            │
├────────┼───────────┼─────────────────┼─────────────────┼────────────────┤
│"NL"    │"23"       │"7"              │"9"              │"19"            │
├────────┼───────────┼─────────────────┼─────────────────┼────────────────┤
│"SE"    │"22"       │"5"              │"7"              │"14"            │
├────────┼───────────┼─────────────────┼─────────────────┼────────────────┤
│"BR"    │"14"       │"2"              │"4"              │"11"            │
├────────┼───────────┼─────────────────┼─────────────────┼────────────────┤
│"CA"    │"10"       │"3"              │"5"              │"18"            │
├────────┼───────────┼─────────────────┼─────────────────┼────────────────┤
│"IN"    │"8"        │"0"              │"2"              │"5"             │
├────────┼───────────┼─────────────────┼─────────────────┼────────────────┤
│"IL"    │"6"        │"1"              │"3"              │"17"            │
└────────┴───────────┴─────────────────┴─────────────────┴────────────────┘

The United States is first, unsurprisingly, followed by the UK, Germany, and France. We ran the meetup at 5pm UK time, so it was a friendly enough time for this side of the globe but not so friendly for Asia or Australia, so it’s not too surprising we don’t see anybody from there!

For my last trick I wanted to see the full names of the countries, so I downloaded the 2-digit codes for each country along with their full names.

I then updated my graph:

load csv with headers from "file:///countries.csv" AS row
MATCH (c:Country {name: row.Code})
SET c.fullName = row.Name;

Now let’s re-run our query and show the countries’ full names instead:

MATCH (n:Country) 
RETURN n.fullName, n.Playbacks AS playbacks, n.`Total view time (hours)` AS viewTimeInHours, n.`Peak concurrent viewers` AS peakConcViewers, n.`Average session length (minutes)` AS aveSessionMins
ORDER BY playbacks DESC
LIMIT 10

╒════════════════╤═══════════╤═════════════════╤═════════════════╤════════════════╕
│"n.fullName"    │"playbacks"│"viewTimeInHours"│"peakConcViewers"│"aveSessionMins"│
╞════════════════╪═══════════╪═════════════════╪═════════════════╪════════════════╡
│"United States" │"114"      │"32"             │"44"             │"17"            │
├────────────────┼───────────┼─────────────────┼─────────────────┼────────────────┤
│"United Kingdom"│"54"       │"20"             │"25"             │"23"            │
├────────────────┼───────────┼─────────────────┼─────────────────┼────────────────┤
│"Germany"       │"27"       │"8"              │"11"             │"19"            │
├────────────────┼───────────┼─────────────────┼─────────────────┼────────────────┤
│"France"        │"25"       │"4"              │"6"              │"10"            │
├────────────────┼───────────┼─────────────────┼─────────────────┼────────────────┤
│"Netherlands"   │"23"       │"7"              │"9"              │"19"            │
├────────────────┼───────────┼─────────────────┼─────────────────┼────────────────┤
│"Sweden"        │"22"       │"5"              │"7"              │"14"            │
├────────────────┼───────────┼─────────────────┼─────────────────┼────────────────┤
│"Brazil"        │"14"       │"2"              │"4"              │"11"            │
├────────────────┼───────────┼─────────────────┼─────────────────┼────────────────┤
│"Canada"        │"10"       │"3"              │"5"              │"18"            │
├────────────────┼───────────┼─────────────────┼─────────────────┼────────────────┤
│"India"         │"8"        │"0"              │"2"              │"5"             │
├────────────────┼───────────┼─────────────────┼─────────────────┼────────────────┤
│"Israel"        │"6"        │"1"              │"3"              │"17"            │
└────────────────┴───────────┴─────────────────┴─────────────────┴────────────────┘

And that’s the end of my analysis with no relationships in sight!


Categories: Programming

Debug TensorFlow Models with tfdbg

Google Code Blog - Fri, 02/17/2017 - 23:32
Posted by Shanqing Cai, Software Engineer, Tools and Infrastructure.

We are excited to share TensorFlow Debugger (tfdbg), a tool that makes debugging of machine learning models (ML) in TensorFlow easier.
TensorFlow, Google's open-source ML library, is based on dataflow graphs. A typical TensorFlow ML program consists of two separate stages:
  1. Setting up the ML model as a dataflow graph by using the library's Python API,
  2. Training or performing inference on the graph by using the Session.run() method.
If errors and bugs occur during the second stage (i.e., the TensorFlow runtime), they are difficult to debug.

To understand why that is the case, note that to standard Python debuggers, the Session.run() call is effectively a single statement and does not expose the running graph's internal structure (nodes and their connections) and state (output arrays or tensors of the nodes). Lower-level debuggers such as gdb cannot organize stack frames and variable values in a way relevant to TensorFlow graph operations. A specialized runtime debugger has been among the most frequently raised feature requests from TensorFlow users.

tfdbg addresses this runtime debugging need. Let's see tfdbg in action with a short snippet of code that sets up and runs a simple TensorFlow graph to fit a simple linear equation through gradient descent.

import numpy as np
import tensorflow as tf
import tensorflow.python.debug as tf_debug
xs = np.linspace(-0.5, 0.49, 100)
x = tf.placeholder(tf.float32, shape=[None], name="x")
y = tf.placeholder(tf.float32, shape=[None], name="y")
k = tf.Variable([0.0], name="k")
y_hat = tf.multiply(k, x, name="y_hat")
sse = tf.reduce_sum((y - y_hat) * (y - y_hat), name="sse")
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.02).minimize(sse)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

sess = tf_debug.LocalCLIDebugWrapperSession(sess)
for _ in range(10):
    sess.run(train_op, feed_dict={x: xs, y: 42 * xs})

As the highlighted line in this example shows, the session object is wrapped as a class for debugging (LocalCLIDebugWrapperSession), so calling the run() method will launch the command-line interface (CLI) of tfdbg. Using mouse clicks or commands, you can proceed through the successive run calls, inspect the graph's nodes and their attributes, and visualize the complete history of the execution of all relevant nodes in the graph through the list of intermediate tensors. By using the invoke_stepper command, you can let the Session.run() call execute in the "stepper mode", in which you can step to nodes of your choice, observe and modify their outputs, followed by further stepping actions, in a way analogous to debugging procedural languages (e.g., in gdb or pdb).

A class of frequently encountered issues in developing TensorFlow ML models is the appearance of bad numerical values (infinities and NaNs) due to overflow, division by zero, log of zero, etc. In large TensorFlow graphs, finding the source of such nodes can be tedious and time-consuming. With the help of the tfdbg CLI and its conditional breakpoint support, you can quickly identify the culprit node. The video below demonstrates how to debug infinity/NaN issues in a neural network with tfdbg:

A screencast of the TensorFlow Debugger in action, from this tutorial.
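
As a reference for that conditional-breakpoint workflow, registering the built-in has_inf_or_nan tensor filter on the wrapped session looks roughly like this (a sketch, assuming sess is the LocalCLIDebugWrapperSession created in the snippet above):

# Register a filter so the tfdbg CLI can break as soon as any tensor
# contains an infinity or a NaN.
sess.add_tensor_filter("has_inf_or_nan", tf_debug.has_inf_or_nan)

# Then, inside the tfdbg CLI, run until the filter triggers:
#   tfdbg> run -f has_inf_or_nan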

Compared with alternative debugging options such as Print Ops, tfdbg requires fewer lines of code change, provides more comprehensive coverage of the graphs, and offers a more interactive debugging experience. It will speed up your model development and debugging workflows. It offers additional features such as offline debugging of dumped tensors from server environments and integration with tf.contrib.learn. To get started, please visit this documentation. This research paper lays out the design of tfdbg in greater detail.

The minimum required TensorFlow version for tfdbg is 0.12.1. To report bugs, please open issues on TensorFlow's GitHub Issues Page. For general usage help, please post questions on StackOverflow using the tag tensorflow.
Acknowledgements
This project would not be possible without the help and feedback from members of the Google TensorFlow Core/API Team and the Applied Machine Intelligence Team.





Categories: Programming

Why don’t monitoring tools monitor changes?

Xebia Blog - Fri, 02/17/2017 - 12:38

Changes in applications or IT infrastructure can lead to application downtime. This not only hits your revenue, it also has a negative impact on your reputation. Everybody in IT understands the importance of having the right monitoring solutions in place. From an infrastructure – to a business perspective, we rely on monitoring tools to get […]


Announcing TensorFlow 1.0

Google Code Blog - Wed, 02/15/2017 - 20:43
Posted By: Amy McDonald Sandjideh, Technical Program Manager, TensorFlow

In just its first year, TensorFlow has helped researchers, engineers, artists, students, and many others make progress with everything from language translation to early detection of skin cancer and preventing blindness in diabetics. We're excited to see people using TensorFlow in over 6000 open-source repositories online.


Today, as part of the first annual TensorFlow Developer Summit, hosted in Mountain View and livestreamed around the world, we're announcing TensorFlow 1.0:


It's faster: TensorFlow 1.0 is incredibly fast! XLA lays the groundwork for even more performance improvements in the future, and tensorflow.org now includes tips & tricks for tuning your models to achieve maximum speed. We'll soon publish updated implementations of several popular models to show how to take full advantage of TensorFlow 1.0 - including a 7.3x speedup on 8 GPUs for Inception v3 and 58x speedup for distributed Inception v3 training on 64 GPUs!


It's more flexible: TensorFlow 1.0 introduces a high-level API for TensorFlow, with tf.layers, tf.metrics, and tf.losses modules. We've also announced the inclusion of a new tf.keras module that provides full compatibility with Keras, another popular high-level neural networks library.
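
As a rough, hedged sketch of what those higher-level modules look like in use (the layer sizes and placeholder shapes below are invented for illustration, not taken from the announcement):

import tensorflow as tf

# Minimal example of the TF 1.0 high-level API modules; shapes are arbitrary.
x = tf.placeholder(tf.float32, shape=[None, 784], name="x")
labels = tf.placeholder(tf.int64, shape=[None], name="labels")

hidden = tf.layers.dense(x, units=128, activation=tf.nn.relu)                 # tf.layers
logits = tf.layers.dense(hidden, units=10)
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)   # tf.losses
accuracy, update_acc = tf.metrics.accuracy(labels=labels,
                                           predictions=tf.argmax(logits, 1))  # tf.metrics
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)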


It's more production-ready than ever: TensorFlow 1.0 promises Python API stability (details here), making it easier to pick up new features without worrying about breaking your existing code.

Other highlights from TensorFlow 1.0:

  • Python APIs have been changed to resemble NumPy more closely. For this and other backwards-incompatible changes made to support API stability going forward, please use our handy migration guide and conversion script.
  • Experimental APIs for Java and Go
  • Higher-level API modules tf.layers, tf.metrics, and tf.losses - brought over from tf.contrib.learn after incorporating skflow and TF Slim
  • Experimental release of XLA, a domain-specific compiler for TensorFlow graphs, that targets CPUs and GPUs. XLA is rapidly evolving - expect to see more progress in upcoming releases.
  • Introduction of the TensorFlow Debugger (tfdbg), a command-line interface and API for debugging live TensorFlow programs.
  • New Android demos for object detection and localization, and camera-based image stylization.
  • Installation improvements: Python 3 docker images have been added, and TensorFlow's pip packages are now PyPI compliant. This means TensorFlow can now be installed with a simple invocation of pip install tensorflow.

We're thrilled to see the pace of development in the TensorFlow community around the world. To hear more about TensorFlow 1.0 and how it's being used, you can watch the TensorFlow Developer Summit talks on YouTube, covering recent updates from higher-level APIs to TensorFlow on mobile to our new XLA compiler, as well as the exciting ways that TensorFlow is being used:



Click here for a link to the livestream and video playlist (individual talks will be posted online later in the day).


The TensorFlow ecosystem continues to grow with new techniques like Fold for dynamic batching and tools like the Embedding Projector, along with updates to our existing tools like TensorFlow Serving. We're incredibly grateful to the community of contributors, educators, and researchers who have made advances in deep learning available to everyone. We look forward to working with you on forums like GitHub issues, Stack Overflow, @TensorFlow, the discuss@tensorflow.org group, and at future events.



Categories: Programming

SE-Radio Episode 282: Donny Nadolny on Debugging Distributed Systems

Donny Nadolny of PagerDuty joins Robert Blumen to tell the story of debugging an issue that PagerDuty encountered when they set up a Zookeeper cluster that spanned two geographically separated datacenters in different regions. The debugging process took them through multiple levels of the stack starting with their application, the implementation of the Zookeeper […]
Categories: Programming