In my continued playing around with R I wanted to find the minimum value for a specified percentile given a data frame representing a cumulative distribution function (CDF).
e.g. imagine we have the following CDF represented in a data frame:
library(dplyr)
df = data.frame(score = c(5,7,8,10,12,20),
                percentile = c(0.05,0.1,0.15,0.20,0.25,0.5))
and we want to find the minimum value for the 0.05 percentile. We can use the filter function to do so:
> (df %>% filter(percentile > 0.05) %>% slice(1))$score
[1] 7
Things become more tricky if we want to return multiple percentiles in one go.
My first thought was to create a data frame with one row for each target percentile and then pull in the appropriate row from our original data frame:
targetPercentiles = c(0.05, 0.2)
percentilesDf = data.frame(targetPercentile = targetPercentiles)

> percentilesDf %>%
    group_by(targetPercentile) %>%
    mutate(x = (df %>% filter(percentile > targetPercentile) %>% slice(1))$score)

Error in (list(score = c(5, 7, 8, 10, 12, 20), percentile = c(0.05, 0.1,  :
  invalid subscript type 'double'
Unfortunately this didn’t quite work as I expected – Antonios pointed out that this is probably because we’re mixing up two pipelines and dplyr can’t figure out what we want to do.
Instead he suggested the following variant, which uses the do function:
df = data.frame(score = c(5,7,8,10,12,20),
                percentile = c(0.05,0.1,0.15,0.20,0.25,0.5))
targetPercentiles = c(0.05, 0.2)

> data.frame(targetPercentile = targetPercentiles) %>%
    group_by(targetPercentile) %>%
    do(df) %>%
    filter(percentile > targetPercentile) %>%
    slice(1) %>%
    select(targetPercentile, score)

Source: local data frame [2 x 2]
Groups: targetPercentile

  targetPercentile score
1             0.05     7
2             0.20    12
We can then wrap this up in a function:
percentiles = function(df, targetPercentiles) {
  # make sure the percentiles are in order
  df = df %>% arrange(percentile)

  data.frame(targetPercentile = targetPercentiles) %>%
    group_by(targetPercentile) %>%
    do(df) %>%
    filter(percentile > targetPercentile) %>%
    slice(1) %>%
    select(targetPercentile, score)
}
which we call like this:
df = data.frame(score = c(5,7,8,10,12,20),
                percentile = c(0.05,0.1,0.15,0.20,0.25,0.5))

> percentiles(df, c(0.08, 0.10, 0.50, 0.80))
Source: local data frame [2 x 2]
Groups: targetPercentile

  targetPercentile score
1             0.08     7
2             0.10     8
Note that we don’t actually get any rows back for 0.50 or 0.80 since there are no entries with a percentile greater than those values. With a complete CDF we would, so the function does its job.
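The lookup the function performs can also be sketched outside R. Here's a minimal Python version of the same idea (the percentile_scores helper is hypothetical, just for illustration): find the first score whose cumulative percentile exceeds each target, and drop targets that fall off the end of the CDF, just like the R version does:

```python
from bisect import bisect_right

def percentile_scores(cdf, targets):
    """cdf: (score, percentile) pairs sorted by percentile."""
    percentiles = [p for _, p in cdf]
    result = {}
    for target in targets:
        # index of the first entry whose percentile is strictly greater than target
        i = bisect_right(percentiles, target)
        if i < len(cdf):  # targets beyond the CDF return no row, as in the R version
            result[target] = cdf[i][0]
    return result

cdf = [(5, 0.05), (7, 0.1), (8, 0.15), (10, 0.20), (12, 0.25), (20, 0.5)]
print(percentile_scores(cdf, [0.05, 0.2]))  # {0.05: 7, 0.2: 12}
```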
I’ve recently been reading the literature by K. Anders Ericsson and colleagues on deliberate practice, and one of the suggestions for increasing our competence at a skill is to put ourselves in situations where we can fail.
I’ve been reading Think Bayes – an introductory text on Bayesian statistics, something I know nothing about – and each chapter concludes with a set of exercises to practice, a potentially perfect exercise in failure!
I’ve been going through the exercises and capturing my screen while I do so, an idea I picked up from one of the papers:
our most important breakthrough was developing a relatively inexpensive and efficient way for students to record their exercises on video and to review and analyze their own performances against well-defined criteria
Ideally I’d get a coach to review the video but that seems too much of an ask of someone. Antonios has taken a look at some of my answers, however, and made suggestions for how he’d solve them which has been really helpful.
After each exercise I watch the video and look for areas where I get stuck or don’t make progress so that I can go and practice more in that area. I also try to find inefficiencies in how I solve a problem as well as the types of approaches I’m taking.
These are some of the observations from watching myself back over the last week or so:
It’s much easier to see the error in approach if there is an approach! On one occasion where I hadn’t planned out an approach I ended up staring at the question for 10 minutes and didn’t make any progress at all.
e.g. one exercise was to calculate the 5th percentile of a posterior distribution which I flailed around with for 15 minutes before giving up. Watching back on the video it was obvious that I hadn’t completely understood what a probability mass function was. I read the Wikipedia entry and retried the exercise and this time got the answer.
One of the suggestions that Ericsson makes is to focus on ‘technique’ during practice sessions rather than only on outcome, but I haven’t yet been able to work out exactly what that would involve in a programming context.
If you have any ideas or thoughts on this approach do let me know in the comments.
In my continued reading of Think Bayes the next problem to tackle is the Locomotive problem which is defined thus:
A railroad numbers its locomotives in order 1..N.
One day you see a locomotive with the number 60. Estimate how many locomotives the railroad has.
The interesting thing about this question is that it initially seems that we don’t have enough information to come up with any sort of answer. However, we can get an estimate if we come up with a prior to work with.
The simplest prior is to assume there’s one railroad operator with between, say, 1 and 1000 locomotives, with each fleet size equally likely.
We can then write code similar to the dice problem’s to update the prior based on the locomotives we’ve seen.
First we’ll create a data frame which captures the product of ‘number of locomotives’ and the observations of locomotives that we’ve seen (in this case we’ve only seen one locomotive, number 60):
library(dplyr)

possibleValues = 1:1000
observations = c(60)

l = list(value = possibleValues, observation = observations)
df = expand.grid(l)

> df %>% head()
  value observation
1     1          60
2     2          60
3     3          60
4     4          60
5     5          60
6     6          60
Next we want to add a column which represents the probability that the observed locomotive could have come from a fleet of a given size. If the fleet size is less than 60 the probability is 0; otherwise it’s 1 / fleetSize:
prior = 1 / length(possibleValues)
df = df %>% mutate(score = ifelse(value < observation, 0, 1/value))

> df %>% sample_n(10)
     value observation       score
179    179          60 0.005586592
1001  1001          60 0.000999001
400    400          60 0.002500000
438    438          60 0.002283105
667    667          60 0.001499250
661    661          60 0.001512859
284    284          60 0.003521127
233    233          60 0.004291845
917    917          60 0.001090513
173    173          60 0.005780347
To find the probability of each fleet size we write the following code:
weightedDf = df %>%
  group_by(value) %>%
  summarise(aggScore = prior * prod(score)) %>%
  ungroup() %>%
  mutate(weighted = aggScore / sum(aggScore))

> weightedDf %>% sample_n(10)
Source: local data frame [10 x 3]

   value     aggScore     weighted
1    906 1.102650e-06 0.0003909489
2    262 3.812981e-06 0.0013519072
3    994 1.005031e-06 0.0003563377
4    669 1.493275e-06 0.0005294465
5    806 1.239455e-06 0.0004394537
6    673 1.484400e-06 0.0005262997
7    416 2.401445e-06 0.0008514416
8    624 1.600963e-06 0.0005676277
9     40 0.000000e+00 0.0000000000
10   248 4.028230e-06 0.0014282246
Let’s plot the data frame to see how the probability varies for each fleet size:
library(ggplot2)
ggplot(aes(x = value, y = weighted), data = weightedDf) +
  geom_line(color = "dark blue")
Based on this chart the most likely fleet size is 60, but an alternative is to take the mean of the posterior, which we can calculate like so:
> weightedDf %>% mutate(mean = value * weighted) %>% select(mean) %>% sum()
[1] 333.6561
Now let’s create a function with all that code in so we can play around with some different priors and observations:
meanOfPosterior = function(values, observations) {
  l = list(value = values, observation = observations)
  df = expand.grid(l) %>% mutate(score = ifelse(value < observation, 0, 1/value))
  prior = 1 / length(values)

  weightedDf = df %>%
    group_by(value) %>%
    summarise(aggScore = prior * prod(score)) %>%
    ungroup() %>%
    mutate(weighted = aggScore / sum(aggScore))

  return(weightedDf %>% mutate(mean = value * weighted) %>% select(mean) %>% sum())
}
If we update our observed railroads to have numbers 60, 30 and 90 we’d get the following means of posteriors assuming different priors:
> meanOfPosterior(1:500, c(60, 30, 90))
[1] 151.8496
> meanOfPosterior(1:1000, c(60, 30, 90))
[1] 164.3056
> meanOfPosterior(1:2000, c(60, 30, 90))
[1] 171.3382
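As a sanity check, the same calculation can be mirrored in plain Python. This is just a sketch that assumes the uniform prior (which, being constant, cancels out when we normalise):

```python
def posterior_mean(values, observations):
    # weight for fleet size v: product over observations of (0 if v < n else 1/v)
    weights = []
    for v in values:
        w = 1.0
        for n in observations:
            w = 0.0 if v < n else w * (1.0 / v)
        weights.append(w)
    total = sum(weights)
    # expected value of the normalised posterior
    return sum(v * w for v, w in zip(values, weights)) / total

print(posterior_mean(range(1, 1001), [60, 30, 90]))  # ≈ 164.3056
```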
At the moment the function assumes a uniform prior, i.e. every fleet size is equally likely, but we might want to vary the prior to see how different assumptions influence the posterior.
We can refactor the function to take in values & priors instead of calculating the priors in the function:
meanOfPosterior = function(values, priors, observations) {
  priorDf = data.frame(value = values, prior = priors)
  l = list(value = priorDf$value, observation = observations)

  df = merge(expand.grid(l), priorDf, by.x = "value", by.y = "value") %>%
    mutate(score = ifelse(value < observation, 0, 1 / value))

  df %>%
    group_by(value) %>%
    summarise(aggScore = max(prior) * prod(score)) %>%
    ungroup() %>%
    mutate(weighted = aggScore / sum(aggScore)) %>%
    mutate(mean = value * weighted) %>%
    select(mean) %>%
    sum()
}
Now let’s check we get the same posterior means for the uniform priors:
> meanOfPosterior(1:500, 1/length(1:500), c(60, 30, 90))
[1] 151.8496
> meanOfPosterior(1:1000, 1/length(1:1000), c(60, 30, 90))
[1] 164.3056
> meanOfPosterior(1:2000, 1/length(1:2000), c(60, 30, 90))
[1] 171.3382
Now, instead of a uniform prior, let's use a power law prior which assumes that smaller fleets are more likely:
> meanOfPosterior(1:500, sapply(1:500, function(x) x ** -1), c(60, 30, 90))
[1] 130.7085
> meanOfPosterior(1:1000, sapply(1:1000, function(x) x ** -1), c(60, 30, 90))
[1] 133.2752
> meanOfPosterior(1:2000, sapply(1:2000, function(x) x ** -1), c(60, 30, 90))
[1] 133.9975
> meanOfPosterior(1:5000, sapply(1:5000, function(x) x ** -1), c(60, 30, 90))
[1] 134.212
> meanOfPosterior(1:10000, sapply(1:10000, function(x) x ** -1), c(60, 30, 90))
[1] 134.2435
This time the posterior means converge on roughly 134 regardless of the upper bound, so that's our best estimate.
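The refactored function is easy to mirror in Python too. Here's a sketch (with hypothetical names) that takes the prior as an argument, which lets us compare the uniform and power law assumptions directly:

```python
def posterior_mean(values, priors, observations):
    # weight for fleet size v: prior(v) times the likelihood of each observation
    weights = []
    for v, prior in zip(values, priors):
        w = prior
        for n in observations:
            w = 0.0 if v < n else w * (1.0 / v)
        weights.append(w)
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total

values = range(1, 2001)
uniform = [1.0 / len(values)] * len(values)
power_law = [1.0 / v for v in values]

print(posterior_mean(values, uniform, [60, 30, 90]))    # ~171.34
print(posterior_mean(values, power_law, [60, 30, 90]))  # ~134.00
```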
In my last blog post I showed how to derive posterior probabilities for the Think Bayes dice problem:
Suppose I have a box of dice that contains a 4-sided die, a 6-sided die, an 8-sided die, a 12-sided die, and a 20-sided die. If you have ever played Dungeons & Dragons, you know what I am talking about.
Suppose I select a die from the box at random, roll it, and get a 6. What is the probability that I rolled each die?
To recap, this was my final solution:
likelihoods = function(names, observations) {
  scores = rep(1.0 / length(names), length(names))
  names(scores) = names

  for(name in names) {
    for(observation in observations) {
      if(name < observation) {
        scores[paste(name)] = 0
      } else {
        scores[paste(name)] = scores[paste(name)] * (1.0 / name)
      }
    }
  }
  return(scores)
}

dice = c(4,6,8,12,20)
l1 = likelihoods(dice, c(6))

> l1 / sum(l1)
        4         6         8        12        20
0.0000000 0.3921569 0.2941176 0.1960784 0.1176471
Although it works, the nested for loops aren't very idiomatic R, so let's try to get rid of them.
The first thing we want to do is return a data frame rather than a vector so we tweak the first two lines to read like this:
scores = rep(1.0 / length(names), length(names))
df = data.frame(score = scores, name = names)
Next we can get rid of the inner for loop and replace it with a call to ifelse wrapped inside a dplyr mutate call:
library(dplyr)

likelihoods2 = function(names, observations) {
  scores = rep(1.0 / length(names), length(names))
  df = data.frame(score = scores, name = names)

  for(observation in observations) {
    df = df %>% mutate(score = ifelse(name < observation, 0, score * (1.0 / name)))
  }
  return(df)
}

dice = c(4,6,8,12,20)
l1 = likelihoods2(dice, c(6))

> l1
       score name
1 0.00000000    4
2 0.03333333    6
3 0.02500000    8
4 0.01666667   12
5 0.01000000   20
Finally we’ll tidy up the scores so they’re relatively weighted against each other:
likelihoods2 = function(names, observations) {
  scores = rep(1.0 / length(names), length(names))
  df = data.frame(score = scores, name = names)

  for(observation in observations) {
    df = df %>% mutate(score = ifelse(name < observation, 0, score * (1.0 / name)))
  }
  return(df %>% mutate(weighted = score / sum(score)) %>% select(name, weighted))
}

dice = c(4,6,8,12,20)
l1 = likelihoods2(dice, c(6))

> l1
  name  weighted
1    4 0.0000000
2    6 0.3921569
3    8 0.2941176
4   12 0.1960784
5   20 0.1176471
Now we’re down to just the one for loop. Getting rid of that one is a bit trickier. First we’ll create a data frame which contains a row for every (observation, dice) pair, simulating the nested for loops:
likelihoods3 = function(names, observations) {
  l = list(observation = observations, roll = names)
  obsDf = do.call(expand.grid, l) %>%
    mutate(likelihood = 1.0 / roll,
           score = ifelse(roll < observation, 0, likelihood))
  return(obsDf)
}

dice = c(4,6,8,12,20)
l1 = likelihoods3(dice, c(6))

> l1
  observation roll likelihood      score
1           6    4 0.25000000 0.00000000
2           6    6 0.16666667 0.16666667
3           6    8 0.12500000 0.12500000
4           6   12 0.08333333 0.08333333
5           6   20 0.05000000 0.05000000

l2 = likelihoods3(dice, c(6, 4, 8, 7, 7, 2))

> l2
   observation roll likelihood      score
1            6    4 0.25000000 0.00000000
2            4    4 0.25000000 0.25000000
3            8    4 0.25000000 0.00000000
4            7    4 0.25000000 0.00000000
5            7    4 0.25000000 0.00000000
6            2    4 0.25000000 0.25000000
7            6    6 0.16666667 0.16666667
8            4    6 0.16666667 0.16666667
9            8    6 0.16666667 0.00000000
10           7    6 0.16666667 0.00000000
11           7    6 0.16666667 0.00000000
12           2    6 0.16666667 0.16666667
13           6    8 0.12500000 0.12500000
14           4    8 0.12500000 0.12500000
15           8    8 0.12500000 0.12500000
16           7    8 0.12500000 0.12500000
17           7    8 0.12500000 0.12500000
18           2    8 0.12500000 0.12500000
19           6   12 0.08333333 0.08333333
20           4   12 0.08333333 0.08333333
21           8   12 0.08333333 0.08333333
22           7   12 0.08333333 0.08333333
23           7   12 0.08333333 0.08333333
24           2   12 0.08333333 0.08333333
25           6   20 0.05000000 0.05000000
26           4   20 0.05000000 0.05000000
27           8   20 0.05000000 0.05000000
28           7   20 0.05000000 0.05000000
29           7   20 0.05000000 0.05000000
30           2   20 0.05000000 0.05000000
Now we need to iterate over the data frame, grouping by ‘roll’ so that we end up with one row for each one.
We’ll add a new column which stores the posterior probability for each die, calculated by multiplying the prior probability by the product of the ‘score’ entries.
This is what our new likelihood function looks like:
likelihoods3 = function(names, observations) {
  l = list(observation = observations, roll = names)
  obsDf = do.call(expand.grid, l) %>%
    mutate(likelihood = 1.0 / roll,
           score = ifelse(roll < observation, 0, likelihood))

  return(obsDf %>%
    group_by(roll) %>%
    summarise(s = (1.0 / length(names)) * prod(score)) %>%
    ungroup() %>%
    mutate(weighted = s / sum(s)) %>%
    select(roll, weighted))
}

l1 = likelihoods3(dice, c(6))

> l1
Source: local data frame [5 x 2]

  roll  weighted
1    4 0.0000000
2    6 0.3921569
3    8 0.2941176
4   12 0.1960784
5   20 0.1176471

l2 = likelihoods3(dice, c(6, 4, 8, 7, 7, 2))

> l2
Source: local data frame [5 x 2]

  roll    weighted
1    4 0.000000000
2    6 0.000000000
3    8 0.915845272
4   12 0.080403426
5   20 0.003751302
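The posterior numbers can be cross-checked with a compact Python sketch (the dice_posterior helper name is hypothetical), multiplying the per-roll likelihoods for each die and normalising:

```python
from math import prod

def dice_posterior(dice, observations):
    # likelihood of a roll given an n-sided die: 0 if the roll exceeds n, else 1/n
    scores = {d: prod(0.0 if roll > d else 1.0 / d for roll in observations)
              for d in dice}
    total = sum(scores.values())
    return {d: s / total for d, s in scores.items()}

posterior = dice_posterior([4, 6, 8, 12, 20], [6, 4, 8, 7, 7, 2])
print(posterior[8])  # ≈ 0.9158, matching the weighted column above
```

The uniform prior drops out of the normalisation, which is why it doesn't appear explicitly here.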
We’ve now got the same result as we did with our nested for loops so I think the refactoring has been a success.
Last week I described how I’ve been creating fake dictionaries in R using lists and I found myself using the same structure while solving the dice problem in Think Bayes.
The dice problem is described as follows:
Suppose I have a box of dice that contains a 4-sided die, a 6-sided die, an 8-sided die, a 12-sided die, and a 20-sided die. If you have ever played Dungeons & Dragons, you know what I am talking about.
Suppose I select a die from the box at random, roll it, and get a 6.
What is the probability that I rolled each die?
Here’s a simple example of the named vector that I started with:
dice = c(4,6,8,12,20)
priors = rep(1.0 / length(dice), length(dice))
names(priors) = dice

> priors
  4   6   8  12  20
0.2 0.2 0.2 0.2 0.2
I wanted to retrieve the prior for the 8-sided die, which I tried to do like this:
> priors[8]
<NA>
  NA
That comes back with ‘NA’ because indexing with a number looks up the 8th position (which doesn’t exist here) rather than the entry named ‘8’.
As far as I understand if we want to look up values by name we have to use a string so I tweaked the code to store names as strings:
dice = c(4,6,8,12,20)
priors = rep(1.0 / length(dice), length(dice))
names(priors) = sapply(dice, paste)

> priors["8"]
  8
0.2
That works much better, but with some experimentation I realised I didn't even need to run 'dice' through the sapply function; it already worked as it was:
dice = c(4,6,8,12,20)
priors = rep(1.0 / length(dice), length(dice))
names(priors) = dice

> priors["8"]
  8
0.2
Now that we’ve got that working we can write a likelihood function which takes in observed dice rolls and tells us how likely it is that we rolled each type of die. We start simple by copying the above code into a function:
likelihoods = function(names, observations) {
  scores = rep(1.0 / length(names), length(names))
  names(scores) = names
  return(scores)
}

dice = c(4,6,8,12,20)
l1 = likelihoods(dice, c(6))

> l1
  4   6   8  12  20
0.2 0.2 0.2 0.2 0.2
Next we’ll update the score for a particular die to 0 if one of the observed rolls is greater than that die’s maximum value:
likelihoods = function(names, observations) {
  scores = rep(1.0 / length(names), length(names))
  names(scores) = names

  for(name in names) {
    for(observation in observations) {
      if(name < observation) {
        scores[paste(name)] = 0
      }
    }
  }
  return(scores)
}

dice = c(4,6,8,12,20)
l1 = likelihoods(dice, c(6))

> l1
  4   6   8  12  20
0.0 0.2 0.2 0.2 0.2
The 4-sided die has been ruled out since we’ve rolled a 6! Now let’s put in the else condition, which multiplies each score by the probability of getting the observed roll with each valid die, i.e. we have a 1/20 chance of rolling any particular number with the 20-sided die, a 1/8 chance with the 8-sided die, and so on.
likelihoods = function(names, observations) {
  scores = rep(1.0 / length(names), length(names))
  names(scores) = names

  for(name in names) {
    for(observation in observations) {
      if(name < observation) {
        scores[paste(name)] = 0
      } else {
        scores[paste(name)] = scores[paste(name)] * (1.0 / name)
      }
    }
  }
  return(scores)
}

dice = c(4,6,8,12,20)
l1 = likelihoods(dice, c(6))

> l1
         4          6          8         12         20
0.00000000 0.03333333 0.02500000 0.01666667 0.01000000
And finally let’s normalise those scores so they’re a bit more readable:
> l1 / sum(l1)
        4         6         8        12        20
0.0000000 0.3921569 0.2941176 0.1960784 0.1176471
When debugging R code, given my Java background, I often find myself trying to print out the state of variables along with an appropriate piece of text like this:
names = c(1,2,3,4,5,6)

> print("names: " + names)
Error in "names: " + names : non-numeric argument to binary operator
We might try this next:
> print("names: ", names)
[1] "names: "
which doesn’t actually print the names variable – only the first argument to the print function is printed.
We’ll find more success with the paste function:
> print(paste("names: ", names))
[1] "names: 1" "names: 2" "names: 3" "names: 4" "names: 5" "names: 6"
This is an improvement but it repeats the ‘names:’ prefix multiple times which isn’t what we want. Introducing the toString function gets us over the line:
> print(paste("names: ", toString(names)))
[1] "names: 1, 2, 3, 4, 5, 6"
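For comparison only, the equivalent idiom in Python is a str.join over the stringified values:

```python
names = [1, 2, 3, 4, 5, 6]
print("names: " + ", ".join(map(str, names)))  # names: 1, 2, 3, 4, 5, 6
```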
In my last blog post I showed the translation of a likelihood function from Think Bayes into R and in my first attempt at this function I used a couple of nested for loops.
likelihoods = function(names, mixes, observations) {
  scores = rep(1, length(names))
  names(scores) = names

  for(name in names) {
    for(observation in observations) {
      scores[name] = scores[name] * mixes[[name]][observation]
    }
  }
  return(scores)
}
Names = c("Bowl 1", "Bowl 2")

bowl1Mix = c(0.75, 0.25)
names(bowl1Mix) = c("vanilla", "chocolate")
bowl2Mix = c(0.5, 0.5)
names(bowl2Mix) = c("vanilla", "chocolate")
Mixes = list("Bowl 1" = bowl1Mix, "Bowl 2" = bowl2Mix)

Observations = c("vanilla", "vanilla", "vanilla", "chocolate")

l = likelihoods(Names, Mixes, Observations)

> l / sum(l)
  Bowl 1   Bowl 2
0.627907 0.372093
We pass in a vector of bowls, a nested dictionary describing the mixes of cookies in each bowl, and the observations that we’ve made. The function tells us that there’s an almost 2/3 probability of the cookies coming from Bowl 1 and just over 1/3 of them coming from Bowl 2.
In this case there probably won’t be much of a performance improvement by getting rid of the loops but we should be able to write something that’s more concise and hopefully idiomatic.
Let’s start by getting rid of the inner for loop. That can be replaced by a call to the Reduce function like so:
likelihoods2 = function(names, mixes, observations) {
  scores = rep(0, length(names))
  names(scores) = names

  for(name in names) {
    scores[name] = Reduce(function(acc, observation) acc * mixes[[name]][observation],
                          observations, 1)
  }
  return(scores)
}
l2 = likelihoods2(Names, Mixes, Observations)

> l2 / sum(l2)
  Bowl 1   Bowl 2
0.627907 0.372093
So that’s good, we’ve still got the same probabilities as before. Now to get rid of the outer for loop. The Map function helps us out here:
likelihoods3 = function(names, mixes, observations) {
  scores = Map(function(name)
                 Reduce(function(acc, observation) acc * mixes[[name]][observation],
                        observations, 1),
               names)
  return(scores)
}

l3 = likelihoods3(Names, Mixes, Observations)

> l3
$`Bowl 1`
  vanilla
0.1054688

$`Bowl 2`
vanilla
 0.0625
We end up with a list instead of a vector which we need to fix by using the unlist function:
likelihoods3 = function(names, mixes, observations) {
  scores = Map(function(name)
                 Reduce(function(acc, observation) acc * mixes[[name]][observation],
                        observations, 1),
               names)
  return(unlist(scores))
}

l3 = likelihoods3(Names, Mixes, Observations)

> l3 / sum(l3)
Bowl 1.vanilla Bowl 2.vanilla
      0.627907       0.372093
Now we just have this annoying ‘vanilla’ in the name. That’s fixed easily enough:
likelihoods3 = function(names, mixes, observations) {
  scores = Map(function(name)
                 Reduce(function(acc, observation) acc * mixes[[name]][observation],
                        observations, 1),
               names)
  result = unlist(scores)
  names(result) = names
  return(result)
}

l3 = likelihoods3(Names, Mixes, Observations)

> l3 / sum(l3)
  Bowl 1   Bowl 2
0.627907 0.372093
A slightly cleaner alternative makes use of the sapply function:
likelihoods3 = function(names, mixes, observations) {
  scores = sapply(names, function(name)
    Reduce(function(acc, observation) acc * mixes[[name]][observation],
           observations, 1))
  names(scores) = names
  return(scores)
}

l3 = likelihoods3(Names, Mixes, Observations)

> l3 / sum(l3)
  Bowl 1   Bowl 2
0.627907 0.372093
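The Reduce/sapply pairing translates almost word for word into Python's functools.reduce. A sketch, with hypothetical names, using the same bowls and observations:

```python
from functools import reduce

def likelihoods(names, mixes, observations):
    # fold the observations into a single likelihood per bowl
    return {name: reduce(lambda acc, obs: acc * mixes[name][obs], observations, 1.0)
            for name in names}

mixes = {"Bowl 1": {"vanilla": 0.75, "chocolate": 0.25},
         "Bowl 2": {"vanilla": 0.5, "chocolate": 0.5}}
observations = ["vanilla", "vanilla", "vanilla", "chocolate"]

l = likelihoods(["Bowl 1", "Bowl 2"], mixes, observations)
total = sum(l.values())
print({name: score / total for name, score in l.items()})
# Bowl 1 ≈ 0.6279, Bowl 2 ≈ 0.3721
```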
That’s the best I’ve got for now but I wonder if we could write a version of this using matrix operations somehow – but that’s for next time!
As I mentioned in a post last week I’ve been reading through Think Bayes and translating some of the examples from Python to R.
After my first post Antonios suggested a more idiomatic way of writing the function in R so I thought I’d give it a try to calculate the probability that combinations of cookies had come from each bowl.
In the simplest case we have this function which takes in the names of the bowls and the likelihood scores:
f = function(names, likelihoods) {
  # Assume each option has an equal prior
  priors = rep(1, length(names)) / length(names)

  # create a data frame with all info you have
  dt = data.frame(names, priors, likelihoods)

  # calculate posterior probabilities
  dt$post = dt$priors * dt$likelihoods / sum(dt$priors * dt$likelihoods)

  # specify what you want the function to return
  list(names = dt$names, priors = dt$priors,
       likelihoods = dt$likelihoods, posteriors = dt$post)
}
We assume a prior probability of 0.5 for each bowl.
Given the following probabilities of different cookies being in each bowl…
mixes = {
  'Bowl 1': dict(vanilla=0.75, chocolate=0.25),
  'Bowl 2': dict(vanilla=0.5, chocolate=0.5),
}
…we can simulate taking one vanilla cookie with the following parameters:
Likelihoods = c(0.75, 0.5)
Names = c("Bowl 1", "Bowl 2")

res = f(Names, Likelihoods)

> res$posteriors[res$name == "Bowl 1"]
[1] 0.6
> res$posteriors[res$name == "Bowl 2"]
[1] 0.4
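The function is doing a standard Bayes table update: multiply each prior by its likelihood and normalise. A minimal Python sketch of the same arithmetic (hypothetical names):

```python
def posteriors(priors, likelihoods):
    # multiply each prior by its likelihood, then normalise so they sum to 1
    unnormalised = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnormalised)
    return [u / total for u in unnormalised]

print(posteriors([0.5, 0.5], [0.75, 0.5]))  # [0.6, 0.4]
```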
If we want to simulate taking 3 vanilla cookies and 1 chocolate one we’d have the following:
Likelihoods = c((0.75 ** 3) * (0.25 ** 1), (0.5 ** 3) * (0.5 ** 1))
Names = c("Bowl 1", "Bowl 2")

res = f(Names, Likelihoods)

> res$posteriors[res$name == "Bowl 1"]
[1] 0.627907
> res$posteriors[res$name == "Bowl 2"]
[1] 0.372093
That’s a bit clunky and the intent of ‘3 vanilla cookies and 1 chocolate’ has been lost. I decided to refactor the code to take in a vector of cookies and calculate the likelihoods internally.
First we need to create a data structure to store the mixes of cookies in each bowl that we defined above. It turns out we can do this using a nested list:
bowl1Mix = c(0.75, 0.25)
names(bowl1Mix) = c("vanilla", "chocolate")
bowl2Mix = c(0.5, 0.5)
names(bowl2Mix) = c("vanilla", "chocolate")
Mixes = list("Bowl 1" = bowl1Mix, "Bowl 2" = bowl2Mix)

> Mixes
$`Bowl 1`
  vanilla chocolate
     0.75      0.25

$`Bowl 2`
  vanilla chocolate
      0.5       0.5
Now let’s tweak our function to take in observations rather than likelihoods and then calculate those likelihoods internally:
likelihoods = function(names, mixes, observations) {
  scores = c(1, 1)
  names(scores) = names

  for(name in names) {
    for(observation in observations) {
      scores[name] = scores[name] * mixes[[name]][observation]
    }
  }
  return(scores)
}

f = function(names, mixes, observations) {
  # Assume each option has an equal prior
  priors = rep(1, length(names)) / length(names)

  # create a data frame with all info you have
  dt = data.frame(names, priors)
  dt$likelihoods = likelihoods(names, mixes, observations)

  # calculate posterior probabilities
  dt$post = dt$priors * dt$likelihoods / sum(dt$priors * dt$likelihoods)

  # specify what you want the function to return
  list(names = dt$names, priors = dt$priors,
       likelihoods = dt$likelihoods, posteriors = dt$post)
}
And if we call that function:
Names = c("Bowl 1", "Bowl 2")

bowl1Mix = c(0.75, 0.25)
names(bowl1Mix) = c("vanilla", "chocolate")
bowl2Mix = c(0.5, 0.5)
names(bowl2Mix) = c("vanilla", "chocolate")
Mixes = list("Bowl 1" = bowl1Mix, "Bowl 2" = bowl2Mix)

Observations = c("vanilla", "vanilla", "vanilla", "chocolate")

res = f(Names, Mixes, Observations)

> res$posteriors[res$names == "Bowl 1"]
[1] 0.627907
> res$posteriors[res$names == "Bowl 2"]
[1] 0.372093
Exactly the same result as before! #win
About a year ago Ian pointed me at a Chicago Crime data set which seemed like a good fit for Neo4j and after much procrastination I’ve finally got around to importing it.
The data set covers crimes committed from 2001 until now. It contains around 4 million crimes along with metadata such as the location, type of crime and year.
The contents of the file follow this structure:
$ head -n 10 ~/Downloads/Crimes_-_2001_to_present.csv

ID,Case Number,Date,Block,IUCR,Primary Type,Description,Location Description,Arrest,Domestic,Beat,District,Ward,Community Area,FBI Code,X Coordinate,Y Coordinate,Year,Updated On,Latitude,Longitude,Location
9464711,HX114160,01/14/2014 05:00:00 AM,028XX E 80TH ST,0560,ASSAULT,SIMPLE,APARTMENT,false,true,0422,004,7,46,08A,1196652,1852516,2014,01/20/2014 12:40:05 AM,41.75017626412204,-87.55494559131228,"(41.75017626412204, -87.55494559131228)"
9460704,HX113741,01/14/2014 04:55:00 AM,091XX S JEFFERY AVE,031A,ROBBERY,ARMED: HANDGUN,SIDEWALK,false,false,0413,004,8,48,03,1191060,1844959,2014,01/18/2014 12:39:56 AM,41.729576153145636,-87.57568059471686,"(41.729576153145636, -87.57568059471686)"
9460339,HX113740,01/14/2014 04:44:00 AM,040XX W MAYPOLE AVE,1310,CRIMINAL DAMAGE,TO PROPERTY,RESIDENCE,false,true,1114,011,28,26,14,1149075,1901099,2014,01/16/2014 12:40:00 AM,41.884543798701515,-87.72803579358926,"(41.884543798701515, -87.72803579358926)"
9461467,HX114463,01/14/2014 04:43:00 AM,059XX S CICERO AVE,0820,THEFT,$500 AND UNDER,PARKING LOT/GARAGE(NON.RESID.),false,false,0813,008,13,64,06,1145661,1865031,2014,01/16/2014 12:40:00 AM,41.785633535413176,-87.74148516669783,"(41.785633535413176, -87.74148516669783)"
9460355,HX113738,01/14/2014 04:21:00 AM,070XX S PEORIA ST,0820,THEFT,$500 AND UNDER,STREET,true,false,0733,007,17,68,06,1171480,1858195,2014,01/16/2014 12:40:00 AM,41.766348042591375,-87.64702037047671,"(41.766348042591375, -87.64702037047671)"
9461140,HX113909,01/14/2014 03:17:00 AM,016XX W HUBBARD ST,0610,BURGLARY,FORCIBLE ENTRY,COMMERCIAL / BUSINESS OFFICE,false,false,1215,012,27,24,05,1165029,1903111,2014,01/16/2014 12:40:00 AM,41.889741146006095,-87.66939334853973,"(41.889741146006095, -87.66939334853973)"
9460361,HX113731,01/14/2014 03:12:00 AM,022XX S WENTWORTH AVE,0820,THEFT,$500 AND UNDER,CTA TRAIN,false,false,0914,009,25,34,06,1175363,1889525,2014,01/20/2014 12:40:05 AM,41.85223460427207,-87.63185047834335,"(41.85223460427207, -87.63185047834335)"
9461691,HX114506,01/14/2014 03:00:00 AM,087XX S COLFAX AVE,0650,BURGLARY,HOME INVASION,RESIDENCE,false,false,0423,004,7,46,05,1195052,1847362,2014,01/17/2014 12:40:17 AM,41.73607283858007,-87.56097809501115,"(41.73607283858007, -87.56097809501115)"
9461792,HX114824,01/14/2014 03:00:00 AM,012XX S CALIFORNIA BLVD,0810,THEFT,OVER $500,STREET,false,false,1023,010,28,29,06,1157929,1894034,2014,01/17/2014 12:40:17 AM,41.86498077118534,-87.69571529596696,"(41.86498077118534, -87.69571529596696)"
Since I wanted to import this into Neo4j I needed to massage the data a bit, as the neo4j-import tool expects to receive CSV files containing the nodes and relationships we want to create.
I’d been looking at Spark towards the end of last year and the pre-processing of the big initial file into smaller CSV files containing nodes and relationships seemed like a good fit.
I therefore needed to create a Spark job to do this. We’ll then pass this job to a Spark executor running locally and it will spit out CSV files.
We start by creating a Scala object with a main method that will contain our processing code. Inside that main method we’ll instantiate a Spark context:
import org.apache.spark.{SparkConf, SparkContext}

object GenerateCSVFiles {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Chicago Crime Dataset")
    val sc = new SparkContext(conf)
  }
}
Easy enough. Next we’ll read in the CSV file. I found the easiest way to reference this was with an environment variable but perhaps there’s a more idiomatic way:
import java.io.File

import org.apache.spark.{SparkConf, SparkContext}

object GenerateCSVFiles {
  def main(args: Array[String]) {
    var crimeFile = System.getenv("CSV_FILE")

    if(crimeFile == null || !new File(crimeFile).exists()) {
      throw new RuntimeException("Cannot find CSV file [" + crimeFile + "]")
    }

    println("Using %s".format(crimeFile))

    val conf = new SparkConf().setAppName("Chicago Crime Dataset")
    val sc = new SparkContext(conf)
    val crimeData = sc.textFile(crimeFile).cache()
  }
}
The type of crimeData is RDD[String] – Spark’s way of representing the (lazily evaluated) lines of the CSV file. This also includes the header of the file so let’s write a function to get rid of that since we’ll be generating our own headers for the different files:
import org.apache.spark.rdd.RDD

// http://mail-archives.apache.org/mod_mbox/spark-user/201404.mbox/%3CCAEYYnxYuEaie518ODdn-fR7VvD39d71=CgB_Dxw_4COVXgmYYQ@mail.gmail.com%3E
def dropHeader(data: RDD[String]): RDD[String] = {
  data.mapPartitionsWithIndex((idx, lines) => {
    if (idx == 0) {
      lines.drop(1)
    }
    lines
  })
}
Now we’re ready to start generating our new CSV files so we’ll write a function which parses each line and extracts the appropriate columns. I’m using Open CSV for this:
import au.com.bytecode.opencsv.CSVParser

def generateFile(file: String, withoutHeader: RDD[String], fn: Array[String] => Array[String],
                 header: String, distinct: Boolean = true, separator: String = ",") = {
  FileUtil.fullyDelete(new File(file))
  val tmpFile = "/tmp/" + System.currentTimeMillis() + "-" + file
  val rows: RDD[String] = withoutHeader.mapPartitions(lines => {
    val parser = new CSVParser(',')
    lines.map(line => {
      val columns = parser.parseLine(line)
      fn(columns).mkString(separator)
    })
  })
  if (distinct) rows.distinct().saveAsTextFile(tmpFile) else rows.saveAsTextFile(tmpFile)
}
We then call this function like this:
generateFile("/tmp/crimes.csv", withoutHeader, columns => Array(columns(0),"Crime", columns(2), columns(6)), "id:ID(Crime),:LABEL,date,description", false)
The output into ‘tmpFile’ is actually 32 ‘part files’ but I wanted to be able to merge those together into individual CSV files that were easier to work with.
I won’t paste the full job here but if you want to take a look it’s on github.
Now we need to submit the job to Spark. I’ve wrapped this in a script if you want to follow along but these are the contents:
./spark-1.1.0-bin-hadoop1/bin/spark-submit \
  --driver-memory 5g \
  --class GenerateCSVFiles \
  --master local[8] \
  target/scala-2.10/playground_2.10-1.0.jar \
  $@
If we execute that we’ll see the following output:
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Using Crimes_-_2001_to_present.csv
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/04/15 00:31:44 INFO SparkContext: Running Spark version 1.3.0
...
15/04/15 00:47:26 INFO TaskSchedulerImpl: Removed TaskSet 8.0, whose tasks have all completed, from pool
15/04/15 00:47:26 INFO DAGScheduler: Stage 8 (saveAsTextFile at GenerateCSVFiles.scala:51) finished in 2.702 s
15/04/15 00:47:26 INFO DAGScheduler: Job 4 finished: saveAsTextFile at GenerateCSVFiles.scala:51, took 8.715588 s

real    0m44.935s
user    4m2.259s
sys     0m14.159s
and these CSV files will be generated:
$ ls -alh /tmp/*.csv
-rwxrwxrwx  1 markneedham  wheel  3.0K 14 Apr 07:37 /tmp/beats.csv
-rwxrwxrwx  1 markneedham  wheel  217M 14 Apr 07:37 /tmp/crimes.csv
-rwxrwxrwx  1 markneedham  wheel   84M 14 Apr 07:37 /tmp/crimesBeats.csv
-rwxrwxrwx  1 markneedham  wheel  120M 14 Apr 07:37 /tmp/crimesPrimaryTypes.csv
-rwxrwxrwx  1 markneedham  wheel  912B 14 Apr 07:37 /tmp/primaryTypes.csv
Let’s have a quick look at what they contain:
$ head -n 10 /tmp/beats.csv
id:ID(Beat),:LABEL
1135,Beat
1421,Beat
2312,Beat
1113,Beat
1014,Beat
2411,Beat
1333,Beat
2521,Beat
1652,Beat
$ head -n 10 /tmp/crimes.csv
id:ID(Crime),:LABEL,date,description
9464711,Crime,01/14/2014 05:00:00 AM,SIMPLE
9460704,Crime,01/14/2014 04:55:00 AM,ARMED: HANDGUN
9460339,Crime,01/14/2014 04:44:00 AM,TO PROPERTY
9461467,Crime,01/14/2014 04:43:00 AM,$500 AND UNDER
9460355,Crime,01/14/2014 04:21:00 AM,$500 AND UNDER
9461140,Crime,01/14/2014 03:17:00 AM,FORCIBLE ENTRY
9460361,Crime,01/14/2014 03:12:00 AM,$500 AND UNDER
9461691,Crime,01/14/2014 03:00:00 AM,HOME INVASION
9461792,Crime,01/14/2014 03:00:00 AM,OVER $500
$ head -n 10 /tmp/crimesBeats.csv
:START_ID(Crime),:END_ID(Beat),:TYPE
5896915,0733,ON_BEAT
9208776,2232,ON_BEAT
8237555,0111,ON_BEAT
6464775,0322,ON_BEAT
6468868,0411,ON_BEAT
4189649,0524,ON_BEAT
7620897,0421,ON_BEAT
7720402,0321,ON_BEAT
5053025,1115,ON_BEAT
Looking good. Let’s get them imported into Neo4j:
$ ./neo4j-community-2.2.0/bin/neo4j-import --into /tmp/my-neo --nodes /tmp/crimes.csv --nodes /tmp/beats.csv --nodes /tmp/primaryTypes.csv --relationships /tmp/crimesBeats.csv --relationships /tmp/crimesPrimaryTypes.csv
Nodes                          4M Done in 5s 605ms
Prepare node index             4M Done in 4s 930ms
Calculate dense nodes          8M Done in 5s 417ms
Relationships                  8M Done in 6s 62ms
Node --> Relationship          4M Done in 324ms
Relationship --> Relationship  8M Done in 1s 984ms
Node counts                    4M Done in 360ms
Relationship counts            8M Done in 653ms
IMPORT DONE in 26s 517ms
Next I updated conf/neo4j-server.properties to point to my new database:
#***************************************************************
# Server configuration
#***************************************************************

# location of the database directory
#org.neo4j.server.database.location=data/graph.db
org.neo4j.server.database.location=/tmp/my-neo
Now I can start up Neo and start exploring the data:
$ ./neo4j-community-2.2.0/bin/neo4j start
MATCH (:Crime)-[r:CRIME_TYPE]->() RETURN r LIMIT 10
There are lots more relationships and entities that we could pull out of this data set – what I’ve done is just a start. So if you’re up for some more Chicago crime exploration the code and instructions explaining how to run it are on github.
I’ve been working through Alan Downey’s Thinking Bayes and I thought it’d be an interesting exercise to translate some of the code from Python to R.
The first example is a simple one about conditional probability and the author creates a class ‘PMF’ (Probability Mass Function) to solve the following problem:
Suppose there are two bowls of cookies. Bowl 1 contains 30 vanilla cookies and 10 chocolate cookies. Bowl 2 contains 20 of each.
Now suppose you choose one of the bowls at random and, without looking, select a cookie at random. The cookie is vanilla.
What is the probability that it came from Bowl 1?
In Python the code looks like this:
pmf = Pmf()
pmf.Set('Bowl 1', 0.5)
pmf.Set('Bowl 2', 0.5)
pmf.Mult('Bowl 1', 0.75)
pmf.Mult('Bowl 2', 0.5)
pmf.Normalize()
print pmf.Prob('Bowl 1')
The ‘PMF’ class is defined here.
We want to create something similar in R and the actual calculation is straightforward:
pBowl1 = 0.5
pBowl2 = 0.5
pVanillaGivenBowl1 = 0.75
pVanillaGivenBowl2 = 0.5

> (pBowl1 * pVanillaGivenBowl1) / ((pBowl1 * pVanillaGivenBowl1) + (pBowl2 * pVanillaGivenBowl2))
0.6
> (pBowl2 * pVanillaGivenBowl2) / ((pBowl1 * pVanillaGivenBowl1) + (pBowl2 * pVanillaGivenBowl2))
0.4
The problem is we have quite a bit of duplication and it doesn’t read as cleanly as the Python version.
I’m not sure of the idiomatic way of handling mutable state in R for this type of problem but it seems like we can achieve it using functions.
I ended up writing the following function which returns a list of other functions to call.
create.pmf = function() {
  priors <<- c()
  likelihoods <<- c()
  list(
    prior = function(option, probability) {
      l = c(probability)
      names(l) = c(option)
      priors <<- c(priors, l)
    },
    likelihood = function(option, probability) {
      l = c(probability)
      names(l) = c(option)
      likelihoods <<- c(likelihoods, l)
    },
    posterior = function(option) {
      names = names(priors)
      normalised = 0.0
      for(name in names) {
        normalised = normalised + (priors[name] * likelihoods[name])
      }
      (priors[option] * likelihoods[option]) / normalised
    }
  )
}
I couldn’t work out how to get ‘priors’ and ‘likelihoods’ to be lexically scoped so I’ve currently got those defined as global variables. I’m using a list as a kind of dictionary following a suggestion on Stack Overflow.
The code doesn’t handle the unhappy path very well but it seems to work for the example from the book:
pmf = create.pmf()
pmf$prior("Bowl 1", 0.5)
pmf$prior("Bowl 2", 0.5)
pmf$likelihood("Bowl 1", 0.75)
pmf$likelihood("Bowl 2", 0.5)

> pmf$posterior("Bowl 1")
Bowl 1
   0.6
> pmf$posterior("Bowl 2")
Bowl 2
   0.4
How would you solve this type of problem? Is there a cleaner/better way?
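For comparison, the same bookkeeping is quite compact as a small Python class. This is only a sketch of the idea behind the book's Pmf (prior, likelihood, normalise), not the book's actual implementation:

```python
class Pmf:
    """Minimal Bayes-table sketch: prior * likelihood, normalised over options."""

    def __init__(self):
        self.priors = {}
        self.likelihoods = {}

    def prior(self, option, p):
        self.priors[option] = p

    def likelihood(self, option, p):
        self.likelihoods[option] = p

    def posterior(self, option):
        # normalising constant: sum of prior * likelihood over all options
        norm = sum(self.priors[o] * self.likelihoods[o] for o in self.priors)
        return self.priors[option] * self.likelihoods[option] / norm

pmf = Pmf()
pmf.prior("Bowl 1", 0.5)
pmf.prior("Bowl 2", 0.5)
pmf.likelihood("Bowl 1", 0.75)
pmf.likelihood("Bowl 2", 0.5)
print(pmf.posterior("Bowl 1"))  # → 0.6
```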
A few days ago I read a really cool blog post explaining how Markov chains can be used to model the possible state transitions in a game of snakes and ladders, a use of Markov chains I hadn’t even thought of!
While the example is very helpful for understanding the concept, my understanding of the code is that it works off the assumption that any roll of the dice that puts you on a score > 100 is a winning roll.
In the version of the game that I know you have to land exactly on 100 to win. e.g. if you’re on square 98 and roll a 6 you would go forward 2 spaces to 100 and then bounce back 4 spaces to 96.
I thought it’d be a good exercise to tweak the code to cater for this:
n=100

# We have 6 extra columns because we want to represent throwing of the dice
# which results in a final square > 100
M=matrix(0,n+1,n+1+6)
rownames(M)=0:n
colnames(M)=0:(n+6)

# set probabilities of landing on each square assuming that there aren't any snakes or ladders
for(i in 1:6){
  diag(M[,(i+1):(i+1+n)])=1/6
}

# account for 'bounce back' if a dice roll leads to a final score > 100
for(i in 96:100) {
  for(c in 102:107) {
    idx = 101 - (c - 101)
    M[i, idx] = M[i, idx] + M[i, c]
  }
}
We can inspect the last few rows to check that the transition matrix is accurate:
> M[95:100,95:101]
   94        95        96        97        98        99       100
94  0 0.1666667 0.1666667 0.1666667 0.1666667 0.1666667 0.1666667
95  0 0.0000000 0.1666667 0.1666667 0.1666667 0.3333333 0.1666667
96  0 0.0000000 0.0000000 0.1666667 0.3333333 0.3333333 0.1666667
97  0 0.0000000 0.0000000 0.1666667 0.3333333 0.3333333 0.1666667
98  0 0.0000000 0.1666667 0.1666667 0.1666667 0.3333333 0.1666667
99  0 0.1666667 0.1666667 0.1666667 0.1666667 0.1666667 0.1666667
If we’re on the 99th square (the last row) we could roll a 1 and end up on 100, a 2 and end up on 99 (1 forward, 1 back), a 3 and end up on 98 (1 forward, 2 back), a 4 and end up on 97 (1 forward, 3 back), a 5 and end up on 96 (1 forward, 4 back) or a 6 and end up on 95 (1 forward, 5 back). i.e. we can land on 95, 96, 97, 98, 99 or 100 with 1/6 probability.
If we’re on the 96th square (the 3rd row) we could roll a 1 and end up on 97, a 2 and end up on 98, a 3 and end up on 99, a 4 and end up on 100, a 5 and end up on 99 (4 forward, 1 back) or a 6 and end up on 98 (4 forward, 2 back). i.e. we can land on 97 with 1/6 probability, 98 with 2/6 probability, 99 with 2/6 probability or 100 with 1/6 probability.
We could do a similar analysis for the other squares but it seems like the probabilities are being calculated correctly.
Next we can update the matrix with the snakes and ladders. That code stays the same:
# get rid of the extra columns, we don't need them anymore
M=M[,1:(n+1)]

# add in the snakes and ladders
starting = c(4,9,17,20,28,40,51,54,62,64,63,71,93,95,92)
ending = c(14,31,7,38,84,59,67,34,19,60,81,91,73,75,78)

for(i in 1:length(starting)) {
  # Retrieve current probabilities of landing on the starting square
  v=M[,starting[i]+1]
  ind=which(v>0)

  # Set no probability of falling on the starting squares
  M[ind,starting[i]+1]=0

  # Move all existing probabilities to the ending squares
  M[ind,ending[i]+1]=M[ind,ending[i]+1]+v[ind]
}
We can also simplify the powermat function which is used to simulate what the board would look like after a certain number of dice rolls:
# original
powermat=function(P,h){
  Ph=P
  if(h>1) {
    for(k in 2:h) {
      Ph=Ph%*%P
    }
  }
  return(Ph)
}

# new
library(expm)
powermat = function(P,h) {
  return (P %^% h)
}
initial=c(1,rep(0,n))
h = 1

> (initial%*%powermat(M,h))[1:15]
     0         1         2         3 4         5         6 7 8 9 10 11 12 13        14
[1,] 0 0.1666667 0.1666667 0.1666667 0 0.1666667 0.1666667 0 0 0  0  0  0  0 0.1666667
One interesting thing I noticed is that it now seems to take way more turns on average to finish the game than when you didn’t need to score exactly 100 to win:

distrib=initial%*%M
game=rep(NA,1000)
for(h in 1:length(game)){
  game[h]=distrib[n+1]
  distrib=distrib%*%M
}
plot(1-game[1:200],type="l",lwd=2,col="red", ylab="Probability to be still playing")

> sum(1 - game)
[1] 999
I expected it to take longer to finish the game but not this long! I think I’ve probably made a mistake but I’m not sure where…
Update: Antonios found the mistake I’d made – once we’re on the 100th square we should stay there with probability 1. i.e. we need to update M like so:
M[101,101] = 1
Now if we visualise the probability that we’re still playing we get a more accurate curve:
distrib=initial%*%M
game=rep(NA,1000)
for(h in 1:length(game)){
  game[h]=distrib[n+1]
  distrib=distrib%*%M
}
plot(1-game[1:200],type="l",lwd=2,col="red", ylab="Probability to be still playing")
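The bounce-back rule and the absorbing 100th square can be sanity-checked independently of R. A small Python sketch of the plain board (ignoring snakes and ladders) built directly from the rule "if you overshoot 100 you bounce back by the excess":

```python
n = 100
M = [[0.0] * (n + 1) for _ in range(n + 1)]  # squares 0..100

for pos in range(n):          # square 100 handled separately below
    for roll in range(1, 7):
        target = pos + roll
        if target > 100:
            target = 200 - target  # bounce back from 100 by the overshoot
        M[pos][target] += 1 / 6

M[100][100] = 1.0  # square 100 is absorbing: once there, we stay there

print([round(p, 4) for p in M[99][95:101]])  # from 99: squares 95..100, 1/6 each
print([round(p, 4) for p in M[96][97:101]])  # from 96: 1/6, 2/6, 2/6, 1/6
```

This reproduces the two rows analysed above: from square 99 every square 95–100 is reachable with probability 1/6, and from square 96 the probabilities are 1/6, 2/6, 2/6, 1/6 for squares 97–100.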
Over the past couple of weeks I’ve been reading about skill building and the break down of skills into more manageable chunks, and recently had a chance to break down the skills required to learn to cycle.
I initially sketched out the skill progression but quickly realised I had drawn a dependency graph and thought that putting it into Neo4j would simplify things.
I started out with the overall goal for cycling which was to ‘Be able to cycle through a public park’:
MERGE (:Goal:Task {name: "Be able to cycle through a public park"})
This goal is easy for someone who’s already learnt to cycle but if we’re starting from scratch it’s a bit daunting so we need to break it down into a simpler skill that we can practice.
The mini algorithm that we’re going to employ for task breakdown is this:
One of the things to keep in mind is that we won’t get the breakdown perfect the first time so we may need to change it. For a diagram drawn on a piece of paper this would be annoying but in Neo4j it’s just a simple refactoring.
Going back to cycling. Since the goal isn’t yet achievable we need to break that down into something a bit easier. Let’s start with something really simple:
MERGE (task:Task {name: "Take a few steps forward while standing over the bike"})
WITH task
MATCH (goal:Goal:Task {name: "Be able to cycle through a public park"})
MERGE (goal)-[:DEPENDS_ON]->(task)
In the first line we create our new task and then we connect it to our goal which we created earlier.
After we’ve got the hang of walking with the bike we want to get comfortable with cycling forward a few rotations while sitting on the bike but to do that we need to be able to get the bike moving from a standing start. We might also have another step where we cycle forward while standing on the bike as that might be slightly easier.
Let’s update our graph:
// First let's get rid of the relationship between our initial task and the goal
MATCH (initialTask:Task {name: "Take a few steps forward while standing over the bike"})
MATCH (goal:Goal {name: "Be able to cycle through a public park"})
MATCH (goal)-[rel:DEPENDS_ON]->(initialTask)
DELETE rel

WITH initialTask, goal,
     ["Get bike moving from standing start", "Cycle forward while standing", "Cycle forward while sitting"] AS newTasks

// Create some nodes for our new tasks
UNWIND newTasks AS newTask
MERGE (t:Task {name: newTask})
WITH initialTask, goal, COLLECT(t) AS newTasks
WITH initialTask, goal, newTasks, newTasks[0] AS firstTask, newTasks[-1] AS lastTask

// Connect the last task to the goal
MERGE (goal)-[:DEPENDS_ON]->(lastTask)

// And the first task to our initial task
MERGE (firstTask)-[:DEPENDS_ON]->(initialTask)

// And all the tasks to each other
FOREACH(i in RANGE(0, length(newTasks) - 2) |
  FOREACH(t1 in [newTasks[i]] |
    FOREACH(t2 in [newTasks[i+1]] |
      MERGE (t2)-[:DEPENDS_ON]->(t1)
    )))
We don’t strictly need to learn how to cycle while standing up – we could just go straight from getting the bike moving to cycling forward while sitting. Let’s update the graph to reflect that:
MATCH (sitting:Task {name: "Cycle forward while sitting"})
MATCH (moving:Task {name: "Get bike moving from standing start"})
MERGE (sitting)-[:DEPENDS_ON]->(moving)
Once we’ve got the hang of those tasks let’s add in a few more to get us closer to our goal:
WITH [
  {skill: "Controlled stop using brakes/feet", dependsOn: "Cycle forward while sitting"},
  {skill: "Steer around stationary objects", dependsOn: "Controlled stop using brakes/feet"},
  {skill: "Steer around people", dependsOn: "Steer around stationary objects"},
  {skill: "Navigate a small circular circuit", dependsOn: "Steer around stationary objects"},
  {skill: "Navigate a loop of a section of the park", dependsOn: "Navigate a small circular circuit"},
  {skill: "Navigate a loop of a section of the park", dependsOn: "Steer around people"},
  {skill: "Be able to cycle through a public park", dependsOn: "Navigate a loop of a section of the park"}
] AS newTasks

FOREACH(newTask in newTasks |
  MERGE (t1:Task {name: newTask.skill})
  MERGE (t2:Task {name: newTask.dependsOn})
  MERGE (t1)-[:DEPENDS_ON]->(t2)
)
Finally let’s get rid of the relationship from our goal to ‘Cycle forward while sitting’ since we’ve replaced that with some intermediate steps:
MATCH (task:Task {name: "Cycle forward while sitting"})
WITH task
MATCH (goal:Goal:Task {name: "Be able to cycle through a public park"})
MATCH (goal)-[rel:DEPENDS_ON]->(task)
DELETE rel
And here’s what the final dependency graph looks like:
Although I put this into Neo4j in order to visualise the dependencies we can now query the data as well. For example, let’s say I know how to cycle forward while sitting on the bike. What steps are there between me and being able to cycle around a park?
MATCH (t:Task {name: "Cycle forward while sitting"}),
      (g:Goal {name: "Be able to cycle through a public park"}),
      path = shortestpath((g)-[:DEPENDS_ON*]->(t))
RETURN path
Or if we want a list of the tasks we need to do next we could restructure the query slightly:
MATCH (t:Task {name: "Cycle forward while sitting"}),
      (g:Goal {name: "Be able to cycle through a public park"}),
      path = shortestpath((t)<-[:DEPENDS_ON*]-(g))
WITH [n in nodes(path) | n.name] AS tasks
UNWIND tasks AS task
RETURN task

==> +--------------------------------------------+
==> | task                                       |
==> +--------------------------------------------+
==> | "Cycle forward while sitting"              |
==> | "Controlled stop using brakes/feet"        |
==> | "Steer around stationary objects"          |
==> | "Steer around people"                      |
==> | "Navigate a loop of a section of the park" |
==> | "Be able to cycle through a public park"   |
==> +--------------------------------------------+
==> 6 rows
That’s all for now but I think this is an interesting way of tracking how you’d learn a skill. I’m trying a similar approach for some statistics topics I’m learning about but I’ve found the order of tasks isn’t so linear there – interestingly much more a graph than a tree.
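The same "what's between me and the goal" question can be answered over any representation of the dependency edges, not just Neo4j. A minimal Python breadth-first search over the cycling graph built above (the edge list mirrors the Cypher, goal pointing down towards simpler tasks):

```python
from collections import deque

# goal -[:DEPENDS_ON]-> task edges from the cycling graph
depends_on = {
    "Be able to cycle through a public park": ["Navigate a loop of a section of the park"],
    "Navigate a loop of a section of the park": ["Navigate a small circular circuit", "Steer around people"],
    "Navigate a small circular circuit": ["Steer around stationary objects"],
    "Steer around people": ["Steer around stationary objects"],
    "Steer around stationary objects": ["Controlled stop using brakes/feet"],
    "Controlled stop using brakes/feet": ["Cycle forward while sitting"],
}

def shortest_path(start, end):
    """Breadth-first search: the first path reaching `end` is a shortest one."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == end:
            return path
        for nxt in depends_on.get(path[-1], []):
            queue.append(path + [nxt])
    return None

print(shortest_path("Be able to cycle through a public park",
                    "Cycle forward while sitting"))
```

As in the Cypher version, there are two equally short routes (via the circuit or via steering around people); BFS simply returns whichever it reaches first.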
Over the weekend I’ve been reading about Markov Chains and I thought it’d be an interesting exercise for me to translate Wikipedia’s example into R code.
But first a definition:
A Markov chain is a random process that undergoes transitions from one state to another on a state space.
It is required to possess a property that is usually characterized as “memoryless”: the probability distribution of the next state depends only on the current state and not on the sequence of events that preceded it.
That ‘random process’ could be moves in a Monopoly game, the next word in a sentence or, as in Wikipedia’s example, the next state of the Stock Market.
The diagram below shows the probabilities of transitioning between the various states:
e.g. if we’re in a Bull Market the probability of the state of the market next week being a Bull Market is 0.9, a Bear Market is 0.075 and a Stagnant Market is 0.025.
We can model the various transition probabilities as a matrix:
M = matrix(c(0.9, 0.075, 0.025, 0.15, 0.8, 0.05, 0.25, 0.25, 0.5), nrow = 3, ncol = 3, byrow = TRUE)

> M
     [,1]  [,2]  [,3]
[1,] 0.90 0.075 0.025
[2,] 0.15 0.800 0.050
[3,] 0.25 0.250 0.500
Rows/Cols 1-3 are Bull, Bear, Stagnant respectively.
Now let’s say we start with a Bear market and want to find the probability of each state in 3 weeks time.
We can do this is by multiplying our probability/transition matrix by itself 3 times and then multiplying the result by a vector representing the initial Bear market state.
threeIterations = (M %*% M %*% M)

> threeIterations
       [,1]    [,2]    [,3]
[1,] 0.7745 0.17875 0.04675
[2,] 0.3575 0.56825 0.07425
[3,] 0.4675 0.37125 0.16125

> c(0,1,0) %*% threeIterations
       [,1]    [,2]    [,3]
[1,] 0.3575 0.56825 0.07425
So we have a 56.825% chance of still being in a Bear Market, 35.75% chance that we’re now in a Bull Market and only a 7.425% chance of being in a stagnant market.
I found it a bit annoying having to type ‘%*% M’ multiple times but luckily the expm library allows us to apply a Matrix power operation:
install.packages("expm")
library(expm)

> M %^% 3
       [,1]    [,2]    [,3]
[1,] 0.7745 0.17875 0.04675
[2,] 0.3575 0.56825 0.07425
[3,] 0.4675 0.37125 0.16125
The nice thing about this function is that we can now easily see where the stock market will trend towards over a large number of weeks:
> M %^% 100
      [,1]   [,2]   [,3]
[1,] 0.625 0.3125 0.0625
[2,] 0.625 0.3125 0.0625
[3,] 0.625 0.3125 0.0625
i.e. 62.5% of weeks we will be in a bull market, 31.25% of weeks will be in a bear market and 6.25% of weeks will be stagnant.
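The same matrix-power calculation is easy to reproduce in plain Python, which makes the "repeated multiplication" nature of the operation explicit (no numpy assumed, just a naive matmul):

```python
def mat_mul(A, B):
    """Naive matrix multiplication on nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def mat_pow(M, h):
    """M multiplied by itself h times, like expm's %^% operator."""
    result = M
    for _ in range(h - 1):
        result = mat_mul(result, M)
    return result

# transition matrix: rows/cols are Bull, Bear, Stagnant
M = [[0.90, 0.075, 0.025],
     [0.15, 0.800, 0.050],
     [0.25, 0.250, 0.500]]

# state distribution three weeks after a Bear market = middle row of M^3
print(mat_pow(M, 3)[1])  # ≈ [0.3575, 0.56825, 0.07425]
```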
After weeks of playing around with various algorithms to extract story arcs in How I met your mother I’ve come to the conclusion that I don’t yet have the skills to completely automate this process so I’m going to change my approach.
The new plan is to treat the outputs of the algorithms as suggestions for possible themes but then have a manual step where I extract what I think are interesting themes in the series.
A theme can consist of a single word or a phrase and the idea is that once a story arc is identified we’ll search over the corpus and find the episodes where that phrase occurs.
We can then generate a CSV file of (story arc) -> (episodeId), store that into our HIMYM graph and use the story arc as another factor for episode similarity.
I ended up with the following script to work out which episodes contained a story arc:
#!/bin/bash

find_term() {
  arc=${1}
  searchTerm=${2}
  episodes=$(grep --color -iE "${searchTerm}" data/import/sentences.csv | awk -F"," '{ print $2 }' | sort | uniq)
  for episode in ${episodes}; do
    echo ${arc},${episode}
  done
}

find_term "Bro Code" "bro code"
find_term "Legendary" "legen(.*)ary"
find_term "Slutty Pumpkin" "slutty pumpkin"
find_term "Magician's Code" "magician's code"
find_term "Thanksgiving" "thanksgiving"
find_term "The Playbook" "playbook"
find_term "Slap Bet" "slap bet"
find_term "Wrestlers and Robots" "wrestlers"
find_term "Robin Sparkles" "sparkles"
find_term "Blue French Horn" "blue french horn"
find_term "Olive Theory" "olive"
find_term "Thank You Linus" "thank you, linus"
find_term "Have you met...?" "have you met"
find_term "Laser Tag" "laser tag"
find_term "Goliath National Bank" "goliath national bank"
find_term "Challenge Accepted" "challenge accepted"
find_term "Best Man" "best man"
If we run this script we’ll see something like the following:
$ ./scripts/arcs.sh
Bro Code,14
Bro Code,155
Bro Code,170
Bro Code,188
Bro Code,201
Bro Code,61
Bro Code,64
Legendary,117
Legendary,120
Legendary,122
Legendary,136
Legendary,137
Legendary,15
Legendary,152
Legendary,157
Legendary,162
Legendary,171
...
Best Man,208
Best Man,30
Best Man,32
Best Man,41
Best Man,42
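The grep/awk/sort/uniq pipeline in the script could equally be expressed in Python, which makes the case-insensitive regex match and the de-duplication of episode ids explicit. A sketch, with the sentence rows made up for illustration (the real data would come from sentences.csv):

```python
import re

def find_term(arc, pattern, sentences):
    """Return (arc, episodeId) pairs for episodes whose sentences match the pattern."""
    episodes = sorted({episode for text, episode in sentences
                       if re.search(pattern, text, re.IGNORECASE)})
    return [(arc, episode) for episode in episodes]

# a couple of made-up sentence rows: (sentence text, episode id)
sentences = [
    ("It's going to be legen... wait for it... dary!", "15"),
    ("Have you read the Bro Code?", "14"),
    ("Nothing relevant here", "99"),
]

print(find_term("Legendary", r"legen(.*)ary", sentences))  # → [('Legendary', '15')]
print(find_term("Bro Code", r"bro code", sentences))       # → [('Bro Code', '14')]
```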
I pulled out these themes by eyeballing the output of the following scripts:
I can’t remember off the top of my head if any obvious themes have been missed so if you know HIMYM better than me let me know and I’ll try and work out why those didn’t surface.
Next I want to see how these scripts fare against some other TV shows and see how quickly I can extract themes for those. It’d also be cool if I can make the whole process a bit more automated.
Yesterday I spent the day in Berlin delivering a workshop as part of the Data Science Retreat and one of the exercises we did was write a query that would pull back all the information you’d need to create the IMDB page for a movie.
Scanning the page we can see that we need to get some basic metadata including the title. Next we’ll need to pull in the actors, directors, producers and finally a recommendation for some other movies the viewer might like to see.
I’d struggle to write this all in one go – it’s non-trivial. However, if we break it down there are actually 5 simpler queries that we can probably write. Our final step is then to work out how to glue them all together.
Let’s get started.
If you want to follow along open up your Neo4j browser and type :play movies and import the built in data set.
We’re going to create the query for The Matrix home page so the first step is to find the node representing that movie in the database:
match (movie:Movie {title: "The Matrix"})
return movie.title

==> +--------------+
==> | movie.title  |
==> +--------------+
==> | "The Matrix" |
==> +--------------+
==> 1 row
Easy enough. Now let’s get back the producers:
match (movie:Movie {title: "The Matrix"})
optional match (producer)-[:PRODUCED]->(movie)
RETURN movie.title, COLLECT(producer.name) AS producers

==> +--------------------------------+
==> | movie.title  | producers       |
==> +--------------------------------+
==> | "The Matrix" | ["Joel Silver"] |
==> +--------------------------------+
==> 1 row
We’ve introduced the COLLECT function here as we want to ensure that our final result only has one row regardless of how many producers there are. COLLECT applies an implicit group by ‘movie.title’ and collects the producers for each movie (in this case just The Matrix) into an array.
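That implicit group-by is the same thing as bucketing rows into a dictionary keyed by movie. A rough Python analogue, with a hypothetical row set standing in for the pre-COLLECT query result:

```python
from collections import defaultdict

# (movie, producer) rows as they'd come back without COLLECT
rows = [("The Matrix", "Joel Silver")]

# group by movie, collecting producers into a list per movie
producers = defaultdict(list)
for movie, producer in rows:
    producers[movie].append(producer)

print(dict(producers))  # → {'The Matrix': ['Joel Silver']}
```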
We’ve used OPTIONAL MATCH because we still want to return a row for the query even if it has no producers. In the case that there are no producers we’d hope to see an empty array.
Now let’s write the same query to pull back the directors of the movie:
match (movie:Movie {title: "The Matrix"})
optional match (director)-[:DIRECTED]->(movie)
RETURN movie.title, COLLECT(director.name) AS directors

==> +----------------------------------------------------+
==> | movie.title  | directors                           |
==> +----------------------------------------------------+
==> | "The Matrix" | ["Lana Wachowski","Andy Wachowski"] |
==> +----------------------------------------------------+
==> 1 row
We really want to do both of these in one query so we get back a single result with 3 columns. To do that we’re going to introduce the WITH clause which allows us to combine the results of traversals together.
In this case we’ll first do a traversal to get the producers, collect those into an array and then traverse out again to get the directors and collect those. This is what the query looks like:
match (movie:Movie {title: "The Matrix"})
optional match (producer)-[:PRODUCED]->(movie)
with movie, COLLECT(producer.name) AS producers
optional match (director)-[:DIRECTED]->(movie)
RETURN movie.title, producers, COLLECT(director.name) AS directors

==> +----------------------------------------------------------------------+
==> | movie.title  | producers       | directors                           |
==> +----------------------------------------------------------------------+
==> | "The Matrix" | ["Joel Silver"] | ["Lana Wachowski","Andy Wachowski"] |
==> +----------------------------------------------------------------------+
==> 1 row
We can follow the same pattern to return the actors:
match (movie:Movie {title: "The Matrix"})
optional match (producer)-[:PRODUCED]->(movie)
with movie, COLLECT(producer.name) AS producers
optional match (director)-[:DIRECTED]->(movie)
with movie, producers, COLLECT(director.name) AS directors
optional match (actor)-[:ACTED_IN]->(movie)
RETURN movie.title, COLLECT(actor.name) AS actors, producers, directors

==> +--------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> | movie.title | actors | producers | directors |
==> +--------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> | "The Matrix" | ["Hugo Weaving","Laurence Fishburne","Carrie-Anne Moss","Keanu Reeves","Emil Eifrem"] | ["Joel Silver"] | ["Lana Wachowski","Andy Wachowski"] |
==> +--------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> 1 row
So far, so good. We’ve got everything except the other movies recommendation which is a bit trickier so we’ll write it on its own first:
match (movie:Movie {title: "The Matrix"})<-[:ACTED_IN]-(actor)-[:ACTED_IN]->(otherMovie)
RETURN otherMovie, COUNT(*) AS score
ORDER BY score DESC

==> +---------------------------------------------------------------------------------------------------------------------------+
==> | otherMovie | score |
==> +---------------------------------------------------------------------------------------------------------------------------+
==> | Node[348]{title:"The Matrix Revolutions",released:2003,tagline:"Everything that has a beginning has an end"} | 4 |
==> | Node[347]{title:"The Matrix Reloaded",released:2003,tagline:"Free your mind"} | 4 |
==> | Node[490]{title:"Something's Gotta Give",released:2003} | 1 |
==> | Node[349]{title:"The Devil's Advocate",released:1997,tagline:"Evil has its winning ways"} | 1 |
==> | Node[438]{title:"Johnny Mnemonic",released:1995,tagline:"The hottest data on earth. In the coolest head in town"} | 1 |
==> | Node[443]{title:"Cloud Atlas",released:2012,tagline:"Everything is connected"} | 1 |
==> | Node[452]{title:"V for Vendetta",released:2006,tagline:"Freedom! Forever!"} | 1 |
==> | Node[425]{title:"The Replacements",released:2000,tagline:"Pain heals, Chicks dig scars... Glory lasts forever"} | 1 |
==> +---------------------------------------------------------------------------------------------------------------------------+
==> 8 rows
Our recommendation query finds all the actors in The Matrix and then traverses out to find other movies they’ve acted in and orders those movies based on how many of our actors appeared in them. Not surprisingly the other Matrix movies come out top.
In order to plug this into the rest of the query we need a single row to be returned i.e. our other movie suggestions need to be returned as an array rather than individual rows. Let’s do that:
match (movie:Movie {title: "The Matrix"})<-[:ACTED_IN]-(actor)-[:ACTED_IN]->(otherMovie)
WITH otherMovie, COUNT(*) AS score ORDER BY score DESC
RETURN COLLECT({movie: otherMovie.title, score: score}) AS recommended

==> +--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> | recommended |
==> +--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> | [{movie -> "The Matrix Revolutions", score -> 4},{movie -> "The Matrix Reloaded", score -> 4},{movie -> "Something's Gotta Give", score -> 1},{movie -> "The Devil's Advocate", score -> 1},{movie -> "Johnny Mnemonic", score -> 1},{movie -> "Cloud Atlas", score -> 1},{movie -> "V for Vendetta", score -> 1},{movie -> "The Replacements", score -> 1}] |
==> +--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
We’ve introduced a WITH clause here for two reasons: it lets us apply the ORDER BY on score before we collect, and it means the COLLECT in the RETURN rolls all the suggestions up into the single row we need.
Now we’re ready to plug this recommendation query into our main one:
match (movie:Movie {title: "The Matrix"})
optional match (producer)-[:PRODUCED]->(movie)
with movie, COLLECT(producer.name) AS producers
optional match (director)-[:DIRECTED]->(movie)
with movie, producers, COLLECT(director.name) AS directors
optional match (actor)-[:ACTED_IN]->(movie)
WITH movie, COLLECT(actor.name) AS actors, producers, directors
optional match (movie)<-[:ACTED_IN]-(actor)-[:ACTED_IN]->(otherMovie)
WITH movie, actors, producers, directors, otherMovie, COUNT(*) AS score
ORDER BY score DESC
RETURN movie, actors, producers, directors, COLLECT({movie: otherMovie.title, score: score}) AS recommended

==> +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> | movie | actors | producers | directors | recommended |
==> +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> | Node[338]{title:"The Matrix",released:1999,tagline:"Welcome to the Real World"} | ["Hugo Weaving","Laurence Fishburne","Carrie-Anne Moss","Keanu Reeves","Emil Eifrem"] | ["Joel Silver"] | ["Lana Wachowski","Andy Wachowski"] | [{movie -> "The Matrix Revolutions", score -> 4},{movie -> "The Matrix Reloaded", score -> 4},{movie -> "Johnny Mnemonic", score -> 1},{movie -> "The Replacements", score -> 1},{movie -> "Cloud Atlas", score -> 1},{movie -> "V for Vendetta", score -> 1},{movie -> "Something's Gotta Give", score -> 1},{movie -> "The Devil's Advocate", score -> 1}] |
==> +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> 1 row
Voila! Four different types of data gathered, and just one query to do it all.
For the eagle eyed cypher specialists (Hi Michael!), you’ll have noticed some duplication: we traverse out to the actors twice, once to retrieve them and once to make the movie recommendation.
We could optimise this by collecting the actors once and then using the UNWIND clause but that’s an optimisation which I think slightly obscures the intent of the query so I’ve left it like this for now.
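A sketch of what that UNWIND variant might look like. This is untested, so treat it as an outline rather than a drop-in replacement — in particular it assumes the movie has at least one actor, since UNWIND of an empty collection removes the row:

```cypher
match (movie:Movie {title: "The Matrix"})
optional match (producer)-[:PRODUCED]->(movie)
with movie, COLLECT(producer.name) AS producers
optional match (director)-[:DIRECTED]->(movie)
with movie, producers, COLLECT(director.name) AS directors
match (actor)-[:ACTED_IN]->(movie)
with movie, producers, directors, COLLECT(actor) AS actors
UNWIND actors AS actor
optional match (actor)-[:ACTED_IN]->(otherMovie)
WHERE otherMovie <> movie
with movie, producers, directors, actors, otherMovie, COUNT(*) AS score
ORDER BY score DESC
RETURN movie.title,
       [a IN actors | a.name] AS actors,
       producers,
       directors,
       COLLECT({movie: otherMovie.title, score: score}) AS recommended
```

The idea is that we collect the actor nodes once, then UNWIND them back into rows to drive the recommendation traversal, so each ACTED_IN relationship is only walked once per branch.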
I’m planning to write a variant of the TF/IDF algorithm over the HIMYM corpus which weights in favour of terms that appear in a medium number of documents, and as a prerequisite I needed a function that, given a number of documents, returns a weighting.
It should return a higher value when a term appears in a medium number of documents i.e. if I pass in 10 I should get back a higher value than if I pass in 200, as a term that appears in 10 episodes is likely to be more interesting than one which appears in almost every episode.
I went through a few different scipy distributions but none of them did exactly what I want so I ended up writing a sampling function which chooses random numbers in a range with different probabilities:
import math
import numpy as np

values = range(1, 209)
probs = [1.0 / 208] * 208

for idx, prob in enumerate(probs):
    if idx > 3 and idx < 20:
        probs[idx] = probs[idx] * (1 + math.log(idx + 1))
    if idx > 20 and idx < 40:
        probs[idx] = probs[idx] * (1 + math.log((40 - idx) + 1))

probs = [p / sum(probs) for p in probs]
sample = np.random.choice(values, 1000, p=probs)

>>> print sample[:10]
[ 33   9  22 126  54   4  20  17  45  56]
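Before sampling, we can check deterministically that the boost really does concentrate probability in the middle of the range — a quick sanity check (in Python 3 syntax) that rebuilds the same probabilities and compares a boosted index against the extremes:

```python
import math

# Rebuild the probabilities from the sampling code above
n = 208
probs = [1.0 / n] * n
for idx in range(n):
    if 3 < idx < 20:
        probs[idx] *= 1 + math.log(idx + 1)
    if 20 < idx < 40:
        probs[idx] *= 1 + math.log((40 - idx) + 1)
total = sum(probs)
probs = [p / total for p in probs]

# Indices in the boosted "medium" band are more likely than either extreme,
# and the probabilities still sum to 1 after normalisation
print(probs[10] > probs[0])    # True
print(probs[10] > probs[200])  # True
```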
Now let’s visualise the distribution of this sample by plotting a histogram:
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt

binwidth = 2
plt.hist(sample, bins=np.arange(min(sample), max(sample) + binwidth, binwidth))
plt.xlim([0, max(sample)])
plt.show()
It’s a bit hacky but it does seem to weight the values the way I wanted. It would be nice if I could smooth it off a bit better but I’m not sure how at the moment.
One final thing we can do is get the count of any one of the values by using the bincount function:
>>> print np.bincount(sample)[1]
4

>>> print np.bincount(sample)[10]
16

>>> print np.bincount(sample)[206]
3
Now I need to plug this into the rest of my code and see if it actually works!
Since I upgraded to Yosemite I’ve noticed that attempts to resolve localhost on my home network have been taking ages (sometimes over a minute) so I thought I’d try and work out why.
This is what my initial /etc/hosts file looked like based on the assumption that my machine’s hostname was teetotal:
$ cat /etc/hosts
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1       localhost
255.255.255.255 broadcasthost
::1             localhost
#fe80::1%lo0    localhost
127.0.0.1       wuqour.local
127.0.0.1       teetotal
I set up a little test which replicated the problem:
import java.net.InetAddress;
import java.net.UnknownHostException;

public class LocalhostResolution {
    public static void main(String[] args) throws UnknownHostException {
        long start = System.currentTimeMillis();
        InetAddress localHost = InetAddress.getLocalHost();
        System.out.println(localHost);
        System.out.println(System.currentTimeMillis() - start);
    }
}
which has the following output:
Exception in thread "main" java.net.UnknownHostException: teetotal-2: teetotal-2: nodename nor servname provided, or not known
	at java.net.InetAddress.getLocalHost(InetAddress.java:1473)
	at LocalhostResolution.main(LocalhostResolution.java:9)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
Caused by: java.net.UnknownHostException: teetotal-2: nodename nor servname provided, or not known
	at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
	at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
	at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
	at java.net.InetAddress.getLocalHost(InetAddress.java:1469)
	... 6 more
Somehow my hostname has changed to teetotal-2 so I added the following entry to /etc/hosts:
127.0.0.1 teetotal-2
Now if we run the program we see this output instead:
teetotal-2/127.0.0.1
5011
It’s still taking 5 seconds to resolve which is much longer than I’d expect. After break pointing through the code it seems like it’s trying to do an IPv6 resolution rather than IPv4 so I added an /etc/hosts entry for that too:
::1 teetotal-2
Now resolution is much quicker:
teetotal-2/127.0.0.1
6
Happy days!
One of the most common uses of Neo4j is for building real time recommendation engines and a common theme is that they make use of lots of different bits of data to come up with an interesting recommendation.
For example in this video Amanda shows how dating websites build real time recommendation engines by starting with social connections and then introducing passions, location and a few other things.
Graph Aware have a neat framework that helps you to build your own recommendation engine using Java and I was curious what a Cypher version would look like.
This is the sample graph:
CREATE (m:Person:Male {name:'Michal', age:30}),
       (d:Person:Female {name:'Daniela', age:20}),
       (v:Person:Male {name:'Vince', age:40}),
       (a:Person:Male {name:'Adam', age:30}),
       (l:Person:Female {name:'Luanne', age:25}),
       (c:Person:Male {name:'Christophe', age:60}),
       (lon:City {name:'London'}),
       (mum:City {name:'Mumbai'}),
       (m)-[:FRIEND_OF]->(d),
       (m)-[:FRIEND_OF]->(l),
       (m)-[:FRIEND_OF]->(a),
       (m)-[:FRIEND_OF]->(v),
       (d)-[:FRIEND_OF]->(v),
       (c)-[:FRIEND_OF]->(v),
       (d)-[:LIVES_IN]->(lon),
       (v)-[:LIVES_IN]->(lon),
       (m)-[:LIVES_IN]->(lon),
       (l)-[:LIVES_IN]->(mum);
We want to recommend some potential friends to ‘Adam’ so the first layer of our query is to find his friends of friends as there are bound to be some potential friends amongst them:
MATCH (me:Person {name: "Adam"})
MATCH (me)-[:FRIEND_OF]-()-[:FRIEND_OF]-(potentialFriend)
RETURN me, potentialFriend, COUNT(*) AS friendsInCommon

==> +--------------------------------------------------------------------------------------+
==> | me | potentialFriend | friendsInCommon |
==> +--------------------------------------------------------------------------------------+
==> | Node[1007]{name:"Adam",age:30} | Node[1006]{name:"Vince",age:40} | 1 |
==> | Node[1007]{name:"Adam",age:30} | Node[1005]{name:"Daniela",age:20} | 1 |
==> | Node[1007]{name:"Adam",age:30} | Node[1008]{name:"Luanne",age:25} | 1 |
==> +--------------------------------------------------------------------------------------+
==> 3 rows
This query gives us back a list of potential friends and how many friends we have in common.
Now that we’ve got some potential friends let’s start building a ranking for each of them. One indicator which could weigh in favour of a potential friend is if they live in the same location as us so let’s add that to our query:
MATCH (me:Person {name: "Adam"})
MATCH (me)-[:FRIEND_OF]-()-[:FRIEND_OF]-(potentialFriend)
WITH me, potentialFriend, COUNT(*) AS friendsInCommon
RETURN me,
       potentialFriend,
       SIZE((potentialFriend)-[:LIVES_IN]->()<-[:LIVES_IN]-(me)) AS sameLocation

==> +-----------------------------------------------------------------------------------+
==> | me | potentialFriend | sameLocation |
==> +-----------------------------------------------------------------------------------+
==> | Node[1007]{name:"Adam",age:30} | Node[1006]{name:"Vince",age:40} | 0 |
==> | Node[1007]{name:"Adam",age:30} | Node[1005]{name:"Daniela",age:20} | 0 |
==> | Node[1007]{name:"Adam",age:30} | Node[1008]{name:"Luanne",age:25} | 0 |
==> +-----------------------------------------------------------------------------------+
==> 3 rows
Next we’ll check whether Adam’s potential friends have the same gender as him by comparing the labels each node has. We’ve got ‘Male’ and ‘Female’ labels which indicate gender.
MATCH (me:Person {name: "Adam"})
MATCH (me)-[:FRIEND_OF]-()-[:FRIEND_OF]-(potentialFriend)
WITH me, potentialFriend, COUNT(*) AS friendsInCommon
RETURN me,
       potentialFriend,
       SIZE((potentialFriend)-[:LIVES_IN]->()<-[:LIVES_IN]-(me)) AS sameLocation,
       LABELS(me) = LABELS(potentialFriend) AS gender

==> +--------------------------------------------------------------------------------------------+
==> | me | potentialFriend | sameLocation | gender |
==> +--------------------------------------------------------------------------------------------+
==> | Node[1007]{name:"Adam",age:30} | Node[1006]{name:"Vince",age:40} | 0 | true |
==> | Node[1007]{name:"Adam",age:30} | Node[1005]{name:"Daniela",age:20} | 0 | false |
==> | Node[1007]{name:"Adam",age:30} | Node[1008]{name:"Luanne",age:25} | 0 | false |
==> +--------------------------------------------------------------------------------------------+
==> 3 rows
Next up let’s calculate the age difference between Adam and his potential friends:
MATCH (me:Person {name: "Adam"})
MATCH (me)-[:FRIEND_OF]-()-[:FRIEND_OF]-(potentialFriend)
WITH me, potentialFriend, COUNT(*) AS friendsInCommon
RETURN me,
       potentialFriend,
       SIZE((potentialFriend)-[:LIVES_IN]->()<-[:LIVES_IN]-(me)) AS sameLocation,
       abs(me.age - potentialFriend.age) AS ageDifference,
       LABELS(me) = LABELS(potentialFriend) AS gender,
       friendsInCommon

==> +------------------------------------------------------------------------------------------------------------------------------+
==> | me | potentialFriend | sameLocation | ageDifference | gender | friendsInCommon |
==> +------------------------------------------------------------------------------------------------------------------------------+
==> | Node[1007]{name:"Adam",age:30} | Node[1006]{name:"Vince",age:40} | 0 | 10.0 | true | 1 |
==> | Node[1007]{name:"Adam",age:30} | Node[1005]{name:"Daniela",age:20} | 0 | 10.0 | false | 1 |
==> | Node[1007]{name:"Adam",age:30} | Node[1008]{name:"Luanne",age:25} | 0 | 5.0 | false | 1 |
==> +------------------------------------------------------------------------------------------------------------------------------+
==> 3 rows
Now let’s do some filtering to get rid of people that Adam is already friends with – there wouldn’t be much point in recommending those people!
MATCH (me:Person {name: "Adam"})
MATCH (me)-[:FRIEND_OF]-()-[:FRIEND_OF]-(potentialFriend)
WITH me, potentialFriend, COUNT(*) AS friendsInCommon
WITH me,
     potentialFriend,
     SIZE((potentialFriend)-[:LIVES_IN]->()<-[:LIVES_IN]-(me)) AS sameLocation,
     abs(me.age - potentialFriend.age) AS ageDifference,
     LABELS(me) = LABELS(potentialFriend) AS gender,
     friendsInCommon
WHERE NOT (me)-[:FRIEND_OF]-(potentialFriend)
RETURN me,
       potentialFriend,
       SIZE((potentialFriend)-[:LIVES_IN]->()<-[:LIVES_IN]-(me)) AS sameLocation,
       abs(me.age - potentialFriend.age) AS ageDifference,
       LABELS(me) = LABELS(potentialFriend) AS gender,
       friendsInCommon

==> +------------------------------------------------------------------------------------------------------------------------------+
==> | me | potentialFriend | sameLocation | ageDifference | gender | friendsInCommon |
==> +------------------------------------------------------------------------------------------------------------------------------+
==> | Node[1007]{name:"Adam",age:30} | Node[1006]{name:"Vince",age:40} | 0 | 10.0 | true | 1 |
==> | Node[1007]{name:"Adam",age:30} | Node[1005]{name:"Daniela",age:20} | 0 | 10.0 | false | 1 |
==> | Node[1007]{name:"Adam",age:30} | Node[1008]{name:"Luanne",age:25} | 0 | 5.0 | false | 1 |
==> +------------------------------------------------------------------------------------------------------------------------------+
==> 3 rows
In this case we haven’t actually filtered anyone out but for some of the other people we would see a reduction in the number of potential friends.
Our final step is to come up with a score for each of the features that we’ve identified as being important for making a friend suggestion.
We’ll assign a score of 10 if the people live in the same location or have the same gender as Adam and 0 if not. For the ageDifference and friendsInCommon we’ll apply a log curve so that those values don’t have a disproportionate effect on our final score. We’ll use the formula defined in the ParetoScoreTransformer to do this:
public <OUT> float transform(OUT item, float score) {
    if (score < minimumThreshold) {
        return 0;
    }

    double alpha = Math.log((double) 5) / eightyPercentLevel;
    double exp = Math.exp(-alpha * score);
    return new Double(maxScore * (1 - exp)).floatValue();
}
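We can sanity check this transform outside of Java. Here’s the same formula as a quick Python sketch, with the maxScore and eightyPercentLevel constants plugged in directly; the two calls reproduce the friendsInCommon and ageDifference parts that show up in the final result:

```python
import math

def pareto_transform(score, max_score, eighty_percent_level):
    # Same formula as the Java transform above
    alpha = math.log(5.0) / eighty_percent_level
    return max_score * (1 - math.exp(-alpha * score))

# 1 friend in common, maxScore 100, eightyPercentLevel 10
print(round(pareto_transform(1, 100, 10), 5))   # 14.86601
# age difference of 10, maxScore 10, eightyPercentLevel 20
print(round(pareto_transform(10, 10, 20), 5))   # 5.52786
```

The second value is subtracted from the score in the query below, which is why ageDifference shows up as a negative number in the results.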
And now for our completed recommendation query:
MATCH (me:Person {name: "Adam"})
MATCH (me)-[:FRIEND_OF]-()-[:FRIEND_OF]-(potentialFriend)
WITH me, potentialFriend, COUNT(*) AS friendsInCommon
WITH me,
     potentialFriend,
     SIZE((potentialFriend)-[:LIVES_IN]->()<-[:LIVES_IN]-(me)) AS sameLocation,
     abs(me.age - potentialFriend.age) AS ageDifference,
     LABELS(me) = LABELS(potentialFriend) AS gender,
     friendsInCommon
WHERE NOT (me)-[:FRIEND_OF]-(potentialFriend)
WITH potentialFriend,
     // 100 -> maxScore, 10 -> eightyPercentLevel, friendsInCommon -> score (from the formula above)
     100 * (1 - exp((-1.0 * (log(5.0) / 10)) * friendsInCommon)) AS friendsInCommon,
     sameLocation * 10 AS sameLocation,
     -1 * (10 * (1 - exp((-1.0 * (log(5.0) / 20)) * ageDifference))) AS ageDifference,
     CASE WHEN gender THEN 10 ELSE 0 END as sameGender
RETURN potentialFriend,
       {friendsInCommon: friendsInCommon, sameLocation: sameLocation, ageDifference: ageDifference, sameGender: sameGender} AS parts,
       friendsInCommon + sameLocation + ageDifference + sameGender AS score
ORDER BY score DESC

==> +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> | potentialFriend | parts | score |
==> +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> | Node[1006]{name:"Vince",age:40} | {friendsInCommon -> 14.86600774792154, sameLocation -> 0, ageDifference -> -5.52786404500042, sameGender -> 10} | 19.33814370292112 |
==> | Node[1008]{name:"Luanne",age:25} | {friendsInCommon -> 14.86600774792154, sameLocation -> 0, ageDifference -> -3.312596950235779, sameGender -> 0} | 11.55341079768576 |
==> | Node[1005]{name:"Daniela",age:20} | {friendsInCommon -> 14.86600774792154, sameLocation -> 0, ageDifference -> -5.52786404500042, sameGender -> 0} | 9.33814370292112 |
==> +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
The final query isn’t too bad – the only really complex bit is the log curve calculation. This is where user defined functions will come into their own in the future.
The nice thing about this approach is that we don’t have to step outside of cypher, so if you’re not comfortable with Java you can still do real time recommendations! On the other hand, the different parts of the recommendation engine all get mixed together, so it’s not as easy to see the whole pipeline as it is with the Graph Aware framework.
The next step is to apply this to the Twitter graph and come up with follower recommendations on there.
I’ve been playing around with some of the matplotlib demos recently and discovered that simply copying one of the examples didn’t actually work for me.
I was following the bar chart example and had the following code:
import numpy as np
import matplotlib.pyplot as plt

N = 5
ind = np.arange(N)

fig, ax = plt.subplots()

menMeans = (20, 35, 30, 35, 27)
menStd = (2, 3, 4, 1, 2)
width = 0.35       # the width of the bars
rects1 = ax.bar(ind, menMeans, width, color='r', yerr=menStd)

plt.show()
When I execute this script from the command line it just hangs and I don’t see anything at all.
Via a combination of different blog posts (which all suggested different things!) I ended up with the following variation of imports which seems to do the job:
import numpy as np
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt

N = 5
ind = np.arange(N)

fig, ax = plt.subplots()

menMeans = (20, 35, 30, 35, 27)
menStd = (2, 3, 4, 1, 2)
width = 0.35       # the width of the bars
rects1 = ax.bar(ind, menMeans, width, color='r', yerr=menStd)

plt.show()
If I run this script a Python window pops up and contains the following image which is what I expected to happen in the first place!
The thing to notice is that we’ve had to change the backend in order to use matplotlib from the shell:
With the TkAgg backend, which uses the Tkinter user interface toolkit, you can use matplotlib from an arbitrary non-gui python shell.
Current state: Wishing for ggplot!