
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Programming

Variations in Iteration - New Lecture Posted

10x Software Development - Steve McConnell - Tue, 06/23/2015 - 11:17

In this week's lecture (https://cxlearn.com) I explain how the lifecycle model can be used to show the incredibly large number of variations in approaches to software projects, especially the numerous variations in kinds of iteration. I identify approaches that work if you need predictability, if you need flexibility, if you need to attack uncertainty in requirements (i.e., unknown requirements), and if you need to attack uncertainty in architecture (i.e., technical risk).

The overarching message is that there are lots of different ways to organize the activities on a software project, and the way you organize the activities significantly affects what a project will accomplish. 

Lectures posted so far include:  

0.0 Understanding Software Projects - Intro
     0.1 Introduction - My Background
     0.2 Reading the News
     0.3 Definitions and Notations 

1.0 The Software Lifecycle Model - Intro
     1.1 Variations in Iteration 
     1.2 Lifecycle Model - Defect Removal
     1.3 Lifecycle Model Applied to Common Methodologies
     1.4 Lifecycle Model - Selecting an Iteration Approach 

2.0 Software Size
     2.05 Size - Comments on Lines of Code
     2.1 Size - Staff Sizes 
     2.2 Size - Schedule Basics 

Check out the lectures at http://cxlearn.com!

Understanding Software Projects - Steve McConnell

 

How to Become an Overnight Success

Making the Complex Simple - John Sonmez - Mon, 06/22/2015 - 16:00

It seems that everyone wants a shortcut to success. I constantly talk to programmers and wannabe entrepreneurs who are ready to give up and throw in the towel at the first sign of trouble. It reminds me of this quote by eight-time—yes, that’s right, eight-time—Mr. Olympia, Ronnie Coleman. “Everybody wants to be a bodybuilder, but […]

The post How to Become an Overnight Success appeared first on Simple Programmer.

Categories: Programming

R: Scraping Neo4j release dates with rvest

Mark Needham - Sun, 06/21/2015 - 23:07

As part of my log analysis I wanted to get the Neo4j release dates which are accessible from the release notes and decided to try out Hadley Wickham’s rvest scraping library which he released at the end of 2014.

rvest is inspired by Python's beautifulsoup, which has become my scraping library of choice, so I didn't find it too difficult to pick up.

To start with we need to download the release notes locally so we don’t have to go over the network when we’re doing our scraping:

download.file("http://neo4j.com/release-notes/page/1", "release-notes.html")
download.file("http://neo4j.com/release-notes/page/2", "release-notes2.html")

We want to parse those pages back and return the rows which contain version numbers and release dates. The HTML looks like this:

[Screenshot of the release notes HTML: each release is a div.row containing an h3.entry-title with the version and an h6 with the release date]

We can get the rows with the following code:

library(rvest)
library(dplyr)
 
page1 <- html("release-notes.html")
page2 <- html("release-notes2.html")
 
rows = c(page1 %>% html_nodes("div.small-12 div.row"), 
         page2 %>% html_nodes("div.small-12 div.row") ) 
 
> rows %>% head(1)
[[1]]
<div class="row"> <h3 class="entry-title"><a href="http://neo4j.com/release-notes/neo4j-2-2-2/">Latest Release: Neo4j 2.2.2</a></h3> <h6>05/21/2015</h6> <p>Neo4j 2.2.2 is a maintenance release, with critical improvements.</p> <p>Notably, this release:</p> <ul><li>Provides support for running Neo4j on Oracle and OpenJDK Java 8 runtimes</li> <li>Resolves an issue that prevented the Neo4j Browser from loading in the latest Chrome release (43.0.2357.65).</li> <li>Corrects the behavior of the <code>:sysinfo</code> (aka <code>:play sysinfo</code>) browser directive.</li> <li>Improves the <a href="http://neo4j.com/docs/2.2.2/import-tool.html">import tool</a> handling of values containing newlines, and adds support f...</li></ul><a href="http://neo4j.com/release-notes/neo4j-2-2-2/">Read full notes →</a> </div>

Now we need to loop through the rows and pull out just the version and release date. I wrote the following function to do this and strip out any extra text that we’re not interested in:

generate_releases = function(rows) {
  releases = data.frame()
  for(row in rows) {
    version = row %>% html_node("h3.entry-title")
    date = row %>% html_node("h6")  
 
    if(!is.null(version) && !is.null(date)) {
      version = version %>% html_text()
      version = gsub("Latest Release: ", "", version)
      version = gsub("Neo4j ", "", version)
      releases = rbind(releases, data.frame(version = version, date = date %>% html_text()))
    }
  }
  return(releases)
}
 
> generate_releases(rows)
   version       date
1    2.2.2 05/21/2015
2    2.2.1 04/14/2015
3    2.1.8 04/01/2015
4    2.2.0 03/25/2015
5    2.1.7 02/03/2015
6    2.1.6 11/25/2014
7    1.9.9 10/13/2014
8    2.1.5 09/30/2014
9    2.1.4 09/04/2014
10   2.1.3 07/28/2014
11   2.0.4 07/08/2014
12   1.9.8 06/19/2014
13   2.1.2 06/11/2014
14   2.0.3 04/30/2014
15   2.0.1 02/04/2014
16   2.0.2 04/15/2014
17   1.9.7 04/11/2014
18   1.9.6 02/03/2014
19     2.0 12/11/2013
20   1.9.5 11/11/2013
21   1.9.4 09/19/2013
22   1.9.3 08/30/2013
23   1.9.2 07/16/2013
24   1.9.1 06/24/2013
25     1.9 05/13/2013
26   1.8.3         //

Finally I wanted to convert the ‘date’ column to R's date format and get rid of the 1.8.3 row since it doesn't contain a date. lubridate is my go-to library for date manipulation in R so we'll use that here:

library(lubridate)
 
> generate_releases(rows) %>%  
      mutate(date = mdy(date)) %>%   
      filter(!is.na(date)) 
 
   version       date
1    2.2.2 2015-05-21
2    2.2.1 2015-04-14
3    2.1.8 2015-04-01
4    2.2.0 2015-03-25
5    2.1.7 2015-02-03
6    2.1.6 2014-11-25
7    1.9.9 2014-10-13
8    2.1.5 2014-09-30
9    2.1.4 2014-09-04
10   2.1.3 2014-07-28
11   2.0.4 2014-07-08
12   1.9.8 2014-06-19
13   2.1.2 2014-06-11
14   2.0.3 2014-04-30
15   2.0.1 2014-02-04
16   2.0.2 2014-04-15
17   1.9.7 2014-04-11
18   1.9.6 2014-02-03
19     2.0 2013-12-11
20   1.9.5 2013-11-11
21   1.9.4 2013-09-19
22   1.9.3 2013-08-30
23   1.9.2 2013-07-16
24   1.9.1 2013-06-24
25     1.9 2013-05-13

We could then easily see how many releases there were by year:

releasesByDate = generate_releases(rows) %>%  
  mutate(date = mdy(date)) %>%   
  filter(!is.na(date))
 
> releasesByDate %>% mutate(year = year(date)) %>% count(year)
Source: local data frame [3 x 2]
 
  year  n
1 2013  7
2 2014 13
3 2015  5

Or by month:

> releasesByDate %>% mutate(month = month(date)) %>% count(month)
Source: local data frame [11 x 2]
 
   month n
1      2 3
2      3 1
3      4 5
4      5 2
5      6 3
6      7 3
7      8 1
8      9 3
9     10 1
10    11 2
11    12 1

Before this quick bit of hacking I'd always turned to Ruby or Python whenever I wanted to scrape a dataset, but it looks like rvest now makes R a decent option for this type of work. Good times!

Categories: Programming

NDC Oslo 2015 Experience Report

Phil Trelford's Array - Sun, 06/21/2015 - 21:54

Last week my son Sean and I popped over to Norway for NDC Oslo, billed as one of the world’s largest independent software conferences.

Venue

The venue, the Oslo Spektrum in the centre of the city, was huge, with a large lower floor for vendors and refreshments surrounded by 9 rooms for talks. There were nearly 2000 delegates and nearly 200 speakers at the event, enjoying a veritable binge festival of hour-long talks, and even a room where you could watch all the talks simultaneously on a series of big screens.

Invitation

My 8yo Sean had been invited to do a lightning talk, and according to one NDC conference organizer they were then stuck with inviting me along too, apparently something they wouldn’t have done otherwise, or so he felt compelled to tell me on our first meeting. The other organizers seemed happier to see me or perhaps were more able to exercise tact. Regardless, it was great to meet up with many friends old and new and from far and wide.

The #fsharp rat pack at #ndcoslo last night! pic.twitter.com/xTEWkrtzh5

— John Azariah (@johnazariah) June 19, 2015

Travel

NDC were kind enough to arrange flights and accommodation for me, but I had to pay for Sean's travel myself. The hotel was close to the venue; unfortunately, when we arrived late on Wednesday night it was over-booked and the room we were initially sent to was already occupied, where the phrase what is seen cannot be unseen is probably quite fitting. Thankfully we were eventually rehoused in another hotel nearby. The next day we were asked to pay 4500Kr (450GBP) to remain in the hotel. After some assistance from organizer Charlotte Lyng we were relieved to be given an unoccupied room at the original hotel for the following nights.

Lightning talk

Sean’s lightning talk was after lunch on the Thursday; the room was standing room only with over 100 attending, and he had a great reception. He started with a short rendition of Pink Floyd’s Wish You Were Here while I set up the laptop (thanks to Carl Franklin of .Net Rocks for the loan of the guitar), followed by the main content: a live coding session on Composing 3D Objects in OpenGL with F#.

There was a lot of reaction on Twitter; here are a few of the many tweets:

Sean warming up for his lightning talk in Room 6 at 13:40 #ndcoslo #fsharp #opengl #webgl pic.twitter.com/uomoGV9vS6

— Sean's dad (@ptrelford) June 18, 2015

Full house for Sean Trelford 8yo at #ndcoslo pic.twitter.com/rUiywM38mG

— Jakob Bradford (@jakbradf) June 18, 2015

Sean presenting 3D rendering in #fsharp for lightning talks. He's the youngest speaker at #ndcodlo pic.twitter.com/oad4mcNtqM

— Jérémie Chassaing (@thinkb4coding) June 18, 2015

Watching 8-year old Sean do a lightning talk about composing 3D objects in OpenGL with F# #ndcoslo pic.twitter.com/hNTbJyqV78

— Andreia Gaita (@sh4na) June 18, 2015

This 8 y/o kid wins #ndcoslo - talking on F# and OpenGL. Remember this those of you looking for the courage to speak. pic.twitter.com/VXwXgSB351

— Troy Hunt (@troyhunt) June 18, 2015

Thanks for all the supportive comments, this was only Sean's third public speaking engagement and he really enjoyed speaking at and attending the conference.

You can try composing your own 3D scenes in the browser with F# and WebGL at Fun3D.net


F# for C# developers

My talk was an introduction to the F# programming language from a C# developer's perspective and also seemed to be well received, although in terms of Twitter I was rightly eclipsed by my son Sean's efforts. In fact from now on I think I might be more commonly known as Sean's dad.

Solid content from @ptrelford at #ndcoslo. Live-code ports of C# to F# If you missed catch on video #fsharp pic.twitter.com/7UjeoNdqql

— Bryan Hunter (@bryan_hunter) June 19, 2015

Dot-driven-development with #fsharp. @ptrelford exploring Premiere League table with #html type provider #ndcoslo pic.twitter.com/07y47rU9Lx

— Tomas Petricek (@tomaspetricek) June 19, 2015

I also appear to have made the NDC Oslo speaker leaderboard, in the 100-200 audience category. Thanks for all the green votes! :)

Look closely: @elixirlang talks are heading up the leaderboards with 100% audience approval at #ndcoslo pic.twitter.com/mqZMgqi6j1

— Chris McCord (@chris_mccord) June 19, 2015

Other talks

We saw some fantastic talks including Phil Nash on his C++ unit testing framework Catch, Gojko Adzic's thoughts on Continuous Delivery, Greg Young's new project PrivateEye, Tomas Petricek's live-coded F# web programming session, Gary Short's Troll Hunting talk and many more. It was also great to see F# feature so heavily both on and off the functional programming track, with attendance on the FP track doubling since last year!

Summary

Both Sean and I really enjoyed the conference: there was a vast array of speakers to learn from, huge numbers of passionate developers to mingle with during the breaks and a very child-friendly vendor floor, complete with a surfboard simulator. Sean was very happy to win a Raspberry Pi for his efforts on it, along with enjoying an endless supply of ice cream! Also, look out for Sean's interview on Herding Code which should be out in a month or so.

Group photo #ndcoslo :) pic.twitter.com/tYQApGTiDe

— Andrea (@silverSpoon) June 19, 2015
Categories: Programming

R: dplyr – segfault cause ‘memory not mapped’

Mark Needham - Sat, 06/20/2015 - 23:18

In my continued playing around with web logs in R I wanted to process the logs for a day and see what the most popular URIs were.

I first read in all the lines using the read_lines function in readr and put the vector it produced into a data frame so I could process it using dplyr.

library(readr)
dlines = data.frame(column = read_lines("~/projects/logs/2015-06-18-22-docs"))

In the previous post I showed some code to extract the URI from a log line. I extracted this code out into a function and adapted it so that I could pass in a list of values instead of a single value:

library(stringr)
 
# str_extract_all / str_match come from stringr
extract_uri = function(log) {
  parts = str_extract_all(log, "\"[^\"]*\"")
  return(lapply(parts, function(p) str_match(p[1], "GET (.*) HTTP")[2] %>% as.character))
}

Next I ran the following function to count the number of times each URI appeared in the logs:

library(dplyr)
pages_viewed = dlines %>%
  mutate(uri  = extract_uri(column)) %>% 
  count(uri) %>%
  arrange(desc(n))

This crashed my R process with the following error message:

segfault cause 'memory not mapped'

I narrowed it down to a problem when doing a group by operation on the ‘uri’ field and came across this post which suggested that it was handled more cleanly in more recent versions of dplyr.

I upgraded to 0.4.2 and tried again:

## Error in eval(expr, envir, enclos): cannot group column uri, of class 'list'

That makes more sense. We’re probably returning a list from extract_uri rather than a vector which would fit nicely back into the data frame. That’s fixed easily enough by unlisting the result:

extract_uri = function(log) {
  parts = str_extract_all(log, "\"[^\"]*\"")
  return(unlist(lapply(parts, function(p) str_match(p[1], "GET (.*) HTTP")[2] %>% as.character)))
}

And now when we run the count function it's happy again. Good times!
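For reference, here's the whole thing stitched together with the fixed function, as a minimal sketch that simply combines the snippets above (the log file path and column name are the ones used earlier in this post):

library(readr)
library(dplyr)
library(stringr)
 
# fixed version: unlist so we return a character vector rather than a list
extract_uri = function(log) {
  parts = str_extract_all(log, "\"[^\"]*\"")
  return(unlist(lapply(parts, function(p) str_match(p[1], "GET (.*) HTTP")[2] %>% as.character)))
}
 
dlines = data.frame(column = read_lines("~/projects/logs/2015-06-18-22-docs"))
 
pages_viewed = dlines %>%
  mutate(uri = extract_uri(column)) %>%
  count(uri) %>%
  arrange(desc(n))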

Categories: Programming

Leadership Skills for Making Things Happen

"A leader is one who knows the way, goes the way, and shows the way." -- John C. Maxwell

How many people do you know that talk a good talk, but don’t walk the walk?

Or, how many people do you know who have a bunch of ideas that you know will never see the light of day?  They can pontificate all day long, but the idea of turning those ideas into work that could be done is foreign to them.

Or, how many people do you know can plan all day long, but their plan is nothing more than a list of things that will never happen?  Worse, maybe they turn it into a team sport, and everybody participates in the planning process of all the outcomes, ideas and work that will never happen. (And, who exactly wants to be accountable for that?)

It doesn’t need to be this way.

A lot of people have Hidden Strengths they can develop into Learned Strengths.  And one of the most important buckets of strengths is Leading Implementation.

Leading Implementation is a set of leadership skills for making things happen.

It includes the following leadership skills:

  1. Coaching and Mentoring
  2. Customer Focus
  3. Delegation
  4. Effectiveness
  5. Monitoring Performance
  6. Planning and Organizing
  7. Thoroughness

Let’s say you want to work on these leadership skills.  The first thing you need to know is that these are not elusive skills reserved exclusively for the elite.

No, these are commonly Hidden Strengths that you and others around you already have, and they just need to be developed.

If you don’t think you are good at any of these, then before you rule yourself out, and scratch them off your list, you need to ask yourself some key reflective questions:

  1. Do you know what good actually looks like?  Who are your role models?  What do they do differently than you, and is it really might and magic, or do they simply use behaviors or techniques that you could learn, too?
  2. How much have you actually practiced?   Have you really spent any sort of time working at the particular skill in question?
  3. How did you create an effective feedback loop?  So many people rapidly improve when they figure out how to create an effective learning loop and an effective feedback loop.
  4. Who did you learn from?  Are you expecting yourself to just naturally be skilled?  Really?  What if you found a good mentor or coach, one that could help you create an effective learning loop and feedback loop, so you can improve and actually chart and evaluate your progress?
  5. Do you have a realistic bar?  It’s easy to fall into the trap of “all or nothing.”   What if instead of focusing on perfection, you focused on progress?   Could a little improvement in a few of these areas, change your game in a way that helps you operate at a higher level?

I’ve seen far too many starving artists and unproductive artists, as well as mad scientists, who had brilliant ideas that they couldn’t turn into reality.  While some were lucky enough to pair with the right partners and bring their ideas to life, I’ve also seen another pattern: productive artists.

They develop some of the basic leadership skills in themselves to improve their ability to execute.

Not only are they more effective on the job, but they are happier with their ability to express their ideas and turn their ideas into action.

Even better, when they partner with somebody who has strong execution, they amplify their impact even more because they have a better understanding and appreciation of what it takes to execute ideas.

Like talk, ideas are cheap.

The market rewards execution.

Categories: Architecture, Programming

R: Regex – capturing multiple matches of the same group

Mark Needham - Fri, 06/19/2015 - 22:38

I’ve been playing around with some web logs using R and I wanted to extract everything that existed in double quotes within a logged entry.

This is an example of a log entry that I want to parse:

log = '2015-06-18-22:277:548311224723746831\t2015-06-18T22:00:11\t2015-06-18T22:00:05Z\t93317114\tip-127-0-0-1\t127.0.0.5\tUser\tNotice\tneo4j.com.access.log\t127.0.0.3 - - [18/Jun/2015:22:00:11 +0000] "GET /docs/stable/query-updating.html HTTP/1.1" 304 0 "http://neo4j.com/docs/stable/cypher-introduction.html" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36"'

And I want to extract these 3 things:

  • /docs/stable/query-updating.html
  • http://neo4j.com/docs/stable/cypher-introduction.html
  • Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36

i.e. the URI, the referrer and browser details.

I’ll be using the stringr library which seems to work quite well for this type of work.

To extract these values we need to find all the occurrences of double quotes and get the text inside those quotes. We might start by using the str_match function:

> library(stringr)
> str_match(log, "\"[^\"]*\"")
     [,1]                                               
[1,] "\"GET /docs/stable/query-updating.html HTTP/1.1\""

Unfortunately that only picked up the first occurrence of the pattern so we’ve got the URI but not the referrer or browser details. I tried str_extract with similar results before I found str_extract_all which does the job:

> str_extract_all(log, "\"[^\"]*\"")
[[1]]
[1] "\"GET /docs/stable/query-updating.html HTTP/1.1\""                                                                            
[2] "\"http://neo4j.com/docs/stable/cypher-introduction.html\""                                                                    
[3] "\"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36\""

We still need to do a bit of cleanup to get rid of the ‘GET’ and ‘HTTP/1.1’ in the URI and the quotes in all of them:

parts = str_extract_all(log, "\"[^\"]*\"")[[1]]
uri = str_match(parts[1], "GET (.*) HTTP")[2]
referer = str_match(parts[2], "\"(.*)\"")[2]
browser = str_match(parts[3], "\"(.*)\"")[2]
 
> uri
[1] "/docs/stable/query-updating.html"
 
> referer
[1] "https://www.google.com/"
 
> browser
[1] "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36"

We could then go on to split out the browser string into its sub-components, but that'll do for now!
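If we did want to take that next step, here's a minimal sketch; it assumes the 'product (platform) details' layout of the user agent string shown above rather than being a general purpose user agent parser:

# split the browser string into the product, the platform and the remaining engine/browser tokens
browser_parts = str_match(browser, "^(\\S+) \\(([^)]*)\\) (.*)$")
 
> browser_parts[2]
[1] "Mozilla/5.0"
 
> browser_parts[3]
[1] "Macintosh; Intel Mac OS X 10_10_2"
 
> browser_parts[4]
[1] "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36"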

Categories: Programming

How to deploy composite Docker applications with Consul key values to CoreOS

Xebia Blog - Fri, 06/19/2015 - 16:34

Most examples of deploying Docker applications to CoreOS use a single Docker application. But as soon as you have an application that consists of more than one unit, the number of commands you have to type soon becomes annoying. At Xebia we have a best practice that says "Three strikes and you automate", mandating that the third time you do something similar, you automate it. In this blog I share the manual page of a utility called fleetappctl that lets you perform rolling upgrades and deploy Consul key-value pairs for composite applications on CoreOS, and I show three examples of its usage.


fleetappctl is a utility that allows you to manage a set of CoreOS fleet unit files as a single application. You can start, stop and deploy the application. fleetappctl is idempotent and does rolling upgrades on template files with multiple instances running. It can substitute placeholders at deployment time and it is able to deploy Consul key-value pairs as part of your application. Using fleetappctl you have everything you need to create a self-contained deployment unit of your composite application and put it under version control.

The command line options to fleetappctl are shown below:

fleetappctl [-d deployment-descriptor-file]
            [-e placeholder-value-file]
            (generate | list | start | stop | destroy)
option -d

The deployment descriptor file describes all the fleet unit files and Consul key-value pair files that make up the application. All the files referenced in the deployment descriptor may have placeholders for deployment-time values. These placeholders are enclosed in double curly brackets {{ }}.

option -e

The file contains the values for the placeholders to be used on deployment of the application. The file has a simple format:

<name>=<value>
start

starts all units in the order in which they appear in the deployment descriptor. If you have a template unit file, you can specify the number of instances you want to start. Start is idempotent, so you may call start multiple times. Start will bring the deployment in line with your descriptor.

If the unit file has changed with respect to the deployed unit file, the corresponding instances will be stopped and restarted with the new unit file. If you have a template file, the instances of the template file will be upgraded one by one.

Any Consul key-value pairs defined by the consul.KeyValuePairs entries are created in Consul. Existing values are not overwritten.

generate

generates a deployment descriptor (deployit-manifest.xml) based upon all the unit files found in your directory. If a file is a fleet unit template file the number of instances to start is set to 2, to support rolling upgrades.

stop

stops all units in reverse order of their appearance in the deployment descriptor.

destroy

destroys all units in reverse order of their appearance in the deployment descriptor.

list

lists the runtime status of the units that appear in the deployment descriptor.

Install fleetappctl

To install the fleetappctl utility, type the following commands:

curl -q -L https://github.com/mvanholsteijn/fleetappctl/archive/0.25.tar.gz | tar -xzf -
cd fleetappctl-0.25
./install.sh
brew install xmlstarlet
brew install fleetctl
Start the platform

If you do not have the platform running, start it first.

cd ..
git clone https://github.com/mvanholsteijn/coreos-container-platform-as-a-service.git
cd coreos-container-platform-as-a-service
git checkout 029d3dd8e54a5d0b4c085a192c0ba98e7fc2838d
cd vagrant
vagrant up
./is_platform_ready.sh
Example - Three component web application


The first example is a three-component application. It consists of a mount, a Redis database service and a web application. We generate the deployment descriptor, indicate that we do not want to start the mount, start the application and then modify the web application unit file to change the service name to 'helloworld'. We perform a rolling upgrade by issuing start again. Finally we list, stop and destroy the application.

cd ../fleet-units/app
# generate a deployment descriptor
fleetappctl generate

# do not start mount explicitly
xml ed -u '//fleet.UnitConfigurationFile[@name="mnt-data"]/startUnit' \
       -v false deployit-manifest.xml > \
        deployit-manifest.xml.new
mv deployit-manifest.xml{.new,}

# start the app
fleetappctl start 

# Check it is working
curl hellodb.127.0.0.1.xip.io:8080
curl hellodb.127.0.0.1.xip.io:8080

# Change the service name of the application in the unit file
sed -i -e 's/SERVICE_NAME=hellodb/SERVICE_NAME=helloworld/' app-hellodb@.service

# do a rolling upgrade
fleetappctl start 

# Check it is now accessible on the new service name
curl helloworld.127.0.0.1.xip.io:8080

# Show all units of this app
fleetappctl list

# Stop all units of this app
fleetappctl stop
fleetappctl list

# Restart it again
fleetappctl start

# Destroy it
fleetappctl destroy
Example - placeholder references

This example shows the use of a placeholder reference in the unit file of the paas-monitor application. The application takes two optional environment variables, RELEASE and MESSAGE, that allow you to configure the resulting responses. The variable RELEASE is configured in the Docker run command in the fleet unit file through a placeholder. The actual value for the current deployment is taken from a placeholder value file.

cd ../fleetappctl-0.25/examples/paas-monitor
#check out the placeholder reference
grep '{{' paas-monitor@.service

...
ExecStart=/bin/sh -c "/usr/bin/docker run --rm --name %p-%i \
 --env RELEASE={{release}} \
...
# checkout our placeholder values
cat dev.env
...
release=V2
# start the app
fleetappctl -e dev.env start

# show current release in status
curl paas-monitor.127.0.0.1.xip.io:8080/status

# start is idempotent (ie. nothing happens)
fleetappctl -e dev.env start

# update the placeholder value and see a rolling upgrade in the works
echo 'release=V3' > dev.env
fleetappctl -e dev.env start
curl paas-monitor.127.0.0.1.xip.io:8080/status

fleetappctl destroy
Example - Env Consul Key Value Pair deployments


The final example shows the use of a Consul key-value pair, the use of placeholders, and envconsul to dynamically update the environment variables of a running instance. The environment variables RELEASE and MESSAGE are taken from the keys under /paas-monitor in Consul. The initial values of these keys are set on first deployment using values from the placeholder file.

cd ../fleetappctl-0.25/examples/envconsul

#check out the Consul Key Value pairs, and notice the reference to placeholder values
cat keys.consul
...
paas-monitor/release={{release}}
paas-monitor/message={{message}}

# checkout our placeholder values
cat dev.env
...
release=V4
message=Hi guys
# start the app
fleetappctl -e dev.env start

# show current release and message in status
curl paas-monitor.127.0.0.1.xip.io:8080/status

# Change the message in Consul
fleetctl ssh paas-monitor@1 \
    curl -X PUT \
    -d \'hello Consul\' \
    http://172.17.8.101:8500/v1/kv/paas-monitor/message

# checkout the changed message
curl paas-monitor.127.0.0.1.xip.io:8080/status

# start does not change the values..
fleetappctl -e dev.env start
Conclusion

CoreOS provides all the basic functionality for a Container Platform as a Service. With the utility fleetappctl it becomes easy to start, stop and upgrade composite applications. The script is an addition to fleetctl and does not break other ways of deploying your applications to CoreOS.

The source code, manual page and documentation of fleetappctl can be found on https://github.com/mvanholsteijn/fleetappctl.

 

Startup Thinking

“Startups don't win by attacking. They win by transcending.  There are exceptions of course, but usually the way to win is to race ahead, not to stop and fight.” -- Paul Graham

A startup is the largest group of people you can convince to build a different future.

Whether you launch a startup inside a big company or launch a startup as a new entity, there are a few things that determine the strength of the startup: a sense of mission, space to think, new thinking, and the ability to do work.

The more clarity you have around Startup Thinking, the more effective you can be, whether you are starting startups inside or outside of a big company.

In the book, Zero to One: Notes on Startups, or How to Build the Future, Peter Thiel shares his thoughts about Startup Thinking.

Startups are Bound Together by a Sense of Mission

It’s the mission.  A startup has an advantage when there is a sense of mission that everybody lives and breathes.  The mission shapes the attitudes and the actions that drive towards meaningful outcomes.

Via Zero to One: Notes on Startups, or How to Build the Future:

“New technology tends to come from new ventures--startups.  From the Founding Fathers in politics to the Royal Society in science to Fairchild Semiconductor's ‘traitorous eight’ in business, small groups of people bound together by a sense of mission have changed the world for the better.  The easiest explanation for this is negative: it's hard to develop new things in big organizations, and it's even harder to do it by yourself.  Bureaucratic hierarchies move slowly, and entrenched interests shy away from risk.” 

Signaling Work is Not the Same as Doing Work

One strength of a startup is the ability to actually do work.  With other people.  Rather than just talk about it, plan for it, and signal about it, a startup can actually make things happen.

Via Zero to One: Notes on Startups, or How to Build the Future:

“In the most dysfunctional organizations, signaling that work is being done becomes a better strategy for career advancement than actually doing work (if this describes your company, you should quit now).  At the other extreme, a lone genius might create a classic work of art or literature, but he could never create an entire industry.  Startups operate on the principle that you need to work with other people to get stuff done, but you also need to stay small enough so that you actually can.”

New Thinking is a Startup’s Strength

The strength of a startup is new thinking.  New thinking is even more valuable than agility.  Startups provide the space to think.

Via Zero to One: Notes on Startups, or How to Build the Future:

“Positively defined, a startup is the largest group of people you can convince of a plan to build a different future.  A new company's most important strength is new thinking: even more important than nimbleness, small size affords space to think.  This book is about the questions you must ask and answer to succeed in the business of doing new things: what follows is not a manual or a record of knowledge but an exercise in thinking.  Because that is what a startup has to do: question received ideas and rethink business from scratch.”

Do you have stinking thinking or do you have a beautiful mind?

New thinking will take you places.


Categories: Architecture, Programming

I Know I’ll Never Be Happy Unless I Do Something For Myself

Making the Complex Simple - John Sonmez - Thu, 06/18/2015 - 16:00

In this episode, I answer an email about being truly happy doing something for oneself. Full transcript: John:               Hey, John Sonmez from simpleprogrammer.com. I’ve got a question here about some life choices. It’s titled In A Pickle, and I’ll read you here. Cole says, “Hey, John. First off, I wanted to say you’re a huge […]

The post I Know I’ll Never Be Happy Unless I Do Something For Myself appeared first on Simple Programmer.

Categories: Programming

Introducing new Android training programs with Udacity

Android Developers Blog - Wed, 06/17/2015 - 18:51

Posted by Peter Lubbers, Senior Program Manager, Google Developer Training

We know how important it is for you to efficiently develop the skills to build better Android apps and be successful in your jobs. To meet your training needs, we’ve partnered with Udacity to create Android training courses, ranging from beginner to more advanced content.

Last week at Google I/O we announced the Android Nanodegree, an education credential that is designed for busy people to learn new skills and advance their careers in a short amount of time from anywhere at any time. The nanodegree ties together our Android courses, and provides you with a certificate that may help you be a more marketable Android developer.

Training courses

All training courses are developed and taught by expert Google instructors from the Developer Platform team. In addition to updating our popular Developing Android Apps course and releasing Advanced Android App Development, we now have courses for everyone from beginning programmers to advanced developers who want to configure their Gradle build settings. And then there's all the fun stuff in between—designing great-looking, high performance apps, making your apps run on watches, TVs, and in cars, and using Google services like Maps, Ads, Analytics, and Fit.

Each course is available individually, without charge, at udacity.com/google. Our instructors are waiting for you:

Android Nanodegree

You can also enroll in the new Android Nanodegree for a monthly subscription fee, which gives you access to coaches who will review your code, provide guidance on your project, answer questions about the class, and help keep you on track when you need it.

More importantly, you will learn by doing, focusing only on where you need to grow. Since the Nanodegree is based on your skills and the projects in your portfolio, you do not need to complete the courses that address the skills you already have. You can focus on writing the code and building the projects that meet the requirements for the Nanodegree credential. We’ll also be inviting 50 Android Nanodegree graduates to Google's headquarters in Mountain View, California, for a three day intensive Android Career Summit in November. Participants will have the opportunity to experience Google’s company culture and attend workshops focused on developing their personal career paths. Participants will then leverage the skills learned from Udacity’s Android Nanodegree during a two-day hackathon.

To help you learn more about this program and the courses within it, Google and Udacity are partnering on an "Ask the Experts" live-streamed series. In the first episode, on Wednesday, June 3rd at 2pm PDT, join Sebastian Thrun, Peter Lubbers and Jocelyn Becker, who will be answering your questions on the Nanodegree. RSVP here and ask and vote for questions here.

Android training in Arabic

We also believe that everyone has the right to learn how to develop Android apps. Today, there is a great need for developers in countries outside of the United States as software powers every industry from food and transportation to healthcare and retail. As a first step in getting the Android Nanodegree localized and targeted for individual countries, we have worked with the Government of Egypt and Udacity to create end-to-end translations of our top Android courses into Arabic (including fully dubbed video). Google will offer 2,000 scholarships to students to get a certificate for completing the Arabic version of the Android Fundamentals course. Google will also host job fairs and sessions for students with local employers and the Egyptian Government. For more information, see www.udacity.com/egypt.

Complete Android course catalog

Here are the currently-planned courses in the Android Nanodegree:

So get learning now at udacity.com/android

Join the discussion on

+Android Developers
Categories: Programming

Coding: Explore and retreat

Mark Needham - Wed, 06/17/2015 - 18:23

When refactoring code or looking for the best way to integrate a new piece of functionality I generally favour a small steps/incremental approach but recent experiences have led me to believe that this isn’t always the quickest approach.

Sometimes it seems to make more sense to go on little discovery missions in the code, make some bigger steps and then, if necessary, retreat, revert our changes and apply the lessons learnt on our next discovery mission. This technique isn’t anything novel, but I think it’s quite effective.

Michael and I were recently looking at the Smart Local Moving algorithm which is used for community detection in large networks and decided to refactor the code to make sure we understood how it worked. When we started the outline of the main class was like this:

public class Network implements Cloneable, Serializable
{
    private static final long serialVersionUID = 1;
 
    private int numberOfNodes;
    private int[] firstNeighborIndex;
    private int[] neighbor;
    private double[] edgeWeight;
    private double totalEdgeWeightSelfLinks;
    private double[] nodeWeight;
    private int nClusters;
    private int[] cluster;
 
    private double[] clusterWeight;
    private int[] numberNodesPerCluster;
    private int[][] nodePerCluster;
    private boolean clusteringStatsAvailable;
...
}

My initial approach was to put methods around things to make it a bit easier to understand and then step by step replace each of those fields with nodes and relationships. I spent the first couple of hours doing this and while it was making the code more readable it wasn’t progressing very quickly and I wasn’t much wiser about how the code worked.

Michael and I paired on it for a few hours and he adopted a slightly different but more successful approach where we looked at slightly bigger chunks of code e.g. all the loops that used the firstNeighborIndex field and then created a hypothesis of what that code was doing.

In this case firstNeighborIndex acts as an offset into neighbor and is used to iterate through a node’s relationships. We thought we could probably replace that with something more similar to the Neo4j model where you have classes for nodes and relationships and a node has a method which returns a collection of relationships.
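As an aside, here's a small illustrative sketch of how that offset array works, written in R rather than the project's Java and using made-up numbers and 1-based indexing (the Java code is 0-based): firstNeighborIndex[i] marks where node i's entries start in neighbor, and the next offset marks where they end.

# hypothetical offsets for 3 nodes; the final entry closes off the last node's range
firstNeighborIndex = c(1, 3, 6, 7)
neighbor           = c(2, 3, 1, 3, 4, 1)
 
neighbours_of = function(node) {
  from = firstNeighborIndex[node]
  to   = firstNeighborIndex[node + 1] - 1
  neighbor[from:to]
}
 
> neighbours_of(2)
[1] 1 3 4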

We tried tearing out everywhere that used those two fields and replacing them with our new nodes/relationships code but that didn’t work because we hadn’t realised that edgeWeight and nodeWeight are also tied to the contents of the original fields.

We therefore needed to retreat and try again. This time I put the new approach alongside the existing approach and then slowly replaced existing bits of code.

Along the way I came up with other ideas about how to restructure the code, tried some more bigger leaps to validate my ideas and then moved back into incremental mode again.

In summary I’ve found the combination of incrementally changing code and going on bigger exploratory missions works quite well.

Now I’m trying to work out when each approach is appropriate and I’ll write that up when I learn more! You can see my progress via the github commits.

Categories: Programming

Episode 229: Flavio Junqueira on Distributed Coordination with Apache ZooKeeper

Flavio Junqueira is the author of ZooKeeper: Distributed Process Coordination. Flavio and Jeff Meyerson begin by defining ZooKeeper and talking about what ZooKeeper is and isn’t. ZooKeeper can be thought of as a patch against certain fallacies of distributed computing: that the network is secure, has zero latency, has infinite bandwidth, and so on. With […]
Categories: Programming

The Lifecycle Model Applied to Common Methodologies - New Lecture Posted

10x Software Development - Steve McConnell - Wed, 06/17/2015 - 14:35

I've posted this week's lecture in my Understanding Software Projects series at https://cxlearn.com. Some of the past lectures that have been posted are still free. 

In this week's lecture I explain how methodologies like Scrum, Extreme Programming, Waterfall, and Code & Fix look from the point of view of the Software Lifecycle Model. The lecture starts by going into more detail about how the Lifecycle Model can be abstracted for purposes of describing various methodologies and then describes specific methodologies. I conclude with some comments about how specific practices (pair programming, formal inspections, test first, continuous integration, etc.) fit within the Lifecycle Model.

The point of this is to show the way that familiar methodologies can be abstracted into a general-purpose model. Performing that abstraction helps to separate the substance from the hype in terms of identifying what is really significantly different about a methodology like Scrum or XP vs. what is just marketing fluff. 

Lectures posted so far include:  

0.0 Understanding Software Projects - Intro
     0.1 Introduction - My Background
     0.2 Reading the News
     0.3 Definitions and Notations 

1.0 The Software Lifecycle Model - Intro
     1.1 Variations in Iteration 
     1.2 Lifecycle Model - Defect Removal
     1.3 Lifecycle Model Applied to Common Methodologies (New this Week)

2.0 Software Size
     2.05 Size - Comments on Lines of Code
     2.1 Size - Staff Sizes 
     2.2 Size - Schedule Basics 

Check out the lectures at http://cxlearn.com!

Understanding Software Projects - Steve McConnell

 

Legacy Hiring

Phil Trelford's Array - Wed, 06/17/2015 - 12:40

Over the years I’ve been involved in a lot of developer hiring. Here I’d like to share a pearl of wisdom I gleaned from a colleague I shared the hiring process with.

Their angle was to give extra points to people with experience of both building AND maintaining systems, i.e. those who stuck around long enough to understand the consequences of their own actions.

I found this an interesting insight, the logic being that you learn the longer-term impact of your design decisions from hanging around to fix the faults and add new features.

But what does staying the distance actually teach you? I think the only real way is to find out for yourself, but here are a few things that spring to mind.

Prefer simple

Simple isn’t easy; there’s an old saying that goes “I didn't have time to write a short letter, so I wrote a long one instead”.

The same goes for code; finding the underlying simplicity can take significantly longer than just bashing out the first thing that comes to mind. However the long term payoffs for system maintenance can be huge.

Tests

Tests can be a good thing, and test-driven development (TDD) can provide a helpful suite of regression tests to move forward with. However, if the end result is tightly-coupled tests then the pattern might be better coined Test-Driven Concrete (TDC): instead of making change easier, the product becomes almost impossible to maintain (thanks to Ian Johnson for the metaphor).

Instead of testing at the class and method level you should be testing required behaviour.

Summary

If you’re embarking on a big shiny new project, it might make sense to have some developers around who’ve been through the mill and come out the other side, and can help guide you to a more maintainable future.

Do you have any tips to share?

Categories: Programming

Has C# Peaked?

Phil Trelford's Array - Tue, 06/16/2015 - 18:47

Microsoft’s C# programming language first appeared in 2000, over 15 years ago; that’s a long time in tech.

Every month or so there’s a “Leaving .Net” article; the last one I bumped into was “The collapse of the .net ecosystem” from Justin Angel:

"The collapse of the .net ecosystem" @ http://t.co/doKJ712Kqm Wrote this six months ago and didn't want to offend people by publishing it.

— Justin Angel (@JustinAngel) June 1, 2015

The article shows, through a series of charts, the level of interest in C# peaking and then falling.

This is what I’d consider to be the natural continuum of things, where languages have their day and then slowly decline. In the past C was my primary language, then C++ and so on; why should C# be any different?

Disclaimer: This is not a leaving .Net post, just some of my own research that I thought I’d share, with a focus on the UK as that’s where I live.

Job trends

Indeed.com provides statistics on job placements with specific keywords, lets look at C#:

[Chart: Indeed.com job trends for C#]

This chart appears to show job adverts peaking between around 2010 and 2012 and tapering off fairly heavily after that.

Google trends

Google Trends lets you track interest over time based on a keyword; here I'm looking at C# in the UK:


On this chart the peak seems to be earlier, around 2009; perhaps the trend shows up earlier in the UK, but again the decline in interest is clearly visible.

TIOBE

Questions around the validity of TIOBE’s numbers abound, but here it is anyway:

[Chart: TIOBE index for C#]

Here the peak appears to be around 2012, although the drop is perhaps less pronounced.

PYPL

Yet another popularity index this time tracking google trends for “C# tutorial” in the UK:

[Chart: PYPL popularity index for C# in the UK]

This chart uses a logarithmic scale; however, what we might surmise, if the numbers are to be believed, is that interest in C# appears to fall off a cliff towards the end of 2014.

Stack Overflow

The recent Stack Overflow developer survey also shows a significant decline in interest, from 44.7% in 2013 to 31.6% in 2015. Developers’ current preferences are also quite revealing:

[Chart: Stack Overflow developer survey language preferences]

 

Where’s everyone gone?

This is only conjecture, but from my experience of .Net oriented developer events over the years in the UK, C# has always had a significant focus on the web and ASP.Net. My suspicion is that with the rise of thick-client JavaScript and mobile, significant numbers of developers have been migrating naturally towards those ecosystems.

Should I panic?

Probably not: there are still plenty of C# jobs out there for now, and hell, some of the best-paid jobs nowadays are for maintaining C++ and even COBOL systems. But if the numbers are anything to go by then we can say that interest in C# has peaked.

That said, who knows what the future holds; perhaps, like C++, there’ll be a renaissance, although C++’s renaissance never really brought it back to the levels of its heady heyday.

Then again, perhaps it’s more pragmatic not to dwell too heavily on the past, accept the numbers, and look to a new future. If your skills extend beyond one language then I guess you’ve probably got nothing to worry about; otherwise perhaps it’s time to pick up a new language and new paradigms.

And I’ll leave you with a question: do you think you’ll be using the same programming language in 5 years’ time?

Categories: Programming

Rat

Phil Trelford's Array - Tue, 06/16/2015 - 18:16

One of the biggest life lessons I learnt while at school, although I didn’t know it at the time, was from a PE lesson and a game called Rat.

 

There were only a few rules to Rat: one kid had the ball and could throw it at any of the other kids; when you were hit you were out, and the last kid left standing was the rat.

Most of the kids would run and jump around trying to avoid the ball, but would rarely last the distance; the surviving rats tended to push others into the line of fire or, better, keep their heads down and find a place to hide.

 

On reflection this seems a perfect analogy for the corporate world, where “survivors” in a company are typically the ones who keep their heads down and try not to draw attention to themselves.

But where’s the fun in that? The real lesson for me was not to get too caught up in the “game”; if you’re hit you can just move on to a new one.

Categories: Programming

Northwind: Finding direct/transitive Reports in SQL and Neo4j’s Cypher

Mark Needham - Mon, 06/15/2015 - 23:53

Every few months we run a relational to graph meetup at the Neo London office where we go through how to take your data from a relational database and into the graph.

We use the Northwind dataset, which often comes as a demo dataset on relational databases, and come up with some queries which seem graph-like in nature.

My favourite query is one which finds out how employees are organised and who reports to whom. I thought it’d be quite interesting to see what it would look like in Postgres SQL as well, just for fun.

We’ll start off by getting a list of employees and the person they report to:

SELECT e."EmployeeID", e."ReportsTo"
FROM employees AS e
WHERE e."ReportsTo" IS NOT NULL;
 
 EmployeeID | ReportsTo
------------+-----------
          1 |         2
          3 |         2
          4 |         2
          5 |         2
          6 |         5
          7 |         5
          8 |         2
          9 |         5
(8 ROWS)

In cypher we’d do this:

MATCH (e:Employee)<-[:REPORTS_TO]-(sub)
RETURN sub.EmployeeID, e.EmployeeID 
 
+-------------------------------+
| sub.EmployeeID | e.EmployeeID |
+-------------------------------+
| "4"            | "2"          |
| "5"            | "2"          |
| "1"            | "2"          |
| "3"            | "2"          |
| "8"            | "2"          |
| "9"            | "5"          |
| "6"            | "5"          |
| "7"            | "5"          |
+-------------------------------+
8 rows

Next let’s find the big boss who doesn’t report to anyone. First in SQL:

SELECT e."EmployeeID" AS bigBoss
FROM employees AS e
WHERE e."ReportsTo" IS NULL
 
 bigboss
---------
       2
(1 ROW)

And now cypher:

MATCH (e:Employee)
WHERE NOT (e)-[:REPORTS_TO]->()
RETURN e.EmployeeID AS bigBoss
 
+---------+
| bigBoss |
+---------+
| "2"     |
+---------+
1 row

We still don’t need to join anything so the query isn’t that interesting yet. Let’s bring in some more properties from the manager record so that we have to self-join on the employees table:

SELECT e."FirstName", e."LastName", e."Title", manager."FirstName", manager."LastName", manager."Title"
FROM employees AS e
JOIN employees AS manager ON e."ReportsTo" = manager."EmployeeID"
WHERE e."ReportsTo" IS NOT NULL
 
 FirstName | LastName  |          Title           | FirstName | LastName |         Title
-----------+-----------+--------------------------+-----------+----------+-----------------------
 Nancy     | Davolio   | Sales Representative     | Andrew    | Fuller   | Vice President, Sales
 Janet     | Leverling | Sales Representative     | Andrew    | Fuller   | Vice President, Sales
 Margaret  | Peacock   | Sales Representative     | Andrew    | Fuller   | Vice President, Sales
 Steven    | Buchanan  | Sales Manager            | Andrew    | Fuller   | Vice President, Sales
 Michael   | Suyama    | Sales Representative     | Steven    | Buchanan | Sales Manager
 Robert    | King      | Sales Representative     | Steven    | Buchanan | Sales Manager
 Laura     | Callahan  | Inside Sales Coordinator | Andrew    | Fuller   | Vice President, Sales
 Anne      | Dodsworth | Sales Representative     | Steven    | Buchanan | Sales Manager
(8 ROWS)
And the equivalent cypher:
 
MATCH (e:Employee)<-[:REPORTS_TO]-(sub)
RETURN sub.FirstName, sub.LastName, sub.Title, e.FirstName, e.LastName, e.Title
 
+----------------------------------------------------------------------------------------------------------------+
| sub.FirstName | sub.LastName | sub.Title                  | e.FirstName | e.LastName | e.Title                 |
+----------------------------------------------------------------------------------------------------------------+
| "Margaret"    | "Peacock"    | "Sales Representative"     | "Andrew"    | "Fuller"   | "Vice President, Sales" |
| "Steven"      | "Buchanan"   | "Sales Manager"            | "Andrew"    | "Fuller"   | "Vice President, Sales" |
| "Nancy"       | "Davolio"    | "Sales Representative"     | "Andrew"    | "Fuller"   | "Vice President, Sales" |
| "Janet"       | "Leverling"  | "Sales Representative"     | "Andrew"    | "Fuller"   | "Vice President, Sales" |
| "Laura"       | "Callahan"   | "Inside Sales Coordinator" | "Andrew"    | "Fuller"   | "Vice President, Sales" |
| "Anne"        | "Dodsworth"  | "Sales Representative"     | "Steven"    | "Buchanan" | "Sales Manager"         |
| "Michael"     | "Suyama"     | "Sales Representative"     | "Steven"    | "Buchanan" | "Sales Manager"         |
| "Robert"      | "King"       | "Sales Representative"     | "Steven"    | "Buchanan" | "Sales Manager"         |
+----------------------------------------------------------------------------------------------------------------+
8 rows

Now let’s see how many direct reports each manager has:

SELECT manager."EmployeeID" AS manager, COUNT(e."EmployeeID") AS reports
FROM employees AS manager
LEFT JOIN employees AS e ON e."ReportsTo" = manager."EmployeeID"
GROUP BY manager
ORDER BY reports DESC;
 
 manager | reports
---------+---------
       2 |       5
       5 |       3
       1 |       0
       3 |       0
       4 |       0
       9 |       0
       6 |       0
       7 |       0
       8 |       0
(9 ROWS)
And in cypher:
 
MATCH (e:Employee)
OPTIONAL MATCH (e)<-[rel:REPORTS_TO]-(report)
RETURN e.EmployeeID AS employee, COUNT(rel) AS reports
 
+--------------------+
| employee | reports |
+--------------------+
| "2"      | 5       |
| "5"      | 3       |
| "8"      | 0       |
| "7"      | 0       |
| "1"      | 0       |
| "4"      | 0       |
| "6"      | 0       |
| "9"      | 0       |
| "3"      | 0       |
+--------------------+
9 rows

Things start to get more interesting if we find the transitive reporting relationships that exist. I’m not an expert at Postgres but one way to achieve this is by writing a recursive WITH query like so:

WITH RECURSIVE recursive_employees("EmployeeID", "ReportsTo") AS (
        SELECT e."EmployeeID", e."ReportsTo"
        FROM employees e
      UNION ALL
        SELECT e."EmployeeID", e."ReportsTo"
        FROM employees e, recursive_employees re
        WHERE e."EmployeeID" = re."ReportsTo"
)
SELECT re."ReportsTo", COUNT(*) AS COUNT
FROM recursive_employees AS re
WHERE re."ReportsTo" IS NOT NULL
GROUP BY re."ReportsTo";
 
 ReportsTo | COUNT
-----------+-------
         2 |     8
         5 |     3
(2 ROWS)

If there’s a simpler way let me know in the comments.

In cypher we only need to add one character, ‘*’, after the ‘REPORTS_TO’ relationship to get it to recurse as far as it can. We’ll also remove the ‘OPTIONAL MATCH’ so that we only get back people who have people reporting to them:

MATCH (e:Employee)<-[rel:REPORTS_TO*]-(report)
RETURN e.EmployeeID AS employee, COUNT(rel) AS reports
 
+--------------------+
| employee | reports |
+--------------------+
| "2"      | 8       |
| "5"      | 3       |
+--------------------+
2 rows

Now I need to find some relational datasets with more complicated queries to play around with. If you have any ideas do let me know.

Categories: Programming

You’ve Cracked The Coding Interview, Now What?

Making the Complex Simple - John Sonmez - Mon, 06/15/2015 - 16:00

So, you’ve managed to crack the coding interview and got yourself a job offer, but now what? Negotiating your salary This is one of the most crucial steps to getting a new job, one most programmers ignore. Far too many programmers take the first offer they are given and don’t spend any effort negotiating their salaries. […]

The post You’ve Cracked The Coding Interview, Now What? appeared first on Simple Programmer.

Categories: Programming

boot2docker on xhyve

Xebia Blog - Mon, 06/15/2015 - 09:00

xhyve is a new hypervisor in the vein of KVM on Linux and bhyve on BSD. It’s actually a port of BSD’s bhyve to OS X, and it is more similar to KVM than to VirtualBox in that it is minimal and command-line only, which makes it a good fit for an always-running virtual machine like boot2docker on OS X.

This post documents the steps to get boot2docker running within xhyve and contains some quick benchmarks comparing xhyve’s performance with VirtualBox.

Read the full blogpost on Simon van der Veldt's website