
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Personal Development Insights from the Greatest Book on Personal Development Ever

“Let him who would move the world first move himself.” ― Socrates

At work, and in life, you need every edge you can get.

Personal development is a process of realizing and maximizing your potential.

It’s a way to become all that you’re capable of.

One of the most powerful books on personal development is Unlimited Power, by Tony Robbins. In Unlimited Power, Tony Robbins shares some of the most profound insights in personal development that the world has ever known.

Develop Your Abilities and Model Success

Through a deep dive into the world of NLP (Neuro-Linguistic Programming) and Neuro-Associative Conditioning, Robbins shows you how to master your mind, master your body, master your emotional intelligence, and improve what you’re capable of in all aspects of your life. You can think of NLP as re-programming your mind, body, and emotions for success.

We’ve already been programmed by the shows we watch, the books we’ve read, the people in our lives, the beliefs we’ve formed.  But a lot of this was unconscious.  We were young and took things at face value, and jumped to conclusions about how the world works, who we are, and who we can be, or worse, who others think we should be.

NLP is a way to break away from limiting beliefs and to model the success of others with skill. You can effectively reverse engineer how other people achieve success and then model the behavior, the attitudes, and the actions that create that success. And you can do it better, faster, and easier than you might imagine.

NLP is really a way to model what the most successful people think, say, and do.

Unlimited Power at Your Fingertips

I’ve created a landing page that is a round-up and starting point to dive into some of the book nuggets from Unlimited Power:

Unlimited Power Book Nuggets at a Glance

On that page, I also provided very brief summaries of the core personal development insights so that you can get a quick sense of the big ideas.

A Book Nugget is simply what I call a mini-lesson or insight from a book that you can use to change what you think, feel, or do.

Unlimited Power is not an easy book to read, but it’s one of the most profound tomes of knowledge in terms of personal development insights.

Personal Development Insights at Your Fingertips

If you want to skip the landing page and just jump into a few Unlimited Power Book Nuggets and take a few personal development insights for a spin, here you go:

5 Keys to Wealth and Happiness

5 Rules for Formulating Outcomes

5 Sources of Beliefs for Personal Excellence

7 Beliefs for Personal Excellence

7 Traits of Success

Create Your Ideal Day, the Tony Robbins Way

Don’t Compare Yourself to Others

How To Change the Emotion of Any Experience to Empower You

How To Get Whatever You Want

Leadership for a Better World

Persuasion is the Most Important Skill You Can Develop

Realizing Your Potential is a Dynamic Process

Scotoma: Why You Can’t See What’s Right in Front of You

Seven Meta-Programs for Understanding People

The Difference Between Those Who Succeed and Those Who Fail

As you’ll quickly see, Unlimited Power remains one of the most profound sources of insight for realizing your potential and becoming all that you’re capable of.

It truly is the ultimate source of personal development in action.

Enjoy.

Categories: Architecture, Programming

Derick Bailey is a Messaging Master

Making the Complex Simple - John Sonmez - Fri, 08/14/2015 - 13:00

Derick Bailey is a good friend of mine. You may know him from the Entreprogrammers Podcast or if you listen to Get Up and CODE, he has been on as my guest a few times. Derick is a software developer by trade. He is also an entrepreneur and has launched a number of notable products […]

The post Derick Bailey is a Messaging Master appeared first on Simple Programmer.

Categories: Programming

Android Developer Story: Zabob Studio and Buff Studio reach global users with Google Play

Android Developers Blog - Fri, 08/14/2015 - 05:43

Posted by Lily Sheringham, Google Play team

South Korean games developers Zabob Studio and Buff Studio are start-ups seeking to become major players in the global mobile games industry.

Zabob Studio was set up by Kwon Dae-hyeon and his wife in 2013. Despite being a couple-run business, they have already published ten games, including the hits ‘Zombie Judgement Day’ and ‘Infinity Dungeon.’ So far, the company has generated more than KRW ₩140M (approximately $125,000 USD) in sales revenue, with about 60 percent of the studio’s downloads coming from international markets, such as Taiwan and Brazil.

Elsewhere, Buff Studio was founded in 2014 and, right from the start, its first game Buff Knight was an instant hit. It was even featured as the ‘Game of the Week’ on Google Play and was included in “30 Best Games of 2014” lists. A sequel is already in the works, showing the potential of the franchise.

In this video, Kwon Dae-hyeon, CEO of Zabob Studio, and Kim Do-Hyeong, CEO of Buff Studio, talk about how Google Play services and the Google Play Developer Console have helped them maintain a competitive edge, market their games efficiently to global users and grow revenue on the platform.

Android Developer Story: Buff Studio - Reaching global users with Google Play

Android Developer Story: Zabob Studio - Growing revenue with Google Play

Check out Zabob Studio's apps and Buff Knight on Google Play!

We’re pleased to share that Android Developer Stories will now come with translated subtitles on YouTube in popular languages around the world. Find out how to turn on YouTube captions. To read locally translated blog posts, visit the Google developer blog in Korean.

Categories: Programming

The Flaw of Averages

Herding Cats - Glen Alleman - Fri, 08/14/2015 - 04:30

In a recent post on forecasting capacity planning, a time series of data was used as the basis of the discussion.


Some static statistics were then presented.

These included a discussion of the upper and lower ranges of the past data. The REAL question, though, is: what are the likely outcomes for data in the future, given the past performance data? That is, if we recorded what happened in the past, what is the likely data in the future?

The average and upper and lower ranges from the past data are static statistics. That is, all the dynamic behavior of the past is wiped out in the averaging and deviation processes, so that information can no longer be used to forecast the possible outcomes of the future.

This is one of the attributes of The Flaw of Averages and How to Lie With Statistics, two books that should be on every manager's desk. That is, every manager tasked with making decisions in the presence of uncertainty when spending other people's money.

We now have a Time Series and can ask the question: what is the range of possible outcomes in the future, given the values in the past? This can easily be done with a free tool - R. R is a statistical programming language that is freely available from the Comprehensive R Archive Network (CRAN). In R, there are several functions that can be used to make these forecasts. That is, what are the estimated values in the future, derived from the past, and what are their confidence intervals?

Let's start with some simple steps:

  1. Record all the data in the past. For example, make a text file of the values in the first chart. Name that file NE.Numbers
  2. Start the R tool. Better yet, download an IDE for R - RStudio is one. That way there is a development environment for your statistical work. As well, there are many free R books on statistical forecasting - estimating outcomes in the future.
  3. OK, read the Time Series of raw data from the file of values and assign it to a variable
    • NETS=ts(NE.Numbers)
    • The ts function converts the raw numbers into an object - a Time Series - that can be used by the next function
  4. With the Time Series now in the right format, apply the ARIMA function. ARIMA is Autoregressive Integrated Moving Average, also known as the Box-Jenkins algorithm. This is George Box of the famously misused and blatantly abused quote all models are wrong, some models are useful. If you don't have the full paper where that quote came from and the book Time Series Analysis: Forecasting and Control, Box and Jenkins, please resist re-quoting it out of context. That quote has become the meme for those not having the background to do the math for time series analysis, and a mantra for willfully ignoring the math needed to actually make estimates of the future - forecasting - using time series of the past in ANY domain. ARIMA is the beginning basis of all statistical forecasting in science, engineering, and finance.
    • The ARIMA algorithm has three parameters - p, d, q
    • p is the order of the autoregressive model.
    • d is the degree of the differencing
    • q is the order of the moving average
    • Here's the manual in R for ARIMA
  5. With the original data turned into a Time Series and presented to the ARIMA function we can now apply the Forecast function. This function provides methods and tools for displaying and analyzing univariate time series forecasts including exponential smoothing via state space models and automatic ARIMA modelling.
  6. When applied to the ARIMA output we get a Forecast series that can be plotted.

Here's what all this looks like in RStudio:


library(forecast)                          # forecast() comes from the forecast package
NE.Numbers = scan("NE.Numbers")            # read the recorded values from the text file made in step 1
NETS = ts(NE.Numbers)                      # convert the raw numbers to a time series
NETSARIMA = arima(NETS, order = c(0,1,1))  # fit an ARIMA(0,1,1) model to the series
NEFORECAST = forecast(NETSARIMA)           # make a forecast using that model
plot(NEFORECAST)                           # plot it

Here's the plot, with the time series from the raw data and the 80% and 95% confidence bands on the possible outcomes in the future.


The Punch Line

You want to make decisions with other people's money when the 80% confidence interval on a possible outcome is itself a -56% to +68% variance? Really? Flipping coins gives a better probability of an outcome inside all the possible outcomes that happened in the past. The time series is essentially a random series with very low confidence of being anywhere near the mean. This is the basis of The Flaw of Averages.

Where I work this would be a non-starter if we came to the Program Manager with this forecast of the Estimate to Complete based on an Average with that wide a variance. 

Perhaps this is acceptable where there is low value at risk, a customer that has little concern for cost and schedule overrun, and maybe where the work is actually an experiment with no deadline, not-to-exceed budget, or any other real constraint. But if your project has a need date for the produced capabilities - a date when those capabilities need to start earning their keep and producing value that can be booked on the balance sheet - a much higher confidence in what the future NEEDS to be is likely going to be the key to success.

The Primary Reason for Estimates

First, estimates are for the business. Yes, developers can use them too. But the business has a business goal: make money at some point in the future on the sunk costs of today - the breakeven date. These sunk costs are recoverable - hopefully - so we need to know when we'll be even with our investment. This is how business works. They make decisions in the presence of uncertainty - not on the opinion of development saying we recorded our past performance as an average and projected that into the future. No, they need a risk adjusted, statistically sound level of confidence that they won't run out of money before breakeven. What this means in practice is a management reserve and cost and schedule margin to protect the project from those naturally occurring variances and those probabilistic events that derail all the best laid plans.

Now developers may not think like this. But someone somewhere in a non-trivial business does. Usually in the Office of the CFO. This is called Managerial Finance and it's how firms with serious money at risk manage.

So when you see time series like those in the original post, do your homework and show the confidence of the probability of the needed performance actually showing up. And by needed performance I mean the steering target used in the Closed Loop Control system - used to increase the probability that the planned value, which the Agilists so dearly treasure, actually appears somewhere near the planned need date and somewhere around the planned cost, so the Return on Investment of those paying for your work does not turn negative and their spend does not get labeled as underwater.

So What Does This Mean in the End?

Even when you're using past performance - one of the better ways of forecasting the future - you need to give careful consideration to those past numbers. Averages and simple variances - which wipe out the actual underlying time series variances - are not only naive, they are bad statistics used to make bad management decisions.

Add to that the poorly formed notion that decisions can be made about future outcomes in the presence of uncertainty, in the absence of estimates about that future, and you've got the makings of management disappointment. The discipline of estimating future outcomes from past behaviors is well developed. The mathematics, and especially the terms used in that mathematics, are well established. Here are some sources we use in our everyday work. These are not populist books; they are math and engineering. They have equations, algorithms, code examples. They are used when the value at risk is sufficiently high that management is on the hook for meeting the performance goals in exchange for the money assigned to the project.

If you work a project that doesn't care too much about deadlines, budget overages, or what gets produced other than the minimal products, then these books and related papers are probably not for you. And most likely, Not Estimating the probability that you'll not overspend, show up seriously late, or fail to produce the needed capabilities to meet the Business Plans will be just fine. But if you are expected to meet the business goals in exchange for the spend plan you've been assigned, these might be a good place to start to avoid being a statistic (dead skunk in the middle of the road) in the next Chaos Report (no matter how poorly done its statistics are).

This, by the way, is an understanding I came to on the plane flight home this week. #NoEstimates is a credible way to run your project when these conditions are in place. Otherwise you may want to read how to make credible forecasts of what the cost and schedule are going to be for the value produced with your customer's money, assuming they actually care about not wasting it.

 

Related articles: IT Risk Management | Thinking, Talking, Doing on the Road to Improvement | Estimating Processes in Support of Economic Analysis | Making Conjectures Without Testable Outcomes
Categories: Project Management

Breaking Down Value

Deconstructing Value Is Just Like Demolition!

Conceptually, value is fairly simple. Value is a measure of importance or worth. This is where simplicity ends and complexity begins. To determine worth and importance we have to understand the difference between the perceived cost and the perceived benefits. Here is the value equation:

Value = (Benefit + Perception) – (Cost + Perception)
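To make the equation concrete with purely hypothetical numbers: if a feature's measured benefit is $10,000 and optimism adds $2,000 of perceived benefit, while its measured cost is $6,000 and perceived risk adds another $1,000, then Value = ($10,000 + $2,000) – ($6,000 + $1,000) = $5,000.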

Cost, benefits and perceptions are like a suitcase; they are containers with many different things inside. Let’s unpack those suitcases just a bit.

Thinking like a cost accountant is useful when considering the cost component of the equation. The typical costs that are accounted for when costing an IT project include labor, materials, hardware and overhead. Each of these categories could easily be broken down even further. For example, labor can include internal labor (cost of employees) and the cost of contract or outsourced labor. The cost of all labor needs to be considered. I once ran into an organization that only considered external labor when deciding which piece of work should be done. This caused the firm to make economically unbalanced decisions that reduced the value that they delivered to their clients. Far less typical, but no less relevant, costs that need to be assessed when determining value include opportunity costs, business costs and legal and risk mitigation costs. Opportunity costs represent the benefit that would have been gained from the work that is forgone in favor of another piece of work. All work has an opportunity cost. Ranking portfolios of possible work by importance is a way of incorporating perceived costs into real costs.

Interestingly, benefits are often far less tangible than costs. Benefits are always predictions of the future, and therefore are more apt to be impacted by perceptions and cognitive biases. Depending on the type of effort, typical benefits often include improvements in market share, cost savings and revenue enhancements. The forecast of benefits is the result of some form of transformational magic that leverages the predicted impact of a change (what will happen if a piece of software is used) and the probability of use (will it be used, and how fast will the uptake be). The transformational magic can be incredibly formal, based on market research and statistical interpretation, or simply a reflection of gut feel.

The third component of both benefits and cost is perception. Perception means how we regard, interpret or understand a cost or a benefit. Perception is the bucket of factors that reflects all of the biases or gut feels associated with either the cost or the benefits. For example, a financial projection might indicate that a feature will deliver $1,000 of value, however the estimator might temper that projection based on his or her feeling about the direction of the market. Techniques such as template-driven cost/benefit analyses, parametric estimation, planning poker and Delphi estimation try to ameliorate the impacts of some types of less rational perceptions. Overly optimistic estimation of costs and benefits is a type of less rational perception (read Optimism: The Good and The Bad).

Value is a shorthand mechanism to reflect the difference between benefits and cost, filtered through the perceptions of those collecting and analyzing the numbers. Value is important to concepts ranging from qualifying which feature to work on (part of #NoProjects), to selecting user stories for the next sprint (sprint planning), to serving as the prize for getting code into production as fast as possible (#NotImplementedNoValue). These represent only the tip of the iceberg for how value is used in Agile. Without a firm grounding in what the word value means, it is difficult to ask a product owner what the value of a particular feature is and then help evaluate the answer.


Categories: Process Management

Google Play services 7.8 - Let’s see what’s Nearby!

Android Developers Blog - Thu, 08/13/2015 - 23:30

Posted by Magnus Hyttsten, Developer Advocate, Play services team

Today we’ve finished the roll-out of Google Play services 7.8. In this release, we’ve added two new APIs. The Nearby Messages API allows you to build simple interactions between nearby devices and people, while the Mobile Vision API helps you create apps that make sense of the visual world, using real-time on-device vision technology. We’ve also added optimization and new features to existing APIs. Check out the highlights in the video or read about them below.

Nearby Messages

Nearby Messages introduces a cross-platform API to find and communicate with mobile devices and beacons, based on proximity. Nearby uses a combination of Bluetooth, Wi-Fi, and an ultrasonic audio modem to connect devices. And it works across Android and iOS. For more info on Nearby Messages, check out the documentation and the launch blog post.

Mobile Vision API

We’re happy to announce a new Mobile Vision API. Mobile Vision has two components.

The Face API allows developers to find human faces in images and video. It’s faster, more accurate and provides more information than the Android FaceDetector.Face API. It finds faces in any orientation, allows developers to find landmarks such as the eyes, nose, and mouth, and identifies faces that are smiling and/or have their eyes open. Applications include photography, games, and hands-free user interfaces.

The Barcode API allows apps to recognize barcodes in real-time, on device, in any orientation. It supports a range of barcodes and can detect multiple barcodes at once. For more information, check out the Mobile Vision documentation.

Google Cloud Messaging

And finally, Google Cloud Messaging - Google’s simple and reliable messaging service - has expanded notifications to support localization for Android. When composing the notification from the server, set the appropriate body_loc_key, body_loc_args, title_loc_key, and title_loc_args. GCM will handle displaying the notification based on the current device locale, which saves you having to figure out which messages to display on which devices! Check out the docs for more info.

And getting ready for the Android M release, we've added high and normal priority to GCM messaging, giving you additional control over message delivery through GCM. Set messages that need immediate user attention to high priority, e.g., a chat message alert or an incoming voice call alert. And keep the remaining messages at normal priority so that they can be handled in the most battery-efficient way without impeding your app performance.
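As a rough sketch of how these fields fit together - the payload below is a hand-written illustration, where SERVER_KEY, REGISTRATION_TOKEN, and the chat_* resource names are placeholders rather than values taken from the GCM docs - a downstream message combining localization and priority might look like this:

// Hypothetical downstream GCM message combining the localized
// notification keys with the new priority field.
const message = {
  to: 'REGISTRATION_TOKEN',       // placeholder device registration token
  priority: 'high',               // needs immediate user attention (e.g., chat alert)
  notification: {
    title_loc_key: 'chat_title',  // placeholder keys resolved from the app's
    title_loc_args: ['Alice'],    // string resources on the device, so the
    body_loc_key: 'chat_body',    // text is localized per the device locale
    body_loc_args: ['Alice']
  }
};

// POST it to the GCM endpoint with your server key (node-fetch or any HTTP client):
fetch('https://gcm-http.googleapis.com/gcm/send', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'key=SERVER_KEY'  // placeholder server API key
  },
  body: JSON.stringify(message)
});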

SDK Now Available!

You can get started developing today by downloading the Google Play services SDK from the Android SDK Manager.

To learn more about Google Play services and the APIs available to you through it, visit our documentation on Google Developers.

Categories: Programming

Sed: Using environment variables

Mark Needham - Thu, 08/13/2015 - 20:30

I’ve been playing around with the BBC football data set that I wrote about a couple of months ago and I wanted to write some code that would take the import script and replace all instances of remote URIs with a file system path.

For example the import file contains several lines similar to this:

LOAD CSV WITH HEADERS 
FROM "https://raw.githubusercontent.com/mneedham/neo4j-bbc/master/data/matches.csv" 
AS row

And I want that to read:

LOAD CSV WITH HEADERS 
FROM "file:///Users/markneedham/repos/neo4j-bbc/data/matches.csv" 
AS row

The start of that path also happens to be my working directory:

$ echo $PWD
/Users/markneedham/repos/neo4j-bbc

So I wanted to write a script that would look for occurrences of ‘https://raw.githubusercontent.com/mneedham/neo4j-bbc/master’ and replace it with $PWD. I’m a fan of Sed so I thought I’d try and use it to solve my problem.

The first thing we can do to make life easy is to change the default delimiter. Sed usually uses ‘/’ to separate parts of the command but since we’re using URIs that’s going to be horrible so we’ll use an underscore instead.

For a first cut I tried just removing that first part of the URI but not replacing it with anything in particular:

$ sed 's_https://raw.githubusercontent.com/mneedham/neo4j-bbc/master__' import.cql
 
$ sed 's_https://raw.githubusercontent.com/mneedham/neo4j-bbc/master__' import.cql | grep LOAD
LOAD CSV WITH HEADERS FROM "/data/matches.csv" AS row
LOAD CSV WITH HEADERS FROM "/data/players.csv" AS row
LOAD CSV WITH HEADERS FROM "/data/players.csv" AS row
LOAD CSV WITH HEADERS FROM "/data/fouls.csv" AS row
LOAD CSV WITH HEADERS FROM "/data/attempts.csv" AS row
LOAD CSV WITH HEADERS FROM "/data/attempts.csv" AS row
LOAD CSV WITH HEADERS FROM "/data/corners.csv" AS row
LOAD CSV WITH HEADERS FROM "/data/corners.csv" AS row
LOAD CSV WITH HEADERS FROM "/data/cards.csv" AS row
LOAD CSV WITH HEADERS FROM "/data/cards.csv" AS row
LOAD CSV WITH HEADERS FROM "/data/subs.csv" AS row

Cool! That worked as expected. Now let’s try and replace it with $PWD:

$ sed 's_https://raw.githubusercontent.com/mneedham/neo4j-bbc/master_file://$PWD_' import.cql | grep LOAD
LOAD CSV WITH HEADERS FROM "file://$PWD/data/matches.csv" AS row
LOAD CSV WITH HEADERS FROM "file://$PWD/data/players.csv" AS row
LOAD CSV WITH HEADERS FROM "file://$PWD/data/players.csv" AS row
LOAD CSV WITH HEADERS FROM "file://$PWD/data/fouls.csv" AS row
LOAD CSV WITH HEADERS FROM "file://$PWD/data/attempts.csv" AS row
LOAD CSV WITH HEADERS FROM "file://$PWD/data/attempts.csv" AS row
LOAD CSV WITH HEADERS FROM "file://$PWD/data/corners.csv" AS row
LOAD CSV WITH HEADERS FROM "file://$PWD/data/corners.csv" AS row
LOAD CSV WITH HEADERS FROM "file://$PWD/data/cards.csv" AS row
LOAD CSV WITH HEADERS FROM "file://$PWD/data/cards.csv" AS row
LOAD CSV WITH HEADERS FROM "file://$PWD/data/subs.csv" AS row

Hmmm that didn’t work as expected. The $PWD is being treated as a literal instead of being evaluated like we want it to be.

It turns out this is a popular question on Stack Overflow with lots of suggestions. I tried a few of them and found that stepping outside the single quotes did the trick - the shell doesn't expand variables inside single quotes, so we close the quoted sed expression, let the shell expand $PWD, and then reopen the quotes:

$ sed 's_https://raw.githubusercontent.com/mneedham/neo4j-bbc/master_file://'$PWD'_' import.cql | grep LOAD
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/repos/neo4j-bbc/data/matches.csv" AS row
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/repos/neo4j-bbc/data/players.csv" AS row
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/repos/neo4j-bbc/data/players.csv" AS row
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/repos/neo4j-bbc/data/fouls.csv" AS row
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/repos/neo4j-bbc/data/attempts.csv" AS row
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/repos/neo4j-bbc/data/attempts.csv" AS row
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/repos/neo4j-bbc/data/corners.csv" AS row
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/repos/neo4j-bbc/data/corners.csv" AS row
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/repos/neo4j-bbc/data/cards.csv" AS row
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/repos/neo4j-bbc/data/cards.csv" AS row
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/repos/neo4j-bbc/data/subs.csv" AS row

We could also use double quotes everywhere if we prefer:

$ sed "s_https://raw.githubusercontent.com/mneedham/neo4j-bbc/master_file://"$PWD"_" import.cql | grep LOAD
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/repos/neo4j-bbc/data/matches.csv" AS row
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/repos/neo4j-bbc/data/players.csv" AS row
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/repos/neo4j-bbc/data/players.csv" AS row
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/repos/neo4j-bbc/data/fouls.csv" AS row
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/repos/neo4j-bbc/data/attempts.csv" AS row
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/repos/neo4j-bbc/data/attempts.csv" AS row
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/repos/neo4j-bbc/data/corners.csv" AS row
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/repos/neo4j-bbc/data/corners.csv" AS row
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/repos/neo4j-bbc/data/cards.csv" AS row
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/repos/neo4j-bbc/data/cards.csv" AS row
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/repos/neo4j-bbc/data/subs.csv" AS row
Categories: Programming

How Everyone Thinks They’ll Someday Become Rich

Making the Complex Simple - John Sonmez - Thu, 08/13/2015 - 16:00

In this episode, I talk about how thinking about becoming rich can limit you from accomplishing things. Full transcript: John: Hey, John Sonmez from simpleprogrammer.com. Today the video is not a question, kind of just an observation that I’ve made. So, everyone sort of, kind of has this idea that someday that they’re going to […]

The post How Everyone Thinks They’ll Someday Become Rich appeared first on Simple Programmer.

Categories: Programming

Android Experiments: A celebration of creativity and code

Android Developers Blog - Wed, 08/12/2015 - 17:15

Posted by Roman Nurik, Design Advocate, and Richard The, Google Creative Lab

Android was created as an open and flexible platform, giving people more ways to come together to imagine and create. This spirit of invention has allowed developers to push the boundaries of mobile development and has helped make Android the go-to platform for creative projects in more places—from phones, to tablets, to watches, and beyond. We set out to find a way to celebrate the creative, experimental Android work of developers everywhere and inspire more developers to get creative with technology and code.

Today, we’re excited to launch Android Experiments: a showcase of inspiring projects on Android and an open invitation for all developers to submit their own experiments to the gallery.


The 20 initial experiments show a broad range of creative work–from camera experiments to innovative Android Wear apps to hardware hacks to cutting edge OpenGL demos. All are built using platforms such as the Android SDK and NDK, Android Wear, the IOIO board, Cinder, Processing, OpenFrameworks and Unity. Each project creatively examines in small and big ways how we think of the devices we interact with every day.

Today is just the beginning as we’re opening up experiment submissions to creators everywhere. Whether you’re a student just starting out, or you’ve been at it for a while, and no matter the framework your experiment uses or the device it runs on, Android Experiments is open to everybody.

Check out Android Experiments to view the completed projects, or to submit one of your own. While we can’t post every submission, we’d love to see what you’ve created.

Follow along to see what others build at AndroidExperiments.com.

Categories: Programming

Why My Water Droplet Is Better Than Your Hadoop Cluster

We’ve had computation using slime mold and soap film; now we have computation using water droplets. Stanford bioengineers have built a “fully functioning computer that runs like clockwork - but instead of electrons, it operates using the movement of tiny magnetised water droplets.”

 

By changing the layout of the bars on the chip it's possible to make all the universal logic gates. And any Boolean logic circuit can be built by moving the little magnetic droplets around. Currently the chips are about half the size of a postage stamp and the droplets are smaller than poppy seeds.

What all this means I'm not sure, but pavo6503 has a comment that helps explain what's going on:

Logic gates pass high and low states. Since they plan to use drops of water as carriers and the substances in those drops to determine what the high/low state is they could hypothetically make a filter that sorts drops of water containing 1 to many chemicals. Pure water passes through unchanged. water with say, oil in it, passes to another container, water with alcohol to another. A "chip" with this setup could be used to purify water where there are many contaminants you want separated.

Categories: Architecture

Make the Complex Simple to Win in Your Career and Life

Making the Complex Simple - John Sonmez - Wed, 08/12/2015 - 13:00

Because, deep down, I must be sort of a cheesy person, when John asked if I’d be interested in writing posts for Simple Programmer, the first thing I did was think about what that title, “Simple Programmer,” meant to me. More specifically, I started to think about the tagline, “Making the complex simple,” which has […]

The post Make the Complex Simple to Win in Your Career and Life appeared first on Simple Programmer.

Categories: Programming

Introducing FunSharp

Phil Trelford's Array - Wed, 08/12/2015 - 07:54

FunSharp is a new cross-platform open source graphics library, based on Small Basic’s library, with a typed API crafted for the sharp languages, F# and C#.

Drawing graphics is quick and easy:

GraphicsWindow.FillEllipse(10,10,300,300)

And when you ask FunSharp to draw graphics it will just go and open a window to view it for you:

[Image: Purple_Circle]

FunSharp provides similar functionality to Python’s PyGame, used for developing games, and Processing, used for visual art.

You can call FunSharp immediately from the F# REPL or build full-blown applications using F# and C#.

It’s an ideal library for beginners or anyone who just wants to get stuff on the screen quickly.

With this in mind FunSharp works seamlessly on Raspberry Pi, Linux and Windows.

Getting Started

If you haven’t already, you’ll need to install F# on your machine, follow these instructions:

  • Linux (for Raspbian follow the Debian instructions)
  • Windows (install Xamarin Studio or Visual Studio)

Get the FunSharp source from GitHub using Clone or Download Zip, then load the FunSharp solution into MonoDevelop or Visual Studio and start playing with the samples.

Notes:

  • FunSharp is built on Mono’s cross-platform Xwt and Gtk# graphics libraries, which must be referenced.
  • Currently apps must run in 32-bit mode.

Turtles

FunSharp has a Turtle module, which can be used to make fun shapes:

Turtle.X <- 150.
Turtle.Y <- 150.
for i in 0..5..200 do
   Turtle.Move(i)
   Turtle.Turn(90)

Renders:

[Image: Turtle_Example]

Games

FunSharp can be used to make games quickly and easily. I’ve ported several games written for Small Basic requiring only minor modifications:

Asteroids

Have fun!

Categories: Programming

No Signal Means, No Steering Target, Means No Corrective Actions

Herding Cats - Glen Alleman - Wed, 08/12/2015 - 06:13

The management of projects involves many things: Capabilities, Requirements, Development, Staffing, Budgeting, Procurement, Accounting, Testing, Security, Deployment, Maintenance, Training, Support, Sales and Marketing, and other development and operational processes. Each of these has interdependencies with other elements. Each operates in its own specific ways on the project. Almost all have behaviors described by probabilistic models driven by the underlying statistical processes.

Management in this sense is control in the presence of these probabilistic processes. And yes, we can control these items - it's a well developed process, starting with Statistical Process Control, Monte Carlo Simulation, Bayesian Networks, Probabilistic Real Options, and other methods based on probabilistic processes.

The notion that these are not controllable is at its heart flawed and essentially misinformed. But this control requires information. It's been mentioned before in Closed Loop Control, Closed Loop versus Open Loop, Staying on Plan Means Closed Loop Control, Use and Misuse of Control Systems, and Why Project Management is a Control System.

All these lead to the Five Immutable Principles of Project Success. Along with these Principles come Practices and Processes. But it's the Principles we're after as a start.

Each of the Principles makes use of information for managing the project. This information is the signal used by management to make decisions. These signals are used to compare the current state of the project (the system under management) to the desired state of the system. This is the basis of the Control System used to manage the System Under Management.

With the control system - whatever that may be, and there are many such systems - the next step is to gather the information needed to make decisions using this control system. This means information about the past, present, and future of the system under management. Past data - when recorded - is available from the database of what happened in the past. Present data should be readily available directly from the system. The future data presents a unique problem. Rarely is information about the future recorded some place. It's in the future and hasn't happened yet.

But we can find sources for this future information. We have models of what the system may do in the future. We may have similar systems from the past that can be used to create future data. But there is a critical issue here. The future may not be like the past. Or the future may be impacted in ways not found on similar projects. But we need to come up with this information if we are to make decisions about the future. So if we're missing the model, or missing the similar project, what can we do? We can make estimates from the data or models in some probabilistically informed manner.

This is the role of estimating: to inform our decision making processes in the presence of uncertainty about possible future outcomes, knowing something about the past and present state of the system under management. With this probabilistic information, decisions can be made to take corrective actions to keep our project under control. That is, moving in the direction we planned to move to reach the goal of the project. This goal is usually...
... to provide needed capabilities to those paying for the project to meet some business goal or fulfill a mission strategy. To accomplish some beneficial outcome in exchange for the cost and time invested in development of the capabilities.

So what happens when we don't have information about the future? That is, we choose to Not Estimate when faced with the uncertainty of the possible future states of the system, where we need to take corrective actions to keep the project moving in the desired direction. Without these estimates, we have no signal needed to take corrective action. We have an Open Loop Control system. A system that takes any path it wants; it has no control mechanism to keep it on track.

The open loop control system is a non-feedback system in which the control input to the system is determined using only the current state of the system and a model of the system. There is no feedback to determine if the system is achieving the desired output based on the reference input or set point. The system does not observe itself to correct itself and, as such, is more prone to errors and cannot compensate for disturbances to the system. This means we're going to get what we're going to get, with no chance to steer the system toward our desired outcome.

For any non-trivial system, not estimating the future state - not creating the error signal between the desired future state and the current state, and not using that error signal to take corrective actions - is considered Bad Management. This would be considered Doing Stupid Things On Purpose in most domains that spend other people's money to produce needed capabilities and the resulting value for the planned cost on the planned date.
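To make the closed-loop idea concrete, here is a minimal sketch in JavaScript of the loop described above - estimate the future state, compute the error signal against the steering target, and pick a corrective action. Every name and number is a hypothetical illustration, not a real project-control API:

// Minimal closed-loop control sketch. All names and numbers are
// hypothetical illustrations of the feedback loop, nothing more.
const steeringTarget = 30; // planned performance (e.g., units of value per period)

// Naive estimate of the next period from past performance data.
// A real model (ARIMA, Monte Carlo, etc.) would do far better than an average.
function estimateNextPeriod(history) {
  return history.reduce((sum, v) => sum + v, 0) / history.length;
}

// The "controller": turn the error signal into a corrective action.
function correctiveAction(error) {
  if (error > 5) return 'replan: reduce scope or add capacity';
  if (error > 0) return 'adjust: remove blockers, watch the next period';
  return 'on track: no action needed';
}

const pastPerformance = [22, 27, 24];                  // recorded past data
const estimate = estimateNextPeriod(pastPerformance);  // forecast of the future state
const error = steeringTarget - estimate;               // desired minus estimated

console.log(correctiveAction(error)); // 'replan: reduce scope or add capacity'

Without the estimate there is no error signal, and without the error signal the correctiveAction step has nothing to act on - the open-loop case described above.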
Categories: Project Management

#notimplementednovalue – The Price, Cost and Value Conundrum

They are all balloons! Using the right word is important for understanding!

The idea of a flow of value to customers and users is at the heart of concepts such as #NotImplementedNoValue and #NoProjects. At the heart of the flow of value is a conundrum. That conundrum is entwined in three terms: price, cost and value. These three terms are often conflated despite representing very different ideas. When project teams or product owners use price, cost and value interchangeably, they can easily make the wrong choices as they lead value through the development or maintenance process.

Price is often equated to value. We have all heard “you get what you pay for.” As anyone that has worked in a grocery store or had to bid for a project will tell you, price is significantly more nuanced than a straight translation of value. Price is defined as the amount of money given in payment for something. Price can also refer to an unwelcome experience or condition required to achieve an end. The act of pricing is a dance between what can be charged and the strategy of the seller (think Game Theory). I have heard sales people suggest that they price the first deal to get in the door in order to prove the value of their product or service; therefore the price of any individual transaction might not be directly related to value. All software transactions must be viewed across the life of the software and the life of the relationship with the provider. Bottom line: the price of any individual transaction is only loosely related to value.

Cost is a “simple” concept. The use of the term simple is very tongue-in-cheek, as costs often include labor, materials, office charges, hardware, business changes and opportunity costs. In a commercial product, cost and price are connected by the margin. The discipline of having to maintain a positive margin over the long term generates a need to understand the relationship of cost to price and, later, value. Most internal IT organizations do not face the long-term discipline of the market and often only have to manage labor costs, which can lead to poor behaviors. One example of poor practice is substituting labor for automation because automation has a higher upfront cost (resisting test automation software, for example).

Value is defined on Dictionary.com as “relative worth, merit, or importance.” Value measures range from profit and revenue to a relative measure based on the perception of what an individual or group will get from a service or product. To make matters more complicated, the perception of value changes based on context, which can affect what can be charged. For example, water has significantly more value to someone dying of thirst than to a person sitting in their kitchen drinking a glass of water. Value can be seen as the interaction between factors that include global or organizational impact, individual benefit and the probability of use. Many people use cost and price as tools to help determine value.

Price, cost and value are related. Cost plus margin will always equal price for a specific transaction. Over the long run, pricing any specific transaction below the typical margin or at a loss may make strategic sense. Examples might include capturing market share or getting in the door. But in the long run, pricing too many transactions below cost is catastrophic. The linkage between value and price is more tenuous, because even if measured using hard currency, the amount of value is affected by perception. In order to pursue concepts like #NoProjects or #NotImplementedNoValue you need to have a handle on the nuances of price, cost and value so that you can understand and talk clearly about the value derived from each story, feature or epic. Falling prey to conflating price, cost and value will yield a conundrum that will impact the decisions we make about what to deliver and when.
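As a quick worked example with hypothetical numbers: a service that costs $8,000 to deliver and carries a $2,000 margin is priced at $10,000. Pricing the first deal at $7,500 to get in the door sacrifices the margin and $500 of cost, which may make strategic sense once; repeated across too many transactions, it is catastrophic.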

Programming Note: We will dive into value in more detail on Thursday.


Categories: Process Management

New Features on PlanningPoker.com

Mike Cohn's Blog - Tue, 08/11/2015 - 15:00

The PlanningPoker.com team has been quite busy. A couple of months ago, they launched a brand new design and mobile support. Since then, they have continued to improve the site based on your feedback.

Now, they’ve launched PlanningPoker.com’s first set of premium features. While these premium features do require a monthly fee, the base game will always be free.

Visit PlanningPoker.com to see these new features in action:

  • Import and export from JIRA and other backlog tools
  • Add and edit story details and acceptance criteria within the game
  • Monitor remaining team velocity while estimating
  • Build custom pointing scales
  • Save your default game settings

… and more.

Digital product development agency 352 Inc. is the team that created the new PlanningPoker.com, and they’re committed to making it the most helpful and efficient way to run your planning sessions. They are also open to your feedback on how to improve the product.

Give the new features a try and let them know what more you would like to see. To learn more about Planning Poker in general, check out our informational page on it.

You might not need lodash (in your ES2015 project)

Xebia Blog - Tue, 08/11/2015 - 12:44

This post is the first in a series of ES2015 posts. We'll be covering new JavaScript functionality every week for the coming two months.

ES2015 brings a lot of new functionality to the table. It might be a good idea to evaluate if your new or existing projects actually require a library such as lodash. We'll talk about several common usages of lodash functions that can be simply replaced by a native ES2015 implementation.

 

_.extend / _.merge

Let's start with _.extend and its related _.merge function. These functions are often used for combining multiple configuration properties in a single object.

const dst = { xeb: 0 };
const src1 = { foo: 1, bar: 2 };
const src2 = { foo: 3, baz: 4 };

_.extend(dst, src1, src2);

assert.deepEqual(dst, { xeb: 0, foo: 3, bar: 2, baz: 4 });

Using the new Object.assign method, the same behaviour is natively possible:

const dst2 = { xeb: 0 };

Object.assign(dst2, src1, src2);

assert.deepEqual(dst2, { xeb: 0, foo: 3, bar: 2, baz: 4 });

We're using Chai assertions to confirm the correct behaviour.

 

_.defaults / _.defaultsDeep

Sometimes when passing many parameters to a method, a config object is used. _.defaults and its related _.defaultsDeep function come in handy to define defaults in a certain structure for these config objects.

function someFuncExpectingConfig(config) {
  _.defaultsDeep(config, {
    text: 'default',
    colors: {
      bgColor: 'black',
      fgColor: 'white'
    }
  });
  return config;
}
let config = { colors: { bgColor: 'grey' } };

someFuncExpectingConfig(config);

assert.equal(config.text, 'default');
assert.equal(config.colors.bgColor, 'grey');
assert.equal(config.colors.fgColor, 'white');

With ES2015, you can now destructure these config objects into separate variables. Together with the new default param syntax we get:

function destructuringFuncExpectingConfig({
  text = 'default',
  colors: {
    bgColor: bgColor = 'black',
    fgColor: fgColor = 'white' }
  }) {
  return { text, bgColor, fgColor };
}

const config2 = destructuringFuncExpectingConfig({ colors: { bgColor: 'grey' } });

assert.equal(config2.text, 'default');
assert.equal(config2.bgColor, 'grey');
assert.equal(config2.fgColor, 'white');

 

_.find / _.findIndex

Searching in arrays using a predicate function is a clean way of separating behaviour and logic.

const arr = [{ name: 'A', id: 123 }, { name: 'B', id: 436 }, { name: 'C', id: 568 }];
function predicateB(val) {
 return val.name === 'B';
}

assert.equal(_.find(arr, predicateB).id, 436);
assert.equal(_.findIndex(arr, predicateB), 1);

In ES2015, this can be done in exactly the same way using Array.find.

assert.equal(Array.find(arr, predicateB).id, 436);
assert.equal(Array.findIndex(arr, predicateB), 1);

Note that we're not using the extended Array-syntax arr.find(predicate). This was not possible with the Babel setup used to transpile this ES2015 code.

 

_.repeat, _.startsWith, _.endsWith and _.includes

Some very common but never natively supported string functions are _.repeat to repeat a string multiple times and _.startsWith / _.endsWith / _.includes to check if a string starts with, ends with or includes another string respectively.

assert.equal(_.repeat('ab', 3), 'ababab');
assert.isTrue(_.startsWith('ab', 'a'));
assert.isTrue(_.endsWith('ab', 'b'));
assert.isTrue(_.includes('abc', 'b'));

Strings now have a set of new builtin prototypical functions:

assert.equal('ab'.repeat(3), 'ababab');
assert.isTrue('ab'.startsWith('a'));
assert.isTrue('ab'.endsWith('b'));
assert.isTrue('abc'.includes('b'));

 

_.fill

A not-so-common function to fill an array with default values without looping explicitly is _.fill.

const filled = _.fill(new Array(3), 'a', 1);
assert.deepEqual(filled, [, 'a', 'a']);

It now has a drop-in replacement: Array.fill.

const filled2 = Array.fill(new Array(3), 'a', 1);
assert.deepEqual(filled2, [, 'a', 'a']);

 

_.isNaN, _.isFinite

Some type checks are quite tricky, and _.isNaN and _.isFinite fill in such gaps.

assert.isTrue(_.isNaN(NaN));
assert.isFalse(_.isFinite(Infinity));

Simply use the new Number builtins for these checks now:

assert.isTrue(Number.isNaN(NaN));
assert.isFalse(Number.isFinite(Infinity));

 

_.first, _.rest

Lodash comes with a set of functional programming-style functions, such as _.first (aliased as _.head) and _.rest (aliased _.tail) which get the first and rest of the values from an array respectively.

const elems = [1, 2, 3];

assert.equal(_.first(elems), 1);
assert.deepEqual(_.rest(elems), [2, 3]);

The syntactical power of the rest parameter together with destructuring replaces the need for these functions.

const [first, ...rest] = elems;

assert.equal(first, 1);
assert.deepEqual(rest, [2, 3]);

 

_.restParam

Specially written for ES5, lodash contains helper functions to mimic the behaviour of some ES2015 parts. An example is the _.restParam function that wraps a function and sends the last parameters as an array to the wrapped function:

function whatNames(what, names) {
 return what + ' ' + names.join(';');
}
const restWhatNames = _.restParam(whatNames);

assert.equal(restWhatNames('hi', 'a', 'b', 'c'), 'hi a;b;c');

Of course, in ES2015 you can simply use the rest parameter as intended.

function whatNamesWithRest(what, ...names) {
 return what + ' ' + names.join(';');
}

assert.equal(whatNamesWithRest('hi', 'a', 'b', 'c'), 'hi a;b;c');

 

_.spread

Another example is the _.spread function that wraps a function which takes an array and sends the array as separate parameters to the wrapped function:

function whoWhat(who, what) {
 return who + ' ' + what;
}
const spreadWhoWhat = _.spread(whoWhat);
const callArgs = ['yo', 'bro'];

assert.equal(spreadWhoWhat(callArgs), 'yo bro');

Again, in ES2015 you want to use the spread operator.

assert.equal(whoWhat(...callArgs), 'yo bro');

 

_.values, _.keys, _.pairs

A couple of functions exist to fetch all values, keys or value/key pairs of an object as an array:

const bar = { a: 1, b: 2, c: 3 };

const values = _.values(bar);
const keys = _.keys(bar);
const pairs = _.pairs(bar);

assert.deepEqual(values, [1, 2, 3]);
assert.deepEqual(keys, ['a', 'b', 'c']);
assert.deepEqual(pairs, [['a', 1], ['b', 2], ['c', 3]]);

Now you can use the Object builtins (note that Object.values and Object.entries were still proposals at the time of writing and were standardized after ES2015, so they may need a polyfill):

const values2 = Object.values(bar);
const keys2 = Object.keys(bar);
const pairs2 = Object.entries(bar);

assert.deepEqual(values2, [1, 2, 3]);
assert.deepEqual(keys2, ['a', 'b', 'c']);
assert.deepEqual(pairs2, [['a', 1], ['b', 2], ['c', 3]]);

 

_.forEach (for looping over object properties)

Looping over the properties of an object is often done using a helper function, as there are some caveats, such as skipping inherited properties. _.forEach can be used for this.

const foo = { a: 1, b: 2, c: 3 };
let sum = 0;
let lastKey = undefined;

_.forEach(foo, function (value, key) {
  sum += value;
  lastKey = key;
});

assert.equal(sum, 6);
assert.equal(lastKey, 'c');

With ES2015 there's a clean way of looping over Object.entries and destructuring them:

sum = 0;
lastKey = undefined;
for (let [key, value] of Object.entries(foo)) {
  sum += value;
  lastKey = key;
}

assert.equal(sum, 6);
assert.equal(lastKey, 'c');

 

_.get

Often, when you have nested structures, a path selector can help in selecting the right value. _.get is created for such an occasion.

const obj = { a: [{}, { b: { c: 3 } }] };

const getC = _.get(obj, 'a[1].b.c');

assert.equal(getC, 3);

Although ES2015 does not have a native equivalent for path selectors, you can use destructuring as a way of 'selecting' a specific value.

let a, b, c;
({ a : [, { b: { c } }]} = obj);

assert.equal(c, 3);

 

_.range

A very Python-esque function that creates an array of integer values, with an optional step size.

const range = _.range(5, 10, 2);
assert.deepEqual(range, [5, 7, 9]);

As a nice ES2015 alternative, you can use a generator function and the spread operator to replace it:

function* rangeGen(from, to, step = 1) {
  for (let i = from; i < to; i += step) {
    yield i;
  }
}

const range2 = [...rangeGen(5, 10, 2)];

assert.deepEqual(range2, [5, 7, 9]);

A nice side-effect of a generator function is its laziness. It is possible to use the range generator without generating the entire array first, which can come in handy when memory usage should be minimal.
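To see that laziness in action, here is a quick illustration using the rangeGen defined above:

const lazy = rangeGen(0, 1000000); // nothing is computed yet

console.log(lazy.next().value); // 0 - produced on demand
console.log(lazy.next().value); // 1
// Only two values have been generated; the remaining 999,998
// never exist unless we keep asking for them.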

Conclusion

Just like kicking the jQuery habit, we've seen that there are alternatives to some lodash functions, and it can be preferable to use as little of these functions as possible. Keep in mind that the lodash library offers a consistent API that developers are probably familiar with. Only swap it out if the ES2015 benefits outweigh the consistency gains (for instance, when performance is an issue).

For reference, you can find the above code snippets at this repo. You can run them yourself using webpack and Babel.

Java: Jersey – java.lang.NoSuchMethodError: com.sun.jersey.core.reflection.ReflectionHelper. getContextClassLoaderPA()Ljava/security/PrivilegedAction;

Mark Needham - Tue, 08/11/2015 - 07:59

I’ve been trying to put some tests around a Neo4j unmanaged extension I’ve been working on and ran into the following stack trace when launching the server using the Neo4j test harness:

public class ExampleResourceTest {
    @Rule
    public Neo4jRule neo4j = new Neo4jRule()
            .withFixture("CREATE (:Person {name: 'Mark'})")
            .withFixture("CREATE (:Person {name: 'Nicole'})")
            .withExtension( "/unmanaged", ExampleResource.class );
 
    @Test
    public void shouldReturnAllTheNodes() {
        // Given
        URI serverURI = neo4j.httpURI();
        // When
        HTTP.Response response = HTTP.GET(serverURI.resolve("/unmanaged/example/people").toString());
 
        // Then
        assertEquals(200, response.status());
        List content = response.content();
 
        assertEquals(2, content.size());
    }
}
07:51:32.985 [main] WARN  o.e.j.u.component.AbstractLifeCycle - FAILED o.e.j.s.ServletContextHandler@29eda4f8{/unmanaged,null,STARTING}: java.lang.NoSuchMethodError: com.sun.jersey.core.reflection.ReflectionHelper.getContextClassLoaderPA()Ljava/security/PrivilegedAction;
java.lang.NoSuchMethodError: com.sun.jersey.core.reflection.ReflectionHelper.getContextClassLoaderPA()Ljava/security/PrivilegedAction;
	at com.sun.jersey.spi.scanning.AnnotationScannerListener.<init>(AnnotationScannerListener.java:94) ~[jersey-server-1.19.jar:1.19]
	at com.sun.jersey.spi.scanning.PathProviderScannerListener.<init>(PathProviderScannerListener.java:59) ~[jersey-server-1.19.jar:1.19]
	at com.sun.jersey.api.core.ScanningResourceConfig.init(ScanningResourceConfig.java:79) ~[jersey-server-1.19.jar:1.19]
	at com.sun.jersey.api.core.PackagesResourceConfig.init(PackagesResourceConfig.java:104) ~[jersey-server-1.19.jar:1.19]
	at com.sun.jersey.api.core.PackagesResourceConfig.<init>(PackagesResourceConfig.java:78) ~[jersey-server-1.19.jar:1.19]
	at com.sun.jersey.api.core.PackagesResourceConfig.<init>(PackagesResourceConfig.java:89) ~[jersey-server-1.19.jar:1.19]
	at com.sun.jersey.spi.container.servlet.WebComponent.createResourceConfig(WebComponent.java:696) ~[jersey-servlet-1.17.1.jar:1.17.1]
	at com.sun.jersey.spi.container.servlet.WebComponent.createResourceConfig(WebComponent.java:674) ~[jersey-servlet-1.17.1.jar:1.17.1]
	at com.sun.jersey.spi.container.servlet.WebComponent.init(WebComponent.java:203) ~[jersey-servlet-1.17.1.jar:1.17.1]
	at com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:374) ~[jersey-servlet-1.17.1.jar:1.17.1]
	at com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:557) ~[jersey-servlet-1.17.1.jar:1.17.1]
	at javax.servlet.GenericServlet.init(GenericServlet.java:244) ~[javax.servlet-api-3.1.0.jar:3.1.0]
	at org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:612) ~[jetty-servlet-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.servlet.ServletHolder.initialize(ServletHolder.java:395) ~[jetty-servlet-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:871) ~[jetty-servlet-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298) ~[jetty-servlet-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741) ~[jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) [jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132) [jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114) [jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61) [jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) [jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132) [jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.server.Server.start(Server.java:387) [jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114) [jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61) [jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.server.Server.doStart(Server.java:354) [jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) [jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.neo4j.server.web.Jetty9WebServer.startJetty(Jetty9WebServer.java:381) [neo4j-server-2.2.3.jar:2.2.3]
	at org.neo4j.server.web.Jetty9WebServer.start(Jetty9WebServer.java:184) [neo4j-server-2.2.3.jar:2.2.3]
	at org.neo4j.server.AbstractNeoServer.startWebServer(AbstractNeoServer.java:474) [neo4j-server-2.2.3.jar:2.2.3]
	at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:230) [neo4j-server-2.2.3.jar:2.2.3]
	at org.neo4j.harness.internal.InProcessServerControls.start(InProcessServerControls.java:59) [neo4j-harness-2.2.3.jar:2.2.3]
	at org.neo4j.harness.internal.InProcessServerBuilder.newServer(InProcessServerBuilder.java:72) [neo4j-harness-2.2.3.jar:2.2.3]
	at org.neo4j.harness.junit.Neo4jRule$1.evaluate(Neo4jRule.java:64) [neo4j-harness-2.2.3.jar:2.2.3]
	at org.junit.rules.RunRules.evaluate(RunRules.java:20) [junit-4.11.jar:na]
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) [junit-4.11.jar:na]
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) [junit-4.11.jar:na]
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) [junit-4.11.jar:na]
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) [junit-4.11.jar:na]
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) [junit-4.11.jar:na]
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) [junit-4.11.jar:na]
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) [junit-4.11.jar:na]
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) [junit-4.11.jar:na]
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309) [junit-4.11.jar:na]
	at org.junit.runner.JUnitCore.run(JUnitCore.java:160) [junit-4.11.jar:na]
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:78) [junit-rt.jar:na]
	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:212) [junit-rt.jar:na]
	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:68) [junit-rt.jar:na]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_51]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_51]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_51]
	at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_51]
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140) [idea_rt.jar:na]
07:51:32.991 [main] WARN  o.e.j.u.component.AbstractLifeCycle - FAILED org.eclipse.jetty.server.handler.HandlerList@2b22a1cc[o.e.j.s.h.MovedContextHandler@5e671e20{/,null,AVAILABLE}, o.e.j.s.ServletContextHandler@29eda4f8{/unmanaged,null,STARTING}, o.e.j.s.ServletContextHandler@62573c86{/db/manage,null,null}, o.e.j.s.ServletContextHandler@2418ba04{/db/data,null,null}, o.e.j.w.WebAppContext@14229fa7{/browser,jar:file:/Users/markneedham/.m2/repository/org/neo4j/app/neo4j-browser/2.2.3/neo4j-browser-2.2.3.jar!/browser,null}, o.e.j.s.ServletContextHandler@2ab0702e{/,null,null}]: java.lang.NoSuchMethodError: com.sun.jersey.core.reflection.ReflectionHelper.getContextClassLoaderPA()Ljava/security/PrivilegedAction;
java.lang.NoSuchMethodError: com.sun.jersey.core.reflection.ReflectionHelper.getContextClassLoaderPA()Ljava/security/PrivilegedAction;
	at com.sun.jersey.spi.scanning.AnnotationScannerListener.<init>(AnnotationScannerListener.java:94) ~[jersey-server-1.19.jar:1.19]
	at com.sun.jersey.spi.scanning.PathProviderScannerListener.<init>(PathProviderScannerListener.java:59) ~[jersey-server-1.19.jar:1.19]
	at com.sun.jersey.api.core.ScanningResourceConfig.init(ScanningResourceConfig.java:79) ~[jersey-server-1.19.jar:1.19]
	at com.sun.jersey.api.core.PackagesResourceConfig.init(PackagesResourceConfig.java:104) ~[jersey-server-1.19.jar:1.19]
	at com.sun.jersey.api.core.PackagesResourceConfig.<init>(PackagesResourceConfig.java:78) ~[jersey-server-1.19.jar:1.19]
	at com.sun.jersey.api.core.PackagesResourceConfig.<init>(PackagesResourceConfig.java:89) ~[jersey-server-1.19.jar:1.19]
	at com.sun.jersey.spi.container.servlet.WebComponent.createResourceConfig(WebComponent.java:696) ~[jersey-servlet-1.17.1.jar:1.17.1]
	at com.sun.jersey.spi.container.servlet.WebComponent.createResourceConfig(WebComponent.java:674) ~[jersey-servlet-1.17.1.jar:1.17.1]
	at com.sun.jersey.spi.container.servlet.WebComponent.init(WebComponent.java:203) ~[jersey-servlet-1.17.1.jar:1.17.1]
	at com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:374) ~[jersey-servlet-1.17.1.jar:1.17.1]
	at com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:557) ~[jersey-servlet-1.17.1.jar:1.17.1]
	at javax.servlet.GenericServlet.init(GenericServlet.java:244) ~[javax.servlet-api-3.1.0.jar:3.1.0]
	at org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:612) ~[jetty-servlet-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.servlet.ServletHolder.initialize(ServletHolder.java:395) ~[jetty-servlet-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:871) ~[jetty-servlet-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298) ~[jetty-servlet-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741) ~[jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) [jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132) [jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114) [jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61) [jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) [jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132) [jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.server.Server.start(Server.java:387) [jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114) [jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61) [jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.server.Server.doStart(Server.java:354) [jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) [jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.neo4j.server.web.Jetty9WebServer.startJetty(Jetty9WebServer.java:381) [neo4j-server-2.2.3.jar:2.2.3]
	at org.neo4j.server.web.Jetty9WebServer.start(Jetty9WebServer.java:184) [neo4j-server-2.2.3.jar:2.2.3]
	at org.neo4j.server.AbstractNeoServer.startWebServer(AbstractNeoServer.java:474) [neo4j-server-2.2.3.jar:2.2.3]
	at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:230) [neo4j-server-2.2.3.jar:2.2.3]
	at org.neo4j.harness.internal.InProcessServerControls.start(InProcessServerControls.java:59) [neo4j-harness-2.2.3.jar:2.2.3]
	at org.neo4j.harness.internal.InProcessServerBuilder.newServer(InProcessServerBuilder.java:72) [neo4j-harness-2.2.3.jar:2.2.3]
	at org.neo4j.harness.junit.Neo4jRule$1.evaluate(Neo4jRule.java:64) [neo4j-harness-2.2.3.jar:2.2.3]
	at org.junit.rules.RunRules.evaluate(RunRules.java:20) [junit-4.11.jar:na]
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) [junit-4.11.jar:na]
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) [junit-4.11.jar:na]
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) [junit-4.11.jar:na]
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) [junit-4.11.jar:na]
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) [junit-4.11.jar:na]
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) [junit-4.11.jar:na]
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) [junit-4.11.jar:na]
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) [junit-4.11.jar:na]
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309) [junit-4.11.jar:na]
	at org.junit.runner.JUnitCore.run(JUnitCore.java:160) [junit-4.11.jar:na]
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:78) [junit-rt.jar:na]
	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:212) [junit-rt.jar:na]
	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:68) [junit-rt.jar:na]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_51]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_51]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_51]
	at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_51]
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140) [idea_rt.jar:na]
07:51:33.013 [main] INFO  o.e.jetty.server.ServerConnector - Started ServerConnector@19962194{HTTP/1.1}{localhost:7475}
07:51:33.014 [main] WARN  o.e.j.u.component.AbstractLifeCycle - FAILED org.eclipse.jetty.server.Server@481e91b6: java.lang.NoSuchMethodError: com.sun.jersey.core.reflection.ReflectionHelper.getContextClassLoaderPA()Ljava/security/PrivilegedAction;
java.lang.NoSuchMethodError: com.sun.jersey.core.reflection.ReflectionHelper.getContextClassLoaderPA()Ljava/security/PrivilegedAction;
	at com.sun.jersey.spi.scanning.AnnotationScannerListener.<init>(AnnotationScannerListener.java:94) ~[jersey-server-1.19.jar:1.19]
	at com.sun.jersey.spi.scanning.PathProviderScannerListener.<init>(PathProviderScannerListener.java:59) ~[jersey-server-1.19.jar:1.19]
	at com.sun.jersey.api.core.ScanningResourceConfig.init(ScanningResourceConfig.java:79) ~[jersey-server-1.19.jar:1.19]
	at com.sun.jersey.api.core.PackagesResourceConfig.init(PackagesResourceConfig.java:104) ~[jersey-server-1.19.jar:1.19]
	at com.sun.jersey.api.core.PackagesResourceConfig.<init>(PackagesResourceConfig.java:78) ~[jersey-server-1.19.jar:1.19]
	at com.sun.jersey.api.core.PackagesResourceConfig.<init>(PackagesResourceConfig.java:89) ~[jersey-server-1.19.jar:1.19]
	at com.sun.jersey.spi.container.servlet.WebComponent.createResourceConfig(WebComponent.java:696) ~[jersey-servlet-1.17.1.jar:1.17.1]
	at com.sun.jersey.spi.container.servlet.WebComponent.createResourceConfig(WebComponent.java:674) ~[jersey-servlet-1.17.1.jar:1.17.1]
	at com.sun.jersey.spi.container.servlet.WebComponent.init(WebComponent.java:203) ~[jersey-servlet-1.17.1.jar:1.17.1]
	at com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:374) ~[jersey-servlet-1.17.1.jar:1.17.1]
	at com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:557) ~[jersey-servlet-1.17.1.jar:1.17.1]
	at javax.servlet.GenericServlet.init(GenericServlet.java:244) ~[javax.servlet-api-3.1.0.jar:3.1.0]
	at org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:612) ~[jetty-servlet-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.servlet.ServletHolder.initialize(ServletHolder.java:395) ~[jetty-servlet-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:871) ~[jetty-servlet-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298) ~[jetty-servlet-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741) ~[jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) ~[jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132) ~[jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114) ~[jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61) ~[jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) ~[jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132) ~[jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.server.Server.start(Server.java:387) ~[jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114) ~[jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61) ~[jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.server.Server.doStart(Server.java:354) ~[jetty-server-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) ~[jetty-util-9.2.4.v20141103.jar:9.2.4.v20141103]
	at org.neo4j.server.web.Jetty9WebServer.startJetty(Jetty9WebServer.java:381) [neo4j-server-2.2.3.jar:2.2.3]
	at org.neo4j.server.web.Jetty9WebServer.start(Jetty9WebServer.java:184) [neo4j-server-2.2.3.jar:2.2.3]
	at org.neo4j.server.AbstractNeoServer.startWebServer(AbstractNeoServer.java:474) [neo4j-server-2.2.3.jar:2.2.3]
	at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:230) [neo4j-server-2.2.3.jar:2.2.3]
	at org.neo4j.harness.internal.InProcessServerControls.start(InProcessServerControls.java:59) [neo4j-harness-2.2.3.jar:2.2.3]
	at org.neo4j.harness.internal.InProcessServerBuilder.newServer(InProcessServerBuilder.java:72) [neo4j-harness-2.2.3.jar:2.2.3]
	at org.neo4j.harness.junit.Neo4jRule$1.evaluate(Neo4jRule.java:64) [neo4j-harness-2.2.3.jar:2.2.3]
	at org.junit.rules.RunRules.evaluate(RunRules.java:20) [junit-4.11.jar:na]
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) [junit-4.11.jar:na]
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) [junit-4.11.jar:na]
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) [junit-4.11.jar:na]
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) [junit-4.11.jar:na]
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) [junit-4.11.jar:na]
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) [junit-4.11.jar:na]
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) [junit-4.11.jar:na]
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) [junit-4.11.jar:na]
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309) [junit-4.11.jar:na]
	at org.junit.runner.JUnitCore.run(JUnitCore.java:160) [junit-4.11.jar:na]
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:78) [junit-rt.jar:na]
	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:212) [junit-rt.jar:na]
	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:68) [junit-rt.jar:na]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_51]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_51]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_51]
	at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_51]
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140) [idea_rt.jar:na]
 
org.neo4j.server.ServerStartupException: Starting Neo4j Server failed: com.sun.jersey.core.reflection.ReflectionHelper.getContextClassLoaderPA()Ljava/security/PrivilegedAction;
	at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:258)
	at org.neo4j.harness.internal.InProcessServerControls.start(InProcessServerControls.java:59)
	at org.neo4j.harness.internal.InProcessServerBuilder.newServer(InProcessServerBuilder.java:72)
	at org.neo4j.harness.junit.Neo4jRule$1.evaluate(Neo4jRule.java:64)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:78)
	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:212)
	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:68)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
Caused by: java.lang.NoSuchMethodError: com.sun.jersey.core.reflection.ReflectionHelper.getContextClassLoaderPA()Ljava/security/PrivilegedAction;
	at com.sun.jersey.spi.scanning.AnnotationScannerListener.<init>(AnnotationScannerListener.java:94)
	at com.sun.jersey.spi.scanning.PathProviderScannerListener.<init>(PathProviderScannerListener.java:59)
	at com.sun.jersey.api.core.ScanningResourceConfig.init(ScanningResourceConfig.java:79)
	at com.sun.jersey.api.core.PackagesResourceConfig.init(PackagesResourceConfig.java:104)
	at com.sun.jersey.api.core.PackagesResourceConfig.<init>(PackagesResourceConfig.java:78)
	at com.sun.jersey.api.core.PackagesResourceConfig.<init>(PackagesResourceConfig.java:89)
	at com.sun.jersey.spi.container.servlet.WebComponent.createResourceConfig(WebComponent.java:696)
	at com.sun.jersey.spi.container.servlet.WebComponent.createResourceConfig(WebComponent.java:674)
	at com.sun.jersey.spi.container.servlet.WebComponent.init(WebComponent.java:203)
	at com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:374)
	at com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:557)
	at javax.servlet.GenericServlet.init(GenericServlet.java:244)
	at org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:612)
	at org.eclipse.jetty.servlet.ServletHolder.initialize(ServletHolder.java:395)
	at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:871)
	at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298)
	at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
	at org.eclipse.jetty.server.Server.start(Server.java:387)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
	at org.eclipse.jetty.server.Server.doStart(Server.java:354)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
	at org.neo4j.server.web.Jetty9WebServer.startJetty(Jetty9WebServer.java:381)
	at org.neo4j.server.web.Jetty9WebServer.start(Jetty9WebServer.java:184)
	at org.neo4j.server.AbstractNeoServer.startWebServer(AbstractNeoServer.java:474)
	at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:230)
	... 22 more

I was a bit baffled at first, but if we look closely about 5–10 lines down the stack trace we can see the mistake I’d made:

	at com.sun.jersey.api.core.PackagesResourceConfig.<init>(PackagesResourceConfig.java:89) ~[jersey-server-1.19.jar:1.19]
	at com.sun.jersey.spi.container.servlet.WebComponent.createResourceConfig(WebComponent.java:696) ~[jersey-servlet-1.17.1.jar:1.17.1]
	at com.sun.jersey.spi.container.servlet.WebComponent.createResourceConfig(WebComponent.java:674) ~[jersey-servlet-1.17.1.jar:1.17.1]

We have different versions of the Jersey libraries on the classpath – jersey-server 1.19 alongside jersey-servlet 1.17.1. Since we’re writing the extension for Neo4j 2.2.3, we can quickly check which version it depends on:

$ ls -alh neo4j-community-2.2.3/system/lib/ | grep jersey
-rwxr-xr-x@  1 markneedham  staff   426K 22 Jun 04:57 jersey-core-1.19.jar
-rwxr-xr-x@  1 markneedham  staff    52K 22 Jun 05:02 jersey-multipart-1.19.jar
-rwxr-xr-x@  1 markneedham  staff   686K 22 Jun 05:02 jersey-server-1.19.jar
-rwxr-xr-x@  1 markneedham  staff   126K 22 Jun 05:02 jersey-servlet-1.19.jar

So we shouldn’t have 1.17.1 in our project, and it was easy enough to find my mistake by looking in the pom file:

<properties>
...
    <jersey.version>1.17.1</jersey.version>
...
</properties>
 
<dependencies>
...
 
    <dependency>
        <groupId>com.sun.jersey</groupId>
        <artifactId>jersey-servlet</artifactId>
        <version>${jersey.version}</version>
    </dependency>
 
...
</dependencies>
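
If the stray version isn’t obvious from the pom itself – for example, when it arrives transitively – Maven’s dependency tree should make it easy to spot:

$ mvn dependency:tree -Dincludes=com.sun.jersey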

Also easy enough to fix!

<properties>
...
    <jersey.version>1.19</jersey.version>
...
</properties>

You can see an example of this working on GitHub.

Categories: Programming

Neo4j 2.2.3: Unmanaged extensions – Creating gzipped streamed responses with Jetty

Mark Needham - Tue, 08/11/2015 - 00:57

Back in 2013 I wrote a couple of blog posts showing examples of an unmanaged extension which had a streamed and gzipped response, but two years on I realised they were a bit out of date and deserved a refresh.

When writing unmanaged extensions in Neo4j, a good rule of thumb is to try and reduce the number of objects you keep hanging around. In this context, that means we should stream our response to the client as quickly as possible rather than building it up in memory and sending it in one go.

The documentation has a good example showing how to stream a list of colleagues but in this blog post we’ll look at how to do something simpler – we’ll create a couple of nodes representing people and then write an unmanaged extension to return them.

We’ll first create an unmanaged extension which runs a cypher query, iterates through the rows returned and sends them to the client:

import java.io.IOException;
import java.io.OutputStream;
import java.util.Map;
 
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.StreamingOutput;
 
import org.codehaus.jackson.JsonEncoding;
import org.codehaus.jackson.JsonGenerator;
import org.codehaus.jackson.map.ObjectMapper;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Result;
 
@Path("/example")
public class ExampleResource {
    private final GraphDatabaseService db;
    private static final ObjectMapper OBJECT_MAPPER = new ObjectMapper();
 
    public ExampleResource(@Context GraphDatabaseService db) {
        this.db = db;
    }
 
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    @Path("/people")
    public Response allNodes() throws IOException {
        StreamingOutput stream = streamQueryResponse("MATCH (n:Person) RETURN n.name AS name");
        return Response.ok().entity(stream).type(MediaType.APPLICATION_JSON).build();
    }
 
    private StreamingOutput streamQueryResponse(final String query) {
        return new StreamingOutput() {
                @Override
                public void write(OutputStream os) throws IOException, WebApplicationException {
                    JsonGenerator jg = OBJECT_MAPPER.getJsonFactory().createJsonGenerator(os, JsonEncoding.UTF8);
                    jg.writeStartArray();
 
                    writeQueryResultTo(query, jg);
 
                    jg.writeEndArray();
                    jg.flush();
                    jg.close();
                }
            };
    }
 
    private void writeQueryResultTo(String query, JsonGenerator jg) throws IOException {
        try (Result result = db.execute(query)) {
            while (result.hasNext()) {
                Map<String, Object> row = result.next();
 
                jg.writeStartObject();
                for (Map.Entry<String, Object> entry : row.entrySet()) {
                    jg.writeFieldName(entry.getKey());
                    jg.writeString(entry.getValue().toString());
                }
                jg.writeEndObject();
            }
        }
    }
}

There’s nothing too complicated going on here, although notice that we make much more fine-grained calls to the JSON library rather than creating a JSON object in memory and calling ObjectMapper#writeValueAsString on it.

To get this to work we’d build a JAR containing this class, put that into the plugins folder and then add the following property to conf/neo4j-server.properties (or the Neo4j desktop equivalent) before restarting the server:

org.neo4j.server.thirdparty_jaxrs_classes=org.neo4j.unmanaged=/unmanaged

We can then test it out like this:

$ curl http://localhost:7474/unmanaged/example/people
[{"name":"Mark"},{"name":"Nicole"}]

I’ve put in a couple of test people nodes – full instructions are available on the GitHub README page.

Next we want to make it possible to send that response in the gzip format. To do that we need to add a GzipFilter to the Neo4j lifecycle. This class has moved to a different namespace in Jetty 9, which Neo4j 2.2.3 depends on, but the following class does the job:

import java.util.Collection;
import java.util.Collections;
 
import org.apache.commons.configuration.Configuration;
import org.eclipse.jetty.servlets.GzipFilter;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.server.AbstractNeoServer;
import org.neo4j.server.NeoServer;
import org.neo4j.server.plugins.Injectable;
import org.neo4j.server.plugins.SPIPluginLifecycle;
import org.neo4j.server.web.WebServer;
 
public class GZipInitialiser implements SPIPluginLifecycle {
    private WebServer webServer;
 
    @Override
    public Collection<Injectable<?>> start(NeoServer neoServer) {
        webServer = getWebServer(neoServer);
        GzipFilter filter = new GzipFilter();
 
        webServer.addFilter(filter, "/*");
        return Collections.emptyList();
    }
 
    private WebServer getWebServer(final NeoServer neoServer) {
        if (neoServer instanceof AbstractNeoServer) {
            return ((AbstractNeoServer) neoServer).getWebServer();
        }
        throw new IllegalArgumentException("expected AbstractNeoServer");
    }
 
    @Override
    public Collection<Injectable<?>> start(GraphDatabaseService graphDatabaseService, Configuration configuration) {
        throw new IllegalAccessError();
    }
 
    @Override
    public void stop() {
 
    }
}
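
As an aside, GzipFilter lives in Jetty’s jetty-servlets artifact, so something along these lines (the version matching the Jetty 9.2.4 build seen in the stack traces above) has to be packaged into the extension JAR:

<dependency>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-servlets</artifactId>
    <version>9.2.4.v20141103</version>
</dependency>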

I needed to include that jetty-servlets JAR in my unmanaged extension JAR in order for this to work correctly. Once we redeploy the JAR and restart Neo4j, we can try making the same request as above but with a gzip header:

$ curl -v -H "Accept-Encoding:gzip,deflate" http://localhost:7474/unmanaged/example/people
��V�K�MU�R�M,�V�Ձ��2��sR�jcf(�#

We can unpack that on the fly by piping it through gunzip to check we get a sensible result:

$ curl -v -H "Accept-Encoding:gzip,deflate" http://localhost:7474/unmanaged/example/people | gunzip
[{"name":"Mark"},{"name":"Nicole"}]

And there we have it – a gzipped streamed response. All the code is on github so give it a try and give me a shout if it doesn’t work. The fastest way to get me is probably on our new shiny neo4j-users Slack group.

Categories: Programming

Learn about Google Translate in Coffee with a Googler

Google Code Blog - Mon, 08/10/2015 - 18:35

Posted by Laurence Moroney, Developer Advocate

Over the past few months, we’ve been bringing you 'Coffee with a Googler', giving you a peek at people working on cool stuff that you might find inspirational and useful. Starting with this week’s episode, we’re going to accompany each video with a short post for more details, while also helping you make the most of the tech highlighted in the video.

This week we meet with MacDuff Hughes from the Google Translate team. Google Translate uses statistics-based translation. By finding very large numbers of examples of translations from one language to another, it uses statistics to see how various phrases are treated, so it can make reasonable estimates at correct phrases that sound natural in the target language. For common phrases, there are many candidate translations, so the engine converts them within the context of the passage that the phrase is in. Images can also be translated: when you point your mobile device at printed text, it will translate it into your preferred language.

Translate is adding languages all the time, and part of its mission is to serve languages that are less frequently used, such as Gaelic, Welsh or Maori, in the same manner as the more commonly used ones, such as English, Spanish and French. To this end, Translate supports more than 90 languages. In the quest to constantly improve translation, the technology provides a way for the community to validate translations; this is particularly valuable for less commonly used languages, effectively helping them to grow and thrive. It also enhances the machine translation by keeping people involved.

You can learn more about Google Translate, and the Translate app, here.

Developers have a couple of options to use translate:

  • The Free Website Translate plugin that you can add to your site and have translations available immediately.
  • The Cloud-based Translate API that you can use to build apps or sites that have translation in them.

Watch the episode here:


Categories: Programming

Spark, Parquet and S3 – It’s complicated.

(A version of this post was originally published on AppsFlyer’s blog. Special thanks to Morri Feldman and Michael Spector from the AppsFlyer data team, who did most of the work solving the problems discussed in this article.)

TL;DR: The combination of Spark, Parquet and S3 (& Mesos) is a powerful, flexible and cost-effective analytics platform (and, incidentally, an alternative to Hadoop). However, making all these technologies gel and play nicely together is not a simple task. This post describes the challenges we (AppsFlyer) faced when building our analytics platform on these technologies, and the steps we took to mitigate them and make it all work.

Spark is shaping up as the leading alternative to Map/Reduce for several reasons: wide adoption by the different Hadoop distributions, combining both batch and streaming on a single platform, and a growing library of machine-learning integrations (both in terms of included algorithms and integration with machine-learning languages, namely R and Python). At AppsFlyer, we’ve been using Spark for a while now as the main framework for ETL (Extract, Transform & Load) and analytics. A recent example is the new version of our retention report, which utilized Spark to crunch several data streams (> 1TB a day) with ETL (mainly data cleansing) and analytics (a stepping stone towards full click-fraud detection) to produce the report.
One of the main changes we introduced in this report is the move from building on Sequence files to using Parquet files. Parquet is a columnar data format, which is probably the best option today for storing long-term big data for analytics purposes (unless you are heavily invested in Hive, where ORC is the more suitable format). The advantages of Parquet vs. Sequence files are performance and compression without losing the benefit of wide support by big-data tools (Spark, Hive, Drill, Tajo, Presto etc.).

One relatively unique aspect of our big-data infrastructure is that we do not use Hadoop (perhaps that’s a topic for a separate post). We use Mesos as a resource manager instead of YARN, and Amazon S3 instead of HDFS as a distributed storage solution. HDFS has several advantages over S3; however, the cost/benefit of maintaining long-running HDFS clusters on AWS vs. using S3 is overwhelmingly in favor of S3.

That said, the combination of Spark, Parquet and S3 posed several challenges for us; this post lists the major ones and the solutions we came up with to cope with them.

Parquet & Spark

Parquet and Spark seem to have been in a love-hate relationship for a while now. On the one hand, the Spark documentation touts Parquet as one of the best formats for analytics of big data (it is); on the other hand, the support for Parquet in Spark is incomplete and annoying to use. Things are surely moving in the right direction, but there are still a few quirks and pitfalls to watch out for.

To start on a positive note, Spark and Parquet integration has come a long way in the past few months. Previously, one had to jump through hoops just to be able to convert existing data to Parquet. The introduction of DataFrames to Spark made this process much, much simpler. When the input format is supported by the DataFrame API, e.g. the input is JSON (built in) or Avro (which isn’t built into Spark yet, but you can use a library to read it), converting to Parquet is just a matter of reading the input format on one side and persisting it as Parquet on the other. Consider for example the following snippet in Scala:

View the code on Gist.
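
A minimal sketch of that kind of conversion, assuming a Spark 1.4-era shell session (sc is the shell’s SparkContext; the bucket paths are made up):

import org.apache.spark.sql.SQLContext
 
val sqlContext = new SQLContext(sc)
 
// Read the JSON input (the schema is inferred from the data)...
val events = sqlContext.read.json("s3a://some-bucket/events/2015-08-11/")
// ...and persist it as Parquet on the other side.
events.write.parquet("s3a://some-bucket/events-parquet/2015-08-11/")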

Even when you are handling a format where the schema isn’t part of the data, the conversion process is quite simple, as Spark lets you specify the schema programmatically. The Spark documentation is pretty straightforward and contains examples in Scala, Java and Python. Furthermore, it isn’t too complicated to define schemas in other languages. For instance, here at AppsFlyer we use Clojure as our main development language, so we developed a couple of helper functions to do just that. The sample code below provides the details:

The first thing is to extract the data from whatever structure we have and specify the schema we like. The code below takes an event-record and extracts various data points from it into a vector of the form [:column_name value optional_data_type]. Note that the data type is optional, since it defaults to string if not specified.

View the code on Gist.

The next step is to use the above-mentioned structure to both extract the schema and convert to DataFrame Rows:

View the code on Gist.

Finally, we apply these functions over an RDD, convert it to a DataFrame and save it as Parquet:

View the code on Gist.
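
The Clojure helpers live in the gists above; the same idea expressed in Scala – build the schema programmatically, map raw records to Rows, and save the DataFrame as Parquet – looks roughly like this (the event-record shape and field names are invented for illustration):

import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}
 
// Programmatic schema – the analogue of the [:column_name value optional_data_type]
// vectors described above, with the types spelled out explicitly here.
val schema = StructType(Seq(
  StructField("app_id", StringType, nullable = true),
  StructField("event_name", StringType, nullable = true),
  StructField("event_time", LongType, nullable = true)))
 
// A stand-in for the real event-record stream (sc and sqlContext as in the earlier sketch).
val rawEvents = sc.parallelize(Seq(
  Map[String, Any]("app_id" -> "com.example.app", "event_name" -> "install", "event_time" -> 1439251200L)))
 
// Convert each record to a Row matching the schema, then build and save the DataFrame.
val rows = rawEvents.map(e => Row(e("app_id"), e("event_name"), e("event_time")))
sqlContext.createDataFrame(rows, schema).write.parquet("s3a://some-bucket/events-parquet/")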

As mentioned above, things are on the up and up for Parquet and Spark but the road is not clear yet. Some of the problems we encountered include:

  • A critical bug in the 1.4 release where a race condition when writing Parquet files caused massive data loss on jobs (this bug is fixed in 1.4.1 – so if you are using Spark 1.4 and Parquet, upgrade yesterday!)
  • Filter pushdown optimization, which is turned off by default since Spark still uses Parquet 1.6.0rc3 – even though 1.6.0 has been out for a while (it seems Spark 1.5 will use Parquet 1.7.0, so the problem will be solved)
  • Parquet is not “natively” supported in Spark; instead, Spark relies on Hadoop support for the Parquet format – this is not a problem in itself, but for us it caused major performance issues when we tried to use Spark and Parquet with S3 – more on that in the next section

Parquet, Spark & S3

Amazon S3 (Simple Storage Service) is an object storage solution that is relatively cheap to use. It does have a few disadvantages vs. a “real” file system; the major one is eventual consistency, i.e. changes made by one process are not immediately visible to other applications. (If you are using Amazon’s EMR you can use EMRFS “consistent view” to overcome this.) However, if you understand this limitation, S3 is still a viable input and output source, at least for batch jobs.

As mentioned above, Spark doesn’t have a native S3 implementation and relies on Hadoop classes to abstract the data access to Parquet. Hadoop provides 3 file system clients to S3:

  • S3 block file system (URI scheme of the form “s3://..”), which only works on EMR (edited 12/8/2015, thanks to Ewan Leith)
  • S3 Native Filesystem (“s3n://..” URIs) – download a Spark distribution that supports Hadoop 2.* and up if you want to use this (tl;dr – you don’t)
  • S3a – a replacement for S3n that removes some of the limitations and problems of S3n. Download the “Spark with Hadoop 2.6 and up” build to use this one (tl;dr – you want this, but it needs some work before it is usable)

 

When we used Spark 1.3 we encountered many problems when we tried to use S3, so we started out using s3n – which worked for the most part, i.e. we got jobs running and completing, but a lot of them failed with various read timeout and host unknown exceptions. Looking at the tasks within the jobs, the picture was even grimmer, with high percentages of failures that pushed us to increase timeouts and retries to ridiculous levels. When we moved to Spark 1.4.1, we took another stab at trying s3a, and this time around we got it to work. The first thing we had to do was to set both spark.executor.extraClassPath and spark.driver.extraClassPath to point at the aws-java-sdk and hadoop-aws jars, since apparently both are missing from the “Spark with Hadoop 2.6” build. Naturally we used the 2.6 version of these files, but then we were hit by this little problem: the Hadoop 2.6 AWS implementation has a bug which causes it to split S3 files in unexpected ways (e.g. a 400-file job ran with 18 million tasks). Luckily, bumping the Hadoop AWS jar to version 2.7.0 instead of the 2.6 one solved this problem. So, with all that set, the s3a prefix works without hitches (and provides better performance than s3n).
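
As a sketch, those classpath settings look something like this (the jar locations and versions are hypothetical – point them at wherever your aws-java-sdk and hadoop-aws 2.7.0 jars actually live):

import org.apache.spark.SparkConf
 
val conf = new SparkConf()
  // Both jars are missing from the "Spark with Hadoop 2.6" build, so they have to be
  // added to the executor and driver classpaths explicitly (paths are hypothetical).
  .set("spark.executor.extraClassPath", "/opt/jars/aws-java-sdk-1.7.4.jar:/opt/jars/hadoop-aws-2.7.0.jar")
  .set("spark.driver.extraClassPath", "/opt/jars/aws-java-sdk-1.7.4.jar:/opt/jars/hadoop-aws-2.7.0.jar")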

Finding the right S3 Hadoop library contributed to the stability of our jobs, but regardless of S3 library (s3n or s3a) the performance of Spark jobs that use Parquet files was still abysmal. When looking at the Spark UI, the actual work of handling the data seemed quite reasonable, but Spark spent a huge amount of time before actually starting the work and after the job was “completed” before it actually terminated. We like to call this phenomenon the “Parquet Tax.”

Obviously we couldn’t live with the “Parquet Tax,” so we delved into the log files of our jobs and discovered several issues. The first one has to do with the startup times of Parquet jobs. The people that built Spark understood that schemas can evolve over time and provided a nice feature for DataFrames called “schema merging.” If you look at schemas in a big data lake/reservoir (or whatever it is called today), you can definitely expect them to evolve over time. However, if you look at a directory that is the result of a single job, there is no difference in the schema… It turns out that when Spark initializes a job, it reads the footers of all the Parquet files to perform the schema merging. All this work is done from the driver before any tasks are allocated to the executors, and can take long minutes, even hours (e.g. we have jobs that look back at half a year of install data). It isn’t documented, but looking at the Spark code you can override this behavior by specifying mergeSchema as false:

In Scala:

View the code on Gist.
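
A sketch of that call, assuming a SQLContext and a made-up path:

val df = sqlContext.read
  .option("mergeSchema", "false") // skip reading every Parquet footer up front
  .parquet("s3a://some-bucket/events-parquet/")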

and in Clojure:

View the code on Gist.

Note that this doesn’t work in Spark 1.3. In Spark 1.4 it works as expected, and in Spark 1.4.1 it causes Spark only to look at the _common_metadata file, which is not the end of the world since it is a small file and there’s only one of these per directory. However, this brings us to another aspect of the “Parquet Tax” – the “end of job” delays.

Turning off schema merging and controlling the schema used by Spark helped cut down the job startup times but, as mentioned, we still suffered from long delays at the end of jobs. We already knew of one Hadoop<->S3 related problem when using text files: Hadoop, treating existing files as immutable, first writes output to a temp directory and then copies it over. With HDFS that’s not a problem, but on S3 the copy operation is very, very expensive. For text files, DataBricks created the DirectOutputCommitter (probably for their Spark SaaS offering). Replacing the output committer for text files is fairly easy – you just need to set “spark.hadoop.mapred.output.committer.class” on the Spark configuration, e.g.:

View the code on Gist.
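
As a sketch – the committer’s fully qualified class name below is a placeholder for whichever DirectOutputCommitter implementation is on your classpath:

val conf = new SparkConf()
  .set("spark.hadoop.mapred.output.committer.class",
    "com.example.hadoop.DirectOutputCommitter") // hypothetical FQCN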

A similar solution exists for Parquet, and unlike the solution for text files it is even part of the Spark distribution. However, to make things complicated, you have to configure it on the Hadoop configuration and not on the Spark configuration. To get the Hadoop configuration, you first need to create a Spark context from the Spark configuration, call hadoopConfiguration on it and then set “spark.sql.parquet.output.committer.class” as in:

View the code on Gist.
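
A sketch of the Parquet variant; note the setting goes on the Hadoop configuration reached through the SparkContext rather than on the SparkConf (the class name below is, to the best of our knowledge, the one shipped with the Spark 1.4 line):

import org.apache.spark.SparkContext
 
val sc = new SparkContext(conf) // conf is the SparkConf from earlier
sc.hadoopConfiguration.set("spark.sql.parquet.output.committer.class",
  "org.apache.spark.sql.parquet.DirectParquetOutputCommitter")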

Using the DirectParquetOutputCommitter provided a significant reduction in the “Parquet Tax,” but we still found that some jobs were taking a very long time to complete. Again, the culprits were the file system assumptions Spark and Hadoop hold. Remember the “_common_metadata” file Spark looks at at the onset of a job – well, Spark spends a lot of time at the end of the job creating both this file and an additional metadata file with additional info from the files that are in the directory. Again, this is all done from one place (the driver) rather than being handled by the executors. When the job results in small files (even when there are a couple of thousand of them), the process takes a reasonable amount of time. However, when the job results in larger files (e.g. when we ingest a full day of application launches), this takes upwards of an hour. As with mergeSchema, the solution is to manage the metadata manually, so we set “parquet.enable.summary-metadata” to false (again, on the Hadoop configuration) and generate the _common_metadata file ourselves (for the large jobs).
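
And a corresponding sketch for switching off the summary metadata, again on the Hadoop configuration:

sc.hadoopConfiguration.set("parquet.enable.summary-metadata", "false")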

To sum up, Parquet and especially Spark are works in progress – making cutting-edge technologies work for you can be a challenge and requires a lot of digging. The documentation is far from perfect at times, but luckily all the relevant technologies are open source (even the Amazon SDK), so you can always dive into the bug reports, code etc. to understand how things actually work and find the solutions you need. Also, from time to time you can find articles and blog posts that explain how to overcome the common issues in the technologies you are using. I hope this post clears up some of the complications of integrating Spark, Parquet and S3, which are, at the end of the day, all great technologies with a lot of potential.

Image by xkcd, licensed under Creative Commons 2.5.

Categories: Architecture