
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools



Craziest Entrepreneur Challenge: Can 3 Devs Make $100 Each In 24 Hours? (1 of 3)

Making the Complex Simple - John Sonmez - Tue, 03/04/2014 - 16:45

You are dropped in the middle of the forest by helicopter with nothing but a hunting knife. What do you do? How do you survive? That was the basic premise of the challenge a couple of my buddies and I undertook, only instead of our competition taking place in the woods, ours took place completely […]

The post Craziest Entrepreneur Challenge: Can 3 Devs Make $100 Each In 24 Hours? (1 of 3) appeared first on Simple Programmer.

Categories: Programming

Satya Nadella on How Success is a Mental Game

As technology and software change our world at a faster rate than ever before, we need to play a better game.

How do we play a better game?

By recognizing our conceptual blocks and removing them.

Here is how Satya Nadella told us to think about our mental game and conceptual blocks:

“It's really a mental game.

At this point, it's got nothing to do with your capability, at all.  You're going to be facing stuff that you never faced before and it's all in the head.  The question is how are you going to cope with it.  It's all a conceptual block. 

And if we can get rid of that, things get a lot easier.

You've got to really think about the conceptual block you have, be mindful of it, and remove it.

And then you can have a different perspective.”

When we change our perspective, we change our game.

That’s how we win, in work and in life.

You Might Also Like

Microsoft Explained: Making Sense of the Microsoft Platform Story

Satya Nadella is the New Microsoft CEO

Satya Nadella is All About Customer Focus, Employee Engagement, and Changing the World

Satya Nadella on Live and Work a Meaningful Life

Satya Nadella on the Future is Software

Satya Nadella on Everyone Has to Be a Leader

Categories: Architecture, Programming

Geoip geolocation with Google BigQuery

Google Code Blog - Tue, 03/04/2014 - 02:03
By Felipe Hoffa, Cloud Platform team

Cross-posted from the Google Cloud Platform Blog

Aggregating numbers by geolocation is a powerful way to analyze your data, but not an easy task when you have millions of IP addresses to analyze. In this post, we'll see how we can use Google BigQuery to quickly solve this use case with a publicly available dataset.

We take the developer community seriously, and it’s a great way for us to see what your use cases are. That’s where I found a very interesting question: "user2881671" on Stack Overflow had created a way to transform IP addresses into geographical locations in BigQuery, and asked for help optimizing their query. We worked out an optimized solution there, and today I'm happy to present an even better solution.

For example, if you want to peek at what are the top cities contributing modifications to Wikipedia, you can run this query:
SELECT COUNT(*) c, city, countryLabel, NTH(1, latitude) lat, NTH(1, longitude) lng
FROM (
  SELECT contributor_ip,
    INTEGER(PARSE_IP(contributor_ip)) AS clientIpNum,
    INTEGER(PARSE_IP(contributor_ip)/(256*256)) AS classB
  FROM [publicdata:samples.wikipedia]
  WHERE contributor_ip IS NOT NULL
) AS a
JOIN EACH [fh-bigquery:geocode.geolite_city_bq_b2b] AS b
ON a.classB = b.classB
WHERE a.clientIpNum BETWEEN b.startIpNum AND b.endIpNum
AND city != ''
GROUP BY city, countryLabel
We can visualize the results on a map:

You can do the same operation with your own tables containing IPv4 addresses. Just take the previous query and replace [publicdata:samples.wikipedia] with your own table, and contributor_ip with the name of the column containing IPv4 addresses.

Technical details

First, I downloaded the Creative Commons licensed GeoLite City IPv4 database made available by MaxMind in .csv format. There is a newer database available too, but I didn't work with it as it's only available in binary form for now. I uploaded its two tables into BigQuery: blocks and locations.

To get better performance later, some processing was needed: for each rule I extracted its class B prefix (192.168.x.x) into a new column and generated duplicate rules for segments that spanned more than one class B. I also joined both original tables, to skip that step when processing data. In the Stack Overflow question "user2881671" went even further, generating additional rules for segments without a location mapping (cleverly using the LAG() window function), but I skipped that step here (so addresses without a location will be skipped rather than counted). In total, only 32,702 new rows were needed.
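The class B trick can be sketched in a few lines of Python. This is an illustrative reconstruction, not the published pipeline: `ip_to_int`, `class_b` and `split_block` are hypothetical names, but the arithmetic mirrors PARSE_IP and the /(256*256) prefix used in the query above.

```python
# Illustrative sketch (not the actual pipeline): parse an IPv4 string to an
# integer, derive its class B (/16) prefix, and split a GeoLite block that
# spans several /16s into one duplicate rule per prefix.

def ip_to_int(ip):
    a, b, c, d = (int(x) for x in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def class_b(ip_num):
    return ip_num // (256 * 256)  # same arithmetic as PARSE_IP(ip)/(256*256)

def split_block(start_ip, end_ip, location_id):
    """Yield (classB, startIpNum, endIpNum, locId) rows, one per /16 spanned."""
    start, end = ip_to_int(start_ip), ip_to_int(end_ip)
    rows = []
    for prefix in range(class_b(start), class_b(end) + 1):
        lo = max(start, prefix * 256 * 256)
        hi = min(end, prefix * 256 * 256 + 65535)
        rows.append((prefix, lo, hi, location_id))
    return rows

# A block inside one /16 produces a single rule...
print(split_block("1.0.0.0", "1.0.0.255", 42))
# ...while a block crossing a /16 boundary is duplicated, one rule per prefix.
print(len(split_block("1.0.255.0", "1.1.0.255", 42)))  # -> 2
```

Because every duplicated rule carries its own classB column, the JOIN can use a plain equality on that prefix, which is what makes the lookup fast.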

The final query JOINs the class B prefix from your IP addresses with the lookup table, to prevent the performance hit of doing a full cross join.

You can query the new table with the BigQuery web UI, or use the REST-based API to integrate these queries and this dataset with your own software.

To get started with BigQuery, you can check out our site and the "What is BigQuery" introduction. You can post questions and get quick answers about BigQuery usage and development on Stack Overflow. Follow the latest BigQuery news and join the discussion on +Google Cloud Platform using the hashtag #BigQuery. We love your feedback and comments.

This post includes GeoLite data created by MaxMind, distributed under the Creative Commons Attribution-ShareAlike 3.0 Unported License.

Felipe Hoffa is part of the Cloud Platform Team. He'd love to see the world's data accessible for everyone in BigQuery.

Posted by Scott Knaster, Editor
Categories: Programming

IISpeed accelerates, with PageSpeed optimization by Google

Google Code Blog - Mon, 03/03/2014 - 23:32
This guest post was written by Otto van der Schaaf, Founder and Software Engineer, We-Amp

Google has been working on making the web faster for years now. Recently, We-Amp collaborated with Google and SunStar Media to conduct a test to measure the real-world effects of PageSpeed optimization with IISpeed on conversion rates and overall site speed. We used IISpeed to optimize the Visit Napa Valley site.

Our test found that IISpeed reduced page load time on average by 12.36% for mobile, and increased conversions by 1%. (In this case, a conversion is a visit to a booking page on an external site.) In the mobile and tablet segments, we saw 7x and 9x improvements in the 0-1 second page load time bucket.

We got these results after spending just 20 minutes to configure IISpeed.

What did IISpeed optimize?

James Moberg, CTO of SunStar Media, said this about the project: “This website displays a large number of unique images. IISpeed automatically controls caching for updated files and performs lazy loading. We were using some server-side logic and jQuery libraries to do this, but it required more code, required caching file dates and it appeared to display slower during page load. We use Adobe ColdFusion and there are some libraries that concat and minify both JavaScript and CSS, but none are as easy or robust as IISpeed. Adding a custom header allows us to enable or disable any IISpeed features on page requests based on any criteria like authenticated user, browser, cookie, IP address, internal application toggle, and so on. Overall we are extremely pleased with IISpeed and are proactively removing all optimization hacks from past projects and allowing IISpeed to manage it.”

Some third-party content isn't optimized by PageSpeed in this test. These are areas for future enhancement.


Changes measured: IISpeed versus no IISpeed. We set up a classic A/B test on Google Analytics to facilitate our test strategy.
The average timings are sensitive to outliers, so for correct interpretation it is good to have a look at the timing distribution as well.

IISpeed by We-Amp

IISpeed brings the full power of PageSpeed optimization to Microsoft’s IIS web server, which powers millions of websites. PageSpeed optimization via IISpeed improves site speed by automating best practices as outlined by Steve Souders, Head Performance Engineer at Google. In addition to these optimizations, Google’s “Make the web faster” team has added above-the-fold content prioritization and a lot more.

IISpeed is an initiative by We-Amp, a web performance optimization company located in the Netherlands.

Otto van der Schaaf is one of the founders of We-Amp, and spends his time coding on web acceleration technologies and accelerating web sites. When he is not working he loves to play soccer with his two sons.

Posted by Scott Knaster, Editor
Categories: Programming

Neo4j 2.1.0-M01: LOAD CSV with Rik Van Bruggen’s Tube Graph

Mark Needham - Mon, 03/03/2014 - 17:34

Last week we released the first milestone of Neo4j 2.1.0, and one of its features is a new function in cypher – LOAD CSV – which aims to make it easier to get data into Neo4j.

I thought I’d give it a try to import the London tube graph – something that my colleague Rik wrote about a few months ago.

I’m using the same data set as Rik but I had to tweak it a bit as there were naming differences when describing the connection from Kennington to Waterloo and Kennington to Oval. My updated version of the dataset is on github.

With the help of Alistair we now have a variation on the original which takes into account the various platforms at stations and the waiting time of a train on the platform. This will also enable us to add in things like getting from the ticket hall to the various platforms more easily.

The model looks like this:


Now we need to create a graph and the first step is to put an index on station name as we’ll be looking that up quite frequently in the queries that follow:

CREATE INDEX on :Station(stationName)

Now that’s in place we can make use of LOAD CSV. The data is very de-normalised which works out quite nicely for us and we end up with the following script:

LOAD CSV FROM "file:/Users/markhneedham/code/tube/runtimes.csv" AS csvLine
WITH csvLine[0] AS lineName, 
     csvLine[1] AS direction, 
     csvLine[2] AS startStationName,
     csvLine[3] AS destinationStationName, 
     toFloat(csvLine[4]) AS distance, 
     toFloat(csvLine[5]) AS runningTime
MERGE (start:Station { stationName: startStationName}) 
MERGE (destination:Station { stationName: destinationStationName}) 
MERGE (line:Line { lineName: lineName}) 
MERGE (line) - [:DIRECTION] -> (dir:Direction { direction: direction}) 
CREATE (inPlatform:InPlatform {name: "In: " + destinationStationName + " " + lineName + " " + direction})
CREATE (outPlatform:OutPlatform {name: "Out: " + startStationName + " " + lineName + " " + direction}) 
CREATE (inPlatform) - [:AT] -> (destination) 
CREATE (outPlatform) - [:AT] -> (start) 
CREATE (inPlatform) - [:ON] -> (dir) 
CREATE (outPlatform) - [:ON] -> (dir) 
CREATE (outPlatform) - [r:TRAIN {distance: distance, runningTime: runningTime}] -> (inPlatform)

This file doesn’t contain any headers, so we simulate them with a WITH clause so that we don’t have index lookups all over the place. In this case we’re pointing to a file on the local file system, but we could point to a CSV file on the web if we wanted to.

Since stations, lines and directions appear frequently we’ll use MERGE to ensure they don’t get duplicated.
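To see why the de-normalised input is harmless, the MERGE behaviour can be mimicked in plain Python: each station and line is created at most once, however many rows mention it. The sample rows below are invented for illustration; the real file has the column layout named in the WITH clause (line, direction, start station, destination, distance, running time).

```python
# Rough analogue of MERGE: look a node up first, create it only on a miss.
import csv, io

# Made-up rows in the same six-column layout as the runtimes.csv file.
sample = """\
Northern,Southbound,EUSTON,CAMDEN TOWN,1.2,2.0
Northern,Southbound,CAMDEN TOWN,KENTISH TOWN,0.9,1.5
Northern,Northbound,CAMDEN TOWN,EUSTON,1.2,2.0
"""

stations, lines = {}, {}
for row in csv.reader(io.StringIO(sample)):
    line_name, direction, start, dest, distance, running_time = row
    for name in (start, dest):
        stations.setdefault(name, {"stationName": name})  # MERGE-style
    lines.setdefault(line_name, {"lineName": line_name})

print(sorted(stations))  # each station appears exactly once
print(sorted(lines))
```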

After that we have a post processing step to connect the ‘in’ and ‘out’ platforms shown in the diagram.

MATCH (station:Station) <-[:AT]- (platformIn:InPlatform), 
      (station:Station) <-[:AT]- (platformOut:OutPlatform), 
      (direction:Direction) <-[:ON]- (platformIn:InPlatform), 
      (direction:Direction) <-[:ON]- (platformOut:OutPlatform) 
CREATE (platformIn) -[:WAIT {runningTime: 0.5}]-> (platformOut)

After running a few queries on the graph I realised that it wasn’t possible to combine some journeys through Kennington and Euston, so I had to add some relationships in there as well:

// link the Euston stations
MATCH (euston:Station {stationName: "EUSTON"})<-[:AT]-(eustonIn:InPlatform)
MATCH (eustonCx:Station {stationName: "EUSTON (CX)"})<-[:AT]-(eustonCxIn:InPlatform)
MATCH (eustonCity:Station {stationName: "EUSTON (CITY)"})<-[:AT]-(eustonCityIn:InPlatform)
CREATE UNIQUE (eustonIn)-[:WAIT {runningTime: 0.0}]->(eustonCxIn)
CREATE UNIQUE (eustonIn)-[:WAIT {runningTime: 0.0}]->(eustonCityIn)
CREATE UNIQUE (eustonCxIn)-[:WAIT {runningTime: 0.0}]->(eustonCityIn)
// link the Kennington stations
MATCH (kenningtonCx:Station {stationName: "KENNINGTON (CX)"})<-[:AT]-(kenningtonCxIn:InPlatform)
MATCH (kenningtonCity:Station {stationName: "KENNINGTON (CITY)"})<-[:AT]-(kenningtonCityIn:InPlatform)
CREATE UNIQUE (kenningtonCxIn)-[:WAIT {runningTime: 0.0}]->(kenningtonCityIn)

I’ve been playing around with the A* algorithm to find the quickest route between stations based on the distances between stations.
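The search itself isn't shown in the post; here is a hedged sketch of the quickest-route idea in Python (Dijkstra, i.e. A* with a zero heuristic) over a tiny invented platform graph, mixing TRAIN runningTime edges with the 0.5-minute WAIT edges from the model above:

```python
import heapq

# Tiny made-up platform graph: edge weights are minutes.
graph = {
    "Out: A": [("In: B", 2.0)],   # TRAIN edge with its runningTime
    "In: B":  [("Out: B", 0.5)],  # WAIT on the platform
    "Out: B": [("In: C", 1.5)],   # TRAIN
    "In: C":  [],
}

def quickest(graph, start, goal):
    """Total minutes from start to goal, or None if unreachable."""
    queue, seen = [(0.0, start)], set()
    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            return cost
        if node in seen:
            continue
        seen.add(node)
        for neighbour, weight in graph.get(node, []):
            if neighbour not in seen:
                heapq.heappush(queue, (cost + weight, neighbour))
    return None

print(quickest(graph, "Out: A", "In: C"))  # 2.0 + 0.5 + 1.5 -> 4.0
```

A real A* run would add a heuristic (say, straight-line distance between stations) to the priority, which Neo4j's shortest-path facilities support natively.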

The next step is to put a timetable graph alongside this so we can do quickest routes at certain parts of the day and the next step after that will be to take delays into account.

If you’ve got some data you want to get into the graph, give LOAD CSV a try and let us know how you get on – the cypher team are keen to get feedback on this.

Categories: Programming

Are You Afraid Of Losing Your Job?

Making the Complex Simple - John Sonmez - Mon, 03/03/2014 - 17:00

Nothing can paralyze you worse than the fear of losing your job.  If you are afraid that you might lose your job, you are likely to act in ways that are in the best interest of neither your employer nor yourself. How to know if you are afraid It might seem like an obvious thing, […]

The post Are You Afraid Of Losing Your Job? appeared first on Simple Programmer.

Categories: Programming

NorDevCon 2014

Phil Trelford's Array - Sun, 03/02/2014 - 17:21

NorDevCon is a one day agile and tech conference held annually in the historic English town of Norwich, also the setting for the recent British comedy Alan Partridge: Alpha Papa.

Last Friday I took the short hop across the Fens to Norwich to talk about Data Science and Machine Learning with F#. First a talk on F# Type Providers entitled All your base types are belong to us (thanks to Ross McKinlay for the meme), and then after lunch a hands-on Machine Learning workshop exploring Titanic passenger data from Kaggle (a data science company based in San Francisco).

The conference attracted a great range of speakers with some really interesting sessions:


In the morning I got to chat with Jon Skeet about programming with kids, and then watched him answer stack overflow questions at a frightening pace.

Jason Gorman gave a lively and warm opening keynote on Software Apprenticeships, which included a Skype session with his brave apprentice, Will Price. Immediately followed by Chris O’Dell on Continuous Delivery at 7Digital which ended with a lot of interested questions.

In the afternoon I caught Phil Nash’s thought-provoking session on Agile and Mobile – do they work together as well as they should? Phil talked about a schism in testing approaches on mobile platforms, particularly iOS, with some in the community advocating unit testing and TDD and others none at all. Check out Phil’s links from the talk to learn more.

The closing keynote came from the highly respected Nat Pryce and Steve Freeman on Building SOLID Foundations. The talk focused on design principles for addressing complexity in mid-scale codebases. Nat gave examples of successfully taming complexity in an unnamed risk management project using immutability and DSLs, in effect a functional-style approach. An approach that Jessica Kerr also explored in some depth in her popular Functional Principles for Object-Oriented Developers talk at last year’s Øredev.

The day was capped off with a hearty dinner, with a fun format where people moved around between the courses.

All your types are belong to us!

My first talk showcased typed access to a vast array of data sources, through to software environments like Hadoop and R, via F# Type Providers:

All your types are belong to us! from Phillip Trelford

Type Providers covered:

  • JSON, CSV, HTML Tables and the World Bank (via FSharp.Data)
  • SQLite (via Ross’s SQL Provider, which also supports MySQL, Oracle, PostgreSQL & SQL Server)
  • R (via BlueMountain Capital’s R-Type Provider)
  • Hadoop (in the browser via Try FSharp)

All of the Type Providers shown are open source projects developed by the F# community; they can be easily integrated into projects via NuGet and run on Linux, Mac and Windows.

The HTML Table type provider, developed by Colin Bull, is the most recent; it gives immediate typed access to data in tables on web pages.

After the talk Jon Skeet suggested a Protocol Buffers Type Provider:

Learning about F# Type Providers with @ptrelford at #nordevcon. Must try this with Protocol Buffers. (As @dsyme mentioned in 2012!)

— Jon Skeet (@jonskeet) February 28, 2014

For which it turns out Cameron Taggart already has a project called Froto.

If you are interested in creating your own Type Provider I’d recommend reading Michael Newton’s Type Provider tutorial. Michael will be running a free workshop on building Type Providers at the F#unctional Londoners meetup on May 1st.

Hands On Machine Learning workshop

This session gave an introduction to machine learning, using F#’s REPL and the CSV Type Provider to easily explore data on Titanic and predict survival outcomes:

Machine learning from disaster from Phillip Trelford

It was a great group and everyone managed to complete the task and produce good prediction results using decision tree learning in just 1.5 hours.
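To give a flavour of the exercise (this is not the workshop's actual F# code), a one-level decision tree, or decision stump, on a few invented Titanic-style rows looks like this:

```python
# A decision stump: split on a single attribute and predict the majority
# label of each branch. The (sex, survived) rows below are made up.
data = [("female", 1), ("female", 1), ("female", 0),
        ("male", 0), ("male", 0), ("male", 1), ("male", 0)]

def majority(rows):
    """Majority label of a set of (attribute, label) rows."""
    ones = sum(label for _, label in rows)
    return 1 if ones * 2 >= len(rows) else 0

def stump(rows, value):
    """Predict the majority label of the branch the value falls into."""
    branch = [r for r in rows if r[0] == value]
    return majority(branch)

print(stump(data, "female"))  # women mostly survived in this toy sample
print(stump(data, "male"))
```

A full decision tree just applies the same majority idea recursively, picking the most informative attribute at each level.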

Wrapping Up

Thanks to Paul Grenyer for organizing an excellent conference and giving me the opportunity to speak. Paul put together a really interesting programme which was extremely well-executed.

A++ would recommend :)

Categories: Programming

Get Up and CODE 43: Diet Plan Basics

Making the Complex Simple - John Sonmez - Sat, 03/01/2014 - 17:00

This is my first solo episode of Get Up and CODE.  Last episode was Iris’s last episode. It was harder than I thought to record a completely solo episode, but I did it and here it is. In this episode, I talk about how to figure out how many calories you burn in a day […]

The post Get Up and CODE 43: Diet Plan Basics appeared first on Simple Programmer.

Categories: Programming

Neo4j: Cypher – Finding directors who acted in their own movie

Mark Needham - Fri, 02/28/2014 - 23:57

I’ve been doing quite a few Intro to Neo4j sessions recently, and since they contain a lot of problems for the attendees to work on, I get to see how first-time users of Cypher actually use it.

A couple of hours in we want to write a query to find directors who acted in their own film based on the following model.


A common answer is the following:

MATCH (a)-[:ACTED_IN]->(m)<-[:DIRECTED]-(d)
WHERE a.name = d.name

We’re matching an actor ‘a’, finding the movie they acted in and then finding the director of that movie. We now have pairs of actors and directors which we filter down by comparing their ‘name’ property.

I haven’t written SQL for a while but if my memory serves me correctly comparing properties or attributes in this way is quite a common way to test for equality.

In a graph we don’t need to compare properties – what we actually want to check is if ‘a’ and ‘d’ are the same node:

MATCH (a)-[:ACTED_IN]->(m)<-[:DIRECTED]-(d)
WHERE a = d

We’ve simplified the query a bit, but we can actually go one better by binding the director to the same identifier as the actor, like so:

MATCH (a)-[:ACTED_IN]->(m)<-[:DIRECTED]-(a)

So now we’re matching an actor ‘a’, finding the movie they acted in and then finding the director if they happen to be the same person as ‘a’.

The code is now much simpler and more revealing of its intent too.
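The identity-versus-property distinction has a direct analogue in most languages. A hypothetical Python sketch (the Person class and names are made up, not part of the original post):

```python
# Two distinct objects can share a property value, but only one of them
# is the *same* node - the difference between a.name = d.name and a = d.
class Person:
    def __init__(self, name):
        self.name = name

keanu = Person("Keanu Reeves")
impostor = Person("Keanu Reeves")

print(keanu.name == impostor.name)  # property comparison: True
print(keanu is impostor)            # identity, like WHERE a = d: False
print(keanu is keanu)               # the same node: True
```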

Categories: Programming

Satya Nadella on Live and Work a Meaningful Life

There's a quote in Ferris Bueller's Day Off:

”Life moves pretty fast. If you don't stop and look around once in a while, you could miss it.”

It’s true.

Satya gets it.

Satya reminds us to individually think about our broader impact, our deeper meaning, and the significance of everything we do, even the little things.

Here is how Satya reminded us to focus on our significance and impact:

“I want to work in a place where everybody gets more meaning out of their work on an everyday basis.

We spend far too much time at work for it not to have a deeper meaning in your life.

The way we connect with that meaning is by knowing the work we do has broader implications, broader impact, outside of work.

The reality is every feature, everything you do, or every marketing program you do, or every sales program you do is going to have a broader impact.

I think that us reminding ourselves of that, and taking consideration from that, matters a lot.  And I think that's a gift that we have in this industry, in this company, and I think we should take full advantage of that.  Because when you look back, when it's all said and done, it's that meaning that you'll recount, it's not the specifics of what you did, and I think that's one of the perspectives that's important.”

My takeaway is that if you’re not making your work matter, to you and to others, you’re doing it wrong.

You Might Also Like

Microsoft Explained: Making Sense of the Microsoft Platform Story

Satya Nadella is the New Microsoft CEO

Satya Nadella is All About Customer Focus, Employee Engagement, and Changing the World

Satya Nadella on the Future is Software

Satya Nadella on Everyone Has to Be a Leader

Categories: Architecture, Programming

10 Years of Coding Horror

Coding Horror - Jeff Atwood - Fri, 02/28/2014 - 10:01

In 2007, I was offered $120,000 to buy this blog outright.

I was sorely tempted, because that's a lot of money. I had to think about it for a week. Ultimately I decided that my blog was an integral part of who I was, and who I eventually might become. How can you sell yourself, even for $120k?

I sometimes imagine how different my life would have been if I had taken that offer. Would Stack Overflow exist? Would Discourse? It's easy to look back now and say I made the right decision, but it was far less clear at the time.

One of my philosophies is to always pick the choice that scares you a little. The status quo, the path of least resistance, the everyday routine — that stuff is easy. Anyone can do that. But the right decisions, the decisions that challenge you, the ones that push you to evolve and grow and learn, are always a little scary.

I'm thinking about all this because this month marks the 10 year anniversary of Coding Horror. I am officially old school. I've been blogging for a full decade now. Just after the "wardrobe malfunction" Janet Jackson had on stage at Super Bowl XXXVIII in February 2004, I began with a reading list and a new year's resolution to write one blog entry every weekday. I was even able to keep that pace up for a few years!


The ten year mark is a time for change. As of today, I'm pleased to announce that Coding Horror is now proudly hosted on the Ghost blog platform. I've been a blog minimalist from the start, and finding a truly open source platform which reflects that minimalism and focus is incredibly refreshing. Along with the new design, you may also notice that comments are no longer present. Don't worry. I love comments. They'll all be back. This is only a temporary state, as there's another notable open source project I want to begin supporting here.

It is odd to meet developers that tell me they "grew up" with Coding Horror. But I guess that's what happens when you keep at something for long enough, given a modest amount of talent and sufficient resolve. You become recognized. Maybe even influential. Now, after 10 years, I am finally an overnight success. And: old.

So, yeah, it's fair to say that blogging quite literally changed my life. But I also found that as the audience grew, I felt more pressure to write deeply about topics that are truly worthy of everyone's time, your time, rather than frittering it away on talking head opinions on this week's news. So I wrote less. And when things got extra busy at Stack Exchange, and now at Discourse, I didn't write at all.

I used to tell people who asked me for advice about blogging that if they couldn't think about one interesting thing to write about every week, they weren't trying hard enough. The world is full of so many amazing things and incredible people. As Albert Einstein once said, there are two ways to live your life. One is as though nothing is a miracle. The other is as though everything is a miracle.


I wasn't trying hard enough. I had forgotten. I can't fully process all the things that are happening to me until I write about them. I have to be able to tell the story to understand it myself. My happiness only becomes real when I share it with all of you.

This is the philosophy that underlies Stack Overflow. This is the philosophy that underlies Discourse. These are all projects based on large scale, communal shared happiness. Love of learning. Love of teamwork. Love of community.


For the next decade of Coding Horror, I resolve to remember how miraculous that is.

Categories: Programming

Join us at the Game Developers Conference

Google Code Blog - Thu, 02/27/2014 - 18:41
By Greg Hartrell, Lead Product Manager

Cross-posted from the Android Developers Blog

When we’re not guiding a tiny bird across a landscape of pipes on our phones, we’re getting ready for our biggest-ever Developer Day at this year’s Game Developers Conference in San Francisco. On Tuesday 18 March, all the teams at Google dedicated to gaming will share their insights on the best ways to build games, grow audiences, engage players and make money. Some of the session highlights include:

  • Growth Hacking with Play Games
  • Making Money on Google Play: Best Practices in Monetization
  • Grow Your Game Revenue with AdMob
  • From Players to Customers: Tracking Revenue with Google Analytics
  • Build Games that Scale in the Cloud
  • From Box2D to Liquid Fun: Just Add Water-like Particles!
And there’s a lot more, so check out the full Google Developer Day schedule on the GDC website, where you can also buy tickets. We hope to see you there, but if you can’t make the trip, don’t worry; all the talks will be livestreamed on YouTube, starting at 10:00 am PDT (5:00 pm UTC).

Then from 19-21 March, meet the Google teams in person from AdMob, Analytics and Cloud at the Google Education Center in the Moscone Center’s South Hall (booth no. 218), and you could win a Nexus 7.

Greg Hartrell is Lead Product Manager on Google Play game services, devoted to helping developers make incredible games through Google Play. In his spare time, he enjoys jumping from platform to platform, boss battles and matching objects in threes.

Posted by Scott Knaster, Editor
Categories: Programming

Head To Higher Ground

Making the Complex Simple - John Sonmez - Thu, 02/27/2014 - 17:30

A little struggle and difficulty is not always bad. We can actually flip around negative circumstances and frustrations and use them to our advantage if we are willing to overcome challenges others are not. In this video, I talk about why you might want to consider heading to higher ground or to go through the […]

The post Head To Higher Ground appeared first on Simple Programmer.

Categories: Programming

Process Stats: Understanding How Your App Uses RAM

Android Developers Blog - Thu, 02/27/2014 - 03:48

Posted by Dianne Hackborn, Android framework team

Android 4.4 KitKat introduced a new system service called procstats that helps you better understand how your app is using the RAM resources on a device. Procstats makes it possible to see how your app is behaving over time — including how long it runs in the background and how much memory it uses during that time. It helps you quickly find inefficiencies and misbehaviors in your app that can affect how it performs, especially when running on low-RAM devices.

You can access procstats data using an adb shell command, but for convenience there is also a new Process Stats developer tool that provides a graphical front-end to that same data. You can find Process Stats in Settings > Developer options > Process Stats.

In this post we’ll first take a look at the Process Stats graphical tool, then dig into the details of the memory data behind it, how it's collected, and why it's so useful to you as you analyze your app.

Process Stats overview of memory used by background processes over time.

Looking at systemwide memory use and background processes

When you open Process Stats, you see a summary of systemwide memory conditions and details on how processes are using memory over time. The image at right gives you an example of what you might see on a typical device.

At the top of the screen we can see that:

  • We are looking at data collected over the last ~3.5 hours.
  • Currently the device’s RAM is in good shape ("Device memory is currently normal").
  • During that entire time the memory state has been good — this is shown by the green bar. If device memory was getting low, you would see yellow and red regions on the left of the bar representing the amount of total time with low memory.

Below the green bar, we can see an overview of the processes running in the background and the memory load they've put on the system:

  • The percentage numbers on the right indicate the amount of time each process has spent running during the total duration.
  • The blue bars indicate the relative computed memory load of each process. (The memory load is runtime*avg_pss, which we will go into more detail on later.)
  • Some apps may be listed multiple times, since what is being shown is processes (for example, Google Play services runs in two processes). The memory load of these apps is the sum of the load of their individual processes.
  • There are a few processes at the top that have all been running for 100% of the time, but with different weights because of their relative memory use.
Analyzing memory for specific processes

The example shows some interesting data: we have a Clock app with a higher memory weight than Google Keyboard, even though it ran for less than half the time. We can dig into the details of these processes just by tapping on them:

Process Stats memory details for Clock and Keyboard processes over the past 3.5 hours.

The details for these two processes reveal that:

  • The reason that Clock has been running at all is because it is being used as the current screen saver when the device is idle.
  • Even though the Clock process ran for less than half the time of the Keyboard, its RAM use was significantly larger (almost 3x), which is why its overall weight is larger.

Essentially, procstats provides a “memory use” gauge that's much like the storage use or data use gauges, showing how much RAM the apps running in the background are using. Unlike with storage or data, though, memory use is much harder to quantify and measure, and procstats uses some tricks to do so. To illustrate the complexity of measuring memory use, consider a related topic: task managers.
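To make the weight concrete, here is the runtime*avg_pss formula with invented numbers echoing the Clock versus Keyboard example (the real values come from procstats, not from this code):

```python
# Illustrative only: the computed memory load is runtime * avg_pss, so a
# short-lived but heavy process can outweigh a long-lived light one.
def memory_load(runtime_fraction, avg_pss_mb):
    """Relative memory load: fraction of time running times average PSS."""
    return runtime_fraction * avg_pss_mb

clock = memory_load(0.45, 30.0)    # ran < half the time, ~3x the RAM
keyboard = memory_load(1.0, 10.0)  # ran the whole time, much lighter

print(clock > keyboard)  # True: Clock's weight comes out larger
```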

Understanding task managers and their memory info

We’ve had a long history of task managers on Android. Android has always deeply supported multitasking, which means the geekier among us will tend to want some kind of UI for seeing and controlling this multitasking, like the traditional UI we are used to from the desktop. However, multitasking on Android is actually quite a bit more complicated and fundamentally different than on a traditional desktop operating system, as I previously covered in Multitasking the Android Way. This deeply impacts how we can show it to the user.

Multitasking and continuous process management

To get a feel for just how different process management is on Android, you can take a look at the output of an important system service, the activity manager, with adb shell dumpsys activity. The example below shows a snapshot of current application processes on Android 4.4, listing them from most important to least:

ACTIVITY MANAGER RUNNING PROCESSES (dumpsys activity processes)
Process LRU list (sorted by oom_adj, 22 total, non-act at 2, non-svc at 2):
  PERS #21: sys   F/ /P  trm: 0 23064:system/1000 (fixed)
  PERS #20: pers  F/ /P  trm: 0 (fixed)
  PERS #19: pers  F/ /P  trm: 0 23344:com.nuance.xt9.input/u0a77 (fixed)
  PERS #18: pers  F/ /P  trm: 0 (fixed)
  PERS #17: pers  F/ /P  trm: 0 (fixed)
  Proc # 3: fore  F/ /IB trm: 0 (service)<=Proc{23064:system/1000}
  Proc # 2: fore  F/ /IB trm: 0 (provider)<=Proc{}
  Proc # 0: fore  F/A/T  trm: 0 (top-activity)
  Proc # 4: vis   F/ /IF trm: 0 (service)<=Proc{23064:system/1000}
  Proc #14: prcp  F/ /IF trm: 0 (service)<=Proc{23064:system/1000}
  Proc # 1: home  B/ /HO trm: 0 (home)
  Proc #16: cch   B/ /CA trm: 0 (cch-act)
  Proc # 6: cch   B/ /CE trm: 0 (cch-empty)
  Proc # 5: cch   B/ /CE trm: 0 (cch-empty)
  Proc # 8: cch+2 B/ /CE trm: 0 (cch-empty)
  Proc # 7: cch+2 B/ /CE trm: 0 (cch-empty)
  Proc #10: cch+4 B/ /CE trm: 0 (cch-empty)
  Proc # 9: cch+4 B/ /CE trm: 0 (cch-empty)
  Proc #15: cch+6 B/ /S  trm: 0 (cch-started-services)
  Proc #13: cch+6 B/ /CE trm: 0 (cch-empty)
  Proc #12: cch+6 B/ /S  trm: 0 (cch-started-services)
  Proc #11: cch+6 B/ /CE trm: 0 (cch-empty)

Example output of dumpsys activity command, showing all processes currently running.

There are a few major groups of processes here — persistent system processes, the foreground processes, background processes, and finally cached processes — and the category of a process is extremely important for understanding its impact on the system.

At the same time, processes on this list change all of the time. For example, in the snapshot above we can see that “” is currently an important process, but that is because it is doing a background sync, something the user would not generally be aware of or want to manage.

Snapshotting per-process RAM use

The traditional use of a task manager is closely tied to RAM use, and Android provides a tool called meminfo for looking at a snapshot of current per-process RAM use. You can access it with the command adb shell dumpsys meminfo. Here's an example of the output.

Total PSS by OOM adjustment:
    31841 kB: Native
               13173 kB: zygote (pid 23001)
                4372 kB: surfaceflinger (pid 23000)
                3721 kB: mediaserver (pid 126)
                3317 kB: glgps (pid 22993)
                1656 kB: drmserver (pid 125)
                 995 kB: wpa_supplicant (pid 23148)
                 786 kB: netd (pid 121)
                 518 kB: sdcard (pid 132)
                 475 kB: vold (pid 119)
                 458 kB: keystore (pid 128)
                 448 kB: /init (pid 1)
                 412 kB: adbd (pid 134)
                 254 kB: ueventd (pid 108)
                 238 kB: dhcpcd (pid 10617)
                 229 kB: tf_daemon (pid 130)
                 200 kB: installd (pid 127)
                 185 kB: dumpsys (pid 14207)
                 144 kB: healthd (pid 117)
                 139 kB: debuggerd (pid 122)
                 121 kB: servicemanager (pid 118)
    48217 kB: System
               48217 kB: system (pid 23064)
    49095 kB: Persistent
               34012 kB: (pid 23163 / activities)
                7719 kB: (pid 23357)
                4676 kB: (pid 23371)
                2688 kB: com.nuance.xt9.input (pid 23344)
    24945 kB: Foreground
               24945 kB: (pid 24811 / activities)
    17136 kB: Visible
               14026 kB: (pid 23472)
                3110 kB: (pid 13976)
     6911 kB: Perceptible
                6911 kB: (pid 23298)
    14277 kB: A Services
               14277 kB: (pid 23513)
    26422 kB: Home
               26422 kB: (pid 23395 / activities)
    21798 kB: B Services
               16242 kB: (pid 23767)
                5556 kB: (pid 7738)
   145869 kB: Cached
               41588 kB: (pid 24689)
               21417 kB: (pid 23966 / activities)
               14463 kB: (pid 8644)
               14303 kB: (pid 9115)
               11014 kB: (pid 7716)
               10688 kB: (pid 13892)
               10240 kB: (pid 23338)
                9882 kB: (pid 5131)
                8807 kB: (pid 8937)
                3467 kB: (pid 8922)

Total RAM: 998096 kB
 Free RAM: 574945 kB (145869 cached pss + 393200 cached + 35876 free)
 Used RAM: 392334 kB (240642 used pss + 107196 buffers + 3856 shmem + 40640 slab)
 Lost RAM: 30817 kB
   Tuning: 64 (large 384), oom 122880 kB, restore limit 40960 kB (high-end-gfx)

Example output of dumpsys meminfo command, showing memory currently used by running processes.

We are now looking at the same processes as above, again organized by importance, but now with a focus on their impact on RAM use.

Usually when we measure RAM use in Android, we do this with Linux’s PSS (Proportional Set Size) metric. This is the amount of RAM actually mapped into the process, weighted by the amount it is shared across processes. So if there is a 4K page of RAM mapped into two processes, its PSS amount for each process would be 2K.

The nice thing about using PSS is that you can add up this value across all processes to determine the actual total RAM use. This characteristic is used at the end of the meminfo report to compute how much RAM is in use (which comes in part from all non-cached processes), versus how much is "free" (which includes cached processes).
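To make the weighting concrete, here is a small illustrative sketch (plain Python, not Android code) of how a PSS-style charge splits shared pages, and why the per-process values sum to the true total:

```python
# Illustrative sketch (not Android code): how PSS weights shared pages.
# Each page's cost is split evenly among the processes that map it, so
# summing PSS across all processes recovers the true total RAM in use.

PAGE_KB = 4

def pss_kb(process_pages, all_mappings):
    """PSS for one process: each of its pages counted as
    page_size / (number of processes sharing that page)."""
    total = 0.0
    for page in process_pages:
        sharers = sum(1 for pages in all_mappings.values() if page in pages)
        total += PAGE_KB / sharers
    return total

# Two processes share page 0x1000; each is charged 2 kB for it.
mappings = {
    "app_a": {0x1000, 0x2000},   # one shared page, one private page
    "app_b": {0x1000, 0x3000},
}
print(pss_kb(mappings["app_a"], mappings))          # 6.0 (2 kB shared + 4 kB private)
print(sum(pss_kb(p, mappings) for p in mappings.values()))  # 12.0 = 3 pages * 4 kB
```

The additivity in the last line is exactly what lets the meminfo report total up "used" versus "free" RAM across processes.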

Task-manager style memory info, showing a snapshot of memory used by running apps.

Task manager UI based on PSS snapshot

Given the information we have so far, we can imagine various ways to present this in a somewhat traditional task manager UI. In fact, the UI you see in Settings > Apps > Running is derived from this information. It shows all processes running services (“svc” adjustment in the LRU list) and on behalf of the system (the processes with a “<=Proc{23064:system/1000}” dependency), computing the PSS RAM for each of these and any other processes they have dependencies on.

The problem with visualizing memory use in this way is that it gives you the instantaneous state of the apps, without context over time. On Android, users don’t directly control the creation and removal of application processes — they may be kept for future use, removed when the system decides, or run in the background without the user explicitly launching them. So looking only at the instantaneous state of memory use, you would miss important information about what is actually going on over time.

For example, in our first look at the process state we see the process running for a sync, but when we collected the RAM use right after that it was no longer running in the background but just being kept around as an old cached process.

To address this problem, the new procstats tool continually monitors the state of all application processes over time, aggregating that information and collecting PSS samples from those processes while doing so. You can view the raw data being collected by procstats with the command adb shell dumpsys procstats.

Seeing memory use over time with procstats

Let’s now go back to procstats and take a look at the context it provides by showing memory use over time. We can use the command adb shell dumpsys procstats --hours 3 to output memory information collected over the last 3 hours. This is the same data as represented graphically in the first Process Stats example.

The output shows all of the processes that have run in the last 3 hours, sorted with the ones running the most first. (Processes in a cached state don’t count for the total time in this sort.) Like the initial graphical representation, we now clearly see a big group of processes that run all of the time, and then some that run occasionally — this includes the Magazines process, which we can now see ran for 3.6% of the time over the last 3 hours.

  * / u0a57:
           TOTAL: 100% (6.4MB-6.7MB-6.8MB/5.4MB-5.4MB-5.4MB over 21)
          Imp Fg: 100% (6.4MB-6.7MB-6.8MB/5.4MB-5.4MB-5.4MB over 21)
  * / u0a8:
           TOTAL: 100% (12MB-13MB-14MB/10MB-11MB-12MB over 211)
          Imp Fg: 0.11%
          Imp Bg: 0.83% (13MB-13MB-13MB/11MB-11MB-11MB over 1)
         Service: 99% (12MB-13MB-14MB/10MB-11MB-12MB over 210)
  * / u0a12:
           TOTAL: 100% (29MB-32MB-34MB/26MB-29MB-30MB over 21)
      Persistent: 100% (29MB-32MB-34MB/26MB-29MB-30MB over 21)
  * / 1001:
           TOTAL: 100% (6.5MB-7.1MB-7.6MB/5.4MB-5.9MB-6.4MB over 21)
      Persistent: 100% (6.5MB-7.1MB-7.6MB/5.4MB-5.9MB-6.4MB over 21)
  * com.nuance.xt9.input / u0a77:
           TOTAL: 100% (2.3MB-2.5MB-2.7MB/1.5MB-1.5MB-1.5MB over 21)
      Persistent: 100% (2.3MB-2.5MB-2.7MB/1.5MB-1.5MB-1.5MB over 21)
  * / 1027:
           TOTAL: 100% (4.2MB-4.5MB-4.6MB/3.2MB-3.2MB-3.3MB over 21)
      Persistent: 100% (4.2MB-4.5MB-4.6MB/3.2MB-3.2MB-3.3MB over 21)
  * / u0a8:
           TOTAL: 100% (13MB-13MB-14MB/10MB-11MB-11MB over 21)
          Imp Fg: 100% (13MB-13MB-14MB/10MB-11MB-11MB over 21)
  * system / 1000:
           TOTAL: 100% (42MB-46MB-56MB/39MB-42MB-48MB over 21)
      Persistent: 100% (42MB-46MB-56MB/39MB-42MB-48MB over 21)
  * / u0a35:
           TOTAL: 100% (16MB-16MB-16MB/14MB-14MB-14MB over 17)
         Service: 100% (16MB-16MB-16MB/14MB-14MB-14MB over 17)
  * / u0a13:
           TOTAL: 77% (25MB-26MB-27MB/22MB-23MB-24MB over 73)
             Top: 77% (25MB-26MB-27MB/22MB-23MB-24MB over 73)
          (Home): 23% (25MB-26MB-26MB/23MB-23MB-24MB over 12)
  * / u0a6:
           TOTAL: 48% (5.0MB-5.3MB-5.5MB/4.0MB-4.2MB-4.2MB over 11)
          Imp Fg: 0.00%
          Imp Bg: 0.00%
         Service: 48% (5.0MB-5.3MB-5.5MB/4.0MB-4.2MB-4.2MB over 11)
        Receiver: 0.00%
        (Cached): 22% (4.1MB-4.5MB-4.8MB/3.0MB-3.5MB-3.8MB over 8)
  * / u0a36:
           TOTAL: 42% (20MB-21MB-21MB/18MB-19MB-19MB over 8)
          Imp Fg: 42% (20MB-21MB-21MB/18MB-19MB-19MB over 8)
         Service: 0.00%
        Receiver: 0.01%
        (Cached): 58% (17MB-20MB-21MB/16MB-18MB-19MB over 14)
  * / 1000:
           TOTAL: 23% (19MB-22MB-28MB/15MB-19MB-24MB over 31)
             Top: 23% (19MB-22MB-28MB/15MB-19MB-24MB over 31)
      (Last Act): 77% (9.7MB-14MB-20MB/7.5MB-11MB-18MB over 8)
        (Cached): 0.02%
  * / u0a59:
           TOTAL: 3.6% (10MB-10MB-10MB/8.7MB-9.0MB-9.0MB over 6)
          Imp Bg: 0.03%
         Service: 3.6% (10MB-10MB-10MB/8.7MB-9.0MB-9.0MB over 6)
        (Cached): 17% (9.9MB-10MB-10MB/8.7MB-8.9MB-9.0MB over 5)
  * / u0a5:
           TOTAL: 1.4% (2.7MB-3.0MB-3.0MB/1.9MB-1.9MB-1.9MB over 7)
             Top: 1.2% (3.0MB-3.0MB-3.0MB/1.9MB-1.9MB-1.9MB over 6)
          Imp Fg: 0.19% (2.7MB-2.7MB-2.7MB/1.9MB-1.9MB-1.9MB over 1)
         Service: 0.00%
        (Cached): 15% (2.6MB-2.6MB-2.6MB/1.8MB-1.8MB-1.8MB over 1)
  * / u0a78:
           TOTAL: 1.3% (9.0MB-9.0MB-9.0MB/7.8MB-7.8MB-7.8MB over 1)
          Imp Bg: 1.0% (9.0MB-9.0MB-9.0MB/7.8MB-7.8MB-7.8MB over 1)
         Service: 0.27%
      Service Rs: 0.01%
        Receiver: 0.00%
        (Cached): 99% (9.1MB-9.4MB-9.7MB/7.7MB-7.9MB-8.1MB over 24)
  * / u0a8:
           TOTAL: 0.91% (9.2MB-9.2MB-9.2MB/7.6MB-7.6MB-7.6MB over 1)
          Imp Bg: 0.79% (9.2MB-9.2MB-9.2MB/7.6MB-7.6MB-7.6MB over 1)
         Service: 0.11%
        Receiver: 0.00%
        (Cached): 99% (8.2MB-9.4MB-10MB/6.5MB-7.6MB-8.1MB over 25)
  * / u0a44:
           TOTAL: 0.56%
          Imp Bg: 0.55%
         Service: 0.01%
        Receiver: 0.00%
        (Cached): 99% (11MB-13MB-14MB/10MB-12MB-13MB over 24)
  * / u0a70:
           TOTAL: 0.22%
          Imp Bg: 0.22%
         Service: 0.00%
        Receiver: 0.00%
        (Cached): 100% (38MB-40MB-41MB/36MB-38MB-39MB over 17)
  * / u0a39:
           TOTAL: 0.15%
          Imp Bg: 0.09%
         Service: 0.06%
        (Cached): 54% (13MB-14MB-14MB/12MB-12MB-13MB over 17)
  * / u0a62:
           TOTAL: 0.11%
          Imp Bg: 0.04%
         Service: 0.06%
        Receiver: 0.01%
        (Cached): 70% (7.7MB-10MB-11MB/6.4MB-9.0MB-9.3MB over 20)
  * / u0a24:
           TOTAL: 0.01%
        Receiver: 0.01%
        (Cached): 69% (8.1MB-8.4MB-8.6MB/7.0MB-7.1MB-7.1MB over 13)
  * / u0a19:
           TOTAL: 0.00%
        Receiver: 0.00%
        (Cached): 69% (2.7MB-3.2MB-3.4MB/1.8MB-2.0MB-2.2MB over 13)

Run time Stats:
  SOff/Norm: +1h43m29s710ms
  SOn /Norm: +1h37m14s290ms
      TOTAL: +3h20m44s0ms

          Start time: 2013-11-06 07:24:27
  Total elapsed time: +3h42m23s56ms (partial) chromeview

Example output of dumpsys procstats --hours 3 command, showing memory details for processes running in the background over the past ~3 hours.

The percentages tell you how much of the overall time each process has spent in various key states. The memory numbers tell you about memory samples in those states, as minPss-avgPss-maxPss / minUss-avgUss-maxUss. The procstats tool also has a number of command line options to control its output — use adb shell dumpsys procstats -h to see a list of the available options.
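The state lines above can also be pulled apart mechanically. The following is a hedged sketch (the regular expression is my own, not part of the platform tooling) that parses one procstats state line into its percentage and min/avg/max PSS and USS samples:

```python
import re

# Hedged sketch: parse one procstats state line of the form
#   "Service: 99% (12MB-13MB-14MB/10MB-11MB-12MB over 210)"
# into its percentage and its minPss-avgPss-maxPss / minUss-avgUss-maxUss samples.
LINE_RE = re.compile(
    r"(?P<state>[\w ()]+):\s+(?P<pct>[\d.]+)%"
    r"(?:\s+\((?P<pss>[\d.]+MB-[\d.]+MB-[\d.]+MB)/"
    r"(?P<uss>[\d.]+MB-[\d.]+MB-[\d.]+MB) over (?P<n>\d+)\))?"
)

def parse_state_line(line):
    m = LINE_RE.search(line)
    if not m:
        return None
    result = {"state": m.group("state").strip(), "percent": float(m.group("pct"))}
    if m.group("pss"):
        for key in ("pss", "uss"):
            lo, avg, hi = (float(v[:-2]) for v in m.group(key).split("-"))
            result[key] = {"min": lo, "avg": avg, "max": hi,
                           "samples": int(m.group("n"))}
    return result

info = parse_state_line("Service: 99% (12MB-13MB-14MB/10MB-11MB-12MB over 210)")
print(info["percent"], info["pss"]["avg"])  # 99.0 13.0
```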

Comparing this raw data from procstats with the visualization of its data we previously saw, we can see that it is showing only process run data from a subset of states: Imp Fg, Imp Bg, Service, Service Rs, and Receiver. These are the situations where the process is actively running in the background, for as long as it needs to complete the work it is doing. In terms of device memory use, these are the process states that tend to cause the most trouble: apps running in the background taking RAM from other things.

Getting started with procstats

We have already found the new procstats tool to be invaluable in better understanding the overall memory behavior of Android systems, and it has been a key part of the Project Svelte effort in Android 4.4.

As you develop your own applications, be sure to use procstats and the other tools mentioned here to help understand how your own app is behaving, especially how much it runs in the background and how much RAM it uses during that time.

More information about how to analyze and debug RAM use on Android is available on the developer page Investigating Your RAM Usage.

Join the discussion on

+Android Developers
Categories: Programming

Issues with sync and constraints

Eric.Weblog() - Eric Sink - Wed, 02/26/2014 - 19:00

(This entry is part of a series. The audience: SQL Server developers. The topic: SQLite on mobile devices.)

Fail Fast

Think of a bug as having two parts:

  1. The incorrect code

  2. The visible symptom

The worst bugs are the ones where these two parts are separated.

For example, consider the following function in C:

void crash_now(void)
{
    char* p = NULL;
    *p = 5;
}

This crash will be easy to find and fix, because the incorrect code is very close to the point where the crash is going to occur.

In contrast, the following code is likely going to waste more time:

int count_decimal_digits(int n)
{
    char* p = malloc(64);
    sprintf(p, "%d", n);
    return strlen(p);
}
One of the several bugs in this function is a memory leak. Whatever symptom arises from this leak will almost certainly occur much later, making it much more difficult to realize that the incorrect code is right here in this function.

In 1992 I was working at Spyglass (before we joined the browser wars, when our focus was on scientific data visualization tools). We had a product named Spyglass Format which had a bug involving our failure to properly dispose of a handle we got from the Mac palette manager. The visible symptom of that bug was an intermittent, unreproducible crash. Bugs like that are so hard to find, but this one was unusually difficult, because the crash always happened in a different app, not in Spyglass Format. :-)

Of course, the affected user started by calling the vendor of the other product (which happened to be Apple) about this problem. And of course, Apple was unable to help them. And of course, when they called us to claim that "it seems like MPW only crashes when Spyglass Format is also running", we were initially rather skeptical. The whole thing took months to figure out.

Sync and Constraints

Let's talk about situations where you are using SQLite on a mobile device and synchronizing with SQL Server on the backend.

Compared to an app which does all database operations over REST calls, the advantages of this "replicate and sync" architecture include offline support and much better performance. However, one of the potential disadvantages of this approach is that it can move the symptom of a constraint violation bug far away from the incorrect code that caused it.

In your SQL Server database on the backend, you have constraints which are designed to protect the integrity of your data.

Suppose you have an app which is trying to INSERT an invalid row directly, such as through ADO.NET. The constraint violation will cause an error right away. This is good.

However, in a mobile app which uses "replicate and sync", changes to the data happen in two steps:

  1. The row gets INSERTed into a SQLite database on the mobile device.

  2. Later, the next time that device syncs with the backend, that row will get INSERTed into the actual SQL Server database.

If the new row is invalid (because of, say, a bug in the mobile app), we want the failure to happen when we try the INSERT into SQLite on the mobile device, not [potentially much] later when the sync happens.

Or to put this another way: Any transaction successfully committed to the SQLite database on the mobile device should also succeed when that change is synchronized to the SQL Server backend.


If SQLite always behaved exactly like SQL Server, this would not be an issue. But there are differences, and that's what this blog series is all about. Several of the entries later in this series will deal with specific cases where SQLite might accept something that SQL Server would not. In a "replicate and sync" architecture, all of these cases deserve a bit of extra attention.
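One way to narrow the gap is to mirror the backend's constraints in the SQLite schema on the device, so a bad row fails at the local INSERT rather than at the next sync. A sketch using Python's sqlite3 module, with made-up table and column names purely for illustration:

```python
import sqlite3

# Hedged sketch: mirroring a backend constraint on the device so a bad
# row fails fast at the local INSERT instead of at the next sync.
# The table and column names here are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        quantity INTEGER NOT NULL CHECK (quantity > 0),
        -- SQLite's dynamic typing would happily store non-numeric text here
        -- without the CHECK; SQL Server would reject it at sync time instead.
        total REAL NOT NULL CHECK (typeof(total) = 'real')
    )
""")

conn.execute("INSERT INTO orders (quantity, total) VALUES (2, 9.99)")  # accepted

try:
    conn.execute("INSERT INTO orders (quantity, total) VALUES (0, 9.99)")
except sqlite3.IntegrityError as e:
    print("rejected on the device:", e)
```

With the CHECK in place, the failure happens on the mobile device, close to the incorrect code, which is exactly the fail-fast property the article argues for.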


On quality, beauty and elegance. Parting words of a fellow Xebian

Xebia Blog - Wed, 02/26/2014 - 17:27

This week we said goodbye to a long-time colleague, Luuk Dijkhuis, senior consultant with a history in multiple Xebia units. He agreed to let me share his parting speech with you, here goes:


My dear Xebians, thank you for your kind words. Now, I could say, “thank you, it’s been six nice years, bye bye”, but you deserve more than that. I want to talk a bit about one of our key values. I will keep it short, don’t worry.

So. Quality without compromise. What kind of nonsense is that? Quality is always a compromise, or you will never get anything live, will you? So what ARE we on about?

Like the poet John Keats said, “beauty is truth, truth beauty, that’s all you know on earth and all you need to know”. He said it specifically about the concentrated and timeless austerity of a Grecian urn, but in general there is something to that combination of beauty and truth that resonates.

As most of you know, I have started out as a musician, and I have always had a close relationship to beauty and aesthetics in general. If you want to produce beautiful things or sounds then you must try to see beauty everywhere, that is to say, you must open your eyes to absorb all kinds of it, in order to be able later to produce it yourself. And indeed when you do, it does seem like there is a relationship between truth and beauty. Although the well versed cynic will always be ready to point out some counter examples.

The famous scientist Paul Dirac had a special thing for beauty; he was convinced that mathematics could only be correct when it is beautiful. He said: “What makes the theory of relativity so acceptable to physicists is its great mathematical beauty. This is a quality which cannot be defined, any more than beauty in art can be defined, but which people who study mathematics usually have no difficulty in appreciating.”

And it’s the same here in our trade, be it software architecture, or process, or actual code: this notion of beauty, of elegance let’s say, plays an important part in how we do things right. A well known quote of Edsger Dijkstra is “Elegance is not a dispensable luxury but a quality that decides between success and failure”. There you are. You guys all know that, and not only do you know it, you breathe it, you live it. It’s not the actual “Quality” itself that is without compromise, it’s all about the relentless pursuit of it. In our branch, actual quality is obviously contextual, it is in the end all about “fit for purpose”, but elegance in its creation is what makes it stand out and shine.

It was an absolutely exhilarating experience in 2007 to suddenly be plunged into a community that had that kind of attitude towards the things they were doing, and that first sense of “wow, this is great stuff” has never left me. I am truly proud to have been a part of you, of this, of what I have come to see as my extended family of Xebians, with all their crazy quirks and oddities. Hey, I never said that beauty is about being normal :-)

But now it is time for me to leave you all, not because I have had enough of you, but because there are other things in my life that need to be tended to now. So I say: “goodbye, see you around”, not “farewell”, and I know you will keep that spark going, that search for elegance. It’s only as a community that you can pull this off, so please, despite all that splitting off of Business Units stuff, PLEASE keep doing things right together. I will miss you. Thank you.


Satya Nadella on Everyone Has to Be a Leader

Satya Nadella, the new CEO for Microsoft, is all about employee engagement and employee empowerment.

Here is how Satya reminded us that we all need to be a leader:

“We express that core identity, being the company that allows every individual to be more empowered and get more out of every moment of their lives as things get more digital.
I want each of us to give ourselves permission to be able to move things forward.
Each of us sometimes overestimate the power others have to do things vs. our own ability to make things happen.
Everyone in the company has to be a leader.”

Here is a great video that a colleague sent me on how to embed the capacity for greatness in the people and practices of an organization.

Video:  Greatness, by David Marquet

If you see a problem, fix it. 

If you see an opportunity, take it.

Don’t wait for somebody else to do it.

You Might Also Like

Satya Nadella is the New Microsoft CEO

Satya Nadella is All About Customer Focus, Employee Engagement, and Changing the World

Satya Nadella on the Future is Software

Categories: Architecture, Programming

Welcome OpenID Connect

Google Code Blog - Wed, 02/26/2014 - 16:00
Author PhotoBy Adam Dawes, Product Manager, Google Accounts Team

Improving security while making it easier for users to sign in is the perennial challenge we face in the authentication trade. Federated sign-in has long held this promise but to be successful, it needs to be simple for users to understand and easy for developers to deploy. Today, the OpenID Foundation announced that the OpenID Connect specification has been ratified and is now available as an open standard for the world. We think it is going to make a big difference in improving people’s login experience all over the Internet. This new authentication standard is layered on top of OAuth 2.0 so that all the technology that sites already use to connect to other sites' APIs can also be reused for authentication. And like OAuth 2.0, OpenID Connect provides strong protections for users by only sharing account information that users explicitly tell us to.
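For a feel of what the standard hands to a client: an OpenID Connect ID token is a JWT, i.e. three base64url segments (header.payload.signature). The sketch below, using a made-up unsigned token, decodes only the claims payload; a real client must also verify the signature and the iss/aud/exp claims, which this sketch deliberately skips.

```python
import base64, json

# Hedged sketch: decode the claims payload of an OpenID Connect ID token
# (a JWT). A real client must also verify the signature and check the
# iss/aud/exp claims; this only shows the wire format.

def b64url_decode(segment):
    # base64url omits padding; restore it before decoding.
    segment += "=" * (-len(segment) % 4)
    return base64.urlsafe_b64decode(segment)

def id_token_claims(id_token):
    header, payload, signature = id_token.split(".")
    return json.loads(b64url_decode(payload))

# Build a fake (unsigned, alg "none") token purely to exercise the decoder.
claims = {"iss": "https://accounts.example.com", "sub": "1234567890",
          "aud": "my-client-id"}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
fake_token = "eyJhbGciOiJub25lIn0." + payload + "."
print(id_token_claims(fake_token)["sub"])  # 1234567890
```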

Open ID connect logo
We’re putting our weight behind this new standard, providing formal support from its launch as well as building it into Google+ Sign-In. And to keep things as simple as we can for developers, we’re also going to consolidate all our federated sign-in support onto the OpenID Connect standard. This means that we will deprecate support for our older federated sign-in protocols including OpenID 2.0 on April 20, 2015, and our early version of OAuth 2.0 for Login, including the userinfo scopes and endpoint, on September 1, 2014 (see migration timetable for full details).

The easiest way to take advantage of our support for OpenID Connect is to use Google+ Sign-In, which provides easy-to-integrate libraries on the most popular platforms. Google+ Sign-In provides not only OpenID Connect sign-in but also other great features to give your app deeper integration with Google like over-the-air installs, cross-device sign-on, analytics as well as powerful social features for users who have a Google+ profile. You can still hand roll your integration to Google using the OpenID Connect protocol if you prefer, but you’ll miss out on these features. Please see our migration guide to get started moving to Google+ Sign-In and OpenID Connect.

Adam Dawes is a Product Manager on the Google Accounts Team where he is working to make it easy for users to share their data while maintaining full control over it. Outside the Googleplex, Adam enjoys exploring the outdoors with his family.

Posted by Scott Knaster, Editor
Categories: Programming

Java 8: Lambda Expressions vs Auto Closeable

Mark Needham - Wed, 02/26/2014 - 08:32

If you used earlier versions of Neo4j via its Java API with Java 6 you probably have code similar to the following to ensure write operations happen within a transaction:

public class StylesOfTx {
    public static void main( String[] args ) throws IOException {
        String path = "/tmp/tx-style-test";
        FileUtils.deleteRecursively( new File( path ) );
        GraphDatabaseService db = new GraphDatabaseFactory().newEmbeddedDatabase( path );

        Transaction tx = db.beginTx();
        try {
            Node node = db.createNode();
            tx.success();
        } finally {
            tx.finish();
        }
    }
}

In Neo4j 2.0 Transaction started extending AutoCloseable which meant that you could use ‘try with resources’ and the ‘close’ method would be automatically called when the block finished:

public class StylesOfTx {
    public static void main( String[] args ) throws IOException {
        String path = "/tmp/tx-style-test";
        FileUtils.deleteRecursively( new File( path ) );
        GraphDatabaseService db = new GraphDatabaseFactory().newEmbeddedDatabase( path );

        try ( Transaction tx = db.beginTx() ) {
            Node node = db.createNode();
            tx.success();
        }
    }
}

This works quite well although it’s still possible to have transactions hanging around in an application when people don’t use this syntax – the old style is still permissible.

In Venkat Subramaniam’s Java 8 book he suggests an alternative approach where we use a lambda based approach:

public class StylesOfTx {
    public static void main( String[] args ) throws IOException {
        String path = "/tmp/tx-style-test";
        FileUtils.deleteRecursively( new File( path ) );
        GraphDatabaseService db = new GraphDatabaseFactory().newEmbeddedDatabase( path );

        Db.withinTransaction( db, neo4jDb -> {
            Node node = neo4jDb.createNode();
        } );
    }

    static class Db {
        public static void withinTransaction( GraphDatabaseService db, Consumer<GraphDatabaseService> fn ) {
            try ( Transaction tx = db.beginTx() ) {
                fn.accept( db );
                tx.success();
            }
        }
    }
}

The ‘withinTransaction’ function would actually go on GraphDatabaseService or similar rather than being on that Db class, but it was easier to put it there for this example.

A disadvantage of this style is that you don’t have explicit control over the transaction for handling the failure case – it’s assumed that if ‘tx.success()’ isn’t called then the transaction failed and it’s rolled back. I’m not sure what % of use cases actually need such fine grained control though.

Brian Hurt refers to this as the ‘hole in the middle’ pattern, and I imagine we’ll start seeing more code of this ilk once Java 8 is released and becomes more widely used.
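The same shape translates to other languages. Here is a hedged Python analog (the names are mine, not from any library) in which the fixed commit/rollback scaffolding surrounds a caller-supplied function, so callers can no longer forget to finish the transaction:

```python
import sqlite3

# Hedged sketch of the "hole in the middle" pattern: the fixed setup and
# commit/rollback surround a caller-supplied function. The caller fills
# the hole; the wrapper guarantees the transaction is always finished.
def within_transaction(conn, fn):
    try:
        result = fn(conn)
        conn.commit()
        return result
    except Exception:
        conn.rollback()
        raise

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT)")
conn.commit()

within_transaction(conn, lambda c: c.execute(
    "INSERT INTO people (name) VALUES (?)", ("Ada",)))

print(conn.execute("SELECT count(*) FROM people").fetchone()[0])  # 1
```

As in the Java version, the trade-off is the same: the wrapper is harder to misuse, at the cost of fine-grained control over the failure path.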

Categories: Programming

F# Eye for the VB Guy

Phil Trelford's Array - Wed, 02/26/2014 - 07:35

Are you a VB developer curious about functional-first programming? F# is a statically typed language built into Visual Studio. It is a multi-paradigm language with both functional and object-oriented constructs.

F# has powerful type inference which reduces the amount of typing you need to do without reducing performance or correctness.

F# projects are easily referenced from VB and vice versa. Like VB, F# makes minimal use of curly braces, and for many operations the syntax will feel quite familiar.

Here’s my cut-out-and-keep guide to common operations in both languages:

Declaring values

VB.Net:

' Fully qualified
Dim greeting As String = "Hello"
' Type inferred
Dim greeting = "Hello"

F#:

// Fully qualified
let greeting : string = "Hello"
// Type inferred
let greeting = "Hello"


Declaring functions

VB.Net:

Sub Print(message As String)
End Sub

Function Add _
  (a As Integer, b As Integer) _
  As Integer
  Return a + b
End Function

F#:

let print( message: string ) =

// Return type is inferred
let add(a:int, b:int) = a + b



For loops

VB.Net:

For i = 1 To 10

For Each c In "Hello"

F#:

for i = 1 to 10 do

for c in "Hello" do



Conditional expressions

VB.Net:

Dim ageGroup As String
If age < 18 Then
  ageGroup = "Junior"
Else
  ageGroup = "Senior"
End If

F#:

let ageGroup =
  if age < 18 then "Junior"
  else "Senior"

Pattern Matching

VB.Net:

' Score Scrabble letter
Select Case c
  Case "A", "E", "I", "L", "N", _
       "O", "R", "S", "T", "U"
    Return 1
  Case "D", "G"
    Return 2
  Case "B", "C", "M", "P"
    Return 3
  Case "F", "H", "V", "W", "Y"
    Return 4
  Case "K"
    Return 5
  Case "J", "X"
    Return 8
  Case "Q", "Z"
    Return 10
  Case Else
    Throw New InvalidOperationException()
End Select

F#:

// Score scrabble letter
match letter with
| 'A' | 'E' | 'I' | 'L' | 'N'
| 'O' | 'R' | 'S' | 'T' | 'U' -> 1
| 'D' | 'G' -> 2
| 'B' | 'C' | 'M' | 'P' -> 3
| 'F' | 'H' | 'V' | 'W' | 'Y' -> 4
| 'K' -> 5
| 'J' | 'X' -> 8
| 'Q' | 'Z' -> 10
| _ -> invalidOp ""



Exception handling

VB.Net:

Dim i As Integer = 5
Try
  Throw New ArgumentException()
Catch e As OverflowException _
      When i = 5
  Console.WriteLine("First handler")
Catch e As ArgumentException _
      When i = 4
  Console.WriteLine("Second handler")
Catch When i = 5
  Console.WriteLine("Third handler")
End Try

F#:

let i = 5
try
  raise (ArgumentException())
with
| :? OverflowException when i = 5 ->
  Console.WriteLine("First handler")
| :? ArgumentException when i = 4 ->
  Console.WriteLine("Second handler")
| _ when i = 5 ->
  Console.WriteLine("Third handler")



Modules

VB.Net:

Module Math
  ' Raise to integer power
  Function Pown( _
    x As Double, y As Integer)
    Dim result = 1.0
    For i = 1 To y
      result = result * x
    Next
    Return result
  End Function
End Module

F#:

module Maths =
  // Raise to integer power
  let pown (x:float, y:int) =
    let mutable result = 1.0
    for i = 1 to y do
      result <- result * x
    result



Classes

VB.Net:

' Immutable class
Public Class Person
  Private ReadOnly myName As String

  Public Sub New(name As String)
    myName = name
  End Sub

  ReadOnly Property Name() As String
    Get
      Return myName
    End Get
  End Property
End Class

' Inheritance
Public Class MyWindow
  Inherits Window
End Class

F#:

// Immutable class
type Person (name : string) =
  member my.Name = name

// Inheritance
type MyWindow() =
  inherit Window()



Interested in learning more? Give F# a try in Visual Studio with the built-in F# Tutorial project or in your browser with

Categories: Programming