

Methods & Tools


Programming

Free ebook: Building Cloud Apps with Microsoft Azure

ScottGu's Blog - Scott Guthrie - Sun, 08/03/2014 - 03:20

Last week MS Press published a free ebook based on the Building Real-World Apps using Azure talks I gave at the NDC and TechEd conferences.  The talks and the book walk through a patterns-based approach to building real-world cloud solutions, and help make it easier to understand how to be successful with cloud development.

Videos of the Talks

You can watch a video recording of the talks I gave here:

 Part 1: Building Real World Cloud Apps with Azure

 Part 2: Building Real World Cloud Apps with Azure

eBook Downloads

You can now download a completely free PDF, Mobi or ePub version of the ebook based on the talks using the links below:

Download the PDF (6.35 MB)  

Download the EPUB file (12.3 MB)  

Download the Mobi for Kindle file (22.7 MB)

Hope this helps,

Scott

Categories: Architecture, Programming

Code Golf

Phil Trelford's Array - Sat, 08/02/2014 - 19:22

Last month Grant Crofton popped down from Leeds to the F#unctional Londoners meetup at Skills Matter to run a fun hands-on code golf session. The idea of code golf is to implement a specific algorithm in the fewest possible characters. This is not the kind of thing you should be doing in enterprise code, but it is fun, and an interesting way of exploring features in a programming language.

On the night we attempted condensed versions of FizzBuzz and 99 bottles of beer, with Ross and me winning the first challenge and Simon & Adrian winning the second.

[Scoreboards: FizzBuzz scores and 99 Bottles scores]

Thanks again to Grant for a really fun session.

F# FizzBuzz

A while back I strove to squeeze an F# implementation of FizzBuzz into a tweet, and with white space removed it weighs in at 104 characters (excluding line breaks):

for n=1 to 100 do 
 printfn"%s"
  (match n%3,n%5 with 0,0->"FizzBuzz"|0,_->"Fizz"|_,0->"Buzz"|_,_->string n)

The implementation, although quite clear, requires a fair number of characters for the pattern matching portion.

After some head scratching we came up with the idea of using a lookup to select the string to display:

N % 3   N % 5   Index   Output
0       0       0       "FizzBuzz"
>0      0       1       "Buzz"
0       >0      2       "Fizz"
>0      >0      3       N

This took the implementation down to 89 characters (without line breaks):

for i=1 to 100 do
 printfn"%s"["FizzBuzz";"Buzz";"Fizz";string i].[sign(i%5)*2+sign(i%3)]

Another trick is to abuse the sign function to get 1 if the result of the modulus is above 0, and 0 otherwise.

The lookup trick can be used in other languages, and here are a few examples, just for fun.
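As a warm-up, here is roughly what the trick looks like in JavaScript (my own sketch, not one of the entries from the night), relying on booleans coercing to 0 and 1 under addition:

// lookup-based FizzBuzz: index = (divisible by 3) + 2 * (divisible by 5)
for (var i = 1; i <= 100; i++)
  console.log([i, "Fizz", "Buzz", "FizzBuzz"][(i % 3 == 0) + 2 * (i % 5 == 0)]);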

VB.Net FizzBuzz

VB.Net has a reputation for being a little verbose, but using the lookup trick it was possible to get it down to 96 characters (excluding line breaks):

For i=1 To 100:
Console.WriteLine({i,"Fizz","Buzz","FizzBuzz"}(-(i Mod 3=0)-(i Mod 5=0)*2))
:Next

In VB.Net true values translate to –1 and false to 0. This allowed me to simply negate the result of i Mod N = 0 to compute an index.

Python FizzBuzz

Using a similar trick in Python, where true translates to 1 and false to 0, I was able to get to a very respectable 79 characters (excluding line breaks):

for x in range(1,101):
 print([x,"Fizz","Buzz","FizzBuzz"][(x%3==0)+2*(x%5==0)])

Python’s simple print function also helped to keep the character count down.

Amanda FizzBuzz

Amanda is a variant of David Turner’s quintessential functional programming language Miranda. Amanda runs on Windows, and is used for teaching FP at some universities.

Using a list comprehension it was possible to squeeze it into a mere 67 characters:

[["FizzBuzz","Buzz","Fizz",itoa x]!(min[1,x%3]+min[1,x%5]*2)|x<-[1..100]]

Note: this is cheating a little as we are not explicitly writing to the console.

APL FizzBuzz

APL is a very mature language, dating back to the 1960s, and is still used commercially today. It also wins hands down in code golf with just 54 characters:

⍪{'FizzBuzz' 'Buzz' 'Fizz'⍵[1+(2×1⌊5|⍵)+1⌊3|⍵]}¨⍳100

APL is particularly strong at matrix processing and provides single character representations for common operations:

Notation   Name              Meaning
⍳B         Index generator   Creates a list from 1 to B
¨          Each              For each loop
⍪          Table             Produces a one column matrix
⌊B         Floor             Greatest integer less than or equal to B

Try APL in your browser now.

Challenge

Can you beat the APL implementation of FizzBuzz?

Have fun!

Categories: Programming

Playing around with Yo

Xebia Blog - Fri, 08/01/2014 - 13:09

Yo has been in the news quite a bit lately, mainly because it received a lot of investment, which surprised and shocked some people because it all seems too simple. Yo only allows you to send a Yo notification to other users. However, it has a lot of potential to become big while staying that simple.


After reading Why A Stupid App Like Yo May Have Billion-Dollar Platform Potential a few days ago I felt it was time to play around a bit with Yo and its API.

I came up with 3 very simple use cases that should be easy to solve with Yo:

  • Get a Yo when it's time to have lunch when I'm at work
  • Get a Yo when I forgot to check out from my parking App
  • Get a Yo when a new blog post is published here on the Xebia Blog

Time to register a couple of Yo developer usernames. The Yo API is, just like Yo itself, very simple. You register a username from which you want to receive notifications at http://yoapi.justyo.co, after which you'll receive an API token for that username. Once you have that, people can subscribe to that username with the Yo app. Then you can send a Yo to either all subscribers or to a single subscriber with a simple POST request and the token. All this is explained in more detail at https://medium.com/@YoAppStatus/e7f2f0ec5c3c.

Let's start with our lunch notifications. I created the TIME2LUNCH username for this. I want my notifications at 12:00pm, since that's the time I go to lunch, so all I need is some service that sends the POST request every day at 12:00pm (for now I don't care about getting one on the weekend as well, and I only care about my own time zone, Central European Time). Using NodeJS, it's just a single line of code to do such a request:

require('request')
  .post('http://api.justyo.co/yoall/', 
    {
      form: {api_token: 'my_secret_api_token'}
    }
  );

Now we need to have a scheduler that executes it every day at 12:00pm. Luckily Heroku has a scheduler that can do exactly that:

[Screenshot: Heroku scheduler job]

So after deploying our JavaScript file with its single line of code and creating our scheduled job, we will receive our daily Yo from TIME2LUNCH. Not bad for a first attempt.

Usually my co-workers will remind me when it's time to go to lunch so let's do something that's actually a bit less useless.

To park my car near the office I always use the Parkmobile service. At the end of the day I have to check out to avoid paying too much. Unfortunately it's an easy thing to forget. Parkmobile knows this and can send SMS alerts at a specific time or after parking a certain number of hours. Unfortunately they charge €0.25 per SMS. They also send e-mails, but those are easier to miss. It would be nice to get a Yo instead, for free of course.

What we need is to send the Yo POST request each time we receive a Parkmobile e-mail. It sounds like we might be able to use IFTTT (if this then that) to accomplish this. When browsing the channels and recipes on IFTTT I saw that they already support Yo as a channel, so I thought I was going to be done fast. Unfortunately it's only possible to use Yo as a trigger (if Yo then that) and not as an action (if this then Yo), so we need another solution here. I couldn't find a way to send a cURL request directly from IFTTT, but when Googling for a solution I found a webhook project: https://github.com/captn3m0/ifttt-webhook. The ifttt-webhook works by posing as a WordPress site, which is something IFTTT can use as an action. It then allows us to send a POST request to a specific URL, though not exactly the POST request that the Yo API accepts. But we already wrote some NodeJS code to send a Yo request, so I'm sure we can add some code that accepts a request from the ifttt-webhook and passes it on in a form that Yo does understand.

If we follow the instructions on the Github page and set our username to our Yo username and use our API token as password, then the webhook will send a POST request with a JSON body that looks something like this:

{ user: 'MYUSERNAME', pass: 'ab1234-1234-abcd-1234-abcd1234abcd', title: '' }

We can handle that in NodeJS like this:

var express = require('express');
var bodyParser = require('body-parser');
var app = express();
var request = require('request');

app.use(bodyParser.json());
app.post('/api/yo', function (req, res) {
  // The ifttt-webhook posts { user, pass, title }; we reuse user as the
  // Yo username and pass as the Yo API token.
  var user = req.body.user;
  var apiToken = req.body.pass;
  request.post('http://api.justyo.co/yo/',
    {
      form: {
        api_token: apiToken,
        username: user
      }
    });
  res.end(); // acknowledge the webhook so the request doesn't hang
});

var port = Number(process.env.PORT || 5000);
app.listen(port, function() {
  console.log('Listening on ' + port);
});

This is just a simple express web server that listens for POST calls on /api/yo and then uses the user and pass fields from the body to send a POST request to the Yo API.

This is deployed at http://youser.herokuapp.com/ so everyone can use it as an IFTTT to Yo action.
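To try the endpoint without going through IFTTT, you can post the same JSON shape to it yourself; a quick sketch with a hypothetical username and token (note that this sends a real Yo):

// manual test of the webhook endpoint
require('request').post('http://youser.herokuapp.com/api/yo', {
  json: {user: 'MYUSERNAME', pass: 'my_secret_api_token', title: ''}
});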

We can now create our IFTTT recipe. Creating the 'this' step is easy: I receive the e-mails from Parkmobile in my Gmail, and their e-mail address is norepy@parkmobile.com, so the rule becomes to trigger each time I receive an e-mail from them. Then in the 'that' step I activate the WordPress channel with the Yo username and API token, and in the body I set the http://youser.herokuapp.com/api/yo URL.

Here is the recipe:

[Screenshot: the IFTTT recipe]

The last use case was to send a Yo each time a new blog post is published on this blog. For that I registered the XEBIABLOG username (so make sure to subscribe to that in your Yo app if you want to get Yo'd for each new blog post as well).

Since this blog has an RSS feed, I figured I could poll that once in a while to check for new posts. We also already used the Heroku scheduler, so we might as well use that again. I found a little node library called feed-read that makes reading RSS feeds easy. So here is our little app that runs every hour:

var feed = require("feed-read");
var request = require('request');
var ONE_HOUR = 60 * 60 * 1000;

feed("http://blog.xebia.com/feed/", function(err, articles) {
  if (err) throw err;

  var lastArticle = articles[0];
  if ((new Date() - lastArticle.published) < ONE_HOUR) {
    console.log('Sending Yo for ' + lastArticle.title);
    request.post('http://api.justyo.co/yoall/',
      {
        form: {
          api_token: 'my_secret_token'
        }
      });
  }
});

We now have completed our 3 little use cases. Not the most useful things, but nice nonetheless. Looking back on them, we can imagine a couple of improvements. For example, for TIME2LUNCH it would be possible to make a little service where people can register and set the time zone in which they want to receive their notification. We could create a little database that stores Yo usernames and time zones. But at this moment it's not possible to verify that USERX really is USERX. Yo doesn't support third-party authentication like Facebook and Twitter have with OAuth. Perhaps that's something they will add in the future to make the platform more usable for user-specific notifications.

Learn How UX Design can Make Your App More Successful

Android Developers Blog - Fri, 08/01/2014 - 02:39

By Nazmul Idris, a Developer Advocate at Google who's passionate about Android and UX design

As a mobile developer, how do you create 5-star apps that your users will not just download, but love to use every single day? How do you get your app noticed, and how do you drive engagement? One way is to focus on excellence in design — from visual and interaction design to user research, in other words: UX design.

If you’re new to the world of UX design but want to embrace it to improve your apps, we've created a new online course just for you. The UX Design for Mobile Developers course teaches you how to put your designer hat on, in addition to your developer hat, as you think about your apps' ideal user and how to meet their needs.

The course is divided into a series of lessons, each of which gives you practical takeaways that you can apply immediately to start seeing the benefits of good UX design.

Without jargon or buzzwords, the course teaches you where to focus your attention to bring in new users, keep existing users engaged, and increase your app's ratings. You'll learn how to optimize your app as a whole rather than just its login/signup forms, and how to use low-resolution wireframing.

After you take the course, you'll "level up" from being an excellent developer to becoming an excellent design-minded developer.

Check out the video below to get a taste of what the course is like, and click through this short deck for an overview of the learning plan.

The full course materials — all the videos, quizzes, and forums — are available for free for all students by selecting “View Courseware”. Personalized ongoing feedback and guidance from Coaches is also available to anyone who chooses to enroll in Udacity’s guided program.

If that’s not enough, for even more about UX design from a developer's perspective, check out our YouTube UXD series, on the AndroidDevelopers channel: http://bit.ly/uxdplaylist.





Categories: Programming

Grow with Google Play: Scaled Publishing and New App Insights

Android Developers Blog - Thu, 07/31/2014 - 23:55

By Kobi Glick, Google Play team

If you're growing your business on Google Play, the Google Play Developer Console is one of the most important tools at your disposal. At Google I/O, we introduced a number of new changes that give you valuable insight into how your app is performing. Here's an overview of some of the improvements you can now take advantage of.

Publishing API for scaling your app operations

Today we're happy to announce that the Google Play Developer Publishing API is now available to all developers. The API will let you upload APKs to Beta testing, Staged rollout and Production, and integrate publishing operations with your release processes and toolchain. The Publishing API also makes it easier for you to manage your in-app products catalog, provide tablet-specific screenshots, and localize your store listing text and graphics. The Publishing API will help you focus on your core business, with less time managing your releases, even as your business grows to more apps and markets.

Actionable insights at the right time Email notifications for alerts

Recently, we added Alerts in the Developer Console to let you know when there are sudden changes in important stats like app installs, ratings, and crashes. You can now turn on email notifications for Alerts so that, even while you’re not in the Developer Console, you’ll be informed of relevant events before they can have a broader effect on your app. You can turn on email notifications for one or more of your apps under Email Preferences in the Developer Console settings.

New Optimization Tips

You’ll now see new Optimization Tips with instructions when we detect opportunities to improve your app. For example, we’ll let you know when updated versions of APIs you use are available — such as new Google Play in-app billing or Google Maps APIs. For games developers, we’ll also surface opportunities to use Google Play game services that can help improve users’ gaming experience and drive engagement. To see what tips we suggest for you, go to your app in the Developer Console and click on Optimization Tips.

Better data to inform your business decisions Enhanced revenue statistics

To help you better understand your commercial success, we’ve enhanced revenue statistics in the Finance section of the Developer Console. We now let you see the average revenue per paying user (ARPPU) and give you more ways to analyse buyer data, such as comparing returning buyers (i.e., those who also made purchases in the past) to new buyers.

Bulk export of reviews

You can already engage with your users by reading and replying to reviews in the Developer Console and we’ve now added bulk export of reviews so you can download and analyze your app’s reviews en masse. This is particularly useful if you receive a large volume of reviews and want to perform your own sentiment analysis.

Improved stats for beta releases and staged rollouts

Since last year’s launch, you’ve used beta testing to release alpha and beta versions of your app, and staged rollout to gradually launch your app to production. To help you make the most of this feature, we’re now improving the way alpha, beta and staged rollout specific stats are displayed. When viewing your app and crash statistics you can now filter the app version by alpha, beta, or staged rollout to better understand the impact of your testing.

Improved reporting of native crashes

If you develop in native code, we’ve improved the reporting and presentation specifically for native crashes, with better grouping of similar crashes and summarizing of relevant information.

Deep-linking to help drive engagement

Finally, we’ve also added website verification in the Developer Console, to enable deep-linking to your app from search results. Deep-linking helps remind users about the apps they already have. It is available through search for all apps that implement app indexing. For example, if a user with the Walmart Android app searches for “Chromecast where to buy”, they’ll go directly to the Chromecast page in the Walmart app. The new App Indexing API is now open to all Android developers, globally. Get started now.

We hope you find these features useful and take advantage of them so that you can continue to grow your user base and improve your users’ experience. If you're interested in some other great tools for distributing your apps, check out this blog post, or any of the sessions which have now been posted to the Google Developers Channel.


Categories: Programming

Google I/O 2014 App Source Code Now Available

Android Developers Blog - Wed, 07/30/2014 - 22:24

By Bruno Oliveira, Tech Lead of the I/O app project

The source code for the 2014 version of the Google I/O app is now available. Since its first release on Google Play a few weeks before the conference, the I/O app was downloaded by hundreds of thousands of people, including on-site attendees, I/O Extended event participants and users tuning in from home. If one of the goals of the app is to be useful to conference attendees, the other primary goal is to serve as a practical example of best practices for Android app design and development.

In addition to showing how to implement a wide variety of features that are useful for most Android apps, such as Fragments, Loaders, Services, Broadcast Receivers, alarms, notifications, SQLite databases, Content Providers, Action Bar and the Navigation Drawer, the I/O app source code also shows how to integrate with several Google products and services, from the Google Drive API to Google Cloud Messaging. It uses the material design approach, the Android L Preview APIs and full Android Wear integration with a packaged wearable app for sending session feedback.

To simplify the process of reusing and customizing the source code to build apps for other conferences, we rewrote the entire sync adapter to work with plain JSON files instead of requiring a server with a specific API. These files can be hosted on any web server of the developer's choice, and their format is fully documented.

Storing and syncing the user's data (that is, the personalized schedule) is a crucial part of the app. The source code shows how user data can be stored in the Application Data folder of the user's own Google Drive account and kept in sync across multiple devices, and how to use Google Cloud Messaging to trigger syncs when necessary to ensure the data is always fresh.

The project includes the source code to the App Engine app that can be reused to send GCM messages to devices to trigger syncs, as well as a module (called Updater) that can be adapted to read conference data from other backends to produce the JSON files that are consumed by the I/O app.

We are excited to share this source code with the developer community today, and we hope it will serve as a learning tool, a source of reusable snippets and a useful example of Android app development in general. In the coming weeks we will post a few technical articles with more detailed information about the IOSched source code to help bring some insight into the app development process. We will continue to update the app in the coming months, and as always, your pull requests are very welcome!




Categories: Programming

100+ Motivational Quotes to Inspire Your Greatness

I updated my Motivational Quotes page.

I’ve got more than 100 motivational quotes on the page to help you find your inner fire.

It’s not your ordinary motivational quotes list.  

It’s deep and it draws from several masters of inspiration including Bruce Lee, Jim Rohn, and Zig Ziglar.

Here is a sampling of some of my personal favorite motivational quotes …

“If you always put limits on everything you do, physical or anything else, it will spread into your work and into your life. There are no limits. There are only plateaus, and you must not stay there, you must go beyond them.” – Bruce Lee

“Knowing is not enough; we must apply. Willing is not enough; we must do.” – Johann Wolfgang von Goethe

“Kites rise highest against the wind; not with it.” – Winston Churchill

“To hell with circumstances; I create opportunities.” – Bruce Lee

“Our greatest glory is not in never falling but in rising every time we fall.” – Confucius

“There is no such thing as failure. There are only results.” – Tony Robbins

“When it’s time to die, let us not discover that we have never lived.” – Henry David Thoreau

“People who say it cannot be done should not interrupt those who are doing it.” – Anonymous

“Motivation alone is not enough. If you have an idiot and you motivate him, now you have a motivated idiot.” – Jim Rohn

“If you love life, don’t waste time, for time is what life is made up of.” – Bruce Lee

For more quotes, check out my motivational quotes page.

It’s a living page and at some point I’ll do a complete revamp.

I think in the future I’ll organize it by sub-categories within motivation rather than by people. At the time it made sense to have words of wisdom by various folks, but now I think grouping motivational quotes by sub-categories would work much better, especially when there is such a large quantity of quotes.

Categories: Architecture, Programming

4 types of user

Mark Needham - Tue, 07/29/2014 - 20:07

I’ve been working with Neo4j full time for slightly more than a year now, and from interacting with the community I’ve noticed that, when using different features of the product, people fall into 4 categories.

These are as follows:

[Diagram: the 4 types of user, plotted on axes of loudness and success]

On one axis we have ‘loudness’, i.e. how vocal somebody is, whether on Twitter, Stack Overflow or by email, and on the other we have ‘success’, i.e. how well a product feature is working for them.

The people in the top half of the diagram will get the most attention because they’re the most visible.

Of those people, we’ll tend to spend more time on the ones who are unhappy and vocal, to try to help them solve the problems they’re having.

When working with the people in the top left it’s difficult to understand how representative they are of the whole user base.

It could be the case that they aren’t representative at all, and actually there is a quiet majority for whom the product is working and who are just getting on with it without fuss.

However, it could equally be the case that they are absolutely representative and there are a lot of users quietly suffering / giving up using the product.

I haven’t come up with a good way to reach the less vocal users, but in my experience they’ll often be passive users of the user group or Stack Overflow, i.e. they’ll read existing issues but not post anything themselves.

Given this uncertainty I think it makes sense to assume that the silent majority suffer the same problems as the more vocal minority.

Another interesting thing I’ve noticed about this quadrant is that the people in the top right are often the best people in the community to help those who are struggling.

It’d be interesting to know whether anyone has noticed a similar thing with the products they’ve worked on, and if so, what approach you take to unveiling the silent majority?

Categories: Programming

Using C3js with AngularJS

Gridshore - Tue, 07/29/2014 - 17:17

C3js is a JavaScript chart library built on top of D3js. For a project we needed graphs, and we ended up using C3js. How this happened and some of the first steps we took are written down by Roberto van der Linden in his blogpost: Creating charts with C3.js

Using C3js without other JavaScript libraries is of course perfectly doable. Still, I think using it in combination with AngularJS is interesting to evaluate. In this blogpost I am going to document some steps I took to go from the basic C3 sample to a more advanced AngularJS-powered sample.

Setup project – most basic sample

The easiest setup is cloning my github repository: c3-angular-sample. If you are more the follow-along kind of person, then these are the steps.

We start by downloading C3js, D3 and AngularJS and creating the first HTML page to get us started.

Next we create index1.html, which loads the stylesheet as well as the required JavaScript files. Two things are important to notice: one, the div with id chart, which is used to position the chart; and two, the script block that creates the chart object with data.

<!doctype html>
<html>
<head>
	<meta charset="utf-8">

	<link href="css/c3-0.2.4.css" rel="stylesheet" type="text/css">
</head>
<body>
	<h1>Sample 1: Most basic C3JS sample</h1>
	<div id="chart"></div>

	<!-- Load the javascript libraries -->
	<script src="js/d3/d3-3.4.11.min.js" charset="utf-8"></script>
	<script src="js/c3/c3-0.2.4.min.js"></script>

	<!-- Initialize and draw the chart -->
	<script type="text/javascript">
		var chart = c3.generate({
		    bindto: '#chart',
		    data: {
		      columns: [
		        ['data1', 30, 200, 100, 400, 150, 250],
		        ['data2', 50, 20, 10, 40, 15, 25]
		      ]
		    }
		});
	</script>
</body>
</html>

Next we introduce AngularJS to create the chart object.

Use AngularJS to create the chart

The first step is to remove the script block and add more JavaScript libraries to load. We load the AngularJS library and our own js file called graph-app-2.js. In the html tag we initialise the AngularJS app using ng-app="graphApp". In the body tag we initialise the Graph controller and call the showGraph function when the Angular app is initialised.

<html ng-app="graphApp">
<body ng-controller="GraphCtrl" ng-init="showGraph()">

Now let us have a look at some AngularJS javascript code. Notice that there is hardly a difference between the JavaScript code in both samples.

var graphApp = angular.module('graphApp', []);
graphApp.controller('GraphCtrl', function ($scope) {
	$scope.chart = null;
	
	$scope.showGraph = function() {
		$scope.chart = c3.generate({
			    bindto: '#chart',
			    data: {
			      columns: [
			        ['data1', 30, 200, 100, 400, 150, 250],
			        ['data2', 50, 20, 10, 40, 15, 25]
			      ]
			    }
			});		
	}
});

Still, everything is really static; it mostly uses defaults. Let us move on and make it possible for the user to make some changes to the data and the type of chart.

Give the power to the user to change the data

We have two input boxes that accept comma-separated data and two drop-downs to change the type of the chart. It is fun to try out all the different types available. Below first is the piece of HTML that contains the form accepting the user input.

	<form novalidate>
		<p>Enter in format: val1,val2,val3,etc</p>
		<input ng-model="config.data1" type="text" size="100"/>
		<select ng-model="config.type1" ng-options="typeOption for typeOption in typeOptions"></select>
		<p>Enter in format: val1,val2,val3,etc</p>
		<input ng-model="config.data2" type="text" size="100"/>
		<select ng-model="config.type2" ng-options="typeOption for typeOption in typeOptions"></select>
	</form>
	<button ng-click="showGraph()">Show graph</button>

This should be easy to understand. There are a few fields taken from the model, as well as a button that generates the graph. Below is the complete JavaScript code for the controller.

var graphApp = angular.module('graphApp', []);

graphApp.controller('GraphCtrl', function ($scope) {
	$scope.chart = null;
	$scope.config={};
	$scope.config.data1="30, 200, 100, 200, 150, 250";
	$scope.config.data2="70, 30, 10, 240, 150, 125";

	$scope.typeOptions=["line","bar","spline","step","area","area-step","area-spline"];

	$scope.config.type1=$scope.typeOptions[0];
	$scope.config.type2=$scope.typeOptions[1];


	$scope.showGraph = function() {
		var config = {};
		config.bindto = '#chart';
		config.data = {};
		config.data.json = {};
		config.data.json.data1 = $scope.config.data1.split(",");
		config.data.json.data2 = $scope.config.data2.split(",");
		config.axis = {"y":{"label":{"text":"Number of items","position":"outer-middle"}}};
		config.data.types={"data1":$scope.config.type1,"data2":$scope.config.type2};
		$scope.chart = c3.generate(config);		
	}
});

In the first part we initialise all the variables in the $scope that are used on the form. In the showGraph function we have added a few things compared to the previous sample. We do not use the column data provider anymore; now we use the json provider. Therefore we create arrays out of the comma-separated number strings. We also add a label to the y-axis, and we set the types of the charts using the same names as in the json object with the data. I think this is a good time to show a screen dump, still it is much nicer to open the index3.html file yourself and play around with it.

[Screenshot: the form and the resulting chart]

Now we want to do more with Angular: introduce a service that could obtain data from the server, and also make the data time-based.

Introducing time and data generation

The HTML is very basic; it only contains two buttons, one to start generating data and one to stop. Let us focus on the JavaScript we have created. When creating the application we now tell it to look for another module called graphApp.services. Then we create the service using the factory method and register it with the application. That way we can inject the service into the controller later on.

var graphApp = angular.module('graphApp', ['graphApp.services']);
var services = angular.module('graphApp.services', []);
services.factory('dataService', [function() {
	function DataService() {
		var data = [];
		var numDataPoints = 60;
		var maxNumber = 200;

		this.loadData = function(callback) {
			if (data.length > numDataPoints) {
				data.shift();
			}
			data.push({"x":new Date(),"data1":randomNumber(),"data2":randomNumber()});
			callback(data);
		};

		function randomNumber() {
			return Math.floor((Math.random() * maxNumber) + 1);
		}
	}
	return new DataService();
}]);

This service has one exposed method, loadData(callback). The result is that you get back a collection of objects with 3 fields: x, data1 and data2. The number of points is capped at 60, and the values are random numbers between 1 and 200.
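As a small illustration (my own snippet, not part of the sample project), each call appends one random point and hands the whole collection to the callback:

// illustrative use of dataService; the values are random
dataService.loadData(function(data) {
	var latest = data[data.length - 1];
	console.log(latest.x, latest.data1, latest.data2);
});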

Next is the controller. I start with its initialised parameters as well as the method to draw the graph.

graphApp.controller('GraphCtrl', ['$scope','$timeout','dataService',function ($scope,$timeout,dataService) {
	$scope.chart = null;
	$scope.config={};

	$scope.config.data=[]

	$scope.config.type1="spline";
	$scope.config.type2="spline";
	$scope.config.keys={"x":"x","value":["data1","data2"]};

	$scope.keepLoading = true;

	$scope.showGraph = function() {
		var config = {};
		config.bindto = '#chart';
		config.data = {};
		config.data.keys = $scope.config.keys;
		config.data.json = $scope.config.data;
		config.axis = {};
		config.axis.x = {"type":"timeseries","tick":{"format":"%S"}};
		config.axis.y = {"label":{"text":"Number of items","position":"outer-middle"}};
		config.data.types={"data1":$scope.config.type1,"data2":$scope.config.type2};
		$scope.chart = c3.generate(config);		
	}

Take note of $scope.config.keys, which is used later on in the config.data.keys object. With this we move from an array of data per line to an object with all the datapoints. The x values come from the x property; the value properties come from data1 and data2. Also take note of the config.axis.x property. Here we specify that we are now dealing with timeseries data, and we format the ticks to show only the seconds. This is logical since we configured the chart to have a maximum of 60 points and we create a new point every second, as we will see later on. The next code block shows the other three functions in the controller.

	$scope.startLoading = function() {
		$scope.keepLoading = true;
		$scope.loadNewData();
	}

	$scope.stopLoading = function() {
		$scope.keepLoading = false;
	}

	$scope.loadNewData = function() {
		dataService.loadData(function(newData) {
			var data = {};
			data.keys = $scope.config.keys;
			data.json = newData;
			$scope.chart.load(data);
			$timeout(function(){
				if ($scope.keepLoading) {
					$scope.loadNewData()				
				}
			},1000);			
		});
	}
}]);

In the loadNewData method we use the $timeout function of AngularJS. After a second we call loadNewData again, unless the keepLoading property has explicitly been set to false. Note that adding a point to the graph and calling the function recursively are both done in the callback of the method passed to the dataService.

Now run the sample for longer than 60 seconds and see what happens. Below is a screen dump, but it becomes more fun when you run the sample yourself.

[Screenshot: the live timeseries chart]

That is about it. Now I am going to integrate this with my elasticsearch-ui plugin.

The post Using C3js with AngularJS appeared first on Gridshore.

Categories: Architecture, Programming

No Slack = No Innovation

"To accomplish great things we must dream as well as act." -- Anatole France

Innovation is the way to leapfrog and create new ways to do things better, faster, and cheaper.

But it takes slack.

The problem is that when you squeeze the goose to get the golden egg, you lose the slack that creates the eggs in the first place.

In the book The Future of Management, Gary Hamel shares how when there is a lack of slack, there is no innovation.

The Most Important Source of Productivity is Creativity

Creativity unleashes productivity.  And it takes time to unleash creativity.  But the big bold bet is that the time you give to creativity and innovation pays you back with new opportunities and new ways to do things better, faster, or cheaper.

Via The Future of Management:

“In the pursuit of efficiency, companies have wrung a lot of slack out of their operations.  That's a good thing.  No one can argue with the goal of cutting inventory levels, reducing working capital, and slashing over-head.  The problem, though, is that if you wring all the slack out of a company, you'll wring out all of the innovation as well.  Innovation takes time -- time to dream, time to reflect, time to learn, time to invent, and time to experiment.  And it takes uninterrupted time -- time when you can put your feet up and stare off into space.  As Pekka Himanen put it in his affectionate tribute to hackers, '... the information economy's most important source of productivity is creativity, and it is not possible to create interesting things in a constant hurry or in a regulated way from nine to five.'”

There is No “Thinking Time”

Without think time, creativity lives in a cave.

Via The Future of Management:

“While the folks in R&D and new product development are given time to innovate, most employees don't enjoy this luxury.  Every day brings a barrage of e-mails, voice mails, and back-to-back meetings.  In this world, where the need to be 'responsive' fragments human attention into a thousand tiny shards, there is no 'thinking time.'  And therein lies the problem.  However creative your colleagues may be, if they don't have the right to occasionally abandon their posts and work on something that's not mission critical, most of their creativity will remain dormant.”

Are People Encouraged to Quietly Dream Up the Future?

If you want more innovation, make space for it.

Via The Future of Management:

“OK, you already know that -- but how is that knowledge reflected in your company's management processes?  How hard is it for a frontline employee to get permission to spend 20 percent of her time working on a project that has nothing to do with her day job, nor your company's 'core businesses'?  And how often does this happen?  Does your company track the number of hours employees spend working on ideas that are incidental to their core responsibilities? Is 'slack' institutionalized in the same way that cost efficiency is?  Probably not.  There are plenty of incentives in your company for people to stay busy.  ('Maybe if I look like I'm working flat out, they won't send my job offshore.')  But where are the incentives that encourage people to spend time quietly dreaming up the future?”

Are you slacking your way to a better future?

You Might Also Like

Innovation Quotes

The Drag of Old Mental Models on Innovation and Change

The New Competitive Landscape

The New Realities that Call for New Organizational and Management Capabilities

Who’s Managing Your Company

Categories: Architecture, Programming

3 Challenges to Help You Set New Benchmarks in Innovation

If you want to change your game, you need to know what the key challenges are.

Innovation is a game that you can play much better, if you know where and how to debottleneck it.

In the book The Future of Management, Gary Hamel shares 3 challenges that he believes can help you unleash your organization’s capacity for innovation.

  1. How can you enroll every individual within your company in the work of innovation, and equip each one with creativity-boosting tools?
  2. How can you ensure that top management's hallowed beliefs don't straitjacket innovation, and that heretical ideas are given the chance to prove their worth?
  3. How can you create the time and space for grassroots innovation in an organization that is running flat out to deliver today's results?

According to Hamel, "Make progress on these challenges and your company will set new benchmarks in innovation."

If I think back through the various teams I’ve been on at Microsoft, one team that I was on was especially good at helping innovation flourish, and we were constantly pushing the envelope to “be what’s next.”   Our innovation flourished the most when we directly addressed the challenges above.  People were challenged to share and test their ideas more freely and innovation was baked into how we planned our portfolio, programs, and projects.

Innovation was a first-class citizen – by design.

You Might Also Like

High-Leverage Strategies for Innovation

Innovation Life Cycle

Lessons Learned from the Most Successful Innovators

The Drag of Old Mental Models on Innovation and Change

The New Realities that Call for New Organizational and Management Capabilities

Categories: Architecture, Programming

Episode 207: Mitchell Hashimoto on the Vagrant Project

Charles Anderson talks to Mitchell Hashimoto about the Vagrant open source project, which can be used to create and configure lightweight, reproducible, and portable development environments. Vagrant aims to make new developers on a project productive within minutes of joining the project instead of spending hours or days setting up the developer’s workstation. The outline […]
Categories: Programming

The Experience Paradox–How to Get a Job Without Experience

Making the Complex Simple - John Sonmez - Mon, 07/28/2014 - 15:00

One of the most difficult things about becoming a software developer is the experience paradox of needing to have a job to get experience and needing to have experience in order to get a job. This problem is of course not relegated to the field of software development, but many new software developers often struggle […]

The post The Experience Paradox–How to Get a Job Without Experience appeared first on Simple Programmer.

Categories: Programming

Fearless Speaking

“Do one thing every day that scares you.” ― Eleanor Roosevelt

I did a deep dive book review.

This time, I reviewed Fearless Speaking.

There is more to the book than meets the eye.

It’s actually a wealth of personal development skills at your fingertips and it’s a powerful way to grow your personal leadership skills.

In fact, there are almost fifty exercises throughout the book.

Here’s an example of one of the techniques …

Spotlight Technique #1

When you’re overly nervous and anxious as a public speaker, you place yourself in a ‘third degree’ spotlight.  That’s the name for the harsh bright light police detectives used in days gone by to ‘sweat’ a suspect and elicit a confession.  An interrogation room was always otherwise dimly lit, so the source of light trained on the person (who was usually forced to sit in a hard straight-backed chair) was unrelenting.

This spotlight is always harsh, hot, and uncomfortable – and the truth is, you voluntarily train it on yourself by believing your audience is unforgiving.  The larger the audience, the more likely you believe that to be true.

So here’s a technique to get out from under this hot spotlight that you’re imagining so vividly: turn it around! Visualize swiveling the spotlight so it’s aimed at your audience instead of you.  After all, aren’t you supposed to illuminate your listeners? You don’t want to leave them in the dark, do you?

There’s no doubt that it’s cooler and much more comfortable when you’re out from under that harsh light.  The added benefit is that now the light is shining on your listeners – without question the most important people in the room or auditorium!

I like that there are so many exercises and techniques to choose from.   Many of them don’t fit my style, but there were several that exposed me to new ways of thinking and new ideas to try.

And what’s especially great is knowing that these exercise come from professional actors and speakers – it’s like an insider’s guide at your fingertips.

My book review on Fearless Speaking includes a list of all the exercises, the chapters at a glance, key features from the book, and a few of my favorite highlights from the book (sort of like a movie trailer for the book).

You Might Also Like

7 Habits of Highly Effective People at a Glance

347 Personal Effectiveness Articles to Help You Change Your Game

Effectiveness Blog Post Roundup

Categories: Architecture, Programming

Transform the input before indexing in elasticsearch

Gridshore - Sat, 07/26/2014 - 07:51

Sometimes you are indexing data and want to have as little influence on the input as possible, or maybe even none. Still you need to make changes: you want other content, or other fields, maybe even to remove fields. In elasticsearch 1.3 a new feature called transform is introduced. In this blogpost I am going to show some aspects of this new feature.

Insert the document with the problem

The input we get comes from a system that puts the string null in a field if it is empty. We do not want null as a string in the elasticsearch index; therefore we want to remove this field completely when indexing such a document. We start with the example and the proof that you can search on the field.

PUT /transform/simple/1
{
  "title":"This is a document with text",
  "description":"null"
}

Now search for the word null in the description.
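A simple match query on the description field will do; something along these lines:

GET /transform/_search
{
  "query": {
    "match": {
      "description": "null"
    }
  }
}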

For completeness I’ll show you the response as well.

Response:
{
   "took": 1,
   "timed_out": false,
   "_shards": {
      "total": 1,
      "successful": 1,
      "failed": 0
   },
   "hits": {
      "total": 1,
      "max_score": 0.30685282,
      "hits": [
         {
            "_index": "transform",
            "_type": "simple",
            "_id": "1",
            "_score": 0.30685282,
            "_source": {
               "title": "This is a document with text",
               "description": "null"
            }
         }
      ]
   }
}

Change mapping to contain transform

Next we are going to use the transform functionality to remove the field if it contains the string null. To do that we need to remove the index and create it again with a mapping containing the transform functionality. We use the Groovy language for the script. Beware that the script is only validated when the first document is inserted.
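Removing the existing index is a one-liner (the PUT below then recreates it):

DELETE /transform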

PUT /transform
{
  "mappings": {
    "simple": {
      "transform": {
        "lang":"groovy",
        "script":"if (ctx._source['description']?.equals('null')) ctx._source['description'] = null"
      },
      "properties": {
        "title": {
          "type": "string"
        },
        "description": {
          "type": "string"
        }
      }
    }
  }
}

When we insert the same document as before and execute the same query, we do not get any hits. The description field is no longer indexed. An important aspect is that the actual _source is not changed: when requesting the _source of the document, you still get back the original document.

GET transform/simple/1/_source
Response:
{
   "title": "This is a document with text",
   "description": "null"
}

Add a field to the mapping

To add a bit more complexity, we add a field called nullField which will contain the name of the field that was null. Not very useful, but it suffices to show the possibilities.

PUT /transform
{
  "mappings": {
    "simple": {
      "transform": {
        "lang":"groovy",
        "script":"if (ctx._source['description']?.equals('null')) {ctx._source['description'] = null;ctx._source['nullField'] = 'description';}"
      },
      "properties": {
        "title": {
          "type": "string"
        },
        "description": {
          "type": "string"
        },
        "nullField": {
          "type": "string"
        }
      }
    }
  }
}

Notice that the script has changed: not only do we remove the description field, we also add a new field called nullField. Check that the _source is still not changed. Now we do a search and only return the fields description and nullField. Before scrolling to the response, think about the response you would expect.

GET /transform/_search
{
  "query": {
    "match_all": {}
  },
  "fields": ["nullField","description"]
}

Did you really think about it? Try it out and notice that the nullField is not returned. That is because we did not store it in the index and it is not obtained from the source. So if we really need this value, we can store the nullField in the index and we are fine.

PUT /transform
{
  "mappings": {
    "simple": {
      "transform": {
        "lang":"groovy",
        "script":"if (ctx._source['description']?.equals('null')) {ctx._source['description'] = null;ctx._source['nullField'] = 'description';}"
      },
      "properties": {
        "title": {
          "type": "string"
        },
        "description": {
          "type": "string"
        },
        "nullField": {
          "type": "string",
          "store": "yes"
        }
      }
    }
  }
}

Then, with the match-all query for the two fields, we get the following response.

GET /transform/_search
{
  "query": {
    "match_all": {}
  },
  "fields": ["nullField","description"]
}
Response:
{
   "took": 2,
   "timed_out": false,
   "_shards": {
      "total": 1,
      "successful": 1,
      "failed": 0
   },
   "hits": {
      "total": 1,
      "max_score": 1,
      "hits": [
         {
            "_index": "transform",
            "_type": "simple",
            "_id": "1",
            "_score": 1,
            "fields": {
               "description": [
                  "null"
               ],
               "nullField": [
                  "description"
               ]
            }
         }
      ]
   }
}

Yes, now we do have the new field. That is it, but wait: there is more you need to know. There is a way to check what is actually passed to the index for a certain document.

GET transform/simple/1?pretty&_source_transform
Result:
{
   "_index": "transform",
   "_type": "simple",
   "_id": "1",
   "_version": 1,
   "found": true,
   "_source": {
      "description": null,
      "nullField": "description",
      "title": "This is a document with text"
   }
}

Notice the null description and the nullField in the _source.

Final remark

You cannot update the transform part of a mapping; think about what would happen to your index if some documents had passed through version 1 of the transform and others through version 2.

I would be gentle with this feature; try to solve your problem before sending the data to elasticsearch. But maybe you have just the use case for this feature, and now you know it exists.

In my next blogpost I dive a little bit deeper into the scripting module.

The post Transform the input before indexing in elasticsearch appeared first on Gridshore.

Categories: Architecture, Programming

R: ggplot – Plotting back to back charts using facet_wrap

Mark Needham - Fri, 07/25/2014 - 22:57

Earlier in the week I showed a way to plot back to back charts using R’s ggplot library, but looking back on the code it felt like it was a bit hacky to ‘glue’ two charts together using a grid.

I wanted to find a better way.

To recap, I came up with the following charts showing the RSVPs to Neo4j London meetup events using this code:

[Charts: distribution of ‘yes’ and ‘no’ RSVPs]

The first thing we need to do to simplify chart generation is to return ‘yes’ and ‘no’ responses in the same cypher query, like so:

timestampToDate <- function(x) as.POSIXct(x / 1000, origin="1970-01-01", tz = "GMT")
 
query = "MATCH (e:Event)<-[:TO]-(response {response: 'yes'})
         WITH e, COLLECT(response) AS yeses
         MATCH (e)<-[:TO]-(response {response: 'no'})<-[:NEXT]-()
         WITH e, COLLECT(response) + yeses AS responses
         UNWIND responses AS response
         RETURN response.time AS time, e.time + e.utc_offset AS eventTime, response.response AS response"
allRSVPs = cypher(graph, query)
allRSVPs$time = timestampToDate(allRSVPs$time)
allRSVPs$eventTime = timestampToDate(allRSVPs$eventTime)
allRSVPs$difference = as.numeric(allRSVPs$eventTime - allRSVPs$time, units="days")

The query is a bit involved because we want to capture the ‘no’ responses from people who initially said yes, which is why we check for a ‘NEXT’ relationship when looking for the negative responses.

Let’s inspect allRSVPs:

> allRSVPs[1:10,]
                  time           eventTime response difference
1  2014-06-13 21:49:20 2014-07-22 18:30:00       no   38.86157
2  2014-07-02 22:24:06 2014-07-22 18:30:00      yes   19.83743
3  2014-05-23 23:46:02 2014-07-22 18:30:00      yes   59.78053
4  2014-06-23 21:07:11 2014-07-22 18:30:00      yes   28.89084
5  2014-06-06 15:09:29 2014-07-22 18:30:00      yes   46.13925
6  2014-05-31 13:03:09 2014-07-22 18:30:00      yes   52.22698
7  2014-05-23 23:46:02 2014-07-22 18:30:00      yes   59.78053
8  2014-07-02 12:28:22 2014-07-22 18:30:00      yes   20.25113
9  2014-06-30 23:44:39 2014-07-22 18:30:00      yes   21.78149
10 2014-06-06 15:35:53 2014-07-22 18:30:00      yes   46.12091

We’ve returned the actual response with each row so that we can distinguish between responses. It will also come in useful for pivoting our single chart later on.

The next step is to get ggplot to generate our side by side charts. I started off by plotting both types of response on the same chart:

ggplot(allRSVPs, aes(x = difference, fill=response)) + 
  geom_bar(binwidth=1)

[Chart: ‘yes’ and ‘no’ responses stacked in a single plot]

This one stacks the ‘yes’ and ‘no’ responses on top of each other, which isn’t what we want as it’s difficult to compare the two.

What we need is the facet_wrap function which allows us to generate multiple charts grouped by key. We’ll group by ‘response’:

ggplot(allRSVPs, aes(x = difference, fill=response)) + 
  geom_bar(binwidth=1) + 
  facet_wrap(~ response, nrow=2, ncol=1)

[Chart: responses split into two facets]

The only thing we’re missing now is the red and green colours which is where the scale_fill_manual function comes in handy:

ggplot(allRSVPs, aes(x = difference, fill=response)) + 
  scale_fill_manual(values=c("#FF0000", "#00FF00")) + 
  geom_bar(binwidth=1) +
  facet_wrap(~ response, nrow=2, ncol=1)

[Chart: facets coloured red and green]

If we want to show the ‘yes’ chart on top we can pass in an extra parameter to facet_wrap to change where it places the highest value:

ggplot(allRSVPs, aes(x = difference, fill=response)) + 
  scale_fill_manual(values=c("#FF0000", "#00FF00")) + 
  geom_bar(binwidth=1) +
  facet_wrap(~ response, nrow=2, ncol=1, as.table = FALSE)

[Chart: ‘yes’ facet shown on top]

We could go one step further and group by response and day. First let’s add a ‘day’ column to our data frame:

allRSVPs$dayOfWeek = format(allRSVPs$eventTime, "%A")

And now let’s plot the charts using both columns:

ggplot(allRSVPs, aes(x = difference, fill=response)) + 
  scale_fill_manual(values=c("#FF0000", "#00FF00")) + 
  geom_bar(binwidth=1) +
  facet_wrap(~ response + dayOfWeek, as.table = FALSE)

[Chart: facets by response and day of week]

The distribution of dropouts looks fairly similar for all the days – Thursday is just an order of magnitude below the other days because we haven’t run many events on Thursdays so far.

At a glance it doesn’t appear that so many people sign up for Thursday events on the day or one day before.

One potential hypothesis is that people have things planned for Thursday whereas they decide more last minute what to do on the other days.

We’ll have to run some more events on Thursdays to see whether that trend holds out.

The code is on github if you want to play with it.

Categories: Programming

Marketing scrum vs IT scrum - a report published and presented at agile 2014

Xebia Blog - Fri, 07/25/2014 - 17:49

As we know, Scrum is the perfect framework for IT / software development projects to learn, adapt to change and deliver great software of value, faster.

But is Scrum also usable outside of software development? Can we apply similar or maybe even the same principles in other departments in the enterprise?

Yes, we can! And yes, there are differences, but there are also a lot of similarities.

We (Remco and I) successfully implemented Scrum in the marketing departments of two large companies: the ANWB and ING Bank. Both companies are now using Scrum for the development of new campaigns, their full commercial expressions and even at the product development level. They wanted a faster time to market, more ownership, and greater innovation. How did we approach and realize a transition with those goals in the marketing environment? And what are the results?

So when we are not delivering software but other things, how does Scrum change? Well, a great deal actually. The people working in these other departments are, in general, quite different from those in software development (and yes, more so than you would expect). This means coaches or change agents need to take another approach.

Since the people are different, it is possible to go faster or ‘deeper’ in certain areas. Entrepreneurial skills or ambitions are more present in marketing. This gives a sense of ‘act first, apologize later’, taking ownership, a higher drive to succeed, and upfront and willing behavior. Scrumming here means thinking more about business goals and KPIs (how to go from department goals to scrum team goals, for example). After that the fun begins…

I will be speaking about this topic at Agile 2014. It is of course a great honor to be standing there. I will also attend the conference and will therefore try to post some updates here.

To read more about this topic you can read my publication about marketing scrum. It has the extensive research paper I publisched about this story. Please feel free to give me comments and questions either about agile 2014 or the paper.


Enjoy reading the paper:

Marketing scrum vs IT scrum – two marketing case studies that now 'act first and apologize later'


The Drag of Old Mental Models on Innovation and Change

“Don’t worry about people stealing your ideas. If your ideas are any good, you’ll have to ram them down people’s throats.” — Howard Aiken

It's not a lack of risk taking that holds innovation and change back. 

Even big companies take big risks all the time.

The real barrier to innovation and change is the drag of old mental models.

People end up emotionally invested in their ideas, or they are limited by their beliefs or their world views.  They can't see what's possible with the lens they look through, or fear and doubt hold them back.  In some cases, it's even learned helplessness.

In the book The Future of Management, Gary Hamel shares some great insight into what holds people and companies back from innovation and change.

Yesterday’s Heresies are Tomorrow’s Dogmas

Yesterday's ideas that were profoundly at odds with what is generally accepted eventually become the norm, and then harden into a belief system that is tough to change.

Via The Future of Management:

“Innovators are, by nature, contrarians.  Trouble is, yesterday's heresies often become tomorrow's dogmas, and when they do, innovation stalls and the growth curve flattens out.”

Deeply Held Beliefs are the Real Barrier to Strategic Innovation

Success turns beliefs into barriers by cementing ideas that become inflexible to change.

Via The Future of Management:

“... the real barrier to strategic innovation is more than denial -- it's a matrix of deeply held beliefs about the inherent superiority of a business model, beliefs that have been validated by millions of customers; beliefs that have been enshrined in physical infrastructure and operating handbooks; beliefs that have hardened into religious convictions; beliefs that are held so strongly, that nonconforming ideas seldom get considered, and when they do, rarely get more than grudging support.”

It's Not a Lack of Risk Taking that Holds Innovation Back

Big companies take big risks every day.  But the risks are scoped and constrained by old beliefs and the way things have always been done.

Via The Future of Management:

“Contrary to popular mythology, the thing that most impedes innovation in large companies is not a lack of risk taking.  Big companies take big, and often imprudent, risks every day.  The real brake on innovation is the drag of old mental models.  Long-serving executives often have a big chunk of their emotional capital invested in the existing strategy.  This is particularly true for company founders.  While many start out as contrarians, success often turns them into cardinals who feel compelled to defend the one true faith.  It's hard for founders to credit ideas that threaten the foundations of the business models they invented.  Understanding this, employees lower down self-edit their ideas, knowing that anything too far adrift from conventional thinking won't win support from the top.  As a result, the scope of innovation narrows, the risk of getting blindsided goes up, and the company's young contrarians start looking for opportunities elsewhere.”

Legacy Beliefs are a Much Bigger Liability When It Comes to Innovation

When you want to change the world, sometimes it takes a new view, and existing world views get in the way.

Via The Future of Management:

“When it comes to innovation, a company's legacy beliefs are a much bigger liability than its legacy costs.  Yet in my experience, few companies have a systematic process for challenging deeply held strategic assumptions.  Few have taken bold steps to open up their strategy process to contrarian points of view.  Few explicitly encourage disruptive innovation.  Worse, it's usually senior executives, with their doctrinaire views, who get to decide which ideas go forward and which get spiked.  This must change.”

What you see, or can’t see, changes everything.

You Might Also Like

The New Competitive Landscape

The New Realities that Call for New Organizational and Management Capabilities

Who’s Managing Your Company

Categories: Architecture, Programming

Playing with two most interesting new features of elasticsearch 1.3.0

Gridshore - Fri, 07/25/2014 - 11:47

Just a few days ago elasticsearch released version 1.3.0 of their flagship product. The first of the new features is the long-awaited Top hits aggregation. Basically this is what is called grouping: you group items on one characteristic, but within each group you want the best matching result(s) based on score. The other very important feature is the improved support for scripts: better security options through sandboxed scripting languages.

In this blogpost I am going to explain and show the top_hits feature as well as the new scripting support.


Top hits

I am going to show a very simple example of top_hits using my music index. This index contains all the songs in my iTunes library. The first step is to find songs by genre; the following query returns the (default) 10 hits for an implicit match_all query, plus the requested terms aggregation.

GET /mymusic/_search
{
  "aggs": {
    "byGenre": {
      "terms": {
        "field": "genre",
        "size": 10
      }
    }
  }
}

The response is of the format:

{
	"hits": {},
	"aggregations": {
		"byGenre": {
			"buckets": [
				{"key":"rock","doc_count":1910},
				...
			]
		}
    }
}

Now we add a query to the request: songs containing the word love in the title.

GET /mymusic/_search
{
  "query": {
    "match": {
      "name": "love"
    }
  }, 
  "aggs": {
    "byGenre": {
      "terms": {
        "field": "genre",
        "size": 10
      }
    }
  }
}

Now we have fewer hits, but still a number of buckets, each with the number of matching songs in it. The biggest change is the score in the returned hits: in the previous query the score was always 1; now the scores differ due to the query we execute. The highest scoring song is Love by The Mission; its genre is Rock and it is from 1990. Time to introduce the top hits aggregation. With this query we can return the top song containing the word love in the title, per genre:

GET /mymusic/_search
{
  "query": {
    "match": {
      "name": "love"
    }
  },
  "aggs": {
    "byGenre": {
      "terms": {
        "field": "genre",
        "size": 5
      },
      "aggs": {
        "topFoundHits": {
          "top_hits": {
            "size": 1
          }
        }
      }
    }
  }
}

Again we get hits, and they are no different from the previous query. The interesting part is the aggs section: here we add a sub-aggregation called topFoundHits, of type top_hits, to the byGenre aggregation, returning only the best hit per genre. The next code block shows the relevant part of the response; I removed most of the _source contents of the top hits to keep the response shorter.

{
   "took": 4,
   "timed_out": false,
   "_shards": {
      "total": 3,
      "successful": 3,
      "failed": 0
   },
   "hits": {
      "total": 141,
      "max_score": 0,
      "hits": []
   },
   "aggregations": {
      "byGenre": {
         "buckets": [
            {
               "key": "rock",
               "doc_count": 52,
               "topFoundHits": {
                  "hits": {
                     "total": 52,
                     "max_score": 4.715253,
                     "hits": [
                        {
                           "_index": "mymusic",
                           "_type": "itunes",
                           "_id": "4147",
                           "_score": 4.715253,
                           "_source": {
                              "name": "Love",
                           }
                        }
                     ]
                  }
               }
            },
            {
               "key": "pop",
               "doc_count": 39,
               "topFoundHits": {
                  "hits": {
                     "total": 39,
                     "max_score": 3.3341873,
                     "hits": [
                        {
                           "_index": "mymusic",
                           "_type": "itunes",
                           "_id": "11381",
                           "_score": 3.3341873,
                           "_source": {
                              "name": "Love To Love You",
                           }
                        }
                     ]
                  }
               }
            },
            {
               "key": "alternative",
               "doc_count": 12,
               "topFoundHits": {
                  "hits": {
                     "total": 12,
                     "max_score": 4.1945505,
                     "hits": [
                        {
                           "_index": "mymusic",
                           "_type": "itunes",
                           "_id": "7889",
                           "_score": 4.1945505,
                           "_source": {
                              "name": "Love Love Love",
                           }
                        }
                     ]
                  }
               }
            },
            {
               "key": "b",
               "doc_count": 9,
               "topFoundHits": {
                  "hits": {
                     "total": 9,
                     "max_score": 3.0271564,
                     "hits": [
                        {
                           "_index": "mymusic",
                           "_type": "itunes",
                           "_id": "2549",
                           "_score": 3.0271564,
                           "_source": {
                              "name": "First Love",
                           }
                        }
                     ]
                  }
               }
            },
            {
               "key": "r",
               "doc_count": 7,
               "topFoundHits": {
                  "hits": {
                     "total": 7,
                     "max_score": 3.0271564,
                     "hits": [
                        {
                           "_index": "mymusic",
                           "_type": "itunes",
                           "_id": "2549",
                           "_score": 3.0271564,
                           "_source": {
                              "name": "First Love",
                           }
                        }
                     ]
                  }
               }
            }
         ]
      }
   }
}
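As an aside, instead of stripping the _source content by hand when sharing a response, the top_hits aggregation supports the usual per-hit features, including source filtering. A sketch of the sub-aggregation returning only the name field; the include syntax follows my reading of the 1.3 documentation, so treat it as an assumption:

"topFoundHits": {
  "top_hits": {
    "size": 1,
    "_source": {
      "include": [ "name" ]
    }
  }
}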

Did you notice a problem with my analyzer for the genre field? Hint: R&B!
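For completeness: the separate "r" and "b" buckets appear because genre is an analyzed string field, so the value R&B is tokenized into the terms r and b. A minimal sketch of a mapping that would avoid this, assuming the mymusic index and itunes type from the examples above (an existing index would have to be recreated and reindexed):

PUT /mymusic
{
  "mappings": {
    "itunes": {
      "properties": {
        "genre": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}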

More information on the top_hits aggregation can be found here:

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-metrics-top-hits-aggregation.html

Scripting

Elasticsearch has supported scripts for a long time. The default scripting language was, and up to version 1.3 still is, mvel; it will change to groovy in 1.4. Mvel is not a well-known scripting language. Its biggest advantage is that it is very powerful; its disadvantage is that it is too powerful. Mvel does not come with a sandbox principle, so it is possible to write some very nasty scripts, even when only doing a query. This was demonstrated very well by a colleague of mine (Byron Voorbach), who created a query that read private keys on developer machines whose elasticsearch instance was not safeguarded. Therefore dynamic scripting was switched off by default in version 1.2.

This came with a very big disadvantage: it was no longer possible to use the function_score query with inline scripts without resorting to stored scripts on the server. Version 1.3 of elasticsearch introduces a much better way: you can now use sandboxed scripting languages like groovy and keep the flexible inline approach. Groovy can be configured to whitelist the object creation and method calls that are allowed. More information is provided in the elasticsearch documentation about scripting:

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html
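For reference, dynamic scripting is controlled from elasticsearch.yml. A sketch of the relevant setting as I understand the 1.x documentation: the value sandbox (the 1.3 default) allows inline scripts only for sandboxed languages such as groovy, while non-sandboxed languages like mvel stay limited to stored scripts.

script.disable_dynamic: sandbox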

Next is an example query against the same music index. It queries all the songs after 1999 and calculates the score based on the year, so the newest songs get the highest score. (And yes, I know a sort by year desc would give the same result.)

GET mymusic/_search
{
  "fields": ["year"],
  "query": {
    "function_score": {
      "query": {
        "range": {
          "year": {
            "gte": 2000
          }
        }
      },
      "functions": [
        {
          "script_score": {
            "lang": "groovy", 
            "script": "_score * doc['year'].value"
          }
        }
      ]
    }
  }
}

The score now becomes high: since we do a range query, each hit starts with a score of 1, and with function_score multiplying in the year, the end score equals the year. I added the year as the only field to return (the fields parameter in the query above); some of the results are:

{
   "took": 4,
   "timed_out": false,
   "_shards": {
      "total": 3,
      "successful": 3,
      "failed": 0
   },
   "hits": {
      "total": 2895,
      "max_score": 2014,
      "hits": [
         {
            "_index": "mymusic",
            "_type": "itunes",
            "_id": "12965",
            "_score": 2014,
            "fields": {
               "year": [
                  "2014"
               ]
            }
         },
         {
            "_index": "mymusic",
            "_type": "itunes",
            "_id": "12975",
            "_score": 2014,
            "fields": {
               "year": [
                  "2014"
               ]
            }
         }
      ]
   }
}

Next up is the last sample, a combination of top_hits and scripting.

Top hits with scripting

We start with the sample from the top_hits section on my music index. Now we want to sort the buckets on the score of the best matching document in each bucket; the default is to sort on the number of documents in the bucket. As mentioned in the documentation, you need a trick to do this:

The top_hits aggregator isn't a metric aggregator and therefore can't be used in the order option of the terms aggregator.

GET /mymusic/_search?search_type=count
{
  "query": {
    "match": {
      "name": "love"
    }
  },
  "aggs": {
    "byGenre": {
      "terms": {
        "field": "genre",
        "size": 5,
        "order": {
          "best_hit":"desc"
        }
      },
      "aggs": {
        "topFoundHits": {
          "top_hits": {
            "size": 1
          }
        },
        "best_hit": {
          "max": {
            "lang": "groovy", 
            "script": "doc.score"
          }
        }
      }
    }
  }
}

The results of this query, again with most of the _source removed, follow. Compare them to the query in the top_hits section: notice the different genres that come back now, and check the scores.

{
   "took": 4,
   "timed_out": false,
   "_shards": {
      "total": 3,
      "successful": 3,
      "failed": 0
   },
   "hits": {
      "total": 141,
      "max_score": 0,
      "hits": []
   },
   "aggregations": {
      "byGenre": {
         "buckets": [
            {
               "key": "rock",
               "doc_count": 37,
               "topFoundHits": {
                  "hits": {
                     "total": 37,
                     "max_score": 4.715253,
                     "hits": [
                        {
                           "_index": "mymusic",
                           "_type": "itunes",
                           "_id": "4147",
                           "_score": 4.715253,
                           "_source": {
                              "name": "Love",
                           }
                        }
                     ]
                  }
               },
               "best_hit": {
                  "value": 4.715252876281738
               }
            },
            {
               "key": "alternative",
               "doc_count": 12,
               "topFoundHits": {
                  "hits": {
                     "total": 12,
                     "max_score": 4.1945505,
                     "hits": [
                        {
                           "_index": "mymusic",
                           "_type": "itunes",
                           "_id": "7889",
                           "_score": 4.1945505,
                           "_source": {
                              "name": "Love Love Love",
                           }
                        }
                     ]
                  }
               },
               "best_hit": {
                  "value": 4.194550514221191
               }
            },
            {
               "key": "punk",
               "doc_count": 3,
               "topFoundHits": {
                  "hits": {
                     "total": 3,
                     "max_score": 4.1945505,
                     "hits": [
                        {
                           "_index": "mymusic",
                           "_type": "itunes",
                           "_id": "7889",
                           "_score": 4.1945505,
                           "_source": {
                              "name": "Love Love Love",
                           }
                        }
                     ]
                  }
               },
               "best_hit": {
                  "value": 4.194550514221191
               }
            },
            {
               "key": "pop",
               "doc_count": 24,
               "topFoundHits": {
                  "hits": {
                     "total": 24,
                     "max_score": 3.3341873,
                     "hits": [
                        {
                           "_index": "mymusic",
                           "_type": "itunes",
                           "_id": "11381",
                           "_score": 3.3341873,
                           "_source": {
                              "name": "Love To Love You",
                           }
                        }
                     ]
                  }
               },
               "best_hit": {
                  "value": 3.3341872692108154
               }
            },
            {
               "key": "b",
               "doc_count": 7,
               "topFoundHits": {
                  "hits": {
                     "total": 7,
                     "max_score": 3.0271564,
                     "hits": [
                        {
                           "_index": "mymusic",
                           "_type": "itunes",
                           "_id": "2549",
                           "_score": 3.0271564,
                           "_source": {
                              "name": "First Love",
                           }
                        }
                     ]
                  }
               },
               "best_hit": {
                  "value": 3.027156352996826
               }
            }
         ]
      }
   }
}

This is just a first introduction to top_hits and scripting. Stay tuned for more blogs around these topics.


Categories: Architecture, Programming

Review The Twitter Story by Nick Bilton

Gridshore - Thu, 07/24/2014 - 23:35


A lot of people dream of creating the next Facebook, Twitter or Instagram. Nobody knows in advance whether they will create it; most of these ideas just happened to become big. Personally I do not believe I will ever create such a product: I am too busy with too many different things, usually based on the great ideas of others. One thing I do like is reading about the success stories of others. I have read the books about Starbucks, Microsoft, Apple and a few others.

Recently I started reading the Twitter story. It reads like an exciting novel, yet it tells the story, based on interviews and facts, behind one of the most exciting internet companies of this century.

I do not think it is a coincidence that a lot of what I read in The Lean Startup, and in the Starbucks story, comes back in the Twitter story. One thing that struck me in this book is what business does to friendship. It also shows that the people with great ideas are usually not the ones who turn those ideas into a profitable company.

When starting a company based on your terrific idea, read this book and learn from it. It might make your life a lot better.


Categories: Architecture, Programming