
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Programming

Power Great Gaming with New Analytics from Play Games

Android Developers Blog - 3 hours 26 min ago

By Ben Frenkel, Google Play Games team

A few weeks ago at the Game Developers Conference (GDC), we announced Play Games Player Analytics, a new set of free reports to help you manage your games business and understand in-game player behavior. Today, we’re excited to make these new tools available to you in the Google Play Developer Console.

Analytics is a key component of running a game as a service, which is increasingly a necessity for a successful mobile gaming business. When you take a closer look at large developers that do this successfully, you find that they do three things really well:

  • Manage their business to revenue targets
  • Identify hot spots in their business metrics so they can continuously focus on the game updates that will drive the most impact
  • Use analytics to understand how players are progressing, spending, and churning

“With player engagement and revenue data living under one roof, developers get a level of data quality that is simply not available to smaller teams without dedicated staff. As the tools evolve, I think Google Play Games Player Analytics will finally allow indie devs to confidently make data-driven changes that actually improve revenue.”

Kevin Pazirandeh
Developer of Zombie Highway 2

With Player Analytics, we wanted to make these capabilities available to the entire developer ecosystem on Google Play in a frictionless, easy-to-use way, freeing up your precious time to create great gaming experiences. Small studios, including the makers of Zombie Highway 2 and BombSquad, have already started to see the benefits and impact of Player Analytics on their business.

Further, if you integrate with Google Play game services, you get this set of analytics with no incremental effort. But, for a little extra work, you can also unlock another set of high impact reports by integrating Google Play game services Events, starting with the Sources and Sinks report, which helps you balance your in-game economy.

If you already have a game integrated with Google Play game services, go check out the new reports in the Google Play Developer Console today. For everyone else, enabling Player Analytics is as simple as adding a handful of lines of code to your game to integrate Google Play game services.

Manage your business to revenue targets

Set your spend target in Player Analytics by choosing a daily goal

To help assess the health of your games business, Player Analytics enables you to select a daily in-app purchase revenue target and then assess how you're doing against that goal through the Target vs Actual report depicted below. Learn more.

Identify hot spots using benchmarks with the Business Drivers report

Ever wonder how your game’s performance stacks up against other games? Player Analytics tells you exactly how well you are doing compared to similar games in your category.

Metrics highlighted in red are below the benchmark. Arrows indicate whether a metric is trending up or down, and any cell with an icon can be clicked to see more details about the underlying drivers of the change. Learn more.

Track player retention by new user cohort

In the Retention report, you can see the percentage of players who continued to play your game on each of the seven days following install.

Learn more.
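To make the idea of a cohort metric concrete, here is a minimal sketch in plain Java (not tied to the Player Analytics API; the cohort data is hypothetical). Day-N retention is simply the share of an install-day cohort that comes back to play N days later:

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class RetentionSketch {
    public static void main(String[] args) {
        // Hypothetical cohort: players who installed on the same day, mapped to
        // the day offsets (1..7) on which they played again.
        Map<String, Set<Integer>> playDays = new HashMap<>();
        playDays.put("alice", new HashSet<>(Arrays.asList(1, 2, 3, 7)));
        playDays.put("bob",   new HashSet<>(Arrays.asList(1)));
        playDays.put("carol", new HashSet<>(Arrays.asList(2, 5, 6, 7)));

        int cohortSize = playDays.size();
        for (int day = 1; day <= 7; day++) {
            final int d = day;
            long retained = playDays.values().stream()
                    .filter(days -> days.contains(d)).count();
            // Day-N retention = players active on day N / cohort size
            System.out.printf("Day %d retention: %.0f%%%n",
                    day, 100.0 * retained / cohortSize);
        }
    }
}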

See where players are spending their time, struggling, and churning with the Player Progression report

Measured by the number of achievements players have earned, the Player Progression funnel helps you identify where your players are struggling and churning, so you can refine your game and, ultimately, improve retention. Add more achievements to make progression tracking more precise.

Learn more.

Manage your in-game economy with the Sources and Sinks report

The Sources and Sinks report helps you balance your in-game economy by showing the relationship between how quickly players are earning or buying resources and how quickly they are using them.

For example, Eric Froemling, one-man developer of BombSquad, used the Sources & Sinks report to help balance the rate at which players earned and spent tickets.

Read more about Eric’s experience with Player Analytics in his recent blog post.

To enable the Sources and Sinks report you will need to create and integrate Play game services Events that track sources of premium currency (e.g., gold coins earned), and sinks of premium currency (e.g., gold coins spent to buy in-app items).
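For a rough idea of what that integration involves, here is a sketch in Java using the Games.Events.increment call from the Play services SDK of this era; the event IDs are hypothetical placeholders for IDs you create in the Developer Console:

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.games.Games;

public class EconomyEvents {
    // Hypothetical event IDs; real ones are created in the Developer Console.
    private static final String EVENT_COINS_EARNED = "coins-earned-placeholder";
    private static final String EVENT_COINS_SPENT  = "coins-spent-placeholder";

    // Source: premium currency entering the economy (e.g., gold coins earned).
    public static void recordCoinsEarned(GoogleApiClient apiClient, int amount) {
        Games.Events.increment(apiClient, EVENT_COINS_EARNED, amount);
    }

    // Sink: premium currency leaving the economy (e.g., coins spent on items).
    public static void recordCoinsSpent(GoogleApiClient apiClient, int amount) {
        Games.Events.increment(apiClient, EVENT_COINS_SPENT, amount);
    }
}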

Categories: Programming

Trends for 2015

It’s time to blow your mind with the amazing things going on around the world.

Each year, I try to create a bird’s-eye view of key trends.   It’s a mash up of all the changes in technology, consumerism, and ideas that seem to be taking off.

Here are my trends for 2015:

Trends for 2015:  The Year of Digital Transformation

I call it the year of Digital Transformation because of the encroachment of technology into all things business, work, and life.

Technology is everywhere (and it's on more wrists than I've ever seen before).

What's different now is how the combination of Cloud + Mobile + Social + Big Data + Internet-of-Things is really changing how business gets done. Businesses around the world are using Cloud, Mobile, Social, Big Data, and Internet-of-Things to leapfrog ahead of their competitors and to differentiate in new and exciting ways.

Businesses around the world are using technology to gain insights and to shape their customer experience, transform their workforce, and streamline their back-office operations.

It’s a new race for leadership positions in the Digital Economy.   With infinite compute and capacity, along with new ways to connect around the world, business leaders and IT leaders are re-imagining the art of the possible for their businesses.

While Cloud, Mobile, Social, Big Data, and Internet-of-Things might be the backbone of the changes all around us, it’s business model innovation that is bringing these changes to the market and helping them take hold.

Here is a preview of 10 key trends from Trends for 2015:  The Year of Digital Transformation:

  1. The Age of Instant Gratification
  2. It's a Mobile Everything World
  3. Businesses Look to the Cloud
  4. Internet of Things (IoT)
  5. Design is Everywhere
  6. Economy Re-Imagined ("The Circular Economy")
  7. Culture of Health
  8. Money is Reimagined ("The Future of Payments and Currency")
  9. Digital Personal Assistants are Everywhere
  10. Renegades and Rebels Rule the World

As a tip, when you read the post, try to scan it first, all the way down, so you can see the full collection of ideas. Then circle back and slow down to really absorb the full insight. You'll find that you start to see more patterns across the trends, and you'll connect more dots.

I designed the post to make it easy to scan, as well as to read it end-to-end in depth.  I think it’s more valuable to be able to quickly take the balcony view, before diving in.   This way, you get more of a full picture view of what’s happening around the world.  Even if you don’t master all the trends, a little bit of awareness can actually go a long way. 

In fact, you might surprise yourself as some of the trends pop into your mind, while you’re working on something completely different.   By having the trends at my fingertips, I’m finding myself seeing new patterns in business, along with new ways that technology can enhance our work and life.

Trends actually become a vocabulary for generating and shaping new ideas.   There are so many ways to arrange and re-arrange the constellation of ideas.  You’ll find that I paid a lot of attention to the naming of each trend.  I tried to share what was already pervasive and sticky, or if it was complicated, I tried to turn it into something more memorable.

Use the trends as fodder and insight as you pave your way through your own Digital Transformation.

Categories: Architecture, Programming

Python: Creating a skewed random discrete distribution

Mark Needham - Mon, 03/30/2015 - 23:28

I'm planning to write a variant of the TF/IDF algorithm over the HIMYM corpus which weights in favour of terms that appear in a medium number of documents, and as a prerequisite I needed a function that, when given a number of documents, would return a weighting.

It should return a higher value when a term appears in a medium number of documents i.e. if I pass in 10 I should get back a higher value than if I pass in 200, as a term that appears in 10 episodes is likely to be more interesting than one which appears in almost every episode.

I went through a few different scipy distributions but none of them did exactly what I wanted so I ended up writing a sampling function which chooses random numbers in a range with different probabilities:

import math
import numpy as np
 
values = range(1, 209)
probs = [1.0 / 208] * 208
 
for idx, prob in enumerate(probs):
    if idx > 3 and idx < 20:
        probs[idx] = probs[idx] * (1 + math.log(idx + 1))
    if idx > 20 and idx < 40:
        probs[idx] = probs[idx] * (1 + math.log((40 - idx) + 1))
 
probs = [p / sum(probs) for p in probs]
sample =  np.random.choice(values, 1000, p=probs)
 
>>> print sample[:10]
[ 33   9  22 126  54   4  20  17  45  56]

Now let’s visualise the distribution of this sample by plotting a histogram:

import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
 
binwidth = 2
plt.hist(sample, bins=np.arange(min(sample), max(sample) + binwidth, binwidth))
plt.xlim([0, max(sample)])
plt.show()

[Histogram of the sample distribution]

It's a bit hacky but it does seem to work in terms of weighting the values correctly. It would be nice if I could smooth it off a bit better but I'm not sure how at the moment.

One final thing we can do is get the count of any one of the values by using the bincount function:

>>> print np.bincount(sample)[1]
4
 
>>> print np.bincount(sample)[10]
16
 
>>> print np.bincount(sample)[206]
3

Now I need to plug this into the rest of my code and see if it actually works!

Categories: Programming

Open Google Maps from your iOS app

Google Code Blog - Mon, 03/30/2015 - 20:00
Originally posted on the Google Geo Developers Blog
Posted by Todd Kerpelman, Developer Advocate

If you're an iOS developer, you're probably aware that you have the ability to open some apps directly by taking advantage of their custom URL schemes. (And if you're not aware of that fact, I have an excellent set of videos to recommend to you!)

Of course, we wouldn't be telling you all of this on the Google Geo Developers blog if it weren't for the fact that you can also use the comgooglemaps:// custom URL scheme to open up a map, Street View, or direction request directly in Google Maps on iOS.
Constructing these URLs, however, isn't always easy -- I don't know about you, but I don't spend a lot of my time memorizing key/value pairs for URL arguments. And adding x-callback-url support, while super useful for redirecting users back to your app, means adding even more URL arguments and escaping. And because not everybody has Google Maps installed on their iOS device, you may also want to build URLs to open up Apple Maps, which have their own similar-but-slightly different set of URL arguments.
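To see how fiddly the by-hand approach gets, here is a small illustrative Java sketch that assembles a directions URL using the documented saddr, daddr and directionsmode parameters of the comgooglemaps:// scheme (everything else in the snippet is hypothetical):

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class MapsUrlByHand {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // Biking directions from Baker Street to Scotland Yard, assembled by hand.
        String url = "comgooglemaps://?saddr="
                + URLEncoder.encode("221B Baker Street, London", "UTF-8")
                + "&daddr=51.498511,-0.133091"
                + "&directionsmode=bicycling";
        System.out.println(url);
        // x-callback-url support would add still more parameters and escaping,
        // and the Apple Maps fallback needs a slightly different URL format.
    }
}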

It was one of those situations that made me say, "Hey, somebody should write a utility to make this easier." And that's how, a few months later, we ended up publishing the OpenInGoogleMapsController for iOS.

OpenInGoogleMapsController is a class that makes it easy to build links to open a map (or display Street View or directions) directly in Google Maps for iOS. Rather than creating URLs by hand, you can create map requests using Objective-C classes and types, so you can take advantage of all the type-checking and code hinting you've come to expect from Xcode.

For instance, if you needed biking directions from Sherlock Holmes' apartment on Baker Street to Scotland Yard, your request might look something like this:

GoogleDirectionsDefinition *defn = [[GoogleDirectionsDefinition alloc] init];
defn.startingPoint =
    [GoogleDirectionsWaypoint waypointWithQuery:@"221B Baker Street, London"];
defn.destinationPoint = [GoogleDirectionsWaypoint
    waypointWithLocation:CLLocationCoordinate2DMake(51.498511, -0.133091)];
defn.travelMode = kGoogleMapsTravelModeBiking;
[[OpenInGoogleMapsController sharedInstance] openDirections:defn];

My favorite feature about this utility is that it supports a number of fallback strategies. If, for instance, you want to open up your map request in Google Maps, but then fallback to Apple Maps if the user doesn't have Google Maps installed, our library can do that for you. On the other hand, if it's important that your map location uses Google's data set, you can open up the map request in Google Maps in Safari or Chrome as a fallback strategy. And, of course, it fully supports the x-callback-url standard, so you can make sure Google Maps (or Google Chrome) has a button that points back to your app.

Sound interesting? Give it a try. Just add a couple of files to your Xcode project, and you're ready to go. Feel free to add issues or enhancement requests you might encounter in the GitHub repository, and let us know if you use it in your app. We'd be excited to check it out.
Categories: Programming

Why You Need to Speak at Your Next Code Camp

Making the Complex Simple - John Sonmez - Mon, 03/30/2015 - 16:00

I just came back from Orlando Code Camp, so I thought I’d do a post talking about why speaking at a code camp is a great opportunity and why—even if you think you have nothing to talk about or teach—you should be speaking at a code camp near you. Now, just in case you aren’t […]

The post Why You Need to Speak at Your Next Code Camp appeared first on Simple Programmer.

Categories: Programming

Most Prolific Bloggers on .Net

Phil Trelford's Array - Mon, 03/30/2015 - 08:15

On Saturday I headed down to Thoughtworks in Soho, London for an F# Open Data Hackathon organized by Thoughtworker Sean Newham. We started with questions we'd like to answer using open data. I was interested in finding the most prolific bloggers in .Net, and formed a team with Adam Kosiński, Emmet Cassidy and my son Sean. We had from around 11am to 3pm to answer the question and present the results.

Dew Drop

We used data mined from Alvin Ashcraft's Morning Dew site, which provides a labelled list of top links almost every weekday since 2008.

Here’s Alvin’s activity since 2008:

Dew Drop Calendar 2008-2015

Alvin's links cover many topics including .Net, Web, Mobile and XAML:

Dew Drop Tags 2008-2015

Top 100 .Net Bloggers

For this analysis we’re looking only at links labelled as “Top Links” or “.NET”. Between 2008 and 2015 there were over 20,000 links from over 3000 unique author names.

Interestingly the top 100 bloggers account for roughly half of all posts, and here’s the table of top 100 .Net bloggers based on data extracted from the Morning Dew:

| Rank | Name | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 | Total |
|------|------|------|------|------|------|------|------|------|------|-------|
| 1 | Greg Duncan | 2 | 16 | 74 | 142 | 116 | 86 | 74 | 14 | 524 |
| 2 | Oren Eini | 31 | 71 | 76 | 42 | 94 | 49 | 32 | 12 | 407 |
| 3 | Sean Sexton | 0 | 0 | 0 | 0 | 0 | 193 | 195 | 0 | 388 |
| 4 | Zain Naboulsi | 0 | 0 | 236 | 56 | 17 | 37 | 3 | 0 | 349 |
| 5 | Richard Carr | 6 | 13 | 78 | 77 | 90 | 58 | 12 | 0 | 334 |
| 6 | Eric Lippert | 7 | 48 | 47 | 46 | 37 | 68 | 44 | 0 | 297 |
| 7 | Raymond Chen | 0 | 0 | 0 | 20 | 75 | 92 | 86 | 17 | 290 |
| 8 | Scott Hanselman | 28 | 16 | 38 | 39 | 44 | 37 | 50 | 7 | 259 |
| 9 | MS Downloads | 27 | 35 | 42 | 48 | 46 | 23 | 13 | 0 | 234 |
| 10 | CodePlex | 36 | 12 | 34 | 70 | 49 | 13 | 10 | 0 | 224 |
| 11 | Sasha Goldshtein | 4 | 16 | 30 | 31 | 22 | 25 | 19 | 7 | 154 |
| 12 | Brian Harry | 0 | 0 | 0 | 0 | 35 | 58 | 46 | 8 | 147 |
| 13 | Julie Lerman | 19 | 45 | 15 | 16 | 19 | 16 | 7 | 2 | 139 |
| 14 | Scott Guthrie | 3 | 12 | 47 | 18 | 19 | 24 | 12 | 2 | 137 |
| 15 | Martin Hinshelwood | 3 | 12 | 12 | 20 | 27 | 32 | 25 | 5 | 136 |
| 16 | Mike Hadlow | 6 | 16 | 30 | 21 | 25 | 22 | 4 | 0 | 124 |
| 17 | Dhananjay Kumar | 0 | 0 | 0 | 60 | 26 | 7 | 25 | 1 | 119 |
| 18 | Derik Whittaker | 24 | 21 | 21 | 21 | 18 | 8 | 6 | 0 | 119 |
| 19 | Gunnar Peipman | 1 | 40 | 36 | 9 | 9 | 15 | 4 | 1 | 115 |
| 20 | Abhijit Jana | 0 | 0 | 18 | 87 | 0 | 3 | 2 | 3 | 113 |
| 21 | Ricardo Peres | 0 | 0 | 0 | 6 | 15 | 31 | 38 | 13 | 103 |
| 22 | Jimmy Bogard | 19 | 18 | 14 | 12 | 7 | 5 | 18 | 6 | 99 |
| 23 | Dennis Delimarsky | 0 | 0 | 50 | 28 | 16 | 5 | 0 | 0 | 99 |
| 24 | Peter Vogel | 0 | 0 | 0 | 0 | 13 | 27 | 44 | 12 | 96 |
| 25 | Jonathan Allen | 0 | 0 | 11 | 19 | 25 | 15 | 17 | 9 | 96 |
| 26 | Matthew Podwysocki | 27 | 45 | 21 | 1 | 1 | 0 | 0 | 0 | 95 |
| 27 | Rockford Lhotka | 8 | 19 | 17 | 20 | 20 | 5 | 3 | 0 | 92 |
| 28 | K. Scott Allen | 11 | 8 | 14 | 15 | 19 | 11 | 8 | 3 | 89 |
| 29 | Shai Raiten | 0 | 0 | 18 | 40 | 22 | 9 | 0 | 0 | 89 |
| 30 | Charles Sterling | 3 | 8 | 4 | 6 | 25 | 24 | 15 | 1 | 86 |
| 31 | Deborah Kurata | 0 | 24 | 36 | 2 | 9 | 3 | 8 | 1 | 83 |
| 32 | Phil Haack | 3 | 5 | 10 | 31 | 13 | 11 | 6 | 0 | 79 |
| 33 | James Michael Hare | 0 | 0 | 8 | 38 | 24 | 3 | 0 | 3 | 76 |
| 34 | Peter Kellner | 2 | 18 | 10 | 8 | 22 | 9 | 3 | 3 | 75 |
| 35 | Sacha Barber | 7 | 4 | 5 | 9 | 4 | 8 | 31 | 7 | 75 |
| 36 | Kunal Chowdhury | 0 | 0 | 14 | 13 | 20 | 20 | 6 | 2 | 75 |
| 37 | Pete Brown | 3 | 6 | 28 | 17 | 16 | 3 | 2 | 0 | 75 |
| 38 | Mike Taulty | 7 | 9 | 9 | 10 | 13 | 4 | 21 | 1 | 74 |
| 39 | Glenn Block | 12 | 9 | 18 | 11 | 9 | 5 | 6 | 2 | 72 |
| 40 | Miguel de Icaza | 0 | 0 | 25 | 19 | 8 | 5 | 12 | 3 | 72 |
| 41 | Davy Brion | 7 | 30 | 26 | 9 | 0 | 0 | 0 | 0 | 72 |
| 42 | Mary Jo Foley | 0 | 3 | 1 | 20 | 17 | 16 | 12 | 2 | 71 |
| 43 | Rick Strahl | 9 | 13 | 1 | 11 | 15 | 8 | 7 | 7 | 71 |
| 44 | Alex Skorkin | 0 | 0 | 11 | 51 | 9 | 0 | 0 | 0 | 71 |
| 45 | Jesse Liberty | 3 | 0 | 10 | 13 | 19 | 4 | 15 | 1 | 65 |
| 46 | Tatworth | 0 | 0 | 0 | 14 | 30 | 7 | 14 | 0 | 65 |
| 47 | Daniel Moth | 17 | 23 | 0 | 12 | 6 | 3 | 4 | 0 | 65 |
| 48 | Abhishek Sur | 0 | 1 | 20 | 35 | 2 | 3 | 1 | 2 | 64 |
| 49 | Clemens Reijnen | 11 | 12 | 19 | 5 | 8 | 3 | 5 | 1 | 64 |
| 50 | Justin Etheredge | 25 | 20 | 16 | 3 | 0 | 0 | 0 | 0 | 64 |
| 51 | Jeremy Likness | 0 | 0 | 18 | 12 | 21 | 3 | 5 | 3 | 62 |
| 52 | Iris Classon | 0 | 0 | 0 | 0 | 37 | 16 | 6 | 3 | 62 |
| 53 | Bill Wagner | 0 | 7 | 2 | 21 | 11 | 7 | 11 | 3 | 62 |
| 54 | Mark Needham | 0 | 31 | 31 | 0 | 0 | 0 | 0 | 0 | 62 |
| 55 | Harry Pierson | 17 | 37 | 0 | 2 | 3 | 0 | 1 | 0 | 60 |
| 56 | Rob Eisenberg | 4 | 1 | 14 | 5 | 3 | 12 | 13 | 7 | 59 |
| 57 | Steven Sinofsky | 0 | 0 | 1 | 22 | 31 | 3 | 1 | 0 | 58 |
| 58 | John Papa | 8 | 3 | 7 | 3 | 13 | 15 | 7 | 0 | 56 |
| 59 | Patrick Smacchia | 9 | 8 | 11 | 12 | 10 | 4 | 2 | 0 | 56 |
| 60 | Jon Skeet | 6 | 1 | 0 | 15 | 9 | 5 | 16 | 3 | 55 |
| 61 | Steve Smith | 0 | 1 | 3 | 22 | 8 | 6 | 15 | 0 | 55 |
| 62 | Bnaya Eshet | 0 | 0 | 0 | 12 | 18 | 9 | 5 | 9 | 53 |
| 63 | Carl Franklin & Richard Campbell | 0 | 0 | 3 | 0 | 7 | 17 | 16 | 10 | 53 |
| 64 | Shawn Wildermuth | 7 | 13 | 8 | 6 | 10 | 1 | 6 | 2 | 53 |
| 65 | Rory Primrose | 0 | 0 | 19 | 18 | 7 | 7 | 2 | 0 | 53 |
| 66 | Willy-P. Schaub | 0 | 0 | 0 | 3 | 17 | 9 | 19 | 4 | 52 |
| 67 | Kirill Osenkov | 1 | 18 | 12 | 5 | 9 | 4 | 3 | 0 | 52 |
| 68 | S.Somasegar | 0 | 0 | 0 | 0 | 15 | 16 | 17 | 3 | 51 |
| 69 | Bart de Smet | 21 | 24 | 5 | 1 | 0 | 0 | 0 | 0 | 51 |
| 70 | Laurent Bugnion | 0 | 0 | 7 | 17 | 13 | 4 | 6 | 3 | 50 |
| 71 | Eric Battalio | 0 | 0 | 0 | 0 | 6 | 15 | 27 | 2 | 50 |
| 72 | Maarten Balliauw | 2 | 0 | 6 | 16 | 7 | 15 | 3 | 1 | 50 |
| 73 | Don Syme | 1 | 2 | 4 | 15 | 13 | 14 | 1 | 0 | 50 |
| 74 | Gil Fink | 1 | 4 | 24 | 20 | 1 | 0 | 0 | 0 | 50 |
| 75 | Cameron Skinner | 11 | 11 | 21 | 6 | 1 | 0 | 0 | 0 | 50 |
| 76 | Dave M. Bush | 7 | 25 | 5 | 0 | 0 | 3 | 7 | 2 | 49 |
| 77 | Stephen Forte | 4 | 21 | 21 | 3 | 0 | 0 | 0 | 0 | 49 |
| 78 | Michael Crump | 0 | 0 | 2 | 11 | 13 | 4 | 13 | 5 | 48 |
| 79 | Kim Spilker | 0 | 0 | 2 | 16 | 7 | 9 | 14 | 0 | 48 |
| 80 | Jura Gorohovsky | 0 | 0 | 5 | 12 | 15 | 12 | 4 | 0 | 48 |
| 81 | Wally McClure | 0 | 4 | 11 | 17 | 15 | 0 | 1 | 0 | 48 |
| 82 | Jason Zander | 0 | 10 | 13 | 5 | 18 | 0 | 1 | 0 | 47 |
| 83 | Chris Sells | 2 | 22 | 10 | 4 | 9 | 0 | 0 | 0 | 47 |
| 84 | Marcelo Lopez Ruiz | 1 | 2 | 37 | 5 | 1 | 0 | 0 | 0 | 46 |
| 85 | Nicholas Blumhardt | 0 | 4 | 6 | 6 | 1 | 7 | 18 | 3 | 45 |
| 86 | Rob Reynolds | 0 | 12 | 11 | 9 | 6 | 3 | 4 | 0 | 45 |
| 87 | Dmitri Nesteruk | 0 | 1 | 1 | 1 | 14 | 20 | 2 | 5 | 44 |
| 88 | Rory Becker | 0 | 0 | 13 | 11 | 1 | 1 | 18 | 0 | 44 |
| 89 | Filip Ekberg | 0 | 0 | 0 | 0 | 11 | 21 | 11 | 0 | 43 |
| 90 | G. Andrew Duthie | 2 | 12 | 4 | 4 | 20 | 1 | 0 | 0 | 43 |
| 91 | Grigori Melnik | 0 | 0 | 8 | 13 | 8 | 8 | 4 | 1 | 42 |
| 92 | Jonathan Wood | 0 | 0 | 6 | 21 | 4 | 0 | 8 | 3 | 42 |
| 93 | Peter Ritchie | 3 | 1 | 14 | 6 | 13 | 2 | 3 | 0 | 42 |
| 94 | Rob Conery | 10 | 2 | 4 | 15 | 5 | 1 | 5 | 0 | 42 |
| 95 | Gian Maria Ricci | 0 | 0 | 1 | 37 | 4 | 0 | 0 | 0 | 42 |
| 96 | Anoop Madhusudanan | 0 | 0 | 14 | 8 | 11 | 8 | 0 | 0 | 41 |
| 97 | Jeff Blankenburg | 0 | 1 | 15 | 6 | 19 | 0 | 0 | 0 | 41 |
| 98 | Rowan Miller | 0 | 0 | 5 | 4 | 4 | 10 | 15 | 2 | 40 |
| 99 | Hadi Hariri | 0 | 1 | 4 | 21 | 11 | 2 | 0 | 0 | 39 |
| 100 | Yochay Kiriaty | 3 | 15 | 15 | 6 | 0 | 0 | 0 | 0 | 39 |

Data

You can download the data from http://trelford.com/DewDropTo2015.csv

I’d be interested in hearing about what you find :)

Categories: Programming

InetAddressImpl#lookupAllHostAddr slow/hangs

Mark Needham - Sun, 03/29/2015 - 01:31

Since I upgraded to Yosemite I’ve noticed that attempts to resolve localhost on my home network have been taking ages (sometimes over a minute) so I thought I’d try and work out why.

This is what my initial /etc/hosts file looked like based on the assumption that my machine’s hostname was teetotal:

$ cat /etc/hosts
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting.  Do not change this entry.
##
127.0.0.1	localhost
255.255.255.255	broadcasthost
::1             localhost
#fe80::1%lo0	localhost
127.0.0.1	wuqour.local
127.0.0.1       teetotal

I set up a little test which replicated the problem:

import java.net.InetAddress;
import java.net.UnknownHostException;
 
public class LocalhostResolution
{
    public static void main( String[] args ) throws UnknownHostException
    {
        long start = System.currentTimeMillis();
        InetAddress localHost = InetAddress.getLocalHost();
        System.out.println(localHost);
        System.out.println(System.currentTimeMillis() - start);
    }
}

which has the following output:

Exception in thread "main" java.net.UnknownHostException: teetotal-2: teetotal-2: nodename nor servname provided, or not known
	at java.net.InetAddress.getLocalHost(InetAddress.java:1473)
	at LocalhostResolution.main(LocalhostResolution.java:9)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
Caused by: java.net.UnknownHostException: teetotal-2: nodename nor servname provided, or not known
	at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
	at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
	at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
	at java.net.InetAddress.getLocalHost(InetAddress.java:1469)
	... 6 more

Somehow my hostname has changed to teetotal-2 so I added the following entry to /etc/hosts:

127.0.0.1	teetotal-2

Now if we run the program we see this output instead:

teetotal-2/127.0.0.1
5011

It's still taking 5 seconds to resolve, which is much longer than I'd expect. After stepping through the code with a debugger it seems like it's trying to do an IPv6 resolution rather than IPv4, so I added an /etc/hosts entry for that too:

::1             teetotal-2

Now resolution is much quicker:

teetotal-2/127.0.0.1
6

Happy days!

Categories: Programming

Internet of things

Xebia Blog - Sat, 03/28/2015 - 11:26

Do you remember the time when you were not connected to the internet? When you had to call your friend instead of sending a message via WhatsApp to make an appointment? The times you had to apply for a job by writing a letter with a pen instead of sending your LinkedIn page? We have all gone through a major revolution in which we've all got a digital identity. The same revolution has started for things, right now!

Internet of Things

Potential

Things we use all day, like your watch or thermostat, are suddenly connected to the internet. These things are equipped with electronics, sensors and connectivity. Whether you like it or not, you will not escape this movement. According to Gartner there will be nearly 26 billion devices on the Internet of Things by 2020.
The potential of the Internet of Things isn't just linking millions of devices. It is more like the potential Google or Amazon had when building massive IT infrastructure. The IoT is about transforming business models and enabling companies to sell products in entirely new and better ways.

Definition

The definition of IoT is fairly simple. The internet of things involves every thing that is connected to the internet. You should exclude mobiles and computers, because they've already become mainstream.

In fact, the Internet of things is a name for a set of trends. First of all we have the trend that objects we use all day are suddenly connected to the internet. For example, bathroom scales that compare your weight with others'.
Another trend is that devices that were already shipped with CPUs have become much more powerful when they're connected to a network. Functionality of the hardware depends on local computing as well as powerful cloud computing. A fitness tracker can carry out some functionality, like basic data cleaning and tabulation, on local hardware, but it depends on sophisticated machine-learning algorithms in an ever-improving cloud service to offer prescriptions and insight.
Besides that we have the maker movement. Creating hardware isn't just for big companies like Sony or Samsung anymore, just as creating software is no longer the privilege of IBM and Microsoft that it was in the '80s and '90s.

What’s new?

Devices that are connected to the internet aren't new. For many years we've connected devices to the internet, although not at this scale. I think we have to compare it to concepts like 'the cloud'. Before I heard the name 'the cloud' I was already using services that run on the internet, like Hotmail. But one swallow doesn't make a summer. When suddenly lots of services are running on the web, we speak of a trend, and the market will give a name to it. The same is happening with the internet of things. Suddenly all kinds of 'normal' devices are connected to the internet, and we have a movement in that direction that is unstoppable. And the possibilities it will bring are endless.

Splitting up in pieces

We can divide the internet of things into five smart categories. First of all we have the smart wearable. It is designed for a variety of purposes as well as for wear on a variety of parts of the body. Think of a bicycle helmet that detects a crash. Next we have the smart home. Its goal is to make the experience of living at home more convenient and pleasant. A thermostat that learns what temperatures users like and builds a context-aware personalized schedule. The third and fourth categories are the smart city and the smart environment. They focus on finding sustainable solutions to growing problems. The last category is very interesting: the smart enterprise. One can think of real-time shipment tracking or a smart metering solution that manages energy consumption at the individual appliance and machine level.

Changes of business models

There are plenty of new business models due to the internet of things. An example is a sensor-embedded trash can, capable of real-time context analysis and of alerting the authorities when it is full and needs to be emptied. Another example is the insurance world, which will adopt the internet of things. There's already behaviour-based car insurance pricing available. Customers who agree to install a monitoring device on their car's diagnostic port get a break on their insurance rates, with pricing that varies according to usage and driving habits.

Big data

Other disciplines in software development also have an interest in the new movement, like big data. The internet of things will not just generate big data, but obese data. Did you know that every flight generates around 3 terabytes of data from all the sensors in an airplane? Think about how much data will be generated, and will have to be analyzed, when all buildings in a town are equipped with sensors to measure the quality of the air.

Conclusion

The gap between software and the physical world becomes smaller every day, and that is the result of the unstoppable movement of the internet of things. It is still at the beginning. Business models are changing and there are lots of opportunities. You just have to find them...

Toward a Better Markdown Tutorial

Coding Horror - Jeff Atwood - Sat, 03/28/2015 - 01:19

It's always surprised me when people, especially technical people, say they don't know Markdown. Do you not use GitHub? Stack Overflow? Reddit?

I get that an average person may not understand how Markdown is based on simple old-school plaintext ASCII typing conventions. Like when you're *really* excited about something, you naturally put asterisks around it, and Markdown makes that automagically italic.

But how can we expect them to know that, if they grew up with wizzy-wig editors where the only way to make italic is to click a toolbar button, like an animal?

I am not advocating for WYSIWYG here. While there's certainly more than one way to make italic, I personally don't like invisible formatting tags and I find that WYSIWYG is more like WYCSYCG in practice. It's dangerous to be dependent on these invisible formatting codes you can't control. And they're especially bad if you ever plan to care about differences, revisions, and edit history. That's why I like to teach people simple, visible formatting codes.
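To see what those visible codes turn into, here is a small sketch that runs the asterisks example through a CommonMark parser; it assumes the commonmark-java library, but any compliant implementation renders this input the same way:

import org.commonmark.node.Node;
import org.commonmark.parser.Parser;
import org.commonmark.renderer.html.HtmlRenderer;

public class ItalicDemo {
    public static void main(String[] args) {
        Parser parser = Parser.builder().build();
        HtmlRenderer renderer = HtmlRenderer.builder().build();
        // The plaintext convention: asterisks around the exciting word...
        Node doc = parser.parse("I'm *really* excited about this.");
        // ...becomes <em> in the rendered HTML.
        System.out.print(renderer.render(doc));
        // Output: <p>I'm <em>really</em> excited about this.</p>
    }
}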

We can certainly debate which markup language is superior, but in Discourse we tried to build a rainbow tool that satisfies everyone. We support:

  • HTML (safe subset)
  • BBCode (basic subset)
  • Markdown (full)

This makes coding our editor kind of hellishly complex, but it means that for you, the user, whatever markup language you're used to will probably "just work" on any Discourse site you happen to encounter in the future. But BBCode and HTML are supported mostly as bridges. What we view as our primary markup format, and what we want people to learn to use, is Markdown.

However, one thing I have really struggled with is that there isn't any single great place to refer people to with a simple walkthrough and explanation of Markdown.

When we built Stack Overflow circa 2008-2009, I put together my best effort at the time, which became the "editing help" page.

It's just OK. And GitHub has their Markdown Basics, and GitHub Flavored Markdown help pages. They're OK.

The Ghost editor I am typing this in has an OK Markdown help page too.

But none of these are great.

What we really need is a great Markdown tutorial and reference page, one that we can refer anyone to, anywhere in the world, from someone who barely touches computers to the hardest of hard-core coders. I don't want to build another one of these kinds of help pages just for Discourse; I want to build one for everyone. Since it is for everyone, I want to involve everyone. And by everyone, I mean you.

After writing about Our Programs Are Fun To Use – which I just updated with a bunch of great examples contributed in the comments, so go check that out even if you read it already – I am inspired by the idea that we can make a fun, interactive Markdown tutorial together.

So here's what I propose: a small contest to build an interactive Markdown tutorial and reference, which we will eventually host at the home page of commonmark.org, and can be freely mirrored anywhere in the world.

Some ground rules:

  • It should be primarily in JavaScript and HTML. Ideally entirely so. If you need to use a server-side scripting language, that's fine, but try to keep it simple, and make sure it's something that is reasonable to deploy on a generic Linux server anywhere.

  • You can pick any approach you want, but it should be highly interactive, and I suggest that you at minimum provide two tracks:

    • A gentle, interactive tutorial for absolute beginners who are asking "what the heck does Markdown even mean?"

    • A dynamic, interactive reference for intermediates and experts who are asking more advanced usage questions, like "how do I make code inside a list, or a list inside a list?"

  • There's a lot of variance in Markdown implementations, so teach the most common parts of Markdown, and cover the optional / less common variations either in the advanced reference areas or in extra bonus sections. People do love their tables and footnotes! We recommend using a CommonMark compatible implementation, but it is not a requirement.

  • Your code must be MIT licensed.

  • Judging will be completely at the whim of myself and John MacFarlane. Our decisions will be capricious, arbitrary, probably nonsensical, and above all, final.

  • We'll run this contest for a period of one month, from today until April 28th, 2015.

  • If I have hastily left out any clarifying rules I should have had, they will go here.

Of course, the real reward for building is the admiration of your peers, and the knowledge that an entire generation of people will grow up learning basic Markdown skills through your contribution to a global open source project.

But on top of that, I am offering … fabulous prizes!

  1. Let's start with my Recommended Reading List. I count sixteen books on it. As long as you live in a place Amazon can ship to, I'll send you all the books on that list. (Or the equivalent value in an Amazon gift certificate, if you happen to have a lot of these books already, or prefer that.)

  2. Second prize is a CODE Keyboard. This can be shipped worldwide.

  3. Third prize is you're fired. Just kidding. Third prize is your choice of any three books on my reading list. (Same caveats around Amazon apply.)

Looking for a place to get started? Check out:

If you want privacy, you can mail your entries to me directly (see the about page here for my email address), or if you are comfortable with posting your contest entry in public, I'll create a topic on talk.commonmark for you to post links and gather feedback. Leaving your entry in the comments on this article is also OK.

We desperately need a great place that we can send everyone to learn Markdown, and we need your help to build it. Let's give this a shot. Surprise and amaze us!

Categories: Programming

7 Habits of Highly Motivated People

Motivation is a powerful skill.

It can lift you up from the worst of places, and inspire you to new heights.

After all, nothing is worse than slogging your way through your days, or working your way through a bunch of mundane tasks.

But, like I said, motivation is a skill.

You need to learn it.   For many people it does not come naturally.   And chances are, many of us have had bad models, bad advice, and worse, bad habits for a lifetime.

One of the most important insights I found was said by Stephen Covey long ago – satisfied needs don’t motivate.

It’s why we need to stay hungry.

Here’s how you stay hungry -- find a problem you hate, and focus on creating a solution you love.

But how do you light your fire from the inside out in a sustainable way?

That’s where the 7 habits of highly motivated people comes in.

I wanted to put together a very simple set of habits and practices that actually work for building your motivational muscle and finding your inner mojo.

Here are the 7 habits of highly motivated people at a glance:

  1. Find Your WHY
  2. Change Your Beliefs About What’s Possible
  3. Change Your Beliefs That Limit You
  4. Spend More Time In Your Values
  5. Surround Yourself With Catalysts
  6. Build Better Feedback Loops
  7. “Pull” Yourself with Compelling Goals

There is a lot of science behind the habits.  If you’re that motivated, you can research it through bunches of books, bunches of sites, and brilliant TED talks.

But, I’d much rather that you spent the time simply adopting and applying the habits, so you can set your motivation on fire.

It’s time to do more of what you were born to do.

It’s time to live and breathe the things that you want to live and breathe.

It’s time to rise again from whatever ashes might have burned you down, and let your phoenix fly.

If you aren’t sure where to get started, first read 7 habits of highly motivated people, and then adopt habit #1:

Find Your WHY.

You’ll be glad you did.

I can see your pilot light is on already.

Categories: Architecture, Programming

Neo4j: Generating real time recommendations with Cypher

Mark Needham - Fri, 03/27/2015 - 07:59

One of the most common uses of Neo4j is for building real time recommendation engines and a common theme is that they make use of lots of different bits of data to come up with an interesting recommendation.

For example in this video Amanda shows how dating websites build real time recommendation engines by starting with social connections and then introducing passions, location and a few other things.

GraphAware have a neat framework that helps you to build your own recommendation engine using Java, and I was curious what a Cypher version would look like.

This is the sample graph:

CREATE
    (m:Person:Male {name:'Michal', age:30}),
    (d:Person:Female {name:'Daniela', age:20}),
    (v:Person:Male {name:'Vince', age:40}),
    (a:Person:Male {name:'Adam', age:30}),
    (l:Person:Female {name:'Luanne', age:25}),
    (c:Person:Male {name:'Christophe', age:60}),
 
    (lon:City {name:'London'}),
    (mum:City {name:'Mumbai'}),
 
    (m)-[:FRIEND_OF]->(d),
    (m)-[:FRIEND_OF]->(l),
    (m)-[:FRIEND_OF]->(a),
    (m)-[:FRIEND_OF]->(v),
    (d)-[:FRIEND_OF]->(v),
    (c)-[:FRIEND_OF]->(v),
    (d)-[:LIVES_IN]->(lon),
    (v)-[:LIVES_IN]->(lon),
    (m)-[:LIVES_IN]->(lon),
    (l)-[:LIVES_IN]->(mum);

We want to recommend some potential friends to ‘Adam’ so the first layer of our query is to find his friends of friends as there are bound to be some potential friends amongst them:

MATCH (me:Person {name: "Adam"})
MATCH (me)-[:FRIEND_OF]-()-[:FRIEND_OF]-(potentialFriend)
RETURN me, potentialFriend, COUNT(*) AS friendsInCommon
 
==> +--------------------------------------------------------------------------------------+
==> | me                             | potentialFriend                   | friendsInCommon |
==> +--------------------------------------------------------------------------------------+
==> | Node[1007]{name:"Adam",age:30} | Node[1006]{name:"Vince",age:40}   | 1               |
==> | Node[1007]{name:"Adam",age:30} | Node[1005]{name:"Daniela",age:20} | 1               |
==> | Node[1007]{name:"Adam",age:30} | Node[1008]{name:"Luanne",age:25}  | 1               |
==> +--------------------------------------------------------------------------------------+
==> 3 rows

This query gives us back a list of potential friends and how many friends we have in common.

Now that we’ve got some potential friends let’s start building a ranking for each of them. One indicator which could weigh in favour of a potential friend is if they live in the same location as us so let’s add that to our query:

MATCH (me:Person {name: "Adam"})
MATCH (me)-[:FRIEND_OF]-()-[:FRIEND_OF]-(potentialFriend)
 
WITH me, potentialFriend, COUNT(*) AS friendsInCommon
 
RETURN  me,
        potentialFriend,
        SIZE((potentialFriend)-[:LIVES_IN]->()<-[:LIVES_IN]-(me)) AS sameLocation
 
==> +-----------------------------------------------------------------------------------+
==> | me                             | potentialFriend                   | sameLocation |
==> +-----------------------------------------------------------------------------------+
==> | Node[1007]{name:"Adam",age:30} | Node[1006]{name:"Vince",age:40}   | 0            |
==> | Node[1007]{name:"Adam",age:30} | Node[1005]{name:"Daniela",age:20} | 0            |
==> | Node[1007]{name:"Adam",age:30} | Node[1008]{name:"Luanne",age:25}  | 0            |
==> +-----------------------------------------------------------------------------------+
==> 3 rows

Next we'll check whether Adam's potential friends have the same gender as him by comparing the labels each node has. We've got 'Male' and 'Female' labels which indicate gender.

MATCH (me:Person {name: "Adam"})
MATCH (me)-[:FRIEND_OF]-()-[:FRIEND_OF]-(potentialFriend)
 
WITH me, potentialFriend, COUNT(*) AS friendsInCommon
 
RETURN  me,
        potentialFriend,
        SIZE((potentialFriend)-[:LIVES_IN]->()<-[:LIVES_IN]-(me)) AS sameLocation,
        LABELS(me) = LABELS(potentialFriend) AS gender
 
==> +--------------------------------------------------------------------------------------------+
==> | me                             | potentialFriend                   | sameLocation | gender |
==> +--------------------------------------------------------------------------------------------+
==> | Node[1007]{name:"Adam",age:30} | Node[1006]{name:"Vince",age:40}   | 0            | true   |
==> | Node[1007]{name:"Adam",age:30} | Node[1005]{name:"Daniela",age:20} | 0            | false  |
==> | Node[1007]{name:"Adam",age:30} | Node[1008]{name:"Luanne",age:25}  | 0            | false  |
==> +--------------------------------------------------------------------------------------------+
==> 3 rows

Next up let's calculate the age difference between Adam and his potential friends:

MATCH (me:Person {name: "Adam"})
MATCH (me)-[:FRIEND_OF]-()-[:FRIEND_OF]-(potentialFriend)
 
WITH me, potentialFriend, COUNT(*) AS friendsInCommon
 
RETURN me,
       potentialFriend,
       SIZE((potentialFriend)-[:LIVES_IN]->()<-[:LIVES_IN]-(me)) AS sameLocation,
       abs( me.age - potentialFriend.age) AS ageDifference,
       LABELS(me) = LABELS(potentialFriend) AS gender,
       friendsInCommon
 
==> +------------------------------------------------------------------------------------------------------------------------------+
==> | me                             | potentialFriend                   | sameLocation | ageDifference | gender | friendsInCommon |
==> +------------------------------------------------------------------------------------------------------------------------------+
==> | Node[1007]{name:"Adam",age:30} | Node[1006]{name:"Vince",age:40}   | 0            | 10.0          | true   | 1               |
==> | Node[1007]{name:"Adam",age:30} | Node[1005]{name:"Daniela",age:20} | 0            | 10.0          | false  | 1               |
==> | Node[1007]{name:"Adam",age:30} | Node[1008]{name:"Luanne",age:25}  | 0            | 5.0           | false  | 1               |
==> +------------------------------------------------------------------------------------------------------------------------------+
==> 3 rows

Now let’s do some filtering to get rid of people that Adam is already friends with – there wouldn’t be much point in recommending those people!

MATCH (me:Person {name: "Adam"})
MATCH (me)-[:FRIEND_OF]-()-[:FRIEND_OF]-(potentialFriend)
 
WITH me, potentialFriend, COUNT(*) AS friendsInCommon
 
WITH me,
     potentialFriend,
     SIZE((potentialFriend)-[:LIVES_IN]->()<-[:LIVES_IN]-(me)) AS sameLocation,
     abs( me.age - potentialFriend.age) AS ageDifference,
     LABELS(me) = LABELS(potentialFriend) AS gender,
     friendsInCommon
 
WHERE NOT (me)-[:FRIEND_OF]-(potentialFriend)
 
RETURN me,
       potentialFriend,
       SIZE((potentialFriend)-[:LIVES_IN]->()<-[:LIVES_IN]-(me)) AS sameLocation,
       abs( me.age - potentialFriend.age) AS ageDifference,
       LABELS(me) = LABELS(potentialFriend) AS gender,
       friendsInCommon
 
==> +------------------------------------------------------------------------------------------------------------------------------+
==> | me                             | potentialFriend                   | sameLocation | ageDifference | gender | friendsInCommon |
==> +------------------------------------------------------------------------------------------------------------------------------+
==> | Node[1007]{name:"Adam",age:30} | Node[1006]{name:"Vince",age:40}   | 0            | 10.0          | true   | 1               |
==> | Node[1007]{name:"Adam",age:30} | Node[1005]{name:"Daniela",age:20} | 0            | 10.0          | false  | 1               |
==> | Node[1007]{name:"Adam",age:30} | Node[1008]{name:"Luanne",age:25}  | 0            | 5.0           | false  | 1               |
==> +------------------------------------------------------------------------------------------------------------------------------+
==> 3 rows

In this case we haven’t actually filtered anyone out but for some of the other people we would see a reduction in the number of potential friends.

Our final step is to come up with a score for each of the features that we’ve identified as being important for making a friend suggestion.

We'll assign a score of 10 if the people live in the same location or have the same gender as Adam, and 0 if not. For the ageDifference and friendsInCommon we'll apply a log curve so that those values don't have a disproportionate effect on our final score. We'll use the formula defined in the ParetoScoreTransformer to do this:

    public <OUT> float transform(OUT item, float score) {
        if (score < minimumThreshold) {
            return 0;
        }
 
        double alpha = Math.log((double) 5) / eightyPercentLevel;
        double exp = Math.exp(-alpha * score);
        return new Double(maxScore * (1 - exp)).floatValue();
    }
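As a sanity check, here is that transform as standalone Java with the constants used for friendsInCommon below (maxScore = 100, eightyPercentLevel = 10; the minimumThreshold check is omitted). Note how an input equal to the eighty percent level maps to exactly 80% of the maximum score:

public class ParetoCheck {
    static double transform(double score, double maxScore, double eightyPercentLevel) {
        double alpha = Math.log(5) / eightyPercentLevel;
        return maxScore * (1 - Math.exp(-alpha * score));
    }

    public static void main(String[] args) {
        // 1 friend in common -> ~14.866, matching friendsInCommon in the results below
        System.out.println(transform(1, 100, 10));
        // An input at the eighty percent level -> 80.0
        System.out.println(transform(10, 100, 10));
    }
}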

And now for our completed recommendation query:

MATCH (me:Person {name: "Adam"})
MATCH (me)-[:FRIEND_OF]-()-[:FRIEND_OF]-(potentialFriend)
 
WITH me, potentialFriend, COUNT(*) AS friendsInCommon
 
WITH me,
     potentialFriend,
     SIZE((potentialFriend)-[:LIVES_IN]->()<-[:LIVES_IN]-(me)) AS sameLocation,
     abs( me.age - potentialFriend.age) AS ageDifference,
     LABELS(me) = LABELS(potentialFriend) AS gender,
     friendsInCommon
 
WHERE NOT (me)-[:FRIEND_OF]-(potentialFriend)
 
WITH potentialFriend,
       // 100 -> maxScore, 10 -> eightyPercentLevel, friendsInCommon -> score (from the formula above)
       100 * (1 - exp((-1.0 * (log(5.0) / 10)) * friendsInCommon)) AS friendsInCommon,
       sameLocation * 10 AS sameLocation,
       -1 * (10 * (1 - exp((-1.0 * (log(5.0) / 20)) * ageDifference))) AS ageDifference,
       CASE WHEN gender THEN 10 ELSE 0 END as sameGender
 
RETURN potentialFriend,
      {friendsInCommon: friendsInCommon,
       sameLocation: sameLocation,
       ageDifference:ageDifference,
       sameGender: sameGender} AS parts,
     friendsInCommon + sameLocation + ageDifference + sameGender AS score
ORDER BY score DESC
 
==> +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> | potentialFriend                   | parts                                                                                                           | score             |
==> +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> | Node[1006]{name:"Vince",age:40}   | {friendsInCommon -> 14.86600774792154, sameLocation -> 0, ageDifference -> -5.52786404500042, sameGender -> 10} | 19.33814370292112 |
==> | Node[1008]{name:"Luanne",age:25}  | {friendsInCommon -> 14.86600774792154, sameLocation -> 0, ageDifference -> -3.312596950235779, sameGender -> 0} | 11.55341079768576 |
==> | Node[1005]{name:"Daniela",age:20} | {friendsInCommon -> 14.86600774792154, sameLocation -> 0, ageDifference -> -5.52786404500042, sameGender -> 0}  | 9.33814370292112  |
==> +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

The final query isn’t too bad – the only really complex bit is the log curve calculation. This is where user defined functions will come into their own in the future.

The nice thing about this approach is that we don't have to step outside of Cypher, so if you're not comfortable with Java you can still do real time recommendations! On the other hand, the different parts of the recommendation engine all get mixed up, so it's not as easy to see the whole pipeline as it is with the GraphAware framework.

The next step is to apply this to the Twitter graph and come up with follower recommendations on there.

Categories: Programming

Game Performance: Layout Qualifiers

Android Developers Blog - Thu, 03/26/2015 - 21:02
Today, we want to share some best practices on using the OpenGL Shading Language (GLSL) that can optimize the performance of your game and simplify your workflow. Specifically, Layout qualifiers make your code more deterministic and increase performance by reducing your work.

Let’s start with a simple vertex shader and change it as we go along.

This basic vertex shader takes position and texture coordinates, transforms the position and outputs the data to the fragment shader:
attribute vec4 vertexPosition;
attribute vec2 vertexUV;

uniform mat4 matWorldViewProjection;

varying vec2 outTexCoord;

void main()
{
  outTexCoord = vertexUV;
  gl_Position = matWorldViewProjection * vertexPosition;
}
Vertex Attribute Index

To draw a mesh onto the screen, you need to create a vertex buffer and fill it with vertex data, including positions and texture coordinates for this example.

In our sample shader, the vertex data may be laid out like this:
struct Vertex
{
  Vector4 Position;
  Vector2 TexCoords;
};
Therefore, we defined our vertex shader attributes like this:
attribute vec4 vertexPosition;
attribute vec2  vertexUV;
To associate the vertex data with the shader attributes, a call to glGetAttribLocation will get the handle of the named attribute. The attribute format is then detailed with a call to glVertexAttribPointer.
GLint handleVertexPos = glGetAttribLocation( myShaderProgram, "vertexPosition" );
glVertexAttribPointer( handleVertexPos, 4, GL_FLOAT, GL_FALSE, 0, 0 );

GLint handleVertexUV = glGetAttribLocation( myShaderProgram, "vertexUV" );
glVertexAttribPointer( handleVertexUV, 2, GL_FLOAT, GL_FALSE, 0, 0 );
But you may have multiple shaders with the vertexPosition attribute, and calling glGetAttribLocation for every shader wastes performance and increases the loading time of your game.

Using layout qualifiers you can change your vertex shader attributes declaration like this:
layout(location = 0) in vec4 vertexPosition;
layout(location = 1) in vec2 vertexUV;
To do so you also need to tell the shader compiler that your shader is aimed at GL ES version 3.0. This is done by adding a version declaration:
#version 300 es
Let's see how this affects our shader. The changed lines are the version declaration, the attribute declarations, and the out variable:
#version 300 es

layout(location = 0) in vec4 vertexPosition;
layout(location = 1) in vec2 vertexUV;

uniform mat4 matWorldViewProjection;

out vec2 outTexCoord;

void main()
{
  outTexCoord = vertexUV;
  gl_Position = matWorldViewProjection * vertexPosition;
}
Note that we also changed outTexCoord from varying to out. The varying keyword is deprecated from version 300 es and requires changing for the shader to work.

Note that Vertex Attribute qualifiers and #version 300 es are supported from OpenGL ES 3.0. The desktop equivalent is supported on OpenGL 3.3 and using #version 330.

Now you know your position attribute is always at 0 and your texture coordinates are always at 1, and you can bind your shader format without using glGetAttribLocation:
const int ATTRIB_POS = 0;
const int ATTRIB_UV   = 1;

glVertexAttribPointer( ATTRIB_POS, 4, GL_FLOAT, GL_FALSE, 0, 0 );
glVertexAttribPointer( ATTRIB_UV, 2, GL_FLOAT, GL_FALSE, 0, 0 );
This simple change leads to a cleaner pipeline, simpler code and saved performance during loading time.

To learn more about performance on Android, check out the Android Performance Patterns series.

Posted by Shanee Nishry, Games Developer Advocate

Join the discussion on

+Android Developers
Categories: Programming

Why Do I Write Emails the Way That I Do?

Making the Complex Simple - John Sonmez - Thu, 03/26/2015 - 15:00

In this video I respond to a somewhat negative email asking why I write emails the way that I do. The answer is simple: It’s effective.

The post Why Do I Write Emails the Way That I Do? appeared first on Simple Programmer.

Categories: Programming

Python: matplotlib hangs and shows nothing (Mac OS X)

Mark Needham - Thu, 03/26/2015 - 01:02

I’ve been playing around with some of the matplotlib demos recently and discovered that simply copying one of the examples didn’t actually work for me.

I was following the bar chart example and had the following code:

import numpy as np
import matplotlib.pyplot as plt
 
N = 5
ind = np.arange(N)
fig, ax = plt.subplots()
menMeans = (20, 35, 30, 35, 27)
menStd =   (2, 3, 4, 1, 2)
width = 0.35       # the width of the bars
rects1 = ax.bar(ind, menMeans, width, color='r', yerr=menStd)
 
plt.show()

When I execute this script from the command line it just hangs and I don’t see anything at all.

Via a combination of different blog posts (which all suggested different things!) I ended up with the following variation of imports which seems to do the job:

import numpy as np
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
 
N = 5
ind = np.arange(N)
fig, ax = plt.subplots()
menMeans = (20, 35, 30, 35, 27)
menStd =   (2, 3, 4, 1, 2)
width = 0.35       # the width of the bars
rects1 = ax.bar(ind, menMeans, width, color='r', yerr=menStd)
 
plt.show()

If I run this script a Python window pops up and contains the following image which is what I expected to happen in the first place!

[Bar chart displayed in a new window]

The thing to notice is that we’ve had to change the backend in order to use matplotlib from the shell:

With the TkAgg backend, which uses the Tkinter user interface toolkit, you can use matplotlib from an arbitrary non-gui python shell.

Current state: Wishing for ggplot!

Categories: Programming

Developing audio apps for Android Auto

Android Developers Blog - Wed, 03/25/2015 - 20:56

Posted by Joshua Gordon, Developer Advocate

Have you ever wanted to develop apps for the car, but found the variety of OEMs and proprietary platforms too big of a hurdle? Now with Android Auto, you can target a single platform supported by vehicles coming soon from 28 manufacturers.

Using familiar Android APIs, you can easily add a great in-car user experience to your existing audio apps, with just a small amount of code. If you’re new to developing for Auto, watch this DevByte for an overview of the APIs, and check out the training docs for an end-to-end tutorial.


Playback and custom controls

Custom playback controls on NPR One and iHeartRadio.

The first thing to understand about developing audio apps on Auto is that you don’t draw your user interface directly. Instead, the framework has two well-defined UIs (one for playback, one for browsing) that are created automatically. This ensures consistent behavior across audio apps for drivers, and frees you from dealing with any car specific functionalities or layouts. Although the layout is predefined, you can customize it with artwork, color themes, and custom controls.

Both NPR One and iHeartRadio customize their UI. NPR One adds controls to mark a story as interesting, to view a list of upcoming stories, and to skip to the next story. iHeartRadio adds controls to favorite stations and to like songs. Both apps store user preferences across form factors.

Because the UI is drawn by the framework, playback commands need to be relayed to your app. This is accomplished with the MediaSession callback, which has methods like onPlay() and onPause(). All car specific functionality is handled behind the scenes. For example, you don’t need to be aware if a command came from the touch screen, the steering wheel buttons, or the user’s voice.
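A minimal sketch of that callback (the framework MediaSession class is shown for brevity; an Auto app would typically use the compat variant and wire these methods to its player):

import android.media.session.MediaSession;

public class PlaybackCallback extends MediaSession.Callback {
    @Override
    public void onPlay() {
        // Start or resume playback. The command may have come from the touch
        // screen, the steering wheel buttons, or a voice action.
    }

    @Override
    public void onPause() {
        // Pause playback.
    }
}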

Browsing and recommendations

Browsing content on NPR One and iHeartRadio.

The browsing UI is likewise drawn by the framework. You implement the MediaBrowserService to share your content hierarchy with the framework. A content hierarchy is a collection of MediaItems that are either playable (e.g., a song, audio book, or radio station) or browsable (e.g., a favorites folder). Together, these form a tree used to display a browsable menu of your content.
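Here is a rough sketch of such a service (framework classes shown for brevity; real apps usually use the compat classes and load their catalog asynchronously, and the media IDs here are hypothetical):

import android.media.MediaDescription;
import android.media.browse.MediaBrowser.MediaItem;
import android.os.Bundle;
import android.service.media.MediaBrowserService;

import java.util.ArrayList;
import java.util.List;

public class MyMediaBrowserService extends MediaBrowserService {
    @Override
    public BrowserRoot onGetRoot(String clientPackageName, int clientUid, Bundle rootHints) {
        return new BrowserRoot("root", null); // the top of the content tree
    }

    @Override
    public void onLoadChildren(String parentId, Result<List<MediaItem>> result) {
        List<MediaItem> items = new ArrayList<>();
        if ("root".equals(parentId)) {
            // A browsable folder...
            items.add(new MediaItem(new MediaDescription.Builder()
                    .setMediaId("favorites").setTitle("Favorites").build(),
                    MediaItem.FLAG_BROWSABLE));
            // ...and a playable leaf.
            items.add(new MediaItem(new MediaDescription.Builder()
                    .setMediaId("station-1").setTitle("Recommended Station").build(),
                    MediaItem.FLAG_PLAYABLE));
        }
        result.sendResult(items);
    }
}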

With both apps, recommendations are key. NPR One recommends a short list of in-depth stories that can be selected from the browsing menu. These improve over time based on user feedback. iHeartRadio’s browsing menu lets you pick from favorites and recommended stations, and their “For You” feature gives recommendations based on user location. The app also provides the ability to create custom stations from the browsing menu. Doing so is efficient and requires only three taps (“Create Station” -> “Rock” -> “Foo Fighters”).

When developing for the car, it’s important to quickly connect users with content to minimize distractions while driving. It’s important to note that design considerations on Android Auto are different than on a mobile device. If you imagine a typical media player on a phone, you may picture browsable menus of “all tracks” or “all artists”. These are not ideal in the car, where the primary focus should be on the road. Both NPR One and iHeartRadio provide good examples of this, because they avoid deep menu hierarchies and lengthy browsable lists.

Voice actions for hands free operation

Voice actions (e.g., “Play KQED”) are an important part of Android Auto. You can support voice actions in your app by implementing onPlayFromSearch() in the MediaSession.Callback. Voice actions may also be used to start your app from the home screen (e.g., “Play KQED on iHeartRadio”). To enable this functionality, declare the MEDIA_PLAY_FROM_SEARCH intent filter in your manifest. For an example, see this sample app.
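A sketch of the voice hook (the onPlayFromSearch signature is from MediaSession.Callback; the query handling is app-specific and only outlined in comments):

import android.media.session.MediaSession;
import android.os.Bundle;

public class VoiceAwareCallback extends MediaSession.Callback {
    @Override
    public void onPlayFromSearch(String query, Bundle extras) {
        if (query == null || query.isEmpty()) {
            // "Play some music": fall back to a sensible default station.
        } else {
            // e.g. "Play KQED": resolve the query to content, then start playback.
        }
    }
}

// To be launchable by voice from the home screen, also declare an intent filter
// for the action "android.media.action.MEDIA_PLAY_FROM_SEARCH" in your manifest.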

Next steps

NPR One and iHeartRadio are just two examples of great apps for Android Auto today. They feel like a part of the car, and they look and sound great. You can extend your apps to the car today, too, and developing for Auto is easy. The framework handles the car-specific functionality for you, so you’re free to focus on making your app special. Join the discussion at http://g.co/androidautodev if you have questions or ideas to share. To get started on your app, visit developer.android.com/auto.

Categories: Programming

Agile Results Refresher

We live in amazing times. 

The world is full of opportunity at your fingertips.

You can inspire your mind and the art of the possible with TED talks.

You can learn anything with all of the OpenCourseWare from MIT or Wharton, or with Coursera, or you can build your skills with The Great Courses or Udemy.

You can read about anything and fill your Kindle with more books than you can read in this lifetime.

You can invest in yourself.  You can develop your intellectual horsepower, your emotional intelligence, your personal effectiveness, your communication skills, your relationship skills, and your financial intelligence.

You can develop your career, expand your experience, build your network, and grow your skills and abilities.  You can take on big hairy audacious goals.  You can learn your limits, build your strengths, and reduce your liabilities.

You can develop your body and your physical intelligence, with 4-minute workouts, P90X3 routines, Fitbits, the Microsoft Band, etc.

You can expand your network and connect with people around the world, all four corners of the globe, from all walks of life, for all sorts of reasons.

You can explore the world, either virtually through Google Earth, or take real-world epic adventures.

You can fund your next big idea and bring it to the world with Kickstarter.

You can explore new hobbies and develop your talents, your art, your music, you name it.

But where in the world will you get time?

And how will you manage your competing priorities?

And how will you find and keep your motivation?

How will you wake up strong, with a spring in your step, where all the things you want to achieve in this lifetime pull you forward and help you rise above the noise of everyday living?

That's not how I planned on starting this post, but it's a reminder of how the world is full of possibility, and how amazing your life can be when you come alive and begin the journey to become all that you're capable of.

How I planned to start the post was this.  It's Spring.  It's time for a refresher in the art of Agile Results to help you bring out your best.

Agile Results is a simple system for meaningful results.  It combines proven practices for productivity, time management, and personal effectiveness to help you achieve more in less time, and enjoy the process.

It's a way to spend your best time and your best energy to get your best results.

Agile Results is a way to slow down to speed up, find more fulfillment, and put your ambition into practice.

Agile Results is a way to realize and unleash your greatest potential.  Potential is a muscle that gets better through habits.

The way to get started with Agile Results is simple.  

  1. Daily Wins.  Each day, ask yourself what are Three Wins you want for today?   Maybe it's win a raving fan, maybe it's finish that thing that's been hanging over you, or maybe it's have a great lunch.  You can ask yourself this right here, right now -- what are Three Wins you want to accomplish today?
  2. Monday Vision.  On Mondays, ask yourself what are Three Wins you want for this week?  Imagine it's Friday and you're looking back on the week: what are Three Wins you want under your belt?  Use these Three Wins for the Week to inspire you, all week long, and pull you forward.  Each week is a fresh start.
  3. Friday Reflection.  On Friday, ask yourself, what are three things going well, and what are three things to improve?  Use these insights to get better each week.  Each week is a chance to get better at prioritizing, focusing, and creating clarity around what you value, what others value, and what's worth spending more time on.

As a bonus, and to really find a path of fulfillment, there are three more habits you can add ...

  1. Three Wins for the Month.  At the start of each month, ask yourself, what are Three Wins you want for the month?  If you know your Three Wins for the Month, you can use them to make your month more meaningful.  In fact, a great practice is to pick a theme for the month, whatever you want your month to be about, and use that to make your month matter.  When you combine that theme with your Three Wins, not only will you enjoy the month more, but at the end of the month you'll feel a better sense of accomplishment when you can talk about the Three Wins you achieved, whether they are private or public victories.  And if the month gets off track, use your Three Wins to help you get back on track.  Each month is a fresh start.
  2. Three Wins for the Year.  At the start of the year, ask yourself what are Three Wins you want to achieve for the year?  This could be anything from getting to your fighting weight, to taking an epic adventure, to writing your first book.
  3. Ask better questions.   You can do this anytime, anywhere.  Thinking is just asking and answering questions.  If you want better answers, ask better questions.  You can exponentially improve the quality of your life by asking better questions.

A simple way to remember this is to think in Three Wins:

Think in terms of Three Wins for the Day, Three Wins for the Week, Three Wins for the Month, and Three Wins for the Year.

Those are the core habits of Agile Results in a nutshell. 

You can start with that and be well on your way to getting better results in work and life.

If you want to really master Agile Results, you can read the book, Getting Results the Agile Way: A Personal Results System for Work and Life.

It's been a bestseller in time management, and it’s helped many people around the world create a better version of themselves.

If you want to take Agile Results to the next level, start a study group and share the ways you use Agile Results to create work-life balance, build better energy, learn faster, and unleash your potential, while enjoying the journey and getting more from work and life.

Share your stories and results with me, with your friends, with your family, with anyone and everyone – help everybody live a little better.

Categories: Architecture, Programming

Topic Modelling: Working out the optimal number of topics

Mark Needham - Tue, 03/24/2015 - 23:33

In my continued exploration of topic modelling I came across The Programming Historian blog and a post showing how to derive topics from a corpus using the Java library mallet.

The instructions on the blog make it very easy to get up and running, but as with other libraries I’ve used, you have to specify how many topics the corpus consists of. I’m never sure what value to select, but the authors make the following suggestion:

How do you know the number of topics to search for? Is there a natural number of topics? What we have found is that one has to run the train-topics with varying numbers of topics to see how the composition file breaks down. If we end up with the majority of our original texts all in a very limited number of topics, then we take that as a signal that we need to increase the number of topics; the settings were too coarse.

There are computational ways of searching for this, including using MALLETs hlda command, but for the reader of this tutorial, it is probably just quicker to cycle through a number of iterations (but for more see Griffiths, T. L., & Steyvers, M. (2004). Finding scientific topics. Proceedings of the National Academy of Science, 101, 5228-5235).

Since I haven’t yet had the time to dive into the paper or explore how to use the appropriate option in mallet I thought I’d do some variations on the stop words and number of topics and see how that panned out.

As I understand it, the idea is to try to get a uniform spread of topics across documents, i.e. we don’t want all the documents to have the same topic, otherwise any topic similarity calculations we run won’t be that interesting.

I tried running mallet with 10, 15, 20 and 30 topics and also varied the stop words used. I had one version which just stripped out the main characters’ names and the word ‘narrator’, and another where I stripped out the top 20% of words by occurrence and any words that appeared fewer than 10 times.

The reason for the latter was that it should identify interesting phrases across episodes better than TF/IDF can, while not just selecting the most popular words across the whole corpus.

I used mallet from the command line and ran it in two parts.

  1. Generate the model
  2. Work out the allocation of topics to documents based on hyperparameters

I wrote a script to help me out:

#!/bin/sh
 
# Import the corpus, applying the extra stopwords file passed as $1, and write the model to $2
train_model() {
  ./mallet-2.0.7/bin/mallet import-dir \
    --input mallet-2.0.7/sample-data/himym \
    --output ${2} \
    --keep-sequence \
    --remove-stopwords \
    --extra-stopwords ${1}
}
 
# Train $1 topics against corpus $2 and label the output files with $3
extract_topics() {
  ./mallet-2.0.7/bin/mallet train-topics \
    --input ${2} --num-topics ${1} \
    --optimize-interval 20 \
    --output-state himym-topic-state.gz \
    --output-topic-keys output/himym_${1}_${3}_keys.txt \
    --output-doc-topics output/himym_${1}_${3}_composition.txt
}
 
train_model "stop_words.txt" "output/himym.mallet"
train_model "main-words-stop.txt" "output/himym.main.words.stop.mallet"
 
extract_topics 10 "output/himym.mallet" "all.stop.words"
extract_topics 15 "output/himym.mallet" "all.stop.words"
extract_topics 20 "output/himym.mallet" "all.stop.words"
extract_topics 30 "output/himym.mallet" "all.stop.words"
 
extract_topics 10 "output/himym.main.words.stop.mallet" "main.stop.words"
extract_topics 15 "output/himym.main.words.stop.mallet" "main.stop.words"
extract_topics 20 "output/himym.main.words.stop.mallet" "main.stop.words"
extract_topics 30 "output/himym.main.words.stop.mallet" "main.stop.words"

As you can see, this script first generates a couple of models from text files in ‘mallet-2.0.7/sample-data/himym’ – there is one file per episode of HIMYM. We then use those models to generate differently sized topic models.

The output is two files: one containing a list of topics, and another describing what percentage of the words in each document come from each topic.

$ cat output/himym_10_all.stop.words_keys.txt
 
0	0.08929	back brad natalie loretta monkey show call classroom mitch put brunch betty give shelly tyler interview cigarette mc laren
1	0.05256	zoey jerry arthur back randy arcadian gael simon blauman blitz call boats becky appartment amy gary made steve boat
2	0.06338	back claudia trudy doug int abby call carl stuart voix rachel stacy jenkins cindy vo katie waitress holly front
3	0.06792	tony wendy royce back jersey jed waitress bluntly lucy made subtitle film curt mosley put laura baggage officer bell
4	0.21609	back give patrice put find show made bilson nick call sam shannon appartment fire robots top basketball wrestlers jinx
5	0.07385	blah bob back thanksgiving ericksen maggie judy pj valentine amanda made call mickey marcus give put dishes juice int
6	0.04638	druthers karen back jen punchy jeanette lewis show jim give pr dah made cougar call jessica sparkles find glitter
7	0.05751	nora mike pete scooter back magazine tiffany cootes garrison kevin halloween henrietta pumpkin slutty made call bottles gruber give
8	0.07321	ranjit back sandy mary burger call find mall moby heather give goat truck made put duck found stangel penelope
9	0.31692	back give call made find put move found quinn part ten original side ellen chicago italy locket mine show
$ head -n 10 output/himym_10_all.stop.words_composition.txt
#doc name topic proportion ...
0	file:/Users/markneedham/projects/mallet/mallet-2.0.7/sample-data/himym/1.txt	0	0.70961794636687	9	0.1294699168584466	8	0.07950442338871108	2	0.07192178481473664	4	0.008360809510263838	5	2.7862560133367015E-4	3	2.562409242784946E-4	7	2.1697378721335337E-4	1	1.982849604752168E-4	6	1.749937876710496E-4
1	file:/Users/markneedham/projects/mallet/mallet-2.0.7/sample-data/himym/10.txt	2	0.9811551470820473	9	0.016716882136209997	4	6.794128563082893E-4	0	2.807350575301132E-4	5	2.3219634098530471E-4	8	2.3018997315244256E-4	3	2.1354177341696056E-4	7	1.8081798384467614E-4	1	1.6524340216541808E-4	6	1.4583339433951297E-4
2	file:/Users/markneedham/projects/mallet/mallet-2.0.7/sample-data/himym/100.txt	2	0.724061485807234	4	0.13624729774423758	0	0.13546964196228636	9	0.0019436342339785994	5	4.5291919356563914E-4	8	4.490055982996677E-4	3	4.1653183421485213E-4	7	3.5270123154213927E-4	1	3.2232165301666123E-4	6	2.8446074162457316E-4
3	file:/Users/markneedham/projects/mallet/mallet-2.0.7/sample-data/himym/101.txt	2	0.7815231689893246	0	0.14798271520316794	9	0.023582384458063092	8	0.022251052243582908	1	0.022138209217973336	4	0.0011804626661380394	5	4.0343527385745457E-4	3	3.7102343418895774E-4	7	3.1416667687862693E-4	6	2.533818368250992E-
4	file:/Users/markneedham/projects/mallet/mallet-2.0.7/sample-data/himym/102.txt	6	0.6448245189567259	4	0.18612146979166502	3	0.16624873439661025	9	0.0012233726722317548	0	3.4467218590717303E-4	5	2.850788252495599E-4	8	2.8261550915084904E-4	2	2.446611421432842E-4	7	2.2199909869250053E-4	1	2.028774216237081E-
5	file:/Users/markneedham/projects/mallet/mallet-2.0.7/sample-data/himym/103.txt	8	0.7531586740033047	5	0.17839539108961253	0	0.06512376460651902	9	0.001282794040111701	4	8.746645156304241E-4	3	2.749100345664577E-4	2	2.5654476523149865E-4	7	2.327819863700214E-4	1	2.1273153572848481E-4	6	1.8774342292520802E-4
6	file:/Users/markneedham/projects/mallet/mallet-2.0.7/sample-data/himym/104.txt	7	0.9489502365148181	8	0.030091466847852504	4	0.017936457663121977	9	0.0013482824985091328	0	3.7986419553884905E-4	5	3.141861834124008E-4	3	2.889445824352445E-4	2	2.6964174000656E-4	1	2.2359178288566958E-4	6	1.9732799141958482E-4
7	file:/Users/markneedham/projects/mallet/mallet-2.0.7/sample-data/himym/105.txt	8	0.7339694064061175	7	0.1237041841318045	9	0.11889696041555338	0	0.02005288536233353	4	0.0014026751618923005	5	4.793786828705149E-4	3	4.408655780020889E-4	2	4.1141370625324785E-4	1	3.411516484151411E-4	6	3.0107890675777946E-4
8	file:/Users/markneedham/projects/mallet/mallet-2.0.7/sample-data/himym/106.txt	5	0.37064909999661005	9	0.3613559917055785	0	0.14857567731040344	6	0.09545466082502917	4	0.022300625744661403	8	3.8725629469313333E-4	3	3.592484711785775E-4	2	3.3524900189121E-4	7	3.041961449432886E-4	1	2.779945050112539E-4

The output is a bit tricky to understand on its own, so I did a bit of post-processing using pandas and then ran the results through matplotlib to see the distribution of documents for different topic sizes with different stop words. You can see the script here.

I ended up with the following chart:

[Chart: document/topic distributions for each topic count and stop word variation]

On the left-hand side we’re using more stop words and on the right just the main ones. For most of the variations there are one or two topics which most documents belong to, but interestingly the most uniform distribution seems to be when we have few topics.

These are the main words for the most popular topics on the left hand side:

15 topics

8       0.50732 back give call made put find found part move show side ten mine top abby front fire full fianc

20 topics

12      0.61545 back give call made put find show found part move side mine top front ten full cry fire fianc

30 topics

22      0.713   back call give made put find show part found side move front ten full top mine fire cry bottom

All contain more or less the same words, which at first glance seem like quite generic words, so I’m surprised they weren’t excluded.

On the right-hand side we haven’t removed many words, so we’d expect common words in the English language to dominate. Let’s see if they do:

10 topics

1       3.79451 don yeah ll hey ve back time guys good gonna love god night wait uh thing guy great make

15 topics

5       2.81543 good time love ll great man guy ve night make girl day back wait god life yeah years thing
 
10      1.52295 don yeah hey gonna uh guys didn back ve ll um kids give wow doesn thing totally god fine

20 topics

1       3.06732 good time love wait great man make day back ve god life years thought big give apartment people work
 
13      1.68795 don yeah hey gonna ll uh guys night didn back ve girl um kids wow guy kind thing baby

30 topics

14      1.42509 don yeah hey gonna uh guys didn back ve um thing ll kids wow time doesn totally kind wasn
 
24      2.19053 guy love man girl wait god ll back great yeah day call night people guys years home room phone
 
29      1.84685 good make ve ll stop time made nice put feel love friends big long talk baby thought things happy

Again we have similar words across each run and as expected they are all quite generic words.

My takeaway from this exploration is that I should vary the stop word percentages as well and see if that leads to an improved distribution.

Taking out very common words, as we do in the left-hand side charts, seems to make sense, although I need to work out why there’s a single outlier in each group.

The authors suggest that having the majority of our texts in a small number of topics is a signal that we need to create more of them, so I will investigate that too.

The code is all on github along with the transcripts so give it a try and let me know what you think.

Categories: Programming

Two Programmers Turned WordPress Entrepreneurs Profiled

Making the Complex Simple - John Sonmez - Tue, 03/24/2015 - 20:18

I’ve been involved with the WordPress community quite a bit, since I’ve launched my “How to Create a Blog to Boost Your Career” course. As I result, I’ve been very interested in what in happening in the WordPress development world. I’ve gotten a few requests for information about WordPress development, but I’m really not an […]

The post Two Programmers Turned WordPress Entrepreneurs Profiled appeared first on Simple Programmer.

Categories: Programming

Announcing the new Azure App Service

ScottGu's Blog - Scott Guthrie - Tue, 03/24/2015 - 15:23

In a mobile-first, cloud-first world, every business needs to deliver great mobile and web experiences that engage and connect with its customers, and which enable its employees to be even more productive.  These apps need to work with any device, and to be able to consume and integrate with data anywhere.

I'm excited to announce the release of our new Azure App Service today - which provides a powerful new offering to deliver these solutions.  Azure App Service is an integrated service that enables you to create web and mobile apps for any platform or device, easily integrate with SaaS solutions (Office 365, Dynamics CRM, Salesforce, Twilio, etc), easily connect with on-premises applications (SAP, Oracle, Siebel, etc), and easily automate business processes while meeting stringent security, reliability, and scalability needs.

Azure App Service includes the Web App + Mobile App capabilities that we previously delivered separately (as Azure Websites + Azure Mobile Services).  It also includes powerful new Logic/Workflow App and API App capabilities that we are introducing today for the very first time - along with built-in connectors that make it super easy to build logic workflows that integrate with dozens of popular SaaS and on-premises applications (Office 365, SalesForce, Dynamics, OneDrive, Box, DropBox, Twilio, Twitter, Facebook, Marketo, and more). 

All of these features can be used together at one low price.  In fact, the new Azure App Service pricing is exactly the same price as our previous Azure Websites offering.  If you are familiar with our Websites service you now get all of the features it previously supported, plus additional new mobile support, plus additional new workflow support, plus additional new connectors to dozens of SaaS and on-premises solutions at no extra charge.

Web + Mobile + Logic + API Apps

Azure App Service enables you to easily create Web + Mobile + Logic + API Apps:

[Diagram: Web, Mobile, Logic and API app types in Azure App Service]

You can run any number of these app types within a single Azure App Service deployment.  Your apps are automatically managed by Azure App Service and run in managed VMs isolated from other customers (meaning you don't have to worry about your app running in the same VM as another customer's).  You can use the built-in AutoScaling support within Azure App Service to automatically increase and decrease the number of VMs that your apps use based on their actual resource consumption. 

This provides an incredibly cost-effective way to build and run highly scalable apps that provide both Web and Mobile experiences, and which contain automated business processes that integrate with a wide variety of apps and data sources.

Below are additional details on the different app types supported by Azure App Service.  Azure App Service is generally available starting today for Web apps, with the Mobile, Logic and API app types available in public preview:

Web Apps

The Web App support within Azure App Service includes 100% of the capabilities previously supported by Azure Websites.  These include:

  • Support for .NET, Node.js, Java, PHP, and Python code
  • Built-in AutoScale support (automatically scale up/down based on real-world load)
  • Integrated Visual Studio publishing as well as FTP publishing
  • Continuous Integration/Deployment support with Visual Studio Online, GitHub, and BitBucket
  • Virtual networking support and hybrid connections to on-premises networks and databases
  • Staged deployment and test in production support
  • WebJob support for long running background tasks

Customers who have previously deployed an app using the Azure Websites service will notice today that these apps are now called "Web Apps" within the Azure management portals.  You can continue to run these apps exactly as before - or optionally now also add mobile + logic + API app support to your solution without having to pay anything more.

Mobile Apps

The Mobile App support within Azure App Service provides the core capabilities we previously delivered using Azure Mobile Services.  It also includes several new enhancements that we are introducing today including:

  • Built-in AutoScale support (automatically scale up/down based on real-world load)
  • Traffic Manager support (geographically scale your apps around the world)
  • Continuous Integration/Deployment support with Visual Studio Online, GitHub, and BitBucket
  • Virtual networking support and hybrid connections to on-premises databases
  • Staged deployment and test in production support
  • WebJob support for long running background tasks

Because we have an integrated App Service offering, you can now run both Web and Mobile Apps using a single Azure App Service deployment.  This allows you to avoid having to pay for a separate web and mobile backend - and instead optionally pool your resources to save even more money.

Logic Apps

The Logic App support within Azure App Service is brand new and enables you to automate workflows and business processes.  For example, you could configure a workflow that automatically runs every time your app calls an API, or saves data within a database, or on a timer (e.g. once a minute) - and within your workflows you can do tasks like create/retrieve a record in Dynamics CRM or Salesforce, send an email or SMS message to a sales rep to follow up on, post a message on Facebook or Twitter or Yammer, schedule a meeting/reminder in Office 365, etc. 

Constructing such workflows is now super easy with Azure App Service.  You can define a workflow either declaratively using a JSON file (which you can check in as source code) or using the new Logic/Workflow designer introduced today within the Azure Portal.  For example, below I've used the new Logic designer to configure an automatically recurring workflow that runs every minute, searches Twitter for tweets about Azure, and then automatically sends SMS messages (using Twilio) so that employees can follow up on them:

[Screenshot: Logic App designer showing the recurring Twitter + Twilio workflow]

Creating the above workflow is super easy and takes only a minute or so to do using the new Logic App designer.  Once saved it will automatically run within the same VMs/infrastructure that the Web Apps and Mobile Apps you've built using Azure App Service use as well.  This means you don't have to deploy or pay for anything extra - if you deploy a Web or Mobile App on Azure you can now do all of the above workflow + integration scenarios at no extra cost.

Azure App Service today includes support for the following built-in connectors that you can use to construct and automate your Logic App workflows:

[Image: gallery of built-in Logic App connectors]

Combined, the above connectors provide a super powerful way to build and orchestrate tasks that run and scale within your apps.  You can now build much richer web and mobile apps using them.

Watch this Azure Friday video about Logic Apps with Scott Hanselman and Josh Twist to learn more about how to use it.

API Apps

The API App support within Azure App Service enables you to easily create, consume and call APIs - both APIs you create (using a framework like ASP.NET Web API or the equivalent in other languages) and APIs from other SaaS and cloud providers.

API Apps enable simple access control and credential management within your applications, as well as automatic SDK generation support that enables you to easily expose and integrate APIs across a wide variety of languages.  You can optionally integrate these APIs with Logic Apps.

Getting Started

Getting started with Azure App Service is easy.  Simply sign in to the Azure Preview Portal and click the "New" button in the bottom left of the screen.  Select the "Web + Mobile" sub-menu and you can create Web Apps, Mobile Apps, Logic Apps, and API Apps:

[Screenshot: creating a new Web, Mobile, Logic or API App in the Azure Preview Portal]

You can create any number of Web, Mobile, Logic and API apps and run them on a single Azure App Service deployment at no additional cost. 

Learning More

I'll be hosting a special Azure App Service launch event online on March 24th at 11am PDT which will contain more details about Azure App Service, a great demo from Scott Hanselman, and talks by several customers and analysts about their experiences.  You can watch the online event for free here.

Also check out our new Azure Friday App Service videos with Scott Hanselman that go into detail about all of the new capabilities, and show off how to build Web, Mobile, Logic and API Apps using Azure App Service:

Then visit our documentation center to learn more about the service and how to get started with it today.  Pricing details are available here.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building great web and mobile applications hosted in the cloud even easier.

If you don’t already have a Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

A Highly Available Docker Container Platform using CoreOS and Consul

Xebia Blog - Tue, 03/24/2015 - 11:35

Docker containers are hot, but containers in themselves are not very interesting. They need an ecosystem to make them fit for 24x7 production deployments. Just handing your container names to operations does not cut it.

In this blog post, we will show you how CoreOS can be used to provide a highly available Docker Container Platform as a Service, with a standard way to deploy Docker containers. Consul is added to the mix to create a lightweight HTTP router to any Docker application offering an HTTP service.

We will be killing a few processes and machines along the way to prove our point...

Architecture

The basic architecture of our Docker Container Platform as a Service consists of the following components:

[Diagram: CoreOS container platform architecture]

  • CoreOS cluster
    The CoreOS cluster provides us with a cluster of highly available Docker hosts. CoreOS is an open source lightweight operating system based on the Linux kernel and provides an infrastructure for clustered deployments of applications. The interesting part of CoreOS is that you cannot install applications or packages on CoreOS itself; any custom application has to be packaged and deployed as a Docker container. At the same time, CoreOS provides only basic functionality for managing these applications.
  • Etcd
    etcd is the CoreOS distributed key-value store and provides a reliable mechanism to distribute data through the cluster.
  • Fleet
    Fleet is the cluster-wide init system of CoreOS which allows you to schedule applications to run inside the cluster, and provides the much-needed nanny system for your apps.
  • Consul
    Consul from Hashicorp is a tool that eases service discovery and configuration. Consul allows services to be discovered via DNS and HTTP and provides us with the ability to respond to changes in the service registration.
  • Registrator
    The Registrator from Gliderlabs will automatically register and deregister any Docker container as a service in Consul. The registrator runs on each Docker Host.
  • HttpRouter
    Will dynamically route HTTP traffic to any application providing an HTTP service, running anywhere in the cluster.  It listens on port 80.
  • Load Balancer
    An external load balancer which routes the HTTP traffic to any of the CoreOS nodes listening on port 80.
  • Apps
    These are the actual applications that may advertise HTTP services to be discovered and accessed. These will be provided by you.

 

Getting Started

In order to get your own container platform as a service running, we have created an Amazon AWS CloudFormation file which installs the basic services: Consul, Registrator, HttpRouter and the load balancer.

In the infrastructure we create two autoscaling groups: one for the Consul servers, which is limited to 3 to 5 machines, and one for the Consul clients, which is basically unlimited and depends on your needs.

The nice thing about the autoscaling group is that it will automatically launch a new machine if the number of machines drops below the minimum or desired number.  This adds robustness to the platform.

The Amazon Elastic Load Balancer balances incoming traffic to port 80 on any machine in either autoscaling group.

We created a little script that creates your CoreOS cluster. The script requires that you are running MacOS and have the necessary tools installed.

In addition, the CloudFormation file assumes that you have a Route53 HostedZone in which we can add records for your domain. The script may work on other platforms, but I did not test that.

 

git clone https://github.com/mvanholsteijn/coreos-container-platform-as-a-service
cd coreos-container-platform-as-a-service
./bin/create-stack.sh -d cargonauts.dutchdevops.net

...
{
"StackId": "arn:aws:cloudformation:us-west-2:233211978703:stack/cargonautsdutchdevopsnet/b4c802f0-d1ff-11e4-9c9c-5088484a585d"
}
INFO: create in progress. sleeping 15 seconds...
INFO: create in progress. sleeping 15 seconds...
INFO: create in progress. sleeping 15 seconds...
INFO: create in progress. sleeping 15 seconds...
INFO: create in progress. sleeping 15 seconds...
INFO: create in progress. sleeping 15 seconds...
INFO: create in progress. sleeping 15 seconds...
INFO: create in progress. sleeping 15 seconds...
INFO: create in progress. sleeping 15 seconds...
INFO: create in progress. sleeping 15 seconds...
INFO: create in progress. sleeping 15 seconds...
CoreOSServerAutoScale 54.185.55.139 10.230.14.39
CoreOSServerAutoScaleConsulServer 54.185.125.143 10.230.14.83
CoreOSServerAutoScaleConsulServer 54.203.141.124 10.221.12.109
CoreOSServerAutoScaleConsulServer 54.71.7.35 10.237.157.117

Now you are ready to look around. Use one of the external IP addresses to set up a tunnel for fleetctl.

export FLEETCTL_TUNNEL=54.203.141.124

fleetctl is the command line utility that allows you to manage the units that you deploy on CoreOS.


fleetctl list-machines
....
MACHINE		IP		METADATA
1cdadb87...	10.230.14.83	consul_role=server,region=us-west-2
2dde0d31...	10.221.12.109	consul_role=server,region=us-west-2
7f1f2982...	10.230.14.39	consul_role=client,region=us-west-2
f7257c36...	10.237.157.117	consul_role=server,region=us-west-2

will list all the machines in the platform with their private IP addresses and roles. As you can see, we have tagged 3 machines with the consul server role and 1 machine with the consul client role. To see all the Docker containers that have been started on the individual machines, you can run the following script:

for machine in $(fleetctl list-machines -fields=machine -no-legend -full) ; do
   fleetctl ssh $machine docker ps
done
...
CONTAINER ID        IMAGE                                  COMMAND                CREATED             STATUS              PORTS                                                                                                                                                                                                                                      NAMES
ccd08e8b672f        cargonauts/consul-http-router:latest   "/consul-template -c   6 minutes ago       Up 6 minutes        10.221.12.109:80->80/tcp                                                                                                                                                                                                                   consul-http-router
c36a901902ca        progrium/registrator:latest            "/bin/registrator co   7 minutes ago       Up 7 minutes                                                                                                                                                                                                                                                   registrator
fd69ac671f2a        progrium/consul:latest                 "/bin/start -server    7 minutes ago       Up 7 minutes        172.17.42.1:53->53/udp, 10.221.12.109:8300->8300/tcp, 10.221.12.109:8301->8301/tcp, 10.221.12.109:8301->8301/udp, 10.221.12.109:8302->8302/udp, 10.221.12.109:8302->8302/tcp, 10.221.12.109:8400->8400/tcp, 10.221.12.109:8500->8500/tcp   consul
....

To inspect the Consul console, you first need to set up a tunnel to port 8500 on a server node in the cluster:

ssh-add stacks/cargonautsdutchdevopsnet/cargonauts.pem
ssh -A -L 8500:10.230.14.83:8500 core@54.185.125.143
open http://localhost:8500

Consul Console

You will now see that there are two services registered: consul and consul-http-router. Consul registers itself, and the HTTP router was detected and registered by the Registrator on all 4 machines.

Deploying an application

Now we can deploy an application, and we have a wonderful app to do so: the paas-monitor. It is a simple web application which continuously gets the status of a backend service and shows who is responding in a table.

In order to deploy this application we have to create a fleet unit file, which is basically a systemd unit file. It describes all the commands that need to be executed to manage the life cycle of a unit. The paas-monitor unit file looks like this:


[Unit]
Description=paas-monitor

[Service]
Restart=always
RestartSec=15
ExecStartPre=-/usr/bin/docker kill paas-monitor-%i
ExecStartPre=-/usr/bin/docker rm paas-monitor-%i
ExecStart=/usr/bin/docker run --rm --name paas-monitor-%i --env SERVICE_NAME=paas-monitor --env SERVICE_TAGS=http -P --dns 172.17.42.1 --dns-search=service.consul mvanholsteijn/paas-monitor
ExecStop=/usr/bin/docker stop paas-monitor-%i

It states that this unit should always be restarted, with a 15-second interval. Before it starts, it stops and removes the previous container (ignoring any errors), and when it starts, it runs the Docker container non-detached, which allows systemd to detect that the process has stopped. Finally, there is also a stop command.

The file also contains %i: this makes it a template file, which means that multiple instances of the unit can be started.

In the environment settings of the Docker container, hints for the Registrator are set. The environment variable SERVICE_NAME indicates the name under which it would like to be registered in Consul, and SERVICE_TAGS indicates which tags should be attached to the service. These tags allow you to select the 'http' services in a domain or even from a single container.

If the container were to expose more ports, for instance 8080 and 8081 for HTTP and administrative traffic, you could specify per-port environment variables:

SERVICE_8080_NAME=paas-monitor
SERVICE_8080_TAGS=http
SERVICE_8081_NAME=paas-monitor-admin
SERVICE_8081_TAGS=admin-http

Deploying the file goes in two stages: submitting the template file and starting an instance:

cd fleet-units/paas-monitor
fleetctl submit paas-monitor@.service
fleetctl start paas-monitor@1

Unit paas-monitor@1.service launched on 1cdadb87.../10.230.14.83

Now fleet reports that the unit is launched, but that does not mean it is running. In the background, Docker has to pull the image, which takes a while. You can monitor the progress using fleetctl status.

fleetctl status paas-monitor@1

paas-monitor@1.service - paas-monitor
   Loaded: loaded (/run/fleet/units/paas-monitor@1.service; linked-runtime; vendor preset: disabled)
   Active: active (running) since Tue 2015-03-24 09:01:10 UTC; 2min 48s ago
  Process: 3537 ExecStartPre=/usr/bin/docker rm paas-monitor-%i (code=exited, status=1/FAILURE)
  Process: 3529 ExecStartPre=/usr/bin/docker kill paas-monitor-%i (code=exited, status=1/FAILURE)
 Main PID: 3550 (docker)
   CGroup: /system.slice/system-paas\x2dmonitor.slice/paas-monitor@1.service
           └─3550 /usr/bin/docker run --rm --name paas-monitor-1 --env SERVICE_NAME=paas-monitor --env SERVICE_TAGS=http -P --dns 172.17.42.1 --dns-search=service.consul mvanholsteijn/paas-monitor

Mar 24 09:02:41 ip-10-230-14-83.us-west-2.compute.internal docker[3550]: 85071eb722b3: Pulling fs layer
Mar 24 09:02:43 ip-10-230-14-83.us-west-2.compute.internal docker[3550]: 85071eb722b3: Download complete
Mar 24 09:02:43 ip-10-230-14-83.us-west-2.compute.internal docker[3550]: 53a248434a87: Pulling metadata
Mar 24 09:02:44 ip-10-230-14-83.us-west-2.compute.internal docker[3550]: 53a248434a87: Pulling fs layer
Mar 24 09:02:46 ip-10-230-14-83.us-west-2.compute.internal docker[3550]: 53a248434a87: Download complete
Mar 24 09:02:46 ip-10-230-14-83.us-west-2.compute.internal docker[3550]: b0c42e8f4ac9: Pulling metadata
Mar 24 09:02:47 ip-10-230-14-83.us-west-2.compute.internal docker[3550]: b0c42e8f4ac9: Pulling fs layer
Mar 24 09:02:49 ip-10-230-14-83.us-west-2.compute.internal docker[3550]: b0c42e8f4ac9: Download complete
Mar 24 09:02:49 ip-10-230-14-83.us-west-2.compute.internal docker[3550]: b0c42e8f4ac9: Download complete
Mar 24 09:02:49 ip-10-230-14-83.us-west-2.compute.internal docker[3550]: Status: Downloaded newer image for mvanholsteijn/paas-monitor:latest

Once it is running you can navigate to http://paas-monitor.cargonauts.dutchdevops.net and click on start.

[Screenshot: the paas-monitor web application]

You can now add new instances and watch them appear in the paas-monitor! It definitely takes a while because the Docker images have to be pulled from the registry before they can be started, but in the end they will all appear!

fleetctl start paas-monitor@{2..10}
Unit paas-monitor@2.service launched on 2dde0d31.../10.221.12.109
Unit paas-monitor@4.service launched on f7257c36.../10.237.157.117
Unit paas-monitor@3.service launched on 7f1f2982.../10.230.14.39
Unit paas-monitor@6.service launched on 2dde0d31.../10.221.12.109
Unit paas-monitor@5.service launched on 1cdadb87.../10.230.14.83
Unit paas-monitor@8.service launched on f7257c36.../10.237.157.117
Unit paas-monitor@9.service launched on 1cdadb87.../10.230.14.83
Unit paas-monitor@7.service launched on 7f1f2982.../10.230.14.39
Unit paas-monitor@10.service launched on 2dde0d31.../10.221.12.109

To see all deployed units, use the list-units command:

fleetctl list-units
...
UNIT MACHINE ACTIVE SUB
paas-monitor@1.service 94d16ece.../10.90.9.78 active running
paas-monitor@2.service f7257c36.../10.237.157.117 active running
paas-monitor@3.service 7f1f2982.../10.230.14.39 active running
paas-monitor@4.service 94d16ece.../10.90.9.78 active running
paas-monitor@5.service f7257c36.../10.237.157.117 active running
paas-monitor@6.service 7f1f2982.../10.230.14.39 active running
paas-monitor@7.service 7f1f2982.../10.230.14.39 active running
paas-monitor@8.service 94d16ece.../10.90.9.78 active running
paas-monitor@9.service f7257c36.../10.237.157.117 active running

How does it work?

Whenever there is a change to the Consul service registry, the consul-http-router is notified; it selects all http-tagged services and generates a new nginx.conf. After the configuration is generated, it is reloaded by nginx so that there is little impact on the current traffic.

The consul-http-router uses the Go template language to regenerate the config. It looks like this:

events {
    worker_connections 1024;
}

http {
{{range $index, $service := services}}{{range $tag, $services := service $service.Name | byTag}}{{if eq "http" $tag}}

    upstream {{$service.Name}} {
	least_conn;
	{{range $services}}server {{.Address}}:{{.Port}} max_fails=3 fail_timeout=60 weight=1;
	{{end}}
    }
{{end}}{{end}}{{end}}

{{range $index, $service := services}}{{range $tag, $services := service $service.Name | byTag}}{{if eq "http" $tag}}
    server {
	listen 		80;
	server_name 	{{$service.Name}}.*;

	location / {
	    proxy_pass 		http://{{$service.Name}};
	    proxy_set_header 	X-Forwarded-Host	$host;
	    proxy_set_header 	X-Forwarded-For 	$proxy_add_x_forwarded_for;
	    proxy_set_header 	Host 			$host;
	    proxy_set_header 	X-Real-IP 		$remote_addr;
	}
    }
{{end}}{{end}}{{end}}

    server {
	listen		80 default_server;

	location / {
	    root /www;
	    index index.html index.htm Default.htm;
	}
    }
}

It loops through all the services, selects the services tagged 'http', and creates a virtual host for servicename.* which sends all requests to the registered upstream services. Using the following two commands you can see the current configuration file:

AMACHINE=$(fleetctl list-machines -fields=machine -no-legend -full | head -1)
fleetctl ssh $AMACHINE docker exec consul-http-router cat /etc/nginx/nginx.conf
...
events {
    worker_connections 1024;
}

http {

    upstream paas-monitor {
	least_conn;
	server 10.221.12.109:49154 max_fails=3 fail_timeout=60 weight=1;
	server 10.221.12.109:49153 max_fails=3 fail_timeout=60 weight=1;
	server 10.221.12.109:49155 max_fails=3 fail_timeout=60 weight=1;
	server 10.230.14.39:49153 max_fails=3 fail_timeout=60 weight=1;
	server 10.230.14.39:49154 max_fails=3 fail_timeout=60 weight=1;
	server 10.230.14.83:49153 max_fails=3 fail_timeout=60 weight=1;
	server 10.230.14.83:49154 max_fails=3 fail_timeout=60 weight=1;
	server 10.230.14.83:49155 max_fails=3 fail_timeout=60 weight=1;
	server 10.237.157.117:49153 max_fails=3 fail_timeout=60 weight=1;
	server 10.237.157.117:49154 max_fails=3 fail_timeout=60 weight=1;

    }

    server {
	listen 		80;
	server_name 	paas-monitor.*;

	location / {
	    proxy_pass 		http://paas-monitor;
	    proxy_set_header 	X-Forwarded-Host	$host;
	    proxy_set_header 	X-Forwarded-For 	$proxy_add_x_forwarded_for;
	    proxy_set_header 	Host 			$host;
	    proxy_set_header 	X-Real-IP 		$remote_addr;
	}
    }

    server {
	listen		80 default_server;

	location / {
	    root /www;
	    index index.html index.htm Default.htm;
	}
    }
}

This also happens when you stop or kill an instance. Just destroy one and watch your monitor respond.

fleetctl destroy paas-monitor@10
...
Destroyed paas-monitor@10.service

Killing a machine

Now let's be totally brave and stop an entire machine!

ssh core@$FLEETCTL_TUNNEL sudo shutdown -h now
...
Connection to 54.203.141.124 closed by remote host.

Keep watching your paas-monitor. You will notice a slow down and also notice that a number of backend services are no longer responding. After a short while (1 or 2 minutes) you will see new instances appear in the list.
[Screenshot: paas-monitor after the machine restart]
What happened is that Amazon AWS launched a new instance into the cluster, and all units that were running on the stopped node were moved to the remaining running instances, with only 6 HTTP errors!

Please note that CoreOS is not capable of automatically recovering from the loss of a majority of the servers at the same time. In that case, manual recovery by operations is required.

Conclusion

CoreOS provides all the basic functionality to manage Docker containers and provides high availability for your applications with a minimum of fuss. Consul and consul-template actually make it very easy to use standard components like nginx to implement dynamic service discovery.

Outlook

In the next blog we will be deploying a multi-tier application that uses Consul DNS to connect the application parts to their databases!

References

This blog is based on information, ideas and source code snippets from https://coreos.com/docs/running-coreos/cloud-providers/ec2, http://cargonauts.io/mitchellh-auto-dc and https://github.com/justinclayton/coreos-and-consul-cluster-via-terraform.