
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Programming

Take your apps on the road with Android Auto

Android Developers Blog - Thu, 03/19/2015 - 19:17

Posted by Wayne Piekarski, Developer Advocate

Starting today, anyone can take their apps for a drive with Android Auto using Android 5.0+ devices, connected to compatible cars and aftermarket head units. Android Auto lets you easily extend your apps to the car in an efficient way for drivers, allowing them to stay connected while still keeping their hands on the wheel and their eyes on the road. When users connect their phone to a compatible vehicle, they will see an Android experience optimized for the head unit display that seamlessly integrates voice input, touch screen controls, and steering wheel buttons. Moreover, Android Auto provides consistent UX guidelines to ensure that developers are able to create great experiences across many diverse manufacturers and vehicle models, with a single application available on Google Play.

With the availability of the Pioneer AVIC-8100NEX, AVIC-7100NEX, and AVH-4100NEX aftermarket systems in the US, the AVIC-F77DAB, AVIC-F70DAB, and AVH-X8700BT in the UK, and the AVIC-F70DAB and AVH-X8750BT in Australia, it is now possible to add Android Auto to many cars already on the road. As a developer, you now have a way to test your apps in a realistic environment. These are just the first Android Auto devices to launch, and vehicles from major auto manufacturers with integrated Android Auto support are coming soon.

With the increasing adoption of Android Auto by manufacturers, your users are going to expect support for their apps in the car, so now is a good time to get started with development. If you are new to Android Auto, check out our DevByte video, which explains more about how this works, along with some live demos.

The SDK for Android Auto was made available to developers a few months ago, and now Google Play is ready to accept your application updates. Your existing apps can take advantage of all these cool new Android Auto features with just a few small changes. You’ll need to add Android Auto support to your application, and then agree to the Android Auto terms in the Pricing & Distribution category in the Google Play Developer Console. Once the application is approved, it will be made available as an update to your users, and shown on the car’s display.

Adding support for Android Auto is easy. We have created an extensive set of documentation to help you add support for messaging (sample) and audio playback (sample). There are also short introductory DevByte videos for messaging and audio. Stay tuned for a series of posts coming up soon discussing these APIs in more detail and how to work with them. We also have simulators to help you test your applications right at your desk during development.

With the launch of Android Auto, a new set of possibilities is available for you to create even more amazing experiences for your users, providing them the right information for the road ahead. Come join the discussion about Android Auto on Google+ at http://g.co/androidautodev where you can share ideas and ask questions with other developers.

Categories: Programming

Quitting A Job You Just Started

Making the Complex Simple - John Sonmez - Thu, 03/19/2015 - 15:00

In this video I respond to an email asking about how to handle quitting a job that you just started. Keep in mind that you should never quit without having something else lined up.

The post Quitting A Job You Just Started appeared first on Simple Programmer.

Categories: Programming

Dependency injection with PostSharp

Actively Lazy - Wed, 03/18/2015 - 22:21

I don’t really like IoC containers. Or rather, I don’t like the crappy code people write when they’re given an IoC container. Before you know it you have NounVerbers everywhere, a million dependencies and no decent domain model. Dependencies should really be external to your application: everything outside of the core domain model that your application represents.

  • A web service? That’s a dependency
  • A database? Yup
  • A message queue? Definitely a dependency
  • A scheduler or thread pool? Yup
  • Any NounVerber (PriceCalculator, StockFetcher, BasketFactory, VatCalculator)? No! Not a dependency. Stop it. They’re all part of your core business domain and are actually methods on a class. If you can’t write Price.Calculate() or Stock.Fetch() or new Basket() or Vat.Calculate() then fix your domain model first before you go hurting yourself with an IoC container

A while back I described a very simple, hand-rolled approach to dependency injection. But if we wave a little PostSharp magic we can improve on that basic idea. All the source code for this is available on github.

It works like this: if we have a dependency, say an AuthService, we declare an interface that business objects can implement to request that they have the dependency injected into them. In this case, IRequireAuthService.

class User : IRequireAuthService
{
  public IAuthService AuthService { set; private get; }
}

We create a DependencyInjector that can set these properties:

public void InjectDependencies(object instance)
{
  if (instance is IRequireAuthService)
    ((IRequireAuthService)instance).AuthService = AuthService;
  ...
}

This might not be the prettiest method – you’ll end up with an if…is IRequire… line for each dependency you can inject. But this provides a certain amount of friction. While it is easy to add new dependencies, developers are discouraged from doing it. This small amount of friction massively limits the unchecked growth of dependencies so prevalent with IoC containers. This friction is why I prefer the hand-rolled approach to off-the-shelf IoC containers.

So how do we trigger the dependency injector to do what it has to do? This is where some PostSharp magic comes in. We declare an attribute to use on the constructor:

  [InjectDependencies]
  public User(string id)

Via the magic of PostSharp aspect weaving this attribute causes some code to be executed before the constructor. This attribute is simply defined as:

[Serializable]
public class InjectDependenciesAttribute : OnMethodBoundaryAspect
{
  public sealed override void OnEntry(MethodExecutionArgs args)
  {
    DependencyInjector.CurrentInjector.InjectDependencies(args.Instance);
    base.OnEntry(args);
  }
}

And that’s it – PostSharp weaves this method before each constructor with the [InjectDependencies] attribute. We get the current dependency injector and pass in the object instance (i.e. the newly created User instance) to have dependencies injected into it. Just like that we have a very simple dependency injector. Even better all this aspect weaving magic is available with the express (free!) edition of PostSharp.

Taking it Further

There are a couple of obvious extensions to this. You can create a TestDependencyInjector so that your unit tests can provide their own (mock) implementations of dependencies. This can also include standard (stub) implementations of some dependencies. E.g. a dependency that manages cross-thread scheduling can be replaced by an immediate (synchronous) implementation for unit tests to ensure that unit tests are single-threaded and repeatable.
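As a rough sketch of that idea (assuming InjectDependencies is made virtual; the TestDependencyInjector class and its stub property are hypothetical names, not from the original code):

class TestDependencyInjector : DependencyInjector
{
  // Unit tests assign a mock implementation before constructing business objects
  public IAuthService StubAuthService { get; set; }

  public override void InjectDependencies(object instance)
  {
    if (instance is IRequireAuthService)
      ((IRequireAuthService)instance).AuthService = StubAuthService;
  }
}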

Secondly, the DependencyInjector uses a ThreadLocal to store the current dependency injector. If you use background threads and want dependency injection to work there, you need a way of pushing the dependency injector onto the background thread. This generally means wrapping thread spawning code (which will itself be a dependency). You’ll want to wrap any threading code anyway to make it unit-testable.
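A minimal sketch of that wrapping, assuming the injector exposes a way to set the current instance for a thread (SetCurrentInjector is a hypothetical method, not from the original code):

static Thread StartWithInjector(Action action)
{
  // Capture the injector on the spawning thread...
  var injector = DependencyInjector.CurrentInjector;
  var thread = new Thread(() =>
  {
    // ...and push it onto the background thread before any work runs
    DependencyInjector.SetCurrentInjector(injector);
    action();
  });
  thread.Start();
  return thread;
}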

Compile Time Checks

Finally, the most common failure mode we encountered with this was people forgetting to put [InjectDependencies] on the constructor. This means you get nulls at runtime, instead of dependencies. With a bit more PostSharp magic (this brand of magic requires the paid-for version, though) we can stop that, too. First, we change each IRequire to use a new attribute that indicates it manages injection of a dependency:

[Dependency]
public interface IRequireAuthService
{
  IAuthService AuthService { set; }
}

We configure this attribute to be inherited to all implementation classes – so all business objects that require auth service get the behaviour – then we define a compile time check to verify that the constructors have [InjectDependencies] defined:

public override bool CompileTimeValidate(System.Reflection.MethodBase method)
{
  if (!method.CustomAttributes.Any(a => a.AttributeType == typeof(InjectDependenciesAttribute)))
  {
    Message.Write(SeverityType.Error, "InjectDependencies", "No [InjectDependencies] declared on " + method.DeclaringType.FullName + "." + method.Name, method);
    return false;
  }
  return base.CompileTimeValidate(method);
}

This compile time check now makes the build fail if I ever declare a class implementing IRequireAuthService without adding [InjectDependencies] to the class’s constructor.

Simple, hand-rolled dependency injection with compile time validation thanks to PostSharp!

 


Categories: Programming, Testing & QA

Episode 223: Joram Barrez on the Activiti Business Process Management Platform

Josh Long talks to Activiti cofounder Joram Barrez about the wide world of (open source) workflow engines, the Activiti BPMN2 engine, and what workflow implies when you’re building process-driven applications and services. Joram was originally a contributor to the jBPM project with jBPM founder Tom Baeyens at Red Hat. He cofounded Activiti in 2010 at […]
Categories: Programming

Android Developer Story: Outfit7 — Building an entertainment company with Google

Android Developers Blog - Wed, 03/18/2015 - 20:15

Posted by Leticia Lago, Google Play team

Outfit7, creators of My Talking Tom and My Talking Angela, recently announced they’ve achieved 2.5 billion app downloads across their portfolio. The company now offers a complete entertainment experience to users spanning mobile apps, user generated and original YouTube content, and a range of toys, clothing, and accessories. They even have a silver screen project underway.

We caught up with Iza Login, Rok Zorko and Marko Štamcar, some of the co-founders, in Ljubljana, Slovenia, to learn best practices that helped them in reaching this milestone.

To learn about some of the Google and Google Play features used by Outfit7 to create their successful business, check out these resources:

  • Monetization — explore the options available for generating revenue from your apps and games.
  • Monetization with AdMob — learn how you can maximize your ad revenue.
  • YouTube for Developers — whether you’re building a business on YouTube or want to enhance your app with video, a rich set of YouTube APIs can bring your products to life.
Categories: Programming

Microservices: coupling vs. autonomy

Xebia Blog - Wed, 03/18/2015 - 14:35

Microservices are the latest architectural style promising to resolve all the issues we had with previous architectural styles. And just like other styles, it has its own challenges. The challenge discussed in this blog is how to realise coupling between microservices while keeping the services as autonomous as possible. Four options will be described and a clear winner will be selected in the conclusion.

To me microservices are autonomous services that take full responsibility for one business capability. Full responsibility includes presentation, API, data storage and business logic. Autonomous is the keyword for me: by making the services autonomous, they can be changed with no or minimal impact on the others. If services are autonomous, then operational issues in one service should have no impact on the functionality of other services. That all sounds like a good idea, but services will never be fully isolated islands. A service is virtually always dependent on data provided by another service. For example, imagine a shopping cart microservice as part of a web shop: some other service must put items in the shopping cart, and the shopping cart contents must be provided to yet other services to complete the order and get it shipped. The question now is how to realise these couplings while keeping maximum autonomy. The goal of this blog post is to explain which pattern should be followed to couple microservices while retaining maximum autonomy.


I'm going to structure the patterns along 2 dimensions: the interaction pattern and the information exchanged using this pattern.

Interaction pattern: Request-Reply vs. Publish-Subscribe.

  • Request-Reply means that one service sends a specific request for information (or for some action to be taken). It then expects a response. The requesting service therefore needs to know what to ask and where to ask it. This could still be implemented asynchronously, and of course you could put some abstraction in place such that the requesting service does not have to know the physical address of the other service; the point still remains that one service is explicitly asking for specific information (or an action to be taken) and functionally waiting for a response.
  • Publish-Subscribe: with this pattern a service registers itself as being interested in certain information, or being able to handle certain requests. The relevant information or requests will then be delivered to it and it can decide what to do with it. In this post we'll assume that there is some kind of middleware in place to take care of delivery of the published messages to the subscribed services.

Information exchanged: Events vs. Queries/Commands

  • Events are facts that cannot be argued about. For example, an order with number 123 is created. Events only state what has happened. They do not describe what should happen as a consequence of such an event.
  • Queries/Commands: Both convey what should happen. Queries are a specific request for information, commands are a specific request to the receiving service to take some action.

Putting these two dimensions in a matrix results in 4 options to realise couplings between microservices. So what are the advantages and disadvantages of each option? And which one is the best for reaching maximum autonomy?

In the description below we'll use 2 services to illustrate each pattern: the Order service, which is responsible for managing orders, and the Shipping service, which is responsible for shipping stuff, for example the items included in an order. Services like these could be part of a webshop, which could then also contain services like a shopping cart, a product (search) service, etc.

1. Request-Reply with Events

In this pattern one service asks a specific other service for events that took place (since the last time it asked). This implies a strong dependency between these two services: the Shipping service must know which service to connect to for events related to orders. There is also a runtime dependency, since the Shipping service will only be able to ship new orders if the Order service is available.

Since the Shipping service only receives events it has to decide by itself when an order may be shipped based on information in these events. The Order service does not have to know anything about shipping, it simply provides events stating what happened to orders and leaves the responsibility to act on these events fully to the services requesting the events.

2. Request-Reply with Commands/Queries

In this pattern the Order service requests the Shipping service to ship an order. This implies strong coupling, since the Order service is explicitly requesting a specific service to take care of the shipping, and now the Order service must determine when an order is ready to be shipped. It is aware of the existence of a Shipping service and it even knows how to interact with it. If factors not related to the order itself should be taken into account before shipping the order (e.g. the credit status of the customer), then the Order service should take these into account before requesting the Shipping service to ship the order. Now the business process is baked into the architecture and therefore the architecture cannot be changed easily.

Again there is a runtime dependency since the Order service must ensure that the shipping request is successfully delivered to the Shipping service.

3. Publish-Subscribe with Events

In Publish-Subscribe with Events the Shipping service registers itself as being interested in events related to orders. After registering itself it will receive all events related to orders without being aware of what the source of the order events is; it is loosely coupled to the source of the order events. The Shipping service will need to retain a copy of the data received in the events so that it can conclude when an order is ready to be shipped. The Order service needs no knowledge about shipping. If multiple services provide order related events containing relevant data for the Shipping service, then this is not recognisable by the Shipping service. If (one of) the service(s) providing order events is down, the Shipping service will not be aware; it just receives fewer events. The Shipping service will not be blocked by this.

4. Publish-Subscribe with Commands/Queries

In Publish-Subscribe with Commands/Queries the Shipping service registers itself as a service that is able to ship stuff. It then receives all commands that want to get something shipped. The Shipping service does not have to be aware of the source of the shipping commands, and on the flip side the Order service is not aware of which service will take care of shipping. In that sense they are loosely coupled. However, the Order service is aware of the fact that orders must get shipped, since it is sending out a ship command; this does make the coupling stronger.

Conclusion

Now that we have described the four options we go back to the original question: which of the above 4 patterns provides maximum autonomy?

Both Request-Reply patterns imply a runtime coupling between two services, and that implies strong coupling. Both Commands/Queries patterns imply that one service is aware of what another service should do (in the examples above the Order service is aware that another service takes care of shipping), and that also implies strong coupling, but this time on a functional level. That leaves one option: 3. Publish-Subscribe with Events. In this case the services are not aware of each other's existence from either a runtime or a functional perspective. To me this is the clear winner for achieving maximum autonomy between services.
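To make the winning pattern concrete, here is a minimal in-memory sketch (the event bus, service names and event type are illustrative only; a real system would use messaging middleware for delivery):

from collections import defaultdict

subscribers = defaultdict(list)

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, event):
    # Deliver the published event to every registered handler
    for handler in subscribers[event_type]:
        handler(event)

# The Shipping service registers interest in order events; it never
# learns which service produced them
def ship_when_ready(event):
    print "shipping order %s" % event["order_id"]

subscribe("OrderPaid", ship_when_ready)

# The Order service publishes a fact and knows nothing about shipping
publish("OrderPaid", {"order_id": 123})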

The next question pops up immediately: should you always couple services using Publish-Subscribe with Events? If your only concern is maximum autonomy of services the answer would be yes, but there are more factors that should be taken into account. Always coupling using this pattern comes at a price: data is replicated, measures must be taken to deal with lost events, event-driven architectures add extra requirements on infrastructure, there might be extra latency, and more. In a next post I'll dive into these trade-offs and put things into perspective. For now, remember that Publish-Subscribe with Events is a good basis for achieving autonomy of services.

Reducing the size of Docker Images

Xebia Blog - Wed, 03/18/2015 - 01:00

Using the basic Dockerfile syntax it is quite easy to create a fully functional Docker image. But if you just start adding commands to the Dockerfile, the resulting image can become unnecessarily big. This makes it harder to move the image around.

A few basic actions can reduce this significantly.
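As one example of such an action (a sketch, not taken from the post itself): every Dockerfile instruction creates a layer, and files deleted by a later instruction still occupy space in the earlier layer, so installing and cleaning up in a single RUN keeps the image smaller:

FROM ubuntu:14.04

# Install and clean up in one RUN instruction so the apt caches and
# package lists never get committed into a layer of their own
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*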

Neo4j: Detecting potential typos using EXPLAIN

Mark Needham - Tue, 03/17/2015 - 23:46

I’ve been running a few intro to Neo4j training sessions recently using Neo4j 2.2.0 RC1 and at some stage in every session somebody will make a typo when writing out one of the example queries.

For example, one of the queries that we do about half way through finds the actors and directors who have worked together and aggregates the movies they were in.

This is the correct query:

MATCH (actor:Person)-[:ACTED_IN]->(movie)<-[:DIRECTED]-(director)
RETURN actor.name, director.name, COLLECT(movie.title) AS movies
ORDER BY LENGTH(movies) DESC
LIMIT 5

which should yield the following results:

==> +-----------------------------------------------------------------------------------------------------------------------+
==> | actor.name           | director.name    | movies                                                                      |
==> +-----------------------------------------------------------------------------------------------------------------------+
==> | "Hugo Weaving"       | "Andy Wachowski" | ["Cloud Atlas","The Matrix Revolutions","The Matrix Reloaded","The Matrix"] |
==> | "Hugo Weaving"       | "Lana Wachowski" | ["Cloud Atlas","The Matrix Revolutions","The Matrix Reloaded","The Matrix"] |
==> | "Laurence Fishburne" | "Lana Wachowski" | ["The Matrix Revolutions","The Matrix Reloaded","The Matrix"]               |
==> | "Keanu Reeves"       | "Lana Wachowski" | ["The Matrix Revolutions","The Matrix Reloaded","The Matrix"]               |
==> | "Carrie-Anne Moss"   | "Lana Wachowski" | ["The Matrix Revolutions","The Matrix Reloaded","The Matrix"]               |
==> +-----------------------------------------------------------------------------------------------------------------------+

However, a common typo is to write ‘DIRECTED_IN’ instead of ‘DIRECTED’ in which case we’ll see no results:

MATCH (actor:Person)-[:ACTED_IN]->(movie)<-[:DIRECTED_IN]-(director)
RETURN actor.name, director.name, COLLECT(movie.title) AS movies
ORDER BY LENGTH(movies) DESC
LIMIT 5
 
==> +-------------------------------------+
==> | actor.name | director.name | movies |
==> +-------------------------------------+
==> +-------------------------------------+
==> 0 row

It’s not immediately obvious why we aren’t seeing any results which can be quite frustrating.

However, in Neo4j 2.2 the ‘EXPLAIN’ keyword has been introduced and we can use this to see what the query planner thinks of the query we want to execute without actually executing it.

Instead the planner makes use of knowledge that it has about our schema to come up with a plan that it would run and how much of the graph it thinks that plan would touch:

EXPLAIN MATCH (actor:Person)-[:ACTED_IN]->(movie)<-[:DIRECTED_IN]-(director)
RETURN actor.name, director.name, COLLECT(movie.title) AS movies
ORDER BY LENGTH(movies) DESC
LIMIT 5

[Screenshot: the query plan produced for the DIRECTED_IN query]

The first row of the query plan describes an all nodes scan which tells us that the query will start from the ‘director’ but it’s the second row that’s interesting.

The estimated rows when expanding the ‘DIRECTED_IN’ relationship is 0 when we’d expect it to at least be a positive value if there were some instances of that relationship in the database.

If we compare this to the plan generated when using the proper ‘DIRECTED’ relationship we can see the difference:

[Screenshot: the query plan produced for the DIRECTED query]

Here we see an estimated 44 rows from expanding the ‘DIRECTED’ relationship so we know there are at least some nodes connected by that relationship type.

In summary, if you find your query not returning anything when you expect it to, prefix it with ‘EXPLAIN’ and make sure you’re not seeing the dreaded ‘0 estimated rows’.

Categories: Programming

One month of mini habits

Mark Needham - Tue, 03/17/2015 - 02:32

I recently read a book in the ‘getting things done’ genre written by Stephen Guise titled ‘Mini Habits’ and although I generally don’t like those types of books I quite enjoyed this one and decided to give his system a try.

The underlying idea is that there are two parts of actually doing stuff:

  • Planning what to do
  • Doing it

We often get stuck in between the first and second steps because what we’ve planned to do is too big and overwhelming.

Guise’s approach for overcoming this inaction is to shrink the amount of work to do until it’s small enough that we don’t feel any resistance to getting started.

It should be something that you can do in 1 or 2 minutes – stupidly small – something that you can do even on your worst day when you have no time/energy.

I’m extremely good at procrastinating so I thought I’d give it a try and see if it helped. Guise suggests starting with one or two habits but I had four things that I wanted to do so I’ve ignored that advice for now.

My attempted habits are the following:

  • Read one page of a data science related paper/article a day
  • Read one page of a computer science related paper/article a day
  • Write one line of data science related code a day
  • Write 50 words on the blog a day
Sooooo….has it helped?

In terms of doing each of the habits I’ve been successful so far – today is the 35th day in a row that I’ve managed to do each of them. Having said that, there have been some times when I’ve got back home at 11pm and realised that I haven’t done 2 of the habits and need to quickly do the minimum to ‘tick them off’.

The habit I’ve enjoyed doing the most is writing one line of data science related code a day.

My initial intention was that this was only going to involve writing machine learning code but at the moment I’ve made it a bit more generic so it can include things like the Twitter Graph or other bits and pieces that I want to get started on.

The main problem I’ve had with making progress on mini projects like that is that I imagine its end state and it feels too daunting to start on. Committing to just one line of code a day has been liberating in some way.

One tweak I have made to all the habits is to have some rough goal of where all the daily habits are leading as I noticed that the stuff I was doing each day was becoming very random. Michael pointed me at Amy Hoy’s ‘Guide to doing it backwards’ which describes a neat technique for working back from a goal and determining the small steps required to achieve it.

Writing at least 50 words a day has been beneficial for getting blog posts written. Before the last month I found myself writing most of my posts at the end of the month but I have a more regular cadence now which feels better.

Computer science wise I’ve been picking up papers which have some sort of link to databases to try and learn more of the low level detail there. e.g. I’ve read the LRU-K cache paper which Neo4j 2.2’s page cache is based on and have been flicking through the original CRDTs paper over the last few days.

I also recently came across the Papers We Love repository so I’ll probably work through some of the distributed systems papers they’ve collated next.

Other observations

I’ve found that if I do stuff early in the morning it feels better as you know it’s out of the way and doesn’t linger over you for the rest of the day.

I sometimes find myself wanting to just tick off the habits for the day even when it might be interesting to spend more time on one of the habits. I’m not sure what to make of this really – perhaps I should reduce the number of habits to the ones I’m really interested in?

With the writing it does sometimes feel like I’m just writing for the sake of it but it is a good habit to get into as it forces me to explain what I’m working on and get ideas from other people so I’m going to keep doing it.

I’ve enjoyed my experience with ‘mini habits’ so far although I think I’d be better off focusing on fewer habits so that there’s still enough time in the day to read/learn random spontaneous stuff that doesn’t fit into these habits.

Categories: Programming

Google Summer of Code now open for student applications

Google Code Blog - Mon, 03/16/2015 - 20:06

Posted by Carol Smith, Google Open Source team

Originally posted to the Google Open Source blog

If you’re a university student looking to earn real-world experience this summer, consider writing code for a cool open source project with the Google Summer of Code program.

Students who are accepted into the program will put the skills they have learned in university to good use by working on an actual software project over the summer. Students receive a stipend and are paired with mentors to help address technical questions and concerns throughout the course of the project. With the knowledge and hands-on experience students gain during the summer, they strengthen their future employment opportunities. Best of all, more source code is created and released for the use and benefit of all.

Interested students can submit proposals on the website starting now through Friday, March 27 at 19:00 UTC. Get started by reviewing the ideas pages of the 137 open source projects in this year’s program and decide which projects you’re interested in. Because Google Summer of Code has a limited number of spots for students, writing a great project proposal is essential to being selected for the program — be sure to check out the Student Manual for advice.

For ongoing information throughout the application period and beyond, see the Google Open Source Blog, join our Summer of Code mailing lists or join us on IRC at #gsoc on Freenode.

Good luck to all the open source coders out there, and remember to submit your proposals early — you only have until March 27 to apply!

Categories: Programming

11 Rules All Programmers Should Live By

Making the Complex Simple - John Sonmez - Mon, 03/16/2015 - 15:00

I am a person who tends to live by rules. Now granted, they are mostly rules I set for myself—but they are still rules. I find that creating rules for myself helps me to function better, because I pre-decide things ahead of time instead of making all kinds of decisions on the fly. Should I […]

The post 11 Rules All Programmers Should Live By appeared first on Simple Programmer.

Categories: Programming

Python: Transforming Twitter datetime string to timestamp (‘z’ is a bad directive in format)

Mark Needham - Sun, 03/15/2015 - 23:43

I’ve been playing around with importing Twitter data into Neo4j and since Neo4j can’t store dates natively just yet I needed to convert a date string to a timestamp.

I started with the following which unfortunately throws an exception:

from datetime import datetime
date = "Sat Mar 14 18:43:19 +0000 2015"
 
>>> datetime.strptime(date, "%a %b %d %H:%M:%S %z %Y")
 
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/_strptime.py", line 317, in _strptime
    (bad_directive, format))
ValueError: 'z' is a bad directive in format '%a %b %d %H:%M:%S %z %Y'

%z is actually a valid directive used to extract the timezone, but Python 2.7’s strptime doesn’t support it; my googling suggests this is one of the idiosyncrasies of strptime.

I eventually came across the python-dateutil library, as recommended by Joe Shaw on StackOverflow.

Using that library the problem is suddenly much simpler:

$ pip install python-dateutil
from dateutil import parser
parsed_date = parser.parse(date)
 
>>> parsed_date
datetime.datetime(2015, 3, 14, 18, 43, 19, tzinfo=tzutc())

To get to a timestamp we can use calendar as I’ve described before:

import calendar
timestamp = calendar.timegm(parser.parse(date).timetuple())
 
>>> timestamp
1426358599
Categories: Programming

Haystack TV Doubles Engagement with Android TV

Android Developers Blog - Sat, 03/14/2015 - 23:51

Posted by Joshua Gordon, Developer Advocate

Haystack TV is a small six-person startup with an ambitious goal: personalize the news. Traditionally, watching news on TV means viewing a list of stories curated by the network. Wouldn’t it be better if you could watch a personalized news channel, based on interesting YouTube stories?

Haystack already had a mobile app, but entering the living room space seemed daunting. Although “Smart TVs” have been on the market for a while, they remain challenging for developers to work with. Many hardware OEMs have proprietary platforms, but Android TV is different. It’s an open ecosystem with great developer resources. Developers can reach millions of users with familiar Android APIs. If you have an existing Android app, it’s easy to bring it to the living room.

Two weeks was all it took for Haystack TV to bring their mobile app to Android TV. That includes building an immersive, cinematic UI (a task greatly simplified by the Android framework). Since launching on Android TV, Haystack TV’s viewership is growing at 40% per month. Previously, users were spending about 40 minutes watching content on mobile per week. Now that’s up to 80 minutes in the living room. Their longest engagements are through Chromecast and Android TV.

Hear from Daniel Barreto, CEO of Haystack TV, on developing for Android TV

Haystack TV’s success on Android TV is a great example of how the Android multi-form factor developer experience shines. Once you’ve learned the ropes of writing Android apps, developing for another form factor (Wear, Auto, TV) is simple.

Android TV helps you create cinematic UIs

Haystack TV’s UI is smooth and cinematic. How were they able to build a great one so quickly? Developing an immersive UI/UX with Android TV is surprisingly easy. The Leanback support library provides fragments for browsing content, showing a details screen, and search. You can use these to get transitions and animations almost for free. To learn more about building UIs for Android TV, watch the Using the Leanback Library DevByte and check out the code samples.

Browsing recommended stories

Your content, front and center

The recommendations row is a central feature of the Android TV home screen. It’s the first thing users see when they turn on their TVs. You can surface content to appear on the recommendations row by implementing the recommendation service. For example, your app can suggest videos your users will want to watch next (say, the next episode in a series, or a related news story). This is great for getting noticed and increasing engagements.

Haystack’s content on the recommendations row

Make your content searchable

How can users find their favorite movie or show from a library of thousands? On Android TV, they can search for it using their voice. This is much faster and more relaxing than typing on the screen with a remote control! In addition to providing in-app search, your app can surface content to appear on the global search results page. The framework takes care of speech recognition for you and delivers the result to your app as a plain text string.

Next Steps

Android TV makes it possible for small startups to create apps for the living room. There are extensive developer resources. For an overview, watch the Introduction to Android TV DevByte. For details, see the developer training docs. Watch this episode of Coffee with a Googler to learn more about the vision for the platform. To get started on your app, visit developer.android.com/tv.

Categories: Programming

Python: Checking any value in a list exists in a line of text

Mark Needham - Sat, 03/14/2015 - 03:52

I’ve been doing some log file analysis to see what Cypher queries were being run on a Neo4j instance and I wanted to narrow down the lines I looked at to only contain ones which had mutating operations, i.e. those containing the words MERGE, DELETE, SET or CREATE.

Here’s an example of the text file I was parsing:

$ cat blog.txt
MATCH n RETURN n
MERGE (n:Person {name: "Mark"}) RETURN n
MATCH (n:Person {name: "Mark"}) ON MATCH SET n.counter = 1 RETURN n

So I only want lines 2 & 3 to be returned as the first one only returns data and doesn’t execute any updates on the graph.

I started off with a very crude way of doing this:

with open("blog.txt", "r") as ins:
    for line in ins:
        if "MERGE" in line or "DELETE" in line or "SET" in line or "CREATE" in line:
           print line.strip()

A better way of doing this is to use the any function and make sure at least one of the words exists in the line:

mutating_commands = ["SET", "DELETE", "MERGE", "CREATE"]
with open("blog.txt", "r") as ins:
    for line in ins:
        if any(command in line for command in mutating_commands):
           print line.strip()

I thought I might be able to simplify the code even further by using itertools but my best attempt so far is less legible than the above:

import itertools
 
mutating_commands = ["SET", "CREATE", "MERGE", "DELETE"]
with open("blog.txt", "r") as ins:
    for line in ins:
        if len(list(itertools.ifilter(lambda x: x in line, mutating_commands))) > 0:
            print line.strip()

I think I’ll go with the 2nd approach for now but if I’m doing something wrong with itertools and it’s much easier to use than the example I’ve shown do correct me!

Categories: Programming

A new reference app for multi-device applications

Android Developers Blog - Fri, 03/13/2015 - 19:56
It is now possible to bring the benefits of your app to your users wherever they happen to be, no matter what device they have near them. Today we’re releasing a reference sample that shows how to implement such a service with an app that works across multiple Android form-factors. This sample, the Universal Music Player, is a bare-bones but functional reference app that supports multiple devices and form factors in a single codebase. It is compatible with Android Auto, Android Wear, and Google Cast devices. Give it a try and easily adapt your own app for wherever your users are, be that a phone, watch, TV, car, or more!

[Screenshots: playback controls and album art on the lock screen; the Google Cast icon on the application toolbar; controlling playback through Android Auto; controlling playback on an Android Wear watch]
This sample uses a number of new features in Android 5.0 Lollipop, like MediaStyle notifications, MediaSession and MediaBrowserService. They make it easy to implement media browsing and playback on multiple devices with a single version of your app.

Check out the source code and let your users enjoy your app from wherever they like.

Posted by Renato Mangini, Senior Developer Platform Engineer, Google Developer Platform Team
Categories: Programming

Are Ads On Your Blog Generally Frowned Upon?

Making the Complex Simple - John Sonmez - Thu, 03/12/2015 - 16:00

In this video I respond to an email asking about whether or not having ads on your blog is good or bad. I explain that unless you are making significant conversions, you’d be better off without the ads. A good way to judge this is if you have 500 daily visitors, then you might […]

The post Are Ads On Your Blog Generally Frowned Upon? appeared first on Simple Programmer.

Categories: Programming

Python/Neo4j: Finding interesting computer sciency people to follow on Twitter

Mark Needham - Wed, 03/11/2015 - 22:13

At the beginning of this year I moved from Neo4j’s field team to dev team and since the code we write there is much lower level than I’m used to I thought I should find some people to follow on twitter whom I can learn from.

My technique for finding some of those people was to pick a person from the Neo4j kernel team who’s very good at systems programming and uses Twitter, which led me to Mr Chris Vest.

I thought that the people Chris interacts with on Twitter are likely to be interested in this type of programming so I manually traversed out to those people, had a look at who they interacted with and put them all into a list.

This approach has worked well and I’ve picked up lots of reading material from following these people. It does feel like something that could be automated though, and we’ll be using Python and Neo4j as our toolkit.

We need to find a library to connect to the Twitter API. There are a few to choose from but tweepy seems simple enough so I started using that.

The first thing we need to do is fill in our Twitter API auth details. Since the application is just for me I’m not going to bother with setting up OAuth – instead I’ll just create an application on apps.twitter.com and grab the appropriate tokens:

[Screenshot: creating a new application on apps.twitter.com]

Once we’ve done that we can navigate to that app and get a consumer key, consumer secret, access token and access token secret. They all reside on the ‘Keys and Access Tokens’ tab:

[Screenshots: the ‘Keys and Access Tokens’ tab of the application]

Now that we’ve got ourselves all auth’d up let’s write some code that starts from Chris and goes out and finds his latest tweets and the people he’s interacted with in them. We’ll write the appropriate information out to a CSV file so that we can import it into Neo4j later on:

import tweepy
import csv
from collections import Counter, deque
 
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
 
api = tweepy.API(auth, wait_on_rate_limit = True, wait_on_rate_limit_notify = True)
 
counter = Counter()
users_to_process = deque()
USERS_TO_PROCESS = 50
 
def extract_tweet(tweet):
    user_mentions = ",".join([user["screen_name"].encode("utf-8")
                             for user in tweet.entities["user_mentions"]])
    urls = ",".join([url["expanded_url"]
                     for url in tweet.entities["urls"]])
    return [tweet.user.screen_name.encode("utf-8"),
            tweet.id,
            tweet.text.encode("utf-8"),
            user_mentions,
            urls]
 
starting_user = "chvest"
with open("tweets.csv", "a") as tweets:
    writer = csv.writer(tweets, delimiter=",", escapechar="\\", doublequote = False)
    for tweet in tweepy.Cursor(api.user_timeline, id=starting_user).items(50):
        writer.writerow(extract_tweet(tweet))
        tweets.flush()
        for user in tweet.entities["user_mentions"]:
            if not len(users_to_process) > USERS_TO_PROCESS:
                users_to_process.append(user["screen_name"])
                counter[user["screen_name"]] += 1
            else:
                break

As well as printing out Chris’ tweets I’m also capturing other users who he’s interacted with and putting them in a queue that we’ll drain later on. We’re limiting the number of other users that we’ll process to 50 for now but it’s easy to change.

If we print out the first few lines of ‘tweets.csv’ this is what we’d see:

$ head -n 5 tweets.csv
userName,tweetId,contents,usersMentioned,urls
chvest,575427045167071233,@shipilev http://t.co/WxqFIsfiSF,shipilev,
chvest,575403105174552576,@AlTobey I often use http://t.co/G7Cdn9Udst for small one-off graph diagrams.,AlTobey,http://www.apcjones.com/arrows/
chvest,575337346687766528,RT @theburningmonk: this is why you need composition over inheritance... :s #CompositionOverInheritance http://t.co/aKRwUaZ0qo,theburningmonk,
chvest,575269402083459072,@chvest except…? “Each library implementation should therefore be identical with respect to the public API”,chvest,

We’re capturing the user, tweetId, the tweet itself, any users mentioned in the tweet and any URLs shared in the tweet.

Next we want to get some of the tweets of the people Chris has interacted with:

# Grab the code from here too - https://gist.github.com/mneedham/3188c44b2cceb88c6de0
 
import tweepy
import csv
from collections import Counter, deque
 
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
 
api = tweepy.API(auth, wait_on_rate_limit = True, wait_on_rate_limit_notify = True)
 
counter = Counter()
users_to_process = deque()
USERS_TO_PROCESS = 50
 
def extract_tweet(tweet):
    user_mentions = ",".join([user["screen_name"].encode("utf-8")
                             for user in tweet.entities["user_mentions"]])
    urls = ",".join([url["expanded_url"]
                     for url in tweet.entities["urls"]])
    return [tweet.user.screen_name.encode("utf-8"),
            tweet.id,
            tweet.text.encode("utf-8"),
            user_mentions,
            urls]
 
starting_user = "chvest"
with open("tweets.csv", "a") as tweets:
    writer = csv.writer(tweets, delimiter=",", escapechar="\\", doublequote = False)
    for tweet in tweepy.Cursor(api.user_timeline, id=starting_user).items(50):
        writer.writerow(extract_tweet(tweet))
        tweets.flush()
        for user in tweet.entities["user_mentions"]:
            if not len(users_to_process) > USERS_TO_PROCESS:
                users_to_process.append(user["screen_name"])
                counter[user["screen_name"]] += 1
            else:
                break
    users_processed = set([starting_user])
    while True:
        if len(users_processed) >= USERS_TO_PROCESS:
            break
        else:
            if len(users_to_process) > 0:
                next_user = users_to_process.popleft()
                print next_user
                if next_user in users_processed:
                    print "-- user already processed"
                else:
                    print "-- processing user"
                    users_processed.add(next_user)
                    for tweet in tweepy.Cursor(api.user_timeline, id=next_user).items(10):
                        writer.writerow(extract_tweet(tweet))
                        tweets.flush()
                        for user_mentioned in tweet.entities["user_mentions"]:
                            if not len(users_processed) > USERS_TO_PROCESS:
                                users_to_process.append(user_mentioned["screen_name"])
                                counter[user_mentioned["screen_name"]] += 1
                            else:
                                break
            else:
                break

Finally let’s take a quick look at the users who show up most frequently:

>>> for user_name, count in counter.most_common(20):
        print user_name, count
 
neo4j 13
devnexus 12
AlTobey 11
bitprophet 11
hazelcast 10
chvest 9
shipilev 9
AntoineGrondin 8
gvsmirnov 8
GatlingTool 8
lagergren 7
tomsontom 6
dorkitude 5
noctarius2k 5
DanHeidinga 5
chris_mahan 5
coda 4
mccv 4
gAmUssA 4
jmhodges 4

A few of the people on that list are in my list which is a good start. We can explore the data set better once it’s in Neo4j though so let’s write some Cypher import statements to create our own mini Twitter graph:

// add people
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/projects/neo4j-twitter/tweets.csv" AS row
MERGE (p:Person {userName: row.userName});
 
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/projects/neo4j-twitter/tweets.csv" AS row
WITH SPLIT(row.usersMentioned, ",") AS users
UNWIND users AS user
MERGE (p:Person {userName: user});
 
// add tweets
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/projects/neo4j-twitter/tweets.csv" AS row
MERGE (t:Tweet {id: row.tweetId})
ON CREATE SET t.contents = row.contents;
 
// add urls
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/projects/neo4j-twitter/tweets.csv" AS row
WITH SPLIT(row.urls, ",") AS urls
UNWIND urls AS url
MERGE (:URL {value: url});
 
// add links
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/projects/neo4j-twitter/tweets.csv" AS row
MATCH (p:Person {userName: row.userName})
MATCH (t:Tweet {id: row.tweetId})
MERGE (p)-[:TWEETED]->(t);
 
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/projects/neo4j-twitter/tweets.csv" AS row
WITH SPLIT(row.usersMentioned, ",") AS users, row
UNWIND users AS user
MATCH (p:Person {userName: user})
MATCH (t:Tweet {id: row.tweetId})
MERGE (p)-[:MENTIONED_IN]->(t);
 
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/projects/neo4j-twitter/tweets.csv" AS row
WITH SPLIT(row.urls, ",") AS urls, row
UNWIND urls AS url
MATCH (u:URL {value: url})
MATCH (t:Tweet {id: row.tweetId})
MERGE (t)-[:CONTAINS_LINK]->(u);

We can put all those commands in a file and execute them using neo4j-shell:

$ ./neo4j-community-2.2.0-RC01/bin/neo4j-shell --file import.cql

Now let’s write some queries against the graph:

// Find the tweets where Chris mentioned himself
MATCH path = (n:Person {userName: "chvest"})-[:TWEETED]->()<-[:MENTIONED_IN]-(n)
RETURN path

[Graph visualisation of the matching paths]

// Find the most popular links shared in the network
MATCH (u:URL)<-[r:CONTAINS_LINK]-()
RETURN u.value, COUNT(*) AS times
ORDER BY times DESC
LIMIT 10
 
+-------------------------------------------------------------------------------------------------+
| u.value                                                                                 | times |
+-------------------------------------------------------------------------------------------------+
| "http://www.polyglots.dk/"                                                              | 4     |
| "http://www.java-forum-nord.de/"                                                        | 4     |
| "http://hirt.se/blog/?p=646"                                                            | 3     |
| "http://wp.me/p26jdv-Ja"                                                                | 3     |
| "https://instagram.com/p/0D4I_hH77t/"                                                   | 3     |
| "https://blogs.oracle.com/java/entry/new_java_champion_tom_chindl"                      | 3     |
| "http://www.kennybastani.com/2015/03/spark-neo4j-tutorial-docker.html"                  | 2     |
| "https://firstlook.org/theintercept/2015/03/10/ispy-cia-campaign-steal-apples-secrets/" | 2     |
| "http://buff.ly/1GzZXlo"                                                                | 2     |
| "http://buff.ly/1BrgtQd"                                                                | 2     |
+-------------------------------------------------------------------------------------------------+
10 rows

The first link is for a programming language meetup in Copenhagen, the second for a Java conference in Hannover and the third an announcement about the latest version of Java Mission Control. So far so good!

A next step in this area would be to run the links through Prismatic’s interest graph so we can model topics in our graph as well. For now let’s have a look at the interactions between Chris and others in the graph:

// Find the people who Chris interacts with most often
MATCH path = (n:Person {userName: "chvest"})-[:TWEETED]->()<-[:MENTIONED_IN]-(other)
RETURN other.userName, COUNT(*) AS times
ORDER BY times DESC
LIMIT 5
 
+------------------------+
| other.userName | times |
+------------------------+
| "gvsmirnov"    | 7     |
| "shipilev"     | 5     |
| "nitsanw"      | 4     |
| "DanHeidinga"  | 3     |
| "AlTobey"      | 3     |
+------------------------+
5 rows

Let’s generalise that to find interactions between any pair of people:

// Find the people who interact most often
MATCH (n:Person)-[:TWEETED]->()<-[:MENTIONED_IN]-(other)
WHERE n <> other
RETURN n.userName, other.userName, COUNT(*) AS times
ORDER BY times DESC
LIMIT 5
 
+------------------------------------------+
| n.userName    | other.userName   | times |
+------------------------------------------+
| "fbogsany"    | "AntoineGrondin" | 8     |
| "chvest"      | "gvsmirnov"      | 7     |
| "chris_mahan" | "bitprophet"     | 6     |
| "maxdemarzi"  | "neo4j"          | 6     |
| "chvest"      | "shipilev"       | 5     |
+------------------------------------------+
5 rows

Let’s combine a couple of these together to come up with a score for each person:

MATCH (n:Person)
 
// number of mentions
OPTIONAL MATCH (n)-[mention:MENTIONED_IN]->()
WITH n, COUNT(mention) AS mentions
 
// number of links shared by someone else
OPTIONAL MATCH (n)-[:TWEETED]->()-[:CONTAINS_LINK]->(link)<-[:CONTAINS_LINK]-()
 
WITH n, mentions, COUNT(link) AS links
RETURN n.userName, mentions + links AS score, mentions, links
ORDER BY score DESC
LIMIT 10
 
+------------------------------------------+
| n.userName    | score | mentions | links |
+------------------------------------------+
| "chvest"      | 17    | 10       | 7     |
| "hazelcast"   | 16    | 10       | 6     |
| "neo4j"       | 15    | 13       | 2     |
| "noctarius2k" | 14    | 4        | 10    |
| "devnexus"    | 12    | 12       | 0     |
| "polyglotsdk" | 11    | 2        | 9     |
| "shipilev"    | 11    | 10       | 1     |
| "AlTobey"     | 11    | 10       | 1     |
| "bitprophet"  | 10    | 9        | 1     |
| "GatlingTool" | 10    | 8        | 2     |
+------------------------------------------+
10 rows

Amusingly, Chris is top of his own network, but we also see three accounts which aren’t people but rather products – neo4j, hazelcast and GatlingTool. The rest are legit though.

That’s as far as I’ve got, but to make this more useful I think we need to introduce follower/friend links as well as import more data.

In the meantime I’ve got a bunch of links to go and read!

Categories: Programming

Android 5.1 Lollipop SDK

Android Developers Blog - Tue, 03/10/2015 - 19:08

By Jamal Eason, Product Manager, Android

Yesterday we announced Android 5.1, an updated version of the Android Lollipop platform that improves stability, provides better control of notifications, and increases performance. As a part of the Lollipop update, we are releasing the Android 5.1 SDK (API Level 22) which supports the new platform and lets you get started with developing and testing.

What's new in Android 5.1?

For developers, Android 5.1 introduces a small set of new APIs. A key API addition is support for multiple SIM cards, which is important for many regions where Android One phones are being adopted. Consumers of Android One devices will have more flexibility to switch between carriers and manage their network activities in the way that works best for them. Therefore you, as a developer, can create new app experiences that take advantage of this new feature.
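For illustration (a hedged sketch, not from the post itself): the subscription APIs in API level 22 let an app enumerate the active SIMs, roughly like this (it requires the READ_PHONE_STATE permission):

import android.content.Context;
import android.telephony.SubscriptionInfo;
import android.telephony.SubscriptionManager;
import android.util.Log;
import java.util.List;

public class SimLister {
    public static void logActiveSims(Context context) {
        SubscriptionManager sm = SubscriptionManager.from(context);
        List<SubscriptionInfo> subs = sm.getActiveSubscriptionInfoList();
        if (subs == null) return; // no active SIMs (or permission missing)
        for (SubscriptionInfo info : subs) {
            Log.i("SimLister", "SIM in slot " + info.getSimSlotIndex()
                    + ": " + info.getCarrierName());
        }
    }
}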

In addition to the new consumer features, Android 5.1 also enhances enterprise features to better support the launch of Android for Work.

Android 5.1 supports multiple SIM cards on compatible devices like Android One.

Updates for the Android SDK

To get you started with Android 5.1, we have updated the Android SDK tools to support the new platform and its new APIs. The SDK now includes Android 5.1 emulator system images that you can use to test your apps and develop using the latest capabilities and APIs. You can update your SDK through the Android SDK Manager in Android Studio.

For details on the new developer APIs, take a look at the API Overview.

Coming to Nexus devices soon

Over the next few weeks, we’ll be rolling out updates for Android 5.1 to the following Nexus devices: Nexus 4, Nexus 5, Nexus 6, Nexus 7 (2012), Nexus 7 (2012, 3G), Nexus 7 (2013), Nexus 7 (2013, 3G/LTE), Nexus 9, Nexus 9 (LTE), Nexus 10, and Nexus Player.

Next Steps

As with all Android releases, it’s a good idea to test your apps on the new platform as soon as possible. You can get started today using Android 5.1 system images with the emulator that’s included in the SDK, or you can download an Android 5.1 Nexus image and flash the system image to your Nexus device.

If you have not had a chance to update your app to material design, or explore how your app might work on Android Wear, Android TV, or even Android Auto, now is a good time to get started with the Android 5.1 SDK update.

Categories: Programming

Python: Streaming/Appending to a file

Mark Needham - Tue, 03/10/2015 - 00:00

I’ve been playing around with Twitter’s API (via the tweepy library) and due to the rate limiting it imposes I wanted to stream results to a CSV file rather than waiting until my whole program had finished.

I wrote the following program to simulate what I was trying to do:

import csv
import time
 
with open("rows.csv", "a") as file:
    writer = csv.writer(file, delimiter = ",")
 
    end = time.time() + 10
    while True:
        if time.time() > end:
            break
        else:
            writer.writerow(["mark", "123"])
            time.sleep(1)

The program will run for 10 seconds and append one line to ‘rows.csv’ once a second. Although I have used the ‘a’ flag in my call to ‘open’, if I poll that file before the 10 seconds is up it’s empty:

$ date && wc -l rows.csv
Mon  9 Mar 2015 22:54:27 GMT
       0 rows.csv
 
$ date && wc -l rows.csv
Mon  9 Mar 2015 22:54:31 GMT
       0 rows.csv
 
$ date && wc -l rows.csv
Mon  9 Mar 2015 22:54:34 GMT
       0 rows.csv
 
$ date && wc -l rows.csv
Mon  9 Mar 2015 22:54:43 GMT
      10 rows.csv

I thought the flushing of the file was completely controlled by the with block, but luckily for me there’s actually a flush() function which allows me to force writes to the file whenever I want.

Here’s the new and improved sample program:

import csv
import time
 
with open("rows.csv", "a") as file:
    writer = csv.writer(file, delimiter = ",")
 
    end = time.time() + 10
    while True:
        if time.time() > end:
            break
        else:
            writer.writerow(["mark", "123"])
            time.sleep(1)
            file.flush()

And if we poll the file while the program’s running:

$ date && wc -l rows.csv
Mon  9 Mar 2015 22:57:36 GMT
      14 rows.csv
 
$ date && wc -l rows.csv
Mon  9 Mar 2015 22:57:37 GMT
      15 rows.csv
 
$ date && wc -l rows.csv
Mon  9 Mar 2015 22:57:40 GMT
      18 rows.csv
 
$ date && wc -l rows.csv
Mon  9 Mar 2015 22:57:45 GMT
      20 rows.csv

Much easier than I expected – I ♥ python!

Categories: Programming

Fun Basic Preview

Phil Trelford's Array - Mon, 03/09/2015 - 20:30

Fun Basic is an extended clone of Microsoft’s Small Basic programming language that runs as an app in the Windows Store. The app provides a range of sample programs from simple turtle graphics through to video games and physics simulations. The samples have been selected from programs in the public domain created by the Small Basic community. Each sample is presented with its corresponding source code, allowing you to edit the code and run the program.

FunBasic Entry Screenshot

The concept is that you can learn elements of programming by reading the provided programs and editing and enhancing them. The inspiration comes from the type-in programs circulated in computer magazines throughout the 80s, and through which I had some of my earliest forays into programming.

The editor on the other hand affords more modern conveniences including syntax colouring, code completion and tooltips over API calls.

FunBasic 1942 Screenshot

Why not head over to the Windows Store and try the preview? New features will be appearing weekly.

It’s free and there’s games to play! http://tinyurl.com/funbasic

Language resources

Fun Basic provides all the language features of Small Basic and it extends the language with parameters on subroutines. You can learn more about language features in the Small Basic Tutorial.

The app is currently a preview; in future releases extended support for function return values, tuples and pattern matching (using Select Case) will be enabled.

Technical bit

Fun Basic is an offshoot from an extended Small Basic compiler project I wrote for fun in early 2014 while learning FParsec, a parser combinator library for F#. You can learn more about that effort in my presentation at NDC London, “Write Your Own Compiler in 24 Hours”, and an interview on .NET Rocks about “Writing Compilers”.

My kids (8 & 12) have been enjoying programming and downloading apps from the Windows Store so last month I set out to cover both of these interests with a Store app version.

Interpreter

The Windows Store sandbox doesn’t support compilation within apps so the compiler was out. Fortunately I’d also knocked together an interpreter for Small Basic during early prototyping so I used this as a basis for the runtime. The interpreter simply pattern matches over the Abstract Syntax Tree (AST) generated by the parser, executing instructions and evaluating expressions.
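As a flavour of that style (a toy AST for illustration, not the actual Fun Basic one, which is far richer):

type Expr =
    | Literal of int
    | Add of Expr * Expr

// Evaluation is a recursive pattern match over the tree
let rec eval expr =
    match expr with
    | Literal n -> n
    | Add (l, r) -> eval l + eval r

// eval (Add (Literal 1, Literal 2)) returns 3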

Library

The next challenge was supporting Small Basic’s runtime library, which provides simple graphics and text based operations. This had to be written from scratch as it needed to work against the Windows Store API and run in-process with the app. All API calls are made asynchronously and I’ve employed separate canvases for the drawing, shapes and text layers. There’s also support for the Flickr API, which unfortunately at the time of writing is broken in Small Basic.


Editor

The editor employs the ActiPro SyntaxEditor control to provide syntax colouring, code completion and tool tips. ActiPro’s Language Designer wizard meant I had syntax colouring set up in minutes, and it was relatively easy to set up the code completion and tooltips using my existing parser and AST. I’m planning to enable more of the SyntaxEditor’s features in future versions.

App

To build the app I used the Windows Store Grid App project template that’s built-in to Visual Studio. All that was needed was to customize the grid items with pictures and descriptions for the master view and add the editor and program panels to the detail view.

Logo

Special thanks to Sergey Tihon, author of the excellent F# Weekly, for putting together a nice logo for the app!

Source Code

The source is open and available on BitBucket, see http://bitbucket.org/ptrelford/funbasic

If you delve in you’ll find a mixture of F#, C# and VB.Net. F# is used for the parser and interpreter, C# for interfaces and small imperative functions and VB.Net for gluing the app together.

Releases

I’m currently releasing new features weekly, so let me know if there’s a feature you “must have” via Twitter @ptrelford :)

Categories: Programming