
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Making a performant watch face

Android Developers Blog - Thu, 12/18/2014 - 22:27

Posted by Hoi Lam, Developer Advocate, Android Wear

What’s a better holiday gift than great performance? You’ve got a great watch face idea -- now, you want to make sure the face you’re presenting to the world is one of care and attention to detail.

At the core of the watch face's process is an onDraw method for canvas operations. This allows maximum flexibility for your design, but also comes with a few performance caveats. In this blog post, we will focus on performance, using the real-life story of how we optimised the Santa Tracker watch face, more than doubling its frame rate (from 18 fps to 42 fps) and making the animation sub-pixel smooth.

Starting point - 18 fps

Our Santa watch face contains a number of overlapping bitmaps that are used to achieve our final image. Here's a list of them from bottom to top:

  1. Background (static)
  2. Clouds which move to the middle
  3. Tick marks (static)
  4. Santa figure and sledge (static)
  5. Santa’s hands - hours and minutes
  6. Santa’s head (static)

The journey begins with these images...

Large images kill performance (+14 fps)

Image size is critical to performance in a Wear application, especially if the images will be scaled and rotated. Wasted pixel space (like Santa’s arm here) is a common asset mistake:

Before: 584 × 584 = 341,056 pixels. After: 48 × 226 = 10,848 pixels (97% reduction).

It's tempting to use bitmaps from the original mock-up that place the watch arms and components in absolute space. Sadly, this creates problems, as with Santa's arm here. While the arm is in the correct position, even transparent pixels increase the size of the image, which can cause performance problems due to memory fetches. You'll want to work with your design team to extract padding and rotational information from the images, and rely on the system to apply the transformations on your behalf.

Since the original image covers the entire screen, even though the bitmap is mostly transparent, the system still needs to check every pixel to see whether it is affected. Cutting down the image area results in significant performance gains. After correcting both of the arms, the Santa watch face frame rate increased by 10 fps to 28 fps (fps up 56%). We saved another 4 fps (fps up 22%) by cropping Santa's face and figure layer. 14 fps gained, not bad!

Combine Bitmaps (+7 fps)

Although it would be ideal to have the watch tick marks on top of our clouds, it actually does not make much difference visually as the clouds themselves are transparent. Therefore there is an opportunity to combine the background with the ticks.


When we combined these two views, the watch spent less time doing alpha-blending operations between them, saving precious GPU time. So, consider collapsing alpha-blended resources wherever you can in order to increase performance. By combining two full-screen bitmaps, we were able to gain another 7 fps (fps up 39%).
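As a sketch of this technique, the two static layers can be flattened into one bitmap at initialisation time, so onDraw only pays for a single blit. The field and variable names below are illustrative, not from the Santa Tracker source:

```java
// Flatten the static background and tick-mark layers into a single
// bitmap once (e.g. in onCreate); onDraw then draws one bitmap
// instead of alpha-blending two full-screen layers every frame.
Bitmap combined = Bitmap.createBitmap(
        background.getWidth(), background.getHeight(),
        Bitmap.Config.ARGB_8888);
Canvas combinedCanvas = new Canvas(combined);
combinedCanvas.drawBitmap(background, 0, 0, null);
combinedCanvas.drawBitmap(tickMarks, 0, 0, null);
mBackgroundBitmap = combined; // the only layer onDraw needs to blit
```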

Anti-alias vs FilterBitmap flags - what should you use? (+2 fps)

Android Wear watches come in all shapes and sizes. As a result, it is sometimes necessary to resize a bitmap before drawing on the screen. However, it is not always clear what options developers should select to make sure that the bitmap comes out smoothly. With canvas.drawBitmap, developers need to feed in a Paint object. There are two important options to set - they are anti-alias and FilterBitmap. Here’s our advice:

  • Anti-alias does nothing for bitmaps. As developers, we often switch on the anti-alias option by default when creating a Paint object. However, this option only really makes sense for vector objects; for bitmaps, it has no impact. The hand on the left below has anti-alias switched on, the one on the right has it switched off. So turn off anti-aliasing for bitmaps to gain performance back. For our watch face, we gained another 2 fps (fps up 11%) by switching this option off.
  • Switch on FilterBitmap for all bitmap objects which are on top of other objects - this option smooths the edges when drawBitmap is called. This should not be confused with the filter option on Bitmap.createScaledBitmap for resizing bitmaps. We need both to be turned on. The bitmaps below are the magnified view of Santa’s hand. The one on the left has FilterBitmap switched off and the one on the right has FilterBitmap switched on.
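Put together, a Paint set up along these lines reflects the advice above (a minimal sketch, not the actual Santa Tracker code; variable names are illustrative):

```java
// Paint for drawing scaled/rotated bitmaps: anti-aliasing only helps
// vector geometry, so it is switched off; FilterBitmap smooths the
// sampled bitmap pixels instead.
Paint bitmapPaint = new Paint();
bitmapPaint.setAntiAlias(false);   // no effect on bitmaps, skip the cost
bitmapPaint.setFilterBitmap(true); // bilinear filtering in drawBitmap

// When pre-scaling, the separate filter flag on createScaledBitmap
// also needs to be true.
Bitmap scaled = Bitmap.createScaledBitmap(src, dstWidth, dstHeight, true);
```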
Eliminate expensive calls in the onDraw loop (+3 fps)

    onDraw is the most critical function call in watch faces. It's called for every drawable frame, and the actual painting process cannot move forward until it's finished. As such, your onDraw method should be as light and as performant as possible. Here are some common problems that developers run into that can be avoided:

    1. Do move heavy and common code to a precompute function - e.g. if we commonly grab R.array.cloudDegrees, try doing that in onCreate, and just referencing it in the onDraw loop.
    2. Don’t repeat the same image transform in onDraw - it’s common to resize bitmaps at runtime to fit the screen size, but the screen dimensions are not available in onCreate. To avoid resizing the bitmap over and over again in onDraw, override onSurfaceChanged, where width and height information are available, and resize images there.
    3. Don't allocate objects in onDraw - this leads to high memory churn which will force garbage collection events to kick off, killing frame rates.
    4. Do analyze the CPU performance by using a tool such as the Android Device Monitor. It’s important that the onDraw execution time is short and occurs in a regular period.

    Following these simple rules will improve rendering performance drastically.
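For example, rule 2 above could be sketched with an override of onSurfaceChanged doing the one-off resize (field names are illustrative, not from the Santa Tracker source):

```java
@Override
public void onSurfaceChanged(SurfaceHolder holder, int format,
                             int width, int height) {
    super.onSurfaceChanged(holder, format, width, height);
    // Scale the background once per surface-size change instead of
    // once per frame in onDraw.
    mBackgroundScaled = Bitmap.createScaledBitmap(
            mBackground, width, height, true /* filter */);
}
```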

    In the first version, the Santa onDraw routine has a rogue line:

    int[] cloudDegrees = 
        getResources().getIntArray(R.array.cloudDegrees);

    This loads the int array from resources on every call, which is expensive. By eliminating this, we gained another 3 fps (fps up 17%).
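The fix is to hoist the lookup into onCreate and keep the result in a field, along these lines (a sketch with illustrative field names):

```java
private int[] mCloudDegrees;

@Override
public void onCreate(SurfaceHolder holder) {
    super.onCreate(holder);
    // Load the array from resources once, not on every frame.
    mCloudDegrees = getResources().getIntArray(R.array.cloudDegrees);
}

@Override
public void onDraw(Canvas canvas, Rect bounds) {
    // ... reference mCloudDegrees here; no resource lookup per frame.
}
```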

    Sub-pixel smooth animation (-2 fps)

    For those keeping count, we should be at 44 fps, so why does the end product run at 42 fps? The reason is a limitation of canvas.drawBitmap. Although this call accepts the left and top positions as floats, for backwards-compatibility reasons the API truncates them to integers when the draw is purely translational. As a result, the cloud can only move in increments of a whole pixel, resulting in janky animations. To be sub-pixel smooth, we actually need to draw and then rotate, rather than using pre-rotated clouds that move towards Santa. This additional rotation costs us 2 fps, but the effect is worthwhile: the animation is now sub-pixel smooth.

    Before - fast but janky and wobbly

    for (int i = 0; i < mCloudBitmaps.length; i++) {
        float r = centerX - (timeElapsed / mCloudSpeeds[i]) % centerX;
        float x = centerX + 
            -1 * (r * (float) Math.cos(Math.toRadians(cloudDegrees[i] + 90)));
        float y = centerY - 
            r * (float) Math.sin(Math.toRadians(cloudDegrees[i] + 90));
        mCloudFilterPaints[i].setAlpha((int) (r/centerX * 255));
        Bitmap cloud = mCloudBitmaps[i];
        canvas.drawBitmap(cloud,
            x - cloud.getWidth() / 2,
            y - cloud.getHeight() / 2,
            mCloudFilterPaints[i]);
    }

    After - slightly slower but sub-pixel smooth

    for (int i = 0; i < mCloudBitmaps.length; i++) {
        canvas.save();
        canvas.rotate(mCloudDegrees[i], centerX, centerY);
        float r = centerX - (timeElapsed / (mCloudSpeeds[i])) % centerX;
        mCloudFilterPaints[i].setAlpha((int) (r / centerX * 255));
        canvas.drawBitmap(mCloudBitmaps[i], centerX, centerY - r,
            mCloudFilterPaints[i]);
        canvas.restore();
    }

    Before: Integer translation values create janky, wobbly animation. After: smooth sailing!

    Quality on every wrist

    The watch face is the most prominent UI element in Android Wear. As craftspeople, it is our responsibility to make it shine. Let’s put quality on every wrist!

Categories: Programming

The Sweet Spot of Customer Demand Meets Microsoft Supply

Here’s a simple visual that I whiteboard when I lead workshops for business transformation.


The Sweet Spot is where customer “demand” meets Microsoft “supply.”

I’m not a fan of product pushers or product pushing.  I’m a fan of creating “pull.”

In order for customers to pull-through any product, platform, or service, you need to know the customer’s pains, needs, and desired outcomes.  Without customer empathy, you’re not relevant.

This is a simple visual, but a powerful one.  

When you have good representation of the voice of the customer, you can really identify problems worth solving. It always comes down to pains, needs, opportunities, and desired outcomes. In short, I just say pains, needs, and desired outcomes so that people can remember it easily.

To make it real, we use scenarios to tell a simple story of a customer’s pain, needs, and desired outcomes.   We use our friends in the field working with customers to give us real stories of real pain.  

Here is an example Scenario Narrative where a company is struggling in creating products that its customers care about …


As you can see, the Current State is a pretty good story of pain, that a lot of business leaders and product owners can identify with.  For some, it’s all too real, because it is their story and they can see themselves in it.

The Desired Future State tells a pretty good story of what success would look like. It paints a simple picture of an ideal scenario … a future possibility.

Here is an example of a Solution Storyboard, where we paint a simple picture of that Desired Future State, or more precisely, a Future Capability Vision.     It’s this Future Capability Vision that shows how, with the right capabilities, the customer can address their pains, needs, and desired outcomes.


The beauty of this approach is that it’s product and technology agnostic.   It’s all about building capabilities.

From there, with a  good understanding of the pains, needs, and desired outcomes, it’s super easy to overlay relevant products, technologies, consulting services, etc.

And then, rather than trying to do a product “push”, it becomes a product “pull” because it connects with customers in a very deep, very real, very relevant way.

Think “pull” not “push.”

You Might Also Like

Drive Business Transformation by Reenvisioning Operations

Drive Business Transformation by Reenvisioning Your Customer Experience

Dual-Speed IT Drives Business Transformation and Improves IT-Business Relationships

How Business Leaders are Building Digital Skills

How To Build a Roadmap for Your Digital Transformation

Categories: Architecture, Programming

Learning Hacks

Making the Complex Simple - John Sonmez - Thu, 12/18/2014 - 16:00

In this video, I talk about some of the hacks I use to get in extra learning time in my busy day. One of the main hacks I use is to listen to audio books whenever I am in the car or working out. I also read a book while walking on the treadmill every night.

The post Learning Hacks appeared first on Simple Programmer.

Categories: Programming

Reminder to migrate to updated Google Data APIs

Google Code Blog - Mon, 12/15/2014 - 18:00
Over the past few years, we’ve been updating our APIs with new versions across Drive and Calendar, as well as those used for managing Google Apps for Work domains. These new APIs offered developers several improvements over older versions of the API. With each of these introductions, we also announced the deprecation of a set of corresponding APIs.

The deprecation period for these APIs is coming to an end. As of April 20, 2015, we will discontinue these deprecated APIs. Calls to these APIs and any features in your application that depend on them will not work after April 20th.

Discontinued API → Replacement API

  • Documents List API → Drive API
  • Admin Audit → Admin SDK Reports API
  • Google Apps Profiles → Admin SDK Directory API
  • Provisioning → Admin SDK Directory API
  • Reporting → Admin SDK Reports API
  • Email Migration API v1 → Email Migration API v2
  • Reporting Visualization → No replacement available
When updating, we also recommend that you use the opportunity to switch to OAuth2 for authorization. Older protocols, such as ClientLogin, AuthSub, and OpenID 2.0, have also been deprecated and are scheduled to shut down.

For help on migration, consult the documentation for the APIs or ask questions about the Drive API or Admin SDK on StackOverflow.

Posted by Steven Bazyl, Developer Advocate
Categories: Programming

How I got Robert (Uncle Bob) Martin to write a foreword for my book

Making the Complex Simple - John Sonmez - Mon, 12/15/2014 - 16:00

Last week my publisher, Manning, gave me a little surprise. They told me that my new book, Soft Skills: The Software Developer’s Life Manual was going to publish early; that anyone who ordered before December 14th would be able to get the print version in their hands by Christmas (barring any unforeseen circumstances.) This was very exciting, until I realized ... Read More

The post How I got Robert (Uncle Bob) Martin to write a foreword for my book appeared first on Simple Programmer.

Categories: Programming

The Evolution of eInk

Coding Horror - Jeff Atwood - Mon, 12/15/2014 - 09:40

Sure, smartphones and tablets get all the press, and deservedly so. But if you place the original mainstream eInk device from 2007, the Amazon Kindle, side by side with today's model, the evolution of eInk devices is just as striking.

Each of these devices has a 6 inch eInk screen. Beyond that they're worlds apart.

Kindle (2007): 8" × 5.3" × 0.8", 10.2 oz, 6" eInk display at 167 PPI, 4-level greyscale, 256 MB storage, 400 MHz CPU, $399, 7 days battery life, USB.

Kindle Voyage: 6.4" × 4.5" × 0.3", 6.3 oz, 6" eInk display at 300 PPI, 16-level greyscale with backlight, 4 GB storage, 1 GHz CPU, $199, 6 weeks battery life, WiFi / Cellular.

They may seem awfully primitive compared to smartphones, but that's part of their charm – they are the scooter to the smartphone's motorcycle. Nowhere near as versatile, but as a form of basic transportation, radically simpler, radically cheaper, and more durable. There's an object lesson here in stripping things away to get to the core.

eInk devices are also pleasant in a paradoxical way because they basically suck at everything that isn't reading. That doesn't sound like something you'd want, except when you notice you spend every fifth page switching back to Twitter or Facebook or Tinder or Snapchat or whatever. eInk devices let you tune out the world and truly immerse yourself in reading.

I believe in the broadest sense, bits > atoms. Sure, we'll always read on whatever device we happen to hold in our hands that can display words and paragraphs. And the advent of retina class devices sure made reading a heck of a lot more pleasant on tablets and smartphones.

But this idea of ultra-cheap, pervasive eInk reading devices eventually replacing those ultra-cheap, pervasive paperbacks I used to devour as a kid has great appeal to me. I can't let it go. Reading is Fundamental, man!

That's why I'm in this weird place where I will buy, sight unseen, every new Kindle eInk device. I wasn't quite crazy enough to buy the original Kindle (I mean, look at that thing) but I've owned every model since the third generation Kindle was introduced in 2010.

I've also been tracking the Kindle prices to see when they can get them down to $49 or lower. We're not quite there yet – the basic Kindle eInk reader, which by the way is still pretty darn amazing compared to that original 2007 model pictured above – is currently on sale for $59.

But this is mostly about their new flagship eInk device, the Kindle Voyage. Instead of being cheap, it's trying to be upscale. The absolute first thing you need to know is this is the first 300 PPI (aka "retina") eInk reader from Amazon. If you're familiar with the smartphone world before and after the iPhone 4, then you should already be lining up to own one of these.

When you experience 300 PPI in eInk, you really feel like you're looking at a high quality printed page rather than an array of RGB pixels. Yeah, it's still grayscale, but it is glorious. Here are some uncompressed screenshots I made from mine at native resolution.

Note that the real device is eInk, so there's a natural paper-like fuzziness that makes it seem even more high resolution than these raw bitmaps would indicate.

I finally have enough resolution to pick a thinner font than fat, sassy old Caecilia.

The backlight was new to the original Paperwhite, and it definitely had some teething pains. The third time's the charm; they've nailed the backlight aspect for improved overall contrast and night reading. The Voyage also adds an ambient light sensor so it automatically scales the backlight to anything from bright outdoors to a pitch-dark bedroom. It's like automatic night time headlights on a car – one less manual setting I have to deal with before I sit down and get to my reading. It's nice.

The Voyage also adds page turn buttons back into the mix, via pressure sensing zones on the left and right bezel. I'll admit I had some difficulty adjusting to these buttons, to the point that I wasn't sure I would, but I eventually did – and now I'm a convert. Not having to move your finger into the visible text on the page to advance, and being able to advance without moving your finger at all, just pushing it down slightly (which provides a little haptic buzz as a reward), does make for a more pleasant and efficient reading experience. But it is kind of subtle and it took me a fair number of page turns to get it down.

In my experience eInk devices are a bit more fragile than tablets and smartphones. So you'll want a case for automatic on/off and basic "throw it in my bag however" paperback book level protection. Unfortunately, the official Kindle Voyage case is a disaster. Don't buy it.

Previous Kindle cases were expensive, but they were actually very well designed. The Voyage case is expensive and just plain bad. Whoever came up with the idea of a weirdly foldable, floppy origami top opening case on a thing you expect to work like a typical side-opening book should be fired. I recommend something like this basic $14.99 case which works fine to trigger on/off and opens in the expected way.

It's not all sweetness and light, though. The typography issues that have plagued the Kindle are still present in full force. It doesn't personally bother me that much, but it is reasonable to expect more by now from a big company that ostensibly cares about reading. And has a giant budget with lots of smart people on its payroll.

This is what text looks like on a kindle.

— Justin Van Slembrou… (@jvanslem) February 6, 2014

If you've dabbled in the world of eInk, or you were just waiting for a best of breed device to jump in, the Kindle Voyage is easy to recommend. It's probably peak mainstream eInk. Would recommend, would buy again, will probably buy all future eInk models because I have an addiction. A reading addiction. Reading is fundamental. Oh, hey, $2.99 Kindle editions of The Rise and Fall of the Third Reich? Yes, please.

(At the risk of coming across as a total Amazon shill, I'll also mention that the new Amazon Family Sharing program is amazing and lets me and my wife finally share books made of bits in a sane way, the way we used to share regular books: by throwing them at each other in anger.)

Categories: Programming

New blog posts about bower, grunt and elasticsearch

Gridshore - Mon, 12/15/2014 - 08:45

Two new blog posts I want to point out to you all. I wrote these blog posts on my employers blog:

The first post is about creating backups of your elasticsearch cluster. Some time ago they introduced the snapshot/restore functionality. You can use the REST endpoint to access it, but it is much easier to use a plugin to handle the snapshots - or, maybe even better, to integrate the functionality into your own Java application. That is what this blog post is about: integrating snapshot/restore functionality into your Java application. As a bonus, there are screenshots of my elasticsearch gui project showing the snapshot/restore functionality.

Creating elasticsearch backups with snapshot/restore

The second blog post I want to bring to your attention is front-end oriented. I already mentioned my elasticsearch gui project, which is an AngularJS application. I have been working on the plugin for a long time and the amount of JavaScript code is increasing. Therefore I wanted to introduce grunt and bower to my project. That is what this blog post is about.

Improve my AngularJS project with grunt

The post New blog posts about bower, grunt and elasticsearch appeared first on Gridshore.

Categories: Architecture, Programming

Agile: how hard can it be?!

Xebia Blog - Sun, 12/14/2014 - 13:48

Yesterday my colleagues and I ran an awesome workshop at the MIT conference in which we built a Rube Goldberg machine using Scrum and Extreme Engineering techniques. As agile coaches, one would think that being an agile team would come naturally to us, but I'd like to share our pitfalls and insights with you, since we learned a lot about being an agile team and about what an incredibly powerful model a Rube Goldberg machine is for scaled agile product development.

If you're not the reading type, check out the video.

Rube ... what?

Goldberg. According to Wikipedia: a Rube Goldberg machine is a contraption, invention, device or apparatus that is deliberately over-engineered or overdone to perform a very simple task in a very complicated fashion, usually including a chain reaction. The expression is named after American cartoonist and inventor Rube Goldberg (1883–1970).

In our case we set out on a 6-by-4-metre stage divided into 5 sections. Each section had a theme, like rolling, propulsion, swinging, lifting etc. In a fashion it resembled a large software product that has to respond to some event in a (to outsiders) incredibly complex manner, by triggering a chain of sub-systems that ends in some kind of end result.

The workspace, scrum boards and build stuff

Extreme Scrum

During the day, 5 teams worked in a total of 10 sprints to create the most incredible machine, experiencing everything one can find during "normal" product development. We had inexperienced team members, little to no documentation, legacy systems whose engineering principles were shrouded in mystery, teams that forgot to hold retrospectives, and interfaces that were ignored because the problem "lies with the other team". The huge time pressure of the relatively short sprints and the complexity of what we were trying to achieve created a pressure cooker that brought these problems to the surface faster than anything else, and with Scrum we were forced to face and fix them.

Team scrumboard

Build, fail, improve, build

“Most people do not listen with the intent to understand; they listen with the intent to reply.” - Stephen R. Covey

Having 2 minutes to do your planning makes it very difficult to listen, especially when your head is buzzing with ideas, yet sometimes you have to slow down to speed up. Effective building requires us to really understand what your team mate is going to do, pairing proved a very effective way to slow down your own brain and benefit from both rubber ducking and the insight of your team mate. Once our teams reached 4 members we could pair and drastically improve the outcome.

Deadweight with pneumatic fuse

Once the machine had reached a critical size, integration tests started to fail. The teams responded by testing multiple times during the sprint and fixing the broken build rather than adding new features. Especially in mechanical engineering, that is not as simple as it sounds. Sometimes a part of the machine would be "refactored", and since we had not designed for a simple end-to-end test that could be applied continuously, it took a couple of sprints to get that right.

A MVP that made it to the final product

"Keep your code clean" we teach teams every day. "Don't accept technical or functional debt, you know it will slow you down in the end". Yet it is so tempting. Despite a Scrum Master and an "Über Scrum Master" we still had a hard time keeping our workspace clean, refactor broken stuff out, optimise and simplify...

Have an awesome goal

"A true big hairy audacious goal is clear and compelling, serves as unifying focal point of effort, and acts as a clear catalyst for team spirit. It has a clear finish line, so the organization can know when it has achieved the goal; people like to shoot for finish lines." - Collins and Porras, Built to Last: Successful Habits of Visionary Companies

Truth is: we got lucky with the venue. Building a machine like this is awesome and inspiring in itself, and learning how Extreme Scrum can help teams to build machines better, faster, more innovatively and with a whole lot more fun is a fantastic goal in its own right. But parallel to our build space was a true magnet, something that really focused the teams and made them go that extra mile.

The ultimate goal of the machine

Biggest take away

Building things is hard, building under pressure is even harder. Even teams that are aware of the theory will be tempted to throw everything overboard and just start somewhere. Applying Extreme Engineering techniques can truly help you, it's a simple set of rules but it requires an unparalleled level of discipline. Having a Scrum coach can make all the difference between a successful and failed project.

R: Time to/from the weekend

Mark Needham - Sat, 12/13/2014 - 21:38

In my last post I showed some examples using R’s lubridate package and another problem it made really easy to solve was working out how close a particular date time was to the weekend.

I wanted to write a function which would return the previous Sunday or upcoming Saturday depending on which was closer.

lubridate’s floor_date and ceiling_date functions make this quite simple.

e.g. if we want to round the 18th December down to the beginning of the week and up to the beginning of the next week we could do the following:

> library(lubridate)
> floor_date(ymd("2014-12-18"), "week")
[1] "2014-12-14 UTC"
 
> ceiling_date(ymd("2014-12-18"), "week")
[1] "2014-12-21 UTC"

For the date in the future we actually want to grab the Saturday rather than the Sunday so we’ll subtract one day from that:

> ceiling_date(ymd("2014-12-18"), "week") - days(1)
[1] "2014-12-20 UTC"

Now let’s put that together into a function which finds the closest weekend for a given date:

findClosestWeekendDay = function(dateToLookup) {
  before = floor_date(dateToLookup, "week") + hours(23) + minutes(59) + seconds(59)
  after  = ceiling_date(dateToLookup, "week") - days(1)
  if((dateToLookup - before) < (after - dateToLookup)) {
    before  
  } else {
    after  
  }
}
 
> findClosestWeekendDay(ymd_hms("2014-12-13 13:33:29"))
[1] "2014-12-13 UTC"
 
> findClosestWeekendDay(ymd_hms("2014-12-14 18:33:29"))
[1] "2014-12-14 23:59:59 UTC"
 
> findClosestWeekendDay(ymd_hms("2014-12-15 18:33:29"))
[1] "2014-12-14 23:59:59 UTC"
 
> findClosestWeekendDay(ymd_hms("2014-12-17 11:33:29"))
[1] "2014-12-14 23:59:59 UTC"
 
> findClosestWeekendDay(ymd_hms("2014-12-17 13:33:29"))
[1] "2014-12-20 UTC"
 
> findClosestWeekendDay(ymd_hms("2014-12-19 13:33:29"))
[1] "2014-12-20 UTC"

I’ve set the Sunday date at 23:59:59 so that I can use this date in the next step, where we want to calculate how many hours it is from the current date to the nearest weekend.

I ended up with this function:

distanceFromWeekend = function(dateToLookup) {
  before = floor_date(dateToLookup, "week") + hours(23) + minutes(59) + seconds(59)
  after  = ceiling_date(dateToLookup, "week") - days(1)
  timeToBefore = dateToLookup - before
  timeToAfter = after - dateToLookup
 
  if(timeToBefore < 0 || timeToAfter < 0) {
    0  
  } else {
    if(timeToBefore < timeToAfter) {
      timeToBefore / dhours(1)
    } else {
      timeToAfter / dhours(1)
    }
  }
}
 
> distanceFromWeekend(ymd_hms("2014-12-13 13:33:29"))
[1] 0
 
> distanceFromWeekend(ymd_hms("2014-12-14 18:33:29"))
[1] 0
 
> distanceFromWeekend(ymd_hms("2014-12-15 18:33:29"))
[1] 18.55833
 
> distanceFromWeekend(ymd_hms("2014-12-17 11:33:29"))
[1] 59.55833
 
> distanceFromWeekend(ymd_hms("2014-12-17 13:33:29"))
[1] 58.44194
 
> distanceFromWeekend(ymd_hms("2014-12-19 13:33:29"))
[1] 10.44194

While this works it’s quite slow when you run it over a data frame which contains a lot of rows.

There must be a clever R way of doing the same thing (perhaps using matrices) which I haven’t figured out yet so if you know how to speed it up do let me know.

Categories: Programming

R: Numeric representation of date time

Mark Needham - Sat, 12/13/2014 - 20:58

I’ve been playing around with date times in R recently and I wanted to derive a numeric representation for a given value to make it easier to see the correlation between time and another variable.

e.g. December 13th 2014 17:30 should return 17.5 since it’s 17.5 hours since midnight.

Using the standard R libraries we would write the following code:

> december13 = as.POSIXlt("2014-12-13 17:30:00")
> as.numeric(december13 - trunc(december13, "day"), units="hours")
[1] 17.5

That works pretty well, but Antonios recently introduced me to the lubridate package so I thought I’d give that a try as well.

The first nice thing about lubridate is that we can use the date we created earlier and call the floor_date function rather than truncate:

> (december13 - floor_date(december13, "day"))
Time difference of 17.5 hours

That gives us back a difftime…

> class((december13 - floor_date(december13, "day")))
[1] "difftime"

…which we can divide by different units to get the granularity we want:

> diff = (december13 - floor_date(december13, "day"))
> diff / dhours(1)
[1] 17.5
 
> diff / ddays(1)
[1] 0.7291667
 
> diff / dminutes(1)
[1] 1050

Pretty neat!

lubridate also has some nice functions for creating dates/date times. e.g.

> ymd_hms("2014-12-13 17:00:00")
[1] "2014-12-13 17:00:00 UTC"
 
> ymd_hm("2014-12-13 17:00")
[1] "2014-12-13 17:00:00 UTC"
 
> ymd_h("2014-12-13 17")
[1] "2014-12-13 17:00:00 UTC"
 
> ymd("2014-12-13")
[1] "2014-12-13 UTC"

And if you want a different time zone that’s pretty easy too:

> with_tz(ymd("2014-12-13"), "GMT")
[1] "2014-12-13 GMT"
Categories: Programming

Extreme Engineering - Building a Rube Goldberg machine with scrum

Xebia Blog - Fri, 12/12/2014 - 15:16

Is agile usable to do other things than software development? Well we knew that already; yes!
But to create a machine in 1 day with 5 teams and continuously changing members using scrum might be exciting!

See our report below (it's in Dutch for now)

 

Extreme engineering video

 

 

New Code Samples for Lollipop

Android Developers Blog - Thu, 12/11/2014 - 23:43

Posted by Trevor Johns, Developer Programs Engineer

With the launch of Android 5.0 Lollipop, we’ve added more than 20 new code samples demonstrating how to implement some of the great new features of this release. To access the code samples, you can easily import them in Android Studio 1.0 using the new Samples Wizard.

Go to File > Import Sample in order to browse the available samples, which include a description and preview for each. Once you’ve made your selection, select “Next” and a new project will be automatically created for you. Run the project on an emulator or device, and feel free to experiment with the code.

Samples Wizard in Android Studio 1.0
Newly imported sample project in Android Studio

Alternatively, you can browse through them via the Samples browser on the developer site. Each sample has an Overview description, Project page to browse app file structure, and Download link for obtaining a ZIP file of the sample. As a third option, code samples can also be accessed in the SDK Manager by downloading the SDK samples for Android 5.0 (API 21) and importing them as existing projects into your IDE.


Sample demonstrating transition animations
Material Design

When adopting material design, you can refer to our collection of sample code highlighting material elements:

For additional help, please refer to our design checklist, list of key APIs and widgets, and documentation guide.

To view some of these material design elements in action, check out the Google I/O app source code.

Platform

Lollipop brings the most extensive update to the Android platform yet. The Overview screen allows an app to surface multiple tasks as concurrent documents. You can include enhanced notifications with this sample code, which shows you how to use the lockscreen and heads-up notification APIs.

We also introduced a new Camera API to provide developers more advanced image capture and processing capabilities. These samples detail how to use the camera preview and take photos, how to record video, and implement a real-time high-dynamic range camera viewfinder.

Elsewhere, Project Volta encourages developers to make their apps more battery-efficient with new APIs and tools. The JobScheduler sample demonstrates how you can schedule background tasks to be completed later or under specific conditions.

For those interested in the enterprise device administration use case, there are sample apps on setting app restrictions and creating a managed profile.

Android Wear

For Android Wear, we have a speed tracker sample to show how to take advantage of GPS support on wearables. You can browse the rest of the Android Wear samples too, and here are some highlights that demonstrate the unique capabilities of wearables, such as data synchronization, notifications, and supporting round displays:

Android TV

Extend your app for Android TV using the Leanback library described in this training guide and sample.

To try out a game that is specifically optimized for Android TV, download Pie Noon from Google Play. It’s an open-source game developed in-house at Google that supports multiple players using Bluetooth controllers or touch controls on mobile devices.

Android Auto

For the use cases highlighted in the Introduction to Android Auto DevByte, we have two code samples. The Media Browser sample (DevByte) demonstrates how easy it is to make an audio app compatible with Android Auto by using the new Lollipop media APIs, while the Messaging sample (DevByte) demonstrates how to implement notifications that support replies using speech recognition.

Google Play services

Since we’ve discussed sample resources for the Android platform and form factors, we also want to mention that there are existing samples for Google Play services. With Google Play services, your app can take advantage of the latest Google-powered APIs such as Maps, Google Fit, Google Cast, and more. Access samples in the Google Play services SDK or visit the individual pages for each API on the developer site. For game developers, you can reference the Google Play Games services samples for how to add achievements, leaderboards, and multiplayer support to your game.

Check out a sample today to help you with your development!

Join the discussion on

+Android Developers
Categories: Programming

Hello World, meet our new experimental toolchain, Jack and Jill

Android Developers Blog - Thu, 12/11/2014 - 20:22

Posted by Paul Rashidi, Developer Programs Engineer

We've been working on a new toolchain for Android that’s designed to improve build times and simplify development by reducing dependencies on other tools. Today, we’re introducing you to Jack (Java Android Compiler Kit) and Jill (Jack Intermediate Library Linker), the two tools at the core of the new toolchain.

We are making an early, experimental version of Jack and Jill available for testing with non-production versions of your apps. This post describes how the toolchain works, how to configure it, and how to let us know of your feature requests and any bugs you find.

So how does it work?

When the new toolchain is enabled, Jill will translate any libraries you are referencing into a new Jack library file (.jack). This prepares them to be quickly merged with other .jack files. The Android Gradle plugin and Jack collect any .jack library files, along with your source code, and compile them into a set of dex files. During the process, Jack also handles any requested code minification. The output is then assembled into an APK file as normal. We also include support for multiple dex files, if you have enabled that support.

How do I use it?

Jack and Jill are already available in the 21.1.1+ Build Tools for Android Studio. Complementary Gradle support is also currently available in the Android 1.0.0+ Gradle plugin. To get started, all you need to do is make sure you're using these versions of the tooling and then add a single line in your build.gradle file. Perform a build of your application to receive a newly built APK.

android {
    ...
    buildToolsRevision '21.1.1'
    defaultConfig {
      // Enable the experimental Jack build tools.
      useJack = true
    }
    ...
}
If you want to build your app with both toolchains, Product Flavors are a great way to do this. Your build.gradle file might look something like the snippet below.
android {
    ...
    productFlavors {
        dev {
            ...
        }
        experimental {
            useJack = true
        }
        prod {
            ...
        }
    }
    ...
}
How do I configure my build?

We are making the transition to Jack as smooth as possible by supporting minification (shrinking and/or obfuscation), as well as repackaging (i.e. similar to tools like jarjar), while using the same input files as you are used to. Minification is available in the Gradle plugin immediately and repackaging will follow. You should continue to use the "minifyEnabled true" directive to reduce the size of your app among all other optimizations you would normally use. There are more details on our reference page (linked below) regarding the level of support for each type of optimization. We encourage you to provide feedback there if your current configuration isn't supported.

Give us your feedback

We are attempting to make the toolchain as easy to test out as possible and we're looking for your help to fine tune it. Use the reference page to find known issues, file feature requests, and report bugs. Happy building!

Join the discussion on

+Android Developers
Categories: Programming

Azure: Premium Storage, RemoteApp, SQL Database Update, Live Media Streaming, Search and More

ScottGu's Blog - Scott Guthrie - Thu, 12/11/2014 - 20:14

Today we released a number of great enhancements to Microsoft Azure. These include:

  • Premium Storage: New Premium high-performance Storage for Azure Virtual Machine workloads
  • RemoteApp: General Availability of Azure RemoteApp service
  • SQL Database: Enhancements to Azure SQL Databases
  • Media Services: General Availability of Live Channels for Media Streaming
  • Azure Search: Enhanced management experience, multi-language support and more
  • DocumentDB: Support for Bulk Add Documents and Query Syntax Highlighting
  • Site Recovery: General Availability of disaster recovery to Azure for branch offices and SMB customers
  • Azure Active Directory: General Availability of Azure Active Directory application proxy and password write back support

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

Premium Storage: High-performance Storage for Virtual Machines

I’m excited to announce the public preview of our new Azure Premium Storage offering. With the introduction of the new Premium Storage option, Azure now offers two types of durable storage: Premium Storage and Standard Storage. Premium Storage stores data durably on Solid State Drives (SSDs) and provides high performance, low latency, disk storage with consistent performance delivery guarantees.


Premium Storage is ideal for I/O-sensitive workloads - and is great for database workloads hosted within Virtual Machines.  You can optionally attach several premium storage disks to a single VM, and support up to 32 TB of disk storage per Virtual Machine and drive more than 50,000 IOPS per VM at less than 1 millisecond latency for read operations. This provides a wickedly fast storage option that enables you to run even more workloads in the cloud.

Using Premium Storage, Azure now offers the ability to "lift-and-shift" more demanding enterprise applications to the cloud - including SQL Server, Dynamics AX, Dynamics CRM, Exchange Server, MySQL, Oracle Database, IBM DB2, and SAP Business Suite solutions.

Premium Storage is now available in public preview starting today. To sign up to use the Azure Premium Storage preview, visit the Azure Preview page.

Disk Sizes and Performance

Premium Storage disks provide up to 5,000 IOPS and 200 MB/sec throughput depending on the disk size. When you create a new premium storage disk you get the option to select the disk size and performance characteristics you want based on your application performance and storage capacity needs.  For the public preview, we are offering three Premium Storage disk configurations:

Disk Type            P10         P20         P30
Disk Size            128 GB      512 GB      1 TB
IOPS per Disk        500         2,300       5,000
Throughput per Disk  100 MB/sec  150 MB/sec  200 MB/sec

You can maximize the performance of your VMs by attaching multiple Premium Storage disks to them (up to the network bandwidth limit available to the VM for disk traffic). To learn the disk bandwidth available for each VM size, see Virtual Machine and Cloud Service Sizes for Azure.

Durability

Durability of data is of utmost importance for any persistent storage option. Azure customers have critical applications that depend on the persistence of their data and high tolerance against failures. Premium Storage keeps three replicas of data within the same region. In addition, you can also optionally create snapshots of your disks and copy those snapshots to a Standard GRS storage account - which enables you to maintain a geo-redundant snapshot of your data that is stored at least 400 miles away from your primary Azure region.

Learn More

You can learn more about Premium Storage disks here.  To sign up to use Premium Storage, go to the Azure Preview page, and sign up for the Microsoft Azure Premium Storage service using your subscription.

RemoteApp: General Availability of Azure RemoteApp

I’m excited to announce the general availability of Azure RemoteApp. Using Azure RemoteApp, you can deploy Windows desktop applications in the cloud, and provide your users with an intuitive, high-fidelity, WAN-ready remote application experience.  Users can use the cloud-hosted Windows applications you enable on their phones, tablets, or PCs - including Windows, Mac, iOS and Android based devices.  We are delivering RemoteApp at a super competitive price - you can host your users’ applications in the cloud for just $10/user/month.  With today’s release, Azure RemoteApp is backed by an SLA and supported by Microsoft Support, offering the full scalability and security of the Azure cloud.

Getting Started

Setting up RemoteApp is easy. In the Azure Management Portal, select NEW -> App Services -> RemoteApp -> Quick Create. Pick a name, region, select the scale configuration plan you want to use, pick one of the standard template images, and click OK. When you do this for the first time, your 30-day free trial will also start. This is a fully featured trial, available to all Azure customers.


A RemoteApp instance is an elastic, automatically scaled, collection of Windows Server VMs that are running the Remote Desktop Session Host role and host the applications. The VMs are all created based on the template image you provide. You can provide your own template image containing your custom apps, or use one of the standard template images we provide. One of these is for Office 365 ProPlus, which you can use if you have a subscription that contains the Office 365 ProPlus service:


Once enabled, your users can quickly and easily connect to the applications you host in Azure.  They can use Windows, Mac, iOS and Android devices to connect to the RemoteApp service - enabling you to use Azure to run your Windows desktop applications anywhere in the world, on any device.

Enabling Hybrid Applications

Many business-critical Windows applications rely on on-premises infrastructure such as identity and machine management, and require access to on-premises databases and resources. Azure RemoteApp provides a hybrid deployment model that supports all of these scenarios.

  • Hybrid Management: In a hybrid RemoteApp collection, the VMs which host your applications are joined to your AD domain. Therefore, you can use on-premises management tools like Group Policy, System Center, and many other enterprise management tools that rely on AD membership.

  • Federated Identity: You can use Azure Active Directory integrated with your on-premises AD, and your users can log on with their familiar corporate identities. When a Windows application starts, it runs in a fully domain-joined session, with the usual integrated authentication capabilities of a Windows domain.
  • Hybrid Networking: Windows applications in a hybrid RemoteApp collection can seamlessly access on-premises data and resources. This capability is built on Azure Virtual Networking with site-to-site VPN, providing cloud-to-premises virtual network connectivity. In the future, RemoteApp collections will support the full range of Azure Networking capabilities, including ExpressRoute.

Performance and Scale Configurations

With today's general availability release, we are offering two scale configurations: BASIC and STANDARD.

  • BASIC is intended for lighter, task-worker use cases, such as a single productivity application or a data-entry frontend to a line of business system.
  • STANDARD is intended for typical productivity use cases such as using Outlook, Word, Excel and other knowledge worker line of business and productivity applications.

You can select the scale configuration for your RemoteApp collection while creating it. If you want to change it later, you can do so using the SCALE tab. Your applications and settings and your user data remain intact through this change.

Pricing

We are making the RemoteApp service available at a very attractive, affordable price.  You can host a line of business Windows application for as little as $10/user per month using the BASIC configuration.

At the STANDARD level, you can host your users’ entire productivity workspace for just $15/user per month.

Learn More

A variety of resources are available on the RemoteApp overview page. You can also download the client for your device and take a test drive. Finally, the RDV Team blog discusses today’s new features in more detail.

SQL Databases: Now with SQL 2014 Features and Compatibility

Today we are making available a preview of the next-generation release of our Azure SQL Database service.  We announced some of the preview's new features earlier in November.  Today's release delivers near-complete SQL Server 2014 engine compatibility and even better performance.

Our internal benchmark tests (using over 600 million rows of data) show query performance improvements of around 5x with today's preview relative to our existing Premium Tier SQL Database offering, and up to 100x performance improvements when using the new In-memory columnstore technology now supported with today's preview release.

Lots of great new features and improvements

Key highlights of today's preview include:

  • Better management of large databases. We now support heavier database workload management with parallel queries, table partitioning, online indexing, worry-free large index rebuilds with the previous 2GB size limit removed, and more alter database commands.

  • Support for more programmability capabilities: You can now build even more robust applications with CLR, T-SQL Windows functions, XML index, and change tracking support.

  • Up to 100x performance improvements with support for In-memory columnstore queries for data mart and analytic workloads.

  • Improved monitoring and troubleshooting: Extended Events (XEvents) and visibility into over 100 new table views via an expanded set of Database Management Views (DMVs).

  • New S3 performance level: Today's preview introduces a new pricing option for SQL Databases. The new "S3" performance tier delivers 100 DTU of performance (twice the DTU level of the existing S2 tier) and all of the features available in the Standard tier. It enables an even more cost effective way to run applications with higher performance needs.

Learn More and Get Started

You can read more about the enhancements in today's preview on the preview getting started page.  To use today's preview, you can navigate to the SETTINGS part on the SQL Database blade in the Azure Preview Portal and upgrade to use the preview.


Try out the new features and give us your feedback!

Media Services: General Availability of Live Media Streaming

I’m very excited to announce the General Availability of Live Channels Media Streaming support.  Live Channels with Azure Media Services is the live services backbone that broadcasters such as NBC Sports have used to deliver live online media streamed events such as the English Premier League, NHL hockey, and Sunday Night Football.  A dozen international broadcasters also used it to seamlessly deliver live media streaming coverage of the 2014 Sochi Winter Olympics and the 2014 FIFA World Cup.

You can now use Live Channels to stream events of your own - and scale to literally millions of users watching them.  Today's general availability release is backed by an enterprise-grade Service-Level Agreement (SLA) for all customers. 

Live Streaming

Learn More

For more information on functionality and pricing, visit the Getting Started with Live Streaming blog post, the Media Services webpage or Media Services Pricing webpage, or the Live Streaming MSDN section.

Search: Portal Management, Multi-language support

I am happy to announce a number of highly requested features available today in Azure Search.  Azure Search provides developers with all of the features needed to build out a search experience for web and mobile applications without having to deal with the typical complexities that come with managing, tuning and scaling a real-world search service.

Azure Portal Enhancements

Helping developers setup and manage their services quickly and easily is a key goal of the Azure Management Portal. Today's release adds several new capabilities to the Azure Search support in the portal that makes it even easier to get started with Azure Search and reduce the need to write code.

For example, you can now easily create a new index. In the portal, you can name the search index, set all of the fields, and assign the properties for each of them - all without writing any code:


Once you create the index, you can also now drill into usage details like document counts and storage size. You can see all of the fields associated with this index as shown below:


Index tuning is another enhancement now supported in the portal user interface. Boosting relevant items not only provides a better search experience, it also helps you achieve business objectives. For example, if you are searching a product index, you might want to boost documents where the result was found in the product name, as opposed to another document where the result was found in the product description. Or you may wish to use a scoring function that allows you to boost items that have high star ratings or that provide higher margins.
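To make the boosting idea concrete, here is a toy Python illustration of field-weighted scoring. The field names and weights are made up for this example; in Azure Search the actual tuning lives in the service's scoring profiles, not in client code like this:

```python
# Hypothetical field weights: a hit in the product name counts
# more than a hit in the description.
FIELD_WEIGHTS = {"name": 3.0, "description": 1.0}

def score(document, term):
    """Sum the weights of every field that contains the search term."""
    term = term.lower()
    return sum(
        weight
        for field, weight in FIELD_WEIGHTS.items()
        if term in document.get(field, "").lower()
    )

docs = [
    {"name": "Espresso machine", "description": "Makes coffee fast"},
    {"name": "Travel mug", "description": "Keeps espresso warm"},
]

# The document matching "espresso" in its name ranks first.
ranked = sorted(docs, key=lambda d: score(d, "espresso"), reverse=True)
```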

The task of tuning an index was previously only available through the API. Starting today, using the Azure Preview portal you can create or alter scoring profiles, instantly tuning the results of your search queries without having to write a line of code:

Multilanguage Support across 27 Languages

With today’s release, Azure Search now has support for 27 languages. This allows Azure Search to accommodate the unique characteristics of a given language, enabling word-breaking and text normalization to work as expected for each language. Part of this enhancement includes support for stemming in the relevant languages, reducing words to their word stems. For example, you can search for the word “runs” and find documents that say “run” or “running”.
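To illustrate what stemming buys you, here is a deliberately naive Python sketch. This toy suffix-stripper only handles this one example; Azure Search uses proper per-language analyzers, not anything like this:

```python
def toy_stem(word):
    """Very naive English stemmer: strip an -s/-ing suffix, then
    collapse a trailing doubled consonant ("running" -> "runn" -> "run").
    Only meant to illustrate matching on word stems."""
    word = word.lower()
    for suffix in ("ing", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            word = word[: -len(suffix)]
            break
    if len(word) > 2 and word[-1] == word[-2]:
        word = word[:-1]
    return word

# A query for "runs" now matches documents containing "run" or "running".
documents = ["run", "running", "runner"]
matches = [d for d in documents if toy_stem(d) == toy_stem("runs")]
```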

When creating an index you can choose to include content from multiple languages, allowing you to search and return results based on the chosen language of your user. For more information, you can visit the Language Support page. Over time, we will continue to enhance multi-language support to include additional languages.

API features

Last but not least, we’ve introduced a new Azure Search Management REST API that allows you to perform common administrative tasks, such as creating new services, and scaling services to allow for additional storage or higher query-per-second rates. You can see a sample of how to use this Management API at CodePlex.

Document DB: Bulk Add Documents and Syntax Highlighting

DocumentDB is a NoSQL document database service designed for scalable and high performance modern applications.  DocumentDB is delivered as a fully managed service (meaning you don’t have to manage any infrastructure or VMs yourself) with an enterprise grade SLA.

We now support some nice new capabilities for Document DB in the Azure Preview Portal:

  • Add Documents: Upload existing JSON documents via the Document Explorer
  • Query syntax highlighting: Syntax highlighting for DocumentDB SQL queries in the Query Explorer

These features make it even easier to get started and explore DocumentDB.

Add Documents Support within the Azure Portal

The DocumentDB Document Explorer within the Azure Preview Portal now supports the uploading of existing JSON documents - which makes it easy to import and start using existing data stored in existing JSON files. Simply open Document Explorer and click the Add Document command:


In the new blade that opens, click the browse button to open a file explorer and select 1 or more JSON documents to upload. Note that Document Explorer currently supports up to 100 JSON document files in a single upload operation.


Once you’re satisfied with your selection, click the upload button. The documents will automatically be added to the Document Explorer grid and aggregate results will be displayed as the upload operation is in progress.
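Since a single upload is capped at 100 documents, a client script importing a larger data set would need to batch its uploads. A minimal sketch of the chunking logic (the `upload_batch` callable here is a hypothetical stand-in for whatever upload mechanism you use):

```python
def chunked(items, size=100):
    """Yield successive batches of at most `size` items."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

def upload_all(documents, upload_batch):
    """Upload documents in batches of 100, mirroring the portal's limit.
    `upload_batch` is a caller-supplied function - hypothetical here."""
    for batch in chunked(documents, 100):
        upload_batch(batch)

# Example: 250 documents go up as batches of 100, 100 and 50.
sizes = []
upload_all([{"id": i} for i in range(250)], lambda b: sizes.append(len(b)))
```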


Once the operation has completed, you can select up to another 100 documents to upload without having to close the Add Document blade.  This makes it easy to import data into your DocumentDB databases.

Query Explorer – Syntax Highlighting

We’ve also enabled basic keyword and value highlighting within Query Explorer.


This makes it even easier to experiment and test queries against your NoSQL databases.

Please send us your feedback and suggestions on the Microsoft Azure DocumentDB feedback forum. If you haven’t tried DocumentDB yet, you can learn more about how to get started here.

Disaster Recovery: GA of DR for Branch Offices & SMB Customers

I’m excited to announce the General Availability of the Disaster Recovery (DR) to Azure for Branch offices and SMB feature in our Azure Site Recovery (ASR) service.  Today's new support enables consistent replication, protection, and recovery of Virtual Machines directly in Microsoft Azure.  With this new support we have extended the Azure Site Recovery service to become a simple, reliable & cost effective DR Solution for enabling Virtual Machine replication and recovery between Windows Server 2012 R2 and Microsoft Azure without having to deploy a System Center Virtual Machine Manager on your primary site.

These features build on top of the Hyper-V Replica technology available in Windows Server 2012 R2 and Microsoft Azure to provide remote health monitoring, no-impact recovery plan testing and single-click orchestrated recovery – all of this backed by an enterprise-grade SLA.

Verify DR Plans with Confidence

The Test Failover feature within Azure Site Recovery allows you to test your disaster recovery plans without impacting your production workload which ensures that you can perform periodic DR drills to meet your compliance objectives. You can connect to the virtual machine running in Azure via RDP after enabling the appropriate endpoints for the virtual machine running in Azure.

A Planned Failover will do a shutdown of your on-premises machine, transfer all the last changes inside the virtual machine to Azure & then bring up an instance of the VM in Azure without any data loss. An Unplanned Failover is usually triggered when your on-premises site has been hit by an unexpected disaster.

If you want to fail over a multi-virtual machine application, you can do so using the One-Click Orchestration using Recovery Plans feature available in Azure Site Recovery. Recovery plans make failover and failback from Azure easy and ensure that you meet the Recovery Time Objective (RTO) goals of your organization.

Check out additional pricing or product information about Azure Site Recovery, and sign up for a free Azure trial and start using it today.

Active Directory: GA of Application Proxy and Password Writeback support

Today's Azure update includes some great updates to Azure Active Directory.

Azure Active Directory Application Proxy

The Azure Active Directory Application Proxy allows you to make your on-premises web applications securely accessible to users who want to use them from the cloud - and enables you to authenticate access to them using Azure AD.

You can do this without changing your applications and without having to change your DMZ configuration. Just install a lightweight connector anywhere on your network and configure access to the application in your Azure Active Directory, and you can make your SharePoint, Outlook Web Access (or any other Web application that relies on Kerberos) available to users outside your corporate network.


With today's release we added support for Kerberos Constrained Delegation. Now, once a user has authenticated to Azure Active Directory, the Azure Active Directory Application Proxy can automatically authenticate users to your on-premises application.

Password Writeback for Azure Active Directory Premium Customers

With the new Password Writeback support in Azure AD Sync, you can now configure your Active Directory system so that any time a user or administrator changes a password in Azure AD, the new password is also written back to your on-premises Active Directory. So, for example, when a user forgets the password for your on-premises AD, they can reset it using the Azure AD password reset service we provide in the cloud, and then use their new password to sign on to your on-premises AD.  This makes it easier for organizations to enable a variety of self-service IT and password reset features for their employees and partners.

Preview of security questions for password reset

With today's release we’re also introducing preview support that enables you to configure security questions for password reset scenarios. This enables users to register their answers to secret questions, and then use those answers to prove who they are when they go to reset a forgotten password.

Add your own password SSO for SaaS applications

With today's release we are introducing the preview of functionality that lets you configure password-based single sign-on for any web application that has an HTML sign-in page, even for applications that are not in the Azure AD Application Gallery. You can also add any links to your users’ Azure AD Access Panel, such as deep links to specific SharePoint pages, or to web apps that use Active Directory Federation Services.

More Ways to Get AD Premium

We now support the ability to purchase Azure Active Directory Premium online at the Office 365 commerce catalogue, where you can purchase Azure AD Premium licenses for as many users as you want.  You can then easily manage your Azure Active Directory by navigating to http://aka.ms/accessAAD or through the Office administration portal.

To learn more about these new capabilities and how you can start using them, read Alex Simons’ post on the Active Directory Team Blog.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

R: data.table/dplyr/lubridate – Error in wday(date, label = TRUE, abbr = FALSE) : unused arguments (label = TRUE, abbr = FALSE)

Mark Needham - Thu, 12/11/2014 - 20:03

I spent a couple of hours playing around with data.table this evening and tried changing some code written using a data frame to use a data table instead.

I started off by building a data frame which contains all the weekends between 2010 and 2015…

> library(lubridate)
 
> library(dplyr)
 
> dates = data.frame(date = seq( dmy("01-01-2010"), to=dmy("01-01-2015"), by="day" ))
 
> dates = dates %>% filter(wday(date, label = TRUE, abbr = FALSE) %in% c("Saturday", "Sunday"))

…which works fine:

> dates %>% head()
         date
1: 2010-01-02
2: 2010-01-03
3: 2010-01-09
4: 2010-01-10
5: 2010-01-16
6: 2010-01-17

I then tried to change the code to use a data table instead which led to the following error:

> library(data.table)
 
> dates = data.table(date = seq( dmy("01-01-2010"), to=dmy("01-01-2015"), by="day" ))
 
> dates = dates %>% filter(wday(date, label = TRUE, abbr = FALSE) %in% c("Saturday", "Sunday"))
Error in wday(date, label = TRUE, abbr = FALSE) : 
  unused arguments (label = TRUE, abbr = FALSE)

I wasn’t sure what was going on so I went back to the data frame version to check if that still worked…

> dates = data.frame(date = seq( dmy("01-01-2010"), to=dmy("01-01-2015"), by="day" ))
 
> dates = dates %>% filter(wday(date, label = TRUE, abbr = FALSE) %in% c("Saturday", "Sunday"))
Error in wday(c(1262304000, 1262390400, 1262476800, 1262563200, 1262649600,  : 
  unused arguments (label = TRUE, abbr = FALSE)

…except it now didn’t work either! I decided to check what wday was referring to…

Help on topic ‘wday’ was found in the following packages:

Integer based date class
(in package data.table in library /Library/Frameworks/R.framework/Versions/3.1/Resources/library)
Get/set days component of a date-time.
(in package lubridate in library /Library/Frameworks/R.framework/Versions/3.1/Resources/library)

…and realised that data.table has its own wday function – I’d been caught out by R’s global scoping of all the things!

We can probably work around this by changing the order in which we load the various libraries, but for now I’m just prefixing the call to wday and all is well:

dates = dates %>% filter(lubridate::wday(date, label = TRUE, abbr = FALSE) %in% c("Saturday", "Sunday"))
Categories: Programming

Building a scalable geofencing API on Google’s App Engine

Google Code Blog - Thu, 12/11/2014 - 19:04
Thorsten Schaeff has been studying Computer Science and Media at the Media University in Stuttgart and the Institute of Art, Design and Technology in Dublin. For the past six months he’s been interning with the Maps for Work Team in London, researching best practice architectures for working with big geo data in the cloud.

Google’s Cloud Platform offers a great set of tools for creating easily scalable applications in the cloud. In this post, I’ll explore some of the special challenges of working with geospatial data in a cloud environment, and how Google’s Cloud can help. I’ve found that there aren’t many options to do this, especially when dealing with complicated operations like geofencing with multiple complex polygons. You can find the complete code for my approach on GitHub.

Geofencing is the procedure of identifying whether a location lies within a certain fence, e.g. neighborhood boundaries, school attendance zones or even the outline of a shop in a mall. It’s particularly useful in mobile applications that need to apply this extra context to someone’s exact location. This process isn’t actually as straightforward as you’d hope: depending on the complexity of your fences, it can involve some intense calculations, and if your app gets a lot of use, you need to make sure this doesn’t impact performance.

To simplify this problem, this blog post outlines the process of creating a scalable but affordable geofencing API on Google’s App Engine.

And the best part? It’s completely free to start playing around.
Geofencing a route through NYC against 342 different polygons that resulted from converting the NYC neighbourhood tabulation areas into single-part polygons.

Getting started

To kick things off you can work through the Java Backend API Tutorial. This uses Apache Maven to manage and build the project.

If you want to dive right in you can download the finished geofencing API from my GitHub account.

The Architecture

The requirements are:

  • Storing complex fences (represented as polygons) and some metadata like a name and a description. For this I use Cloud Datastore, a scalable, fully managed, schemaless database for storing non-relational data. You can even use this to store and serve GeoJSON to your frontend.
  • Indexing these polygons for fast querying in a spatial index. I use an STR-Tree (part of JTS) which I store as a Java Object in memcache for fast access.
  • Serving results to multiple platforms through HTTP requests. To achieve this I use Google Cloud Endpoints, a set of tools and libraries that allow you to generate APIs and client libraries.

That’s all you need to get started - so let’s start cooking!

Creating the Project

To set up the project simply use Apache Maven and follow the instructions here. This creates the correct folder structure, sets up the routing in the web.xml file for use with Google’s API Explorer and creates a Java file with a sample endpoint.

Adding additional Libraries

I’m using the Java Topology Suite to find out which polygon a certain latitude-longitude pair is in. JTS is an open source library that offers a nice set of geometric functions.

To include this library into your build path you simply add the JTS Maven dependency to the pom.xml file in your project’s root directory.

In addition I’m using the GSON library to handle JSON within the Java backend. You can basically use any JSON library you want to. If you want to use GSON import this dependency.

Writing your Endpoints

Adding Fences to Cloud Datastore

For the sake of convenience you’re only storing the fences’ vertices and some metadata. To send and receive data through the endpoints you need an object model, which you create in a little Java Bean called MyFence.java.
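A minimal sketch of such a bean might look like the following; the field names here are my own assumptions for illustration, not necessarily those used in the GitHub project:

```java
// Illustrative object model for a fence: some metadata plus the polygon's
// vertices. Field names are assumptions, not the actual ones from the project.
public class MyFence {
    private long id;
    private String name;
    private String description;
    private double[] lats;  // latitudes of the polygon's vertices
    private double[] lngs;  // longitudes, in the same order as lats

    public MyFence() {}  // no-arg constructor, needed for (de)serialization

    public long getId() { return id; }
    public void setId(long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getDescription() { return description; }
    public void setDescription(String description) { this.description = description; }
    public double[] getLats() { return lats; }
    public void setLats(double[] lats) { this.lats = lats; }
    public double[] getLngs() { return lngs; }
    public void setLngs(double[] lngs) { this.lngs = lngs; }
}
```

Keeping the bean this plain lets both Cloud Endpoints and the Datastore persistence code serialize it without extra configuration.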


Now you need to create an endpoint called add. This endpoint expects a string for the group name, a boolean indicating whether to rebuild the spatial index, and a JSON object representing the fence’s object model. From this App Engine creates a new fence and writes it to Cloud Datastore.

Retrieving a List of our Fences

For some use cases it makes sense to fetch all the fences at once in the beginning, so you want an endpoint that lists all fences from a certain group.

Cloud Datastore uses internal indexes to speed up queries. If you deploy the API directly to App Engine you’re probably going to get an error message, saying that the Datastore query you’re using needs an index. The App Engine Development server can auto-generate the indexes, therefore I’d recommend testing all your endpoints on the development server before pushing it to App Engine.

Getting a Fence’s Metadata by its ID

When querying the fences you only return the ids of the fences in the result, so you need an endpoint to retrieve the metadata that corresponds to a fence’s id.

Building the Spatial Index

To speed up the geofencing queries you put the fences in an STR-tree. The JTS library does most of the heavy lifting here, so you only need to fetch all your fences from the Datastore, create a polygon object for each one and add the polygon’s bounding box to the index.

You then build the index and write it to memcache for fast read access.

Testing a point

You want an endpoint to test any latitude-longitude pair against all your fences. This is the actual geofencing part: the endpoint should tell you whether the point falls into any of your fences and, if so, return the ids of the fences the point is in.

For this you first need to retrieve the index from memcache. Then query the index with the bounding box of the point which returns a list of polygons. Since the index only tests against the bounding boxes of the polygons, you need to iterate through the list and test if the point actually lies within the polygon.
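JTS provides these geometric predicates for real use, but the underlying two-stage idea (a cheap bounding-box filter first, then an exact containment test on the survivors) can be sketched in plain Java. This ray-casting version is purely illustrative and is not the JTS implementation:

```java
// Illustrative two-stage point-in-fence test: a cheap bounding-box check
// followed by an exact ray-casting containment test. An STR-tree gives you
// the first stage far more efficiently over many fences.
public class PointInPolygon {

    // Stage 1: reject polygons whose bounding box doesn't contain the point.
    static boolean inBoundingBox(double[] xs, double[] ys, double px, double py) {
        double minX = xs[0], maxX = xs[0], minY = ys[0], maxY = ys[0];
        for (int i = 1; i < xs.length; i++) {
            minX = Math.min(minX, xs[i]); maxX = Math.max(maxX, xs[i]);
            minY = Math.min(minY, ys[i]); maxY = Math.max(maxY, ys[i]);
        }
        return px >= minX && px <= maxX && py >= minY && py <= maxY;
    }

    // Stage 2: standard ray-casting test, run only on bounding-box survivors.
    static boolean contains(double[] xs, double[] ys, double px, double py) {
        boolean inside = false;
        for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
            if ((ys[i] > py) != (ys[j] > py)
                    && px < (xs[j] - xs[i]) * (py - ys[i]) / (ys[j] - ys[i]) + xs[i]) {
                inside = !inside;
            }
        }
        return inside;
    }

    static boolean inFence(double[] xs, double[] ys, double px, double py) {
        return inBoundingBox(xs, ys, px, py) && contains(xs, ys, px, py);
    }
}
```

The bounding-box stage is what makes the index pay off: the exact test touches every vertex of a polygon, while the box check is constant work per candidate.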


Querying for Polylines or Polygons

The process of testing for a point can easily be adapted to test the fences against polylines and polygons. In the case of polylines you query the index with the polyline’s bounding box and then test if the polyline actually crosses the returned fences.


When testing for a polygon you want to get back all fences that are either completely or partly contained in the polygon, so you test whether the returned fences are within the polygon or are not disjoint from it. For some use cases you only want to return fences that are completely contained within the polygon; in that case, delete the not-disjoint test in the if clause.
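As with points, the index query for polylines and polygons boils down to an overlap test between axis-aligned bounding boxes; only the candidates that pass it get the exact, more expensive JTS crosses/within/disjoint checks. In plain Java the overlap test is just interval overlap on both axes (again illustrative, not the JTS code):

```java
// Two axis-aligned bounding boxes overlap iff their intervals overlap on
// both the x and the y axis. This is the cheap pre-filter the index runs
// before any exact geometry test.
public class BBox {
    static boolean overlaps(double minX1, double minY1, double maxX1, double maxY1,
                            double minX2, double minY2, double maxX2, double maxY2) {
        return minX1 <= maxX2 && minX2 <= maxX1
            && minY1 <= maxY2 && minY2 <= maxY1;
    }
}
```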

Testing & Deploying to App Engine

To test or deploy your API simply follow the steps in the ‘Using Apache Maven’ tutorial.

Scalability & Pricing

The beauty of this is that, since it’s running on App Engine, Google’s platform-as-a-service offering, it scales automatically and you only pay for what you use.

If you want to ensure the best performance and great scalability, you should consider switching from the free, shared memcache to a dedicated memcache for your application. This guarantees enough capacity for your spatial index, even with a large number of complex fences.

That’s it - that’s all you need to create a fast and scalable geofencing API.
Preview: Processing Big Spatial Data in the Cloud with Dataflow
In my next post I will show you how I geofenced more than 340 million NYC Taxi locations in the cloud using Google’s new Big Data tool called Cloud Dataflow.
Categories: Programming

Win Big, Lose Small

Making the Complex Simple - John Sonmez - Thu, 12/11/2014 - 16:00

In this video I talk about the idea of limiting the damage on your bad days and making your good days count the most. If you learn to win big and lose small, you’ll always make forward progress. This is especially helpful if you are trying to lose weight or follow a diet. You’ll eventually mess up, but ... Read More

The post Win Big, Lose Small appeared first on Simple Programmer.

Categories: Programming

Watch Face API Now Available for Android Wear

Android Developers Blog - Thu, 12/11/2014 - 07:08

Posted by Wayne Piekarski, Developer Advocate

We’re pleased to announce that the official Android Wear Watch Face API is now available for developers. Watch faces give users even more ways to express their personal style, while creating an opportunity for developers to customize the most prominent UI feature of the watches. Watch faces have been the most requested feature from users and developers alike, and we can’t wait to see what you build for them.

An Introduction to Watch Faces for Android Wear by Timothy Jordan

Design and development

To get started, first learn about Designing Watch Faces, and then check out the Creating Watch Faces training class. The WatchFace Sample available online and in the Android Studio samples manager also provides a number of different examples to help you jump right in. For a quick overview, you can also watch the Watch Faces for Android Wear DevByte video above.

Watch faces are services that run from your wearable app, so you can provide one or multiple watch faces with a single app install. You can also choose to have configuration activities on the phone or watch, for example to let a user change between 12 and 24-hour time, or to change the watch face’s background. You can use OpenGL to provide smooth graphics, and a background service to pull in useful data like weather and calendar events. Watch faces can be analog or digital, or display the time in some new way that hasn’t been invented yet: it’s up to you.

Updates to existing devices

Over the next week, the latest release of Android Wear, based on Android 5.0 and implementing API 21, will roll out to users. All Android Wear devices will be updated to Android 5.0 via an over-the-air (OTA) update. The update allows users to manage and configure watch faces in the Android Wear app on their phone, and install watch faces from Google Play. Any handheld device running Android 4.3 or later will continue to work with all Android Wear devices.

Upgrade your watch faces

Developers are incredibly resourceful and we’re impressed with the watch faces you were able to create without any documentation at all. If you’ve already built a watch face for Android Wear using an unofficial approach, you should migrate your apps to the official API. The official API ensures a consistent user experience across the platform, while giving you additional information and controls, such as letting you know when the watch enters ambient mode, allowing you to adjust the position of system UI elements, and more. Using the new API is also necessary for your app to be featured in the Watch Faces collection on Google Play.

Deployment of watch faces to Google Play

We recommend you update your apps on Google Play as soon as the Android Wear 5.0 API 21 OTA rollout is complete, which we’ll announce on the Android Wear Developers Google+ community. It’s important to wait until the OTA rollout is complete because a Watch Face requiring API 21 will not be visible on a watch running API 20. Once your user gets the OTA, then the watch face will become visible. If you want to immediately launch your updates during the OTA rollout, make sure you set minSdkVersion to 20 in your wearable app, otherwise the app will fail to install for pre-OTA users. Once the rollout is complete, please transition your existing watch faces to the new API by January 31, 2015, at which point we plan to remove support for watch faces that don't use the official API.

Android Wear apps on Google Play

Starting today, you can submit any of your apps for designation as Android Wear apps on Google Play by following the Distributing to Android Wear guidelines. If your apps follow the criteria in the Wear App Quality checklist and are accepted as Wear apps on Play, it will be easier for Android Wear users to discover them. To opt-in for Android Wear review, visit the Pricing & Distribution section of the Google Play Developer Console.

In the few short months since we’ve launched Android Wear, developers have already written thousands of apps, taking advantage of custom notifications, voice actions, and fully native Android capabilities. Thanks to you, users have infinite ways to personalize their watches, choosing from six devices, a range of watch bands, and thousands of apps. With support for custom watch faces launching today, users will have even more choices in the future. These choices are at the heart of a rich Android Wear ecosystem and as we continue to open up core features of the platform to developers, we can’t wait to see what you build next.

Join the discussion on

+Android Developers
Categories: Programming

Google Cardboard: Seriously Fun

Google Code Blog - Wed, 12/10/2014 - 19:42
As simple as they are, cardboard boxes are pretty great. Maybe you transformed one into a fort or castle growing up. Or maybe your kids took last week’s package delivery and turned the box into a puppet theater. The best part about cardboard is that it can become anything—all you need is your imagination.

It’s this same spirit that inspired our team to turn a smartphone, and some cardboard, into a virtual reality (VR) viewer earlier this year. Suddenly, exploring the Palace of Versailles was as easy as opening an app. And the response was kind of delightful.

We’ve been working to improve Google Cardboard ever since. And today—with more than half a million Cardboard viewers in people’s hands—we've got a fresh round of updates for users, developers, and makers.
For users: more apps to enjoy, and more places to buy

There are now dozens of Cardboard-compatible apps on Google Play, and starting today we’re dedicating a new collection page to some of our favorites. These VR experiences range from test drives to live concerts to fully-immersive games, and they all have something amazing to offer. So give ‘em a try today, and download the new Cardboard app to watch the collection grow over time.
Example apps for Cardboard (clockwise, from top left):
Paul McCartney concert, Volvo Reality test drive, Proton Pulse 3D game

If you don’t have a Cardboard viewer yet, you can now pick one up from DODOcase, I Am Cardboard, Knoxlabs, and Unofficial Cardboard. And of course you can always build your own (with new specs below!).

For developers: SDKs for Android and Unity

If you’ve ever tried creating a VR application, then you’ve probably wrestled with issues like lens distortion correction, head tracking, and side-by-side rendering. It’s important to get these things right, but they can suck up all your time—time you’d rather spend on gameplay or graphics.

We want to give you that time back, so today we’re introducing Cardboard SDKs for Android and Unity. The SDKs simplify common VR development tasks so you can focus on your awesome, immersive app. And with both Android and Unity support, you can use the tools you already know and love. Download the SDKs today, and check out apps like Caaaaardboard! and Tilt Brush Gallery to see what’s already possible.

For makers: tool-specific specs, and custom viewer calibration

To help bring VR experiences to everyone, we open sourced a Cardboard viewer specification earlier this year. Since then we’ve seen all sorts of viewers from all sorts of makers, and today we’re investing in this community even further.

For starters, we’re publishing new building specs with specific cutting tools in mind. So whether you’re laser- or die-cutting your Cardboard viewers in high quantities, or carving single units with a blade, we’ve got you covered.

Once you’ve got your custom viewer, we also want to help you tailor the viewing experience to its unique optical layout. So early next year we’ll be adding a viewer calibration tool to the Cardboard SDK. You’ll be able to define your viewer’s base and focal length, for example, then have every Cardboard app adjust accordingly.

For the future: watch this space, and we’re hiring

The growth of mobile, and the acceleration of open platforms like Android make it an especially exciting time for VR. There are more devices, and more enthusiastic developers than ever before, and we can’t wait to see what’s next! We’re also working on a few projects ourselves, so if you’re passionate about VR, you should know we’re hiring.

Here’s to the cardboard box, and all the awesome it brings.

by Andrew Nartker, Product Manager, Google Cardboard
Categories: Programming

Finite State Machine Compiler

Phil Trelford's Array - Wed, 12/10/2014 - 18:22

Yesterday I noticed a tweet recommending an “Uncle” Bob Martin video that is intended to “demystify compilers”. I haven’t seen the video (Episode 29, SMC Parser) as it’s behind a pay wall, but I did find a link on the page to a github repository with a hand rolled parser written in vanilla Java, plus a transformation step that compiles it out to either Java or C code.


The parser implementation is quite involved, with a number of files covering both lexing and parsing.

I found the Java code clean but a little hard to follow, so I tried to reverse engineer the state machine syntax from a BNF (found in comments) and some examples (found in unit tests):

Actions: Turnstile
FSM: OneCoinTurnstile
Initial: Locked
{
Locked Coin Unlocked {alarmOff unlock}
Locked Pass Locked  alarmOn
Unlocked Coin Unlocked thankyou
Unlocked Pass Locked lock
}

It reminded me a little of the state machine sample in Martin Fowler’s Domain-Specific Language book, which I recreated in F# a few years back as both internal and external DSLs.

On the train home I thought it would be fun to knock up a parser for the state machine in F# using FParsec, a parser combinator library. Parser combinators mix the lexing and parsing stages, and make light work of parsing. One of the many things I like about FParsec is that you get pretty good error messages for free from the library.

Finite State Machine Parser

I used F#’s discriminated unions to describe the AST which quite closely resembles the core of Uncle Bob’s BNF:

type Name = string
type Event = Name
type State = Name
type Action = Name
type Transition = { OldState:State; Event:Event; NewState:State; Actions:Action list }
type Header = Header of Name * Name

The parser, using FParsec, turned out to be pretty short (less than 40 loc):

open FParsec

let pname = many1SatisfyL isLetter "name"

let pheader = pname .>> pstring ":" .>> spaces1 .>>. pname .>> spaces |>> Header

let pstate = pname .>> spaces1
let pevent = pname .>> spaces1
let paction = pname

let pactions = 
   paction |>> fun action -> [action]
   <|> between (pstring "{") (pstring "}") (many (paction .>> spaces))

let psubtransition =
   pipe3 pevent pstate pactions (fun ev ns act -> ev,ns,act)

let ptransition1 =
   pstate .>>. psubtransition
   |>> fun (os,(ev,ns,act)) -> [{OldState=os;Event=ev;NewState=ns;Actions=act}]

let ptransitionGroup =
   let psub = spaces >>. psubtransition .>> spaces
   pstate .>>. (between (pstring "{") (pstring "}") (many1 psub))
   |>> fun (os,subs) -> 
      [for (ev,ns,act) in subs -> {OldState=os;Event=ev;NewState=ns;Actions=act}]

let ptransitions =
   let ptrans = attempt ptransition1 <|> ptransitionGroup
   between (pstring "{") (pstring "}") (many (spaces >>. ptrans .>> spaces))
   |>> fun trans -> List.collect id trans

let pfsm =
   spaces >>. many pheader .>>. ptransitions .>> spaces

let parse code =
   match run pfsm code with
   | Success(result,_,_) -> result
   | Failure(msg,_,_) -> failwith msg

You can try the parser snippet out directly in F#’s interactive window.

Finite State Machine Compiler

I also found code for compiling out to Java and C, and ported the former:

let compile (headers,transitions) =
   let header name = 
      headers |> List.pick (function Header(key,value) when key = name -> Some value | _ -> None)
   let states = 
      transitions |> List.collect (fun trans -> [trans.OldState;trans.NewState]) |> Seq.distinct      
   let events =
      transitions |> List.map (fun trans -> trans.Event) |> Seq.distinct

   "package thePackage;\n" +
   (sprintf "public abstract class %s implements %s {\n" (header "FSM") (header "Actions")) +
   "\tpublic abstract void unhandledTransition(String state, String event);\n" +
   (sprintf "\tprivate enum State {%s}\n" (String.concat "," states)) +
   (sprintf "\tprivate enum Event {%s}\n" (String.concat "," events)) +
   (sprintf "\tprivate State state = State.%s;\n" (header "Initial")) +
   "\tprivate void setState(State s) {state = s;}\n" +
   "\tprivate void handleEvent(Event event) {\n" +
   "\t\tswitch(state) {\n" +
   (String.concat ""
      [for (oldState,ts) in transitions |> Seq.groupBy (fun t -> t.OldState) ->
         (sprintf "\t\t\tcase %s:\n" oldState) +
         "\t\t\t\tswitch(event) {\n" +
         (String.concat ""
            [for t in ts ->
               (sprintf "\t\t\t\t\tcase %s:\n" t.Event) +
               (sprintf "\t\t\t\t\t\tsetState(State.%s);\n" t.NewState)+
               (String.concat ""
                  [for a in t.Actions -> sprintf "\t\t\t\t\t\t%s();\n" a]
               ) +
               "\t\t\t\t\t\tbreak;\n"
            ]         
         ) +
         "\t\t\t\t\tdefault: unhandledTransition(state.name(), event.name()); break;\n" +
         "\t\t\t\t}\n" +
         "\t\t\t\tbreak;\n"
      ] 
   )+   
   "\t\t}\n" +
   "\t}\n" +   
   "}\n"

Again, and probably not surprisingly, this is shorter than Uncle Bob’s Java implementation.
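For the turnstile definition above, the emitted class looks roughly like this. This is a hand-trimmed illustration: the real output lives in package thePackage, implements the Turnstile actions interface, and keeps the enums, state and handleEvent private; here the actions are declared abstract and the members made public so the class stands alone:

```java
// Hand-trimmed sketch of the compiler's output for the OneCoinTurnstile FSM.
public abstract class OneCoinTurnstile {
    public abstract void unhandledTransition(String state, String event);

    // Actions from the Turnstile interface, declared abstract here for brevity.
    public abstract void alarmOff();
    public abstract void unlock();
    public abstract void alarmOn();
    public abstract void thankyou();
    public abstract void lock();

    public enum State {Locked, Unlocked}
    public enum Event {Coin, Pass}

    private State state = State.Locked;  // "Initial: Locked"
    private void setState(State s) { state = s; }
    public State getState() { return state; }

    // Nested switch: outer on current state, inner on the incoming event.
    public void handleEvent(Event event) {
        switch (state) {
            case Locked:
                switch (event) {
                    case Coin: setState(State.Unlocked); alarmOff(); unlock(); break;
                    case Pass: setState(State.Locked); alarmOn(); break;
                    default: unhandledTransition(state.name(), event.name()); break;
                }
                break;
            case Unlocked:
                switch (event) {
                    case Coin: setState(State.Unlocked); thankyou(); break;
                    case Pass: setState(State.Locked); lock(); break;
                    default: unhandledTransition(state.name(), event.name()); break;
                }
                break;
        }
    }
}
```

A caller subclasses it, implements the actions, and drives it with handleEvent; the nested-switch shape is exactly what the String.concat pipeline in the compile function assembles.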

And again, you can try the compiler snippet out in the F# interactive window.

Conclusion

Writing a parser using a parser combinator library (and F#) appears to be significantly easier and more succinct than writing a hand rolled parser (in Java).

Categories: Programming