
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Programming

Unlocking ES2015 features with Webpack and Babel

Xebia Blog - 1 hour 49 min ago

This post is part of a series of ES2015 posts. We'll be covering new JavaScript functionality every week for the coming two months.

After spending a long time in working-draft state, the ES2015 specification (formerly known as ECMAScript 6, or ES6 for short) reached its final state a while ago. For a long time now, BabelJS, a JavaScript transpiler formerly known as 6to5, has been available for developers who want to use ES2015 features in their projects today.

In this blog post I will show you how to integrate Webpack, a JavaScript module builder/loader, with Babel to automate the transpiling of ES2015 code to ES5. Besides that, I'll also explain how to automatically generate source maps to ease development and debugging.

Webpack Introduction

Webpack is a Javascript module builder and module loader. With Webpack you can pack a variety of different modules (AMD, CommonJS, ES2015, ...) with their dependencies into static file bundles. Webpack provides you with loaders which essentially allow you to pre-process your source files before requiring or loading them. If you are familiar with tools like Grunt or Gulp you can think of loaders as tasks to be executed before bundling sources. To make your life even easier, Webpack also comes with a development server with file watch support and browser reloading.

Installation

In order to use Webpack all you need is npm, the Node Package Manager, available by downloading either Node or io.js. Once you've got npm up and running, all you need to do to set up Webpack globally is install it using npm:

npm install -g webpack

Alternatively, you can include it just in the projects of your preference using the following command:

npm install --save-dev webpack

Babel Introduction

With Babel, a JavaScript transpiler, you can write your code using ES2015 (and even some ES7 features) and convert it to ES5 so that today's browsers can interpret it. On the Babel website you can find a list of supported features and how you can use them in your project today. For the React developers among us, Babel also comes with JSX support out of the box.
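To make the transpiling concrete, here is a small, hypothetical ES2015 snippet (not part of the example project) together with a hand-written ES5 equivalent of the kind Babel generates; the real output adds a few helper functions but behaves the same:

// ES2015: class, arrow function and template literal
class Greeter {
  constructor(name) {
    this.name = name;
  }
  greet() {
    return ['Hello', 'Goodbye'].map(word => `${word} ${this.name}`);
  }
}

// Roughly equivalent ES5 that any current browser can run
function GreeterES5(name) {
  this.name = name;
}
GreeterES5.prototype.greet = function () {
  var self = this;
  return ['Hello', 'Goodbye'].map(function (word) {
    return word + ' ' + self.name;
  });
};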

Alternatively, there is the Google Traceur compiler which essentially solves the same problem as Babel. There are multiple Webpack loaders available for Traceur of which traceur-loader seems to be the most popular one.

Installation

Assuming you already have npm installed, installing Babel is as easy as running:

npm install --save-dev babel-loader

This command will add babel-loader to your project's package.json. Run the following command if you prefer installing it globally:

npm install -g babel-loader

Project structure
webpack-babel-integration-example/
  src/
    DateTime.js
    Greeting.js
    main.js
  index.html
  package.json
  webpack.config.js

Webpack's configuration can be found in the root directory of the project, named webpack.config.js. The ES6 Javascript sources that I wish to transpile to ES5 will be located under the src/ folder.

Webpack configuration

The required Webpack configuration file is very straightforward and covers only a few aspects:

  • my main source entry
  • the output path and bundle name
  • the development tools that I would like to use
  • a list of module loaders that I would like to apply to my source
var path = require('path');

module.exports = {
  entry: './src/main.js',
  output: {
    path: path.join(__dirname, 'build'),
    filename: 'bundle.js'
  },
  devtool: 'inline-source-map',
  module: {
    loaders: [
      {
        test: path.join(__dirname, 'src'),
        loader: 'babel-loader'
      }
    ]
  }
};

The snippet above shows you that my source entry is set to src/main.js, the output is set to create a build/bundle.js, I would like Webpack to generate inline source maps and I would like to run the babel-loader for all files located in src/.

ES6 sources

A simple ES6 class

Greeting.js contains a simple class with only the toString method implemented to return a String that will greet the user:

class Greeting {
  toString() {
    return 'Hello visitor';
  }
}

export default Greeting

Using packages in your ES2015 code

Often enough, you rely on a number of different packages that you include in your project using npm. In this example, I'll use Moment.js, a popular date and time library, to display the current date and time to the user.

Run the following command to install Moment.js as a local dependency in your project:

npm install --save-dev moment

I have created DateTime.js, which again contains a class that only implements the toString method to return the current date and time in the default date format.

import moment from 'moment';

class DateTime {
  toString() {
    return 'The current date time is: ' + moment().format();
  }
}

export default DateTime

After importing the package using the import statement you can use it anywhere within the source file.

Your main entry

In the Webpack configuration I specified src/main.js as my source entry. In this file I simply import both classes I created, target two DOM elements, and write the toString output of each class into them.

import Greeting from './Greeting.js';
import DateTime from './DateTime.js';

var h1 = document.querySelector('h1');
h1.textContent = new Greeting();

var h2 = document.querySelector('h2');
h2.textContent = new DateTime();

HTML

After setting up my ES2015 sources that will display the greeting in an h1 tag and the current date and time in an h2 tag, it is time to set up my index.html. It is a straightforward HTML file; the only really important thing is that you point the script tag to the transpiled bundle file, in this example build/bundle.js.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Webpack and Babel integration example</title>
</head>
<body>

<h1></h1>

<h2></h2>

    <script src="build/bundle.js"></script>
</body>
</html>

Running the application

In this example project, running my application is as simple as opening the index.html in your favorite browser. However, before doing this you will need to instruct Webpack to actually run the loaders and thus transpile your sources into the build/bundle.js required by the index.html.

You can run Webpack in watch mode, meaning that it will monitor your source files for changes and automatically run the module loaders defined in your configuration. Execute the following command to run in watch mode:

webpack --watch

If you are using my example project from GitHub (link at the bottom), you can also use the following script, which I've set up in the package.json:

npm run watch
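For reference, the watch script behind this command is essentially just an alias for the Webpack command shown above. A minimal sketch of what such a scripts entry could look like (check the example project's package.json for the exact definition):

{
  "scripts": {
    "watch": "webpack --watch"
  }
}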

Easier debugging using source maps

Debugging transpiled ES5 is a huge pain that will make you want to go back to writing plain ES5 without a second thought. To ease development and debugging of ES2015, I can rely on the source maps generated by Webpack. When running Webpack (normally or in watch mode) with the devtool property set to inline-source-map, you can view the ES2015 source files and actually place breakpoints in them using your browser's developer tools.

Debugging ES6 with source maps

Running the example project with a breakpoint inside the DateTime.js toString method using the Chrome developer tools.

Conclusion

As you've just seen, setting up everything you need to get started with ES2015 is extremely easy. Webpack is a great utility that lets you easily set up your complete front-end build pipeline, and it integrates seamlessly with Babel to bring transpiling into that pipeline. With the help of source maps, even debugging becomes easy again.

Sample project

The entire sample project as introduced above can be found on GitHub.

Xebia KnowledgeCast Episode 6: Lodewijk Bogaards on Stackstate and TypeScript

Xebia Blog - Sun, 08/30/2015 - 12:00

The Xebia KnowledgeCast is a podcast about software architecture, software development, lean/agile, continuous delivery, and big data.

In this 6th episode, we switch to a new format! So, no fun with stickies this time. It’s one interview. And we dive in deeper than ever before.

Lodewijk Bogaards is co-founder and CTO at Stackstate. Stackstate is an enterprise application that provides real-time insight into the run-time status of all business processes, applications, services and infrastructure components, their dependencies and states.

It gives immediate insight into the cause and impact of any change or failure, from the hardware level to the business process level and everything in between. To build awesome apps like that, Lodewijk and his team use a full stack of technologies that they continue to improve. One such improvement is ditching plain JavaScript in favor of TypeScript and Angular, and it is that topic that we'll focus on in today's discussion.

What's your opinion on TypeScript? I'm looking forward to reading your comments!

Links:

Want to subscribe to the Xebia KnowledgeCast? Subscribe via iTunes, or Stitcher.

Your feedback is appreciated. Please leave your comments in the shownotes. Better yet, send in a voice message so we can put you ON the show!

Credits

The Ultimate Personal Productivity Platform is You

“Amateurs sit and wait for inspiration, the rest of us just get up and go to work.” ― Stephen King

The ultimate personal productivity platform is you.

Let’s just put that on the table right up front so you know where personal productivity ultimately comes from.  It’s you.

I can’t possibly give you anything that will help you perform better than an organized mind firing on all cylinders combined with self-awareness.

You are the one that ultimately has to envision your future.  You are the one that ultimately has to focus your attention.  You are the one that ultimately needs to choose your goals.  You are the one that ultimately has to find your motivation.  You are the one that ultimately needs to manage your energy.  You are the one that ultimately needs to manage your time.  You are the one that ultimately needs to take action.  You are the one that needs to balance work and life.

That’s a lot for you to do.

So the question isn’t are you capable?  Of course you are.

The real question is, how do you make the most of you?

Agile Results is a personal productivity platform to help you make the most of what you’ve got.

Agile Results is a simple system for getting better results.  It combines proven practices for productivity, time management, and motivation into a simple system you can use to achieve better, faster, easier results for work and life.

Agile Results works by integrating and synthesizing positive psychology, sport psychology, project management skills, and peak performance insights into little behavior changes you can do each day.  It’s also based on more than 10 years of extensive trial and error to help people achieve high performance.

If you don’t know how to get started, start simple:

Ask yourself the following question:  “What are three things I want to achieve today?”

And write those down.   That’s it.

You’re doing Agile Results.

Categories: Architecture, Programming

Angular 1 and Angular 2 integration: the path to seamless upgrade

Google Code Blog - Fri, 08/28/2015 - 20:05

Originally posted on the Angular blog.

Posted by Misko Hevery, Software Engineer, Angular

Have an existing Angular 1 application and are wondering about upgrading to Angular 2? Well, read on and learn about our plans to support incremental upgrades.

Summary

Good news!

    • We're enabling mixing of Angular 1 and Angular 2 in the same application.
    • You can mix Angular 1 and Angular 2 components in the same view.
    • Angular 1 and Angular 2 can inject services across frameworks.
    • Data binding works across frameworks.

Why Upgrade?

Angular 2 provides many benefits over Angular 1, including dramatically better performance, more powerful templating, lazy loading, simpler APIs, easier debugging, improved testability, and much more. Here are a few of the highlights:

Better performance

We've focused on many scenarios to make your apps snappy: 3x to 5x faster in initial render and re-render scenarios.

    • Faster change detection through monomorphic JS calls
    • Template precompilation and reuse
    • View caching
    • Lower memory usage / VM pressure
    • Linear (blindingly-fast) scalability with observable or immutable data structures
    • Dependency injection supports incremental loading
More powerful templating
    • Removes need for many directives
    • Statically analyzable - future tools and IDEs can discover errors at development time instead of run time
    • Allows template writers to determine binding usage rather than hard-wiring in the directive definition
Future possibilities

We've decoupled Angular 2's rendering from the DOM. We are actively working on supporting the following other capabilities that this decoupling enables:

    • Server-side rendering. Enables blindingly-fast initial render and web-crawler support.
    • Web Workers. Move your app and most of Angular to a Web Worker thread to keep the UI smooth and responsive at all times.
    • Native mobile UI. We're enthusiastic about supporting the Web Platform in mobile apps. At the same time, some teams want to deliver fully native UIs on their iOS and Android mobile apps.
    • Compile as build step. Angular apps parse and compile their HTML templates. We're working to speed up initial rendering by moving the compile step into your build process.
Angular 1 and 2 running together

Angular 2 offers dramatic advantages over Angular 1 in performance, simplicity, and flexibility. We're making it easy for you to take advantage of these benefits in your existing Angular 1 applications by letting you seamlessly mix in components and services from Angular 2 into a single app. By doing so, you'll be able to upgrade an application one service or component at a time over many small commits.

For example, you may have an app that looks something like the diagram below. To get your feet wet with Angular 2, you decide to upgrade the left nav to an Angular 2 component. Once you're more confident, you decide to take advantage of Angular 2's rendering speed for the scrolling area in your main content area.

For this to work, four things need to interoperate between Angular 1 and Angular 2:

    • Dependency injection
    • Component nesting
    • Transclusion
    • Change detection

To make all this possible, we're building a library named ng-upgrade. You'll include ng-upgrade and Angular 2 in your existing Angular 1 app, and you'll be able to mix and match at will.

You can find full details and pseudocode in the original upgrade design doc or read on for an overview of the details on how this works. In future posts, we'll walk through specific examples of upgrading Angular 1 code to Angular 2.

Dependency Injection

First, we need to solve for communication between parts of your application. In Angular, the most common pattern for calling another class or function is through dependency injection. Angular 1 has a single root injector, while Angular 2 has a hierarchical injector. Upgrading services one at a time implies that the two injectors need to be able to provide instances from each other.

The ng-upgrade library will automatically make all of the Angular 1 injectables available in Angular 2. This means that your Angular 1 application services can now be injected anywhere in Angular 2 components or services.

Exposing an Angular 2 service into an Angular 1 injector will also be supported, but will require you to provide a simple mapping configuration.

The result is that services can be easily moved one at a time from Angular 1 to Angular 2 over independent commits and communicate in a mixed environment.

Component Nesting and Transclusion

In both versions of Angular, we define a component as a directive which has its own template. For incremental migration, you'll need to be able to migrate these components one at a time. This means that ng-upgrade needs to enable components from each framework to nest within each other.

To solve this, ng-upgrade will allow you to wrap Angular 1 components in a facade so that they can be used in an Angular 2 component. Conversely, you can wrap Angular 2 components to be used in Angular 1. This will fully work with transclusion in Angular 1 and its analog of content-projection in Angular 2.

In this nested-component world, each template is fully owned by either Angular 1 or Angular 2 and will fully follow its syntax and semantics. This is not an emulation mode that mostly looks like the other framework, but actual execution in each framework, depending on the type of component. This means that components which are upgraded to Angular 2 will get all of the benefits of Angular 2, and not just better syntax.

This also means that an Angular 1 component will always use Angular 1 Dependency Injection, even when used in an Angular 2 template, and an Angular 2 component will always use Angular 2 Dependency Injection, even when used in an Angular 1 template.

Change Detection

Mixing Angular 1 and Angular 2 components implies that Angular 1 scopes and Angular 2 components are interleaved. For this reason, ng-upgrade will make sure that the change detection mechanisms (scope digest in Angular 1 and change detectors in Angular 2) are interleaved in the same way to maintain a predictable evaluation order of expressions.

ng-upgrade takes this into account and bridges the scope digest from Angular 1 and change detection in Angular 2 in a way that creates a single cohesive digest cycle spanning both frameworks.

Typical application upgrade process

Here is an example of what an Angular 1 project upgrade to Angular 2 may look like.

  1. Include the Angular 2 and ng-upgrade libraries with your existing application
  2. Pick a component which you would like to migrate
    1. Edit an Angular 1 directive's template to conform to Angular 2 syntax
    2. Convert the directive's controller/linking function into Angular 2 syntax/semantics
    3. Use ng-upgrade to export the directive (now a Component) as an Angular 1 component (this is needed if you wish to call the new Angular 2 component from an Angular 1 template)
  3. Pick a service which you would like to migrate
    1. Most services should require minimal to no change.
    2. Configure the service in Angular 2
    3. (optionally) re-export the service into Angular 1 using ng-upgrade if it's still used by other parts of your Angular 1 code.
  4. Repeat steps #2 and #3 in whatever order is convenient for your application
  5. Once no more services or components need to be converted, drop the top-level Angular 1 bootstrap and replace it with the Angular 2 bootstrap.

Note that each individual change can be checked in separately while the application keeps working, letting you continue to release updates as you wish.

We are not planning on adding support for allowing non-component directives to be usable on both sides. We think most non-component directives are not needed in Angular 2, as they are supported directly by the new template syntax (e.g. ng-click vs. (click)).

Q&A

I heard Angular 2 doesn't support 2-way bindings. How will I replace them?

Actually, Angular 2 supports two way data binding and ng-model, though with slightly different syntax.

When we set out to build Angular 2 we wanted to fix issues with the Angular digest cycle. To solve this we chose to create a unidirectional data-flow for change detection. At first it was not clear to us how the two way forms data-binding of ng-model in Angular 1 fits in, but we always knew that we had to make forms in Angular 2 as simple as forms in Angular 1.

After a few iterations we managed to fix what was broken with multiple digests and still retain the power and simplicity of ng-model in Angular 1.

Two-way data binding has a new syntax, [(property-name)]="expression", to make it explicit that the expression is bound in both directions. Because for most scenarios this is just a small syntactic change, we expect an easy migration.

As an example, if in Angular 1 you have: <input type="text" ng-model="model.name" />

You would convert to this in Angular 2: <input type="text" [(ng-model)]="model.name" />

What languages can I use with Angular 2?

Angular 2 APIs fully support coding in today's JavaScript (ES5), the next version of JavaScript (ES6 or ES2015), TypeScript, and Dart.

While it's a perfectly fine choice to continue with today's JavaScript, we'd like to recommend that you explore ES6 and TypeScript (which is a superset of ES6) as they provide dramatic improvements to your productivity.

ES6 provides much improved syntax and interoperability standards for common libraries like promises and modules. TypeScript gives you dramatically better code navigation, automated refactoring in IDEs, documentation, finding errors, and more.

Both ES6 and TypeScript are easy to adopt as they are supersets of today's ES5. This means that all your existing code is valid and you can add their features a little at a time.

What should I do with $watch in our codebase?

tl;dr: $watch expressions need to be moved into declarative annotations. Those that don't fit there should take advantage of observables (reactive programming style).

In order to gain speed and predictability, in Angular 2 you specify watch expressions declaratively. The expressions are either in HTML templates and are auto-watched (no work for you), or have to be declaratively listed in the directive annotation.

One additional benefit from this is that Angular 2 applications can be safely minified/obfuscated for smaller payload.

For inter-application communication, Angular 2 offers observables (reactive programming style).
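To make this shift concrete, here is a rough, illustrative sketch (not taken from the upgrade guide): an imperative $watch in Angular 1 typically becomes a plain property or getter that an Angular 2 template expression such as {{fullName}} watches automatically:

// Angular 1: imperative watches kept in sync by the digest cycle
$scope.$watch('firstName', updateFullName);
$scope.$watch('lastName', updateFullName);
function updateFullName() {
  $scope.fullName = $scope.firstName + ' ' + $scope.lastName;
}

// Angular 2: expose the value and let the template expression {{fullName}}
// be auto-watched by change detection; no explicit watch registration needed.
class NameComponent {
  get fullName() {
    return this.firstName + ' ' + this.lastName;
  }
}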

What can I do today to prepare myself for the migration?

Follow the best practices and build your application using components and services in Angular 1 as described in the AngularJS Style Guide.

Wasn't the original upgrade plan to use the new Component Router?

The upgrade plan that we announced at ng-conf 2015 was based on upgrading a whole view at a time and having the Component Router handle communication between the two versions of Angular.

The feedback we received was that while yes, this was incremental, it wasn't incremental enough. We went back and redesigned, arriving at the plan described above.

Are there more details you can share?

Yes! In the Angular 1 to Angular 2 Upgrade Strategy design doc.

We're working on a series of upcoming posts on related topics including:

  1. Mapping your Angular 1 knowledge to Angular 2.
  2. A set of FAQs on details around Angular 2.
  3. Detailed migration guide with working code samples.

See you back here soon!

Categories: Programming

Failing at Entrepreneurship While You Are Young Is Great

Making the Complex Simple - John Sonmez - Fri, 08/28/2015 - 14:00

My Perfect Job It was early 2006, and I had what most people would consider a perfect job. I worked as a Developer Platform Evangelist for Microsoft Corporation, delivering labs worldwide to Microsoft Partners and teaching how to migrate applications to 64-bit computing. I was working as a contractor, commonly called a v-(dash) in the […]

The post Failing at Entrepreneurship While You Are Young Is Great appeared first on Simple Programmer.

Categories: Programming

Trying out the Serenity BDD framework; a report

Xebia Blog - Fri, 08/28/2015 - 12:47

“Serenity, that feeling you know you can trust your tests.” Sounds great, but I was thinking of Firefly first when I heard the name ‘Serenity’. In this case, we are talking about a framework you can use to automate your tests.

The selling points of this framework are that it integrates your acceptance tests (BDD) with reporting and acts like living documentation. It can also integrate with JIRA and all that jazz. Hearing this, I wasn’t ‘wowed’ per se. There are many tools out there that can do that. But Serenity doesn’t support just one approach. Although it heavily favours WebDriver/Selenium, you can also use JBehave, JUnit, or Cucumber. That is really nice!

Last weekend, at the Board of Agile Testers, we tried the framework with a couple of people. Our goal was to see if it’s really easy to set up, to see if the reports are useful and how easy it is to implement features. We used Serenity ‘Cucumber-style’ with Selenium/Webdriver (Java) and the Page Object Pattern.

Setup

It may go a little too far to say that a totally non-technical person could set up the framework, but it was pretty easy. Using your favorite IDE, all you had to do was import a Maven archetype (we used a demo project) and all the Serenity dependencies were downloaded for you. We would recommend using at least Java 7; Java 6 gave us problems.

Using the tool

The demo project's tests ran all right, but we noticed they were quite slow! The reason is probably that Serenity takes a screenshot at every step of your test. Thankfully, you can configure this setting.

At the end of each test run, Serenity generates an HTML report. This report looks really good! You get a general overview and can click on each test step to see the screenshots. There is also a link to the requirements and you can see the ‘coverage’ of your tests. I’m guessing they mean functional coverage here, since we’re writing acceptance tests.

Serenity Report

Writing our own tests

After we got a little overview of the tool, we started writing our own tests, using a Calculator as the System Under Test. The Serenity-specific Page Object support comes with the Maven archetype, so the IDE could help you implement the tests. We tried to do it in a little TDD cycle: run the test, let it fail, and let the output give you a hint on how to implement the step definitions. Beyond the step definitions, you had to use your Java skills to implement the actual tests.

Conclusion

The tool is pretty developer-oriented, there's no denying that. The IDE integration is very good in my opinion. With the Community IntelliJ edition you have all the plugins you need to speed up your workflow. The reporting is indeed the most beautiful I have personally seen. Would we recommend changing your existing framework to Serenity? Unless your test reports are shit: no. There is in fact a small downside to using this framework; for now there are only about 15 people who actively contribute. You are of course welcome to join in, but it is a risk that only a small group of people is actively improving it. If the support base grows, it will be a powerful framework for your automated tests and BDD cycle. For now, I'm going to play with the framework some more, because after using it for about 3 hours I think we have only scratched the surface of its possibilities.

 

Announcing the Android Auto Desktop Head Unit

Android Developers Blog - Thu, 08/27/2015 - 19:24

Posted by Josh Gordon, Developer Advocate

Today we’re releasing the Desktop Head Unit (DHU), a new testing tool for Android Auto developers. The DHU enables your workstation to act as an Android Auto head unit that emulates the in-car experience for testing purposes. Once you’ve installed the DHU, you can test your Android Auto apps by connecting your phone and workstation via USB. Your phone will behave as if it’s connected to a car. Your app is displayed on the workstation, the same as it’s displayed on a car.

The DHU runs on your workstation. Your phone runs the Android Auto companion app.

Now you can test pre-released versions of your app in a production-like environment, without having to work from your car. With the release of the DHU, the previous simulators are deprecated, but will be supported for a short period prior to being officially removed.

Getting started

You’ll need an Android phone running Lollipop or higher, with the Android Auto companion app installed. Compile your Auto app and install it on your phone.

Install the DHU

Install the DHU on your workstation by opening the SDK Manager and downloading it from Extras > Android Auto Desktop Head Unit emulator. The DHU will be installed in the <sdk>/extras/google/auto/ directory.

Running the DHU

Be sure your phone and workstation are connected via USB.

  1. Enable Android Auto developer mode by starting the Android Auto companion app and tapping on the header image 10 times. This is a one-time step.
  2. Start the head unit server in the companion app by clicking on the context menu and selecting “Start head unit server”. This option only appears after developer mode is enabled. A notification appears to show the server is running; make sure it is running before starting the DHU on your workstation.
  3. On your workstation, set up port forwarding using ADB to allow the DHU to connect to the head unit server running on your phone. Open a terminal and type adb forward tcp:5277 tcp:5277. Don’t forget this step!
  4. Start the DHU:
      cd <sdk>/extras/google/auto/
      On Linux or OS X: ./desktop-head-unit
      On Windows: desktop-head-unit.exe

At this point the DHU will launch on your workstation, and your phone will enter Android Auto mode. Check out the developer guide for more info. We hope you enjoy using the DHU!

Join the discussion on

+Android Developers
Categories: Programming

Building better apps with Runtime Permissions

Android Developers Blog - Thu, 08/27/2015 - 18:34

Posted by Ian Lake, Developer Advocate

Android devices do a lot, whether it is taking pictures, getting directions or making phone calls. With all of this functionality comes a large amount of very sensitive user data, including contacts, calendar appointments, current location, and more. This sensitive information is protected by permissions, which each app must have before being able to access the data. Android 6.0 Marshmallow introduces one of the largest changes to the permissions model with the addition of runtime permissions, a new model that replaces install-time permissions when you target API 23 and the app is running on an Android 6.0+ device.

Runtime permissions give your app the ability to control when and with what context you’ll ask for permissions. This means that users installing your app from Google Play will not be required to accept a list of permissions before installing your app, making it easy for users to get directly into your app. It also means that if your app adds new permissions, app updates will not be blocked until the user accepts the new permissions. Instead, your app can ask for the newly added runtime permissions as needed.

Finding the right time to ask for runtime permissions has an important impact on your app’s user experience. We’ve gathered a number of design patterns in our new Permission design guidelines including best practices around when to request permissions, how to explain why permissions are needed, and how to handle permissions being denied.

Ask up front for permissions that are obvious

In many cases, you can avoid permissions altogether by using the existing intents system to utilize other existing specialized apps rather than building a full experience within your app. An example of this is using ACTION_IMAGE_CAPTURE to start an existing camera app the user is familiar with rather than building your own camera experience. Learn more about permissions versus intents.

However, if you do need a runtime permission, there’s a number of tools to help you. Checking for whether your app has a permission is possible with ContextCompat.checkSelfPermission() (available as part of revision 23 of the support-v4 library for backward compatibility) and requesting permissions can be done with requestPermissions(), bringing up the system controlled permissions dialog to allow the user to grant you the requested permission(s) if you don’t already have them. Keep in mind that users can revoke permissions at any time through the system settings so you should always check permissions every time.

A special note should be made around shouldShowRequestPermissionRationale(). This method returns true if the user has denied your permission request at least once yet has not selected the ‘Don’t ask again’ option (which appears the second or later time the permission dialog appears). This gives you an opportunity to provide additional education around the feature and why you need the given permission. Learn more about explaining why the app needs permissions.

Read through the design guidelines and our developer guide for all of the details in getting your app ready for Android 6.0 and runtime permissions. Making it easy to install your app and providing context around accessing user’s sensitive data are key changes you can make to build better apps.

Join the discussion on

+Android Developers
Categories: Programming

Announcing Great New SQL Database Capabilities in Azure

ScottGu's Blog - Scott Guthrie - Thu, 08/27/2015 - 17:13

Today we are making available several new SQL Database capabilities in Azure that enable you to build even better cloud applications.  In particular:

  • We are introducing two new pricing tiers for our  Elastic Database Pool capability.  Elastic Database Pools enable you to run multiple, isolated and independent databases on a private pool of resources dedicated to just you and your apps.  This provides a great way for software-as-a-service (SaaS) developers to better isolate their individual customers in an economical way.
  • We are also introducing new higher-end scale options for SQL Databases that enable you to run even larger databases with significantly more compute + storage + networking resources.

Both of these additions are available to start using immediately.

Elastic Database Pools

If you are a SaaS developer with tens, hundreds, or even thousands of databases, an elastic database pool dramatically simplifies the process of creating, maintaining, and managing performance across these databases within a budget that you control. 


A common SaaS application pattern (especially for B2B SaaS apps) is for the SaaS app to use a different database to store data for each customer.  This has the benefit of isolating the data for each customer separately (and enables each customer’s data to be encrypted separately, backed-up separately, etc).  While this pattern is great from an isolation and security perspective, each database can end up having varying and unpredictable resource consumption (CPU/IO/Memory patterns), and because the peaks and valleys for each customer might be difficult to predict, it is hard to know how many resources to provision.  Developers were previously faced with two options: either over-provision database resources based on peak usage and overpay, or under-provision to save cost at the expense of performance and customer satisfaction during peaks.

Microsoft created elastic database pools specifically to help developers solve this problem.  With Elastic Database Pools you can allocate a shared pool of database resources (CPU/IO/Memory), and then create and run multiple isolated databases on top of this pool.  You can set minimum and maximum performance SLA limits of your choosing for each database you add into the pool (ensuring that none of the databases unfairly impacts other databases in your pool).  Our management APIs also make it much easier to script and manage these multiple databases together, as well as optionally execute queries that span across them (useful for a variety of operations).  And best of all, when you add multiple databases to an Elastic Database Pool, you are able to average out the typical utilization load (because each of your customers tend to have different peaks and valleys) and end up requiring far fewer database resources (and spend less money as a result) than you would if you ran each database separately.

The below chart shows a typical example of what we see when SaaS developers take advantage of the Elastic Pool capability.  Each individual database they have has different peaks and valleys in terms of utilization.  As you combine multiple of these databases into an Elastic Pool, the peaks and valleys tend to normalize out (since they often happen at different times), requiring far fewer overall resources than you would need if each database were resourced separately:

databases sharing eDTUs

Because Elastic Database Pools are built using our SQL Database service, you also get to take advantage of all of the underlying database as a service capabilities that are built into it: 99.99% SLA, multiple-high availability replica support built-in with no extra charges, no down-time during patching, geo-replication, point-in-time recovery, TDE encryption of data, row-level security, full-text search, and much more.  The end result is a really nice database platform that provides a lot of flexibility, as well as the ability to save money.

New Basic and Premium Tiers for Elastic Database Pools

Earlier this year at the //Build conference we announced our new Elastic Database Pool support in Azure and entered public preview with the Standard Tier edition of it.  The Standard Tier allows individual databases within the elastic pool to burst up to 100 eDTUs (a DTU represents a combination of Compute + IO + Storage performance).

Today we are adding additional Basic and Premium Elastic Database Pools to the preview to enable a wider range of performance and cost options.

  • Basic Elastic Database Pools are great for light-usage SaaS scenarios.  Basic Elastic Database Pools allow individual databases to burst up to 5 eDTUs.
  • Premium Elastic Database Pools are designed for databases that require the highest performance per database. Premium Elastic Database Pools allow individual databases to burst up to 1,000 eDTUs.

Collectively we think these three Elastic Database Pool pricing tier options provide a tremendous amount of flexibility and optionality for SaaS developers to take advantage of, and are designed to enable a wide variety of different scenarios.

Easily Migrate Databases Between Pricing Tiers

One of the cool capabilities we support is the ability to easily migrate an individual database between different Elastic Database Pools (including ones with different pricing tiers).  For example, if you were a SaaS developer you could start a customer out with a trial edition of your application – and choose to run the database that backs it within a Basic Elastic Database Pool to run it super cost effectively.  As the customer’s usage grows you could then auto-migrate them to a Standard database pool without customer downtime.  If the customer grows up to require a tremendous amount of resources you could then migrate them to a Premium Database Pool or run their database as a standalone SQL Database with a huge amount of resource capacity.

This provides a tremendous amount of flexibility and capability, and enables you to build even better applications.

Managing Elastic Database Pools

One of the other nice things about Elastic Database Pools is that the service provides the management capabilities to easily manage large collections of databases without you having to worry about the infrastructure that runs it.

You can create and manage Elastic Database Pools using our Azure Management Portal or via our Command-line tools or REST Management APIs.  With today’s update we are also adding support so that you can use T-SQL to add/remove new databases to/from an elastic pool.  Today’s update also adds T-SQL support for measuring resource utilization of databases within an elastic pool – making it even easier to monitor and track utilization by database.

Elastic Database Pool Tier Capabilities

During the preview, we have been tuning, and will continue to tune, a number of parameters that control the density of Elastic Database Pools.

In particular, the current limits for the number of databases per pool and the number of pool eDTUs are something we plan to steadily increase as we march towards the general availability release.  Our plan is to provide the highest possible density per pool, the largest pool sizes, and the best Elastic Database Pool economics, while at the same time keeping our 99.99% availability SLA.

Below are the current performance parameters for each of the Elastic Database Pool Tier options in preview today:

 

Elastic Database Pool

  • eDTU range per pool (preview limits): Basic 100-1200 eDTUs; Standard 100-1200 eDTUs; Premium 125-1500 eDTUs
  • Storage range per pool: Basic 10-120 GB; Standard 100-1200 GB; Premium 63-750 GB
  • Maximum databases per pool (preview limits): Basic 200; Standard 200; Premium 50
  • Estimated monthly pool and add-on eDTU costs (preview prices): Basic starting at $0.2/hr (~$149/pool/mo), each additional eDTU $0.002/hr (~$1.49/mo); Standard starting at $0.3/hr (~$223/pool/mo), each additional eDTU $0.003/hr (~$2.23/mo); Premium starting at $0.937/hr (~$697/pool/mo), each additional eDTU $0.0075/hr (~$5.58/mo)
  • Storage per eDTU: Basic 0.1 GB; Standard 1 GB; Premium 0.5 GB

Elastic Databases

  • eDTU max per database (preview limits): Basic 0-5; Standard 0-100; Premium 0-1000
  • Storage max per database: Basic 2 GB; Standard 250 GB; Premium 500 GB
  • Per database cost (preview prices): Basic $0.0003/hr (~$0.22/mo); Standard $0.0017/hr (~$1.26/mo); Premium $0.0084/hr (~$6.25/mo)

We’ll continue to iterate on the above parameters and increase the maximum number of databases per pool as we progress through the preview, and would love your feedback as we do so.

New Higher-Scale SQL Database Performance Tiers

In addition to the enhancements for Elastic Database Pools, we are also today releasing new SQL Database Premium performance tier options for standalone databases. 

Today we are adding a new P4 (500 DTU) and a P11 (1750 DTU) level, which provide even higher-performance database options for SQL Databases that need to scale up. The new P11 edition also now supports databases up to 1 TB in size.

Developers can now choose from 10 different SQL Database Performance levels.  You can easily scale-up/scale-down as needed at any point without database downtime or interruption.  Each database performance tier supports a 99.99% SLA, multiple-high availability replica support built-in with no extra charges (meaning you don’t need to buy multiple instances to get an SLA – this is built-into each database), no down-time during patching, point-in-time recovery options (restore without needing a backup), TDE encryption of data, row-level security, and full-text search.


Learn More

You can learn more about SQL Databases by visiting the http://azure.microsoft.com website.  Check out the SQL Database product page to learn more about the capabilities SQL Databases provide, and read the technical documentation to learn how to build great applications using them.

Summary

Today’s database updates enable developers to build even better cloud applications, and to use data to make them even richer and more intelligent.  We are really looking forward to seeing the solutions you build.

Hope this helps,

Scott

Categories: Architecture, Programming

How Can I Live In An Expensive City For Cheap?

Making the Complex Simple - John Sonmez - Thu, 08/27/2015 - 16:00

In this episode, I talk about living in an expensive city for cheap. Full transcript: John:               Hey, John Sonmez from simpleprogrammer.com. I got a question about—another question about my book. If you haven’t seen it already, keep on plugging it, but Soft Skills: A Software Developer’s Life Manual. Here I talked about some kind of […]

The post How Can I Live In An Expensive City For Cheap? appeared first on Simple Programmer.

Categories: Programming

Learn app monetization best practices with Udacity and Google

Google Code Blog - Wed, 08/26/2015 - 18:17

Posted by Ido Green, Developer Advocate

There is no higher form of user validation than having customers support your product with their wallets. However, the path to a profitable business is not necessarily an easy one. There are many strategies to pick from and a lot of little things that impact the bottom line. If you are starting a new business (or thinking about how to improve the financial situation of your current startup), we recommend this course we've been working on with Udacity!

This course blends instruction with real-life examples to help you effectively develop, implement, and measure your monetization strategy. By the end of this course you will be able to:

  • Choose & implement a monetization strategy relevant to your service or product.
  • Set performance metrics & monitor the success of a strategy.
  • Know when it might be time to change methods.

Go try it at: udacity.com/course/app-monetization--ud518

We hope you will enjoy and earn from it!

Categories: Programming

How to Ace a Non-Technical Interview

Making the Complex Simple - John Sonmez - Wed, 08/26/2015 - 13:00

In my experience, developers often overlook the non-technical interview. I’ve been the company culture guy during quite a few interviews. The task is simply to answer one question: “Is this person a good fit for our team?” Some have been great, and others have flopped. What do top developers do to ensure they’re hired? Where […]

The post How to Ace a Non-Technical Interview appeared first on Simple Programmer.

Categories: Programming

Breaking the SQL Barrier: Google BigQuery User-Defined Functions

Google Code Blog - Tue, 08/25/2015 - 16:55

Posted by Thomas Park, Senior Software Engineer, Google BigQuery

Many types of computations can be difficult or impossible to express in SQL. Loops, complex conditionals, and non-trivial string parsing or transformations are all common examples. What can you do when you need to perform these operations but your data lives in a SQL-based Big data tool? Is it possible to retain the convenience and speed of keeping your data in a single system, when portions of your logic are a poor fit for SQL?

Google BigQuery is a fully managed, petabyte-scale data analytics service that uses SQL as its query interface. As part of our latest BigQuery release, we are announcing support for executing user-defined functions (UDFs) over your BigQuery data. This gives you the ability to combine the convenience and accessibility of SQL with the option to use a familiar programming language, JavaScript, when SQL isn’t the right tool for the job.

How does it work?

BigQuery UDFs are similar to map functions in MapReduce. They take one row of input and produce zero or more rows of output, potentially with a different schema.

Below is a simple example that performs URL decoding. Although BigQuery provides a number of built-in functions, it does not have a built-in for decoding URL-encoded strings. However, this functionality is available in JavaScript, so we can extend BigQuery with a simple User-Defined Function to decode this type of data:



function decodeHelper(s) {
  try {
    return decodeURI(s);
  } catch (ex) {
    return s;
  }
}

// The UDF.
function urlDecode(r, emit) {
  emit({title: decodeHelper(r.title),
        requests: r.num_requests});
}

BigQuery UDFs are functions with two formal parameters. The first parameter is a variable to which each input row will be bound. The second parameter is an “emitter” function. Each time the emitter is invoked with a JavaScript object, that object will be returned as a row to the query.

In the above example, urlDecode is the UDF that will be invoked from BigQuery. It calls a helper function decodeHelper that uses JavaScript’s built-in decodeURI function to transform URL-encoded data into UTF-8.

Note the use of try / catch in decodeHelper. Data is sometimes dirty! If we encounter an error decoding a particular string for any reason, the helper returns the original, un-decoded string.

To make this function visible to BigQuery, it is necessary to include a registration call in your code that describes the function, including its input columns and output schema, and a name that you’ll use to reference the function in your SQL:



bigquery.defineFunction(
  'urlDecode',                 // Name used to call the function from SQL.

  ['title', 'num_requests'],   // Input column names.

  // JSON representation of output schema.
  [{name: 'title', type: 'string'},
   {name: 'requests', type: 'integer'}],

  urlDecode                    // The UDF reference.
);

The UDF can then be invoked by the name “urlDecode” in the SQL query, with a source table or subquery as an argument. The following query looks for the most-visited French Wikipedia articles from April 2015 that contain a cédille character (ç) in the title:



SELECT requests, title
FROM
  urlDecode(
    SELECT
      title, sum(requests) AS num_requests
    FROM
      [fh-bigquery:wikipedia.pagecounts_201504]
    WHERE language = 'fr'
    GROUP EACH BY title
  )
WHERE title LIKE '%ç%'
ORDER BY requests DESC
LIMIT 100

This query processes data from a 5.6 billion row / 380 GB dataset and generally runs in less than two minutes. The cost? About $1.37, at the time of this writing.

To use a UDF in a query, it must be described via UserDefinedFunctionResource elements in your JobConfiguration request. UserDefinedFunctionResource elements can either contain inline JavaScript code or pointers to code files stored in Google Cloud Storage.
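As a rough sketch of what that request fragment can look like (field names as I understand them from the BigQuery API reference; treat them as an assumption and verify against the documentation), expressed as a JavaScript object literal with a hypothetical Cloud Storage path:

// Query portion of a jobs.insert JobConfiguration (sketch).
var jobConfiguration = {
  query: {
    query: 'SELECT requests, title FROM urlDecode(...) ORDER BY requests DESC LIMIT 100',
    userDefinedFunctionResources: [
      // Either inline the JavaScript...
      { inlineCode: 'function urlDecode(r, emit) { /* ... */ } bigquery.defineFunction(/* ... */);' },
      // ...or point at a code file stored in Google Cloud Storage (hypothetical path).
      { resourceUri: 'gs://my-bucket/udfs/url_decode.js' }
    ]
  }
};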

Under the hood

JavaScript UDFs are executed on instances of Google V8 running on Google servers. Your code runs close to your data in order to minimize added latency.

You don’t have to worry about provisioning hardware or managing pipelines to deal with data import / export. BigQuery automatically scales with the size of the data being queried in order to provide good query performance.

In addition, you only pay for what you use - there is no need to forecast usage or pre-purchase resources.

Developing your function

Interested in developing your JavaScript UDF without running up your BigQuery bill? Here is a simple browser-based widget that allows you to test and debug UDFs.

Note that not all JavaScript functionality supported in the browser is available in BigQuery. For example, anything related to the browser DOM is unsupported, including Window and Document objects, and any functions that require them, such as atob() / btoa().
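If you prefer testing from a script rather than the widget, note that a UDF is just a function that takes a row object and an emitter, so a tiny local harness is enough. A minimal sketch in plain JavaScript, reusing the urlDecode function defined above:

// Run a UDF over an array of row objects and collect everything it emits.
function runUdfLocally(udf, rows) {
  var output = [];
  rows.forEach(function (row) {
    udf(row, function emit(outRow) {
      output.push(outRow);
    });
  });
  return output;
}

// Example: one URL-encoded title, similar to what the Wikipedia table contains.
var sampleRows = [{title: 'Fran%C3%A7ois', num_requests: 42}];
console.log(runUdfLocally(urlDecode, sampleRows));
// -> [ { title: 'François', requests: 42 } ]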

Tips and tricks

Pre-filter input

In our URL-decoding example, we are passing a subquery as the input to urlDecode rather than the full table. Why?

There are about 5.6 billion rows in [fh-bigquery:wikipedia.pagecounts_201504]. However, one of the query predicates will filter the input data down to the rows where language is “fr” (French) - this is about 262 million rows. If we ran the UDF over the entire table and did the language and cédille filtering in a single WHERE clause, that would cause the JavaScript framework to process over 21 times more rows than it would with the filtered subquery. This equates to a lot of CPU cycles wasted doing unnecessary data conversion and marshalling.

If your input can easily be filtered down before invoking a UDF by using native SQL predicates, doing so will usually lead to a faster (and potentially cheaper) query.

Avoid persistent mutable state

You must not store and access mutable state across UDF execution for different rows. The following contrived example illustrates this error:



// myCode.js
var numRows = 0;

function dontDoThis(r, emit) {
  emit({rowCount: ++numRows});
}

// The query.
SELECT max(rowCount) FROM dontDoThis(myTable);

This is a problem because BigQuery will shard your query across multiple nodes, each of which has independent V8 instances and will therefore accumulate separate values for numRows.
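A safe alternative, sketched below under the same setup: keep the UDF stateless by emitting a marker for each row and let SQL do the aggregation, which is combined correctly across shards (the function would still need a bigquery.defineFunction registration like the one shown earlier):

// myCode.js: stateless version, no variables shared across rows or V8 instances.
function countRows(r, emit) {
  emit({one: 1});
}

// The query: aggregate in SQL instead of inside the UDF.
SELECT SUM(one) AS rowCount FROM countRows(myTable);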

Expand select *

You cannot execute SELECT * FROM urlDecode(...) at this time; you must explicitly list the columns being selected from the UDF: select requests, title from urlDecode(...)

For more information about BigQuery User-Defined Functions, see the full feature documentation.

Categories: Programming

Get the Do’s and Don’ts for Notifications from Game Developer Seriously

Android Developers Blog - Mon, 08/24/2015 - 17:41

Posted by Lily Sheringham, Developer Marketing at Google Play

Editor’s note: We’ve been talking to developers to find out how they’ve been achieving success on Google Play. We recently spoke to Reko Ukko at Finnish mobile game developer, Seriously, to find out how to successfully use Notifications.

Notifications on Android let you send timely, relevant, and actionable information to your users' devices. When used correctly, notifications can increase the value of your app or game and drive ongoing engagement.

Seriously is a Finnish mobile game developer focused on creating entertaining games with quality user experiences. They use push notifications to drive engagement with their players, such as helping players progress to the next level when they’ve left the app after getting stuck.

Reko Ukko, VP of Game Design at Seriously, shared his tips with us on how to use notifications to increase the value of your game and drive ongoing engagement.

Do’s and don’ts for successful game notifications

Do let the user get familiar with your service and its benefits before asking for permission to send notifications.

Don’t treat your users as if they’re all the same - identify and group them so you can push notifications that are relevant to their actions within your app.

Do include actionable context. If it looks like a player is stuck on a level, send them a tip to encourage action.

Don’t spam push notifications or interrupt game play. Get an understanding of the right frequency for your audience to fit the game.

Do consider re-activation. If the player thoroughly completes a game loop and could be interested in playing again, think about using a notification. Look at timing this shortly after the player exits the game.

Don’t just target players at all hours of the day. Choose moments when players typically play games – early morning commutes, lunch breaks, the end of the work day, and in the evening before sleeping. Take time zones into account.

Do deep link from the notification to where the user expects to go to based on the message. For example. if the notification is about "do action X in the game now to win", link to where that action can take place.

Don’t forget to expire the notifications if they’re time-limited or associated with an event. You can also recycle the same notification ID to avoid stacking notifications for the user.

Do try to make an emotional connection with the player by reflecting the style, characters, and atmosphere of your game in the notification. If the player is emotionally connected to your game, they’ll appreciate your notifications and be more likely to engage.

Don’t leave notifications up to guess work. Experiment with A/B testing and iterate to compare how different notifications affect engagement and user behavior in your app. Go beyond measuring app opening metrics – identify and respond to user behavior.

Experiment with notifications yourself to understand what’s best for your players and your game. You can power your own notifications with Google Cloud Messaging, which is free, cross platform, reliable, and thoughtful about battery usage. Find out more about developing Notifications on Android.

Join the discussion on

+Android Developers
Categories: Programming

Are YOU In Control of Your Life?

Making the Complex Simple - John Sonmez - Mon, 08/24/2015 - 13:00

Are YOU in Control of Your Life? Perhaps not. If you’ll allow me to put my philosopher’s cap on for a minute, I’ll try to explain why—oh, and it’s related to self-discipline. Free will or determinism? For a long time, a debate has raged over whether or not we, as human beings, actually have free […]

The post Are YOU In Control of Your Life? appeared first on Simple Programmer.

Categories: Programming

Every Employee is a Digital Employee

“The questions that we must ask ourselves, and that our historians and our children will ask of us, are these: How will what we create compare with what we inherited? Will we add to our tradition or will we subtract from it? Will we enrich it or will we deplete it?”
― Leon Wieseltier

Digital transformation is all around us.

And we are all digital employees according to Gartner.

In the article, Gartner Says Every Employee Is a Digital Employee, Gartner says that the IT function no longer holds a monopoly on IT.

A Greater Degree of Digital Dexterity

According to Gartner, employees are developing increasing digital dexterity, from the devices and apps they use to their participation in sharing economies.

Via Gartner Says Every Employee Is a Digital Employee:

"'Today's employees possess a greater degree of digital dexterity,' said Matt Cain, research vice president at Gartner. 'They operate their own wireless networks at home, attach and manage various devices, and use apps and Web services in almost every facet of their personal lives. They participate in sharing economies for transport, lodging and more.'"

Workers are Streamlining Their Work Life

More employees are using technology to simplify, streamline, and scale their work.

Via Gartner Says Every Employee Is a Digital Employee:

"This results in unprecedented numbers of workers who enjoy using technology and recognize the relevance of digitalization to a wide range of business models. They also routinely apply their own technology and technological knowledge to streamline their work life."

3 Ways to Exploit Digital Dexterity

According to Gartner, there are three ways the IT organization should exploit employees' digital dexterity:

  1. Implement a digital workplace strategy
  2. Embrace shadow IT
  3. Use a bimodal approach

1. Implement a Digital Workplace Strategy

While it’s happening organically, IT can also help shape the digital workplace experience.  Implement a strategy that helps workers use computing resources with less friction and that better addresses their pains, needs, and desired outcomes.

Via Gartner Says Every Employee Is a Digital Employee:

“Making computing resources more accessible in ways that match employees' preferences will foster engagement by providing feelings of empowerment and ownership. The digital workplace strategy should therefore complement HR initiatives by addressing and improving factors such as workplace culture, autonomous decision making, work-life balance, recognition of contributions and personal growth opportunities.”

2. Embrace shadow IT

Treat shadow IT as a first-class citizen.  IT should partner with the business to help the business realize its potential, and to help workers make the most of the available IT resources.

Via Gartner Says Every Employee Is a Digital Employee:

“Rather than try to fight the tide, the IT organization should develop a framework that outlines when it is appropriate for business units and individuals to use their own technology solutions and when IT should take the lead. IT should position itself as a business partner and consultant that does not control all technology decisions in the business.”

3. Use a bimodal approach

Traditional IT is slow.   It’s heavy in governance, standards, and procedures.   It addresses risk by reducing flexibility.   Meanwhile, the world is changing fast.  Business needs to keep up.  Business needs fast IT. 

So what’s the solution?

Bimodal IT.  Bimodal IT separates the fast demands of digital business from the slow/risk-averse methods of traditional IT.

Via Gartner Says Every Employee Is a Digital Employee:

“Bimodal IT separates the risk-averse and ‘slow’ methods of traditional IT from the fast-paced demands of digital business, which is underpinned by the digital workplace. This dual mode of operation is essential to satisfy the ever-increasing demands of digitally savvy business units and employees, while ensuring that critical IT infrastructure and services remain stable and uncompromised.”

Everyone has technology at their fingertips.  Every worker has the chance to re-imagine their work in a Mobile-First, Cloud-First world. 

With infinite compute, infinite capacity, global reach, and real-time insights available to you, how could you evolve your job?

You can evolve your digital work life right under your feet.

You Might Also Like

Empower Every Person on the Planet to Achieve More

Satya Nadella on a Mobile-First, Cloud-First World

We Help Our Customers Transform

Categories: Architecture, Programming

What Life is Like with Agile Results

“Courage doesn't always roar. Sometimes courage is the little voice at the end of the day that says I'll try again tomorrow.” -- Mary Anne Radmacher

Imagine if you could wake up productive, where each day is a fresh start.  As you take in your morning breath, you notice your mind is calm and clear.

You feel strong and well rested.

Before you start your day, you picture in your mind three simple scenes of the day ahead:

In the morning, you see yourself complete a draft you’ve been working on.

In the afternoon, you see yourself land your idea and win over your peers in a key meeting.

In the evening, you see yourself enjoying some quiet time as you sit down and explore your latest adventures in learning.

With an exciting day ahead, and a chance to rise and shine, you feel the day gently pull you forward with anticipation. 

You know you’ll be tested, and you know some things won’t work out as planned.   But you also know that you will learn and improve from every setback.  You know that each challenge you face will be a leadership moment or a learning opportunity.  Your challenges make you stronger.

And you also know that you will be spending as much time in your strengths as you can, and that helps keep you strong, all day long.

You motivate yourself from the inside out by focusing on your vision for today and your values.  You value achievement.  You value learning.  You value collaboration.  You value excellence.  You value empowerment.   And you know that throughout the day, you will have every chance to apply your skills to do more, to achieve more, and to be more. 

Each task, or each challenge, is also a chance to learn more.  From yourself, and from everyone all around you.  And this is how you never stop learning.

You may not like some of the tasks before you, but you like the chance to master your craft.  And you enjoy the learning.  And you love how you get better.  With each task on your To-Do list for today, you experiment and explore ways to do things better, faster, and easier.

Like a productive artist, you find ways to add unique value.   You add your personal twist to everything you do.  Your twist comes from your unique experience, seeing what others can’t see from your unique vantage point, and applying your unique strengths.

And that’s how you do more art.  Your art.  And as you do your art, you feel yourself come alive.  You feel your soul sing, as you operate at a higher level.  As you find your flow and realize your potential, your inner-wisdom winks in an approving way.  Like a garden in full bloom on a warm Summer’s day, you are living your arête.

As your work day comes to an end, you pause to reflect on your three achievements, your three wins, for the day.   You appreciate the way you leaned in on the tough stuff.  You surprised yourself in how you handled some of your most frustrating moments.  And you learned a new way to do your most challenging task.  You take note of the favorite parts of your day, and your attitude of gratitude fills you with a sense of accomplishment, and a sense of fulfillment.

Fresh and ready for anything, you head for home.

Try 30 Days of Getting Results.  It’s free. Surprise yourself with what you’re capable of.

Categories: Architecture, Programming

HTTP/2 Server Push

Xebia Blog - Sun, 08/23/2015 - 17:11

The HTTP/2 standard was finalized in May 2015. Most major browsers support it, and Google uses it heavily.

HTTP/2 leaves the basic concepts of Requests, Responses and Headers intact. Changes are mostly at the transport level, improving the performance of parallel requests - with few changes to your application. The Go HTTP/2 'gophertiles' demo nicely demonstrates this effect.

A new concept in HTTP/2 is Server Push, which allows the server to speculatively start sending resources to the client. This can potentially speed up initial page load times: the browser doesn't have to parse the HTML page and find out which other resources to load, instead the server can start sending them immediately.

This article will demonstrate how Server Push affects the load time of the 'gophertiles'.

HTTP/2 in a nutshell

The key characteristic of HTTP/2 is that all requests to a server are sent over one TCP connection, and responses can come in parallel over that connection.

Using only one connection reduces overhead caused by TCP and TLS handshakes. Allowing responses to be sent in parallel is an improvement over HTTP/1.1 pipelining, which only allows requests to be served sequentially.

Additionally, because all requests are sent over one connection, there is a Header Compression mechanism that reduces the bandwidth needed for headers that previously would have had to be repeated for each request.
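
To illustrate the multiplexing side of this (not part of the original article), here is a minimal sketch using the Python hyper library, assuming it is installed and that the host and paths point at some HTTP/2-enabled endpoint:

from hyper import HTTP20Connection

# One TCP (and TLS) connection for everything that follows.
conn = HTTP20Connection('nghttp2.org', port=443)

# Fire several requests without waiting for responses; the streams are
# multiplexed over the single connection and can complete in parallel.
stream_ids = [conn.request('GET', '/httpbin/get') for _ in range(3)]

for stream_id in stream_ids:
    response = conn.get_response(stream_id)
    print(stream_id, response.status)

Because all three streams share one connection, there is a single TCP and TLS handshake, and HPACK header compression keeps the cost of the repeated headers low.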

Server Push

Server Push allows the server to preemptively send a 'request promise' and an accompanying response to the client.

The most obvious use case for this technology is sending resources like images, CSS and JavaScript along with the page that includes them. Traditionally, the browser would have to first fetch the HTML, parse it, and then make subsequent requests for other resources. As the server can fairly accurately predict which resources a client will need, with Server Push it does not have to wait for those requests and can begin sending the resources immediately.

Of course, sometimes you really do only want to fetch the HTML and not the accompanying resources. There are two ways to accomplish this: the client can specify that it does not want to receive any pushed resources at all, or it can cancel an individual push after receiving the push promise. In the latter case the client cannot prevent the server from initiating the push, though, so some bandwidth may already have been wasted. This makes it a subtle decision whether to use Server Push for resources that the browser might already have cached.
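
As a rough illustration of the mechanics (a minimal sketch, not the gophertiles implementation), the following Python server uses the h2 library to answer a request for a page and to push /style.css alongside it. It speaks cleartext HTTP/2 for brevity, so a real browser (which requires TLS with ALPN) won't connect to it; an HTTP/2 client with prior knowledge will. The host, paths, and contents are illustrative.

import socket

from h2.config import H2Configuration
from h2.connection import H2Connection
from h2.events import RequestReceived

HTML = b'<html><head><link rel="stylesheet" href="/style.css"></head></html>'
CSS = b'body { background: #eee; }'

def handle(sock):
    conn = H2Connection(config=H2Configuration(client_side=False))
    conn.initiate_connection()
    sock.sendall(conn.data_to_send())

    while True:
        data = sock.recv(65535)
        if not data:
            break
        for event in conn.receive_data(data):
            if isinstance(event, RequestReceived):
                # Send a PUSH_PROMISE for /style.css on the client's stream
                # before the client has even parsed the HTML.
                pushed_stream_id = conn.get_next_available_stream_id()
                conn.push_stream(
                    stream_id=event.stream_id,
                    promised_stream_id=pushed_stream_id,
                    request_headers=[
                        (':method', 'GET'), (':path', '/style.css'),
                        (':authority', 'localhost'), (':scheme', 'http'),
                    ],
                )
                # Answer the original request for the HTML page ...
                conn.send_headers(event.stream_id,
                                  [(':status', '200'),
                                   ('content-type', 'text/html')])
                conn.send_data(event.stream_id, HTML, end_stream=True)
                # ... and immediately deliver the promised stylesheet.
                conn.send_headers(pushed_stream_id,
                                  [(':status', '200'),
                                   ('content-type', 'text/css')])
                conn.send_data(pushed_stream_id, CSS, end_stream=True)
        sock.sendall(conn.data_to_send())

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('0.0.0.0', 8080))
server.listen(5)
while True:
    client, _ = server.accept()
    handle(client)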

Demo

To show the effect HTTP/2 Server Push can have, I have extended the gophertiles demo to be able to test behavior with and without Server Push; it is hosted on an old Raspberry Pi.

Both the latency of loading the HTML and the latency of loading each tile are now artificially increased.

When visiting the page without Server Push and with an artificial latency of 1000 ms, you will notice that loading the HTML takes at least one second, and then loading all the images in parallel takes at least another second - so rendering the complete page takes well above two seconds.

With Server Push enabled, you will see that after the DOM has loaded, the images are almost immediately there, because they have already been pushed.

All that glitters, however, is not gold: as you will notice when experimenting (especially at lower artificial latencies), while Server Push fairly reliably reduces the complete load time, it sometimes increases the time until the DOM content is loaded. While this makes sense (the browser needs to process the frames for the pushed resources), it could have an impact on the perceived performance of your page: for example, it could delay running JavaScript code embedded in your site.

HTTP/2 does give you tools to tune this, such as stream priorities, but they may require careful tuning and must be supported by the HTTP/2 library you choose.

Conclusions

HTTP/2 is here today, and can provide a considerable improvement in perceived performance - even with few changes in your application.

Server Push potentially allows you to improve your page loading times even further, but requires careful analysis and tuning - otherwise it might even have an adverse effect.

Beacons, the Internet of things, and more: Coffee with Timothy Jordan

Google Code Blog - Sat, 08/22/2015 - 00:31

Posted by Laurence Moroney, Developer Advocate

In this episode of Coffee With a Googler, Laurence meets with Developer Advocate Timothy Jordan to talk about all things Ubiquitous Computing at Google. Learn about the platforms and services that help developers reach their users wherever it makes sense.

We discuss Brillo, which extends the Android Platform to 'Internet of Things' embedded devices, as well as Weave, which is a services layer that helps all those devices work together seamlessly.

We also chat about beacons and how they can give context to the world around you, making the user experience simpler. Traditionally, users need to tell you about their location and other types of context, but with beacons, the environment can speak to you. When it comes to developing for beacons, Timothy introduces us to Eddystone, a protocol specification for Bluetooth Low Energy (BLE) beacons; the Proximity Beacon API, which allows developers to register a beacon and associate data with it; and the Nearby Messages API, which helps your app 'sight' and retrieve data about nearby beacons.

Timothy and his team have produced a new Udacity series on ubiquitous computing that you can access for free! Take the course to learn more about ubiquitous computing, the design paradigms involved, and the technical specifics for extending to Android Wear, Google Cast, Android TV, and Android Auto.

Also, don't forget to join us for a ubiquitous computing summit on November 9th & 10th in San Francisco. Sign up here and we'll keep you updated.

Categories: Programming

Neo4j: Summarising neo4j-shell output

Mark Needham - Fri, 08/21/2015 - 21:59

I frequently find myself trying to optimise a set of Cypher queries and I tend to group them together in a script that I feed to the Neo4j shell.

When tweaking the queries it’s easy to make a mistake and end up not creating the same data, so I decided to write a script which will show me the aggregates of all the commands executed.

I want to see the number of constraints created, indexes added, and nodes, relationships, and properties created. The first two don’t need to match across the scripts, but the latter three should be the same.

I put together the following script:

import re
import sys
from tabulate import tabulate

# Read the piped neo4j-shell output from stdin
lines = sys.stdin.readlines()

def search(term, line):
    # Extract the count from lines such as "Nodes created: 13249"
    m = re.match(term + ": (.*)", line)
    return (int(m.group(1)) if m else 0)

nodes_created, relationships_created, constraints_added, indexes_added, labels_added, properties_set = 0, 0, 0, 0, 0, 0
time = "N/A"

for line in lines:
    nodes_created = nodes_created + search("Nodes created", line)
    relationships_created = relationships_created + search("Relationships created", line)
    constraints_added = constraints_added + search("Constraints added", line)
    indexes_added = indexes_added + search("Indexes added", line)
    labels_added = labels_added + search("Labels added", line)
    properties_set = properties_set + search("Properties set", line)

    # Capture the wall-clock time reported by the shell's 'time' builtin
    time_match = re.match(r"real.*([0-9]+m[0-9]+\.[0-9]+s)$", line)

    if time_match:
        time = time_match.group(1)

table = [
            ["Constraints added", constraints_added],
            ["Indexes added", indexes_added],
            ["Nodes created", nodes_created],
            ["Relationships created", relationships_created],
            ["Labels added", labels_added],
            ["Properties set", properties_set],
            ["Time", time]
         ]
print(tabulate(table))

Its input is the piped output of the neo4j-shell command, which will contain a description of all the queries it executed.

$ cat import.sh
#!/bin/sh
 
{ ./neo4j-community-2.2.3/bin/neo4j stop; } 2>&1
rm -rf neo4j-community-2.2.3/data/graph.db/
{ ./neo4j-community-2.2.3/bin/neo4j start; } 2>&1
{ time ./neo4j-community-2.2.3/bin/neo4j-shell --file $1; } 2>&1

We can use the script in two ways.

Either we can pipe the output of our shell straight into it and just get the summary, e.g.

$ ./import.sh local.import.optimised.cql | python summarise.py
 
---------------------  ---------
Constraints added      5
Indexes added          1
Nodes created          13249
Relationships created  32227
Labels added           21715
Properties set         36480
Time                   0m17.595s
---------------------  ---------

…or we can make use of the Unix ‘tee’ command to pipe the output both to stdout and to a file, and then either tail the file in another window or inspect it afterwards to see the detailed timings, e.g.

$ ./import.sh local.import.optimised.cql | tee /tmp/output.txt |  python summarise.py
 
---------------------  ---------
Constraints added      5
Indexes added          1
Nodes created          13249
Relationships created  32227
Labels added           21715
Properties set         36480
Time                   0m11.428s
---------------------  ---------
$ tail -f /tmp/output.txt
+-------------+
| appearances |
+-------------+
| 3771        |
+-------------+
1 row
Nodes created: 3439
Properties set: 3439
Labels added: 3439
289 ms
+------------------------------------+
| appearances -> player, match, team |
+------------------------------------+
| 3771                               |
+------------------------------------+
1 row
Relationships created: 10317
1006 ms
...

My only dependency is the tabulate package to get the pretty table:

$ cat requirements.txt
 
tabulate==0.7.5

The cypher script I’m running creates a BBC football graph which is available as a github project. Feel free to grab it and play around – any problems let me know!

Categories: Programming