
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools



Neo4j: Cypher – Avoiding the Eager

Mark Needham - Thu, 10/23/2014 - 06:56

Although I love how easy Cypher’s LOAD CSV command makes it to get data into Neo4j, it currently breaks the rule of least surprise by eagerly loading in all rows for some queries, even those using PERIODIC COMMIT.

[Image from Neverwhere: beware of the eager pipe]

This is something that my colleague Michael noted in the second of his blog posts explaining how to use LOAD CSV successfully:

The biggest issue that people ran into, even when following the advice I gave earlier, was that for large imports of more than one million rows, Cypher ran into an out-of-memory situation.

That was not related to commit sizes, so it happened even with PERIODIC COMMIT of small batches.

I recently spent a few days importing data into Neo4j on a Windows machine with 4GB RAM so I was seeing this problem even earlier than Michael suggested.

Michael explains how to work out whether your query is suffering from unexpected eager evaluation:

If you profile that query you see that there is an “Eager” step in the query plan.

That is where the “pull in all data” happens.

You can profile a query by prefixing it with the keyword PROFILE. You’ll need to run the query in the /webadmin console in your web browser or in the Neo4j shell.

I did this for my queries and was able to identify query patterns which get evaluated eagerly; in some cases we can work around them.

We’ll use the Northwind data set to demonstrate how the Eager pipe can sneak into our queries but keep in mind that this data set is sufficiently small to not cause issues.

This is what a row in the file looks like:

$ head -n 2 data/customerDb.csv
OrderID,CustomerID,EmployeeID,OrderDate,RequiredDate,ShippedDate,ShipVia,Freight,ShipName,ShipAddress,ShipCity,ShipRegion,ShipPostalCode,ShipCountry,CustomerID,CustomerCompanyName,ContactName,ContactTitle,Address,City,Region,PostalCode,Country,Phone,Fax,EmployeeID,LastName,FirstName,Title,TitleOfCourtesy,BirthDate,HireDate,Address,City,Region,PostalCode,Country,HomePhone,Extension,Photo,Notes,ReportsTo,PhotoPath,OrderID,ProductID,UnitPrice,Quantity,Discount,ProductID,ProductName,SupplierID,CategoryID,QuantityPerUnit,UnitPrice,UnitsInStock,UnitsOnOrder,ReorderLevel,Discontinued,SupplierID,SupplierCompanyName,ContactName,ContactTitle,Address,City,Region,PostalCode,Country,Phone,Fax,HomePage,CategoryID,CategoryName,Description,Picture
10248,VINET,5,1996-07-04,1996-08-01,1996-07-16,3,32.38,Vins et alcools Chevalier,59 rue de l'Abbaye,Reims,,51100,France,VINET,Vins et alcools Chevalier,Paul Henriot,Accounting Manager,59 rue de l'Abbaye,Reims,,51100,France,26.47.15.10,26.47.15.11,5,Buchanan,Steven,Sales Manager,Mr.,1955-03-04,1993-10-17,14 Garrett Hill,London,,SW1 8JR,UK,(71) 555-4848,3453,\x,"Steven Buchanan graduated from St. Andrews University, Scotland, with a BSC degree in 1976.  Upon joining the company as a sales representative in 1992, he spent 6 months in an orientation program at the Seattle office and then returned to his permanent post in London.  He was promoted to sales manager in March 1993.  Mr. Buchanan has completed the courses ""Successful Telemarketing"" and ""International Sales Management.""  He is fluent in French.",2,http://accweb/emmployees/buchanan.bmp,10248,11,14,12,0,11,Queso Cabrales,5,4,1 kg pkg.,21,22,30,30,0,5,Cooperativa de Quesos 'Las Cabras',Antonio del Valle Saavedra,Export Administrator,Calle del Rosal 4,Oviedo,Asturias,33007,Spain,(98) 598 76 54,,,4,Dairy Products,Cheeses,\x
MERGE, MERGE, MERGE

The first thing we want to do is create a node for each employee and each order and then create a relationship between them.

We might start with the following query:

USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MERGE (employee:Employee {employeeId: row.EmployeeID})
MERGE (order:Order {orderId: row.OrderID})
MERGE (employee)-[:SOLD]->(order)

This does the job but if we profile the query like so…

PROFILE LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
WITH row LIMIT 0
MERGE (employee:Employee {employeeId: row.EmployeeID})
MERGE (order:Order {orderId: row.OrderID})
MERGE (employee)-[:SOLD]->(order)

…we’ll notice an ‘Eager’ lurking on the third line:

==> +----------------+------+--------+----------------------------------+-----------------------------------------+
==> |       Operator | Rows | DbHits |                      Identifiers |                                   Other |
==> +----------------+------+--------+----------------------------------+-----------------------------------------+
==> |    EmptyResult |    0 |      0 |                                  |                                         |
==> | UpdateGraph(0) |    0 |      0 |    employee, order,   UNNAMED216 |                            MergePattern |
==> |          Eager |    0 |      0 |                                  |                                         |
==> | UpdateGraph(1) |    0 |      0 | employee, employee, order, order | MergeNode; :Employee; MergeNode; :Order |
==> |          Slice |    0 |      0 |                                  |                            {  AUTOINT0} |
==> |        LoadCSV |    1 |      0 |                              row |                                         |
==> +----------------+------+--------+----------------------------------+-----------------------------------------+

You’ll notice that when we profile each query we strip off the periodic commit section and add a ‘WITH row LIMIT 0’. This allows us to generate enough of the query plan to identify the ‘Eager’ operator without actually importing any data.

We want to split that query into two so that it can be processed in a non-eager manner:

USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
WITH row LIMIT 0
MERGE (employee:Employee {employeeId: row.EmployeeID})
MERGE (order:Order {orderId: row.OrderID})
==> +-------------+------+--------+----------------------------------+-----------------------------------------+
==> |    Operator | Rows | DbHits |                      Identifiers |                                   Other |
==> +-------------+------+--------+----------------------------------+-----------------------------------------+
==> | EmptyResult |    0 |      0 |                                  |                                         |
==> | UpdateGraph |    0 |      0 | employee, employee, order, order | MergeNode; :Employee; MergeNode; :Order |
==> |       Slice |    0 |      0 |                                  |                            {  AUTOINT0} |
==> |     LoadCSV |    1 |      0 |                              row |                                         |
==> +-------------+------+--------+----------------------------------+-----------------------------------------+

Now that we’ve created the employees and orders we can join them together:

USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MATCH (employee:Employee {employeeId: row.EmployeeID})
MATCH (order:Order {orderId: row.OrderID})
MERGE (employee)-[:SOLD]->(order)
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+
==> |       Operator | Rows | DbHits |                   Identifiers |                                                     Other |
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+
==> |    EmptyResult |    0 |      0 |                               |                                                           |
==> |    UpdateGraph |    0 |      0 | employee, order,   UNNAMED216 |                                              MergePattern |
==> |      Filter(0) |    0 |      0 |                               |          Property(order,orderId) == Property(row,OrderID) |
==> | NodeByLabel(0) |    0 |      0 |                  order, order |                                                    :Order |
==> |      Filter(1) |    0 |      0 |                               | Property(employee,employeeId) == Property(row,EmployeeID) |
==> | NodeByLabel(1) |    0 |      0 |            employee, employee |                                                 :Employee |
==> |          Slice |    0 |      0 |                               |                                              {  AUTOINT0} |
==> |        LoadCSV |    1 |      0 |                           row |                                                           |
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+

Not an Eager in sight!

MATCH, MATCH, MATCH, MERGE, MERGE

If we fast forward a few steps, we may now have refactored our import script to the point where we create our nodes in one query and the relationships in another.

Our create query works as expected:

USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MERGE (employee:Employee {employeeId: row.EmployeeID})
MERGE (order:Order {orderId: row.OrderID})
MERGE (product:Product {productId: row.ProductID})
==> +-------------+------+--------+----------------------------------------------------+--------------------------------------------------------------+
==> |    Operator | Rows | DbHits |                                        Identifiers |                                                        Other |
==> +-------------+------+--------+----------------------------------------------------+--------------------------------------------------------------+
==> | EmptyResult |    0 |      0 |                                                    |                                                              |
==> | UpdateGraph |    0 |      0 | employee, employee, order, order, product, product | MergeNode; :Employee; MergeNode; :Order; MergeNode; :Product |
==> |       Slice |    0 |      0 |                                                    |                                                 {  AUTOINT0} |
==> |     LoadCSV |    1 |      0 |                                                row |                                                              |
==> +-------------+------+--------+----------------------------------------------------+--------------------------------------------------------------+

We’ve now got employees, products and orders in the graph. Next let’s create the relationships between the trio:

USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MATCH (employee:Employee {employeeId: row.EmployeeID})
MATCH (order:Order {orderId: row.OrderID})
MATCH (product:Product {productId: row.ProductID})
MERGE (employee)-[:SOLD]->(order)
MERGE (order)-[:PRODUCT]->(product)

If we profile that we’ll notice Eager has sneaked in again!

==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+
==> |       Operator | Rows | DbHits |                   Identifiers |                                                     Other |
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+
==> |    EmptyResult |    0 |      0 |                               |                                                           |
==> | UpdateGraph(0) |    0 |      0 |  order, product,   UNNAMED318 |                                              MergePattern |
==> |          Eager |    0 |      0 |                               |                                                           |
==> | UpdateGraph(1) |    0 |      0 | employee, order,   UNNAMED287 |                                              MergePattern |
==> |      Filter(0) |    0 |      0 |                               |    Property(product,productId) == Property(row,ProductID) |
==> | NodeByLabel(0) |    0 |      0 |              product, product |                                                  :Product |
==> |      Filter(1) |    0 |      0 |                               |          Property(order,orderId) == Property(row,OrderID) |
==> | NodeByLabel(1) |    0 |      0 |                  order, order |                                                    :Order |
==> |      Filter(2) |    0 |      0 |                               | Property(employee,employeeId) == Property(row,EmployeeID) |
==> | NodeByLabel(2) |    0 |      0 |            employee, employee |                                                 :Employee |
==> |          Slice |    0 |      0 |                               |                                              {  AUTOINT0} |
==> |        LoadCSV |    1 |      0 |                           row |                                                           |
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+

In this case the Eager happens on our second call to MERGE as Michael identified in his post:

The issue is that within a single Cypher statement you have to isolate changes that affect matches further on, e.g. when you CREATE nodes with a label that are suddenly matched by a later MATCH or MERGE operation.

We can work around the problem in this case by having separate queries to create the relationships:

LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MATCH (employee:Employee {employeeId: row.EmployeeID})
MATCH (order:Order {orderId: row.OrderID})
MERGE (employee)-[:SOLD]->(order)
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+
==> |       Operator | Rows | DbHits |                   Identifiers |                                                     Other |
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+
==> |    EmptyResult |    0 |      0 |                               |                                                           |
==> |    UpdateGraph |    0 |      0 | employee, order,   UNNAMED236 |                                              MergePattern |
==> |      Filter(0) |    0 |      0 |                               |          Property(order,orderId) == Property(row,OrderID) |
==> | NodeByLabel(0) |    0 |      0 |                  order, order |                                                    :Order |
==> |      Filter(1) |    0 |      0 |                               | Property(employee,employeeId) == Property(row,EmployeeID) |
==> | NodeByLabel(1) |    0 |      0 |            employee, employee |                                                 :Employee |
==> |          Slice |    0 |      0 |                               |                                              {  AUTOINT0} |
==> |        LoadCSV |    1 |      0 |                           row |                                                           |
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+
USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MATCH (order:Order {orderId: row.OrderID})
MATCH (product:Product {productId: row.ProductID})
MERGE (order)-[:PRODUCT]->(product)
==> +----------------+------+--------+------------------------------+--------------------------------------------------------+
==> |       Operator | Rows | DbHits |                  Identifiers |                                                  Other |
==> +----------------+------+--------+------------------------------+--------------------------------------------------------+
==> |    EmptyResult |    0 |      0 |                              |                                                        |
==> |    UpdateGraph |    0 |      0 | order, product,   UNNAMED229 |                                           MergePattern |
==> |      Filter(0) |    0 |      0 |                              | Property(product,productId) == Property(row,ProductID) |
==> | NodeByLabel(0) |    0 |      0 |             product, product |                                               :Product |
==> |      Filter(1) |    0 |      0 |                              |       Property(order,orderId) == Property(row,OrderID) |
==> | NodeByLabel(1) |    0 |      0 |                 order, order |                                                 :Order |
==> |          Slice |    0 |      0 |                              |                                           {  AUTOINT0} |
==> |        LoadCSV |    1 |      0 |                          row |                                                        |
==> +----------------+------+--------+------------------------------+--------------------------------------------------------+
MERGE, SET

I try to make LOAD CSV scripts as idempotent as possible so that if we add more rows or columns of data to our CSV we can rerun the query without having to recreate everything.

This can lead you towards the following pattern where we’re creating suppliers:

USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MERGE (supplier:Supplier {supplierId: row.SupplierID})
SET supplier.companyName = row.SupplierCompanyName

We want to ensure that there’s only one Supplier with that SupplierID but we might be incrementally adding new properties and decide to just replace everything by using the ‘SET’ command. If we profile that query, the Eager lurks:

==> +----------------+------+--------+--------------------+----------------------+
==> |       Operator | Rows | DbHits |        Identifiers |                Other |
==> +----------------+------+--------+--------------------+----------------------+
==> |    EmptyResult |    0 |      0 |                    |                      |
==> | UpdateGraph(0) |    0 |      0 |                    |          PropertySet |
==> |          Eager |    0 |      0 |                    |                      |
==> | UpdateGraph(1) |    0 |      0 | supplier, supplier | MergeNode; :Supplier |
==> |          Slice |    0 |      0 |                    |         {  AUTOINT0} |
==> |        LoadCSV |    1 |      0 |                row |                      |
==> +----------------+------+--------+--------------------+----------------------+

We can work around this, at the cost of a bit of duplication, by using ‘ON CREATE SET’ and ‘ON MATCH SET’:

USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MERGE (supplier:Supplier {supplierId: row.SupplierID})
ON CREATE SET supplier.companyName = row.SupplierCompanyName
ON MATCH SET supplier.companyName = row.SupplierCompanyName
==> +-------------+------+--------+--------------------+----------------------+
==> |    Operator | Rows | DbHits |        Identifiers |                Other |
==> +-------------+------+--------+--------------------+----------------------+
==> | EmptyResult |    0 |      0 |                    |                      |
==> | UpdateGraph |    0 |      0 | supplier, supplier | MergeNode; :Supplier |
==> |       Slice |    0 |      0 |                    |         {  AUTOINT0} |
==> |     LoadCSV |    1 |      0 |                row |                      |
==> +-------------+------+--------+--------------------+----------------------+

With the data set I’ve been working with I was able to avoid OutOfMemory exceptions in some cases and reduce the amount of time it took to run the query by a factor of 3 in others.

As time goes on I expect all of these scenarios will be addressed but as of Neo4j 2.1.5 these are the patterns that I’ve identified as being overly eager.

If you know of any others do let me know and I can add them to the post or write a second part.

Categories: Programming

AppCompat v21 — Material Design for Pre-Lollipop Devices!

Android Developers Blog - Wed, 10/22/2014 - 22:58

By Chris Banes, Android Developer Relations

The Android 5.0 SDK was released last Friday, featuring new UI widgets and material design, our visual language focused on good design. To enable you to bring your latest designs to older Android platforms we have expanded our support libraries, including a major update to AppCompat, as well as new RecyclerView, CardView and Palette libraries.

In this post we'll take a look at what’s new in AppCompat and how you can use it to support material design in your apps.

What's new in AppCompat?

AppCompat (aka ActionBarCompat) started out as a backport of the Android 4.0 ActionBar API for devices running on Gingerbread, providing a common API layer on top of the backported implementation and the framework implementation. AppCompat v21 delivers an API and feature-set that is up-to-date with Android 5.0.

In this release, Android introduces a new Toolbar widget. This is a generalization of the Action Bar pattern that gives you much more control and flexibility. Toolbar is a view in your hierarchy just like any other, making it easier to interleave with the rest of your views, animate it, and react to scroll events. You can also set it as your Activity’s action bar, meaning that your standard options menu actions will be displayed within it.

You’ve likely already been using the latest update to AppCompat for a while – it has been included in various Google app updates over the past few weeks, including Play Store and Play Newsstand. It has also been integrated into the Google I/O Android app, which is open source.

Setup

If you’re using Gradle, add appcompat as a dependency in your build.gradle file:

dependencies {
    compile "com.android.support:appcompat-v7:21.0.+"
}
New integration

If you are not currently using AppCompat, or you are starting from scratch, here's how to set it up:

  • All of your Activities must extend from ActionBarActivity, which extends from FragmentActivity from the v4 support library, so you can continue to use fragments. (A minimal sketch follows this list.)
  • All of your themes (that want an Action Bar/Toolbar) must inherit from Theme.AppCompat. There are variants available, including Light and NoActionBar.
  • When inflating anything to be displayed on the action bar (such as a SpinnerAdapter for list navigation in the toolbar), make sure you use the action bar’s themed context, retrieved via getSupportActionBar().getThemedContext().
  • You must use the static methods in MenuItemCompat for any action-related calls on a MenuItem.
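
To tie the first and last of those points together, here’s a minimal sketch of an AppCompat-ready Activity – the class, layout, and menu resource names are hypothetical:

import android.os.Bundle;
import android.support.v4.view.MenuItemCompat;
import android.support.v7.app.ActionBarActivity;
import android.view.Menu;
import android.view.MenuItem;

public class MainActivity extends ActionBarActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main); // hypothetical layout resource
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        getMenuInflater().inflate(R.menu.main, menu); // hypothetical menu resource
        // Route action-related calls on a MenuItem through MenuItemCompat
        MenuItem search = menu.findItem(R.id.action_search); // hypothetical menu item
        MenuItemCompat.setShowAsAction(search, MenuItemCompat.SHOW_AS_ACTION_IF_ROOM);
        return true;
    }
}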

For more information, see the Action Bar API guide which is a comprehensive guide on AppCompat.

Migration from previous setup

For most apps, you now only need one theme declaration, in values/:

values/themes.xml:

<style name="Theme.MyTheme" parent="Theme.AppCompat.Light">
    <!-- Set AppCompat’s actionBarStyle -->
    <item name="actionBarStyle">@style/MyActionBarStyle</item>

    <!-- Set AppCompat’s color theming attrs -->
    <item name="colorPrimary">@color/my_awesome_red</item>
    <item name="colorPrimaryDark">@color/my_awesome_darker_red</item>
    
    <!-- The rest of your attributes -->
</style>

You can now remove all of your values-v14+ Action Bar styles.

Theming

AppCompat has support for the new color palette theme attributes which allow you to easily customize your theme to fit your brand with primary and accent colors. For example:

values/themes.xml:

<style name="Theme.MyTheme" parent="Theme.AppCompat.Light">
    <!-- colorPrimary is used for the default action bar background -->
    <item name="colorPrimary">@color/my_awesome_color</item>

    <!-- colorPrimaryDark is used for the status bar -->
    <item name="colorPrimaryDark">@color/my_awesome_darker_color</item>

    <!-- colorAccent is used as the default value for colorControlActivated,
         which is used to tint widgets -->
    <item name="colorAccent">@color/accent</item>

    <!-- You can also set colorControlNormal, colorControlActivated
         colorControlHighlight, and colorSwitchThumbNormal. -->
    
</style>

When you set these attributes, AppCompat automatically propagates their values to the framework attributes on API 21+. This automatically colors the status bar and Overview (Recents) task entry.

On older platforms, AppCompat emulates the color theming where possible. At the moment this is limited to coloring the action bar and some widgets.

Widget tinting

When running on devices with Android 5.0, all of the widgets are tinted using the color theme attributes we just talked about. There are two main features which allow this on Lollipop: drawable tinting, and referencing theme attributes (of the form ?attr/foo) in drawables.

AppCompat provides similar behaviour on earlier versions of Android for a subset of UI widgets (such as EditText, Spinner, CheckBox, RadioButton, SwitchCompat, and CheckedTextView).

You don’t need to do anything special to make these work, just use these controls in your layouts as usual and AppCompat will do the rest (with some caveats; see the FAQ below).

Toolbar Widget

Toolbar is fully supported in AppCompat and has feature and API parity with the framework widget. In AppCompat, Toolbar is implemented in the android.support.v7.widget.Toolbar class. There are two ways to use Toolbar:

  • Use a Toolbar as an Action Bar when you want to use the existing Action Bar facilities (such as menu inflation and selection, ActionBarDrawerToggle, and so on) but want to have more control over its appearance.
  • Use a standalone Toolbar when you want to use the pattern in your app for situations that an Action Bar would not support; for example, showing multiple toolbars on the screen, spanning only part of the width, and so on.
Action Bar

To use Toolbar as an Action Bar, first disable the decor-provided Action Bar. The easiest way is to have your theme extend from Theme.AppCompat.NoActionBar (or its light variant).

Second, create a Toolbar instance, usually via your layout XML:

<android.support.v7.widget.Toolbar
    android:id="@+id/my_awesome_toolbar"
    android:layout_height="wrap_content"
    android:layout_width="match_parent"
    android:minHeight="?attr/actionBarSize"
    android:background="?attr/colorPrimary" />

The height, width, background, and so on are totally up to you; these are just good examples. As Toolbar is just a ViewGroup, you can style and position it however you want.

Then in your Activity or Fragment, set the Toolbar to act as your Action Bar:

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.blah);

    Toolbar toolbar = (Toolbar) findViewById(R.id.my_awesome_toolbar);
    setSupportActionBar(toolbar);
}

From this point on, all menu items are displayed in your Toolbar, populated via the standard options menu callbacks.
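
For example, the usual callbacks look like this (the menu resource and item names are hypothetical):

@Override
public boolean onCreateOptionsMenu(Menu menu) {
    // These items are rendered inside the Toolbar acting as the Action Bar
    getMenuInflater().inflate(R.menu.main, menu); // hypothetical res/menu/main.xml
    return true;
}

@Override
public boolean onOptionsItemSelected(MenuItem item) {
    if (item.getItemId() == R.id.action_settings) { // hypothetical menu item
        // Handle the selection
        return true;
    }
    return super.onOptionsItemSelected(item);
}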

Standalone

The difference in standalone mode is that you do not set the Toolbar to act as your action bar. For this reason, you can use any AppCompat theme and you do not need to disable the decor-provided Action Bar.

In standalone mode, you need to manually populate the Toolbar with content/actions. For instance, if you want it to display actions, you need to inflate a menu into it:

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.blah);

    Toolbar toolbar = (Toolbar) findViewById(R.id.my_awesome_toolbar);

    // Set an OnMenuItemClickListener to handle menu item clicks
    toolbar.setOnMenuItemClickListener(new Toolbar.OnMenuItemClickListener() {
        @Override
        public boolean onMenuItemClick(MenuItem item) {
            // Handle the menu item
            return true;
        }
    });

    // Inflate a menu to be displayed in the toolbar
    toolbar.inflateMenu(R.menu.your_toolbar_menu);
}

There are many other things you can do with Toolbar. For more information, see the Toolbar API reference.

Styling

Styling of Toolbar is done differently to the standard action bar, and is set directly onto the view.

Here's a basic style you should be using when you're using a Toolbar as your action bar:

<android.support.v7.widget.Toolbar  
    android:layout_height="wrap_content"
    android:layout_width="match_parent"
    android:minHeight="?attr/actionBarSize"
    app:theme="@style/ThemeOverlay.AppCompat.ActionBar" />

The app:theme declaration will make sure that your text and items are using solid colors (i.e. 100% opacity white).

DarkActionBar

You can style Toolbar instances directly using layout attributes. To achieve a Toolbar which looks like 'DarkActionBar' (dark content, light overflow menu), provide the theme and popupTheme attributes:

<android.support.v7.widget.Toolbar
    android:layout_height="wrap_content"
    android:layout_width="match_parent"
    android:minHeight="@dimen/triple_height_toolbar"
    app:theme="@style/ThemeOverlay.AppCompat.Dark.ActionBar"
    app:popupTheme="@style/ThemeOverlay.AppCompat.Light" />
SearchView Widget

AppCompat offers Lollipop’s updated SearchView API, which is far more customizable and styleable (cue the applause). We now use the Lollipop style structure instead of the old searchView* theme attributes.

Here’s how you style SearchView:

values/themes.xml:
<style name="Theme.MyTheme" parent="Theme.AppCompat">
    <item name="searchViewStyle">@style/MySearchViewStyle</item>
</style>
<style name="MySearchViewStyle" parent="Widget.AppCompat.SearchView">
    <!-- Background for the search query section (e.g. EditText) -->
    <item name="queryBackground">...</item>
    <!-- Background for the actions section (e.g. voice, submit) -->
    <item name="submitBackground">...</item>
    <!-- Close button icon -->
    <item name="closeIcon">...</item>
    <!-- Search button icon -->
    <item name="searchIcon">...</item>
    <!-- Go/commit button icon -->
    <item name="goIcon">...</item>
    <!-- Voice search button icon -->
    <item name="voiceIcon">...</item>
    <!-- Commit icon shown in the query suggestion row -->
    <item name="commitIcon">...</item>
    <!-- Layout for query suggestion rows -->
    <item name="suggestionRowLayout">...</item>
</style>

You do not need to set all (or any) of these; the defaults will work for the majority of apps.

Toolbar is coming...

Hopefully this post will help you get up and running with AppCompat and let you create some awesome material apps. Let us know in the comments/G+/Twitter if you have questions about AppCompat or any of the support libraries, or where we could provide more documentation.

FAQ
Why is my EditText (or other widget listed above) not being tinted correctly on my pre-Lollipop device?

The widget tinting in AppCompat works by intercepting any layout inflation and inserting a special tint-aware version of the widget in its place. For most people this will work fine, but I can think of a few scenarios where this won’t work, including:

  • You have your own custom version of the widget (i.e. you’ve extended EditText)
  • You are creating the EditText without a LayoutInflater (i.e., calling new EditText()).

The special tint-aware widgets are currently hidden as they’re an unfinished implementation detail. This may change in the future.

Why has X widget not been material-styled when running on pre-Lollipop?
Only some of the most common widgets have been updated so far. There are more coming in future releases of AppCompat.
Why does my Action Bar have a shadow on Android Lollipop? I’ve set android:windowContentOverlay to null.
On Lollipop, the action bar shadow is provided using the new elevation API. To remove it, either call getSupportActionBar().setElevation(0), or set the elevation attribute in your Action Bar style.
Why are there no ripples on pre-Lollipop?
A lot of what allows RippleDrawable to run smoothly is Android 5.0’s new RenderThread. To optimize for performance on previous versions of Android, we've left RippleDrawable out for now.
How do I use AppCompat with Preferences?
You can continue to use PreferenceFragment in your ActionBarActivity when running on an API v11+ device. For devices before that, you will need to provide a normal PreferenceActivity which is not material-styled.
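
For example, a minimal sketch – PreferenceFragment is a framework fragment, so it goes through getFragmentManager() rather than the support fragment manager, and the preferences resource name is hypothetical:

import android.os.Bundle;
import android.preference.PreferenceFragment;
import android.support.v7.app.ActionBarActivity;

public class SettingsActivity extends ActionBarActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Framework FragmentManager, available on API 11+
        getFragmentManager().beginTransaction()
                .replace(android.R.id.content, new SettingsFragment())
                .commit();
    }

    public static class SettingsFragment extends PreferenceFragment {
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            addPreferencesFromResource(R.xml.preferences); // hypothetical res/xml/preferences.xml
        }
    }
}
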
Join the discussion on

+Android Developers
Categories: Programming

Episode 212: Randy Shoup on Company Culture

Tobias Kaatz talks to former Kixeye CTO Randy Shoup about company culture in the software industry in this sequel to the show on hiring in the software industry (Episode 208). Prior to Kixeye, Randy worked as director of engineering at Google for the Google App Engine and as chief engineer and distinguished architect at eBay. […]
Categories: Programming

Think a Series of Sprints, Not Marathons

When you drive business change and digital initiatives with Cloud, Mobile, Social, and Big Data (and Internet of Things), successful businesses think a series of sprints, not marathons.

Successful businesses go digital by transforming their customer experiences, their employee experiences, and their back-office experiences through rapid prototyping, building proofs-of-concept, testing pilots, and going to production.  It’s a fast cycle of prototype –> proof-of-concept –> pilot –> production.

These short cycles create rapid learning loops, build momentum, and help adapt for change.

In the book, Leading Digital: Turning Technology into Business Transformation, George Westerman, Didier Bonnet, and Andrew McAfee, share some of their lessons learned in driving digital initiatives and agile transformation.

The Digital World Moves Quickly

Avoid Big Up Front Design.  Whenever there is a big lag time between designing it, developing it, and using it, you’re introducing more risk.  You’re breaking feedback loops.  You’re falling into the pit of analysis paralysis.   Focus on “just enough design” so that you can test what works and what doesn’t, and respond accordingly.

Via Leading Digital:

“The digital world moves quickly.  The rapid pace of technology innovation today does not lend itself to multiyear planning and waterfall development methods common in the ERP era.  Markets change, new technologies become mainstream, and disruptive entrants begin courting your customers.  Your roadmap will need to be nimble enough to recognize these changes, adapt for them, and course-correct.”

Keep a Vision in Mind and Build on Success Along the Way

Hold on to the vision and use that to guide you as you test your ideas and implement them, without getting bogged down.

Via Leading Digital:

“To design an agile transformation, borrow an approach that has become common among today's leading software companies.  Keep people committed to the end goal, but pace your initiatives as short sprints of effort.  Create prototype solutions, and experiment with new technologies or approaches.  Evaluate the results, and incorporate the results into your evolving roadmap.  Adam Brotman, Starbucks CDO, explained the iterative process: 'We didn't have all the answers, but we started thinking about other things we could do ... I think it worked not to go too far, too fast, but to keep a vision in mind and keep building on success along the way.'”

Test Ideas, Save Time, Adapt to Changes

Short cycle times help you respond to market change and adapt as you learn what works and what doesn’t.

Via Leading Digital:

“The test-and-learn approach will require some new ways of working in its own right, but it enjoys some distinct advantages.  By marketing ideas quickly before they go to scale, this approach saves time and money.  Its short cycle times also make it more adaptive to external changes.  Finally, it enables your transformation to sustain momentum through small, incremental successes, rather than the big-bang approach of long-term programs.”

When it comes to your digital strategy and driving business transformation, drive your business change the agile way.


Categories: Architecture, Programming

Welcome Firebase to the Google Cloud Platform Team

Google Code Blog - Tue, 10/21/2014 - 18:31
Today we extend a warm welcome to Firebase, who is joining the Google Cloud Platform team. Firebase makes it very easy for developers to build mobile and web apps that store and sync data in realtime.

Mobile is one of the fastest-growing categories of app development, but it’s also still too hard for most developers. With Firebase, developers are able to easily sync data across web and mobile apps without having to manage connections or write complex sync logic. Firebase makes it easy to build applications that work offline and has full-featured libraries for all major web and mobile platforms, including Android and iOS.

By combining Firebase with Google Cloud Platform, we’ll be able to build the best end-to-end platform for mobile application development. If you’re already a Firebase developer, you’ll start seeing improvements right away and if you’re a Google Cloud Platform customer, you’ll find it even easier to create great mobile and web apps. The entire Firebase team is joining Google and, under the leadership of Firebase co-founders James Tamplin and Andrew Lee, will be working hard to bring you great new features. Not only will the products you already love continue to get better, but you’ll also gain access to the full power of Google Cloud Platform.

At Google Cloud Platform Live on November 4, we’ll be demonstrating new Firebase features and integrations with Cloud Platform. You can join us there in person or you can register to stream online for free.

If you are looking for more info check out the Firebase blog. We can’t wait to see what applications you build!

Greg DeMichillie, Director of Product Management
Categories: Programming

What's New in Android 5.0 Lollipop

Android Developers Blog - Tue, 10/21/2014 - 09:06

By Ankur Kotwal, Developer Advocate

Android 5.0 Lollipop is the biggest update of Android to date, introducing an all new visual style, improved performance, and much more. Android 5.0 Lollipop also extends across screens big and small, including phones, tablets, wearables, TVs and cars, to give your users access to information when they need it most.

To get you started on developing and testing on Android 5.0 Lollipop, here are some of the developer highlights with links to related videos and documentation.

User experience
  • Material design for the multiscreen world — Material Design is a new approach for designing apps in today’s multi-device world, with a comprehensive strategy for visual, motion, and interaction design across a number of platforms and form factors. Android 5.0 brings Material Design to the platform, with a full set of tools for implementing material design in your apps. The system is incredibly flexible, allowing your app to express its individual character and brand with bold colors and a variety of responsive UI patterns and themeable elements.
  • Enhanced notifications — New lockscreen notifications let you surface content, updates, and actions to users at a glance, without needing to unlock their device. Heads-up notifications let you display content and actions in a small floating window managed by the system, no matter which app is in the foreground. Notifications are refreshed for Material Design and you can use accent colors to express your brand.
  • Concurrent documents in Overview — Now you can organize your app by tasks and present these concurrently as individual “documents” on the Overview screen. For example, instant messaging apps could declare each chat as a separate document. Users can flip through these on the Overview screen to find the specific chat they want and jump straight to it.
Performance
  • Android Runtime (ART) — Android 5.0 runs exclusively on the ART runtime. ART offers ahead-of-time (AOT) compilation, more efficient garbage collection, and improved development and debugging features. In many cases it improves performance of the device, without you having to change your code.
  • 64-bit support — Support for 64-bit ABIs provides additional address space and improved performance with certain compute workloads. Apps written in the Java language can run immediately on 64-bit architectures with no modifications required. NDK r10c includes 64-bit support, for apps and games using native code.
  • Project Volta — New tools and APIs help you build battery-efficient apps. Battery Historian, a tool included in the SDK, lets you visualize power events over time and understand how your app is using battery. The JobScheduler API lets you set the conditions under which your background tasks and other jobs should run, such as when the device is idle or connected to an unmetered network or to a charger, to minimize battery impact. More in this I/O video. A short JobScheduler sketch follows this list.
  • OpenGL ES 3.1 and Android Extension Pack — With OpenGL ES 3.1, you get compute shaders, stencil textures, and texture gather for your games. Android Extension Pack (AEP) is a new set of extensions to OpenGL ES that bring desktop-class graphics to Android including tessellation and geometry shaders, and use ASTC texture compression across GPU technologies. More on what's new for game developers in this DevBytes video.
  • WebView updates — We’ve updated WebView to support WebRTC, WebAudio, and WebGL. WebView also includes native support for all of the Web Components specifications: Custom Elements, Shadow DOM, HTML Imports, and Templates. WebView is now unbundled from the system and will be regularly updated through Google Play.
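
To make the Project Volta item above concrete, here is a rough JobScheduler sketch – MyJobService is a hypothetical JobService subclass declared in the manifest:

void scheduleBatteryFriendlyJob(Context context) {
    // Run only when the device is idle, charging, and on an unmetered network
    ComponentName service = new ComponentName(context, MyJobService.class); // hypothetical JobService
    JobInfo job = new JobInfo.Builder(42, service) // 42 is an arbitrary job id
            .setRequiresDeviceIdle(true)
            .setRequiresCharging(true)
            .setRequiredNetworkType(JobInfo.NETWORK_TYPE_UNMETERED)
            .build();
    JobScheduler scheduler =
            (JobScheduler) context.getSystemService(Context.JOB_SCHEDULER_SERVICE);
    scheduler.schedule(job);
}
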
Workplace
  • Managed provisioning and unified view of apps — To make it easier for employees to have a single device for personal and work use, framework enhancements offer a unified view of apps, notifications & recents across work and personal apps. Profile owner APIs, in the workplace context, let administrators create and manage work profiles, which are defined as part of a new managed provisioning process. More in this I/O video.
Media
  • Advanced camera capabilities — A new camera API gives you new capabilities for advanced image capture and processing. On supported devices, your app can capture uncompressed YUV images at full 8-megapixel resolution at 30 FPS. You can also capture raw sensor data and control parameters such as exposure time, ISO sensitivity, and frame duration, on a per-frame basis.
  • Audio improvements — The sound architecture has been enhanced, with lower input latency in OpenSL, the addition of multichannel mixing, and USB digital audio mode support. More in this I/O video.
Connectivity
  • BLE Peripheral Mode — Android devices can now function in Bluetooth Low Energy (BLE) peripheral mode. Apps can use this capability to broadcast their presence to nearby devices — for example, you can now build apps that let a device function as a beacon and transmit data to another BLE device. More in this I/O video.
  • Multi-networking — Apps can dynamically request networks based on capabilities such as metered or unmetered. This is useful when you want to use a specific network, such as cellular. Apps can also ask the platform to re-evaluate networks for an internet connection: if your app sees unusually high latency on a particular network, the platform can switch to a better network (if available) sooner, with a graceful handoff. A minimal sketch of the request flow follows this list.
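
Here is that minimal sketch of requesting a network by capability (API 21; assumes the CHANGE_NETWORK_STATE permission):

void requestUnmeteredNetwork(Context context) {
    ConnectivityManager cm =
            (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
    NetworkRequest request = new NetworkRequest.Builder()
            .addCapability(NetworkCapabilities.NET_CAPABILITY_INTERNET)
            .addCapability(NetworkCapabilities.NET_CAPABILITY_NOT_METERED)
            .build();
    cm.requestNetwork(request, new ConnectivityManager.NetworkCallback() {
        @Override
        public void onAvailable(Network network) {
            // A network satisfying the request came up; route traffic to it here
        }
    });
}
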
Get started!

You can get started developing and testing on Android 5.0 right away by downloading the Android 5.0 Platform (API level 21), as well as the SDK Tools, Platform Tools, and Support Package from the Android SDK Manager.

Check out the DevByte video below for more of what’s new in Lollipop!



Join the discussion on
+Android Developers


Categories: Programming

Neo4j: Modelling sub types

Mark Needham - Tue, 10/21/2014 - 00:08

A question which sometimes comes up when discussing graph data modelling is how you go about modelling sub/super types.

In my experience there are two reasons why we might want to do this:

  • To ensure that certain properties exist on bits of data
  • To write drill down queries based on those types

At the moment the former isn’t built into Neo4j and you’d only be able to achieve it by wiring up some code in the pre-commit hook of a transaction event handler, so we’ll focus on the latter.

The typical example used for showing how to design sub types is the animal kingdom, and I managed to find a data set from Louisville, Kentucky’s Animal Services which we can use.

In this case the sub types are used to represent the type of animal, breed group and breed. We then also have ‘real data’ in terms of actual dogs under the care of animal services.

We effectively end up with two graphs in one – a model and a meta model:

[Image: the model and the meta model]

The Cypher query to create this graph looks like this:

LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-subtypes/data/dogs.csv" AS line
MERGE (animalType:AnimalType {name: "Dog"})
MERGE (breedGroup:BreedGroup {name: line.BreedGroup})
MERGE (breed:Breed {name: line.PrimaryBreed})
MERGE (animal:Animal {id: line.TagIdentity, primaryColour: line.PrimaryColour, size: line.Size})
 
MERGE (animalType)<-[:PARENT]-(breedGroup)
MERGE (breedGroup)<-[:PARENT]-(breed)
MERGE (breed)<-[:PARENT]-(animal)

We could then write a simple query to find out how many dogs we have:

MATCH (animalType:AnimalType)<-[:PARENT*]-(animal)
RETURN animalType, COUNT(*) AS animals
ORDER BY animals DESC
==> +--------------------------------+
==> | animalType           | animals |
==> +--------------------------------+
==> | Node[89]{name:"Dog"} | 131     |
==> +--------------------------------+
==> 1 row

Or we could write a slightly more complex query to find the number of animals at each level of our type hierarchy:

MATCH path = (animalType:AnimalType)<-[:PARENT]-(breedGroup)<-[:PARENT*]-(animal)
RETURN [node IN nodes(path) | node.name][..-1] AS breed, COUNT(*) AS animals
ORDER BY animals DESC
LIMIT 5
==> +-----------------------------------------------------+
==> | breed                                     | animals |
==> +-----------------------------------------------------+
==> | ["Dog","SETTER/RETRIEVE","LABRADOR RETR"] | 15      |
==> | ["Dog","SETTER/RETRIEVE","GOLDEN RETR"]   | 13      |
==> | ["Dog","POODLE","POODLE MIN"]             | 10      |
==> | ["Dog","TERRIER","MIN PINSCHER"]          | 9       |
==> | ["Dog","SHEPHERD","WELSH CORGI CAR"]      | 6       |
==> +-----------------------------------------------------+
==> 5 rows

We might then decide to add an exercise sub graph which indicates how much exercise each type of dog requires:

MATCH (breedGroup:BreedGroup)
WHERE breedGroup.name IN ["SETTER/RETRIEVE", "POODLE"]
MERGE (exercise:Exercise {type: "2 hours hard exercise"})
MERGE (exercise)<-[:REQUIRES_EXERCISE]-(breedGroup);
MATCH (breedGroup:BreedGroup)
WHERE breedGroup.name IN ["TERRIER", "SHEPHERD"]
MERGE (exercise:Exercise {type: "1 hour gentle exercise"})
MERGE (exercise)<-[:REQUIRES_EXERCISE]-(breedGroup);

We could then query that to find out which dogs need to come out for 2 hours of hard exercise:

MATCH (exercise:Exercise {type: "2 hours hard exercise"})<-[:REQUIRES_EXERCISE]-()<-[:PARENT*]-(dog)
WHERE NOT (dog)<-[:PARENT]-()
RETURN dog
LIMIT 10
==> +-----------------------------------------------------------+
==> | dog                                                       |
==> +-----------------------------------------------------------+
==> | Node[541]{id:"664427",primaryColour:"BLACK",size:"SMALL"} |
==> | Node[542]{id:"543787",primaryColour:"BLACK",size:"SMALL"} |
==> | Node[543]{id:"584021",primaryColour:"BLACK",size:"SMALL"} |
==> | Node[544]{id:"584022",primaryColour:"BLACK",size:"SMALL"} |
==> | Node[545]{id:"664430",primaryColour:"BLACK",size:"SMALL"} |
==> | Node[546]{id:"535176",primaryColour:"BLACK",size:"SMALL"} |
==> | Node[567]{id:"613557",primaryColour:"WHITE",size:"SMALL"} |
==> | Node[568]{id:"531376",primaryColour:"WHITE",size:"SMALL"} |
==> | Node[569]{id:"613567",primaryColour:"WHITE",size:"SMALL"} |
==> | Node[570]{id:"531379",primaryColour:"WHITE",size:"SMALL"} |
==> +-----------------------------------------------------------+
==> 10 rows

In this query we ensured that we only returned dogs rather than breeds by checking that there was no incoming PARENT relationship. Alternatively we could have filtered on the Animal label…

MATCH (exercise:Exercise {type: "2 hours hard exercise"})<-[:REQUIRES_EXERCISE]-()<-[:PARENT*]-(dog:Animal)
RETURN dog
LIMIT 10

or, if we only wanted to take the dogs out for exercise, perhaps we’d put a Dog label on the appropriate nodes, as sketched below.
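
For example, a one-off query along these lines (a sketch against the model above) would tag every dog:

MATCH (:AnimalType {name: "Dog"})<-[:PARENT*]-(animal:Animal)
SET animal:Dog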

People are often curious why labels don’t have super/sub types between them, but I tend to use labels for simple categorisation – anything more complicated and we may as well use the built-in power of the graph model!

The code is on GitHub should you wish to play with it.

Categories: Programming

Your Community Door

Coding Horror - Jeff Atwood - Mon, 10/20/2014 - 20:32

What are the real world consequences to signing up for a Twitter or Facebook account through Tor and spewing hate toward other human beings?

Facebook reviewed the comment I reported and found it doesn't violate their Community Standards. pic.twitter.com/p9syG7oPM1

— Rob Beschizza (@Beschizza) October 15, 2014

As far as I can tell, nothing. There are barely any online consequences, even if the content is reported.

But there should be.

The problem is that Twitter and Facebook aim to be discussion platforms for "everyone", where every person, no matter how hateful and crazy they may be, gets a turn on the microphone. They get to be heard.

The hover text of xkcd’s “Free Speech” comic is so good it deserves escalation:

I can't remember where I heard this, but someone once said that defending a position by citing free speech is sort of the ultimate concession; you're saying that the most compelling thing you can say for your position is that it's not literally illegal to express.

If the discussion platform you're using aims to be a public platform for the whole world, there are some pretty terrible things people can do and say to other people there with no real consequences, under the noble banner of free speech.

It can be challenging.

How do we show people like this the door? You can block, you can hide, you can mute. But what you can't do is show them the door, because it's not your house. It's Facebook's house. It's their door, and the rules say the whole world has to be accommodated within the Facebook community. So mute and block and so forth are the only options available. But they are anemic, barely workable options.

As we build Discourse, I've discovered that I am deeply opposed to mute and block functions. I think that's because the whole concept of Discourse is that it is your house. And mute and ignore, while arguably unavoidable for large worldwide communities, are actively dangerous for smaller communities. Here's why.

  • It allows you to ignore bad behavior. If someone is hateful or harassing, why complain? Just mute. No more problem. Except everyone else still gets to see a person being hateful or harassing to another human being in public. Which means you are now sending a message to all other readers that this is behavior that is OK and accepted in your house.

  • It puts the burden on the user. A kind of victim blaming — if someone is rude to you, then "why didn't you just mute / block them?" The solution is right there in front of you, why didn't you learn to use the software right? Why don't you take some responsibility and take action to stop the person abusing you? Every single time it happens, over and over again?

  • It does not address the problematic behavior. A mute is invisible to everyone. So the person who is getting muted by 10 other users is getting zero feedback that their behavior is causing problems. It's also giving zero feedback to moderators that this person should probably get an intervention at the very least, if not outright suspended. It's so bad that people are building their own crowdsourced block lists for Twitter.

  • It causes discussions to break down. Fine, you mute someone, so you "never" see that person's posts. But then another user you like quotes the muted user in their post, or references their @name, or replies to their post. Do you then suppress just the quoted section? Suppress the @name? Suppress all replies to their posts, too? This leaves big holes in the conversation and presents many hairy technical challenges. Given enough personal mutes and blocks and ignores, all conversation becomes a weird patchwork of partially visible statements.

  • This is your house and your rules. This isn't Twitter or Facebook or some other giant public website with an expectation that "everyone" will be welcome. This is your house, with your rules, and your community. If someone can't behave themselves to the point that they are consistently rude and obnoxious and unkind to others, you don't ask the other people in the house to please ignore it – you ask them to leave your house. Engendering some weird expectation of "everyone is allowed here" sends the wrong message. Otherwise your house no longer belongs to you, and that's a very bad place to be.

I worry that people are learning the wrong lessons from the way Twitter and Facebook poorly handle these situations. Their hands are tied because they aspire to be these global communities where free speech trumps basic human decency and empathy.

The greatest power of online discussion communities, in my experience, is that they don't aspire to be global. You set up a clubhouse with reasonable rules your community agrees upon, and anyone who can't abide by those rules needs to be gently shown the door.

Don't pull this wishy-washy, non-committal stuff that Twitter and Facebook do. Community rules are only meaningful if they are actively enforced. You need to be willing to say this to people, at times:

No, your behavior is not acceptable in our community; "free speech" doesn't mean we are obliged to host your content, or listen to you being a jerk to people. This is our house, and our rules.

If they don't like it, fortunately there's a whole Internet of other communities out there. They can go try a different house. Or build their own.

The goal isn't to slam the door in people's faces – visitors should always be greeted in good faith, with a hearty smile – but simply to acknowledge that in those rare but inevitable cases where good faith breaks down, a well-oiled front door will save your community.

Categories: Programming

Python: Converting a date string to timestamp

Mark Needham - Mon, 10/20/2014 - 16:53

I’ve been playing around with Python over the last few days while cleaning up a data set, and one thing I wanted to do was translate date strings into a timestamp.

I started with a date in this format:

date_text = "13SEP2014"

So the first step is to translate that into a Python date – the strftime section of the documentation is useful for figuring out which format code is needed:

import datetime
 
date_text = "13SEP2014"
date = datetime.datetime.strptime(date_text, "%d%b%Y")
 
print(date)
$ python dates.py
2014-09-13 00:00:00

The next step was to translate that to a UNIX timestamp. I thought there might be a method or property on the datetime object that I could access, but I couldn’t find one and so ended up using calendar to do the transformation:

import datetime
import calendar
 
date_text = "13SEP2014"
date = datetime.datetime.strptime(date_text, "%d%b%Y")
 
print(date)
print(calendar.timegm(date.utctimetuple()))
$ python dates.py
2014-09-13 00:00:00
1410566400
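
As an aside, newer versions of Python do have a method for this: datetime.timestamp() was added in Python 3.3. Be aware that it treats a naive datetime as local time, so to get the same UTC-based value as calendar.timegm you need to attach a UTC timezone explicitly. A minimal sketch:

import datetime
 
date_text = "13SEP2014"
date = datetime.datetime.strptime(date_text, "%d%b%Y")
 
# mark the naive datetime as UTC before asking for a timestamp,
# otherwise .timestamp() assumes the local timezone
utc_date = date.replace(tzinfo=datetime.timezone.utc)
print(int(utc_date.timestamp()))
$ python dates.py
1410566400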

It’s not too tricky so hopefully I shall remember next time.

Categories: Programming

4 Biggest Reasons Why Software Developers Suck at Estimation

Making the Complex Simple - John Sonmez - Mon, 10/20/2014 - 15:00

Estimation is difficult. Most people aren’t good at it–even in mundane situations. For example, when my wife asks me how much longer it will take me to fix some issue I’m working on or to head home, I almost invariably reply “five minutes.” I almost always honestly believe it will only take five minutes, […]

The post 4 Biggest Reasons Why Software Developers Suck at Estimation appeared first on Simple Programmer.

Categories: Programming

Android 5.0 Lollipop SDK and Nexus Preview Images

Android Developers Blog - Sun, 10/19/2014 - 07:12

Two more weeks!

By Jamal Eason, Product Manager, Android

At Google I/O last June, we gave you an early version of Android 5.0 with the L Developer Preview, running on Nexus 5, Nexus 7 and Android TV. Over the course of the L Developer Preview program, you’ve given us great feedback and we appreciate the engagement from you, our developer community. Thanks!

This week, we announced Android 5.0 Lollipop. Starting today, you can download the full release of the Android 5.0 SDK, along with updated developer images for Nexus 5, Nexus 7 (2013), ADT-1, and the Android emulator.

The first set of devices to run this new version of Android -- Nexus 6, Nexus 9, and Nexus Player -- will be available in early November. In the same timeframe, we'll also roll out the Android 5.0 update worldwide to Nexus 4, 5, 7 (2012 & 2013), and 10 devices, as well as to Google Play edition devices.

Therefore, now is the time to test your apps on the new platform. You have two more weeks to get ready!

What’s in Lollipop?

Android 5.0 Lollipop introduces a host of new APIs and features, far too many to list here. Check out the Android 5.0 platform highlights for a complete overview.

What’s in the Android 5.0 SDK?

The Android 5.0 SDK includes updated tools and new developer system images for testing. You can develop against the latest Android platform using API level 21 and take advantage of the updated support library to implement Material Design as well as the leanback user interface for TV apps.

You can download these components through the Android SDK Manager and develop your app in Android Studio:

  • Android 5.0 SDK Platform & Tools
  • Android 5.0 Emulator System Image - 32-bit & 64-bit (x86)
  • Android 5.0 Emulator System Image for Android TV (32-bit)
  • Android v7 appcompat Support Library for Material Design theme backwards compatibility
  • Android v17 leanback library for Android TV app support

For developers using the Android NDK for native C/C++ Android apps we have:

For developers on Android TV devices we have:

  • Android 5.0 system image over the air (OTA) update for ADT-1 Developer Kit. OTA updates will appear over the next few days.

Similar to our previous release of the preview, we are also providing updated system image downloads for Nexus 5 & Nexus 7 (2013) devices to help with your testing. These images support the Android 5.0 SDK, but only have the minimal apps pre-installed in order to enable developer testing:

  • Nexus 5 (GSM/LTE) “hammerhead” Device System Image
  • Nexus 7 (2013) - (Wifi) “razor” Device System Image

For the developer preview versions, there will not be an over the air (OTA) update. You will need to wipe and reflash your developer device to use the latest developer preview versions. If you want to receive the official consumer OTA update in November and any other official updates, you will have to have a factory image on your Nexus device.

Validate your apps with the Android 5.0 SDK

With the consumer availability of Android 5.0 and the Nexus 6, Nexus 9, and Nexus Player right around the corner, here are a few things you should do to prepare:

  1. Get the emulator system images through the SDK Manager or download the Nexus device system images.
  2. Recompile your apps against Android 5.0 SDK, especially if you used any preview APIs. Note: APIs have changed between the preview SDK and the final SDK.
  3. Validate that your current Android apps run on the new API 21 level with ART enabled. And if you use the NDK for your C/C++ Android apps, validate against the 64-bit emulator. ART is enabled by default on API 21 & new Android devices with Android 5.0.

Once you validate your current app, explore the new APIs and features for Android 5.0.

Migrate Your Existing App to Material Design

Android 5.0 Lollipop introduces Material Design, which enables your apps to adopt a bold, colorful, and flexible design, while remaining true to a small set of key principles that guide user interaction across multiple screens and devices.

After making sure your current apps work with Android 5.0, now is the time to enable the Material theme in your app with the AppCompat support library. For quick tips & recommendations for making your app shine with Material Design, check out our Material Design guidelines and tablet optimization tips. For those of you new to Material Design, check out our Getting Started guide.

Get your apps ready for Google Play!

Starting today, you can publish your apps that are targeting Android 5.0 Lollipop to Google Play. In your app manifest, update android:targetSdkVersion to "21", test your app, and upload it to the Google Play Developer Console.

Starting November 3rd, Nexus 9 will be the first device available to consumers that will run Android 5.0. Therefore, it is a great time to publish on Google Play, once you've updated and tested your app. Even if your apps target earlier versions of Android, take a few moments to test them on the Android 5.0 system images, and publish any updates needed in advance of the Android 5.0 rollout.

Stay tuned for more details on the Nexus 6 and Nexus 9 devices, and how to make sure your apps look their best on them.

Next up, Android TV!

We also announced the first consumer Android TV device, Nexus Player. It’s a streaming media player for movies, music and videos, and also a first-of-its-kind Android gaming device. Users can play games on their HDTVs with a gamepad, then keep playing on their phones while they’re on the road. The device is also Google Cast-enabled, so users can cast your app from their phones or tablets to their TV.

If you’re developing for Android TV, watch for more information on November 3rd about how to distribute your apps to Android TV users through the Google Play Developer Console. You can start getting your app ready by making sure it meets all of the TV Quality Guidelines.

Get started with Android 5.0 Lollipop platform

If you haven’t had a chance to take a look at this new version of Android yet, download the SDK and get started today. You can learn more about what’s new in the Android 5.0 platform highlights and get all the details on new APIs and changed behaviors in the API Overview. You can also check out the latest DevBytes videos to learn more about Android 5.0 features.

Enjoy this new release of Android!

Join the discussion on

+Android Developers
Categories: Programming

Neo4j: LOAD CSV – The sneaky null character

Mark Needham - Sat, 10/18/2014 - 11:49

I spent some time earlier in the week trying to import a CSV file extracted from Hadoop into Neo4j using Cypher’s LOAD CSV command and initially struggled due to some rogue characters.

The CSV file looked like this:

$ cat foo.csv
foo,bar,baz
1,2,3

I wrote the following LOAD CSV query to extract some of the fields and compare others:

load csv with headers from "file:/Users/markneedham/Downloads/foo.csv" AS line
RETURN line.foo, line.bar, line.bar = "2"
==> +--------------------------------------+
==> | line.foo | line.bar | line.bar = "2" |
==> +--------------------------------------+
==> | <null>   | "2"      | false          |
==> +--------------------------------------+
==> 1 row

I had expected to see a “1” in the first column and a ‘true’ in the third column, neither of which happened.

I initially didn’t have a text editor with hexcode mode available so I tried checking the length of the entry in the ‘bar’ field:

load csv with headers from "file:/Users/markneedham/Downloads/foo.csv" AS line
RETURN line.foo, line.bar, line.bar = "2", length(line.bar)
==> +---------------------------------------------------------+
==> | line.foo | line.bar | line.bar = "2" | length(line.bar) |
==> +---------------------------------------------------------+
==> | <null>   | "2"      | false          | 2                |
==> +---------------------------------------------------------+
==> 1 row

The length of that value is 2 when we’d expect it to be 1 given it’s a single character.

I tried trimming the field to see if that made any difference…

load csv with headers from "file:/Users/markneedham/Downloads/foo.csv" AS line
RETURN line.foo, trim(line.bar), trim(line.bar) = "2", length(line.bar)
==> +---------------------------------------------------------------------+
==> | line.foo | trim(line.bar) | trim(line.bar) = "2" | length(line.bar) |
==> +---------------------------------------------------------------------+
==> | <null>   | "2"            | true                 | 2                |
==> +---------------------------------------------------------------------+
==> 1 row

…and it did! I thought there was probably a trailing whitespace character after the “2” which trim had removed, and that the ‘foo’ column in the header row had the same issue.

I was able to see that this was the case by extracting the JSON dump of the query via the Neo4j browser:

{  
   "table":{  
      "_response":{  
         "columns":[  
            "line"
         ],
         "data":[  
            {  
               "row":[  
                  {  
                     "foo\u0000":"1\u0000",
                     "bar":"2\u0000",
                     "baz":"3"
                  }
               ],
               "graph":{  
                  "nodes":[  
 
                  ],
                  "relationships":[  
 
                  ]
               }
            }
         ],
      ...
}

It turns out there were null characters scattered around the file so I needed to pre-process the file to get rid of them:

$ tr < foo.csv -d '\000' > bar.csv
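
If you'd rather do the cleanup in Python, for example as part of a larger pre-processing script, here's a minimal sketch that does the same thing as the tr command above:

# strip NUL characters from foo.csv, writing the cleaned copy to bar.csv
with open("foo.csv", "rb") as source, open("bar.csv", "wb") as target:
    target.write(source.read().replace(b"\x00", b""))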

Now if we process bar.csv it’s a much smoother process:

load csv with headers from "file:/Users/markneedham/Downloads/bar.csv" AS line
RETURN line.foo, line.bar, line.bar = "2", length(line.bar)
==> +---------------------------------------------------------+
==> | line.foo | line.bar | line.bar = "2" | length(line.bar) |
==> +---------------------------------------------------------+
==> | "1"      | "2"      | true           | 1                |
==> +---------------------------------------------------------+
==> 1 row

Note to self: don’t expect data to be clean, inspect it first!

Categories: Programming

R: Linear models with the lm function, NA values and Collinearity

Mark Needham - Sat, 10/18/2014 - 07:35

In my continued playing around with R I’ve sometimes noticed ‘NA’ values in the linear regression models I created but hadn’t really thought about what that meant.

On the advice of Peter Huber I recently started working my way through Coursera’s Regression Models which has a whole slide explaining its meaning:

[Slide from the Regression Models course explaining the meaning of NA coefficients]

So in this case ‘z’ doesn’t help us in predicting Fertility since it doesn’t give us any information that we can’t already get from ‘Agriculture’ and ‘Education’.

Although in this case we know why ‘z’ doesn’t have a coefficient, sometimes it may not be clear which other variable the NA one is highly correlated with.

Multicollinearity (also collinearity) is a statistical phenomenon in which two or more predictor variables in a multiple regression model are highly correlated, meaning that one can be linearly predicted from the others with a non-trivial degree of accuracy.

In that situation we can make use of the alias function to explain the collinearity as suggested in this StackOverflow post:

library(datasets); data(swiss); require(stats); require(graphics)
z <- swiss$Agriculture + swiss$Education
fit = lm(Fertility ~ . + z, data = swiss)
> alias(fit)
Model :
Fertility ~ Agriculture + Examination + Education + Catholic + 
    Infant.Mortality + z
 
Complete :
  (Intercept) Agriculture Examination Education Catholic Infant.Mortality
z 0           1           0           1         0        0

In this case we can see that ‘z’ is highly correlated with both Agriculture and Education which makes sense given it’s the sum of those two variables.
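
The same rank deficiency is easy to reproduce outside R. A minimal Python sketch with numpy (the data here is made up purely for illustration) shows that a column built as the sum of two others doesn't increase the rank of the design matrix, which is exactly why lm can't estimate a coefficient for it:

import numpy as np
 
rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
x2 = rng.normal(size=50)
z = x1 + x2                               # perfectly collinear, like 'z' above
X = np.column_stack([np.ones(50), x1, x2, z])
 
print(np.linalg.matrix_rank(X))           # prints 3, not 4 - 'z' adds nothing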

When we notice that there’s an NA coefficient in our model we can choose to exclude that variable and the model will still have the same coefficients as before:

> require(dplyr)
> summary(lm(Fertility ~ . + z, data = swiss))$coefficients
                   Estimate  Std. Error   t value     Pr(>|t|)
(Intercept)      66.9151817 10.70603759  6.250229 1.906051e-07
Agriculture      -0.1721140  0.07030392 -2.448142 1.872715e-02
Examination      -0.2580082  0.25387820 -1.016268 3.154617e-01
Education        -0.8709401  0.18302860 -4.758492 2.430605e-05
Catholic          0.1041153  0.03525785  2.952969 5.190079e-03
Infant.Mortality  1.0770481  0.38171965  2.821568 7.335715e-03
> summary(lm(Fertility ~ ., data = swiss))$coefficients
                   Estimate  Std. Error   t value     Pr(>|t|)
(Intercept)      66.9151817 10.70603759  6.250229 1.906051e-07
Agriculture      -0.1721140  0.07030392 -2.448142 1.872715e-02
Examination      -0.2580082  0.25387820 -1.016268 3.154617e-01
Education        -0.8709401  0.18302860 -4.758492 2.430605e-05
Catholic          0.1041153  0.03525785  2.952969 5.190079e-03
Infant.Mortality  1.0770481  0.38171965  2.821568 7.335715e-03

If we call alias now we won’t see any output:

> alias(lm(Fertility ~ ., data = swiss))
Model :
Fertility ~ Agriculture + Examination + Education + Catholic + 
    Infant.Mortality
Categories: Programming

Zumero Professional Services

Eric.Weblog() - Eric Sink - Fri, 10/17/2014 - 19:00
Building mobile apps is really hard.
  • Why does my app work in the simulator but not on the device?
  • How do I get 60 frames-per-second UI performance on Android?
  • How do I manage concurrent access to my mobile database?
  • How do I deal with limited memory?
  • How do I keep something running in the background?
  • I have to compile for how many different CPU architectures?!?

Building apps for mobile devices involves a dizzying array of technologies and tooling. The explosion of mobile offers more ways for software projects to fail than ever before.

Zumero would like to help.

We can build your app for you. Or we can assist or advise as you build it yourself.

Both New and Old

Zumero for SQL Server is the best solution for syncing SQL data with mobile devices. Now we're looking ahead to providing more ways of helping developers build great business apps for mobile devices. We have additional products in the pipeline, but it is already clear that Zumero Professional Services is going to be an important piece of our story.

To some degree, we are formalizing what we are already doing for our customers. They're building apps to solve business problems, and they're constantly experiencing the reality that mobile app development is really, really hard. We often get involved in ways that go beyond the use of our product, helping with issues like concurrency and integration with native code.

So yes, Zumero Professional Services is a new thing for us. We see this as an opportunity because of the explosion of business apps being built. But it is also an old thing. We've done this before.

A number of years ago we did a lot of custom software development on mobile devices for companies like Motorola. This was back when smartphones weren't actually very smart. We built stuff that was burned into the phone's ROM, with CPU and memory constraints that make an iPhone look like a supercomputer. We had things like massive test suites with high code coverage.

We got out of that market, but now we're back, and even though the mobile industry has changed dramatically, many of the core technology challenges are still the same. This feels like a return to our roots.

Our place in the world

Specialties (We are especially excited about stuff in this column.)
  • Apps that change your business for the better
  • Tough technical issues
  • Offline-capable apps with sync
  • Xamarin
  • Azure, Amazon Web Services
  • iOS, Android, Windows Phone
  • SQLite, Microsoft SQL Server
  • Helping companies use mobile apps to make their business better

Solid competencies (We can provide reliable execution and high-quality results using the technologies and practices in this column.)
  • Apps that solve business problems
  • Apps that integrate with existing systems
  • Networking, REST
  • Objective-C, Java, PhoneGap
  • Integration with on-prem backends
  • Windows, Mac OS
  • NoSQL
  • Building great apps that people like to use

Brussels sprouts (We actively avoid stuff in this column.)
  • Games (unless we're playing them)
  • Photoshop
  • Data exchange via CDROM
  • Adobe Flash
  • Apps that don't let people interact
  • BlackBerry, PalmOS
  • FoxPro
  • Guessing what the next viral app will be

Interested?

Email me or sales@zumero.com to explore possibilities.

 

Updated Material Design Guidelines and Resources

Google Code Blog - Fri, 10/17/2014 - 18:00

When we first published the Material Design guidelines back in June, we set out to create a living document that would grow with feedback from the community. In that time, we’ve seen some great work from the community in augmenting the guidelines with things like Sketch templates, icon downloads and screens upon screens of inspiring visual and motion design ideas. We’ve also received a lot of feedback around what resources we can provide to make using Material Design in your projects easier.

So today, in addition to publishing the final Android 5.0 SDK for developers, we’re publishing our first significant update to the Material Design guidelines, with additional resources including:

  • Updated sticker sheets in PSD, AI and Sketch formats
  • A new icon library ZIP download
  • Updated color swatch downloads
  • Updated whiteframe downloads, including better baseline grid text alignment and other miscellaneous fixes

The sticker sheets have been updated to reflect the latest refinements to the components and integrated into a single, comprehensive sticker sheet that should be easier to use. An aggregated sticker sheet is also newly available for Adobe Photoshop and Sketch—two hugely popular requests. In the sticker sheet, you can find various elements that make up layouts, including light and dark symbols for the status bar, app bar, bottom toolbar, cards, dropdowns, search fields, dividers, navigation drawers, dialogs, the floating action button, and other components. The sticker sheets now also include explanatory text for elements.

Note that the images in the Components section of the guidelines haven't yet been updated (that’s coming soon!), so you can consider the sticker sheets to be the most up-to-date version of the components.

Also, the new system icons sticker sheet contains icons commonly used in Android across different apps, such as icons used for media playback, communication, content editing, connectivity, and so on.

Stay tuned for more enhancements as we incorporate more of your feedback—remember to share your suggestions on Google+! We’re excited to continue evolving this living document with you!

For more on Material Design, check out these videos and the new getting started guide for Android developers.

Posted by Roman Nurik, Design Advocate
Categories: Programming

How to deploy a Docker application into production on Amazon AWS

Xebia Blog - Fri, 10/17/2014 - 17:00

Docker reached production status a few months ago. But having the container technology alone is not enough. You need a complete platform infrastructure before you can deploy your docker application in production. Amazon AWS offers exactly that: a production quality platform that offers capacity provisioning, load balancing, scaling, and application health monitoring for Docker applications.

In this blog, you will learn how to deploy a Docker application to production in five easy steps.

For demonstration purposes, you are going to use the node.js application that was built for CloudFoundry and used to demonstrate Deis in a previous post: a truly useful app whose sources are available on GitHub.

1. Create a Dockerfile

First thing you need to do is to create a Dockerfile to create an image. This is quite simple: you install the node.js and npm packages, copy the source files and install the javascript modules.

# DOCKER-VERSION 1.0
FROM    ubuntu:latest
#
# Install nodejs npm
#
RUN apt-get update
RUN apt-get install -y nodejs npm
#
# add application sources
#
COPY . /app
RUN cd /app; npm install
#
# Expose the default port
#
EXPOSE  5000
#
# Start command
#
CMD ["nodejs", "/app/web.js"]
2. Test your Docker application

Now you can create the Docker image and test it.

$ docker build -t sample-nodejs-cf .
$ docker run -d -p 5000:5000 sample-nodejs-cf

Point your browser at http://localhost:5000, click the 'start' button and Presto!

3. Zip the sources

Now that you know the container works, zip the source files. The image will be built on Amazon AWS based on your Dockerfile.

$ zip -r /tmp/sample-nodejs-cf-srcs.zip .
4. Deploy Docker application to Amazon AWS

Now you install and configure the Amazon AWS command line interface (CLI) and deploy the Docker source files to Elastic Beanstalk. You can do this all manually, but here you use the script deploy-to-aws.sh that I created.

$ deploy-to-aws.sh \
         sample-nodejs-cf \
         /tmp/sample-nodejs-cf-srcs.zip \
         demo-env

After about 8-10 minutes your application is running. The output should look like this:

INFO: creating application sample-nodejs-cf
INFO: Creating environment demo-env for sample-nodejs-cf
INFO: Uploading sample-nodejs-cf-srcs.zip for sample-nodejs-cf, version 1412948762.
upload: ./sample-nodejs-cf-srcs.zip to s3://elasticbeanstalk-us-east-1-233211978703/1412948762-sample-nodejs-cf-srcs.zip
INFO: Creating version 1412948762 of application sample-nodejs-cf
INFO: demo-env in status Launching, waiting to get to Ready..
...
INFO: demo-env in status Launching, waiting to get to Ready..
INFO: Updating environment demo-env with version 1412948762 of sample-nodejs-cf
INFO: demo-env in status Updating, waiting to get to Ready..
...
INFO: demo-env in status Updating, waiting to get to Ready..
INFO: Version 1412948762 of sample-nodejs-cf deployed in environment
INFO: current status is Ready, goto http://demo-env-vm2tqi3qk4.elasticbeanstalk.com
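
Under the hood the script drives a handful of Elastic Beanstalk API calls. If you would rather script the deployment in Python, a rough sketch with boto3 might look like the following. This is an illustration only: the bucket name and solution stack string are placeholders you would need to adapt, and the script above may do more (polling, error handling) than is shown here.

import boto3  # assumes AWS credentials are already configured
 
app, env, version = "sample-nodejs-cf", "demo-env", "1412948762"
bucket = "my-eb-bucket"                      # placeholder bucket name
key = version + "-sample-nodejs-cf-srcs.zip"
 
# upload the zipped sources, then register them as an application version
boto3.client("s3").upload_file("/tmp/sample-nodejs-cf-srcs.zip", bucket, key)
eb = boto3.client("elasticbeanstalk")
eb.create_application(ApplicationName=app)
eb.create_application_version(
    ApplicationName=app,
    VersionLabel=version,
    SourceBundle={"S3Bucket": bucket, "S3Key": key},
)
 
# launch an environment running that version on a Docker solution stack
eb.create_environment(
    ApplicationName=app,
    EnvironmentName=env,
    VersionLabel=version,
    SolutionStackName="64bit Amazon Linux running Docker",  # placeholder stack name
)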
5. Test your Docker application on the internet!

Your application is now available on the Internet. Browse to the designated URL and click on start. When you increase the number of instances at Amazon, they will appear in the application. When you deploy a new version of the application, you can watch the new version appear in the client application without any errors.

For more information, go to Amazon Elastic Beanstalk adds Docker support and Dockerizing a Node.js Web App.

Then When Given

Xebia Blog - Fri, 10/17/2014 - 14:50

People who practice ATDD all know how frustrating it can be to write automated examples, especially when you get stuck overthinking an example's preconditions.

This post describes an alternative approach to writing acceptance tests: write them backwards!

Imagine that you are building the very first online phone book. We need to define an acceptance test for viewing the location of a florist. Using the Given-When-Then formula you would probably describe the behaviour like this:


Given I am on the online phone book homepage
When I type “Florist” in the business type field
And I click …
...

Most of the time you will be discussing and describing details that have nothing to do with viewing the location of a florist. To avoid this, write down the Then clause of the formula first.
Make sure the Then clause contains an observable result.


Then I see the location “Floriststreet 123”

Next, we will try to answer the following question: what caused the Then clause?
Make sure the When clause contains an actor and an action.


When I click “View map” of the search result
Then I see the location “Floriststreet 123”

The last thing we will need to do is answer the following question: Why can I perform that action?
Make sure the Given clause contains a simple precondition.


Given I see a search result for florist “Floral Designs”
When I click “View map” of the search result
Then I see the location “Floriststreet 123”

You might have noticed that I left out the parts where the user goes to the homepage and selects UI objects in the search area. They were not worth mentioning in the Given-When-Then formula. Too much detail makes us lose focus on what we really want to check. The essence of this acceptance test is clicking on the link "View map" and exposing the location to the user.
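
To make the idea concrete, here is a minimal sketch of how the finished scenario could be bound to code as a plain pytest test. The PhoneBook class and its methods are hypothetical stand-ins for whatever your application exposes; the point is only that each Given/When/Then line maps onto one arrange/act/assert step:

# test_view_location.py
class PhoneBook:
    """A hypothetical stand-in for the online phone book under test."""
    def __init__(self):
        self._locations = {"Floral Designs": "Floriststreet 123"}
 
    def search(self, business_type):
        return list(self._locations)
 
    def view_map(self, name):
        return self._locations[name]
 
def test_view_location_of_florist():
    # Given I see a search result for florist "Floral Designs"
    phone_book = PhoneBook()
    results = phone_book.search("Florist")
    assert "Floral Designs" in results
 
    # When I click "View map" of the search result
    location = phone_book.view_map("Floral Designs")
 
    # Then I see the location "Floriststreet 123"
    assert location == "Floriststreet 123"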

Try it a couple of times and let me know how it went.

Building Better Business Cases for Digital Initiatives

It’s hard to drive digital initiatives and business transformation if you can’t create the business case. Stakeholders want to know what their investment is supposed to get them.

One of the simplest ways to think about business cases is to think in terms of stakeholders, benefits, KPIs, costs, and risks over time frames.

While that’s the basic frame, there’s a bit of art and science when it comes to building effective business cases, especially when it involves transformational change.

Lucky for us, in the book, Leading Digital: Turning Technology into Business Transformation, George Westerman, Didier Bonnet, and Andrew McAfee, share some of their lessons learned in building better business cases for digital initiatives.

What I like about their guidance is that it matches my experience.

Link Operational Changes to Tangible Business Benefits

The more you can link your roadmap to benefits that people care about and can measure, the better off you are.

Via Leading Digital:

“You need initiative-based business cases that establish a clear link from the operational changes in your roadmap to tangible business benefits.  You will need to involve employees on the front lines to help validate how operational changes will contribute to strategic goals.”

Work Out the Costs, the Benefits, and the Timing of Return

On a good note, the same building blocks that apply to any business case, apply to digital initiatives.

Via Leading Digital:

“The basic building blocks of a business case for digital initiatives are the same as for any business case.  Your team needs to work out the costs, the benefits, and the timing of the return.  But digital transformation is still uncharted territory.  The cost side of the equation is easier, but benefits can be difficult to quantify, even when, intuitively, they seem crystal clear.”

Start with What You Know

Building a business case is an art and a science.   To avoid getting lost in analysis paralysis, start with what you know.

Via Leading Digital:

“Building a business case for digital initiatives is both an art and a science.  With so many unknowns, you'll need to take a pragmatic approach to investments in light of what you know and what you don't know.

Start with what you know, where you have most of the information you need to support a robust cost-benefit analysis.  A few lessons learned from our Digital Masters can be useful.”

Don’t Build Your Business Case as a Series of Technology Investments

If you only consider the technology part of the story, you’ll miss the bigger picture.  Digital initiatives involve organizational change management as well as process change.  A digital initiative is really a change in terms of people, process, and technology, and adoption is a big deal.

Via Leading Digital:

“Don't build your business case as a series of technology investments.  You will miss a big part of the costs.  Cost the adoption efforts--digital skill building, organizational change, communication, and training--as well as the deployment of the technology.  You won't realize the full benefits--or possibly any benefits--without them.”

Frame the Benefits in Terms of Business Outcomes

If you don’t work backwards from the end-in-mind, you might not get there.  You need clarity on the business outcomes so that you can chunk up the right path to get there, while flowing continuous value along the way.

Via Leading Digital:

“Frame the benefits in terms of the business outcomes you want to reach.  These outcomes can be the achievement of goals or the fixing of problems--that is, outcomes that drive more customer value, higher revenue, or a better cost position.  Then define the tangible business impact and work backward into the levers and metrics that will indicate what 'good' looks like.  For instance, if one of your investments is supposed to increase digital customer engagement, your outcome might be increasing engagement-to-sales conversion.  Then work back into the main metrics that drive this outcome, for example, visits, likes, inquiries, ratings, reorders, and the like.

When the business impact of an initiative is not totally clear, look at companies that have already made similar investments.  Your technology vendors can also be a rich, if somewhat biased, source of business cases for some digital investments.”

Run Small Pilots, Evaluate Results, and Refine Your Approach

To reduce risk, start with pilots to live and learn.   This will help you make informed decisions as part of your business case development.

Via Leading Digital:

“But, whatever you do, some digital investment cases will be trickier to justify, be they investments in emerging technologies or cutting-edge practices.  For example, what is the value of gamifying your brand's social communities?  For these types of investment opportunities, experiment with a test-and-learn approach.  State your measures of success, run small pilots, evaluate results, and refine your approach.  Several useful tools and methods exist, such as hypothesis-driven experiments with control groups, or A/B testing.  The successes (and failures) of small experiments can then become the benefits rationale to invest at greater scale.  Whatever the method, use an analytical approach; the quality of your estimated return depends on it.

Translating your vision into strategic goals and building an actionable roadmap is the first step in focusing your investment.  It will galvanize the organization into action.  But if you needed to be an architect to develop your vision, you need to be a plumber to develop your roadmap.  Be prepared to get your hands dirty.”

While practice makes perfect, business cases aren’t about perfect.  Their job is to help you get the right investment from stakeholders so you can work on the right things, at the right time, to make the right impact.

You Might Also Like

Cloud Changes the Game from Deployment to Adoption

How Digital is Changing Physical Experiences

McKinsey on Unleashing the Value of Big Data Analytics

Categories: Architecture, Programming

Docker and Microsoft: Integrating Docker with Windows Server and Microsoft Azure

ScottGu's Blog - Scott Guthrie - Wed, 10/15/2014 - 14:30

I’m excited to announce today that Microsoft is partnering with Docker, Inc to enable great container-based development experiences on Linux, Windows Server and Microsoft Azure.

Docker is an open platform that enables developers and administrators to build, ship, and run distributed applications. Consisting of Docker Engine, a lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows, Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments.

Earlier this year, Microsoft released support for Docker containers with Linux on Azure.  This support integrates with the Azure VM agent extensibility model and Azure command-line tools, and makes it easy to deploy the latest and greatest Docker Engine in Azure VMs and then deploy Docker based images within them.

[Image: Docker Support for Windows Server + Docker Hub integration with Microsoft Azure]

Today, I’m excited to announce that we are working with Docker, Inc to extend our support for Docker much further.  Specifically, I’m excited to announce that:

1) Microsoft and Docker are integrating the open-source Docker Engine with the next release of Windows Server.  This release of Windows Server will include new container isolation technology, and support running both .NET and other application types (Node.js, Java, C++, etc) within these containers.  Developers and organizations will be able to use Docker to create distributed, container-based applications for Windows Server that leverage the Docker ecosystem of users, applications and tools.  It will also enable a new class of distributed applications built with Docker that use Linux and Windows Server images together.

2) We will support the Docker client natively on Windows.  Developers and administrators running Windows will be able to use the same standard Docker client and interface to deploy and manage Docker based solutions with both Linux and Windows Server environments.


3) Docker for Windows Server container images will be available in the Docker Hub alongside the Docker for Linux container images available today.  This will enable developers and administrators to easily share and automate application workflows using both Windows Server and Linux Docker images.

4) We will integrate Docker Hub with the Microsoft Azure Gallery and Azure Management Portal.  This will make it trivially easy to deploy and run both Linux and Windows Server based Docker images in Microsoft Azure.

5) Microsoft is contributing code to Docker’s Open Orchestration APIs.  These APIs provide a portable way to create multi-container Docker applications that can be deployed into any datacenter or cloud provider environment. This support will allow a developer or administrator using the Docker command line client to launch either Linux or Windows Server based Docker applications directly into Microsoft Azure from his or her development machine.

Exciting Opportunities Ahead

At Microsoft we continue to be inspired by technologies that can dramatically improve how quickly teams can bring new solutions to market. The partnership we are announcing with Docker today will enable developers and administrators to use the best container tools available for both Linux and Windows Server based applications, and to run all of these solutions within Microsoft Azure.  We are looking forward to seeing the great applications you build with them.

You can learn more about today’s announcements here and here.

Hope this helps,

Scott

Categories: Architecture, Programming

24hrs in F#

Phil Trelford's Array - Tue, 10/14/2014 - 18:55

The easiest place to see what’s going on in the F# community is to follow the #fsharp hash tag on Twitter. The last 24hrs have been as busy as ever, to the point where it can be hard to keep up these days.

Here are some of the highlights:

Events

Build Stuff conference to feature 8 F# speakers:

@c4fsharp + @silverSpoon & the @theburningmonk too :)

— headintheclouds (@ptrelford) October 13, 2014

and workshops including:

FP Days programme is now live, and features keynotes from Don Syme & Christophe Grand, and presentations from:

.@dsyme and @cgrand are Keynoting #fpdays this year!! http://t.co/7vg29eBaDV #fsharp #clojure

— FP Days (@FPDays) October 14, 2014

New Madrid F# meetup group announced:

#fsharp I just created a Madrid F# Meetup Group. Join and let's organize our first meetup! Pls RT http://t.co/L3CQuuKRJr

— Alfonso Garcia-Caro (@alfonsogcnunez) October 14, 2014

F# MVP Rodrigo Vidal announces DNA Rio de Janeiro:

Encontro do #DNA Rio de Janeiro dia 21/10/2014 na @caelum http://t.co/YLF82SG6Wd #csharp #fsharp @netarchitects

— Rodrigo Vidal (@rodrigovidal) October 13, 2014

Try out the new features in FunScript at MF#K Copenhagen:

MF#K - Let's make some fun stuff with F#unScript #MFK #fsharp http://t.co/vYSIYKFEz2

— Ramón Soto Mathiesen (@genTauro42) October 14, 2014

Mathias Brandewinder will be presenting some of his work on F# & Azure in the Bay Area

Pretty excited to show some F# & #Azure stuff I have been working on at BayAzure tomorrow Tues Oct 14: http://t.co/5ZDEfeU5Pv #fsharp

— Mathias Brandewinder (@brandewinder) October 13, 2014

Let’s get hands on session in Portland announced:

#Portland #fsharp meetup next week: Classics Mashup hands-on. http://t.co/4IfAcPyHca #ComeJoinUs!

— James D'Angelo (@jjvdangelo) October 14, 2014

Riccardo Terrell will be presenting Why FP? in Washington DC

#fsharp @dcfsharp I will present at the DC http://t.co/VZwxMXgv0u 10.14.2014 6:30 pm @DCFsharp Riccardo Terrell http://t.co/WLu10f7vx6

— DC F# (@DCFSharp) October 13, 2014

Sessions featuring FsCheck, Paket & SpecFlow scheduled in Vinnitsa (Ukraine)

We'll be talking about #FSharp with @skalinets RT @dou_calendar: Vinnitsa Ciklum .NET Saturday, October 18, Vinnitsa http://t.co/A7P4XDcfbc

— Akim Boyko (@AkimBoyko) October 14, 2014

Projects

Get Doctor Who stats with F# Data HTML Type Provider:

Get Doctor Who stats using #fsharp http://t.co/aSPAH6IjQU

— FSharp.Data (@FSharpData) October 13, 2014

Major update to FSharp.Data.SqlClient:

FSharp.Data.SqlClient major update: SqlEnumProvider merged in. #fsharp #typeprovider

— Dmitry Morozov (@mitekm) October 13, 2014

ProjectScaffold now uses Paket:

Excited to announce ProjectScaffold now uses @PaketManager by default! https://t.co/jFENRSpxu7 https://t.co/TrzMuyt3h9 #dotNET #fsharp

— Paulmichael Blasucci (@pblasucci) October 14, 2014

Microsoft Research presentation on new DBpedia Type Provider:

Ultimate powerful data access with ``F# Type Provider Combinators`` video demo by Andrew Stevenson http://t.co/wcjrJnWTqD #fsharp #DBpedia

— Functional Software (@Functional_S) October 13, 2014

Blogs

More F#, Xamarin.Forms and MVVM by Brad Pillow

More #fsharp, #xamarinForms and MVVM, blog: http://t.co/ZEC56RzeLJ, code: https://t.co/vjLrHwx4Gg

— Brad Pillow (@BradPillow) October 14, 2014

Cross-platform MSBuild API by Robin Neatherway

New blog post: Cross-platform MSBuild API usage in #fsharp at http://t.co/naMJatIyP4

— Robin Neatherway (@rneatherway) October 14, 2014

Hacking the Dream Cheeky Thunder Missile Launcher by Jamie Dixon:

Want more?

Check out Sergey Tihon’s F# Weekly!

Categories: Programming