
Software Development Blogs: Programming, Software Testing, Agile Project Management


Azure: Announcing New Real-time Data Streaming and Data Factory Services

ScottGu's Blog - Scott Guthrie - Fri, 10/31/2014 - 07:39

The last three weeks have been busy ones for Azure.  Two weeks ago we announced a partnership with Docker to enable great container-based development experiences on Linux, Windows Server and Microsoft Azure.

Last week we held our Cloud Day event and announced our new G-Series of Virtual Machines as well as Premium Storage offering.  The G-Series VMs provide the largest VM sizes available in the public cloud today (nearly 2x more memory than the largest AWS offering, and 4x more memory than the largest Google offering).  The new Premium Storage offering (which will work with both our D-series and G-series of VMs) will support up to 32TB of storage per VM, >50,000 IOPS of disk IO per VM, and enable sub-1ms read latency.  Combined they provide an enormous amount of power that enables you to run even bigger and better solutions in the cloud.

Earlier this week, we officially opened our new Azure Australia regions – which are our 18th and 19th Azure regions open for business around the world.  Then at TechEd Europe we announced another round of new features – including the launch of the new Azure Marketplace, a bunch of great network improvements, our new Batch computing service, general availability of our Azure Automation service and more.

Today, I’m excited to blog about even more new services we have released this week in the Azure Data space.  These include:

  • Event Hubs: a scalable service for ingesting and storing data from websites, client apps, and IoT sensors.
  • Stream Analytics: a cost-effective event processing engine that helps uncover real-time insights from event streams.
  • Data Factory: a service that enables better information production by orchestrating and managing diverse data and data movement.

Azure Event Hubs is now generally available, and the new Azure Stream Analytics and Data Factory services are now in public preview.

Event Hubs: Log Millions of events per second in near real time

The Azure Event Hub service is a highly scalable telemetry ingestion service that can log millions of events per second in near real time.  You can use the Event Hub service to collect data/events from any IoT device, from any app (web, mobile, or a backend service), or via feeds like social networks.  We are using it internally within Microsoft to monitor some of our largest online systems.

Once you collect events with Event Hub you can then analyze the data using any real-time analytics system (like Apache Storm or our new Azure Stream Analytics service) and store/transform it into any data storage system (including HDInsight and Hadoop based solutions).

Event Hub is delivered as a managed service on Azure (meaning we run, scale and patch it for you and provide an enterprise SLA).  It delivers:

  • Ability to log millions of events per second in near real time
  • Elastic scaling support with the ability to scale-up/down with no interruption
  • Support for multiple protocols including support for HTTP and AMQP based events
  • Flexible authorization and throttling device policies
  • Time-based event buffering with event order preservation

The pricing model for Event Hubs is very flexible – for just $11/month you can provision a basic Event Hub with guaranteed performance capacity to capture 1 MB/sec of events sent to your Event Hub.  You can then provision as many additional capacity units as you need if your event traffic goes higher. 
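
For example, if your devices send a combined 5 MB/sec of events, you would provision five capacity units; assuming the $11/month basic unit price quoted above scales linearly, that works out to roughly $55/month (check the official Azure pricing page for current rates).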

Getting Started with Capturing Events

You can create a new Event Hub using the Azure Portal or via the command-line.  Choose New->App Service->Service Bus->Event Hub in the portal to do so.

Once created, events can be sent to an Event Hub with either a strongly-typed API (e.g. .NET or Java client library) or by just sending a raw HTTP or AMQP message to the service.  Below is a simple example of how easy it is to log an IoT event to an Event Hub using just a standard HTTP post request.  Notice the Authorization header in the HTTP post – you can use this to optionally enable flexible authentication/authorization for your devices:

POST https://your-namespace.servicebus.windows.net/your-event-hub/messages?timeout=60&api-version=2014-01 HTTP/1.1

Authorization: SharedAccessSignature sr=your-namespace.servicebus.windows.net&sig=tYu8qdH563Pc96Lky0SFs5PhbGnljF7mLYQwCZmk9M0%3d&se=1403736877&skn=RootManageSharedAccessKey

Content-Type: application/atom+xml;type=entry;charset=utf-8

Host: your-namespace.servicebus.windows.net

Content-Length: 42

Expect: 100-continue

 

{ "DeviceId":"dev-01", "Temperature":"37.0" }

Your Event Hub can collect millions of messages per second like this, each carrying whatever data schema you want, and the Event Hubs service will store them in order for you to read and consume later.
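
The same post can also be made from code. Below is a minimal Java sketch using only the JDK's HttpURLConnection (the namespace, hub name and SAS token are placeholders, not working values):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SendEvent {
    public static void main(String[] args) throws Exception {
        // Placeholders: substitute your own namespace, Event Hub name and SAS token.
        URL url = new URL("https://your-namespace.servicebus.windows.net/your-event-hub/messages?timeout=60&api-version=2014-01");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "SharedAccessSignature sr=...&sig=...&se=...&skn=...");
        conn.setRequestProperty("Content-Type", "application/atom+xml;type=entry;charset=utf-8");
        conn.setDoOutput(true);

        byte[] body = "{ \"DeviceId\":\"dev-01\", \"Temperature\":\"37.0\" }"
                .getBytes(StandardCharsets.UTF_8);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body); // send the JSON event as the request body
        }
        System.out.println("HTTP " + conn.getResponseCode()); // 201 Created on success
    }
}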

Downstream Event Processing

Once you collect events, you no doubt want to do something with them.  Event Hubs includes an intelligent processing agent that allows for automatic partition management and load distribution across readers.  You can implement any logic you want within readers, and the data sent to the readers is delivered in the order it was sent to the Event Hub.

In addition to supporting custom Event Readers, we also provide two easy ways to work with pre-built stream processing systems: our new Azure Stream Analytics service and Apache Storm.  Azure Stream Analytics supports stream processing directly from Event Hubs, and Microsoft has created an Event Hubs Storm Spout for use with Apache Storm clusters.

There are many rich ways you can use Event Hubs to collect and then hand off events/data for processing.

Event Hubs provides a flexible, cost-effective building block that you can use to collect and process any events or data you can stream to the cloud, and it scales to meet whatever throughput your scenario requires.

Learning More about Event Hubs

For more information about Azure Event Hubs, please review the Event Hubs documentation on the Azure website.

Stream Analytics: Distributed Stream Processing Service for Azure

I’m excited to announce the preview of our new Azure Stream Analytics service – a fully managed, real-time, distributed stream computation service that provides low-latency, scalable processing of streaming data in the cloud with an enterprise-grade SLA. The new Azure Stream Analytics service easily scales from small projects with just a few KB/sec of throughput to a gigabyte/sec or more of streamed data messages/events.

The Stream Analytics pricing model enables you to run low-throughput streaming workloads continuously at low cost, and to scale up only as your business needs increase.  We do this while maintaining built-in guarantees of event delivery and state management for fast recovery, which enables mission-critical business continuity.

Dramatically Simpler Developer Experience for Stream Processing Data

Stream Analytics supports a SQL-like language that dramatically lowers the bar of the developer expertise required to create a scalable stream processing solution. A developer can simply write a few lines of SQL to do common operations including basic filtering, temporal analysis operations, joining multiple live streams of data with other static data sources, and detecting stream patterns (or lack thereof).

This dramatically reduces the complexity and time it takes to develop, maintain and apply time-sensitive computations on real-time streams of data. Most other streaming solutions available today require you to write complex custom code, but with Azure Stream Analytics you can write simple, declarative and familiar SQL.

Fully Managed Service that is Easy to Setup

With Stream Analytics you can dramatically accelerate how quickly you derive valuable real-time insights and analytics from devices, sensors, infrastructure, or applications. With a few clicks in the Azure Portal, you can create a streaming pipeline, configure its inputs and outputs, and provide SQL-like queries to describe the desired stream transformations/analysis you wish to perform on the data. Once running, you can monitor the scale/speed of your overall streaming pipeline and make adjustments to achieve the desired throughput and latency.

You can create a new Stream Analytics Job in the Azure Portal by choosing New->Data Services->Stream Analytics.

Setup Streaming Data Input

Once created, your first step will be to add a Streaming Data Input, which indicates where the data you want to perform stream processing on is coming from.  From within the portal you can choose Inputs->Add An Input to launch a wizard where you specify the input source.

We can use the Azure Event Hubs service to deliver a stream of data to perform processing on. If you already have an Event Hub created, you can choose it from a list populated in the wizard.  You will also be asked to specify the format used to serialize incoming events in the Event Hub (e.g. JSON, CSV or Avro).

Setup Output Location

The next step in developing our Stream Analytics job is to add a Streaming Output Location.  This configures where we want the output results of our stream processing pipeline to go.  We can choose to output the results to Blob Storage, another Event Hub, or a SQL Database.

Note that being able to use another Event Hub as a target provides a powerful way to connect multiple streams into an overall pipeline with multiple steps.

Write Streaming Queries

Now that we have our input and output sources configured, we can write SQL queries to transform, aggregate and/or correlate the incoming input (or set of inputs in the event of multiple input sources) and output the results to our output target.  We can do this within the portal by selecting the QUERY tab at the top.


There are a number of interesting queries you can write to process the incoming stream of data.  For example, in the Event Hubs section of this blog post I showed how you can use an HTTP POST to submit JSON-based temperature data from an IoT device to an Event Hub, like so:

{ "DeviceId":"dev-01", "Temperature":"37.0" }

When multiple devices are streaming events simultaneously into our Event Hub like this, it would feed into our Stream Analytics job as a stream of continuous data events that look like the sequence below:
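
For illustration (the device IDs and readings below are hypothetical), such a sequence might look like this:

{ "DeviceId":"dev-01", "Temperature":"37.0" }
{ "DeviceId":"dev-02", "Temperature":"45.2" }
{ "DeviceId":"dev-01", "Temperature":"37.8" }
{ "DeviceId":"dev-03", "Temperature":"36.5" }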

Wouldn’t it be interesting to analyze this data from a time-window perspective instead?  For example, it would be useful to calculate, in real time, the average temperature reported by each device over the last 5 seconds of readings.

With the Stream Analytics Service we can now dynamically calculate this over our incoming live stream of data just by writing a SQL query like so:

SELECT DateAdd(second,-5,System.TimeStamp) as WinStartTime, system.TimeStamp as WinEndTime, DeviceId, Avg(Temperature) as AvgTemperature, Count(*) as EventCount 
    FROM input
    GROUP BY TumblingWindow(second, 5), DeviceId

Running this query in our Stream Analytics job will aggregate/transform our incoming stream of data events and write the aggregated results into the output source we configured for our job (e.g. a blob storage file or a SQL Database).

The great thing about this approach is that the data is aggregated/transformed in real time as events are streamed to us, and it scales to handle gigabytes of event data streamed per second.
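
To make the tumbling-window semantics concrete, here is a small Java sketch (an illustration of what the query computes, not how the service implements it) that buckets events into 5-second windows per device and averages them:

import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class TumblingWindowSketch {

    // One temperature reading from a device.
    record Event(long timestampMs, String deviceId, double temperature) {}

    public static void main(String[] args) {
        List<Event> input = List.of(
                new Event(1000, "dev-01", 37.0),
                new Event(2500, "dev-02", 45.2),
                new Event(4200, "dev-01", 37.8),
                new Event(6100, "dev-01", 39.0));

        long windowMs = 5000; // TumblingWindow(second, 5)
        Map<String, double[]> buckets = new TreeMap<>(); // "winEnd|device" -> {sum, count}

        for (Event e : input) {
            // Each event falls into exactly one non-overlapping window.
            long winEnd = ((e.timestampMs() / windowMs) + 1) * windowMs;
            double[] agg = buckets.computeIfAbsent(winEnd + "|" + e.deviceId(), k -> new double[2]);
            agg[0] += e.temperature(); // running sum
            agg[1] += 1;               // running count
        }

        buckets.forEach((key, agg) -> System.out.printf(
                "%s AvgTemperature=%.1f EventCount=%d%n", key, agg[0] / agg[1], (long) agg[1]));
    }
}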

Scaling your Stream Analytics Job

Once defined, you can easily monitor the activity of your Stream Analytics Jobs in the Azure Portal.

You can use the SCALE tab to dynamically increase or decrease scale capacity for your stream processing – allowing you to pay only for the compute capacity you need, and enabling you to handle jobs with gigabytes/sec of streamed data. 

Learning More about Stream Analytics Service

For more information about Stream Analytics, please review the Stream Analytics documentation on the Azure website.

Data Factory: Fully managed service to build and manage information production pipelines

Organizations are increasingly looking to fully leverage all of the data available to their business.  As they do so, the data processing landscape is becoming more diverse than ever before – data is being processed across geographic locations, on-premises and in the cloud, across a wide variety of data types and sources (SQL, NoSQL, Hadoop, etc.), and the volume of data needing to be processed is increasing exponentially. Developers today are often left writing large amounts of custom logic to deliver an information production system that can manage and coordinate all of this data and processing work.

To help make this process simpler, I’m excited to announce the preview of our new Azure Data Factory service – a fully managed service that makes it easy to compose data storage, processing, and data movement services into streamlined, scalable & reliable data production pipelines. Once a pipeline is deployed, Data Factory enables easy monitoring and management of it, greatly reducing operational costs. 

Easy to Get Started

Getting started with Data Factory is simple. With a few clicks in the Azure preview portal, or via our command-line operations, a developer can create a new data factory and link it to data and processing resources.  From the new Azure Marketplace in the Azure Preview Portal, choose Data + Analytics –> Data Factory to create a new instance in Azure.

Orchestrating Information Production Pipelines across multiple data sources

Data Factory makes it easy to coordinate and manage data sources from a variety of locations – including ones both in the cloud and on-premises.  Support for working with on-premises data inside SQL Server, as well as Azure Blobs, Tables, HDInsight Hadoop systems and SQL Databases, is included in this week’s preview release.

Access to on-premises data is supported through a data management gateway that allows for easy configuration and management of secure connections to your on-premises SQL Servers.  Data Factory balances the scale & agility provided by the cloud, Hadoop and non-relational platforms, with the management & monitoring that enterprise systems require to enable information production in a hybrid environment.

Custom Data Processing Activities using Hive, Pig and C#

This week’s preview enables data processing using Hive, Pig and custom C# code activities.  Data Factory activities can be used to clean data, anonymize/mask critical data fields, and transform the data in a wide variety of complex ways.

The Hive and Pig activities can be run on an HDInsight cluster you create, or alternatively you can allow Data Factory to fully manage the Hadoop cluster lifecycle on your behalf.  Simply author your activities, combine them into a pipeline, set an execution schedule and you’re done – no manual Hadoop cluster setup or management required. 

Built-in Information Production Monitoring and Dashboarding

Data Factory also offers an up-to-the-moment monitoring dashboard: deploy your data pipelines and you can immediately begin to view them as part of the dashboard.  Once you have created and deployed pipelines to your Data Factory, you can quickly assess end-to-end data pipeline health, pinpoint issues, and take corrective action as needed.

Within the Azure Preview Portal, you get a visual layout of all of your pipelines and data inputs and outputs. You can see all the relationships and dependencies of your data pipelines across all of your sources, so you always know where data is coming from and where it is going at a glance. We also provide a historical accounting of job execution, data production status, and system health in a single monitoring dashboard.

Learning More about Data Factory

For more information about Data Factory, please review the Data Factory documentation on the Azure website.

Other Great Data Improvements

Today’s releases make it even easier for customers to stream, process and manage the movement of data in the cloud.  Over the last few months we’ve released a bunch of other great data updates as well that make Azure a great platform for any data need.  Since August:

We released a major update of our SQL Database service, which is a relational database-as-a-service offering.  The new SQL DB editions (Basic/Standard/Premium) support a 99.99% SLA, larger database sizes, dedicated performance guarantees, point-in-time recovery, new auditing features, and the ability to easily set up active geo-DR support.

We released a preview of our new DocumentDB service, which is a fully managed, highly scalable NoSQL document database service that supports saving and querying JSON-based data.  It enables you to linearly scale your document store to any application size.  Microsoft’s MSN portal was recently rewritten to use it – and stores more than 20 TB of data within it.

We released our new Redis Cache service, which is a secure, dedicated Redis cache offering managed as a service by Microsoft.  Redis is a popular open-source solution that enables high-performance data types, and our Redis Cache service enables you to stand up an in-memory cache that can make any application much faster.

We released major updates to our HDInsight Hadoop service, which is a 100% Apache Hadoop-based service in the cloud. We have also added built-in support for using two popular frameworks in the Hadoop ecosystem: Apache HBase and Apache Storm.

We released a preview of our new Search-As-A-Service offering, which provides a managed search offering based on ElasticSearch that you can easily integrate into any Web or Mobile Application.  It enables you to build search experiences over any data your application uses (including data in SQLDB, DocDB, Hadoop and more).

And we have released a preview of our Machine Learning service, which provides a powerful cloud-based predictive analytics service.  It is designed for both new and experienced data scientists, includes 100s of algorithms from both the open source world and Microsoft Research, and supports writing ML solutions using the popular R open-source language.

You’ll continue to see major data improvements in the months ahead – we have an exciting roadmap.

Summary

Today’s Microsoft Azure release enables some great new data scenarios, and makes building applications that work with data in the cloud even easier.

If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Make your emails stand out in Inbox

Google Code Blog - Thu, 10/30/2014 - 19:30
Originally posted on the Google Apps Developers Blog

As we announced last week, Inbox is a whole new take on, well, the inbox. It’s built by the Gmail team, but it’s not Gmail—it’s a new product designed to help users succeed in today’s world of email overload and multiple devices. At the same time, Inbox can also help you as a sender by offering new tools to make your emails more interactive!

Specifically, you can now take advantage of a new feature called Highlights.

Exactly like it sounds, Highlights “highlight” or surface key information and actions from an email and display them as easy-to-see chips in the inbox. For example, if you’re an airline that sends flight confirmation emails, Highlights can surface the “Check-in for your flight” action and display live flight status information right in the recipient’s main list. The same can apply if you send customers hotel reservations, event details, event invitations, restaurant reservations, purchases, or other tickets. Highlights help ensure that your recipients see your messages and the important details at a glance.

To take advantage of Highlights, you can mark up your email messages to specify which details you want surfaced for your customers. This will make it possible for not only Inbox, but also Gmail, Google Now, Google Search, and Maps to interact more easily with your messages and give your recipients the best possible experience across Android, iOS and the web.

As an example, the following JSON-LD markup can be used by restaurants to send reservation confirmations to their users/customers:
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "FoodEstablishmentReservation",
  "reservationNumber": "WTA1EK",
  "reservationStatus": "http://schema.org/Confirmed",
  . . . information about dining customer . . .
  "reservationFor": {
    "@type": "FoodEstablishment",
    "name": "Charlie’s Cafe",
    "address": {
      "@type": "PostalAddress",
      "streetAddress": "1600 Amphitheatre Parkway",
      "addressLocality": "Mountain View",
      "addressRegion": "CA",
      "postalCode": "94043",
      "addressCountry": "United States"
    },
    "telephone": "+1 650 253 0000"
  },
  "startTime": "2015-01-01T19:30:00-07:00",
  "partySize": "2"
}
</script>

When your confirmation is received, users will see a convenient Highlight with the pertinent details at the top of their Inbox, and can open the message to obtain the full details of their reservation.

Getting started is simple: read about email markup, check out more markup examples, then register at developers.google.com/gmail/markup and follow the instructions from there!


by Shalini Agarwal, Product Management, Inbox by Gmail
Categories: Programming

Start Your Digital Vision by Reenvisioning Your Customer Experience

You probably hear a lot about the Mega-Trends (Cloud, Mobile, Social, and Big Data), or the Nexus of Forces (the convergence of social, mobility, cloud and data insights patterns that drive new business scenarios), or the Mega-Trend of Mega-Trends (Internet of Things).

And you are probably hearing a lot about digital transformation, and maybe even about the rise of the CDO (Chief Digital Officer).

All of this digital transformation is about creating business change, driving business outcomes, and driving better business results.

But how do you create your digital vision and strategy? And where do you start?

In the book, Leading Digital: Turning Technology into Business Transformation, George Westerman, Didier Bonnet, and Andrew McAfee, share some of their lessons learned from companies that are digital masters that created their digital visions and are driving business change.

3 Perspectives of Digital Vision

When it comes to creating your digital vision, you can focus on reenvisioning the customer experience, the operational processes, or your business model.

Via Leading Digital:

“Where should you focus your digital vision? Digital visions usually take one of three perspectives: reenvisioning the customer experience, reenvisioning operational processes, or combining the previous two approaches to reenvision business models.  The approach you take should reflect your organization’s capabilities, your customer’s needs, and the nature of competition in your industry.”

Start with Your Customer Experience

One of the best places to start is with your customer experience.  After all, a business exists to create a customer.  And the success of the business is how well it creates value and serves the needs of the customer.

Via Leading Digital:

“Many organizations start by reenvisioning the way they interact with customers.  They want to make themselves easier to work with, and they want to be smarter in how they sell to (and serve) customers.  Companies start from different places when reenvisioning the customer experience.”

Transform the Relationship

You can use the waves of technologies (Cloud, Mobile, Social, Data Insights, and Internet of Things), to transform how you interact with your customers and how they experience your people, your products, and your services.

Via Leading Digital:

“Some companies aim to transform their relationships with their customers.  Adam Brotman, chief digital officer of Starbucks, shared this vision: 'Digital has to help more partners and help the company be the way we can ... tell our story, build our brand, and have a relationship with our customers.' Burberry's CEO Angela Ahrendts focused on multichannel coherence: 'We had a vision, and the vision was to be the first company who was fully digital end-to-end ... A customer will have total access to Burberry across any device, anywhere.'  Marc Menesquen, managing director of strategic marketing at cosmetics giant L'Oreal, said, 'The digital world multiplies the ways our brands can create an emotion-filled relationship with their customers.’”

Serve Your Customers in Smarter Ways

You can use technology to personalize the experience for your customers, and create better interactions along the customer experience journey.

Via Leading Digital:

“Other companies envision how they can be smarter in serving (and selling to) their customers through analytics.  Caesars started with a vision of using real-time customer information to deliver a personalized experience to each customer.  The company was able to increase customer satisfaction and profits per customer using traditional technologies.  Then, as new technologies arose, it extended the vision to include a mobile, location-based concierge in the palm of every customer's hand.”

Learn from Customer Behavior

One of the most powerful things you can now do with the combination of Cloud, Mobile, Social, Big Data and Internet of Things is gain better customer insights.  For example, you can learn from the wealth of social media insights, or you can learn through better integration and analytics of your existing customer data.

Via Leading Digital:

“Another approach is to envision how digital tools might help the company to learn from customer behavior.  Commonwealth Bank of Australia sees new technologies as a key way of integrating customer inputs in its co-creation efforts.  According to CIO Ian Narev, 'We are progressively applying new technology to enable customers to play a greater part in product design.  That helps us create more intuitive products and services, readily understandable to our customers and more tailored to their individual needs.'”

Change Customers’ Lives

If you focus on high-value activities, you can create breakthroughs in the daily lives of your customers.

Via Leading Digital:

“Finally, some companies are extending their visions beyond influencing customer experience to actually changing customers' lives.  For instance, Novartis CEO Joseph Jimenez wrote of this potential: ‘The technologies we use in our daily lives, such as smart phones and tablet devices, could make a real difference in helping patients to manage their own health.  We are exploring ways to use these tools to improve compliance rates and enable health-care professionals to monitor patient progress remotely.’”

If you want to change the world, one of the best places to start is right from wherever you are.

With a Cloud and a dream, what can you do to change the world?


Categories: Architecture, Programming

Are You Living In The Past Or Future?

Making the Complex Simple - John Sonmez - Thu, 10/30/2014 - 15:00

In this video I talk about the importance of living in the present moment and give some tips on how you can make sure you aren’t living in the past or future.

The post Are You Living In The Past Or Future? appeared first on Simple Programmer.

Categories: Programming

Episode 213: James Lewis on Microservices

Johannes Thönes talks to James Lewis, principal consultant at ThoughtWorks, about microservices. They discuss microservices’ recent popularity, architectural styles, deployment, size, technical decisions, and consumer-driven contracts. They also compare microservices to service-oriented architecture and wrap up the episode by talking about key figures in the microservice community and standing on the shoulders of giants. Recording […]
Categories: Programming

The fastest route between voice search and your app

Android Developers Blog - Wed, 10/29/2014 - 20:49
By Jarek Wilkiewicz, Developer Advocate, Google Search

How many lines of code will it take to let your users say Ok Google, and search for something in your app? Hardly any. Starting today, all you need is a small addition to your AndroidManifest.xml in order to connect the Google Now SEARCH_ACTION with your searchable activity:

<activity android:name=".SearchableActivity">
    <intent-filter>
        <action android:name="com.google.android.gms.actions.SEARCH_ACTION"/>
        <category android:name="android.intent.category.DEFAULT"/>
    </intent-filter>
</activity>

Once you make these changes, your app can receive the SEARCH_ACTION intent containing the SearchManager.QUERY extra with the search expression.
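
Handling the intent inside your searchable activity is equally brief. Here is a minimal sketch (doMySearch() is a hypothetical app-specific method, not part of the platform API):

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    Intent intent = getIntent();
    // Voice searches arrive as SEARCH_ACTION with the spoken text in SearchManager.QUERY.
    if ("com.google.android.gms.actions.SEARCH_ACTION".equals(intent.getAction())) {
        String query = intent.getStringExtra(SearchManager.QUERY);
        doMySearch(query); // hypothetical app-specific search method
    }
}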

At Google, we always look for innovative ways to help you improve mobile search and drive user engagement back to your app. For example, users can now say to the Google app: “Ok Google, search pizza on Eat24” or “Ok Google, search for hotels in Maui on TripAdvisor.”

This feature is available on English locale Android devices running Jelly Bean and above with the Google app v3.5 or greater. Last but not least, users can enable the Ok Google hot-word detection from any screen, which offers them the fastest route between their search command and your app!




Categories: Programming

No map is an island: Introducing a connected JavaScript Maps API experience

Google Code Blog - Wed, 10/29/2014 - 17:52
Cross-posted from the Google Geo Developers blog

Our digital lives are increasingly connected. We research on our laptops, look up directions on our phones and even navigate with our watches. And by creating maps unique to each user and offering features such as saved places, Google Maps has been making it easier to continue these tasks as we move from device to device.

However, although maps embedded from Google Maps are now built uniquely for every Google user, most of the now two million active sites and apps using the Maps APIs are still islands. When I look for a place to eat on Zagat, I can’t see how far away it is from work. When I look at a travel map in the New York Times, I can’t save those places in order to navigate to them later.

Today we’re taking a step towards connecting these two million sites and apps by introducing a signed-in JavaScript Maps API experience and a feature called attributed save. To help illustrate, we’ve partnered with the New York Times to bring this experience to their 36 Hours travel column.

A connected JavaScript Maps API

When you add &signed_in=true to the Google Maps JavaScript API source URL, your end users will have the option to sign into the map with their Google account. When they do so, your users will receive a map built for them, in the context of your app. Their saved places — including home and work addresses (if set by the end user) as well as other relevant places — will appear automatically on their map, providing a layer of context that anchors your content and makes it stand out even more.

Attributed save

Once users are signed into the Google Maps in your app, we can together create an integrated experience between your map content and Google Maps. With attributed save, signed-in users can save places from your app to be accessed later, with attribution and linkbacks, on Google Maps for the web, Android and iOS.

What’s more, you can also enable deep links into your mobile applications. For instance, users can save a place from your desktop app (such as Zagat.com), open up the place on Google Maps on their Android device, and deep link directly into your Android app.

Enabling attributed save is easy — just specify your app name, a link and a place search string or place ID when creating a marker and info window. Or use our SaveWidget to enable attributed save in your own custom info window.

In addition, we’re also launching attributed save across all embedded maps today. Attribution and linkback parameters will be inferred automatically from the domain and referrer of the host site, so if you’re using our embedded maps, you don’t need to do anything! If you’re using the Google Maps Embed API, you may customize the source and linkback parameters yourself.

One final point: we’ve stated in the past that the JavaScript Maps API is cookieless if loaded from maps.googleapis.com. As of today, to enable the signed in maps experience on sites across the web, the signed-in version of the JavaScript Maps API now does rely on cookies to detect the end user’s signed-in state. Please review our documentation for further details.

That’s all for now. Go try it out. And remember, no map is an island, entire of itself...

Posted by Ken Hoetmer, Product Manager, Google Maps APIs
Categories: Programming

How to Dockerize your Dropwizard Application

Xebia Blog - Wed, 10/29/2014 - 10:47

If you want to deploy your Dropwizard Application on a Docker server, you can Dockerize your Dropwizard Application. Since a Dropwizard Application is already packaged as an executable Java ARchive file, creating a Docker image for such an application should be easy.

 

In this blog, you will learn how to Dockerize a Dropwizard Application in 4 easy steps.

Before you start

  • You are going to use the Dropwizard-example application, which can be found at the Dropwizard GitHub repository.
  • Additionally you need Docker. I used Boot2Docker to run the Dockerized Dropwizard Application on my laptop. If you use Boot2Docker, you may need this Boot2Docker workaround to access your Dockerized Dropwizard application.
  • This blog does not describe how to create Dropwizard applications. The Dropwizard getting started guide provides an excellent starting point if you like to know more about building your own Dropwizard applications.

 

Step 1: create a Dockerfile

You can start with creating a Dockerfile. Docker can automatically build images by reading the instructions described in this file. Your Dockerfile could look like this:

FROM dockerfile/java:openjdk-7-jdk

ADD dropwizard-example-1.0.0.jar /data/dropwizard-example-1.0.0.jar

ADD example.keystore /data/example.keystore

ADD example.yml /data/example.yml

RUN java -jar dropwizard-example-1.0.0.jar db migrate /data/example.yml

CMD java -jar dropwizard-example-1.0.0.jar server /data/example.yml

EXPOSE 8080

 

The Dropwizard Application needs a Java Runtime, so you can start from a base image already available at Docker Hub, for example: dockerfile/java:openjdk-7-jdk.

You must add the Dropwizard Application files to the image, using the ADD instruction in your Dockerfile.

Next, simply specify the commands of your Dropwizard Application which you want to execute during image build and at container runtime. In the example above, the db migrate command is executed when the Docker image is built, and the server command is executed when you issue a docker run command to create a running container.

Finally, the EXPOSE instruction tells Docker that your container will listen on the specified port(s) at runtime.
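
One thing to note: EXPOSE only documents which port the container listens on; it does not publish the port to the host. You still map it to a host port with the -p option when you run the container, as shown in step 3 below.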

 

Step 2: build the Docker image

Place the Dockerfile and your application files in a directory and execute the docker build command to build a Docker image.

docker@boot2docker:~$ docker build -t dropwizard/dropwizard-example ~/dropwizard/

 

In the console output you should be able to see that the Dropwizard Application db migrate command is executed. If everything is OK, the last line informs you that the image was successfully built.

Successfully built dd547483b57b

 

Step 3: run the Docker image

Use the docker run command to create a container based on the image you have created. If you need to find your image ID, use the docker images command to list your images. It should take around 3 seconds to start the Dockerized Dropwizard example application.

docker run -p 8080:8080 dd547483b57b

Notice that I included the -p option to create a network port binding, which maps port 8080 inside the container to port 8080 on the Docker host.  You can verify whether your container is running using the docker ps command.

docker@boot2docker:~$ docker ps

CONTAINER ID        IMAGE                                  COMMAND                CREATED             STATUS              PORTS                    NAMES

3b6fb75adad6        dropwizard/dropwizard-example:latest   "/bin/sh -c 'java -j   3 minutes ago       Up 3 minutes        0.0.0.0:8080->8080/tcp   high_turing

 

Step 4: test the application

Now the application is ready for use. You can access the application using your Docker host IP address and the forwarded port 8080. For example, use the Google Advanced Rest Client App to register “John Doe”.


Tips for integrating with Google Accounts on Android

Android Developers Blog - Tue, 10/28/2014 - 18:24
By Laurence Moroney, Developer Advocate

Happy Tuesday! We've had a few questions come in recently regarding Google Accounts on Android, so we've put this post together to show you some of our best practices. The tips today will focus on Android-based authentication, which is easily achieved through the integration of Google Play services. Let's get started.

Unique Identifiers

A common confusion happens when developers use the account name (a.k.a. email address) as the primary key to a Google Account. For instance, when using GoogleApiClient to sign in a user, a developer might use the following code inside of the onConnected callback for a registered GoogleApiClient.ConnectedCallbacks listener:

[Error prone pseudocode]
String accountName = Plus.AccountApi.getAccountName(mGoogleApiClient);
// createLocalAccount() is specific to the app's local storage strategy.
createLocalAccount(accountName);

While it is OK to store the email address for display or caching purposes, it is possible for users to change the primary email address on a Google Account. This can happen with various types of accounts, but these changes happen most often with Google Apps For Work accounts.

So what's a developer to do? Use the Google Account ID (as opposed to the Account name) to key any data for your app that is associated to a Google Account. For most apps, this simply means storing the Account ID and comparing the value each time the onConnected callback is invoked to ensure the data locally matches the currently logged in user. The API provides methods that allow you to get the Account ID from the Account Name. Here is an example snippet you might use:

[Google Play Services 6.1+]
String accountName = Plus.AccountApi.getAccountName(mGoogleApiClient);
String accountID = GoogleAuthUtil.getAccountId(accountName);
createLocalAccount(accountID);
[Earlier Versions of Google Play Services (please upgrade your client)]
Person currentUser = Plus.PeopleApi.getCurrentPerson(mGoogleApiClient);
String accountID = currentUser.getId();
createLocalAccount(accountID);

This will key the local data against a Google Account ID, which is unique and stable for the user even after changing an email address.

So, in the above scenario, if your data was keyed on an ID, you wouldn’t have to worry if your users change their email address. When they sign back in, they’ll still get the same ID, and you won’t need to do anything with your data.

Multiple Accounts

If your app supports multiple simultaneous account connections (like the Gmail user interface), you are calling setAccountName on the GoogleApiClient.Builder when constructing GoogleApiClients. This requires you to store the account name as well as the Google Account ID within your app. However, the account name you’ve stored will be different if the user changes their primary email address. The easiest way to deal with this is to prompt the user to log in again, then update the account name when onConnected is called after login. Any time a login occurs, you can use code such as this to compare Account IDs and update the email address stored locally for the Account ID.

[Google Play Services 6.1+]
String accountName = Plus.AccountApi.getAccountName(mGoogleApiClient);
String accountID = GoogleAuthUtil.getAccountId(accountName);
// isExistingLocalAccount(), createLocalAccount(), 
// getLocalDataAccountName(), and updateLocalAccountName() 
// are all specific to the app's local storage strategy.
boolean existingLocalAccountData = isExistingLocalAccount(accountID);
if (!existingLocalAccountData) {
    // New Login.
    createLocalAccount(accountID, accountName);
} else {
    // Existing local data for this Google Account.
    String cachedAccountName = getLocalDataAccountName(accountID);    
    if (!cachedAccountName.equals(accountName)) {
        updateLocalAccountName(accountID, accountName);
    }
}

This scenario reinforces the importance of using the Account ID to key all data in your app.
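
The snippets above leave isExistingLocalAccount(), createLocalAccount(), getLocalDataAccountName() and updateLocalAccountName() to your app's local storage strategy. As one minimal sketch of that contract (in-memory only; a real app would back it with SharedPreferences or SQLite):

import java.util.HashMap;
import java.util.Map;

public class LocalAccountStore {

    // Stable Google Account ID -> account name (email address, which may change).
    private final Map<String, String> accounts = new HashMap<>();

    public boolean isExistingLocalAccount(String accountId) {
        return accounts.containsKey(accountId);
    }

    public void createLocalAccount(String accountId, String accountName) {
        accounts.put(accountId, accountName);
    }

    public String getLocalDataAccountName(String accountId) {
        return accounts.get(accountId);
    }

    public void updateLocalAccountName(String accountId, String accountName) {
        // The ID stays the key; only the cached email address changes.
        accounts.put(accountId, accountName);
    }
}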

Online data

The same best practices above apply to storing data for Google Accounts in web servers for your app. If you are storing data on your servers in this manner and treating the email address as the primary key:

ID [Primary Key]    Field 1    Field 2    Field 3
user1@gmail.com     Value 1    Value 2    Value 3

You need to migrate to this model, where the primary key is the Google Account ID:

ID [Primary Key]          Email              Field 1    Field 2    Field 3
108759069548186989918     user1@gmail.com    Value 1    Value 2    Value 3

If you don't make Google API calls from your web server, you might be able to depend on the Android application to notify your web server of changes to the primary email address when implementing the updateLocalAccountName method referenced in the multiple accounts sample code above. If you make Google API calls from your web server, you likely implemented it using cross-client authentication, and can detect changes via the OAuth2 client libraries or REST endpoints on your server as well.

Conclusion

When using Google Account authentication for your app, it’s definitely a best practice to use the account ID, as opposed to the account name, to distinguish data for the user. In this post, we saw three scenarios where you may need to make changes to make your apps more robust. With the growing adoption of Google for Work, users who change their email address but keep the same account ID may occur more frequently, so we encourage all developers to make plans to update their code as soon as possible.




Categories: Programming

ScanAgile 2015 submissions are open!

Software Development Today - Vasco Duarte - Tue, 10/28/2014 - 18:15


Just a quick note today to let you know that the Call for Sessions for ScanAgile, the Agile Finland annual conference, is open for submissions.
You can read the whole call for sessions here; you will find the submission form on that page as well.

For me the most interesting tracks are:

  • Off-Piste: interesting lessons learned about being agile and agile-related topics, from other industries
  • Black Piste: Topics for experienced agile practitioners
These are just some of the tracks. In ScanAgile there will also be tracks for those just starting out, or that have started but are still in the early phases of their Agile transformation journey.


The Agile Finland community is very active and has a long history of agile adoption and promotion. It includes some of the most advanced practitioners in the world, so I am really looking forward to seeing who the ScanAgile team chooses for the 2015 lineup of the conference!


Hope to see many of you there! 

Google Fit SDK available now

Google Code Blog - Tue, 10/28/2014 - 18:13

After previewing it earlier this summer, today the Google Fit APIs are fully available on Android, Android Wear and the web so that you can build and publish apps for users on Google Play. Head to developers.google.com/fit to learn more.

The Google Fit platform gives the user one place to keep all their fitness activities. With the user’s permission, any developer can store or read the user’s data from Google Fit and use it to build powerful and useful fitness experiences for their users.

For users, we’re also launching the Google Fit app on Google Play for smartphones, tablets, Wear, and on the web at google.com/fit. The Google Fit app provides users with effortless, all-day activity tracking, as well as displaying key fitness data that our partners have stored in the platform. This app will also provide an opportunity for users to discover apps that help them track their fitness goals using Google Fit.

To get a quick introduction to the Fit APIs, check out the DevByte videos.


A number of partners from around the fitness industry have been hard at work preparing their apps for Google Fit. In the coming weeks, our previously-announced launch partners, Nike+ Running, Withings HealthMate, Runkeeper, Runtastic, and Noom Coach, will launch their Google Fit integrations. We’re also happy to announce 6 new Google Fit partners: Strava, MapMyRun, LynxFit, LifeSum, FatSecret, and Azumio. These new partners are also preparing great experiences that will launch soon.

Please join the Google Fit Developer Community to share ideas and get inspired. We can’t wait to see what you come up with!

Posted by Angana Ghosh, Product Manager, Google Fit

Categories: Programming

Azure: New Marketplace, Network Improvements, New Batch Service, Automation Service, more

ScottGu's Blog - Scott Guthrie - Tue, 10/28/2014 - 15:35

Today we released a major set of updates to Microsoft Azure. These updates include:

  • Marketplace: Announcing Azure Marketplace and partnerships with key technology partners
  • Networking: Network Security Groups, Multi-NIC, Forced Tunneling, Source IP Affinity, and much more
  • Batch Computing: Public Preview of the new Azure Batch Computing Service
  • Automation: General Availability of the Azure Automation Service
  • Anti-malware: General Availability of Microsoft Anti-malware for Virtual Machines and Cloud Services
  • Virtual Machines: General Availability of many more VM extensions – PowerShell DSC, Octopus, VS Release Management

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

Marketplace: Announcing Azure Marketplace and partnerships with key technology partners

Last week, at our Cloud Day event in San Francisco, I announced a new Azure Marketplace that helps to better connect Azure customers with partners, ISVs and startups.  With just a couple of clicks, you can now quickly discover, purchase, and deploy any number of solutions directly into Azure.

Exploring the Marketplace

You can explore the Azure Marketplace by clicking the Marketplace tile that is pinned by default to the home screen of the Azure Preview Portal.

Clicking the Marketplace tile will enable you to explore a large selection of applications, VM images, and services that you can provision into your Azure subscription.

Using the marketplace provides a super easy way to take advantage of a rich ecosystem of applications and services integrated to run great with Azure.  Today’s marketplace release includes multi-VM templates to run Hadoop clusters powered by Cloudera or Hortonworks, Linux VMs powered by Ubuntu, CoreOS, SUSE, CentOS, Microsoft SharePoint Server Farms, Cassandra Clusters powered by DataStax, and a wide range of security virtual appliances.

You can click any of the items in the gallery to learn more about them and optionally deploy them.  Doing so will walk you through a simple-to-follow creation wizard that enables you to optionally configure how/where they will run, as well as display any additional pricing required for the apps/services/VM images that you select.

For example, it takes only a few steps in the creation wizard to stand up an 8-node DataStax Enterprise cluster.

Solutions you purchase through the Marketplace will be automatically billed to your Azure subscription (avoiding the need for you to set up a separate payment method).  Virtual Machine images will support the ability to bring your own license or rent the image license by the hour (which is ideal for proof-of-concept solutions or cases where you need the solution for only a short period of time).  Both Azure Direct customers as well as customers who pay using an Enterprise Agreement can take advantage of the Azure Marketplace starting today.

You can learn more about the Azure Marketplace as well as browse the items within it here.

Networking: Lots and lots of New Features and Improvements

This week’s Azure update includes a ton of new capabilities in the Azure networking stack.  You can use these new networking capabilities immediately in the North Europe region, and they will be available worldwide in all regions in November 2014.  The new network capabilities include:

Network Security Groups

You can now create Network Security Groups to define access control rules for inbound and outbound traffic to a virtual machine or a group of virtual machines in a subnet. The security groups and their rules can be managed and updated independent of the life cycle of the VM.

Multi-NIC Support

You can now create and manage multiple virtual network interfaces (NICs) on a VM.  Multi-NIC support is a fundamental requirement for a majority of network virtual appliances that can be deployed in Azure. Having this support now enabled within Azure will enable even richer network virtual appliances to be used.

Forced Tunneling

You can now redirect or “force” all Internet-bound traffic that originates in a cloud application back through an on-premises network via a Site-to-Site VPN tunnel for inspection and auditing. This is a critical security capability for enterprise grade applications.

ExpressRoute Enhancements

You can now share a single ExpressRoute connection across multiple Azure subscriptions. Additionally, a single Virtual Network in Azure can now be linked to more than one ExpressRoute circuit, thereby enabling much richer backup and disaster recovery scenarios.


New VPN Gateway Sizes

To cater to the growing hybrid connectivity throughput needs and the number of cross-premises sites, we are announcing the availability of a higher-performance Azure VPN gateway. This enables faster ExpressRoute and Site-to-Site VPN gateways with more tunnels.

Operations and audit logs for VNet Gateways and ExpressRoute

You can now view operations logs for Virtual Network Gateways and ExpressRoute circuits. The Azure portal will show operations logs and information on all API calls you make, as well as on important infrastructure changes such as scheduled updates to gateways.

Advanced Virtual Network Gateway policies

You can now control encryption for the tunnel between Virtual Networks. You have a choice between 3DES, AES128, AES256 and Null encryption, and you can also enable Perfect Forward Secrecy (PFS) for IPsec/IKE gateways.

Source IP Affinity

The Azure Load Balancer now supports a new distribution mode called Source IP Affinity (also known as session affinity or client IP affinity). You can now load balance traffic based on a 2-tuple (Source-IP, Destination-IP) or 3-tuple (Source-IP, Destination-IP and Protocol) distribution modes.
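
To illustrate the idea, here is a rough Java sketch of hash-based affinity (not Azure's actual algorithm): a 2-tuple distribution mode maps each (source IP, destination IP) pair deterministically to one backend instance.

import java.util.List;

public class SourceIpAffinitySketch {

    // Deterministically map a (source IP, destination IP) pair to one backend,
    // so repeated requests from the same pair land on the same instance.
    static String pickBackend(String sourceIp, String destIp, List<String> backends) {
        int index = Math.floorMod((sourceIp + "|" + destIp).hashCode(), backends.size());
        return backends.get(index);
    }

    public static void main(String[] args) {
        List<String> backends = List.of("vm-0", "vm-1", "vm-2");
        // Same 2-tuple -> same backend, every time.
        System.out.println(pickBackend("203.0.113.7", "10.0.0.4", backends));
        System.out.println(pickBackend("203.0.113.7", "10.0.0.4", backends));
    }
}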

Nested policies for Traffic Manager

You can now create nested policies for traffic management. This allows tremendous flexibility in creating powerful load-balancing and failover schemes to support the needs of larger, more complex deployments.

Portal Support for Managing Internal Load Balancer, Reserved and Instance IP addresses for Virtual Machines

It is now possible to use the Azure Preview Portal to manage creating and setting up internal load balancers, as well as reserved and instance IP addresses for virtual machines.

Automation: General Availability of Azure Automation Service

I am excited to announce the General Availability of the Azure Automation service. Azure Automation enables the creation, deployment, monitoring, and maintenance of resources in an Azure environment using a highly scalable and reliable workflow engine. The service can be used to orchestrate time-consuming and frequently repeated operational tasks across Azure and third-party systems while decreasing operating expenses.

Azure Automation allows you to build runbooks (PowerShell Workflows) to describe your administration processes, provides a secure global assets store so you don’t need to hardcode sensitive information within your runbooks, and offers scheduling so that runbooks can be triggered automatically.

Runbooks can automate a wide range of scenarios – from simple day to day manual tasks to complex processes that span multiple Azure services and 3rd party systems. Because Automation is built on PowerShell, you can take advantage of the many existing PowerShell modules, or author your own to integrate with third party systems.

Creating and Editing Runbooks

You can create a runbook from scratch, or start by importing an existing template from the runbook gallery.

Runbooks can also be edited directly in the administration portal.

Pricing

Available as a pay-as-you-go service, Automation is billed based on the number of job runtime minutes used in a given Azure subscription.  500 minutes of free job runtime credits are also included each month for Azure customers to use at no charge.

Learn More

To learn more about Azure Automation, check out the Azure Automation documentation on the Azure website.

Batch Service: Preview of Azure Batch - new job scheduling service for parallel and HPC apps

I’m excited to announce the public preview of our new Azure Batch Service. This new platform service provides “job scheduling as a service” with auto-scaling of compute resources, making it easy to run large-scale parallel and high performance computing (HPC) work in Azure. You submit jobs, we start the VMs, run your tasks, handle any failures, and then shut things down as work completes.

Azure Batch is the job scheduling engine that we use internally to manage encoding for Azure Media Services, and for testing Azure itself. With this preview, we are excited to expand our SDK with a new application framework from GreenButton, a company Microsoft acquired earlier in the year. The Azure Batch SDK makes it easy to cloud-enable parallel, cluster, and HPC applications by describing jobs with the required resources, data, and one or more compute tasks.

Azure Batch can be used to run large volumes of similar tasks or applications in parallel, programmatically. A command line program or script takes a set of data files as input, processes the data in a series of tasks, and produces a set of output files. Examples of batch workloads that customers are running today in Azure include calculating risk for banks and insurance companies, designing new consumer and industrial products, sequencing genes and developing new drugs, searching for new energy sources, rendering 3D animations, and transcoding video.

Azure Batch makes it easy for these customers to use hundreds, thousands, tens of thousands of cores, or more on demand. With job scheduling as a service, Azure developers can focus on using batch computing in their applications and delivering services without needing to build and manage a work queue, scale resources up and down efficiently, dispatch tasks, and handle failures.


The scale of Azure helps batch computing customers get their work done faster, experiment with different designs, run larger and more precise models, and test a large number of different scenarios without having to invest in and maintain large clusters.

Learn more about Azure Batch and start using it for your applications today.

Virtual Machines: General Availability of Microsoft Anti-Malware for VMs and Cloud Services

I’m excited to announce that the Microsoft Anti-malware security extension for Virtual Machines and Cloud Services is now generally available.  We are releasing it as a free capability that you can use at no additional charge.

The Microsoft Anti-malware security extension can be used to help identify and remove viruses, spyware or other malicious software.  It provides real-time protection from the latest threats and also supports on-demand scheduled scanning.  Enabling it is a good security best practice for applications hosted either on-premises or in the cloud.

Enabling the Anti-Malware Extension

You can select and configure the Microsoft Antimalware security extension for virtual machines using the Azure preview portal, Visual Studio or APIs/PowerShell.  Antimalware events are then logged to the customer-configured Azure Storage account via Azure Diagnostics and can be piped to HDInsight or a SIEM tool for further analysis. More information is available in the Microsoft Antimalware Whitepaper.

To enable the antimalware feature on an existing virtual machine, select the EXTENSIONS tile on a Virtual Machine in the Azure Preview Portal, then click ADD in the command bar and select the Microsoft Antimalware extension. Then, click CREATE and customize any settings.

Virtual Machines: General Availability of even more VM Extensions

In addition to enabling the Microsoft Anti-Malware extension for Virtual Machines, today’s release also includes support for a whole bunch more new VM extensions that you can enable within your Virtual Machines.  These extensions can be added and configured using the same EXTENSIONS tile on Virtual Machine resources within the Azure Preview Portal (the same screen-shot as in the Anti-malware section above).

The new extensions enabled today include:

PowerShell Desired State Configuration

The PowerShell Desired State Configuration Extension can be used to deploy and configure Azure VMs using Desired State Configuration (DSC) technology. DSC enables you to declaratively specify how you want your software environment to be configured. DSC configuration can also be automated using the Azure PowerShell SDK, and you can push configurations to any Azure VM and have them enacted automatically. For more details, please see this desired state configuration blog post.


Octopus

Octopus simplifies the deployment of ASP.NET web applications, Windows Services and other applications by automatically configuring IIS, installing services and making configuration changes. Octopus integration with Azure was one of the top-requested features on Azure UserVoice, and with this integration we simplify the deployment and configuration of Octopus on the VM.


Visual Studio Release Management

Release Management for Visual Studio is a continuous delivery solution that automates the release process through all of your environments from TFS through to production. Visual Studio Release Management is integrated with TFS and you can configure multi-stage release pipelines to automatically deploy and validate your applications on multiple environments. With the new Visual Studio Release Management extension, VMs can be preconfigured with the components required for Release Management to operate.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Material Design on Android Checklist

Android Developers Blog - Tue, 10/28/2014 - 14:48

By Roman Nurik, Design Advocate

Android 5.0 brings in material design as the new design system for the platform and system apps. Consumers will soon start getting Android 5.0 and they’re already seeing glimpses of material design with apps like Google Play Newsstand, Inbox by Gmail and Tumblr. Meanwhile, developers now have the Android 5.0 SDK, along with AppCompat for backward compatibility. And designers now have access to Photoshop, Illustrator and Sketch templates. All this means that now—yes now!—is the time to start implementing material design in your Android apps. Today, let’s talk about what implementing material design really boils down to.

Below, you’ll find a material design checklist that you can use to mark progress as you implement the new design system. The checklist is divided into four sections based on the four key aspects of material design.

If you include a good chunk of the items in the checklist below, especially the ones indicated as signature elements, and follow traditional Android design best practices (i.e. these, these, and things we discussed on ADiA), you’ll be well on your way to material design awesomeness!

Tangible Surfaces

UIs consist of surfaces (pieces of “digital paper”) arranged at varying elevations, casting shadows on surfaces behind them.
Figure 1. Surfaces and layering.
  • Signature element: Shadows are used to communicate which surfaces are in front of others, helping focus attention and establish hierarchy. Read more on depth and layering in UIs. In code: These are the android:elevation and android:translationZ attributes in Android 5.0. On earlier versions, shadows are normally provided as PNG assets.
  • Shadows and surfaces are used in a consistent and structured way. Each shadow indicates a new surface. Surfaces are created thoughtfully and carefully.
  • There are generally between 2 and 10 surfaces on the screen at once; avoid too much layering/nesting of surfaces.
  • Scrollable content either scrolls to the edges of the screen or behind another surface that casts a shadow over the content’s surface. Never clip an element against an invisible edge—elements don’t just scroll off into nowhere. Put another way, you rarely scroll the ink on a surface; you scroll the surface itself. In code: android:clipToPadding="false" often helps with this when using ListView and ScrollView.
  • Surfaces have simple, single-color backgrounds.
A Bold, Print-Like Aesthetic

The “digital ink” you draw on those pieces of digital paper is informed by classic print design, with an emphasis on bold use of color and type, contextual imagery, and structured whitespace.
Figure 2. Primary and accent colors.

Figure 3. Keylines.
  • Signature element: Apps use a primary color and an accent color (Figure 2) to color surface backgrounds and key UI widgets such as text fields and checkboxes. The accent color contrasts very well with the primary color (for example an app can use a dark blue primary color and a neon pink accent color). The accent color is high-contrast and is used to call attention to key UI elements, like a circular floating action button, selected tab strips, or form fields. In code: Set the android:colorPrimary and android:colorAccent attributes in your theme (drop the android prefix if using AppCompat). AppCompat automatically colors text fields, checkboxes, and more on pre-L devices.
  • Signature element: On Android 5.0, the status bar is colored to match the app’s primary color, or the current screen’s content. For full-bleed imagery, the status bar can be translucent. In code: Set the android:colorPrimaryDark or android:statusBarColor attribute in your theme (drop the android prefix if using AppCompat) or call Window.setStatusBarColor.
  • Icons, photos/images, text, and other foreground elements are colored “ink” on their surfaces. They don’t have shadows and don’t use gradients.
  • Colors extracted from images can be used to color adjacent UI elements or surfaces. In code: This can be done using the Palette support library.
  • Signature element: Icons in the app follow the system icon guidelines, and standard icons use the material design icon set.
  • Photos are generally immersive and full-bleed. On detail screens, for example, photos run edge-to-edge and can even appear behind the app bar or status bar. In code: The new Toolbar widget (and its AppCompat equivalent) can be transparent and placed directly in your layout. For the status bar, check this Stack Overflow post.
  • Signature element: Where appropriate, elements like body text, thumbnails, app bar titles, etc. are aligned to 3 keylines (Figure 3). On phones, those keylines are 16dp and 72dp from the left edge and 16dp from the right edge of the screen. On tablets those values are 24dp and 80dp.
  • UI elements are aligned to and sized according to an 8dp baseline grid. For example, app bars are 56dp tall on phones and 64dp tall on tablets. Padding and margins can take on values like 8dp, 16dp, 24dp, etc. More precise text positioning uses a 4dp grid.
Authentic Motion

Motion helps communicate what’s happening in the UI, providing visual continuity across app contexts and states. Motion also adds delight using smaller-scale transitions. Motion isn’t employed simply for motion’s sake.
Figure 4. "Hero" transitions.
  • In general, UI and content elements don’t just appear or disappear—they animate into place, either together as a unit, or individually.
  • Signature element: When touching an item to see its details, there’s a “hero” transition (Figure 4) that moves and scales the item between its position in the browsing screen and its position in the detail screen. In code: These are called “shared element transitions” in the SDK. The support version of FragmentTransaction also includes some shared element support.
  • Signature element: Ripple effects originating from where you touched the screen are used to show touch feedback on an item. In code: The default android:selectableItemBackground and android:selectableItemBackgroundBorderless have this, or you can use RippleDrawable (<ripple>) to customize the effect. On pre-5.0 devices, ripples aren’t an expected feature, so defer to the default android:selectableItemBackground behavior.
  • Signature element: UI elements can appear using a circular “reveal” animation. In code: See this doc or the ViewAnimationUtils class for more.
  • Signature element: Animations are used in more subtle, delightful ways, such as to convey the transition between icon states or text states. For example, a “+” icon can morph into an “x” symbol, or an outlined heart icon can be filled using a paint-bucket fill effect. In code: Icon transitions can be implemented using AnimatedStateListDrawable and its XML counterpart. An example can be found in the Google I/O app source. There’s also support for animated vector icons.
  • Animations and transitions are fast—generally under 300ms.
  • Crossfades are often replaced by translate/slide transitions: vertical slides for descendant navigation and horizontal slides for lateral navigation. For slide transitions, prefer quick acceleration and gentle ease-in deceleration over simple linear moves. See the material design spec on motion for more.
Adaptive Design (and UI Patterns)

Tangible surfaces, bold graphic design, and meaningful motion work together to bring a consistent experience across any screen, be it phones, tablets, laptops, desktops, TVs, wearables, or even cars. Additionally, the key UI patterns below help establish a consistent character for the app across devices.
Figure 5. The floating action button.
  • The app uses responsive design best practices to ensure screens lay themselves out appropriately on any screen size, in any orientation. See the Tablet App Quality Checklist for a list of ways to optimize for tablets, and this blog post for high-level tablet optimization tips.
    • In material design, detail screens are often presented as popups that appear using “hero” transitions (see above).
    • In multi-pane layouts, the app can use multiple toolbars to place actions contextually next to their related content.
  • Signature element: Where appropriate, the app promotes the key action on a screen using a circular floating action button (FAB). The FAB (Figure 5) is a circular surface, so it casts a shadow. It is colored with a bright, accent color (see above). It performs a primary action such as send, compose, create, add, or search. It floats in front of other surfaces, and is normally at an 8dp elevation. It frequently appears at the bottom right of the screen, or centered on an edge where two surfaces meet (a seam or a step).

App bar
  • Signature element: The app uses a standard Android app bar. The app bar doesn’t have an app icon. Color and typography are used for branding instead. The app bar casts a shadow (or has a shadow cast on it by a surface below and behind it). The app bar normally has a 4dp elevation. In code: Use the new Toolbar widget in Android 5.0 that is placed directly into the activity’s view hierarchy. AppCompat also provides android.support.v7.widget.Toolbar, which supports all modern platform versions.
  • The app bar might be, for example, two or three times taller than the standard height; on scroll, the app bar can smoothly collapse into its normal height.
  • The app bar might be completely transparent in some cases, with the text and actions overlaying an image behind it. For example, see the Google Play Newsstand app.
  • App bar titles align to the 2nd keyline (see more info on keylines above). In code: when using the Toolbar widget, use the android:contentInsetStart attribute.
  • Where appropriate, upon scrolling down, the app bar can scroll off the screen, leaving more vertical space for content. Upon scrolling back up, the app bar should be shown again.
Tabs
Figure 6. Tabs with material design.
  • Signature element: Tabs follow the newer material design interactions and styling (Figure 6). There are no vertical separators between tabs. If the app uses top-level tabs, tabs are visually a part of the app bar; tabs are a part of the app bar’s surface. In code: See the SlidingTabsBasic sample code in the SDK or the Google I/O app source (particularly the "My Schedule" section for phones).
  • Tabs should support a swipe gesture for moving between them. In code: Make tabs swipeable using the ViewPager widget, which is available in the support library; a minimal sketch follows this list.
  • Selected tabs are indicated by a foreground color change and/or a small strip below the tab text (or icon) colored with an accent color. The tab strip should smoothly slide as you swipe between tabs.
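
To make the ViewPager item concrete, here is a rough sketch of the wiring inside a FragmentActivity (ViewPager, Fragment and FragmentPagerAdapter come from the v4 support library; the id R.id.pager, the titles and the PlaceholderFragment class are illustrative assumptions, not part of the checklist):

ViewPager pager = (ViewPager) findViewById(R.id.pager);
pager.setAdapter(new FragmentPagerAdapter(getSupportFragmentManager()) {
    // Placeholder titles; a real app defines its own tabs
    private final String[] titles = {"My Schedule", "Explore", "Social"};

    @Override
    public Fragment getItem(int position) {
        // Placeholder: return the fragment backing the tab at this position
        return PlaceholderFragment.newInstance(position);
    }

    @Override
    public int getCount() {
        return titles.length;
    }

    @Override
    public CharSequence getPageTitle(int position) {
        // Used by tab strips (e.g. the SlidingTabs samples) to label each tab
        return titles[position];
    }
});
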
Navigation drawer
Figure 7. Navigation drawers with material design.
  • Signature element: If the app uses a navigation drawer, it follows the newer material design interactions and styling (Figure 7). The drawer appears in front of the app bar. It also appears semitransparent behind the status bar. In code: Implement drawers using the DrawerLayout widget from the support library, along with the new Toolbar widget discussed above (a sketch follows this list). See this Stack Overflow post for more.
  • Signature element: The leftmost icon in the app bar is a navigation drawer indicator; the app icon is not visible in the app bar. Optionally, on earlier versions of the platform, if the app has a drawer, the top-left icon can remain the app icon with a narrower drawer indicator, as in Android 4.0.
  • The drawer is a standard width: no wider than 320dp on phones and 400dp on tablets, but no narrower than the screen width minus the standard toolbar height (360dp - 56dp = 304dp on the Nexus 5).
  • Item heights in the drawer follow the baseline grid: 48dp tall rows, 8dp above list sections and 8dp above and below dividers.
  • Text and icons should follow the keylines discussed above.
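
As a sketch of the drawer wiring mentioned above, inside an ActionBarActivity (AppCompat); the ids and string resources are assumptions for illustration:

DrawerLayout drawerLayout = (DrawerLayout) findViewById(R.id.drawer_layout);
Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
setSupportActionBar(toolbar); // the Toolbar acts as the app bar

ActionBarDrawerToggle toggle = new ActionBarDrawerToggle(
        this, drawerLayout, toolbar,
        R.string.drawer_open, R.string.drawer_close);
drawerLayout.setDrawerListener(toggle);
toggle.syncState(); // sync the drawer indicator with the drawer state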

More and more apps from Google and across the Google Play ecosystem will be updating with material design soon, so expect Winter 2014 to be a big quarter for design on Android. For more designer resources on material design, check out the DesignBytes series. For additional developer resources, check the Creating Apps with Material Design docs!

Join the discussion on +Android Developers
Categories: Programming

5 Ways Software Developers Can Become Better at Estimation

Making the Complex Simple - John Sonmez - Mon, 10/27/2014 - 20:06

In my last post, I detailed four of the biggest reasons why software developers suck at estimation, but I didn’t talk about how to solve any of the problems I presented. While estimation will always be inherently difficult for software developers, all hope is not lost. In this post, I am going to give you […]

The post 5 Ways Software Developers Can Become Better at Estimation appeared first on Simple Programmer.

Categories: Programming

How to create Java microservices with Dropwizard

Xebia Blog - Mon, 10/27/2014 - 15:10

On Tuesday October 14th the Amsterdam Middleware Meetup experimented with Dropwizard. The idea was to find out what this technology is about, where it could be useful and what the alternatives are. So below I’ll give you an overview of Dropwizard and compare it to Spring Boot.
The Dropwizard website claims:

Dropwizard pulls together stable, mature libraries from the Java ecosystem into a simple, light-weight package that lets you focus on getting things done.

I’ll discuss each of these claims below.

Stable and mature
Dropwizard uses Jetty, Jersey, Jackson and Metrics as its most important frameworks, but also a host of other stuff like Guava, Liquibase and Joda Time. The latest Dropwizard release is version 0.7.1, released on June 20th 2014. It depends on these versions of some core libraries:
Jetty - 9.2.3.v20140905 - September 2014
Jackson - 2.4.1 - June 2014
Jersey - 2.11 - July 2014

The table shows that stable != out-of-date, which is fine of course. The versions of the core libraries used are quite recent; I guess ‘stable’ refers to libraries with a long history.

Simple
The components of a Dropwizard application are shown below (taken from the tutorial at http://dropwizard.io/getting-started.html); a minimal code sketch follows the list:

  1. Application (HelloWorldApplication.java): the application’s main method, responsible for startup.
  2. Configuration (HelloWorldConfiguration.java): sets configuration for an environment; this is where you may set hostnames for systems the application depends on, or set usernames.
  3. Data object (Saying.java).
  4. Resource (HelloWorldResource.java): the service implementation entry point.
  5. Health Check (TemplateHealthCheck.java): runtime tests that show whether the application still works.
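
To make this concrete, here is a minimal hand-written sketch of the Application and Resource in a single file (the empty Configuration class and the inlined JSON are simplifications; the real tutorial returns the Saying data object, serialized by Jackson):

import io.dropwizard.Application;
import io.dropwizard.Configuration;
import io.dropwizard.setup.Environment;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Empty configuration: config.yml only needs the standard server settings
class HelloWorldConfiguration extends Configuration {}

public class HelloWorldApplication extends Application<HelloWorldConfiguration> {

    public static void main(String[] args) throws Exception {
        // Boots the embedded Jetty server: java -jar app.jar server config.yml
        new HelloWorldApplication().run(args);
    }

    @Override
    public void run(HelloWorldConfiguration configuration, Environment environment) {
        // Register the JAX-RS resource with the embedded Jersey container
        environment.jersey().register(new HelloWorldResource());
    }

    @Path("/hello-world")
    @Produces(MediaType.APPLICATION_JSON)
    public static class HelloWorldResource {
        @GET
        public String sayHello() {
            // Inlined JSON for brevity
            return "{\"content\": \"Hello, world!\"}";
        }
    }
}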

Lightweight
We did some experiments trying to answer the question whether Dropwizard applications are light weight. The table below summarizes some of the sizes of deployments and tools.
Tomcat install: 14 MB
Tomcat lib folder: 7 MB
Jetty install: 14.6 MB
Jetty in Dropwizard jar: 5.4 MB
Dropwizard tutorial example: 10 MB
Dropwizard extended example: 20 MB
Dropwizard Hibernate classes in package: 5 MB

A Tomcat or Jetty installation takes about 14 MB, but if you count only the lib folder the size goes down to about 7 MB. The Jetty folder in Dropwizard however is only 5.4 MB. Apparently Dropwizard managed to strip away some code you don’t really need (or it is packaged somewhere else; we didn’t look into that).
Building the tutorial results in a 10 MB jar, so if you would run a webapp in its own Tomcat container, switching to Dropwizard saves quite a bit. On the other hand, deployment size isn’t all that important if we’re still talking < 50 MB.
Compared to your default WebLogic install (513 MB, WebLogic-only on OS X) however, savings are humongous (but this is also true when you compare WebLogic to Tomcat or Jetty).

Productivity
We tried to run the build for the tutorial application (dropwizard-example in the dropwizard project on GitHub). This works fine and takes about 8 seconds, using mocks for external connections. One option to explore would be to run tests against a deployed application. What we’re used to is that deploying an application for test takes lots of time and resources, but starting a Dropwizard app is quite cheap. Therefore it would be possible to run an integration test of services at the end of a build, as sketched below. This would be quite hard to do with e.g. WebLogic or WebSphere.
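
As a sketch of that idea, the dropwizard-testing module offers DropwizardAppRule, a JUnit rule that boots the complete application once per test class (the class names reuse the hypothetical snippet above; config.yml is assumed to be on the test classpath):

import static org.junit.Assert.assertEquals;

import io.dropwizard.testing.junit.DropwizardAppRule;
import java.net.HttpURLConnection;
import java.net.URL;
import org.junit.ClassRule;
import org.junit.Test;

public class HelloWorldIntegrationTest {

    // Starts the whole app, embedded Jetty included, before the tests run
    @ClassRule
    public static final DropwizardAppRule<HelloWorldConfiguration> APP =
            new DropwizardAppRule<>(HelloWorldApplication.class, "config.yml");

    @Test
    public void helloWorldRespondsWith200() throws Exception {
        URL url = new URL("http://localhost:" + APP.getLocalPort() + "/hello-world");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        assertEquals(200, connection.getResponseCode());
    }
}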

Spring Boot
Spring Boot is interesting, as well as the discussion around the differences between Spring Boot and Dropwizard. See https://groups.google.com/forum/#!topic/dropwizard-user/vH1h2PgC8bU

The official Spring Boot website says: “Spring Boot makes it easy to create stand-alone, production-grade Spring based Applications that you can ‘just run’. We take an opinionated view of the Spring platform and third-party libraries so you can get started with minimum fuss. Most Spring Boot applications need very little Spring configuration.”
It’s good to see a platform change according to new insights, but still, I remember Rod Johnson saying some ten years ago that J2EE was bloated and complex and Spring was the answer. Now it seems we need Spring Boot to make Spring simple? Or is it just that we don’t need application servers anymore to divide resources among processes?

Dropwizard and Docker
Finally we experimented with running Dropwizard in a Docker container. This can be done with limited effort because Dropwizard applications have such a small number of dependencies. Thomas Kruitbosch will report on this later.

References
Spring Boot: http://projects.spring.io/spring-boot/
Dropwizard: http://dropwizard.io/

Animating Lohse

Phil Trelford's Array - Mon, 10/27/2014 - 08:34

Richard Paul Lohse was a Swiss-born painter and graphic artist who typically produced pieces that had “interacting colour elements in various logical/mathematical relations visible to the eye”.

Out of curiosity I’ve taken a small number of Lohse’s works, generated them procedurally and added simple animations.

Sechs Serigraphien


Here I imagined the centre blocks bleeding out and filling the adjacent blocks a line at a time.

 

15 systematische Farbreihen mit 5 gleichen horizontalen Rythmen


This picture is composed of lines with a 15 row colour progression with each column having an offset. To animate it I rotate each column’s colours in an opposing direction.

Spiral

Lohse inspired animated spiral

This Lohse inspired spiral consists of alternating block patterns animated anti-clockwise.

Opposing Spiral

Lohse inspired animated opposing spiral

The same pattern with alternating block patterns rotating clockwise and anti-clockwise.

Four Similar Groups


Here I imagined the red blocks shooting up and down and from side to side.

Method

Each piece was generated procedurally using an F# script. A parameterized function is used to generate a frame as a bitmap, with a series of frames created to form an animated gif.

For example, to draw the 15 lines:

open System.Drawing

// Column widths
let xs = [3;2;1;4;2;1;4;2;1;4;2;1;4;2;1]
// Column color offsets 
let ys = [0;10;14;8;11;6;9;4;7;2;5;12;4;13;1]
// Image width and height
let width,height=476,450
// Block width and height
let w,h = 14,30

// Returns image using specified color offset
let draw n =
   let image = new Bitmap(width,height)
   use graphics = Graphics.FromImage(image)
   for y = 0 to 14 do
      List.zip xs ys
      |> List.mapi (fun i xy -> i,xy)
      |> List.scan (fun x (i,(dx,dy)) ->
         let n = if i%2 = 0 then -n else n
         // 'brushes' is an array of 15 colour brushes (defined in the full script)
         let brush = brushes.[(15 + y + dy + n) % 15]
         graphics.FillRectangle(brush, x*w, y*h, w*dx,h)
         x + dx
      ) 0 |> ignore
   image

// http://www.codeproject.com/Articles/11505/NGif-Animated-GIF-Encoder-for-NET
#r @"Gif.Components.dll" 
open Gif.Components

let encoder = AnimatedGifEncoder()
if encoder.Start(@"c:\temp\15lines.gif") then
   encoder.SetFrameRate(2.0f)
   encoder.SetRepeat(0)
   for i = 0 to 15 do
      encoder.AddFrame(draw i) |> ignore
   encoder.Finish() |> ignore

Full Scripts

Have fun!

Categories: Programming

Data Modelling: The Thin Model

Mark Needham - Mon, 10/27/2014 - 07:55

About a third of the way through Mastering Data Modeling the authors describe common data modelling mistakes and one in particular resonated with me – ‘Thin LDS, Lost Users‘.

LDS stands for ‘Logical Data Structure’ which is a diagram depicting what kinds of data some person or group wants to remember. In other words, a tool to help derive the conceptual model for our domain.

They describe the problem that a thin model can cause as follows:

[...] within 30 minutes [of the modelling session] the users were lost…we determined that the model was too thin. That is, many entities had just identifying descriptors.

While this is syntactically okay, when we revisited those entities asking, What else is memorable here? the users had lots to say.

When there was flesh on the bones, the uncertainty abated and the session took a positive course.

I found myself making the same mistake a couple of weeks ago during a graph modelling session. I tend to spend the majority of the time focused on the relationships between the bits of data and treat the metadata or attributes almost as an afterthought.


The nice thing about the graph model is that it encourages an iterative approach so I was quickly able to bring the model to life and the domain experts back onside.

We can see a simple example of adding flesh to a model with a subset of the movies graph.

We might start out with the model on the right-hand side, which just describes the structure of the graph but doesn’t give us very much information about the entities.

I tend to sketch out the structure of all the data before adding any attributes but I think some people find it easier to follow if you add at least some flesh before moving on to the next part of the model.

In our next iteration of the movie graph we can add attributes to the actor and movie.


We can then go on to evolve the model further, but the lesson for me is to value the attributes more; it’s not all about the structure.

Categories: Programming

100 Top Agile Blogs

Luis Goncalves has put together a list called the 100 Top Agile Blogs:

If you don't know Luis, he lives and breathes driving adoption of Agile practices.

Luis is also an Agile Coach, Co-Author, Speaker, and Blogger.  He is also the co-founder of a MeetUp group called High Performing Teams, and he is a certified Scrum Master and Product Owner.

For a preview of the list and all 100 entries, check out 100 Top Agile Blogs.

Lists like these are a great way to discover blogs you may not be aware of.  

While there will be a bunch of blogs you already know, chances are, with that many at a glance, there will be at least a few new ones you can add to your reading list.

Categories: Architecture, Programming

Implementing material design in your Android app

Android Developers Blog - Fri, 10/24/2014 - 20:18
By Chris Banes and Nick Butcher, Android Developer Relations

Material design is a comprehensive approach to visual, interaction and motion design for the multi-screen world. Android 5.0 Lollipop and the updated support libraries help you to create material UIs. Here’s a rundown of some of the major elements of material design and the APIs and widgets that you can use to implement them in your app.

Tangible surfaces

In material design, UIs are composed of pieces of digital paper & ink. The surfaces and the shadows they cast provide visual cues to the structure of the application, what you can touch and how it will move. This digital material can move, expand and reform to create flexible UIs.

Shadows

A surface’s position and depth result in subtle changes in lighting and shadows. The new elevation property lets you specify a view’s position on the Z-axis and the framework then casts a real-time dynamic shadow on items behind it. You can set the elevation declaratively in your layouts, defined in dips:

<ImageView …
    android:elevation="8dp" />

You can also set this from code using getElevation()/setElevation() (with shims in ViewCompat). The shadow a view casts is defined by its outline, which by default is derived from its background. For example if you set a circular shape drawable as the background for a floating action button, then it would cast an appropriate shadow. If you need finer control of a view’s shadow, you can set a ViewOutlineProvider which can customise the Outline in getOutline().
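
If you do supply your own provider, it might look like this minimal sketch (View, ViewOutlineProvider and Outline come from android.view and android.graphics; R.id.fab is an assumed id for a circular button):

View fab = findViewById(R.id.fab);
fab.setOutlineProvider(new ViewOutlineProvider() {
    @Override
    public void getOutline(View view, Outline outline) {
        // Describe the view as an oval so the framework casts a circular shadow
        outline.setOval(0, 0, view.getWidth(), view.getHeight());
    }
});
fab.setClipToOutline(true); // optionally clip the view's content to the outline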

Cards

Cards are a common pattern for creating surfaces holding a distinct piece of information. The new CardView support library allows you to create them easily, providing outlines and shadows for you (with equivalent behaviour on prior platforms).

<android.support.v7.widget.CardView
    android:layout_width="match_parent"
    android:layout_height="wrap_content">
    <!-- Your card content -->

</android.support.v7.widget.CardView>

CardView extends FrameLayout and provides default elevation and corner radius for you so that cards have a consistent appearance across the platform. You can customise these via the cardElevation and cardCornerRadius attributes, if required. Note that Cards are not the only way of achieving dimensionality and you should be wary of over-cardifying your UI!

Print-like Design

Material utilises classic principles from print design to create clean, simple layouts that put your content front and center. Bold deliberate color choices, intentional whitespace, tasteful typography and a strong baseline grid create hierarchy, meaning and focus.

Typography

Android 5.0 updates the system font Roboto to beautifully and clearly display text no matter the display size. A new medium weight has been added (android:fontFamily=”sans-serif-medium”) and new TextAppearance styles implement the recommended typographic scale for balancing content density and reading comfort. For instance you can easily use the ‘Title’ style by setting android:textAppearance=”@android:style/TextAppearance.Material.Title”. These styles are available on older platforms through the AppCompat support library, e.g. “@style/TextAppearance.AppCompat.Title”.
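
Applying the Title style from code might look like this sketch (titleView and context are assumed to be in scope):

// Platform style on Android 5.0
titleView.setTextAppearance(context, android.R.style.TextAppearance_Material_Title);
// AppCompat equivalent on older platforms
titleView.setTextAppearance(context, R.style.TextAppearance_AppCompat_Title);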

Color

Your application’s color palette brings branding and personality to your app so we’ve made it simple to colorize UI controls by using the following theme attributes:

  • colorPrimary. The primary branding color for the app; used as the action bar background, recents task title and in edge effects.
  • colorAccent. Vibrant complement to the primary branding color. Applied to framework controls such as EditText and Switch.
  • colorPrimaryDark. Darker variant of the primary branding color; applied to the status bar.

Further attributes give fine grained control over colorizing controls, see: colorControlNormal, colorControlActivated, colorControlHighlight, colorButtonNormal, colorSwitchThumbNormal, colorEdgeEffect, statusBarColor and navigationBarColor.

AppCompat provides a large subset of the functionality above, allowing you to colorize controls on pre-Lollipop platforms.
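
If you need one of these colors at runtime, for instance to tint a custom view, you can resolve it from the current theme; a small sketch (customView is assumed; with AppCompat, use R.attr.colorPrimary instead):

TypedValue outValue = new TypedValue();
// Resolve android:colorPrimary from the theme the Activity is using
getTheme().resolveAttribute(android.R.attr.colorPrimary, outValue, true);
customView.setBackgroundColor(outValue.data);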

Dynamic color

Material Design encourages dynamic use of color, especially when you have rich images to work with. The new Palette support library lets you extract a small set of colors from an image to style your UI controls to match; creating an immersive experience. The extracted palette will include vibrant and muted tones as well as foreground text colors for optimal legibility. For example:

Palette.generateAsync(bitmap,
        new Palette.PaletteAsyncListener() {
    @Override
    public void onGenerated(Palette palette) {
         Palette.Swatch vibrant =
                 palette.getVibrantSwatch();
          if (vibrant != null) {
              // If we have a vibrant color
              // update the title TextView
              titleView.setBackgroundColor(
                  vibrant.getRgb());
              titleView.setTextColor(
                  vibrant.getTitleTextColor());
          }
    }
});
Authentic Motion

Tangible surfaces don’t just appear out of nowhere like a jump-cut in a movie; they move into place helping to focus attention, establish spatial relationships and maintain continuity. Materials respond to touch to confirm your interaction and all changes radiate outward from your touch point. All motion is meaningful and intimate, aiding the user’s comprehension.

Activity + Fragment Transitions

By declaring ‘shared elements’ that are common across two screens you can create a smooth transition between the two states.

album_grid.xml
…
    <ImageView
        …
        android:transitionName="@string/transition_album_cover" />
album_details.xml
…
    <ImageView
        …
        android:transitionName="@string/transition_album_cover" />

AlbumActivity.java
Intent intent = new Intent();
String transitionName = getString(R.string.transition_album_cover);
…
ActivityOptionsCompat options =
ActivityOptionsCompat.makeSceneTransitionAnimation(activity,
    albumCoverImageView,   // The view which starts the transition
    transitionName    // The transitionName of the view we’re transitioning to
    );
ActivityCompat.startActivity(activity, intent, options.toBundle());

Here we define the same transitionName in two screens. When starting the new Activity with these options, the transition is animated automatically. In addition to shared elements, you can now also choreograph entering and exiting elements.

Ripples

Materials respond to users’ touch with an ink ripple surface reaction. Interactive controls such as Buttons exhibit this behaviour by default when you use or inherit from Theme.Material (as will ?android:selectableItemBackground). You can add this feedback to your own drawables by simply wrapping them in a ripple element:

<ripple
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:color="@color/accent_dark">
    <item>
        <shape
            android:shape="oval">
            <solid android:color="?android:colorAccent" />
        </shape>
    </item>
</ripple>

Custom views should propagate touch location down to their drawables in the View#drawableHotspotChanged callback so that the ripple can start from the touch point.

StateListAnimator

Materials also respond to touch by raising up to meet your finger, like a magnetic attraction. You can achieve this effect by animating the translationZ attribute, which is analogous to elevation but intended for transient use, such that Z = elevation + translationZ. The new stateListAnimator attribute allows you to easily animate the translationZ on touch (Buttons do this by default):

layout/your_layout.xml
<ImageButton …
    android:stateListAnimator="@anim/raise" />
anim/raise.xml
<selector xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:state_enabled="true" android:state_pressed="true">
        <objectAnimator
            android:duration="@android:integer/config_shortAnimTime"
            android:propertyName="translationZ"
            android:valueTo="@dimen/touch_raise"
            android:valueType="floatType" />
    </item>
    <item>
        <objectAnimator
            android:duration="@android:integer/config_shortAnimTime"
            android:propertyName="translationZ"
            android:valueTo="0dp"
            android:valueType="floatType" />
    </item>
</selector>
Reveal

A hallmark material transition for showing new content is to reveal it with an expanding circular mask. This helps to reinforce the user’s touchpoint as the start of all transitions, with its effects radiating outward. You can implement this using the following Animator:

Animator reveal = ViewAnimationUtils.createCircularReveal(
                    viewToReveal, // The new View to reveal
                    centerX,      // x co-ordinate to start the mask from
                    centerY,      // y co-ordinate to start the mask from
                    startRadius,  // radius of the starting mask
                    endRadius);   // radius of the final mask
reveal.start();
Interpolators

Motion should be deliberate, swift and precise. Unlike typical ease-in-ease-out transitions, in Material Design, objects tend to start quickly and ease into their final position. Over the course of the animation, the object spends more time near its final destination. As a result, the user isn’t left waiting for the animation to finish, and the negative effects of motion are minimized. A new fast-out-slow-in interpolator has been added to achieve this motion.

For elements entering and exiting the screen (which should do so at peak velocity), check out the linear-out-slow-in and fast-out-linear-in interpolators respectively.
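
As a sketch, these curves can be loaded with AnimationUtils and attached to any property animation (context and view are assumed to be in scope):

Interpolator fastOutSlowIn = AnimationUtils.loadInterpolator(
        context, android.R.interpolator.fast_out_slow_in);
view.animate()
        .translationY(0f)          // settle the view into its resting position
        .setDuration(300)          // keep transitions fast, generally under 300ms
        .setInterpolator(fastOutSlowIn)
        .start();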

Adaptive design

Our final core concept of material is creating a single adaptive design that works across devices of all sizes and shapes, from watches to giant TVs. Adaptive design techniques help us realize the vision that each device reflects a different view of the same underlying system. Each view is tailored to the size and interaction appropriate for that device. Colors, iconography, hierarchy, and spatial relationships remain constant. The material design system provides flexible components and patterns to help you build a design that scales.

Toolbar

The toolbar is a generalization of the action bar pattern, providing similar functionality, but much more flexibility. Unlike the standard action bar, toolbar is a view in your hierarchy just like any other, so you can place instances wherever you like, interleave them with the rest of your views, animate, react to scroll events and so on. You can make the Toolbar act as your Activity’s Action Bar by calling Activity.setActionBar().
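
A minimal sketch of that call (R.id.toolbar is an assumed id; with AppCompat you would call setSupportActionBar instead):

Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
setActionBar(toolbar);
getActionBar().setTitle("Albums"); // placeholder title; then use the usual ActionBar APIs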

In this example, the blue toolbar has an extended height, is overlaid by the screen content and provides the navigation button. Note that two further toolbars are used in the list and detail views.

For details of implementing toolbars, see this post.

Go Forth and Materialize

Material Design helps you to build understandable, beautiful and adaptive apps, which are alive with motion. Hopefully, this post has inspired you to apply these principles to your app and signposted some of the new (and compatibility) APIs to achieve this.

Join the discussion on +Android Developers
Categories: Programming

The Future of IT Leaders

I’ll need to elaborate on this at some point, to share what I’ve experienced across lots of businesses large and small, as well as some of the biggest businesses on the planet, as they transform themselves for the digital economy.

Meanwhile, here is an interesting read on CIO Straight Talk magazine.

In their words, “CIO Straight Talk is a series of ‘straight talking’ articles from senior IT executives at leading companies and government and nonprofit organizations.”

This first edition is focused on learning, failing and learning in the Second Machine Age, and features two non-practitioner experts on current topics:

“Andrew McAfee, co-author of the New York Times bestseller The Second Machine Age, cofounder of MIT’s Initiative on the Digital Economy and Principal Research Scientist at MIT Sloan School of Management, talks about ‘The CIO’s role in the enterprise of the future.’ Says McAfee: ‘The overall trend is that companies of all stripes will need, proportionately, many fewer people in IT. Those who remain will be very highly valued, very highly skilled, very important… Enterprises are going to need someone to help them navigate the second machine age… I think that if the CIO plays her cards right, this can absolutely be her role in the enterprise.’”

“Michelle Gallen, the CEO of Shhmooze, a social networking start-up, talks about failure, not to be confused with Failure Lite – ‘I failed. How nice. I learned so much’ – often hailed breezily by management experts as something everyone should experience and every company should encourage. Real failure, according to this serial entrepreneur, isn’t pretty. Says Gallen: ‘I don’t think you learn without failing… In the start-up world, innovation is the ability to take an idea and turn it into an invoice. Lots of larger business organizations also rely on cash flow to keep them alive, and therefore innovation has to be monetized. If you’re Apple or Microsoft, you’ve got a war chest, and you can actually allow failure. A lot of companies can’t actually afford it. It’s quite an expensive hobby, failing.’”

So there you have it -- failure is an expensive hobby and the few IT leaders left in organizations will be very highly valued, very highly skilled, and very important.

There’s more to the story and I’ll share what I’ve learned over the past few years helping companies cross the Cloud chasm and accelerating their digital transformation.

Categories: Architecture, Programming